STUDY ON ANTIOXIDANT ACTIVITY OF PHYTOESTROGEN EXTRACTS FROM SOY GERM

Soy germ is one of the richest phytoestrogen sources and thus has many health benefits, such as improving bone density and cardiovascular health, cancer prevention, and menopausal treatment. In addition, phytoestrogens are reported to act as antioxidants, removing reactive oxygen species and thereby preventing oxidative damage in living tissue. Phytoestrogens in soy germ include isoflavone compounds and their derivatives: daidzein, genistein, glycitein, daidzin, genistin, glycitin, acetyl daidzin, acetyl genistin, acetyl glycitin, malonyl daidzin, malonyl genistin, and malonyl glycitin. The isoflavone aglycone forms comprise only about 2-5 % of total isoflavones; however, they exert greater biological effects than the other forms. The objective of this study was to compare the antioxidant activity of three extracts: purified isoflavone aglycone extract, crude isoflavone aglycone extract, and total phytoestrogen extract. The IC50 values of DPPH free radical scavenging capacity of the purified isoflavone aglycone extract, crude isoflavone aglycone extract, and total phytoestrogen extract were 0.763 ± 0.016, 3.345 ± 0.076, and 6.142 ± 0.050 mg/ml, respectively. The IC50 values of reducing power activity of the three extracts were 1.248 ± 0.024, 3.961 ± 0.172, and 9.385 ± 0.272 mg/ml, respectively. Accordingly, the ranking of antioxidant activity (from highest to lowest) was purified isoflavone aglycone extract > crude isoflavone aglycone extract > total phytoestrogen extract.

INTRODUCTION

Phytoestrogens from soy germ, including isoflavones and their derivatives, have many health benefits. Their effects on improving bone density, cardiovascular health, cancer prevention, cognitive ability, and menopausal symptoms have been reported in many works [1-4]. The germ contains the highest level of phytoestrogens among the parts of the soybean seed. Phytoestrogens in soy germ include daidzein, genistein, glycitein, daidzin, genistin, glycitin, acetyl daidzin, acetyl genistin, acetyl glycitin, malonyl daidzin, malonyl genistin, and malonyl glycitin. The average concentration of total phytoestrogens was 2887 μg/g in the germ (embryo), four to five times higher than in the whole seed (575 μg/g) [5]. Reported phytoestrogen extraction processes mostly start from whole soybean seeds or germinated soybean seeds (whole or separated from the sprout); few studies have addressed phytoestrogen extraction from soy germ. Since Wang identified 12 phytoestrogens in commercial soybean foods in 1994 [6], a number of studies have focused on these natural substances. Among them, the aglycones are known to play an important role in all biological mechanisms. Isoflavone aglycones (daidzein, genistein, and glycitein) comprise only about 2-5 % of total isoflavones but are rapidly absorbed in the digestive tract and show higher biological activity than the other derivatives [7]. Many works have focused on increasing the isoflavone aglycone content of extracts. The agents commonly used to convert isoflavone glucosides to aglycones are chemical, such as hydrochloric acid (HCl) [8], and biological, including the enzymes β-glucosidase (cellobiase) [9], galactosidase [10], and cellulase [11], and microorganisms capable of producing β-1,4-glucosidase [12].
Moreover, during extraction under the effect of temperature, isoflavone derivatives can convert from one form to another and to the free aglycone form [13]. The objective of this study was to compare the antioxidant activity, including the DPPH scavenging and reducing power properties, of the different extracts (total phytoestrogen extract, crude isoflavone aglycone extract, and purified isoflavone aglycone extract) obtained during phytoestrogen extraction with cellobiase and purification.

Methods

Soy germ was defatted using 95 % n-hexane at a solid/liquid ratio of 1:5 and shaken for 5 hours at 180 rpm. Defatted soy germ with a moisture content of 5.79 ± 0.09 % was packed in a dark glass container and stored at −4 °C until further analysis.

Preparation of extracts

Total phytoestrogen extraction was conducted as follows: defatted soy germ flour was mixed with 65 % ethanol at pH 9 and a solid/liquid ratio of 1:12; the extraction time was 90 minutes. The liquid extract was then separated from the insoluble fraction by filtration followed by evaporation [14].

Crude isoflavone aglycone extraction: the total phytoestrogen extract was adjusted to pH 5 using 0.02 N HCl and kept at room temperature for 1 hour. The cloudy suspension was then centrifuged at 6000 rpm and 4 °C for 10 minutes to remove insolubles. The enzyme reaction was carried out at pH 5 and 50 °C with 1.5 U cellobiase/g defatted soy germ flour for five hours. The solution was then filtered through filter paper to remove insoluble material and obtain the crude aglycone phytoestrogens [11].

Purification of isoflavone aglycone extract: the crude aglycone phytoestrogens were purified using ethanol at a ratio of 100 ml ethanol per 1.0 g crude aglycone for 5 hours at 4 °C. The ethanol was then evaporated, and 100 ml ethyl acetate together with 70 ml water were added. After stirring at 500 rpm for 4 hours and standing for a further two hours, the aglycone phytoestrogens partitioned into the ethyl acetate phase. The ethyl acetate was evaporated to obtain the purified aglycone phytoestrogen extract.

The total phytoestrogen extract, crude isoflavone aglycone extract, and purified isoflavone aglycone extract were analysed for soluble dry matter. Total phytoestrogens and aglycones were quantified using HPLC.

HPLC analysis

Phytoestrogens were analysed on an Alliance system (Waters, USA) equipped with a Zorbax SB-C18 column (5 µm, 4.6 mm × 150 mm). The HPLC conditions were: column temperature, 35 °C; detection wavelength, 260 nm; mobile phases A (0.1 % acetic acid) and B (acetic acid/acetonitrile, 20/80); flow rate, 1.0 ml/min. Detection was carried out under linear gradient elution, with the mobile phase composition changing from 88 % A/12 % B to 60 % A/40 % B and finishing at 88 % A/12 % B. Each phytoestrogen was quantified by relating integrated chromatographic peak areas to calibration curves.

DPPH radical scavenging activity

The DPPH radical scavenging activity was determined according to Blois with some modifications [15]. The assay used a 0.1 mM DPPH solution in methanol: 1.0 ml of sample was added to 2.0 ml of 0.1 mM DPPH and kept in darkness. After 30 minutes, the absorbance was measured at 517 nm. A blank was prepared without adding the extract. Ascorbic acid at 5, 10, 15, 25, and 50 μg/ml was used as the standard. The lower the absorbance of the reaction mixture, the higher the free radical scavenging activity.
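Since each phytoestrogen is quantified against an external calibration curve, the calculation can be illustrated with a minimal R sketch; the standard concentrations and peak areas below are hypothetical placeholders, not values from this study.

    # Hypothetical external calibration for one analyte (e.g., daidzein).
    std_conc <- c(5, 10, 25, 50, 100)                    # standard concentrations (ug/ml), assumed
    std_area <- c(1.2e4, 2.5e4, 6.1e4, 1.23e5, 2.47e5)   # integrated peak areas, assumed
    cal <- lm(std_area ~ std_conc)                       # linear calibration: area = a + b * conc

    # Invert the calibration to quantify an unknown sample from its peak area.
    sample_area <- 8.0e4
    conc_hat <- (sample_area - coef(cal)[1]) / coef(cal)[2]
    cat(sprintf("Estimated concentration: %.1f ug/ml (R^2 = %.4f)\n",
                conc_hat, summary(cal)$r.squared))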
The capability to scavenge the DPPH radical was calculated using the following equation:

DPPH scavenging activity (%) = (R1 − R2)/R1 × 100

where R1 is the absorbance of the control measured at 517 nm and R2 is the absorbance of the test sample measured at 517 nm. The IC50 value, the concentration of a fraction that inhibits the formation of DPPH radicals by 50 %, was determined to express antioxidant activity.

Reducing power assay (RPA)

The reducing activity of the extracts was determined according to the method of Oyaizu [16]. A test sample reduces the Fe3+ in potassium ferricyanide (K3[Fe(CN)6]) to Fe2+ in potassium ferrocyanide (K4[Fe(CN)6]). Upon addition of FeCl3, the Fe3+ reacts with ferrocyanide to form blue ferric ferrocyanide (Fe4[Fe(CN)6]3, Prussian blue). 1.0 ml of sample was mixed with 2.5 ml of 0.2 M phosphate buffer (pH 6.6) and incubated at 50 °C for 20 minutes. Each tube was then mixed with 2.5 ml of 10 % trichloroacetic acid. The upper layer (2.5 ml) was mixed with 0.5 ml of 0.1 % ferric chloride and 2.5 ml of distilled water. The absorbance was measured at 700 nm; the higher the absorbance of the reaction mixture, the higher the reducing power. A blank was prepared without adding the extract. Ascorbic acid at 20, 40, 60, and 80 μg/ml was used as the standard. The IC50 value, defined here as the concentration of a fraction at which the absorbance in the reducing power assay reaches 0.5, was determined to express antioxidant activity.

Statistical analysis: All measurements were conducted in triplicate and analysed by analysis of variance (ANOVA). Duncan's multiple range test was performed, and the relationships between concentration and activity were modelled using the SPSS software, version 25 (SPSS Inc., Chicago, IL, USA). Significance was defined at p < 0.05. The extracts were diluted in a two-fold serial dilution series (each concentration 50 % of the previous). Antioxidant quality is a measure of the effectiveness of the total phytoestrogen, crude aglycone phytoestrogen, and purified aglycone phytoestrogen extracts. The percentage scavenging and IC50 values were calculated for all models in Microsoft Excel 2010. Regression models were fitted in SPSS version 25 and chosen according to the following criteria:

- The "R Square" value (coefficient of determination) is the proportion of variance in the dependent variable explained by the independent variable; the "Adjusted R Square" is used to report the data accurately, a higher adjusted R square better describing how the DPPH radical scavenging ratio varies with extract concentration.
- The Sig. value (corresponding p-value) is used to evaluate the suitability (existence) of the model: if the p-value is less than 0.05, we reject the null hypothesis of no difference between the means and conclude that a significant difference exists; if the p-value is larger than 0.05, we cannot conclude that a significant difference exists.
- The correlation coefficient is a statistical measure of the strength of the relationship between the relative movements of two variables.

RESULTS AND DISCUSSION

The soluble dry matter of the total phytoestrogen extract, crude isoflavone aglycone extract, and purified isoflavone aglycone extract was 72.20 ± 2.48, 14.50 ± 0.36, and 9.40 ± 0.46 mg/ml, respectively. The three extracts were then assessed for antioxidant activity by the DPPH radical scavenging method and the reducing power assay using two-fold serial dilutions.
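The scavenging calculation and the regression-based IC50 estimation described above can be sketched in R (the paper itself used SPSS and Excel); the absorbance values below are hypothetical placeholders, and the model comparison by adjusted R square mirrors the criteria listed above.

    # Hypothetical two-fold dilution series for one extract.
    conc <- c(0.5, 1, 2, 4, 8)                        # mg/ml
    A_control <- 0.82                                 # DPPH blank absorbance at 517 nm (R1)
    A_test <- c(0.70, 0.61, 0.46, 0.28, 0.12)         # sample absorbances (R2), assumed
    scav <- (A_control - A_test) / A_control * 100    # DPPH scavenging activity (%)

    # Candidate regression models, compared by adjusted R-squared as in the text.
    fits <- list(
      linear      = lm(scav ~ conc),
      quadratic   = lm(scav ~ poly(conc, 2, raw = TRUE)),
      logarithmic = lm(scav ~ log(conc))
    )
    adj_r2 <- sapply(fits, function(f) summary(f)$adj.r.squared)
    best <- fits[[which.max(adj_r2)]]

    # IC50: concentration at which the predicted scavenging reaches 50 %.
    ic50 <- uniroot(function(x) predict(best, newdata = data.frame(conc = x)) - 50,
                    interval = range(conc))$root
    cat(sprintf("Best model: %s, IC50 = %.3f mg/ml\n",
                names(which.max(adj_r2)), ic50))

The same template applies to the reducing power assay by replacing the scavenging percentage with the absorbance at 700 nm and solving for the concentration at which the predicted absorbance reaches 0.5.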
Reducing power activity

The reducing power activities of the standard and the three extracts (total phytoestrogen extract, crude isoflavone aglycone extract, and purified isoflavone aglycone extract) are presented in Table 2. The IC50 value of the standard was 39.796 ± 0.874 µg/ml.

Discussion

Phytoestrogens are isoflavones which occur naturally in a wide range of foods and plants. The isoflavones are the best-studied group among the polyphenols. A number of phytoestrogens are either being actively developed or already sold as dietary supplements and herbal-derived medicines because of their antioxidant properties. Since Ruiz-Larrea et al. determined the antioxidant activity of phytoestrogenic isoflavones in 1997 [17], the health benefits of phytoestrogens have been attributed to their antioxidant capacity. Extracts from soy germ, normally a waste product of soybean processing due to its off-flavour, could offer an interesting alternative starting material for extracting phytoestrogens. According to Kim et al., the concentration of phytoestrogens in the germ (2887 μg/g) is higher than in the whole seed or other parts of the seed such as the cotyledon and seed coat [5]. Moreover, the radical scavenging capacity of soy germ extracts has been reported to be much higher than that of cotyledon extracts [18]. In this paper, the antioxidant activity of three extracts from soy germ was evaluated using two different assays. The first assay, scavenging of the stable DPPH radical, was sensitive enough to show that soy germ extracts contain hydrogen-donating substances that can convert free radicals into harmless species. Secondly, the reducing power of the soy germ extracts was strong. Statistical analysis in SPSS helped to establish the relationship between phytoestrogen concentrations and antioxidant activities (DPPH radical scavenging or reducing power). The regression models relating phytoestrogen concentration to antioxidant activity were not linear; they were better described as quadratic or logarithmic.

The DPPH radical scavenging results showed that the IC50 value of the total phytoestrogen extract was higher than that of the crude isoflavone aglycone extract, which in turn was higher than that of the purified isoflavone aglycone extract. Thus the DPPH radical scavenging activity of the purified isoflavone aglycone extract was strongest, with IC50 = 0.763 ± 0.016 mg/ml, followed by the crude isoflavone aglycone extract, with the total phytoestrogen extract weakest (Table 1). The RPA results likewise showed that the IC50 value of the total phytoestrogen extract was higher than that of the crude isoflavone aglycone extract, which was higher than that of the purified isoflavone aglycone extract; that is, the reducing power of the purified isoflavone aglycone extract was highest, followed by the crude isoflavone aglycone extract, with the total phytoestrogen extract weakest (Table 2). Overall, the antioxidant activity was highest in the purified isoflavone aglycone extract, lower in the crude isoflavone aglycone extract, and lowest in the total phytoestrogen extract. These results are explained by the HPLC analysis of the three extracts shown in Table 3: as the concentration of total isoflavones in an extract increased, so did its antioxidant activity. Moreover, the results indicate that the antioxidant activity depended not only on the concentration of total isoflavones but also on the concentration of isoflavone aglycones in the extract.
The higher the proportion of isoflavone aglycones in an extract, the higher its antioxidant activity (the ratio of isoflavone aglycones to total isoflavones in the total phytoestrogen extract, crude isoflavone aglycone extract, and purified isoflavone aglycone extract was 8.43 %, 73.88 %, and 87.09 %, respectively). The intensity of isoflavone antioxidant activity has been reported to depend strongly on chemical structure, especially the number and position of hydroxyl groups on the two aromatic rings [19]. Thus, after the hydrolysis reaction and purification process, the conversion of isoflavone glycosides and conjugated forms to isoflavone aglycones exposed a large number of hydroxyl groups; this is the major cause of the increased antioxidant activity of the crude and purified isoflavone aglycone extracts [13]. This work showed that the antioxidant activity of the purified aglycone phytoestrogen extract was highest, followed by the crude aglycone phytoestrogen extract, with the total phytoestrogen extract lowest.

CONCLUSIONS

Soy germ provides an interesting combination of several potential antioxidant substances. For the three extracts from soy germ, our findings indicate that the phytoestrogens in soy germ have antioxidant activity, as evaluated by the DPPH radical scavenging method and the reducing power assay. The ranking of antioxidant activity was purified isoflavone aglycone extract > crude isoflavone aglycone extract > total phytoestrogen extract.
Jekyll or Hyde: does Matrigel provide a more or less physiological environment in mammary repopulating assays?

In vivo transplantation is the current 'gold-standard' assay for evaluating mammary stem cell (MaSC) function. Matrigel, a reconstituted extracellular matrix derived from a mouse sarcoma line, is increasingly being utilized for mammary repopulating assays, although the original studies were carried out in its absence. This matrix has also been shown to enhance tumor-initiating capacity. Whilst Matrigel increases the rate of engraftment by MaSCs, it also appears to promote progenitor activity that is distinct from bona fide stem cell activity. This caveat should be considered when interpreting mammary reconstitution assays that incorporate Matrigel, particularly when transplanting high cell numbers.

In vivo transplantation into the mammary fat pad represents the cornerstone assay for evaluating mammary stem cell (MaSC) activity. Pioneering work has shown that mammary epithelial outgrowths can be generated in de-epithelialized (or cleared) fat pads transplanted with explants or admixtures of mammary cells [1]. More recently, MaSCs have been prospectively isolated and demonstrated to exhibit multilineage differentiation and self-renewal properties through the transplantation of limiting numbers of empirically derived cell subpopulations. A MaSC-enriched basal population was identified on the basis of high expression of integrin β1 (CD29) or integrin α6 (CD49f) and moderate levels of CD24 [2,3], with an estimated stem cell frequency of 1 in 60. Using CD24 as a single marker, the CD24mod subset was shown to comprise almost all repopulating activity [4,5]. A number of recent studies have incorporated the reconstituted extracellular matrix Matrigel (BD Biosciences) in their mammary transplantation assays, with a view to creating an improved microenvironment for the implantation of stem cells. These studies have included the transplantation of unsorted mammary cells, in which as few as 100 cells could reconstitute an entire mammary gland [6], and the transplantation of sorted epithelial subpopulations embedded in Matrigel [7-10]. Interestingly, Matrigel was recently shown to enhance melanoma cell tumor-initiating capacity several-fold [11]. Given the increasing use of Matrigel in transplantation assays, we have directly assessed the effect of this matrix on the repopulating capacity of two distinct subpopulations isolated from normal mouse mammary glands: the MaSC-enriched subset and the luminal cell subset, the latter of which comprises committed luminal progenitor and mature luminal cells. We report here that the luminal subpopulation can yield limited ductal outgrowths, but only in the presence of Matrigel. These data raise the possibility that rare bipotent cells in this subset are activated by matrix components or that committed luminal progenitor cells can undergo dedifferentiation. In either case, these cells do not represent true MaSCs. MaSCs have previously been shown to lie within the CD29hi (or CD49fhi) CD24+ population, while extensive transplantation assays of luminal cell fractions, including the CD61+ luminal progenitor subset, have demonstrated that this luminal population lacks repopulating potential [2,3,12]. In human breast tissue, stem cell activity was similarly demonstrated to occur in the basal population [13,14].
To address the influence of Matrigel on in vivo mammary repopulating capacity, we transplanted double-sorted cells from the MaSC-enriched subset (CD29hi CD24+) and the luminal subset (CD29lo CD24+) in either 0%, 25% or 50% Matrigel. Donor cells were derived from Rosa26 mice to allow definitive identification of outgrowths from implanted cells by virtue of β-galactosidase activity. Cells within the CD29hi CD24+ subset were transplanted at limiting dilution, in which 1 in 75 cells is estimated to be a MaSC [2] (the single-hit calculation behind such frequency estimates is sketched at the end of this section), while an excess of luminal cells (1,000 cells) was injected. Matrigel at both concentrations was found to substantially enhance the mammary repopulating frequency of the MaSC-enriched subpopulation, with the percentage of outgrowths from transplanted cells almost doubling in the presence of 50% Matrigel compared with no Matrigel (Figure 1). In general, more extensive filling of the fat pad was apparent in the presence of this matrix. These data are compatible with the increased engraftment observed upon inclusion of 50% Matrigel [9]. Constituents within Matrigel may enhance the viability and/or activity of stem cells, resulting in increased repopulating capacity. Unexpectedly, transplantation of the luminal subpopulation in Matrigel gave rise to small branched structures (Figure 1a,b): 10.7% and 22.5% of injections produced outgrowths in the presence of 25% and 50% Matrigel, respectively. No outgrowths, however, were generated from this subpopulation in the absence of Matrigel, consistent with previous studies [2,3]. Notably, only diminutive outgrowths arose from luminal subset cells inoculated in 50% Matrigel, although each structure exhibited ductal branching from a central point and was therefore scored (Figure 1b,c). In the case of 25% Matrigel, the structures filled approximately 1% of the fat pad.

Figure 1. (a) Table showing the number of outgrowths per number of mammary fat pads injected with either 75 CD29hi CD24+ (mammary stem cell (MaSC)-enriched) cells or 1,000 CD29lo CD24+ (luminal) cells, in either 0%, 25% or 50% Matrigel. Single cell suspensions were prepared from the mammary glands of 8-week-old to 10-week-old FVB/N-Rosa26 female mice, labeled with fluorochrome-conjugated antibodies and double-sorted as described [2]. The MaSC-enriched and luminal cell populations were identified following depletion of endothelial and hematopoietic cells using anti-CD45, anti-CD31 and anti-TER119 antibodies. Cells were injected (10 μl volume) into the cleared inguinal mammary fat pads of 3-week-old FVB/N female recipients, and glands were collected 8 weeks post transplantation for X-gal staining. β-Gal+ branched ductal structures were scored as positive. Data are shown for four independent experiments. (b) Images of X-gal-stained outgrowths: an outgrowth derived from transplantation of 75 CD29hi CD24+ cells in 50% Matrigel (top), and the largest outgrowth obtained from transplantation of 1,000 CD29lo CD24+ cells in 50% Matrigel (bottom). Bar = 1 mm. (c) Bar chart representation of mammary outgrowths as a function of fat-pad filling following transplantation of each subpopulation. The axes shown differ for the two populations, since very few structures were generated by the CD29lo CD24+ population and these did not exceed 5%. Data are shown for four independent experiments. MFP, mammary fat pad.

Secondary transplantation experiments were carried out from luminal cell-derived (n = 3) or MaSC-derived (n = 3) outgrowths to determine whether the luminal cell-derived outgrowths contained cells with self-renewal capacity. No outgrowths were present in the 20 recipient glands transplanted from luminal cell-derived outgrowths, whereas prominent ductal outgrowths were evident in recipient glands from all three MaSC-derived outgrowths (15/20). Thus the Matrigel-associated luminal cell-derived (CD29lo CD24+) outgrowths did not exhibit self-renewal properties, a hallmark feature of stem cells. Contamination of this luminal subpopulation (double-sorted and purity confirmed by reanalysis) with MaSCs seems unlikely, as no outgrowths were evident in the absence of Matrigel and no extensive outgrowths were ever observed. Rather, Matrigel may be providing a microenvironment that activates rare bipotent progenitor cells capable of regeneration, albeit limited. Alternatively, luminal progenitor cells within this subpopulation may occasionally adopt a more primitive state. These data differ from those recently reported in which Matrigel was found to be necessary for the generation of outgrowths from both the CD49fhi CD24med and CD49flo CD24hi subpopulations [10]. Contrary to the findings described here, a similar degree of engraftment was noted for each population, perhaps reflecting the large number of cells transplanted (50,000 cells) [10]. It is conceivable that the activation of signaling pathways by Matrigel components can stimulate certain cells to acquire a more primitive state. Matrigel is a solubilized basement membrane extracted from the Engelbreth-Holm-Swarm mouse sarcoma and is rich in laminin, collagen IV and proteoglycans, as well as a number of different growth factors [15]. The nature of the substances or growth factors in Matrigel that may confer a more permissive environment for progenitor activity is yet to be determined. Growth factor-reduced Matrigel could be considered as an alternative to complete Matrigel, to perhaps distinguish the effects of the substratum components from those of growth factors on mammary reconstitution. Matrigel has been widely used to study tumor cell invasion, and an altered extracellular matrix has been shown to promote tumorigenesis [16]. In xenotransplantation assays to identify cancer stem cells in primary tumors, it is pertinent that only the cancer stem cell fraction, and not the negative fraction, had tumor-initiating capacity in mice when inoculated in Matrigel [17]. This reconstituted basement membrane, however, has been found to facilitate tumorigenesis of human breast cancers, squamous cell carcinomas and teratomas in mice [18-20], suggesting it has the potential to provide tumor cells with additional survival and/or proliferative signals. The influence of Matrigel on established tumors, however, is a distinct question from its impact on normal cells.
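The stem cell frequencies quoted above (e.g. 1 in 60, 1 in 75) are conventionally estimated from limiting-dilution transplantation data under a single-hit Poisson model. The R sketch below illustrates that calculation only; the dose levels and outgrowth counts are hypothetical placeholders, not data from this study, and dedicated tools such as ELDA implement the same idea with confidence intervals.

    # Single-hit Poisson model: P(no outgrowth) = exp(-f * dose), so that
    # cloglog(P(outgrowth)) = log(f) + log(dose). A binomial GLM with a
    # cloglog link and log(dose) offset recovers log(f) as the intercept.
    d <- data.frame(
      dose = c(10, 25, 75, 200),   # cells injected per fat pad (hypothetical)
      pos  = c(1, 3, 7, 10),       # fat pads with a positive outgrowth (hypothetical)
      n    = c(12, 12, 12, 12)     # fat pads injected per dose (hypothetical)
    )
    fit <- glm(cbind(pos, n - pos) ~ offset(log(dose)),
               family = binomial(link = "cloglog"), data = d)
    f <- exp(coef(fit)[1])         # estimated repopulating-cell frequency per cell
    cat(sprintf("Estimated frequency: 1 in %.0f cells\n", 1 / f))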
In summary, our data suggest that, in addition to increasing the rate of engraftment by MaSCs, Matrigel appears to promote progenitor activity in the luminal subset that is not seen in its absence. It is important to note that these cells with limited regenerative potential are distinct from bona fide MaSCs, which lie within the basal population, and should not be scored as such. A degree of caution should thus be applied when interpreting data from mammary cell transplantation experiments that incorporate Matrigel, particularly when transplanting high cell numbers. Additional studies (such as comparison of complete Matrigel and growth factor-reduced Matrigel) will be required to resolve the question of whether it is more or less physiological to include this matrix in transplantation assays for MaSC function.
Exploring the causal association between gut microbiome, circulating inflammatory cytokines and chronic pancreatitis: A Mendelian randomization analysis

It has been established that gut dysbiosis contributes to the pathogenesis of digestive disorders. We aimed to explore the causal relationships between the intestinal microbiota, circulating inflammatory cytokines and chronic pancreatitis (CP). Summary statistics of genome-wide association studies (GWAS) of the intestinal microbiome were retrieved from the MiBioGen study, and the GWAS data for 91 circulating inflammatory cytokines and CP were obtained from the GWAS catalog. Two-sample bidirectional Mendelian randomization (MR) analysis was performed between the gut microbiota, circulating inflammatory cytokines and CP, with the inverse variance weighted (IVW) method as the primary analysis approach. To prove the reliability of the causal estimations, multiple sensitivity analyses were utilized. The IVW results revealed that 2 genetically predicted genera, Sellimonas and Eubacterium ventriosum group, and the plasma C-C motif chemokine 23 (CCL23) level were positively associated with CP risk, while the genera Escherichia-Shigella, Eubacterium ruminantium group and Prevotella 9, and the plasma Caspase 8, Adenosine Deaminase (ADA) and SIR2-like protein 2 (SIRT2) levels, demonstrated an ameliorative effect on CP. Leave-one-out analysis confirmed the robustness of these causal effects, and no significant horizontal pleiotropy or heterogeneity of the instrumental variables was detected. However, no association was found between the identified genera and the CP-related circulating inflammatory cytokines. Besides, the reverse MR analysis demonstrated no causal relationship from CP to the identified genera and circulating inflammatory cytokines. Taken together, our comprehensive analyses offer evidence in favor of the estimated causal connections from the 5 genus-level microbial taxa and 4 circulating inflammatory cytokines to CP risk, which may help to reveal the underlying pathogenesis of CP.

Introduction

Chronic pancreatitis (CP) is characterized by persistent pancreatic inflammation and the onset of parenchymal fibrosis. The chronic inflammation causes continuous and irreparable localized, segmental, or widespread damage to both the exocrine and endocrine pancreatic tissues. Clinically, CP patients may present with acute pancreatitis, epigastric pain, pancreatic exocrine dysfunction, and diabetes mellitus, which result from progressive damage to the pancreas [1,2]. Worse still, CP is one of the major risk factors for pancreatic cancer, which carries a dismal prognosis, and approximately 5% of CP patients will develop such malignancy [3]. Conventional management of CP includes lifestyle modifications, medications, endoscopic intervention and surgical intervention [4].
In terms of etiology, although alcohol use is still believed to be the dominant factor in CP pathogenesis, it is not the only provoking agent; tobacco consumption also acts as a significant contributing factor. In recent years, the onset and progression of many digestive diseases has been shown to be influenced by interactions between the host and the intestinal microbiota, which alter the immune system and complex biological processes. The pancreatic duct, which anatomically connects the pancreas to the digestive system directly, inevitably creates a link with the gut microbiota. Pancreas-microbiota cross talk has been established [5], and the gut microbiota has been proved crucial in regulating pancreatic functioning. In the course of acute pancreatitis (AP), there is bidirectional modulation between the intestinal micropopulation and activated NOD-like receptor thermal protein domain associated protein 3 [6]. Besides, a fecal microbiota signature with high specificity for the diagnosis of pancreatic ductal adenocarcinoma has been uncovered [7]. One of the hallmarks of CP is unremitting inflammation. The emergence of inflammatory mediators, such as reactive oxygen species, cooperating with the oxidative stress reaction, plays an essential role in the pathogenesis of AP and CP. Neurogenic regulation of immunological responses along the gut-pancreas axis and intestinal bacterial metabolites, such as short-chain fatty acids (SCFAs), could contribute to the inflammatory process of the pancreas. Conversely, factors arising from pancreatic disorders might significantly influence the species composition and abundance of the intestinal microbiota [8]. Besides, multiple cytokine profiles have been demonstrated to promote the progression of CP, including IL-1β, IFN-γ, fractalkine, C-reactive protein, TGF-β and so on [9,10]. These changes in microbial composition and circulating inflammatory factors may serve as novel biomarkers of pancreatic fibrosis [11]. Nevertheless, the causal associations between specific gut microbiomes and CP are still elusive.

Mendelian randomization (MR) analysis is a statistical method that seeks to use the innate characteristics of common genetic variations to explain observational association findings. In our study, we explored the causal relationships between the intestinal microbiota, plasma proteins and CP by carrying out a thorough bidirectional 2-sample MR analysis. Manipulation of the gut microbiome is a potential approach to improving a series of human disorders and cancers, and microbiome-based therapeutics show a promising future prospect [12,13]. Hence, our research can aid in the development of novel therapeutic modalities, including probiotic treatment, dietary modifications, and fecal microbiota transplantation (FMT). Future advances in the field of microbiome-based management will offer the chance for medical practitioners to exploit the fecal microbiota to treat pancreatic diseases, including CP.
Study design

As a genetic approach, MR analysis based on single nucleotide polymorphisms (SNPs) explores the causal effects of an exposure on an outcome by exploiting the random allocation of genetic variants, and is chiefly applied to causal inference in genetic association and genomics studies. The SNPs used as instrumental variables (IVs) for MR analysis should adhere to the following fundamental premises: the SNPs must be strongly correlated with the exposure; there should be no link between the SNPs and the outcome through confounding factors; and the SNPs should not have a direct effect on the outcome. To explore the causal connections between the gut microbiota at the genus level, plasma proteins and CP, comprehensive bidirectional 2-sample MR analyses were conducted; the study design is presented in Figure 1. The "TwoSampleMR" and "MR-PRESSO" packages in the R program (version 4.3.1) were utilized to complete our analyses.

GWAS summary statistics for gut microbiome (Exposure-1)

The SNPs related to human gut microbiome composition were obtained from a GWAS dataset of the international consortium MiBioGen (https://mibiogen.gcc.rug.nl/), summary statistics based on a substantial GWAS meta-analysis [14]. The 24 cohorts included in the MiBioGen study made adjustments for sex and age as covariates in the calculation process. The data were downloaded on July 20, 2023, and the microbial taxa at the genus level were retained.

GWAS summary statistics for circulating inflammatory cytokines (Exposure-2)

Genetic instruments for plasma proteins were extracted from large-scale GWAS summary statistics (accession numbers GCST90274758 to GCST90274848), derived from a genome-wide protein quantitative trait locus study mapping 91 plasma proteins measured using the Olink Target Inflammation panel in 11 cohorts totaling 14,824 European-ancestry participants [15]. All 91 proteins were selected for our genetic investigation as Exposure-2.

Outcome data

As the outcome in the MR analysis, SNP data for CP were extracted from large-scale GWAS summary statistics in the GWAS catalog (GWAS ID: ebi-a-GCST90018821, trait name: CP; published by Sakaue S et al, https://www.ebi.ac.uk/gwas). This outcome dataset contains 1424 cases and 476,104 controls of European ancestry, with 24,195,431 SNPs.

Genetic instruments selection

[18] Afterward, to avoid bias within the causal estimates, independent IVs were further selected for every taxon utilizing linkage disequilibrium (LD) analysis, in which each collection of SNPs was clumped with LD r² < 0.001 and distance > 10,000 kb in the PLINK clumping algorithm, and SNPs absent from the LD reference panel were also excluded [19]. The LD reference panel was established using the 1000 Genomes Project European sample [20]. Furthermore, to measure the statistical strength of the link between the candidate IVs and the associated taxon or protein, the F-statistic for each SNP was determined, and IVs with an F-statistic < 10 were excluded. A detailed description of the F-statistic computation can be found elsewhere [21].
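A minimal sketch of this instrument-selection step with the TwoSampleMR package is shown below. The exposure file name is a hypothetical placeholder, and the F-statistic uses the common approximation F ≈ (beta/se)², which may differ in detail from the computation cited above.

    library(TwoSampleMR)

    # Read pre-formatted exposure summary statistics (hypothetical file name).
    exposure_dat <- read_exposure_data("genus_sellimonas_gwas.txt", sep = "\t")

    # LD clumping with the thresholds stated in the text
    # (r2 < 0.001 within a 10,000 kb window, European 1000 Genomes reference).
    exposure_dat <- clump_data(exposure_dat, clump_r2 = 0.001, clump_kb = 10000)

    # Instrument-strength filter: drop SNPs with approximate F-statistic < 10.
    exposure_dat$F <- (exposure_dat$beta.exposure / exposure_dat$se.exposure)^2
    exposure_dat <- subset(exposure_dat, F >= 10)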
MR estimates and reverse MR analysis

Subsequently, the effect alleles of the screened IVs related to each bacterial taxon and plasma protein were harmonized with the SNPs for CP, and any alleles that were incompatible, or palindromic with intermediate allele frequency, were removed. The 2-sample MR analyses were carried out for each bacterial taxon or plasma protein with at least 3 IVs. The inverse variance weighted (IVW) method was the dominant statistical analysis method [22], supplemented with multiple robust MR methods, including the weighted median, weighted mode, MR-Egger regression, and MR-pleiotropy residual sum and outlier (MR-PRESSO) approaches. The causal associations are considered statistically significant if the P value of the IVW method is <.05. The weighted median method requires that at least half of the SNPs be valid IVs to achieve consistent estimates of the causal effects. In the MR-PRESSO method, NbDistribution was set to 1000, which makes it possible to identify potential outliers and provide outlier-corrected causal estimates. To examine the contribution of the pathogenesis of CP to the identified bacterial taxa and plasma proteins, we further performed reverse MR analyses, following the same steps as the forward MR analysis described above.

Sensitivity analyses

Significant heterogeneity of the IVs could skew the causal assumptions in a summary-level MR analysis, weakening the validity of the causal inferences. Therefore, the Cochran Q test (IVW method) and the Rucker Q test (MR-Egger method) were employed to measure the heterogeneity of the IVs for each microbial taxon; P < .05 suggested the existence of significant heterogeneity. In the MR-Egger method, if the corresponding P value of the intercept was >.05, we assumed that the horizontal pleiotropy of the IVs was not pronounced enough to influence the causal inferences. If horizontal pleiotropy was identified, the MR-PRESSO outlier test was further performed to eliminate any probable outlier SNPs. Moreover, the MR-PRESSO global test was carried out before and after eliminating outliers to confirm that the IVs with pleiotropy had been excluded (P > .05). Subsequently, "leave-one-out" analysis was undertaken to determine the impact of each instrumental SNP on the MR conclusions, with outliers excluded to ensure the robustness of the causal estimates.

Overview

In this study, the selected level of gut microbiota composition was genus, and 131 genera were extracted. Excluding 12 unidentified taxa, 119 genus-level taxa were acquired. Their SNPs were then screened in multiple steps, and a total of 1531 eligible SNPs were consequently included in the MR analysis. The details of the genus-level taxa and the corresponding IVs are listed in Supplementary Table S1, http://links.lww.com/MD/M281. The specific plasma proteins are presented in Supplementary Table S2, http://links.lww.com/MD/M282.
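Under the same assumptions as the previous sketch, the harmonization, estimation, and sensitivity steps described above might be expressed as follows; outcome_dat stands for pre-loaded CP summary statistics and is a hypothetical placeholder.

    # Harmonize exposure and outcome alleles; action = 2 drops palindromic SNPs
    # with intermediate allele frequency, as described in the text.
    dat <- harmonise_data(exposure_dat, outcome_dat, action = 2)

    # Primary IVW estimate plus the supplementary robust methods.
    res <- mr(dat, method_list = c("mr_ivw", "mr_weighted_median",
                                   "mr_weighted_mode", "mr_egger_regression"))

    # Sensitivity analyses: heterogeneity (Cochran/Rucker Q tests), MR-Egger
    # intercept (horizontal pleiotropy), MR-PRESSO, and leave-one-out.
    het  <- mr_heterogeneity(dat)
    plt  <- mr_pleiotropy_test(dat)
    pres <- run_mr_presso(dat, NbDistribution = 1000)
    loo  <- mr_leaveoneout(dat)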
Exploring the causal association from CP-related gut microbiome to CP-related plasma proteins

Changes in intestinal flora can affect body metabolism and substance synthesis. Therefore, to explore possible mechanisms linking the identified genus-level taxa to CP, we further carried out 2-sample forward MR analyses from the identified 5 CP-related gut taxa to the CP-related plasma proteins. However, no significant causal link from the identified 5 CP-related gut taxa to the CP-related plasma proteins was found (P IVW > 0.05, Supplementary Table S4, http://links.lww.com/MD/M284).

Reverse Mendelian randomization

Based on the aforementioned approach to genetic instrument selection, a total of 32 eligible SNPs for CP were included in the reverse MR analysis (Supplementary Table S5, http://links.lww.com/MD/M285). Reverse MR analyses from CP to the identified 5 bacterial taxa and 4 plasma proteins were then conducted. According to the IVW method, there was no statistically significant causality from CP to the identified 5 bacterial taxa and 4 plasma proteins (P IVW > 0.05, Supplementary Tables S6-7, http://links.lww.com/MD/M286, http://links.lww.com/MD/M287).

Sensitivity analysis

The corresponding P values of the heterogeneity tests and the intercept terms for the 5 identified genus-level taxa and 4 plasma proteins were all > 0.05, revealing no notable horizontal pleiotropy or heterogeneity for these significant estimates (Tables 1 and 2, Supplementary Figures 1, http://links.lww.com/MD/M278, and 2, http://links.lww.com/MD/M279). Moreover, leave-one-out analysis of these 5 bacterial taxa revealed no SNPs with a dominant effect on the MR inference, confirming the reliability and stability of the 5 identified causal associations (Supplementary Figure 3, http://links.lww.com/MD/M280).

Discussion

Using a thorough MR analysis, our study identified 5 genetically estimated genus-level taxa associated with CP: Sellimonas and Eubacterium ventriosum group indicated an unfavorable influence on CP, whereas Escherichia-Shigella, Eubacterium ruminantium group and Prevotella 9 were associated with a decreased risk of CP, indicating a protective impact of these 3 genera. Additionally, the reliability and consistency of our results were further verified using multiple methods of sensitivity analysis. To the best of our knowledge, this is the first MR study to thoroughly explore the effect of the intestinal microbiota on CP in a causal way. Even with advancements in endoscopic and radiological techniques, diagnosing CP, especially early in the disease course, can still be difficult, as early disease is more likely to be asymptomatic and to lack typical signs on routine imaging [2]. Knowing the identified microbial taxa related to CP in a causal way may aid in the early diagnosis of this underlying illness. The genus Sellimonas has been reported to be associated with breast cancer and chronic granulomatous disease [23,24]. Prevotella 9 is related to the production of SCFAs [25]. A study demonstrated that, compared to ulcerative colitis patients without depression, patients with ulcerative colitis and depression had more Sellimonas but less Prevotella 9 in their fecal microbial community, suggesting the abundance of these 2 taxa may be related to the mental state of patients [26].
An increasing prevalence of anxiety and depression in CP patients has been observed [27], and an MR analysis revealed that genetic liability to depression was associated with an increased risk of CP [28]. Combined with our MR results, we believe that changes in the gut microbiota in CP may be partly due to the accompanying low mood or depression.

The gut microbiota is well recognized as a crucial immunomodulatory regulator, contributing to host metabolism, immunological homeostasis and immune disease in humans [29,30]. Besides, it has been established that the pancreatic injury and fibrosis in CP may be driven by host immune activation [31]. Indeed, substantial evidence has established links between gut microbiota dysbiosis and pancreatic disorders. Damage-associated molecular pattern-mediated cytokine activation, which results in the migration of gut microbiota organisms into the circulation and the regulation of innate immune responses in the intestinal cells, drives the inflammation that underlies pancreatitis [32]. CP is linked to considerable dysbiosis of the intestinal microbiota, including a significant decrease in diversity and richness and an increased abundance of opportunistic pathogens [33,34]. A study using the 16S rRNA gene sequencing technique demonstrated severely reduced microbial diversity in CP patients, in whom SCFA-producing bacteria such as Faecalibacterium were reduced, whereas an increased abundance of facultative pathogenic organisms, such as Enterococcus, Streptococcus, and Escherichia-Shigella, was detected [34]. As the main energy source of intestinal epithelial cells, SCFAs can promote the proliferation and differentiation of epithelial cells, reduce cell apoptosis, and maintain the mechanical barrier of the intestinal mucosa [35]. Additionally, in the gut microbiomes of CP patients, the abundances of Escherichia-Shigella and other genera have been shown to be relatively high and that of Faecalibacterium low [36]. The plasma concentration of bile acids is increased in CP patients, which may be due to impaired bile acid circulation resulting from gut microbiota alteration [37]. Therefore, patients with CP may have reduced levels of some taxa that help maintain intestinal barrier function, and similar results have been confirmed by other studies [38]. In addition, the exocrine function of the pancreas has the ability to modify the makeup of the gut microbiota [8], and steatorrhea caused by CP can also alter the host gut microbiota. Nevertheless, our reverse MR study demonstrated no causal estimates from CP to the 5 identified bacterial taxa.

Furthermore, we found a causal contributing connection of CCL23 to CP pathogenesis. CCL23 is a known chemoattractant for resting T cells, monocytes and dendritic cells via their CCR1 receptor, promoting migration to sites of inflammation [39]. Additionally, a positive correlation has been found between plasma CCL23 levels and established indicators of the severity of systemic mastocytosis [40]. Conversely, plasma Caspase 8, ADA and SIRT2 levels were negatively related to CP risk in our study. Caspase-8 is involved in the regulation of cell death mechanisms and immune responses in conditions such as infection, autoimmunity, and T cell signal transduction. In humans, immunodeficiency, inflammatory bowel disease, and autoimmune lymphoproliferative syndrome can result from a lack of Caspase-8 [41].
In humans, ADA is an enzyme involved in purine metabolism and the production of immune cells. If this enzyme is mutated or lacking, complex immune deficiency and dysfunction of T cells, B cells, and natural killer cells may occur [42]. The function of ADA in the pathogenesis of autoimmune diseases, such as rheumatoid arthritis, has been noted [43]. SIRT2 has been associated with red cell distribution width, an important hematological prognostic marker in acute pancreatitis [44,45]. There is growing recognition of the role of the microbiome in pancreatic disorders, and altering the bacterial ecology of the gut has the potential to be a potent therapeutic modality for CP. Experimental data demonstrated that FMT from CP mice increased pancreatic fibrosis by raising the infiltration of CD4+ T cells and macrophages [11]. It has been proposed that FMT coordinates immunological responses in the gut mucosa and the pancreas-resident immune system [6,46]. Using Ganoderma lucidum strain S3, pancreatitis in mice could be relieved by reducing inflammatory mediator levels and enhancing antioxidant activity, which was related to an increased abundance of advantageous taxa [47]. In addition, administration of Inonotus obliquus polysaccharide could improve the intestinal microecology and thereby exert a therapeutic effect on CP in mice [48,49]. Our project has some limitations: baseline information for participants was lacking, and the P value rather than the FDR was applied as the threshold to determine causality, which may result in more false positives.

Conclusions

In summary, this MR study illuminates a possible causative involvement of the gut microbiota in the pathophysiology of CP. Detection of the gut microbiota could be a practical method of disease screening to identify groups at higher risk of CP; for clinicians, early stool examination might thus be a feasible practice for recognizing populations at higher risk of CP. Furthermore, gut microbiota conditioning may be a future CP treatment. Therefore, more research is needed to verify our MR results and uncover the underlying mechanisms.

XY and HX contributed equally to this work. The authors have no funding and conflicts of interest to disclose. The datasets generated and/or analyzed during the current study are publicly available. No additional ethics approval or informed consent was required because our study was based on public databases. Supplemental Digital Content is available for this article.

a Department of Anesthesiology, Affiliated Hospital of Guangdong Medical University, Zhanjiang, Guangdong Province, People's Republic of China; b Department of Hepatobiliary and Pancreatic Surgery, Affiliated Hospital of Guangdong Medical University, Zhanjiang, Guangdong Province, People's Republic of China.

Figure 1. Workflow illustrating the 2-sample bidirectional MR analysis for exploring the causal associations between gut microbiota, plasma proteins and chronic pancreatitis. GWAS = genome-wide association study, MR = Mendelian randomization, SNP = single nucleotide polymorphism.

Figure 2. Forest plots illustrating the results of 2-sample MR analyses at the significance level (IVW P < .05) with the IVW method, in which the 5 genus-level taxa and 4 circulating proteins were identified. IVW = inverse variance weighted, MR = Mendelian randomization, OR = odds ratio.
Figure 3. Scatter plots of the causal associations of the 5 identified genus-level microbial taxa and 4 plasma proteins with chronic pancreatitis. Each scatter point represents the SNP effect of the exposure on chronic pancreatitis; the slope of each line corresponds to the estimated effect from a different model. (A) Sellimonas; (B) Escherichia-Shigella; (C) Eubacterium ruminantium group; (D) Eubacterium ventriosum group; (E) Prevotella 9; (F) Adenosine Deaminase level; (G) Caspase 8 level; (H) C-C motif chemokine 23 level; (I) SIR2-like protein 2 level. MR = Mendelian randomization, SNP = single nucleotide polymorphism.

Table 1. Association of genetically predicted gut microbiota with chronic pancreatitis.
Recombinant Tissue Plasminogen Activator Increases Blood-Brain Barrier Disruption in Acute Ischemic Stroke: An MR Imaging Permeability Study

BACKGROUND AND PURPOSE: Although thrombolytic therapy (recombinant tissue plasminogen activator [rtPA]) represents an important step forward in acute ischemic stroke (AIS) management, there is a clear need to identify high-risk patients. The purpose of this study was to investigate the role of quantitative permeability (KPS) MR imaging in patients with AIS treated with and without rtPA. We hypothesized that rtPA would increase KPS and that KPS MR imaging can be used to predict the risk of hemorrhagic transformation (HT).

MATERIALS AND METHODS: Thirty-six patients with AIS were examined within a mean of 3.6 hours of documented symptom onset. KPS MR imaging was performed as part of our AIS protocol. KPS coefficients in the stroke lesion were estimated for all patients, and the relationship between KPS and both HT and rtPA was investigated by using Student t tests. Receiver operating characteristic (ROC) curves were computed for predicting HT from KPS.

RESULTS: The occurrence rate of HT for patients who received rtPA and those who did not was 43% and 37%, respectively. Assessment of KPS in the lesion revealed significant differences between those who hemorrhaged and those who did not (P < .0001) as well as between rtPA-treated and untreated patients (P = .008). ROC analysis indicated a KPS threshold of 0.67 mL/100 g/min, with a sensitivity of 92% and a specificity of 78%.

CONCLUSIONS: The results of this study indicate that KPS is able to identify patients at higher risk of HT and may allow the use of physiologic imaging, rather than time from symptom onset, to guide treatment decisions.

Recent evidence indicates that early use of thrombolytic agents has a positive impact on neurologic outcome following acute ischemic stroke (AIS). However, increased risk of hemorrhagic transformation (HT) limits the general use of thrombolytic therapies (ie, recombinant tissue plasminogen activator [rtPA]) for treatment of AIS.1 It has been estimated that <10% of patients with AIS receive this treatment.2 Although clinical3,4 and radiologic5,6 findings have been associated, retrospectively, with subsequent neurologic outcome, it remains difficult to identify patients at high risk of HT before the administration of rtPA.4,5,7-10 At present, the selection criteria for rtPA are clinical and depend on nonenhanced CT or MR imaging to rule out intracerebral hemorrhage. Beyond a certain fixed time window, rtPA administration is contraindicated (before the European Cooperative Acute Stroke Study [ECASS III] recommendations11: 0-3 hours for intravenous rtPA and 3-6 hours for intra-arterial rtPA). However, these heuristics may not be optimal and have been questioned by many authors.12 MR imaging offers the option of selecting patients for treatment on the basis of physiology rather than time. When used in conjunction with clinical criteria, MR imaging-guided treatment selection could potentially lead to an increase in treatment eligibility, improved outcomes, and reduced complications. Development of physiologically relevant MR imaging, including diffusion and perfusion MR imaging, has improved the characterization of ischemic tissue to the point where it is possible to identify viable tissue at risk (the penumbra). Re-establishing perfusion to the penumbral tissue has become integral to modern stroke management and new drug trials.
An example is the intra-arterial Prolyse in Acute Cerebral Thromboembolism (PROACT II) trial, the first to show the benefit of thrombolysis in the 3- to 6-hour time window13 and to show the potential of advanced imaging to identify who could still benefit beyond 3 hours. The recently completed sister trials, Desmoteplase in Acute Ischemic Stroke and the Dose Escalation Study of Desmoteplase in Acute Ischemic Stroke, also successfully stretched the treatment window to 3-9 hours by using a novel intravenous thrombolytic drug and MR imaging-based selection.14,15 The role of MR imaging as a physiologic imaging tool with the ability to characterize tissue injury continues to expand in clinically meaningful directions. For example, we have recently shown the feasibility of quantitative permeability (KPS) MR imaging16 to measure blood-brain barrier (BBB) disruption at infarct presentation in 10 patients with acute symptoms not treated with rtPA. Of these 10 patients, 3 showed HT within 48 hours. More important, significantly increased KPS was only observed in the 3 patients who later proceeded to HT. Although none of the patients examined in that study received rtPA, accumulating evidence suggests that rtPA therapy may amplify BBB disruption. In this study, we investigated the role of KPS MR imaging in patients with AIS (mean time to MR imaging of 3.6 hours from symptom onset) treated with rtPA when clinically indicated. We hypothesized that the presence of rtPA would increase permeability (KPS) and that KPS could be used to stratify the risk of HT.

Patient Population

All studies were performed in accordance with the institutional guidelines for human research. All participating subjects (or their substitute decision-makers) provided written informed consent. Patients with a working diagnosis of AIS based on clinical assessment and CT findings were included. Additional inclusion criteria were onset of symptoms <6 hours before presentation (3 patients exceeded this time window by 0.5-1.5 hours) and successful screening for MR imaging safety. Patients with nonstroke lesions shown on CT, prior history of intracranial hemorrhage, uncontrolled hypertension, seizure at onset of stroke, known bleeding diathesis, or abnormal blood glucose levels were excluded. Thirty-six patients (18 women, 18 men; 27-93 years of age) with AIS satisfied the inclusion/exclusion criteria and underwent MR imaging within a mean of 3.6 hours of documented symptom onset. Stroke severity was determined by using the National Institutes of Health Stroke Scale (NIHSS). Fifteen patients received rtPA within a mean of 2.17 hours from symptom onset. The decision to initiate rtPA treatment occurred after the admission CT and before MR imaging for 9/15 patients. For 3 patients, the time of rtPA administration was not documented; for another 3 patients, rtPA was not administered until after MR imaging. Follow-up imaging was performed 24-72 hours later to assess HT by using either CT or MR imaging. These imaging studies did not include angiographic sequences; therefore, recanalization rates were not determined. Demographics and treatment characteristics of all patients studied are listed in the Table.

MR Imaging Protocol

MR imaging consisted of a comprehensive acute stroke MR imaging protocol including anatomic imaging, whole-brain diffusion- and perfusion-weighted imaging of both hemispheres excluding the cerebellum, contrast-enhanced MR angiography, and high-resolution postcontrast T1-weighted imaging.
In addition, a dynamic contrast-enhanced (DCE) 3D gradient recalled-echo (GRE) sequence was performed to assess KPS/BBB integrity. This scan was always obtained before both perfusion-weighted MR imaging and contrast-enhanced MR angiography (MRA). Each patient received a total of 3 × 15 mL doses of gadodiamide (Omniscan formulation; GE Healthcare, Milwaukee, Wis). Note that these data were collected before the 2006 US Food and Drug Administration (FDA) public health advisory statement regarding nephrogenic systemic fibrosis. We, therefore, no longer use this high-dose protocol in patients with poor or uncertain renal status, and we strongly recommend adherence to the most recent FDA guidelines on this matter (http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/ucm142884.htm). All subjects were imaged on a 1.5T clinical MR imaging system (Signa Excite; GE Healthcare) equipped with echo-speed gradients and an 8-channel head coil. Imaging parameters for the 3D GRE acquisition were as follows: FOV, 240 mm; matrix, 128 × 128; flip angle, 20°; section thickness, 5 mm; 12-14 sections; TR, 5.9 ms; TE, 1.5 ms. The total acquisition time was 4 minutes 48 seconds for a collection of 31 volumes at a temporal resolution of 9 seconds. The DCE sequence covered the entire infarct in all cases. Contrast media were injected as a bolus (5 mL/s) 30 seconds following the start of the 3D acquisition by using a power injector (Spectris Solaris; Medrad, Indianola, Pa).

Image Analysis

Data were transferred to an independent workstation for image registration and quantitative analysis. Image registration was performed by using an automated local affine model implemented in Matlab, Version 6.3 (MathWorks, Natick, Mass) to maximize mutual information between datasets.17 Parametric maps of KPS were calculated on a pixel-by-pixel basis by using in-house software (MR analyst, Version 1.3; University of Toronto, Toronto, Ontario, Canada). A unidirectional 2-compartment kinetic model was implemented to model the relationship between the tissue concentration of gadopentetate dimeglumine (Gd-DTPA) (residue function) and the blood concentration-time curve of Gd-DTPA (input function) by using linear regression, as previously described by Roberts et al.18 For each voxel, the endothelial transfer constant KPS was calculated as the slope of the best fit, under the assumption that reflux of Gd-DTPA back into the intravascular space was negligible. KPS values were expressed in mL/100 g/min. The input function for all patients was obtained in the sagittal sinus, because a previous report by Ewing et al19 indicated that this vessel can provide a reasonable surrogate input function, with contrast concentrations matching data obtained from simultaneous sampling of arterial blood. Diffusion-weighted images with b-values of 0 and 1000 s/mm^2 were converted to apparent diffusion coefficient (ADC) maps on a pixel-by-pixel basis by using MR analyst. ADCs for a given direction were calculated by fitting the normalized logarithmic signal-intensity decay as a function of the b-value.20 Areas of ischemia were identified as regions of reduced diffusion relative to normal cortex on ADC maps and were the basis for region-of-interest selection. We selected 2 regions of interest: 1 within the area of ischemia (lesion) and the other within the homologous location in the contralateral hemisphere. Regions of interest were then copied to the corresponding KPS image (Fig 1). For each section, mean values for ADC and KPS were recorded for each region-of-interest pair. To identify HT, we performed follow-up imaging with noncontrast CT and magnetic susceptibility-sensitive gradient-echo MR imaging within 24-72 hours after initial imaging. The presence of HT was assessed by using the ECASS grading system.8
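The voxelwise fit described in the Image Analysis section reduces to a linear regression per voxel: on the Patlak plot, normalized tissue concentration is regressed against normalized integrated input, and the slope is the transfer constant. The sketch below is our illustration, not the authors' MR analyst software; the synthetic bolus and all variable names are assumptions, and the scaling needed to express the slope in mL/100 g/min (which requires a tissue-density assumption) is omitted.

```python
import numpy as np

def patlak_kps(t_min, c_tissue, c_input):
    """Estimate a unidirectional transfer constant from a Patlak plot.

    Tissue concentration divided by the input function is regressed
    against the running time-integral of the input function divided by
    the input function; the slope of the best-fit line is the transfer
    constant and the intercept approximates the blood volume fraction.
    Assumes no back-diffusion of contrast into the vasculature.
    """
    c_input = np.asarray(c_input, dtype=float)
    c_tissue = np.asarray(c_tissue, dtype=float)
    # Running integral of the input function (trapezoidal rule).
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (c_input[1:] + c_input[:-1]) * np.diff(t_min))))
    ok = c_input > 1e-6             # skip near-zero pre-bolus frames
    x = integral[ok] / c_input[ok]  # "Patlak time" (min)
    y = c_tissue[ok] / c_input[ok]  # normalized tissue concentration
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept         # slope has units of 1/min here

# Synthetic example: a toy bolus with 1%/min unidirectional leakage.
t = np.arange(0, 5, 9 / 60)                       # 9-s frames over ~5 min
cp = 5.0 * t * np.exp(-t / 0.7)                   # toy input function (a.u.)
ct = 0.05 * cp + 0.01 * np.concatenate(
    ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
kps, vp = patlak_kps(t, ct, cp)
print(f"slope ~ {kps:.4f} /min (true 0.01), intercept ~ {vp:.3f} (true 0.05)")
```

The recovered slope matches the simulated leakage rate, which is the sense in which the Patlak slope serves as the permeability estimate per voxel.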
Statistical Analysis

Patients were grouped on the basis of whether they received rtPA and whether they experienced HT. Comparisons among ADC or KPS values for the different groups were evaluated by using a 1-way analysis of variance (between groups). Infarct ADC and KPS data were subsequently collapsed according to HT status and treatment, and differences were assessed for significance by using Student t tests. Patients in whom the time of rtPA administration was either indeterminate or delayed until after MR imaging were excluded from the comparisons between treated and untreated patients. Mean KPS values for HT and non-HT groups were adjusted for initial stroke severity (NIHSS) by using an analysis of covariance (ANCOVA) procedure. Receiver operating characteristic (ROC) curves were computed for predicting HT from KPS. The ideal threshold was chosen as the one with the best average sensitivity and specificity. A P value < .05 was considered significant. All statistical analyses were performed by using R, Version 2.4.21

Results

Six patients in the rtPA group and 7 patients in the untreated group proceeded to HT. Among the 13 patients with HT, 2 were categorized as hemorrhagic infarction (HI1); 3, as HI2; 4, as parenchymal hematoma (PH1); and 4, as PH2. ADC was reduced in all patients within the infarct zone, and the 1-way between-groups analysis of variance (ANOVA) did not reveal any significant differences (P = .61). When the groups were collapsed according to HT status at follow-up, the mean ADC measured in those who subsequently hemorrhaged was 603 ± 98 versus 626 ± 103 × 10^-6 mm^2/s in those without HT (P = .53). Collapsing the groups according to rtPA treatment did not reveal any significant difference in infarct-zone ADC (630 ± 90 versus 608 ± 109 × 10^-6 mm^2/s for treated and untreated groups, respectively; P = .52). Assessment of KPS in the lesion for all groups is summarized in Fig 2. The 1-way between-groups ANOVA did not reveal any significant differences (P = .55). KPS within the infarct zone revealed significant differences between those who proceeded to hemorrhage and those who did not (1.27 ± 0.59 versus 0.50 ± 0.29 mL/100 g/min, P < .0001) as well as between those who were treated with rtPA and those who were not (1.08 ± 0.39 versus 0.56 ± 0.45 mL/100 g/min, P = .008). The ANCOVA revealed that NIHSS was a significant covariate (P = .014) of KPS. However, NIHSS-adjusted means indicated that the mean HT KPS remained significantly greater than the non-HT KPS (NIHSS-adjusted P = .012; unadjusted P < .0001). A sample KPS map obtained in an rtPA-treated patient who progressed to PH1 is shown in Fig 1. Overall, mean KPS values were significantly increased in infarct regions of interest relative to the mean KPS values computed for the contralateral regions of interest (0.78 ± 0.56 versus 0.32 ± 0.24 mL/100 g/min, P < .0001). ROC analysis indicated a KPS threshold of 0.67 mL/100 g/min, with a sensitivity of 92% and a specificity of 78% for identifying HT. In 3 patients, KPS exceeded this threshold despite no visible evidence of gadolinium enhancement (Fig 3). All 3 of these patients subsequently hemorrhaged (classified as HI1, HI2, and PH1). Overall, there were no significant differences in KPS between HI and PH subtypes (1.00 ± 0.43 versus 1.47 ± 0.60 mL/100 g/min; P = .13).
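The threshold rule used above, the cutoff with the best average sensitivity and specificity, is straightforward to reproduce. The following sketch is illustrative only: the KPS values and HT labels below are hypothetical, not the study's patient data.

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the cutoff maximizing mean(sensitivity, specificity).

    A minimal sketch of the thresholding rule described in the text
    (equivalent to maximizing Youden's J up to a constant). `scores`
    are lesion KPS values; `labels` are 1 for HT, 0 for no HT.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best = None
    for thr in np.unique(scores):
        pred = scores >= thr                   # call HT if KPS >= threshold
        sens = np.mean(pred[labels == 1])      # true-positive rate
        spec = np.mean(~pred[labels == 0])     # true-negative rate
        avg = 0.5 * (sens + spec)
        if best is None or avg > best[0]:
            best = (avg, thr, sens, spec)
    return best

# Hypothetical KPS values (mL/100 g/min); not the study's measurements.
kps = [0.31, 0.45, 0.52, 0.60, 0.67, 0.72, 0.90, 1.10, 1.35, 1.80]
ht  = [0,    0,    0,    0,    1,    1,    0,    1,    1,    1]
avg, thr, sens, spec = best_threshold(kps, ht)
print(f"threshold={thr:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```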
Discussion

We investigated the role of KPS MR imaging, comparing patients who received rtPA with those who did not, within a mean time to MR imaging of 3.6 hours from documented symptom onset. KPS was significantly elevated in patients who hemorrhaged compared with those who did not. This was observed in patients who received rtPA and in patients who did not receive rtPA, suggesting that compromise of BBB integrity plays a critical role in HT. Furthermore, ROC analysis indicated a threshold of 0.67 mL/100 g/min, which could be considered a physiologic threshold for decision-making regarding rtPA administration. However, a larger series of patients needs to be studied before clinical implementation of this threshold. Furthermore, there was insufficient statistical power to identify any effects of HT severity on KPS. Although follow-up evidence of petechial hemorrhage is unlikely to be clinically important, its existence on admission might portend an increased risk of secondary hemorrhage and might influence the decision to treat with rtPA therapy. Another important issue is the precision of the KPS estimates, as suggested by the wide range of KPS values reported in this study. To some degree, a larger sample size would almost certainly reduce the SD of our KPS estimates, but further gains in precision will probably demand an improvement in the signal intensity-to-noise ratio (SNR) of the underlying MR imaging data. Although it was possible to identify statistically significant differences between treatment groups with our protocol, better SNR may be required to obtain reliable data for clinical decision-making in single subjects. Strategies for attaining high-SNR KPS data are currently under active investigation at our institution. In fact, compared with deconvolution-based procedures, the multiple time-graphic or Patlak plot method used in this study is considered less sensitive to noise.19 However, one of the limitations of the Patlak method is the assumption that there is no back-diffusion of tracer (or, in this case, the MR imaging contrast agent) into the blood stream. We have assumed that there is an effectively irreversible space where contrast becomes trapped, at least on the time scale considered in the DCE experiment. Nevertheless, this assumption is thought to be reasonable for as long as 20-30 minutes in regions of BBB leakage.22 Several investigators have recently used DCE CT perfusion imaging for the evaluation of BBB KPS in patients with AIS.23-25 At potentially <1 minute in length, perfusion CT can be readily appended to the standard emergency CT evaluation for patients with AIS. Lin et al23 reported focal increases in KPS of >10-fold relative to normal parenchyma. However, a recent study by Dankbaar et al25 reported that KPS values derived from a 45-second CT acquisition overestimated those based on 3-minute datasets (7.6 versus 1.3 mL/100 g/min in AIS infarct regions of interest). In fact, the delayed or steady-state KPS estimates reported by Dankbaar et al were comparable with the MR imaging results presented here for DCE data (acquired in slightly less than 5 minutes). However, the acquisition of KPS MR imaging data is more favorable than CT acquisition because of improved contrast-to-noise.
In addition, MR imaging has the advantage of capturing the entire spectrum of acute ischemic lesions, from multiple small foci (invisible on CT) all the way to large vascular-territory lesions (for which CT is equally good). The MR imaging protocol used in this study was developed from a previous feasibility study that showed a potential correlation between BBB disruption and subsequent HT in AIS not treated with rtPA.16 In that study, 3 of the 10 patients converted to HT within 48 hours. More important, significant increases in KPS were observed exclusively in the 3 patients with HT. Our present study not only confirmed these previous results but, in addition, demonstrated the association between rtPA administration and elevated KPS values. Two pieces of evidence support these findings: The first is the fact that rtPA can activate matrix metalloproteinases (MMPs), which are involved in the catabolism of microvascular basement membranes.26 The inference is that leakage of rtPA through defects in the BBB can cause basement membrane breakdown, vascular fragility, and vascular wall compromise, resulting in hemorrhage. Patients with stroke with elevated MMP-9 levels have greater brain injury and poorer neurologic outcome27 and are more likely to undergo HT after rtPA treatment.28 The second piece of evidence is the clinical finding that patients presenting with AIS who are treated with rtPA have a 10-fold higher risk of HT (National Institute of Neurological Disorders and Stroke [NINDS] trial).4 Current treatment options, however, do not use patient-derived physiologic information for guiding therapy. Rather, therapeutic decisions are based on epidemiologically derived timing parameters, clinical presentation, and an imaging study, usually CT, to exclude the presence of hemorrhage. At our institution, patients undergo an initial noncontrast CT before MR imaging. This CT scan serves 2 purposes: to exclude the presence of nonischemic pathologies such as primary brain hemorrhage and to screen the patient for metallic objects. Safety for MR imaging can, therefore, be ensured and expedited without having to interrogate potentially language-impaired patients or relatives who may not have this important information. On the basis of the clinical presentation and this noncontrast CT scan, a decision is made to treat with intravenous rtPA. Exceptions to these guidelines may include patients such as the 3 included in this study in whom the noncontrast CT was inconclusive and the MR imaging performed immediately thereafter revealed AIS, enabling decisions regarding rtPA treatment. If, however, rtPA treatment is selected before MR imaging, then rtPA is delivered via an MR imaging-compatible infusion pump. Although the full impact of rtPA on KPS could not be assessed in the current study, a previous report indicated that parenchymal contrast enhancement (visible on T1-weighted MR imaging) within 30 minutes of rtPA treatment was strongly associated with symptomatic HT.29 However, we observed elevated BBB KPS (ie, infarct KPS > 0.67 mL/100 g/min) in the absence of visible enhancement in 3 patients who subsequently hemorrhaged, suggesting that static subjective assessments may be less sensitive than quantitative measures of BBB integrity.

Fig 3. A, An 81-year-old man with AIS, visible as an area of reduced diffusion (dark region) on the ADC map obtained at 2 hours 47 minutes after symptom onset. B and C, rtPA is administered during MR imaging. A region of interest is placed within the infarct, defined as the core area of reduced ADC, and then is copied to the equivalent DCE image set (B) to generate KPS maps (C). In this patient, the mean infarct KPS is 0.84 mL/100 g/min. D and E, A region of hyperintensity is also clearly visible on the equivalent diffusion-weighted image (D), but not on the postcontrast T1-weighted image obtained 20 minutes after completion of the KPS scan (E). F, Follow-up T2-weighted MR image obtained 48 hours later reveals 2 "hotspots," or areas of hypointensity, characteristic of HT, colocalized with the zones of hyper-KPS on the KPS map.

Another interesting finding from our data provides insight into the rapidity of BBB disruption induced by rtPA. The patients who were on rtPA infusion already had a significantly elevated KPS compared with those who did not receive the rtPA infusion. The data suggest that KPS elevations occur during the infusion itself, even before the full dose has been administered (note that patients were on an intravenous infusion of rtPA during MR imaging). Direct observation of this adverse effect has not previously been noted in patients but does raise important questions concerning ways to mitigate it. For example, introducing BBB-stabilizing agents during the rtPA infusion to prevent further degradation of the BBB might be worth investigating. Single high-dose steroid administration during rtPA infusion might also be considered for this purpose; however, it is not known whether the ischemic endothelium can respond to this agent. It may also be possible to counteract the effect of extravasated rtPA by administering MMP inhibitors. These agents are currently being tested in clinical trials, not for stabilization of the BBB but for inhibition of the vascular remodeling associated with neoangiogenesis in patients with cancer.30

Conclusions

KPS evaluated within the first 4 hours following symptom onset is a significant predictor of HT. This work also revealed that the KPS measured in patients who received rtPA was significantly greater than that in untreated patients. With further validation of our findings, we believe that KPS values could help to stratify the risk of secondary hemorrhage. Our current data suggest that the risk would be low for patients with KPS values < 0.67 mL/100 g/min and significantly higher for those with KPS values > 0.67 mL/100 g/min. The patients with AIS who would potentially benefit the most from image-guided risk stratification are those who are otherwise excluded from rtPA therapy because they present for treatment beyond the current fixed 4.5-hour time window or those in whom the time of onset is unknown. Low KPS values in such patients would argue that rtPA administration is safe.
Transvenous lead extraction with laser reduces need for femoral approach during the procedure

Introduction: Cardiac implantable electronic device (CIED) transvenous lead extraction (TLE) is technically challenging. Whether the use of a laser sheath reduces complications and improves outcomes is still in debate. We therefore aimed at comparing our experience with and without laser in a large referral center.

Methods: Information on all patients undergoing TLE was collected prospectively. We retrospectively compared procedural outcomes prior to the introduction of the laser sheath lead extraction technique with outcomes after its introduction.

Results: During the years 2007-2017, there were 850 attempted lead removals in 407 patients. Of these patients, 339 (83%) underwent extraction due to infection, 42 (10%) due to device upgrade or lead malfunction, and the remainder (7%) for other indications. Complete removal (radiological success) of all leads was achieved in 88% of patients. Partial removal was achieved in another 6% of the patients. Comparison of cases prior to and after introduction of the laser technique showed that with laser, a significantly smaller proportion of cases required conversion to the femoral approach [31/275 (6%) laser vs. 40/132 (15%) non-laser; p<0.001]. However, success rates of removal [259/275 (94%) vs. 124/132 (94%), respectively; p = 0.83] and total complication rates [35 (13%) vs. 19 (14%), respectively; p = 0.86] did not differ prior to and after laser use. In multivariate analysis, laser-assisted extraction was an independent predictor of not needing femoral extraction (OR = 0.39; 95% CI 0.23-0.69; p = 0.01).

Conclusion: Introduction of laser lead removal resulted in a decreased need to convert to the femoral approach, albeit without improving success rates or preventing major complications.

Introduction

With the steady increase in population life expectancy and the progress in medical knowledge and technology, the number of CIEDs (cardiovascular implantable electronic devices) implanted continues to rise worldwide [1,2]. This results in an increased need for lead extraction above and beyond the rise in implantations [1,3]. Leads are extracted due to infection, malfunction, venous stenosis, occlusion, need for device upgrade, and more [4]. There is a large variety of tools available to an extractor; options include locking stylets and mechanical and powered sheaths. In the last two decades the laser sheath has been introduced and is now widely used [5]. The decision which tool to use is made by the operator and the institute. Whether laser transvenous lead extraction (TLE) is preferable over non-laser (mechanical) methods remains a challenging question. Several recent studies reported conflicting results [6-9]. However, this has never been compared in a prospective randomized study. Our center is the largest referral center in our country. Since introduction of the laser sheath (January 2012), this has been the preferred method of TLE in our institute. The goal of this study is to summarize our experience in the last decade, with the aim of specifically examining whether the use of laser sheaths improves efficacy and safety during extractions.

Study patients and design

All consecutive patients who underwent TLE between January 2007 and October 2017 at our center were prospectively included. Group A (Non-Laser Era) comprised all consecutive patients until December 2011 (inclusive), in whom mechanical sheath extraction was the first-line strategy.
Group B (Laser Era) comprised all consecutive patients who underwent TLE during the rest of the study period, in whom a change of our institute's approach was implemented (first-line laser-assisted strategy); see Fig 1. All patients provided written informed consent. The study was approved by the Sheba International Review Board for Human and Animal Trials Committee (IRB-Helsinki Committee).

TLE procedure

All TLE procedures were performed with a cardiothoracic surgeon immediately available on site. Patients were under general anesthesia, with hemodynamic monitoring. A transesophageal echocardiography probe was available in the room. A large-bore femoral venous access was inserted in all patients. The procedure was performed by qualified, experienced operators (E.N., M.G., and D.L.). A stepwise approach was used in all patients, as described previously. The TLE procedure was terminated after complete removal of the leads, when lead fragments could not be removed, or in the event of a major complication.

Procedural success and endpoints

Procedural outcome and success were defined in accordance with the 2017 HRS expert consensus [11]:

• Complete Procedural Success: Removal of all targeted leads and all lead material from the vascular space, with the absence of any permanently disabling complication or procedure-related death.

• Clinical Success: Removal of all targeted leads and lead material from the vascular space, or retention of a small portion of the lead (less than 4 cm) that does not negatively impact the outcome goals of the procedure, i.e., when the residual part does not increase the risk of perforation, embolic events, or perpetuation of infection, or cause any undesired outcome.

• Complications: Classified using the 2017 HRS conventional criteria and attributed to the method used at the time the complication was observed. Complications were recorded until discharge from hospital.

Statistical analysis

Continuous data are presented as mean ± standard deviation or as mean with range. Categorical data are presented as number (percentage). For univariate analysis, continuous variables were compared using the Mann-Whitney U test. Categorical variables were compared using the χ2 test or Fisher exact test, as appropriate. Linear and logistic regression with robust standard errors were used to analyze the relationships between the groups. Statistical significance was accepted at a 2-sided P value of <0.05. Statistical analysis was performed with SPSS version 21.0 (IBM Corp., Chicago, IL) and SAS version 9.2 (SAS Institute Inc., Cary, NC).

Study population

During the last decade, a total of 407 patients underwent TLE. From January 2007 through December 2011, 132 patients underwent TLE (before the laser era). From the introduction of laser in our institute in January 2012 through December 2017, 275 patients underwent TLE (the laser era). Overall, 850 leads were extracted from the 407 patients. The mean age in both groups was similar (64 ± 16 years). Patients in the laser-era group had significantly more comorbidities (atrial fibrillation, hypertension, diabetes mellitus, and current malignancy; Table 1). The main indication for TLE in both groups was infection. Pocket-related infection was significantly more common prior to the laser era (48% vs. 32%, p<0.001). In contrast, TLE due to lead malfunction/avoiding abandoned leads was performed more during the laser era (13% vs. 3%, p<0.001). The leads were significantly older during the laser era (mean lead dwelling time, 8.2 vs.
6 years; p<0.001). Detailed characteristics of the study groups are presented in Table 1.

Procedural characteristics and success

The procedural characteristics and success rates of the two eras are detailed in Table 2. During the laser era, a higher proportion of leads (296 (51%)) were extracted using simple traction/locking stylet than during the pre-laser era (93 (34%)); p<0.001 for the comparison. Cases requiring crossover from the subclavian to the femoral station were significantly more common prior to the laser era (15% vs. 6%, p<0.001). The rates of both radiological and clinical success were identical in both groups.

Complications

The periprocedural complications during the two eras are detailed in Table 3. In both eras, there were similar rates of complications (14% during the pre-laser era and 13% during the laser era; p = 0.86 for the comparison). There was a trend toward periprocedural mortality (all due to superior vena cava (SVC) tear) during the laser era (3 (1.1%) patients vs. none, p = 0.24). As expected, SVC tear occurred only while using the laser sheaths (4 (1.5%) patients versus none with other sheaths; p = 0.13 for the comparison). During the laser era, 8 (3%) patients required pericardiocentesis due to significant pericardial effusion/tamponade, compared to 2 (1.5%) during the pre-laser era (p = 0.65).

The use of the femoral approach

For 71 leads there was a need to cross over to the femoral station; all of these leads were extracted successfully (clinical success) using this approach. Use of the femoral approach was significantly higher in the pre-laser era compared with the laser era (15% vs. 6%, p<0.001). Complications of the femoral approach were mainly minor bleedings and hematomas; there was no significant difference in complication rates between the eras. Multivariate analysis (Table 4), based on the results of the univariate analysis and clinical judgment, and including age, gender, and infection, demonstrated that the laser era was associated with a decreased risk of requiring the femoral station (OR 0.39, 95% CI 0.23-0.69, p = 0.01).
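For readers who want to reproduce the flavor of the univariate 2×2 comparisons behind these results, the sketch below applies a chi-square test to the crossover counts quoted in the abstract. It is an illustration of the method, not the authors' SPSS/SAS analysis, and the adjusted OR in Table 4 additionally controls for age, gender, and infection.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Crossover-to-femoral counts as quoted in the abstract: 31/275 (laser
# era) vs. 40/132 (pre-laser era). This is a minimal sketch of the
# univariate 2x2 comparison and the unadjusted odds ratio; the adjusted
# OR of 0.39 in Table 4 comes from a logistic regression (not shown).
table = np.array([[31, 275 - 31],    # laser era: crossover, no crossover
                  [40, 132 - 40]])   # pre-laser era
chi2, p, dof, _ = chi2_contingency(table)
or_unadj = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.1f} (dof = {dof}), p = {p:.4g}, "
      f"unadjusted OR = {or_unadj:.2f}")
```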
Discussion

The main finding of our study is that the use of laser significantly reduced the need to convert to the femoral approach during the procedure, without increasing complication rates. However, overall procedural success did not improve with laser. A comprehensive review of the literature from the recent decade does not allow clear conclusions, but a certain trend keeps recurring: most studies demonstrate higher procedural success rates along with higher overall complication rates for laser extraction methods over non-laser methods, especially a higher SVC tear rate [6-9]. A meta-analysis of 62 studies spanning a 15-year period and comprising 13,000 patients with 20,000 leads undergoing extraction showed that the use of a laser sheath was associated with an increased risk of major complications or death (p = 0.029), despite being associated with higher technical success of extraction (p = 0.003) [12]. An observational retrospective study including 1,449 patients who underwent laser-assisted lead extraction of 2,405 leads at 13 sites in the US and Canada between 2004 and 2007 showed a higher procedural success rate and a similar procedure-related complication rate (with the exception of the mortality rate, which was lower) [13]. A recent prospective study, which includes a large series of patients, summarizes the progress of mechanical lead-extraction methods over a decade. It concludes that newer mechanical methods may both improve success rates and lower complication rates; however, a greater need for a femoral approach may emerge while using these methods [14]. The above studies are not entirely in agreement with our findings in terms of success rates or complications. However, our study did show that the mechanical approach was an independent predictor of the need to convert to a femoral approach. Our study is the first to evaluate the rate of conversion to the femoral approach; avoiding such conversions significantly shortens the procedure. There was no significant evolution in mechanical tools over the last decade; thus, differences between studies cannot be attributed to older tools. The use of mechanical sheaths, even powered ones, obliges the operator to put more stress on the lead by pulling it back while advancing the sheath. The laser sheath allows relatively gentler traction, as the main aim of pulling back is to keep a straight rail for the laser sheath. This is why fewer leads tear apart, translating into less need for the femoral approach. We did not measure procedure length, but there is no doubt that conversion to the femoral approach significantly lengthens the procedure; thus, in our experience, TLE procedures have shortened significantly since the introduction of laser. One of the serious major complications associated with laser-assisted lead extraction is SVC or innominate vein injury. Despite awareness of this complication, and despite having cardiac surgeons present, it remains one with a very high mortality rate, reaching as high as 50% of cases [13,15]. However, vein tear is rarely seen while using mechanical sheaths. Although our study failed to show significant differences in death or complication rates between the two eras, we can clearly see a trend toward more vascular injury and secondary death in the laser era. This observation failed to reach statistical significance mainly because of the small number of patients included in this cohort. Both observations, the conversion to the femoral station and the vascular injury, could be attributed to the power and efficacy of the laser sheaths.

Limitations

The main limitation of our study is the non-randomized design. There are some significant differences in the characteristics of the patients and the indications for the procedure; these differences are secondary to expanding indications for extraction (i.e., older patients with more comorbidities). However, in the multivariable analysis, the use of laser was still the only predictive variable (Table 4). The design of the study cannot eliminate the possibility that the ability to cross over to another sheath biased the results; however, since we used the same stepwise approach in all patients during each period, we assume that this bias was low. Our relatively low number of patients with complications made it difficult to establish statistical significance; we assume that evaluating differences in complication rates would require a larger sample size. Lastly, our data might be prone to a practice effect, as physicians are usually more experienced and specialized in the laser era. However, the extractors (M.G. and D.L.) participating in the pre-laser era had gained extensive experience prior to the initiation of our study registry, while the other extractor (E.N.) joined the group only after the introduction of laser.
Therefore, we believe that experience did not bias our conclusions.

Conclusion

In our high-volume center, introduction of laser lead removal resulted in less need to convert to the femoral approach, albeit without improving success rates or preventing major complications.

Ethics approval and consent to participate: The study was approved by the Sheba institute's internal review board and was performed according to the principles expressed in the Declaration of Helsinki and the ethics policy of the Sheba Medical Center.
Social, Economic, and Resource Predictors of Variability in Household Air Pollution from Cookstove Emissions

We examine whether social and economic factors, fuelwood availability, and market and media access are associated with owning a modified stove and with variation in household emissions from biomass combustion, a significant environmental and health concern in rural India. We analyze cross-sectional household socio-economic data and PM2.5 and particulate surface area concentrations in household emissions from cookstoves (n = 100). This data set combines household social and economic variables with particle emissions indices associated with the household stove. The data come from a field study of household emissions by the Foundation for Ecological Security, India. In our analysis, we find that less access to ready and free fuelwood and higher wealth are associated with owning a replacement/modified stove. We also find that additional kitchen ventilation is associated with a 12% reduction in particulate emissions concentration (p<0.05), after we account for the type of stove used. We did not find a significant association between replacement/modified stoves and household emissions when controlling for additional ventilation. Higher wealth and education are associated with having additional ventilation. Social caste and market and media access did not have any effect on the presence of replacement or modified stoves or additional ventilation. While the data available to us do not allow an examination of direct health outcomes from emissions variations, adverse environmental and health impacts of toxic household emissions are well established elsewhere in the literature. The value of this study is in its further examination of the role of social and economic factors and available fuelwood from commons in the type of stove used and additional ventilation, and their effect on household emissions. These associations are important since the two direct routes to improving household air quality among the poor are stove type and better ventilation.

Introduction

Around the globe, 2.7 billion people depend on traditional biomass fuels to meet their daily household energy needs for cooking and heating, and this number is estimated to rise to 2.8 billion by 2030 [1]. Burning wood, crop waste, grasses, shrubs, and dung is inefficient and unhealthy, and has adverse effects on the environment. Mounting interest from governments and international multilateral agencies in sustainably replacing traditional cookstoves with improved stoves and substituting cleaner fuels for solid biomass is motivated by the potential to improve human health and local environments as well as by climate benefits [2]. Approximately 2 million people die annually because of indoor air pollution from solid biomass combustion, and 99 percent of these deaths occur in developing countries [3], with 570,000 annual deaths in India [4,5]. Adverse health conditions associated with exposure to biomass emissions include chronic bronchitis [6]; chronic obstructive pulmonary disease (COPD) and asthma [6,7]; acute respiratory infections [7,8]; decreased lung function [9]; tuberculosis [10]; nasopharyngeal, laryngeal, and lung cancer [11]; pneumonia [12]; and low birth weight among children [13]. Recent studies suggest that biomass combustion is an even greater risk factor for COPD than cigarette smoking, particularly in India, where 156 million households still depend on solid biomass for cooking and heating [14].
The urgency to address the health of millions is reflected in the newly formed Global Alliance for Clean Cookstoves to promote improved biomass cookstoves [15]. Replacement cookstoves designed for high efficiency and low emissions, and modifications to ventilation in the cooking area, are two solutions for reducing exposure to harmful household air pollution. The available evidence suggests that a household's behavioral response to interventions depends on livelihood strategies, household characteristics, variability in solid biomass availability, culture-based preferences around food preparation, and the cost of obtaining traditional fuels [16,17]. There is some initial evidence that when traditional fuels are abundant, households are less likely to adopt new and cleaner energy solutions [18]. The social and economic class of a household is a significant factor in the type of fuel used and how efficiently that fuel is combusted. The type of fuel and the efficiency of combustion determine the subsequent harmful impacts on the household. There has been greater attention to understanding health outcomes across social and economic groups [19-21], including a focus on how access to types of fuel and exposure to varying environmental conditions differ by social and economic class and drive variations in health [22]. The poor primarily use polluting fuels like wood, crop waste, and dung cakes obtained from common lands (pastures and forests) that are not privately owned and are accessible to all members of a community for resource extraction [23]. The dependence of the poor on such freely available solid fuels makes exposure to higher levels of emissions more likely. Although many studies have analyzed household cookstove emissions in rural India and throughout the developing world, few examine how social, economic, and other contextual factors, such as access to fuelwood from commons and access to markets, place households at continued risk of using traditional stoves and solid biomass fuels [2]. One study that directly analyzes the relationship between socio-economic variables and emissions levels, by Dasgupta et al., concludes that households with higher income and education have lower levels of PM10 exposure [24]. That study further investigates factors associated with socio-economic status that affect the level of emissions exposure of household members, including stove type and ventilation. A systematic review of stove research concludes that there is evidence for a positive effect of income and education on the uptake of cleaner stoves and fuels, though not across the board, and notes the need for more evidence on an expanded set of contextual variables, such as fuelwood availability and proximity to markets, in addition to income and education [2]. This paper is in part a response to such calls, with a particular focus on testing whether social and economic factors, open access to biomass fuel, easy village access to markets, and media exposure affect the type of household stove used and the likelihood of additional ventilation, and how these relate to the level of emissions in a household. A particular focus on these factors is especially timely given that India is poised to launch a new, large-scale program, the initiative on improved biomass stoves, in which millions of stoves will be disseminated with the objective of reducing household air pollution [25].
Understanding how socio-economic factors and access to fuelwood, markets, and media are related to the likelihood of having a traditional or an improved stove, and to associated levels of cookstove emissions, will help in identifying potential barriers to reducing harmful household emissions in biomass-dependent rural households [25]. We analyze the importance of these factors for household air quality in rural Andhra Pradesh and Karnataka, India. Particulate emissions indices described in Sahu et al have been calculated for each household and combined with data on social variables, market access, media exposure, and fuelwood availability from commons to examine variation in emissions levels across households as a function of these variables [26]. The National Institutes of Health has underscored the need for a better understanding of such contextual factors in household air pollution from cookstoves, in addition to more research on the health risks associated with such emissions [27]. The data available to us do not provide health information, and it is therefore not possible to associate variations in emissions with specific household health outcomes. In this paper, however, we offer additional empirical evidence for understanding household air pollution, a significant factor in the adverse health of the poor, from household variability in social and economic privilege, fuelwood access, market access, and media exposure.

Methods

Data come from a cross-sectional study of a random sample of households by the energy team of the Foundation for Ecological Security (FES), India. We obtained formal approval for this study from the Washington University Human Research Protection Office (HRPO), which determined that it does not involve activities subject to Institutional Review Board oversight. The data obtained for this analysis are anonymized; therefore, this activity does not meet federal definitions under the jurisdiction of an IRB and falls outside the purview of the HRPO. Households were selected through stratified random sampling of habitations from among all the habitations where a proportion of the traditional stoves had been replaced with new stoves in these two regions. Households within the selected habitations were randomly chosen. Thirty habitations were randomly selected from these clusters of habitations: 10 each from the Thambalapalle and Kalicherla regions of Andhra Pradesh, and another 10 from the Rayalpadu region of Karnataka. In all these 30 habitations, there are households that received improved stoves and households with traditional stoves. In each habitation, four households were selected, for a total sample size of 120 households. After excluding cases with missing data, our analysis in this paper is based on 100 households with traditional and replacement stoves. The data obtained for this analysis do not contain identifiable information on households and villages. Socio-economic data on age of respondent, caste, wealth, livelihood strategies, availability of commons for biomass, perceptions of fuelwood scarcity, whether the household owns a TV (a proxy for media exposure), presence of all-weather road access to the village (a proxy for market access), and household air quality are available for each household in these data (Table 1). Emissions sampling was conducted concurrently with the household surveys, wherein PM2.5 concentration and particulate surface area concentration were measured for cookstove emissions in these households.
PM2.5 concentration data were gathered using a personal aerosol monitor (TSI SidePak AM510; St. Paul, MN, USA) and a UCB monitor (designed at the University of California, Berkeley). The real-time surface area concentration of airborne particles depositing in the tracheobronchial and alveolar regions of the lung was collected using a nanoparticle surface area monitor (TSI AEROTRAK 9000; St. Paul, MN, USA). PM2.5 and particulate surface area were measured for households with traditional biomass stoves, replacement Deenabandhu-model biogas stoves, and Sarala-model improved chulhas with flue and chimney. Particle deposition in the tracheobronchial (TB) and alveolar (A) regions of the lung was calculated using the established deposition curves given by the International Commission on Radiological Protection (ICRP). The surface area size distributions obtained from the different stoves were then weighted by the deposition fraction (which depends on particle properties) and integrated over the desired particle size range to determine the surface area (SA) of particles deposited in the lung. Emissions concentrations were sampled at two distances from the stoves: Location 1, the breathing zone of the stove user (within 0.5 m); and Location 2, distances representing where non-stove-using members would carry out daily activities (between 1 and 5 m away). We use only the Location 1 emissions data in our analysis. A detailed description of the IAQ sampling methods and the development of the emissions indices is provided in a previous publication by Sahu et al [26], which focuses exclusively on the utility of an emissions index for particles lodging in the tracheobronchial and alveolar regions and provides methodological details for calculating such an index.

Emissions Indices Calculation

The emissions indices were calculated from the measured emissions values, normalized to the range between the relevant safety standard and the lowest emissions level observed. For a dose metric for which no established safety standard is available, the highest value observed during the field measurements and/or the upper limit of the instrument was used for calculating the index. The index calculation is formulated, as described in Sahu et al [26], as

E_I = (C − C_LO) / (C_HI − C_LO),

where E_I is the emissions index corresponding to the dose metric selected; C is the surface area concentration (in the tracheobronchial or alveolar region) in μm^2/cm^3 for calculating the SA index at the TB and A regions; C_LO is the lowest concentration; and C_HI is taken as the highest concentration. The normalized index values range from 0 to 1, where "0" indicates the lowest emissions and "1" indicates the highest emissions. This indices approach was used for comparison of emissions levels from all the cookstoves studied in the field campaign. More details about the types of stove can be found in Sahu et al; we provide only a brief summary of the methodology for calculating the indices used in our analysis. The primary objective of this analysis is to use the emissions indices as outcome measures and examine the key social, household, livelihood, and other factors related to household emissions harmful to human health.
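The normalization above is a one-liner in code. The following sketch implements the index as reconstructed from the definitions; the five stove values are hypothetical, not measurements from the study.

```python
import numpy as np

def emissions_index(c, c_lo=None, c_hi=None):
    """Min-max normalized emissions index, E_I = (C - C_LO) / (C_HI - C_LO).

    A minimal sketch of the index described above: by default, C_LO and
    C_HI are the lowest and highest observed values, so the index runs
    from 0 (cleanest stove measured) to 1 (dirtiest). When a safety
    standard exists for the dose metric, pass it as c_hi instead.
    """
    c = np.asarray(c, dtype=float)
    c_lo = c.min() if c_lo is None else c_lo
    c_hi = c.max() if c_hi is None else c_hi
    return (c - c_lo) / (c_hi - c_lo)

# Hypothetical breathing-zone surface area concentrations for five stoves
# (illustrative values only, not the study's measurements).
sa_tb = [12.0, 35.0, 60.0, 88.0, 140.0]
print(emissions_index(sa_tb))   # -> [0.    0.18  0.375 0.594 1.   ]
```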
Measures and Statistical Analysis

Social and economic privilege is status, and the associated benefits to a household, that flow directly from caste and wealth. The effect of social and economic privilege on emissions operates through the type of stove or fuel a household uses. As households become socially advantaged and economically secure, they are able to afford stoves that are more efficient, and even cleaner fuels. In addition, privilege conferred by income or social caste increases household interaction with the outside world and the flow of new information, and may increase the chances of adopting a new stove. Therefore, we first examined the relationship between social and economic variables and the type of stove used, as well as the presence of additional ventilation in the household. We then tested the relationship between the stove used, additional ventilation, and the level of particulate concentration in the household. In doing so, we sought to understand the associations between social and economic privilege and household air pollution, which is harmful to the environment and human health. We used Stata Version 10 for our statistical analysis. We first fitted a multivariable logistic regression model using stepwise selection to examine associations between the social and economic privilege of a household and owning a replacement/modified cookstove, that is, a stove with a flue or chimney to vent smoke or a biogas unit that eliminates smoke. Traditional stoves used by households include biomass stoves (3-stone construction stoves or earthen chulhas without a chimney) and kerosene stoves. The predictors of owning an improved stove include the respondent's age, household caste, the quantity of land owned by the household in hectares, livestock ownership, the years of education of the household head, the quantity of common land available for fuelwood collection, whether the household perceives fuelwood scarcity, whether the household owns a television, and whether there is an all-weather road to the household's village, a proxy for market access. In this survey, the respondent was the household head. We hypothesized that older household heads, more constrained by social norms, may have a more difficult time adopting new, or modifying existing, cooking technology or ventilation. In addition, older household heads may be less influenced by media to shift to new technologies. The household caste variable was coded into four categories of caste privilege based on consultation with FES about local norms in the study villages. We relied on local experts' classification of castes and the privilege conferred by belonging to these categories: (1) highly privileged; (2) somewhat privileged; (3) under-privileged; and (4) extremely under-privileged. In our analysis, the highly privileged and somewhat privileged groups were collapsed into a single category, yielding a three-category variable. We hypothesized that privileged-caste households would be more likely to have a replacement stove and additional ventilation. Quantity of land owned was defined as hectares of irrigated and non-irrigated land owned by the household, a proxy for household wealth. We log-transformed the land-owned variable to obtain a more normal distribution. The livestock-ownership index weights small livestock (sheep and goats) at 0.1 and large livestock (cows, buffalo, pigs, and others) at 1.0, and is also a proxy for household wealth [23]. We hypothesized that households with more land and greater livestock wealth would be more likely to have a replacement stove and additional ventilation. The quantity of common land available to households for collecting fuelwood was used as a proxy for the availability of free or low-cost biomass fuel. Freely available fuelwood keeps the cost of continuing with a traditional stove low and is thus an important disincentive to adopting a replacement stove. Access to more common land equates to greater availability of free fuelwood; this variable was coded 1 if more than 400 hectares of common land was available and 0 if 400 hectares or less was available.
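As an illustration of how the covariates just described could be assembled, the sketch below constructs the livestock index, the log-transformed land variable, and the greater-than-400-ha commons indicator. The column names and example values are hypothetical; only the weights (0.1 for small livestock, 1.0 for large) and the 400-ha cutoff follow the text.

```python
import numpy as np
import pandas as pd

# Hypothetical household records (not the study's data).
hh = pd.DataFrame({
    "sheep_goats":   [4, 0, 12, 2],
    "large_animals": [1, 3, 0, 2],      # cows, buffalo, pigs, others
    "land_ha":       [0.5, 2.0, 0.1, 4.0],
    "commons_ha":    [520, 380, 610, 150],
})
# Livestock wealth index: small animals weighted 0.1, large animals 1.0.
hh["livestock_index"] = 0.1 * hh["sheep_goats"] + 1.0 * hh["large_animals"]
# Log-transform land owned to obtain a more normal distribution.
hh["log_land"] = np.log(hh["land_ha"])
# Commons access indicator: 1 if more than 400 ha of common land.
hh["commons_access"] = (hh["commons_ha"] > 400).astype(int)
print(hh[["livestock_index", "log_land", "commons_access"]])
```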
Along with the quantity of common land available for fuelwood collection, perception of fuelwood scarcity tracks the pressure households may feel to adopt a replacement stove or modify their own stove to improve combustion efficiency. We hypothesized that households with access to less common land (thereby less free fuelwood) and households perceiving greater fuelwood scarcity would be more likely to have a replacement stove. Television ownership tracks the degree to which households might be exposed to media or external information disseminating better cookstove technologies, which has been shown to impact household energy decisions [28]. We hypothesized that households owning a television would be more likely to have a replacement stove and have additional ventilation. All-weather roads allow easier and more frequent travel into and out of the household's village. They increase the penetration of markets, and also of government programs, and overall increase bi-directional interaction between households and outside influences [29-31]. Travelling out of the village may give household members easier access to urban and peri-urban centers where new cooking technologies are available. Further, if a village has an all-weather road, extension workers from NGOs and other groups disseminating emissions-reducing cooking technology and information will have an easier time reaching the village regularly, raising awareness about harmful stove emissions and the benefits of new stoves. Therefore, we hypothesized that households in a village with an all-weather road would be more likely to have a replacement stove and have additional ventilation. Previous studies have also shown that variation in indoor emissions could be due to home ventilation unrelated to the stove [32,33]. Therefore, we fitted a multivariable logistic regression model using stepwise selection to estimate the effect of socioeconomic privilege on the likelihood of having additional ventilation, that is, whether or not a household has additional ventilation in the kitchen or room where the stove is predominantly used. In this model we used the same predictors as in the model for having a replacement stove. We extended our logistic regression analyses by estimating the predicted probability of having a replacement/modified cookstove for specific categories of households classified by their socioeconomic privilege conferred by education, livestock wealth index, and access to free fuelwood from local common lands. The fit and performance of the logistic regression models were assessed using the likelihood ratio χ2 and the c statistic. We then compared, in an ordinary least squares (OLS) regression analysis, the relative impact of replacement/modified cookstoves and additional ventilation on the two types of particulate emissions indices from cooking that are harmful to the environment and respiratory health. The first index combines measures of the mass concentration (PM2.5) of particulate matter in cookstove emissions with the surface area concentration of particulate emissions deposited in the tracheobronchial (TB) region of the human lung (TB particle index).
The second index combines measures of the mass concentration (PM2.5) of cookstove emissions with the surface area concentration of particulate emissions deposited in the alveolar (A) region of the human lung (A particle index). For both indices, scores range between 0 and 1, and higher scores represent higher particulate emissions and greater potential harm to health. The model F-test and the model R^2 are used to assess the fit and performance of the OLS regression models.

Likelihood of Owning a Replacement/modified Cookstove

The likelihood of owning a replacement/modified cookstove increased with livestock wealth and decreased as a household had greater access to biomass from commons such as forests and other land. The logistic regression model predicting ownership of a replacement/modified cookstove fit the data well: likelihood ratio χ2(2, n = 100) = 18.6, p<0.0001; c-statistic = 0.73 (Table 2). The livestock index, a proxy for wealth, was significantly associated with owning a replacement/modified stove; the odds of owning a replacement/modified cookstove increased with wealth as measured by the number of livestock owned (OR = 1.10, 95% CI: 1.01, 1.21). The quantity of common land available for fuelwood collection was also significantly associated with owning a replacement/modified stove (OR = 0.20, 95% CI: 0.07, 0.85). The odds of a household owning a replacement/modified cookstove were 80% lower when a household had access to commons and therefore more fuelwood. Perceptions of fuelwood scarcity, access to all-weather roads, ownership of a TV, age of respondent, the number of school years of the household head, the quantity of land owned by the household, and caste of a household did not have a significant effect on owning a replacement/modified stove. We then used the SPost utilities for logistic regression in Stata [34] to compare the predicted probability of owning a replacement/modified stove for households that had access to greater than 400 ha of common land to collect fuelwood with that for households with 400 ha or less of common land, across various levels of education and wealth (see Figure 1). Two trends were clear in the predicted probabilities of owning a replacement/modified stove across different levels of livestock wealth and years of education. Among all households, as wealth and education increased, so did the predicted probability of owning a replacement/modified stove. Yet, even as this positive relationship of education and wealth with the likelihood of owning a replacement/modified stove held, households with greater access to common lands, and therefore more free fuelwood and less fuelwood scarcity, had a consistently lower predicted probability of owning a replacement/modified stove than households with less access to commons. This finding underscores how, irrespective of caste and wealth privileges, a household's access to free fuelwood has a bearing on the decision to replace traditional stoves with new stoves.

Likelihood of Additional Ventilation in the Kitchen

The logistic regression model predicting additional ventilation had a good fit to the data: likelihood ratio χ2(2, n = 100) = 23.0, p<0.0001; c statistic = 0.77 (Table 2). Years of education of the household head were significantly associated with having additional ventilation in the kitchen (OR = 1.14, 95% CI: 1.02, 1.27); the odds of a household having additional ventilation increased 14% for each additional year of education of the household head. Wealth, as measured by land owned, was also significantly associated with having additional ventilation: the odds of having additional ventilation increased with the amount of land owned (OR = 3.35, 95% CI: 1.74, 6.44).
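A compact sketch of the modeling strategy above, a multivariable logistic regression followed by predicted probabilities at chosen covariate profiles (analogous to the SPost-style predictions behind Figure 1), is shown below. It uses fabricated data and Python's statsmodels rather than the authors' Stata workflow; the coefficients and profiles are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated household data (the study's data are not public); `improved`
# is 1 for a replacement/modified stove, and the two predictors mirror
# those retained in Table 2.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "livestock_index": rng.gamma(2.0, 2.0, n),
    "commons_access": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.1 * df["livestock_index"] - 1.6 * df["commons_access"]
df["improved"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("improved ~ livestock_index + commons_access", df).fit(disp=0)
print(np.exp(model.params))        # odds ratios for each predictor

# Predicted probability of an improved stove for a fixed wealth level,
# with and without access to abundant commons fuelwood.
profile = pd.DataFrame({"livestock_index": [5.0, 5.0],
                        "commons_access": [0, 1]})
print(model.predict(profile))
```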
Effect of Owning a Replacement/modified Cookstove and Having Additional Ventilation on Household Emissions

Predictors of particle emissions from household cookstoves, our normative outcome measure of household emissions, were estimated using OLS regressions. The effects on household emissions of owning a replacement/modified cookstove and of additional household ventilation are shown in Table 3 (Models 1-3). Emissions were differentiated by their deposition in the tracheobronchial region (TB particle index) and alveolar region (A particle index). Additional ventilation in a home significantly reduced the TB particle index (Table 3, Models 1-3). Unadjusted, both having a replacement/modified stove and additional ventilation were significantly associated with lower TB particle index scores (Table 3, Models 1 and 2). Having a replacement/modified stove, however, was not significantly associated with the TB particle index after controlling for additional ventilation in the kitchen (Table 3, Model 3). Additional ventilation, after controlling for ownership of a replacement/modified stove, was associated with a 12% reduction in the TB particle index in a household. In the unadjusted regression (Table 3, Model 4), additional ventilation was significantly associated with our second emissions index, the A particle index. Multivariable regression analysis, however, indicated that neither ventilation nor ownership of a replacement/modified stove was significantly associated with this second particulate index when controlling for the other (Table 3, Model 6).

Discussion

Our results indicate that less wealthy households are exposed to higher concentrations of particulate emissions. Better-off households were more likely to have additional ventilation, and having additional ventilation effectively reduced particulate emissions concentrations. Even among these rural households, the well-to-do modified their homes to improve ventilation and potentially offset emissions from a poorly functioning stove over time. Notably, in this study, reductions in emissions were not observed from owning a replacement/modified cookstove; emissions were reduced by additional ventilation unrelated to the stove. One possibility is that the replacement biomass stoves with chimneys may not have reduced particulate emissions concentrations because of flues that functioned sub-optimally or maintenance requirements that were beyond the capability of households. FES teams, in their routine community visits, have observed several replacement stoves with malfunctioning flues. Informal interviews with such households by FES village teams indicate that household members were unable to perform regular maintenance, which resulted in broken flues. Perhaps the level of maintenance required for the stoves to work properly was beyond the capability of a household. Our findings resonate with research indicating that high maintenance requirements seem unreasonable to poor rural households, leading to disrepair of replacement stoves [32,35]. The stoves implemented by FES were, at the time of the study, approximately 30 months old, underscoring maintenance and support after installation as an important factor for consideration in improving household air quality.
More nuanced research is warranted to understand how rural households address household air quality, whether through better maintenance and proper functioning of stoves or through other measures such as improved ventilation. Another insight from this study that warrants greater attention in future work is the reduced propensity to shift to newer stoves when households have greater access to free fuelwood from the commons, as measured by the amount of commons available to a household. Ours is one of the few studies to examine the association between fuelwood availability and reduced likelihood of shifting to new stoves. While we are cautious about this conclusion given that our analysis is from cross-sectional data, the association between abundant fuelwood from commons and weak incentives to shift to new energy systems needs further study to test for a causal connection.

We conjecture two important pathways by which biomass access from commons plays a significant role in the uptake and sustained use of replacement cookstoves. First, the sheer availability of fuelwood, irrespective of quality, keeps the opportunity cost of continuing to use a traditional stove low, significantly weakening the incentive to shift to replacement cookstoves. Second, as the burden of fuelwood collection from the commons typically falls on women and children, it could be a significant barrier to shifting to newer, more efficient stoves: because households undervalue women's and children's labor, the perceived opportunity cost of collecting freely available biomass from the commons to meet daily energy needs remains low, and the potential economic and health gains from shifting to either a biogas stove or a cleaner-burning woodstove are therefore discounted by poor households. Social caste, age of household head, perceptions of fuelwood scarcity, and media and market access were all not significantly associated with having a replacement stove or additional ventilation.

Our analysis points to some important associations but is also limited. First, it is based on cross-sectional data, and we must caution against strict causal attributions. While such data may yield useful insights on an understudied issue, generalizability is lower than it would be with a much larger sample of households representing greater regional variation. While our narrow geographical focus controls for household socioeconomic and geographical factors that may differ across regions of India, a larger-n study exploring similar research questions, generalizable to larger geographical regions, could have a higher impact on future household air pollution reduction interventions. Second, data from a large randomized controlled trial directly comparing the effectiveness of replacement/modified cookstoves and ventilation to traditional stoves in reducing indoor emissions, isolating the effect of each intervention, would have much greater purchase in testing the impact of new stoves. Such randomized controlled studies are now being implemented in India, and the results from this analysis are useful in suggesting productive hypotheses for such studies to test. Third, models predicting exposure to cookstove emissions from socio-economic variables are under-developed in the scientific literature, and our models therefore likely suffer from a degree of inaccuracy.
More research including such variables will contribute to a better understanding of the predictors of exposure to cookstove emissions, refining theory, generating more parsimonious models, and yielding evidence that is useful to the design of household air pollution interventions in the field. Our results and analysis should be viewed in light of these limitations, but also as evidence for some robust associations that warrant pursuit in both large-sample household surveys and randomized controlled trials examining the sustainability of improved stoves in rural India. Our findings, while insightful, are associational in nature and therefore merit further research to establish causal pathways to improved household air quality and health outcomes. Finally, as we stated previously, the relationship between the emissions indices and actual health outcomes is not available in this study. However, the emissions indices are presently useful as indicators of health impact in as much as they: 1) indicate the scale of particle deposition in two areas of the human lung; and 2) can be associated with the PM2.5 and surface area concentrations reported in our previous work [26]. We have now designed a randomized controlled trial in rural India to examine the impact of improved stoves on respiratory health outcomes of the poor, and it will be a further test of the emission indices used in this analysis.

Despite these limitations, our analysis yields important lessons for the next generation of cookstove programs. While exposure to emissions from cookstoves is presently a rural reality, social and economic inequalities are an important dimension of exposure variation. Variations in exposure to household emissions are likely to increase with new efforts to disseminate clean fuel and stove technologies without regard to who is likely to accept and use them in a sustained manner. Our results resonate with the findings of a recent systematic review of cookstove studies that social and economic variables are important in the uptake of clean stoves and fuels. We also respond to their call to examine understudied contextual factors, such as social marginalization, market access, and abundance of fuelwood, that may put at risk the uptake and sustained use of newer stoves and cleaner fuels [2]. While our results do not show any association between market and media access and the uptake of modified stoves, there is an urgent need for a sharper focus on the barriers that the poorest rural households face in adopting and sustaining replacement cookstoves. Greater attention to inclusion of the poorest households is paramount, as our findings suggest that socio-economic and resource conditions are closely coupled with household behavior around stove adoption and use, which in turn determines exposure to household air pollution and subsequent health outcomes [36].
Analysis of deficiency of uridine monophosphate synthase syndrome and complex vertebral malformation in repeat breeding and anoestrous cattle using PCR-RFLP

In India, awareness of genetic disorders among farmers at the field level is very low. Reproductive efficiency is a critical component of a growing dairy sector, and reproductive insufficiency is a highly economically damaging problem today. Among all reproductive problems, anoestrus and repeat breeding are the ones most commonly seen in the field by veterinarians, so it is important to research and resolve these issues to avoid economic losses to farmers. Mutant genes may directly affect livestock reproductive performance, and it is therefore essential to screen animals to prevent an increase in disease prevalence. The present investigation examined cows with anoestrus and repeat breeding to identify the molecular genetic disorders Complex Vertebral Malformation (CVM) and Deficiency of Uridine-5-Monophosphate Synthase (DUMPS) by Polymerase Chain Reaction (PCR)-Restriction Fragment Length Polymorphism (RFLP). Blood from 50 anoestrous and 52 repeat breeding cows (102 in total) was collected from Maharashtra state, India. Deoxyribonucleic acid (DNA) was extracted, and PCR amplification of the SLC35A3 and UMPS genes was performed at the respective annealing temperatures, yielding bands of 287 and 108 bp, respectively. The PCR products were digested with the restriction enzymes Pst I and Ava I at 37 °C for 4 and 6 hours for CVM and DUMPS, respectively. The RFLP results for CVM and DUMPS showed bands of 264 and 23 bp, and of 53, 36, and 19 bp, respectively; due to its small size, the 19 bp band was not visible on the gel. These band patterns correspond to the wild-type genotype, and all selected animals showed normal results for CVM and DUMPS.

Introduction

As per the 20th livestock census (2019), the cattle population in India is 193.46 million (20th Livestock Census 2019, All India Report). Crossbred females account for 47 percent of the overall cattle population, while indigenous or non-descript females account for 98 percent. Milking animals account for 20 million of the total crossbred or exotic females, while dry animals account for only 5 million; among indigenous females, milking animals number 31 million and dry animals 16 million. Milk production from crossbred cattle is higher than that from indigenous cattle, at 27 and 21 percent, respectively (Basic Animal Husbandry Statistics, Government of India, 2017). In comparison to indigenous cattle breeds, Holstein Friesian crossbreds contribute more to milk production. The percentage of dry cows is only 2 percent in crossbreds and 15 percent in indigenous or non-descript cows. It is therefore all the more important to keep crossbred animals healthy and fit, which in turn supports the rural economy (Ramesha et al., 2017) [41]. Milk is a good source of minerals, protein, and vitamins and helps fulfil human nutritional requirements. In the Indian dairy sector today, reproductive efficiency is a critical component of a successful dairy operation, whereas reproductive insufficiency is one of the costliest concerns. Lactating animals are more susceptible to reproductive issues, which can have a substantial impact on the reproductive efficiency of a dairy herd (Ghavi, 2013) [42]. The most frequently occurring reproductive problems are repeat breeding, anoestrus, extended calving intervals, early embryonic loss, ovarian cysts, and retained placenta.
There are certain monogenic disorders that affect reproductive efficiency in cattle. In exotic cattle breeds, more than 50 genetic disorders have been identified (Debnath et al., 2016) [4]. Autosomal recessive genes, which impair bovine fertility, are responsible for most inherited problems in cattle. These include Factor XI Deficiency syndrome (FXID), Complex Vertebral Malformation (CVM), and Deficiency of Uridine Monophosphate Synthase syndrome (DUMPS).

CVM is a congenital autosomal recessive disorder that predominantly affects Holstein cattle. It is characterised by stillbirths, abortions, and preterm births (Khade et al., 2014) [13]. Malformations of the cervical and thoracic segments of the spinal column cause mild scoliosis, symmetric bilateral carpal joint contractions, and shortening of the neck and anterior limbs with medial rotation of the latter. Malformation of multiple vertebrae, mainly those at the cervico-thoracic junction, is a common feature (Agreholm et al., 2001) [37]. The causative mutation lies in the bovine solute carrier family 35 member 3 (SLC35A3) gene on chromosome 3 (Thomson et al., 2006) [34]: a G-to-T substitution at position 559 of SLC35A3. CVM causes intrauterine mortality throughout pregnancy, resulting in repeat breeding and involuntary culling of cows, both of which cause economic losses (Berglund et al., 2004) [3].

DUMPS refers to a deficiency of the uridine-5-monophosphate synthase enzyme in cattle. It is an inherited disease caused by a single point mutation (C→T) at codon 405 within exon 5. DUMPS causes early embryonic death upon uterine implantation and is characterised by lowered blood activity of the UMPS enzyme in Holstein cattle (Schwenger et al., 1993) [32]. DUMPS homozygous embryos do not survive to term and die early in pregnancy; approximately 40 days after conception, the embryos are aborted or reabsorbed, resulting in recurring breeding problems. Because the embryos are often reabsorbed during the first trimester of gestation, affected matings require more services and have longer calving intervals than normal, creating additional problems in dairy herds. Heterozygous carriers show a decrease of almost 50% in UMPS activity in kidney, spleen, muscle, and mammary gland. This leads to five to ten times higher concentrations of orotic acid in cow milk, which poses a risk to human consumers (Robinson, 1980) [28] and might predispose to fatty liver development. By analogy with a comparable human condition, affected animals would be expected to exhibit high perinatal morbidity and mortality (Robinson et al., 1983) [27].

These genetic disorders primarily affect the Holstein Friesian breed. They have been linked to embryonic deaths, abortions, and stillbirths, all of which have a negative impact on reproductive efficiency. In India, breeding bulls are commonly used for breeding without being screened for monogenic autosomal disorders. With the widespread application of artificial insemination (AI) and the national and international trading of semen and breeding bulls, genetic diseases can spread to a vast population in a short time. The development of simple and fast procedures for accurately diagnosing the mutations that cause genetic problems would help breeders identify carriers and carry out breeding programmes to reduce genetic defects in the dairy population. As a result, considering the financial losses and deadly effects of various illnesses in dairy cows,
the present investigation was undertaken to analyse animals with reproductive disorders for these conditions. The objective was to investigate the CVM and DUMPS monogenic disorders in repeat breeding and anoestrous cattle using PCR-RFLP.

Animal selection

In this study, the selected cows were not showing oestrus or not conceiving despite proper nutrition and management practices, and all had normal anatomical structure. A total of one hundred and two (102) animals were included, comprising Holstein Friesian (HF) crossbred, Dangi, Gaolao, and non-descript breeds. Of the animals selected, fifty-two (52) were repeat breeders and fifty (50) were anoestrous. Most of the animals came from well-established dairy farms, veterinary clinics, and farmers' fields in Maharashtra State.

Blood sample collection

Blood from the selected animals was collected with proper hygiene and precautions to avoid contamination. It was collected in vacutainers (Make: BD Vacutainer®) containing ethylene diamine tetra-acetate (EDTA) as anticoagulant. Each vacutainer was gently shaken to mix the blood with the EDTA and stored in a cold-chain-maintained transport box. After transport to the cytogenetic investigation laboratory, the vacutainers were cleaned externally with ethanol (70%) and the samples were stored in a refrigerator at 14 °C until extraction of deoxyribonucleic acid (DNA).

DNA extraction and evaluation

DNA was extracted by the phenol-chloroform (PCI) method as described by Sambrook and Russell (2001). DNA quantity and purity were checked by spectrophotometry and integrity by agarose gel electrophoresis to ensure an effective outcome. The DNA was stored in the refrigerator until required for Polymerase Chain Reaction-Restriction Fragment Length Polymorphism (PCR-RFLP) analysis.

a) Spectrophotometry

The DNA was checked for quantity and purity by ultraviolet (UV) spectrophotometry. For the PCR-RFLP analysis, DNA samples with an optical density (OD) OD260/OD280 ratio of 1.7 to 2.0 were used.

PCR amplification of SLC35A3 and UMPS

The PCR reactions were performed in 0.2 ml thin-walled PCR tubes with a final volume of 25 μl (Table 3). The master mix consisted of PCR super mix, forward primer, reverse primer, and distilled water; ~100 ng/μl template DNA was then added to each tube, bringing the final volume to 25 μl. The PCR tubes were placed in a preprogrammed thermal cycler (Mastercycler ep gradient, Eppendorf, Germany) following the PCR protocols (Table 4) for the two genetic disorders.

Restriction digestion of amplified PCR products

The PCR products of the SLC35A3 gene were digested for 4 hours, and those of the UMPS gene for 6 hours, in a 37 °C water bath with the respective restriction enzymes (Table 5).

Gel electrophoresis of digested PCR products

The digested PCR products were visualized on 2.5% and 5% agarose gels for the SLC35A3 and UMPS genes, respectively. Ethidium bromide was added to the agarose gel at 5 µl/100 ml. The 6X gel loading dye was used at 1.5 ml/15 ml of RFLP product, and electrophoresis was run at 90 V for 60 minutes in 1X TAE buffer. A step DNA ladder (range 50-3000 bp) was used as the molecular marker for SLC35A3 and an ultra-low range (ULR) ladder for UMPS, and the bands were visualized under UV light. Band sizes were determined by comparison with the molecular size marker.
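As an illustration of the PCR-RFLP logic used here, the expected fragment sizes follow directly from the amplicon length and the enzyme cut positions. In the sketch below, the cut positions are back-calculated from the fragment sizes reported in this paper (264 + 23 bp for SLC35A3/Pst I; 53 + 36 + 19 bp for UMPS/Ava I), not taken from the primary sequence, so they should be treated as illustrative assumptions.

```python
# Minimal sketch: expected RFLP fragment sizes from an amplicon length and
# restriction cut positions.

def rflp_fragments(amplicon_len, cut_positions):
    """Return fragment lengths produced by cutting at the given positions."""
    bounds = [0] + sorted(cut_positions) + [amplicon_len]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

print(rflp_fragments(287, [264]))     # [264, 23] -> wild-type SLC35A3 allele
print(rflp_fragments(287, []))        # [287]     -> uncut (mutant) allele
print(rflp_fragments(108, [53, 89]))  # [53, 36, 19] -> wild-type UMPS allele
```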
Results and discussion

The aim of this study was to screen animals with reproductive disorders such as repeat breeding and anoestrus, which obstruct production and reproduction and result in financial losses, for the monogenic disorders CVM and DUMPS. It has already been observed that a repeat breeder cow results in a $324 economic loss relative to a fertile cow, and the occurrence of factor XI deficiency (FXID) carriers among generally fertile and repeat breeder cows has been reported, along with possible economic losses from longer calving intervals and additional services (Akyuz et al., 2012) [2]. The CVM and DUMPS screens were carried out primarily to investigate the link between these disorders and reproductive problems with the help of molecular techniques, and the results are discussed in detail below.

Genomic DNA extraction and analysis

Gel electrophoresis was carried out to examine the genomic DNA (Plate 1). A Nanodrop was used to check DNA quantity, and DNA with an optical density (OD) OD260/OD280 ratio between 1.7 and 2.0 was used for further work.

Amplification of the SLC35A3 gene

The PCR products, analysed on a 1.7 percent agarose gel, revealed a single 287 bp band (Plate 2) under UV light. The same result was reported by Yathish et al. (2011) [35] and Khade et al. (2014) [13].

RFLP analysis of the SLC35A3 gene

The PCR products were digested with the Pst I restriction enzyme (RE) for 4 hours at 37 °C and visualised under UV light, revealing two bands of 264 bp and 23 bp (Plate 3); the 23 bp fragment was too small to be apparent on the gel. This result shows that the selected animals were normal, with no carriers of or animals affected by the SLC35A3 mutation.

Plate 3: RFLP of SLC35A3 gene (100 bp ladder)

Digestion of the SLC35A3 PCR product yields bands of 264 bp and 23 bp in wild-type animals; bands of 287, 264, and 23 bp in carrier animals; and only a single 287 bp fragment in homozygous recessive animals. The findings for CVM were therefore of the wild-type genotype, in agreement with Yathish et al. (2011) [35].

RFLP analysis of the UMPS gene

The UMPS PCR products were digested with the Ava I restriction enzyme at 37 °C for 6 hours, revealing the normal genotype with three bands of 53, 36, and 19 bp in all the animals (Plate 5); the 19 bp band is too short to be visible on the gel. In a carrier animal, four bands of 89, 53, 36, and 19 bp would appear in the RFLP product. DUMPS was first recorded in North America and Europe by Shanks and Robinson (1990) [33]. The results show that all animals carried the wild-type allele for DUMPS. The findings are in accordance with Kaminski et al. (2005) [11], who found no occurrence of DUMPS. In a study of 642 HF animals, Patel et al. (2006) [23] found no DUMPS. In Iranian HF bulls, Rahimi et al. (2006) [25], Rezaee et al. (2009) [26], and Eydivandi et al. (2011) [5] found no carriers for DUMPS. In Turkish HF cattle, Meydan et al. (2010) [17] and Oner et al. (2010) [20] found no carriers for DUMPS, and Karsli et al. (2011) likewise found none. Gaur et al. (2020) [40] also screened animals for the UMPS gene and found no carriers.
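The genotype calls above follow mechanically from the visible band patterns. A small sketch of that mapping is given below; the band sizes are taken from the text, and the treatment of "visible" bands reflects the 19 and 23 bp fragments running off the gel.

```python
# Minimal sketch: mapping observed RFLP band patterns to genotypes.

CVM_PATTERNS = {
    frozenset({264}): "wild type (264 + 23 bp; 23 bp not visible)",
    frozenset({287, 264}): "carrier (287 + 264 + 23 bp)",
    frozenset({287}): "affected (uncut 287 bp)",
}

DUMPS_PATTERNS = {
    frozenset({53, 36}): "wild type (53 + 36 + 19 bp; 19 bp not visible)",
    frozenset({89, 53, 36}): "carrier (89 + 53 + 36 + 19 bp)",
}

def call_genotype(visible_bands, patterns):
    return patterns.get(frozenset(visible_bands), "unrecognized pattern")

print(call_genotype([264], CVM_PATTERNS))       # wild type, as observed here
print(call_genotype([53, 36], DUMPS_PATTERNS))  # wild type, as observed here
```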
Conclusion

Genetic disorders play a significant role in the dairy industry. Nowadays, assisted reproductive techniques such as artificial insemination, embryo transfer, and in-vitro fertilisation are widely used for livestock development through selective breeding. Genetic diseases in cattle have a significant influence on the national livestock economy by affecting production and reproduction. A variety of monogenic recessive disorders are peculiar to the Holstein Friesian cattle breed and have been documented to cause significant economic losses. The primary cause of these problems in the population is the breeding bulls utilised; increased numbers of undetected carriers of cytogenetic abnormalities and recessive disorders could result in significant financial losses for the livestock business. Other potential sources of disease are superior-germplasm bull mothers and other breeding cows. Hence, this study screened 102 cows with reproductive disorders for these genetic disorders. The work can be summarized as follows:

- All animals were found normal for the CVM and DUMPS monogenic disorders. It is therefore possible that these diseases are not directly linked to reproductive problems in the population investigated.
- In affected animals, these genetic diseases are directly or indirectly linked to reproductive disorders in cattle, resulting in diminished fertility or subfertility.
- Screening of animals is important to avoid transfer of disease to normal animals, reduce risks to human health, and avoid losses to farmers.

Plate 5: RFLP of UMPS gene (ultra-low range ladder; 53 and 36 bp bands visible, 19 bp band not visible)
Table 1: Chemicals used for agarose gel electrophoresis
Table 2: Primers used for the Polymerase Chain Reaction
Table 3: PCR reaction component details
Table 4: PCR protocol for the SLC35A3 and UMPS genes
Table 6: Components used in digestion of the PCR products
Control of neurogenic competence in mammalian hypothalamic tanycytes

Loss of function of NFI transcription factors in hypothalamic tanycytes generates diverse subtypes of functional neurons.

INTRODUCTION

Hypothalamic tanycytes are radial glial cells that line the ventricular walls of the mediobasal third ventricle (1, 2). Tanycytes are subdivided into alpha1, alpha2, beta1, and beta2 subtypes based on dorsoventral position and marker gene expression and closely resemble neural progenitors in morphology and gene expression profile. Tanycytes have been reported to generate small numbers of neurons and glia in the postnatal period, although at much lower levels than in more extensively characterized sites of ongoing neurogenesis, such as the subventricular zone of the lateral ventricles or the subgranular zone of the dentate gyrus (3-6). While tanycyte-derived newborn neurons may play a role in regulating a range of behaviors (3, 7, 8), levels of postnatal tanycyte-derived neurogenesis are low and virtually undetectable in adulthood (9). As a result, little is known about the molecular identity or connectivity of tanycyte-derived neurons (TDNs) (6, 9). A better understanding of the gene regulatory networks that control neurogenic competence in hypothalamic tanycytes would both give insight into the function of TDNs and potentially identify new therapeutic approaches for modulation and repair of hypothalamic neural circuitry.

Studying retinal Müller glia, which closely resemble hypothalamic tanycytes in both morphology and gene expression, provides valuable insight into the neurogenic potential of tanycytes (3, 9, 10). Zebrafish Müller glia function as quiescent neural stem cells and are able to regenerate every major retinal cell type following injury (11). While mammalian Müller glia effectively lack neurogenic competence, in posthatch chick they retain a limited neurogenic competence that resembles that of mammalian tanycytes (12). Recent studies in the retina have identified the NFI family transcription factors Nfia/b/x as essential negative regulators of neurogenesis in both late-stage progenitor cells and mature mammalian Müller glia (13, 14). Moreover, as in the retina, NFI factors are expressed in late-stage hypothalamic neural progenitors (15), and Nfia is necessary for hypothalamic glia specification (16). These findings raise the possibility that NFI factors may actively repress proliferation and neurogenic competence in tanycytes.

We hypothesize here that suppression of NFI factor activity in tanycytes may enhance their proliferative and neurogenic capacity. To address this possibility, we selectively disrupted Nfia/b/x function in hypothalamic tanycytes of both juvenile and adult mice. We observed that early loss of NFI activity in hypothalamic tanycytes led to a robust induction of proliferation and neurogenesis, while Nfia/b/x disruption in adults led to lower levels of tanycyte-derived proliferation than are seen following neonatal loss of function. NFI loss of function activated both Shh and Wnt signaling in tanycytes, and this, in turn, stimulated proliferation and neurogenesis. These TDNs survive, mature, and migrate radially away from the ventricular zone (VZ), express molecular markers of diverse hypothalamic neuronal subtypes, fire action potentials, and receive synaptic inputs.
These findings demonstrate that hypothalamic tanycytes have a latent neurogenic competence that is actively suppressed by NFI family transcription factors and that can be modulated to induce the generation of multiple hypothalamic neuronal subtypes.

RESULTS

We induced Cre activity using daily intraperitoneal injections of 4-hydroxytamoxifen (4-OHT) between postnatal days 3 and 5 (P3 and P5) (Fig. 1D). At this point, neurogenesis in the mediobasal hypothalamus is low under baseline conditions (3, 9). Following 4-OHT treatment between P3 and P5, NFIA/B/X immunoreactivity is first reduced in the tanycyte layer beginning at P6, initially in the more ventral regions where Rax expression is strongest (fig. S1A). NFIA/B/X immunoreactivity is largely undetectable by P10, and Cre-dependent GFP expression was correspondingly induced (fig. S1A). 5-Bromo-2′-deoxyuridine (BrdU) incorporation and Ki67 labeling were seen beginning at P6 in dorsally located alpha tanycytes, with labeling spreading to beta1 tanycytes of the arcuate nucleus (ArcN) by P8 and beta2 tanycytes of the median eminence (ME) by P10 (fig. S1, A and B). At P12, Ki67 labeling was observed in the tanycyte layer immediately adjacent to the third ventricle lumen, and a small subset of GFP+ cells in the tanycyte layer closest to the hypothalamic parenchyma (HP) began to express neuronal markers (fig. S1C). By P17, Nfia/b/x expression was completely lost in the tanycytes, although it was preserved in Rax-negative ependymal cells, where GFP expression is not induced (fig. S1D), and tanycyte-derived GFP+ neuronal precursors in the VZ were actively amplified and had begun to migrate outward into the HP in Nfia/b/x-deficient mice (Fig. 1, E and F, and fig. S1D). In contrast, few migrating GFP+ cells were observed in Rax-CreER;CAG-lsl-Sun1-GFP controls (Fig. 1, E and F). At P45, a substantial increase in GFP+ cells expressing mature neuronal markers was observed in the parenchyma of the ArcN and dorsomedial hypothalamic (DMH) nuclei (Fig. 1, G and H).

Loss of function of Nfia/b/x induces tanycyte proliferation in older mice

Our previous studies showed that loss of function of Nfia/b/x in late-stage retinal progenitors robustly induces proliferation and neurogenesis under baseline conditions, while in adult Müller glia it induces limited levels of proliferation and neurogenesis, and only following neuronal injury (13, 14). This suggests that neurogenic competence in mature murine tanycytes could be lower than that seen in neonates. To test this, we conducted 4-OHT treatment at older ages. While we still observed a robust induction of proliferation and neurogenesis following treatment at P7, treatment was less effective at P10, and only very low levels were observed at P12, not significantly different from control mice (fig. S2, A to C). However, this low level of proliferation reflected a substantially reduced efficiency of 4-OHT-dependent disruption of Nfia/b/x, as confirmed by the largely intact pattern of immunoreactivity for NFIA/B/X in TKO mice (fig. S2D). To improve the efficiency of Nfia/b/x deletion and to study the effects of NFI loss of function in adult animals, we applied viral-mediated Cre delivery.
Intracerebroventricular injection of adeno-associated virus 1 (AAV1)-Cre-mCherry into both Nfia lox/lox;Nfib lox/lox;Nfix lox/lox;CAG-lsl-Sun1-GFP and CAG-lsl-Sun1-GFP control mice at P60 resulted in robust mCherry expression in ventricular hypothalamic cells within 2 weeks, along with Cre-dependent induction of Sun1-GFP expression in both control and Nfia/b/x-floxed conditional mice (fig. S3). We observed efficient loss of NFI expression in GFP+ tanycytes by P74 and coimmunolabeling with BrdU, delivered continuously by osmotic minipump during the 2 weeks (fig. S3C). To determine the specific induction of proliferation initiated from tanycytes, we administered EdU (5-ethynyl-2′-deoxyuridine) by once-daily intraperitoneal injection between P75 and P77 and analyzed the mice at P78 (Fig. 2A). We observed selective EdU incorporation into alpha tanycytes adjacent to the dorsal part of the ArcN (Fig. 2, C and D). Much lower levels of EdU incorporation were observed in ventrally located beta tanycytes, and no EdU labeling was observed in controls (Fig. 2, B and D). Although we observed a few GFP+ cells in the mediobasal HP, these cells were not labeled with EdU, and there was no difference in their number between control and TKO animals (Fig. 2E). We confirmed the specific and local induction of tanycyte proliferation by the increased endogenous expression of the proliferation marker Ki67 only in Nfia/b/x-deficient TKO mice (Fig. 2F and fig. S3C).

Single-cell RNA-seq and assay for transposase-accessible chromatin sequencing analysis identify gene regulatory networks controlling neurogenesis in tanycytes

To obtain a comprehensive picture of the cellular and molecular changes that occur during proliferation and neurogenesis, we conducted single-cell RNA sequencing (scRNA-seq) analysis of FACS-isolated GFP-positive tanycytes and tanycyte-derived cells from both control and Nfia/b/x-deficient mice. To do this, we induced Cre activity in tanycytes between P3 and P5 and harvested GFP+ cells at the P8, P17, and P45 time points. We profiled a total of >60,000 cells using the Chromium platform (10x Genomics), generated separate uniform manifold approximation and projection (UMAP) plots for control and Nfia/b/x-deficient tanycytes, and then aggregated the data obtained from all samples (Fig. 3A and fig. S4; a sketch of this kind of workflow appears below). In control mice, we could readily distinguish tanycyte subtypes (alpha1, alpha2, beta1, and beta2) based on previously characterized molecular markers (20, 21), and we also observed that tanycytes give rise to a range of other hypothalamic cell types (Fig. 3B). In controls at P8, we observe a small fraction of proliferative tanycytes, from which arise differentiation trajectories that give rise to astrocytes and ependymal cells, as well as small numbers of oligodendrocyte progenitor cells (OPCs) and neurons (Fig. 3, C and D; fig. S4; and table S2). At P17 and P45, however, very few proliferating tanycytes are observed, and evidence for ongoing generation of neurons and glia is lacking (Fig. 3, C and D, and fig. S4). In TKO mice, in contrast, a remarkably larger fraction of cells are proliferating tanycytes at all ages, along with clear evidence for ongoing neurogenesis (Fig. 3D and fig. S4).
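A minimal sketch of a clustering workflow of this kind, using the scanpy package, is shown below. It is an illustration only, not the authors' pipeline; the input file name, filtering thresholds, and all parameters are assumptions.

```python
# Minimal sketch: load a 10x Chromium matrix, cluster, and embed with UMAP.
import scanpy as sc

adata = sc.read_10x_h5("tko_p8_filtered_feature_bc_matrix.h5")  # hypothetical
adata.var_names_make_unique()

sc.pp.filter_cells(adata, min_genes=500)        # assumed QC threshold
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)

# Clusters can then be annotated from marker genes, e.g. Rax (tanycytes
# broadly) and subtype markers such as Pdzph1 (alpha2 tanycytes).
sc.pl.umap(adata, color=["Rax", "Pdzph1", "leiden"])
```

Per-sample objects processed this way can then be concatenated to produce the aggregated view described above.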
In addition, a substantial reduction in the relative fraction of nonneuronal cells, including astrocytes, OPCs, and ependymal cells, is observed in TKO mice, in line with previous studies reporting an essential role for NFI family genes in gliogenesis in other central nervous system regions (13, 22, 23).

[Displaced Fig. 1 legend: (A) Enrichment of the tanycyte-specific marker Rax and the neuronal marker Npy in GFP+ and GFP− cells, respectively (17) (FPKM, fragments per kilobase of transcript per million mapped reads). (B) Distribution of NFIA/B/X protein in Rax-GFP+ tanycytes, and schematic of the mouse lines used in this study. (C and D) Schematic of the genetic approach for simultaneous tanycyte-specific disruption of Nfia/b/x and reporter labeling of tanycytes and tanycyte-derived cells using tamoxifen-dependent activation of CreER. (E and F) Induction and quantification of proliferation and neurogenesis in the VZ and hypothalamic parenchyma (HP) at P17 (n = 3 to 5 mice); the Bregma position of the control section is slightly posterior to that of the mutant; 3V, third ventricle. (G to I) In NFI TKO mice at P45, the mature neuronal markers NeuN and neurofilament M (NF-M) are detected in TDNs migrating into the parenchyma of the arcuate nucleus (ArcN) and dorsomedial hypothalamus (DMH); NeuN+/GFP+ TDN numbers are substantially increased in the ArcN and DMH but comparable in the median eminence (ME) and ventromedial hypothalamus (VMH), while GFP+ tanycyte numbers are reduced and ectopic neurons appear in the VZ (n = 2 to 3 mice). Scale bars, 100 μm (B and E), 50 μm (G), and 25 μm (g).]

While controls show a higher fraction of tanycyte-derived astrocytes, a much higher fraction of GFP+ cells are neurons in TKO mice (Fig. 3D and table S2). Furthermore, the density and relative fraction of cells expressing alpha2 tanycyte markers are increased in TKO mice, consistent with previously published neurosphere and cell lineage analyses demonstrating a higher neurogenic competence for alpha2 tanycytes (Fig. 3, C and D, and table S2) (4).

To identify critical regulators of proliferative and neurogenic competence in tanycytes, we performed a differential gene expression analysis between control and TKO mice for each tanycyte subtype. This analysis uncovered differential expression of multiple extrinsic and intrinsic regulators of these processes, particularly in alpha2 tanycytes (Fig. 3E, fig. S5, and table S4). Control tanycytes selectively expressed many genes shared by mature, quiescent tanycytes and retinal Müller glia, whose expression is down-regulated following cell-specific deletion of Nfia/b/x (14). These include genes that are highly and selectively expressed in mature alpha2 tanycytes, such as Apoe and Kcnj10, the Notch pathway target Hes1, the Wnt inhibitor Frzb, and the transcription factors Klf6 and Bhlhe40. Nfia/b/x-deficient tanycytes, in contrast, up-regulated Shh, the Notch inhibitor Dlk1, the bone morphogenetic protein inhibitor Fst, the neurogenic factors Ascl1 and Sox4, and the Notch pathway target Hes5 (Fig. 3E).
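The per-subtype differential expression comparison just described can be sketched along these lines with scanpy. This is a minimal illustration under assumed inputs (the h5ad file name and the 'genotype' and 'subtype' obs columns are hypothetical), not the authors' actual code.

```python
# Minimal sketch: control-vs-TKO differential expression in alpha2 tanycytes.
import scanpy as sc

adata = sc.read_h5ad("tanycytes_aggregated.h5ad")  # hypothetical input

alpha2 = adata[adata.obs["subtype"] == "alpha2"].copy()
sc.tl.rank_genes_groups(
    alpha2, groupby="genotype", groups=["TKO"], reference="control",
    method="wilcoxon",
)
top = sc.get.rank_genes_groups_df(alpha2, group="TKO").head(20)
print(top[["names", "logfoldchanges", "pvals_adj"]])
```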
Alpha1 and beta1 tanycytes also showed reduced expression of Tgfb2 and Fgf18 (fig. S5), which were previously shown to be strongly expressed in these cells (21, 24). To validate these results, we conducted multiplexed single-molecule fluorescence in situ hybridization (smfISH; HiPlex, Advanced Cell Diagnostics Bio-Techne) and observed strong up-regulation of Shh in Pdzph1-positive alpha2 tanycytes at P45 in TKO mice (Fig. 3F).

To infer cell lineage relationships between specific tanycyte subtypes and tanycyte-derived neural progenitors, we conducted RNA velocity analysis (25) on the full aggregated scRNA-seq dataset (Fig. 3G). We observe that alpha2 tanycytes give rise to proliferating tanycytes, which, in turn, give rise to neural precursors following cell cycle exit (Fig. 3G, insets). Notably, astrocytes appear to arise directly from alpha1 and alpha2 tanycytes without going through a clear proliferative stage (Fig. 3, G and H).

We then used pseudo-time analysis to identify six major temporally dynamic patterns of gene expression during alpha2 tanycyte-derived neurogenesis (Fig. 3I and table S5). On the basis of the gene ontology (GO) terms enriched in each cluster (Fig. 3J), the transition from a quiescent to an actively proliferating state is associated with down-regulation of metabolic genes (Glul), ion channels (Kcnj10), transcription factors (Lhx2), and Notch pathway components (Notch1), all of which are expressed at high levels in mature tanycytes (3, 10). In addition, genes regulating ciliogenesis (Dnah7a and Cfap65) are rapidly down-regulated. Following up-regulation of genes controlling cell cycle progression and DNA replication (Cenpf and Mcm3) and cell cycle exit (Btg2), tanycyte-derived neural precursors up-regulate genes that control chromatin conformation (Phf3), RNA splicing (Sf3b1), and neurogenesis (Hes6). This is then followed by expression of transcription factors that control the specification of specific hypothalamic neuronal subtypes (Dlx1 and Lhx6) and of regulators of synaptogenesis (Erbb4), neurotransmitter biogenesis and reuptake (Gad1, Pdyn, and Slc32a1), neurotransmitter receptors (Grin1 and Gria2), and leptin signaling (Lepr).

To investigate changes in chromatin accessibility in TKO mice, we conducted single-cell assay for transposase-accessible chromatin sequencing (scATAC-seq) analysis of FACS-isolated GFP+ tanycytes and tanycyte-derived cells from both control and TKO mice at P8. UMAP analysis indicated that cell identity in both control and mutant samples could be readily assigned on the basis of gene expression data obtained from scRNA-seq (Fig. 4A). The overall distribution of cell types was much like that seen in the scRNA-seq data (Fig. 3A), with more proliferating tanycytes and TDNs observed among TKO cells than among controls (Fig. 4B). Accessibility of the consensus NFI motif was assessed in all cell types in Nfia/b/x-deficient mice (Fig. 4C), and reduced levels of bound transcription factor were observed at these sites by footprinting analysis (Fig. 4E), indicating that Nfia/b/x proteins are actively required to maintain accessible chromatin at a subset of their target sites. We observed 1333 chromatin regions with increased accessibility and 4564 regions with decreased accessibility in Nfia/b/x-deficient alpha2 tanycytes relative to controls (Fig. 4D).
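Returning briefly to the trajectory analyses earlier in this section, RNA velocity of the kind cited above (25) is commonly computed with a tool such as scVelo. The sketch below is a generic illustration under assumed inputs (the loom file name and parameter values are placeholders), not the authors' pipeline.

```python
# Minimal sketch: RNA velocity from spliced/unspliced counts with scVelo.
import scvelo as scv

adata = scv.read("tanycyte_lineage.loom", cache=True)  # hypothetical input
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)
scv.tl.velocity(adata)          # stochastic (steady-state) velocity model
scv.tl.velocity_graph(adata)
scv.pl.velocity_embedding_stream(adata, basis="umap")  # lineage flow field
```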
As expected, Hypergeometric Optimization of Motif EnRichment (HOMER) analysis indicated that open chromatin regions (OCRs) specific to controls were most highly enriched for consensus sites for NFI family members (Fig. 4D and table S6). In contrast, motifs for the Wnt effector Lef1 were enriched at OCRs specifically detected in Nfia/b/x-deficient alpha2 tanycytes. A subset of genes with altered expression in scRNA-seq showed altered accessibility at putative associated cis-regulatory sequences, although changes in gene expression and chromatin accessibility often diverged (tables S7 and S8). While putative regulatory elements of the down-regulated genes Aqp4, Hes1, and Fgf18 showed reduced accessibility, elements associated with other down-regulated genes, such as Tgfb2 and Sox8, showed increased accessibility. Likewise, while most up-regulated genes, such as Shh and Sox4, showed increased accessibility, some showed decreased accessibility. These divergent responses indicate that NFI genes control the expression of a large number of transcription factors in tanycytes, which appear to perform dual roles as both activators and repressors, as has previously been reported in other astroglial cell types (22, 26, 27).

To identify direct targets of NFI factors and to better clarify their function in regulating proliferation and neurogenesis in alpha2 tanycytes, we integrated scRNA-seq and scATAC-seq data from alpha2 tanycytes to find genes with both altered expression and altered accessibility at sites containing NFI consensus sequences, identifying 62 genes in total (Fig. 4F and table S9). These include the down-regulated genes Kcnj10 and Apoe and the Notch pathway effectors Hes1 and Hey2, as well as the up-regulated genes Shh and Sox4; these direct targets are enriched for genes controlling proliferation and neural development (Fig. 4G). Transcription of Nfia and Nfib is itself strongly activated by Nfia/b/x, consistent with findings in the retina (13, 14). We found NFI binding sites in peaks that are negatively correlated with the promoter of Shh, suggesting that NFI may directly repress Shh expression (Fig. 4H). Thus, NFI factors may act as both activators and repressors in alpha2 tanycytes, promoting quiescence while inhibiting proliferation and neurogenesis (Fig. 4I).

Shh and Wnt signaling regulate tanycyte proliferation and neurogenic competence

The increased expression of Shh and of Wnt regulators observed in Nfia/b/x-deficient alpha2 tanycytes (Figs. 3E and 5, A and B, and table S4) suggested that Shh and Wnt signaling might promote proliferation and/or neurogenesis in tanycytes. We observe substantially increased expression of Shh in both alpha2 and beta1 tanycytes (Fig. 3F and fig. S5) and more complex regulation of Wnt signaling modulators. We observe increased expression of Sulf1, which, by regulating the synthesis of heparan sulfate proteoglycans, typically enhances Wnt signaling (28). However, the broad-spectrum Wnt inhibitor Notum, recently shown to regulate quiescence in adult neural stem cells of the lateral ventricles (29), is up-regulated from P17, potentially acting in a cell-autonomous manner to counteract the effects of increased cellular levels of Wnt signaling (Fig. 5, A and B).
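The direct-target identification step above (Fig. 4F) amounts to intersecting differentially expressed genes with differentially accessible, NFI-motif-containing peaks. A minimal sketch of that intersection follows; all file and column names are hypothetical placeholders.

```python
# Minimal sketch: intersect scRNA-seq DE genes with scATAC-seq DA peaks that
# contain an NFI consensus motif, yielding candidate direct NFI targets.
import pandas as pd

de = pd.read_csv("alpha2_de_genes.csv")  # columns: gene, log2fc_rna, padj_rna
da = pd.read_csv("alpha2_da_peaks.csv")  # columns: peak, nearest_gene,
                                         #          log2fc_atac, has_nfi_motif

de_sig = de[de.padj_rna < 0.05]                           # assumed cutoff
da_nfi = da[(da.has_nfi_motif) & (da.log2fc_atac.abs() > 1)]  # assumed cutoff

targets = de_sig.merge(
    da_nfi, left_on="gene", right_on="nearest_gene", how="inner"
)
print(len(targets["gene"].unique()), "candidate direct NFI targets")
```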
To determine whether this increased Shh expression regulated tanycyte proliferation and/or tanycyte-derived neurogenesis, we administered the blood-brain barrier-permeable Shh antagonist cyclopamine by intraperitoneal injection to Nfia/b/x-deficient mice every 2 days from P8 to P16, in conjunction with daily intraperitoneal injections of BrdU from P12 to P16 (Fig. 5C). At these ages, Shh is highly expressed in tanycytes, and levels of proliferation and neurogenesis are high in Nfia/b/x-deficient alpha2 tanycytes. Cyclopamine administration resulted in a significant reduction in the numbers of both total GFP+ cells and GFP/NeuN double-positive neurons in the tanycytic layers of the VZ and in the HP compared to vehicle controls, while BrdU incorporation was only significantly altered in parenchymal neurons, indicating a stronger effect on tanycyte-derived neurogenesis than on self-renewing tanycyte proliferation (Fig. 5C).

To determine whether Notum-dependent inhibition of Wnt signaling restrains tanycyte proliferation and/or neurogenesis at later ages, we treated P45 Nfia/b/x-deficient mice with the blood-brain barrier-permeable Notum inhibitor ABC99 (30) once daily for 5 days, with EdU coadministered on the last 3 days. At this age, Notum expression is high, and levels of tanycyte proliferation are substantially reduced relative to the early postnatal period. ABC99 treatment led to a significant increase in proliferation in alpha2 tanycytes (Fig. 5D), indicating that activation of Wnt signaling stimulates tanycyte proliferation at later ages.

Nfia/b/x-deficient tanycytes give rise to a diverse range of hypothalamic neuronal subtypes

To investigate the identity of TDNs, we analyzed the neuronal subset of the scRNA-seq data obtained from both control and Nfia/b/x-deficient mice (Fig. 6A). A total of 582 neurons were obtained from controls, while 15,489 neurons were obtained from Nfia/b/x-deficient mice (table S2). The great majority of control TDNs were obtained at P8, while large numbers of TDNs were seen at all ages in Nfia/b/x-deficient mice. UMAP analysis revealed that both control and Nfia/b/x-deficient TDNs fell into two major clusters each of glutamatergic and γ-aminobutyric acid-releasing (GABAergic) neurons. RNA velocity analysis indicated three distinct major differentiation trajectories in both control and Nfia/b/x-deficient TDNs, giving rise to the two major glutamatergic clusters and to the GABAergic neurons (Fig. 6C). Fifty-three percent of TDNs were GABAergic, as determined by expression of Gad1, Gad2, and/or Slc32a1, while 30% were Slc17a6-positive glutamatergic neurons (Fig. 6B and tables S10 and S11); a classification of this kind is sketched below. The glutamatergic cluster Glu_1 was enriched for the transcription factors Nhlh2 and Neurod2, as well as markers of glutamatergic ventromedial hypothalamus (VMH) neurons, such as Nr5a1 and Cnr1, and the androgen receptor Ar, while Glu_2 was enriched for markers of glutamatergic DMH neurons, such as Ppp1r17, and ArcN markers, such as Chgb (Fig. 6B). GABAergic neurons expressed a diverse collection of molecular markers characteristic of neurons in the ArcN and DMH as well as the adjacent zona incerta (ZI), which regulates a broad range of internal behaviors including feeding, sleep, and defensive behaviors (31). GABA_1 was enriched for a subset of ZI- and DMH-enriched genes (Lhx6 and Pnoc), while GABA_2 was enriched for genes selectively expressed in ArcN neurons (Isl1), as well as for genes expressed in GABAergic neurons of both the ArcN and DMH (Cartpt, Npy, Sst, Gal, Trh, and Th).
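The neurotransmitter classification quoted above (53% GABAergic, 30% glutamatergic) can be sketched as a simple marker-based rule. The thresholds, file name, and exact gating below are assumptions for illustration, not the authors' procedure.

```python
# Minimal sketch: classify TDNs as GABAergic vs. glutamatergic from markers.
import numpy as np
import scanpy as sc

adata = sc.read_h5ad("tdn_neurons.h5ad")  # hypothetical input

def expressed(gene, min_counts=1):
    """Boolean mask of cells expressing `gene` above an assumed threshold."""
    x = adata[:, gene].X
    x = x.toarray().ravel() if hasattr(x, "toarray") else np.asarray(x).ravel()
    return x >= min_counts

gaba = expressed("Gad1") | expressed("Gad2") | expressed("Slc32a1")
glut = expressed("Slc17a6") & ~gaba
print(f"GABAergic: {gaba.mean():.1%}, glutamatergic: {glut.mean():.1%}")
```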
We used both multiplexed smfISH (Fig. 6, D and F) and immunohistochemistry (Fig. 6E and table S12) to confirm the expression of Th, Lhx6, and Gal in GFP+ Nfia/b/x-deficient tanycyte-derived GABAergic neurons in the DMH, as verified by Egfp/Slc32a1 (or Gad1) colabeling. Expression of Cartpt and Th was observed in GABAergic TDNs in both the ArcN and DMH (Fig. 6G and fig. S6A), and coexpression of Lhx6 and Th, as well as of Cartpt and Th, was observed in subsets of TDNs and non-TDNs. Small numbers of Nr5a1-expressing glutamatergic (Slc17a6-positive) neurons were detected in the dorsomedial VMH (Fig. 6H). No expression of markers specific to neurons of more anterior or posterior hypothalamic regions (e.g., Avp, Crh, Oxt, Pmch, Hcrt, and Vip) was detected (table S11). These data demonstrate that TDNs express molecular markers of multiple neuronal subtypes of the tuberal hypothalamus, including the ArcN and VMH.

To determine how closely the gene expression profile of TDNs more broadly resembles that of hypothalamic neurons in these regions, we used Linked Inference of Genomic Experimental Relationships (LIGER) analysis (32) to integrate our data with clustered scRNA-seq data from a previous study of the adult ArcN, in which a small number of VMH neurons was also profiled (fig. S6B and table S13) (21). Integration of these datasets using LIGER (33) showed that the tanycyte-derived GABA_1 and GABA_2 clusters overlapped with several different ArcN neuronal clusters, including clusters containing neurons that express Th, Ghrh, and/or Trh. In contrast, some subtypes of ArcN neurons were represented only sparsely or not at all among TDNs, while other TDNs appeared to correspond to cell types not found in the published scRNA-seq dataset. For instance, while some TDNs closely resembled Pomc-expressing ArcN neurons, the relative fraction of Pomc-positive TDNs was substantially lower than in the ArcN (LIGER cluster 4). Likewise, no TDNs mapped to LIGER cluster 7, which corresponded to Agrp-positive ArcN neurons, and no Agrp-positive TDNs were detected using smfISH (Fig. 6G and fig. S6, B and C). While few immature TDNs mapped to neurons in the mature ArcN scRNA-seq dataset, as expected, two clusters of mature glutamatergic neurons (LIGER clusters 8 and 9) and one cluster of GABAergic neurons (LIGER cluster 11) also showed little correspondence to ArcN neurons and appeared to correspond to DMH-like neurons based on their expression of DMH-enriched genes (e.g., Ppp1r17 and Lhx6).

At P45, 4.4% of TDNs in the GABA_2 cluster expressed the leptin receptor Lepr (Fig. 6B), despite Lepr being essentially undetectable in tanycytes themselves, as previously reported (17). We therefore tested whether TDNs were capable of responding to leptin signaling. P90 mice that had undergone an overnight fast were injected intraperitoneally with leptin (3 mg/kg) and euthanized after 45 min, providing sufficient time for leptin-responsive neurons throughout the hypothalamus to induce phosphorylated signal transducer and activator of transcription 3 (pStat3) immunoreactivity (34). We observed robust induction of pStat3 immunoreactivity in GFP+ TDNs under these conditions (Fig. 6I), with low levels of immunoreactivity under unstimulated conditions, as previously reported (34). A total of 42.3 (±2.5)% of parenchymal TDNs in the DMH and ArcN showed pStat3 induction under these conditions, along with 22.3 (±3.9)% of TDNs in the subventricular region (Fig. 6J), confirming that a subset of TDNs is leptin responsive.
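The LIGER integration above was performed with the LIGER package in R. An analogous label transfer onto a reference atlas can be sketched in Python with scanpy's ingest; the sketch below illustrates only the general idea, with hypothetical file and key names, and is not the method the authors used.

```python
# Minimal sketch: transfer atlas cluster labels onto TDNs via scanpy ingest.
# Assumes both AnnData objects are already normalized and log-transformed.
import scanpy as sc

ref = sc.read_h5ad("arcn_atlas.h5ad")   # reference atlas with 'cluster' labels
tdn = sc.read_h5ad("tdn_neurons.h5ad")  # tanycyte-derived neurons

# Restrict both datasets to shared genes before embedding.
shared = ref.var_names.intersection(tdn.var_names)
ref, tdn = ref[:, shared].copy(), tdn[:, shared].copy()

sc.pp.pca(ref)
sc.pp.neighbors(ref)
sc.tl.umap(ref)
sc.tl.ingest(tdn, ref, obs="cluster")  # map TDNs into the atlas embedding
print(tdn.obs["cluster"].value_counts())
```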
Neurons derived from Nfia/b/x-deficient tanycytes fire action potentials and receive synaptic inputs

Since our scRNA-seq analysis identified the majority of tanycyte-derived cells as neurons in Nfia/b/x-deficient mice, we next investigated whether these cells showed the electrophysiological properties of functional neurons. To characterize tanycyte-derived cells, we performed whole-cell patch-clamp recordings from GFP+ parenchymal cells in acute brain slices obtained from Nfia/b/x-deficient mice that had undergone 4-OHT treatment between P3 and P5. Biocytin filling of recorded tanycyte-derived cells revealed neuron-like morphology, typically with three to five major dendritic processes, similar to GFP− control neurons (Fig. 7A and fig. S7). We found that the majority of GFP+ tanycyte-derived cells in the HP fired action potentials in response to depolarizing current steps (95%; 40 of 42 cells from P15 to P97; Fig. 7, B and C). In contrast, GFP+ cells located in the tanycytic layer retained nonspiking, glia-like electrophysiological properties in Nfia/b/x-deficient mice (fig. S8). Like typical hypothalamic neurons (35), many TDNs fired spontaneous action potentials (sAPs) and exhibited relatively depolarized resting membrane potentials (fig. S9A). However, the proportion of TDNs showing sAPs was significantly lower than that of GFP− control neurons in young (P15 to P19) mice (fig. S9B). Nonetheless, TDNs showed more depolarized resting membrane potentials than control neurons in both young (P15 to P19) and adult (P86 to P97) mice (fig. S9B). In addition, the input resistance was significantly higher in TDNs than in GFP− control neurons in young (P15 to P19) mice, although input resistances were similar for tanycyte-derived and control neurons in adult (P86 to P97) mice (Fig. 7, D and E, and table S14). Together, these data indicate that, despite some differences from GFP− control neurons, almost all tanycyte-derived cells fired action potentials and shared the electrophysiological features of control neurons, indicating that they are neurons.

Since the great majority of tanycyte-derived cells appear to be functional neurons, we next asked whether their evoked action potential firing properties were similar to those of neighboring GFP− hypothalamic neurons. Although TDNs fired action potentials in response to depolarizing current steps, the average current-frequency curve of TDNs differed significantly from that of control neurons in both young and adult mice (Fig. 8, A and B). Although the number of action potentials elicited initially increased with age, TDNs were unable to reliably generate repetitive action potentials with larger current steps, in contrast to control neurons, leading to saturation of their current-frequency curves (Fig. 8B and fig. S9, C and D). These results suggest that TDNs may have a different ion channel composition from control neurons. This was confirmed by a direct comparison of the expression levels of voltage-gated ion channels, performed using scRNA-seq data obtained from P45 TDNs and the arcuate neuronal clusters found to match these TDNs through LIGER analysis (fig. S6).
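A current-frequency (F-I) curve of the kind compared above is computed by counting evoked spikes at each injected current step. The sketch below uses made-up spike counts purely to illustrate the saturating TDN profile described in the text; it is not the recorded data.

```python
# Minimal sketch: build and plot F-I curves from spike counts per current step.
import numpy as np
import matplotlib.pyplot as plt

current_steps_pA = np.arange(0, 220, 20)  # injected current steps (11 steps)
step_duration_s = 0.5                     # assumed duration of each step

# Hypothetical spike counts per step for one TDN and one GFP- control neuron.
spikes_tdn = np.array([0, 0, 1, 3, 5, 6, 7, 7, 7, 6, 6])      # saturates
spikes_ctrl = np.array([0, 1, 3, 6, 9, 12, 15, 18, 20, 22, 24])

plt.plot(current_steps_pA, spikes_tdn / step_duration_s, "o-", label="TDN")
plt.plot(current_steps_pA, spikes_ctrl / step_duration_s, "s-", label="control")
plt.xlabel("Injected current (pA)")
plt.ylabel("Firing rate (Hz)")
plt.legend()
plt.show()
```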
Consistent with a distinct ion channel composition, TDNs expressed lower levels of the pore-forming subunits of multiple voltage-gated potassium (Kcna2, Kcnc1, Kcnq2/3, and Kcnd2), sodium (Scn1a, Scn2b, Scn3a, and Scn8a), calcium (Cacna1b/e, Cacna2d1/2, Cacnb4, and Cacng4), and chloride (Clcn3) channels, as well as of the calcium-activated potassium channel Kcnma1 (table S15).

We next asked whether TDNs receive synaptic inputs from other neurons. We detected spontaneous postsynaptic currents (sPSCs) in 34 of 35 recorded TDNs (Fig. 8, C and D). However, the frequency of sPSCs in adult TDNs was significantly lower than in control neurons in adult mice (Fig. 8, C and D, and table S14), suggesting that the number of functional synapses received is nonetheless lower than for wild-type neurons. As our histological data suggest that TDNs undergo progressive radial migration away from the tanycyte layer into the HP as they mature, we asked whether there was any correlation between a TDN's distance from the tanycyte layer and the number of synaptic inputs it receives. We observed a positive correlation between a TDN's distance from the tanycyte layer and its sPSC frequency in both young and adult mice, suggesting that the further a TDN migrates from the tanycytic layer, the more functional synapses it receives from local neurons (Fig. 8E).

Having established that TDNs show neuronal electrophysiological properties, fire action potentials, and receive synaptic inputs, we next tested whether TDNs are activated in response to changes in internal state that modulate the activity of nearby hypothalamic neurons. We first investigated the response of TDNs in the DMH to heat stress, which is known to robustly modulate the activity of DMH neurons (36, 37). At P45, we observed that 16.0 ± 1.6% of parenchymal TDNs in the DMH of Nfia/b/x-deficient mice induced c-fos expression following 4 hours of exposure to 38°C ambient temperature, essentially equivalent to the proportion of GFP− control neurons activated (13.7 ± 0.3%) (Fig. 8, F and G).

DISCUSSION

This study demonstrates that tanycytes retain the ability to generate a broad range of different subtypes of hypothalamic neurons in the postnatal brain and that this latent ability is actively repressed by NFI family transcription factors. Induction of proliferative and neurogenic competence by selective loss of function of Nfia/b/x leads to the robust generation of hypothalamic neuronal precursors that undergo outward radial migration, mature, fire action potentials, and receive synaptic inputs. TDNs respond to dietary signals, such as leptin, and to heat stress. This implies that tanycyte-derived neurogenesis can potentially modulate a broad range of hypothalamically regulated physiological processes.

NFI factors have historically been studied primarily in the context of promoting astrocyte specification and differentiation (38, 39), and loss of function of Nfia/b/x disrupts the generation of tanycyte-derived astrocytes, ependymal cells, and oligodendrocyte progenitors and down-regulates tanycyte-enriched genes that are also expressed in astrocytes, such as Kcnj10 and Aqp4 (Fig. 3). Beyond their role in promoting gliogenesis, however, recent studies have shown that NFI factors confer late-stage temporal identity on retinal progenitors (13), allowing the generation of late-born bipolar neurons and Müller glia while decreasing proliferative and neurogenic competence.
Selective loss of function of Nfia/b/x in mature Müller glia likewise induces proliferation and generation of inner retinal neurons (14), although at lower levels than those seen following loss of function in retinal progenitors (13). As in the retina, Nfia/b/x genes are more strongly expressed in late-stage than in early-stage mediobasal hypothalamic progenitors (15), and adult tanycytes show substantially lower levels of proliferation following loss of function of Nfia/b/x than neonates do (Figs. 1 and 2). However, the levels of proliferation and neurogenesis seen in neonatal tanycytes are much greater than those seen in Müller glia, which likely reflects the fact that mammalian Müller glia proliferate only rarely and essentially lack neurogenic competence (11), while tanycytes retain limited neurogenic competence (9). This implies that NFI factors may be part of a common gene regulatory network that represses proliferation and neurogenic competence in radial glia of the postnatal forebrain and retina.

Using scRNA-seq and scATAC-seq analysis, we identified multiple genes that are strong candidate extrinsic and intrinsic regulators of proliferative and neurogenic competence in tanycytes. We observe that loss of function of Nfia/b/x effectively reverts tanycytes to a progenitor-like state. Both Shh (40, 41) and Wnt (42, 43) signaling are required for progenitor proliferation and neurogenesis in the embryonic tuberal hypothalamus, and our study strongly suggests that they play similar roles in Nfia/b/x-deficient tanycytes, although, in the absence of genetic analysis, off-target effects of the small-molecule ligands used for this analysis cannot be formally ruled out (44). Tanycyte-specific loss of function of Nfia/b/x both down-regulates Notch pathway components and up-regulates the Notch inhibitor Dlk1, while also down-regulating Tgfb2. Both Notch signaling and Tgfb2 promote quiescence and inhibit proliferation in retinal Müller glia and cortical astrocytes (45, 46) and likely play a similar role in tanycytes. Nfia/b/x-deficient tanycytes likewise down-regulate transcription factors that are required for the specification of astrocytes and Müller glia, including Sox8/9 (47, 48), while up-regulating neurogenic factors such as the basic helix-loop-helix protein Ascl1 and Sox4. Ascl1 is both required for the differentiation of VMH neurons (49) and sufficient to confer neurogenic competence on retinal Müller glia (50). NFI factors thus control the expression of a complex network of extrinsic and intrinsic factors that regulate neurogenic competence in tanycytes, and it may be possible to further stimulate tanycyte-derived neurogenesis by modulating select components of this network.

scRNA-seq analysis reveals that TDNs arise from Ascl1+ precursors and are heterogeneous, falling into several molecularly distinct clusters. The gene expression profiles of control and Nfia/b/x-deficient TDNs closely resemble one another, indicating that NFI factors are not obviously required for the differentiation of individual neuronal subtypes, in contrast to their role in the retina and cerebellum (13, 51). However, the number of TDNs in control samples drops markedly after P8, with few detected at P45 (Fig. 3). This age-dependent decline in the number of TDNs in controls is potentially in line with the high levels of cell death reported in postnatally generated hippocampal and olfactory bulb neurons (52).
However, this decline is insufficient to account for the drop in TDN numbers seen in the scRNA-seq data, which may instead result from the well-established difficulties in obtaining viable, dissociated mature neurons following FACS for whole-cell scRNA-seq analysis (53). TDNs are predominantly found in the DMH and ArcN, with much smaller numbers detected in the VMH and ME (Fig. 1H). They are mostly GABAergic and express molecular markers of DMH and ArcN neurons (Fig. 6, A and B), and substantial subsets closely match the scRNA-seq profiles of neuronal subtypes obtained from the ArcN and VMH (fig. S6) (21). They include neuronal subtypes that regulate feeding and sleep, subtypes that directly regulate pituitary function, and many others whose function has yet to be characterized. In light of findings that tanycyte-derived neurogenesis can be stimulated by dietary and hormonal signals and can potentially modulate body weight and activity levels (3,7), this raises the possibility that different internal states may trigger the generation and/or survival of functionally distinct tanycyte-derived neuronal subtypes, leading to long-term changes in both hypothalamic neural circuitry and physiological function. We observe that TDNs survive for months and show neuronal electrophysiological properties, with subsets showing c-fos induction in response to heat stress (Figs. 7 and 8). Older TDNs receive more synaptic inputs than younger TDNs, and their input resistance decreases to become equivalent to that of nearby GFP− neurons, demonstrating progressive maturation. However, the frequencies of sPSCs and the evoked action potentials of TDNs remain consistently lower than those of the GFP− control neurons. This may be an intrinsic property of TDNs, distinguishing them from preexisting local neurons. Alternatively, the excess TDNs generated from Nfia/b/x-deficient tanycytes may form synaptic connections less efficiently. Distinguishing these possibilities will require electrophysiological recording of TDNs from control animals, although the far smaller population of these cells in wild-type animals makes this experiment very challenging. While levels of tanycyte-derived neurogenesis are low under baseline conditions, adding small numbers of newly generated neurons to existing hypothalamic neural circuits may have important effects on hypothalamic-regulated behaviors and physiology. Furthermore, selective loss of specific hypothalamic neuronal subtypes, as well as accompanying metabolic and behavioral changes, is observed both in neurodegenerative disorders such as Alzheimer's disease and frontotemporal dementia (54) and under conditions such as obesity and type 2 diabetes (55). Identifying new means of enhancing neurogenic competence in mature tanycytes, while selectively promoting the differentiation of specific tanycyte-derived neuronal subtypes, may thus ultimately prove useful in generating long-term changes in hypothalamic physiology in a variety of therapeutic contexts.

Contact for reagent and resource sharing

Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, S.B. (sblack@jhmi.edu).

Animals

Rax-CreERT2 mice (The Jackson Laboratory, stock no. 025521), generated in the laboratory (18), were crossed with the Cre-inducible Sun1-GFP reporter (Sun1-sfGFP-Myc; The Jackson Laboratory, stock no. 021039, provided by J. Nathans) (19). Nfia lox/lox (13,14), Nfib lox/lox (56), and Nfix lox/lox (57) mice were used to generate the Nfia/b/x homozygous triple mutant mice previously described (13,14).
To generate tanycyte-specific loss-of-function mutants of the Nfia/b/x genes, the triple mutant mice were crossed to Rax-CreERT2;Sun1-GFP mice. To induce Cre recombination, these mice were intraperitoneally injected with 4-OHT (0.2 mg) dissolved in corn oil for three consecutive days, from P3 to P5. Mice were housed on a 14:10-hour light/dark cycle (lights on at 07:00 and lights off at 21:00) in a climate-controlled, pathogen-free facility. All experimental procedures were preapproved by the Institutional Animal Care and Use Committee of the Johns Hopkins University School of Medicine.

BrdU and EdU incorporation assay

To label proliferating cells, BrdU (Sigma-Aldrich, #B5002) was dissolved in saline solution and injected intraperitoneally (100 mg/kg of body weight) for five consecutive days on the dates indicated. For the AAV1-Cre-mCherry stereotaxic injection, an osmotic minipump (ALZET model 1002, #0004317) was filled with BrdU dissolved in artificial cerebrospinal fluid (aCSF) (Tocris Bioscience, #3525) and installed immediately into the hole remaining after the virus injection needle was removed. The 2-cm-long tube connecting the minipump and cannula was filled with aCSF, so that the actual infusion of BrdU (30 µg/day) started from the third day following implantation. To avoid potential toxic effects of long-term BrdU exposure, we used EdU (Thermo Fisher Scientific, #A10044) for quantitative studies of cell proliferation. For this purpose, we used a dose of EdU (50 µg/g) that has been previously validated for proliferation studies in the adult brain (58).

Tissue processing and immunohistochemistry

Mice were anesthetized with an intraperitoneal injection of tribromoethanol (Avertin), followed by transcardial perfusion with 2% paraformaldehyde (PFA) as previously described (60). Brains were dissected, postfixed in the same fixative, and prepared for cryopreservation in optimal cutting temperature (OCT) embedding compound. A series of 25-µm coronal sections were stored in antifreeze solution at −20°C until ready for immunostaining. After brief washing with 1× PBS to remove the antifreeze solution, sections were mounted on Superfrost Plus slides (Thermo Fisher Scientific) and dried at room temperature (RT) for 30 min before starting immunohistochemistry. To ensure adequate fixation for nuclear staining, sections were immersed in the fixative solution for 10 min at this point. Antigen retrieval was performed by incubating the slides with prewarmed sodium citrate buffer [10 mM sodium citrate (pH 6.0)] in an 80°C water bath for 30 min. For HuC/HuD (HuC/D) antibody staining, sections were also treated with 0.3% hydrogen peroxide to block endogenous peroxidase activity before the blocking step with 10% horse serum/0.1% Triton X-100 in 1× PBS for an hour. pSTAT3 staining required different pretreatments, as described before (17,60). To detect the overall expression of the three Nfi family members, mouse antibodies recognizing NFIA, NFIA/B, and NFIX were mixed and visualized with the same secondary antibody. After finishing the first round of immunostaining, the fluorescence signal was cross-linked by incubation in 2% PFA for 10 min, followed by either EdU staining using the Click-iT EdU detection kit conjugated with Alexa Fluor 647 (Thermo Fisher Scientific, #C10340) or BrdU antibody staining. For BrdU staining, freshly prepared 2 N HCl was spread on the slides and incubated at 37°C for 30 min in a humidified chamber.
Borate buffer (0.1 M, pH 8.5) was used for acid neutralization, incubating the slides for 10 min at RT. Antibodies used are listed in table S1. After counterstaining with 4′,6-diamidino-2-phenylindole (DAPI), the slides were coverslipped with VECTASHIELD antifade mounting medium (Vector Laboratories, #H-1200) and dried at RT for no more than 30 min. The slides were stored at 4°C and imaged within 2 days to achieve the best quality, using a Zeiss LSM 700 confocal microscope at the Microscope Facility at Johns Hopkins University School of Medicine.

RNAscope HiPlex assay

For the RNAscope HiPlex assay, P45 triple knockout and control male mice were euthanized by cervical dislocation, and their brains were dissected out. The brains were immediately immersed in 4% PFA in diethyl pyrocarbonate-treated 1× PBS and incubated overnight at 4°C. All other sample preparation procedures were performed as recommended in the manufacturer's instructions for OCT-embedded fresh-frozen tissue preparation. Fourteen-micrometer sections were cut on a cryostat and briefly washed with 1× PBS before mounting on Superfrost Plus slides (Thermo Fisher Scientific). The slides were dried at −20°C and stored at −80°C before use. The HiPlex assay was performed by following the manufacturer's instructions, using the probes listed in table S12. The sections were imaged on a Zeiss LSM 800 confocal microscope at the Multiphoton Imaging Core in the Department of Neuroscience at Johns Hopkins University School of Medicine.

Heat stimulation and leptin injection

P45 male mice were exposed to ambient heat (38°C) for 4 hours (61) by placing them in a prewarmed, light-controlled cabinet in the Rodent Metabolic Core Facility at the Center for Metabolism and Obesity Research of the Johns Hopkins University School of Medicine. During the procedure, mice were provided ad libitum access to water and food and carefully monitored. Transcardial perfusion with 4% PFA in 1× PBS was performed immediately after heat exposure. The dissected brains were processed as described above and used for c-fos immunostaining. Leptin injection was performed on P90 male mice that were fasted for 18 hours before treatment. Leptin (3 mg/kg of body weight; PeproTech, #450-31) dissolved in saline solution was intraperitoneally injected, and, 45 min later, transcardial perfusion was performed using 2% PFA as described above.

Cell counting and statistical analysis

All cell counts were performed blindly and manually by five independent observers using Fiji/ImageJ software. Five sections corresponding to −1.55, −1.67, −1.79, −1.91, and −2.15 mm from bregma were chosen among the serial sections for cell counting. Initially, cell numbers were normalized to the size (in millimeters) of the hypothalamic nuclei measured. Because Nfia/b/x-deficient animals did not show any obvious structural differences, we used absolute numbers in subsequent experiments. All values are expressed as means ± SEM. Comparisons were analyzed by two-tailed Student's t test using Microsoft Excel unless stated otherwise. P < 0.05 was considered statistically significant.

Cell preparation for scRNA-seq

P8, P17, and P45 TKO mutant mice and control Rax-CreER;CAG-lsl-Sun1-GFP mice were euthanized by cervical dislocation, and brains were dissected. One biological replicate of each time point and genotype was analyzed, with the exception of P45 TKO mice, for which two biological replicates were analyzed.
Two-millimeter-thick coronal slices including the hypothalamic protruding ME were collected using an adult mouse brain matrix (Kent Scientific). The mediobasal hypothalamic region was microdissected using a surgical scalpel, dampened in Hibernate-A medium supplemented with 0.5 mM GlutaMAX and 2% B-27 (HABG), and chopped with a razor blade. Brain tissues were transferred into a preequilibrated papain/deoxyribonuclease I mix (Worthington Papain Dissociation System, #LK003150) and incubated for 30 min at 37°C with frequent agitation using a fire-polished glass pipette. Dissociated cells were filtered through a 40-µm strainer and subjected to density gradient centrifugation to remove cell debris, as suggested in the manufacturer's protocol. Cells were resuspended in HABG medium, and GFP+ cells were isolated by FACS in the Bloomberg Flow Cytometry and Immunology Core at Johns Hopkins University. Cells were resuspended in ice-cold PBS containing 0.04% bovine serum albumin and ribonuclease inhibitor (0.5 U/µl), and 10,000 to 15,000 cells were loaded on a 10x Genomics Chromium Single Cell system (10x Genomics, Redwood City, CA) using the v3 chemistry, following the manufacturer's instructions. Libraries were sequenced on an Illumina NextSeq with ~200 million reads per library. Sequencing results were processed through the Cell Ranger 3.1 pipeline (10x Genomics, Redwood City, CA) with default parameters.

Single-cell ATAC-seq

scATAC-seq was performed using the 10x Genomics scATAC reagent V1 kit following the manufacturer's instructions. Briefly, FACS-sorted cells (~30,000 cells) were centrifuged at 300g for 5 min at 4°C. The cell pellet was resuspended in 100 µl of lysis buffer, mixed 10× by pipetting, and incubated on ice for 3 min. Wash buffer (1 ml) was added to the lysed cells, and cell nuclei were centrifuged at 500g for 5 min at 4°C. The nuclei pellet was resuspended in 250 µl of 1× nuclei buffer. Cell nuclei were then counted using trypan blue staining. Resuspended cell nuclei (10,000 to 15,000) were used for transposition and loaded into the 10x Genomics Chromium Single Cell system. Libraries were amplified with 10 polymerase chain reaction cycles and were sequenced on an Illumina NextSeq with ~200 million reads per library. Sequencing data were processed through the Cell Ranger ATAC 1.1.0 pipeline (10x Genomics) with default parameters.

scRNA-seq data preprocessing

Raw scRNA-seq data were processed with the Cell Ranger software (62) (version 3.1) for formatting reads, demultiplexing samples, genomic alignment, and generating the cell-by-gene count matrix. First, the "cellranger mkfastq" function was used to generate FASTQ files from BCL files. Second, the "cellranger count" function was used to process the FASTQ files for each library, using default parameters and the mm10 mouse reference index provided by 10x Genomics. Last, we obtained the cell-by-gene count matrix for each library and used this for all downstream analysis. Using the Seurat R package (63), we created Seurat objects for each sample from the cell-by-gene count matrix using the function "CreateSeuratObject" (min.cells = 3, min.features = 200). After visual inspection of the violin plot of total counts for each cell, we filtered out cells with nCount_RNA < 600 or nCount_RNA > 6000. Next, we calculated the fraction of mitochondrial genes for each cell and filtered out cells with a mitochondrial fraction of >8%. Last, we predicted multiplet artifacts and removed potential doublet cells using Scrublet (64) for each sample.
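As a rough illustration, this per-sample QC can be sketched in R with Seurat (the input path and the Scrublet barcode list below are illustrative assumptions, not the authors' actual file names; doublet detection itself runs separately in Python):

library(Seurat)

# Load the Cell Ranger count matrix for one sample (hypothetical path).
counts <- Read10X(data.dir = "cellranger/P8_Ctrl/filtered_feature_bc_matrix")
obj <- CreateSeuratObject(counts = counts, project = "P8_Ctrl",
                          min.cells = 3, min.features = 200)

# Fraction of mitochondrial counts per cell; mouse mitochondrial genes
# carry the "mt-" prefix in the mm10 annotation.
obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^mt-")

# Inspect the QC distributions, then apply the thresholds stated above.
VlnPlot(obj, features = c("nCount_RNA", "nFeature_RNA", "percent.mt"))
obj <- subset(obj, subset = nCount_RNA > 600 & nCount_RNA < 6000 & percent.mt < 8)

# Barcodes flagged as doublets by Scrublet can then be dropped, e.g.:
# obj <- obj[, setdiff(colnames(obj), scrublet_doublet_barcodes)]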
After these filtering steps, 6609 (P8 Ctrl), 6494 (P17 Ctrl), 2607 (P45 Ctrl), 12,930 (P8 TKO), 12,413 (P17 TKO), 7531 (P45 TKO, replicate 1), and 5886 (P45 TKO, replicate 2) cells were retained for downstream analysis.

scRNA-seq data analysis

Dimensional reduction, clustering, and visualization

To integrate the cells from different ages and genotypes, we aligned all cells for each sample and obtained an integrated Seurat object using the Seurat functions "FindIntegrationAnchors" and "IntegrateData" with the default parameters, respectively. Using the integrated data, we normalized, log-transformed, and scaled the count matrix using the functions "NormalizeData" and "ScaleData." We next selected the variable genes using the function "FindVariableFeatures" (selection.method = "mvp") and performed dimension reduction analysis with "RunPCA." To annotate individual cell types in the integrated dataset, we first clustered all the cells using the functions "FindNeighbors" and "FindClusters" with a resolution of 0.3 and the 1st to 30th dimensions. Then, we matched the clusters to major cell types using the expression of known cell marker genes. As a result, we identified the following nine major cell types: alpha1 tanycytes, alpha2 tanycytes, beta1 tanycytes, beta2 tanycytes, proliferating tanycytes, astrocytes, neurons, ependymal cells, and OPCs. To visualize the integrated data, we used the 1st to 30th dimensions to perform nonlinear dimension reduction and obtained UMAP coordinates with the function "RunUMAP." We further characterized molecularly distinct subtypes of TDNs in Fig. 5. First, we restricted our analysis to cells in the neuron cluster and from the following ages and genotypes: P8 Ctrl, P8 TKO, P17 TKO, and P45 TKO. P17 Ctrl and P45 Ctrl were excluded from this analysis due to the very small numbers of TDNs present in these datasets. Second, we integrated all the neurons from the different conditions using "RunHarmony" in the Harmony R package (65). Next, we used the 1st to 10th harmony dimensions to identify neuronal subclusters with a resolution of 0.5. Last, we aggregated the clusters into individual neuronal subtypes based on known neuron markers and RNA velocity results. To visualize TDNs, we used the 1st to 10th harmony dimensions to obtain UMAP coordinates with the function "RunUMAP."

Identification of differentially expressed genes

To identify markers for each cell type and differentially expressed genes (DEGs) between Ctrl and TKO samples, we used the Seurat functions "FindAllMarkers" and "FindMarkers" with the options min.pct = 0.2 or 0.1 and logfc.threshold = 0.25. We then retained differential genes with an adjusted P < 0.001.

RNA velocity analysis

To characterize cellular differentiation trajectories associated with tanycyte-derived neurogenesis, we used the scVelo software (66) to perform RNA velocity analysis by comparing levels of spliced and unspliced transcripts. Briefly, we converted the BAM files for each sample to loom files using a command line tool (25). We then combined these loom files and retained the cells that passed filtering in the previous step. Using scVelo, we normalized the spliced and unspliced matrices, filtered the genes, and selected the top 1500 variable genes with the functions "pp.normalize_per_cell," "pp.filter_genes_dispersion," and "pp.log1p."
Next, we performed a principal components analysis (PCA) and calculated the velocity vectors and velocity graph using the functions "pp.moments" (n_pcs = 35, n_neighbors = 50), "tl.recover_dynamics," "tl.velocity" (mode = "dynamical"), and "tl.velocity_graph." Last, we visualized the velocities on the previously calculated UMAP coordinates with the "pl.velocity_embedding_grid" function. We applied the same pipeline to analyze RNA velocity in differentiating TDNs.

Cell-cycle stage inference

The function "CellCycleScoring" in the Seurat package was used to calculate cell cycle phase scores (S score and G2-M score), with the G2-M and S phase marker genes obtained from Tirosh et al. (67).

Slingshot (68) was applied to infer differentiation trajectories from alpha2 tanycytes to neurons. To construct the trajectory, we included cells in the "alpha2 tanycytes," "proliferating tanycytes," and "neuron" clusters. We then ran Slingshot using the dimensionality reduction results (UMAP) identified previously. We set the alpha2 tanycytes cluster as the initial cluster to identify lineages with the functions "getLineages" and "getCurves" with default parameters. Last, we assigned cells to the lineages and calculated pseudotime values for each cell using the function "slingPseudotime." Monocle 2 (69) was applied to identify developmentally dynamic genes that are significantly altered along the trajectory. First, we converted the expression matrix to Monocle datasets with the function "newCellDataSet," then processed and normalized the Monocle datasets following the recommended Monocle pipeline, and lastly identified the DEGs using the "differentialGeneTest" function with the following criteria: q < 1 × 10^−10 and expressed cell number of >200.

Comparison between TDNs and mature hypothalamic neurons

To further explore the biological similarity between TDNs and the broader population of neurons in the mouse hypothalamus, we first used the scRNA-seq datasets for mature neurons in the hypothalamic ArcN provided by Campbell et al. (21), downloading the cell-by-gene matrix and the annotation file of the mature neuronal cell types from the Gene Expression Omnibus (GEO) database under the accession no. GSE93374. The LIGER (32) package was used to integrate the tanycyte-derived cells identified in the previous rounds of analysis with these mature hypothalamic neurons, using the default pipeline recommended in the LIGER guidelines (https://github.com/welch-lab/liger). After LIGER integration, we reclustered the integrated datasets and calculated new UMAP coordinates using the functions "FindNeighbors," "FindClusters," and "RunUMAP" with the following parameters: reduction = "iNMF" and dims = 1:30. Last, we made an alluvial plot to visualize the alignments between the tanycyte-derived neuronal subtypes (six subtypes), the LIGER clusters (14 clusters), and the mature arcuate neuronal cell types (34 subtypes). To compare gene expression between TDNs and mature neurons, we first reclustered the integrated datasets at a higher resolution (resulting in 22 clusters). To remove cell clusters unique to either TDNs or mature neurons, we kept only the 12 LIGER clusters (C2 to C5, C9, C11, C13, C16 to C18, C20, and C21) that contained both TDNs and mature neurons for downstream comparison. Second, we performed quantile normalization for each cell on the combined and normalized cell-by-gene matrix to reduce batch effects between TDNs and mature neurons.
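One way to implement this per-cell quantile normalization in R is via the preprocessCore package (an assumption on our part, not necessarily the authors' tool; expr is a hypothetical genes-by-cells log-expression matrix restricted to the 12 retained LIGER clusters):

library(preprocessCore)

# Quantile-normalizing the columns forces every cell onto a common
# expression distribution, damping residual batch differences between
# TDNs and mature neurons before the differential test.
expr <- as.matrix(expr)
expr_qn <- normalize.quantiles(expr, copy = TRUE)
dimnames(expr_qn) <- dimnames(expr)  # normalize.quantiles drops dimnames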
Last, we used a Wilcoxon rank sum test to identify the differential genes with the function "FindMarkers."

scATAC-seq data preprocessing

We processed the sequencing output data using the Cell Ranger ATAC software (v.1.0) for alignment, deduplication, and identification of transposase cut sites. First, the "cellranger-atac mkfastq" function was used to generate FASTQ files from BCL files. Second, the "cellranger-atac count" function was used to process the FASTQ files for each library, using default parameters and the mouse mm10 reference index provided by 10x Genomics (refdata-cellranger-atac-mm10-1.2.0). Last, we obtained the barcoded, aligned, and Tn5-corrected fragment file (fragments.tsv.gz) for each library and used these for downstream analysis.

scATAC-seq data analysis

Generating union peaks

We generated the cell-by-peak matrix for each sample using the same method as described in Satpathy et al. (70). First, we constructed 2.5-kb tiled windows across the mm10 genome using a local script. Next, a cell-by-window sparse matrix was computed by counting the Tn5 insertion overlaps for each cell; this matrix was then binarized and input to the Signac package (0.2.5) to create a Seurat object using "CreateSeuratObject." Second, we normalized the cell-by-window matrix by TF-IDF (term frequency-inverse document frequency) methods using "RunTFIDF" and ran a singular value decomposition (SVD) on the TF-IDF-normalized matrix with "RunSVD." We retained the 2nd to 30th dimensions and identified clusters using SNN (shared nearest neighbor) graph clustering with "FindClusters" at a resolution of 0.3. Third, to identify high-quality peaks for each cluster in each sample, we called peaks for each cluster using MACS2 (71) with the command "--shift -75 --extsize 150 --nomodel --call-summits --nolambda --keep-dup all -q 0.05." The peak summits were then extended by 250 base pairs (bp) on either side to a final width of 500 bp and filtered against the mm10 v2 blacklist regions (https://github.com/Boyle-Lab/Blacklist/blob/master/lists/mm10-blacklist.v2.bed.gz). The top 50,000 peaks for each cluster in each sample were kept, converted to GRanges, and merged into a union peak set with the function "reduce" in the GenomicRanges package. Last, we obtained 107,377 union peaks and generated a cell-by-peak sparse matrix for all these cells for downstream analysis.

Filter cells by transcription start site enrichment, unique fragments, and nucleosome banding

We calculated the transcription start site (TSS) enrichment, unique fragments, and nucleosome banding for each cell using the Signac package. The cell-by-peak sparse matrices were input to the "CreateSeuratObject" function to create a Seurat object with default parameters. Then, we filtered the cells using the following criteria: (i) number of unique nuclear fragments >1000, (ii) TSS enrichment score >2, (iii) nucleosome banding score <4, and (iv) blacklist ratio <0.05. As a result, 8948 (P8 Ctrl) and 13,337 (P8 TKO) cells were identified and used for downstream analysis.

Dimensional reduction, clustering, and visualization

The Harmony package was applied to integrate the scATAC-seq data from the control and Nfia/b/x TKO samples. First, we put the Seurat object created in the previous step into the Signac processing pipeline. We normalized and obtained a low-dimensional representation of the cell-by-peak matrix using the functions "FindTopFeatures," "RunTFIDF," and "RunSVD."
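In Signac, this normalization and dimensionality reduction condenses to a few calls (a sketch; atac stands for the Seurat object holding the binarized cell-by-peak assay):

library(Signac)
library(Seurat)

atac <- FindTopFeatures(atac, min.cutoff = "q0")  # rank peaks, keep all as features
atac <- RunTFIDF(atac)                            # TF-IDF normalization
atac <- RunSVD(atac)                              # SVD of the TF-IDF matrix (LSI)

# The first LSI component typically tracks sequencing depth, which is why
# the steps described next use dimensions 2 and above.
DepthCor(atac)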
Next, we integrated all the cells from both genotypes (control and TKO) using the "RunHarmony" function with the options dim.use = 2:50, group.by.vars = "condition", reduction = "lsi", and project.dim = FALSE. Third, we used the 2nd to 30th harmony dimensions to identify clusters at a resolution of 0.8 and used the same harmony dimensions to calculate the UMAP coordinates for visualization. To annotate the cell types for each cluster, we used the integration method in the Seurat package to transfer the previously annotated cell type labels from the scRNA-seq data to the scATAC-seq data. First, we estimated RNA expression levels from the scATAC-seq data using the function "CreateGeneActivityMatrix" with the mm10 genome build GTF file. Next, we found anchors between the scATAC-seq datasets (P8 Ctrl and P8 TKO) and the corresponding scRNA-seq datasets (P8 Ctrl and P8 TKO) using the function "FindTransferAnchors." Then, with the "TransferData" function, we obtained a matrix of cell type predictions and prediction scores for each cell in the scATAC-seq dataset. We further filtered out cells with a maximum prediction score of <0.5. Last, for each cluster, we calculated the number of cells for each predicted cell type and set its final annotation based on the cell type that was most highly represented in the cluster. Using this approach, we identified the following nine major cell types: alpha1 tanycytes, alpha2 tanycytes, beta1 tanycytes, beta2 tanycytes, proliferating tanycytes, astrocytes, neurons, ependymal cells, and OPCs.

Global NFI activity and footprint analysis

We inferred global NFI activity with the chromVAR R package (72). The raw cell-by-peak matrix from the total cells was used as input to chromVAR. The mm10 reference genome was used to correct GC bias. We used the mouse NFI motifs (including Nfia, Nfib, and Nfix) in the TransFac2018 database to generate the transcription factor z-score matrix. The z-score for each cell was used to visualize global NFI activity on the previously calculated UMAP coordinates. To analyze the NFI footprint in alpha2 tanycytes, we used the same methods described in Corces et al. (73). First, we used the NFI motifs and all accessible regions to predict the NFI binding sites with the function "matchMotifs" in the motifmatchr R package. Second, we calculated the Tn5 insertion bias around every NFI binding site. We generated the aggregated observed 6-bp hexamer table for the ±250-bp region around all motif centers and also calculated the aggregated expected 6-bp hexamer table from the mm10 genome. We then obtained the observed/expected (O/E) 6-bp hexamer table by dividing these two hexamer tables. Last, we calculated the observed Tn5 insertion signal at ±250 bp from the motif center and normalized the signal using the O/E 6-bp hexamer table to obtain the final Tn5 bias-corrected signal.

Differential peak analysis

To explore which ATAC regions are changed following Nfia/b/x loss of function, we applied the MAnorm algorithm (74) to perform differential peak analysis between control and Nfia/b/x TKO alpha2 tanycytes. First, we selected cells in the alpha2 tanycytes cluster and then separated these cells by genotype (control and Nfia/b/x TKO). Second, we aggregated the cells of the same genotype by summing the count signals for each peak, then created a new condition-by-peak count matrix and put it into the MAnorm pipeline. Last, we performed the MAnorm test and identified differential peaks with the cutoffs LOG_P > 25 and abs(M_value_rescaled) > 0.5.
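The pseudobulk aggregation feeding MAnorm can be sketched as follows (object names are hypothetical; the MAnorm test itself runs outside this snippet):

library(Matrix)

# peak_counts: raw peak-by-cell count matrix for alpha2 tanycytes;
# genotype: per-cell factor with levels "Ctrl" and "TKO".
stopifnot(ncol(peak_counts) == length(genotype))

# Sum the counts over all cells of each genotype, yielding the
# condition-by-peak matrix passed to the MAnorm pipeline.
bulk <- sapply(levels(genotype), function(g) {
  rowSums(peak_counts[, genotype == g, drop = FALSE])
})
condition_by_peak <- t(bulk)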
De novo motif enrichment analysis

HOMER software (75) was applied to identify motifs enriched in the differential ATAC regions between control and Nfia/b/x TKO alpha2 tanycytes. We analyzed the up-regulated peaks and down-regulated peaks separately using the HOMER function "findMotifsGenome.pl" with the default options, except the following: mm10, -size given, -mask.

Identification of genes directly regulated by Nfia/b/x

To further explore the biological function of NFI factors in alpha2 tanycytes, we developed a method to infer potential Nfia/b/x targets from the information in the scRNA-seq and scATAC-seq data. Our method included the following three steps:

1) Identification of Nfia/b/x-binding regions. In the motif enrichment analysis above, we found that NFI motifs are enriched in the down-regulated peaks (Nfia/b/x TKO versus control), so, in the first step, we aimed to identify which down-regulated peaks are bound by NFI factors. Using the NFI motif information in the TransFac2018 database, we first scanned for NFI motifs in the down-regulated peaks with the function "matchMotifs." Next, using the Tn5 insertion signal in P8 Ctrl alpha2 tanycytes, we calculated the footprint occupancy score (FOS) (76) for each predicted NFI binding region and filtered out regions with an FOS of <2. Last, we kept only the down-regulated peaks containing such NFI binding sites and used them for downstream analysis.

2) Identification of promoters associated with Nfia/b/x binding regions. To identify genes that are potentially regulated by these NFI binding regions, we used the Cicero algorithm (77) to identify distal element-promoter connections genome-wide. First, we converted the cell-by-peak sparse binary matrix into the Cicero pipeline with the functions "make_atac_cds," "detectGenes," and "estimateSizeFactors." Next, we created low-overlapping cell groups based on the k-nearest neighbors (kNN) in the UMAP space and aggregated signals for each cell group with the function "make_cicero_cds." We then calculated the correlation for each peak-peak pair using the function "run_cicero" with default parameters. Third, we annotated the peak pairs using "annotate_cds_by_site" with the mm10 GTF files. We kept the peak pairs meeting the following criteria: (i) one of the peaks overlapped a region within ±2 kb of a TSS, and (ii) one of the peaks contained at least one NFI binding motif. Last, we identified NFI-related distal element-promoter connections from the peak pairs whose coaccessibility score was >0.03 or <−0.03 and whose distance was <150 kb.

3) Inference of potential Nfia/b/x targets by integration with the scRNA-seq data. In this step, we aimed to integrate the NFI-related distal element-promoter connections and the differential genes following loss of function of Nfia/b/x to identify NFI target genes, as sketched below. First, we selected enhancer-promoter pairs from the distal element-promoter connections in step 2 with coaccessibility scores of >0.03. If the gene associated with the promoter in question was down-regulated following the loss of function of Nfia/b/x, we treated it as a potential Nfia/b/x target. Conversely, we selected silencer-promoter pairs with coaccessibility scores of <−0.03. If the promoter genes were up-regulated following the loss of function of Nfia/b/x, we also treated them as potential Nfia/b/x targets. Using this approach, we identified 62 NFI target genes.
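The logic of step 3 can be written compactly in R. The sketch below assumes four hypothetical helper objects: conns, Cicero's connection table (columns Peak1, Peak2, and coaccess, with the distance filter already applied); promoter_gene, a named vector mapping promoter-overlapping peaks to gene symbols; nfi_peaks, the NFI-bound down-regulated peaks from step 1; and deg, the TKO-versus-control DEG table with gene and avg_log2FC columns (names assumed, not the authors' code):

library(dplyr)

targets <- conns %>%
  filter(abs(coaccess) > 0.03) %>%
  # keep pairs linking a promoter-overlapping peak to an NFI-bound peak
  filter(Peak1 %in% names(promoter_gene), Peak2 %in% nfi_peaks) %>%
  mutate(gene = promoter_gene[Peak1]) %>%
  inner_join(deg, by = "gene") %>%
  # enhancer-like pairs: positive coaccessibility, gene down in TKO;
  # silencer-like pairs: negative coaccessibility, gene up in TKO
  filter((coaccess > 0.03 & avg_log2FC < 0) |
         (coaccess < -0.03 & avg_log2FC > 0)) %>%
  distinct(gene)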
GO term analysis

To understand the biological functions associated with genes dynamically expressed during the process of alpha2 tanycyte-derived neurogenesis, we applied the GOrilla algorithm (78) to identify enriched GO terms for each gene cluster using the default parameters (P = 0.001, ontology = "Process"). The output of GO terms from GOrilla was further processed by REVIGO (79) to remove redundant terms. This pipeline was also used to identify GO term enrichment in the NFI-regulated gene sets.

Brain slice preparation and cell type identification

To investigate the electrophysiological characteristics of tanycyte-derived and other hypothalamic neurons, acute brain slices were generated as previously described (35). TKO mice (P15 to P97, male) were anesthetized with isoflurane and decapitated, and the brains were rapidly removed and chilled in ice-cold sucrose solution containing 76 mM NaCl, 25 mM NaHCO3, 25 mM glucose, 75 mM sucrose, 2.5 mM KCl, 1.25 mM NaH2PO4, 0.5 mM CaCl2, and 7 mM MgSO4 (pH 7.3). Acute brain slices (300 µm) including the hypothalamus were prepared using a vibratome (VT 1200 S, Leica) and transferred to warm (32° to 35°C) sucrose solution for 30 min for recovery. The slices were then transferred to warm (32° to 34°C) aCSF composed of 125 mM NaCl, 26 mM NaHCO3, 2.5 mM KCl, 1.25 mM NaH2PO4, 1 mM MgSO4, 20 mM glucose, 2 mM CaCl2, 0.4 mM ascorbic acid, 2 mM pyruvic acid, and 4 mM L-(+)-lactic acid (pH 7.3, 315 mOsm) and allowed to cool to RT. All solutions were continuously bubbled with 95% O2/5% CO2. For whole-cell patch-clamp recordings, slices were transferred to a submersion chamber on an upright microscope [Zeiss Axio Examiner; objective lenses: 5×, 0.16 numerical aperture (NA) and 40×, 1.0 NA] fitted for infrared differential interference contrast and fluorescence microscopy. Slices were continuously superfused (2 to 4 ml/min) with warm, oxygenated aCSF (32° to 34°C). Hypothalamic areas and cells were identified under a digital camera (Sensicam QE, Cooke) using either transmitted light or green fluorescence. Tanycytes were identified as GFP+ cells located in the ependymal cell layer at the third ventricle. Tanycyte-derived cells were identified as GFP+ cells located in the HP but not in the ependymal cell layer. GFP− hypothalamic neurons in the HP, among which the sparse tanycyte-derived cells were intermingled, were targeted as control neurons.

Whole-cell recordings and analysis

Borosilicate glass pipettes (2 to 4 MΩ) were filled with an internal solution containing 2.7 mM KCl, 120 mM KMeSO4, 9 mM Hepes, 0.18 mM EGTA, 4 mM Mg-adenosine 5′-triphosphate, 0.3 mM Na-guanosine 5′-triphosphate, and 20 mM phosphocreatine(Na) (pH 7.3, 295 mOsm). Biocytin (0.25%, w/v) was added to the internal solution for post hoc morphological characterization. Whole-cell patch-clamp recordings were conducted through a MultiClamp 700B amplifier (Molecular Devices) and an ITC-18 (InstruTECH), which were controlled by customized routines written in Igor Pro (WaveMetrics). The series resistance averaged 14.2 ± 5.8 MΩ SD (n = 81 cells, 12 mice, all <36 MΩ; no significant difference between cell types or age groups; P > 0.05, Mann-Whitney U test) and was not compensated. The input resistance was determined by measuring the voltage change in response to a 1-s-long −100-pA hyperpolarizing current step. The current-spike frequency relationship was measured with a series of depolarizing current steps (1 s long, 0 to 50 pA, 10-pA increments, and 5-s interstimulus intervals).
Cells were held at −70 mV, and the current steps were applied from −70 mV for the current-spike frequency relationship test. For each current intensity, the total number of action potentials exceeding 0 mV generated during each step was measured and then averaged across the three trials. sPSCs were measured in voltage-clamp mode at −70 mV. sPSCs were recorded for 25 s (250-ms-long current traces, 100 times), and ~110 events, on average, were recorded per cell. High-amplitude, high-frequency depolarizing current steps (10 nA at 100 Hz for 100 ms) were injected into the recorded cells at the end of recording to increase the efficiency of biocytin infusion (80). All signals were low-pass filtered at 10 kHz and sampled at 20 kHz for voltage traces and at 100 kHz for series resistance and sPSC measurements.

Electrophysiology data analysis and statistical testing

Data analysis was performed in Igor Pro (WaveMetrics), Excel (Microsoft), ImageJ (National Institutes of Health), and Minhee Analysis (https://github.com/parkgilbong/Minhee_Analysis_Pack). Data are presented as means ± SEM unless otherwise noted. A Mann-Whitney U test was used to compare membrane properties and sPSC frequencies between cell types and between age groups. Spearman's rho test was used to determine the correlation between sPSC frequency and cell location. The location of the cells (distance to the tanycytic layer) was measured from low- (5×, 0.16 NA) and high-magnification (40×, 1.0 NA) images of the recorded cells using ImageJ. The statistical difference in current-spike frequency relationships was tested using a two-way analysis of variance (ANOVA) with Bonferroni correction. The sPSC events were automatically detected by the Minhee Analysis software with a 10-pA amplitude threshold. In the figures, statistical significance is expressed as follows: *P < 0.05, **P < 0.01, or ***P < 0.001.
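The statistical comparisons described here map directly onto base-R calls (a sketch with hypothetical vectors, not the authors' actual analysis scripts):

# Membrane properties and sPSC frequencies: TDNs vs. GFP-negative neurons.
wilcox.test(spsc_freq_tdn, spsc_freq_ctrl)               # Mann-Whitney U test

# Correlation between sPSC frequency and distance from the tanycytic layer.
cor.test(distance_um, spsc_freq_tdn, method = "spearman")

# Current-spike frequency relationship: two-way ANOVA on a long-format data
# frame df with columns spikes, type (cell type), and amp (current step).
summary(aov(spikes ~ type * amp, data = df))
pairwise.t.test(df$spikes, interaction(df$type, df$amp),
                p.adjust.method = "bonferroni")          # Bonferroni post hoc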
The Encounter with the Identical Other: The Literary Double as a Manifestation of Failure in Self-Constitution

Literary pieces featuring the double depict an encounter between the protagonist and another person, who is her identical other. The protagonists therefore face various difficulties related to the threat cast on their unique identity, and this encounter challenges their process of self-definition. Martin Buber sees the existence of the other as essential for the occurrence of self-constitution within an individual. He maintains that any person needs another person to obtain confirmation of what she is and is born equipped with the ability to confirm her fellow-person in the same way (1959). However, as the other encountered by a doppelgänger protagonist is not truly "other", the latter might confront a difficulty in the different stages of Buber's self-constitution process. This paper seeks to shed light on the inter- and intra-personal relationships depicted in literary pieces focusing on the theme of the double, such as The Double (Dostoyevsky [1846] 1997; Saramago 2002), Despair (Nabokov 1965), and Too Much Nina (Orbach 2011), emphasizing the limitations cast by the encounter with the identical other on the protagonist's self-constitution, as put forward by Buber.

Introduction

Doppelgänger literature deals with human identity and the fragile concept of the self by featuring protagonists whose self is radically challenged by the formation or appearance of another, identical character, or by the splitting of the self into several independent elements. This phenomenon has been extensively studied from a psychoanalytic perspective (Rank 1971; Rogers 1970; Tymms 1999; among others). This paper seeks to show that studying it from a philosophical perspective, adopting Buber's theory of self-constitution by way of genuine dialogue, can throw a different light on this phenomenon, thus deepening the discussion of this intriguing experience. In particular, it may reveal why this phenomenon involves so much distress among the protagonists and perhaps shed light on why it is often perceived as captivating yet disconcerting for readers of this corpus of literary works.

Although this paper discusses the dynamics taking place between fictional characters, it does so as if they were real human beings. The assumption underlying this liberty is that duplicity, or self-split, are emotional experiences not entirely foreign, if not quite common, to human beings. To some extent, the incompleteness of selfhood exists and is sensed by anyone, and the sense of inner unity generally experienced by human beings is rather often delusionary (Dennett 1992; Bromberg 1998). Reading the doppelgänger literature introduces us to extreme manifestations of this human phenomenon, thus allowing us to study it and analyze its significance more conveniently.

For Buber (1959, 1998a, 2002; Buber [1923] 1996), the existence of the other is essential for an individual's self-constitution to occur. He maintains that any person needs another person to obtain confirmation of what she is and is born equipped with the ability to confirm her fellow-person in the same way (Buber 1959). The course of self-constitution begins with positioning the other at a distance and continues with entering into a relationship with that other.
Literary pieces featuring the double depict an encounter between the protagonist and another person, who is his or her identical other. This is an "other" which is not truly "other". For this reason, the protagonists might confront a difficulty in one or both stages of Buber's self-constitution process. They might have difficulty positioning the other away from themselves. If they do manage to do so, they might enter a relationship with that other in a way which is not characterized by a "genuine encounter"; that is, they may not be able to make their other present as a subject in itself. Instead, they tend to perceive the double as their own reflection. As a result, the encounter with the other does not assist them in acquiring an independent existence.

Buber differentiates between two types of inter-personal relationships: 'I-it', characterizing the relation of a person with an object which serves her needs; and 'I-thou', where one positions oneself across from another person and both make each other present. The protagonists of doppelgänger works often develop an 'I-it' relationship with their doubles, as they tend to make use of the other for their needs, and hence naturally fail to enter a relationship aimed at making each other present. Neither party of the doppelgänger relationship views the other as possessing an independent existence, and rather than relating to the other as a particular person at a given moment, they see him as their own reflection. Therefore, they cannot be involved in a sincere "dialogue," as defined by Buber.

The positioning of the other at a distance means, according to Buber (1998a), the turning of the other into an independent opposition. This is a prerequisite for entering a relationship with the other. The basis for one's life with the other is double and single simultaneously: one's will to be confirmed in what she is, and even what she might be, on the one hand, and the innate ability to confirm the other in the same way, on the other hand.

The protagonists of doppelgänger literary works often fail in the basic movement needed to form a relationship with the other: they are not able to position the other at a distance, and instead merge with her or see her as their own reflection. Therefore, it is hardly surprising that most of them experience considerable difficulties in developing significant relationships with others. Moreover, not only do they not obtain confirmation of their existence as they are, but they also experience a constant threat and a deep turmoil as to who they are. Thus, for example, Herman, the protagonist of Despair (Nabokov [1965] 1989), completely fails in positioning Felix at a distance and cannot see the latter except as his own reflection, despite the differences between the two. The tumult he experiences shortly after he meets Felix for the first time is expressed by the limpness, dizziness, and terrible fatigue he feels.
In a different way, Tertuliano, Saramago's protagonist in The Double (Saramago 2002), does not keep Claro, the actor who is his double, at a distance at all. Rather, once he stumbles upon him accidentally while watching a video at home, he cannot help following his steps until he finds him, and he senses a critical threat to his authentic identity. In a way, Claro functions as an indication of what Tertuliano might be, as he asks himself what his life would look like if he were the actor. However, these thoughts do not provide the required confirmation of his existence, and the exceptional similarity between the two raises within him serious questions regarding the certainty of his independent existence. The act of speaking, which Buber (1959) sees as the most typical characteristic of Man's life with the other, is based on the principle of the independent otherness of the other with whom one enters a relationship. It starts with acknowledging the other's otherness and continues with turning to the other on the basis of this acknowledgement. In order to constitute an interpersonal life this way, one needs to experience "genuine" encounters, where one perceives one's partiality and finiteness, and hence one's need for the other's confirmation.

The protagonists of doppelgänger works frequently do not acknowledge the other's otherness, so their turning to this other does not enable a genuine encounter. Some do experience their partiality (e.g., the cloven viscount, Calvino's protagonist (Calvino [1952] 1998)). However, the presence of the other, who is not acknowledged as existing independently of the protagonist, does not provide confirmation of the existence of the latter. In fact, it deeply questions the very authentic existence of the protagonist. In other words, these protagonists are hardly capable of constituting inter-personal lives: they form an internal version of such a life with their doubles, which to a large extent remains intra-personal.

In an essay anthology titled The Knowledge of Man (Buber 1998a), Buber elaborates on the essence of a genuine encounter among people. Such an encounter culminates in an event named "making present": an individual stops perceiving the other as a component present in the world for his own service. Instead, one starts imagining what the other would ask for, feel, perceive, and think at a given moment. That is, one is capable of perceiving the living process in the other. Positioning the other at a distance and making her present positions the one involved in these actions at a distance as well, and makes him independent too. The first movement of human life, thus, positions people in mutual existence, which is fundamental and of equal status. However, the second movement places them within a mutual relationship, which does not necessarily possess an equal character. The attitude is realized in complete "making present," when one thinks of the other not as this particular one, but experiences the sense of belonging to her as this particular one at a given moment. Only in this way can the other acquire a self of his/her own.
The protagonists of the doppelgänger literature are usually incapable of making their other present as a subject in herself: they perceive the other as their own reflection and hence do not obtain an independent existence as a result of this encounter. For instance, Herman (Nabokov [1965] 1989) shows no interest in Felix's subjectivity but consistently seeks what use he would be able to make of the latter to fulfill his own needs. Thus he does not "make him present" and naturally does not satisfy the conditions required for the mutual existence of the two as equal counterparts. Nor is Dostoyevsky's Golyadkin (Dostoyevsky [1846] 1997) capable of seeing his double as present separately from himself. Though at times he feels inferior to his younger and more successful double, this momentary differentiation between himself and his double, which he seems to grasp only briefly, soon turns into full identification, or merging, with his other.

Doubleness as Failure to Establish a Dialogue

Buber emphasizes the importance of a genuine encounter and relates to standing face-to-face with the other by way of mutual exposure as "a dialogue" (Buber [1965] 2002). The major requirement for the emergence of a genuine dialogue, maintains Buber, is that each of the participants relate to her other as what that other indeed is. One becomes aware of one's other, of the latter's being essentially different, in a unique way. One has to accept whom she sees that way. Even by struggling or arguing with that other, one confirms the other as the very creature standing opposite, and opposed to, her.

Kramer (2003) understands Buber's "dialogue" as one involving open, straightforward, mutual, and present communication between people speaking spontaneously, without inhibiting themselves or promoting any agenda. He follows Buber's observations and mentions that Buber differentiates between three types of dialogue: "a genuine dialogue," whether spoken or silent, takes place when each of the participants indeed holds within herself the other as present in a particular way, and relates to that other in order to form a living mutual relation with him. Buber sees such a dialogue as a rare occasion.

Another type of dialogue, "a technical dialogue," occurs due to the urgent need for transient reciprocity, an objective understanding between people (as in cooperation between colleagues, or an accidental encounter between strangers where one asks the other for directions to a certain place). However, Buber allows a genuine dialogue to occur abruptly even in the middle of a technical one.

The third type of dialogue is, in fact, a monologue disguised as a dialogue. The latter involves people talking in cunning ways, so that a speaker forms a certain impression on other people, or a conversation between friends where emphasis is placed on the self more than on the interlocutor.

Following Buber, Anderson and Cissna (2012) stress that a dialogue should not be confused with "social activity." A Buberian dialogue is not reached through the maintenance of many friendships, nor does it necessitate much talking. They quote Buber saying, "The life of dialogue is not one in which you have much to do with men, but one in which you really have to do with those with whom you have to do" (Buber [1965] 2002, p. 20).
Anderson and Cissna attribute much significance to the spatial conceptualization of the Buberian dialogue, according to which the basic movement of a dialogic life is turning to the other. They claim that Buber refers to a situation where people turn to the other not only with their bodies, but also with their entire fundamental being, equipped with openness and reactiveness, as well as attentiveness to the other. Only under these conditions can a dialogic partnership thrive. It is important to note that, according to this conceptualization, a monologue is characterized not by withdrawal from the other, but by replication, a certain folding towards oneself, where instead of meeting the other in her particularity, the other exists only as part of the self. This situation turns the dialogue into a fiction, and hence the chances for a genuine inter-personal relationship significantly decline.

This observation suggests an interesting perspective on what occurs during the encounter between the protagonist and his or her double in doppelgänger works. First, it should be spelled out that defining the type of dialogue developing between the two is not at all simple: at times, it is clear that the encounter between the two is very instrumental in nature, as the protagonist wishes to benefit from the extreme similarity to the double. A case in point is Herman, Nabokov's protagonist (Nabokov [1965] 1989), who sees in Felix an opportunity to escape from his own life. Seemingly, he has no interest in Felix himself, only in the benefit the latter's existence offers. The relationship between Tertuliano and Claro, Saramago's protagonists (Saramago 2002), also has extremely practical aspects: it seems that each sees the other's life as an opportunity for a different destiny.

From a different angle, it is clear that a substantial part of the conversation taking place between the protagonist and the double exemplifies a monologue disguised as a dialogue: in a basic sense, this can be inferred from the very fact that rather than showing interest in the other, or in the one-time communication with the latter, the protagonists are all too often interested in talking about themselves. So does Golyadkin, for instance, who relates his life story to his younger double, and at some moments finds his double, more than anything else, a good and understanding listener (Dostoyevsky [1846] 1997). In a deeper sense, the two can only generate a monologue, as the double is not really a different person, but a reflection of the protagonist. Thus, the developing dialogue is an internal one, taking place between different representations of one single self, which clearly does not express a genuine dialogue.

However, in spite of what has just been argued, and in a somewhat paradoxical manner, these pseudo-encounters may after all consist of moments endowed with elements of a genuine encounter. Following the night Golyadkin spends in the company of his younger image, the narrator states, "With tears in his eyes Mr. Golyadkin embraced his companion, and, completely overcome by his feelings, he began to initiate his friend into some of his own secrets and private affairs [...]" (Dostoyevsky [1846] 1997, p. 54).
These words seem to indicate a genuine encounter, and only the later clarification, revealing that the younger man is but the protagonist's mental creation, presents this conversation as a monologue disguised as a dialogue. In other words, in retrospect, this intimate moment may be viewed as spurious, and forming a genuine dialogue between the two Golyadkins seems again like an impossible task.

Following Buber, Anderson and Cissna point to three problems hindering dialogic life: the intrusion of "seeming," the inadequacy of inter-personal perception, and the tendency to influence others by imposing one's opinion on them. The duality of being and seeming is "the essential problem of the sphere of the interhuman" (Buber [1965] 2002, p. 75), according to Buber. "Being" refers to what a person really is, whereas "seeming" refers to what he would like to reveal to others, or how one would like to seem to others. Clearly, no one lives solely in one of these states, but one state is often dominant. Buber believes that, in the inter-personal sphere, "truth" means that people communicate themselves to each other as what they really are. This does not necessitate telling each other everything, but it forbids any falsehood to come between them. For that purpose, the speaker must promise her interlocutor a piece of her being. Golyadkin, yearning for a significant relationship, does share with his young friend "pieces of his being," so much so that he feels completely bare before him. However, other doppelgänger protagonists tend to protect themselves from their double and present a false appearance, as the presence of the latter makes them experience a threat to their unique existence.

The second threat posed to dialogic life has to do with perceiving people as they are. For a genuine dialogue to develop, each of its participants should relate to the other as the latter really is. Being aware of the other in this way entails perception and understanding of the other as a whole person, out of complete willingness to imagine the "dynamic center" characterizing each utterance, action, and attitude of the other.

Buber names this willingness "inclusion" or "experiencing the other side" (Buber [1965] 2002). If the other does not react in a similar way, the dialogue might die quickly. Yet if reciprocity occurs, the inter-personal blooms into a genuine dialogue. In literary pieces featuring the double, most protagonists do not perceive the other as a whole person. In fact, they tend to perceive themselves as partial vis-à-vis the other. This is epitomized in works where the protagonists express a need for the existence of their identical other in order to sense their existential wholeness. In the Israeli novel Too Much Nina (Orbach 2011), Miki, the protagonist's closest friend, says that, knowing she had a twin sister from whom she had been separated at birth, Nina had been feeling like half a person her whole life. This sensation indeed accompanies Nina throughout the novel and does not seem to cease even when she finally meets Clara, her twin sister.
In a more extreme way, the two halves of the viscount, Calvino's protagonist (Calvino [1952] 1998), stand face to face when their existential partiality is most tangibly present. They also address and respond to others this way, as well as to their other half. According to Buberian criteria, this situation does not enable the establishment of any genuine dialogue, by definition. We can also infer that these circumstances do not allow any inter-personal sphere to come into being: whatever takes place occurs within an intra-personal sphere.

The third threat to dialogic life concerns two distinct ways of affecting another person: imposing and unfolding. Anderson and Cissna (2012) note that Buber associates imposing with propaganda initiated by a person devoid of any genuine interest in others, who is unaware of the real existence of the other and only wishes to impose her opinion on the latter. By contrast, the genuine educator acts in order to facilitate the development of forces existing within the other, which are about to materialize. Both the propagandist and the educator are interested in influencing the other, but unlike the propagandist, the educator takes the other into consideration, in her entirety and uniqueness, and wishes to help develop the latter's potential.

The approach held by the protagonists of doppelgänger works tends toward imposing rather than toward unfolding. Such an approach is exemplified in William Wilson's (Poe [1839] 1976) description of his classmate who bears his name. He feels that his double resembles him so closely in his abilities that the latter is his superior. The competition between Wilson and his double arouses within the protagonist hostility towards the latter. His opponent, who knows him intimately, is depicted as patronizing him. These manifestations reflect the unequal relations existing between the two.

In a slightly different way, one can observe Herman's tyrannical attitude towards Felix in Nabokov's Despair (Nabokov [1965] 1989). Herman describes Felix's appearance as repugnant, pities him, and gives him money in order to allow him the satisfaction of his basic needs. Thus, he cares for him in a patronizing way. If this leaves any doubt regarding the inability to form a genuine dialogue under these circumstances, then Herman himself, with self-directed humor, confesses to summoning Felix for a "monologue." In other words, he does not even pretend to engage in a dialogue with Felix.

Another significant element in Buber's perception of communication emphasized by Anderson and Cissna (2012) involves "confirmation." People practically confirm each other according to their personal qualities and capabilities. For Buber, a society is as humane as the extent to which its members confirm each other. Confirmation involves accepting and recognizing the other, both as the latter is and as what she might become. Confirmation may also involve a conflict, where a person sticks to her opinion and still tries to listen to the other real person with whom she converses. Two people may confirm each other by way of conflict as well.
Literary doubles often fail in confirming each other. As a matter of fact, their simultaneous presence threatens the existence of each as a unique individual. Even the explicit conflict between the two cannot calm them down as far as their authentic existence is concerned. This failure, along with the wish for confirmation, is central in several of the doppelgänger works. Following a night of intimate closeness with his double, Golyadkin meets the latter at work and depicts his feelings as follows:

Recognizing in a flash that he was ruined, in a sense annihilated, that he had disgraced himself and sullied his reputation, that he had been turned into ridicule and treated with contempt in the presence of spectators, that he had been treacherously insulted, by one whom he had looked on only the day before as his greatest and most trustworthy friend, that he had been put to utter confusion, Mr. Golyadkin senior rushed in pursuit of his enemy. (Dostoyevsky [1846] 1997, p. 65)

Saramago's novel The Double (Saramago 2002) also abounds in indications of Tertuliano's yearning for confirmation of his own existence. For instance, shortly before his encounter with Claro and their comparison of ages, the narrator mentions, "Tertuliano Máximo Afonso is troubled now by the possibility that he might be the younger of the two, that the other man might be the original and he nothing but a mere and, of course, devalued repetition." (Saramago 2002, p. 175).

Anderson and Cissna conclude by listing three characteristic signs of Buber's dialogic communication. First, Buber's communication philosophy encourages genuineness or authenticity, which does not entail complete exposure yet censures pretension, exclusive self-attention, and renunciation of the inter-personal sphere. Secondly, Buber emphasizes the importance of being aware of others as unique and whole people, an attitude that enhances turning to the other, attempting to imagine the other's reality, accepting the other as a partner, and hence confirming the other as a human being. Thirdly, in authentic relations a person can struggle decisively in order to influence the other while remaining open to being changed by others, so that this struggle never amounts to hurting the other. Dialogue is not perceived as a tool for manipulating the other, though persuasion often does take place in this sphere. A dialogue relies on respect and willingness held by both partners, enabling mutual reality and an ability for opening, rather than a coercion of ideas in a monologic manner. Anderson and Cissna conclude that, for Buber, human experience consists of both the quest for unity and the search for individuation.
As has been said, most of the doppelgänger protagonists do not consistently fulfill any of the signs of dialogic communication put forward by Buber. They are not able to perceive others as different from themselves and unique. For them, their doppelgänger others are but a variation of themselves. Therefore, they do not confirm the other's distinct existence, let alone receive confirmation from the other. The struggle they are immersed in often does not leave any space for difference between the two, as expected by Buber, a space which could have come back to them as confirmation of their own identity. Since they place their own images in front of themselves, images they are not willing to perceive as distinct, there exists no space for dialogue. Nevertheless, it is interesting to note that these protagonists too, and perhaps in an especially clear way, also embody the human experience of looking for unity with the other, verging on merging, side by side with the quest for individuation, differentiation, and uniqueness vis-à-vis the other.

In his book I and Thou (Buber [1923] 1996), Buber differentiates between two types of relation patterns: 'I-it', where a person is introduced to an object, with which she experiments and of which she makes use, on the one hand; and 'I-thou', where a person stands across from another person, whom the former makes present by her confirmation, and thus is also made present as an independent entity. According to Buber, while in the 'I-it' pair the 'I' appears as an individuality which becomes aware of itself as a subject experimenting and using the object, in the 'I-thou' pair the 'I' appears as a person and becomes aware of herself as a subjectivity. Individuality appears as being distinct from other individualities, whereas a person appears by entering a relationship with other persons. Thus, for Buber, personhood and individuality stand at two contrastive poles, where individuality shares no reality with the other but distinguishes itself from the other and seeks to appropriate to itself as much as possible by way of experience and use. This way it acts on an object rather than on another person (Wyschogrod 1967).

As a rule, protagonists of the doppelgänger works tend to develop an I-it relation with their double: it seems that rather than appearing as persons aware of themselves as subjectivities, they make use of the other for their own needs, avoiding any relation of mutual making-present. This has been illustrated regarding Herman in Nabokov's Despair (Nabokov [1965] 1989). As for Buber's distinction between an "individual" and a "person," it seems that these protagonists gain neither of these qualities: on the one hand, they are not clearly distinct from other individuals, but tend to merge with them. On the other hand, they do not form relations with other people as distinct from themselves. Indeed, rather than sharing their experiences with their others, they keep struggling for their distinct existence.

The process aimed at reciprocal making-present may also end in failure. Buber maintains that if a person does not attempt to achieve and actualize her essential "thou" with regard to what she encounters, then she withdraws inwards and develops in a space that does not offer her any real room for development. Hence, confrontation with what stands in front of her occurs within herself, and this cannot constitute a relation, or presence, or flowing interaction, but only self-contradiction.
Buber holds that the affinity to the other comes first. Hugo Bergman, in his preface to the Hebrew edition of I and Thou, notes that "a person becomes an 'I' with the help of the 'thou.' This thou goes out, and another one comes instead, and with the changes of these 'I's, the constant 'I' consolidates in its affinities, until the 'I' finds itself vis-à-vis itself as a separate 'I,' or the I-thou affinity breaks down and the 'I' leaves its 'thou' and stands vis-à-vis itself alone." (Buber 1959, p. 17, my translation). That last option is the one indicating failure in the double move towards subjectivity put forward by Buber. In the doppelgänger literature, one may point to dynamics similar to the one described by Buber. In some cases, the protagonist and the double do try to make each other present, motivated by the will to become independent. This is the case, for instance, with the encounter between Nina and her twin, Clara (Orbach 2011), where each twin seems to hold that by making her twin present she will perfect something missing in her own life and acquire a status of independent wholeness. Nevertheless, they finally do not succeed in realizing a parallel whole existence for each, and instead keep to their partial existence following their mutually decided role reversal.

By contrast, there are other cases where at least one of the parties relates to its other instrumentally, and does not see the other as "thou," but as a kind of "it." A case in point is Felix (Nabokov [1965] 1989), whose only role for Herman is to serve as a substitute in his fabricated suicide. Similarly, the narrator in "The Shadow" (Andersen [1847] 1946) makes instrumental use of his shadow in order to realize desires he cannot fulfill on his own. Later on in the story, the two switch roles, and the shadow, who has acquired his own identity by now, is the one to make use of the author, whose very projection he used to be.

At times, the attempt at reciprocal making-present fails, and at least one of the figures withdraws into itself, into self-contradiction, and in fact does not maintain any "relation," i.e., does not move forward to the second movement in human life. Dostoyevsky's Golyadkin illustrates the inability of the individual experiencing doubleness to turn out to the other and make him present. Indeed, he ends up completely withdrawn from others (Dostoyevsky [1846] 1997).

Charme (1977) notes that I-thou relations are discussed by Buber across two parallel yet contradictory levels: the epistemological and the ethical. At the epistemological level, Buber deals with a person's relation with God and the modes in which she can get to know God. Buber aims to show that one's relation to God is personal, as one can only know God the way one knows the other, where God is perceived as the absolute other. As we cannot possibly validate our knowledge about God objectively, this knowledge, like the knowledge about any Thou, is of a unique nature: it is intuitively self-validated, undefinable, momentary, and devoid of any content. Charme describes it as mystical knowledge, which negates any ordinary way of knowing about a person or an object. The 'I-thou' encounter is thus immediate and timeless. For the 'I', the Thou is not a specific entity in space and time; rather, the 'I-thou' relations take place in a present devoid of time and space.
In such an encounter, a person meets a Thou who is a unique indivisible unity. The encounter takes place with the object-as-is, about whom there is no specific knowledge or sensory perception. No use is made of cognitive categories for the purpose of knowing this Thou. Though this is an encounter devoid of experience, sensory data, and content regarding the Thou, the individual does obtain an absolute kind of knowledge, which is undoubtable despite its being incomprehensible. For this reason, it cannot be mistaken either.

Buber identifies "experience" with the realm of the "it," as this is the way the world is mediated to us by means of one's senses and intellect. Buber finds experience to be superficial, since it deals only with appearance (or "seeming") and misses the real being of people and things. For him, experience is but an accumulation of information that does not genuinely involve the world, because it arises within a person rather than between one and the world.

According to Charme, a relation that is the result of the unmediated and mystical encounter between the 'I' and the 'Thou' does not necessitate any sounds, gestures, or words. A sudden silent communication may occur between two people who do not talk to each other, look at each other, or know anything about one another. Charme notes that this type of encounter, occurring beyond actual encounter between people, might pose difficulties to keeping the individuality of the 'I' and the Thou and the distinction between the self and the other.

This perspective regarding Buber's 'I-Thou' concept is particularly interesting in the context of the doppelgänger works, as the encounter between the protagonist and her double is often mystical and frequently experienced supersensually. This is apparent in Saramago's The Double (Saramago 2002): having watched the video where he sees his cinematic double for the first time, Tertuliano goes to bed. Soon after, he wakes up feeling that another person is present at his home: "The sense of another presence that had woken him up grew slightly stronger." (Saramago 2002, p. 14). Ricardo Reis's encounters with the late poet Pessoa are also irregular, having nothing to do with actual presence or ordinary sensual and cognitive recognition (Saramago [1984] 1999). Their first encounter, for example, is described as follows: "[ . . .] there was a man sitting on the sofa. He recognized him at once, though they hadn't seen each other for many years. Nor did he think it strange Fernando Pessoa should be sitting there waiting for him." (Saramago [1984] 1999, pp. 64-65). Lest it seem that the protagonist only imagines Pessoa's visit, a few days later the narrator mentions that "Ricardo Reis did not ask himself the obvious question, Could it have been a dream, he knew that Fernando Pessoa, with enough flesh and bone to embrace and be embraced, had been in this very room [ . . .]" (Saramago [1984] 1999, p. 69). Pessoa's presence is therefore undoubtable, though it is not perceived via regular sensory means.

Indeed, as Charme cautions, such an encounter poses difficulties in keeping one's individuality for both partners involved in doubleness, as well as in the distinction between the two. Tertuliano, for instance, is cast into uncertainty regarding his identity from the very moment he encounters Claro for the first time (Saramago 2002). Similarly, Reis attempts to "clothe his own portrait with a new substance, to be able to raise his hands to his face and recognize himself
.]" (Saramago [1984] 1999, p. 69), following the encounter with Pessoa. At the ethical level, Buber advocates the 'I-Thou' relations as the appropriate way of treating other people, out of respect for any individual's wholeness and uniqueness.The main prerequisite for a genuine dialogic situation is treating the other as she is and becoming aware of her being essentially different to me, in an absolute and unique way, sincerely accepting this different being.Charme emphasizes that Buber strongly urges people to avoid the use of another person, such that no In other words, the doppelgänger narratives are frequently devoid of all four characteristics of the 'I-Thou' relation, a fundamental position which cannot yield any genuine Buberian dialogue between the two. The Lack of the Between Sphere as the Failure Underlying Doubleness In his essay titled "The Word that Is Spoken" (Buber 1998b), Buber emphasizes the importance of spoken language, which does not want to remain with the speaker but "reaches out toward a hearer [ . . .] it lays hold of him [ . . .and] even makes the hearer into a speaker" (Buber 1998b, p. 354).Buber clarifies that the place of occurrence of language is not the total sum of the participants involved in the dialogue, but the space between them ("the between").This space always leaves traces in the discourse partners.The dialogue, according to Buber, consists exactly in that which does not belong to any of the participants, nor does it pertain to the choice of words exchanged between them, but to "spokenness" itself.Friedman (1976) stresses that Buber differentiates in his essay between truth regarding the reality, which has been perceived and now is expressed, truth regarding the person one turns to and makes-present, and truth regarding the speaker's factual existence.This human truth, he claims, opens itself only in the presence of that individual as this concrete individual, who responds genuinely to the word spoken by the former individual. This description of the essence of dialogueness may clarify the failure underlying doubleness in its literary manifestation: the doubles do not view each other as possessing an independent existence, and rather than treating the other as a particular person at a given moment, they see each other as their own image and do not become involved in a genuine dialogue with each other.This stems from the fact that no in-between sphere emerges between the two, since their merging does not allow the emergence of such a space.In fact, their relation does not leave any space at all. In conclusion, Buber advocates the inherent need for doubleness by way of mutual making-present in order for a subjective independent self-constitution to occur.However, doubleness which constitutes identity, and which does not recognize the otherness of the "thou," annihilates the between sphere and does not allow any genuine dialogue to occur.Thus, the protagonists of works featuring the double cannot indulge in self-constitution.This may illuminate the severe sense of distress involved in most of these.As such, Buber's theory of self-constitution by way of genuine dialogue may offer another angle for examining what occurs within the protagonists of this literary corpus and why it arouses such intrigue and discomfort among its readers.
2019-04-09T21:18:45.028Z
2018-01-29T00:00:00.000
{ "year": 2018, "sha1": "72493ea4d5b4c0df2d50d10f3fdac26367b5b219", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-0787/7/1/13/pdf?version=1517386961", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "72493ea4d5b4c0df2d50d10f3fdac26367b5b219", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Philosophy" ] }
219708821
pes2o/s2orc
v3-fos-license
Iterated sumsets and Hilbert functions

Let A be a finite subset of an abelian group (G, +). Let h ≥ 2 be an integer. If |A| ≥ 2 and the cardinality |hA| of the h-fold iterated sumset hA = A + ··· + A is known, what can one say about |(h − 1)A| and |(h + 1)A|? It is known that $|(h-1)A| \ge |hA|^{(h-1)/h}$, a consequence of Plünnecke's inequality. Here we improve this bound with a new approach. Namely, we model the sequence $(|hA|)_{h \ge 0}$ with the Hilbert function of a standard graded algebra. We then apply Macaulay's 1927 theorem on the growth of Hilbert functions, and more specifically a recent condensed version of it. Our bound implies $|(h-1)A| \ge \theta(x, h)\,|hA|^{(h-1)/h}$ for some factor θ(x, h) > 1, where x is a real number closely linked to |hA|. Moreover, we show that θ(x, h) asymptotically tends to e ≈ 2.718 as |A| grows and h lies in a suitable range varying with |A|.

As usual, we set 0A = {0}. A central problem in additive combinatorics is to understand the behavior of |hA| as h grows. Asymptotically, it is known that |hA| is eventually polynomial in h. See e.g. [6, 7, 11]. But not much is known about this polynomial and, for h small, the behavior of |hA| may wildly depend on the structure, or lack thereof, of A. For instance, if A is a subset of Z such that |A| = n, then
$$hn - h + 1 \le |hA| \le \binom{n+h-1}{h},$$
both bounds being attained in suitable cases: arithmetic progressions for the lower bound, and so-called B_h-sets for the upper bound. The latter is best understood by noting that this binomial coefficient counts the number of monomials of degree h in |A| commuting variables. See e.g. [18, Sections 2.1 and 4.5] or [4, Section 3.2].

Here we address the following question. If h ≥ 2 and |hA| is known, what estimates on |(h − 1)A| and |(h + 1)A| can one derive? The classical answer, given by Plünnecke's inequality and based on graph theory [13], is as follows:
$$|(h-1)A| \ge |hA|^{(h-1)/h}. \qquad (1)$$
See also [16, 11, 18]. In this paper, we derive this bound with a completely new approach, and we significantly improve it along the way. Our approach relies on Macaulay's classical 1927 theorem characterizing the Hilbert functions of standard graded algebras [9]. We apply that theorem to a suitable standard graded K-algebra R = R(A) = ⊕_{h≥0} R_h having the property
$$\dim_K R_h = |hA|$$
for all h ≥ 0. Using a recent condensed version of Macaulay's theorem [3], we improve (1) as follows. Denote
$$\binom{x}{h} = \frac{x(x-1)\cdots(x-h+1)}{h!}$$
for x ∈ R and h ∈ N. If |A| ≥ 2, our improved bound implies
$$|(h-1)A| \ge \theta(x, h)\,|hA|^{(h-1)/h},$$
where x is the unique real number larger than h such that $|hA| = \binom{x}{h}$. This ensures θ(x, h) > 1. In fact, the factor θ(x, h) often exceeds 1.5 and even 2, as shown in Sections 5.2 and 5.3. For instance, for h = 12 we have θ(x, 12) > 2.013 for all x ≥ 50. This implies in turn that if A satisfies |12A| ≥ 121,400,000,000, then
$$|11A| \ge 2.013\,|12A|^{11/12}.$$
The wide occurrence of the case θ(x, h) ≥ 2 is described in more detail in Section 5.3. Remarkably, for x large enough and suitable values of h depending on x, the factor θ(x, h) approaches e ≈ 2.718, the base of the natural logarithm. For instance, this occurs for all x ≥ 10^6 at h = 3000. See also Section 5.4, where strong evidence suggests that $\lim_{x \to \infty} \theta(x, x^{1/2}) = e$.

Three general remarks are in order here.

Remark 1.1. Our results are stated for finite subsets of an abelian group G, but they hold more generally if G is a commutative semigroup, as in [12] for instance.
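To make the central objects concrete, here is a minimal Python sketch, ours rather than the paper's, computing hA by brute force for explicit subsets of Z and checking the two classical bounds just recalled; the two example sets are illustrative choices.

```python
from itertools import combinations_with_replacement
from math import comb

def iterated_sumset(A, h):
    """Compute hA = A + ... + A (h-fold sumset) for a finite set of integers."""
    return {sum(c) for c in combinations_with_replacement(A, h)}

n, h = 4, 3
A_ap = {0, 1, 2, 3}   # arithmetic progression: attains the lower bound
A_b3 = {0, 1, 5, 12}  # a B_3-set: all 3-fold sums distinct, attains the upper bound

print(len(iterated_sumset(A_ap, h)), h * n - h + 1)       # 10 10
print(len(iterated_sumset(A_b3, h)), comb(n + h - 1, h))  # 20 20
```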
Remark 1.2. Commutative algebra has already been applied to estimate the growth of iterated sumsets. In particular, the Hilbert polynomial of graded modules has been used to determine the asymptotic behavior of the function h ↦ |hA|, and more generally of the function (h_1, . . . , h_r) ↦ |B + h_1A_1 + ··· + h_rA_r|. See [6, 7, 12, 11]. However, to the best of our knowledge, the only previous application of Macaulay's theorem to additive combinatorics is in [3], where the above-mentioned condensed version is established and applied to yield an asymptotic solution of Wilf's conjecture on numerical semigroups.

Remark 1.3. Another way of comparing |hA| with |(h − 1)A| has been made, at least for A ⊂ Z, by seeking to bound the difference |hA| − |(h − 1)A| from below rather than the quotient |hA|/|(h − 1)A| from above [8]. In the study of the difference |hA| − |(h − 1)A|, a main tool is Kneser's theorem, whereas for the quotient |hA|/|(h − 1)A|, the classical one is Plünnecke's inequality, and an additional one is now Macaulay's theorem, as made plain in this paper.

There is a vast literature on Plünnecke's inequality, its rich applications to additive combinatorics and its successive refinements, such as the Plünnecke-Ruzsa inequality for instance [17]. Besides dedicated chapters in [16, 11, 18], see also the nice survey [14] and its many references.

The contents of this paper are as follows. In Section 2, we construct a graded algebra R(A) whose Hilbert function exactly models the sequence $(|hA|)_{h \ge 0}$. In Section 3, we recall Macaulay's theorem on Hilbert functions and the recent condensed version that we shall use. We prove our main results in Section 4. The first ones, Theorems 4.3 and 4.4, are obtained by applying Macaulay's theorem and its condensed version to the algebra R(A). The strength of these results is then illustrated with the specific case |5A| = 100. Here, Plünnecke's inequality implies |4A| ≥ 40 and |6A| ≤ 251, whereas our method yields much sharper and almost optimal bounds, namely |4A| ≥ 61 and |6A| ≤ 152. As our next main result, Theorem 4.9, we derive the Plünnecke-based estimate (1) from Theorem 4.4 and improve it by some multiplicative factor θ(x, h) > 1. The numerical behavior of that factor is studied in Section 5 and shown to often exceed 1.5 and even 2. In Section 6, we give a presentation of R(A) by generators and relations. We conclude the paper in Section 7 with related questions and remarks.

The graded algebra R(A)

Let A be a finite subset of an abelian group. Here we associate to A a standard graded algebra R(A) whose Hilbert function models the sequence $(|hA|)_{h \ge 0}$. We start by recalling some basic terminology.

Definition 2.1. A standard graded algebra is a commutative algebra R over a field K endowed with a vector space decomposition R = ⊕_{i≥0} R_i such that R_0 = K, R_iR_j ⊆ R_{i+j} for all i, j ≥ 0, and which is generated as a K-algebra by finitely many elements in R_1.

It follows from the definition that each R_i is a finite-dimensional vector space over K. Moreover, as R is generated by R_1, we have R_{i+1} = R_1R_i for all i ≥ 0. Denote by d_i = dim_K R_i the dimension of R_i as a vector space over K. Thus d_0 = 1, and R is generated as a K-algebra by any d_1 linearly independent elements of R_1.

Let now (G, +) be an abelian group. Consider the group algebra K[G] of G. Its canonical K-basis is the set of symbols {t^g | g ∈ G}, and its product is induced by the formula
$$t^{g_1} \cdot t^{g_2} = t^{g_1 + g_2}.$$
Consider now S = K[G][Y], the one-variable polynomial algebra over K[G]. Then S has for K-basis the set
$$\{t^g Y^n \mid g \in G,\ n \in \mathbb{N}\},$$
and the product of any two basis elements is given by
$$(t^{g_1}Y^{n_1}) \cdot (t^{g_2}Y^{n_2}) = t^{g_1 + g_2}\,Y^{n_1 + n_2}$$
for all g_1, g_2 ∈ G and all n_1, n_2 ∈ N. The degree of a basis element is defined as deg(t^gY^n) = n for all g ∈ G and all n ∈ N.
This endows S with the structure of a graded algebra. Thus S = ⊕_{h≥0} S_h, where S_h is the K-vector space with basis the set {t^gY^h | g ∈ G}.

Definition 2.3. Let A = {a_1, . . . , a_n} be a nonempty finite subset of G. We define R(A) to be the K-subalgebra of S generated by the set {t^{a_1}Y, . . . , t^{a_n}Y}.

Thus R(A), being finitely generated over K by elements of degree 1, is a standard graded algebra. We then have
$$\dim_K R(A)_h = |hA|$$
for all h ≥ 0, as desired. For future work on R(A), it is algebraically important to determine the relations between its given generators t^{a_i}Y. This is done in Section 6.

Macaulay's theorem

We now turn to Macaulay's theorem [9] and a recent condensed version of it [3]. Macaulay's theorem gives a necessary and sufficient condition for a numerical function N → N to be the Hilbert function of some standard graded algebra. It rests on the so-called binomial representations of integers. Here is some background information. Given integers a, i ≥ 1, there is a unique way to write
$$a = \binom{a_i}{i} + \binom{a_{i-1}}{i-1} + \cdots + \binom{a_j}{j}$$
with a_i > a_{i−1} > ··· > a_j ≥ j ≥ 1. This expression is called the ith binomial representation of a. Producing it is computationally straightforward: take for a_i the largest integer such that $\binom{a_i}{i} \le a$, and complete $\binom{a_i}{i}$ by adding to it the (i − 1)th binomial representation of $a - \binom{a_i}{i}$. We omit trails of 0's, if any. For instance, for a = 10 and i = 3, we abbreviate $10 = \binom{5}{3} + \binom{1}{2} + \binom{0}{1}$ as simply $10 = \binom{5}{3}$. Given the ith binomial representation of a as above, set
$$a^{\langle i \rangle} = \binom{a_i + 1}{i + 1} + \binom{a_{i-1} + 1}{i} + \cdots + \binom{a_j + 1}{j + 1}.$$
Note that the defining formula of $a^{\langle i \rangle}$ yields the (i + 1)th binomial representation of the integer it sums to.

Here is one half of Macaulay's classical result, constraining the possible Hilbert functions of standard graded algebras [9].

Theorem 3.3 (Macaulay). Let R = ⊕_{i≥0} R_i be a standard graded algebra, and let d_i = dim_K R_i for all i ≥ 0. Then
$$d_{i+1} \le d_i^{\langle i \rangle} \qquad (4)$$
for all i ≥ 1.

Remarkably, the converse also holds in Macaulay's theorem, but we shall not need it here. That is, satisfying (4) for all i ≥ 1 characterizes the Hilbert functions of standard graded algebras. See e.g. [1, 10, 13]. For a suitable example sequence (m_0, . . . , m_6), one checks readily that m_{i+1} ≤ m_i^{⟨i⟩} for all i = 1, . . . , 5. Hence there exists a standard graded algebra R = ⊕_{j≥0} R_j whose values of dim R_i for i = 0, . . . , 6 are exactly modeled by the sequence (m_0, . . . , m_6).

A condensed version

For our new derivation of the Plünnecke-based estimate (1), we shall need the following condensed version of Macaulay's theorem established in [3]. For m ∈ N and x ∈ R, denote as usual
$$\binom{x}{m} = \frac{x(x-1)\cdots(x-m+1)}{m!}.$$
In particular, $\binom{x}{0} = 1$. We shall constantly need the following observations.

Lemma 3.5. Let i ≥ 1 be an integer. Then the map y ↦ $\binom{y}{i}$ is an increasing continuous bijection (in fact, a homeomorphism) from [i − 1, ∞) to [0, ∞). In particular, for any real numbers y_1, y_2 ≥ i − 1, we have $\binom{y_1}{i} \le \binom{y_2}{i}$ if and only if y_1 ≤ y_2.

Consequently, for all integers m ≥ 0 and h ≥ 1, there is a unique real number x ≥ h − 1 such that $\binom{x}{h} = m$. Here is the condensed version of Macaulay's theorem that we shall use in the next section.

Theorem 3.7. Let R = ⊕_{i≥0} R_i be a standard graded algebra. Let h ≥ 1, and let x ≥ h − 1 be the unique real number such that dim R_h = $\binom{x}{h}$. Then dim R_{h+1} ≤ $\binom{x+1}{h+1}$.

Main results

Let A be a finite subset of an abelian group with |A| ≥ 2.

Theorem 4.1 (Plünnecke). Let A be a finite subset of an abelian group, and let 1 ≤ i ≤ h be integers. Then
$$|iA| \ge |hA|^{i/h}.$$

It suffices to establish the case i = h − 1, namely
$$|(h-1)A| \ge |hA|^{(h-1)/h}. \qquad (6)$$
Indeed, the general case is implied by (6), as shown by induction on h:
$$|iA| \ge |(i+1)A|^{i/(i+1)} \ge \cdots \ge |hA|^{i/h}.$$
Consequently, in the sequel, we mainly focus on comparing |hA| with |(h − 1)A| and/or |(h + 1)A|. In this spirit, a particular case of Plünnecke's Theorem 4.1 is the estimate
$$|(h+1)A| \le |hA|^{(h+1)/h} \qquad (7)$$
for all h ≥ 1. In comparison, here is our first main result, obtained by applying Macaulay's Theorem 3.3 to the standard graded algebra R(A) defined in Section 2.

Theorem 4.3. Let A be a finite subset of an abelian group. Then
$$|(h+1)A| \le |hA|^{\langle h \rangle} \qquad (8)$$
for all h ≥ 1.

The strength of Theorem 4.3 is illustrated in Section 4.1, with a concrete example showing that (8) may be much sharper than (7). In fact, the improvement of the former over the latter is systematic, as shown by Theorem 4.9, Corollary 4.10 and Remark 4.11. See also a comment in Section 7.
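As a quick illustration of the greedy procedure described in Section 3 and of the operator $a^{\langle i \rangle}$, here is a small sketch; it is ours, not the paper's, and the function names are illustrative. It recovers the example $10 = \binom{5}{3}$ above, as well as the bound |6A| ≤ 152 for |5A| = 100 announced in the introduction.

```python
from math import comb

def binomial_representation(a, i):
    """Greedy i-th binomial representation of a >= 1:
    a = C(a_i, i) + C(a_{i-1}, i-1) + ..., with zero terms omitted."""
    rep = []
    while a > 0 and i >= 1:
        n = i
        while comb(n + 1, i) <= a:  # largest n with C(n, i) <= a
            n += 1
        rep.append((n, i))
        a -= comb(n, i)
        i -= 1
    return rep

def macaulay_shift(m, i):
    """The operator m^<i>: shift each C(n, j) in the representation to C(n+1, j+1)."""
    return sum(comb(n + 1, j + 1) for (n, j) in binomial_representation(m, i))

print(binomial_representation(10, 3))  # [(5, 3)], i.e. 10 = C(5, 3)
print(macaulay_shift(100, 5))          # 152: if |5A| = 100, then |6A| <= 152
```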
Proof. Let R = R(A) be the standard graded algebra associated to A as defined in Section 2. We have
$$|(h+1)A| = \dim R_{h+1} \le (\dim R_h)^{\langle h \rangle} = |hA|^{\langle h \rangle}$$
by Macaulay's Theorem 3.3.

Let us now apply Theorem 3.7, the condensed version of Macaulay's theorem. We obtain the following more flexible bounds, from which we shall derive and improve (6).

Theorem 4.4. Let A be a finite subset of an abelian group with |A| ≥ 2. Let h ≥ 1, and let x ≥ h be the unique real number such that $|hA| = \binom{x}{h}$. Then
$$|(h-1)A| \ge \binom{x-1}{h-1} \quad\text{and}\quad |(h+1)A| \le \binom{x+1}{h+1}.$$

Proof. As above, let R = R(A) be the standard graded algebra associated to A, with its decomposition R = ⊕_{h≥0} R_h into the direct sum of its homogeneous subspaces of given degree, where dim R_h = |hA| for all h ≥ 0. The claimed bounds follow from Theorem 3.7 applied to R(A).

Given |hA|, the lower bound on |(h − 1)A| provided by Theorem 4.4 may be up to 2.71 times better, in suitable circumstances, than the one provided in (6). Indeed, let x ≥ 5 be the unique real number such that $\binom{x}{5} = 100$. Then 8.69 < x < 8.7, as follows from $\binom{8.69}{5} < 100 < \binom{8.7}{5}$. Assume now that |5A| = 100. Then |4A| ≥ 61 and |6A| ≤ 152.

Proof. The 5th binomial representation of 100 is given by
$$100 = \binom{8}{5} + \binom{7}{4} + \binom{4}{3} + \binom{3}{2} + \binom{2}{1}.$$
The inequality $|(h+1)A| \le |hA|^{\langle h \rangle}$ of Theorem 4.3 then yields (10). More precisely, we have
$$|6A| \le \binom{9}{6} + \binom{8}{5} + \binom{5}{4} + \binom{4}{3} + \binom{3}{2} = 152.$$
Thus inequality (12) follows. We conjecture that these bounds are optimal for sets of integers.

As seen here, the improvement provided by Theorem 4.4 is already quite good. How good is it in general? We investigate this question in the sequel.

Macaulay vs Plünnecke

As our next main result, we show that Plünnecke's Theorem 4.1 also follows from our Macaulay-based Theorem 4.4, and we significantly strengthen it by a multiplicative factor which may exceed 2.71 in suitable circumstances.

Theorem 4.9. Let A be a finite subset of an abelian group with |A| ≥ 2. Let h ≥ 2, and let x > h be the unique real number such that $|hA| = \binom{x}{h}$. Then
$$|(h-1)A| \ge \theta(x, h)\,|hA|^{(h-1)/h}, \quad\text{where}\quad \theta(x, h) = \binom{x-1}{h-1}\Big/\binom{x}{h}^{(h-1)/h}.$$

We close this section with an equivalent formulation of Theorem 4.9. It provides a nice inequality between |(h − 1)A| and |hA|, yet one less suited to comparison with Plünnecke's inequality.

Proof. Directly follows from Theorem 4.9 and the formulas above. Alternatively, it follows directly from Theorem 4.4 and formula (14).

Behavior of θ(x, h)

We now study the numerical behavior of the function θ(x, h). Denote by e ≈ 2.718 the base of the natural logarithm. We show that 1 < θ(x, h) < e whenever x > h ≥ 2, and that θ(x, h) asymptotically tends to e in suitable circumstances. This section is slightly more informal in nature. Numerical computations and graphics were done with Mathematica 10 [20].

Proof. The lower bound follows from (15) and Remark 4.11. The upper bound follows from a direct estimate of the binomial coefficients involved.

We shall also need to invoke the monotonicity of θ(x, h) in x.

Proposition 5.2. Let h ≥ 2 be a fixed integer. Then the map x ↦ θ(x, h) is strictly increasing on (h, ∞).

Proof. It is equivalent to show that the map x ↦ θ(x, h)^h is strictly increasing. This easily follows from the positivity of its derivative. Details are left to the reader.

Asymptotics

We provide here, somewhat informally, a good approximation of θ(x, h) together with its asymptotic behavior as x grows. Recall Stirling's approximation of n! for large n:
$$n! \approx \sqrt{2\pi n}\,\Big(\frac{n}{e}\Big)^n.$$
On the other hand, the bounds below are valid for all n ≥ 1:
$$\sqrt{2\pi n}\,\Big(\frac{n}{e}\Big)^n e^{1/(12n+1)} \le n! \le \sqrt{2\pi n}\,\Big(\frac{n}{e}\Big)^n e^{1/(12n)}.$$
This yields the following well-known approximation of $\binom{n}{k}$ for n much larger than k, see e.g. [19]:
$$\binom{n}{k} \approx \frac{n^k}{k!} \approx \frac{1}{\sqrt{2\pi k}}\Big(\frac{en}{k}\Big)^k.$$
As a consequence, here is the asymptotic behavior of θ(x, h) when x grows.

Proposition 5.3. Let h ≥ 2 be an integer. Then
$$\lim_{x \to \infty} \theta(x, h) = \frac{h}{(h!)^{1/h}}.$$
In particular, $\lim_{h \to \infty} \lim_{x \to \infty} \theta(x, h) = e$.

Proof. Directly follows from the above approximation of the binomial coefficients.

When θ(x, h) ≥ 1.5

Our multiplicative improvement factor θ(x, h) over the Plünnecke-based estimate exceeds 1.5 quite early in terms of x or h. Indeed, one observes that the smallest integer x for which θ(x, h) ≥ 1.5 for some integer h is x = 10, specifically at h = 4 and 5. Even starting at h = 3, we have
$$\theta(x, 3) \ge 1.509 \qquad (17)$$
for all x ≥ 12. As an example of application, these observations, together with Theorem 4.9, yield the following improvements of (16) for h = 3, 4, 5.
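The numerical claims above are easy to reproduce. The following sketch (ours, with illustrative names) solves $\binom{x}{h} = |hA|$ for x by bisection and evaluates θ(x, h) as in Theorem 4.9; it recovers the values 8.69 < x < 8.7, θ(12, 3) ≈ 1.509 and θ(50, 12) ≈ 2.013 quoted in the text.

```python
from math import prod

def binom_real(x, h):
    """Generalized binomial coefficient C(x, h) for real x and integer h >= 0."""
    return prod(x - k for k in range(h)) / prod(range(1, h + 1)) if h else 1.0

def x_from_card(m, h, tol=1e-12):
    """The unique real x >= h - 1 with C(x, h) = m, found by bisection."""
    lo, hi = float(h - 1), float(h)
    while binom_real(hi, h) < m:
        hi *= 2
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if binom_real(mid, h) < m else (lo, mid)
    return (lo + hi) / 2

def theta(x, h):
    """Improvement factor theta(x, h) = C(x-1, h-1) / C(x, h)^((h-1)/h)."""
    return binom_real(x - 1, h - 1) / binom_real(x, h) ** ((h - 1) / h)

print(round(x_from_card(100, 5), 3))  # ~8.697, so 8.69 < x < 8.7 as in Section 4.1
print(round(theta(12, 3), 3))         # 1.509, cf. (17)
print(round(theta(50, 12), 3))        # 2.013, cf. theta(x, 12) > 2.013 for x >= 50
```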
When θ(x, h) ≥ 2

We now examine circumstances guaranteeing θ(x, h) ≥ 2, a case of interest where our bound in Theorem 4.9 is at least twice better than (16). As it turns out, for x large, one has θ(x, h) ≥ 2 for almost all integers h between 6 and ⌊x/2⌋. We also describe cases where θ(x, h) gets very close to its upper bound e. In fact, when x goes to infinity, θ(x, h) ≥ 2 holds for almost all positive integers h ≤ x/2. Indeed, as observed in (18), we have θ(x, 6) ≥ 2 for all x ≥ x_0 = 1210. Now, numerical computations at x_0 yield
$$\theta(x_0, h) \ge 2 \quad \text{for all integers } 6 \le h \le 595. \qquad (19)$$
Together with Theorem 4.9, this yields the following factor 2 improvement over the Plünnecke-based estimate (16).

Corollary 5.5. Let h be an integer such that 6 ≤ h ≤ 595. Let A be a subset of an abelian group G such that $|hA| \ge \binom{2h+20}{h}$. Then
$$|(h-1)A| \ge 2\,|hA|^{(h-1)/h}.$$

Proof. The hypothesis implies θ(x, h) ≥ 2, by (19) and Proposition 5.2. The conclusion follows from Theorem 4.9.

As yet another instance, for x_1 = 10^6 now, one has an almost identical statement to (19) for x_0 = 1210, namely the corresponding inequality (20) at x_1. Statements (19) and (20) are no accident, as hinted by the following result.

The highest point

For fixed x, the general shape of θ(x, h) when h runs from 1 to x is well illustrated by Figure 1 for x = 48. Figure 2 displays the case x = 1000. It would be desirable to determine the highest point of that curve, and in particular the integer 1 ≤ h ≤ x maximizing θ(x, h). We do not yet have a precise answer. Nevertheless, by computing derivatives of the approximation of θ(x, h) provided by Proposition 5.3, one sees that, for fixed x, the sought-for integer h maximizing θ(x, h) satisfies an approximate relation (21). For instance, for x_0 = 100, the maximum of θ(x_0, h) is reached at h = 18, for which θ(100, 18) ≈ 2.177. Hence θ(x, 18) > 2.17 for all x ≥ 100, as follows from Proposition 5.2.

For h fixed

In the opposite direction, for h fixed, it is easy to locate, using (21), the real number x_1 ≥ h at which the maximum of θ(x_1, ·) is attained at h. This suggests that $\lim_{x \to \infty} \theta(x, x^{1/2}) = e$, as is fully confirmed by numerical experiments. As a concrete illustration, here are instances where θ(x, h) gets very close to e:

• For all x ≥ 200000 and all 1200 ≤ h ≤ 1300, one has θ(x, h) ≥ 2.70.

A presentation of R(A)

Reusing the notation of Section 2, let A = {a_1, . . . , a_n} be a nonempty finite subset of an abelian group (G, +). For future use, it is algebraically necessary to determine the relations between the given generators t^{a_i}Y of the associated algebra R(A). Our aim here is thus to identify R(A) as the quotient of the polynomial algebra K[X_1, . . . , X_n] by a suitable homogeneous ideal I. Let ϕ : K[X_1, . . . , X_n] → R(A) be the surjective morphism induced by ϕ(X_i) = t^{a_i}Y for all i. Let M denote the set of monomials of K[X_1, . . . , X_n]. On the set M, we define the equivalence relation
$$u \sim v \iff \varphi(u) = \varphi(v)$$
for all u, v ∈ M. Equivalently, let us write u = X^α, v = X^β with α = (α_1, . . . , α_n), β = (β_1, . . . , β_n) ∈ N^n. Then
$$u \sim v \iff \sum_i \alpha_i = \sum_i \beta_i \ \text{ and } \ \sum_i \alpha_i a_i = \sum_i \beta_i a_i.$$
In particular, equivalent monomials have the same degree, where as usual deg(X^α) = Σ_i α_i. We shall need the notion of a simple polynomial relative to ∼. We say that f is simple if f ≠ 0 and all monomials occurring in f are equivalent under ∼. Observe that a simple polynomial is homogeneous. Indeed, equivalent monomials under ∼ have the same degree, as observed above. Moreover, every nonzero polynomial g ∈ K[X_1, . . . , X_n] may be decomposed, in a unique way up to order, as the sum g = f_1 + ··· + f_r of maximal simple polynomials f_i, in the sense that for all i ≠ j, the monomials occurring in f_i are nonequivalent under ∼ to those of f_j. The f_i are obtained by simply regrouping the monomials of g into maximal equivalence classes. We shall refer to the f_i as the simple components of g. See e.g. [2, p. 232] and [5, p. 346], where similar notions were used.
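The relation ∼ is easy to experiment with. The following sketch (ours, with illustrative names) groups the degree-h monomials X^α by the invariants (Σα_i, Σα_ia_i) for a concrete A ⊂ Z; the number of classes is |hA|, and any two monomials in the same class yield a binomial u − v lying in the ideal I studied below.

```python
from itertools import combinations_with_replacement
from collections import defaultdict

def monomial_classes(A, h):
    """Group degree-h monomials X^alpha (as exponent tuples) by the pair
    (sum of alpha_i, sum of alpha_i * a_i), i.e. by the fibers of phi."""
    A = sorted(A)
    classes = defaultdict(list)
    # a degree-h monomial in n variables is a multiset of h generator indices
    for idx in combinations_with_replacement(range(len(A)), h):
        alpha = tuple(idx.count(i) for i in range(len(A)))
        classes[(h, sum(A[i] for i in idx))].append(alpha)
    return classes

cls = monomial_classes([0, 1, 2], 2)
print(len(cls))     # 5 = |2A| for A = {0, 1, 2}, since 2A = {0, 1, 2, 3, 4}
print(cls[(2, 2)])  # [(1, 0, 1), (0, 2, 0)]: X1*X3 ~ X2^2, so X1*X3 - X2^2 is in I
```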
Lemma 6.3. Let g ∈ ker(ϕ) \ {0}. Then every simple component of g belongs to ker(ϕ).

Proof. Let f be a simple component of g. We must show ϕ(f) = 0. Since f is simple, it is homogeneous of some degree h. Write f = Σ_i λ_iu_i, where λ_i ∈ K \ {0} for all i and where the u_i are pairwise distinct monomials. Since the u_i are pairwise equivalent under ∼, we have ϕ(u_i) = t^bY^h for all i and some fixed b ∈ G. Now, for any monomial v occurring in g but not in f, we have ϕ(v) ≠ t^bY^h, as v is non-equivalent to the u_i. Since ϕ(g) = 0, it follows that Σ_i λ_i = 0. Hence ϕ(f) = 0, as desired.

Let I ⊆ K[X_1, . . . , X_n] denote the ideal generated by all differences u − v of equivalent monomials u ∼ v in M. Then ker(ϕ) = I.

Proof. We have I ⊂ ker(ϕ) by construction. Conversely, let 0 ≠ f ∈ ker(ϕ). By Lemma 6.3, we may further assume that f is simple. Write f = Σ_{i=1}^{r} λ_iu_i, where λ_i ∈ K \ {0} for all i and where the u_i are pairwise distinct monomials. Since ϕ(f) = 0 and ϕ(u_i) = ϕ(u_j) for all i, j, it follows that Σ_{i=1}^{r} λ_i = 0. Therefore λ_r = −Σ_{i=1}^{r−1} λ_i, and so
$$f = \sum_{i=1}^{r-1} \lambda_i\,(u_i - u_r).$$
Since u_i ∼ u_r for all i, it follows that u_i − u_r ∈ I. Hence f ∈ I, as desired.

Concluding comments

We end this paper with a few related questions and remarks. A first natural question is, how far from optimal are our new bounds? More precisely, let (G, +) be an abelian group, and let h, i, m be positive integers such that m ≤ |G|. Among all subsets A ⊆ G such that |hA| = m, what is

• (inverse problem) the best possible lower bound on |iA| for i ≤ h?
• (direct problem) the best possible upper bound on |iA| for i ≥ h?

As another natural question, can one specialize Macaulay's theorem by characterizing the Hilbert functions of all algebras of the form R(A) for finite subsets A of a given abelian group G? A positive answer would help tackle the former question.

Finally, in a sequel to this paper, we will show two more aspects of the strength of Theorem 4.3. The proof methods are quite different from the present ones, except that Macaulay's theorem remains central. First, we will show that Theorem 4.3 is asymptotically optimal: the upper bound it provides, namely $|(h+1)A| \le |hA|^{\langle h \rangle}$ for all h ≥ 1, is in fact an equality for h large enough. Second, we will show that Theorem 4.3 is best possible in the sense that, given any sequence of positive integers (d_i)_{i≥0} such that d_0 = 1 and $d_{i+1} \le d_i^{\langle i \rangle}$ for all i ≥ 1, it can be realized by the iterated sumsets of a suitable finite set A.

Together, the present paper and its forthcoming sequel raise the prospect that Macaulay's theorem, an almost century-old classical result from commutative algebra, may emerge as a powerful new tool in additive combinatorics.
2020-06-17T01:01:13.105Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "04b0173ead5a8aab2f5b48d7a84a6b85252c8172", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.08998", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "04b0173ead5a8aab2f5b48d7a84a6b85252c8172", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
5537204
pes2o/s2orc
v3-fos-license
A review of brain circuitries involved in stuttering

Stuttering has been the subject of much research, nevertheless its etiology remains incompletely understood. This article presents a critical review of the literature on stuttering, with particular reference to the role of the basal ganglia (BG). Neuroimaging and lesion studies of developmental and acquired stuttering, as well as pharmacological and genetic studies, are discussed. Evidence of structural and functional changes in the BG in those who stutter indicates that this motor speech disorder is due, at least in part, to abnormal BG cues for the initiation and termination of articulatory movements. Studies discussed provide evidence of a dysfunctional hyperdopaminergic state of the thalamocortical pathways underlying speech motor control in stuttering. Evidence that stuttering can improve, worsen or recur following deep brain stimulation for other indications is presented in order to emphasize the role of the BG in stuttering. Further research is needed to fully elucidate the pathophysiology of this speech disorder, which is associated with significant social isolation.

INTRODUCTION

Stuttering (stammering in British English) is a speech disorder characterized by disruptions in speech motor behavior (repeated or prolonged articulatory and phonatory actions) that result in sound and syllable repetitions, audible and inaudible sound prolongations and broken words (Max et al., 2004b). The definition of stuttering remains the subject of debate, despite multiple attempts (Andrews and Harris, 1964; Wingate, 1997; Bloodstein and Ratner, 2008). For the purposes of this review, stuttering will be considered a speech motor disorder, even if the process may have broken down at the pre-motor level and even if there are related cognitive/linguistic or emotional/psychological processes.

Stuttering has a negative impact upon quality of life, interpersonal relationships, employment opportunities and job performance, and it is associated with significant personal financial costs (Klein and Hood, 2004; Blumgart et al., 2010; Koedoot et al., 2011; Van Borsel et al., 2011). Stuttering is associated with stigma and discrimination due to negative stereotypes, especially if severe and if causality is perceived to be psychological (Gabel, 2006; Boyle et al., 2009). It is associated with higher levels of social anxiety (Kraaimaat et al., 2002; Iverach et al., 2009a,b). Although ∼1% of the population stutters (Van Riper, 1982), the etiology is still unknown and a unifying pathomechanism for acquired neurogenic stuttering (ANS) has yet to be identified. The aim of this review is to describe neuroimaging, lesion, pharmacological, and genetic studies on the neural circuitries implicated in developmental and acquired stuttering.

PERSISTENT DEVELOPMENTAL STUTTERING

Persistent developmental stuttering (PDS) often first manifests in children between the ages of 2 and 4. It improves or remits spontaneously in a large proportion of affected children, boys having a much higher rate of persistence into adulthood than girls. Stuttering can also occur de novo in adulthood, secondary to neurological injury or disease. However, resolved childhood stuttering can recur in the context of adult-onset neurological disease such as Parkinson's disease (PD). Despite an extensive literature on the subject, the etiology of PDS remains unknown. The existence of conditions that reliably induce fluency suggests that stuttering arises at the motor planning level rather than being due to abnormalities of the vocal tract or of the peripheral nervous system.
Fluency-inducing conditions include choral speech or reading, the rhythm effect (or metronome speech), non-automated speech (e.g., foreign accent, role play, or acting), white noise, and singing (Stager and Ludlow, 1998; Kalinowski et al., 2000; Kalinowski and Saltuklaroglu, 2003; Davidow et al., 2011).

ASSOCIATED SYMPTOMS

Associated features or "secondary" symptoms can be divided into overt concomitants and physiological concomitants (Bloodstein and Ratner, 2008). Overt concomitants include associated movements, which may be due to underlying motor dysfunction (e.g., visible tension in the face, head jerking while speaking, eye blink, forehead wrinkling, sudden exhalation), and interjected speech fragments or 'filled pauses,' which can be sounds, syllables, words, or phrases. There may also be abnormal speech rate and altered vocal quality, with sharp shifts in pitch level or lack of normal pitch variation. Physiological concomitants include flushing, pallor, perspiration, eye movements, and cardiovascular phenomena.

PUTATIVE GENETIC ETIOLOGIES OF PDS

The aggregation of PDS in certain families, high rates of monozygotic (63%) and dizygotic twin concordance, as well as reports of a significant difference in sex ratio between stutterers with and without a positive family history, have led to extensive research into a potential genetic etiology of the disorder (Howie, 1981; Drayna et al., 1999). No single gene has been identified in PDS, and it is likely a polygenic disorder. There is evidence to suggest a Mendelian model with an autosomal dominant major gene effect (Viswanath et al., 2004). An area on chromosome 18 was identified in a genome-wide linkage analysis of stuttering (Shugart et al., 2004). This area was relatively large, but putative candidate genes included a cluster of genes belonging to the desmoglein/desmocollin family (on 18q12.1), and the neuronal cadherin gene 2 (on 18q11.2). Both have known roles in cell adhesion and intercellular communication, and might be of relevance to neural substrates of speech. The results of other genome-wide linkage surveys suggest linkage on chromosomes 1, 13, and 16 (Cox and Yairi, 2000), and on chromosome 12q (a study of 46 consanguineous families; Riaz et al., 2005). Mapping of the significant locus on chromosome 12q identified mutations in three related genes implicated in lysosomal metabolism. A link between these mutations in lysosomal metabolism genes and the white matter (WM) abnormalities described in people who stutter (PWS) has been suggested (Büchel and Watkins, 2010).

Watkins (2011) makes a comparison of stuttering with a genetic disorder of speech and language development described in the large multigenerational KE family, which displays autosomal dominant monogenic inheritance. The affected members of the KE family have been found to have a mutation in the FOXP2 gene in the SPCH1 region of chromosome 7q31 (Lai et al., 2001). The chromosome 7 locus identified by Suresh et al. (2006) did not include the FOXP2 gene. Voxel-based morphometry (VBM) and positron emission tomography (PET) studies of affected KE family members found structural and functional abnormalities of the caudate nucleus (Watkins et al., 1999, 2002; Watkins, 2011), and FOXP2/Foxp2 is expressed in the dorsal striatum in human and rat embryogenesis. FOXP2 is also expressed in a homologous area of the songbird brain, and knockout of the gene in songbirds is associated with severe impairment of song learning, with stuttering-like output (Haesler et al., 2004, 2007).
Thus the genetic and neuroimaging findings in the KE family provide evidence of a possible genetic ontogeny of stuttering. The structural and functional abnormality of the caudate supports the hypothesis that stuttering is a basal ganglia (BG) disorder, and is consistent with certain neuroimaging studies in stuttering (see below).

Alm and Risberg (2007) suggested that the adult stutterers in their study could be divided into two groups, the first comprising those with higher trait anxiety and higher Wender Utah Rating Scale (WURS) scores. The WURS is used in the retrospective diagnosis of childhood attention deficit hyperactivity disorder (ADHD). The stutterers in this group had a higher occurrence of pre-existing neurological lesions but fewer relatives who stuttered. In contrast, the stutterers in the second group had lower trait anxiety and WURS scores, fewer pre-onset neurological lesions, and more relatives who stuttered. They thus posited that these groups might represent two separate subtypes of stuttering. This is consistent with Poulos and Webster (1991), who suggested that adults with developmental stuttering can be divided into two groups, one with a family history of stuttering and therefore a possible genetic etiology, and another with no family history of stuttering but a history of pre-onset head injury or birth injury. There is evidence of a relationship between mild head injury and stuttering (and hyperactivity and mixed handedness) in children (Segalowitz and Brown, 1991).

NEURAL CORRELATES OF PERSISTENT DEVELOPMENTAL STUTTERING

Brain imaging studies of developmental stuttering have disclosed various abnormalities. In this paper we discuss evidence for the role of the cerebellum, the anterior cingulate cortex (ACC), the supplementary motor area (SMA), and the right frontal operculum (RFO) (Figure 1).

THE CEREBELLUM AND AUDITORY PROCESSING

The cerebellum has classically been considered to be a motor structure implicated in motor learning and in novel behaviors. A meta-analysis of the functional imaging literature showing that it is consistently activated in purely auditory tasks suggests that it might also have a role in sensory auditory processing (Petacchi et al., 2005). There is evidence of greater overall cerebellar activation and abnormal right lateralization in stutterers compared to controls during silent and oral word reading, which increases further following fluency-inducing therapy but then falls to below pretreatment levels in the long term. Increased cerebellar activation in PWS compared to controls, both pre- and post-treatment, may be related to increased sensory or motor monitoring due to reduced automaticity in articulatory movement sequences, even when reading silently. Cerebellar activation may also be related to selective attention processes, and prior treatment in stutterers may lead to greater attention and monitoring during speech production and thus less automation in articulatory movement execution (Allen et al., 1997). The increase in cerebellar activation from pre- to post-treatment followed by a decrease in activation would be consistent with this hypothesis, as speech therapy would initially reduce automaticity and increase self-monitoring and attentional effort during speech, and these would then decrease as the acquired skills for fluency became more practiced and automatic with time.
Fox et al. (1996) reported a diffuse increase in activation of the cerebral and cerebellar motor systems in stutterers [M1, SMA, superior lateral premotor region (SLPrM), and cerebellum] during solo and chorus reading conditions. The M1 activation was aberrantly right-dominant in the dextral stutterers. They also found that stutterers (but not controls) activated the insula bilaterally and the claustrum, the thalamus and the globus pallidus (GP) on the left during speech tasks (Figure 2).

THE ANTERIOR CINGULATE CORTEX AND THE SUPPLEMENTARY MOTOR AREA IN STUTTERING

A review of the neural pathways underlying vocal control found evidence for divergent roles of the anterior cingulate gyrus in human and non-human primates (Jürgens, 2002). Studies in macaques report vocalization-correlated activity changes in anterior cingulate gyrus neurones, whereas human PET and functional magnetic resonance imaging (fMRI) studies show that the SMA is consistently activated during speech and singing. The anterior cingulate gyrus is only activated during a few speech-related tasks in humans, but shows consistent activation during non-vocal emotion-related tasks. Thus it may be that the SMA is implicated in the volitional control of learned motor patterns, and the anterior cingulate gyrus in the volitional control of emotional states. A unifying feature is the control of initiation of vocal utterances rather than pattern generation.

Functional imaging studies of stuttering in humans provide evidence for the implication of the ACC in atypical neural activation patterns during speech, with relatively increased activation in stutterers in the ACC during silent and oral reading tasks (Kroll et al., 1997a,b; De Nil, 1999b). De Nil and colleagues proposed that the ACC provides a connection between the limbic system and the sensorimotor cortex of direct relevance to stuttering. The ACC is involved in response preparation and in anticipatory reactions, particularly when presented with complex stimuli and the need to select one of multiple possible responses (Paus et al., 1998). Thus increased ACC activation in stutterers might be due to increased anticipatory reactions when reading and scanning for potential fluency problems, and the ACC could also be involved in the silent rehearsal of words. Less automated tasks are associated with increased activation of the inner articulatory loop, which may involve the ACC (Paus et al., 1993). Furthermore, ACC activation during silent reading tasks is significantly decreased in stutterers following fluency-inducing treatment (Kroll et al., 1997a,b; De Nil, 1999a,b). This could be due to decreased silent articulatory rehearsal or decreased anticipatory scanning.

PET and fMRI studies have shown consistent activation in the sensorimotor cortex, the SMA and the anterior cingulate gyrus during speech and singing in humans. There are other areas that show task-specific activation. Pronouncing sequences of meaningless phonemes is associated with activation in the insula and auditory cortex (Bookheimer et al., 2000), whereas passive listening to external sounds and auditory feedback of produced sounds activate the auditory cortex but not the insula (Perry et al., 1999). Furthermore, activation of the insula is seen during singing and speaking aloud, but not during silent speech and song, nor when listening to speech or tone sequences (unlike the auditory cortex; Herholz et al., 1994; Riecker et al., 2000).
AUDITORY FEEDBACK AND STUTTERING

There is evidence of voice-sensitive and -selective clusters of activation in the superior temporal sulcus (STS), which may be analogous to face-selective areas in human visual cortex (Belin et al., 2000). The association of increased superior temporal cortex activation with mismatch between actual and expected auditory feedback lends support to a verbal self-monitoring model, in which there is communication between speech production regions and speech perception regions (Fu et al., 2006). This is consistent with activation in the superior temporal gyrus (STG) bilaterally under delayed auditory feedback (DAF) conditions compared to normal auditory feedback. The positive correlation between DAF and STG activation suggests that areas of the temporo-parietal cortex are the substrate for a conscious verbal self-monitoring that supports automatic speech production (Hashimoto and Sakai, 2003).

A magnetoencephalography (MEG) study of cortical activation in response to hearing one's own voice showed that the human auditory cortex is primed by speech at a millisecond rate and that this delays and decreases reactions to one's own expected vocal output (Curio et al., 2000). Stuttering could involve abnormalities in this type of motor-to-sensory priming of auditory cortex responses during speech output (Salmelin et al., 1998). The sensitivity of the auditory cortex is reduced when listening to one's own voice, and the activity of the auditory cortex may be modulated according to predicted or expected auditory feedback (Houde et al., 2002). Riecker et al. (2002) found that a rhythmic speech production task was associated with activation of the left putamen and thalamus and right perisylvian areas, including the STG and the right Broca analog, and proposed that the right hemisphere is implicated in rhythmic pattern rehearsal and the left hemisphere in self-monitoring of verbal output.

A MEG study of PWS' auditory cortex response in tasks with and without auditory feedback suggested an unstable interhemispheric balance in stutterers, easily disturbed by increased workloads, which could lead to unpredictable and transient failure of auditory perception (Salmelin et al., 1998). Accurate interpretation of auditory input is needed for self-monitoring and on-line speech adjustment, therefore this abnormal auditory response could initiate or facilitate stuttering. Previous studies suggested that a significant proportion of stutterers fail to show the normal right-ear advantage in dichotic presentation of meaningful linguistic stimuli (Curry and Gregory, 1969; Hall and Jerger, 1978), and that stutterers may even have difficulties in sound localization (Rousey et al., 1959).

Braun et al. (1997) used PET to investigate speech in adults who had stuttered since childhood. They found that regional cerebral blood flow (rCBF) patterns in stuttering differed markedly from normal controls, failing to demonstrate the left hemispheric lateralization typically observed in controls; instead, regional responses were either absent, bilateral or localized to the right. During a dysfluent language task, stutterers had increased right caudate, bilateral periaqueductal gray (PAG) and midline cerebellar activations. The dysfluent language-motor contrast also showed absence of left inferior insular cortex activation in stutterers compared to controls.
There was disproportionate activation of anterior forebrain regions (which have a regulatory role in motor function) in stutterers during dysfluent speech production, and at the same time, a relative deactivation of post-rolandic regions involved in sensory information perception and decoding. In the dysfluent language-motor contrast, stutterers failed to activate Wernicke's area (posterior STG and inferior angular gyrus). Thus when dysfluent, stutterers may not be effectively monitoring speech language output, as this is the role of Wernicke's area in normally fluent individuals (Petersen et al., 1988; Wise et al., 1991; Démonet et al., 1992; Zatorre et al., 1992). Van Borsel et al. (2003a) also reported increased activation of the right homologs of left language areas in stutterers during language processing, as well as increased cerebellar and auditory activations during silent reading, which may demonstrate the use of less differentiated auditory and motor feedback mechanisms in stutterers, and might partly explain the fluency-enhancing effects of various types of altered auditory feedback (AAF) in PDS.

Conversely, Ingham et al. (2003) found only two regions with different activation between PWS and controls, namely increased activation of the right anterior insula and deactivation of the right Wernicke's homolog (BA21/22) in stutterers. This differs from the results of Pool et al. (1991), who report global absolute reductions in rCBF in PWS compared to controls, with significant flow asymmetry (left < right) in the anterior cingulate and the superior and middle temporal gyri in PWS. Ingham et al. (1996) failed to find any significant differences in resting-state regional cerebral perfusion between PWS and controls (on PET and MRI), but rather only suggested minor differences in hemispheric symmetry, despite an adequate sample size (n = 29), 74 regions of interest and sound methods. The discordance of results between these studies may illustrate the lack of consensus in the neuroimaging literature and motor speech research.

NEURAL ACTIVATION CHANGES FOLLOWING FLUENCY-INDUCING THERAPY

De Nil et al. (2003) provided evidence using PET that fluency-inducing treatment may be associated with a general reduction in overactivation, especially in the motor cortex, and with changes in activation lateralization. There were significant differences in activation patterns between controls and stutterers, even during silent speech tasks (so presumably not attributable to articulatory movements). During silent reading, controls activated speech and language areas in left frontal and temporal cortices, with no activation in motor or premotor cortex. Stutterers showed a significantly increased level of overall neural activation compared to controls, consistent with the hypothesis that in stutterers there is recruitment of more neural resources in order to achieve even relatively simple speech tasks (De Nil et al., 2000). Stutterers activated the primary motor cortex and the cerebellum even during silent reading tasks, suggesting that they place more emphasis on articulatory aspects even during silent reading (De Nil et al., 2000). Following treatment, the activation pattern in stutterers became more left-lateralized, only to return to bilateral or right-lateralized at follow-up scanning. There was a gradual reduction in overall activation following treatment and at follow-up.
Activation in the insula changed from being predominantly right-lateralized pre-treatment to being left-lateralized post-treatment and at follow-up, whereas cerebellar activation became more right-lateralized with treatment (as expected given its cross-connections to the motor cortex). Pre-treatment, stutterers had right-lateralized STG activation, which became left-lateralized post-treatment and at follow-up. The neural overactivation seen in stutterers compared to controls during speech-related tasks, together with the increased cerebellar activation, suggests a lack of speech automatization. The results of De Nil et al. (2003) suggest that this aberrant neural overactivation can be reduced in stutterers by fluency-inducing therapy; however, activity was not entirely normalized after treatment. Furthermore, it is not yet known whether neural activation patterns and treatment-induced changes in them differentiate stutterers who will relapse following treatment from those who will successfully maintain fluency in the long term. Giraud et al. (2001) found that in normal controls the posterior superior temporal cortex (BA42) has a greater PET response to complex sounds than to white noise and Wernicke's area (BA22) responds specifically to speech sounds, and that in cochlear implant patients this specialization of function is absent in both areas. They argue that this demonstrates experience-dependent changes in the functional specialization of the language network owing to underlying neuroplasticity. Such neuroplasticity in these cortical speech areas could, at least in part, explain changes in neural activation in stutterers following fluency-inducing therapy.

THE RIGHT FRONTAL OPERCULUM

The RFO has been reported to be the only region consistently overactivated in stutterers compared to controls during both reading and passive semantic decision tasks (Preibisch et al., 2003). The RFO is the right homolog of Broca's area and could compensate for aberrant transmission between Broca's area and left motor cortex representations of the larynx and tongue. Consistent RFO overactivation, negatively correlated with stuttering severity, suggests a compensatory overactivation rather than a primary dysfunction (Preibisch et al., 2003). Watkins et al. (2008) reported two areas of overactivation in the right anterior insula close to the RFO. There is diffusion tensor imaging (DTI) evidence of decreased fractional anisotropy (FA) in the WM underlying the left rolandic operculum (LRO), an area corresponding to the left sensorimotor representation of the larynx and tongue (BA43; Sommer et al., 2002). Decreased FA suggests demyelination or loss of organization of WM tracts. However, the results reported by Sommer et al. (2002) should be interpreted with caution, since the large voxel size used means that WM analysis may be influenced by adjacent gray matter. Watkins et al. (2008) identified decreased FA of the WM underlying ventral premotor and motor cortical areas that were underactive on fMRI. This area of bilaterally decreased FA was located close to that reported in the LRO by Sommer et al. (2002). The results of Watkins et al. (2008) may suggest decreased WM integrity in tracts which are important for the execution of articulatory movements (via connections with primary motor cortex) and for the integration of articulatory planning and sensory feedback (via connections with posterior superior temporal and inferior parietal cortex).
The finding of reduced FA in WM underlying the LRO also corroborates the finding of atypical gyral anatomy in stutterers in the same area (Foundas et al., 2001). In another study, the right middle frontal cortex was the only area of decreased activation in PWS following therapy (Neumann et al., 2005). There was increased activation in an extended network of mainly left-sided areas following therapy, and also in areas of temporal cortex bilaterally. The left insula and the LRO, close to the area of decreased FA in Sommer et al. (2002), both showed increased activation after therapy. Cases of recovery from aphasia following frontal injury suggest that the right inferior prefrontal cortex can be rapidly activated to compensate for damage to Broca's area (Heiss et al., 1999; Rosen et al., 2000), and the RFO may be recruited to compensate for left frontal cortex dysfunction in dyslexia (Pugh et al., 2001). Persons who stutter may have subtle changes in right perisylvian cortical anatomy corresponding to areas of increased activity reported in imaging studies, with an increased number of sulci in the suprasylvian gyral banks and of sulci connecting to the second segment of the right Sylvian fissure (Cykowski et al., 2008). This study failed to find any differences in asymmetry between stutterers and controls in the number of sulci and gyral banks in the left perisylvian language region and the planum temporale.

Reduced WM integrity in the left superior longitudinal fasciculus (SLF) was reported in a study of children with developmental stuttering (Chang et al., 2008). The cortex overlying the left SLF includes the rolandic operculum (BA43), consistent with the area of reduced left hemisphere FA and functional underactivity (Sommer et al., 2002; Watkins et al., 2008). Using an augmented VBM technique, Jäncke et al. (2004) found that stutterers had increased white matter volume (WMV) in the perisylvian language areas, and atypical anatomy and lateralization in the perisylvian language areas as well as in prefrontal and sensorimotor cortex. Stutterers had increased right hemisphere WMV in the STG including the PT and Heschl's gyrus, in the IFG including the pars opercularis (part of the right-sided homolog of Broca's area), in the precentral gyrus (M1) including parts of the face, mouth, and hand representations, and in the anterior MFG. The dextral controls had a leftward asymmetry of auditory cortex WM, consistent with the findings of Penhune et al. (1996). Stutterers had symmetric auditory cortex WM volumes. There was no correlation between stuttering severity and the anatomical findings. Jäncke et al. (2004) posited that regional increases in right hemisphere WMV could be due to increased or atypical interhemispheric communication, and that there may be altered processing strategies in the right hemisphere in stutterers. It should, however, be noted that despite the concordant WM FA changes reported in these studies, Connally et al. (2014) described a much more complex picture, with stutterers having significantly decreased WM FA compared to controls in multiple areas. They used diffusion tensor imaging in a sample of 29 PWS in order to replicate previous findings showing reduced integrity in WM underlying ventral premotor cortex, cerebral peduncles and posterior corpus callosum.
They also showed that within the group of PWS, the higher the stuttering severity index, the lower the WM integrity in the left angular gyrus, but the greater the WM connectivity in the left corticobulbar tract.

A parametric performance correlation analysis of PET rCBF during solo reading and chorus reading found that dextral stutterers had increased activation of the SMA mouth area, but the location and right lateralization of the area were comparable to that in controls. In the stutterers, the primary motor cortex (M1) was not readily differentiable from the ILPrM (BA44/46, Broca's area). There was a significant increase in cerebellar activation in stutterers, and this activation was abnormally left-lateralized. Ingham et al. (2000) suggested that the significantly greater cerebellar syllable correlates in stutterers compared to controls and the state effect (stuttering in the solo condition) indicate that the cerebellum may play a role in enabling fluent speaking in PDS speakers (in the chorus condition). Stutterers also showed abnormally left-lateralized STG activation. A gender replication study of the Fox et al. (2000) analysis found that dextral female stutterers had increased activity in the right anterior insula and decreased activity in the left IFG and in right BA21/22, as observed in males (Ingham et al., 2004). In addition to this, female stutterers had activations in the BG (the left GP and the right caudate) and in the left anterior insula. Female stutterers also had widespread deactivation in the right hemisphere (limbic and parietal lobes and prefrontal area). Overall, stutter rate correlated positively with bilateral regional activations in females, and with right-lateralized regional activations in males. There may be a relationship between these gender differences in neural activation and the higher rate of childhood recovery from developmental stuttering in females compared to males.

In an activation likelihood estimation meta-analysis of neuroimaging studies of PDS, word reading in fluent controls was associated with activation in M1, premotor cortex, SMA, rolandic operculum, auditory areas, and lateral cerebellum (Brown et al., 2005). There was considerable overlap between the Talairach coordinates of the activations in motor cortex, cerebellum, SMA, and auditory cortex reported by Brown et al. (2005) and by Turkeltaub et al. (2002). These two meta-analyses did not have any studies in common, so it is plausible that these activations represent a set of core areas for speech production. Thus there is a set of areas consistently activated in speech production, namely M1, SMA, premotor cortex, anterior insula, frontal and rolandic opercula, cingulate, the quadrangular lobule of the cerebellum, and the GP and putamen. These areas may be generally implicated in voluntary vocalization, because they are also activated during wordless singing (Perry et al., 1999; Riecker et al., 2000; Brown et al., 2004). In PWS compared to controls, a greater number of more widespread areas was activated for the same task. Key differences in PWS included overactivation of motor areas (M1, SMA, cerebellar vermis, cingulate), atypical right lateralization of activity in the rolandic and frontal opercula and anterior insula, and absence of the auditory activations associated with self-monitoring of speech.
This is consistent with the deactivation in right auditory association cortex and atypical right anterior insula/frontal operculum activation reported by Ingham (2001) and Ingham et al. (2004). The function of the RFO/anterior insula has yet to be fully elucidated, but it has been implicated in the processing of vocal fundamental frequency and of prosody (Perry et al., 1999; Riecker et al., 2000; Brown et al., 2004; Meyer et al., 2004; Wildgruber et al., 2004; Hesling et al., 2005). Interestingly, this area is also implicated in Tourette syndrome (Stern et al., 2000). Stutterers failed to show the, albeit weak, left GP activation of fluent controls, or activation in any other BG nucleus, so the results of the meta-analysis neither strongly support nor contradict the BG model of stuttering (Alm, 2004). Brown et al. (2005) proposed that their findings could be explained by the phenomenon of efference copy, or feedforward projection of a motor plan, in which an inhibitory signal is projected to the perceptual region from the motor region (Numminen and Curio, 1999; Curio et al., 2000; Houde et al., 2002; Leube et al., 2003; Max et al., 2004a). If stuttering is predominantly a problem of motor program initiation, it is plausible that perceptual prediction of speech sounds, an inhibitory signal, is repeatedly delivered to the auditory system, causing word or syllable repetition. Thus efference copy could account for the absence of auditory activation in stutterers (associated with vocal self-monitoring in fluent subjects). In efference copy, there is self-monitoring comparing the expected and actual output, a function in which the cerebellum is believed to have a role (Blakemore et al., 2001). Brown et al. (2005) thus proposed that cerebellar overactivation in stuttering may be associated with the discrepancy signal generated from the difference between expected speech output (left auditory cortex) and actual speech output (right motor cortex), and that their efference copy hypothesis predicts an inverse relationship between the left auditory cortex and the right anterior insula. Max et al. (2004a) proposed a hypothesis regarding putative sensorimotor etiologies for stuttering: stuttering may be caused by insufficiently activated or unstable internal models within feedforward and feedback speech movement control subsystems. Thus speech system instabilities in stuttering result from an over-reliance on afferent feedback that has inherent time lags (compared to efference copy or feedforward control). Civier et al. (2010, 2013) simulated this hypothesis (i.e., that over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to "re-set" and repeat the current syllable) using computer simulations of a "neurally impaired" version of the DIVA model (Directions Into Velocities of Articulators), a neural network model of speech acquisition and production (Bohland et al., 2010). The simulation results are consistent with neuroimaging findings of WM disruptions and elevated dopamine levels in PWS.
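The core of this hypothesis — that a controller leaning too heavily on time-lagged afferent feedback becomes unstable and "re-sets" — can be illustrated with a deliberately simple discrete-time simulation. The sketch below is not the DIVA model: the gains, delay, reset threshold, and target trajectory are arbitrary illustrative choices.

```python
import numpy as np

def simulate_syllable(feedback_gain, feedforward_gain, delay_steps=5,
                      n_steps=100, reset_threshold=0.5, noise_sd=0.02,
                      seed=0):
    """Toy feedback-control model of one syllable (NOT the DIVA model).

    The articulator state x tracks a target trajectory. The motor command
    mixes a feedforward term (based on the known target) with a feedback
    correction driven by *delayed* error, mimicking afferent time lags.
    If the instantaneous error grows past `reset_threshold`, the motor
    program "re-sets" to the start of the syllable (a repetition).
    """
    rng = np.random.default_rng(seed)
    target = np.sin(np.linspace(0, np.pi, n_steps))  # stylized gesture
    x = 0.0
    errors = np.zeros(n_steps)
    resets = 0
    t = 0
    while t < n_steps - 1:
        delayed_err = errors[t - delay_steps] if t >= delay_steps else 0.0
        command = (feedforward_gain * (target[t + 1] - target[t])
                   - feedback_gain * delayed_err)
        x += command + rng.normal(0.0, noise_sd)
        errors[t + 1] = x - target[t + 1]
        if abs(errors[t + 1]) > reset_threshold:
            resets += 1          # repeat the current syllable
            t, x = 0, 0.0        # restart the motor program
            errors[:] = 0.0
            if resets > 20:      # persistent failure: give up (a "block")
                break
            continue
        t += 1
    return resets

# Strong feedforward control with light feedback stays fluent;
# over-reliance on lagged feedback destabilizes the loop and repeats:
print(simulate_syllable(feedback_gain=0.1, feedforward_gain=1.0))  # 0 resets
print(simulate_syllable(feedback_gain=0.9, feedforward_gain=0.3))  # many resets
```

With a feedback delay of five steps, a feedback gain of 0.9 exceeds the stability limit of the delayed corrective loop, so errors oscillate and grow until the reset fires — a crude analog of the syllable repetitions produced by the impaired DIVA simulations.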
Watkins et al. (2008) found that both controls and stutterers showed activity in areas including the left IFG, ventral premotor cortex, SMA, pre-supplementary motor cortex and cingulate motor cortex, face sensorimotor cortex, STG and STS, and left thalamus and anterior cerebellum during speech production and perception. Overactivation in stutterers compared to controls in the cerebellum, midbrain and anterior insula bilaterally, and underactivation in sensorimotor, ventral premotor and rolandic operculum cortical areas bilaterally and in Heschl's gyrus on the left (Watkins et al., 2008), is consistent with the findings of Brown et al. (2005). Watkins et al. (2008) also reported overactivation in stutterers in the midbrain, involving the substantia nigra (SN) as well as the pedunculopontine nucleus, the subthalamic nucleus (STN) and the red nucleus, consistent with BG network dysfunction or abnormalities of dopamine in PWS (see below).

Chang et al. (2008) used VBM and DTI to investigate brain anatomy differences between children who stuttered, children who had recovered from developmental stuttering, and normal controls. They showed decreased gray matter volume (GMV) in the right cingulate gyrus in children with persistent stuttering compared to those who had recovered. There was decreased GMV in bilateral MTG/STG, bilateral precentral gyri (BA6) and bilateral cerebellar regions in recovered versus persistent stuttering children. There were also differences in the integrity of WM underlying the LRO in children with persistent/recovered stuttering compared to fluent controls. This is consistent with the results of Sommer et al. (2002), and suggests reduced FA in left WM corresponding to motor control of the oral articulators. Thus the results of Chang et al. (2008) suggest a possible association of deficiencies in left hemisphere GMV and decreased left speech system WM integrity with risk of PDS. Chang et al. (2008) did not find any differences in left-right hemisphere asymmetries between stuttering children and controls, nor any increase in right hemisphere speech regions (contrary to other studies on adults with PDS). In the context of PDS in adults, neuroplasticity during development may be implicated in these differences.

In a MEG study of single word reading, Salmelin et al. (2000) found differences in activation sequence, lateralization of neuronal processing, and functional connectivity in relevant motor cortical areas between dextral stutterers and controls. Following visual word presentation, controls showed left inferior frontal cortex activation within 400 ms, which may correspond to articulatory programming or encoding. Subsequently there was activation of the left lateral central sulcus and of the dorsal premotor cortex, corresponding to motor preparation. In stutterers, there was a reversed activation sequence, with early left motor cortex activation and later left inferior frontal activation. Salmelin et al. (2000) thus suggested that stutterers initiate motor programs before articulatory code preparation. Furthermore, stutterers failed to show the left motor and premotor cortex activation seen in fluent controls during word reading tasks. With regard to the suppression of the motor cortical 20 Hz rhythm (a MEG correlate of task-related neuronal processing), stutterers showed a right hemisphere dominant response, whereas the controls showed a left dominant response (as expected in dextral subjects). This is consistent with PET studies showing higher rCBF in right rolandic areas in stutterers compared to controls (Braun et al., 1997). Thus during speech production in stutterers, the right frontal cortex is very active but fails to produce synchronous time-locked responses.
Salmelin et al. (2000) proposed that this failure to produce time-locked responses could be associated with difficulties in initiating the correct prosody in propositional speech in stutterers. The 20 Hz suppression was greatest in the mouth area in controls, but in the hand and mouth areas in stutterers (there were no overt hand movements during the task). This could be a reflection of imprecise functional connectivity between adjacent mouth and hand motor cortex representations in stutterers when speaking. The findings of Salmelin et al. (2000) support the idea of bilateral cortical abnormalities in stutterers, consistent with other neuroimaging results (Wu et al., 1995; Fox et al., 1996; Braun et al., 1997), and suggest dysfunction throughout a bilateral language network, with abnormal timing relationships between premotor and primary motor regions in the left hemisphere affecting articulatory and motor preparation for speech and the generation of correct prosody. However, the suggestion by Salmelin et al. (2000) that stutterers initiated motor programs inappropriately early, before articulatory code preparation, was not borne out in other studies, which failed to find any clear evidence of problems in assembling speech production motor plans in stutterers compared to controls (Van Lieshout et al., 1996a,b).

There is evidence that PWS have abnormal neural activation patterns in non-speech vocal motor tasks as well as during speech tasks, and the functional abnormalities in PDS may therefore not be limited to speech. During speech and non-speech vocal motor tasks, stutterers consistently showed underactivation of frontal and temporal areas, including the left STG and the left premotor cortex (BA6) during perception and planning, and underactivation of the right STG, Heschl's gyrus, the precentral motor region (BA4), the insula and the putamen bilaterally (Chang et al., 2009). Evidence of increased right hemisphere activation not only during speech production but also during other tasks suggests that increased right hemisphere activation may be inherent in adults who stutter (Preibisch et al., 2003). Stutterers have atypical neural functions even in the absence of overt speech production, such as during silent reading, and event-related brain potentials (ERPs) in stutterers compared to controls suggest differences in functional neural organization and altered processing common to word classes (Weber-Fox, 2001).

ACQUIRED NEUROGENIC STUTTERING AND SUBCORTICAL BRAIN LESIONS

The published reports indicate that the dominant feature of neurogenic stuttering is repetitions of sounds or syllables, sometimes together with sound prolongations, but blocks with struggle seem to be less common in ANS (Alm, 2004). In terms of the localization of lesions, developmental stuttering (PDS) is associated with a reduction in WM anisotropy just below the left sensorimotor cortex (Sommer et al., 2002), which corroborates the more general observation that the perisylvian region is anatomically more heterogeneous in people who stutter than in controls (Foundas et al., 2001). In contrast with developmental stuttering, ANS is more often associated with subcortical lesions, in particular of the BG, than with lesions in cortical speech and motor regions (Ludlow and Loucks, 2003; Alm, 2004). Acquired neurogenic stuttering can occur following lesions in almost any site in the brain, either bilateral or unilateral, cortical or subcortical, left- or right-sided, focal or diffuse (Lebrun et al., 1987).
Acquired stuttering can also present with concomitant aphasia. It is thus difficult to determine the localizing significance of ANS. ANS is more common in men than in women (similarly to PDS) and is also more frequently reported following left hemisphere or bilateral lesions, but ANS is a very heterogeneous disorder, and there are reports of ANS in women following right hemisphere lesions (Fleet and Heilman, 1985). ANS can occur following temporal, parietal or occipital lobe lesions (Ardila and Lopez, 1986; Grant et al., 1999; Franco et al., 2000). Alm and Risberg (2007) proposed that the main mechanism causing acquired stuttering following head injury is rotational forces at the level of the midbrain and the STN causing diffuse neuronal injuries affecting several BG pathways, and that there may also be a link between ADHD and pre- or perinatal hypoxia, especially intermittent hypoxia, causing subtle biochemical changes in striatal neurones. Repeated episodes of fetal asphyxia have been shown to cause preferential damage to the striatum in sheep, with loss of medium-sized striatal GABAergic projection neurones to the GP and to the SN (Mallard et al., 1995a,b).

Ludlow et al. (1987) reported persistent ANS in 10 patients following penetrating missile wounds to the brain during wartime. The sites of lesions in this group were compared with the sites of lesions in a group of patients with missile wounds to the brain but without speech problems. The only gray matter structures that were significantly more affected in the stuttering group were the striatum and the globus pallidus. There were lesions in the caudate or lentiform nucleus in 80% of ANS subjects, suggesting a central role of the BG in ANS. There was also cerebellar damage in 50% of the ANS subjects.

Thus ANS can be heterogeneous in its speech manifestations. Van Borsel et al. (2003b) reported a case of ANS following an ischaemic lesion of the left ventrolateral thalamus, with severe stuttering during propositional speech but only mild stuttering during non-confrontational speech, and therefore proposed that thalamic stuttering is a distinct clinical entity. Abe et al. (1993) reported a case of ANS following infarction of the midbrain and paramedian thalami. The patient's ANS differed from other cases of stuttering in that it was characterized by numerous repetitions (7) of the first syllables of words at a constant rate and loudness, in a very monotonous manner. It was thus similar to palilalia, which has also been reported in a patient with infarcts in the paramedian thalami and midbrain (Yasuda et al., 1990), as well as in patients with PD. Abe et al. (1993) posited that the repetitive speech disturbance in this patient was not attributable to the extrapyramidal system but rather to projections to the SMA from the infarcted regions of the thalamus and midbrain, because the clinical features were similar to those reported for ANS patients with SMA infarcts. Ackermann et al. (1996) reported a case of ANS affecting only word-initial sounds, together with transcortical motor aphasia (TCMA), following an SMA infarct. They thus proposed that ANS due to SMA lesions represents a distinct clinical entity compared to ANS associated with lesions in other areas. In contrast, Van Borsel et al. (1998) reported a case of severe ANS following a left SMA hemorrhage in which there was a different clinical picture, with stuttering not limited to word-initial position and present when reading aloud and during sentence repetition.
Thus lesions of the same area can give rise to different types of ANS.

ACQUIRED STUTTERING ASSOCIATED WITH THE THALAMUS

Among the most articulate proponents of a possible thalamic contribution to language and speech are Penfield and Roberts (1959), who assessed such functions in patients who suffered from focal cerebral seizures and who underwent temporal lobe excisions involving various amounts of neural tissue. They consequently proposed, as a speech hypothesis, "that the functions of all three cortical speech areas in man are coordinated by projections of each, to parts of the thalamus, and by means of these circuits the elaboration of speech is carried out." Our knowledge about the role of the thalamus in ANS has increased with the advent of stereotactic neurosurgery. Hassler (1966) noted that stimulation of the ventrolateral thalamus produces acceleration or blocking of vocalization. Samra et al. (1969), in an extensive study of the anatomical location of the lesions in the brains of 27 patients with PD who had undergone thalamic surgery, noted that "the presence of dysfluencies may depend more on the motor cortex-ventrolateral thalamus modulation than to thalamic influences in general (...). Consequently bilateral destruction of this thalamic zone may account for the more obvious and long-standing speech phenomena of hesitations, blocking, or increase of rate of speaking (i.e., palilalia)." Andy and Bhatnagar (1992) reported four patients with mesothalamic dysfunction and a history of chronic pain, absence seizures, and dyskinesias who went on to develop acquired stuttering as part of a larger syndrome complex. Chronic implantation of stimulating electrodes in the left centromedian nucleus of the thalamus was performed as a last-resort treatment for the patients' chronic pain and other symptoms. All four patients had spontaneously occurring abnormal EEG discharges in the mesothalamus. Their stuttering lacked secondary behaviors and failed to show adaptation, but featured numerous blocks. Unipolar self-stimulation of the CM nucleus attenuated the abnormal EEG discharges and improved the stuttering, in addition to the chronic pain and other symptoms. All four patients remained stutter-free post-operatively (and had ∼90% improvement in other symptoms). Schaltenbrand (1975) also noted that stimulation of the thalamus and of the corpus callosum during stereotactic surgeries to treat epilepsy, chronic pain and dyskinesias had effects on speech. Stimulation of the anterior corpus callosum with the stereotactic needle silenced speech; stimulation of the posterior corpus callosum caused confused thinking and interrupted speech. Stimulation of the posterior and ventro-oral thalamus resulted in alterations in articulation and in interruption of speech. Stimulation of the deep thalamus gave rise to various kinds of shouts and utterances. The effects reported were predominantly associated with stimulation of the dominant hemisphere. The pattern of this evoked compulsory speech resembled that of stuttering and palilalia. Mechanical perturbation of the thalamus (advancing a 1 mm diameter electrode 2 mm in the posteroventromedial thalamus) intraoperatively in a patient having lesion surgery for chronic pain was found to cause repetitive speech dysfluencies similar to stuttering (Andy and Bhatnagar, 1991). Electrophysiological recording showed concurrent abnormal discharge from the part of the thalamus being perturbed.
There are also reports of alleviation of stuttering upon electrical stimulation of the same site in the thalamus (Bhatnagar and Andy, 1989). These observations suggest that the mesothalamus is part of a speech-regulating corticomesothalamic feedback pathway. Anomia and perseveration can be evoked by electrical stimulation of the left ventrolateral thalamus (specifically the medial central portion; Ojemann and Ward, 1971), and stimulation of an adjacent area of the (pulvinar and inferior) ventrolateral thalamus in right-handed patients results in anomic responses (Ojemann et al., 1968), suggesting a speech-integrating center in the lateral thalamus.

THE BASAL GANGLIA AND SPEECH PATHWAYS IN STUTTERING

The neural pathways of the BG remain incompletely understood, but they are known to be involved in the selection of competing voluntary motor programs (generated by the cortex and cerebellum), disinhibiting one selected motor program and simultaneously inhibiting all other competing motor programs in order to allow the execution of voluntary movements (Mink, 1996). Thus the BG do not themselves generate movements, but rather play a central role in the selection of competing voluntary movement patterns, inhibiting competing motor programs that would otherwise prevent execution of the desired movement. Degenerative disease of the BG is known to cause a number of movement disorders characterized by slow movements, involuntary muscle activity, or abnormal postures, including PD, dystonia, and tremor of various etiologies. There is evidence that stuttering may be a movement disorder of speech involving BG dysfunction (Alm, 2004; Max et al., 2004b). Jürgens (2002) posited that there is a cerebello-thalamo-cortical pathway implicated in normal speech production in humans, based on lesion studies in humans and functional and structural studies in the macaque monkey and other primates. Speech is severely affected following lesions to the cerebellum (Ackermann and Ziegler, 1991) and to the ventrolateral thalamus and, as mentioned above, electrical stimulation of areas of the thalamus can produce vocalizations in humans (Schaltenbrand, 1975; Lechtenberg and Gilman, 1978). PET and fMRI studies have shown bilateral activation in the ventrolateral thalamus and the cerebellum during speech and singing tasks (Petersen et al., 1988; Herholz et al., 1994; Hirano et al., 1996; Price et al., 1996; Perry et al., 1999; Bookheimer et al., 2000). Medial parts of the ventrolateral thalamus contain facial muscle representations and show increased activity during vocalization (Vitek et al., 1994; Farley, 1997). The ventrolateral thalamus has projections to M1, to Broca's area and to the SMA (Nakano et al., 1992; Rouiller et al., 1994). Alm (2004) proposed that there is a medial BG-SMA route and a lateral cerebellar-lateral premotor cortex (including Broca's area) route, and that in PDS there is dysfunction in the BG-cortical route and compensatory overactivation of the cerebellar-cortical route. This could be consistent with the evidence of cerebellar overactivity in PWS reported by Brown et al. (2005). Structural equation modeling (SEM) also provides evidence of altered connectivity in the basal ganglia-thalamo-cortical circuit in PWS (Lu et al., 2009; Civier et al., 2013). The role of the BG during dysfluent speech has been extensively described (Alm, 2005).
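To make the selection-by-disinhibition scheme attributed to Mink (1996) above concrete, the toy sketch below gates competing motor programs under tonic inhibition and releases only the most salient one. The program names and salience values are invented purely for illustration; real BG circuits implement this through continuous, learned dynamics rather than a discrete argmax.

```python
def select_motor_program(saliences, tonic_inhibition=1.0):
    """Toy 'selection by disinhibition' scheme (after Mink, 1996).

    The BG do not generate movement themselves: every competing motor
    program is held under tonic inhibition, and only the program with
    the highest salience has its inhibition withdrawn, allowing the
    cortex/cerebellum-generated program to be executed.
    """
    winner = max(saliences, key=saliences.get)
    # Inhibition is released for the winner and maintained on competitors.
    gate = {prog: (0.0 if prog == winner else tonic_inhibition)
            for prog in saliences}
    return winner, gate

# Hypothetical competing speech motor programs:
programs = {"syllable_A": 0.8, "syllable_B": 0.6, "pause": 0.2}
winner, gate = select_motor_program(programs)
print(winner, gate)  # syllable_A disinhibited; competitors stay gated
```

On this view, a BG dysfunction that fails to cleanly release one program (or to keep competitors suppressed) would stall the hand-over between successive speech motor programs, which is one way the models discussed below link the BG to dysfluency.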
The increased neural activation on event-related fMRI in the putamen bilaterally in stutterers following fluency-inducing therapy suggests that the putamen is implicated in speech motor control in PDS (Neumann et al., 2005). However, this increased activation in the putamen did not persist at 2-year follow-up, unlike the therapy-associated increased activation found in other regions, including limbic areas, bilateral temporal cortex, and right parietal and frontal cortex. An fMRI study of reading tasks in PWS before and after fluency-inducing therapy showed a statistically significant correlation between stuttering severity and BG activity, lending further support to the BG hypothesis of stuttering (Giraud et al., 2008). Pre-treatment (n = 16), there was a significant (p < 0.001) positive correlation between stuttering severity and bilateral caudate nucleus activity and activity in left medial superior posterior parietal/post-central regions (BA 4/5/7). There was also a negative correlation of stuttering severity with activity in the left SN. Following therapy (n = 9), this pattern of activation was lost; there was a correlation between pre-treatment stuttering severity and a small area of left caudate activation, but this failed to reach significance. There was no significant correlation between the increase in caudate activity and the improvement in fluency with therapy, as would be expected if the caudate were implicated in compensation. Giraud et al. (2008) proposed a functional model of stuttering in which structural abnormalities affecting the flow of information from Broca's area to the motor cortex engender BG dysfunction. Their model is based on cortico-striato-cortical loops and models of dysfunction in these loops in BG disorders such as PD and dystonia (see figure, p. 197, Giraud et al., 2008).

DYSFLUENCY IN PARKINSON'S DISEASE

The term "palilalie" (from the Greek "palin", again, and "lalia", speech) in the context of acquired neurological disease was first used by Souques (1908), one of the talented house officers of Charcot (Walusinski, 2011). Souques reported on a particular disturbance of language in a patient with a stroke leading to left-sided hemiplegia, which presented as compulsive repetition of semantically adequate answers to the examiners' questions. This symptom, termed palilalia by Souques, was also observed in post-encephalitic Parkinsonism. Post-mortem examinations have suggested lesions of the striatum as the anatomical substratum of the disease (Critchley, 1927). In the cases reported so far, palilalia was either constantly present or varied in degree; it occurred both in spontaneous speech and in replying to questions, but not often when reading or reciting a well-known text; the number of repetitions usually ranged between four and eight (Ackermann et al., 1989); reiterations comprised syllables, words or sentences. Often the repetitions tended to be uttered with increasing rapidity and decreased loudness (Critchley, 1927; Ackermann et al., 1989). This complex speech disturbance in PD can resemble stuttering, lending further support to a pathophysiological role of the BG in ANS.
Benke et al. (2000) proposed that repetitive speech phenomena in PD patients can be divided into two types: the first resembling palilalia (hyperfluent repetitions and fast utterances with an increased speech rate, often blurred or murmured owing to poor articulation or decreasing loudness), and the second more similar to PDS, with dysfluent, prolonged, relatively well-articulated speech at a constant rate and loudness. In their study of 53 patients with idiopathic PD, 15 had repetitive speech phenomena. In these 15 patients, both types of dysfluency were present, with a constant distribution across speech tasks. They noted that the repetitive speech phenomena were more noticeable in patients with longer disease duration and a fluctuating motor response to levodopa (54.3% of the advanced patients). There was no significant difference in repetitive speech phenomena between the on- and off-medication states. They concluded that the palilalia was therefore unlikely to be a type of levodopa-induced hyperkinesia of speech. Conversely, Ackermann et al. (1989) described a patient with marked palilalia when on medication, only on repetition-type tasks and not in spontaneous speech, and concluded that it was a sign of medication-induced hyperkinesia.

STUTTERING IMPROVING OR WORSENING AFTER DEEP BRAIN STIMULATION (DBS) SURGERY

Deep brain stimulation (DBS) is an established surgical therapy for the management of BG motor disorders such as PD, dystonia and tremor, and the safety and efficacy of DBS in motor disorders have led to its use in an expanding range of other motor and psychological disorders such as Gilles de la Tourette syndrome, obsessive-compulsive disorder and severe depression. DBS affords a unique opportunity to study the pathophysiology of BG disorders and has advanced understanding of BG pathways. Cases of stuttering worsening or improving following implantation of stimulating electrodes into BG nuclei for other indications, and more general speech changes following DBS, shed further light on the role of the BG in stuttering (Table 2). Walker et al. (2009) reported an unusual case of PD-associated acquired stuttering, which improved following unilateral STN DBS. This is in contrast to other cases, where stuttering worsens or reappears subsequent to STN DBS. Toft and Dietrichs (2011) reported two cases of PD patients who underwent bilateral STN DBS: one patient had acquired PD-associated stuttering that worsened following surgery, and the other had childhood stuttering that had re-emerged during the course of PD progression and worsened following surgery. Moretti et al. (2003) reported a case of stuttering appearing following bilateral STN DBS for PD. Following implantation, the stimulation-ON condition was associated with greatly improved motor scores but also with newly acquired stuttering, which persisted at follow-up. Burghaus et al. (2006) reported a case of stuttering worsening subsequent to bilateral STN DBS for PD. The patient had childhood stuttering, which improved in adolescence but then markedly worsened after the onset of PD, together with Parkinson-related speech changes (hypophonia and hypokinetic dysarthria). Following surgery, there was an improvement in his Parkinson-related hypophonia but a worsening of stuttering, with increased frequency of blocks, prolongations and syllable repetitions and also of facial grimaces.
This worsening in stuttering was marked with bilateral stimulation, but there was no significant effect of unilateral stimulation on stuttering severity. Stimulation-induced motor improvement was associated with worsening of stuttering. There were no tetanic muscle contractions or other side effects to suggest that the speech disturbances were the result of current spread to the internal capsule. PET was performed in the off-drug state, comparing on- and off-DBS states during resting conditions and during a speech task. In the DBS-off condition during the speech task, there was increased rCBF in the right posterior STG (Wernicke/BA29), in the left lower frontal gyrus (Broca's area) and the adjacent anterior insula (BA44), and in the left anterior cingulum (BA24). Further increases in rCBF occurred in caudal M1 (BA4), SMA (BA6), and the dorsolateral prefrontal cortex (BA6/9) bilaterally, and in the right cerebellar hemisphere. In the DBS-on condition, there was increased rCBF in the left rostral SMA and in M1 on the right, in addition to the anterior cingulate and the cerebellar hemispheres bilaterally. rCBF in the anterior insula and Broca's area was increased compared with the DBS-off condition. There were no significant changes in rCBF in Wernicke's area (left posterior STG) in the DBS-on state.

Stuttering can also occur following GPi DBS for dystonia, although the most commonly reported adverse effect of GPi DBS is dysarthria, with a prevalence of up to 12% (Kupsch et al., 2006). Nebel et al. (2009) reported two cases of stuttering, distinct from dysarthria, following GPi DBS (in their series of 67 patients). The first patient had DYT1 mutation-positive severe generalized dystonia, but his speech was relatively unaffected. He had significant improvement in motor function following bilateral GPi DBS implantation, but new-onset stuttering appeared gradually. The stuttering was apparently unrelated to changes in stimulation parameters and progressively worsened (8 months post-operatively, his speech was unintelligible to a speech therapist). He did not have any dysarthria, palilalia, accessory motor symptoms, or anxiety. The second patient underwent bilateral GPi DBS for DYT1-negative segmental dystonia of the neck, trunk and upper limbs. In post-operative programming, a change in a stimulation contact improved his motor symptoms but also provoked dysarthria and stuttering.

DOPAMINE AND STUTTERING

According to the dopamine excess hypothesis of stuttering, there is a hyperdopaminergic state in PDS (Wu et al., 1997; Anderson et al., 1999). Neuropharmacological studies have failed to provide unequivocal evidence of this: although L-dopa can increase dysfluency in PD (Anderson et al., 1999; Louis et al., 2001), and despite reports of stuttering improving with dopamine antagonists such as haloperidol, risperidone, and olanzapine (Healy, 1974; Murray et al., 1977; Burns et al., 1978; Maguire et al., 2000, 2004), Goberman and Blomgren (2003) reported no significant difference in dysfluency in PD patients in low and high dopamine states. However, a case-control study of the allelic frequencies of five single nucleotide polymorphisms (SNPs) in two dopaminergic genes lends support to the dopamine excess hypothesis (Lan et al., 2009). They reported a significantly higher frequency of C alleles than T alleles in stutterers compared to controls at the rs6277 site of the DRD2 gene.
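The kind of allele-frequency comparison used in such case-control designs reduces to a test on a 2 × 2 table of allele counts. The sketch below applies a standard chi-square test; the counts are invented for illustration and are not the data of Lan et al. (2009).

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts at a biallelic SNP (e.g. rs6277 C/T);
# these numbers are made up for illustration only.
#              C allele  T allele
table = [[130, 70],    # stutterers
         [100, 100]]   # controls
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```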
Hirvonen et al. (2004) found evidence of an association between the CC genotype of the rs6277 SNP of the DRD2 gene and decreased D2 receptor binding and increased synaptic cleft dopamine density. This would be consistent with a hyperdopaminergic state in stuttering, and with the hypothesis of an increased D2 to D1 receptor ratio in the striatum in developmental stuttering proposed by Alm (2004).

NEUROIMAGING OF GLUCOSE AND DOPAMINE NEURAL METABOLISM IN STUTTERING

Stuttering can be decreased using dopamine antagonists (see below). Wu et al. (1995) reported decreased cortical and subcortical glucose metabolic rates in stutterers compared to controls, which could be due to excess dopamine activity, as amphetamine (a dopamine agonist) and cocaine (a dopamine reuptake inhibitor) inhibit regional cerebral glucose metabolic activity. In a fluorodeoxyglucose (FDG) PET study of solo and choral reading in four PWS and four controls, all right-handed, Wu et al. (1995) found both a state- (stuttering versus fluent reading, i.e., solo versus chorus reading tasks) and trait- (stutterers versus controls) dependent decrease in glucose uptake in cortical and subcortical areas in the stutterers (p < 0.05). There was a trait-related, state-dependent decrease in glucose uptake in the superior frontal cortex (BA9), in Wernicke's areas (BA39, BA40), Broca's area (BA45), the posterior cingulate (BA23), the prefrontal cortex (BA10), the deep frontal orbital cortex (BA11) and the medial cerebellum. These areas of hypometabolism can be broadly divided into four categories: left language areas (Wernicke's and Broca's), higher-order association areas (superior frontal cortex and prefrontal cortex), the left cerebellum, and limbic areas (deep frontal orbital cortex and the posterior cingulate). Overall, no region had greater glucose uptake during stuttering compared to choral reading, or during stuttering compared to reading in normal controls. Comparison of choral reading in stutterers (which induced fluency) with reading in controls revealed two key differences, both in the BG. The largest difference between stutterers and controls was in the left caudate, which showed two areas of reduced glucose uptake during solo reading in stutterers compared to solo reading in controls. The left caudate was ∼50% less active in stutterers versus controls in the stuttering state, and the caudate did not show any normalizing increase in glucose uptake during choral reading in stutterers. In the SN/ventral tegmental area, there was markedly increased glucose uptake in the stutterers during the choral reading task. This non-trait-related, state-dependent change in metabolism suggests an increased rate of neuronal firing in the SN in the induced fluency state. Thus there was a permanent hypometabolism of the left caudate that may be a trait-related marker in PWS, and a reversible hypometabolism in left language areas and higher association areas. There was decreased cerebellar glucose uptake in the stutterers during the solo reading task, but the metabolism of the right cerebellum increased to be comparable to that of normal controls during choral reading in stutterers. Lastly, the increased limbic metabolism in stutterers during fluent choral speech may correlate with reduced speech-associated anxiety. It should be noted that this study did not include the SMA, because image acquisitions did not extend high enough.
In a fluorodopa PET study of three subjects with PDS and six controls, all right-handed, Wu et al. (1997) reported results consistent with the dopamine excess theory of stuttering. FDOPA is a dopamine precursor used as a means of measuring the rate of dopamine synthesis in the brain (Barrio et al., 1997). Wu et al. (1997) found that, compared to controls, stutterers had nearly three times increased FDOPA uptake activity in the right ventral medial prefrontal cortex (BA32, p < 0.01) and the left caudate tail (p < 0.05). Stutterers also had a greater than 100% increase in FDOPA uptake in limbic structures including the left extended amygdala, the left insular cortex and the right deep orbital cortex (p < 0.05), and also in auditory cortex (BA22, p < 0.05). Overall, the greatest increase in FDOPA uptake activity in stutterers was in ventral limbic areas, which Wu et al. (1995) found to have decreased metabolic activity and which they proposed are involved in the neural circuits of stuttering. The medial prefrontal cortex receives extensive dopaminergic innervation and has functional connections to the SMA (Bunney and Aghajanian, 1977; Tassin et al., 1977; Chiodo et al., 1984; Thierry et al., 1988; Weinberger et al., 1988; Bertolucci-D'Angio et al., 1990; Cenci et al., 1992). Furthermore, the medial prefrontal cortex has been identified as a vocalization center in primates (Jürgens and Müller-Preuss, 1977; Jürgens and Pratt, 1979; Jürgens, 1986). Wu et al. (1997) proposed that their findings may indicate abnormal overactivity in mesocortical dopamine tracts. Dopaminergic tracts also project to temporal cortical regions (De Keyser et al., 1989). The results of Wu et al.'s (1997) study are of limited power due to the small sample size, but they nonetheless suggest an association between increased dopamine activity in brain regions implicated in speech production and stuttering, and thus lend credence to the dopamine excess theory of stuttering.

D2 RECEPTOR ANTAGONISTS AND STUTTERING

There are reports of haloperidol-associated improvement in stuttering (Healy, 1974; Burns et al., 1978), and of improvement or complete resolution of ANS when treated with paroxetine, a potent and selective serotonin reuptake inhibitor (SSRI; Schreiber and Pick, 1997; Costa and Kroll, 2000; Boldrini et al., 2003). It has been proposed that there are interactions between serotonergic and dopaminergic systems in the forebrain (specifically in the SMA, which has connections to the BG), and that paroxetine may improve stuttering via a serotonin-mediated indirect antidopaminergic effect in such patients. Turgut et al. (2002) reported a case of ANS subsequent to a focal left parietal infarct, which resolved completely with paroxetine therapy. By contrast, there are reports of SSRIs including sertraline and fluoxetine causing stuttering (Guthrie and Grunhaus, 1990; Christensen et al., 1996). SSRIs are not a homogeneous class of drugs (Sokolowski and Seiden, 1999), so it is possible that effects on serotonergic and dopaminergic systems differ between agents. Ecstasy (MDMA) has antiparkinsonian effects in primates, possibly via a serotonergic mechanism involving an agonist effect on 5HT1a or 5HT1b receptors (Iravani et al., 2003). The antiparkinsonian effect of ecstasy in primates is completely blocked by fluvoxamine, an SSRI. A case of Parkinsonism associated with MDMA use in humans has also been reported (Kuniyoshi and Jankovic, 2003). There are reports of stuttering improving with amphetamine treatment (Fish and Bowling, 1962, 1965).
There is evidence that amphetamine results in a long-lasting decrease in the number of D1 and D2 receptors available in the striatum, due to cytoplasmic internalization of receptors (Ginovart et al., 1999; Dumartin et al., 2000; Sun et al., 2003). The downregulatory effect of amphetamine on striatal dopamine receptors may be relatively greater for D2 receptors versus D1 receptors (Gifford et al., 2000), which would be consistent with the hypothesis of a relationship between high D2 receptor density in the putamen and stuttering proposed by Alm (2004). Cases of stimulant-induced stuttering have also been reported, and may be attributable to increased dopaminergic neurotransmission (Burd and Kerbeshian, 1991). Stimulants are thought to affect dopaminergic and noradrenergic systems in children with ADHD, and can worsen tic symptoms and even trigger tic disorders (Lowe et al., 1982). There are reports of theophylline-induced stuttering in children and adults, and it has been proposed that theophylline engenders stuttering by disturbing dopaminergic neurotransmission (indirectly, via inhibition of GABA and adenosine receptors), with a hyperdopaminergic effect that is greatest in the BG (McCarthy, 1981; Rosenfield et al., 1994; Gérard et al., 1998; Movsessian, 2005).

STUTTERING AS A FORM OF DYSTONIA

It is possible that stuttering represents a form of focal segmental dystonia of the orofacial muscles (Alm, 2005). The involuntary movements seen in stuttering are similar to those in dystonia, and sensitivity to emotional stress is common to stuttering and to focal dystonias (Kiziltan and Akalin, 1996). A family history of stuttering is more common in patients with idiopathic torsion dystonia than in the general population (Fletcher et al., 1991). There is also evidence of a hyperdopaminergic state in both disorders. Kiziltan and Akalin (1996) proposed that stuttering is a focal/segmental action dystonia. Dystonia can frequently result from focal lesions of the BG (Bhatia and Marsden, 1994; Naumann et al., 1996). Others argue that the presence of involuntary movements similar to those seen in BG movement disorders and complex motor tics in patients with PDS suggests a common pathophysiology for tics and stuttering (Mulligan et al., 2003).

CONCLUSION AND FUTURE PERSPECTIVES

The etiology and pathophysiology of stuttering remain poorly understood. Stuttering is a disorder associated with significant psychological burden and social stigma, and work toward achieving successful therapies has long focused on its psychological or psychodynamic causes. The increased recognition of a structural or functional neurological cause can render stuttering potentially amenable to surgical or medical intervention. Further research on the cortical and subcortical anatomical and functional changes in stuttering is needed. In this review, we have presented evidence that dysfunction of the BG and of their cortical targets is a likely pathomechanism underlying stuttering.
Treating cachexia using soluble ACVR2B improves survival, alters mTOR localization, and attenuates liver and spleen responses

Abstract

Background: Cancer cachexia increases morbidity and mortality, and blocking of activin receptor ligands has improved survival in experimental cancer. However, the underlying mechanisms have not yet been fully uncovered.

Methods: The effects of blocking activin receptor type 2 (ACVR2) ligands on both muscle and non-muscle tissues were investigated in a preclinical model of cancer cachexia using a recombinant soluble ACVR2B (sACVR2B-Fc). Treatment with sACVR2B-Fc was applied either only before tumour formation or continued both before and after tumour formation. The potential roles of muscle and non-muscle tissues in cancer cachexia were investigated in order to understand the possible mechanisms of the improved survival mediated by ACVR2 ligand blocking.

Results: Blocking of ACVR2 ligands improved survival in tumour-bearing mice only when the mice were treated both before and after tumour formation. This occurred without effects on tumour growth, production of pro-inflammatory cytokines or the level of physical activity. ACVR2 ligand blocking was associated with increased muscle (limb and diaphragm) mass and attenuation of both hepatic protein synthesis and splenomegaly. Notably, the effects on the liver and the spleen were observed independently of the treatment protocol. The prevention of splenomegaly by sACVR2B-Fc was not explained by decreased markers of myeloid-derived suppressor cells. Decreased protein synthesis in the tibialis anterior, diaphragm, and heart was observed in cachectic mice. This was associated with decreased mechanistic target of rapamycin (mTOR) colocalization with late-endosomes/lysosomes, which correlated with cachexia and reduced muscle protein synthesis.

Conclusions: The prolonged survival with continued ACVR2 ligand blocking could potentially be attributed in part to the maintenance of limb and respiratory muscle mass, but many observed non-muscle effects suggest that the effect may be more complex than previously thought. Our novel finding of decreased mTOR localization with lysosomes/late-endosomes in skeletal muscle in cancer opens up new research questions and possible treatment options for cachexia.

Introduction

Cancer cachexia is a debilitating condition without an effective treatment. It is usually associated with marked loss of muscle and fat mass, reduced physical activity and function, decreased tolerance to cancer therapies and increased mortality. 1,2 Skeletal muscle has been an underappreciated tissue in health and disease, 3 but a growing body of evidence suggests a beneficial role for treating muscle tissue in cachectic conditions associated with different diseases, such as cancer. 4 Muscle wasting in cancer cachexia is a consequence of decreased muscle protein synthesis, 5,6 impaired regeneration 7 and/or increased protein degradation, 6 but their relative importance and mechanisms are not well known. One possible mechanism for muscle wasting in cachexia is increased signalling through activin receptor ligands, such as myostatin and activins. 8-11 Myostatin and activins negatively regulate muscle mass through binding to their receptors activin receptor type 2 (ACVR2) A and B. 12,13 Blocking these ligands or their receptors can increase muscle mass and prevent muscle wasting in various animal models, 12-16 but also in humans. 17,18
Prevention of cancer-associated cachexia by blocking ACVR2 ligands with either a soluble receptor (sACVR2B) 9,16 or a neutralizing antibody against the receptors 14 has previously been shown to improve survival without an effect on primary tumour growth in preclinical animal models. In addition, many other strategies to prevent muscle loss in different experimental models suggest causality between reduced muscle loss and survival in cachexia. For example, inhibition of NF-κB signalling reduced denervation- and Lewis lung carcinoma (LLC) tumour-induced muscle loss, which was associated with an improved survival rate. 19 Blocking GDF15, and consequently cachexia, significantly improved survival in fibrosarcoma (HT-1080) and in LNCaP tumour-bearing mice. 20 Furthermore, preventing the loss of muscle mass in C26 tumour-bearing mice by a histone deacetylase inhibitor 21 and by inhibiting TWEAK/Fn14 signalling in the tumour 22 has prolonged survival. If indeed treating cachexia, and especially muscle loss, by strategies such as blocking ACVR2 ligands can improve survival in cancer, this may occur at least in part through preventing the loss of respiratory muscle mass and function. 23 However, other factors, such as haematological changes, the acute phase response (APR), inflammatory cytokines, and myeloid-derived suppressor cells (MDSCs), have also recently been identified as potential contributors either to the development of cancer cachexia or to the poor prognosis associated with it. 24-28 The contribution of these factors to the improved survival when treating cachexia by blocking of ACVR2 ligands is unknown.

In the present study, we aimed to investigate the effects of blocking ACVR2 ligands in a preclinical model of cancer cachexia on both muscle and non-muscle tissues. Two different treatment protocols were applied to compare the effects of blocking ACVR2 ligands only before tumour formation, thus increasing muscle size only prior to the onset of cachexia, with continued treatment both before and after the onset of cachexia. This comparison was performed to investigate whether increased muscle mass alone before the onset of cachexia is sufficient for improved survival or whether continued treatment is crucial. We also aimed to gain more insight into the potential mechanisms of muscle wasting and the role of non-muscle tissues in cancer cachexia, in order to understand the improved survival mediated by sACVR2B-Fc.

The treatment of the animals was in strict accordance with the European Convention for the protection of vertebrate animals used for experimental and other scientific purposes. The protocols were approved by the National Animal Experiment Board, and all the experiments were carried out in accordance with the guidelines of that committee (permit number: ESAVI/10137/04.10.07/2014) and with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.

Tumour cell culture

Colon 26 carcinoma (C26) cells (provided by Dr Fabio Penna, obtained from Prof. Mario P. Colombo and originally characterized by Corbett et al. 29) were maintained in complete Dulbecco's Modified Eagle's Medium (high glucose, GlutaMAX™ Supplement, pyruvate; Gibco™, Life Technologies) supplemented with penicillin (100 U/mL), streptomycin (100 μg/mL), and 10% FBS.
Our pilot experiments showed that injecting mice with C26 cells (5 × 10⁵) resulted in marked cachexia and considerably higher tumour gene expression of Activin A, Interleukin-6 (Il-6) and Myostatin in comparison to our previously conducted experiment using the LLC tumour model (Online Resource 1: Figure S1) with the same number of cells injected (5 × 10⁵), but a larger tumour 15 (data not shown).

Experimental design

The mice were randomized into one of four groups (matched by body weight): (i) healthy control mice (CTRL), (ii) C26 tumour-bearing mice receiving vehicle treatment throughout the experiment (C26 + PBS), (iii) C26 tumour-bearing mice receiving sACVR2B-Fc treatment before tumour formation (until Day 1 after C26 cell inoculation) followed by vehicle treatment until the end of the experiment (C26 + sACVR/b), and (iv) C26 tumour-bearing mice receiving continued sACVR2B-Fc treatment throughout the experiment (C26 + sACVR/c). The experimental design and the treatment protocols are shown in Figure 1. Body mass and food intake of the mice were monitored daily in all the experiments.

Survival experiment

The mice were followed until the predetermined humane end-point criteria were fulfilled, or until 3 weeks after C26 cell inoculation at the latest, to investigate survival. The end-point criteria combined the body mass loss and the overall condition of the mice. In the evaluation of the overall health status of the mice, the following aspects were taken into account in addition to the body mass loss: appearance and posture (lack of grooming, piloerection, and hunched posture), natural and provoked behaviour (inactivity, impaired locomotion, and reduced reactivity to external stimuli), and food intake/ability to eat and drink. Mice were euthanized when two researchers confirmed the fulfilment of the end-point criteria. During the experiment, seven mice needed to be euthanized due to reasons unrelated to study purposes (e.g. tumour ulceration or self-mutilation), and three mice were excluded from analysis due to delayed tumour growth. This did not have any major effect on the results.

Short-term experiments

To study the potential mechanisms, another experiment was conducted with a predetermined end-point at Day 11 after C26 cell inoculation (Figure 1). This experiment was repeated with three groups: CTRL, C26 + PBS and C26 + sACVR/c, in order to replicate the findings of the first short-term experiment and to collect more samples and data for further analysis.

The production of soluble ACVR2B

The ectodomain of ACVR2B was fused with an IgG1 Fc domain, and the fusion protein was expressed in-house in Chinese hamster ovary cells grown in a suspension culture as explained earlier in detail. 30 The protein is similar but not identical to that originally generated by Lee and colleagues. 12

Home cage physical activity

Home cage physical activity was recorded by our validated force plate system as previously described 31,32 at baseline and on Day 10 after C26 cell inoculation (22 h recording). The mice were housed in pairs, and the activity index of each cage reflects the total locomotive activity in all directions (y, x, and z axes) of the two mice housing the same cage (from the same experimental group).

Tissue collection

At the end of each experiment, the mice were anaesthetized by an intraperitoneal injection of ketamine and xylazine (Ketaminol® and Rompun®, respectively) and euthanized by cardiac puncture followed by cervical dislocation.
A sample of the collected blood was taken to EDTA tubes for the analysis of basic haematology. The rest of the blood was collected in serum collection tubes and centrifuged at 2000 g for 10 min (Biofuge 13, Heraeus). The diaphragm, the heart, tibialis anterior (TA), and gastrocnemius muscles, as well as the liver, the spleen, epididymal fat pads, and the tumour were rapidly excised, weighed, and snap-frozen in liquid nitrogen. The right TA and a sample of the spleen were embedded in Tissue-Tek® O.C.T. compound and snap-frozen in isopentane cooled with liquid nitrogen. All tissue masses were normalized to the length of the tibia (TL, mm), which was unaltered by the tumour or the continued sACVR2B-Fc treatment, but slightly increased in the C26 + sACVR/b group as compared to C26 + PBS (Online Resource 3: Figure S2).

Muscle protein synthesis: in vivo surface sensing of translation

Muscle protein synthesis was analysed using the surface sensing of translation method 33,34 as earlier in our laboratory. 15,30,35 Briefly, on Day 11 after C26 cell inoculation, mice were anaesthetized and subsequently injected i.p. with 0.040 μmol/g puromycin (Calbiochem, Darmstadt, Germany) dissolved in 200 μl of PBS. At exactly 25 min after puromycin administration, mice were euthanized by cardiac puncture followed by cervical dislocation. The left TA muscle and the heart, the diaphragm, as well as a sample of the median lobe of the liver were isolated, weighed and snap-frozen in liquid nitrogen at exactly 30, 35, and 40 min, respectively, after puromycin administration.

Basic haematology

Basic haematology was analysed from whole blood (EDTA) samples diluted 1:25 in saline solution with an automated haematology analyser (Sysmex XP-300, Sysmex Inc., Kobe, Japan). For the analysis of the platelet count, whole blood was diluted 1:250 due to high platelet counts in the samples.

RNA extraction, cDNA synthesis, and quantitative real-time PCR

Total RNA was extracted from tumour, gastrocnemius, and spleen samples using QIAzol and purified with the RNeasy Universal Plus kit (Qiagen) according to the manufacturer's instructions, resulting in high-quality RNA. RNA was reverse transcribed to complementary DNA (cDNA) with the iScript™ Advanced cDNA Synthesis Kit (Bio-Rad Laboratories) following kit instructions. Real-time qPCR was performed according to standard procedures using iQ SYBR Supermix (Bio-Rad Laboratories) and the CFX96 Real-Time PCR Detection System combined with CFX Manager software (Bio-Rad Laboratories). Data analysis was carried out using the efficiency-corrected ΔΔCt method (a worked form of this calculation is sketched below). Based on the lowest variation between and within the groups of the potential housekeeping genes (Rn18S, Gapdh, 36b4, or Tbp), Tbp was selected for the spleen whereas 36b4 was chosen for the tumour and the muscle. Primers used are listed in Online Resource 2: Supplementary methods (Table S1).

Protein extraction and content

Tibialis anterior (TA), diaphragm, heart, and liver samples were homogenized in ice-cold buffer with proper protease and phosphatase inhibitors and further treated as earlier 30 with slight modifications. The samples were centrifuged at 500 g for 5 min at +4°C for the analysis of the protein synthesis, and at 10 000 g for 10 min at +4°C for other analyses. Total protein content was determined using the bicinchoninic acid protein assay (Pierce, Thermo Scientific) with an automated KoneLab device (Thermo Scientific).
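As a sketch of the efficiency-corrected ΔΔCt calculation referenced in the qPCR subsection above (the Pfaffl formulation is shown as one common variant; the paper does not state which exact formula was used):

$$\text{ratio} = \frac{E_{\text{target}}^{\,\Delta Ct_{\text{target}}(\text{control}-\text{sample})}}{E_{\text{ref}}^{\,\Delta Ct_{\text{ref}}(\text{control}-\text{sample})}}$$

where E_target and E_ref are the amplification efficiencies of the target and reference (here Tbp or 36b4) primer pairs, and ΔCt is the difference in threshold cycles between the control and the sample.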
Citrate synthase activity assay

Citrate synthase activity was measured from TA, diaphragm, and heart homogenates using a kit (Sigma-Aldrich) with an automated KoneLab device (Thermo Scientific).

Western blotting

Western blot analysis was performed as previously described. 15,30,36 Briefly, tissue homogenates containing 30 μg of protein were solubilized in Laemmli sample buffer and heated at 95°C (except at 50°C for the analysis of OXPHOS proteins) to denature proteins, separated by SDS-PAGE and then transferred to a PVDF membrane followed by overnight probing with primary antibodies at +4°C. Proteins were visualized by enhanced chemiluminescence using a ChemiDoc XRS device and quantified with Quantity One software version 4.6.3 (Bio-Rad Laboratories, Hercules, California, USA). In the case of the analysis of puromycin-incorporated proteins and ubiquitinated proteins, the intensity of the whole lane was quantified. Ponceau S staining and GAPDH were used as loading controls, and all the protein level results were normalized to the mean of Ponceau S and GAPDH. Antibodies used are listed in Online Resource 2: Supplementary methods.

For mechanistic target of rapamycin (mTOR)-LAMP2 colocalization analysis, TA sections were air-dried and fixed in −20°C acetone for 10 min. After PBS washes, sections were blocked with 5% goat serum and 0.3% CHAPS in PBS for 1 h, washed with PBS, and incubated overnight at +4°C with primary antibodies against mTOR, dystrophin, and LAMP2 diluted in PBS containing 0.5% BSA and 0.3% CHAPS. After PBS washes, sections were incubated with secondary antibodies (Goat anti-rabbit Alexa Fluor 555, Goat anti-mouse Alexa Fluor 405 and Goat anti-rat Alexa Fluor 488) for 1 h at room temperature, washed and mounted.

Histology and immunohistochemistry

Spleen sections were stained using haematoxylin and eosin for basic histology. For immunofluorescence staining, frozen sections were air-dried for 15 min and fixed with 4% PFA for 10 min, followed by washes with PBS. The sections were blocked with 5% goat serum in PBS for 1 h, washed with PBS, and incubated with primary antibodies against LY-6G and LY-6C (GR-1) or CD11b diluted in 0.5% BSA in PBS at +4°C overnight. After washing, the sections were incubated with an Alexa fluorochrome conjugated secondary antibody (Goat anti-rat Alexa Fluor 488) diluted in 5% goat serum in PBS for 1 h. The samples were mounted with Mowiol-DABCO. Fluorescently labelled samples were imaged using a Zeiss LSM 700 confocal microscope and analysed from 10 images (mTOR-LAMP2) or from 6-11 images (CD11b and GR-1) in each sample using ImageJ. The colocalization of mTOR with LAMP2 was analysed according to Costes et al. 37 using the Colocalization Threshold plugin. All the steps were performed blinded to the sample identification.

Statistical analyses

Differences in survival were analysed with the Kaplan-Meier method [log-rank (Mantel-Cox) test]. Cox regression analysis was used to determine factors predicting survival. The C26 cancer effect (CTRL vs. C26 + PBS or CTRL vs. C26 groups pooled) was examined with Student's t-test or the nonparametric Mann-Whitney U test, and the effect of sACVR2B-Fc in the tumour-bearing groups with one-way analysis of variance or the Kruskal-Wallis test followed by Holm-Bonferroni corrected LSD or Mann-Whitney U post hoc tests, respectively, when appropriate. Pearson correlation coefficient was used to analyse correlations. Statistical significance was set at P < 0.05. All values are presented as means ± SEM unless otherwise stated.
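For reference, the Cox regression used above has the standard proportional hazards form (shown here as a general sketch, not a model specification taken from the paper):

$$h(t \mid x) = h_0(t)\,\exp(\beta x)$$

where h_0(t) is the baseline hazard. A reported coefficient B for a covariate x (for example, the daily body mass change reported in the Results) therefore means that each one-unit increase in x multiplies the hazard of reaching the end-point by e^B.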
Results

Blocking activin receptor type 2B ligands improves survival of C26 tumour-bearing mice

Mice treated with sACVR2B started to gain body mass soon after the beginning of the treatment (Online Resource 3: Figure S2). The body mass of the tumour-bearing mice started to decrease after C26 cell inoculation. There was no significant difference in the survival time between the C26 + PBS and C26 + sACVR/b groups (Figure 2A). However, the survival was significantly improved when sACVR2B-Fc administration was continued also after tumour formation (Figure 2A). To study the potential mechanisms underlying the improved survival with only the continued sACVR2B-Fc administration, another experiment was conducted. Associations between the body mass change and survival time were analysed with Cox regression analysis, which revealed that especially the body weight change from Day 10 to Day 11 after cancer cell inoculation predicted survival (B = 1.82, P < 0.001). Thus, Day 11 was determined as the end-point for the second experiment to target the early phase of cachexia.

At this time point, vehicle treated tumour-bearing mice exhibited cachexia manifested by significantly decreased body mass accompanied by lower TA, diaphragm, and adipose tissue mass compared with healthy controls (Figure 2B-F; body mass in Online Resource 3: Figure S2). Both sACVR2B-Fc administered groups had significantly greater TA masses compared with vehicle treated tumour-bearing controls, and a similar trend was also apparent in the diaphragm (Figure 2C and 2D). sACVR2B-Fc administration had no effect on adipose tissue mass when compared to the PBS-treated mice, but discontinuation of the treatment seemed to result in more prominent fat wasting compared with the continued treatment (ns) (Figure 2F). Increased fat wasting together with non-significantly smaller muscle masses compared with the continued treatment protocol probably explains why body mass had started to decrease especially rapidly in the C26 + sACVR/b group. Heart mass was unaffected by the tumour and the sACVR2B-Fc administration at this time point (Figure 2E), although mild cardiac cachexia was observed in our pilot study at 2 weeks after cancer cell inoculation (Online Resource 1: Figure S1). During the last days of the experiment, all tumour-bearing groups had reduced food intake compared to healthy controls (Figure 2G).

Administration of sACVR2B-Fc had no effect on tumour mass (Figure 2H). To find out if the mRNA expression of the potential cachexia-inducing factors was nonetheless modulated by sACVR2B-Fc administration, gene expressions were analysed from the tumours. Consistent with no effects on tumour mass, sACVR2B-Fc administration had no major effect on tumour Activin A (Inhibin βA) mRNA expression, but it increased Il-6 mRNA expression independent of the treatment protocol (Online Resource 3: Figure S2). In the gastrocnemius muscles of the tumour-bearing mice, the mRNA expression of Activin A slightly, but significantly, decreased while Il-6 strongly increased and Myostatin (Gdf8) tended to increase without an effect of the treatment (Online Resource 3: Figure S2).

Muscle protein synthesis and mTOR signalling are decreased in C26 cancer cachexia alongside reduced mTOR localization to lysosomes/late-endosomes

To clarify the mechanisms underlying C26 cancer-induced muscle atrophy, muscle protein synthesis was analysed from TA, diaphragm, and the heart.
Tumour-bearing mice had markedly blunted protein synthesis in all of these tissues and especially in TA, whereas sACVR2B-Fc administration had no effect (Figure 3A). The mTOR, a regulator of protein synthesis, is at least in part regulated by its subcellular localization: localization to the lysosomal/late-endosome membrane is associated with mTOR activation. 38,39 To analyse whether decreased muscle protein synthesis was associated with altered mTOR localization, TA cross-sections were labelled with antibodies against mTOR and the lysosome/late-endosome marker LAMP2. The results demonstrate that colocalization of mTOR with LAMP2 was decreased in the tumour-bearing mice compared with the control group and restored by continued sACVR2B-Fc administration (Figure 3B), reflecting the levels of phosphorylation of ribosomal protein S6, a marker of mTOR signalling (Figure 3C). Also the phosphorylation of S6 kinase 1 at Thr389 was decreased in the tumour-bearing mice without consistent restoration by the continued sACVR2B-Fc treatment (Online Resource 4: Figure S3). The total amount of mTOR analysed with western blotting was similar between the groups (data not shown). Interestingly, mTOR colocalization with LAMP2 correlated well with muscle protein synthesis (r = 0.751; P < 0.01, Figure 3D) and the body mass change of the last day (r = 0.630; P < 0.01) in the untreated mice.

Figure 2: The effects of sACVR2B-Fc administration on survival, tissue masses and food intake in C26 cancer cachexia. (A) A 3-week Kaplan-Meier survival curve (log-rank (Mantel-Cox) test). N = 6, 12, 8, and 9 in CTRL, C26 + PBS, C26 + sACVR/b, and C26 + sACVR/c, respectively. (B) Body mass and the final tumour-free body mass in the short-term experiment. There was a significant time × group interaction (P = 0.006, repeated measures ANOVA). Masses of (C) tibialis anterior (TA), (D) diaphragm (DIA), (E) the heart, and (F) epididymal white adipose tissue (eWAT) normalized to the length of the tibia in mm (TL) at 11 days after C26 cell inoculation. (G) Average food intake during Days 8-10 of the short-term experiment, in which N = 3-4 cages/group, 2 mice/cage. (H) Tumour mass on Day 11 after C26 cell inoculation. *, ** and *** = P < 0.05, 0.01 and 0.001, respectively. CTRL vs. C26 + PBS difference was analysed by Student's t-test (B-G), and differences between the C26-groups with one-way ANOVA with Holm-Bonferroni corrected LSD (B-H). Lines without vertical ends show a pooled effect: (D) sACVR2B-Fc combined and (G) C26-groups combined. N-sizes are depicted in the bar graphs.

C26 cancer cachexia is associated with elevated content of ubiquitinated proteins in skeletal muscle

The content of ubiquitinated proteins was slightly but significantly increased in TA and diaphragm of the tumour-bearing mice (Online Resource 4: Figure S3). In line with this result, the mRNA expression of the major muscle-specific E3 ubiquitin ligases Murf1 and Atrogin1 was markedly increased in the tumour-bearing mice, a similar trend being observed also in the recently characterized muscle ubiquitin ligase of the SCF complex in atrophy-1 (Musa1) 40 (Online Resource 4: Figure S3). sACVR2B-Fc administration did not have significant effects on the markers of the ubiquitin-proteasome system (Online Resource 4: Figure S3). Other protein degradation pathways may also contribute to muscle atrophy in tumour-bearing mice. Indeed, our data suggests potentially increased autophagy in tumour-bearing mice (Hentilä et al., unpublished observations).
Reduced physical activity in C26 cancer cachexia is not rescued by soluble ACVR2B and is associated with minor alterations in skeletal muscle oxidative properties

Home cage physical activity of the mice was recorded at baseline and on Day 10 after the injection of cancer cells or vehicle control. On Day 10, the tumour-bearing mice were significantly less active compared with the control mice, and sACVR2B-Fc administration had no effect on the level of physical activity (Figure 4A). Reduced physical activity was accompanied by minor decreases in citrate synthase activity, but not in the markers of mitochondrial content, in skeletal muscle and the heart of the tumour-bearing mice compared with healthy controls (Figure 4B-E, Online Resource 5: Figure S4). However, OXPHOS complex IV subunit 1 (MTCO1) was increased in tumour-bearing mice in both skeletal muscle and the heart (Figure 4D and 4E; Online Resource 5: Figure S4).

Increased circulating levels of pro-inflammatory cytokines are not affected by blocking activin receptor ligands

To investigate the possible effects of C26 cancer and sACVR2B-Fc administration on circulating cytokines, a multiplex assay was conducted. Of the 16 cytokines analysed, the levels of pro-inflammatory IL-6 and monocyte chemoattractant protein (MCP-1), also known as Chemokine (C-C motif) ligand 2 (CCL2), were highly elevated (P < 0.001) in the sera of the C26 mice, while chemokine RANTES (CCL5) was decreased from the already low values of the healthy mice (Online Resource 6: Table S2). The sACVR2B treatment did not have any effect on IL-6 (P = 0.67) or on RANTES (P = 0.89), while it even further increased MCP-1 (P = 0.042) when the treatment was continued (Online Resource 6: Table S2). The treatment with sACVR2B-Fc also resulted in slightly elevated serum IL-1β (P < 0.05) independent of the treatment protocol, but its levels were very close to the detection limit in most of the samples (Online Resource 6: Table S2).

Increased hepatic protein synthesis and acute phase response in tumour-bearing mice are partially blocked by soluble ACVR2B

Liver mass was unaltered by the C26 tumour and the treatments (Figure 5A). However, C26 tumour-bearing mice had significantly increased liver protein synthesis (Figure 5B), supported by increased phosphorylation of ribosomal protein S6, a marker of mTOR signalling (Figure 5C). This cancer effect was attenuated by sACVR2B-Fc administration independent of the treatment protocol (Figure 5B and 5C). Administration of sACVR2B-Fc alone in healthy mice did not affect liver protein synthesis as analysed from our previous experiment 30 (data not shown). In line with increased protein synthesis, the C26 tumour-bearing mice had increased levels of fibrinogen and serpinA3N compared to healthy controls, together with increased phosphorylation of Stat3, indicating activation of the APR (Figure 5C). Increased Stat3 phosphorylation was partially attenuated by sACVR2B-Fc administration (Figure 5C). The protein contents of fibrinogen and serpinA3N correlated with the body mass loss during the last day in the tumour-bearing mice (r = −0.659, P = 0.001, and r = −0.845, P < 0.001, respectively).
C26 cancer associated splenomegaly is partially prevented by soluble ACVR2B independent of splenic myeloid-derived suppressor cells

C26 tumour-bearing mice treated with PBS had significantly (over 2.5-fold) increased spleen mass compared with healthy control mice, which was partially prevented by sACVR2B-Fc administration independent of the treatment protocol (Figure 6A). We replicated the experiment with all groups except the discontinued sACVR2B-Fc treatment and showed again the same effects (Online Resource 7: Figure S5). Analysis of the spleen histology from this experiment revealed well-organized and clear red and white pulp areas in the control mice, whereas in the tumour-bearing mice, moderate structural disorganization of the white pulp areas occurred, especially in sACVR2B treated mice (Figure 6B). To identify possible myeloid-derived suppressor cell (MDSC) expansion, spleen tissue was more specifically labelled with antibodies against GR-1 (LY-6C/G) and CD11b, and the expression of typical MDSC marker genes was analysed by qPCR. 42 The density of CD11b positive cells (count/area) was increased in the C26 tumour-bearing mice compared with the control mice without changes in the density of GR-1 (LY-6C/G) positive cells (Figure 6C). As spleen size was increased in the tumour-bearing mice, counts/area were multiplied by the spleen mass to get an idea of the total abundance of CD11b and GR-1 (LY-6C/G) positive cells. This analysis showed a more pronounced increase in both CD11b and GR-1 (LY-6C/G) positive cells in the tumour-bearing mice (Online Resource 7: Figure S5). The expression of genes previously related to the presence and development of MDSCs 42 was increased in the tumour-bearing mice, and this effect was even more pronounced in the sACVR2B-Fc treated mice (Figure 6D-G). These results suggest an increased abundance of splenic MDSCs in the tumour-bearing mice, and this effect is even accentuated by sACVR2B administration. Thus, changes in MDSCs do not explain the effect of sACVR2B-Fc on the spleen mass, and the cell population responsible for this effect is still to be identified. Nevertheless, as a possible mechanism, the sACVR2B-Fc treated group showed increased mRNA expression of Cyclin Dependent Kinase Inhibitor 1A (Cdkn1a/p21), an inhibitor of proliferation (Online Resource 7: Figure S5).

Activin receptor ligand blocking reverses the mild anaemia observed in tumour-bearing mice

In the tumour-bearing mice, red blood cell count, haemoglobin, and haematocrit were slightly, but significantly, decreased (Figure 7A-C). All of these parameters were at least partially restored by sACVR2B-Fc administration (Figure 7A-C). In contrast, platelet count was robustly augmented in the tumour-bearing mice independent of the treatment.

Figure 6 (partial caption): (C) CD11b and GR-1 (LY-6C/G) count in spleen on Day 13 after C26 cell injection and representative immunofluorescence images. Scale bar = 100 μm. The mRNA expression of MDSC markers (D) interleukin-10 (Il-10), (E) S100 calcium binding protein A8 (S100a8), and (F) the splice variant of X-box Binding Protein 1 (Xbp1s) as well as (G) Interferon Regulatory Factor 8 (Irf8), a negative regulator of MDSCs, 41 on Day 13 after C26 cell injection. *, **, and *** = P < 0.05, 0.01, and 0.001, respectively. Student's t-test and one-way ANOVA with Holm-Bonferroni corrected LSD (A, C, D), Mann-Whitney U (E-G). Lines without vertical ends show a pooled effect of all C26-groups combined. N-sizes are depicted in the bar graphs. N = 7-9/group in (E-G).
Discussion

In the present study, we show that preventing cachexia by continued blocking of ACVR2B ligands improved survival in tumour-bearing mice without affecting primary tumour growth, similarly as earlier. 9,14,16 These findings, together with results from treatments affecting other pathways [19][20][21][22] as well as epidemiological evidence in humans, 2 have led to suggestions for a possible causal link between preservation of muscle mass and improved survival. 4 This hypothesis is in part supported by the present study showing that increasing muscle mass and maintaining it by continued blocking of ACVR2B ligands improves survival. In comparison, the discontinuation of the treatment before the tumour formation led to a systematically worse outcome and also shorter survival. This may be due to the fact that the discontinuation of the treatment may in itself have adverse effects on the host or because larger muscles at the onset of the disease may result in more robust cachexia, as shown earlier. 43 Nevertheless, if the preservation of muscle per se indeed improved survival, the exact mechanisms still remain unresolved. It is possible, for instance, that the preservation of some specific vital muscles, such as the major respiratory muscles, 44,45 is paramount rather than muscle tissue in general. Indeed, diaphragm atrophy and weakness accompanied by ventilatory dysfunction have been reported in C26 tumour-bearing mice. 46,47 Interestingly, ACVR2 ligand blocking restored diaphragm mass in the present study, which may at least in part have explained the prolonged survival of these mice, although the differences between the treatment protocols were quite marginal at the time point investigated.

In addition to skeletal muscles, cardiac cachexia and associated pathological changes such as arrhythmias may be linked to survival in cancer cachexia. 48,49 In our hands, however, the C26 tumour burden resulted in only mild cardiac cachexia, and sACVR2B treatment did not affect heart size, similarly as earlier. 35 This differs from the results of Zhou et al., who reported significant cardiac atrophy which was fully reversed by sACVR2B. This may be explained by more severe cachexia that was treated with higher doses of sACVR2B-Fc. 9 However, we have recently demonstrated that sACVR2B-Fc has markedly smaller effects on cardiac than skeletal muscle in a chemotherapy-induced cachexia model. 35 Future studies should better elucidate the effects of blocking ACVR2 ligands on the heart and the importance of cardiac cachexia on cancer prognosis.

Liver acute phase response (APR) has been associated with impaired survival in cancer cachexia in humans. 26 It is an early-defence system driven by cytokines such as IL-6, which induces Stat3 activation and consequently increased expression of acute phase proteins. 26,50 We showed induced hepatic APR in tumour-bearing mice, supporting previous findings. 25 Also the liver protein synthesis was increased in the tumour-bearing mice, a finding that is consistent with an earlier study with C26 cancer, 51 and also with human cancer cachexia, assuming that increased synthesis of circulating fibrinogen reflects mainly increased liver protein synthesis. 52 Increased liver protein synthesis in tumour-bearing mice may reflect increased synthesis of exported APR proteins, because no significant changes in liver mass were observed in any of the experiments.
Both ACVR2 ligand blocking protocols reduced the increased protein synthesis and Stat3 phosphorylation, again without an effect on liver mass. Although no differences in these results were observed between the two treated groups, the discontinued sACVR2B-Fc treatment was associated with a much worse prognosis, perhaps arguing against these hepatic changes being important for the survival benefit of continued ACVR2 ligand blocking. Interestingly, however, the level of hepatic APR proteins correlated with body mass loss, suggesting that the importance of these pathways should be further investigated in the future, as well as the mechanisms of blocking ACVR2 ligands on liver protein synthesis in cancer.

Pro-inflammatory cytokines are thought to be important for the development of cancer cachexia 53 and for its prognosis. 27,54 Of the multiple cytokines analysed, IL-6 and MCP-1 were strongly elevated in the sera of the C26 tumour-bearing mice, which is in agreement with previous findings in the same experimental model. 9,20 In humans, high levels of MCP-1 27 and IL-6 54 have been related to shorter survival time in pancreatic ductal adenocarcinoma and lung cancer, respectively. Recently, elevated MCP-1 was associated with cachexia in treatment naïve pancreatic cancer patients. 55 However, in the present study, these responses were not attenuated by the sACVR2B treatment, suggesting that continued blocking of the ACVR2B pathway enhances survival and prevents muscle loss independent of the elevated circulating pro-inflammatory cytokines, similarly as suggested by Zhou et al. based on IL-6, IL-1β, and TNF-α. 9 Continued sACVR2B-Fc treatment even increased serum MCP-1 and IL-1β, but the mechanism and physiological importance of this effect is unknown and further studies are needed. We also analysed sera from the survival experiment at the day of euthanasia (n = 4-5 per group), where MCP-1 was even further elevated in the C26 + sACVR/c group of mice. This may be due to prolonged survival and thus more advanced disease at euthanasia (data not shown).

Interestingly, increased spleen size (splenomegaly), typically observed in experimental cancer, 28,56,57 was attenuated in sACVR2B-Fc treated mice. In addition, expansion of splenic MDSCs has previously been associated with potential effects on cachexia development and survival. 28 Interestingly, although the increase in spleen size was prevented, the markers of MDSCs in the spleen were not decreased with ACVR2 ligand blocking. Moreover, the increase in spleen size was prevented by sACVR2B-Fc treatment independent of the treatment protocol, suggesting that the spleen may not play a major role in enhanced survival with the continued ACVR2 ligand blocking. Nevertheless, an overall reduction in red pulp area by sACVR2B was recently observed in an animal model of β-thalassemia intermedia, and this was associated with alleviation of anaemia and splenomegaly. 58 We found that the white pulp areas were clearly visible in healthy control mice, whereas in the tumour-bearing mice, these areas were disorganized, and this tended to occur especially in sACVR2B treated mice. We also found changes in basic haematological parameters, such as decreased blood haemoglobin and haematocrit, in C26 tumour-bearing mice, which is in line with previous studies, 59 and those were reversed in the sACVR2B treated mice.
Importantly, however, these factors did not differ between the treated groups, at least at this time point where the loss of body mass had already started with the discontinued treatment, suggesting that the attenuation of anaemia unlikely results in improved survival with continued ACVR2 ligand blocking. However, the effect of preventing anaemia per se may have other benefits, as erythropoietin can improve health in C26 tumour-bearing mice. 24,60

Physical activity has been shown to be beneficial for health and also for cancer incidence and potentially for tumour host survival. 61,62 Our results showed that tumour-bearing mice were less active than healthy controls, supporting earlier evidence of decreased physical activity in tumour-bearing mice. 16,59,63 Decreased physical activity was not due to muscle wasting per se, as preventing muscle wasting by blocking ACVR2 ligands did not prevent the decrease in physical activity. Our results also argue against physical activity being an important factor for improved survival with continued sACVR2B-Fc treatment. Similar results of the effects of sACVR2B treatment on physical activity have been reported earlier in LLC tumour-bearing mice. 16 The reduction in physical activity was associated with only minor changes in some of the mitochondrial markers in skeletal muscle and the heart.

Similarly to Zhou et al., 9 we report that sACVR2B-Fc did not affect C26 tumour mass, showing that C26 tumour growth is not regulated by ACVR2 ligands. We extended this finding by showing that the gene expression of Activin A and Il-6, which are important proteins in cachexia, 8 was not reduced by sACVR2B-Fc, further showing that sACVR2B-Fc improved survival in this experimental model of cancer without marked effects on the tumour. However, the circulating ACVR2 ligands may also be directly or indirectly related to the cancer prognosis, at least in part independent of cachexia. High circulating Activin A levels predict poor prognosis in colorectal and lung cancer patients. 10,64 This may be explained by increased Activin A levels reflecting the severity or the extent of the cancer or cachexia. However, direct effects of Activin A, 65,66 and perhaps of other ACVR2 ligands, on non-muscle tissues may also affect survival in cancer cachexia, and thus more studies are needed to further investigate this phenomenon.

Muscle wasting in cancer cachexia can be attributed to decreased protein synthesis, 5,6 impaired regeneration 7 as well as increased protein degradation 6 in skeletal muscle. At the time point in which body mass loss started to accelerate and predicted survival, increased mRNA expression of muscle specific E3 ubiquitin ligases and increased content of ubiquitinated proteins were observed, suggesting increased protein degradation via the ubiquitin-proteasome system. At the same time, robustly decreased muscle protein synthesis in TA, diaphragm, and the heart of the tumour-bearing mice was observed. In the present study, as predicted from decreased mTOR signalling activity, mTOR colocalization with the lysosomes/late-endosomes was decreased in skeletal muscles of C26 tumour-bearing mice. Interestingly, our correlation data suggests that this novel finding may explain at least in part the cachexia and decreased muscle protein synthesis in the untreated tumour-bearing mice. Targeting of mTOR to lysosomes/late-endosomes has previously been shown to be sufficient to activate mTOR signalling, while mTOR inactivation by, e.g. amino acid starvation, is associated with mTOR dissociation from lysosomes/late-endosomes. 38,39
Even though continued sACVR2B-Fc administration had no effect on protein synthesis at this time point, it was able to partially restore S6 phosphorylation and the colocalization of mTOR with the lysosomes/late-endosomes. The reason for this discordance is unknown, but may be due to decreased food intake or simply the refractory nature of cancer cachexia at this time point in most of the animals. 67 Indeed, the increased skeletal muscle masses with ACVR2B ligand blocking are probably due to earlier changes in protein synthesis and/or degradation, as we have previously reported increased protein synthesis with ACVR2B ligand blocking in healthy and chemotherapy receiving mice. 15,30

In conclusion, we showed that increased muscle size with ACVR2 ligand blocking was associated with improved survival in C26 tumour-bearing mice only when the treatment was continued after the tumour formation. The prolonged survival could potentially be attributed in part to maintenance of muscle mass and, in theory, the respiratory muscle mass. However, more specific strategies in preventing total and specific loss of muscle (limb, respiratory, and heart) without possible non-muscle effects should be investigated in the future. Moreover, our results suggest that circulating pro-inflammatory cytokines, physical activity, or altered hepatic and splenic physiology may not be determining factors for improved survival with activin receptor ligand blocking. In addition, our novel result of decreased muscle protein synthesis and mTOR localization with lysosomes/late-endosomes opens up possible future research questions and treatment options for cachexia.

Online supplementary material

Additional Supporting Information may be found online in the supporting information tab for this article.

Figure S1: C26 cancer decreases (a) body mass (time × group interaction P < 0.001) and masses of (b) tibialis anterior (TA), (c) gastrocnemius (GA), (d) heart and (e) epididymal fat (eWAT). TL = tibial length. C26 tumour expresses substantially higher levels of (f) Activin A, (g) Il-6, and (h) Myostatin mRNA than LLC tumour. *, ** and *** = P < 0.05, P < 0.01 and P < 0.001, respectively. Student's t-test (a-e), Mann-Whitney U (f-h). N-sizes are depicted in the bar graphs, except in (a) where N = 8 per group. Data is presented as means ± SEM, except in (a), where data is presented as mean ± SD.

Supplementary Table S1: Primer information for qPCR analyses.

Figure S2: The effects of C26 cancer and sACVR2B-Fc administration on body mass and tumour and muscle gene expression. (a) Length of the tibia on day 11 after C26 cell inoculation. (b) Body masses in the survival experiment. Tumour (c) Activin A (Inhibin βA) and (d) Il-6 mRNA expression. Gastrocnemius (e) Activin A (Inhibin βA), (f) Il-6 and (g) Myostatin (Gdf8) mRNA expression at day 11 after tumour inoculation. C26 cells were inoculated at day 0. mRNA results were normalized to 36b4 mRNA. FC = fold change. * and ** = P < 0.05 and P < 0.01, respectively. Student's t-test and one-way ANOVA with Holm-Bonferroni corrected LSD (a, e-g). Kruskal-Wallis with Holm-Bonferroni corrected Mann-Whitney U (c, d). N-sizes are depicted in the bar graph except in (b), in which n = 6, 12, 8, and 9 in CTRL, C26 + PBS, C26 + sACVR/b, and C26 + sACVR/c, respectively.

Figure S3: (a) Phosphorylation of S6K1 at Thr389 was decreased in tumour-bearing mice on day 11 after C26 cell inoculation.
C26 cancer cachexia was associated with increased ubiquitinated proteins in (b) tibialis anterior and (c) diaphragm, and increased mRNA expression of the ubiquitin ligases (d) Murf1, (e) Atrogin1 and (f) Musa1, which were not affected by sACVR2B-Fc administration in gastrocnemius on day 11 after C26 cell inoculation. C = CTRL, P = C26 + PBS, Ab = C26 + sACVR/b, Ac = C26 + sACVR/c. FC = fold change. * and ** = P < 0.05 and P < 0.01, respectively. Kruskal-Wallis with Holm-Bonferroni corrected Mann-Whitney U (a-e), Student's t-test and one-way ANOVA with Holm-Bonferroni corrected LSD (f). N-sizes are depicted in the bar graphs.

Figure S4: Mitochondrial markers in the heart on day 11 after C26 cell injection. (a) PGC-1α and cytochrome c (Cyt c) protein levels were not altered by the C26 tumour or the sACVR2B-Fc treatment in the heart. N-sizes are depicted in the bar graphs. (b) OXPHOS complex IV (MTCO1) was significantly increased in the hearts of the vehicle treated tumour-bearing mice (C26 + PBS). In addition, when all C26 tumour-bearing groups were pooled, a significant increase was seen also in complexes CI (NDUFB8) and CIII (UQCRC2) as well as the sum of all complexes (total). This pooled C26 effect is depicted by the lines without vertical ends. N = 7-9/group. C = CTRL, P = C26 + PBS, Ab = C26 + sACVR/b, Ac = C26 + sACVR/c. * and *** = P < 0.05 and P < 0.001, respectively (Mann-Whitney U).

Supplementary Table S2: Serum cytokine levels at 11 days after C26 cell injection. N = 8, 7, 6 and 8 in CTRL, C26 + PBS, C26 + sACVR/b and C26 + sACVR/c groups, respectively. The values are presented in pg/ml. If over half of the values in the group were below or close to the detection limit, the concentration is not presented (depicted as N/A in the table). Cytokines with at least 3/4 of all values below or close to the detection limit are not shown (IL-1α, IL-2, IL-3, IL-4, IL-10, IL-17, IFNγ, TNF-α, MIP-1α and GM-CSF). In statistical analysis, the C26 effect was analysed by pooling all the tumour-bearing groups. The sACVR-effect P-value designates the lowest sACVR2B-Fc P-value in comparison to C26 + PBS, and if significance is found, the sACVR2B-Fc group significantly different compared with C26 + PBS is indicated with *.

Figure S5: Effects of C26 tumour and sACVR2B-Fc on the spleen on day 13 after C26 cell inoculation. (a) C26 cancer-induced splenomegaly is attenuated by sACVR2B-Fc administration. Splenic (b) CD11b and (c) GR-1 contents were increased in C26 cancer when multiplied by spleen mass to reflect the total abundance of CD11b and GR-1 positive cells. (d) sACVR2B-Fc administration resulted in increased splenic Cdkn1a (p21) mRNA. * and ** = P < 0.05 and P < 0.01, respectively. C26 and sACVR2B-Fc effects were analysed with Student's t-test (a, b, d) or Mann-Whitney U test (c). N-sizes are depicted in the bar graphs.
SeedChain: A Secure and Transparent Blockchain-Driven Framework to Revolutionize the Seed Supply Chain

Farming is a major sector required for any nation to become self-sustainable. Quality seeds heavily influence the effectiveness of farming. Seeds cultivated by breeders pass through several entities in order to reach farmers. The existing seed supply chain is opaque and untraceable, which not only hinders the growth of crops but also makes the life of a farmer miserable. Blockchain has been widely employed to enable fair and secure transactions between farmers and buyers, but concerns related to transparency and traceability in the seed supply chain, counterfeit seeds, middlemen involvement, and inefficient processes in the agricultural ecosystem have not received enough attention. To address these concerns, a blockchain-based solution is proposed that brings breeders, farmers, warehouse owners, transporters, and food corporations to a single platform to enhance transparency, traceability, and trust among trust-less parties. A smart contract updates the status of seeds from a breeder from submitted to approved. Then, a non-fungible token (NFT) corresponding to approved seeds is minted for the breeder, which records the date of cultivation and its owner (breeder). The NFT enables farmers to keep track of seeds right from the date of their cultivation and their owner, which helps them to make better decisions about picking seeds from the correct owner. Farmers directly interact with warehouses to purchase seeds, which removes the need for middlemen and improves the trust among trust-less entities. Furthermore, a tender for the transportation of seeds is auctioned on the basis of the priority location loc_p, Score, and bid_amount of every transporter, which provides a fair chance to every transporter and restricts the monopoly of a single transporter. The proposed system achieves immutability, decentralization, and efficiency inherently from the blockchain. We implemented the proposed scheme and deployed it on the Ethereum network. Smart contracts deployed over the Ethereum network interact with React-based web pages. The analysis and results of the proposed model indicate that it is viable and secure, as well as superior to the current seed supply chain system.

Introduction

In developing countries, the agricultural sector is a major sector for the livelihood of its citizens. For sustainable agriculture, seeds are the primary and most critical ingredients. The performance of all other farming inputs is largely reliant on the quality of the seeds. According to estimates, the direct contribution of high-quality seeds to overall production ranges from 15% to 20%, depending on the crop, and can reach up to 45% with the effective management of other inputs [1]. In past decades, there have been major advancements in the seed sector. Government bodies have executed a major re-structuring of the seed industry to empower the seed infrastructure. To survive in the competitive market and effectively contribute to the national effort to increase food production in order to achieve food and nutritional security, seed corporations must urgently transform themselves in line with the industry in terms of infrastructure, technologies, approaches, and management culture [2].
Breeders procure seeds and get them validated through a government entity such as a food corporation, which validates the seeds and stores them in its warehouses, from which they are provided to farmers based on demand [3]. The food supply chain is critical in terms of its impact on a country's economy and its relevance to sustaining this essential sector of any country. In this supply chain, the exchange of goods relies on complex, paper-based settlement processes, and these processes are not very transparent, with a high risk to breeders and farmers during the exchange of value. Since transactions are prone to fraud, middlemen step in, increasing the total costs of remittances and making it difficult for the farmer to receive the right product overall.

Existing seed supply chain solutions in the agricultural sector focus on enhancing transparency between producers, i.e., farmers, and consumers, with an aim to minimize intermediaries between end-users and producers [4]. In addition, systems supporting the traceability of food products and the identification of the original sources of products [5,6] have been suggested. Furthermore, the agricultural system financially benefits from the identification of buyers ready to pay additional charges for a specific product [7,8]. For the optimization of food supply chain purchasers' trading portfolios, the Practical Byzantine Fault Tolerance (PBFT) algorithm was laid out and validated using a consortium blockchain [9,10]. However, major concerns in agriculture related to seed distribution have not received enough attention. There is a need for a seed distribution framework that allows farmers to keep track of seeds from breeders throughout the chain and to identify the sources of seeds. In addition, such a framework should support transparent and fair bidding and tender mechanisms for logistics.

Blockchain is an immutable technology consisting of interconnected blocks of data, each containing a list of transactions and a unique reference to the previous blocks [2,11-13]. By assigning unique digital identifiers, blockchain enables the easy traceability of food products across the supply chain, incorporating essential information such as batch numbers and expiry dates [3,14]. This ensures transparency and facilitates the efficient tracking of food products from their origin to the end consumer. Implementing a food ledger and transaction register through blockchain technology has the potential to prevent fraud and enhance traceability, enabling the identification of the sources of foodborne illnesses [15].
Our Contribution: We propose a blockchain-based solution that not only significantly reduces corruption but also optimizes the transaction of seeds for farmers and government officials. The proposed approach is a decentralized application that keeps an immutable record of every transaction between farmers and the Food Corporation of India (FCI). It allows farmers to keep track of seeds throughout the supply chain across all stakeholders and to identify the original sources of seeds. In addition, the system grants non-fungible tokens corresponding to approved seeds that explicitly state their date of cultivation and their owner, making the system more transparent. Furthermore, it enables a fair auction-driven transportation system for logistics. The proposed solution is implemented and deployed in a P2P (peer-to-peer) network and is executed on a two-phase verification system that primarily comprises the nodes (FCI, farmer, warehouse, transport) in the chain. Farmers looking to buy seeds and fertilizers can check prices and availability at various FCI warehouses within a pre-specified location. Our proposed solution offers an effective mechanism for farmers to conveniently purchase seed products from warehouses and carry out transactions using Ethers (ETH) on a private Ethereum network.

The rest of this work is organized in the following manner. Section 2 describes the existing seed distribution system and its limitations. Section 3 describes blockchain, along with its types and smart contracts. The proposed SeedChain framework is discussed in Section 4, while Section 5 discusses its implementation along with its features and security analysis. Finally, Section 6 discusses the conclusion and future work.

Existing Systems

The existing seed supply chain in agriculture employs minimal technology and is based upon the manual updating of records, which relies on tangible paper records and is prone to malpractices, such as the misrepresentation of product availability and the black marketing of important and in-demand seeds, as depicted in Figure 1. The government's role in the pricing, sale, and purchase of products creates significant opportunities for corruption [16]. In India, corruption at Food Corporation of India (FCI) warehouses is often reported, wherein officials input incorrect information into their government databases. For example, if a farmer purchases 50 kg of seeds, FCI officials would often record it as 100 kg sold, and the remaining 50 kg are back-channeled and sold later, creating an expensive market for the same. While such actions lead to increased corruption in the government, they also make it difficult for a poor farmer to access good-quality seeds and fertilizers, which is a huge problem in the initial stages of the food supply chain [17]. For agri-food, a detailed survey was conducted, and a framework corresponding to the Vietnamese cashew nut business [18] was discussed. To efficiently address concerns in Iraqi agriculture, blockchain technology has been incorporated so that the sector can benefit from growth-promoting solutions to enduring problems. It can increase productivity and competitiveness by improving data management, accountability, and intelligent contracts. Notwithstanding these challenges, blockchain's advantages, such as its efficiency and transparency, make it a wise investment for Iraq's agriculture industry [19]. The potential of blockchain technology in precision agriculture, food supply chains, crop insurance, and agricultural product transactions is examined in [20], taking into account both theoretical frameworks and real-world applications.
It also tackles the difficulties in logging farm-owner transactions and creating an ecosystem powered by blockchain for the food and agriculture industries. A blockchain-based system, "AgriOnBlock", was designed to address issues in the agriculture industry and to enhance transparency among various stakeholders, such as bankers, retailers, customers, farmers, wholesalers, etc. [21]. It intends to handle relevant issues within the industry and efficiently connect stakeholders by utilizing Ethereum smart contracts and Internet of Things (IoT) devices. A blockchain framework that is based on the IoT and incorporates an artificial intelligence system is intended to oversee and control smart water management [22]. An IoT-based smart water irrigation system is recommended to efficiently address the water crisis in the agriculture industry today, taking into consideration several factors, such as fertilizer quality. Most of the existing techniques primarily focus on food supply and food security, which should benefit all stakeholders by increasing transparency and trust among all of the stakeholders involved in the farming sector. However, seeds, which are the pioneering and fundamental elements in the farming sector, have not received enough attention.

Moreover, in the Indian context, carrying huge amounts of cash is precarious, especially in Northern Indian rural areas, as thefts and burglaries lead to significant losses. This creates a need for a cashless system so that the need for physical money is eliminated when purchasing input materials for farming. Also, at times, a farmer has to travel to various FCI warehouses before finding products with the required availability and quality [23]. This leads to an enormous wastage of time as well. While we intend to solve these ground-level problems in rural areas and decrease the pitfalls for farmers, as well as simplify paperwork for the government, the broader vision of India 2030 is also fulfilled, as this development in the agricultural field will lead to an increase in digital literacy among farmers as well.

The main limitations of the existing system are as follows:

1. Lack of digitization: Every monetary transaction and storage entry is recorded on paper. Piles of record registers take up a great amount of space in the form of filing cabinets. These are susceptible to manipulation, theft, fire, water, and bugs. Searching for a record would become a cumbersome task and would lead to a waste of time and labor.

2. Extensive corruption: The easy manipulation of records largely contributes to corruption. Authority figures might authorize a fake transit of goods, and the money would get transferred to a bogus account. The seeds are often hoarded so that a state of scarcity can be established and are then sold for a higher price.

3. Opaque operations: In the existing system, no operation is logged at any step of the supply chain. In case of any exigency, there would not be any chain of custody to identify the point where things went wrong. There is no transparency in operations, due to which it is easy to hide bogus transactions and shipments.

4. Inefficient and slow: There is no automation of any of the tasks, starting from breeder registration and ending with a farmer's purchase of a packet of seeds. Entering and searching paper records is extremely time-consuming, which becomes a burden to the entire system.
5. Security: Records are kept in a file cabinet in an unguarded room, which any intruder can visit and then view the details without any restrictions. This is conducive to data leaks, which could be catastrophic to farmers, as their account details and seed purchase routines could fall into the wrong hands.

6. Middleman: The present system comprises middlemen who take advantage of the manual system and sell seeds at a greater price than the maximum retail price by manipulating the taxes to their advantage. In addition, expired seeds are sold with normal seeds. The practice of hoarding and creating a fake demand allows middlemen to sell seeds at a higher price, which causes trouble for farmers.

Preliminaries

This section delves into blockchain technology, smart contracts, and various classifications of blockchain systems.

Blockchain Technology

The term blockchain was coined in 2008 and came into practice in 2009 as the core mechanism in Bitcoin [24]. Blockchain is analogous to a public ledger/database that is shared among peers in a network and stores records of every transaction made on a chain of blocks. Decentralization, traceability, and persistence are the key characteristics of blockchain technology.

Decentralization: Conventional systems require every transaction to be approved by a centrally trusted authority, which results in additional costs and poor efficiency. On the contrary, transactions that take place within a blockchain network can be executed directly between two users, obviating the necessity for intermediary supervision. Blockchain technology consequently reduces operational and development expenditures by a substantial amount.

Traceability: Each blockchain transaction is validated and subsequently documented with a timestamp. This feature grants users the ability to inspect and trace historical records from any node within the decentralized network, thereby augmenting the transparency and traceability of the data that are stored.

Persistence: In the blockchain network, each effectively completed transaction is confirmed and recorded in blocks, rendering falsification extremely difficult. Furthermore, prior to being disseminated, every block is validated by neighboring nodes, which guarantees the confirmation of transactions and simplifies the process of detecting tampering.

Blockchain comprises a series of blocks that contain records of all transactions that have been carried out across the network. The initial block in the blockchain sequence is referred to as the genesis block and lacks a preceding block, known as the parent block. Subsequent blocks reference the previous block by its hash value, establishing a sequential chain. Each block consists of a body and a block header, as depicted in Figure 2. Specifically, the block header includes the 256-bit parent block hash and the Merkle tree root hash, which summarizes all block transactions, as well as the block version, a 4-byte nonce, and a timestamp expressed in seconds.
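To make the header layout above explicit, the fields can be written as a struct; the following fragment is purely illustrative (field names are our assumptions), since block headers are maintained by the blockchain client itself rather than by contracts:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative only: the block-header fields described above.
contract BlockHeaderSketch {
    struct BlockHeader {
        uint32  version;     // block version
        bytes32 parentHash;  // 256-bit hash of the parent block
        bytes32 merkleRoot;  // Merkle tree root hash over all block transactions
        uint32  timestamp;   // time expressed in seconds
        uint32  nonce;       // 4-byte value varied during mining
    }
}
```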
Smart Contracts

These are contracts crafted in Solidity that are implemented between the deployer and the recipient and act as independent contracts binding the parties involved, as depicted in Figure 3. These contracts can be programmed, deployed on the chain, and triggered in the event of any action. Thus, they are automated, and once deployed on the chain they cannot be altered, which implies that these programmable contracts are non-disputable and hence play an important role in establishing trust in a trust-less environment. If there is a lack of confidence in a deal between two parties who do not fully trust each other, a smart contract written to eliminate such issues can be deployed on a blockchain and triggered in real time [25,26]. For instance, a smart contract deployed on the Ethereum blockchain can record a transaction between a farmer and a warehouse whenever one takes place. Whenever an order is placed, the contract is triggered and keeps updating the status of the order on-chain in an automated manner. Since this contract is already deployed and its logic cannot be altered, the status of the order cannot be manipulated by anyone, which yields a list of records that cannot be changed and are non-disputable (a minimal Solidity sketch of such an order-tracking contract is given at the end of these preliminaries). The contract is responsible for maintaining faith in the seed-purchasing process, and all the relevant parties have a limited scope of participation in the purchasing process [27].

Different Types of Solutions Based on Blockchain Technology

For efficient seed supply chain management, a decentralized application that makes use of blockchain and smart contract technologies may be a good option [28,29]. There are four main categories of blockchain systems available for use:

1. Public blockchain: Nodes in a public blockchain are arranged so that anyone can become part of the blockchain and participate in mining future blocks of the chain. Such a chain, accessible by the general public without any restrictions, is termed a public blockchain, as depicted in Figure 4.

2. Private blockchain: A private blockchain refers to an arrangement of nodes for a restricted network rather than one open to everyone willing to contribute processing power. Such a chain is also referred to as a managed blockchain, as only the central authority permits a node to join the chain, as shown in Figure 5.

3. Hybrid blockchain: In a hybrid blockchain, as depicted in Figure 6, an organization can join a chain that combines the best of both public and private blockchains. It can create a permission-based system alongside a permissionless one. In this manner, the administering organization can control who can access specific data and what data are open to the public.

4. Consortium blockchain: Figure 7 depicts the consortium blockchain, which shares similarities with the hybrid blockchain in having both public and private features. The differentiating factor is that various organizations collaborate in a decentralized network, eliminating the risk of one entity's monopoly over the entire network.
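As referenced above, here is a minimal sketch of an order-tracking contract of the kind described for the farmer-warehouse interaction. The contract, function, and event names are our own illustrative choices, not the paper's deployed interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Minimal sketch of the order-tracking logic described above.
contract OrderTracker {
    enum Status { Placed, Accepted, InTransit, Delivered }

    struct Order {
        address farmer;
        string  seedName;
        uint256 quantity;
        Status  status;
    }

    Order[] public orders;

    event StatusChanged(uint256 indexed orderId, Status status);

    /// A farmer places an order; the record becomes immutable on-chain history.
    function placeOrder(string calldata seedName, uint256 quantity)
        external returns (uint256 orderId)
    {
        orders.push(Order(msg.sender, seedName, quantity, Status.Placed));
        orderId = orders.length - 1;
        emit StatusChanged(orderId, Status.Placed);
    }

    /// Each hand-off in the supply chain advances the status by one step,
    /// so the chain of custody is recorded automatically and cannot be forged.
    function advance(uint256 orderId) external {
        Order storage o = orders[orderId];
        require(o.status != Status.Delivered, "order already delivered");
        o.status = Status(uint8(o.status) + 1);
        emit StatusChanged(orderId, o.status);
    }
}
```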
The Proposed Framework

Our proposed solution involves establishing a sense of trust among trust-less parties, i.e., farmers who are willing to purchase raw materials and warehouses where all quality seeds can be found. The introduction of blockchain technology also improves relationships with third-party logistics and FCI while efficiently managing the approval of new seeds from various breeders within the framework.

A total of five entities play a vital role in this ecosystem, as depicted in Figure 8: breeders, government agencies (in our case, the Food Corporation of India (FCI)), farmers, warehouses, and a third-party logistics company. The entire supply chain, right from the cultivation of seeds, is digitally recorded to ensure that all the necessary transactions, ranging from approving a particular seed quality to capturing the purchasing history of a farmer, are available in a non-disputable fashion. Entities such as breeders and third-party logistics companies are in a direct relationship with FCI, and their respective roles are highlighted, showing the shift of these manual processes to on-chain recorded processes in a blockchain environment. The proposed algorithms are discussed below.

Farmer Registration: Prior to purchasing any seeds, the farmer first calls Algorithm 1 to register him- or herself. Every farmer F_N is expected to provide his or her phone number F_P, email F_E, and unique ID F_Id. Input: F_N, F_P, F_E, and F_Id. Output: "Farmer registered successfully". Breeder registration (Algorithm 2) follows the same pattern; a duplicate name is rejected with "Breeder exists/Choose a different name to sign up".

Transporter Registration: Before participating in the bidding, every transporter registers him- or herself on the portal using Algorithm 3. Every transporter T_N is expected to provide his or her phone number T_P, email T_E, and unique ID T_Id. In addition, every transporter enters three preferred locations Loc_i (i in {1, 2, 3}) based on his or her preference for operating services, with Loc_1 and Loc_3 as the highest and lowest priorities, respectively. A duplicate name is rejected with "Transporter exists/Choose a different name to sign up".

Approve Seeds: Breeders submit seeds along with their meta-data S_M, i.e., the seed name S_N, the location S_loc where the seeds are harvested, the harvest information S_harv, and their quantity S_Q, to FCI, which calls Algorithm 4 to check the quality of the seeds and, based on that, mints an NFT as a certificate, as shown in Figure 9, to approve the seeds, or otherwise rejects them (output: "S_M: Rejected"). The flowchart for approving seeds is depicted in Figure 10.

Order Seed: Every farmer creates an order O_F = {S_N, S_Q, F_N, F_Q}, where S_N and S_Q are the name and quantity of seeds, respectively, while F_N and F_Q are the name and quantity of fertilizer. To place an order for seeds, the farmer calls Algorithm 5, whose input is the order with value δ_Eth = order value + gas fees. The flowchart for ordering seeds/fertilizers is depicted in Figure 11.

Bidding: To select a transporter to successfully complete bid B_i, FCI calls Algorithm 6 (Bid: select transporter T for bid B_i).
Output: bid B_i is allocated to transporter T_j. The algorithm iterates over all bids and all registered transporters (for each i in Bid, for each j in Trans), collecting the eligible transporters via T.add(Trans[T_idj]) before allocating the bid.

Implementation

To implement the proposed system, a decentralized application was built encompassing the basic functionalities of the proposed system. The application is a JavaScript-based web application whose core logic is defined by a smart contract residing on the Ethereum blockchain [30]. This DApp provides functionality for registering and verifying the participating entities (breeders, farmers, and transporters) and for placing and tracking seed orders, as laid out in the algorithms above.

Technology Stack Used

This subsection describes the technology utilized in the implementation of our proposed system.

1. Ethereum and Solidity: Ethereum is an open-source blockchain-based platform employed to create and share business, financial-services, and entertainment applications; our smart contracts are written in Solidity for various points in the supply chain, especially for interactions between two adjacent entities, such as breeders and FCI or farmers and warehouses. These smart contracts help implement features provided by blockchain, such as storing immutable records of purchases by farmers from the government or documenting the quality of seeds approved by FCI.

2. Web3 JS Library: Web3 is a collection of libraries that incorporates and facilitates blockchain applications on the World Wide Web. We have used web3.js, an Ethereum JavaScript API that connects the smart contracts deployed on the chain at the back end to the front end, i.e., the portal for breeders and FCI and the website available to farmers for purchasing seeds. Therefore, while the purchase of seeds is initiated at the front end by the farmer, the recording of the purchase and the transfer of the order value from the farmer's wallet to the warehouse address are executed by the contract and synced with the help of web3.js.

3. Ganache and Truffle: Ganache and Truffle are essential for testing the smart contracts and assist in creating a virtual environment to understand the behavior of the deployed contracts in different scenarios. With the help of these tools, we were able to develop test cases and expand the scope of the contract so that it does not fail or produce unexpected results in case of exigency.

4. React JS: React is the open-source JavaScript library used to develop the front-end user interface and add functionality to the different portals, such as navigation between pages and the registration and login of different entities. The web pages displayed to the users and the interaction of users with the portals are made possible with the help of this library.

5. Inter Planetary File System (IPFS): IPFS is a peer-to-peer protocol for efficiently storing and sharing data in a distributed file system. It is designed to hold data in a manner such that no single entity in the network holds the entire table of data. We have utilized IPFS to store data from seed applications; the details can be retrieved later with the help of the generated hash.

6. Metamask: This wallet is used to assist in the transfer of Ethers among different parties and between parties and smart contracts, in instances where the smart contracts hold Ethers until an event has been successfully concluded.
7. MongoDB: Since details such as user login credentials for entities like breeders, farmers, and logistics companies cannot all be stored on-chain, we have used MongoDB to create a back-end environment that stores these details and is integrated with the user interface.

The Architecture of the Proposed System

Due to Ethereum's significant smart contract support, this largely peer-to-peer network was used to develop the blockchain-based seed supply chain. The proposed system's architecture is shown in Figure 12. Several entities (computers) connected as nodes on the internet make up a peer-to-peer network. Without a central authority, transactions are passed from one peer to another. A blockchain, which is a type of public distributed ledger, houses all of the network's transactions. Each peer also holds a synchronized copy of this public ledger for comparison with the other peer nodes. Additionally, any node (peer) that tries to interfere with the network will be immediately removed from it. Every node in the Ethereum network runs the Ethereum Virtual Machine (EVM). In Ethereum, a block takes an average of 15 s to mine, and an ETH block reward is given to the successful miner. The user committing each transaction pays the gas cost incurred by that transaction's execution, and the gas cost is credited to the miner's account.

Smart contracts are designed in Solidity, while React is utilized to develop the user interfaces. Users can engage with the Ethereum blockchain through these interfaces and with the blockchain's smart contracts using DApps. DApps thus provide a user interface for the back-end smart contract that updates the blockchain with data. The transactions are signed and carried out using the Metamask wallet plugin. The Truffle framework is used to test and deploy the DApp on regional test networks using Ganache, then on the open test network Rinkeby, and finally on the main Ethereum network.

Designing the Smart Contract

As shown in Table 1, smart contracts are designed to carry out tasks associated with seed ordering, enable interactions between FCI and transporters and between farmers and warehouses, perform seed approval, and create matching NFTs for accepted seeds. A smart contract uses logic to satisfy all requirements and consists of four fundamental functionalities. Because storing documents such as files or photos on-chain requires funds, we store them on IPFS, which gives us a hash that is kept on the blockchain. Table 1 lists the methods by group (function used, with explanation):

Ordering Seeds:
- addOrder(): called to store the order placed by a farmer.
- acceptOrder(): authorized warehouses call this to accept the placed order.
- reqTransit(): called to request the transit of the order shipment; as a result, the status of the order shifts to "REQ-TRANSIT".
- ackTransit(): an authorized logistics company calls this function and updates the status to "TRANSIT ACKNOWLEDGED".
- reqDelievery(): the transporter (transit) calls this method to request admission of delivery and simultaneously change the status to "REQUESTING DELIVERY".
- ackDelievery(): once the status is changed to "DELIVERY ACKNOWLEDGED", this is the final payable method, which can only be called by the farmer.

Seeds Approval:
- addSubmission(): adds a new submission by the breeder to the chain and updates the status of the process to "SUBMITTED".
- acceptSubmission(): FCI calls this to accept the submission, which updates the status to "ACCEPTED".
- approveSubmission(): FCI calls this to mint an NFT for the seeds, containing the seed name, timestamp, and approved quality standards.

Warehouse:
- addWarehouse(): FCI calls this to add a new warehouse to the list of approved warehouses.
- remWarehouse(): FCI calls this to remove a warehouse from the list of approved warehouses.

Integration

The Remix IDE (remix.ethereum.org, accessed on 3 March 2024) was used to develop this contract, and the Ropsten test network, a test blockchain network, was used for deployment. An Application Binary Interface (ABI) and a contract address are produced following the smart contract's successful compilation, and they are copied and used. The contract is then initialized in the NodeJS code using the ABI and contract address, making it possible to call the methods described earlier. The project as a whole was tested on a local network before the smart contract was released to the Ropsten test network. Users connect to Metamask through the front end of a web browser such as Chrome, and whenever a transaction needs to be made that will modify the blockchain, or a user-end method needs to be called that will modify it, a Metamask popup appears to request permission and display the gas price required to execute the transaction. Accessing data stored on the blockchain is free; gas fees in Ether are incurred only when altering data, i.e., each time a user changes the state of the blockchain. The time taken by the transaction is inversely proportional to the gas price entered by the user; sometimes the transaction can be canceled, too. When the DApp loads, a contract instance is created and stored in the state of the corresponding React class component, which is passed to its child components, from where the contract methods can be used.

The successful execution of our blockchain-based seed supply chain proved to be an effective strategy for reducing the need for middlemen in the seed distribution process. The gas fees paid for transactions are significantly lower than the charges in the current system and may be further optimized by using better smart contracts. The implementation successfully provides the proof of concept that a decentralized solution can be used for seed distribution. The proposed system is faster and provides traceability and immutability at a reasonable cost. Every action on the blockchain that adds new data requires a transaction and comes with a small transaction cost called a gas fee, which is an important consideration for the running costs of the widespread and prolonged operation of a fundamental system like the proposed one. Figure 13 depicts the gas fees used in our implementation.

Feature Analysis

This section delves into the characteristics of the proposed system. Table 2 highlights the distinctions between the proposed approach and the existing framework.

1. Decentralization: The proposed workflow transforms the functionality of each entity involved in the supply chain, as there is no single authority that holds data centrally and can manipulate them to an extent that harms anyone in the chain. Embedding blockchain in the existing procedure enables a decentralized procedure for storing relevant data on the chain, such as FCI-approved seeds in the form of a minted token and the transaction records of the purchases carried out by the farmer.
2. Accountability: Updating the status in real time by triggering the appropriate function in a smart contract enables us to identify the responsible entity in the supply chain as accountable, and hence to figure out where the process has halted in order to optimize the operation of the chain in the future.

3. Transparency: After ordering seeds, a farmer can track his or her order in real time, which ensures that no foul play is possible and that the farmer does not have to worry about the order getting misplaced, as the responsible entity can be tagged very easily.

4. Immutability: All data written to a blockchain are permanent; even a minor modification to the data drastically changes the hash of the next block, which disrupts the entire chain. Since a purchase transaction between a government entity, i.e., the warehouse, and the farmer is recorded onto the chain in this fashion, the record of the transaction is non-disputable and can be considered legally binding by both parties.

5. Security: Modifiers in smart contracts like "onlyFCI" ensure restricted access to calling functions, which enhances security, and most of the payable functions can only be called via the owner address, which means that the contract cannot be modified; unauthorized access is only possible if a person steals the Metamask wallet of the government entity that deploys the contract, i.e., the Food Corporation of India (a minimal sketch of such a modifier is given at the end of the security analysis below). Further, the blockchain records the contents of all transactions in an encrypted format, due to which the personal data submitted remain anonymous.

6. No middlemen: Middlemen are largely responsible for corruption in the supply chain, as they are capable of modifying transactions to create a shortage of products in the warehouse. Corruption is eliminated because all of the processes are automated and the transactions are recorded, so that modification is restricted.

7. Trust-less: Since the existing system comprises multiple parties, smart contracts are deployed to ensure a convenient system in which everyone can work, establishing mutual trust among all entities by letting different entities interact with each other without the need to trust each other.

8. Auction-driven transportation tenders: Tenders for the transportation of seeds are auctioned, and transporters are granted tenders on the basis of their priorities for the location where they operate, their CIBIL score, and the price they quote. Thus, a monopoly of transporters in the system is prevented, and the proposed system provides an equal opportunity for each transporter to participate in tendering.

Security Analysis

Blockchains are open to the 51% attack. If an individual or entity somehow gains control over more than 50% of the computational power of the blockchain, the attacker can prevent new transactions or cause double-spending while in control of the network. Although previous transactions remain immutable, changing anything will lead to a mismatched hash and the disruption of the whole blockchain.

Decentralized systems like blockchains require extensive computational resources and are usually accompanied by huge power draws. As we move toward a more energy-conscious future, unless there is an advancement in clean energy generation and transfer, this will remain a limitation of this proposal. There are also large setup and development costs, but these may be justified by the system's minimal fraud and secure transfers.
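Returning to the access-control point made in the feature analysis above (item 5), the following minimal Solidity sketch shows how a modifier such as "onlyFCI" restricts privileged calls. The function names addWarehouse and remWarehouse are taken from Table 1; the rest of the surrounding code is our own illustrative choice:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Minimal sketch of modifier-based access control as described above.
contract FciControlled {
    address public immutable fci; // set once at deployment to FCI's wallet

    constructor() {
        fci = msg.sender; // the deploying government entity (FCI)
    }

    /// Reverts unless the caller is the FCI address, so privileged
    /// functions cannot be invoked by anyone else.
    modifier onlyFCI() {
        require(msg.sender == fci, "caller is not FCI");
        _;
    }

    mapping(address => bool) public approvedWarehouses;

    function addWarehouse(address warehouse) external onlyFCI {
        approvedWarehouses[warehouse] = true;
    }

    function remWarehouse(address warehouse) external onlyFCI {
        approvedWarehouses[warehouse] = false;
    }
}
```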
A highly technical system that also entails a paradigm shift may be challenging for general users, and as most of India is only now coming online, it may not be prudent to expect everyone to be technically inclined enough to understand the workings of systems like this. This means that the existing system may be required to remain functional for a long time, which introduces financial constraints.

The digitization of the existing seed supply chain plays a crucial role in the agriculture system. Without the digitization of the entire records corresponding to seeds, farmers, and transporters, it may be difficult to reap the actual benefits of this system. This also places more responsibility for digital verification on system administrators, which does not sit well with the current systems and the powers their officers enjoy. There is no leeway for fraud in cryptographically secure digital signatures, but there is a chance that these officers might abuse their power even further, which will need to be checked. The quality of the records uploaded to the blockchain needs to be high to ensure proper functioning. There may also be some initial hesitancy to move over to a digital ownership paradigm.

Cryptocurrencies and decentralized payment systems do not offer a currency as stable as fiat currency, at least not yet. Crypto adoption has grown quickly in recent times, but its future remains uncertain, and the absence of regulatory decisions adds to this uncertainty.

Conclusions and Future Work

In this article, we propose a blockchain-based seed supply chain system using the Ethereum blockchain, with the goal of achieving a transparent and fair bidding mechanism for transporters. In addition, the proposed system is immutable and removes the need for middlemen. The proposed system was implemented using Solidity and deployed on the Ethereum mainnet, while the front end was designed by leveraging React. In addition, our proposed system employs non-fungible tokens for breeders corresponding to their approved seeds, which enables the traceability of seeds by farmers. Thus, the scheme is well suited to a practical seed supply chain system.

Future research can explore opportunities to enhance the system's interoperability with other blockchain networks or with traditional systems used in the seed supply chain industry. This could involve developing standards or protocols for data exchange and communication between different blockchain platforms, or integrating with existing industry standards and protocols.

Figure 12. Architecture of the proposed framework.
Figure 13. Example of the gas fees paid.
Table 1. Methods employed for smart contracts.
Table 2. Comparison between existing and proposed techniques: "-" denotes absence of a feature, and "✓" denotes presence of a feature.
MODIFICATION OF GRAPHITE BY A FLUORINE-CONTAINING OLIGOMER

The impact of the modification process on graphite structure upon application of the fluorine-containing oligomer chloroperfluorododecylfluoro sulfate has been studied. It was shown that the graphite structure does not change during this process; only the particles of chloroperfluorododecylfluoro sulfate break up into monolayers, which spread into the intercrystalline voids of graphite. Under mechanical impact this protects graphite from destruction and, correspondingly, improves some of its tribotechnical characteristics.

Introduction. Graphite, thanks to its unique properties, is widely used in various spheres of human activity, including the production of antifriction materials [1,2]. In solid antifriction materials it can be used as a matrix as well as an additive; in plastic and liquid lubricants as an admixture; and in polymer self-lubricating antifriction materials as a filler. The performance of these materials depends significantly on the properties of the graphite, and graphite is therefore often subjected to modification. Modified graphite is selected for a given material according to specific technical requirements, so widening the range of such modified graphites is an urgent task. The present paper deals with graphite modification using a fluorine-containing oligomer, chloroperfluorododecylfluoro sulfate. As it turned out, the lubricating properties of such graphite are improved markedly. This motivated our interest in studying the effect of this process on graphite structure, in order to determine the mechanism behind the improved antifriction properties.

Materials and methods. The initial materials were natural crystalline graphite and the fluorine-containing oligomer 12-chloroperfluorododecylfluoro sulfate (FDS), obtained from 1-hydro-12-chloroperfluorododecane via treatment with peroxydisulfuryl difluoride under conditions analogous to those described earlier for 1-hydroperfluoroalkanes [3]. Samples of modified graphite were prepared by joint treatment of the graphite-FDS mixture in a vibrating mill, or by friction between two steel discs at a load of P = 1 kg/cm² and a speed of v = 0.35 m/s for 4 minutes. The lubricating properties of the initial and modified graphite were estimated by testing them between two rubbing solid surfaces while increasing the speed in fixed intervals; the limiting speed was taken as the value at which the friction coefficient grew abruptly. The initial specimens and the treatment products were investigated by X-ray diffraction on a DRON-1 diffractometer with Cu Kα radiation. Samples were examined in the isotropic state in the form of powder or paste. To obtain X-ray diffraction patterns of the product formed by friction, the metal sample, together with the layer rubbed onto it, was placed in the diffractometer; in reflection geometry this yields diffraction patterns of the surface layer of the graphite-FDS mixture.

Results and analysis. Let us first consider the structure of the initial materials. The structure of graphite is well known [3]; its main diffraction maximum (002) lies at 2θ = 26.6° (d002 = 3.35 Å). Figure 1 shows the diffraction pattern of FDS, in which the numbers 2, 3, 4, 5 and 6 denote the orders of reflection corresponding to the main reflection d1 = 17.80 Å. There is also the reflection d2 = 4.90 Å and a wide maximum in the region of 2θ from 32 to 44°.
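As a consistency check (our addition; the paper states Cu Kα radiation but not the wavelength, so λ ≈ 1.5406 Å is an assumption), the quoted angle and spacing for the graphite (002) reflection agree via the standard Bragg relation:

$$ d = \frac{\lambda}{2\sin\theta}, \qquad d_{002} = \frac{1.5406\ \text{Å}}{2\sin(13.3^\circ)} \approx \frac{1.5406\ \text{Å}}{0.460} \approx 3.35\ \text{Å}. $$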
The diffraction picture obtained from FDS is analogous to that obtained from paraffin and other substances possessing a layered structure. The period d1 = 17.80 Å is the inter-laminar period and characterizes the thickness of one layer. The system of parallel layers is well ordered and extends over significant distances, as evidenced by the large number of orders of reflection from the main period d1: six reflection orders are seldom observed even in highly ordered layered structures. The reflection corresponding to 2θ = 18.15° (d2 = 4.90 Å) is not inter-laminar but intra-laminar: the inter-planar spacing d2 = 4.90 Å and the wide maxima at 32-44° characterize intermolecular spacings within the layer. In the diffraction pattern of polytetrafluoroethylene (PTFE), the high-molecular-weight analogue of FDS, the main maximum also corresponds to d2 = 4.90 Å, and the remaining two reflections lie at 2θ = 37.3 and 41.5°. The intra-laminar maxima of FDS correspond to the reflections of PTFE, but the maxima at 37.3 and 41.5° are not resolved. Thus, it can be stated that the molecules in an FDS layer are positioned approximately as in the structure of PTFE, but with a lower degree of order.

Since the molecule axes in the PTFE structure are perpendicular to the elemental cell plane, the molecular axes in the FDS structure should be perpendicular to the layer plane. Figure 2 gives a scheme of the structure of one FDS layer and of an FDS crystallite consisting of parallel layers. The molecules in the layer are positioned in an approximately hexagonal lattice with an intermolecular spacing of 5.6 Å. The length of all molecules is strictly constant and equals 17.80 Å, which proves that the FDS molecules are monodisperse and have similar molecular masses. In the PTFE structure, 13 CF2 groups fit into a period of 16.9 Å [4]; one group along the chain axis thus corresponds to a segment of 1.3 Å. The conformation of the OSO2F end groups is not known precisely; we can assume approximately that the CF2-O-SO2F fragment has a length of 4.5 Å along the chain axis [5]. Then 17.8 − 4.5 = 14.3 Å and 14.3/1.3 = 11, which, together with the CF2 group contained in the end fragment, corresponds to 12 CF2 groups, in agreement with the dodecyl chain. Thus, FDS molecules in one layer are packed in a hexagonal lattice with an inter-chain spacing of 5.6 Å; the molecule length, equal to the thickness of a layer, is 17.80 Å. In the initial FDS the layers are parallel and packed into crystallites consisting of many layers.

Now let us consider graphite-FDS mixtures with FDS concentrations of 1, 5, 10 and 15%. After crushing of the graphite-FDS mixture in a mortar, the structures of the components remain practically the same: the diffraction pattern of a mixture treated in a mortar is the sum of the diffraction patterns of the components with unaltered initial structure. Figure 3 shows the main reflection of graphite (002) and all of the above-listed reflections of FDS. Further treatment of the mixtures was performed in a vibrating mill and by friction between flat counter-bodies. The structural changes for these two treatments turned out to be similar, but the rubbed layers on the counter-bodies, because of their variable thickness, are not good samples for the diffractometer; this is why Figure 3 gives the diffraction patterns of samples before (Fig. 3a) and after (Fig. 3b) treatment in the vibrating mill. Comparison of the diffraction patterns shows that as a result of the treatment the graphite (002) line changes only insignificantly. It becomes somewhat wider, which points to a definite decrease of the graphite particle sizes after mechanical impact; for analogous treatment of pure graphite in the vibrating mill we observe a much greater widening of the line.
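The link between line broadening and crystallite size invoked here is the standard Scherrer relation (our addition; the paper does not state which relation was used, and the shape factor K ≈ 0.9 is an assumption):

$$ L = \frac{K\,\lambda}{\beta\cos\theta}, \qquad K \approx 0.9, $$

where L is the crystallite size, β is the broadening (in radians) of the (002) line, and θ is its Bragg angle. A broader line thus directly signals smaller crystallites, consistent with the sizes quoted below.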
In the initial graphite, the dimensions along the axis are at least 0.5-1 µm; after treatment in the mill they are reduced to 150 Å. On treatment of a mixture with 1% FDS, the graphite crystallite sizes along the axis equal 400 Å, and with 5% FDS, 800 Å. The bigger the graphite crystallites, the less they are destroyed by friction in the vibrating mill; thus, the introduction of FDS into graphite protects the graphite crystallites from destruction.

As for the lines attributed to FDS, the main change after treatment in the vibrating mill is that all orders of reflection connected with the inter-laminar period d1 = 17.80 Å disappear, while the intra-laminar reflection d2 = 4.90 Å is preserved and clearly observable on the diffraction patterns (Fig. 3b). The disappearance of the inter-laminar reflections implies that the layers as such (or at least a significant part of them) are preserved, but the packages of layers are destroyed. After treatment of the mixtures in the vibrating mill, FDS remains not in the form of particles consisting of packages of layers, but in the form of separate monolayers. It is natural to suppose that these structural changes are associated with separate FDS monolayers being spread and "smeared" over the borders of the graphite particles. The structure of a mixture after treatment in the vibrating mill can thus be pictured as graphite particles covered by a thin FDS film (Fig. 4).

Fig. 4. Position of FDS monolayers (1, 3; d1 = 17.80 Å) in a graphite particle (2; 800 Å).

Such FDS monolayers prevent friction of graphite particles against one another and provide boundary lubrication of the graphite particles; this is precisely what prevents the destruction of graphite particles in the vibrating mill. The disappearance of the orders of lines corresponding to the periodicity d1, together with the preservation of the d2 lines, is detected in the diffraction patterns of mixtures containing 5 and 10% FDS. At an FDS concentration of 1%, its amount is too small for its lines to show on the diffraction patterns; but even in this case, as a result of treatment in the vibrating mill or friction between two flat counter-bodies, the graphite (002) lines do not undergo any significant change, similarly to the mixtures with 5 and 10% FDS and in contrast to pure graphite.

Conclusions. Thus, particles consisting of packages of FDS layers break up into separate monolayers under mechanical impact. Such destruction probably proceeds easily, since the inter-laminar molecular forces of the PTFE-like FDS structure should be very weak. These monolayers, spread over the borders of the graphite crystals, preserve the structure of the separate crystals, and of the whole mixture, under mechanical load and correspondingly ensure a low friction coefficient, both between crystals and at the solid surface of the mixture. Application of graphite modified in this way as a filler in polymer self-lubricating materials, and as an additive to plastic and liquid lubricating materials, should enable a significant improvement of their tribotechnical characteristics.
Noise spectrum of a quantum dot-resonator lasing circuit

Single-electron tunneling processes through a double quantum dot can induce a lasing state in an electromagnetic resonator which is coupled coherently to the dot system. Here we study the noise properties of the transport current in the lasing regime, i.e., both the zero-frequency shot noise as well as the noise spectrum. The former shows a remarkable super-Poissonian behavior when the system approaches the lasing transition, but sub-Poissonian behavior deep in the lasing state. The noise spectrum contains information about the coherent dynamics of the coupled dot-resonator system. It shows pronounced structure at a frequency matching that of the resonator due to the excitation of photons. For strong interdot Coulomb interaction we find asymmetries in the auto-correlation noise spectra of the left and right junctions, which we trace back to asymmetries in the incoherent tunneling channels.

Introduction

A variety of fundamental quantum effects and phenomena characteristic of cavity quantum electrodynamics (QED) has been demonstrated in superconducting circuit QED [1-4]. The equivalent of single-atom lasing has been observed, with frequencies in the few GHz range, when a single Josephson charge qubit is strongly coupled to a superconducting transmission line resonator [5,6]. This progress stimulated the study of a different circuit QED setup where the superconducting qubit is replaced by a semiconductor double quantum dot with discrete charge states. Incoherent single-electron tunneling through the double dot sandwiched between two electrodes can lead to a population inversion in the dot levels and, as a consequence, induce a lasing state in the resonator [7,8]. The potential advantages of quantum dots are their high tunability, both of the couplings and of the energy levels [9-11]. In addition, larger frequencies are accessible since the restriction to frequencies below the superconducting gap is no longer needed. Experimental progress has recently been made in coupling semiconductor quantum dots to a GHz-frequency high-quality transmission line resonator [12-14]. The double quantum dot-resonator lasing circuit differs from the more familiar interband-transition semiconductor laser [15,16], where the cavity mode is coupled to the lowest quantum dot interband transition and which is driven by carrier injection in a p-n junction or via optical pumping. Since the circuit considered here is driven by single-electron tunneling, the lasing state correlates with electron transport properties. This fact allows probing the former via a current measurement [8]. Further information about the system is contained in the current fluctuations. Due to the charge discreteness the noise is shot noise, which has been studied extensively [17-23]. For the double dot-resonator lasing circuit it is therefore important to compare the electron shot noise with the fluctuations of the photons in the resonator. Although more difficult, experimental progress has also been made toward measuring the finite-frequency noise spectrum of electron transport [24]. It contains information about the full dynamics of the system, including the relevant time scales that characterize the transport processes. In this work, we therefore investigate the frequency-dependent noise spectrum of the transport current through the system in and near the lasing regime. It shows pronounced characteristic signals at frequencies close to the eigen-Rabi frequency of the coupled system or matching that of the resonator.

Figure 1. A double quantum dot-resonator lasing circuit. The dot is placed at a maximum of the electric field of the transmission line in order to maximize the dipole interaction. The population inversion in the dot levels, leading to the lasing state, is created by incoherent electron tunneling through the dots, driven by the bias voltage, which is assumed to be high, $eV = \mu_L - \mu_R \gg \hbar\omega_r$.
This paper is organized as follows. In section 2, we introduce the model of a quantum dot-resonator lasing circuit and the methods. We extend the work of [8], where strong interdot Coulomb interaction was assumed, to interaction of arbitrary strength [25]. The method used for the calculation of the noise spectrum is based on a master equation combined with the quantum regression theorem. In section 3, the stationary properties of the resonator, the average current, and the zero-frequency noise are studied. The finite-frequency noise spectrum is evaluated in section 4 in the low- and high-frequency regimes, both for strong and weak interdot Coulomb interaction. We find characteristic symmetric and asymmetric features in the frequency-dependent noise spectrum. We conclude with a discussion in section 5.

The model

We consider the electron transport setup schematically shown in figure 1, where electrons tunnel through a semiconducting double quantum dot coupled to a high-Q electromagnetic resonator such as a superconducting transmission line. The Hamiltonian includes the interacting dot-resonator system, $H_S = H_d + H_r + H_I$, which is responsible for the coherent dynamics. The double dot is described by

$$ H_d = \sum_{j=l,r} \varepsilon_j\, d_j^\dagger d_j + t_c\,\big(d_l^\dagger d_r + d_r^\dagger d_l\big) + U\, d_l^\dagger d_l\, d_r^\dagger d_r, \qquad (1) $$

with $d_j^\dagger$ being the electron creation operators for the two levels in the dots $j$ ($j = l, r$) with energies $\varepsilon_j$, separated by $\varepsilon = \varepsilon_l - \varepsilon_r$, which are coupled coherently with strength $t_c$. Both $\varepsilon_j$ and $t_c$ can be tuned by gate voltages [10,11,26-28]. The interdot Coulomb interaction is denoted by $U$. The transmission line can be modeled as a harmonic oscillator, $H_r = \hbar\omega_r\, a^\dagger a$, with frequency $\omega_r$ and $a^\dagger$ denoting the creation operator of photons in the resonator. The dipole moment induces an interaction between the resonator and the double dot, $H_I$, which will be specified below. We further account for electron tunneling between the dots and the electrodes, with tunneling amplitudes $V_{\alpha k}$ (with $\alpha = L, R$); a sketch of the conventional form of this term is given below. The electrodes, with $H_b = \sum_{\alpha k} \varepsilon_{\alpha k}\, c^\dagger_{\alpha k} c_{\alpha k}$, act as baths. Here $c^\dagger_{\alpha k}$ is the creation operator for an electron state in the electrode $\alpha$. Below, the tunneling between the electrodes and the dots is assumed to be an incoherent process.

The double dot can be biased such that at most one electron occupies each dot. The two charge states $|L\rangle$ and $|R\rangle$ serve as the basis of a charge qubit [29,30]. In this work, we consider two limits: (i) strong $U$ and (ii) weak $U$. In case (i), transport through the double dot involves only one extra third state, namely the empty dot $|0\rangle$, whereas in case (ii), two extra states, $|0\rangle$ and the double-occupation state $|2\rangle \equiv |LR\rangle$, are involved in the transport. In both limits the dipole interaction between the resonator and the double dot is $H_I = \hbar g_0\,(a^\dagger + a)\,\tau_z$, with the Pauli matrix acting in the space of the two charge states, $\tau_z = |L\rangle\langle L| - |R\rangle\langle R|$.
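As referenced above, the dot-lead tunneling term can be sketched in its conventional form. This explicit expression is our reconstruction (an assumption), chosen to be consistent with the amplitudes $V_{\alpha k}$ above and the rates $\Gamma_\alpha(\omega)$ defined in the next subsection, with the left lead coupled to dot $l$ and the right lead to dot $r$ as in figure 1:

$$ H_T = \sum_{k} \big( V_{Lk}\, c^\dagger_{Lk}\, d_l + V_{Rk}\, c^\dagger_{Rk}\, d_r \big) + \mathrm{h.c.} $$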
In the eigenbasis of the double dot and within the rotating-wave approximation, the Hamiltonian of the coupled dot-resonator system for strong interdot Coulomb interaction can be reduced to the Jaynes-Cummings form

$$ H_S = \frac{\hbar\omega_0}{2}\,\sigma_z + \hbar\omega_r\, a^\dagger a + \hbar g\,\big(a^\dagger \sigma_- + a\,\sigma_+\big), \qquad (2) $$

while for weak interdot interaction an extra term $U|2\rangle\langle 2|$ is to be included. In the restricted space of states we have $d_l = |0\rangle\langle L| + |R\rangle\langle 2|$ and $d_r = |0\rangle\langle R| - |L\rangle\langle 2|$, and the Pauli matrix operates in the eigenbasis, i.e., $\sigma_z = |e\rangle\langle e| - |g\rangle\langle g|$, with $|e\rangle = \cos(\theta/2)\,|L\rangle + \sin(\theta/2)\,|R\rangle$ and $|g\rangle$ the orthogonal combination $\cos(\theta/2)\,|R\rangle - \sin(\theta/2)\,|L\rangle$. Here we fix the zero of energy by $\varepsilon_l + \varepsilon_r = 0$. The angle $\theta = \arctan(t_c/\varepsilon)$ characterizes the mixture of the pure charge states, the coupling strength is $g = g_0 \sin\theta$, and $\hbar\omega_0 = \sqrt{\varepsilon^2 + t_c^2}$ denotes the level spacing of the two eigenstates. It can be tuned via gate voltages, which allows control of the detuning $\Delta = \omega_0 - \omega_r$ from the resonator frequency.

Master equation

The dynamics of the coupled dot-resonator system, which is assumed to be weakly coupled to electron reservoirs with a smooth spectral density, can be described by a master equation for the reduced density matrix $\rho$ in the Born-Markov approximation [31,32]. Throughout this paper, we consider low temperatures, $T = 0$, with vanishing thermal photon number and excitation rates. Consequently, the master equation is

$$ \dot\rho = -\frac{i}{\hbar}\,[H_S, \rho] + \mathcal{L}_L\rho + \mathcal{L}_R\rho + \mathcal{L}_r\rho, \qquad (4a) $$

where the dissipative dynamics is described by Lindblad terms of the form

$$ \mathcal{L}_i\rho = \frac{\Gamma_i}{2}\big(2 L_i \rho L_i^\dagger - L_i^\dagger L_i \rho - \rho L_i^\dagger L_i\big). \qquad (4b) $$

The first two terms $\mathcal{L}_{L/R}$ account for the incoherent sequential tunneling between the electrodes and the dots with $\Gamma_\alpha(\omega) = 2\pi \sum_k |V_{\alpha k}|^2\, \delta(\omega - \varepsilon_{\alpha k}) \equiv \Gamma_\alpha$. For the assumed high voltage and low temperature, i.e., in the absence of reverse tunneling processes, we have $L_L = d_l^\dagger$ and $L_R = d_r$ with tunneling rates $\Gamma_L$ and $\Gamma_R$, respectively. For the oscillator we take the standard decay term $L_r = a$ with rate $\Gamma_r = \kappa$. Here we ignore other dissipative effects, such as relaxation and dephasing of the two charge states, which were studied in [8], since such effects only weakly affect the main points we wish to study.

From the definition $I_\alpha(t) \equiv -e\,\mathrm{d}\langle n_\alpha(t)\rangle/\mathrm{d}t$ with $n_\alpha = \sum_k c^\dagger_{\alpha k} c_{\alpha k}$, it is straightforward to obtain the transport current from the electrodes to the dots [33,34], $I_\alpha(t) = \mathrm{Tr}[\hat{\mathcal{I}}_\alpha\, \rho(t)]$, with current superoperators given by

$$ \hat{\mathcal{I}}_L\, \rho = e\,\Gamma_L\, d_l^\dagger \rho\, d_l, \qquad \hat{\mathcal{I}}_R\, \rho = -e\,\Gamma_R\, d_r \rho\, d_r^\dagger. \qquad (5) $$

In the stationary limit, $t \to \infty$, the average current satisfies $I = \tfrac{1}{2}(I_L - I_R) = I_L = -I_R$, consistent with charge conservation.

Current noise spectrum

We consider the symmetrized current noise spectrum

$$ S(\omega) = G_I(\omega) + G_I(-\omega), $$

where $\delta\hat I(t) = \hat I(t) - I$ and $G_I(\pm\omega) = \int_0^\infty \mathrm{d}t\, e^{\pm i\omega t}\, G_I(t)$ with $G_I(t) = \langle \delta\hat I(t)\,\delta\hat I(0)\rangle$. In the Born-Markov approximation, the current noise spectrum can be calculated via the widely used MacDonald formula [35] or the quantum regression theorem [31]. Since we already know the current operators, as expressed in equation (5), it is more convenient to calculate the current correlation function via the quantum regression theorem,

$$ G_I(t) = \mathrm{Tr}\big[\hat{\mathcal{I}}\, e^{\mathcal{L}_{\rm tot}\, t}\,\big(\hat{\mathcal{I}}\,\rho_{\rm st}\big)\big] - I^2, $$

where $\rho_{\rm st}$ denotes the steady-state density matrix. According to the Ramo-Shockley theorem, the quantity measured in most experiments [19] is the total circuit current $I(t) = a\, I_L(t) - b\, I_R(t)$, with coefficients $a + b = 1$ depending on the symmetry of the transport setup (e.g., the junction capacitances). The circuit noise spectrum is thus composed of three components [19,36], $S(\omega) = a^2 S_L(\omega) + b^2 S_R(\omega) - 2ab\, S_{LR}(\omega)$, where $S_\alpha(\omega) = \mathcal{F}\{\delta\hat I_\alpha(t), \delta\hat I_\alpha(0)\}$ are the auto-correlation noise spectra of the currents in lead $\alpha$, and $S_{LR}(\omega) = \big(\mathcal{F}\{\delta\hat I_L(t), \delta\hat I_R(0)\} + \mathcal{F}\{\delta\hat I_R(t), \delta\hat I_L(0)\}\big)/2$ is the cross-correlation noise spectrum between different leads. Alternatively, in view of charge conservation, i.e.,
$I_L = I_R + \mathrm{d}Q/\mathrm{d}t$, where $Q$ is the charge on the central dots, the circuit noise spectrum can be expressed in terms of $S_L(\omega)$, $S_R(\omega)$, and the charge-fluctuation spectrum of the central dots, $S_C(\omega)$ [37-40]. Thus, from the behavior of the auto-correlation and cross-correlation noise spectra, which will be studied in the following, we can fully understand the circuit noise spectrum, including the charge-fluctuation spectrum $S_C(\omega)$ of the central dots. At zero frequency, we have $S(0) = S_L(0) = S_R(0) = -S_{LR}(0)$ and $S_C(0) = 0$ due to current conservation in the steady state.

Figure 2. Panels (a), (b) and (c), (d) describe the incoherent transitions in the dot basis and in the eigenbasis (including the interaction with the resonator), respectively. Panels (a) and (c) correspond to asymmetric transition channels for strong interdot Coulomb interaction, and panels (b) and (d) to symmetric transition channels for weak interdot Coulomb interaction. Furthermore, $\Gamma^+_\alpha = \Gamma_\alpha \cos^2(\theta/2)$ and $\Gamma^-_\alpha = \Gamma_\alpha \sin^2(\theta/2)$ with $\alpha = L, R$.

Stationary properties

Let us first recall the parameter regime for which, according to [8], lasing can be induced in the present setup [12-14]. We consider the level spacing in the dots to be comparable to the resonator frequency $\omega_r$, in the range of a few GHz, and a high-quality resonator with a Q factor assumed to be $5 \times 10^4$, corresponding to a decay rate $\kappa = 2 \times 10^{-5}\,\omega_r$. The coupling of the dot and the resonator, chosen as $g_0 = 10^{-3}\,\omega_r$, is strong compared to the photon decay rate in the resonator, and we assume weak incoherent tunneling with $\Gamma_L = \Gamma_R = \Gamma = 10^{-3}\,\omega_r$, i.e., a few MHz, throughout the paper, unless otherwise stated.

A crucial prerequisite for lasing is a pumping mechanism [5,41], involving a third state, which creates a population inversion in the two-level system. In [8], the empty state $|0\rangle$ in the double dot was considered as the single third state under the assumption of a strong charging energy, $\varepsilon_j + U > \mu_L > \varepsilon_j > \mu_R$. This limit, which we call case (i), is sketched in figure 2(a). On the other hand, the interdot Coulomb interaction may also be weak compared to the level spacing of the charge states; in the tunneling regime we then have $\mu_L > \varepsilon_j$ and $\varepsilon_j + U > \mu_R$. This limit, called case (ii), where two extra states are involved in the incoherent tunneling, is illustrated in figure 2(b).

The question arises which case is better for lasing. Let us first consider the key factor for lasing, i.e., the population inversion $\tau_0$, defined in terms of the stationary populations $\langle i, n|\rho_{\rm st}|i, n\rangle$ of the dot states ($i = e, g$). Explicitly, one finds closed expressions for both cases, with $\Gamma_0 = \Gamma_L + \Gamma_R$. The population inversion does not depend on $\Gamma_L$ for case (i), but it depends on both tunneling rates for case (ii), suggesting that in this case the population inversion is driven by transitions from $|R\rangle$ to both extra states $|0\rangle$ and $|2\rangle$; see figures 2(a) and (b) for cases (i) and (ii), respectively. Although, in general, an additional incoherent tunneling channel reduces the population inversion slightly, for the parameters studied in the present work, i.e., $\Gamma \ll \omega_r$, it approaches the same value for both cases (i) and (ii), $\tau_0 \approx 4\cos\theta/(3 + \cos 2\theta)$, which reaches its maximum, $\tau_0 \to 1$, for $\theta \to 0$. To balance the effective dot-resonator coupling $g = g_0 \sin\theta$ against the population inversion $\tau_0$, following the consideration in [8], we set the interdot coupling strength $t_c = 0.3\,\omega_r$ throughout this work.

The properties of the resonator can be characterized by the average number of photons $\langle n\rangle$ and the Fano factor $F_n \equiv (\langle n^2\rangle - \langle n\rangle^2)/\langle n\rangle$ describing their fluctuations [16].
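For later reference (a short derivation we add here; all three benchmark values appear in the discussion that follows), the photon Fano factor just defined separates the regimes encountered below. For a thermal state the number variance is $\langle n^2\rangle - \langle n\rangle^2 = \langle n\rangle^2 + \langle n\rangle$, while for a coherent state the variance equals the mean:

$$ F_n = \frac{\langle n^2\rangle - \langle n\rangle^2}{\langle n\rangle} = \begin{cases} \langle n\rangle + 1 & \text{thermal state,}\\ 1 & \text{coherent (Poissonian) state,}\\ < 1 & \text{squeezed photon-number distribution.} \end{cases} $$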
When reducing the detuning between the dot and the resonator from large values to zero, we observe that the system undergoes a transition from the nonlasing regime, where $\langle n\rangle < 1$ and $F_n = \langle n\rangle + 1$, to a lasing state with a sharp increase in the photon number. Before we reach the lasing state the photon number distribution has a thermal shape, which explains the value of the Fano factor. At the transition to the lasing regime the amplitude fluctuations lead to a peak in the Fano factor, as shown in figure 3(b). In the lasing state the photon number saturates, and the Fano factor drops to $F_n < 1$, indicating a squeezed photon-number distribution in the resonator. Interestingly, the average photon number in the lasing regime, as well as the corresponding peak in the Fano factor at the lasing transition, are larger for weak interdot interaction, case (ii), than for strong interaction, case (i). Approximately, we obtain the average photon number $\langle n\rangle_{\rm ii}$ for case (ii) [42]; compared to case (i), for which a closed-form expression $\langle n\rangle_{\rm i}$ was derived in [8], we find an increase $\langle n\rangle_{\rm ii} \approx \langle n\rangle_{\rm i} + \Gamma/(6\kappa)$, showing that case (ii) with four levels is better suited for lasing. The difference is due to the existence of one more incoherent tunneling channel, driven as illustrated in figures 2(b) and (d).

As pointed out in [43] for a superconducting single-electron transistor (SSET) coupled to a resonator, the noise spectra of the photon fluctuations are correlated with the zero-frequency shot noise of the current. This fact is illustrated for the Fano factor $F_I = S(0)/2I$ in figure 3(d). For strong detuning in the nonlasing regime, where the dots effectively do not interact with the resonator, the shot noise shows a Poissonian distribution, i.e., $F_I \approx 1$. Near the lasing transition the shot noise is strongly enhanced, with a super-Poissonian distribution. Compared to the Fano factor of the photons, the signal in the shot noise is stronger, with a narrower transition window and a sharper peak. In the lasing state, where the photon number is saturated and the transport current reaches its maximum value, we find sub-Poissonian current noise, $F_I \approx 0.5$, while the photon Fano factor $F_n$ describes a squeezed state of the radiation field in this nonclassical regime, differing from a conventional coherent state with Poissonian distribution.

Noise spectrum

Since in the nonlasing regime the noise spectrum displays only trivial features, we focus in the following on the finite-frequency noise spectra in the lasing regime and at the lasing transition, as shown in the inset of figure 4(a). For tunneling dissipative operators $L_L$ and $L_R$ as defined after equation (4b), it has been demonstrated [44] that all correlation functions can be expanded in terms of the eigenvalues $\lambda_k$ of $\mathcal{L}_{\rm tot}$ and the coefficients $c_k = [\hat V^{-1}\, \hat{\mathcal{I}}_\alpha\, \hat V]_{kk}$. Here $\hat V$ is built from the eigenvectors of $\mathcal{L}_{\rm tot}$, and $\hat{\mathcal{I}}_\alpha$ is the current operator described in equation (5). For example, we have $G_{I_\alpha}(t) = \sum_k c_k\, e^{\lambda_k t}$, where the imaginary parts $\mathrm{Im}(\lambda_k)$ and the real parts $\mathrm{Re}(\lambda_k)$ represent the coherent and dissipative dynamics, respectively. The coherent dynamics follows from the Jaynes-Cummings Hamiltonian, equation (2), with eigenstates [3,45]

$$ |+, n\rangle = \cos\theta_n\, |e, n\rangle + \sin\theta_n\, |g, n+1\rangle, \qquad |-, n\rangle = \sin\theta_n\, |e, n\rangle - \cos\theta_n\, |g, n+1\rangle \qquad (14) $$

and eigenenergies

$$ E_{\pm,n} = (n+1)\,\hbar\omega_r \pm \frac{\hbar}{2}\sqrt{4g^2(n+1) + \Delta^2} \qquad (15) $$

with $\theta_n = \frac{1}{2}\tan^{-1}\!\big(2g\sqrt{n+1}/\Delta\big)$. The typical signal in the noise spectrum is dominated by these eigenenergies, while the linewidth of the signal follows from the jump operators in equation (4b).
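As a quick numerical orientation (our addition, using only parameter values quoted in this paper and working in frequency units where rates are expressed as fractions of $\omega_r$), the Rabi splitting that sets the signal positions discussed in the next subsection can be estimated at resonance, $\Delta = 0$, where $\omega_0 = \omega_r$ and hence $\sin\theta = t_c/\omega_0 = 0.3$:

$$ g = g_0 \sin\theta = 3\times 10^{-4}\,\omega_r, \qquad \delta E\big|_{\Delta=0} = 2g\sqrt{\langle n\rangle + 1} = 6\times 10^{-4}\,\omega_r\,\sqrt{\langle n\rangle + 1}. $$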
Low-frequency regime

Let us first consider the low-frequency regime around $\omega \sim 0$, displayed in figure 4. We find a zero-frequency peak and dip in the auto- and cross-correlation noise spectra, respectively. Both decrease and finally disappear as one approaches the lasing state. The height of the zero-frequency peak as a function of the detuning is shown in figure 3(d). Since in the absence of a resonator we have $S_\alpha(\omega \approx 0)/2I = -S_{LR}(\omega \approx 0)/2I \approx 1$, the peak/dip feature at zero frequency in the noise spectra must be an effect of the resonator. The noise spectra in figure 4 have a Lorentzian shape with linewidth $\gamma_0 \sim \kappa$, determined by the emission spectrum of the photons [32]. In the regime around zero frequency, corresponding to the long-time limit, the noise spectra are determined by the single minimal eigenvalue $\lambda_{\rm min}$, whose real part is dominated by the weakest decay rate, i.e., $\kappa$. For weak interdot Coulomb interaction, where we have to account for one more incoherent tunneling channel (see figures 2(b) and (d)), the low-frequency noise spectra display behavior similar to figure 4(d), except for the enhancement of the zero-frequency peak shown in figure 3. It is worth noting that in this low-frequency regime the relation $S_L(\omega \sim 0) = S_R(\omega \sim 0) = -S_{LR}(\omega \sim 0)$ is still satisfied. However, as we will show below, the cross-correlation noise changes sign beyond the low-frequency regime.

At higher frequencies, but still within the range $|\omega| < \omega_r$, the spectra are no longer Lorentzian due to the contributions from several $\lambda_k$ in equation (12). We find characteristic features showing a step and a peak in the auto- and cross-correlation noise spectra, respectively, as shown in figure 5. The position of the step/peak is nearly independent of the detuning, while its magnitude is sensitive to it. With increasing dot-resonator interaction, both the step and the peak are shifted, as shown in figure 6. These characteristics are a consequence of the coherent dynamics of the coupled dot-resonator system. The step/peak occurs at $\omega = \delta E/\hbar$, where $\delta E = |E_{+,\langle n\rangle} - E_{-,\langle n\rangle}| = \hbar\sqrt{4g^2(\langle n\rangle + 1) + \Delta^2} \approx 2\hbar g\sqrt{\langle n\rangle + 1}$ is the Rabi splitting corresponding to the photon number $\langle n\rangle$. As expected, this coherent step/peak signal becomes weak and even disappears with increasing incoherent tunneling rate (not shown in the figure). Interestingly, as shown in figure 6(b), we find that with increasing dot-resonator interaction the coherent signal for weak interdot Coulomb interaction is not only shifted, but the step also turns into a dip. This is consistent with the coherent signal of the Rabi frequency in the double dot in the absence of the resonator, which shows a dip and a peak in the auto- and cross-correlation noise spectra, respectively [39,46]. It arises from the restored symmetry of the transition tunneling channels (figures 2(b) and (d)). In the low-frequency regime, $|\omega| < \omega_r$, the auto-correlation noise spectra of the left and right leads satisfy $S_L(\omega) = S_R(\omega)$. This is no longer true in the high-frequency regime $\omega \sim \omega_r$ for strong interdot Coulomb interaction, as will be shown in the following subsection.

Regime close to the resonator frequency

Before addressing the noise spectrum in the high-frequency regime, let us briefly discuss its properties in the absence of the resonator. It has been demonstrated [38,39] that the signal of the intrinsic Rabi frequency $\omega_0$ of the double dot can be extracted from the noise spectra.
For instance, the auto-correlation noise spectrum shows a dip-peak structure and a dip at $\omega = \omega_0$ for strong and weak interdot Coulomb interactions, respectively [38,39]. Considering the present parameter regime, where lasing is induced for $\omega_0 \approx \omega_r$ with very weak incoherent tunneling, $\Gamma = 10^{-3}\,\omega_0$, we find in the strong Coulomb interaction case nearly Poissonian noise in the full frequency range, with a small correction due to a weak coherent Rabi signal, i.e., $S_\alpha(\omega_0)/2I \sim 1 \pm 5 \times 10^{-5}$. This correction can be neglected compared to the signal induced by the coupled resonator shown in figure 7.

The signals in the high-frequency noise spectrum arise from transitions with energy $E_{\pm,n} - E_{\pm,n-1} \approx \hbar\omega_r$. They depend on the detuning in the same way as the spectrum of the oscillator [47]: for positive detuning we find a signal at frequencies somewhat higher than $\omega_r$, and for negative detuning at frequencies below $\omega_r$. In contrast to the low-frequency case, at high frequencies the spectra of the current in the left and right junctions, $S_L(\omega)$ and $S_R(\omega)$, do not have to be identical, since the overall symmetry of the circuit is broken by the resonator. This feature was demonstrated in previous studies [48,49] investigating the spectral properties of a resonator coupled to a single-electron transistor (SET) and a superconducting single-electron transistor (SSET), respectively, in the nonlasing regime. For the setup studied here in the lasing regime, we find significant differences between cases (i) and (ii) of strong and weak Coulomb interaction, as illustrated in the left and right columns of figure 7, respectively. For strong Coulomb interaction we have $\langle I_L(t)\, I_L(0)\rangle = \sum_n \langle n|\,\langle 0|\rho_{I_L}(t)|0\rangle\,|n\rangle$, while for weak Coulomb interaction we have $\langle I_L(t)\, I_L(0)\rangle = \sum_n \langle n|\big(\langle 0|\rho_{I_L}(t)|0\rangle + \langle R|\rho_{I_L}(t)|R\rangle\big)|n\rangle$. Here we introduced the density matrix $\rho_{I_i}(t)$, which satisfies the master equation (4) with the initial condition $\rho_{I_i}(0) = \hat{\mathcal{I}}_i\, \rho_{\rm st}$ ($i = L, R$). For strong Coulomb interaction only $S_R(\omega)$ couples directly to the state $|R\rangle$, which in turn couples resonantly to the oscillator. As a result, we observe the signal at $\omega \approx \omega_r$ only in $S_R(\omega)$, while $S_L(\omega)/2I \approx 1$ is unaffected by the oscillator. In contrast, in case (ii), where we allow the state $|2\rangle$ to participate, we again find a symmetry between the currents through the right and left junctions and $S_L(\omega) = S_R(\omega)$, as well as an anti-symmetry between the auto- and cross-correlation noise spectra, i.e., roughly $S_\alpha(\omega)/2I \approx 1 + \delta S(\omega)$ and $S_{LR}(\omega)/2I \approx -\delta S(\omega)$, with the signal $\delta S(\omega)$ changing sign, leading to a peak and a dip as a function of frequency.

Furthermore, in contrast to the low-frequency regime, the noise spectrum at high frequency shows a Fano-resonance profile. It reflects the same mechanism observed by Rodrigues [50]: the detector (here the double quantum dot) feels the force in two ways, namely from the voltage-driven tunneling and from the resonator. The profile arises from destructive interference between the two transition paths between $|g\rangle$ and $|e\rangle$, i.e., a direct tunneling channel through the leads and a transition assisted by absorption and emission at the detection frequency. We would like to mention that the present result differs from the observation made in [50]: those authors found a Fano resonance in the charge noise spectrum of a single-electron transistor coupled to a resonator and conjectured that it should also show up in other, experimentally more accessible variables.
We find the Fano resonance in the current noise spectrum, which is directly observable.

Summary and discussion

We have evaluated the frequency-dependent noise spectrum of the transport current through a coupled dot-resonator system in the lasing regime, in a situation where incoherent tunneling induces a population inversion. We considered both strong and weak interdot Coulomb interactions, in the latter case taking into account the doubly occupied state as well. Both situations lead to a similar behavior of the zero-frequency shot noise but to different features in the finite-frequency noise spectrum. When the system approaches the lasing regime, the zero-frequency shot noise is strongly enhanced, showing a remarkably super-Poissonian distribution. When the resonator is in the lasing state, the shot noise displays sub-Poissonian characteristics. The current thus follows the behavior of the photon distribution, which is also super-Poissonian as one approaches the lasing regime and becomes sub-Poissonian near resonance. We found that the average photon number and the corresponding Fano factor, as well as the average current and its Fano factor in the lasing regime, are larger for weak interdot Coulomb interaction than for strong interaction. The zero-frequency shot noise could be detected with current experimental technology. For example, a quantum point contact coupled to the dots has been demonstrated to detect single-electron tunneling through the double dot in real time [22,23]. Considering the finite-frequency noise spectra, we found pronounced characteristic structures in the low- and high-frequency regimes reflecting the coherent dynamics of the coupled dot-resonator system. At low but finite frequencies, the coherent dynamics leads to a step/peak structure at the eigen-Rabi frequency of the coupled system. At frequencies close to that of the resonator, due to the excitation of photons in the resonator, we found for strong interdot Coulomb interaction a strongly asymmetric signal in the auto-correlation noise spectra of the left and right junctions. Symmetry is restored for weak interdot Coulomb interaction. The difference arises from the asymmetrical and symmetrical incoherent tunneling channels induced by strong and weak interdot Coulomb interactions, respectively.
Basiliximab impairs regulatory T cell (TREG) function and could affect the short-term graft acceptance in children with heart transplantation

CD25, the alpha chain of the IL-2 receptor, is expressed on activated effector T cells that mediate immune graft damage. Induction immunosuppression is commonly used in solid organ transplantation and can include antibodies blocking CD25. However, regulatory T cells (Tregs) also rely on CD25 for their proliferation, survival, and regulatory function. Therefore, CD25 blockade may compromise the protective role of Tregs against rejection. We analysed in vitro the effect of basiliximab (BXM) on the viability, phenotype, proliferation and cytokine production of Treg cells. We also evaluated in vivo the effect of BXM on Tregs in thymectomized heart-transplant children receiving BXM, in comparison to patients not receiving induction therapy. Our results show that BXM reduces Treg counts and function in vitro by affecting their proliferation, Foxp3 expression, and IL-10 secretion capacity. In pediatric heart-transplant patients, we observed decreased Treg counts and a diminished Treg/Teff ratio in BXM-treated patients up to 6 months after treatment, with baseline values recovering by the end of the 12-month follow-up period. These results reveal that the use of BXM could produce detrimental effects on Tregs, and support the evidence suggesting that BXM induction could impair the protective role of Tregs in the period of highest incidence of acute graft rejection.

Abbreviations: Treg, regulatory T cell.

Many transplant programs employ induction immunosuppression, a relatively intense prophylactic therapy, at the time of transplantation, based on the empiric observation that potent immunosuppression is required to prevent early acute rejection 1. However, controversy remains over whether these agents are absolutely required and over the risk/benefit balance of their use 1. Moreover, little is known about the specific effects of these immunosuppressants on certain immune cell subsets and the consequences for immune homeostasis. Of the induction medications, the IL-2 receptor antagonist currently utilized in the clinic is basiliximab (BXM), a therapeutic monoclonal antibody that reversibly binds the IL-2 receptor alpha chain (IL-2Rα or CD25) and can completely block its interaction with IL-2 2,3. Its use aims to block activated effector T cells 4; however, regulatory T cells (Tregs) also constitutively express high levels of CD25 5, and rely on CD25 not only for proliferation and survival but also for detection of excessive proliferation of effector cells 6,7. In the context of transplantation, Tregs prevent activation and expansion of effector T cells implicated in cellular rejection and also induce B cell death, preventing humoral rejection, as observed in animal heart transplant models 8. Therefore, Tregs play a crucial role in the regulation of immune processes essential for transplant acceptance 9,10, and Tregs were shown to promote transplantation tolerance and indefinite allograft survival in renal transplant and animal models 11,12. Indeed, the Treg to effector T-cell (Teff) balance was described to be crucial in the development of either graft rejection or allograft tolerance 10. Therefore, while BXM may prevent acute rejection by hindering IL-2-mediated Teff expansion, it could also compromise the mechanisms of graft acceptance by impairing Treg proliferation and function.
In the case of children requiring heart transplantation, BXM-mediated Treg impairment could be even more problematic. The thymus is typically removed for surgical field exposure in pediatric cardiac surgeries, making pediatric heart transplant recipients unable to regenerate the thymus-derived Treg population. In this study, we investigated in vitro whether BXM had detrimental effects on Treg survival, proliferation and functionality compared to other T-cell populations. Additionally, we compared Treg values and their evolution in pediatric heart transplant patients with and without BXM induction therapy.

Results

In vitro BXM treatment has a direct effect on both Treg counts and Foxp3 expression. We analysed in vitro the effect of BXM on Tregs from healthy donors. We studied the BXM effect on T cells after 72 h of culture, employing: (i) non-activating conditions, which could be indicative of the BXM effect on an immune system in a quiescent status; (ii) activation with anti-CD3/CD28-coated beads, which mimic antigen-presenting cells and activate T cells, and thus could be indicative of the BXM effect on an immune system activated by the presence of alloantigens. To confirm BXM-dependent CD25 blockade in Tregs, we stained cells with a competing anti-CD25 antibody that would be unable to bind CD25 if the receptor is already saturated with BXM 13. Treatment of PBMC with a single dose of 10 μg/ml BXM completely blocked CD25 in all CD4+ T cells, including Foxp3+ Tregs, within 4 h of culture, and the blockade remained after 72 h in all six experiments (Fig. 1A-C). With CD25 blocked in BXM-treated cells, we identified the Treg subset via CD4+ Foxp3+ expression. Without activation, BXM induced a marked decrease in Treg frequency (p = 0.003), but no significant effects were observed on total CD4+ (p = 0.282) and CD8+ T cell (p = 0.160) percentages after 72 h of culture (Fig. 2A). Similar results were observed when PBMCs were activated (Treg: p < 0.001; CD4+ T cells: p = 0.375; CD8+ T cells: p = 0.224) (Fig. 2B). We also observed a significant reduction in absolute Treg counts in the presence of BXM (approximately 70%; Supplemental Fig. S1A), but not in total CD4+ T-cell counts, in both unstimulated and stimulated conditions after 72 h (Fig. 2C,D). BXM did not induce changes in the viability of Tregs (p = 0.687) or total CD4+ T cells (p = 0.123), but there was a trend to decreased viability of CD8+ T cells (p = 0.043) with BXM in stimulating conditions (Fig. 2E). In non-stimulating conditions, BXM treatment did not modify the viability of Tregs, or of total CD4+ and CD8+ T cells (Supplemental Fig. S1B). Besides Treg frequency and counts, Foxp3 expression on Tregs, measured as Foxp3 median fluorescence intensity (MFI) in Foxp3+ CD4+ cells, was significantly reduced after BXM treatment (Fig. 2F) in all six experiments. The reduction of Foxp3 expression was observed in both non-stimulated (p = 0.012) and stimulated conditions (p = 0.006), indicating that BXM also has an impact on Foxp3 expression, the key regulator of Treg function.

BXM specifically suppresses Treg proliferation in vitro. We then investigated the effect of BXM on cell proliferation as a potential mechanism of the observed decrease in Treg proportions. CFSE-labelled PBMC were stimulated and cultured in the presence or absence of BXM. After 72 h we analysed CFSE signal intensity, which decreases upon cellular division.
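The pairwise p-values above compare BXM-treated and untreated conditions across six donor experiments; this excerpt does not name the statistical test used, so the following scipy sketch (with invented example frequencies, not the study's data) merely illustrates how such a paired comparison could be run:

import numpy as np
from scipy import stats

# Invented Treg frequencies (% of CD4+ T cells) for six paired experiments.
control = np.array([6.1, 5.4, 7.0, 5.8, 6.5, 5.9])
bxm     = np.array([3.2, 2.9, 4.1, 3.0, 3.6, 3.3])

t, p_t = stats.ttest_rel(control, bxm)   # paired t-test
w, p_w = stats.wilcoxon(control, bxm)    # non-parametric alternative for n = 6
print(f"paired t-test p = {p_t:.3f}; Wilcoxon p = {p_w:.3f}")

With only six pairs, a non-parametric test such as the Wilcoxon signed-rank test is the more conservative choice; both are shown here purely for illustration.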
We found that Treg proliferation significantly decreased in the presence of BXM (p = 0.027), while the proportions of proliferating cells among total CD4+ (p = 0.204) or CD8+ T cells (p = 0.843) remained unchanged (Fig. 3A,B), indicating that BXM specifically decreases proliferation of the Treg subset. To further assess the effect of BXM on Treg proliferation, we analysed Ki-67, which is present in all actively dividing cells. In the absence of stimulation (Fig. 3C), BXM induced a pronounced decrease in Ki-67+ frequency in Treg cells (p = 0.008) and a milder decrease in total CD4+ T cells (p = 0.012). In activating conditions (Fig. 3D), BXM induced a significant decrease in Ki-67+ Tregs (p = 0.004) but not in total CD4+ T cells (p = 0.163).

BXM modifies the cytokine secretion pattern of Treg cells. It is known that IL-2 primes Tregs for IL-10 production 14, and Tregs can be differentiated to a Th2 15 or Th17 16 phenotype after the loss of Foxp3 expression. Because BXM blocked the IL-2 receptor and decreased Foxp3 expression, we evaluated the cytokine secretion pattern in BXM-treated Tregs. In activated cells, BXM treatment significantly reduced the proportion of IL-10-secreting Treg cells in all six experiments (p = 0.027; Fig. 4A) but did not modify the proportion of IL-4- and IL-17-secreting Tregs (Fig. 4B,C). In non-stimulating conditions, we found an increase in the proportion of IL-10-, IL-4- and IL-17-secreting Tregs in BXM-treated cells (Supplemental Fig. S2A-C), suggesting that in a "resting" scenario BXM could also favour a switch of Treg cells towards a more pro-inflammatory phenotype and function. We also examined the expression of the inhibitory molecules CTLA-4 and CD39, implicated in Treg suppressive function 17,18. In both unstimulated and stimulated conditions, BXM did not modify the frequency of CTLA-4- or CD39-expressing Tregs (Supplemental Fig. S3).

Basiliximab temporarily impairs the Treg cell population of heart transplant recipients. We investigated whether the detrimental effect of BXM on Tregs observed in vitro could be confirmed in vivo by analysing Tregs in patients treated or not treated with BXM. Six pediatric patients awaiting cardiac transplantation were enrolled in the study (Table 1). The median age of the patients was 5.19 years (range 0.14-13.96). In addition to the standard immune suppression outlined in the methods, two patients received induction therapy with two doses of BXM (12 mg/m² at days 0 and 4 post-Tx). Tregs at Day 0 were identified as CD4+ Foxp3+ CD25+ (Fig. 5A); all enrolled patients showed normal Treg values before transplantation when compared to those previously published by Schatorjé et al. 19. Ten days after the introduction of immune suppressive therapy, patients not treated with BXM showed preserved proportions of CD25+ Foxp3+ Tregs. Nevertheless, as observed in our in vitro experiments (Fig. 1), the epitope recognized by the anti-CD25 antibody used to detect these Treg cells was completely blocked in BXM-treated patients. This BXM-mediated CD25 blockade in CD4+ Foxp3+ cells was evident for at least 30 days; CD25 was detectable again by day 45, consistent with BXM clearance. We followed the evolution of the Treg subset over time independently of their CD25 availability, analysing all (CD25+ and CD25−) CD4+ Foxp3+ cells (Fig. 5C).
Compared to baseline values, we observed a slight decrease in Treg proportion in BXM-treated patients during the first 3 months after Tx, which was more pronounced in terms of Treg absolute counts (cells per µl of blood, right panel), where we observed a decrease of up to 50%. Of note, the recovery of CD25 availability after BXM clearance (day 45) was not immediately followed by a significant recovery in the numbers of CD4+ Foxp3+ cells in BXM-treated patients, which seemed to reach normal levels only after 3 months post-Tx. Moreover, as observed in our in vitro setting (Fig. 2F), BXM treatment also induced a reduction of Foxp3 expression in patients' cells (Fig. 5D). At Day +10 post-Tx, in 3 of the 4 patients who did not receive BXM, Foxp3 MFI remained close to baseline levels. However, the two BXM-treated patients showed a marked decrease (> 40%) in Foxp3 MFI at Day +10, which was still present, though less pronounced, 1 month post-Tx. Additionally, we assessed the impact of BXM induction therapy on immune parameters related to the potential risk of graft rejection. BXM-treated patients showed a trend to increased values of both activated and effector memory CD8+ T cells over the 1-year follow-up, whereas no differences were found in CD4+ T cells (Supplemental Fig. S4A-D). Interestingly, patients treated with BXM showed lower mean ratios of Treg/CD4 Teff and Treg/CD8 Teff than patients not treated with BXM over the 12-month follow-up (Supplemental Fig. S4E,F). Altogether, these observations indicate that BXM treatment could negatively affect Treg cells. Even though this effect is transitory, it could in part be responsible for the increased values of activated and memory CD8+ T cells that can promote graft rejection in the medium-to-long term. Although the low number of patients analysed per group prevented us from drawing any robust conclusion, these observations are in line with our in vitro findings about the effects that BXM could have on the Treg cell population, which has been proven to have a key role in graft protection.

Discussion

Tregs have been shown to play a crucial role in preventing early graft rejection, and there is evidence suggesting that Tregs could facilitate long-term graft tolerance in transplantation 21,22. However, the mechanisms involved in graft acceptance could be compromised by the use of certain immunosuppressive drugs. Since CD25+ Tregs share the target of BXM with the cell populations mediating rejection, it became crucial to assess the balance of the drug effect between targeting effector cells and not hampering Treg functionality and survival. In our in vitro study, we treated human PBMC with BXM and observed a decrease in CD4+ Foxp3+ frequency and absolute numbers, indicating a depletion of Treg cells. Since we did not observe a direct effect of BXM on Treg viability, we dismissed the possibility that the decrease in Tregs could be due to increased Treg mortality from a higher toxicity of BXM in these cells. This is consistent with the findings of Wang and colleagues, who demonstrated that in the presence of BXM, CD4+ CD25+ T cells were not depleted from the circulating pool through monoclonal antibody activation-associated apoptosis 23. In our attempt to identify the specific mechanism(s) responsible for the BXM-mediated Treg depletion, we demonstrated that BXM treatment markedly decreases Treg proliferation.
Because CFSE labelling of human PBMCs can account for up to 50% cell death and modifies activation markers 24, we further confirmed the BXM effect on Treg proliferation by analysing Ki-67, which is present in all actively dividing cells. Moreover, BXM-mediated impairment of Treg proliferation was observed in both unstimulated and stimulated cells. This indicates a potential effect of BXM on the homeostatic proliferation of Tregs in a quiescent state, but also on Treg proliferation in response to the expansion of effector cells that could mediate graft rejection. Our in vitro findings are in agreement with the results reported by Bouvy et al. in kidney transplant recipients: patients receiving rabbit antithymocyte globulin (rATG) induction therapy had higher percentages of Ki-67+ Tregs 1 month after transplantation than before, whereas patients with BXM induction had very low percentages of Ki-67+ Tregs 25. In contrast to the strong impact on Tregs, total CD4+ T-cell frequency and proliferation capacity were not affected in vitro by BXM, suggesting that BXM may have stronger effects on Tregs than on the total CD4+ population, the latter being the main intended target of this drug. This is likely related to the higher demand of Tregs for IL-2 to maintain proliferation, as shown in previous studies where, in contrast to Tregs, non-Treg cells were shown to proliferate upon antigen stimulation in an IL-2-deficient environment 26. We also had the unique opportunity of analysing the in vivo effect of BXM in a small cohort of pediatric heart transplant patients (two receiving BXM induction therapy and four without BXM induction). In our study, CD25 blockade in Tregs from pediatric patients lasted for 30-45 days, which matches the timeframe of basiliximab elimination in serum reported in pediatric liver and kidney transplant patients 20,27. Independently of CD25 availability, BXM-mediated IL-2 signalling deprivation appears to cause a decrease in Treg (Foxp3+ CD4+ cell) numbers in comparison to BXM-untreated patients, which persists for at least four months. This contradicts results provided by several authors 23,27-29, showing that the frequency of Foxp3-expressing CD4+ T cells remains unchanged in transplanted patients treated with BXM. All these articles analysed percentages of Foxp3+ cells, a relative frequency that could be influenced by changes in other subsets of T cells. To avoid this, we analysed the absolute counts of Foxp3+ cells per μl of blood, a better indicator of the true number of cells available in the periphery. The effect of anti-CD25 on Tregs in heart transplant adults 30 and the relevance of the Treg subset in rejection in heart transplant recipients 31 have been described previously. However, in contrast to other studies, our cohort of pediatric heart transplant patients was thymectomized, and the regeneration and production of new Tregs are likely seriously compromised in these patients. Because in these patients the capacity of T-cell replenishment is compromised by the absence of a functional thymus, the effect of BXM on Treg counts could be irreversible or have more serious consequences than in other pediatric patients, in whom the thymus is intact, or than in adult patients, in whom the whole T-cell repertoire was correctly generated in infancy.
The fact that Treg frequency and absolute numbers were more affected by BXM than total CD4+ and CD8+ cells had a strong impact on the Treg/Teff ratio, which was shown to be critical for rejection-free survival 32,33. BXM-treated patients showed lower Treg/Teff ratios for both CD4+ and CD8+ T cells, notably in the first months after BXM treatment, a pattern described as less favourable for graft acceptance. Due to the very low number of heart transplants performed in children per year, it was not possible to enrol a large enough sample to draw definitive conclusions, but these clinical observations reflect our in vitro results, which provided objective evidence of the detrimental effect of BXM on Treg values and phenotype. The presence of BXM at concentrations comparable to the serum concentration reached in pediatric patients treated with BXM 20 produced a complete blockade of CD25, which was related to a decrease in the frequency and number of Tregs. We also observed, in vitro and in vivo, that BXM-mediated IL-2 signalling deprivation induces a clear decrease in Foxp3 expression on Tregs, which may impair their suppressive capacity 34. This finding agrees with other studies in the context of anti-CD25 antibody treatment of multiple sclerosis 35 and with our previous studies showing that altered levels of CD25 affect Foxp3 expression 36. Foxp3 downregulation in Tregs has also been associated with a switch to a secretion pattern of pro-inflammatory cytokines 15,16,34. In stimulated conditions, which could mimic the Treg response in a scenario of immune activation against the graft, IL-4 and IL-17 secretion by Tregs was not modified by BXM, but a clear reduction was observed in the frequency of IL-10-producing Tregs. This is consistent with the necessity of IL-2 signalling to prime IL-10 production in Tregs 14, and may also be related to Foxp3 downregulation 15,37. Translating these results to a physiological context, we could hypothesize that if an immune response against alloantigens is initiated in the presence of BXM, Treg proliferation will not match the proliferation of CD4+ and CD8+ cells, which may lead to a decrease in the Treg/Teff ratio with a potential negative impact on graft acceptance. This hypothesis is supported by our finding that BXM has a stronger suppressive effect on Treg proliferation under activation conditions, which would leave the immune system with fewer Tregs to neutralize effector T-cell proliferation. In addition to their decreased numbers and the imbalance with effector cells, Tregs will undergo Foxp3 downregulation and a reduction in their capacity to secrete IL-10, which contrasts with other articles reporting that BXM does not impair Treg suppressive functions (reviewed in 38). We cannot exclude that other immunosuppressants administered as maintenance therapy may also have detrimental effects on Tregs. In fact, calcineurin inhibitors such as TAC inhibit IL-2 production, and this could also interfere with the number and function of Tregs 39. However, considering the controversy about the risks/benefits of BXM induction, the minimal effect of BXM in decreasing the absolute risk of acute rejection reported by several authors 40-42, together with the detrimental effect on Tregs shown in our study, casts doubt on the overall usefulness of BXM in pediatric heart transplantation.
A recent large registry study employing multivariable models investigated the benefits of induction therapy in a cohort of pediatric heart transplant patients after risk stratification 43. Castleberry et al. report that, although the use of anti-CD25 induction therapy was associated with lower rates of rejection and infection compared to no induction, and overall graft survival was higher in patients who received induction therapy, a clear relationship between survival and the use of induction therapy could not be proven by multivariable analysis. Besides better preservation of Tregs, we have observed (in a reduced cohort of 4 patients) that transplantation without BXM does not result in increased effector or activated CD4+ or CD8+ T-cell populations that could potentially promote graft rejection. In fact, Patient 2 (BXM-treated) did not have donor-directed antibodies before transplant but had humoral rejection at Day +21 post-Tx. Patient 2 was also the patient with the lowest Treg counts after the first month post-Tx and had increased values of both effector memory and activated CD8+ cells from Day +90 onwards. In conclusion, our findings demonstrate a detrimental in vitro effect of BXM on Tregs, impairing their number and function by affecting their proliferative capacity, Foxp3 expression, and IL-10 secretion capacity. Due to the small number of pediatric heart transplant patients available, we cannot extrapolate these conclusions to the in vivo context, but they reinforce the results obtained in vitro, suggesting that the detrimental effects of BXM on Tregs are probably also produced in vivo, which could negatively affect the protective role of Treg populations precisely in the period of highest incidence of graft rejection 44. Therefore, BXM effects on Tregs and immune homeostasis must be considered in the design of appropriate immunosuppressive regimens for these patients.

Patients and methods

Human samples. Fresh blood samples were obtained from pediatric heart transplant patients (n = 6) either treated (n = 2) with basiliximab (BXM, Simulect, Novartis Pharma, Basel, Switzerland) at a dose of 12 mg/m² on Days 0 and 4 after transplantation (Tx) or not treated with BXM (n = 4). All patients received maintenance immunosuppressive therapy consisting of short-term steroids, mycophenolate mofetil (MMF) and tacrolimus (TAC). None of the patients were pre-sensitized; they did not have donor-directed HLA antibodies before transplant. Patient 2, who showed signs of rejection at Day +21 post-Tx, received three boluses of methylprednisolone together with intravenous immunoglobulin (IVIG), and four weekly doses of rituximab (375 mg/m² per week) from Day +26 post-Tx (Table 1). Rituximab is a chimeric monoclonal antibody that specifically binds the CD20 molecule, whose expression is restricted to B cells. Therefore, we considered that it would not have any direct effect on Treg cells; it has even been described that it could indirectly increase Treg levels 45. The study was conducted after approval by the ethics committee of Hospital Gregorio Marañón (Madrid, Spain) and according to the principles expressed in the Declaration of Helsinki. Written informed consent from the legal guardians was obtained before patients' enrolment. All samples were obtained at the Pediatric Cardiology Unit of the Hospital Gregorio Marañón. Peripheral blood samples (< 3 ml) were drawn before transplantation (Day 0 or BL) and at 10, 15, 30, 45, 60, 90, 120, 180, 270 and 360 days post-Tx.
Baseline samples (BL or Day 0) were obtained 1-2 days before the transplant procedure and the administration of immunosuppressive therapy, including BXM. Fresh blood samples were always processed within 2 h after extraction. Peripheral blood mononuclear cells (PBMC) employed for the in vitro experiments were obtained from buffy coats of adult subjects from the Transfusion Centre of Madrid.

Culture and in vitro treatment of PBMCs. PBMCs from buffy coats were isolated on a Ficoll-Hypaque (Rafer, Madrid, Spain) density gradient and treated with BXM (Simulect, Novartis Pharma, Basel, Switzerland) to analyse its effect on CD4, CD8 and Treg viability, proliferation, Foxp3 expression and cytokine secretion. Briefly, CFSE-labeled PBMCs (CFSE from Life Technologies, Carlsbad, CA) were cultured in RPMI 1640 medium (Biochrome) supplemented with 10% heat-inactivated FCS, 500 U/ml IL-2 and a mix of antibiotics (125 µg/ml cloxacillin, 125 µg/ml ampicillin and 40 µg/ml gentamicin; Sigma-Aldrich, St. Louis, MO). The CFSE-labeled PBMCs were stimulated with anti-CD3/anti-CD28-coated beads (Life Technologies) at a 0.5:1 or 1:1 ratio (bead:PBMC), treated with 10 µg/ml of BXM and cultured for 72 h in the presence of IL-2. This concentration was chosen considering the serum concentration reached in pediatric patients treated with BXM 20, and aiming for a maximal suppressive effect 46. After culture, all samples were analysed by flow cytometry.

Analysis of immune subsets by flow cytometry. The frequency and absolute counts of different T-cell subsets were analysed in peripheral blood from patients using a combination of specific antibodies, as previously described 47,48. Briefly, we determined by flow cytometry (Gallios cytometer, Beckman Coulter, France) the frequency and absolute counts of CD4+ and CD8+ T cells, including markers for the following subsets: activated (HLA-DR+) and effector memory (CD45RA− CD27−). Absolute counts and frequency of Treg cells in peripheral blood were quantified by measuring either CD3+ CD4+ CD25+ Foxp3+ or CD3+ CD4+ Foxp3+ cells.
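As an illustration of the bookkeeping this analysis implies — converting subset frequencies into absolute counts and into the Treg/Teff ratios discussed above — here is a small pandas sketch; the column names and numbers are invented for illustration, not the study's data:

import pandas as pd

# Invented example: subset frequencies (% of lymphocytes) and absolute
# lymphocyte counts (cells/ul) at two follow-up time points.
df = pd.DataFrame({
    "day":          [0, 90],
    "lymphs_ul":    [3200, 2800],
    "treg_pct":     [1.2, 0.6],     # CD3+CD4+Foxp3+ as % of lymphocytes
    "cd4_teff_pct": [28.0, 30.0],
})

df["treg_ul"] = df["lymphs_ul"] * df["treg_pct"] / 100         # absolute Treg counts
df["cd4_teff_ul"] = df["lymphs_ul"] * df["cd4_teff_pct"] / 100
df["treg_teff_ratio"] = df["treg_ul"] / df["cd4_teff_ul"]      # Treg/Teff ratio
print(df[["day", "treg_ul", "treg_teff_ratio"]])

Working with absolute counts rather than percentages, as the authors argue in the discussion, avoids the confounding effect of shifts in other T-cell subsets on relative frequencies.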
The Effect of Malaria/HIV/TB Triple Infection on Malaria Parasitaemia, Haemoglobin Levels, CD4+ Cell and Acid Fast Bacilli Counts in the South West Region of Cameroon

Introduction

Malaria, tuberculosis (TB) and human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) are three of the world's most common and serious infectious diseases, and they undermine development in low- and middle-income countries [1,2]. These infections are not only associated with poverty, but also occur in the same geographic zones and have major public health implications [3]. Together, these three infections claimed 5-7 million lives in 2001 [4]; approximately 5 million people die of these illnesses every year, with a humanitarian, economic and social impact that is still not fully measured [5]. Despite the wide use of a live attenuated vaccine and several antibiotics, TB, one of the oldest forms of recorded human affliction, is still a big killer among infectious diseases [6]. It is estimated that M. tuberculosis has affected approximately 2 billion people, with 13.7 million existing cases and nearly 2 million deaths annually [7]. There is one death every 15 seconds, and 8 million people develop TB every year [6]. Without treatment, up to 60% of people with the disease will die [8]. Most of these cases are in the third world [9], reflecting poverty, the lack of healthy living conditions and inadequate medical care [10]. On the other hand, malaria is prevalent in tropical settings where access to preventive and curative services is limited [11], and HIV/AIDS has a devastating effect on human lives [12]. In 2013, thirty-five million people were living with HIV, and more than two thirds of new HIV infections occur in sub-Saharan Africa [13]. Few studies have been carried out to assess the severity of co-infection with these diseases and its effects on parameters such as haemoglobin (Hgb), malaria parasitaemia, acid fast bacilli (AFB) and CD4+ cell counts. Such studies are necessary to generate data which can be exploited for better management of these infections.

Research questionnaire

We administered a standard questionnaire to obtain demographic data, the clinical symptoms of patients and the hygienic conditions they were living in. Additionally, patients' knowledge of HIV, TB and malaria, their risk factors, modes of transmission, preventive measures and past medical history was also assessed.

Haemoglobin (Hgb) concentration (g/dl)

Following the manufacturer's instructions, Hgb concentration was measured using the URIT-12 haemoglobin meter (URIT Medical Electronic Co., Ltd., Guangxi, China). Briefly, a drop of blood was placed on the Hgb concentration electronic test card, which was inserted into the haemoglobin meter, and the Hgb was read directly.
Measurement of AFB count

AFB counts were determined from sputum smears stained using the Ziehl-Neelsen staining technique, and at least 100 microscopic fields were examined following standard procedures [15].

Diagnosis of HIV/AIDS

The Uni-Gold rapid test kit (Trinity Biotech PLC, Bray, Ireland) was used to detect HIV-1/2 antibodies. Essentially, 50 µl of freshly collected whole blood was dropped on the sample pad of the HIV test strip and allowed to diffuse freely into the strip. Test results were read after 15 minutes from the corresponding colour changes on the control and patient portions of the strip. All HIV-positive results by the Uni-Gold rapid test were confirmed with SD Bioline (Standard Diagnostics, Inc., 156-68 Hagal-dong, Korea), which differentiated infections caused by HIV-1 from HIV-2. HIV-seropositive status was based on the presence of antibody to either HIV-1 or HIV-2 in blood.

Measurement of CD4+ cell count

Total CD4+ cell counts were determined using a Becton Dickinson FACS flow counter (Kaptan Scientific Inc.). Subjects with a CD4 cell count < 200 cells/mm³ of blood were considered to be in the advanced stage of HIV disease, while those with counts of 200-499 and ≥ 500 were considered to be in the chronic and asymptomatic stages of HIV disease, respectively.

Ethical consideration

Ethical approval and administrative authorisation were obtained from the South West Regional Delegation for Public Health and the Limbe Regional Hospital. Each participant signed a consent form before enrolment.

Statistical analysis

Epi Info Version 3.5.3 was used to enter data, and Excel 2007 was used to obtain the general statistical parameters of mean and standard deviation (SD). Chi-square analysis was used to verify statistical differences. A p-value of ≤ 0.05 was considered significant.

Haemoglobin concentration by infection category

Overall, the mean haemoglobin concentration for the entire study population was 11.4 g/dl. Mean Hgb concentration was 11.0 ± 2.7 (SD), 10.8 ± 2.2, 10.4 ± 2.6 and 10.1 ± 2.0 g/dl in patients infected with HIV/malaria, TB/HIV, TB/malaria and TB/HIV/malaria, respectively. The mean Hgb concentrations of those with TB mono-infection (13.4 ± 2.6), malaria mono-infection (11.4 ± 2.3) and HIV alone (12.0 ± 2.4) were higher when compared with co-infections. Those with triple infections had the lowest mean Hgb concentration. The difference in mean Hgb was statistically significant (p = 0.028).

Malaria parasite density by infection category

The mean parasite density in single malaria infection was 410 ± 2515.5 parasites/µl. When this was compared with the parasite density in co- and triple infections, those with TB/HIV/malaria triple infection had the highest mean parasitaemia (461.1 ± 295.0 parasites/µl). The lowest mean parasite density (271.0 ± 198.7 parasites/µl) was observed in patients co-infected with TB/malaria. However, the difference in mean parasite density was not statistically significant (p = 0.329).

CD4+ cell count by infection category

The mean CD4+ cell count for the study population was 178 ± 159.4 cells/µl. Patients with TB/HIV/malaria triple infection had the highest mean CD4+ cell count of 193.3 ± 200.3 cells/µl, followed by patients with HIV mono-infection (187.4 ± 169.7 cells/µl), while TB/HIV co-infection had a lower CD4+ cell count of 112.7 ± 76.7 cells/µl. However, the difference in CD4+ cell counts among the various infection categories was not statistically significant (p = 0.562).
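The group comparisons above rest on simple summary statistics. As a hedged illustration only (the authors report chi-square analysis, whereas comparing group means would more conventionally use an ANOVA), a scipy sketch with invented per-patient Hgb values might look like this:

import numpy as np
from scipy import stats

# Invented per-patient Hgb values (g/dl) for three infection categories;
# illustrative only, not the study's raw data.
tb_mono  = np.array([13.1, 14.0, 12.8, 13.9])
hiv_mono = np.array([12.3, 11.8, 12.1, 11.9])
triple   = np.array([10.2, 9.8, 10.5, 9.9])

f, p = stats.f_oneway(tb_mono, hiv_mono, triple)  # one-way ANOVA across groups
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

def hiv_stage(cd4):
    # Staging rule stated in the methods: <200 advanced, 200-499 chronic, >=500 asymptomatic
    if cd4 < 200:
        return "advanced"
    return "chronic" if cd4 < 500 else "asymptomatic"

print(hiv_stage(178))  # mean study CD4+ count falls in the 'advanced' stratum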
AFB count by infection category

Based on the AFB counts of the study population, the mean AFB count was 15 AFB/field. Those with TB/HIV/malaria triple infection recorded the highest mean count (21.3 ± 15.9 AFB/field), followed by those co-infected with TB/malaria (18.0 ± 11.8 AFB/field). The counts for TB/HIV co-infection and TB mono-infection were 12.1 ± 14.1 AFB/field and 15.0 ± 13.3 AFB/field, respectively (p = 0.234) (Table 1).

Discussion

Knowledge of the effects of the co-existence of HIV, malaria and TB on the overall health of patients is important for the proper management of these diseases. Existing data are not sufficient to provide a clear picture of the interactions between TB, HIV and malaria co-infections. It is barely 16 years since a picture of an association between these infections began to emerge [4,16]. An understanding of the transmission and clinical features of TB, HIV and malaria co-infections is important, particularly since these infections are prevalent in poor countries, such as Cameroon, that have limited resources for prevention and treatment. In this study, TB/malaria, HIV/TB, HIV/malaria and HIV/TB/malaria co-infected patients recorded lower mean Hgb levels compared with those for mono-infections of malaria, HIV and TB. The lower levels of Hgb in co-infected patients could be the result of an increased burden on erythropoiesis (the production of red blood cells). In co-infected persons, P. falciparum and HIV use up erythropoietin (a protein), which is the most important single factor controlling the process of erythropoiesis. Studies have shown that HIV can complicate the diagnosis of TB [17,18]; others have shown that TB and malaria are more prevalent in the rainy season [4,19] and that there is an association between HIV, malaria and TB co-infections and disease severity. However, most of these studies have used small sample sizes. In the present study, in which a larger sample was used, it was established from our data that malaria parasitaemia was higher in triple infections (461.1 ± 295.0 parasites/µl) when compared with parasitaemia in dual infections of HIV/malaria (353.7 ± 292.0 parasites/µl) and TB/malaria (271.0 ± 198.7 parasites/µl) or malaria occurring as a mono-infection (410.0 ± 2515.5 parasites/µl). The finding that parasitaemia was higher in subjects with triple infections of HIV, TB and malaria suggests that immunity was compromised by these diseases.
A related study in Cameroon showed a higher risk of TB and malaria co-infection in HIV-positive subjects [19]. The present study has gone further to show an association between triple co-infection with malaria, HIV and TB and the severity of these co-infections, as assessed by the differences in Hgb levels, malaria parasitaemia, CD4+ cell counts and AFB counts in Cameroon. Some studies have shown associations between co-infections and CD4+ cell counts [16], but no clear relationship has been established. In this study, the mean CD4+ cell count for the study population was 178 ± 159.4 cells/µl. Patients with TB/HIV/malaria triple infection had the highest mean CD4+ cell count, followed by patients with HIV alone (187.4 ± 169.7 cells/µl), while those with HIV/malaria and TB/HIV co-infection had CD4+ cell counts of 176.2 ± 154.1 and 112.7 ± 76.7 cells/µl, respectively. However, the difference in CD4+ cell counts among the various infection categories was not statistically significant (p = 0.562). This differs from the study carried out by [20], which reported that an increase of one log in HIV viral load occurs during febrile malaria episodes, enhancing susceptibility to malaria in HIV-infected patients; this was found to facilitate the geographic expansion of malaria in areas where HIV prevalence was high. Tuberculosis has been found to serve as a "sentinel" for HIV seroprevalence [21]. People with HIV have suppressed immunity and, as a result, a higher chance of reactivation of dormant TB bacilli. In the present study, patients with triple infections had the highest mean AFB count, and this group has a higher chance of reactivation of dormant TB bacilli than their counterparts with dual and mono-infections. Those co-infected with HIV/TB had lower AFB counts than those with TB only, suggesting that HIV complicates the diagnosis of TB due to alteration of the normal host immune response to Mycobacterium tuberculosis in persons with HIV. Cavitation and the transfer of bacilli into respiratory secretions are markedly reduced, hence making diagnosis difficult.

Conclusion

The data from the present study indicate that triple infection with TB, HIV and malaria increases malaria parasitaemia and AFB count and decreases Hgb levels, with no impact on the progressive depletion of CD4+ cells in HIV infection.
Paradox of HIV stigma in an integrated chronic disease care in rural South Africa: Viewpoints of service users and providers

Background: An integrated chronic disease management (ICDM) model was introduced by the National Department of Health in South Africa to tackle the dual burden of HIV/AIDS and non-communicable diseases. One of the aims of the ICDM model is to reduce HIV-related stigma. This paper describes the viewpoints of service users and providers on HIV stigma in an ICDM model in rural South Africa.

Materials and methods: A content analysis of HIV stigmatisation in seven primary health care (PHC) facilities and their catchment communities was conducted in 2013 in the rural Agincourt sub-district, South Africa. Eight focus group discussions were used to obtain data from 61 purposively selected participants who were 18 years and above. Seven in-depth interviews were conducted with the nurses-in-charge of the facilities. The transcripts were inductively analysed using MAXQDA 2018 qualitative software.

Results: The emerging themes were HIV stigma, HIV testing and reproductive health-related concerns. Both service providers and users perceived that implementation of the ICDM model may have led to reduced HIV stigma in the facilities. On the other hand, service users and providers thought HIV stigma increased in the communities because community members thought that home-based carers visited the homes of people living with HIV. Service users thought that routine HIV testing, intended for pregnant women, was linked with unwanted pregnancies among adolescents who wanted to use contraceptives but refused to take an HIV test as a precondition for receiving contraceptives.

Conclusions: Although the ICDM model was perceived to have contributed to reducing HIV stigma in the health facilities, it was linked with stigma in the communities. This has implications for practice in the community component of the ICDM model in the study setting and elsewhere in South Africa.
16) Attendance and examination  How well do the nurses attend to you in the consultation room? o Probe participants to ascertain their reasons for agreeing or disagreeing with the above question.  Do the nurses do physical examination of your body in the consultation room? o Probe participants to ascertain their reasons for agreeing or disagreeing with the above question. 17) What would you like us to tell the National Department of Health concerning your individual needs, circumstances and challenges? SUMMARY  In order to be sure that we did not miss anything, the moderator will summarise the topics for the discussion.  Is the summary accurate? Was anything left out? INTRODUCTION -Introduce members of the team -The aim of this discussion is to understand patient experiences with the quality of care for chronic diseases in the primary health facilities and to know why the participants failed to come for clinic appointment for at least one month after the appointment given to them by the nurses -We would like to get a very good understanding of your experiences in the clinics, some of which you may not have been able to discuss with us during interviews in the clinic last year. -This discussion will contribute in understanding how services for chronic diseases are organized in the health facilities -This study has the support of the Bushbuckridge Department of Health and Wits University 2. ACTIVITIES -Eleven topics around quality of chronic disease care in the clinic will be discussed today -Participation in this discussion will be free, fair, equal and in relaxed environment -A report of this discussion will be sent to you -There will be no mention of your names in the report or any other document(s) -You will not be identified as a participant in this discussion -Refreshments will be provided in the meetings -You are free to leave the study at any time -We would like to know if there are suggested modifications to this programme 3. WRITTEN INFORMED CONSENT -Written informed consent, signed by participants, study investigator or senior qualitative field worker is obtained INTRODUCTION Chronic diseases are diseases (e.g. hypertension, diabetes and HIV) that require regular and ongoing, usually for a long time contact with health facilities. Since June 2011, the National Department of Health has been testing the integrated chronic disease management (ICDM) model of care in all the clinics Bushbuckridge sub-district. The aim of this model is to improve health outcomes of patients with chronic diseases and improve service delivery in health facilities. We invite you to describe your experiences with the quality of care in the clinics with respect to the topics below. You are not expected to name the chronic disease for which you are being managed in the clinic. Topics for the discussion 1) General satisfaction with the quality of the integrated chronic disease care  In general, could you tell us how satisfied or dissatisfied you are with the integrated chronic disease care in the health facility you attend? o Prompt participants to explain the reason for their satisfaction or dissatisfaction  All of you have been receiving treatment in the clinics for at least two years. Have you noticed any changes in the way services for chronic diseases have been combined? o Prompt participants to discuss any positive or negative changes in the combined/integrated chronic disease services in the clinics since June 2011  How perfect are these services? 
  o Probe participants to describe specific instances of competence or incompetence of the nurses

5) Communication
- Tell us whether the nurses explain to you the reason(s) for doing a physical examination or requesting laboratory tests
  o Invite participants to describe such experiences
- Do the nurses sometimes ignore what you tell them?
  o Probe participants to provide illustrations with specific instances

6) Financial aspects of accessing care
- Do you have to pay fees to access services in the clinics?
  o Invite participants to illustrate specific instances when fees were paid
- Are you able to afford the cost of transportation to the clinic?
  o Invite participants to describe the distances between their homes and the clinic, and how much they have to pay for transport

7) Waiting time before seeing nurses and time spent with nurses in the consultation room
- How long do you wait before the nurses attend to you?
  o Probe participants to describe their experiences with waiting time
- Do the nurses hurry too much when they attend to you in the consultation room?
  o Probe participants to describe their experiences

8) Accessibility of the integrated chronic disease services
- How hard or easy is it for you to access services in the clinic?
  o Invite participants to describe their experiences
- Describe your experiences in having access to a doctor or a specialist

15) Prepacking of medicines
- Do the nurses pack your medicines the day before your clinic appointment?
  o Probe participants to ascertain their reasons for agreeing or disagreeing with the above question.

16) Attendance and examination
- How well do the nurses attend to you in the consultation room?
  o Probe participants to ascertain their reasons for agreeing or disagreeing with the above question.
- Do the nurses do a physical examination of your body in the consultation room?
  o Probe participants to ascertain their reasons for agreeing or disagreeing with the above question.

17) What would you like us to tell the National Department of Health concerning your individual needs, circumstances and challenges?

SUMMARY
- In order to be sure that we did not miss anything, the moderator will summarise the topics for the discussion.
- Is the summary accurate? Was anything left out?
On the low Mach number limit for 2D Navier–Stokes–Korteweg systems

† This contribution is part of the Special Issue: Fluid instabilities, waves and non-equilibrium dynamics of interacting particles

Abstract: This paper addresses the low Mach number limit for two-dimensional Navier–Stokes–Korteweg systems. The primary purpose is to investigate the relevance of the capillarity tensor for the analysis. For the sake of a concise exposition, our considerations focus on the case of the quantum Navier-Stokes (QNS) equations. An outline for a subsequent generalization to general viscosity and capillarity tensors is provided. Our main result proves the convergence of finite energy weak solutions of QNS to the unique Leray-Hopf weak solutions of the incompressible Navier-Stokes equations, for general initial data without additional smallness or regularity assumptions. We rely on the compactness properties stemming from energy and BD-entropy estimates. Strong convergence of acoustic waves is proven by means of refined Strichartz estimates that take into account the alteration of the dispersion relation due to the capillarity tensor. For both steps, the presence of a suitable capillarity tensor is pivotal.

Introduction

The class of Navier-Stokes-Korteweg equations arises in the modelling of capillary fluid flow as it occurs, for instance, in physical phenomena such as diffuse interfaces [27,40]. Capillarity effects are mathematically described by a dispersive stress tensor depending on the density and its derivatives. In their general form, these systems read

∂_t ρ + div(ρu) = 0,
∂_t(ρu) + div(ρu ⊗ u) + ∇P(ρ) = 2ν div S + κ² div K.   (1.1)

The unknowns are the density ρ and the velocity field u. We consider the isentropic pressure law P(ρ) = (1/γ)ρ^γ with γ > 1. The parameters ν, κ > 0 denote the viscosity and capillarity coefficients, respectively. The viscous stress tensor S = S(∇u) equals S = µ(ρ)Du + λ(ρ) div(u) I, where µ, λ denote the shear and bulk viscosity coefficients, respectively, and satisfy µ(ρ) + 2λ(ρ) ≥ 0. The capillary term K = K(ρ, ∇ρ) amounts to

K = (ρ div(k(ρ)∇ρ) + ½(k(ρ) − ρk′(ρ))|∇ρ|²) I − k(ρ) ∇ρ ⊗ ∇ρ,   (1.2)

for a density-dependent capillarity coefficient k(ρ). The capillary tensor is referred to as the Korteweg tensor [40], see also [54,55]. The family of Navier-Stokes-Korteweg equations has rigorously been derived in [27] and more recently in [33]. A prominent example of (1.1) are the quantum Navier-Stokes (QNS) equations, which will mainly be considered in this paper. The QNS equations are obtained from (1.1) by choosing the shear viscosity to depend linearly on the density, namely µ(ρ) = ρ, vanishing bulk viscosity λ(ρ) = 0, and k(ρ) = 1/ρ. Its inviscid counterpart (obtained for λ(ρ) = µ(ρ) = 0) is the quantum hydrodynamic system (QHD) [7,9,10], which has a strong analogy with Gross-Pitaevskii-type equations describing, for instance, the effective dynamics, in terms of a macroscopic order parameter, of superfluid helium [39] or Bose-Einstein condensates [50]. This close link to NLS-type equations highlights the quantum mechanical nature of the model, see e.g. [7,34]. Beyond that, the QHD system also serves as a model for semiconductor devices [30]. In this regard, (1.3) can be interpreted as a viscous regularization, but it can also be derived as a moment closure with a BGK-type collision term [23,37]; see also [36] for an overview of dissipative quantum fluid models and their utility for numerical simulations. The class of systems (1.1) with capillarity tensor K but with ν = 0, namely inviscid systems such as QHD, goes under the name of Euler-Korteweg systems [14,20].
The choice k(ρ) = const., µ(ρ) = ρ, λ(ρ) = 0 constitutes a second example that is extensively studied in the literature and is commonly referred to as the Navier-Stokes-Korteweg (NSK) system [13,18,19]. Finally, we mention that for κ = 0, namely (1.1) without capillarity term, one recovers the compressible Navier-Stokes equations with density-dependent viscosity [18,45,56]. The aim of this paper is to investigate the low Mach number limit of (1.1) posed on [0, ∞) × R², in the class of weak solutions and for general ill-prepared data. To that end, we focus on the analysis of the QNS equations, namely

∂_t ρ + div(ρu) = 0,
∂_t(ρu) + div(ρu ⊗ u) + ∇P(ρ) = 2ν div(ρDu) + 2κ² ρ∇(Δ√ρ/√ρ),   (1.3)

complemented with the non-trivial far-field behavior

ρ(x) → 1, |x| → ∞.   (1.4)

For (1.3) the capillarity tensor div K, defined in (1.2), can formally be rewritten as

div K = 2ρ∇(Δ√ρ/√ρ).   (1.5)

The total energy associated to (1.3) reads

E(ρ, u) = ∫_{R²} ½ ρ|u|² + F(ρ) + 2κ²|∇√ρ|² dx,   F(ρ) = (ρ^γ − γ(ρ − 1) − 1)/(γ(γ − 1)).   (1.6)

Note that the assumption of finite energy, E(ρ, u) < +∞, enforces the far-field behavior (1.4) for the given choice of the internal energy F(ρ). The motivation to mainly study (1.3) is two-fold. First, our main purpose is to elucidate the relevance of the capillarity tensor K for the developed method. In this regard, the choice k(ρ) = 1/ρ allows for a more concise and straightforward exposition. Second, to the best of our knowledge, (1.3) is the only system within the class (1.1) with density-dependent viscosity and non-trivial far-field for which the existence of finite energy weak solutions (FEWS) is known [8]. However, postulating existence of weak solutions, we discuss how our results can be generalized to the set of capillarity and viscosity tensors satisfying the compatibility condition [18,20,21]

λ(ρ) = 2(µ′(ρ)ρ − µ(ρ)),   k(ρ) = µ′(ρ)²/ρ,   (1.7)

see Remark 2.8 below. Assumption (1.7) describes a sufficient condition which leads to the suitable a priori estimates required for our method. Note that, e.g., NSK, namely k(ρ) = const., µ(ρ) = ρ, λ(ρ) = 0, does not satisfy (1.7). Nevertheless, the BD-entropy estimates obtained in [13] enable us to include NSK in our considerations. For the investigation of the low Mach number limit of (1.3), we consider a highly subsonic regime in which the Mach number Ma = ε = U/c, given by the ratio of the characteristic velocity U of the flow and the sound speed c, goes to zero. One expects the flow to asymptotically behave like an incompressible one on large time scales and for small velocities. Given the dimensionless system (1.3), we introduce the scaling

t → εt,  u → εu,  ν → εν_ε,  κ → εκ_ε.   (1.8)

The scaled viscosity and capillarity coefficients are such that ν_ε → ν > 0 and κ_ε → κ > 0 as ε → 0. The scaled version of (1.3) then reads

∂_t ρ_ε + div(ρ_ε u_ε) = 0,
∂_t(ρ_ε u_ε) + div(ρ_ε u_ε ⊗ u_ε) + (1/ε²)∇P(ρ_ε) = 2ν div(ρ_ε Du_ε) + 2κ² ρ_ε ∇(Δ√ρ_ε/√ρ_ε).   (1.9)

For the sake of a concise notation, we suppress the ε-dependence of ν_ε and κ_ε. The scaled energy is given by

E_ε(ρ_ε, u_ε) = ∫_{R²} ½ ρ_ε|u_ε|² + (1/ε²)F(ρ_ε) + 2κ²|∇√ρ_ε|² dx.   (1.10)

We refer to [2,28,35] for details on the scaling analysis. Provided that the energy (1.10) is uniformly bounded, the heuristics suggests that ρ_ε − 1 converges to 0 as ε → 0. Formally, ρ_ε u_ε → u, for which we infer from the continuity equation of (1.9) that div u = 0. The limit function u is expected to solve the target system given by the incompressible Navier-Stokes equations

∂_t u + u · ∇u + ∇π = ν∆u,   div u = 0.   (1.11)

Our main result states that finite energy weak solutions of (1.9) converge, as ε → 0, to the unique Leray-Hopf weak solution of (1.11); we refer to Theorem 2.6 below for a precise statement. Beyond its analytic scope, we mention that the low Mach number analysis is also motivated by the utility of (1.1) for numerical purposes, such as the investigation of diffuse interfaces [3]. For a general introduction to the mathematical low Mach number theory, we refer to the review papers [2,35], the monograph [28] and references therein.
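The identity (1.5) is a standard computation; as a convenience, the following LaTeX display sketches a verification for k(ρ) = 1/ρ (reconstructed here, not quoted from the paper):

\begin{align*}
  2\rho\,\nabla\!\left(\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}}\right)
  &= \operatorname{div}\!\left(\rho\,\nabla^{2}\log\rho\right)
   = \partial_{i}\!\left(\partial_{i}\partial_{j}\rho
      - \frac{\partial_{i}\rho\,\partial_{j}\rho}{\rho}\right)\\
  &= \nabla\Delta\rho
     - \operatorname{div}\!\left(\frac{\nabla\rho\otimes\nabla\rho}{\rho}\right)
   = \operatorname{div}\Bigl(\Delta\rho\, I - \tfrac{1}{\rho}\,\nabla\rho\otimes\nabla\rho\Bigr).
\end{align*}

Indeed, for k(ρ) = 1/ρ the trace part of (1.2) collapses to Δρ, since ρ div(∇ρ/ρ) + |∇ρ|²/ρ = Δρ, so the right-hand side above is precisely div K.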
In this introduction, we restrict ourselves to pointing out the key difficulties of the low Mach number theory for weak solutions of (1.9). A key issue in proving convergence towards the target system consists in controlling the acoustic waves, carried by the density fluctuations and the irrotational part of the momentum density. Unless the acoustic waves, propagating with speed 1/ε, are controlled in a suitable way, one may only expect weak convergence of the sequence of momentum densities ρ_ε u_ε. The latter is in particular insufficient to obtain compactness of the convective term div(ρ_ε u_ε ⊗ u_ε) and for the passage to the limit in (1.9). The dispersion of acoustic waves can be exploited in order to infer the desired decay. When working on unbounded domains, Strichartz estimates for the wave equation provide an appropriate tool for such an analysis applied to the classical compressible Navier–Stokes equations, see [25] and the survey papers [2,24,35]. The dispersive tensor present in (1.9) alters the dispersion relation of the acoustic waves, which is no longer linear. We develop a refined dispersive analysis allowing for decay of acoustic waves at explicit improved convergence rates and under arbitrarily small loss of regularity. For that purpose, we adapt the analysis of acoustic oscillations initiated in [6] by the author in collaboration with P. Antonelli and P. Marcati, see Section 4 below. Refined Strichartz estimates taking into account the augmented dispersion relation are also used by the same authors [4] for the study of the low Mach number limit of (1.9) posed on R³. However, as the dispersion turns out to be weaker for d = 2, the estimates introduced in [4] do not yield the desired decay properties for d = 2. In [6], the authors complement the analysis of [4] for d = 3 with suitable refined Strichartz estimates for d = 2 and elucidate the link with the Bogoliubov dispersion relation [17] that governs the system of acoustic waves. These estimates can be considered a refinement of the Strichartz estimates in [16] and the ε-dependent version of those in [32] introduced in the framework of the Gross–Pitaevskii equation, see Section 4. Note that the ε-dependent estimates do not follow from a direct scaling argument, as the Bogoliubov dispersion relation is non-homogeneous.

Second, suitable a priori estimates are required in order to infer the compactness needed for the passage to the limit. At this stage, further difficulties related to the Cauchy problem for (1.9) and its difference from the one for the classical compressible Navier–Stokes equations become apparent. The density dependence of the viscosity tensor 2ν div(ρ_ε Du_ε) in (1.9) leads to a degeneracy close to vacuum regions. This prevents a suitable control of the velocity field u_ε, which in general cannot be defined a.e. on [0, T) × R². In addition, propagating regularity of ρ_ε is a difficult task due to the presence of the highly nonlinear quantum correction term in (1.9). The lack of appropriate uniform estimates is compensated for by the Bresch–Desjardins (BD) entropy estimates [18,19], which are available for (1.9) and, more generally, for (1.1) under specific conditions on µ, λ and k. While, in the case of (1.9), these provide bounds up to second order derivatives of √ρ_ε, they do not suffice to define u_ε a.e. on R², see also (2.4) below.
This distinguishes the present analysis from the incompressible limit for the classical compressible Navier–Stokes equations, see e.g., [25], for which only weaker information on the density in Orlicz spaces but, on the other hand, a uniform Sobolev bound for u_ε are available. This further motivates the need for an accurate dispersive analysis of the acoustic waves when dealing with weak solutions at low regularity. The presence of the capillarity tensor allows for both refined Strichartz estimates and additional uniform estimates on √ρ_ε (compared to the case κ = 0), see also Remark 2.7.

Previously and to the best of our knowledge, the low Mach number limit for (1.9) has only been studied for d = 3. In the aforementioned paper [4], see also [5], the low Mach number limit for (1.3) posed on R³ is investigated. As detailed above, the dispersive analysis of the linearized system differs substantially from the present one due to the weaker dispersion for d = 2. Moreover, due to the uniqueness and regularity properties of weak solutions to (1.11) for d = 2, here we are able to infer additional information on the limit velocity field u, see Theorem 2.6. We also mention [41,57], where the incompressible limit for (1.3) posed on T³ is considered. In these papers, the authors augment (1.3) by additional drag terms that allow for a direct control of the velocity field u_ε. In addition, [41,57] consider local smooth solutions to the primitive system, under further assumptions, that are shown to converge to local strong solutions of (1.11) by means of a relative entropy method. In [41], the authors also study the limit of local smooth solutions to (1.3) posed on R³, including again additional drag terms and requiring the initial data to be smooth and well-prepared. Note that the class of weak solutions under consideration in this paper is not suitable for relative entropy methods. Finally, we mention that the low Mach number limit for the QHD system, the inviscid counterpart of (1.3), is investigated in [26] on T^d for d = 2, 3. Posed on R^d, it will further be addressed by the author in a forthcoming paper including vortex solutions of infinite energy, see also [34].

The remaining part of this paper is organized as follows. Section 2 reviews the Cauchy theory for the primitive system (1.3) and the target system (1.11) and provides a precise formulation of the main results of this paper. Subsequently, we collect the needed uniform estimates in Section 3. Strong convergence to zero of acoustic waves is proven in Section 4, while Section 5 completes the proof of the main theorem.

Notations

We list the notations of function spaces and operators used in the following. We denote

• the symmetric part of the gradient by Du = ½(∇u + (∇u)^T) and the antisymmetric part by Au = ½(∇u − (∇u)^T),

• by D(R_+ × R²) the space of test functions C_c^∞(R_+ × R²) and by D′(R_+ × R²) the space of distributions. The duality bracket between D′ and D is denoted by ⟨·, ·⟩,

• by L^p(R²) for 1 ≤ p ≤ ∞ the Lebesgue space with norm ‖·‖_{L^p}. We denote by p′ the Hölder conjugate exponent of p, i.e. 1 = 1/p + 1/p′, and for 0 < T ≤ ∞ by L^p(0, T; L^q(R²)) the space of functions u : (0, T) × R² → R^n with norm

  ‖u‖_{L^p(0,T;L^q)} = (∫₀^T ‖u(t)‖_{L^q(R²)}^p dt)^{1/p},

with the usual modification for p = ∞. By L^{p−}(0, T; L^q(R²)) we indicate the space of functions f ∈ L^{p₀}(0, T; L^q(R²)) for any 1 ≤ p₀ < p, and by W^{s,p}(R²) the (fractional) Sobolev spaces; we refer to [1,48] for details.
• by Q and P the Helmholtz–Leray projectors onto irrotational and divergence-free vector fields, respectively:

  Q = ∇Δ^{−1} div,  P = I − Q.

For f ∈ W^{k,p}(R²) with 1 < p < ∞ and s ∈ R, the operators P, Q can be expressed as compositions of Riesz multipliers and are bounded linear operators on W^{s,p}(R²). In what follows, C denotes any constant independent of ε.

For the convenience of the reader, we recall an interpolation result used several times throughout the paper.

Lemma 1.2 (Interpolation). Let T > 0, p₁, p₂, r ∈ (1, ∞) and s₀ < s₁ real numbers. Further, let u ∈ L^{p₁}(0, T; W^{s₀,r}(R²)) ∩ L^{p₂}(0, T; W^{s₁,r}(R²)). Then, for all (p, s) such that there exists θ ∈ (0, 1) with

  1/p = θ/p₁ + (1 − θ)/p₂,  s = θ s₀ + (1 − θ) s₁,

one has u ∈ L^p(0, T; W^{s,r}(R²)) with

  ‖u‖_{L^p(0,T;W^{s,r})} ≤ ‖u‖_{L^{p₁}(0,T;W^{s₀,r})}^θ ‖u‖_{L^{p₂}(0,T;W^{s₁,r})}^{1−θ}.

The lemma is a simplified statement of Theorem 5.1.2 in [15] and can also be proven by standard interpolation of Sobolev spaces in the space variables, see e.g., Paragraph 7.53 in [1], followed by Hölder's inequality in the time variable.

Preliminary and main results

This section briefly reviews the Cauchy theory for both the primitive system (1.3) and the target system (1.11). Subsequently, we state the main results of this paper, characterising the incompressible limit of (1.9) in the class of weak solutions.

Cauchy theory

The mathematical analysis of (1.3), and more generally of Navier–Stokes–Korteweg systems (1.1), encounters two major difficulties beyond the well-known ones arising in the study of the classical compressible Navier–Stokes equations [48]: the density-dependence of the viscous stress tensor and the presence of the highly non-linear dispersive stress tensor. For compressible fluid flow with constant viscosity coefficients, the energy bound yields √ρ u ∈ L^∞_t L²_x and the energy dissipation provides an L²_{t,x}-bound for ∇u. For the degenerate viscous stress tensor considered in (1.3), the energy dissipation fails to provide suitable control on u. By consequence, the Lions–Feireisl theory [29,48], which relies on a Sobolev bound for u, cannot be applied. Without further regularity assumptions, none of the quantities u, ∇u and 1/√ρ is defined a.e. on R² due to the possible presence of vacuum {ρ = 0}. These difficulties are reminiscent of the ones encountered in the analysis of the QHD system [7,10], the inviscid counterpart of (1.3), and also arise in the absence of a capillarity tensor, namely for the barotropic Navier–Stokes equations with density-dependent viscosity [19]. It is hence pivotal for the development of the Cauchy theory to obtain suitable control on the mass density, which turns out to be a difficult task given, in particular, the presence of the highly non-linear dispersive stress tensor. The lack of uniform bounds for the velocity field u is compensated for by the Bresch–Desjardins (BD) entropy estimates [18,19]. The mathematical theory for finite energy weak solutions is then developed in terms of the variables (√ρ, Λ := √ρ u), which enjoy suitable bounds in the finite energy framework. Note that the mass is infinite in view of (1.4). Weak solutions are commonly constructed in terms of an approximation procedure [8,11,42]. This does in general not allow one to prove the energy inequality

  E(ρ, u)(t) + 2ν ∫₀^t ∫_{R²} ρ|Du|² dx ds ≤ E(ρ⁰, u⁰).   (2.1)

The energy inequality is replaced by a weaker version by defining the tensor

  T_ν := √ν √ρ ∇u.   (2.2)

By denoting its symmetric part S_ν = T_ν^{sym}, we recover the identity √ν √ρ S_ν = νρDu for smooth solutions. The energy inequality for (1.3) then reads

  E(t) + 2 ∫₀^t ∫_{R²} |S_ν|² dx ds ≤ E(0).   (2.3)

The aforementioned BD-entropy estimates provide uniform bounds for the antisymmetric part A_ν = T_ν^{asym} and the second order derivatives of √ρ, see (2.4) below.
We refer the reader to [4,8] and references therein for a detailed discussion. Similarly, the capillary tensor given by (1.5) is well-defined in the weak sense by virtue of the regularity properties stemming from the energy inequality (2.3) and the Bresch–Desjardins entropy inequality (2.4). Concerning the far-field condition, we mention that the internal energy F(ρ) and the pressure P(ρ) are related through the identity P(ρ) = ρF′(ρ) − F(ρ). The particular choice for F(ρ) in (2.3) enforces the desired far-field behavior. Following [8], we introduce our notion of weak solutions to (1.3) with far-field behavior (1.4).

Global existence of FEWS to (1.3) posed on T^d for d = 2, 3 is proven in [11] and [42], following different approaches. In collaboration with P. Antonelli and S. Spirito [8], the author proves global existence of FEWS to (1.3) posed on R^d for d = 2, 3, with or without non-trivial far-field (1.4), and for initial data of finite energy. In particular, vacuum regions are included in the weak formulation of the equations. The method of [8] consists in an invading domains approach. More precisely, by a suitable truncation argument a sequence of approximate solutions is constructed. To that end, the authors rely on the existence result [42] on periodic domains. The compactness properties provided by the energy and BD-entropy bounds allow for the passage to the limit in the truncated formulation, finally yielding a global FEWS to (1.3). Further, the weak solutions constructed in [8] are such that (2.3) and (2.4) are satisfied. The validity of the energy inequality (2.1) for general weak solutions to (1.3) is at present not clear. In addition, the minimal assumptions on weak solutions such that (2.3) and (2.4) are fulfilled remain to be determined. For a more detailed discussion of these issues, see e.g., [8], [4, Section 2 and Appendix A] and [49].

Concerning the Cauchy theory of (1.11), we recall the following well-known result. A weak solution u is called a Leray–Hopf weak solution to (1.11) if the energy equality

  E^{INS}(u)(t) + ν ∫₀^t ‖∇u(s)‖²_{L²(R²)} ds = E^{INS}(u⁰)

is satisfied for a.e. t ∈ [0, T), where the kinetic energy is defined as

  E^{INS}(u) = ½ ‖u‖²_{L²(R²)}.

Existence and uniqueness of Leray–Hopf weak solutions to (1.11) for initial data of finite kinetic energy is due to [44]. We refer to the monograph [43] for the analysis of (1.11) and limit ourselves to the following comments. The space L²(R²) corresponds to the energy space of (1.11), namely the space of velocity fields of finite kinetic energy, and enjoys scaling invariance.

Main results

We specify the assumptions on the sequence of initial data (ρ⁰_ε, u⁰_ε), which we consider to be general and ill-prepared, without further regularity or smallness assumptions. The assumptions are stated in terms of the hydrodynamic states (√ρ⁰_ε, Λ⁰_ε).

Assumption 2.5. Let (ρ⁰_ε, u⁰_ε) be a sequence of initial data of uniformly bounded scaled energy, i.e., E_ε(ρ⁰_ε, u⁰_ε) ≤ C with C independent of ε.

Note that Theorem 2.2 guarantees the global existence of a sequence of FEWS to (1.9) with initial data satisfying Assumption 2.5. Our main result then characterises the low Mach number limit of FEWS to (1.9) with such initial data. Even though only the weak form of the energy inequality (2.3) is available for ε > 0, we recover unique Leray–Hopf weak solutions in the limit. Note that for general ill-prepared data for the primitive system, the possible formation of an initial layer cannot be ruled out. In particular, E(ρ⁰_ε, u⁰_ε) does not converge to E^{INS}(P(u⁰)), and one may not infer the energy inequality for u by passing to the limit in (2.3).
However, the validity of the energy equality follows from the Ladyzhenskaya–Prodi–Serrin regularity criterion [43,51,52], as we prove that u ∈ L⁴(0, T; L⁴(R²)). Moreover, by virtue of the uniqueness of Leray–Hopf weak solutions, we conclude that the sequence Λ_ε converges without requiring the extraction of a subsequence. The regularity properties stemming from the energy and BD-entropy estimates are essential for that purpose. This is in contrast to the low Mach number limit for (1.3) posed on R³ considered in [4]. In 3D, the regularity properties of the limit velocity field u do not suffice in order to infer the validity of the energy inequality in the limit, see [4, Theorem 2.4 and Remark 2.5]. By consequence, one recovers a global Leray–Hopf weak solution only for well-prepared initial data, namely data such that E(ρ⁰_ε, u⁰_ε) does converge to E^{INS}(P(u⁰)). In addition, for d = 3 convergence holds up to subsequences only. On the other hand, we cannot rely on the dispersive estimates providing suitable decay of the acoustic waves for d = 3, see [4, Proposition 4.2], due to the weaker dispersion for d = 2, see Section 4 below.

Remark 2.7. The presence of the capillarity tensor K in (1.9) is essential for both the uniform estimates (Section 3) and the acoustic analysis (Section 4). Regarding the former, the respective BD-entropy inequality (2.4) allows for uniform bounds on second order derivatives of √ρ_ε − 1, which enable us to infer a suitable Sobolev bound on ρ_ε u_ε, see Lemma 3.3 and Remark 3.4 below. For the latter, it leads to improved decay rates for the acoustic waves through an alteration of the dispersion relation, see (4.1). Both are in general no longer available without the capillarity tensor, namely for κ = 0, corresponding to the degenerate compressible Navier–Stokes equations, the low Mach number limit of which will be the subject of future investigation.

Remark 2.8. The presented theory generalizes to systems (1.1) provided that the capillarity tensor is chosen in a suitable way so that the respective BD-entropy inequality (2.4) entails bounds on second order derivatives of √ρ_ε. This is in particular the case provided that the BD relation (1.7) is satisfied [20,21], see also Remark 3.4. We stress that even though the NSK equations do not satisfy (1.7), suitable estimates can be shown, see [12,13]. In addition, the linearized system for acoustic waves turns out to still be governed by the dispersion relation obtained for (1.9), see Remark 4.10. Finally, this allows one to infer the required compactness properties for {(ρ_ε, √ρ_ε u_ε)}_{ε>0} and to prove convergence of FEWS towards Leray–Hopf weak solutions of (1.11).

Remark 3.2. Note that, in contrast to compressible fluid flow with constant viscosity coefficients [47], the assumption that the initial data is of uniformly bounded energy and (2.3) only yield a bound on the symmetric part S_{ν,ε} of T_{ν,ε}, see (6) of Lemma 3.1. In particular, no L² or Sobolev bound for u_ε is available. On the other hand, the control of ∇√ρ_ε allows one to prove that √ρ_ε − 1 converges to 0 in L^∞(0, ∞; H^s(R²)) for any s ∈ [0, 1) by virtue of (1) of Lemma 3.1, while in the constant viscosity coefficient case such bounds are available in Orlicz spaces only [25,47].

Additional uniform bounds can be obtained from (2.4). Note that the scaled BD-entropy functional B(ρ_ε, u_ε) is the analogue of the scaled energy (1.10), with the velocity u_ε replaced by the effective velocity u_ε + 2ν∇ log ρ_ε. As the initial data (ρ⁰_ε, u⁰_ε) is of uniformly bounded energy, it follows that B(ρ⁰_ε, u⁰_ε) ≤ C for some C > 0.
In particular, this allows one to infer an L²-bound on T_{ν,ε}. Similarly, it provides Sobolev bounds of second order for √ρ_ε − 1.

Remark 3.4. We emphasize that both statements of Lemma 3.3 rely on the uniform bound for ∇²√ρ_ε stemming from (2.4), which is not available for κ = 0. In particular, if κ = 0, then the third term on the right-hand side of (3.6) is merely bounded in L^∞(0, T; L¹(R²)). In turn, we are no longer able to state that m_ε ∈ L^p(0, T; W^{s,r}(R²)) for some s > 0, r ≥ 2 and p ∈ [1, ∞).

Control of acoustic oscillations

The aim of this section is to provide suitable control of the fast-propagating acoustic waves, namely the density fluctuations σ_ε := ε^{−1}(ρ_ε − 1) and the irrotational part of the momentum density Q(m_ε). In general, for ill-prepared data these fast oscillations may prevent the sequence Q(m_ε) from converging strongly to the incompressible limit velocity field u and only allow for weak convergence. However, when the problem is posed on the whole space, the dispersion at infinity can be exploited to prove strong convergence to zero of the acoustic waves as ε → 0 in suitable space-time norms at an explicit convergence rate. We refer to the monograph [28, Chapter 7] and the survey paper [24] for the analysis on bounded domains.

The acoustic equations are obtained by linearizing (1.9) around the constant solution (ρ_ε = 1, u_ε = 0), see also the scaling in (1.9), and applying the Leray–Helmholtz projection onto curl-free vector fields to the momentum equation. More precisely,

  ∂_t σ_ε + (1/ε) div Q(m_ε) = 0,
  ∂_t Q(m_ε) + (1/ε) ∇σ_ε − εκ² ∇Δσ_ε = Q(G_ε),   (4.1)

where the Leray–Helmholtz projections are defined by Q := ∇Δ^{−1} div and P := I − Q, respectively, and G_ε collects the nonlinear, viscous and capillary remainder terms. Formally, the density fluctuations σ_ε satisfy the Boussinesq-type equation

  ∂_{tt} σ_ε − (1/ε²) Δσ_ε + κ² Δ²σ_ε = −(1/ε) div G_ε.   (4.3)

The fourth-order term stems from the dispersive stress tensor div K in the equation for the momentum density, upon using identity (1.5), and alters the dispersion relation for the acoustic equations. In the absence of capillary effects, namely for κ = 0, (4.3) reduces to the wave equation with sound speed 1/ε, which is known to govern the evolution of acoustic waves for a classical compressible fluid. For κ > 0, the dispersion relation for high frequencies (above the threshold 1/ε) is no longer linear but quadratic. For a discussion of the physical background and the link to the Bogoliubov dispersion relation [17] appearing in the microscopic theory of Bose–Einstein condensation, we refer to [6]. Moreover, by an accurate dispersive analysis of (4.1), it is proven in [6], see also [4,16], that the presence of the quantum correction term leads to improved decay rates of acoustic waves on R^d with d ≥ 2 compared to compressible fluids without capillarity effects. For that purpose, (4.1) is symmetrized by means of a suitable change of unknowns (σ_ε, Q(m_ε)) ↦ (σ̃_ε, m̃_ε), so that the resulting system is governed by the operator H_ε introduced below, with forcing term G̃_ε = (−Δ)^{−1/2} div G_ε. Upon controlling (σ_ε, Q(m_ε)) in terms of (σ̃_ε, m̃_ε), it suffices to investigate (4.5). System (4.5) can be characterised by means of the linear semigroup operator e^{itH_ε}, where H_ε is defined via the Fourier multiplier

  φ_ε(ξ) = (|ξ|/ε) √(1 + (εκ)² |ξ|²).   (4.6)

A stationary phase argument leads to the following dispersive estimate for the semigroup operator e^{itH_ε}, see [4, Corollary B.6] and also [6, Corollary 4.3].

Lemma 4.1. Let d ≥ 2, φ_ε as in (4.6), R > 0, and let χ ∈ C_c((0, ∞)) be a smooth frequency cut-off localizing at frequencies of order R. Then there exists a constant C > 0 such that the frequency-localized dispersive estimate (4.7) holds for any δ ∈ [0, (d−2)/2].
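A short plane-wave computation (standard, included here for orientation rather than quoted from the paper) recovers the Bogoliubov-type symbol (4.6) from the Boussinesq-type equation (4.3):

```latex
% Plane-wave ansatz \sigma_\varepsilon \sim e^{i(\xi\cdot x - \tau t)} in the
% homogeneous part of the Boussinesq-type equation (4.3):
-\tau^{2} + \frac{|\xi|^{2}}{\varepsilon^{2}} + \kappa^{2}|\xi|^{4} = 0
\quad\Longleftrightarrow\quad
\tau = \pm\,\varphi_\varepsilon(\xi), \qquad
\varphi_\varepsilon(\xi) = \frac{|\xi|}{\varepsilon}\sqrt{1+(\varepsilon\kappa)^{2}|\xi|^{2}}.
% The two regimes separated by the frequency threshold 1/\varepsilon:
\varphi_\varepsilon(\xi) \approx \frac{|\xi|}{\varepsilon}
  \quad (|\xi| \ll \tfrac{1}{\varepsilon},\ \text{wave-like}),
\qquad
\varphi_\varepsilon(\xi) \approx \kappa\,|\xi|^{2}
  \quad (|\xi| \gg \tfrac{1}{\varepsilon},\ \text{Schr\"odinger-like}).
```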
For ε = 1, the dispersive estimate (4.7) is proven in [32] to investigate the large-time behavior of solutions to the Gross–Pitaevskii (GP) equation. Here, we only mention that the GP equation is formally equivalent to the QHD system, the inviscid counterpart of (1.3), see [6]. Note that the right-hand side of (4.7) blows up for κ → 0. Indeed, the acoustic dispersion is then governed by the wave equation, while (4.7) is a Schrödinger-like dispersive estimate. Here, we consider κ > 0 to be fixed. The symbol φ_ε is non-homogeneous and does not allow for a separation of scales. Hence the ε-dependent version cannot be obtained by a simple scaling argument. For δ = 0, the dispersive estimate (4.7) reduces to the one for the free Schrödinger propagator e^{itΔ}. In addition, (4.7) yields a regularizing effect for low frequencies for d > 2, which provides decay of order ε^δ at the expense of a factor R^δ for δ > 0 arbitrarily small. This is related to the curvature of the hypersurface τ = φ_ε(|ξ|), which depends on the spatial dimension d. For d = 2, (4.7) does not yield any decay in ε. It is shown in [6, Proposition 3.8] that the desired decay for d = 2 can be obtained by separating the regimes of frequencies above and below the threshold 1/ε. The symbol φ_ε is well approximated by |ξ|/ε, namely the wave operator with speed 1/ε, for frequencies below the threshold 1/ε, and by |ξ|², i.e., the free Schrödinger operator, for frequencies larger than 1/ε. The desired decay then follows from the wave-like estimate for low frequencies and Sobolev embedding for high frequencies. However, this leads to a loss of the aforementioned regularizing effect. Interpolating in the low frequency regime between the wave-type estimate and (4.7) allows one to obtain Strichartz estimates with arbitrarily small loss of regularity.

Remark 4.4. In [6], Corollary 4.3 is stated in terms of Besov spaces, which is slightly more precise but not needed for our purpose.

The uniform estimates for (σ⁰_ε, m⁰_ε) and G_ε, together with the Strichartz estimates, allow one to infer strong convergence of (σ̃_ε, m̃_ε) to 0 as ε → 0 in space-time norms. Note that s > 0 provided that r > 2, and that s can be made arbitrarily small by choosing θ > 0 sufficiently small.

Remark 4.10. If (1.1) is considered with a general capillarity tensor K, as defined in (1.2), and in the scaling (1.8), then the respective linearized system amounts to (4.1) at leading order. More precisely, we wish to linearize (1.2) for ρ_ε = 1 + εσ_ε and note that only the first term ρ div(k(ρ)∇ρ)I of K yields a contribution of order O(ε), while the second and third terms contribute with terms of order at least O(ε²). Those may be absorbed into G_ε on the right-hand side of (4.1) and bounded in appropriate Sobolev spaces of negative regularity. We recover the Bogoliubov dispersion relation as in (4.3), and the dispersive analysis then follows the same lines. Note that if κ = 0, one may still prove that Q(m_ε) converges strongly to zero in L^q(0, T; W^{−s,r}(R²)) for some s > 2 and wave-admissible exponents (q, r), though with increased loss of regularity and a worse decay rate, as (4.7) is no longer available. However, in light of Remarks 2.7 and 3.4, we lack an appropriate uniform estimate to perform the interpolation argument of Corollary 4.9.

Proof of the main theorem

This section provides the proof of Theorem 2.6. First, we show strong convergence of Λ_ε and m_ε in L²_loc((0, ∞) × R²).
Second, we pass to the limit in (1.9) to show that the limit function is the unique Leray–Hopf weak solution of (1.11). In order to show strong convergence of the momentum densities {m_ε}_{ε>0}, it remains to prove compactness of the solenoidal part {P(m_ε)}_{ε>0} of the momentum density m_ε. It then follows that Λ_ε converges strongly to u in L²_loc([0, ∞) × R²). We are now in a position to prove the main result, Theorem 2.6.

Remark 5.2. Note that Lemma 5.1 and the proof of Theorem 2.6 can be developed along the same lines when dealing with general viscosity and capillarity coefficients satisfying (1.7), or for NSK, by carefully adapting the respective uniform estimates stemming from the energy and BD-entropy estimates, see Remark 3.4. In particular, the compactness of P(m_ε) can be inferred in the same manner.
Thermostabilization of a fungal laccase by entrapment in enzymatically synthesized levan nanoparticles

In this work, we present a comprehensive investigation of the entrapment of laccase, a biotechnologically relevant enzyme, into levan-based nanoparticles (LNPs). The entrapment of laccase was achieved concomitantly with the synthesis of the LNPs, catalyzed by a truncated version of a levansucrase from Leuconostoc mesenteroides. The study aimed to obtain a biocompatible nanomaterial able to entrap functional laccase, and to characterize its physicochemical, kinetic and thermal stability properties. The experimental findings demonstrated that a colloidally stable solution of spherically shaped LNPs, with an average diameter of 68 nm, was obtained. A uniform particle size distribution was observed, according to the polydispersity index determined by DLS. When the LNP synthesis was performed in the presence of laccase, biocatalytically active nanoparticles with a 1.25-fold larger diameter (85 nm) were obtained, and a maximum load of 243 μg laccase per g of nanoparticle was achieved. The catalytic efficiency was 972 and 103 (μM·min)⁻¹, respectively, for free and entrapped laccase. A decrease in kcat values (from 7050 min⁻¹ to 1823 min⁻¹) and an increase in apparent Km (from 7.25 μM to 17.73 μM) were observed for entrapped laccase compared to the free enzyme. The entrapped laccase exhibited improved thermal stability, retaining 40% activity after a 1 h incubation at 70°C, compared to complete inactivation of free laccase under the same conditions, thereby highlighting the potential of LNPs in preserving enzyme activity at elevated temperatures. The outcomes of this investigation significantly contribute to the field of nanobiotechnology by expanding the applications of laccase and presenting an innovative strategy for enhancing enzyme stability through the utilization of fructan-based nanoparticle entrapment.
Introduction

Laccases (benzenediol oxygen reductases, EC 1.10.3.2) are a family of copper-containing polyphenol oxidases that belong to the multicopper oxidases [1]. Due to their low substrate specificity, laccases are able to catalyze the one-electron oxidation of a wide range of organic compounds, including phenols and aromatic amines, with the simultaneous reduction of molecular oxygen (O₂) to water (H₂O) [2,3]. The catalytic reaction of laccase involves the removal of one electron from the substrate to produce a free radical. The free radical then undergoes homo- and hetero-coupling to form a dimeric product, a polymeric product, or a cross-coupling product, which has practical implications for organic synthesis [4]. Therefore, laccases are eco-friendly and versatile enzymes that exhibit significant potential in the synthesis of bioactive compounds, offering applications in the fields of medicine, pharmaceuticals, agriculture, cosmetics production, nanobiotechnology, and the textile, woodworking, and food industries [5,6]. Aside from their long-studied use in bioremediation and lignin degradation, other high-value applications for laccase have been described, for example the synthesis of compounds with potential medical use [7]; the synthesis of insulating polymers [8]; and as part of biosensors for pollutants, drugs, and even for detecting infections [9–11]. For some of these possible uses of laccase, its combination with nanomaterials has proven beneficial in terms of activity, stability and suitability for different applications [12–14]. One significant drawback of mesophilic enzymes is their inherent lack of robustness [15]. Enzymes may exhibit vulnerability towards high temperature, extreme pH, the presence of organic solvents, oxidation, and shear stress due to mechanical agitation, consequently leading to diminished operational stability under certain conditions [16]. The immobilization of enzymes to preserve their activity under harsh conditions, and also to allow their recycling, has been widely studied for decades. Technological advances have enabled the synthesis, characterization and application of nanomaterials in several fields. Entrapment of bioactive molecules into nanoparticles enhances their stability, enables controlled release, and facilitates their efficient utilization across a broad spectrum of disciplines. As a result, this technology holds immense potential for transformative advancements in medicine, agriculture, environmental science, and beyond. In the particular case of laccases, several works demonstrating the amenability of these enzymes to immobilization on different types of supports, as well as the respective advantages and disadvantages, have recently been reviewed by numerous authors [10,17–20].
In this research work, we have successfully coupled the synthesis of levan nanoparticles (LNPs) with laccase entrapment. Levan is a biocompatible natural polymer produced by plants and bacteria through the action of levansucrases [21]. It is comprised of fructose monomers covalently linked through β-(2→6) glycosidic bonds, with β-(2→1) linkages that introduce branching points within the polysaccharide chain [22,23]. Levan is usually a high molecular weight polymer (ranging from 10⁵ to 10⁹ Da) with multiple branching (as high as 20%) [24]. This and other polysaccharides have been shown to form nanostructures either spontaneously during their enzymatic synthesis [25,26] or under the influence of external agents [27–29]. Levan has been shown to entrap small molecules and proteins, and thus could also serve as a nanocarrier for some applications [30,31]. By utilizing these LNPs, we aimed to enhance the stability of a fungal laccase, thereby expanding its potential applications in various fields, including biocatalysis, biomedicine, and biosensing (for a recent review of enzyme immobilization into polysaccharides, see Sharma et al. 2021 [32]; for a recent review of the advantages and applications of levan, see Domżał-Kędzia et al. 2023 [31]).

Chemical reagents and enzyme

The chemicals employed in this study were exclusively of analytical grade. The 3,5-dinitrosalicylic acid (DNS) was purchased from Thermo Fisher Scientific Inc. Syringaldazine, succinic acid and calcium chloride were purchased from Sigma. The production and purification of laccase from Coriolopsis gallica UAMH 8260 were conducted following previously established protocols, as documented in the prior literature [33].

Expression of the recombinant levansucrase (LevS4N70Tn38)

The pET-22b-LevS4N70Tn38 vector, containing a truncated version of the levansucrase from Leuconostoc mesenteroides B512F [34], was introduced into electrocompetent E. coli BL21 cells obtained from New England Biolabs. The preparation and transformation of electrocompetent cells were performed according to the protocol provided by the MicroPulser™ Electroporation Apparatus Operating Instructions and Applications Guide (Catalog Number 165-2100, Bio-Rad). The vector contained an ampicillin resistance gene, enabling the selection of positive clones, along with an origin of replication and a LacI promoter that induces transcription of the inserted gene. To initiate the transformation process, bacterial cells were cultured in 1 mL of Luria-Bertani (LB) medium and incubated at 37°C with continuous shaking at 350 rpm for 1 hour. This step aimed to allow the cells to reach the logarithmic growth phase, ensuring optimal conditions for the subsequent transformation procedure. Following the recovery period, the transformed cell suspension was evenly spread onto LB agar plates supplemented with 200 μg/mL ampicillin. The plates were subsequently incubated at 37°C for 14 h.
Expression of LevS4N70Tn38 was performed as previously reported [25]. Transformed colonies were cultivated in Luria-Bertani broth supplemented with 200 μg/mL ampicillin. The culture was incubated at 37°C and 200 rpm until reaching an optical density between 0.5 and 0.6 (measured at OD600, λ = 600 nm). At this point, induction of the culture was initiated by adding 1.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). The induced culture was further incubated for an additional 8 h at 18°C and 120 rpm. After completion of the incubation period, the cells were harvested through centrifugation at 4°C and 9000 rpm for 10 minutes. The resulting cell pellet was subjected to two washes with a 10 mM phosphate buffer (pH 6.0). Subsequently, the washed pellet was resuspended in 20 mL of a 1 mg/mL lysozyme solution prepared in a 10 mM phosphate buffer (pH 6.0). Following a 30-minute incubation on ice for cell lysis, the samples underwent three cycles of freezing and thawing. Sonication was performed immediately, using a 2-minute cycle at 70% amplitude with intervals of 10 seconds on and 30 seconds off. Cell debris was subsequently removed by centrifugation until clarification of the extract was achieved (45 min at 9000 rpm and 4°C). The resulting supernatant containing the enzyme was collected and used for purification.

Purification of LevS4N70Tn38

The LevS4N70Tn38 enzyme was purified through anion-exchange chromatography with Macro-Prep DEAE resin (Bio-Rad). The purification process involved continuous elution at a flow rate of 1 mL/min using an external peristaltic pump (Econo-Pump; Bio-Rad). The elution was performed with phosphate buffer at pH 6.0, employing a gradient ranging from 0.1 M to 1.0 M. Fractions demonstrating significant enzymatic activity, as assessed by the 3,5-dinitrosalicylic acid (DNS) assay, were pooled together and concentrated employing a stirred ultrafiltration cell (Amicon, Millipore, Darmstadt, Germany) with a 10-kDa molecular weight cut-off membrane. The ultrafiltration step enabled the removal of smaller molecules and impurities, while retaining the desired LevS4N70Tn38 enzyme within the retentate.

Quantification of levansucrase activity through a reducing-sugars assay

LevS4N70Tn38 activity was determined by quantification of the reducing sugars released from sucrose, employing the DNS assay described by Miller in 1959 [35]. The enzymatic reaction was conducted in a reaction volume of 600 μL containing 292 mM sucrose, 50 mM acetate buffer (pH 6.0), and 1 mM CaCl₂. The reaction mixture was maintained at a constant temperature of 30°C with continuous stirring at 350 rpm. To measure the release of reducing sugars, samples were taken at specific time intervals and the optical density (OD) at 540 nm was measured using a BioSpectrometer UV-Visible spectroscopy system (Eppendorf, Germany). The OD was converted to reducing sugar concentration using a standard curve. LevS4N70Tn38 activity was expressed as enzymatic units (U), defined as the amount of enzyme required to generate 1 μmol of reducing sugars per minute under the specified assay conditions. All determinations were performed at least in triplicate.
Quantification of laccase activity through the syringaldazine assay

Laccase activity was quantified utilizing the syringaldazine assay, a colorimetric technique based on monitoring the increase in absorbance at 530 nm, indicative of product formation. The assays were conducted in succinate buffer (60 mM, pH 4.5) with 0.05 mM syringaldazine. The slope of A530 vs. time was obtained and converted to values of U/mL using ε = 64,000 M⁻¹ cm⁻¹ [36]. One enzymatic unit (U) is defined as the amount of laccase required to generate 1 μmol of product per minute under the specified assay conditions. The rate measurements were performed at least in triplicate using an Agilent 8453 UV-Visible spectroscopy system (Santa Clara, CA, USA).

Synthesis and preparation of LNPs

The synthesis of LNPs was performed in a 600 μL reaction volume under optimal conditions for LNP formation. The reaction mixture contained sucrose at 200 g/L in a pH 6.0, 50 mM acetate buffer supplemented with 1 mM CaCl₂. The reaction was initiated by adding 2 U/mL of LevS4N70Tn38 and was incubated at 30°C with constant stirring for 12 h. Laccase entrapment was tested by adding different amounts of enzyme, specifically 13, 33, 66, 99, and 132 μg of laccase; each of these conditions is referred to as LNP-Lac 13 μg–132 μg. After 12 h, free, non-entrapped laccase was separated from the LNPs by employing an ultracentrifuge filter device with a 100-kDa molecular weight cut-off membrane (Millipore, USA). The separation involved centrifugation at 10,000 rpm for 10 minutes. Subsequently, the laccase-containing LNPs (LNP-Lac) retained in the filter were subjected to two washes using a 50 mM, pH 6.0 acetate buffer. Following the washes, the LNPs were stored at -20°C until further utilization and subsequent analysis.

Quantification of laccase entrapment efficiency (EE) and load into the LNPs

The determination of laccase entrapment within the LNPs involved an indirect calculation method. This approach relied on quantification of the activity of free, non-entrapped laccase that was separated from LNP-Lac using an ultracentrifuge filter device, as described above. The amount of free laccase, collected in a Falcon tube, was quantified as enzymatic units (U) using the syringaldazine method. It is important to note that all reported values represent the average results obtained from three independent experimental replicates. Controls were performed to assess the efficiency of this procedure by employing solutions of free laccase at different concentrations, performing the ultracentrifugation, and measuring both the activity retained after two washes and the activity recovered in the filtrate. The recovery efficiency of free laccase ranged from 68–70%; thus, the laccase entrapment quantification was corrected by a factor of 0.68. Data are available as supporting information, in S1–S3 Tables in S1 File.

The entrapment efficiency (EE) of laccase was determined using the following equation:

  EE (%) = [IL − (FL/0.68)] / IL × 100,

where IL represents the initial amount of laccase added to the LNP synthesis reactions, FL is the amount of free laccase detected in the collected washes after the centrifugation process, and 0.68 is the correction factor.
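The two computations just described (conversion of the A530 slope into enzymatic units via Beer–Lambert, and the 0.68-corrected entrapment efficiency) can be summarized in a short script. The following is a minimal sketch; the function names, the assumed 1 cm path length, and the example inputs are ours, not values taken from the study:

```python
# Minimal sketch of the activity and entrapment-efficiency calculations
# described in the Methods; all numbers below are illustrative only.

EPSILON_SYR = 64000.0   # extinction coefficient of the syringaldazine product, M^-1 cm^-1
PATH_CM = 1.0           # assumed cuvette path length, cm (not stated above)
RECOVERY = 0.68         # free-laccase recovery factor from the ultrafiltration controls

def laccase_units_per_ml(slope_a530_per_min: float, dilution: float = 1.0) -> float:
    """Convert an A530-vs-time slope (min^-1) into U/mL; 1 U = 1 umol product/min."""
    # Beer-Lambert: dC/dt (M/min) = slope / (epsilon * path length)
    dc_dt_molar = slope_a530_per_min / (EPSILON_SYR * PATH_CM)
    # M/min -> umol/(min*mL): 1 M = 1e3 umol/mL
    return dc_dt_molar * 1e3 * dilution

def entrapment_efficiency(initial_laccase_ug: float, free_laccase_ug: float) -> float:
    """EE (%) with the 0.68 recovery correction applied to the wash fraction."""
    corrected_free = free_laccase_ug / RECOVERY
    return (initial_laccase_ug - corrected_free) / initial_laccase_ug * 100.0

if __name__ == "__main__":
    print(f"{laccase_units_per_ml(0.32):.4f} U/mL")           # e.g., slope 0.32 A/min
    print(f"EE = {entrapment_efficiency(99.0, 20.0):.1f} %")  # illustrative amounts
```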
Effect of the presence of laccase on the synthesis of levan

To assess the effect of laccase on the levan synthesis reaction, the produced polymer was quantified following the entrapment reactions. The levan polymer was recovered from the entrapment reaction by precipitation with ethanol: a 200 μL sample of the entrapment reaction was mixed with 300 μL of ethanol and incubated at 4°C for 1 h. After incubation, the mixture was subjected to centrifugation (15,000 rpm, 20 minutes, 4°C), and the supernatant was completely removed. The resulting pellet was washed twice with 800 μL of absolute ethanol, followed by complete ethanol evaporation through incubation at 60°C for 10 minutes. Subsequently, the polymer was dissolved in 200 μL of 0.617 M HCl and subjected to hydrolysis at 80°C with constant stirring (450 rpm) for 2.20 h. The hydrolysis was halted by adding 200 μL of 1 M NaOH, and the resulting hydrolysate was quantified using the DNS method.

Characterization of LNPs

For the characterization of the shape and morphology of the LNPs, transmission electron microscopy (TEM) was utilized. The TEM analysis was performed using a ZEISS Libra transmission electron microscope operating at an accelerating voltage of 120 kV. The LNP samples were loaded onto a carbon-coated copper grid and subsequently air-dried prior to imaging. A Gatan digital camera was employed to capture high-resolution images for the comprehensive examination and characterization of the LNPs' shape and morphology. To ascertain the polydispersity index (PDI), hydrodynamic size, and particle size distribution profile of the LNPs, dynamic light scattering (DLS) analysis was employed. The measurements were conducted using a Zetasizer NanoZS instrument (Malvern Instruments Ltd., Malvern, UK). The DLS technique enabled the determination of the PDI, which indicates the degree of size variation within the LNP sample. Additionally, the hydrodynamic size of the LNPs was assessed, providing insights into their overall dimensions in a liquid medium. The Zetasizer NanoZS instrument facilitated the estimation of the particle size distribution profile, enabling a comprehensive understanding of the size range and distribution of the LNPs under investigation. To determine the surface charge of the LNPs, the zeta potential was measured using laser Doppler velocimetry (LDV) with the same instrument and experimental conditions. The zeta potential provides crucial information about the NPs' surface properties, electrostatic stability, and colloidal behavior.

Thermal and storage stability of entrapped laccase

Samples of LNP-Lac, as well as free laccase, were incubated for 1 h at 40, 50, 60, and 70°C. Enzyme activity measurements were conducted at 10-minute intervals, using the syringaldazine assay described above, to monitor any fluctuations or changes in activity levels. The residual activity of laccase was calculated by comparing the enzyme activity measured after a 1 h incubation at the various temperatures (40, 50, 60, and 70°C) to the initial activity of the enzyme. The initial activity refers to the enzyme activity measured before the incubation period. The residual activity, expressed as a percentage, was calculated using the following formula:

  Residual activity (%) = (Activity after incubation / Initial activity) × 100

All assays were performed at least in triplicate.
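As an aside, residual-activity data of this kind are often summarized by a first-order inactivation constant (the quantity used for the xylanase comparison cited further below). A minimal sketch of both calculations, assuming single-exponential decay and using illustrative numbers, follows:

```python
# Residual activity and, assuming single-exponential (first-order) decay,
# the corresponding inactivation constant kd and half-life.
# Function names and the example numbers are illustrative, not the study's data.
import math

def residual_activity(activity_after: float, activity_initial: float) -> float:
    """Residual activity (%) as defined in the text."""
    return activity_after / activity_initial * 100.0

def inactivation_constant(residual_percent: float, minutes: float) -> float:
    """kd (min^-1), assuming A(t) = A0 * exp(-kd * t)."""
    return -math.log(residual_percent / 100.0) / minutes

def half_life(kd: float) -> float:
    """t1/2 (min) of first-order activity decay."""
    return math.log(2.0) / kd

ra = residual_activity(0.41, 1.0)      # e.g., ~41% activity retained
kd = inactivation_constant(ra, 60.0)   # after a 60-min incubation
print(f"residual = {ra:.0f} %, kd = {kd:.4f} min^-1, t1/2 = {half_life(kd):.0f} min")
```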
The storage stability of LNP-Lac 33 μg, LNP-Lac 66 μg, and LNP-Lac 99 μg was evaluated over time by assessing their activity, through the syringaldazine assay, after 0, 6, 17, and 27 days of entrapment. The samples were kept at 4°C. Data are available as supporting information, in S4 and S5 Tables in S1 File.

Kinetic characterization of LNP-Lac

Different concentrations of syringaldazine (from 0 to 60 μM) were used to measure the initial reaction rates of free laccase and LNP-Lac 99 μg. All measurements were performed at least in triplicate. Data were fitted through non-linear regression to the Michaelis-Menten equation to obtain the values of Vmax and Km. For LNP-Lac, the calculated amount of entrapped laccase was used to obtain the kcat. Data are available as supporting information, in S6 Table in S1 File.

Quantification of entrapment efficiency (EE) percentage

Recombinant LevS4N70Tn38 was produced from E. coli and purified to a specific activity of 350 U/mg with an electrophoretic purity of 90%. This enzyme was used to catalyze the production of levan, both in the absence and in the presence of laccase from Coriolopsis gallica. This laccase is a good model for high-redox-potential fungal laccases, as it has been reported to be a very versatile catalyst, amenable to pilot-plant production as well as protein engineering [33,37,38]. It is a glycosylated, monomeric protein with a molecular weight of 61 kDa, featuring typical properties of fungal laccases, such as an acidic pI of 3.4, an optimum pH between 4.5 and 5.5, and broad specificity towards numerous substrates [39–43].

To assess the potential impact of laccase entrapment on the extent of levan production, the amount of synthesized polymer was quantified both in the absence and in the presence of laccase, as shown in Fig 2. The levan concentration was 24.5 mg/mL in the absence of laccase, and decreased to 19.1, 17.5, 17.6, 19.1 and 16.7 mg/mL, respectively, in the presence of the different amounts of laccase used in the entrapment reactions. Laccase does not interact with or recognize sucrose, fructose or glucose as substrate or ligand, so a decrease of 20 to 35% in the amount of obtained levan cannot be explained in terms of competing/inhibiting secondary reactions. Nevertheless, the integrity and formation of catalytically active LNPs were not compromised.

Regarding the laccase load (amount of laccase per gram of levan nanoparticle), it also provided valuable information about the efficiency of the entrapment process and can guide the optimization of enzyme loading for various applications. Therefore, following the determination of EE and the amount of produced levan, the amount of laccase (in μg) entrapped per gram of LNP was calculated. The results demonstrated varying laccase loading capacities. The amount of laccase entrapped per gram of LNP was found to be 496 μg, 931 μg, 1524 μg, 4468 μg, and 1166 μg, respectively (Fig 1B). These findings reveal a clear trend in laccase loading capacity with respect to the enzyme concentration utilized during the entrapment process. The highest laccase loading capacity of 4468 μg/g LNP was achieved when employing 99 μg of laccase in the entrapment reaction. However, a higher laccase amount of 132 μg resulted in a lower load of 1166 μg/g LNP, indicating again a saturation effect. The sample LNP-Lac 99 μg was used for further kinetic and stability characterization, as it was the best preparation in terms of laccase loading.
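The non-linear regression described above under "Kinetic characterization of LNP-Lac" can be illustrated with a short script. The following is a minimal sketch using scipy's curve_fit; the substrate and rate arrays are placeholders, not the study's measurements:

```python
# Minimal sketch of Michaelis-Menten fitting by non-linear regression,
# as described in the kinetic-characterization methods; data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# syringaldazine concentrations (uM) and initial rates (uM/min) -- illustrative
s = np.array([2.5, 5.0, 10.0, 20.0, 40.0, 60.0])
v = np.array([1.8, 3.1, 4.6, 6.0, 6.9, 7.2])

(vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[v.max(), 10.0])
vmax_err, km_err = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.2f} +/- {vmax_err:.2f} uM/min")
print(f"Km   = {km:.2f} +/- {km_err:.2f} uM")

# kcat follows from Vmax once the molar enzyme concentration is known:
# kcat = Vmax / [E], and the catalytic efficiency is kcat / Km.
```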
Physical characterization of LNP and LNP-Lac

A comprehensive study of the physical properties of the LNPs, such as the average particle diameter, PDI, and surface charge, was performed using DLS in combination with a zeta potential analyzer (Table 1). The average size was 68 nm for LNP and 85 nm for LNP-Lac 99 μg, respectively. These dimensions provide valuable information about the overall size of the nanoparticles, indicating their potential suitability for various applications. These values are similar to those reported by González-Garcinuño et al. in 2019 [26], who analyzed the size of LNPs synthesized by two systems, whole bacteria and a cell-free system, producing nanoparticles of 90 nm and 110 nm, respectively.

Additionally, PDI values were calculated as 0.036 for LNP and 0.056 for LNP-Lac 99 μg. The PDI represents the degree of size distribution within the nanoparticle population, with lower values indicating a narrower and more uniform particle size distribution. The relatively low PDI values obtained for both LNP and LNP-Lac 99 μg suggest a relatively homogeneous nanoparticle formulation. Other works using the enzymatic method to produce fructan nanoparticles have reported PDI values lower than 1 [25]. The slight size and PDI differences between LNP and LNP-Lac 99 μg could be attributed to the entrapment of laccase, as has been observed for other LNPs [27].

The zeta potential, serving as an indicator of the surface charge of the nanoparticles, was -0.64 mV for the LNP and -1.92 mV for the LNP-Lac 99 μg samples. The slightly more negative zeta potential of the laccase-containing LNPs could be due to negatively charged laccase molecules, as the isoelectric point (pI) of this enzyme is 3.4 [38,41]. Zeta potential plays a crucial role in determining the stability and potential interactions of nanoparticles within their surrounding environment. According to previous research by Barhoum et al. [44], nanoparticles with zeta potential values ranging from -10 to 10 mV are generally considered to be neutral. In line with these findings, the zeta potential values obtained in this study for LNP and LNP-Lac 99 μg support their classification as neutral nanoparticles. Other works with proteins entrapped into LNPs report similar zeta potential values. As reported by Oktavia et al. [45], entrapment of bovine serum albumin (BSA) and lysozyme in levan yielded nanoparticles with a non-spherical (for BSA) or spherical (for lysozyme) shape, an average size of 65–100 and 210–280 nm, and negative zeta potential values of -4.72 and -2.57 mV, respectively. In the context of drug delivery systems and other biomedical applications, nanoparticles possessing a neutral charge demonstrate several advantageous characteristics. Specifically, these particles exhibit an extended circulation time within the biological system [46], enhanced stability, and a reduced propensity for nonspecific interactions with biological components [47].
The analysis of the TEM images (Fig 3) revealed that the LNP, LNP-Lac 66 μg, LNP-Lac 99 μg and LNP-Lac 132 μg samples exhibited predominantly spherical shapes with uniform sizes. These findings strongly suggest that the enzymatically synthesized levan possesses the inherent capability to self-assemble, leading to the formation of NPs. Similar reports on spontaneous nanostructuration are available [25,26,28,48]. Most importantly, the entrapment of laccase during the enzymatic synthesis of levan is feasible. These results shed light on the potential of utilizing levan-based systems for NP formation and the incorporation of bioactive enzymes such as laccase. The successful demonstration of these principles in the present study holds promise for various applications in the field of nanotechnology. The samples were subjected to examination on two separate occasions. The first evaluation by TEM took place immediately following the synthesis of the nanoparticles and the entrapment of laccase. The second evaluation occurred approximately one month after the entrapment, aiming to assess the long-term stability and condition of the nanoparticles. This follow-up analysis was carried out to ensure that the nanoparticles maintained their structural integrity and functional properties over an extended period. We observed that the nanoparticles maintained their integrity after 27-day storage, as shown below.

Kinetic characterization of LNP-Lac

The activity of LNP-Lac was measured with syringaldazine and, in all cases, the biocatalyst displayed lower activity than expected on the basis of the amount of entrapped laccase (Table 2).

In order to better understand this decrease in activity, a kinetic characterization was performed for both the free laccase and LNP-Lac 99 μg. The initial rates of syringaldazine oxidation at different substrate concentrations were fitted to the Michaelis-Menten equation, and the kinetic parameters Vmax and Km were obtained (Fig 4). The calculated kcat values for the free laccase and LNP-Lac 99 μg were 7050 min⁻¹ and 1823 min⁻¹, respectively, whereas the apparent Km was 2.4-fold larger for the entrapped laccase (17.73 μM vs. 7.25 μM for the free laccase). The entrapment of laccase into the polysaccharide matrix may introduce structural changes in laccase, reduce its flexibility and/or affect the accessibility of the substrate to the enzyme's active site, as has been observed in other works [10]. In this particular case, where no covalent interactions exist between the enzyme and the nanostructured polysaccharide, the structural effects may be low or absent; thus, the activity decrease of the entrapped laccase could be mainly due to mass transfer limitations. A lower catalytic activity is expected when enzymes are immobilized inside a matrix [16], and for laccases there are many examples reporting this phenomenon [49].

The catalytic efficiency, the kcat/Km value, was also calculated for both free laccase and LNP-Lac 99 μg, yielding values of 972 and 103 (μM·min)⁻¹, respectively. This indicates a 9-fold decrease in the catalytic efficiency of entrapped laccase. This reduction falls within the range reported in other studies involving enzyme immobilization. For instance, Lloret et al. [50] observed a substantial decrease of 27-fold in the catalytic efficiency of laccase when entrapped in sol-gel matrices composed of semi-permeable polymers. Similarly, in a study conducted by Castro
et al. [51], the catalytic efficiency of laccase decreased by as much as 250-fold. Thus, the observed decline in catalytic efficiency upon immobilization aligns with the phenomenon commonly documented for enzyme immobilization processes [52–54].

While entrapment offers advantages such as improved stability (see below), it also results in a reduction in the catalytic efficiency of laccase. The choice between the free laccase and the entrapped laccase should consider the specific requirements of the application, balancing the benefits of entrapment with the desired catalytic performance. Further investigations using different substrates and varying experimental parameters would provide a more comprehensive understanding of the catalytic behavior of both the free laccase and LNP-Lac 99 μg.

Thermal and storage stability of entrapped laccase

Temperature has a significant influence on enzyme activity, making the preservation of catalytic functionality essential for practical applications [55]. The environment provided by nanostructured hydrophilic macromolecules, such as levan, may be beneficial for maintaining the 3D structure of proteins, lowering their susceptibility to thermal inactivation. The thermal stability of free laccase was evaluated under various temperature conditions, including 40°C, 50°C, 60°C, and 70°C, over a duration of 60 minutes. Although fungal laccases are robust enzymes due to glycosylation, and this laccase maintains more than 75% of its activity after a 1 h incubation at temperatures of 60°C and lower, we observed that the enzyme lost all activity after a 1 h incubation at 70°C (Fig 5). On the other hand, the entrapped laccase (LNP-Lac 99 μg) retained 41.01% residual activity under these conditions, indicating a remarkable improvement in thermal stability. Comparing the thermal stability of laccase with that in other works that also entrapped it into natural polysaccharides, the environment within LNPs appears to be more beneficial for retaining laccase activity at high temperatures. Koyani et al. entrapped a laccase in chitosan NPs and reported no difference in activity loss between free and entrapped laccase upon incubation at 60, 70, and 80°C over 30 min. The authors argue that the chitosan NP seems to remain as a gel, and is thus unable to confer rigidity to the enzyme despite the electrostatic interaction that keeps the protein bound to the polymer [56]. In another investigation, laccase was entrapped into alginate beads [57]. The results showed that the immobilization did not increase the stability of the enzyme but instead led to a loss of stability at high temperatures, together with enzyme leaching and degradation. In a different approach, in which a laccase was covalently immobilized onto the surface of inulin-coated magnetic nanoparticles, approximately a 1.5-fold increase in thermal stability was conferred to the enzyme, as shown by the residual activity after a 3 h incubation at 75°C and pH values ranging from 2.5 to 7.5 [58]. Inulin, like levan, is also a fructan-based polymer. Apparently, fructan-mediated protection against the thermal denaturation of enzymes cannot be generalized, as other proteins entrapped into LNPs are only slightly stabilized; for instance, a xylanase entrapped into LNPs showed a stability increase at 70°C of only 10%, based on the values of the inactivation constant for the free and entrapped enzyme [59]. It is well known that polysaccharides exert a positive effect on the stability of proteins when exposed to harsh environments, for example high temperatures and low water content (drying) media
[60,61]. The exact mechanism is, however, controversial. It has been proposed that the hydroxyl groups of the saccharide moieties may form hydrogen bonds, reducing the local and global flexibility of the proteins, which translates into slower denaturation.

The storage stability of laccase entrapped into LNPs was also determined, providing valuable insights into the long-term stability and functionality of the entrapped enzyme (Fig 6A). As depicted in the figure, the activity of LNP-Lac 66 μg, LNP-Lac 99 μg and LNP-Lac 132 μg was assessed at 0, 6, 17, and 27 days after entrapment, resulting in high residual activity. The storage stability of the LNP-Lac preparations suggests the robustness and stability of the entrapment system, allowing for the preservation of enzymatic functionality. The findings of this study in terms of improved stability are in line with those reported by other researchers for laccase [49,62–64], which is indeed one of the goals when immobilizing an enzyme. These results support the notion that enzyme entrapment into LNPs can confer improved stability to the enzyme while maintaining a reasonable level of activity over an extended period.

Additionally, the structure and morphology of the nanoparticles were investigated to assess their stability 27 days after the synthesis and laccase entrapment. Notably, it was observed that the structure and morphology of the nanoparticles were preserved, indicating the long-term stability of the entrapment system (Fig 6B–6D). The maintenance of the nanoparticle structure and morphology is crucial for ensuring the stability and functionality of the entrapped laccase. The results highlight the efficacy of the entrapment method in preserving the enzymatic performance of laccase, even during extended storage periods. The ability to maintain enzyme activity over time is crucial for practical applications, where stability is paramount for consistent and reliable performance.

Conclusion

This study demonstrates the successful entrapment of laccase within fructan-based nanoparticles (LNPs), providing enhanced stability to the enzyme. The development of LNPs yielded stable and enzymatically active nanoparticles. The laccase entrapped within LNPs exhibited enhanced thermal stability, suggesting that the LNPs provided a protective environment for laccase, slowing down temperature-induced denaturation. These findings may foster the use of laccase in medicine, pharmacy and biosensing. Immobilization of laccase into biocompatible polymers, such as natural polysaccharides like levan, has the advantage of creating biodegradable, non-toxic catalytic materials that carry the desired enzyme activity to specific sites of use. For example, laccase immobilized into biocompatible polysaccharides could be applied as a component of wound dressings, to detect infections, and in nanomedical devices [9,37,65]. Overall, this study contributes to the advancement of nanobiotechnology by demonstrating the successful entrapment of laccase within fructan-based nanoparticles, resulting in an enzymatically active preparation that is also stable to temperature and long-term storage.
Synchronous recovery of iron and electricity using a single chamber air-cathode microbial fuel cell

In recent years, microbial fuel cell (MFC) technology has become an attractive option for metal recovery/removal at the cathode combined with electricity generation, using organic substrates as electron donor at the anode. With no organic substrate supply, a single chamber air-cathode MFC was used here to synchronously recover metal and electricity from a real stream containing high-strength metal, sulfate, strong acid, and acidophilic chemoautotrophic bacteria (ACB). Instead, ferrous ions were used as electron donor, which made the single chamber air-cathode MFC applicable to (bio)leachate and mining/metallurgical stream sites possibly lacking organics. We showed that 71.8% of the iron was recovered, and 95.9% of the ferrous ions were removed, from a real iron-laden stream. At the same time, a 360.1 mV cell voltage was achieved with a coulombic efficiency of 88.1%. In the presence of ACB microbes, the iron recovery and power density were increased by 8.6% and 29.2%, respectively, via promotion of anode electron transfer and prevention of sulfur passivation of the electrodes. Iron was recovered in the form of FeOOH (goethite), mainly at the anode via ferrous oxidation to Fe(OH)3. At the cathode, ferrous ions combined directly with oxygen and electrons into FeO, and further into Fe2O3. It is promising, at sites lacking organics, to synchronously recover metals and electricity from real metal-laden streams using single chamber air-cathode MFC technology.

Introduction

Due to the combination of pollutant removal with electricity generation, microbial fuel cell (MFC) technology has the potential to transform conventional wastewater treatment processes from energy consumption to energy generation. 1,2 Its application has also expanded to the production of added-value products, such as H2, 3 from the treatment of wastewater. (Bio)leachate and mining/metallurgical streams are main contributors of heavy metals to the water environment. To remove metals from those streams, the methods generally involve membrane separation, 4 electrowinning, 5 absorption, 6 biological transformation, etc. 7 On the other hand, those streams also provide options for valuable metal recovery, which can make removal processes more economical and sustainable. Combined with electricity generation, MFC reactors have recently become an attractive option for metal recovery/removal at the cathode from metal-laden streams, using organic substrates as electron donor at the anode. 8-10 Removal/recovery of metals from metal-laden streams has been widely studied using dual or single chamber MFCs, in which metals are removed in the anaerobic or anoxic cathode chamber through cathodic metal reduction, while organic substrates are used as the carbon and electron donor in the anode chamber. 11-16 Cr(VI) and V(V) were simultaneously reduced at the cathode in a double-chamber MFC. 16 With 20 mmol L−1 acetate as electron donor, copper removal of >99% from a CuCl2 catholyte [1 g L−1 Cu(II)] was achieved over 6 to 7 days of MFC operation. 11 In a dual-chamber MFC, removal efficiencies of 97.8% and 94.6% were achieved for initial concentrations of 50 and 100 mg L−1 Au(III), respectively, over 12 h. 17 A maximum power output of 0.89 W m−2 was obtained for 100 mg L−1 Au(III). Removal of Au(III) from the catholyte was associated with deposition of metallic Au(0) on the cathode surface.
Single chamber air-cathode microbial fuel cells achieved a power density of 3.6 W m−2, and removed 90% Cd and 97% Zn, mainly by bio-sorption and sulfide precipitation, from 200 mmol L−1 Cd and 400 mmol L−1 Zn solutions, respectively. 18 As reported above, metal was generally used as the electron acceptor at the cathode, and organics as the electron donor at the anode. This is not always applicable for (bio)leachate and mining/metallurgical stream sites possibly lacking organics. In a double chamber fuel cell reactor with an anion exchange membrane, Fe2+ was abiotically removed from synthetic acid-mine drainage (AMD) water through oxidation to insoluble Fe(III) [Fe(OH)3], which precipitated at the bottom of the anode chamber or on the anode electrode via eqn (1). 19 Optimum conditions were a pH of 6.3 and a ferrous iron concentration above ≈0.0036 mol L−1. Further, Fe2O3 particles with diameters ranging from 120 to 700 nm, with sizes that could be controlled by varying the conditions in the fuel cell, were harvested. 20 However, real (bio)leachate and mining/metallurgical streams contain high-strength metals and sulfate as well as acidophilic chemoautotrophic bacteria (ACB), 21 which makes metal recovery/removal complicated. Moreover, it remains unknown how the presence of sulfate and ACB microbes influences electricity generation from such metal-laden streams. Here, we used single chamber air-cathode MFCs to recover metals combined with electricity generation from a real stream that contained 50.1 mmol L−1 Fe2+, 14.1 mmol L−1 Fe3+, and 52.1 mmol L−1 SO4 2− as well as ACB microbes. The metal precipitates were identified using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS), and the metal recovery mechanism was analyzed. We show here that, with no organic substrates as electron donor, 71.8% of the iron was recovered and a 343.31 mW m−2 power density was achieved with a coulombic efficiency of 88.1%. In the presence of ACB microbes, the iron recovery and power density were increased by 8.6% and 29.2%, respectively, via promotion of anode electron transfer and prevention of sulfur passivation of the electrodes. The results may be useful for investigations in which metals are recovered using MFCs, with no organics as electron donor, from (bio)leachate and mining/metallurgical streams containing high-strength metal, sulfate, strong acid, and ACB microbes.

Setups and bioleaching solution

Equipped with air-cathodes, single-chamber MFC reactors with an internal volume of 28 mL were used in this study (Fig. 1). With a normalized surface area of 7.1 cm2 (one side), the anodes were made of pretreated graphite felt (non wet-proofed, Beijing Sanye Carbon Co. Ltd., China). The cathode was prepared by applying platinum powder (0.5 mg cm−2 Pt, Hispec 3000, Shanghai Hesen Electric Co. Ltd., China) and four diffusion layers (polytetrafluoroethylene, PTFE) to 30 wt% watertight carbon cloth (HCP 330P, Shanghai Hesen Electric Co. Ltd., China) as previously described. 19 The two electrodes were parallel to each other at a distance of 1.5 cm and were connected by a piece of titanium wire across an external load of 500 Ω. The real iron-laden stream used here was obtained by bioleaching FeS powder as previously described. 22
The mixture of 10 mL effluent, from an existing well-running membrane bioreactor (MBR) treating synthetic municipal wastewater in our lab, and 90 mL anode medium [0.20 g L−1 (NH4)2SO4, 3.93 g L−1 K2HPO4·3H2O, 0.50 g L−1 MgSO4·7H2O, 0.19 g L−1 CaCl2, and 10.00 g L−1 elemental sulfur] was adjusted to pH = 2.5 and transferred into 250 mL serum bottles. The bottles were placed in a shaker at 150 rpm and cultivated at 30 °C. One month later, the bottles were taken out of the shaker and allowed to settle naturally. The supernatant was refreshed using the anode medium and cultivation continued as described above. Three months later, the inocula containing ACB microbes were obtained, 23 and the pH level was ~2.5. ACB microbes grow well in strongly acidic solution and tolerate high-strength metals under both anaerobic and aerobic conditions. 24,25 Importantly, ACB microbes are able to use S0 as electron donor (during which S0 is oxidized to SO4 2−) and ferric ions or the electrode as electron acceptor, consequently preventing sulfur passivation of the electrode. 26,27 14 mL of the brown-colored inocula was fully mixed with 25 g L−1 of FeS powder and added to the anode chamber of a double-chamber MFC reactor separated by a proton exchange membrane (Nafion-117, DuPont Company, USA), followed by addition of anode medium to 28 mL. Then, 28 mL phosphate buffer solution (PBS; 11.53 g L−1 Na2HPO4·12H2O, 2.77 g L−1 NaH2PO4·2H2O, 0.31 g L−1 NH4Cl, and 0.13 g L−1 KCl) was transferred to the cathode chamber (28 mL). Both anode and cathode were made of pretreated graphite felt and connected by a piece of titanium wire across an external load of 500 Ω. After the double-chamber MFC reactor reached a stable state, the anode supernatant (here called the real iron-laden stream and used in this study) was collected; it contained 50.1 mmol L−1 Fe2+, 14.1 mmol L−1 Fe3+, and 52.1 mmol L−1 SO4 2− as well as ACB microbes. The pH value of the real iron-laden stream was around 3.5.

Operational

For determination of the influence of initial pH on electricity generation, three single-chamber MFC reactors were filled with 28 mL synthetic stream containing 50 mmol L−1 Fe2+ in an anaerobic glove box, and the pH was then adjusted to 2.5 (MFC-2.5), 4.5 (MFC-4.5), and 6.5 (MFC-6.5) using 0.1 mol L−1 HCl and 0.1 mol L−1 NaOH solutions, respectively. For the treatment of the real iron-laden stream, the reactor (inoculated MFC) was filled with 28 mL real stream in an anaerobic glove box, and the pH level was then adjusted from 3.5 to the optimal value determined above. In order to investigate the role of the ACB microbes carried by the real iron-laden stream, another reactor (Sterile control) was filled after filtering through a 0.2 μm acetate fiber microfiltration membrane to remove ACB microbes from the real iron-laden stream. All reactors were placed in a temperature-controlled room (30 °C). The medium in the reactors was refilled when the cell voltage dropped below 20 mV. At the end of the experiment, the precipitates at the bottom of the reactors and on the electrode surfaces were removed with a plastic plate and analyzed as follows. The solution was monitored for total iron and ferrous ions. The anode and cathode were gently washed using deionized water and air-dried at room temperature for SEM (TM3030, HITACHI, Japan) observation. The precipitates were washed using deionized water, centrifuged three times at 3000g, and air-dried at room temperature for the XPS (AXIS HIS 165 spectrometer, Kratos Analytical) survey.
Analysis

The cell voltage (V) across the external load (R) was automatically recorded by a computer-based data acquisition system (DAQ-2204, Taiwan ADLINK Ltd., China) at a pre-determined sampling interval (1 h). The power output (P), normalized by the projected surface area of the anode (A), was calculated by the equation P = V2/(R × A). After the reactors reached a stable state, cyclic voltammetry (CV) scanning of the anode was conducted using an electrochemical workstation (CHI600D, CH Instruments Inc., China) under depleted substrate conditions. Before analysis, the reactors were left at open circuit for 1 h to reach a static state. The working and counter terminals of the electrochemical instrument were connected in situ to the anode and cathode of the examined MFC reactor, while a saturated calomel electrode (SCE, +0.242 V vs. the standard hydrogen electrode [SHE], Gaoshirilian Ltd., China) was placed close to the anode as the reference electrode. Prior to use, the SCE was carefully rinsed with deionized water. According to the working potentials of the electrodes investigated here, CV was performed from −0.9 V to +0.9 V vs. SCE at a scan rate of 1 mV s−1. The concentrations of total iron were quantified at 248 nm by flame atomic absorption spectrophotometry (AA-7000, SHIMADZU, Japan) equipped with a hollow cathode lamp (GL, SHIMADZU, Japan), and ferrous ions were quantified by the phenanthroline spectrophotometric method. 28 If not analyzed immediately, the filtered samples were kept at pH < 2.5 in closed vials to prevent the oxidation of ferrous to ferric ions or precipitation. At the end of the experiment, the anode and cathode of the inoculated MFC reactor were dried and imaged using SEM. 29 The precipitates of both the anode and cathode of the inoculated MFC reactor were respectively recorded by an XPS spectrometer equipped with a monochromatized Al Kα X-ray source (1486.71 eV photons).

Influence of initial pH on electricity generation from synthetic stream

Due to the strong dependence of eqn (1) on solution pH, the output cell voltages and power densities are first shown in Fig. 2 and 3 for initial pH values of the synthetic stream of 2.5, 4.5, and 6.5. The peak cell voltages of the three single chamber air-cathode MFC reactors first experienced a rapid decline and then reached a plateau, suggesting that electricity generation depended heavily on the chemical reaction. The peak cell voltages of the MFC-2.5, MFC-4.5, and MFC-6.5 reactors were 267.4, 352.4, and 189.6 mV, respectively. Accordingly, the power density as a function of initial pH was 139.5 mW m−2 for MFC-2.5, 298.9 mW m−2 for MFC-4.5, and 47.3 mW m−2 for MFC-6.5, which was negatively associated with the internal resistances (Fig. 3). The single chamber air-cathode MFC reactor at pH = 4.5 performed best in electricity generation. The power generated at pH = 4.5 here was close to that of a double chamber fuel cell reactor with an anion exchange membrane treating synthetic AMD water at pH = 6.3, in which the maximum power density was 290 mW m−2. 19 In the air-cathode MFC reactor, the cathode pH was always close to that of the anode, and the lower pH level supplied enough H+ for the cathodic reduction reaction. The shift of optimal pH from 6.3 in the dual-chamber MFC to 4.5 here was possibly associated with the reactor structure, and could reduce the required dosage of alkali.
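As an illustration of the normalization above, a minimal Python sketch of P = V2/(R × A), using the 7.1 cm2 one-sided anode area and 500 Ω load given earlier. This yields roughly 350 mW m−2 for the MFC-4.5 peak voltage rather than the reported 298.9 mW m−2, so the authors presumably used a slightly different voltage or effective area; the sketch is meant only to show the normalization, not to reproduce the paper's figures.

```python
def power_density(voltage_v: float, resistance_ohm: float, anode_area_m2: float) -> float:
    """Power output normalized by the projected anode area: P = V^2 / (R * A)."""
    return voltage_v ** 2 / (resistance_ohm * anode_area_m2)

# Peak voltage of MFC-4.5 (352.4 mV), 500 ohm external load,
# 7.1 cm^2 projected anode area (one side) = 7.1e-4 m^2
p_w_per_m2 = power_density(0.3524, 500.0, 7.1e-4)
print(f"{p_w_per_m2 * 1000:.1f} mW m^-2")  # ~349.8
```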
At 25 °C, the solubility product constant of Fe(OH)2 in aqueous solution is 8 × 10−16. 30 Generally, when the iron ion concentration in aqueous solution falls below 10−4 mol L−1, iron hydroxide [Fe(OH)2] is regarded as completely precipitated via eqn (2). Accordingly, [OH−] was calculated to be around 10−6.5 mol L−1, indicating that pH levels higher than 7.5 are favorable for the precipitation of Fe(OH)2. The higher the pH, the more iron hydroxide precipitates. In our single chamber air-cathode MFC reactors, the initial pH was 2.5, 4.5, and 6.5, and decreased to 2.0, 4.0, and 6.0 at the end of each reaction cycle, suggesting that ferrous iron would not precipitate in the form of Fe(OH)2 via eqn (2). However, with anode catalysis and electron transfer, ferrous ions readily deposited in the form of Fe(OH)3 via eqn (1) at the anode, although this reaction is thermodynamically unfavorable, with ΔfG0 = +77.49 kJ mol−1 under standard conditions ([H+] = 1 mol L−1, pH = 0). Synchronously, at the cathode, oxygen combined with H+ (from the stream or diffused from the anode) and electrons (transferred through the external circuit) into water. Eqn (1) is strongly pH dependent, and increasing pH makes it more favorable. When the initial pH in the air-cathode MFC reactors was increased from 2.5 to 4.5, Fe(OH)3 deposition with the oxidation of ferrous ions became more favorable. The output cell voltage in the MFC-4.5 reactor amounted to 352.4 mV (Fig. 2). At pH = 2.5, by contrast, it was difficult for Fe(OH)3 to precipitate and recover iron, which further blocked the oxidation of ferrous to ferric ions, so that the output cell voltage of MFC-2.5 was lower than that of the MFC-4.5 reactor. The peak cell voltage of MFC-6.5 dropped sharply to 189.6 mV from the 352.4 mV of MFC-4.5. Although not well understood, this was possibly associated with the decrease of [H+] at pH = 6.5, which lowered cathodic oxygen reduction and consequently decreased the output cell voltage.

Electricity generation and iron recovery from real iron-laden stream

As revealed above, the single chamber air-cathode MFC reactor at pH = 4.5 achieved the highest cell voltage when treating the synthetic stream with 50 mmol L−1 Fe2+. We then replaced the synthetic stream with the real iron-laden stream containing 50.1 mmol L−1 Fe2+, 14.1 mmol L−1 Fe3+, and 52.1 mmol L−1 SO4 2− as well as ACB microbes. The peak cell voltage was 360.1 mV for the inoculated MFC reactor and 314.6 mV for the Sterile control (Fig. 4). Accordingly, the power densities were 343.31 mW m−2 for the inoculated MFC reactor and 265.64 mW m−2 for the Sterile control. In the presence of ACB microbes, the power density was increased by 29.2%. A pair of redox peaks with potentials of about −0.1 V and +0.1 V, in agreement with those of a biofilm, 27 was observed in the CV curve of the inoculated anode (Fig. 5). This demonstrates that an anode biofilm with redox species formed due to the presence of the ACB microbes carried by the real iron-laden stream, further promoting electron transfer. 31 In the CV curve of the Sterile anode, by contrast, there was no redox peak, showing that it carried no electrochemically active substances. In the single chamber air-cathode MFC reactors, SO4 2− was as high as 52.1 mmol L−1 and could be reduced to S0 via eqn (3) (ΔfG0 = −193.66 kJ mol−1); S0 deposition usually leads to electrode passivation 32 and further inhibits electricity generation.
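The display equations cited as eqns (1)-(3) did not survive extraction; the following is a plausible reconstruction from the surrounding description (standard half-reactions consistent with the stoichiometry discussed in the text, offered as an assumption rather than the authors' exact formulation):

```latex
\begin{align}
\mathrm{Fe^{2+} + 3H_2O} &\rightarrow \mathrm{Fe(OH)_3\!\downarrow + 3H^+ + e^-} \tag{1}\\
\mathrm{Fe^{2+} + 2OH^-} &\rightarrow \mathrm{Fe(OH)_2\!\downarrow} \tag{2}\\
\mathrm{SO_4^{2-} + 8H^+ + 6e^-} &\rightarrow \mathrm{S^0\!\downarrow + 4H_2O} \tag{3}
\end{align}
```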
Unlike well-known microbes such as sulfate-reducing bacteria, 33 ACB microbes grow well in strongly acidic solution and tolerate high-strength metals under both anaerobic and aerobic conditions. 24,25 Importantly, ACB microbes are able to use S0 as electron donor (during which S0 is oxidized to SO4 2−) and ferric ions or the electrode as electron acceptor, consequently preventing eqn (3) from taking place. 26,27 Therefore, the presence of ACB microbes in the inoculated MFC reactor made sulfur passivation of the electrode avoidable, and the output cell voltage was higher than that of the Sterile control. It was also higher than that of a double chamber fuel cell reactor treating synthetic AMD water with no microbes. 19 At the end of the experiment, the total iron concentration had decreased from 3595.2 mg L−1 at the beginning of the experiment to 1218.6 mg L−1 for the Sterile control and 1012.3 mg L−1 for the inoculated MFC reactor, respectively (Table 1). The iron recovery rate was calculated to be 66.1% for the Sterile control and 71.8% for the inoculated MFC reactor, through precipitation on the anode and cathode as well as at the bottom of the reactors. Accordingly, the total iron precipitates collected were 184.0 mg and 196.0 mg (Table 1), light-brown in color to the naked eye (Fig. S1†), with the anode precipitates accounting for 83.2% and 83.7%, respectively. This shows that iron recovery took place mainly on the anode and was dominated by eqn (1), in agreement with the results of Cheng et al. 19 In addition, the ferrous ion concentration in the inoculated MFC reactor decreased from 2805.6 mg L−1 to 114.4 mg L−1, a ferrous iron removal rate as high as 95.9%. With 95.9% of the ferrous ions removed in the inoculated MFC reactor, 1.35 mmol of electrons was released to the anode (eqn (1)). Based on the cell voltage across the 500 Ω external resistance, the current generated was equivalent to 1.19 mmol of electrons over one stable reaction cycle, and the coulombic efficiency of the inoculated MFC reactor was calculated to be 88.1%, higher than the 72% obtained using bacteria and acetate. 34 However, the ferrous removal and coulombic efficiency here were slightly lower than in the literature. 19 This might be attributed to oxygen diffusion to the anode from the air cathode and to the presence of unknown electron acceptors in the real stream. In any case, it demonstrates that single chamber air-cathode MFC technology has the potential to synchronously recover metal and electricity from real metal-laden streams.
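The electron bookkeeping above can be checked with a few lines of Python (values taken directly from the text; one electron is released per Fe2+ oxidized via eqn (1)):

```python
# Worked check of the coulombic efficiency reported in the text.
M_FE = 55.85        # molar mass of iron, g mol^-1
VOLUME_L = 0.028    # reactor volume, 28 mL

fe2_initial_mg_per_L = 2805.6
fe2_final_mg_per_L = 114.4

# Theoretical electrons from ferrous removal (1 e- per Fe2+ oxidized)
mmol_e_theoretical = (fe2_initial_mg_per_L - fe2_final_mg_per_L) * VOLUME_L / M_FE
print(f"electrons released: {mmol_e_theoretical:.2f} mmol")  # ~1.35 mmol

# Coulombic efficiency = charge through the circuit / theoretical charge
mmol_e_circuit = 1.19  # integrated from the cell voltage across the 500 ohm load
print(f"coulombic efficiency: {mmol_e_circuit / mmol_e_theoretical:.1%}")  # ~88%
```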
SEM observation and XPS analysis of precipitate and electrodes

At the end of the experiment, the precipitates of the anode and cathode, as well as the anode surface of the inoculated MFC reactor, were observed after air-drying using SEM (Fig. 6), and the precipitates were surveyed using XPS (Fig. 7). As can be seen, the anode precipitate was coarse with some bumps (Fig. 6a). The cathode precipitate grew a large amount of irregular solid pellets or flocs (Fig. 6b), suggesting that the anode precipitate differed from that of the cathode. Additionally, the anode surface was covered by rod-shaped bacteria (Fig. 6c), in agreement with the formation of an electro-active biofilm (Fig. 5). The anode precipitate contained O, Fe, Na, Mg, Cl, and S elements, as revealed by the XPS survey (Fig. 7a), with the O absorption peak the strongest, followed by that of Fe. This indicates that the anode precipitate mainly contained O and Fe, probably as iron oxides carrying small amounts of sodium and magnesium salts. In the XPS spectra, the absorption peak of O appeared at a binding energy of around 531.1 eV, attributable to iron(III) hydroxide oxide (FeOOH). 35 The absorption peak of Fe occurred at a binding energy of around 710.1 eV, also attributable to iron(III) hydroxide oxide (FeOOH). 36 It was concluded that the anode precipitate was dominated by iron(III) hydroxide oxide (FeOOH), in agreement with the result of Cheng et al. 20 In addition, a weak absorption peak of S appeared at a binding energy of 169.1 eV in the XPS spectra, suggesting a small amount of sulfate in the anode precipitate. 37 No S0 was detected in the anode precipitate according to Lindberg et al., 38 which was possibly associated with the ACB microbes present in the real iron-laden stream. The XPS spectrum of the cathode precipitate is shown in Fig. 7b. It contained O, Fe, S, K, Na, Mg, and Cl elements, slightly more complicated than the anode precipitate. O and Fe remained dominant. Unlike the Fe spectrum of the anode, the absorption peak of Fe occurred at a binding energy of around 712.1 eV, and was regarded as arising from iron(III) oxide. 39 The O signal came from a metallic oxide, as the binding energy was around 532.1 eV. 40 This suggests that, unlike the anode precipitate, the cathode precipitate mainly contained Fe2O3. Similarly, the S absorption peak was attributable to sulfate, 37 but not S0. In the cathode precipitate, because there was a certain amount of K and the S absorption peak was heavy, jarosite [KFe3(SO4)2(OH)6] possibly precipitated via eqn (4), with a thermodynamically favorable free energy (ΔfG0 = −135.45 kJ mol−1). Further, jarosite electrode passivation possibly took place. 41 However, jarosite dissolution reactions (eqn (5) and (6)) in lower-pH solution are also thermodynamically favorable, with free energies of −116.50 and −93.70 kJ mol−1, respectively.

Mechanism analysis of iron recovery in single chamber air-cathode MFC reactors

Based on the results and discussion above, the possible reactions associated with iron recovery in single chamber air-cathode MFC reactors treating real iron-laden stream are listed in Table 2. The reaction free energies (ΔfG0) were calculated from thermodynamic data, 30 and the potentials under standard conditions were obtained for the reduction and oxidation reactions via eqn (7). Under our experimental conditions of pH = 4.5, [Fe3+] = 0.014 mol L−1, [Fe2+] = 0.050 mol L−1, and T = 303 K, the ferrous oxidation to Fe(OH)3 precipitate [eqn (a) in Table 2], with E0 = −0.773 V, was the most favorable, followed by the ferrous oxidation to ferric ions [eqn (b)], with E0 = −0.726 V. In the single chamber air-cathode MFC combining electricity generation with iron recovery from the real iron-laden stream in this work, ferrous iron, instead of organic substrates, was used as electron donor. At 25 °C, the solubility product constant of Fe(OH)3 in aqueous solution is 4 × 10−38. 30 Accordingly, [OH−] was calculated to be around 10−11.5 mol L−1, indicating that pH levels higher than 2.5 are favorable for the precipitation of Fe(OH)3. Therefore, the ferric ions also precipitated in the form of Fe(OH)3 at pH > 2.5 [eqn (e)].
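As a quick numerical check of the hydroxide solubility argument, a small Python sketch. The 10−4 mol L−1 residual threshold follows the convention used for Fe(OH)2 earlier in the text; the helper is illustrative, and the result (~pH 2.9) is consistent with the text's statement that pH above roughly 2.5 favors Fe(OH)3 precipitation.

```python
import math

def precipitation_ph(ksp: float, hydroxide_count: int, residual_m: float = 1e-4) -> float:
    """pH above which a metal hydroxide M(OH)_n is regarded as completely
    precipitated, i.e. the dissolved metal falls below residual_m.
    Ksp = [M^n+][OH-]^n  =>  [OH-] = (Ksp / residual_m)**(1/n)."""
    oh = (ksp / residual_m) ** (1.0 / hydroxide_count)
    return 14.0 + math.log10(oh)  # pH = 14 - pOH

# Fe(OH)3 at 25 C: Ksp = 4e-38, n = 3
print(f"Fe(OH)3 precipitates above pH ~ {precipitation_ph(4e-38, 3):.1f}")  # ~2.9
```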
Further, Fe(OH)3 lost water to form FeOOH, as demonstrated by the anode XPS spectra (Fig. 7a). Additionally, the free energies (ΔfG0) of ferrous and ferric reduction to metallic Fe(0) [eqn (c) and (d)] are +78.87 and +4.70 kJ mol−1 under standard conditions, 30 and the corresponding potentials (E0) are −0.409 and −0.016 V, respectively. Compared with the potential of eqn (a) in Table 2, the formation of metallic Fe(0) was not competitive at the anode, especially at pH = 4.5 [where Fe(OH)3 completely precipitated]. To avoid metallic Fe(0) formation at the anode and keep electricity generation sustainable, the solution pH should be kept at a higher value. From Table 2, ferrous ions could directly combine with oxygen and electrons into FeO, and further into Fe2O3, at the cathode, as demonstrated by the cathode XPS spectra (Fig. 7b). This means that Fe2O3 was directly produced through ferrous oxidation by oxygen at the cathode. Both FeOOH (goethite) and Fe2O3 are conductive, and their precipitation on the anode or cathode would not influence sustainable electricity generation. At the anode of single chamber air-cathode MFC reactors treating real iron-laden stream, a higher pH (>2.5) should be kept to completely precipitate Fe(OH)3 so that the anode product is controlled to be higher-grade FeOOH. To ensure that Fe2O3 dominates the cathode precipitate, enough oxygen should be supplied at the cathode.

Conclusions

The single chamber air-cathode MFC in this work combined electricity generation with iron recovery from a real iron-laden stream. Ferrous iron, instead of organic substrates, was used as electron donor, which makes the approach applicable to (bio)leachate and mining/metallurgical stream sites possibly lacking organics. Using the synthetic stream, the optimal initial pH of the air-cathode MFC solution was determined to be 4.5, with a cell voltage of 352.4 mV and a power density of 298.9 mW m−2. Without organic substrates as electron donor, 71.8% of the iron was recovered, 95.9% of the ferrous ions were removed, and a power density of 343.31 mW m−2 was generated from the real iron-laden stream at pH = 4.5. The ACB microbes carried in the real iron-laden stream were able to make the anode biofilm electrochemically active, further promoting electron transfer, and to prevent sulfur passivation of the electrodes by inhibiting sulfate reduction to S0. Ferrous ions were mainly oxidized to Fe(OH)3 at the anode and recovered as FeOOH. In the presence of oxygen, ferrous ions directly combined with oxygen and electrons into FeO, and further into Fe2O3, at the cathode. After optimization of the system, it is promising to recover metals and electricity from real streams containing high-strength metal, sulfate, strong acid, and ACB microbes.

Table 2. Reaction equations and their thermodynamic parameters involved in iron recovery using single chamber air-cathode MFC reactors. The free energies (ΔfG0) were calculated based on thermodynamic data, and potentials under standard and non-standard conditions were calculated using the Nernst equation. (Columns: Reaction equations | ΔfG0 (kJ mol−1) | E0 (V) | Potential equations | E0 at pH = 4.5 (V); the table body was not recoverable.)
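Eqn (7), referenced above, also did not survive extraction. Given that standard potentials are derived from free energies and then adjusted to the experimental conditions via the Nernst equation, it almost certainly denotes the standard relation below (the exact numbering and form are an assumption):

```latex
E^{0} = -\frac{\Delta_f G^{0}}{nF}, \qquad
E = E^{0} - \frac{RT}{nF}\ln Q \tag{7}
```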
Spiritual Vulnerability, Spiritual Risk and Spiritual Safety—In Answer to a Question: 'Why Is Spirituality Important within Health and Social Care?' at the 'Second International Spirituality in Healthcare Conference 2016—Nurturing the Spirit.' Trinity College Dublin, The University of Dublin

In offering an answer to the question, 'Why is spirituality important within health and social care?' this paper articulates views on the concepts 'Spiritual Vulnerability,' 'Spiritual Risk' and 'Spiritual Safety' and argues for the centrality of spirituality within holistic, person-centred professional health and social care. It proceeds to offer a definition of Spiritual Safety and then goes on to highlight how the patient being and feeling spiritually safe, and how professional carers enabling spiritual safety, can reduce spiritual vulnerability and spiritual risk; these may be seen as essential aspects of professional holistic care.

Introduction

In the discussion paper which follows, the author articulates some thoughts on the concepts 'Spiritual Vulnerability,' 'Spiritual Risk' and 'Spiritual Safety', which were offered in answer to a delegate's question 'Why is spirituality important within health and social care?' at the 'Second International Spirituality in Healthcare Conference 2016—Nurturing the Spirit.' Trinity College Dublin, The University of Dublin, 23 June 2016 [1].

Spirituality: Vulnerability, Risk and Safety

In recent decades the subjects of vulnerability, risk and safety have grown in significance across health and social care settings. UK and Irish social policy has helped to increase the emphasis on these areas. Examples include the Risk Management Policy and Process Guide [2] and Safeguarding Vulnerable Persons at Risk of Abuse: National Policy & Procedures [3]. Additionally, professional bodies such as the Nursing and Midwifery Council in the UK (NMC) and the Nursing and Midwifery Board of Ireland (NMBI) provide accountability guidance, i.e., The Code for nurses and midwives [4] and the Code of Professional Conduct and Ethics [5]. While much of social policy has focused primarily on the professional's care of the patient, it can be argued that there is a need to focus explicitly on the care of both patients and staff in these matters. In the caring professions, the concepts 'vulnerability', 'risk' and 'safety' are seen as central to quality care, whether of patients or colleagues. It may also be argued that these three concepts apply equally to each of the interactional dimensions of holistic care, such as (NMBI 2015): '... social, emotional, cultural, spiritual, psychological and physical experiences of patients ...' [6]. These concepts are not new to the caring professions [7,8]. For example, one would be surprised if the nurse Florence Nightingale, or her mentor Cardinal Manning, or the nuns who provided her formative nurse training in a Paris hospital [9], had not already spoken of such, or similar, concepts. Yet the meanings of spirituality and religion are contested concepts within the health professions. Furthermore, nurses question spirituality's relevance to care and many shy away from responding to such needs [7,10], thereby omitting interventions essential to holistic care and raising important ethical issues.
However, the extent to which an individual is, believes they are, or is assessed by competent others as being vulnerable, at risk/a risk, distressed, or unsafe with regard to any dimension of holistic care, within the physical and interactional environment in which they receive care, raises legitimate questions for patients and care professionals alike. Concerns regarding care require investigation through rigorous assessment, followed by systematic planning and management at individual, inter-professional and inter-agency levels, as required [11].

Spiritual Safety

While further research is required in this area, for example in terms of conceptual analysis and application to practice, the health and social care professional may still begin an initial person-centred process by asking the patient questions to ascertain their perspective [12,13], for example:

• Is spirituality/religion important to you?
• What are your spiritual/religious needs?
• How could your spiritual/religious needs best be met by the care team?
• Do you feel vulnerable, at risk, distressed or unsafe spiritually? If so, please outline why you feel that way and describe how the care team may support you to be and feel spiritually safe in their care.

As the approach is person-centred, the questions may need to be reframed to ensure that they are conveyed in ways the person finds meaningful and relevant. Equally, it should be acknowledged that spirituality may not be of significance to some care recipients. Furthermore, some patients may struggle with understanding, defining and expressing their spiritual/religious concerns/needs, and this needs to be considered. By adopting a person-centred approach, the health and social care professional may begin to address spirituality from the patient's own understanding, belief and practice and respond accordingly. Spiritual Safety may therefore be described, and the care framed, within the context of the patient's own belief/faith system, if they have one, and could furthermore be defined as 'the extent to which the individual recipient of care is and feels secure to practice their faith, and also the ways in which health and social care professionals acknowledge, understand, demonstrate respect and respond effectively to their needs/concerns, as defined by them' [8]. In the context of person-centred approaches to care [8,12,13], it is essential that individuals are, and feel, spiritually safe, as spelt out by themselves, within the physical and interactional care environment in which they receive care, and that this is managed effectively by health and social care professionals. While this may present many challenges in terms of, for example, implementation, it is incumbent on care professionals to develop appropriate evidence-based approaches in which spirituality may be properly acknowledged, respected and fostered in ways that manage risk and enhance safe, quality care. The starting point for seeking such evidence on which to build care is the evidence provided by the individual patient themselves or, in the case of those patients who are unable to provide it at the time, significant others such as family members.
Professionals in such settings are required to develop competencies that enhance knowledge and understanding of the various belief systems, practices and pastoral supports within different religions; to collaborate within inter-professional and inter-agency frameworks with faith professionals; and to develop competencies in identifying and responding to a care recipient's spiritual distress. Where professionals lack proficiency in such areas, or where it is beyond their scope of professional practice, they are advised to make appropriate referrals [8,14].

Conclusions

The author has suggested that the patient being and feeling spiritually safe, as expressed by the individual themselves, and the professional carer enabling spiritual safety, may reduce the extent to which the individual feels and/or is vulnerable, at risk, distressed or unsafe spiritually. This should be seen as an essential aspect of professional holistic care. It is recognized that further research is required into how these concepts may be fully understood and applied. However, all health and social care professionals have a role to play in acknowledging, respecting and seeking evidence-based ways of enabling and managing the spirituality of patients by beginning to actively engage with the individual on the topic.

Conflicts of Interest: The author declares no conflict of interest.
Headquarter resource allocation strategies and subsidiary competitive or cooperative behavior: achieving a fit for value creation

Integrating insights from the literature on the multinational corporation into current perspectives on resource allocation, we argue that the ability of headquarters to create value through resource allocation to subsidiaries within the multinational corporation is contingent on the complementary fit between the resource allocation strategy and the dominant behavior of the receivers of the resources. We expound on a theory and an explanation for the volatility of value creation generated by headquarter resource allocation that includes multiple layers of hierarchy. As a corollary, we extend and contribute to the theorizing on headquarters-subsidiary relations and resource allocation by illustrating different scenarios of the resource allocation process. More specifically, we develop a two-by-two matrix of the resource allocation process that corresponds to different resource allocation strategies of headquarters (winner-picking and cross-subsidization) and subsidiary behavior (collaboration or competition) in multinational corporations. We argue that, depending on which scenario within the matrix is brought to the fore, our understanding of how the resource allocation process plays out between headquarters and subsidiaries will differ and therefore influence value creation within the multinational corporation.

Introduction

The headquarters of multinational corporations (MNCs) perform many sophisticated activities (e.g., Ciabuschi et al. 2017; Dellestrand and Kappen 2012; Menz et al. 2015; Nell et al. 2017). As part of this, often referred to as the strategic role of headquarters (Goold et al. 1994), resource allocation to subsidiaries has been argued to be a source of both value creation (e.g., Khanna and Tice 2001; Nell and Ambos 2013) and value subtraction (e.g., Ciabuschi et al. 2011; Scharfstein and Stein 2000). In recent research, which has started to delve into resource allocation strategies, the evidence reported is mixed as it relates to value creation (Khanna and Tice 2001; Ozbas and Scharfstein 2009; Scharfstein and Stein 2000). While it may be axiomatic to most observers that resource allocation can both create and subtract value, what is less clear is exactly how or under what circumstances it does so. In this paper, we argue that this is because of the interplay between headquarters and subsidiaries within the MNC. The traditional assumption in the literature appears to be that headquarters, owing much to their superior capabilities, create value when allocating resources (e.g., Donaldson 1984; Williamson 1975). This could be due to the focus on the provider (i.e., the headquarters) rather than the recipient (i.e., the subsidiary that receives resources allocated by the headquarters). Although subsidiaries are typical recipients of resource allocation from headquarters for the purpose of creating value for the organization as a whole, research has often assumed their behavior to be passive, homogenous, and aligned to corporate strategy (see Kostova et al. (2016) for a review). However, there are reasons to believe subsidiary behavior is sometimes active, heterogeneous, and not always aligned with the overall corporate strategy (Ambos et al. 2010; Andersson et al. 2007; Cuervo-Cazurra et al. 2019). In fact, the MNC has been depicted as a federative arena in which units (both headquarters and subsidiaries) fight for power and influence (Andersson et al.
2007). As a corollary, goal conflicts between headquarters and subsidiaries might emerge (Egelhoff et al. 2013; Pahl and Roth 1993). As a result, the effects of variations in subsidiary behavior on the outcome of resource allocation have remained relatively unexplored from a theoretical perspective. This suggests that there might be resource allocation strategies from headquarters that are related to themes of the inner workings of the resource allocation process that have been left relatively uncharted in current research. Drawing on a complementary fit logic (1) (Cable and Edwards 2004; Ostroff 2012), we integrate and contrast resource allocation strategy with subsidiary behavior, which leads to our framework for understanding headquarter value creation. We subject the traditional organizational structure of internal competition and internal cooperation to the pressures imposed by employing a winner-picking or cross-subsidizing resource allocation strategy. We do this to better understand what these combinations yield in terms of value creation from a complementary fit perspective. This approach echoes pertinent themes in the organizational design literature related to fit and coordination as well as configuration and control (Foss 2019; Joseph et al. 2018). In addition, our approach resonates with recent research that delves into issues related to how the use of control mechanisms by headquarters is contingent on subsidiary power (Ambos et al. 2019). Specifically, we present a theory of complementary fit between headquarters' resource allocation strategies and subsidiary behaviors that allows us to understand the bright and dark sides of headquarter resource allocation in the MNC. In doing so, we answer the call for research on the connection between organizational design and headquarters (Foss 2019). Focusing our discussion on the value-creating role of headquarters in allocating resources throughout the MNC, we consider two opposing generic resource allocation strategies, i.e., the winner-picking strategy and the cross-subsidization strategy, by which headquarters may create this value. We present four main scenarios of how the resource allocation process might be understood in MNCs, which subsequently influence the theoretical mechanisms that are believed to shape the outcomes of the resource allocation process.

(1) The term complementary fit refers to the extent to which the strength or weakness of one organizational unit is offset by that of another and vice versa (Muchinsky and Monahan 1987). As such, it is similar to the idea of congruence between organizational units as discussed by Nadler and Tushman (1980).

In the paper, we show how the complementary fit between resource allocation strategy and subsidiary behavior influences value creation. The notion of complementary fit is important, as headquarter resource allocation strategies might only work as intended given certain subsidiary behaviors. For an organization to function properly, its components must exist in a state of relative balance. A lack of balance between interfacing components will lead to dysfunction. Consequently, we propose that headquarter resource allocation should take place in an overall system that needs to harmonize to facilitate value creation within the MNC.
The theory developed in this paper addresses a critical omission in the current literature on headquarter resource allocation and ultimately suggests that headquarter value creation in the resource allocation process cannot be meaningfully understood without also considering the subsidiary perspective.

Background

Following Collis et al. (2007, p. 385), headquarters can be defined as "staff functions and executive management with responsibility for, or providing services to, the whole (or most of) the company, excluding staff employed in divisional headquarters" (2). Subsidiaries are defined as entities that signify aggregations of the firm's holdings in host countries and non-parent entities in the home country (Birkinshaw and Hood 1998). The definition entails the human decision-making entities that have the ability to engage in productive effort as well as in non-productive, value-subtracting efforts. Such a definition relaxes the assumption of traditional hierarchies, where affiliates are viewed more like army formations than an interconnected heterogeneous collection of geographically dispersed subsidiaries (Bartlett and Ghoshal 1986; Blomkvist et al. 2017; Hedlund 1986; Nohria and Ghoshal 1994). Headquarter value creation is tied to resource allocation in the sense that resource allocation is one way in which headquarters can attempt to create value for the MNC, which goes above and beyond the value created by the operational activities of subsidiaries (Ambos et al. 2010; Bouquet and Birkinshaw 2008) (3). Resources might be more or less fungible and therefore more or less arduous to allocate. For the sake of simplicity, the resources discussed herein are to be viewed as capital allocations (Sengul et al. 2019). These also happen to be the resources that are easiest to observe, are fastest to transfer, and have been the subject of prior studies, starting with the seminal piece by Lamont (1997), who provided evidence of overinvestment in non-oil divisions of diversified oil firms in times of high oil prices.

(2) It is questionable whether this definition includes divisional headquarters as a part of corporate headquarters. Including divisional headquarters has strong theoretical ramifications. First, it influences where strategy is conceived and how a top-down or bottom-up perspective on strategy making can be viewed. Second, divisional headquarters might be closer to subsidiary operations and work more closely in collaboration with the subsidiaries, whereas corporate headquarters might be more concerned with overall strategy making. Third, it influences the size of headquarters and the activities performed. Fourth, it suggests legitimacy. On the one hand, divisional headquarters might be perceived as a more active business network participant and therefore have a greater degree of legitimacy than corporate headquarters (Forsgren et al. 2005). On the other hand, external actors might perceive corporate headquarters as the legitimate actor to interact with (Birkinshaw et al. 2006). This question merits considerable attention but is not dealt with in the present paper.

(3) According to the reasoning of Puranam and Vanneste (2016) and their discussion of corporate advantage, value creation can refer to the synergies of the activities pursued by both headquarters and subsidiaries.
However, while value may be created at both these levels, headquarter resource allocation in itself rests on the notion that while subsidiaries create value by using resources in operational activities, headquarters may also create value by allocating resources in the way that is most valuable for the MNC as a whole. Thus, value creation as discussed in this paper concerns the aggregate MNC.

Complementary to the literature on financial capital allocations is the literature on inter-temporal economies of scope (Folta et al. 2016; Helfat and Eisenhardt 2004; Levinthal and Wu 2010; Lieberman et al. 2017; Sakhartov and Folta 2015). This stream of research typically focuses on a broader set of resources (e.g., Levinthal and Wu 2010) and in doing so emphasizes relatedness and how the ability to redeploy resources from one subsidiary to another over time might in itself create value by lowering both entry and exit costs. While this paper focuses on capital resources, the abovementioned stream of research is nonetheless relevant to our discussion. The central idea that a firm might redeploy resources committed to one market to another also suggests that more value can be created in cases where sunk costs are small. This in turn depends on how related the two markets are, or on whether the resources yield a higher return elsewhere, which in turn depends on the redeployment costs and the performance advantages of redeploying (Penrose 1959; Sakhartov and Folta 2015). Beyond avoiding sunk costs and allowing resources to be redeployed to more efficient use, the ability to redeploy also reduces the risk of entering new markets by reducing the cost of failure (Lieberman et al. 2017). Regardless of the type of resource allocated, it is clear that this activity is not without its challenges. Overinvestment (Arrfelt et al. 2015), empire building (Xuan 2009), and rank-ordering error (Stein 1997) are just a few of the difficulties that suggest that the resource allocation process can be multifaceted and largely dependent on the strategies and interactions between different entities within the MNC. As a corollary, the MNC system of headquarters and subsidiaries represents fertile ground for analyzing resource allocation using an organizational design perspective, as this underscores key contingency considerations of how best to divide the organization into subunits and how to integrate and control those subunits in support of the organization's goals (Joseph et al. 2018; Lawrence and Lorsch 1967).

Headquarter resource allocation

The extant literature lists the roles headquarters play that can potentially strengthen the competitiveness of the MNC (see, e.g., Menz et al. 2015). A common feature of many of these roles is that they are concerned with how headquarters manages the MNC as a whole rather than the specific operational activities of subsidiaries (Chandler 1962, 1991; Ciabuschi et al. 2012). Although headquarters' attempts at value creation (e.g., the allocation of resources across the MNC) might aim to increase competitiveness, the outcomes of such attempts can vary. Research on resource allocation concerns the role of headquarters as a value creator through the allocation of resources to a subsidiary of the MNC under the assumption of resource constraints. In such a setting, for example, the surplus generated in one subsidiary might be allocated to another, or the resources of one subsidiary might be used to underwrite a loan to another subsidiary.
That is, resources are allocated based on the subsidiary's relative investment merits rather than its absolute investment merits (Stein 1997). The resource allocation process was brought to the attention of management scholars by Bower's seminal work, which analyzed the forces that shape the resource allocation process (Bower 1970). This work represents an important perspective in terms of highlighting top-down processes in firms; that is, in terms of organizational design, the hierarchy is connected to enable vertical specialization and a division of labor (Chandler 1962; Simon 1957). Moreover, the resource allocation literature places much less emphasis on the motives and behaviors of the units that are the targets of resource allocation (Bower and Gilbert 2005). However, the extent to which the resource allocation activity is of actual value to the MNC is still subject to debate, with one stream of literature claiming that there is a dark side to it (e.g., Ciabuschi et al. 2011; Scharfstein and Stein 2000; Stein 2002) and another stream stating that resource allocation is one of the primary objectives of headquarters (e.g., Donaldson 1984; Khanna and Tice 2001). A possible reason for these divergent standpoints is that these views rest on diverse and sometimes conflicting assumptions about the relationship between the strategies pursued by headquarters and the way these strategies affect the MNC's subsidiaries. Still, what these strategies and effects are, and how they relate to MNC value creation, remain unclear. Headquarter resource allocation in MNCs can, broadly speaking and for the purpose of contrasting two resource allocation strategies, be understood as a choice between either (1) winner-picking or (2) cross-subsidizing among the firm's subsidiaries. A winner-picking strategy means that headquarters disproportionately supports the strongest-performing subsidiaries by allocating disproportionate amounts of resources to them (Stein 1997; Scharfstein and Stein 2000). The idea of winner-picking is essentially the efficiency-seeking of top managers in the MNC to ensure that the highest-value investments are pursued. This resource allocation approach is akin to the thinking about efficient capital markets (Scharfstein and Stein 2000). For example, the strategy of allocating resources to subsidiaries based on profitability is common to many firms in the consumer electronics industry: the Philips Corporation (Bartlett 2009) provided ample resources to boost its profitable country subsidiaries (e.g., Israel, Japan) and, over time, under-funded and subsequently divested several of its low-performing subsidiaries (e.g., Sweden, Greece). The cross-subsidization strategy, conversely, means that headquarters disproportionately supports the weakest-performing subsidiaries at the cost of under-supporting stronger subsidiaries (Scharfstein 1998; Scharfstein and Stein 2000; Shin and Stulz 1998). The idea of cross-subsidization reflects the tendency of top managers in MNCs to manage rough business climates by prioritizing keeping the different subsidiaries of the MNC (and the synergies between them) intact, ensuring that crucial strategic value is not lost due to short-term events. The cross-subsidizing resource allocation approach reflects the thinking in the strategy literature where assumptions of economies of scope and synergies take precedence over short-term capital efficiency (e.g., Goold et al. 1994).
This approach to allocating resources can be exemplified by some of the major firms in the construction equipment industry, e.g., Caterpillar, Komatsu, and Volvo CE (Haycraft 2002). These firms produce excavators, wheel loaders, bulldozers, and many other types of construction equipment at subsidiaries in markets around the world and often use a cross-subsidization strategy to allocate resources to subsidiaries even if they have lower-than-average (or even a complete lack of) profitability. This strategy is sometimes motivated by economies of scope as well as a desire to be present in local markets and be seen as a one-stop shop for customers, so that they will not feel the need to reach out to competitors. While the above represents a notable example, it should be highlighted that resource allocation strategy is typically independent of most cohort characteristics. As such, resource allocation strategy is empirically observed to vary within, for instance, industry and country of origin.

The influence of organizational design and the local environment on subsidiary behavior

Headquarter resource allocation strategies are not situated in a vacuum, and the recipient (i.e., the subsidiary) will often influence the outcome of the process. Studying the MNC, Ambos et al. (2010) detailed how important it is to understand the concept of subsidiary initiative and how subsidiaries behave within the boundaries of the organization. The competitive context of MNCs is such that, in parallel to the external competition with other firms, there may be an internal design of the organization that promotes either competition or collaboration. This design, in turn, is closely connected to the corporate strategy of the firm. Specifically, the internal design concerns how structure affects the relationships between subsidiaries in each MNC in the face of scarce resources (e.g., Birkinshaw 2001; Birkinshaw and Fry 1998; Joseph et al. 2018; Phelps and Fuller 2000). Consequently, subsidiaries' predominant behavior towards other units of the MNC, be it headquarters or other subsidiaries, can be either internal competition or internal cooperation. Bouquet et al. (2009) provided a related argument, explaining that while the amount of information and the number of subsidiary initiatives continue to increase, the supply of attention from headquarters is a constrained resource, suggesting that subsidiaries might have to fight for resources allocated by headquarters. The competition between them concerns scarce MNC resources and often the firm's financial assets (Bower 1970; Bower and Gilbert 2005). In order to be slated for headquarter resource allocation efforts, subsidiaries make moves and take initiative. This results in a situation in which different MNC subsidiaries will sometimes compete with each other to be picked as a winner in the resource allocation process (Dutton and Ashford 1993) and cooperate at other times. Since the number of subsidiaries often increases within large MNCs, the importance of showing potential for results to headquarters continues to increase (Bouquet and Birkinshaw 2008). Although subsidiaries' predominant behavior towards other units of the MNC can be influenced by corporate strategy, it may also be influenced by the external environment, where local differences in areas such as culture, regulations, or customer preferences often constitute isomorphic pressures that affect the behavior of subsidiaries.
In theorizing about the relationship between headquarter resource allocation strategy, subsidiary behavior, and value creation, the MNC context is advantageous (Roth and Kostova 2003). Specifically, we argue that the strong efficiency-seeking effect of a winner-picking strategy combined with competitive subsidiary behavior is unsustainable in MNCs, given that the preferences of customers in different countries, as well as factors such as emission regulations, might temporarily align but might just as soon drift apart, leaving the specialized firm vulnerable. However, as is the case for all firms, a certain level of efficiency is required to compete with domestic companies. In so doing, this paper uses the MNC context to theorize on "the best MNC strategic response and organizational design" (Roth and Kostova 2003, p. 896). As such, our arguments should be applicable both to the purely domestic multi-business context and to the international context in which MNCs operate. Moreover, the heterogeneity of the international environment faced by MNCs consists of wide variation in terms of culture, regulatory frameworks, and consumer preferences. This area has also been highlighted in received research as being particularly challenging for MNC headquarters to deal with (Mahnke et al. 2012; Menz et al. 2015). In fact, Ambos et al. (2019) found that headquarters' use of control mechanisms is contingent on subsidiary power. In this setting, the MNC can be conceived of as a federative arena where units compete for power and influence (Forsgren et al. 2005). In our reasoning, an important issue arising out of this heterogeneity is that it creates competing pressures on the behavior of subsidiaries and a need to consider such pressures. First, this suggests that headquarters' strategies cannot disregard influences from the external environment (such as the industries or markets the MNC is active in) when considering how to design and allocate resources in the MNC (Sengul et al. 2019). In turn, this means that headquarters cannot simply force alignment between subsidiary behavior and a particular preferred resource allocation strategy. Rather, the resource allocation strategy of headquarters in MNCs might need to be adapted to the behavior of subsidiaries, which is subject to change and variation between subsidiaries if dictated by the external pressures of different markets.

Competitive and cooperative subsidiaries

Internal competition is often seen as an organizational principle in MNCs when it comes to the division of roles and resources (Ambos et al. 2010; Gammelgaard 2009). Such internal competition is a head-on struggle between subsidiaries over the limited resources of the MNC. Although the subsidiaries of any MNC could be expected to compete for scarce internal resources, subsidiaries of MNCs that duplicate certain activities have an especially broad potential scope for such competition (Birkinshaw and Lingblad 2005; Kappen 2011). This behavior occurs when the MNC wishes to bring efficiency pressures to bear on subsidiaries, which is believed to reduce slack and promote productivity by allowing competition between subsidiaries. Competitive subsidiary behavior can be expected in MNCs whose organizational structure generally evaluates subsidiaries on a stand-alone basis. This type of organizational structure is common in MNCs that pursue an efficiency-focused corporate strategy, where pushing each subsidiary to be as lean as possible is paramount to competitiveness.
This type of corporate strategy can be observed in firms where there is minimal relatedness between the subsidiaries and therefore minimal possibility of reaching synergies (Goold et al. 1994). Competitive subsidiaries typically strive to be the best among their peers, and this is what is expected from the organization at large. The subsidiaries are largely autonomous, and the organization drives efficiency by making sure the subsidiaries stay lean and mean through competition against each other. This resembles the constrained delegation design and how competition plays out between units (Sengul and Gimeno 2013). Competitive subsidiary behavior drives focus and a winner-takes-all mentality within the firm, pressuring subsidiaries to perform at their absolute best. However, a potential drawback of this seemingly productive influence is that it makes the subsidiaries channel their efforts into an increasingly narrow focus in order to be the best. This pushes subsidiaries to be sharp but brittle: the behavior makes them better at their particular activity as they strive to be the best possible investment. However, if the business landscape starts to change, such focused subsidiaries may not be able to adapt. They will typically not have broad sets of skills and activities, and the organizational slack will long since have been diminished. This subsidiary behavior is characteristic of the Panasonic Corporation (Bartlett 2009), for example, which has a long history of encouraging its subsidiaries around the world to compete directly for internal resources. This competition, which took place across the company and centered on profitability, was an important part of the corporate culture and fostered an environment that had serious consequences for subsidiaries that were repeatedly found to be insufficiently profitable. In contrast to the competitive organizational design, cooperative subsidiary behavior is expected in MNCs that evaluate their subsidiaries as a group. This type of organizational design is common in MNCs that pursue an effectiveness-focused corporate strategy in which a key to firm competitiveness is encouraging subsidiaries to collaborate, for example, through the diffusion of knowledge (Kogut and Zander 1995). This kind of corporate strategy often emphasizes company synergies and economies of scope and uses collaboration to reach these synergies (Goold et al. 1994). A cooperative subsidiary cultivates a broad set of skills and is inherently heterogeneous in pursuing several skills and generalization rather than specialization. That is, the skills of the subsidiary are broad but dull, yet the subsidiary has the resilience needed to weather changes in the market or the industry. As a jack-of-all-trades rather than a specialist, the cooperative subsidiary is replete with exciting, off-book projects and pockets of experimentation at the cost of considerable organizational slack. This breadth of activities and the collaborative relationships with other subsidiaries in the MNC mean that the subsidiary risks complacency; therefore, such a subsidiary is not so much on its toes as on its backside. A typical example of cooperative subsidiary behavior is Siemens AG, whose telecom switch subsidiaries around the world work on different aspects of new telecom equipment but at the same time rely on each other (Pettigrew et al. 2003). This reliance concerns input related to the subsidiaries' specialties such as market knowledge, deep product expertise, and software prowess.
This collaborative behavior melded cost structures, with employees of a subsidiary in India spending time working for a subsidiary in Germany or the USA, making comparisons of profitability across subsidiaries appear less important. Theory development: a complementary fit framework By drawing on complementary fit logic (Cable and Edwards 2004; Ostroff 2012), it is possible to integrate and contrast resource allocation strategies and subsidiary behavior within our current understanding of headquarter value creation. Specifically, we allow the traditional organizational structure of internal competition and internal cooperation to vary by polar opposite resource allocation strategies (winner-picking and cross-subsidizing) to better understand what these combinations might yield in terms of value creation for the organization as a whole. Winner-picking resource allocation strategy and subsidiary behavior The main advantage of a winner-picking resource allocation strategy can be understood as allocating the major part of resources to the subsidiaries that have proven to be the strongest achievers when it comes to presenting opportunities for investment (Andersson and Kappen 2010; Khanna and Tice 2001; Nell and Ambos 2013). Therefore, the winner-picking strategy promises the highest possible yield on the invested resources (Table 1). However, as a consequence of allocating large amounts of resources to only a few subsidiaries, the winner-picking resource allocation strategy also introduces higher levels of uncertainty. If headquarters misjudges either the subsidiaries or the market when picking these winners, the strategy might also introduce the highest possible risk to the MNC portfolio. In sum, winner-picking identifies a few star subsidiaries and then supports these subsidiaries to a considerably higher degree than it does the average-performing subsidiaries. This strategy creates a focused allocation of resources that has the potential to yield the highest returns, but only if the assumptions made about the subsidiaries' potential turn out to be correct. Thus, winner-picking is a high-risk/high-return resource allocation strategy. Having explained how a winner-picking resource allocation strategy shapes the MNC, we will now turn to how it might influence value creation depending on the chosen organizational design, i.e., competitive or cooperative subsidiary behavior. Considering a winner-picking strategy coupled with an organizational design that favors competitive subsidiary behavior, we suggest that competitive subsidiary behavior has effects on subsidiaries similar to those of a winner-picking strategy, compounding the high risks associated with each context. Therefore, the combination will make for even more sharply focused subsidiaries that introduce even higher risk to the MNC portfolio. In other words, headquarters is, in practice, betting on a few extremely specialized subsidiaries that, while likely to yield great returns, make the investment subject to both the subsidiary risk of being highly specialized and the headquarters' risk of putting many eggs in only a few baskets. The subsidiary is already straining itself as far as it can in its competitive environment, and as it becomes subject to an even more harshly competitive environment through winner-picking, the pressure might prove counter-productive and produce an organization with highly specialized, but ultimately frail, subsidiaries.
An illustration of this combination, building on the Panasonic Corporation example mentioned above (Bartlett 2009), is how it allocates resources disproportionately to the profitable country subsidiaries while the subsidiaries themselves compete against each other based on profitability.

Table 1. Resource allocation strategies and their combinations with subsidiary behavior:
Cross-subsidizing: (+) Broad risk spread; (−) Little focus on performance and return.
Competitive and winner-picking: Combining these creates an over-focus on specialization and risk-taking, which is argued to be an extreme set-up.
Cooperative and winner-picking: The relative complacency and slack characterizing cooperative subsidiaries will be balanced by the winner-picking strategy.
Competitive and cross-subsidizing: The risk and performance introduced by the competitive subsidiaries is mellowed by the cross-subsidizing, which spreads both risk and performance.
Cooperative and cross-subsidizing: The soft steering of the cross-subsidizing strategy reinforces the lack of focus and performance of the cooperative subsidiaries.

In such a competitive scenario, it is easy to imagine how slack resources that could be directed towards more visionary innovation would be sacrificed. However, while this combination could create a particularly lean and efficient Panasonic Corporation, it would likely also mean that the subsidiary organizations would have few resources to devote to experimenting with new products or other offerings, as focusing on such potentially risky activities would put subsidiaries at a disadvantage in the context of the winner-picking resource allocation strategy. Meanwhile, a winner-picking resource allocation strategy would force a comparably complacent cooperative subsidiary to introduce ambition into its low-risk operations. In this context, considerable slack might exist in relation to activities that lie between subsidiaries (i.e., where one subsidiary performs activities for another) as the costs might not be borne by the other subsidiary. In such a scenario, the winner-picking strategy might make such slack visible as both subsidiaries in this example make an account of their own cost and revenue drivers. Although the winner-picking strategy might force the cooperative subsidiary to focus more on its own performance, headquarters can temper the high-risk/high-reward profile of its resource allocation strategy by applying it to a portfolio of subsidiaries that are low-risk/low-reward to begin with. The likely result will be a more efficient group of subsidiaries that might not yield the highest returns but will also not be at the highest risk of allocation mistakes or industry change. Imagining how this might play out in the cooperative subsidiary organization of Siemens AG (Pettigrew et al. 2003), we can expect that a winner-picking resource allocation strategy on behalf of headquarters would incentivize the otherwise friendly subsidiaries in different countries to make an inventory of their costs. For example, if the bearing of costs remains disorganized for several years in a cooperative climate, we could expect cost control to become lax and slack and inefficiency to build up. A headquarter winner-picking resource allocation strategy would incentivize subsidiaries to identify such slack as well as to negotiate more vigorously with each other, leading them to apportion costs more correctly. The outcome of such a combination can be expected to be a more balanced organization with regard to both subsidiary and headquarter resource allocation risk.
Taken together, combining the winner-picking resource allocation strategy with either a competitive or cooperative subsidiary behavior suggests the following proposition about the complementary fit between headquarter strategy and subsidiary behavior: Proposition 1: A competitive (cooperative) subsidiary behavior will weaken (strengthen) the positive effect of a winner-picking resource allocation strategy on value creation. Cross-subsidizing resource allocation strategy and subsidiary behavior The main advantage of the cross-subsidizing resource allocation strategy is that it spreads the risks evenly across all the subsidiaries of the MNC. Consequently, cross-subsidization can be considered a resource allocation strategy that minimizes risk by avoiding the misallocation of resources on the part of headquarters (betting on the wrong horse) since funding all subsidiaries is betting on none. However, a potential drawback of the cross-subsidizing resource allocation strategy is the minimization of returns. Since subsidiaries receive resources on the premise of equality, relatively weak subsidiaries will receive a disproportionate amount of resources compared to stronger subsidiaries. In sum, cross-subsidizing largely disregards potential performance and aims to support all subsidiaries, albeit to a lower extent due to resource constraints. This creates a widely dispersed allocation of MNC resources that is likely to yield modest returns. Although modest, the returns are likely to be stable as no particular predictions need to be realized for the return to materialize and there is little uncertainty involved in the allocation. Thus, cross-subsidizing is a low-risk/low-reward resource allocation strategy. Having specified how a cross-subsidizing resource allocation strategy shapes the MNC, we now turn to how cross-subsidizing fits with competitive or cooperative behavior in subsidiaries. While a cooperative subsidiary behavior might appear to be a good match for a cross-subsidizing resource allocation strategy, we argue that this is misleading. As discussed previously, a cooperative subsidiary is characterized by a broad portfolio of skills and by the innovativeness allowed by organizational slack. This kind of behavior does not generally suggest high profitability, but it spreads risks broadly. This would suggest that a cross-subsidizing resource allocation strategy combined with a cooperative subsidiary behavior would reinforce the strengths and, crucially, the weaknesses of both. Such a scenario can be illustrated using the construction equipment industry, which is characterized by cross-subsidizing resource allocation strategies (Haycraft 2002). Having cooperative subsidiaries would suggest that the strategy not only allows the weaker subsidiaries to stay in business but also encourages them to stay weak (or at least does not encourage them to become stronger/more competitive) through the cooperative relationships between the subsidiaries in different countries. Thus, the weaker subsidiaries would not be incentivized to catch up with their stronger sister subsidiaries, either by the dynamics between subsidiaries (cooperative) or by the MNC headquarter resource allocation strategy (cross-subsidization), resulting in a comparatively inefficient company.
The combination of a cross-subsidizing resource allocation strategy and a competitive subsidiary behavior could give the competitive subsidiary a resource allocation context that is more conducive to experimentation and shooting for the stars, as it is more forgiving of mistakes even though the cross-subsidizing strategy itself offers little active encouragement. This would suggest an additional acceptance of failure that might provide the competitive subsidiary with much-appreciated freedom to innovate and experiment to meet requirements in the local market. In a scenario where competitive subsidiaries are subjected to a cross-subsidizing resource allocation strategy, the drive to reduce slack and increase specialization of the subsidiaries would be blunted by the relative indifference of headquarters to their profit or loss. This de-emphasis on profits as the measure of subsidiary performance would allow for the build-up of a certain amount of slack in the subsidiaries, which might encourage innovation, cooperation, and broadening of capabilities driven by local market pressure. Again, using some of the major firms in the construction equipment industry as an example, this could be illustrated by the subsidiaries that, by virtue of their competitive behavior, guard their own profit and loss statements by closely tracking revenues and costs. The combination of competitiveness among subsidiaries and cross-subsidization on behalf of headquarters would keep subsidiaries lean and efficient while allowing the relative losers to stay in business due to the resource allocation strategy used. Consequently, we suggest the following proposition: Proposition 2: A competitive (cooperative) subsidiary behavior will strengthen (weaken) the positive effect of a cross-subsidizing resource allocation strategy on value creation. Headquarter resource allocation as a question of complementary fit Contrasting the aforementioned headquarter resource allocation strategies and subsidiary behaviors, Table 2 shows how the relative complementary fit between strategy and subsidiary behavior influences resource allocation and ultimately value creation. The literature has provided little elaboration on how different headquarter resource allocation strategies might complement subsidiary behavior, as well as on what the potential impact of such a complementary fit might be on value creation. The notion of complementary fit is important as headquarter resource allocation strategies might only work as intended given certain subsidiary behaviors. This reasoning goes a long way toward explaining why headquarter resource allocation is challenging and highlights that the way in which the MNC is designed is important for the outcomes of the resource allocation process. Thus, the framework we propose has implications for the success or failure of the hierarchy and for how headquarters is an important player from an organizational design perspective when thinking about conflicting interests and the dispersion of power within the MNC, as elaborated on by Foss (2019). Viewing competition as an organizational principle that is linked to the strategies and competitiveness of the overall MNC echoes the reasoning of Nadler and Tushman (1980), who essentially saw competition and the MNC as a congruence system in which the parts affect each other and therefore need to be congruent if problems are to be avoided.
The term complementary fit, as used in this paper, refers to the interaction between two organizational components (Muchinsky and Monahan 1987; Nadler and Tushman 1980). A relevant example of such an interaction is between an organization's tasks and its abilities to perform those tasks. In order for an organization to function properly, its components must exist in a state of relative balance. A lack of balance between interfacing components will lead to dysfunction. The term complementary fit is used to emphasize how different components of the receiving organization interface with how headquarters attempts to create value through resource allocation. Table 2 provides a framework for thinking about headquarter resource allocation as taking place in an overall system that needs to harmonize in order to facilitate value creation within the MNC. Drawing on the work of Nadler and Tushman (1980), headquarter resource allocation would therefore be viewed through the lens of congruence systems. This would imply explaining variation in resulting value creation by capturing specific issues of correspondence, or complementary fit, between headquarter resource allocation strategy and the behavior of the receivers as a system. The main premise of such a congruence model is that in order for any organization to function effectively, there must be consistency, i.e., congruence, between its sub-components. To achieve congruence, the sub-components need a high level of complementary fit with each other. Examples of such sub-components are the resource allocation strategies pursued by headquarters and the behavior of the MNC subsidiaries, and complementary fit concerns how these complement each other. As a whole, therefore, a congruence model displays a relatively high or low level of congruence as a consequence of the complementary fit between the underlying components, which in this paper is between headquarter resource allocation strategy and the behavior of the MNC's subsidiaries. While the central argument in this paper has rested on the notion of a single, easily identifiable headquarters unit at the apex of the firm, recent evidence suggests more complex headquarter structures (e.g., Kunisch et al. 2019; Nell et al. 2017) where a subsidiary's behavior is asserted in nested control function systems (Sengul and Gimeno 2013). If we relaxed our definition to allow for these more complex headquarter structures, which is probably closer to the actual headquarter design of large MNCs but complicates the complementary fit argument, it is likely that the outcome of our model would remain the same. In fact, the increased complexity of hierarchy and the push and pull of additional headquarter units would make it even more difficult for the subsidiary to align with corporate strategy as yet another layer of heterogeneity is introduced (Decreton et al. 2017). This resonates with the idea of the M-form organization and a division of labor between organizational entities as a design parameter. These are contingency solutions for how to organize the MNC in which complexity drives the structure of the headquarter design: single-headquarter solutions are abandoned and the organization is deliberately designed to consist of multiple headquarters (Ciabuschi et al. 2012; Kunisch et al. 2019). Nevertheless, these multiple headquarters operate under the M-form logic with a division of labor, and our resource allocation framework is applicable to this line of thinking.
While this paper theorizes about value creation through resource allocation in light of certain strategies and subsidiary behaviors, it does so without considering reallocation strategies across the temporal dimension. The influence of such intertemporal economies of scope (e.g., Helfat and Eisenhardt 2004; Lieberman et al. 2017; Sakhartov and Folta 2015) constitutes an interesting avenue for future research as it would relax the time-invariant perspective and thereby introduce dynamics into the analytical framework. Potentially, this would allow for further theorizing on the resource allocation process and its influence on performance.
2020-04-07T00:40:25.580Z
2020-04-03T00:00:00.000
{ "year": 2020, "sha1": "5cf958c2f13340b9c9d9cc99e7350fe0543d0690", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s41469-020-00070-3", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5cf958c2f13340b9c9d9cc99e7350fe0543d0690", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
7385858
pes2o/s2orc
v3-fos-license
The Potential Involvement of E-cadherin and β-catenin in Meningioma Objective To investigate the potential involvement of E-cadherin and β-catenin in meningioma. Methods Immunohistochemistry staining was performed on samples from patients with meningioma. The results were graded according to the positive ratio and intensity of tissue immunoreactivity. The expression of E-cadherin and β-catenin in meningioma was analyzed in relation to WHO 2007 grading, invasion, peritumoral edema and postoperative recurrence. Results The positive rates of E-cadherin in meningioma WHO I, II, III were 92.69%, 33.33% and 0, respectively (P<0.05), while the positive rates of β-catenin in meningioma WHO I, II, III were 82.93%, 33.33% and 20.00%, respectively (P<0.05). The positive rate of E-cadherin in meningioma without invasion (94.12%) was higher than that with invasion (46.67%) (P<0.05). The difference in the positive rate of β-catenin between meningioma without invasion (88.24%) and meningioma with invasion (33.33%, P<0.05) was also statistically significant. The positive rates of E-cadherin in meningioma with peritumoral edema grades 0, 1, 2, 3 were 93.75%, 85.71%, 60.00% and 0, respectively (P<0.05); the positive rates of β-catenin in meningioma with peritumoral edema grades 0, 1, 2, 3 were 87.50%, 85.71%, 30.00% and 0, respectively (P<0.01). The positive rate of E-cadherin in meningioma with postoperative recurrence was 33.33%, and the positive rate with postoperative non-recurrence was 90.00% (P<0.01). The positive rates of β-catenin in meningioma with postoperative recurrence and non-recurrence were 11.11% and 85.00%, respectively (P<0.01). Conclusion The expression levels of E-cadherin and β-catenin correlated closely with the WHO 2007 grading criteria for meningioma. In atypical or malignant meningioma, the expression levels of E-cadherin and β-catenin were significantly lower. The expression levels of E-cadherin and β-catenin were also closely and inversely correlated with the invasion status of meningioma, the size of the peritumoral edema and the probability of recurrence. Taken together, the present study provides novel molecular targets for the clinical treatment of meningioma. Introduction Meningioma originates from the derivatives of the meninges and arachnoid cap cells. Its incidence among primary intracranial tumors is very high (15-20%), ranking just behind cerebral glioma [1,2]. The biological characteristics of meningioma are diverse. Most of these tumors grow slowly, but a small portion also display invasive growth. Most meningiomas can be cured by complete surgical removal, although there is always a risk of recurrence. Current theories on the occurrence and development of meningiomas involve polygenic and multiple molecular factors [3,4]. For example, recent studies suggested that E-cadherin-mediated cell-cell adhesion is critical in these processes [5,6]. The objectives of this study were to investigate the expression levels of E-cadherin and β-catenin in meningioma with both temporal and spatial information, in order to determine their pathological significance in tumor invasion, formation of peritumoral edema, and postoperative recurrence. The results showed that the two molecules are tightly associated in meningioma and could have important implications for the development of new targeted therapies.
Tissue materials All specimens involved in this study were collected from 49 meningioma patients with university guidelines carefully followed (approved by the Taizhou Hospital ethics committee for medical research using clinical human samples). Written permission was obtained from each patient. These operative specimens came from the Affiliated Municipal Hospital at Taizhou Medical College and Sir Run Run Shaw Hospital at Zhejiang University. All the patients were diagnosed with meningioma in formal pathological reports between Jan 2003 and Sep 2005. Clinical data Eighteen cases were male, and 31 cases were female. The age range was 20 to 75 years, and the average age was 56.3 ± 17.1 years. The length of time for which the patients had meningioma varied from 2 days to 5 years. The first symptoms were as follows: increased intracranial pressure (12 cases), visual disturbance (7 cases), disorder of limb activity (20 cases), hearing disorder (4 cases), seizure (6 cases), and other (10 cases). Cranial MRIs revealed the tumor locations as follows: cerebral hemisphere (15 cases), parasagittal (17 cases), cerebellopontine angle (5 cases), sphenoid ridge (6 cases), and sellar region (6 cases). The tumor size was as follows: <3 cm (8 cases), 3-6 cm (25 cases), >6 cm (16 cases). The dural tail sign was not obvious in 20 cases and was obvious in 29 cases. All 17 cases of parasagittal tumors had MR venous angiography (MRV), and 2 cases underwent DSA. The results showed that no obstructed sagittal sinus or thrombosis was present. In five cases, the tumors were closely related to the cortical draining veins, but it was easy to separate the tumors and veins during the operation, and there were no obstructions or thrombosis of the cortical draining veins. In the samples we studied, 22 cases were Simpson resection grade I, and the other 27 cases were Simpson resection grade II. Invasive tumor was defined according to whether the tumor had invaded the pia mater and skull. This was ascertained by surgical findings and pathological examination. In the invasive group, the tumors had invaded the pia mater or the skull and the arachnoidal cleavage plane had disappeared, while in the non-invasive group, the tumors had not invaded the pia mater or the skull and the arachnoidal cleavage plane was well preserved. According to these criteria, 15 cases were classified as invasive. Of these, 2 tumors were <3 cm in diameter, 8 were 3-6 cm, and 5 were >6 cm. Goldman's method was used to classify peritumoral edema [7]. According to this classification method, 16 cases were grade 0 without obvious edema, 21 cases were grade 1 with an edema zone <2 cm, 10 cases were grade 2 with an edema zone ≥2 cm but restricted to the hemisphere, and 2 cases were grade 3 with an edema zone extending beyond the hemisphere. The resected pathological specimens were hematoxylin-eosin (HE) stained. The histological type and grade of the specimens were classified according to the WHO 2000 standard. Forty-one cases were benign meningiomas (WHO grade I), 3 cases were atypical meningiomas (WHO grade II), and 5 cases were malignant meningiomas (WHO grade III). Of the benign meningiomas, 10 were epithelial, 5 were transitional, 14 were fibrillar, 7 were glit, 2 were angiomas, and 3 were microcystic. Every case included a follow-up visit by the out-patient service or by telephone and letter. The follow-up intervals ranged from 18 to 52 months, and the mean follow-up time was 40.9 ± 19.3 months.
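To make the edema classification above concrete, a minimal sketch of the Goldman grading rule is given below; the function name and argument encoding are illustrative assumptions, not from the study:

import sys

def edema_grade(zone_cm: float, beyond_hemisphere: bool) -> int:
    """Classify peritumoral edema per Goldman's method [7]:
    grade 0, no obvious edema; grade 1, edema zone <2 cm;
    grade 2, >=2 cm but restricted to the hemisphere;
    grade 3, extending beyond the hemisphere."""
    if beyond_hemisphere:
        return 3
    if zone_cm <= 0:
        return 0
    if zone_cm < 2:
        return 1
    return 2

# Example: a 2.5 cm edema zone confined to the hemisphere -> grade 2.
print(edema_grade(2.5, beyond_hemisphere=False))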
Upon follow-up, there were 9 relapses: 3 cases had been reoperated, 4 cases had been treated by gamma knife, and the other 2 cases had been under continuous observation. Immunohistochemistry The expression levels of E-cadherin and β-catenin were measured by immunohistochemical staining using the EnVision method. Tissues were prepared as paraffin sections. Prior to immunohistochemistry, the sections were deparaffinized with xylene, dehydrated with ethanol, and deoxidised with methanol. The sections were then heated in a pressure cooker to 121°C for 2 minutes in citrate buffer solution to restore the antigen immunoreactivity. The sections were washed in PBS prior to incubation with primary mouse monoclonal antibodies against E-cadherin (1:50, Shanghai Gene Tech Co.) and β-catenin (1:200, Shanghai Gene Tech Co.) overnight at room temperature. The sections were then processed for DAB visualization, mounted with permount medium and observed under a light microscope. Criteria in analyzing the staining pictures The expression of E-cadherin was located in either the membrane or cytoplasm of meningioma cells, more commonly in the former. The expression of β-catenin was located in the membrane, cytoplasm, and perinuclear granules [8]. The expression strength was analyzed and graded based on the positive ratio and intensity of immunoreactivity [9]. The positive cells were stained light brownish-yellow to chocolate-brown, and the intensity of the immunoreactive products was scored under a high-power microscope as follows: no expression, 0; yellowish, 1; imperial yellow, 2; and brown, 3. The positive ratio was scored as follows: positive cells <5%, 0; positive cells 5-10%, 1; positive cells 11-50%, 2; positive cells 51-80%, 3; positive cells >80%, 4. The two scores were multiplied, and the IRS (values from 0-12) was determined as follows: 0 (−), 1-3 (+), 4-6 (++), and >6 (+++). The slides of the best production quality were selected as observation objects for determining the results. Statistical analyses Statistical analyses were carried out using SPSS 11.0 software. The comparison of the expression strength of E-cadherin and β-catenin across the different pathological types was performed by the rank sum test. The comparison of the positive ratio of E-cadherin and β-catenin across WHO 2000 grading, incidence of invasion, level of peritumoral edema, and postoperative recurrence was performed by the Chi-square test. P<0.05 was considered statistically significant. Results The expression levels of E-cadherin and β-catenin and their correlation with meningioma pathological types (Table 1) We investigated the expression levels of E-cadherin and β-catenin among different types of meningioma. We found that their expression levels were not statistically different across the types (χ² = 5.649 and 6.274, respectively; P>0.05). This suggested that E-cadherin and β-catenin could be common molecules participating in the development of diverse meningioma types. The expression results for E-cadherin and β-catenin with differential invasion of meningioma (Table 3) We further asked if the expression levels of the two proteins could contribute to the invasive ability of the tumor. We found significant differences in the expression levels of E-cadherin and β-catenin between invasive and non-invasive meningioma (P<0.05). This strongly suggested that E-cadherin and β-catenin could be potentially negative regulators of tumor invasion.
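As a minimal sketch of the IRS grading described in the criteria above (the cut-offs follow the text; the function and variable names are illustrative, not from the study):

def intensity_score(stain: str) -> int:
    """Map staining colour to the 0-3 intensity score used in the paper."""
    return {"none": 0, "yellowish": 1, "imperial yellow": 2, "brown": 3}[stain]

def ratio_score(positive_fraction: float) -> int:
    """Map the fraction of positive cells to the 0-4 ratio score."""
    pct = positive_fraction * 100
    if pct < 5:
        return 0
    if pct <= 10:
        return 1
    if pct <= 50:
        return 2
    if pct <= 80:
        return 3
    return 4

def irs_grade(stain: str, positive_fraction: float) -> str:
    """Multiply the two scores and bin the IRS (0-12) into the four grades."""
    irs = intensity_score(stain) * ratio_score(positive_fraction)
    if irs == 0:
        return "-"
    if irs <= 3:
        return "+"
    if irs <= 6:
        return "++"
    return "+++"

# Example: brown staining (score 3) in 60% of cells (score 3) -> IRS 9 -> "+++".
print(irs_grade("brown", 0.60))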
The relationship of the expression levels of E-cadherin and β-catenin with the postoperative recurrence of meningioma (Table 5) Because the expression levels of E-cadherin and β-catenin could reflect the invasive ability of the tumor cells, they might also be indicative of post-operative recurrence. We therefore investigated the association between the expression levels of E-cadherin and β-catenin and postoperative recurrence in our study. We found that the positive rates of E-cadherin and β-catenin in postoperative recurrence cases were 33.33% and 11.11%, respectively, while in postoperative non-recurrence cases they were 90% and 85%, respectively. In both cases, the differences were statistically significant (χ² = 15.49 for E-cadherin and 12.84 for β-catenin, P<0.01). Discussion E-cadherin is a calcium-dependent cell-cell adhesion molecule with pivotal roles in epithelial cell behavior, tissue development, and suppression of cancer growth [10,11]. The human E-cadherin gene was characterized in 1995 by Berx et al. [12]. Cadherin depends on Ca2+ for its function and structural rigidity; the extracellular amino-terminus forms a 'zipper-like' structure that acts as a tight cell junction. The intracellular carboxyl-terminus of the cadherin molecule indirectly attaches to the cytoskeleton via catenin. β-catenin, one of four known kinds of catenin, is a multifunctional protein [13]. It binds directly to the cytoplasmic terminus of E-cadherin and forms the E-cadherin/catenin complex [14,15]. Disruption of this junction leads to diverse phenotypes, including loosened cell-to-cell contacts, morphological changes of the tissue and cells, enhanced cell motility, and the loss of cell contact inhibition. These changes in molecular structure and function are directly related to the biological behavior of tumor cells, affecting their detachment and re-adhesion. The expression level of E-cadherin could be related to the classification of astrocytomas. For this reason, an assessment of the expression status of E-cadherin in astrocytomas could be one important index in determining the prognosis of patients [16,17]. A previous study by Motta et al. examined the expression of E-cadherin in astrocytomas and in brain cells with non-CNS tumors [18]. They found that the expression strength of E-cadherin in low-grade astrocytomas (grade I-II) was higher than that in high-grade astrocytomas (grade III-IV) (P<0.0001), while the expression strength of E-cadherin in non-CNS tumors was higher than that found in grade I astrocytomas. The results of our research revealed remarkable differences in the involvement of E-cadherin and β-catenin in different pathological grades of meningioma. Moreover, these differences were statistically significant. As the pathological grade of meningioma increased, the positive rates of E-cadherin and β-catenin in meningioma decreased. The expression of E-cadherin and β-catenin was completely diminished in malignant meningioma. In previous studies, Utsuki et al. tested specimens of 103 meningiomas and found that the expression levels of E-cadherin in 5 atypical meningiomas were all negative, and β-catenin expression was negative in 3 of these cases [19]. However, E-cadherin and β-catenin expression was positive in epithelial meningioma. In 10 of the 12 cases of invasive meningioma, the E-cadherin and β-catenin expression levels were negative.
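The chi-square comparisons reported above can be illustrated with a short sketch; the 2x2 counts below are hypothetical values reconstructed only loosely from the reported positive rates (9 recurrent and 40 non-recurrent cases), not the study's raw data, so the statistic will not exactly reproduce the published χ² values:

from scipy.stats import chi2_contingency

# Rows: recurrent, non-recurrent; columns: E-cadherin positive, negative.
table = [[3, 6],    # recurrent: roughly 33.33% of 9 cases positive
         [36, 4]]   # non-recurrent: roughly 90% of 40 cases positive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")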
Therefore, they concluded that the decrease in cell adhesion molecules was associated with the increase in tumor cell proliferation and might contribute to the invasive ability of meningioma. Tumor invasiveness and the presence of peritumoral edema are the two major factors that determine the clinical management of meningioma. There are biological, physical, and chemical factors that contribute to the peritumoral edema of meningioma [20]. In our studies, we found remarkable differences in the positivity rates for E-cadherin and β-catenin expression corresponding to different degrees of peritumoral edema. As the expression of E-cadherin and β-catenin decreased, the possibility of developing peritumoral edema increased. We therefore believe that there exists a mechanism in meningioma cells (especially those with a high degree of malignancy) which could inhibit the expression of E-cadherin and β-catenin, leading to impaired cell-to-cell junctions and damage to the tumor-brain interface and the blood-brain barrier. Consequently, the meningioma cells could infiltrate brain tissue and increase brain edema. In cases that showed a loss of E-cadherin and β-catenin, one would expect serious brain edema and the clinical features of intracranial hypertension. For this reason, clinical surgeons should pay close attention to the intracranial pressure during a tumor-removal operation. The decrease or loss of the expression of E-cadherin leads to the loss of contact inhibition and unrestricted hyperplasia, the loss of intercellular junctions, stronger invasive ability, enhanced tumor cell dissemination and metastasis, and, in some extreme cases, malignant transformation of benign tumors [21,22,23]. Erdemir et al. tested specimens of bladder cancers and found that 13 of the 25 stage T1a cases were recurrent and that the positive rate of E-cadherin among them was only 30.7% [24]. However, in the 12 non-recurrent cases, the positive rate of E-cadherin was 75%. Among the 27 stage T1b cases, 25 were recurrent, and the positive rate of E-cadherin was only 12%. All these data suggested a close relationship between the decreased expression of E-cadherin and the recurrence of postoperative bladder cancer. The results of our studies showed that the positive rates of E-cadherin and β-catenin in postoperative recurrence cases were both significantly lower when compared to those of postoperative non-recurrent cases. We also tested E-cadherin and β-catenin expression levels in pituitary adenoma and found that they were significantly down-regulated and related to the extent of invasion of pituitary adenoma. Pituitary adenoma recurred most readily when the expression of E-cadherin and β-catenin was decreased (Liu et al., unpublished data). In summary, the present study employed molecular biology and immunohistochemistry tools to understand the potential involvement of two cell-adhesion molecules in the development and invasiveness of meningioma, providing novel targets for pathological analyses as well as therapeutic drugs.
2014-10-01T00:00:00.000Z
2010-06-21T00:00:00.000
{ "year": 2010, "sha1": "27215836269ba2de2749c7658489907112d0f21d", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0011231&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27215836269ba2de2749c7658489907112d0f21d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247107054
pes2o/s2orc
v3-fos-license
Reshaping wound care: Evaluation of an artificial intelligence app to improve wound assessment and management amid the COVID-19 pandemic Abstract Wound documentation is integral to effective wound care, health data coding and facilitating continuity of care. This study evaluated the usability and effectiveness of an artificial intelligence application for wound assessment and management from a clinician-and-patient user perspective. A quasi-experimental design was conducted in four settings in an Australian health service. Data were collected from patients in the standard group (n = 166, 243 wounds) and intervention group (n = 124, 184 wounds), at baseline and post-intervention. Clinicians participated in a survey (n = 10) and focus group interviews (n = 13) and patients were interviewed (n = 4). Wound documentation data were analysed descriptively, and bivariate statistics were used to determine between-group differences. Thematic analysis of interviews was conducted. Compared with the standard group, wound documentation in the intervention group improved significantly (more than two items documented 24% vs 70%, P < .001). During the intervention, 101 out of 132 wounds improved (mean wound size reduction = 53.99%). Positive evaluations identified improvements such as instantaneous objective wound assessment, shared wound plans, increased patient adherence and enhanced efficiency in providing virtual care. The use of the application facilitated remote patient monitoring and reduced patient travel time while maintaining optimal wound care. KEYWORDS: artificial intelligence, digital application, documentation, wound, wound care. Key Messages • wound applications have improved the accuracy of wound assessment, management and documentation, leading to improved wound care and positive patient outcomes • this study aimed to evaluate the usability and effectiveness of an application for wound care and documentation • the wound application demonstrated a significant improvement in wound documentation, and clinicians and patients gave positive evaluations • the wound application facilitated remote patient monitoring, while maintaining optimal wound care | INTRODUCTION Individuals with wounds can be found across all age groups and all health care specialties. Some causes of wounds include trauma, burns, skin cancers, infections or underlying medical conditions such as diabetes. 1 Wounds generally occur in the context of comorbid disease, for example, diabetic foot ulcers, venous leg ulcers and pressure injuries, but despite this, patients are treated in the silo of their medical specialty. 1 The impact is that approaches to wound care may differ between specialities, and currently there is no specialist patient-centred wound care approach. Management of all wounds requires good assessment and handover. 2,3 Inaccurate wound documentation can affect the determination of the best treatment options and the wound healing process. 4 The careful documentation of wounds should include all variables, such as the wound location and size, the surrounding skin, the presence of undermining and tunnelling and the amount of exudate, odour and/or pain.
5 The Australian Standards for Wound Management highlight the importance of accurate documentation to provide a 'legal, comprehensive, chronological record of the individual's wound assessment, management and prevention plan'. 6 However, there is a paucity of research that specifically addresses wound documentation; only a few small studies have been published, and these indicate deficiencies in this area. 7 A small Australian and Norwegian study 8 using a retrospective review of wound documentation across five health care facilities identified that almost half (45%) of the medical record notes on wounds (specifically, pressure injuries) lacked key details on assessment and intervention. Wound care activities were not described comprehensively. Gillespie and colleagues 9 studied wound documentation by clinicians in 200 medical records in an Australian hospital. They found that less than half (41.4%) of the medical records had completed wound assessment documentation and that wound care documentation was not in line with evidence-based guidelines. Similar results have been found in the United States and other countries. In their pilot study conducted in a 560-bed tertiary hospital, Li and Korniewicz 10 found that wound assessment and management documentation in electronic medical records (eMR) was poorer than written documentation. Thus, current wound documentation in the eMR does not capture the complex needs of patients, nor does it reflect evidence-based practice. 11 A recent stakeholder engagement and forum review conducted by a national group of experts in Australia found a lack of integration of patient records and wound information across the continuum of care. 12 The importance of wound photography for detailed, reliable wound assessment, monitoring and management has been demonstrated in numerous publications. [13][14][15][16][17] Increasingly, health services are using photos to assess, manage and document wounds. 16,18,19 The wound photo can be checked immediately, is readily available for viewing on a computer screen, and can easily be shared among clinicians and printed if required for hard-copy records. There are several problems associated with wound assessment and the use of wound photography for assessment. First, the wound assessment process is subjective, given that it is influenced by the clinician's experience. The assessment of wound dimensions varies between clinicians even when a standard tape measure is used because there is often no clear definitive wound edge, and pinpointing an edge from which to judge width or length is subjective. Second, the type of camera used to capture wound images and the way the camera is positioned can affect the wound picture, creating further variation in wound dimension measures or the 'look' of a wound. Last, the steps required to upload and transfer images can breach hospital policy and privacy regulations. 20 Recent advances in wound photography, such as digital wound applications, have improved the accuracy of wound care documentation, leading to improved wound management and patient outcomes. [13][14][15]17 Digital wound applications downloaded to a smartphone are capable of performing real-time wound analyses and tracking through image capture. Certain applications have algorithms and clinical decision-support tools to assist in determining the best treatment options, the type of wound products to use, the tracking of the wound healing progression and the next steps to take.
These technologies for wound care have been developed and used for a number of years in Europe, the United States, Canada and the United Kingdom. [21][22][23][24] Their use is also understood to occur in the Australian context; however, we could not identify any literature reporting digital apps for an extensive range of wound types. Some studies have reported single wound types such as diabetic foot ulcers. 25 A major metropolitan quaternary health service in New South Wales, Australia conducted a market analysis and a review of digital applications for wound management. The Tissue Analytics digital application (TA app) met all of the health service's requirements, including wound image capture, eMR integration, aggregated wound data, telehealth integration and clinical decision-support capability such as an algorithm that provides clinicians with treatment options. The Tissue Analytics application is a machine learning application, which fits under the overall artificial intelligence (AI) umbrella; AI describes any task automated by a machine/computer (eg, a robotic arm executing a simple up-and-down movement), whereas machine learning specifically describes software systems that learn and improve from data. 26 The aim of this study was to evaluate the usability and effectiveness of the TA app to improve wound assessment and management. | Study design This study used a quasi-experimental design to evaluate the usability and effectiveness of the TA app: clinicians used the TA app for patient wound assessment and management from June to October 2020, and the data were compared with a historical patient group who received standard wound assessment and management (December 2019 to March 2020) (Figure 1). Patients using the TA app at home (patient interface) were assigned in mid-July 2020. Owing to the coronavirus disease (COVID-19) pandemic, the study paused in March 2020 but restarted 3 weeks later. In addition, patients were not enrolled to use the patient interface until July 2020 because the patient interface software was not available for download in Australia. | Setting We conducted this study in four settings in one health service in the state of New South Wales, Australia. The health service is a large urban setting that provides primary, secondary and tertiary care to a local population of 700 000 people. Three study settings were in a quaternary hospital: an aged care ward, a colorectal ward and an outpatient dermatology clinic. The hospital is an 800-bed facility. | Clinicians A purposive sample of 13 clinicians (11 nurses and two doctors) was recruited. These 13 clinicians were trained on the TA app and used the app with their patients. | Intervention The TA app is a cloud-based application to measure, analyse and treat wounds. The TA app is designed to facilitate patient wound care delivery using artificial intelligence-based technology to support clinical decision-making. By capturing an image of the wound, the TA app analyses its dimensions and perimeter, surface area and tissue composition and presents augmented visual images (Figure 2). To facilitate the success of the intervention, we undertook careful planning, and codesigned the clinical model and technology solution through broad collaboration with many key stakeholders, including the TA app developers, system architects and clinicians, as well as consultation with statewide government agencies.
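The TA app's measurement pipeline is proprietary, but the general idea of deriving wound dimensions from a photo can be sketched: a segmentation step yields a binary wound mask, and a reference marker of known physical size in the frame fixes the millimetre-per-pixel scale. All names and the scale value below are illustrative assumptions, not the TA app's actual implementation:

import numpy as np

def wound_metrics(mask: np.ndarray, mm_per_pixel: float) -> dict:
    """Compute simple wound dimensions from a binary mask (1 = wound)."""
    area_mm2 = mask.sum() * mm_per_pixel ** 2
    rows, cols = np.nonzero(mask)
    length_mm = (rows.max() - rows.min() + 1) * mm_per_pixel
    width_mm = (cols.max() - cols.min() + 1) * mm_per_pixel
    return {"area_mm2": area_mm2, "length_mm": length_mm, "width_mm": width_mm}

# Toy example: a 40 x 25 pixel wound region at 0.5 mm/pixel -> 250 mm^2.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[30:70, 40:65] = 1
print(wound_metrics(mask, mm_per_pixel=0.5))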
For this study, we tested a standalone solution that was not integrated with the eMR, whereby the image and structured report generated from the TA app were to be uploaded to the eMR via the medical records department. The TA app was available for any smartphone and Android device with an integrated camera. No additional optics or hardware were needed to run the app. The TA app comprises two components, each with a separate login: a clinician interface (Figure 3) and a patient interface (Figure 4). Clinicians used the clinician interface to take photos and document wound assessment and management. The patient interface was used by patients who cared for their wound at home and required regular monitoring by clinicians. The patient interface was linked to the clinician interface for oversight. | Standard practice The health service uses wound iView in the eMR for wound documentation. Wound iView is a platform for wound documentation in the eMR. Wound iView at present has wound documentation fields, such as wound location, colour and exudate, with drop-down options to document wound assessment. Wound iView does not have inbuilt functionality to store visual images; therefore, the wound information is stored only as text. Clinicians take photos with either a single-lens reflex camera or a work mobile device. The photos are then uploaded onto a secure password-protected drive to which only senior wound care nurse consultants have access. Wound iView in the eMR has the functionality for wound assessment and management documentation for patients with one wound. | Outcome measures The outcome measures of this study were: (a) patient and clinician usability and acceptability of the TA app; (b) reduction in wound size at the point of discharge (inpatient cohort) and at the end of 3 months after enrolment (community and outpatient cohort); (c) completeness of wound-related documentation, determined by the documentation of pain, wound size, exudate, odour and a management schedule. | Clinicians A clinician survey (APPENDIX 1) was developed by the study investigators (M.B.J., A.J., F.C.) in consultation with researchers and senior clinicians using the REDCap system (a system for building and managing online surveys and research data) hosted by the local health service. 27 The clinician survey included 21 questions regarding user experience in the following categories: usability and easiness (seven items), image capture (four items), benefits to assessment and management (three items), benefits to communication and continuity (three items), benefits to workflow and time to wound assessment (three items) and overall perceived value (one item). The survey used a 10-point Likert scale, with scores ranging from 0 to 10 (higher scores indicating stronger agreement), with items 3, 5, 7, 9, 11, 14 and 16 being reverse scored. The survey was reviewed for face validity by wound care expert clinicians and pilot tested on five clinicians. No changes were required. To further explore clinician usability and acceptability, focus group interviews were conducted with clinicians. | Patients A patient data collection tool was developed by the lead authors (M.B.J., A.J., F.C.) in consultation with wound care expert clinicians and reviewed for face validity by senior registered nurses (RNs) and senior researchers.
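As a minimal sketch of the survey scoring just described (the 0-10 scale and the reverse-scored items follow the text; the example responses and the item-to-category grouping shown are illustrative assumptions):

REVERSED = {3, 5, 7, 9, 11, 14, 16}

def scored_item(item_no: int, raw: int) -> int:
    """Reverse a 0-10 response (10 - raw) for negatively worded items."""
    return 10 - raw if item_no in REVERSED else raw

def category_mean(responses: dict[int, int], item_nos: list[int]) -> float:
    """Mean of the (reverse-corrected) scores for one survey category."""
    return sum(scored_item(i, responses[i]) for i in item_nos) / len(item_nos)

# Example: one clinician's answers to the seven usability items (1-7);
# items 3, 5 and 7 are reversed before averaging.
answers = {1: 9, 2: 8, 3: 2, 4: 9, 5: 1, 6: 8, 7: 3}
print(category_mean(answers, [1, 2, 3, 4, 5, 6, 7]))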
The tool captured patient demographic data, diagnoses and comorbidities; wound aetiology; wound size percentage decrease; the completeness of wound documentation, including data on pain, exudate, odour, size, wound management goal and wound assessment schedule; and the potential saving in patient travel-related time. For outcome measures of usability and acceptability, semi-structured in-depth interviews were conducted with patients via telephone. | Clinicians Clinicians were trained by the application vendor on the use of the TA app prior to enrolment. They were instructed to download the TA app on their work smartphone. Two training sessions were held (Box 1). Of the 13 clinicians, one designated Skin Integrity Nurse Consultant (T.L.) was assigned as superuser. The lead author was also assigned as a superuser for oversight and coordination. The remaining clinicians were assigned as end users. The TA app developers conducted a 4-hour virtual session for superusers and a 1-hour session for end users. The end-user virtual training sessions were followed by 2-hour face-to-face sessions with the superusers. The investigators assembled a training resource pack, including an information brochure on the TA app for both clinicians and patients, step-by-step instructions and a cheat sheet with frequently asked questions. At the end of the training session, clinicians were provided with the training resource pack and were instructed to raise any questions with the investigators, who would then triangulate issues with the TA support team. For progress and monitoring, the investigators set up 15-minute virtual catchups with clinicians twice a week to discuss progress and raise any issues related to the app and the clinical workflow for the duration of the study. All clinicians testing the TA app were invited via email to complete the survey online and to participate in the focus groups 4 months after the TA app implementation. They were given a month to complete the survey and were sent two follow-up emails in this period to remind and encourage them to complete it. Semi-structured focus group interviews, which lasted between 20 and 40 minutes, were conducted with clinicians in a quiet location of their choice. They were given the opportunity to share their views on the TA app: its ease of use, its impact on wound documentation, their experience with the TA app, its incorporation into the workflow and its effectiveness in assisting with wound management decisions. Focus group interviews were audio-recorded, professionally transcribed and deidentified. | Patients Data on patients were collected via their medical records and were compared with data from standard care patients collected prior to the TA app implementation. All patient data were de-identified when recorded and entered onto a secure web-based platform used to store research data (REDCap). 27 The clinician users assisted the patients using the patient interface to download the TA app on their phone and instructed them on the use of the app. The patients were provided with a written information sheet and the contact details of their clinicians for questions. When patients used the patient interface to photograph their wound and input wound information, the treating clinicians received a notification to review the report and triage as appropriate. All patients using the TA app were invited after discharge, via a follow-up call from an investigator, to participate in a brief (15-minute) semi-structured interview.
The interviews were conducted via telephone. Interviews were audio-recorded, professionally transcribed and deidentified. | Data analysis All data were entered and analysed using IBM SPSS Statistics V24 (IBM Corp, Armonk, NY). Convenience sampling was used for patients, with no a priori sample size calculation. Patient demographic and clinical data were analysed descriptively using frequencies and percentages, means and standard deviations (SD), and compared using independent t-tests for continuous data and chi-square tests of independence for categorical data. Chi-square tests of independence were conducted to examine whether there was a statistically significant difference between cohorts in wound documentation practice: pain, exudate, odour, size, wound care plan, wound assessment schedule and types of dressing applied. Wound percentage decrease could not be compared between groups as this outcome was not measured in the standard care approach. The first and the last wound area measurements were used to calculate the percentage change in area. Clinician satisfaction with the TA app was calculated by adding the scores of each question of the survey and then calculating the mean of each category. A higher score reflected greater satisfaction. Transcripts of clinician focus groups and patient individual interviews were uploaded into NVivo 11 software for analysis. Thematic analysis 28 was conducted in six steps: familiarisation with data; generation of initial codes; search for themes; review of themes; definition and naming of themes; and preparation of a written report. Data analysis was undertaken by four team members (M.B.J., F.C., A.F., S.R.) independently to ensure rigour. The potential travel time of patients was estimated using Google Maps, with each patient's address and the hospital's address used to calculate the travel time. The time of travel was set at 12 PM. The mean travel time was calculated using the minimum and the maximum travel time provided by Google Maps. The average fuel cost of 122.6 c/L over the period of the study was used to calculate the cost of travel. | Ethical consideration The study was conducted in accordance with the National Health and Medical Research Council's (NHMRC) National Statement on Ethical Conduct in Human Research. Ethics approval was obtained from the Ethics Review Committee (RPAH Zone) X19-0307 and 2019/ETH12459. Patient participants who used the app were invited to participate in an interview, and clinician participants were invited to participate in focus groups. Written, signed consent was obtained from both patient and clinician participants. Confidentiality was ensured by assigning each participant a unique identifier and de-identifying data when transferred into a secure database. Re-identifiable coded data were stored on our secure online REDCap database, accessible only by study investigators. Clinicians anonymously completed an electronic satisfaction survey, where the completed return of the survey signified consent. | Demographic and baseline characteristics A total of 290 patients were enrolled in this study: 166 patients in the standard group and 124 in the intervention group (Figure 5). The TA app clinician interface was trialled by 13 clinicians on 124 patients with 184 wounds, and the outcomes were compared with a standard group of 166 patients with 243 wounds who received standard care.
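Two of the calculations described in the data-analysis section above can be sketched briefly; the 122.6 c/L fuel price comes from the text, while the fuel-consumption figure and the example inputs are assumptions for illustration:

def area_change_pct(first_area: float, last_area: float) -> float:
    """Percentage reduction in wound area (first vs last measurement);
    positive values mean improvement."""
    return (first_area - last_area) / first_area * 100

def travel_fuel_cost(round_trip_km: float, visits: int,
                     litres_per_100km: float = 8.0,   # assumed consumption
                     cents_per_litre: float = 122.6) -> float:
    """Estimated fuel cost in dollars for the avoided clinic visits."""
    litres = round_trip_km * visits * litres_per_100km / 100
    return litres * cents_per_litre / 100

# Example: 12.5 cm^2 shrinking to 5.75 cm^2 gives a 54% reduction, of the
# same order as the reported 53.99% mean; and ten avoided 30 km round trips.
print(area_change_pct(12.5, 5.75))
print(travel_fuel_cost(30, visits=10))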
There was no statistically significant difference between the two groups by age, gender or study setting. The standard care group had statistically significantly higher proportions of wound types such as blister/abrasion/skin tear and a lower proportion of ulcers than the intervention group (based on adjusted standardised residuals >2). The demographic and clinical characteristics of both groups are presented in Table 1. Of the 124 participants in the intervention group, 13 trialled the patient interface, which provided data to calculate the potential travel-related time and cost.

3.2 | Feasibility, usability and acceptability of the TA app

| Clinician survey results

The survey was distributed to the 13 participant clinicians, among whom 10 completed it, yielding a response rate of 77%. Most clinicians strongly agreed that the TA app was easy to learn and to use (Table 2). The highest mean item score was for overall perceived value (8.44) and the lowest was for ease of use (6.92).

| Clinician focus groups and patient interviews

Data were collected from four patients individually and 13 clinicians in five focus groups. Table 3 gives an overview of participating clinicians' characteristics. Two major themes emerged from the data: 'Connecting treatment and continuity of care' and 'Engaging with a new technology'.

| Connecting treatment and continuity of care

Patients were unanimous in their belief that the TA app provided benefits to their wound healing and communication with clinicians. The importance of clinician engagement and continuity of care from the hospital to the community was highlighted: So, we took weekly pictures of it, or whenever I would get it dressed to get a new dressing done. And I found that good, I found it good for like, seeing the progress of it or seeing, like how far the wound had progressed or whatever, in terms of healing. (Patient participant 1) So it gave me a level of comfort that it's improving, it's improving by a certain percentage. (Patient participant 4) From the clinician's and the patient's perspective, the use of the TA app increased patients' engagement in their own care and their security in knowing that they were in constant contact with their provider: Every time we change the dressing, the patient will ask, 'Can I have a look what it looks like now?'. (Focus group 2, RN 4) I mean, looking at the patient side of things, it really has made a difference in terms of … The importance of communication and continuity of care was reinforced by all clinicians. Clinicians indicated that using the TA app in practice could improve communication between clinicians, reduce discomfort for the patient and allow the wound to be viewed remotely by the doctor, which reduced the unnecessary early removal of the dressing: on the app, everything is pretty much there … (Focus group 2, RN 4) So, as a doctor using the app, I found it overall useful, a useful way of collecting good data and documenting wounds and adding good photographic documentation as well. (Focus group 5, Doctor 2) Virtual communication and seeking expert advice were other advantages identified by clinician participants. They suggested that these enhanced efficiency, communication and continuity of care for patients: It will connect us with community nurses, with GP services and keep us connected with patients if they fall through the loops.
For example, we get patients that come here [in hospital], and then, if they discharge and we can't keep track of them, it will help us keep connected with continuing of care with their treatment. (Focus group 3, RN 7) One clinician participant suggested that the TA app offered health care providers the ability to manage wound-related liability and disputes by recording an objective wound measurement on discharging or transferring the patient to another facility. At the moment, we would often send these patients off to other facilities … sometimes, because the documentation was so poor, we do often get a few calls back saying, 'Well, this wasn't here before. It wasn't as extensive' … it could be quite useful, particularly if the patient is going on to other facilities … I think it provides us with a sort of evidence of what we've seen as well. (Focus Group 4, RN 8) Clinicians also reported that using the TA app increased patient adherence to the wound care plan, which facilitated wound healing: Yeah, especially with [patient's name] … before, she refused to reposition. … And then, when we showed her how bad it was and then the progression … Since then, she repositioned herself. (Focus group 2, RN 4)

| Engaging with a new technology

Engaging with new technology can be challenging. Given that clinicians and patients were introduced to this new technology at similar times, it was unsurprising that they gave positive feedback as well as identified some problem areas. Environmental considerations, both for clinicians and patients, need to be explored and understood when engaging with a new technology. Several patient participants commented on the advantages of being in their own home environment, regarding this as a positive element of their recovery: It was easy for me to do that from home and for my wife to actually do the wound itself … and for us to send those photos … it was very convenient for us, otherwise we would have … The advantages included ease of movement, the opportunity to adhere to their normal meal-time routines and the ability to have better sleep at home. Most patient participants also believed that using the TA app enabled them to manage their own wound care under supervision, which was convenient and saved them travel time. However, clinician participants noted difficulties in the community, such as inadequate lighting that inhibited them from obtaining a good-quality photograph: We have no control over the dark dingy room or the over-brightly lit room and trying to reposition people in the right spot and the frustrations that that can develop into. (Focus group 1, RN 1). They also mentioned problems with the correct positioning of a patient if the wound was in a 'hard-to-reach' area. Similarly, patients had difficulties photographing their own wound, depending on the part of the anatomy: The clientele that we have are quite elderly, so not a lot of them are tech savvy and depending on where their wound is, they might not be able to get a picture of it themselves or have someone be there to take it. (Focus group 1, RN 2) Another environmental consideration that several hospital clinician participants highlighted was infection control. Problems were identified with infection control practices and picture clarity when using personal phones; with covering phones with a plastic sleeve for protection; and with the need to take a photo in a room where a patient had an infectious disease: I try to use the plastic before the sleeve, but it doesn't give you a clear photo.
(Focus group 2, RN 5) I think the other biggest thing for me was infection control, so having the phone in an infectious room as well is another challenge. (Focus group 4, RN 9) Participants were found to have different expectations, experiences and needs when using the TA app. Patients found the application easy to use, taking little time except when uploading a photo, which was noted to be a bit slow. For clinicians, the process of logging in and inputting patient information was described as, at times, time-consuming: I find that it's quite time-consuming to read the information. Not that it's not good, it's fantastic, but I don't know how well people sit to do that at the point of call. (Focus group 1, RN 3) It was quite an easy app to actually use in terms of functionality, but it was a bit time-consuming and then a couple of things also make it a bit … Even just logging in takes a good minute or so because you've got to put in email whatever, and then, it logs out. (Focus group 5, Doctor 2) Clinicians agreed that the TA app was useful for clinical assessment and for the real-time tracking and monitoring of wounds; however, participants often did not use the clinical decision support provided by the TA app algorithm because they felt they already had access to expert advice from doctors and wound care specialists. Several clinician participants suggested that the use of the clinical decision support component of the TA app could be beneficial for inexperienced nurses, including new graduate nurses: There's a lot of new grads in the hospitals, and it [TA] also can help with that. If they haven't got the support, this app also can give you recommendations of what to put on the dressings. It then analyses the wound and can give the nurse the option of dressings to put on that wound. (Focus group 3, RN 7) They indicated that it could also improve wound management, given that effective care could be initiated immediately rather than waiting for advice from the Wound Clinical Nurse Consultant: Yes, because it [TA app] gives you advice and recommendations on what sort of dressing. At least something's being done, just putting a simple dressing … because in that short 2 days, there's a lot of things that can happen in that wound. (Focus group 2, RN 4)

| Reduction in wound size

Fifty-two patients (42%) in the intervention group had multiple wounds. In this group, the size of 132 wounds was measured more than once using the app; 101 of the 132 wounds improved over an average of 36.47 days (SD 41.76), with a mean reduction in wound size of 54.0% (SD 31.60). The breakdown of the wound size percentage change by wound type is presented in Table 4.

| Completeness of wound-related documentation

The use of the TA app by clinicians significantly increased the completeness of documentation compared with standard care (Table 5). Completeness of the documentation was based on the number of dressing changes. In particular, the recording of wound size increased from an 8.3% completion rate in the standard care group to a 100% completion rate in the intervention group, and that for exudate increased from 31.9% in the standard care group to 87.2% with the use of the TA app.

| Potential travel-related time avoided

Out of 13 patients who used the TA app, travel-related data were collected on 12 patients. Nine patients used the TA app patient interface from their home in the Sydney metropolitan area.

| DISCUSSION

This study demonstrated a significant improvement in the completeness of wound documentation.
This has important clinical implications because it demonstrates improvements in objective and quantitative wound information and consistency of documented care using the TA app. Currently, in our health service, clinicians document their wound assessments in the progress notes section of the eMR, a variable and often inconsistent approach to documentation. The TA app provided standardised key information for all wounds that links to the patient's eMR. In addition, this information was available across both community and inpatient settings. The TA app performed well on measures of functionality and user experience. Clinicians strongly agreed that the TA app was easy to learn, particularly with respect to assessment, clinical tracking and monitoring of the wound. This finding is consistent with several previous studies testing wound applications to improve wound assessment and monitoring. [29][30][31][32] Currently, across many health services in Australia, there are no databases or systems that allow for monitoring, tracking or benchmarking of wounds. We found the TA app facilitated clinical workflows and improved continuity of patient care and clinician communication. Accurate wound documentation is an essential part of handover and is important for effective wound management. 33,34 Similar to our study, Klinker, Wiesche and Krcmar (2020) found that their wound app prototype had the potential for improving the wound documentation process. 29 Their smart glass wound care app allowed for rapid, objective, hands-free wound documentation. In our study, clinicians reported that the TA app provided an instantaneous objective wound size measurement, which gave a reliable and effective record of the patient's wound and was a powerful tool that enabled them to see wound progression or deterioration. Shamloul, Ghias and Khachemoune (2019), in their review on digital technologies, concluded that mobile and digital applications potentially lead to prompt, easy and accurate wound assessment for both the clinician and the patient. Accurate and easy assessment facilitates effective wound management, which improves patient wound outcomes. 30 Objective wound documentation also facilitates patient continuity of care, as reported by our clinician participants in this study. Clinicians and patients in this study valued the TA app's provision of real-time information, its ease of use and its efficiency in wound assessment and image capture. This enabled patients to view the progress of their wound instantaneously after a photo was taken. Clinicians reported that, because of this, patients were more engaged in their care and were more adherent to the wound management regime. Findings from a systematic review on engaging patients to improve quality of care suggest that patients feel empowered by being involved in their care and that it increases their self-esteem. 35 Further, a recent consensus document by the World Union of Wound Healing Societies on patient engagement and wound care indicated that involving patients in their wound care can positively influence wound healing. 36 Consistent with previous work on virtual health care platforms, [37][38][39] our study demonstrated that the TA app enabled patient wound care through virtual connection to a clinician. Strong patient satisfaction was reported through interviews. Patients were excited to use the TA app and commented on the advantages of receiving care in their own home environment, regarding this as a positive element of their recovery.
Further, patients commented on feeling safe, as though the provider was within arm's length should wound help be required. The clinicians perceived that the TA app made it easier to communicate with the patient about their wound care and that the patient did not have to travel to receive the same level of care. Virtual patient care facilitates prompt treatment and has the potential to prevent negative patient outcomes. [37][38][39] When interventions are implemented in complex settings, such as a health service setting, facilitators and barriers are often encountered, which can directly affect patient health outcomes. [40][41][42] In this instance, barriers to wound care can potentially lead to wound breakdown or prolonged wound healing. Despite the barriers and facilitators that were reported by clinicians and patients in this study, the intervention (the TA app) did not have any negative impact on patient outcomes or on wound healing. In fact, in this study, the app potentially enabled patients to continue to be treated effectively despite outpatient appointments having to be cancelled because of COVID-19. Further, patients using the app reported the benefits of the app in facilitating their wound healing. Patients' wounds continued to heal, reducing in size by over half (54%) on average. The approach taken in this study, namely careful planning and broad collaboration with key stakeholders, reduced unintended negative effects and promoted positive patient experiences. Although it was not the focus of this paper, the time and cost saved by the patient need to be noted, especially in the context of prompt access to care. Studies have reported that if a patient cannot access timely wound care owing to costs and delays, it can lead to wound breakdown or prolonged healing. 43,44 In this study, virtual care via the TA app led to travel cost savings for patients. Patients also reported the benefits of not having to travel and being able to communicate with wound care nurse specialists, which made care very convenient. Last, the TA app facilitated wound care during COVID-19. Most projects in our health service ceased because of COVID-19 priorities. However, telehealth and telemedicine consults increased during the COVID-19 pandemic. The TA app during this period allowed for virtual wound care and reduced face-to-face visits between staff and patients, as well as clinician face-to-face consultations on wound care. The process of setting up the TA app in our health service so that it was more appropriate for our population was an iterative as well as a positive process between our health service and information technology teams and the TA team in the United States. The responsiveness to clinician experience in the early part of the implementation was critical to the success of the implementation of the app. The use of the TA app during COVID-19 has led to the establishment of the Virtual Wound Care Command Centre and further research into this area, expanding on the work of this study. The Virtual Wound Care Command Centre research will investigate the benefits of having a centralised specialist wound care service using a digital wound application for remote, timely intervention for patients in the community, to ensure they receive world-class wound care irrespective of their physical location. This study has acknowledged limitations. The purposive sampling of 13 clinicians in this study introduces a degree of bias and limits the generalisability of the results.
Since we recruited patients from four health settings in one metropolitan health service, future studies involving the recruitment of clinicians and patients from other settings, such as residential aged care facilities and general practice, may help determine whether similar benefits are observed in different specialties and expand on this study's findings. The use of a historical control (the standard care group) as a non-randomised comparator in our study may have introduced bias and increased the type 1 error rate. However, we matched study design and demographic parameters to incorporate the historical data in a supplementary manner, reducing the number of concurrent control subjects required and the recruitment burden. The retrospective review of medical record notes for patients in the standard group limits our ability to understand whether improvements in wound healing were directly related to the intervention, the TA app. However, over the past decades, with the advent of the eMR, considerable investments have been made in high-quality, curated and trusted clinical data.

| CONCLUSION

This is the first Australian study to report a wide-ranging wound care service using a digital application that provides real-time wound data with an interface for communication between the patient and the clinician, and between clinicians, in hospital, community and outpatient settings. The use of the TA app demonstrated benefits in wound care documentation and data management, clinician-patient communication, and patient travel time and costs. This study demonstrated the feasibility of the use of the TA app in acute, community and outpatient settings. The TA app facilitated remote monitoring via telemedicine for patients, thus reducing face-to-face visits and travel time. As an innovative application that supports clinicians and patients in wound care, the TA app has the potential to improve patient wound outcomes. The findings of this study will be used to guide further application in other settings and scale-up across the health service and eventually the state.
2022-02-26T06:23:41.689Z
2022-02-25T00:00:00.000
{ "year": 2022, "sha1": "0b0a83ffb7872fe071e8ef1dd6b686f74ef68f13", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/iwj.13755", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "476fac652d462b89b7aeabb98d55efe4f2dfbc19", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
55792211
pes2o/s2orc
v3-fos-license
Effect of Plant Roots on Soil Nutrient Distributions in Shanghai Urban Landscapes Twenty-seven surface soil samples were collected from four landscape sites in Shanghai, and seven soil profile samples were gathered from the two older sites for evaluation of the horizontal and vertical distribution of soil properties, to reveal their relationship with plant roots. Results indicated that urban soil had significant heterogeneity. Soil total nitrogen was significantly correlated with organic matter, and total potassium was more abundant than total phosphorus. The available contents of iron, manganese, zinc and copper were higher than the standards for plant growth established by Soltanpour. pH and electrical conductivity increased with increasing soil depth, possibly due to leaching, while nutrients limiting plant growth such as nitrogen, phosphorus, potassium, iron, copper and zinc had shallower distributions due to absorption by plant roots. However, with increasing soil depth, the contents of magnesium, sodium, sulfur and chloride increased due to leaching and bio-cycling, which was further shown by the correlation analysis.

Introduction

Urban soils are the basis of landscape planting and have a great effect on plant growth. Without desirable concentrations of appropriate nutrients, plant growth is adversely affected. Moreover, since urban landscape soils are generally recognized as being highly disturbed and heterogeneous, many soils have systematic patterns and differ markedly, even in the same area. Therefore, many studies have been conducted on the physical and chemical properties of urban green space soils in major cities of China [1]. Lu et al. claimed that urban green space soils in Shenzhen had characteristics of sandy loam and light loam soil texture, high bulk density, low porosity and low cation exchange capacity [2]. Soil in Hong Kong had poor structure and fertility [3]. Bian et al. showed that nutrients of urban park soils were highly heterogeneous in Shenyang [4]. Degradation of soil structure and nutrient deficiency in green spaces occurred in Chongqing [5]. However, these studies mainly focused on soil macro-nutrient elements. Micro-nutrients and secondary elements are normally required in minute quantities to ensure normal plant growth and the formation of flowers because they are mostly associated with the enzymatic systems of plants [6]. Furthermore, the total levels of nutrients are poor indicators of their actual bioavailability to plants. The available state of elements, including macro-nutrient, secondary and micro-nutrient elements, is a more valuable indicator to sustain and support plant growth.

There have been numerous studies on the horizontal or vertical distribution of soil nutrients; however, traditional sampling methods with fixed interval depths of 20 or 10 cm have generally been applied to test the vertical distribution of soil physical and chemical properties, which did not conform to the distribution of plant roots and ignored the soil between the sampling positions. Consequently, any correlation between plant roots and soil nutrients could not be accurately determined [7]. As the underground organ of terrestrial plants, roots are indispensable for plant survival. Root systems hold the plant upright and absorb water and nutrients for plant growth and development [8]. Unfortunately, there have been very few studies on the relationship between bio-available elements and plant growth, especially the relationships between soil properties and plant roots.
Soil-amending and soil fertility practices such as plant cover systems and organic and inorganic inputs strongly influence all soil components [9]. Optimal soil properties for plant growth vary for various urban landscape species. Therefore, in this paper, four typical green areas in Shanghai (Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden), built in different years and located in different areas, were selected as sampling zones to study the distribution of soil pH, EC, Cl, organic matter (OM), total nitrogen (TN), available phosphorus (P), available potassium (K), available iron (Fe), available manganese (Mn), available zinc (Zn), available copper (Cu), available magnesium (Mg), available sodium (Na) and available sulfur (S). Soil sampling depth was targeted according to the distribution of plant roots within the sampling location, to study the horizontal distribution of soil nutrients in different parks, to discuss the mechanisms of vertical distribution of soil nutrients and to evaluate the effects of soil properties on plant growth.

Study Area

Shanghai is located at 121°29'E, 31°41'N, in the east of China, and has a total area of about 6341 km2. Shanghai is one of the most important cultural, commercial, financial, industrial and communication centers in China. The investigation areas included Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden. Zhongshan Park was built in 1914 in Changning District with a total area of over 21.42 ha, half of which is for landscape planting. Expo Park is green land in the city center and was built in 2010. Century Park is the biggest urban park within the inner ring road of Shanghai, situated in Pudong New District and built in 1997. Chenshan Botanical Garden, built in 2007 in Songjiang District, has a total area of 207 ha and highly diverse plant species.

Sampling and Analysis

Soil samples were collected from the four parks on 19-21 April 2011. Sampling depth was set according to the distribution of plant roots at the sampling location. Three to four depths were sampled for arbor trees and large and medium shrubs based on the distribution of the number of plant roots. Each soil sample consisted of five sub-samples which were collected from the surrounding area of each site. A total of 27 surface soil samples were collected: nine samples from Zhongshan Park, five from Expo Park, nine from Century Park and four from Chenshan Botanical Garden. In addition, a soil profile was excavated in each of Zhongshan Park and Century Park. Soil samples were gathered from the profiles according to the distribution of plant roots, and a total of seven soil samples were collected. Details of the sampling record are presented in Table 1 (in which "-" indicates that a soil profile was harvested).

The samples were taken to the laboratory, dried at ambient room temperature and ground to pass a 2-mm sieve before analysis. Half of each sieved soil sample was further ground to pass through a 1-mm mesh screen for the determination of the soil moisture coefficient, and the rest were passed through a 0.149-mm mesh and stored in clean polyethylene bags for the determination of soil OM and TN. The 2-mm soil samples were used for the saturated extraction and the determination of elements extracted by the AB-DTPA (ammonium bicarbonate-diethylenetriamine pentaacetic acid) method [10]. AB-DTPA is a common "universal soil extractant" and is also used to evaluate the bio-availability of non-essential heavy metals. It is a gentle extractant used to mimic the ability of roots to assimilate minerals.
Soil water content was determined gravimetrically after heating in an oven at 105°C for 8 h; all results are presented on an oven-dry basis. Soil OM was determined using the potassium dichromate oxidation procedure [11]. AB-DTPA extraction was employed to determine the bio-available concentrations of the elements K, Cu, Fe, Mn, Zn, Mg, S, Na and P. The analytical determinations in the extracts were made via inductively coupled plasma optical emission spectroscopy. Soil pH, EC and water-extractable chloride were measured by the saturated extraction method [10]. This method gives the best true estimate of dissolved salts in soil moisture. Soil pH was determined directly on the paste with a pH meter. Soil EC and chloride were estimated by extracting the liquid phase of the saturation paste under partial vacuum. Soil EC was measured with a conductivity meter and chloride by ion chromatography.

Quality Control

The quality of the chemical analysis was validated by repeated measurements of blanks and reference samples. Chemical analyses were repeated until a precision of ±5% and an accuracy of 95%-105% were achieved, while prepared blanks were always below instrumental detection limits.

Horizontal Distribution of Soil Properties of Urban Parks

Soil properties of the four urban parks are presented in Table 2. The soil pH in the different parks was relatively consistent, with mean pH of soils from Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden being 7.4, 7.7, 7.5 and 7.6, respectively. Moreover, the coefficients of variation (CVs) of soil pH from the different parks were <5.7%, and therefore slight alkalinity might be the main characteristic of urban park soils in Shanghai, in close agreement with the results of Yang et al. [12]. Previous studies also revealed that urban soils had a higher pH [13], with extraneous materials such as bricks and stones included in the soils being the primary causative factor [14].

The EC of soils reflects the concentration of dissolved salts in soil moisture. The EC values of the soil samples collected were within the range of 0.3-3.3 with an average of 1.0 ± 1.3 mS/cm for Zhongshan Park, 0.9-1.6 with an average of 1.2 ± 0.3 mS/cm for Expo Park, 0.4-1.9 with an average of 0.9 ± 0.6 mS/cm for Century Park and 1.1-2.8 with an average of 1.9 ± 0.8 mS/cm for Chenshan Botanical Garden (Table 2). According to the classification system established by Richards (summarized in Table 2s-1) [15], sensitive crops (e.g., bean) can only be grown without yield loss in soils with EC < 2 mS/cm, and so the EC values of all soils from Expo Park and Century Park met the standard for sensitive plant growth. However, 20.1% of EC values for soil samples from Zhongshan Park were in the range that adversely affects the growth of moderately saline-sensitive plants (2-4 mS/cm). Of the samples from Chenshan Botanical Garden, 40.5% had salinity that exceeded the standard for growth of saline-sensitive plants but met the standard for moderately sensitive plants. Moreover, the CV of EC varied greatly among the parks, indicating soil heterogeneity [1]. Therefore, plant species should be selected based on soil properties, especially when large green space areas are constructed.
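The descriptive statistics used above can be illustrated with a short Python sketch computing the coefficient of variation for a set of pH readings and interpreting an EC value against the Richards bands quoted in the text; all readings below are hypothetical, not the study's measurements.

```python
from statistics import mean, stdev

# Coefficient of variation (CV = SD / mean * 100) for hypothetical replicate
# pH readings from one park; the paper reports pH CVs below 5.7%.
ph_readings = [7.2, 7.5, 7.3, 7.6, 7.4]
cv_pct = stdev(ph_readings) / mean(ph_readings) * 100
print(f"mean pH = {mean(ph_readings):.2f}, CV = {cv_pct:.1f}%")

def richards_salinity_note(ec_ms_cm: float) -> str:
    """Interpret an EC value (mS/cm) against the Richards bands quoted in the text."""
    if ec_ms_cm < 2.0:
        return "suitable even for saline-sensitive crops (e.g., bean)"
    if ec_ms_cm <= 4.0:
        return "adversely affects moderately saline-sensitive plants"
    return "above the bands quoted in the text; salt-tolerant species only"

print(richards_salinity_note(1.9))   # mean EC reported for Chenshan Botanical Garden
```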
As the vital aggregating agent, soil OM can influence soil structural formation and maintenance. The amount of soil OM differed among the four parks (Table 2), with mean contents in the following order: Chenshan Botanical Garden > Century Park ≈ Zhongshan Park > Expo Park. Compared with the Chinese soil fertility classes (Table 2s-2), 23.8% of soil OM contents from the parks were considered extremely high, 19.0% were high, 23.8% were moderate to high, 28.6% were low to moderate and 4.8% were low. This was similar to a previous study [16]. Although green space soils are covered by vegetation, which would be expected to accumulate OM, management practices such as clearing leaves to keep sites "neat" can lower soil OM contents and lead to the loss of fertile topsoil.

For TN, 28.6% of soil samples were extremely high, 19.0% were high, 33.3% were moderate to high, 9.5% were low to moderate, 4.8% were low and 4.8% were extremely low. The TN contents of soils were significantly correlated with OM, consistent with the results of Jim [3] and Yang et al. [12].

As the two major macro-nutrients for plants, the levels of P and K were also examined. The contents of available P and available K in soils were close to those of earlier reports, which showed that K was more abundant in soil than P [17]. The available Fe, Mn, Zn and Cu of all soil samples were high compared with the established criteria of Soltanpour (Table 2s-3), for which growth is expected to be within 90% of the optimum rate for each nutrient [18]. Thus these parks had sufficient micro-nutrients Fe, Cu, Zn and Mn for plant growth. This may be due to the alluvial soil, which is the main soil type in Shanghai. In addition, urban soil contamination may also increase soil Zn and Cu contents. The bio-available concentrations of Fe and Mn increase in soil that is poorly aerated, which promotes the reduction of Fe and Mn.

The contents of available Na and S also showed large CVs. Available Na ranged from 10.8 (Zhongshan Park) to 233.1 mg/kg (Century Park), and available S from 7.2 (Zhongshan Park) to 528.0 mg/kg (Chenshan Botanical Garden). The contents of available Na and S in surface soil decreased with the increasing age of parks, which may be ascribed to leaching and the application of fertilizer; this hypothesis will be further tested in future studies.

Vertical Distribution of Soil Properties of Urban Parks

Generally speaking, the mechanisms affecting the vertical distribution of soil nutrients can be classified into at least four major processes: weathering, atmospheric deposition, leaching and biological cycling [19]. Because the effects of plant roots on the biological cycling of soil nutrients are large, sampling layers should be defined according to the distribution of plant roots at the sampling position. The vertical distribution of soil nutrients in different distribution layers of plant roots for Zhongshan Park and Century Park is presented in Figure 1. pH and EC values clearly increased with soil depth, while OM and TN contents decreased, possibly due to leaching and biological cycling. Leaching moves salt ions downward and increases salinity with increasing soil depth but, in contrast, plant litterfall and the application of organic fertilizer or organically modified materials on the soil surface can lead to a shallower distribution of OM and TN. Many studies have reported significant positive correlations between TN and OM contents [12]. This is expected since mature OM generally contains about 5% N.
Cl is an easily mobile element in soil and is not likely to constrain plant growth with adequate leaching [20]. This phenomenon was also supported in the present study (Figure 1), which showed that Cl concentrations increased with soil depth. Phillips [21] and Tyler et al. [22] also considered that Cl concentrations were associated with soil depth.

Contents of P and K tended to decrease with increased soil depth, in contrast to Cl, which might be ascribed to plant cycling and management practices. P and K are not readily mobile in soil and generally remain in the soil profile to which they have been applied. On the one hand, P and K are generally recognized as the main nutrients limiting plant growth and have shallower distributions than nutrients that are less limiting [23]. On the other hand, organic fertilizer or organically modified materials with higher P and K are usually applied to the surface soil of green spaces instead of the whole soil profile.

Fe, Mn, Cu and Zn are essential micro-nutrients for plant growth and important for gene expression and the biosynthesis of proteins [24] [25]. Available Fe, Mn, Cu and Zn contents tended to decrease with increasing soil depth, which might be controlled by plant cycling and soil pH [23] [25]. Generally, root distribution and maximum rooting depth play an important role in shaping micro-nutrient profiles [23] [26], because some nutrients absorbed by plants are transported aboveground and recycled to the soil surface by litterfall [27]. Furthermore, Fe, Mn, Cu and Zn are the most bio-available at lower pH, and their cationic forms may be changed to insoluble forms such as hydroxides and oxides in less acidic soil [28]. Therefore, increasing pH with soil depth may be another reason for the lower contents of these micro-nutrients in deeper soil. Another cause of soil acidification is the application of fertilizers, and these are applied to the soil surface [29].

The levels of Mg, Na and S tended to be low in the soil surface, in contrast with all other elements. Previous studies indicated that nutrients that are rarely required by plants (such as Na) have deeper distributions in soils [17]. Furthermore, the role of leaching is probably important for available Mg, Na and S because their contents showed an increasing trend with soil depth [23].

Relationship among Soil Chemical Properties

The relationships between nutrient elements and soil properties were studied, and the results are presented in Table 3. pH values were significantly negatively correlated with OM, TN and available Zn and Cu. The contents of OM, TN and available Cu and Zn decreased dramatically with increasing depth, while pH increased, which might be related to OM, organic residues, the application of organically modified materials and the exudation of plant roots [26]. There were also significant positive correlations between OM and TN, available P, available K, available Mn, available Zn and available Mg, which indicated the role of plant cycling. Nutrients taken up by deep roots would be transported to the soil surface, especially for plants with deeper roots [30].
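A minimal Python sketch of the Pearson correlation analysis behind Table 3, using hypothetical layer means for OM, TN and pH rather than the study's measured values:

```python
import numpy as np

# Hypothetical layer means down one profile (surface to depth) for three
# properties; the study's actual values are summarized in Tables 2 and 3.
om = np.array([34.1, 28.5, 19.7, 12.3, 8.9])   # organic matter, g/kg
tn = np.array([1.9, 1.6, 1.1, 0.7, 0.5])       # total nitrogen, g/kg
ph = np.array([7.2, 7.4, 7.5, 7.7, 7.8])       # soil pH

# Pairwise Pearson correlation matrix, analogous to Table 3.
r = np.corrcoef(np.vstack([om, tn, ph]))
print(f"r(OM, TN) = {r[0, 1]:.2f}")            # expected strongly positive
print(f"r(OM, pH) = {r[0, 2]:.2f}")            # expected negative
```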
EC values were positively and significantly correlated with the content of chloride and available Mg and S; moreover, the chloride concentration was positively correlated with available K, Mn, Zn and Na (Table 3). These results suggested that there was a significant effect of leaching on the vertical distribution of soil nutrient elements, which would deplete them from the topsoil, accumulate them in deeper layers and produce a peak at the maximum rooting depth [22].

Conclusions

Urban ecological environments can have a large impact on sustainable economic development, and can be influenced to a large extent by the amount of green space available to the public. Growth of vegetation also has a useful ecological function and is strongly affected by soil quality. Therefore, it is essential that the contents of various nutrient elements in the soils of green spaces are investigated. The horizontal and vertical distributions of soil nutrients in Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden in Shanghai confirmed that urban soils had a heterogeneous spatial distribution. The CVs of the various soil qualities except pH varied greatly between and even within parks. The mean pH of soils from Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden was 7.4, 7.7, 7.5 and 7.6, respectively, and the CVs of soil pH from the different parks were <5.7%, indicating the slightly alkaline and relatively uniform nature of these urban park soils. pH and EC values increased with increasing depth, likely due to leaching, and nutrients limiting for plants (such as N, P, K, Fe, Cu and Zn) had shallower distributions due to absorption by plant roots. However, Mg, Na, S and chloride increased with depth, possibly due to the contribution of leaching and bio-cycling.

Figure 1. Vertical distribution of soil nutrients in different layers of plant roots (hollow points: Zhongshan Park; solid points: Century Park).

Table 2. Properties of soils collected from Zhongshan Park, Expo Park, Century Park and Chenshan Botanical Garden.

Table 3. Pearson's correlation coefficients among soil nutrient elements and soil properties.
2018-12-12T23:27:18.162Z
2016-02-03T00:00:00.000
{ "year": 2016, "sha1": "459621e2ec594032b4fbb04e8afcd2bcad05eebd", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=63619", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "459621e2ec594032b4fbb04e8afcd2bcad05eebd", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
269059422
pes2o/s2orc
v3-fos-license
Topical Agents in Biofilm Disaggregation: A Systematic Review and Meta-Analysis Background: to evaluate the effectiveness of different topical agents in biofilm disaggregation during non-surgical periodontal therapy. Methods: the search strategy was conducted according to PRISMA 2020 on PubMed, Cochrane Library, Scopus, and Web of Science, and it was registered in PROSPERO, ID: CRD42023474232. It included studies comparing non-surgical periodontal therapy (NSPT) with and without the application of topical agents for biofilm disruption. A risk of bias analysis, a qualitative analysis, and a quantitative analysis were performed. Results: out of 1583 records, 11 articles were included: 10 randomized clinical trials and one retrospective analysis. The total number of participants considered in the 11 included articles was 386. The primary outcomes were probing pocket depth (PPD), clinical attachment level (CAL), and bleeding indices. The secondary outcomes were plaque indices, gingival recessions, and microbiological parameters. The meta-analysis revealed the following: [weighted mean difference (WMD): −0.37; 95% confidence interval (CI) (−0.62, −0.12), heterogeneity I2: 79%, statistical significance p = 0.004]. Conclusions: the meta-analysis of probing pocket depth (PPD) reduction between baseline and follow-up at 3-6 months showed a statistically significant result in favor of sulfonated phenolics gel. The scientific evidence is still limited and heterogeneous; further randomized clinical trials are required.

Introduction

According to the American Academy of Periodontology, periodontitis is a chronic disease with a multifactorial etiology. Periodontitis is associated with the presence of dysbiotic biofilm and characterized by a progressive loss of attachment, bone resorption, and the formation of periodontal pockets [1]. Periodontal disease thus leads to the destruction of both supporting soft and hard tissues and the possible loss of dental elements over time. Furthermore, recent studies found a correlation between oral biofilm and systemic inflammation, e.g., cardiovascular diseases [2].

Non-surgical periodontal therapy is currently the gold standard in periodontal treatment to reduce periodontal pathogens in order to achieve a reduction in probing pocket depth, to eliminate inflammation, and to stop disease progression [3]. Non-surgical periodontal therapy is carried out with manual instruments or mechanical ultrasonic or sonic instruments [4].

However, in specific clinical conditions, mechanical biofilm removal alone is not enough: in these cases, periodontal debridement does not achieve satisfactory clinical outcomes [5,6]. These cases may include, for example, deep pockets, furcation involvement, and anatomical areas that are difficult to access [7,8]. There is evidence in the scientific literature for adjunctive therapies aiming to improve the results that can be achieved with non-surgical mechanical periodontal therapy alone [9]. Examples of frequently used topical applications include antiseptic substances such as chlorhexidine (one of the most widely used oral antimicrobial agents, available in different formulations) and antibiotics in gels, fibers, or other formulations.
Because periodontitis is a biofilm-mediated disease, antibiotic treatment in periodontal patients is typically selected empirically or using technical methods. These approaches are directed towards establishing the levels of different periodontal pathogens in periodontal pockets to determine the appropriate antibiotic treatment. However, current methods are costly and do not consider the antibiotic susceptibility of the entire subgingival biofilm [10].

Furthermore, controversies associated with local delivery are also reported: the induction of resistant bacterial strains, the efficacy of systemic versus local drug delivery, and whether local drug delivery should function as an alternative or as an adjunct to conventional treatment [11].

Moreover, the additional use of ozone therapy, laser, and photodynamic therapy has been reported [12][13][14]. Because of their action against anaerobic bacteria, the use of these physical agents is currently being discussed in cases of post-extraction complications and in patients with chronic gingivitis, periodontitis, and periodontal abscesses [15]. Finally, topical disaggregating agents have been applied into periodontal pockets before mechanical instrumentation. Because their primary target is the bacterial matrix, their purpose is to disaggregate this matrix. Consequently, it is then easier to remove biofilm deposits with mechanical debridement [13,16].

In light of these considerations, the aim of this systematic review was to analyze and compare different topical active ingredients as additional therapy during the non-surgical treatment of periodontal disease, with specific attention to their biofilm disaggregation activity.

Materials and Methods

This systematic review and meta-analysis was conducted in accordance with the criteria presented in the latest version of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA 2020) [17]. We also report that this review is registered under number CRD42023474232 with PROSPERO, the international prospective register of systematic reviews.

Focused Question

The current review attempts to answer the following question: in patients with periodontal disease (referred to as the patient), is there sufficient and adequate evidence in the scientific literature that these topical agents (referred to as the intervention) lead to an improvement in clinical and microbiological parameters (referred to as the outcome) compared to standard periodontal therapy (referred to as the control)?

Search Strategy

An electronic search was implemented to retrieve all relevant studies. The search was carried out using the following databases: PubMed, Cochrane Library, Scopus, and Web of Science.

Publications from January 2008 to October 2023 were considered, including randomized clinical trials (RCTs) and retrospective analyses. Relevant keywords and Boolean operators (AND, OR, NOT) were used to implement the following search string: (periodontitis OR periodontal disease) AND (chemical cleansing OR adjunctive therapy OR topical treatment OR subgingival irrigation OR topical agent OR desiccant agent OR chloramine OR sodium hypochlorite OR Perisolv OR hypochlorite) NOT (antibiotic OR systemic disease OR laser OR photodynamic OR orthodontic OR mouthwash OR rinse OR systematic review OR alendronate OR simvastatin OR atorvastatin OR rosuvastatin OR toothpaste) AND (randomized controlled study OR clinical trial OR retrospective analysis).
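For illustration, a Boolean string of this structure can be assembled programmatically from term lists, which helps keep the queries consistent across databases; this is a sketch, not part of the published protocol, and the authors entered the string directly into each database interface.

```python
# Term lists copied from the search string above.
population = ["periodontitis", "periodontal disease"]
agents = ["chemical cleansing", "adjunctive therapy", "topical treatment",
          "subgingival irrigation", "topical agent", "desiccant agent",
          "chloramine", "sodium hypochlorite", "Perisolv", "hypochlorite"]
excluded = ["antibiotic", "systemic disease", "laser", "photodynamic",
            "orthodontic", "mouthwash", "rinse", "systematic review",
            "alendronate", "simvastatin", "atorvastatin", "rosuvastatin",
            "toothpaste"]
designs = ["randomized controlled study", "clinical trial", "retrospective analysis"]

def or_group(terms: list[str]) -> str:
    """Wrap a list of terms into a parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

query = (or_group(population) + " AND " + or_group(agents)
         + " NOT " + or_group(excluded) + " AND " + or_group(designs))
print(query)  # reproduces the structure of the string quoted above
```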
Screening and Selection

The search was carried out by two reviewers (VF and AP) who worked independently through the screening of titles, abstracts, and full texts of the studies obtained from the searches, applying the inclusion and exclusion criteria that were already established. Any disagreement concerning eligibility was resolved by discussion between the parties.

The studies included in the search strategy were as follows: randomized controlled clinical trials (RCTs), prospective studies, and retrospective studies. In vitro studies and studies on animals were excluded, as were topical agents with a physical action on biofilm (laser, ozone therapy). The studies that fulfilled all the inclusion criteria were processed for data extraction.

Risk of Bias Assessment

Two reviewers (VF and AP) performed the risk of bias analysis of the included studies using two different tools.

The first one was the RoB 2 tool [18] for RCTs, and the second one was the ROBINS-I tool for the retrospective analysis [19]. The scoring was based on different domains, each of which could be scored as unclear, low risk of bias, or high risk of bias. The following items were evaluated: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and others. It was decided a priori that domains 5.2 and 5.3 would not be answered because the risk of bias assessment was made at the study level and not for every outcome. A study was estimated to be at a high risk of bias if at least one domain had a high risk of bias, at an unclear risk of bias if at least one domain was unclear and none were high, and at a low risk of bias if all domains were assessed as being at low risk of bias.

Data Extraction

The reviewers extracted the details on the characteristics of the studies using an Excel spreadsheet.

The following data were collected: author, year of publication, study design, country, sample number, mean age, gender, test and control interventions, follow-up, and outcomes.

The means and standard deviations were extracted if available. Methods outlined by The Cochrane Handbook were used for imputing missing standard deviations. Imputed standard deviations were calculated for studies that provided a mean and confidence interval. If the sample size was small (<60), then the confidence intervals were converted using a value from a t distribution, obtained from the tables of the t distribution, with degrees of freedom equal to the group sample size minus 1 (n − 1). If the standard deviation (SD) could not be calculated, data were imputed from a similar study included in this review through the correlation index.

All the formulae used are represented in Figure 1.

Data Analysis

The meta-analysis was performed using Review Manager Web [20].
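The Cochrane Handbook imputation described under data extraction can be sketched in Python as follows: the SD is recovered from a mean's 95% confidence interval by dividing the half-width by a critical value (taken from the t distribution with n − 1 degrees of freedom when n < 60) and multiplying the resulting standard error by the square root of n. The inputs below are hypothetical.

```python
from math import sqrt
from scipy.stats import norm, t

def sd_from_ci(lower: float, upper: float, n: int) -> float:
    """Recover an SD from a mean's 95% CI (illustrative helper, not from the paper)."""
    # Critical value: t distribution for small samples (n < 60), normal otherwise.
    crit = t.ppf(0.975, df=n - 1) if n < 60 else norm.ppf(0.975)
    se = (upper - lower) / (2 * crit)   # CI half-width divided by the critical value
    return se * sqrt(n)                 # SD = SE * sqrt(n)

print(f"imputed SD = {sd_from_ci(0.9, 1.7, n=25):.2f}")
```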
The meta-analysis resulted in a 95% confidence interval (CI) using the inverse variance method and a random-effects model. A meta-analysis was performed on subgroups. The heterogeneity was interpreted following The Cochrane Handbook guidelines: 0-40% indicated that heterogeneity was negligible, 30-60% that it was moderate, 50-90% that it was substantial, and 75-100% that it was considerable [21].

Grading the Body of Evidence

The grade was obtained with GRADEpro GDT [22]. Two reviewers (VF and AP) rated the certainty of the evidence according to the following aspects: risk of bias, inconsistency, indirectness, imprecision, publication bias, large effect, plausible confounding, and dose-response gradient. Any disagreement between the two reviewers was resolved after additional discussion.

Search and Selection Results

The search conducted on PubMed, Cochrane Library, Scopus, and Web of Science resulted in 1583 papers. Filters concerning year of publication (2008-2024) and type of publication (article) were applied and resulted in 996 papers. After the removal of duplicates (191) and inaccessible papers (29), a total of 776 studies were screened.

The screening of the titles, abstracts, or full texts resulted, for both reviewers, in the inclusion of 11 papers. Ten of the studies were RCTs; one was a retrospective analysis. Figure 1 shows the PRISMA flow diagram.

Characteristics of Included Studies

The studies included in this systematic review were conducted in eight different countries: Italy, Romania, Switzerland, Syria, India, the Netherlands, Lithuania, and Germany.

They were published between 2015 and 2024. Ten out of eleven were randomized controlled studies, and one was a retrospective analysis of case series.

The RCTs were comparative studies performed in periodontal patients to evaluate the adjunctive use of topical agents in biofilm disaggregation during non-surgical periodontal therapy versus standard therapy.

In all the studies, the number of participants ranged from 16 to 56, with a mean age of 48.27 years and proportions of males and females of 47.43% and 52.57%, respectively.

All studies assessed parameters at baseline and at different follow-up times: eight studies performed a follow-up at 6 months, six at 3 months, and four at 12 months.

Sodium Hypochlorite

Sodium hypochlorite was applied into the periodontal pockets as a liquid solution in only one study.

One author used a gel of 0.05% NaOCl, whereas the other five studies applied a new gel composed of NaOCl plus amino acids. In this way, it was able to form chloramines, which penetrate the biofilm. Moreover, the high pH (pH = 12) of the gel enhances the disaggregation effect of the NaOCl.

Furthermore, to accelerate the healing of soft tissues, two studies used hyaluronic acid after the application of the NaOCl gel (Clean & Seal) into the periodontal pockets.

The NaOCl gel was applied before the traditional non-surgical therapy in four studies, while in two studies it was applied after the therapy (see Table 2).

Desiccant Agent

The desiccant agent is a gel with strong hygroscopic properties; in consequence, it absorbs water from the organic matrix, leading to the denaturation of the biofilm structure. This sulfonated gel was applied before the traditional non-surgical therapy in all the included studies; in this way, it became easier to remove the deposits mechanically. The gel was kept in the periodontal pockets for between 20 and 60 s (see Table 2).
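Returning to the data analysis methods, the inverse-variance random-effects pooling and the I2 heterogeneity statistic interpreted above can be sketched as follows, using the DerSimonian-Laird estimator commonly applied for this model; the study values here are hypothetical and are not the extracted data behind the reported WMD of −0.37.

```python
import numpy as np

# Per-study mean differences in PPD reduction and their standard errors
# (hypothetical values for illustration only).
effects = np.array([-0.58, -0.12, -0.45])
ses = np.array([0.10, 0.12, 0.15])

w = 1 / ses**2                                   # inverse-variance (fixed-effect) weights
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed)**2)             # Cochran's Q
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                    # DerSimonian-Laird between-study variance
i2 = max(0.0, (q - df) / q) * 100                # I2 heterogeneity statistic

w_re = 1 / (ses**2 + tau2)                       # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
half = 1.96 * np.sqrt(1 / np.sum(w_re))          # 95% CI half-width
print(f"WMD = {pooled:.2f}, 95% CI ({pooled - half:.2f}, {pooled + half:.2f}), I2 = {i2:.0f}%")
```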
Risk of Bias Assessment

An analysis of the risk of bias was carried out for all the included studies. The retrospective analysis showed an overall low risk of bias. Five out of ten RCTs showed a low risk of bias, while the other five demonstrated an unclear risk of bias.

The domains with some concerns were random sequence generation and allocation concealment, and the registration of the protocol and its concordance with the published article.

Figure 2 shows the risk of bias assessment results.

Study Outcomes Results: Qualitative Analysis

All the studies performed the analysis of PPD, CAL and BOP, with nine of them asserting an improvement in PPD and CAL from baseline to follow-up for both treatment groups. Five studies out of nine found a significant difference favoring the test group in at least one of the two indices (PPD and/or CAL).

Regarding BOP, it was found to be improved in all the included studies; seven studies out of eleven found a significant difference favoring the test intervention, while one study asserted that the control intervention provided a better reduction at 12 months.
Meta-Analysis
Eight studies were included in the quantitative analysis, and the meta-analysis was performed in two subgroups to evaluate the effect of the two different topical agents on the reduction in probing pocket depth (ΔPPD) from baseline to the follow-up at three or six months. Table 3 shows the means and SDs of the PPD change from baseline to follow-up. Figure 3 provides a summary of the meta-analysis outcomes. In almost all instances, the baseline scores were not statistically different. The plaque scores were higher in the test group in one study [29]. Moreover, the BOP was higher in the test group in one study [24], while two other studies reported a higher value in the control group [28,31].

Grading the Body of Evidence
The evidence and the strength of the recommendations were evaluated according to GRADEpro GDT. The estimated risk of bias varied from low to unclear, so reporting bias was considered not serious. Comparisons between the two topical agents indicate moderate-certainty evidence for their additional effect over non-surgical periodontal therapy alone. The application of the two topical agents can be recommended to improve PPD reduction over time.

Discussion
Non-surgical periodontal treatment aims to remove biofilm by manual, ultrasonic, or sonic instruments [4]. However, under specific clinical conditions, mechanical removal alone is not sufficient [5,6], and the persistence of periodontopathogens can lead to residual periodontal pockets [34]. It has been shown that different adjunctive therapies can reduce the need for surgical therapy, improving PPD and CAL [35]. Common antiseptic solutions are not able to directly disaggregate the biofilm, although they inhibit the formation of new bacterial plaque [36]. For this reason, recent developments in non-surgical periodontal therapy focus not only on the use of antiseptics and antibiotics targeting bacteria but also on substances able to disaggregate the biofilm [37-41].
From this assumption, the present systematic review considered alternative topical agents not based on chemical or physical functions alone, finally selecting, in the search strategy, sodium hypochlorite (NaOCl) and a sulphonated gel. A recent systematic review [37], which considered the same articles included in this paper regarding the application of NaOCl [23,24,26,28], declared that this gel could significantly improve PPD values at 6-month follow-up, while no significant difference was detected after 3 months. Similarly, the present systematic review conducted a meta-analysis on the PPD parameter as a difference (Δ) between baseline and follow-up, finding a statistically significant difference for the sulphonated gel. On the other hand, the forest plot for NaOCl yielded results lacking statistical significance, although in favor of the test group. More precisely, Bizzarro et al. [26] reported results in disagreement with three other studies: this difference could be attributed to the fact that NaOCl was used as a liquid solution instead of a gel combining NaOCl and amino acids.

In addition, it is relevant to report that two studies [25,29] implemented a new method, called "Clean & Seal", which involves the addition of hyaluronic acid, demonstrating its effectiveness in improving clinical parameters. This claim is supported by other evidence in the scientific literature of its restorative and regenerative action, resulting in a lower need for surgical therapy [38,39].

Regarding the sulphonated gel, a study by Nardi [40] showed that its use leads to an improvement in inflammatory clinical parameters at one week and at one month of follow-up in periodontal patients suffering from rheumatoid arthritis. Nevertheless, these results are presented in a case report. Zafar [42] assessed teeth scheduled for extraction and reported that the application of this gel does not appear to significantly improve the removal of calculus in the deep pockets of posterior teeth or teeth with complex morphology.

Concerning the potential of the sulphonated gel in biofilm removal, Bracke demonstrated, in a case report [43], its effectiveness in plaque disintegration, elimination of pathogenic periodontal bacteria, and prevention of the emergence of resistant microbial flora. Lauritano [44] considered a sample of 11 patients and reported that the gel reduced each of the red complex bacteria by 99% after a single application.

The results of the abovementioned studies are in line with the microbiological findings of this systematic review: Lombardo et al., at the end of the first treatment session, showed a significant reduction (p < 0.0001) for both aerobic and anaerobic bacterial species. However, it should be noted that this reduction was recorded for both intervention groups; only at six weeks was a significant reduction found in aerobic species in the test group (p = 0.02), with no reduction in the control group [31]. Furthermore, Isola et al.
demonstrated a decrease in bacteria of the red complex, in this case after 15, 30, 60, and 180 days, and such differences remained for up to a year (p < 0.001). A recent review [12] showed that non-surgical periodontal treatment associated with chlorhexidine gel did not lead to any statistically significant reduction in PPD at 3-month follow-up compared to the control group (−0.49 [−1.13, 0.14], p = 0.05). This is partly in disagreement with the findings of the present review: regarding the sulphonated gel, a statistically significant difference was found with respect to the control group (p = 0.007), while NaOCl showed no statistical significance (p = 0.10).

In terms of BOP, a significant improvement at 3 and 6 months was observed in two studies using chlorhexidine. One of these reported a bleeding index of 90% and 95% at baseline in the control and test groups, respectively; at 3 months, the respective values decreased to 21.5% and 5% (p = 0.0001). These findings are consistent with this review, as a total of six studies [21,25-28] reported a statistically significant difference in BOP reduction in favor of the test group, even if not at all follow-up intervals. According to Gegout's review [12], Khalil's study [32] reported a baseline bleeding index of 70.28% and 71.02% in the control and test groups, respectively, decreasing to 44.22% and 22.03% (p < 0.001) after 3 months. Iorio-Siciliano [24] also described decreasing bleeding rates at 6 months: in the test group, rates decreased from 85.3% to 2.2%; in the control group, from 81.6% to 7.3% (p = 0.001).

Almost all studies demonstrated a significant reduction in PPD from baseline to follow-up, and five studies reported a significant result in favor of the test group; Radulescu and Megally [23,28] showed no statistical significance at any follow-up. Referring to the most recently published reviews on the application of several topical gel agents, including antibiotics [12], the meta-analysis failed to demonstrate a significant improvement in PPD at 3 months (−0.50 [−1.20, 0.20], p = 0.16) in sites treated with SRP + metronidazole compared to SRP as monotherapy (placebo gel). In contrast, tetracyclines showed a significant improvement in PPD at 3 months (−0.51 [−0.71, −0.31], p < 0.001) compared to control.
Our meta-analysis showed statistical significance (−0.37 [−0.62, −0.12], p = 0.0004); therefore, the results are similar to the evidence found in the literature regarding tetracyclines but not metronidazole. The results of this paper are also consistent with those of a review concerning photodynamic therapy [14], where a difference in PPD was found in favor of the test group compared to the control group. In that review, statistical significance was found in five of the thirteen included studies at the 3-month follow-up. Three studies conducted a further follow-up appointment at 6 months, again showing a statistically significant difference in favor of the group treated with photodynamic therapy. Overall, these PPD changes were −1.79 [−2.26, −1.31] (p < 0.00001). Regarding the analysis of microbiological parameters, the same systematic review [14] concluded that the bactericidal efficacy of photodynamic therapy in addition to non-surgical periodontal therapy remains questionable: no statistically significant difference, or indeed any difference, between treatment groups could be found. Similarly, in the present review, only five of the eleven studies analyzed microbiological changes, with contrasting results.

Limitations
This systematic review and meta-analysis presents some limitations regarding the evidence included, as a certain degree of heterogeneity emerged from the meta-analysis. Differences between studies could be attributed to different periodontal case definitions, as well as the different stages and grades of periodontal disease considered in each trial. Moreover, a discrepancy in the range of PPD was observed. Two studies considered patients enrolled in supportive periodontal therapy, while the remaining nine did not. Furthermore, the evaluation of change in BOP or spontaneous bleeding after healing following mechanical or chemical dissolution and removal of soft and hard deposits could be affected by tobacco smoking as a possible confounding factor: not all studies included this risk factor, which is closely related to the progression of periodontal disease, among the exclusion criteria. Finally, regarding the application of the topical agents, it is relevant to note that in two studies a liquid solution was used instead of a viscous formulation.

Conclusions
Based on the evidence gathered in this systematic and meta-analytical review, it can be concluded that the application of a gel with phenol sulfonate and sulfuric acid as an adjunct to non-surgical periodontal therapy improves clinical and microbiological parameters compared to non-surgical periodontal therapy alone. Therefore, in light of the current scientific evidence and the related limitations of the meta-analysis, the qualitative analysis found that the two active ingredients analyzed can be considered promising topical agents for the disaggregation of biofilm during non-surgical periodontal therapy.

Figure 1. PRISMA flow diagram of the search and selection process [17].
Table 1. Characteristics of the included studies.
Table 2. Characteristics of the test and control interventions.
Table 3. Mean and SD of PPD change.
An image analysis method for regionally defined cellular phenotyping of the Drosophila midgut

Summary
The intestine is divided into functionally distinct regions along the anteroposterior (A/P) axis. How the regional identity influences the function of intestinal stem cells (ISCs) and their offspring remains largely unresolved. We introduce an imaging-based method, "Linear Analysis of Midgut" (LAM), which allows quantitative, regionally defined cellular phenotyping of the whole Drosophila midgut. LAM transforms image-derived cellular data from three-dimensional midguts into a linearized representation, binning it into segments along the A/P axis. Through automated multivariate determination of regional borders, LAM allows mapping and comparison of cellular features and frequencies with subregional resolution. Through the use of LAM, we quantify the distributions of ISCs, enteroblasts, and enteroendocrine cells in a steady-state midgut, and reveal unprecedented regional heterogeneity in the ISC response to a Drosophila model of colitis. Altogether, LAM is a powerful tool for organ-wide quantitative analysis of the regional heterogeneity of midgut cells.

In brief
The intestine is divided into functionally distinct regions along its anteroposterior axis. Here, Viitanen and co-workers develop a quantitative image analysis approach to map and compare cellular phenotypes of the Drosophila midgut with subregional resolution.

MOTIVATION
Because of the combination of small size and regionalized structure, the Drosophila midgut is an ideal model for organ-wide analyses of intestinal cells, including stem cells. Although imaging of the whole midgut is feasible by using fast and affordable tile scan imaging, the downstream analysis remains challenging. Because cellular phenotypes are inherently variable, it is necessary to quantitatively analyze several replicate midguts, which requires proper alignment of the spatially resolved cellular data. Each midgut has a unique morphology, which brings about the need for time-consuming and subjective manual work to identify, align, and compare the intestinal regions of replicate samples. Because of these technical challenges, the potential of the Drosophila midgut for organ-wide analysis has not been taken to full use.

INTRODUCTION
The intestine has a critical role in regulating organismal metabolism and immunity (Miguel-Aliaga et al., 2018). These functions are dynamically modulated by environmental factors, such as nutrition and microbes. Uncovering the mechanistic basis of the underlying regulation requires tractable in vivo model systems. The Drosophila midgut, analogous to the mammalian small intestine, has proved to be a powerful model for understanding intestinal physiology (Miguel-Aliaga et al., 2018). The midgut is composed of four cell types: the absorptive enterocytes (ECs), their differentiating progenitor cells, called enteroblasts (EBs), the hormone-secreting enteroendocrine (EE) cells, and the mitotic intestinal stem cells (ISCs) (Miguel-Aliaga et al., 2018). The midgut is an adaptive regenerative organ whose cellular turnover and composition are affected by diet, sex, inflammation, age, and reproductive status (Biteau et al., 2008; Buchon et al., 2009, 2013; Hudry et al., 2016; Reiff et al., 2015). Previous studies have uncovered regulatory pathways involved in the control of intestinal homeostasis through inter- and intracellular signaling (Gervais and Bardin, 2017; Guo et al., 2016).

To perform the functions of digestion, absorption, metabolism, nutrient sensing, and signaling in a sequentially coordinated manner, the animal intestine is compartmentalized into regions along its anteroposterior (A/P) axis (Miguel-Aliaga et al., 2018; O'Brien, 2013). Moreover, human intestinal pathophysiologies, such as cancer or inflammatory disorders, often manifest in a region-specific manner (Missiaglia et al., 2014; Mowat and Agace, 2014). Therefore, the mechanisms that establish, maintain, and modulate the regionalized functions of the intestine are of high biological and medical relevance. The Drosophila midgut regions have been distinguished on the basis of anatomical characteristics, differential staining with histological dyes, and region-specific gene expression patterns (Buchon et al., 2013; Dimitriadis, 1991; Marianes and Spradling, 2013). Buchon et al.
(2013) divided the midgut into six major regions (R0 to R5), which can be distinguished on the basis of cross-intestinal anatomy. R1-R5 were further divided into 14 subregions on the basis of morphological, histological, and gene expression differences. In a parallel study, Marianes and Spradling (2013) divided the midgut into ten zones, with significant overlap with the 14 subregions defined by Buchon et al. (2013).

Molecular analyses of the intestinal cell types have given more detailed insight into midgut regionalization. Consistent with sequentially coordinated digestion and absorption, the digestive enzyme and nutrient transporter genes display strictly region-specific expression patterns in the ECs (Dutta et al., 2015). The EE cells, mediating the signaling function of the intestine, can be divided into ten subtypes displaying region-specific distribution (Guo et al., 2019). In addition to the differentiated cell types, it has also been proposed that the function of undifferentiated ISCs depends on regional identity. The ISCs display regional autonomy, i.e., their differentiated daughter cells do not cross most region boundaries (Marianes and Spradling, 2013). The ISCs in different midgut regions also display distinct morphological features as well as differential gene expression, exemplified by the finding that more than 900 genes show regional expression variation in the ISCs (Dutta et al., 2015; Marianes and Spradling, 2013). The acidic R3 region, often termed the stomach of Drosophila, contains stem cells that have been deemed quiescent in unchallenged conditions but activated in response to stressful stimuli, such as heat shock or pathogen ingestion (Strand and Micchelli, 2011). Despite the evidence strongly implying regional ISC heterogeneity, most studies on ISCs focus on one specific region (mostly R4), and the possible impact of regional identity and tissue environment on ISC regulation is largely overlooked.

Achieving representative data on the midgut requires unbiased quantitative analysis of all midgut regions. Rapid development of affordable and fast tile scan imaging has made it feasible to collect high-resolution imaging data from the whole midgut. High phenotypic variation between midguts limits the reproducibility of qualitative analysis and sets the requirement for robust quantitative analysis of replicate samples. However, achieving quantitative and regionally defined data from midgut cells has remained a major bottleneck, hampering the use of organ-wide analysis. Here we describe a widely applicable phenotyping method called LAM (Linear Analysis of Midgut) to achieve spatially defined quantitative data on midgut cells.
LAM transforms data from three-dimensional (3D) midgut images into one dimension by an algorithm design that couples cellular identities to a specific position along a linear representation of the midgut. This enables binning of cell-specific data along the A/P axis and joining of replicate samples into spatially resolved data matrices. The use of one-dimensional (1D) data enables automatization of the regional boundary identification, allowing accurate alignment of corresponding regions. These features enable LAM to achieve robust quantitative phenotyping of midguts with subregional resolution. To facilitate the downstream data analysis, LAM includes various options for visualization, statistical analysis, and data subsetting. A graphical user interface, user manual, and tutorial videos make LAM accessible to all researchers. As a proof of concept, we use LAM to quantitatively analyze regional distributions of ISCs, EBs, and EE cells. We also demonstrate the regional heterogeneity of the injury response to a well-established colitis model, dextran sulfate sodium (DSS) treatment. The organ-wide analysis using LAM revealed several features of the DSS-induced response, including a failure of regenerative stem cell activation in R3, a regionally discordant pattern of stem cell division and differentiation in R4 versus R5, and an increase in EE cell numbers in the posterior R4/R5 region. By making unbiased, quantitative, organ-wide analysis highly feasible, LAM is expected to open new avenues for the analysis of regional heterogeneity of midgut cells.

RESULTS

An approach for spatially defined quantitative phenotyping of the Drosophila midgut
To analyze the spatial heterogeneity of intestinal cell responses in an unbiased and reproducible manner, we developed an intestinal phenotyping approach that is automated, quantitative, and regionally defined. For imaging the nuclei of the pseudostratified midgut epithelium, fixed 4′,6-diamidino-2-phenylindole (DAPI)-stained tissues were mounted in between a coverslip and a microscope slide with 0.12-mm spacers. Flattening the intestinal tube into two epithelial layers while still separated by its lumen allowed z-stack acquisition of one layer, saving time and reducing file size (Figure 1A). As an initial step, we sought a means to reduce the tile scan stacks of nonlinear midguts into a linearized representation (Figure 1B). To this end, an algorithm that approximates the midlines of partially uncoiled midguts along their A/P axis was used. The algorithm first transformed the coordinates of nuclei belonging to the midgut into a binary image, onto which pixel erosion was applied to produce a pixel-wide skeleton. The pixels of the skeleton were then iteratively scored to produce a linear representation of the midgut (Figure 1C), which we colloquially call vectors, as they reduce the data into 1D arrays. Subsequently, any object in the 3D space, such as nuclei, and any associated characteristics could be projected and have their x:y:z coordinates reduced to a linear reference. Thereby, the projection point's normalized distance along the midline vector directly corresponds to the location along the A/P axis of the linearized midgut (Figure 1D). The vector and data were then divided into bins, the number of which can be adjusted to a desired spatial resolution.
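To make the linearization concrete, the following is a minimal sketch of the rasterize-skeletonize-project-bin pipeline described above, written with the same libraries LAM builds on (scikit-image, SciPy, Shapely; see STAR+METHODS). It is an illustration rather than LAM's actual implementation: the skeleton pixels are simply ordered by x instead of being walked with the penalty scoring of Figure 1C, and the `scale` and dilation parameters are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize
from shapely.geometry import LineString, Point

def linearize(nuclei_xy, scale=10.0, n_bins=62):
    """Project nuclei onto a skeleton-derived midline and bin along A/P."""
    # Rasterize centroids (assumed non-negative, in micrometers) to a grid.
    ij = (np.asarray(nuclei_xy) / scale).astype(int)
    img = np.zeros(ij.max(axis=0) + 3, dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True
    # Dilate and fill holes so the nuclei merge into one continuous blob.
    img = ndimage.binary_fill_holes(ndimage.binary_dilation(img, iterations=3))
    skel = skeletonize(img)                 # erode to a pixel-wide midline
    # Simplification: order skeleton pixels by x to form the midline vector.
    pix = np.argwhere(skel)
    vector = LineString(pix[np.argsort(pix[:, 0])] * scale)
    # Linear referencing: each nucleus gets a normalized [0, 1] A/P position.
    pos = np.array([vector.project(Point(p), normalized=True)
                    for p in nuclei_xy])
    bins = np.minimum((pos * n_bins).astype(int), n_bins - 1)
    return pos, np.bincount(bins, minlength=n_bins)  # counts per A/P bin
```

Binned counts from replicate midguts can then be stacked row-wise into the spatially resolved data matrices discussed next.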
Because of the linear referencing and binning of the measured cellular features, data collected from different intestines could be joined as biological replicates in a spatially relevant data matrix, where each row corresponds to the same biological location. As a result, the binning and joining of data allowed spatially defined quantitative representation and statistical analysis between sample groups. Next, we wanted to address whether the quantitative information obtained by our algorithm allowed automatic determination of the borders of midgut regions. Midgut regions are characterized by differences in enterocyte ploidy and density (Figure 2A) (Marianes and Spradling, 2013) and are separated by constrictions of the midgut radius (Buchon et al., 2013). We first separated the polyploid enteroblast and enterocyte population from diploid cells, based on nuclear area (Figure 2B). The filtered cell population was then projected onto the A/P vector along with associated data on polyploid nuclear area and nucleus-to-nucleus nearest distances. Together with midgut width computed from the projection distances of all nuclei along the vector, these data enabled multivariate mapping for the detection of borders. Because of the high variability of morphology, it was not possible to reliably detect the borders from individual midguts (data not shown). This led us to explore border detection from combined measurements of several replicate samples. As even minor variation in region proportions could produce a compounding error resulting in misalignment of border signals between samples, we sought to create more accurate bin-to-bin correspondence. Consequently, we introduced an anchoring point (AP) in the middle of the midgut, located at the border of the copper cell region (CCR) and large flat cell (LFC) region, which can be easily identified on the basis of the difference in nuclear distance (Figure 1D). Alignment of the projected polyploid areas, nearest distances, and midgut widths from several replicate midguts by using the APs revealed characteristic midgut profiles, as described by Buchon et al. (2013) (Figures 2C-2E). Although the borders of all regions are not uniform, they are characterized by sudden, localized changes in values. Therefore, we fitted a Chebyshev polynomial to the normalized data to simulate background context and subtracted it from the values as an adjustment (Figure 2F). After scoring each replicate by summing the values of its weighted variables, distinct patterns could be detected in the joined scores (Figure 2G). Smoothing and peak detection with average values of each group allowed for robust identification of four peaks corresponding to region borders B1-B4 (Figure 2H) (a minimal sketch of this detrending and peak-detection step is shown below). The midgut total length, as well as the length of the individual regions, is variable (Buchon et al., 2013). This poses a challenge for aligning corresponding regions of replicate samples. Accordingly, the utilization of a single alignment point in the middle midgut, i.e., the point where the vectors of different samples are anchored together, can lead to an imprecise alignment of the regions toward the anterior and posterior ends (Figure 2I). On the other hand, anchoring the samples from the ends will reduce the accuracy of the alignment in the middle regions (Figure 2J).
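The detrending and peak detection referenced above can be sketched as follows, assuming `score` is a per-bin variable profile averaged over replicates. This is an illustration of the idea, not LAM's code: the polynomial degree matches the fifth-degree Chebyshev fit described in STAR+METHODS, while the smoothing window and prominence threshold are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev
from scipy.signal import find_peaks

def detect_borders(score, degree=5, prominence=0.2):
    """Locate candidate region borders as peaks in a detrended profile."""
    x = np.linspace(-1, 1, len(score))
    # Fit a Chebyshev polynomial as background context and subtract it.
    background = chebyshev.chebval(x, chebyshev.chebfit(x, score, degree))
    adjusted = score - background                 # keep only local deviations
    adjusted = np.convolve(adjusted, np.ones(3) / 3, mode="same")  # smooth
    # Rescale to [0, 1] before peak detection.
    rescaled = (adjusted - adjusted.min()) / max(np.ptp(adjusted), 1e-12)
    peaks, props = find_peaks(rescaled, prominence=prominence)
    return peaks, props["prominences"]            # candidate border bins
```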
To minimize noise introduced by the variable length, we utilized region border analysis to apply several independent alignment points, resulting in a more optimal comparison of midgut regions. In this "split and combine" approach, the vectors and projected data were cut on the basis of region border detection, aligned separately, and rejoined back together (Figure 2K). Although this pipeline could lead to a slight discrepancy in bin lengths between different regions, it improved accuracy in regional comparisons between the midguts.

We have implemented the analysis tools described above into a Python package, called "Linear Analysis of Midgut" or LAM (https://github.com/hietakangas-laboratory/LAM). LAM provides various options for analyzing midgut image-derived feature data, such as object coordinates for measuring cell-to-cell distances and cell clustering (Figures 3A and 3B), object size, and object intensities in a regional manner. It also provides various options for plotting and statistical analysis between sample groups (Figure 3C). We also provide a separate tool for stitching tile images for large-scale datasets (https://github.com/hietakangas-laboratory/Stitch).

Figure 1 legend (excerpt). (B) A representative tile scan image of a DAPI (cyan)-stained midgut. After imaging and stitching of the tiles, the image is processed to exclude any features lying outside the area of interest. Subsequently, the image is analyzed for DAPI spots by, for example, the spot-detection algorithm of the Imaris software, or by StarDist segmentation. (C) Pixel selection in skeleton vector creation. The vector is a piecewise line starting from the leftmost pixels of the binary image skeleton. The vector is extended with pixel coordinates based on a scoring system that gives penalties depending on the pixel's directional change and distance. With n as the last pixel of the vector, a direction-giving line is formed based on the coordinates of n and the average coordinate of n-1 and n-2. On this line, a projection point (green circle) is created equidistant from n as the average coordinate. For each candidate pixel, distances to n (d_vector) and to the projection point (d_point) are determined, both contributing equally to the penalty. Additionally, the absolute radian change of each pixel in relation to n and the direction line is multiplied by 10 and added to the distance scores to give the full penalty. The pixel with the smallest penalty is added to the vector, and subsequently the algorithm would follow the pixel in darker gray. (D) Projection and linearization. The spots, and any accompanying data, are projected onto the vector. The vector is then binned, where the number of bins is chosen on the basis of the desired resolution. An "anchoring point" (AP) is introduced at a morphologically distinct place, such as the border between the copper cell region (CCR) and the large flat cell (LFC) region of the middle midgut (arrow).
With the chosen experimental settings, we expect the midguts to be in a gradually renewing steady state. The border-detection algorithm was used to identify regions R1-R5. To identify intestinal cell types, we used specific markers for ISCs (Delta-LacZ), EBs (Su(H)-LacZ), and EE cells (anti-Prospero) along with Esg-Gal4,UAS-GFP,tub-Gal80 ts (Esg ts ), which marks ISCs and EBs (Figures S1A and S1B) . The relative (normalized to total cell number) and total cell numbers within regions R1-R5 were calculated (Figures 4B-4D and S1C-S1F). The analysis shows clear regional variation in the proportional numbers of distinct cell types-for example, the EE cells were most concentrated in R3 ( Figure 4D). The overall regional pattern of Delta-positive ISC and Su(H)-positive EB distributions largely overlap with each other (Figures 4E and 4F). The relative number of ISCs and EBs are high in the middle, and the posterior of R4 (corresponding to R4bc) as well as in the anterior R5 (corresponding to the R5a). In R2, ISCs and EBs are most abundant in the middle of the region (corresponding to the R2b). Notably, R1 contains very low numbers of ISCs and EBs compared with the rest of the midgut ( Figures 4E and 4F). As the LAM analysis was performed at the resolution of 62 bins per midgut, we were able to identify even more fine-structured patterns of cellular distribution. For example, R3 is divided into the acid-secreting CCR and the LFC region flanked by intestinal constrictions. Plotting the polyploid EC nuclei number, area, nuclei-to-nuclei distance, and midgut width revealed typical topology of the CCR and LFC along the R3 region (Figures 4H-4L). Interestingly, the anterior side of R3, composed of the CCR, displayed high relative numbers of ISCs and EBs. However, their respective distributions within this region differed slightly: ISCs were most abundant in the middle and posterior parts of CCR, whereas EBs were primarily clustered in the anterior end of the CCR, adjacent to the R2/R3 border . This is in line with the findings that the CCR can be subdivided into molecularly distinct regions (Strand and Micchelli, 2011) and suggests the existence of localized signals directing the balance between stem cell renewal and differentiation in the CCR. In addition to ISCs and EBs, EE cells displayed specific patterns in the middle midgut ( Figures 4M and 4P). A high density of EE cells was present in a narrow stripe at the anterior CCR, as well as directly after the R3/R4 border ( Figure 4P). The latter stripe corresponded to the so-called iron cell region, which contains enterocytes highly expressing the iron storage protein Ferritin (Marianes and Spradling, 2013). Interestingly, additional In practice, the functionality can be used in determining cell densities and differences in cell dynamics. In the schematics, the colored circles indicate feature locations of different channels, and the arrows show the nearest features in the channel that is under analysis. (B) The clustering algorithm is based on finding neighbors of each feature on one channel to form ''cluster seeds.'' The seeds are then merged on the basis of shared feature IDs to form the final clusters (blue circles). In the figure, the centroid of feature number 1 falls within the cluster seed of feature 0, whereas feature 2 does not. However, as feature 2 is within the proximity of feature 1, during the merging of seeds all these numbered features are joined into one cluster. (C) Pairwise sample group comparisons in LAM. 
All groups are first analyzed alone and then compared against the control group. LAM analysis can include any number of sample groups, but each group is statistically tested only against the control group. group (n = 32). The red line is the median score of the sample group and black lines are individual samples. Although individual samples have great variation in score, grouping of samples leads to emergence of trends that can be used for peak detection. (H) Peak detection performed on a sample group's median scores (red line) shows approximate locations of border regions, as defined by value changes in multiple variables. The group's score is smoothed and rescaled to [0,1] for peak detection. The vertical red lines at peak locations show their prominence. The marked borders from left to right are B1, B2, B3, and B4. (I-K) Anchoring of midgut samples for regional alignment. (I) Midpoint anchoring. Using a single anchoring point in a distinct morphological site, such as the border between copper cells and large flat cells, results in accurate alignment close to the anchoring point but propagates error toward the distal regions. The anchoring point is a user-defined image coordinate that is projected onto the normalized [0,1] vector. The vector is then divided into a user-defined number of bins that is equal for each sample. The samples are aligned within a data matrix by assigning them to indices according to the bin of their projected anchoring point. Note the unequal alignment of the midgut ends due to varying proportions of regions, and variable lengths at either side of the anchoring point. (J) Endpoint anchoring. Aligning the samples from both ends propagates the error toward the middle of the midgut. In this method, a user-defined anchoring point is not necessary. (K) Split and combine anchoring. In this method, border peak analysis determines vector cut points. This allows splitting, realigning, and rejoining of the vectors with more accurate regional comparison of different midgut samples. Article ll OPEN ACCESS enrichments of the EE cells were observed in the distal ends of the midgut, at the border between the crop and R1, and at the border between the midgut and hindgut ( Figure 4G). Taken together, profiling of the cellular distributions along the steadystate midgut A/P axis by LAM revealed unprecedented patterns of cell organization and demonstrated the performance of LAM in quantitative analysis of subregional phenotypic features ( Figure 4Q). DSS feeding results in regional changes to midgut morphology and ISC differentiation As a further proof of concept of the functionalities in LAM, we analyzed the injury response of ISCs in a widely used colitis model, oral administration of DSS ( Figure 5A). DSS treatment has been reported to induce regenerative ISC proliferation and accumulation of Su(H)-positive enteroblasts, and there were no significant changes in numbers of Delta-or Prospero-positive cells (Amcheslavsky et al., 2009). An analysis of the morphological features of the midgut revealed that DSS feeding results in significant, region-specific changes in midgut morphology. Midgut width and length were affected in several regions, especially in R3 and R4, R3 displaying the strongest relative reduction in midgut length ( Figures 5B and 5C). Furthermore, the size and patterning of nuclei were altered in a region-specific manner ( Figure 5D). These changes somewhat compromised border detection, in particular preventing reliable detection of the first border (B1, Figure 5E). 
One of the most striking consequences of DSS feeding was the prominent reduction of R3 cell numbers ( Figures 5F and 5G). This implies that the ECs of R3 are more sensitive and/or that the R3 ISCs are not equally capable of maintaining homeostatic regeneration upon DSS treatment. In line with EC loss in R3, DSS treatment resulted in significant loss of polyploid cells, whereas the number of smaller diploid nuclei was less affected ( Figure 5H). Consistent with the notion of the stem cells' inability to divide and compensate for cell death, the number of ISC-derived GFP-marked cells was not significantly increased in the R3 region upon acute DSS treatment ( Figure 5I). As a consequence of these changes, the typical subregional R3 morphology, including differential patterning and number of ECs in the CCR and LFC region, was lost in the DSStreated flies (Figures 5J and 5K). Altogether, based on the anal-ysis of the morphological and cellular parameters, our results indicate severe sensitivity of the R3 region to acute DSS treatment concomitant with impaired stem cell activation to compensate for the cell loss. To further investigate the regional heterogeneity of ISC differentiation during DSS-induced injury, we used cell-type-specific markers for ISCs (Delta-LacZ), EBs (Su(H)-LacZ), and EE cells (anti-Prospero). Consistent with earlier findings (Amcheslavsky et al., 2009), DSS treatment led to accumulation of Su(H)-positive EBs (Figures 6A and 6B). However, the accumulation of EBs displayed region-specific differences, being most prominent in R5 and particularly low in R1, and in the anterior parts of R4 ( Figure 6B). In contrast to the previous report (Amcheslavsky et al., 2009), we detected widespread accumulation of Deltapositive ISCs, especially in R2 and R4 (Figures 6C and 6D). Notably, the regional pattern of Delta-and Su(H)-positive cells did not fully correlate. This might be explained by a regional difference in the prevalence of symmetric ISC-ISC divisions (high Delta in the R4) and asymmetric ISC-EB divisions (high Su(H) in the R5) ( Figures 6B and 6D). Interestingly, we also noticed that the nuclei of the Su(H)-positive cells were significantly larger in R5 compared with R4. This might reflect impaired regulation of the Notch signaling pathway with failure to switch off Su(H) expression in the differentiating enterocytes in the R5 region ( Figures 6A and 6E). Consistent with the low amount of stem cells in R1 during steady state, few Delta-positive cells were detected in the anterior parts of the midgut after the DSS treatment (Figure 6D). The levels of Prospero-positive EE cells remained stable upon the DSS treatment in most of the midgut area ( Figure 6F). Interestingly, however, an area ranging from posterior R4 to anterior R5 displayed significantly elevated numbers of Prospero-positive cells after the DSS treatment ( Figure 6F). In conclusion, the DSS-induced injury response displays prominent regional heterogeneity in terms of stem cell activation, division, and differentiation profiles. DISCUSSION Here, we present an approach to quantitatively study cellular phenotypes of the whole Drosophila midgut. In combination with fast tile scan imaging and efficient image feature detection Article ll OPEN ACCESS algorithms, LAM enables, for the first time, quantitative and regionally defined automated phenotyping of all cells in the whole midgut. 
LAM allows (1) coupling of cellular identities to a specific position along the A/P axis, (2) automated detection of regional boundaries, and consequently (3) quantitative and statistical analysis of cellular phenotypes along the regions of the midgut, with subregional resolution. In doing so, LAM (4) opens the path for organ-wide studies of midgut cells and eliminates the bias caused by selective analysis of a specific midgut area. Through these advances, LAM will allow the exploration of regional heterogeneity of midgut cells, including the ISCs, and will significantly increase the representativeness of midgut phenotypic data. The graphical user interface makes LAM accessible even for scientists with limited experience in computational image analysis. Variations in regional distribution of midgut cells We tested the performance of LAM by analyzing the distribution of ISCs, EBs, and EE cells in a steady-state midgut of mated young females. This analysis revealed several new features of cellular distributions, including partially overlapping clusters of Delta-and Su(H)positive cells within the CCR/R3ab subregion. In addition, EE cells were observed to cluster around the main regional boundaries, including the cardia-R1, R2-R3, R3-R4, and R5-hindgut boundaries, suggesting a common regional organizer for the specification of EE cell fate in these regions. One such signal could be the Wg signaling pathway, whose activity has been shown to localize to these regions . Notably, due to the high variation of phenotypes between individual midguts, it would have not been possible to reliably detect such features by qualitative analysis of individual midguts. This demonstrates the ability of LAM to detect variable phenotypes with high subregional resolution. The resolution of LAM is influenced by the numbers of bins, which can be freely adjusted by the user. The optimal number of bins depends on the density of input data points as well as data quality, which influences the accuracy of alignment of individual midguts. Regional heterogeneity of the injury response in a Drosophila colitis model As another proof of principle, we employed a widely used Drosophila colitis model induced by DSS feeding. Use of LAM allowed us to identify several new features of stem cell activation and differentiation not previously documented in the literature, providing new insight into the previously reported models of midgut injury response (Amcheslavsky et al., 2009;Jiang et al., 2016). We noticed a significant reduction of the total cell numbers in R3, which coincided with low activation of stem cells in R3 when Cell Reports Methods 1, 100059, September 27, 2021 9 Article ll OPEN ACCESS compared with the neighboring R2 and R4 regions. As our experiment focused on an acute 5-day DSS response, it remains possible that R3 ISCs react with slower kinetics or that R3 ISCs differentially depend on other environmental factors, such as nutrition. In fact, a previous study has demonstrated that the R3 stem cells are capable of inducing a regenerative response during a slightly longer (>1 week) DSS administration period on a diet with 20% sucrose (in contrast to 2% in our study) . Outside R3, DSS treatment led to a relatively uniform increase in Delta-positive cells, except in R1, which contains fewer Delta-positive cells to start with. Notably, our data differ from those of an earlier study (Amcheslavsky et al., 2009) showing no accumulation of Delta-positive cells upon acute DSS treatment. 
A possible technical explanation for the discrepancy is the perdurance of the b-Galactosidase protein, expressed by the Delta-LacZ reporter. Whereas the overall pattern of Su(H)-positive cells was similar to that of the Delta-positive cells, the posterior midgut displayed interesting quantitative differences. Most of R4 showed only a modest increase in Su(H)-positive cells, but an area from the posterior end of R4 to R5 displayed a very high increase in relative numbers of EBs in response to DSS. Comparison of the regional profiles of Delta-LacZ and Su(H)-LacZ reporters is consistent with the conclusion that the differentiation rates of ISCs display significant regional differences, with the ISCs of R5 being more prone to EB fate. In addition, the Su(H)-positive cells in the R5 region of the DSS-treated midguts showed enlarged nuclei compared with the EBs in other regions. Although the molecular details explaining the difference in ISC fate between R4 and R5 are as yet unknown, regional transcriptome mapping has revealed existing gene expression differences between the ISCs of these regions (Dutta et al., 2015). One candidate in regulating ISC fate in these regions is the transcription factor Snail, whose expression is relatively high in R5 ISCs. Forced expression of Snail prevented EB differentiation into ECs, leading to an accumulation of EBs (Dutta et al., 2015). Hence, it will be interesting to learn whether intrinsic differences in Snail expression, or possible region-specific extrinsic factors, underlie the region-specific differentiation patterns in the injured midgut. Concluding remarks In addition to the physiology of midgut regionality, the unbiased organ-wide analysis with LAM can improve representativeness of midgut data in general. Considering the concern of confirmation bias throughout the scientific literature, there is a risk that studies focusing on a narrow (often undefined) area of the midgut primarily record and present data from areas that give the strongest phenotypes. Considering our DSS experiment, a focused analysis of only one (sub)region would have yielded several different, and sometimes even mutually contradictory, biological conclusions, depending on the region chosen. Therefore, one should exercise caution when making generalized conclusions based on the findings of a small subset of ISCs. We propose an approach whereby the phenotypic response for a given treatment/genotype is first quantitatively analyzed and reported at the level of the whole midgut, with more detailed follow-up experiments concentrated on the specific region(s) of interest. In conclusion, we expect that the unbiased organ-wide analysis offered by LAM will allow the pursuit of more representative data and uncover the extent of tissue context-dependence of stem cell regulation as well as increasing the understanding of the physiological roles of intestinal regionalization. Limitations of study The performance of LAM is dependent on the quality of the midgut preparations, image acquisition, and image segmentation for cellular objects. Each step is to be carefully considered for successful application of LAM. In our experiments, the intestines were mounted between a microscope slide with 0.12-mm spacers and a coverslip. Images were obtained to capture half of the midgut circumference, thus assuming the cellular heterogeneity to be equal on each side. When mounted, the midgut is not always equally flat along the A/P axis. 
Special care is needed to avoid disproportional recording of the midgut circumference in different regions. To circumvent any bias from disproportional imaging, it is possible to extend the z stacks to include the full circumference of the midgut if required. Although LAM allows the recording of all imaged cells, the projection of objects to the midline vector is dimensionally restricted, and LAM does not account for orientation on the z axis. Consequently, the information on cell stratification as well as the 3D geometry of the intestinal cylinder is not used during object counting. z-axis coordinates are, however, taken into account when calculating the object distances and clustering, allowing reliable data acquisition around the intestinal circumference. The algorithm for detecting region borders is based on the morphological characteristics of the regions, such as midgut constrictions, as well as the nuclear size and distance, which were previously applied to manually map borders between physiologically distinct compartments (Buchon et al., 2013). Although the parallel use of multiple parameters increases the robustness of border detection, it remains possible that experimental conditions, such as those influencing visceral muscle function or changing the ratio between cell types (e.g., EB accumulation), might influence the border detection. As was the case with DSS treatment, a subset of borders can often still be reliably detected. Specific ad hoc solutions, such as region-specific GFP traps or elimination of EBs from the analysis (by using a marker), might be applied under such circumstances. Object segmentation is a critical step for calculating the nuclear features characteristic of different regions. The performance of the traditional object segmentation methods, such as intensity thresholding, is compromised by high cellular densities, cell-size variation, cell stratification, and intensity differences. The variation in the success of nuclear segmentation possibly hampered our attempts to reliably detect regional borders from individual midguts. We overcame this limitation by applying sample group average values to locate the borders of individual samples. Deep-learning-based nuclear segmentation algorithms, such as StarDist (Schmidt et al., 2018;Weigert et al., 2020), are likely to further improve the accuracy. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: RESOURCE AVAILABILITY Lead contact d Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Ville Hietakangas (ville.hietakangas@helsinki.fi). Materials availability d This study did not generate new unique reagents. Data and code availability d The raw image data reported in this study cannot be deposited in a public repository because of file size. To request access, contact Ville Hietakangas (ville.hietakangas@helsinki.fi). In addition, segmentation data on cell-like objects from the raw images have been deposited at IDA-database and are publicly available as of the date of publication. URL and DOI are listed in the key resources table. d All original code has been deposited at GitHub and Zenodo and is publicly available as of the date of publication. LAM is also available on Python Package Index (PyPI). URLs and DOIs are listed in the key resources table. d Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request. 
METHOD DETAILS

DSS treatment
36-50 kDa DSS was obtained from Fisher Scientific (cat no. 11424352). Staged Esg FO>UAS-GFP, Delta-LacZ and Esg FO>UAS-GFP, Su(H)-LacZ pupae were collected into vials containing holidic diet (Piper et al., 2014). After eclosion, the flies were kept on the holidic diet for 5 days at 18°C, and then transferred into vials containing 2% sucrose (w/v) in medium containing agar 0.5% (w/v), nipagin 2.4%, and propionic acid 0.7%, in water with or without 3% DSS, and then kept at +29°C for 5 days.

Microscopy and image processing
Fixed and immunostained whole midguts were mounted in between a microscope slide with 0.12-mm spacers and a coverslip, followed by tile scan imaging with the Aurox Clarity spinning disc confocal microscope from the anterior to the posterior end. To reduce the image size and scanning time, stacks of only one side of the flattened midgut epithelium were obtained. For stitching the tiles and image processing in ImageJ (Schindelin et al., 2012), we generated a Python script, "Stitch", with a graphical user interface (https://github.com/hietakangas-laboratory/Stitch). "Stitch" is a programme for stitching together a series of tiff images within a directory, utilizing the ImageJ Grid/Collection plugin (Preibisch et al., 2009), and performing stitching for multiple directories in a batch process. This programme can stitch together a series of tiff images using only a companion.ome metadata file associated with the tiff series. Alternatively, as in this article, "Stitch" can utilize the tile positions output from the microscope to perform image stitching. Full usage instructions and details are available in the "Stitch" user guide. After stitching and image processing, TIFFs were converted to Imaris (Bitplane) files, and features were obtained by the Imaris spot detection algorithm (Imaris version 9.5.1, 2019). Raw feature data, including spot surface area measurements, were exported and used as input for LAM for further analysis (see the LAM user guide for details). A Python script enabling easy export of bulk Imaris .csv files to LAM is available on GitHub (https://github.com/hietakangas-laboratory/LAM-helper-modules). The repository also contains Python scripts with a graphical user interface for exporting manually drawn midgut vectors and anchoring points in Fiji/ImageJ. Notably, LAM input is not restricted to Imaris, but accepts data from any source with at least coordinates and unique object identifiers in wide-format tables (an illustrative example of such a table is shown after this section). We have included Python code for running the deep-learning tool StarDist (Weigert et al., 2020) on 3D midgut images, in addition to several pre-trained segmentation models (https://github.com/hietakangas-laboratory/predictSD).

Methods in LAM
Data handling in LAM is performed with NumPy (Harris et al., 2020) and Pandas (McKinney, 2010), while plotting is done using matplotlib (Hunter, 2007) and Seaborn (Waskom et al., 2020). Geometric and image operations are performed with Shapely (Gillies, 2007) and Scikit-image (van der Walt et al., 2014), respectively. Statistics are calculated with scipy.stats (SciPy 1.0 Contributors et al., 2020) and statsmodels (Seabold and Perktold, 2010). The border detection additionally uses scipy.signal (SciPy 1.0 Contributors et al., 2020) for locating regions of high signal. LAM includes an easy-to-use graphical user interface (GUI) with enabling/disabling of related options, as well as a default settings file that can be edited at will to control all runs.
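As an illustration of the wide-format input mentioned above, a table of object coordinates plus a unique identifier is sufficient; extra feature columns are optional. This is a hypothetical example: the column names mimic Imaris-style exports, and the actual file names and folder layout expected by LAM are documented in the user guide.

```python
import pandas as pd

# Hypothetical per-object table: a unique ID, 3D centroid coordinates,
# and an optional extra feature column (units assumed to be micrometers).
objects = pd.DataFrame({
    "ID":         [0, 1, 2],
    "Position X": [12.4, 310.8, 955.1],
    "Position Y": [88.0, 102.5, 95.3],
    "Position Z": [4.1, 3.8, 5.0],
    "Area":       [18.2, 41.7, 23.9],
})
objects.to_csv("Position.csv", index=False)   # one file per channel/sample
```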
LAM also supports execution from the command line using a limited scope of arguments. A full description of the usage of LAM and step-by-step instructions can be found in the LAM user guide on GitHub (https://github.com/hietakangas-laboratory/LAM). LAM video tutorials are available at https://www.youtube.com/playlist?list=PLjv-8Gzxh3AynUtI3HaahU2oddMbDpgtx.

Vector creation
LAM provides two alternative methods for creating piecewise median lines, which we colloquially call vectors, for midgut images: bin-smoothing and skeletonization. The methods provided by LAM require the midguts to be horizontally oriented, but the vectors can alternatively be given as coordinate files without restrictions in orientation. An auxiliary script is provided to rotate data to horizontal orientation. Bin-smoothing of the data is performed by binning the x axis, after which the median of the nuclei coordinates is calculated for each bin. Then a piecewise line is created to connect the bin midpoints. The number of bins is a user-defined parameter to be adjusted for a suitable level of smoothing. In the skeleton vector creation option, the DAPI channel coordinate data is first converted into a binary image where each nucleus is reduced to one pixel. As a result, a binary matrix is created where pixels of nuclei are marked as one, and empty pixels as zero. The binary image is then processed with resizing, smoothing, binary dilation, and hole filling in order to produce a continuous blob (user-defined parameters; see the user guide for more details). The matrix is then subjected to skeletonization, where pixels of the image are eroded until reduced to pixel-wide structures. The vector starting point is determined as the average of the five pixel coordinates having the smallest x value. The vector is then drawn from pixel to pixel by scoring pixels within a specified range ("find distance" in the GUI) using the following penalty function (reconstructed here from the definitions below and the Figure 1C legend):

  penalty = d_vector + d_point + 10 · |Δrad|

where d_vector is the distance of the pixel to the last coordinate of the vector, and d_point is the pixel's distance to the projection point ahead of the last coordinate. The projection point is determined by adding the previous vector progression (distance and direction) to the last vector point. The final scoring component, the modulus of radians (|Δrad|), is the difference in direction between the last vector coordinate and a pixel compared to a line fitted through the last three vector coordinates. The x and y coordinates of the pixel with the smallest penalty are then added to the path of the vector, and the next pixels are scored based on these coordinates, and so on until no more pixels are found (Figure 1C) (a minimal sketch of this scoring step is shown below).

Projection and counting
All segmented image objects and their associated data, which we collectively call features, are projected to the vector using the linear referencing methods of the shapely package. To this end, each feature coordinate is assigned a value based on the normalized distance [0, 1] to its nearest coordinate point along the A/P length of the vector. The features can then be counted by dividing the vector into a user-defined number of bins of equal length. The default of 62 bins is suitable for standard analysis of midgut cell types, but if studying, e.g., sparser cell-type subpopulations, the bin number may need to be reduced to avoid the number of cells per bin skewing towards zero. In contrast, the number of bins may be increased for better resolution if the data has a sufficiently high density of cells of interest. By conserving the bin number between samples, LAM enables building of data matrices for bin-to-bin and windowed statistical comparisons. Bin-wise comparability between sample groups may be reduced by variation in region length. Consequently, LAM allows the data to be joined using the samples' APs, i.e., linear references of distinguishable points for each midgut, to maximize correspondence of regions within the data matrix. Each sample's data can be centered at a specific index position of the matrix when using individual APs. Alternatively, the vector and data can be cut, re-binned, and recombined at each segment flanked by the APs in the "split and combine" approach. To align the samples region-to-region in the "split and combine" approach, the APs can be obtained using LAM's border detection on the data.
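The pixel-scoring rule of the skeleton walk described under "Vector creation" can be sketched as follows, assuming `vector` already contains at least three ordered pixel coordinates (as NumPy arrays) and `candidates` are the skeleton pixels within the search range. This is a minimal illustration of the penalty above, not LAM's exact code.

```python
import numpy as np

def next_pixel(vector, candidates):
    """Pick the candidate pixel with the smallest penalty score."""
    n = vector[-1]
    direction = n - (vector[-2] + vector[-3]) / 2.0   # from avg of n-1, n-2
    proj_point = n + direction                        # projection point ahead
    heading = np.arctan2(direction[1], direction[0])
    best, best_penalty = None, np.inf
    for p in candidates:
        step = p - n
        d_vector = np.linalg.norm(step)               # distance to n
        d_point = np.linalg.norm(p - proj_point)      # distance to projection
        dtheta = np.arctan2(step[1], step[0]) - heading
        dtheta = abs((dtheta + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]
        penalty = d_vector + d_point + 10.0 * dtheta
        if penalty < best_penalty:
            best, best_penalty = p, penalty
    return best
```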
By conserving the bin number between samples, LAM enables the building of data matrices for bin-to-bin and windowed statistical comparisons. Bin-wise comparability between sample groups may be reduced by variation in region length. Consequently, LAM allows the data to be joined using the samples' APs, i.e. linear references of distinguishable points for each midgut, to maximize the correspondence of regions within the data matrix. Each sample's data can be centered at a specific index position of the matrix when using individual APs. Alternatively, the vector and data can be cut, re-binned, and recombined at each segment flanked by the APs in the "split and combine" approach. To align the samples region-to-region in the "split and combine" approach, the APs can be obtained using LAM's border detection on the data.

Feature-to-feature distances
LAM has the option to compute pairwise Euclidean distances between nearest features (Figure 3A). The distances can be calculated between features on one channel, e.g. DAPI, or between two channels, e.g. the distance from each Delta+ cell to the nearest Pros+ cell. The features can additionally be filtered by area, volume, or another user-defined variable. For each feature, the algorithm finds the shortest distance to another feature in the filtered dataset.

Feature clustering
LAM also includes an algorithm for cluster analysis that functions in a similar manner to the feature-to-feature distance calculations. Cell clusters in the midgut tend to take the form of either longer strands or a more spherical shape, and consequently defining the clusters by their shapes would be problematic. To overcome this, LAM takes the approach of clustering the cells by their proximity to each other (Figure 3B). For each feature, LAM first finds its neighbors within a user-defined distance inside a k-d tree constructed from the 3D coordinate data. Found features are then marked as a "cluster seed". After all seeds are found, they are merged based on shared feature identification. As a result, unique clusters with no shared features are formed. The clusters can be further filtered by a user-defined number of features, and are finally assigned unique cluster identification numbers.

Gut width measurement
LAM computes the width of each midgut along its vector. The midgut is binned into segments of equal length, and the nuclei with the largest distances to the vector are found. As the vector may not exactly follow the true center of the midgut, the handedness of the nuclei relative to the vector is determined. The average distance of the furthest decile of nuclei is calculated for both hand sides, and the width at each bin is the sum of these averages.
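The per-bin width computation described above reduces to a top-decile average on each side of the vector; a small illustrative sketch follows (our own variable names and layout, not LAM's code):

import numpy as np

def width_at_bin(signed_distances):
    # signed_distances: NumPy array of a bin's nucleus-to-vector distances,
    # signed by handedness (negative = one side, positive = the other side).
    left = -signed_distances[signed_distances < 0]
    right = signed_distances[signed_distances > 0]

    def furthest_decile_mean(d):
        if d.size == 0:
            return 0.0
        k = max(1, int(np.ceil(0.1 * d.size)))  # furthest 10% of nuclei
        return float(np.sort(d)[-k:].mean())

    # Width = sum of the two one-sided averages.
    return furthest_decile_mean(left) + furthest_decile_mean(right)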
Automatic border detection
Before running the algorithm, the nuclei area distribution is determined, and only polyploid nuclei are included in the analysis. The borders are detected based on normalized values of (i) polyploid nucleus distance to its nearest neighbor, (ii) midgut width, (iii) midgut width bin-to-bin difference, and (iv) polyploid nucleus area bin-to-bin difference (default setting variables). These variables show region-specific variation along the midgut's A/P axis, and local changes correspond to the major region borders. In order to find local changes in the variables, a fitted fifth-degree Chebyshev polynomial is subtracted from the values as a context adjustment. To this end, for each bin x in the full range of bins, {x ∈ ℕ | 0 ≤ x ≤ a}, a total score is calculated by summing the weighted (w_i) deviations of each variable's (v_{i,x}) normalized value from the fitted curve (c_{i,x}):

x_score = Σ_{i=1}^{n} w_i (v_{i,x} − c_{i,x}).

The resulting score arrays are then smoothed and rescaled to the interval [0, 1]. Peak detection is then performed on the context-adjusted group average scores to find signals corresponding to region borders. To increase resolution, the border detection algorithm is run with twice the number of bins set by the user.

QUANTIFICATION AND STATISTICAL ANALYSIS

Statistics in LAM
LAM includes pairwise statistical testing of control and sample groups (Figure 3C). LAM has two types of in-built statistical testing. Firstly, bin values of the sample group are tested against the respective bins of the control group, resulting in a representation of p-values along the A/P axis of the midgut. Secondly, total feature counts of a sample group are tested against the control group. Both tests are performed with the Mann-Whitney-Wilcoxon U test using continuity correction. In the bin-by-bin testing, false discovery rate correction for multiple testing is applied. Additionally, for the bin-by-bin testing, a sliding window option of user-defined size is available. The use of a sliding window has some advantages depending on the input data. For example, some cell types of the midgut may be spatially too sparse for bin-to-bin testing, as the cell count at each bin would be skewed towards zero. Consequently, using a sliding window to merge bins would increase the number of non-zero values in the test population, and therefore increase the strength of the statistical test.

Other statistical analysis
Statistical analyses were performed in R/Bioconductor. For parametric data, a two-sample t-test or two-way ANOVA in conjunction with Tukey's HSD test was used. For the non-parametric count data, the Wilcoxon rank-sum test with multiple testing correction (FDR < 0.05) was used.
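A minimal sketch of the bin-by-bin test described above, using the scipy.stats and statsmodels packages that LAM builds on (illustrative only, not LAM's actual code):

import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def binwise_mwu(control, sample, alpha=0.05):
    # control, sample: arrays of shape (n_midguts, n_bins) of per-bin counts.
    pvals = np.array([
        mannwhitneyu(control[:, b], sample[:, b],
                     alternative="two-sided").pvalue  # continuity-corrected
        for b in range(control.shape[1])
    ])
    # Benjamini-Hochberg false discovery rate correction across bins.
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return p_adj, reject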
2021-10-15T00:09:12.662Z
2021-07-30T00:00:00.000
{ "year": 2021, "sha1": "925f0dde41c5e70b270296f293dd4e3627fb92b1", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.crmeth.2021.100059", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "54c500e4ad80b891f0074e7984c0c998441df9f7", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
86412581
pes2o/s2orc
v3-fos-license
A review and classification of fossil didemnid ascidian spicules

This study discusses and classifies fossil didemnid ascidian spicules. Three new fossil genera and nine new fossil species are described based on spicule morphology. The genera are Bonetia gen. nov., Rigaudia gen. nov. and Monniotia gen. nov. The species are Bonetia acuta sp. nov., B. brevis sp. nov., B. quasitruncata sp. nov., B. truncata sp. nov., Rigaudia multiradiata sp. nov., R. praecisa sp. nov., Micrascidites pauciradiatus sp. nov., Monniotia acuformis sp. nov., and M. fasciculata sp. nov. Recognizing these distinctive fossil didemnid spicules in fine-grained sediments should provide useful palaeoenvironmental information and may stimulate interest in their biostratigraphy.

INTRODUCTION
Ascidians, often called sea squirts, are sessile, filter-feeding tunicates (subphylum Urochordata) which are important members of marine benthonic communities throughout shelf seas. Living ascidians have attracted widespread interest from biologists because of their evolutionary position as close relatives of the vertebrates (Plough, 1978). Although most ascidians are soft-bodied, some species secrete distinctive aragonitic spicules. Didemnid spicules are often found in slides prepared for calcareous nannofossil examination. These stellate or spherical-shaped spicules range in size between 10 and 70 μm and, more rarely, up to 125 μm. Although living didemnid ascidians are well known from the continental shelves throughout the world, fossil didemnid spicules are very rarely reported by palaeontologists and, therefore, at present are of little use as palaeoenvironmental or biostratigraphic markers. This study summarizes the present knowledge of ascidians in relation to didemnid ascidians and fossil ascidian spicules. All type material has been deposited in the Department of Palaeontology, Natural History Museum.

SCOPE OF STUDY
This study is based on observations by the authors during investigations of nannofossil assemblages from worldwide localities, ranging from Asia, the Middle East, North Africa, Europe and the Gulf of Mexico, over a period of fifteen years. Additionally, the published records of ascidian spicule distribution have been compiled.

BIOLOGY AND ECOLOGY OF ASCIDIANS
Forms belonging to the subphylum Tunicata have a body (zooid) covered by a complex tunic (from which the name tunicate is derived) containing a substance chemically almost identical with cellulose. The tunicates consist of three classes: the sessile Ascidiacea and the free-floating Thaliacea and Larvacea. Over 1300 species of Tunicata have been described, the great majority of which belong to the Ascidiacea (Barnes, 1980). The class Ascidiacea, also known as sea squirts, are sessile, mostly colonial tunicates and are common marine invertebrates worldwide. The sac-like zooid ranges in size from 1 to 10 cm. Most (c. 95%) ascidians form colonies in the shallow water of the continental shelf, where they are attached to rocks and shells, or they are occasionally fixed in mud and sand by filaments or stalks. Colonial organization varies within the class, but although the colony itself may grow to a considerable size, usually the individuals forming it are very small (see Plough, 1978, plate VII). Ascidians are filter feeders and extract plankton from water which passes through the pharynx. The water currents are drawn in through branchial slits by ciliary action. Some deep-sea ascidians obtain their food from the surrounding sediments.
Other deep-sea ascidians feed on minute animals (nematodes and epibenthic crustaceans) which are caught with lobes situated around the buccal siphon. Ascidians are hermaphroditic, each individual having male and female gonads. When the eggs are externally fertilized they hatch into free-swimming larvae similar in appearance to tadpoles. However, as the larvae mature, they usually attach themselves to some suitable object and lose their tail.

DIDEMNID ASCIDIANS
All didemnid ascidians are sessile and colonial; they are common and have a global distribution ranging from the Arctic to the Antarctic (Plough, 1978). Most are found in shallow water (0-50 m) attached to rocks, shells and other hard surfaces. They are rarely found in deep water; exceptions include Leptoclinides faroensis (1500 m) and Didemnum albidum polare (1430 m). Didemnid ascidians are depth-sensitive, and different species occupy well-defined areas on the sea bottom, conditioned by ocean currents and water temperatures. Ascidians are usually very vulnerable to prolonged freshwater influences. Heavy cyclonic rainfall regularly kills large colonies of ascidians close to the Australian coastline (P. Mather, pers. comm. in Heckel, 1973). Dead ascidian bodies may float on the surface of the sea, where wind, tides and current drift may play a part in their distribution.

DIDEMNID ASCIDIAN SPICULES
Ascidians are soft-bodied animals and, with the exception of spicules in didemnid and polycitorid ascidians, they are rarely found as fossils. Monniot (1970) described Cystodytes (Polycitoridae) from Pliocene deposits of Brittany, France. Didemnid ascidians secrete fibrous spicules which, according to Matthews (1966), are composed of aragonite with high levels of strontium (6.5%). In this study, several specimens recovered in sediments from the Red Sea (DSDP Leg 23, Site 229A) were analysed using energy dispersive X-rays (EDAX) to identify the strontium level. However, only traces of strontium (<1%) were observed in the spicules. The origin of the spicules remains unknown. Loewig & Koelliker (1846), Hardman (1886), Woodland (1907) and Prenant (1925) suggested that the spicules were developed independently of the zooid, whereas Michaelsen (1919), Pérès (1947) and Van Name (1952) indicated that the spicules were products of the zooid and originated in the lateral organs. In living ascidians, the spicules are surrounded by a double-layered membrane (Lafargue & Kniprath, 1978). Spicule formation is therefore not simply the result of physico-chemical processes. Spicule characteristics have not been used by biologists for generic-level taxonomy of living ascidians. Classification is based instead on a range of soft-body elements (see Eldredge, 1966). However, at specific level, spicules and their characteristics have been used by many authors as the primary diagnostic feature, in conjunction with characteristics of the zooids. Presence or absence of spicules, their diameter, ray count, arrangement, distribution and density of rays are usually considered as primary diagnostic criteria, in conjunction with the characteristics of the zooid. However, Berrill (1950) and Eldredge (1966) treated these criteria as of secondary importance or disregarded them, as a given species can be spicular or aspicular depending on environmental controls. Variations in spicule presence in ascidians could, however, be the result of confusing more than one species whose definitions are based only on the characteristics of the zooid.
In a given colony, spicules are usually identical. Van Name (1945) states in a discussion on Didemnum candidum that 'I am far from being able to overcome the fear that I am confusing more than one species, but after the examination of a large amount of material from various American localities I am at a loss to find a reliable basis for dividing it by studying museum specimens'. Spicule distribution and density of cover may vary widely not only within a given species, but also occasionally within the same colony. Van Name (1952) suggested that variable distribution and density occur when colonies undergo a certain amount of regression during unfavourable periods. At such times the spicules remain fixed within the tunic while the zooids degenerate, and are added to when new zooids develop. Waters rich in carbonate, particularly coral reef areas, are especially favourable for the development of didemnid colonies with large spicules with elongated and well-formed conical rays. The attachment of colonies to some rigid object also favours spicule secretion, whereas attachment to a flexible object which allows even slight bending or movement of the test frequently results in secretion of smaller types of spicules with shorter and less well-formed rays or points. Didemnid spicules secreted in polar and subpolar species are frequently only sparsely distributed in the surface layer of the test and are burr-like with poorly developed spicule rays (Van Name, 1945; Kott, 1969). Fossil didemnid ascidian spicules of varying shape have been recorded in sediments of Jurassic to Quaternary age (Boekschoten, 1981). Living didemnid ascidians are classified primarily on the characteristics of the zooid, so it is difficult to assign individual spicules to living species. Until now, therefore, Tertiary didemnid spicules which are spherical in shape have been placed in Micrascidites, and those which are disc-shaped are assigned to Neanthozoites (produced by Polycitoridae). If a direct relationship can be established between fossil spicules and living species, the information obtained from living forms may be applied to fossil spicules, assuming they have not changed their habitat with time, and provide useful palaeoenvironmental information. The occurrence of Recent didemnid-spicule-rich sediments seems to be restricted to tropical and subtropical carbonate-rich sediments. Heckel (1973) found that high concentrations of didemnid spicules (>10% of the nannoplankton fractions) occurred around the main carbonate reef areas of the Great Barrier Reef. Heckel concluded that the spatial distribution of the spicules in the sediments was controlled by selective preservation of the aragonitic spicule rays. Freshwater discharge into the basin severely reduced the preservation potential of the spicules in the sediments. When studying fossil occurrences of didemnid spicules, special attention must be taken to distinguish in situ occurrences. The durability of didemnid spicules to the processes of erosion, transport and deposition in warm waters is shown by their occurrence in turbidite deposits adjacent to carbonate-rich shelf environments (Beall & Fischer, 1963; Wei, 1993). Didemnid spicules also survive digestion and have been reported in fish guts (Rae, 1967), confirming fish predation and indicating another method of dispersal of the spicules from their sessile habitat into soft sediments. Spicules are composed of aragonite and thus are highly susceptible to dissolution.
Stieglitz (1972) recorded strongly etched didemnid spicules from Recent sediments from the Bimini Lagoon, Bahamas. Aragonite dissolution in freshwater phreatic environments in tropical areas has also been well documented (Land, 1970). The recovery of spicules in sediments therefore suggests high sedimentation rates and/or rapid sealing of sediments soon after deposition (Houghton & Jenkins, 1988). The susceptibility of ascidian spicules to dissolution and diagenesis may restrict their value as biostratigraphic indicators. However, although ascidians are mainly benthic, they also have a free-swimming (tadpole) larval phase. This initial pelagic life cycle, coupled with a reported cosmopolitan distribution of many species (Kott, 1969; Plough, 1970; Wei, 1993, described as Micrascidites vulgaris), may yet indicate that ascidian spicules have potential for interbasin correlation in well-preserved sections. Hypersaline lagoonal and shallow-water carbonate platform sequences should provide fruitful starting materials. Ascidian spicules are reported to be particularly common in faecal pellets and other carbonate mud aggregates of the Great Bahamas Bank (Purdy, 1963). Significantly, as diagenesis proceeded, only ascidian spicules were found to remain as the non-recrystallized constituents within the pellets. Didemnid spicules also occur in carbonate muds deposited in lagoons and carbonate shoals from Belize (Matthews, 1966) and in the Bimini Lagoon, Bahamas (Stieglitz, 1972). The stratigraphical and geographical distribution of ascidian spicules for the Mesozoic and Tertiary-Quaternary are given in Tables 1 and 2. The biostratigraphical potential of didemnid spicules has still to be explored, and the full stratigraphic ranges of the species are yet to be established. However, a few are already known to be moderately good zonal markers. For example, Kokia, a possible ascidian spicule, has so far been documented from Valanginian to Berriasian sediments of the North Sea area. However, van Niel (1994) suggests that Kokia specimens show a construction similar to calcareous pentalith genera in the Braarudosphaeraceae; but because of their high number of rays, Kokia should still be considered 'incertae sedis'. Didemnum minutum seems to be restricted to the Middle and Upper Jurassic and has a wide geographical distribution. High-frequency variations of spicule abundance could also be used for local biostratigraphic correlation (cf. Heckel, 1973). Wei (1993) studied tunicate spicule abundances in sediments from the Great Barrier Reef and Queensland Plateau (ODP Leg 133 sites) and concluded that tunicate spicules do not appear to be promising biostratigraphic markers for the Pliocene-Pleistocene. However, tentative ranges are recorded for the genera described here, based on published occurrences (Wei, 1993, Great Barrier Reef, described as tunicate spicules; Durand, 1948 & 1955; Chauvel, 1952; Durand & Pelhate, 1961; Bonet & Beneviste-Velasquez, 1971; Cita, 1973; Houghton & Jenkins, 1988; Deflandre & Deflandre-Rigaud, 1956; Deflandre-Rigaud, 1949, 1956 & 1968; Monniot & Buge, 1971; Heckel, 1973; Varol, 1985; Durand, 1952; Bouche, 1962; Lezaud, 1966), including: (c) Bonetia, recorded range Middle Miocene-Pleistocene; and (d) Micrascidites, recorded range Lower Eocene-Pleistocene.

MORPHOLOGY OF SPICULES
The arrangement and shape of the rays in spicules can be utilized as diagnostic features for the identification of genera.
The ratio of the length of the free part of the ray to that of the joined part, together with the number and density of rays, is used for the determination of the species. The stellate spicules are assigned to the genus Micrascidites, and the spherical spicules are assigned to the new genera Bonetia, Rigaudia and Monniotia. Spicule rays are defined as the parts into which a spicule is naturally separated or divided. Each ray therefore has two parts: the joined part (where each ray joins the next) and the free part. The joined inner parts of the rays are always conical with their apices towards the centre, whereas the free parts of the rays may be conical, truncated conical or cylindrical. In this study, four major types of ray were identified, and this led to the recognition of three new genera (Fig. 1).
1. Micrascidites-type rays are biconical or rhomboid in axial section, without an inflection between the free and joined parts of the rays.
2. Bonetia-type rays are biconical, but with a distinct inflection between the joined and free parts. The rays are widest in diameter along the joined part of the rays. The free part is always narrower and is conical or truncated conical.
3. Rigaudia-type rays resemble a sharpened pencil. Their free lengths are cylindrical, with sides parallel to the ray and a truncated peripheral end.
4. Monniotia-type rays are unequal in length, composite, and are constructed of needle-like elements forming bundle-like structures with or without a free part.
Two major types of spicule construction have been recognized in this study: (a) stellate spicules; (b) spherical spicules. Stellate spicules are composite spicules constructed of several layers of petaloid units which may be flat or concavo-convex. In these petaloid layers each spicule ray radiates from the centre of the unit. Adjacent petaloid cycles are then joined in a saddle-like manner to form a complete spicule (Fig. 2). The number of rays is reduced towards the outermost cycle. Stellate spicules are found in Micrascidites. Spherical spicules are constructed from rays which radiate from the centre of the spicule and are found in Bonetia, Rigaudia and Monniotia. All the rays in the spicules are approximately the same length. An advantage of using spicule morphology as a system of classification is that individual species may be identified even if only isolated rays are preserved rather than complete spicules.

CONCLUSIONS
Didemnid ascidian spicules should have been more widely reported in the fossil record, but have probably been overlooked by many palaeontologists because of their small size, their tendency to break up into individual spicule rays, and possible confusion as to their organic origin. Future studies on didemnid spicules should concentrate on mapping their areal distribution in sediments and also provide detailed morphological descriptions. Such studies should be particularly useful in the palaeoenvironmental analysis of fine-grained sediments and should also determine their value as biostratigraphical markers. The aragonitic spicules of didemnid ascidians are more susceptible to solution than the calcitic remains of coccoliths and foraminifera. The didemnid spicules, like aragonitic pteropods, are therefore likely to be better preserved in basins having high bottom-water temperatures, sluggish circulation, and rapid sedimentation rates, such as the Mediterranean Sea, Red Sea and the Persian Gulf.
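As a purely illustrative aid (a toy encoding of ours, not part of the original classification scheme), the genus-level decision rules above can be summarized in a few lines of Python:

def classify_genus(construction, ray_type):
    # Toy lookup following the two spicule constructions and the four ray
    # types described above; genus names are those defined in this paper.
    if construction == "stellate":
        return "Micrascidites"  # layered petaloid units
    spherical_types = {
        "inflected biconical": "Bonetia",          # inflection between parts
        "cylindrical, truncated end": "Rigaudia",  # pencil-like rays
        "composite needle bundles": "Monniotia",
    }
    return spherical_types.get(ray_type, "indeterminate")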
Diagnosis. Spherical spicules with conical or truncated conical rays which have a greater diameter in the joined part of the ray than the free part (Bonetia-type rays). These rays radiate from the centre of the spicule. Different species of this genus are distinguished by the number of rays, the shape of the free part, and the ratio of the length of the free part to that of the joined part of the rays.
Remarks. Bonetia differs from Micrascidites by having rays which are greater in diameter along the joined part of the ray, whilst the latter has rhomboidal rays. Bonetia is a spherical spicule, whereas Micrascidites is a stellate spicule. Other spherical spicules, Rigaudia and Monniotia,
Holotype. Pl. 2, fig. 5 (NF514/Neg. 6)
Type level and locality. Late Pleistocene (Zone NN21), Red Sea (DSDP Leg 23, Site 229A, Core 2, Section 5, 40-45 cm).
Dimensions of holotype. Diameter of spicule = 21.0 μm; length of the free part of the rays = 2.5 μm.
Remarks. B. quasitruncata differs from B. truncata by having rays with distinct truncated cone-shaped free parts, whereas the latter has a higher number of rays that are practically without a free part.
Type level and locality. Late Pleistocene (Zone NN21), Red Sea (DSDP Leg 23, Site 229A, Core 2, Section 5, depth 40-45 cm).
Dimensions of holotype. Diameter of spicule = 19.5 μm; maximum diameter of rays = 1.7 μm.
Remarks. R. multiradiata differs from R. praecisa by having a greater number (70-90) of thinner and longer rays than the latter, which has 30-45 thicker and shorter rays.
Occurrence. Pleistocene sediments of the Red Sea and the Gulf of Aden. Also recorded by Edwards (1973)
Diagnosis. A species of Micrascidites having three to five large rhomboidal rays in which the length of the free part is at least five times, and the maximum diameter at least two times, greater than the length of the joined part.
Holotype. Pl. 5, fig. 11 (NF514/Neg. 11).
Type level and locality. Late Pleistocene (Zone NN21), Red Sea (DSDP Leg 23, Site 229A, Core 2, Section 5, depth 40-45 cm).
Dimensions of holotype. Diameter of spicule = 21.5 μm; maximum length of rays = 11.0 μm; maximum width = 5.0 μm; maximum length of joined part = 2.0 μm.
Micrascidites vulgaris Deflandre & Deflandre-Rigaud, 1956 (Pl. 1, figs 1-4; Pl. 3, fig. 1; Pl. 5, figs 5-9)
Remarks. The number of rays varies between 8 and 30. The free parts of the rays are conical with bluntly pointed peripheral ends. Further subdivision of this species may be possible using the number of rays and the ratio of the length of the free part to that of the joined part.
Occurrence. This species has been reported from Eocene to Quaternary sediments (Table 2). In the present study it is also recorded from deposits of similar age worldwide.
2018-12-17T23:10:15.785Z
1996-10-01T00:00:00.000
{ "year": 1996, "sha1": "37a3628bd5362ef2cea7b32e87130b9611414dca", "oa_license": "CCBY", "oa_url": "https://www.j-micropalaeontol.net/15/135/1996/jm-15-135-1996.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "37a3628bd5362ef2cea7b32e87130b9611414dca", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Biology" ] }
247312662
pes2o/s2orc
v3-fos-license
Prediction of Early Response to Immunotherapy: DCE-US as a New Biomarker

Simple Summary
Immune checkpoint inhibitors (ICI) have revolutionized cancer care. However, assessing the efficacy of these new molecules with targeted therapeutic responses may induce too much delay when using classical biomarkers derived from morphological imaging (CT). The objective of our study is to propose fast, cost-effective, convenient, and effective biomarkers using the perfusion parameters from dynamic contrast-enhanced ultrasound (DCE-US) for the evaluation of ICI early response. In a population of 63 patients with metastatic cancer eligible for immunotherapy, we demonstrate that a decrease of more than 45% in the area under the perfusion curve (AUC) between baseline and day 21 is significantly associated with better overall survival. Thus, AUC from DCE-US looks to be a promising new biomarker for the early evaluation of response to immunotherapy.

Abstract
Purpose: The objective of our study is to propose fast, cost-effective, convenient, and effective biomarkers using the perfusion parameters from dynamic contrast-enhanced ultrasound (DCE-US) for the evaluation of immune checkpoint inhibitor (ICI) early response. Methods: The retrospective cohort used in this study included 63 patients with metastatic cancer eligible for immunotherapy. DCE-US was performed at baseline, day 8 (D8), and day 21 (D21) after treatment onset. A tumor perfusion curve was modeled on these three dates, and change in the seven perfusion parameters was measured between baseline, D8, and D21. These perfusion parameters were studied to show the impact of their variation on overall survival (OS). Results: After the removal of missing or suboptimal DCE-US, the Baseline-D8, the Baseline-D21, and the D8-D21 groups included 37, 53, and 33 patients, respectively. A decrease of more than 45% in the area under the perfusion curve (AUC) between baseline and D21 was significantly associated with better OS (p = 0.0114). A decrease of any amount in the AUC between D8 and D21 was also significantly associated with better OS (p = 0.0370). Conclusion: AUC from DCE-US looks to be a promising new biomarker for fast, effective, and convenient immunotherapy response evaluation.

Introduction
Immune checkpoint inhibitor (ICI) immunotherapies using monoclonal antibodies antagonizing T-cell co-inhibition receptors have been the major revolution of the last few years in anti-cancer treatment. By blocking the immune checkpoints used by the tumor cells to create an immunosuppressive tumor microenvironment, ICIs enhance the antitumor immune response [1]. Since the first FDA approval of an ICI (ipilimumab for the treatment of advanced melanoma, both in pre-treated and chemotherapy-naïve patients, in March 2011 [2]), many more therapeutic extensions and molecules have been approved [3]. The effectiveness of ICIs in metastatic cancer is no longer a question in terms of survival gain or sustainable response compared to chemotherapy. However, except for melanoma and Hodgkin lymphoma, which show an excellent response rate (>50%) [4,5], only a subset of patients exhibits a good response with immunotherapy.
We cite the following as examples: for advanced-stage hepatocellular carcinoma, the response rates for nivolumab and pembrolizumab were 19.7% and 20.7%, respectively [6]; in advanced-stage small cell lung cancer, characterized by its aggressiveness and early diffusion of metastases, the response rate for pembrolizumab in a phase II trial was 19.3% [7]; for advanced squamous-cell non-small cell lung cancer, the response rate for nivolumab in a phase III study was 20% [8]; for metastatic DNA mismatch repair-deficient/microsatellite instability-high colorectal cancer, which also has a poor prognosis following conventional chemotherapy, the response rate for nivolumab was 31.1% in a phase II trial [9]. This highlights the need to rapidly evaluate the efficacy of immunotherapy to avoid wasting valuable time and resources, since the majority of patients will not respond to these expensive molecules. Furthermore, the use of these new molecules has been associated with unconventional response patterns, such as pseudo-progression, which is defined as an objective response following initial progression with the same treatment. Pseudo-progression has been reported with an incidence rate of up to 10% [10]. To deal with these new patterns of response, the usual criteria for evaluating chemotherapy response, the Response Evaluation Criteria in Solid Tumors 1.1 (RECIST 1.1), were updated with immune RECIST 1.1 (iRECIST 1.1) [11,12]. The differences are small but essential. The progression category includes two new subcategories. The first is immune Unconfirmed Progressive Disease (iUPD), which is labeled a progression according to RECIST 1.1. However, with iRECIST 1.1, a progression needs to be confirmed 4 to 8 weeks later by a new increase in lesion size to be included in the second subcategory of progressive disease, namely, immune Confirmed Progressive Disease (iCPD). This increases the delay before declaring a patient a non-responder and, therefore, the delay in changing the therapeutic line. Hence, morphological images, which form the basis of iRECIST 1.1, are not useful for predicting early response to ICIs. In summary, ICIs are a great alternative to mitigate the low efficacy of chemotherapy for some advanced cancers, with a greater overall survival rate. Above all, they offer a sustainable response for patients who respond well. However, the response rate is only 20-30% when considering all cancer types together. Thus, early response evaluation has become a major requirement to stop ineffective treatments earlier, which is not possible at present with the assessment from iRECIST 1.1 CT scans. To the best of our knowledge, there are currently no studies in the literature that have looked at dynamic contrast-enhanced ultrasound (DCE-US) for the purpose mentioned above. DCE-US is a real-time functional imaging modality with high temporal resolution and sensitivity to contrast agents. It highlights the signal from a microbubble-based contrast agent within the tumor micro-vascularization, making it possible to follow the tumor vascularization over time using a time-intensity curve (TIC) [13]. It has already been shown to be effective in the early response assessment of antiangiogenic drugs [14]. This technique brings to light the inflammation process induced by ICIs and observes the tumor destruction by the immune system.
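Before moving on, the iUPD/iCPD decision logic summarized earlier in this section reduces to a simple two-step rule; a toy sketch of ours (a hypothetical helper for illustration, not a clinical tool):

def irecist_status(progression_on_recist, confirmed_growth_4_to_8_weeks_later):
    # A first RECIST 1.1 progression is only "unconfirmed" (iUPD); it becomes
    # confirmed progressive disease (iCPD) only if a further increase in
    # lesion size is seen at re-evaluation 4-8 weeks later.
    if not progression_on_recist:
        return "no progressive disease"
    return "iCPD" if confirmed_growth_4_to_8_weeks_later else "iUPD"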
After ICI administration, a lymphoproliferation is observed, resulting in an influx of immune cells [15], leading to an increase of perfusion followed by a decrease due to necrosis induced by the destruction of tumor cells and vasculature. Our study aims to determine whether perfusion parameters extracted from dynamic contrast-enhanced ultrasound can be used as biomarkers for ICI early response evaluation.

Patients
This retrospective study enrolled 63 patients with metastatic melanoma, colorectal cancer, pulmonary cancer, kidney cancer, liver cancer, cervical cancer, or sarcoma, eligible for immunotherapy treatment (atezolizumab, nivolumab, or pembrolizumab). All these patients were included from three phase I or IIB clinical trials assessing the efficacy of the combination of systemic ICI and local treatment in patients with metastatic tumors. The inclusion criteria were as follows: (a) patients with metastatic cancer, (b) treatment with ICIs used alone or in association with other modalities of treatment, (c) patients older than 18 years of age, (d) target lesion accessible by ultrasound, and (e) tumor size larger than 10 mm at baseline in B mode. The exclusion criteria for this study were all the contraindications to the use of sulfur hexafluoride (Sonovue®): hypersensitivity to sulfur hexafluoride, uncontrolled systemic hypertension, severe pulmonary arterial hypertension, recent acute coronary syndrome, unstable ischemic heart disease, right-left shunt, respiratory distress syndrome, as well as pregnancy or breast-feeding. All patients signed informed consent forms.

DCE-US Technique and Quantification
The DCE-US examinations were conducted using an Aplio™ 500 ultrasound system (Canon, Puteaux, France). Depending on the metastatic site, two different probes (3.5 and 8 MHz) were used. The Aplio™ ultrasound system had access to the raw linear data using Vascular Recognition Imaging (VRI), a perfusion software, and CHI-Q quantification software, as in a previous study [14]. Standardized procedures were performed: first, a morphological analysis was undertaken to determine tumor sizes in all three dimensions with electronic calipers; then, the perfusion study was conducted after an intravenous bolus injection of 4.8 mL of Sonovue® (Bracco, S.P.A., Milan, Italy), followed by a flush of 5 mL of physiological saline. The perfusion curve was recorded for 3 min immediately after the Sonovue bolus. The time-intensity curve of the tumor perfusion was then modeled by a DCE-US study leader using the software already mentioned and a mathematical model based on the indicator-dilution theory, which models the flow of contrast microbubbles in the vasculature [13]. Seven perfusion parameters were then measured: four related to blood volume (peak intensity, area under the curve (AUC), AUC during the wash-in, AUC during the wash-out), two related to blood flow (time to peak intensity, slope of the wash-in), and the last parameter, the mean transit time. The parameters are represented in Figure 1.
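For illustration only (the study itself used the CHI-Q quantification software and a fitted dilution model), the seven parameters can be approximated directly from a sampled time-intensity curve; the mean transit time is taken here as the normalized first moment of the curve, which is one common approximation:

import numpy as np

def perfusion_parameters(t, intensity):
    # t: time samples (s); intensity: linearized contrast signal.
    i_peak = int(np.argmax(intensity))
    pi = float(intensity[i_peak])                    # peak intensity
    ttp = float(t[i_peak])                           # time to peak intensity
    auc = float(np.trapz(intensity, t))              # total AUC
    auc_wash_in = float(np.trapz(intensity[:i_peak + 1], t[:i_peak + 1]))
    auc_wash_out = auc - auc_wash_in
    slope = pi / ttp if ttp > 0 else float("nan")    # crude wash-in slope
    mtt = float(np.trapz(t * intensity, t) / auc)    # mean transit time (approx.)
    return {"PI": pi, "TTP": ttp, "AUC": auc, "AUC_in": auc_wash_in,
            "AUC_out": auc_wash_out, "WI_slope": slope, "MTT": mtt}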
Assessments
The DCE-US examinations were performed at baseline, day 8 (D8), and day 21 (D21) after the beginning of the treatment. For this study, the chosen target lesion was treated exclusively with ICIs without any additional local treatment. The tumor perfusion curve was modeled on these three dates to produce the seven DCE-US perfusion criteria described previously. The change in perfusion parameters was then measured between baseline-D8 and baseline-D21. In order to study the variation of perfusion due to the administration of ICI, we focused on the increase of tumor perfusion at D8 and its decrease at D21.

Analysis
The basis of our evaluation was overall survival (OS), defined as the time between the first DCE-US at baseline and death from any cause. Median and interquartile range (IQR) were used to report the distribution of the variation of the perfusion parameters. The Kaplan-Meier method was used for univariate analyses and Cox regression for multivariate analyses. DCE-US variation parameters were then used to separate the population into two subgroups to show their impact on OS. The log-rank test was used to compare the survival distributions of the categories obtained from the univariate parameters and to compute the p-value assessing the significance of the comparison. The perfusion parameters in each survival group were then reported in box plots, and a Mann-Whitney U-test of independence was performed to show the significance of the difference in parameters. The thresholds used to evaluate the chosen criteria were computed using maximally selected rank statistics [16,17].

Population
Between November 2016 and February 2021, 63 patients were enrolled in this study. Patient baseline characteristics are listed in Table 1. Among the 63 patients initially included, 25 patients had no ultrasound at D8 in their study protocol, 1 patient had a D8 DCE-US with unsatisfactory quality, and 10 had no ultrasound quantification at D21 (6 suboptimal qualities and 4 ultrasounds not performed). Thus, there were 37 remaining patients in the D8 group and 53 in the D21 group (Figure 2).

At D8
The analyses of changes from baseline to D8 revealed no significant association with OS for any of the perfusion parameters (Table 2). However, the difference in the AUC wash-in at D8 showed the strongest association (p = 0.0592). A maximally selected ranking test with a constraint of a minimum of 30% of the population in each class determined the best cutoff point for this parameter: an increase in AUC wash-in between baseline and D8 greater than 20% would seem to be associated with better OS (p = 0.3111) (Figure 3). This threshold of 20% separated the 37 patients into two groups: the first composed of 12 patients with good overall survival (median OS not reached), the second consisting of 25 patients with poor overall survival.
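The cutoff points reported here were obtained with maximally selected rank statistics; a minimal illustrative scan of candidate cutoffs using the lifelines package (an assumed tooling choice on our part, and one that omits the p-value adjustment the method applies for testing many cutoffs) could look as follows:

import numpy as np
from lifelines.statistics import logrank_test

def best_cutoff(marker, time, event, min_frac=0.30):
    # marker: per-patient variation (e.g. % change in AUC), NumPy array;
    # time/event: follow-up times and death indicators (1 = deceased).
    best_c, best_stat = None, -np.inf
    for c in np.unique(marker):
        low = marker <= c
        # Require at least min_frac of the patients in each class.
        if min(low.mean(), 1 - low.mean()) < min_frac:
            continue
        res = logrank_test(time[low], time[~low],
                           event_observed_A=event[low],
                           event_observed_B=event[~low])
        if res.test_statistic > best_stat:
            best_c, best_stat = c, res.test_statistic
    return best_c, best_stat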
The variation of the perfusion criteria from baseline to D8 in these two groups is represented in a box plot (Figure 4). It shows an increase in all perfusion parameters in the better overall survival group, with a significant difference for most parameters.

At D21
The analyses of changes from baseline to D21 revealed a significant association with OS for most of the perfusion parameters (Table 3). Change in the AUC at D21 was the most important criterion, showing the strongest association with OS (p = 0.0028). A maximally selected ranking test with a constraint of a minimum of 30% of patients in each class on the variation of the AUC brought out a cutoff point at 45%: a decrease greater than 45% in AUC between baseline and D21 was significantly associated with better OS (p = 0.0114) (Figure 5). This threshold at 45% separated the 53 patients into two groups: the first was composed of 20 patients with good overall survival (median survival not reached at the endpoint date), and the second consisted of 33 patients with poor overall survival. The variation of the perfusion criteria from baseline to D21 in these two groups is represented in the box plot (Figure 6). It shows a significant decrease for most perfusion parameters in the better overall survival group. The log hazard ratio of each parameter was estimated in a Cox model. Covariates studied were age, sex, and an AUC decrease between baseline and D21 greater than 45%. Only the latter was significantly correlated with survival (HR = 1.75; p-value = 0.05).
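A minimal sketch of this multivariate step on synthetic data (the lifelines package is our assumed tooling for the sketch; the authors worked in R/Bioconductor):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic example frame: one row per patient with follow-up time (months),
# death indicator, and the covariates studied above.
rng = np.random.default_rng(0)
n = 53
df = pd.DataFrame({
    "time": rng.exponential(12, n),
    "death": rng.integers(0, 2, n),
    "age": rng.normal(62, 10, n),
    "sex": rng.integers(0, 2, n),
    "auc_resp": rng.integers(0, 2, n),  # 1 = AUC decrease > 45% at D21
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary[["coef", "exp(coef)", "p"]])  # log-HR, HR and p per covariate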
Change in AUC between D8 and D21
For the most significant parameter, its variation across all three time points was studied for the eligible patients: 33 patients had a DCE-US at both D8 and D21. In the group with better survival at D21 (corresponding to the group of patients with a decrease of more than 45% in AUC compared to baseline), analyses of the changes in AUC between D8 and D21 revealed, in all patients without exception, a decrease in AUC between these two dates. Conversely, in the group with poorer survival, all patients except two showed an increase in AUC between D8 and D21 (Figure 8). Thus, a decrease in AUC between D8 and D21 is significantly associated with better overall survival (p = 0.0370) (Figure 9).
Discussion
With the advent of immunotherapy, revolutionizing therapeutic cancer management, ICIs are being studied in numerous cancers. With the significant increase in the use of ICIs, atypical new response patterns have emerged, such as pseudo-progression. It is defined as an increase in the size of lesions or the appearance of new lesions, followed by a potentially long-lasting positive response. Therefore, it is necessary to develop new methods to assess the efficacy of ICIs. In clinical routine, iRECIST 1.1 is accepted as the new reference in ICI scan evaluation. The guidelines for the response criteria of iRECIST 1.1 are well described in an article published in The Lancet Oncology in 2018 [11]. However, these criteria are based on morphological analysis, and they increase the delay by 1 month compared to RECIST 1.1 before patients can be classified as having progressive disease, which might be critical. Thus, there is an important need to find a tool for the early assessment of ICI response. With this view, we investigated whether DCE-US, with the study of perfusion parameters, could be useful for the early assessment of the therapeutic response. Our study confirmed that the early evaluation of AUC at D21 may be used to predict survival after treatment with ICIs. Indeed, a decrease of AUC by more than 45% at D21 is associated with better overall survival (p = 0.01). To our knowledge, this is the first study evaluating DCE-US in patients with metastatic cancer treated with ICIs. Our results are consistent with a large multicenter cohort study published in 2014, which confirmed that DCE-US could be used to predict early progression and overall survival after antiangiogenic therapy in metastatic cancers [14]. In that study, AUC was also found to be the best performing criterion for this evaluation, with a decrease in AUC at 1 month of more than 40% associated with better overall survival (p = 0.05).
Although the cutoffs are not identical, our study was highly motivated by that result, since the same criterion (a decrease of AUC) emerged as a relevant marker for overall survival. Moreover, in a study assessing the reproducibility of DCE-US perfusion parameters, AUC was found to be the most robust criterion [18]. Furthermore, our results at D8 (although not significant) and D21 show a trend. It appears that in patients with prolonged overall survival, perfusion increases at D8 and decreases strongly at D21. This result seems to be consistent with the tissue-level changes induced by the introduction of ICIs. Immune checkpoint blocking is shown to activate and lead to the proliferation of T-cells and NK-cells [15,19]. A pro-inflammatory environment is created. One aspect of this environment is the initiation of neoangiogenesis that allows an influx of immune cells (in particular CD8+ T lymphocytes) to eliminate the tumor cells, leading to the destruction of tumor vasculature along with tumor cells. This results in necrosis, which explains the decrease of perfusion in the case of response. This phenomenon may explain why an increase in perfusion at D8 in the tumor site would indicate a good response to immunotherapy (witnessing the influx of immune cells), while a decrease in perfusion at D21 would also indicate a good response (witnessing the tumor necrosis). Currently, more and more studies are looking at non-morphological criteria to evaluate the response to ICIs. The metabolic response of tumors also seems to be a good way to assess ICIs. In a retrospective study with a small cohort of 28 patients with non-small cell lung cancer (NSCLC) treated with nivolumab, the authors evaluated the potential of FDG PET/CT to monitor ICI response [20]: they used a modified PERCIST (PET Response Criteria in Solid Tumors), iPERCIST (immune PET Response Criteria in Solid Tumors), which is a dual-time-point evaluation of "unconfirmed progressive metabolic disease" status after the first PET scan evaluation, followed by a new evaluation after 4 weeks to confirm or deny progressive metabolic disease (which is similar to iRECIST). In that study, iPERCIST was a good tool to separate responder and non-responder patients, with significantly better overall survival in responder patients (p = 0.0003). Moreover, the comparison of iPERCIST with iRECIST showed a reclassification in 39% of the 28 patients with relevant additional prognostic information. However, we are confronted with the existence of a delay, as with iRECIST, before being able to declare patients as non-responders. In another prospective study with a small cohort of 24 patients, the authors investigated whether 18F-FDG-PET/CT could predict the therapeutic response to an ICI (nivolumab in NSCLC) in the early phase [21]. They showed that at one month, the 18F-FDG uptake with the measure of total lesion glycolysis (TLG) could significantly predict partial response (p = 0.021) and progressive disease (p = 0.002). Furthermore, a statistically significant difference in the predictive probability of response was found between TLG by PET and CT scans at 1 month (p = 0.0007). Perfusion has also been analyzed to assess the response to ICIs with DCE-MRI, particularly in a study assessing DCE-MRI perfusion to predict pseudo-progression in metastatic melanoma treated with immunotherapy [22]. With a cohort of 44 patients, the authors highlighted that the plasma volume (Vp) was significantly lower in pseudo-progression than in real progression (p = 0.04).
These studies represent interesting perspectives to complement morphological analysis in the early evaluation of the response to immunotherapy. However, unlike these tools, DCE-US is much less expensive, less invasive, and much more readily available, with the possibility of repeating the examinations regularly after the onset of treatment to study the perfusion profile. There is also no contraindication in renal failure, which is not the case for the other techniques mentioned above. They are also much less convenient and are associated with harmful effects if repeated too frequently; these effects include irradiation from PET/CT and the accumulation of gadolinium in the brain, for which we do not have the necessary hindsight concerning the safety of repeated injections. In addition to the previously mentioned tools, there is a new rising approach using artificial intelligence (AI) and radiomics models as predictive biomarkers of response to ICIs [23]. Trebeschi et al. [24] performed an AI-based characterization of 1055 lesions from 203 patients with advanced melanoma and NSCLC undergoing anti-PD1 therapy on the pretreatment contrast-enhanced CT imaging data. In this study, significant performance in predicting OS under ICIs was observed for both tumor types (p < 0.001). However, the predictive performance varied depending on the site of the metastases, with non-significant performance for liver metastases (p = 0.13) or adrenal metastases (p = 0.18). Khorrami et al. [25] developed a radiomics model based on changes in pretreatment and early post-treatment (6-8 weeks) CT scans of patients with NSCLC undergoing ICIs; in this study, changes in the intra-tumoral and peritumoral tissue could predict RECIST response (p < 0.05) and were associated with OS (HR 1.64, 95% CI 1.22-2.21). There are some potential limitations associated with our study. First, this was a monocentric retrospective study with a small population (63 patients). While most of these patients were in the D21 group, 10 patients, i.e., 15%, were excluded, essentially due to the suboptimal quality of the DCE-US examinations. Nonetheless, our results showed a significant association with overall survival at D21 and are consistent with a larger multicenter study that analyzed the same tool (DCE-US) with another therapeutic class. Second, the studied population was heterogeneous: it comprised different cancer types with various histological characteristics, and different ICIs were used in this study. Third, DCE-US explores and evaluates only one lesion, which may not represent all tumor lesions. Furthermore, we did not compare the ultrasound results with the CT scan, which is still the reference evaluation modality for anti-cancer treatments; this comparison might constitute the topic of further study. Finally, further retrospective studies on a much larger population, or a prospective one, would provide stronger proof of the validity of this biomarker.

Conclusions
We found that the decrease in perfusion parameters measured by DCE-US was significantly associated with the prolonged survival of patients treated with ICIs. A decrease of more than 45% in AUC at D21 seems a promising biomarker. Additional studies on a larger population would be useful to find a robust threshold to define non-responders earlier after the start of treatment, avoiding a harmful loss of time.
Finally, it would be interesting to study perfusion under DCE-US at regular intervals from the first days after the introduction of ICIs in a large cohort to try to find a specific vascular profile according to each type of response to immunotherapy (pseudo-progression, hyper-progressive disease, response, or non-response). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper. Data Availability Statement: The data presented in this study are available on request from the corresponding author and Nathalie LASSAU.
A Discussion on the Type Selection of the 200 m Span Closed Coal Shed in the Thermal Power Plant This paper discusses type selection for the concrete implementation of a 200 m span closed coal shed in a thermal power plant. We analyze the characteristics and state of development of five candidate schemes, focusing on introducing and recommending a closed coal shed that uses PVDF membrane material as its outer cover. We choose the steel grid of three-centered circulars plus the membrane framework as the structural scheme and strive to coordinate it closely with the surrounding buildings and environment. The renovation turns the shed into an industrial construction of the new era with both aesthetics and practicality, and promotes the rapid development of membrane-material structures as a new product of new technology. At the same time, the paper provides guidance and reference for the type selection of supramaximal span structures of 200 m and above. Closed Coal Shed Article 31 of the 'Law of the People's Republic of China on the Prevention and Control of Atmospheric Pollution' stipulates: when coal, gangue, coal cinder, coal ash, sandstone, lime, soil or other material is stored in densely inhabited areas, fire and dust prevention measures shall be taken in order to prevent atmospheric pollution [1]. After thermal power plants gradually completed the environmental protection transformation toward ultra-low emissions, sealing the coal yard became a relatively large environmental protection project in thermal power plants. A closed coal shed covers a large area, its construction cost is high, its investment return is low, and it has a great influence on the overall landscape of the plant. In recent years, whenever a new power plant is built, a supporting closed coal yard is a basic requirement, and renovation projects for the closed coal sheds of existing plants are also being vigorously promoted and implemented. Therefore, we should actively carry out renovation plans for the coal storage yards of thermal power plants. Such a renovation plan is in line with the long-term development interests of the enterprise and with industrial standards; moreover, it meets the requirements of laws and regulations and complies with the growing demands of corporate social responsibility. This paper introduces the phase-III coal yard closure project of the 2 × 680 MW units at a power plant in Shandong. Shenfu-Dongsheng coal was used as the design coal, and four strip-shaped coal storage yards were built. The total coal storage capacity was 190,000 tons, meeting the coal consumption of 15 days under the maximum continuous output operating condition of the units. This coal storage capacity just reaches the requirement of the 'Code for Design of Fossil Fired Power Plant' (GB 50660-2011): the coal storage capacity should not be less than the 15-day coal consumption of the corresponding unit in a thermal power plant whose transport distance is greater than 100 km [2]. The economic benefits of thermal power plants are closely related to coal prices, and coal storage is adversely affected by winter transportation. It is therefore necessary to stockpile coal well in advance of winter, so as to meet the requirements of sustainable and stable operation of thermal power plants and to secure the public's heating demands.
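As a quick plausibility check of the 15-day storage requirement cited above, the sketch below back-computes the implied maximum daily coal consumption from the stated 190,000-ton capacity; the helper function and its use are illustrative only, derived from the paper's own figures.

```python
def required_capacity(daily_consumption_t: float, days: float = 15.0) -> float:
    """Minimum coal storage (t): daily consumption times the days of autonomy
    demanded by GB 50660-2011 for plants with transport distance > 100 km."""
    return daily_consumption_t * days

# 190,000 t covering 15 days at maximum continuous output implies
# a daily consumption of about 12,667 t/day for the 2 x 680 MW units.
daily = 190_000 / 15
print(f"implied daily consumption: {daily:,.0f} t/day")
print(f"required 15-day storage:   {required_capacity(daily):,.0f} t")
```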
Most coal yard areas of existing power plants are intensively used and do not have the space for expansion. It is therefore necessary to build large-span closed coal sheds to ensure that the coal storage meets the operation requirements of the units. Engineering Overview This paper introduces the phase-III coal yard closure project of the 2 × 680 MW ultra-supercritical coal-fired condensing generator sets at a power plant in Shandong, located on the coast of an economic and technological development zone. Two DQ2500/2500·35 bucket-wheel stacker-reclaimers were set up in the coal yard, and there are four strip-shaped coal yards. The effective length is 210.7 m, the stack height is 12 m, and the widths of the coal yards are 47.5 m, 73.8 m (for the two intermediate yards) and 47.5 m, respectively. The total coal storage capacity of the original design is 190,000 tons [3]. The peak ground acceleration at the plant location is 0.10 g, the basic seismic intensity is 7 degrees, the basic wind pressure is 0.65 kN/m² for a 50-year return period and 0.75 kN/m² for a 100-year return period, and the basic snow pressure is 0.50 kN/m² for a 50-year return period and 0.60 kN/m² for a 100-year return period [4]. A Discussion of Type Selection for the Design According to the on-site data collection and reconnaissance, the proposed closed coal yard was designed with the following dimensions: the span is 200 m, the length is 216 m, and the closed projection area is 43,200 m². The following five candidate designs are proposed for analysis and comparison. Type 1 for Selection: The Double-Span 'Seagull-Shaped' Steel Grid of Three-Centered Circulars Plus the Membrane Framework In this scheme, the seagull-shaped cylindrical grid of three-centered circulars is used as the internal supporting frame of the building, and membrane material is used for the external closure. The requirement to build dry coal sheds for coal-fired power plants was first proposed by some small and medium-sized power plants in the late 1960s. By the early 1970s, dry coal sheds had quickly come into use in most provinces south of the Yangtze River. This trend rapidly became a leading concept and was included as a design standard in the 'Design Specification' [5]. The spans of dry coal sheds constructed in the early period were generally small, set according to the limited area within the coal yard. After decades of development, grid design and construction technology, mostly applied in engineering within a span of 120 m, have become very mature and reliable, and rich experience in engineering technology and operation management has accumulated. In the closed coal sheds built so far, purlins plus color steel plates have been used for the building envelope, whereas here membranes are used as the material of the exterior-protective construction. Membrane material is lighter than profiled steel sheets of aluminum-coated zinc, and the load of a membrane framework is smaller than that of conventional purlins plus color steel plates. By using the membrane framework, the load of the external closure on the steel grid of three-centered circulars is reduced, which not only reduces the amount of steel used in the structure and the size of the foundation piles, but also reduces the total cost of the closed coal shed project.
The membrane structure is a building or structure composed of membrane materials and other components, and it is divided into the tension membrane structure and the inflatable membrane structure [6]. In this paper, the tension membrane structure is adopted. The design of the membrane structure is implemented according to the relevant requirements of the 'Technical Specifications for the Membrane Structure', and the acceptance check of the membrane follows the relevant provisions of the 'Acceptance Regulations for Construction Quality of Membrane Structure Engineering' T/CECS 664-2020. Owing to the elimination of purlins, for the closure of a coal yard of the same scale, the steel consumption per square meter in the membrane framework scheme is about 5 kg less than that of the grid structure scheme, and the steel consumption per unit area of horizontal projection of the former scheme is about 52 kg/m². The structural section of scheme one is shown in Figure 1. Features of the double-span 'seagull-shaped' steel grid of three-centered circulars plus the membrane framework:
1. It uses a double-span design; its grid structure has high supporting strength and the structure is stable.
2. In the external structure, the membrane framework is used instead of the conventional purlins plus color steel plates, ensuring light weight and easy maintenance. The density of the membrane material is only about one fifth of that of the color steel plate. Using the membrane framework saves the purlin weight and the steel consumption of the steel grid, the cost of the membrane material itself is also low, and thus the enclosure cost is reduced.
3. The closed coal shed is arranged symmetrically about the axis of the middle column row, forming a 'seagull-shaped' profile. It has an aesthetic outlook in coordination with the blue sea, especially at a site only hundreds of meters from the shore.
4. The service life of the membrane framework is better than that of the color steel plate, and it is excellent in corrosion resistance; its overall advantage is reflected in the saving of investment costs.
5. The row of supporting structure in the middle of the coal yard has some influence on shoveling and unloading operations. At the same time, protection devices must be installed to prevent the coal heap from corroding the main load-bearing structure.
6. Effective discharge of rain and snow water should be considered, and drainage channels should be set in the valley gutters. For areas with large snow loads, a heated snow-melting device, or manual clearing, should also be considered to facilitate later dust and snow removal; this scheme is most suitable for southern areas with little or no snow load in winter.
7. Because the membrane framework structure has been applied to sealing coal yards only for a short time, most of the data are provided by the manufacturers and still need to be verified over long periods of practice.
Type 2 for Selection: The Double-Span 'Arc-Shaped' Steel Grid of Three-Centered Circulars Plus the Membrane Framework In this scheme, the arc-shaped cylindrical grid of three-centered circulars is used as the internal supporting frame of the building, and membrane material is used for the external closure.
On the basis of candidate design 1, the height of the middle supporting structure and the overall shape of the grid are adjusted. After closure, the overall appearance of the coal shed is an arc structure, which reduces the snow load on the roof, improves structural safety, and reduces snow clearing during operation and maintenance; it is especially suitable for areas with large snow loads in winter. The steel consumption per unit area of horizontal projection of this scheme is slightly higher than that of scheme one, about 60 kg/m². The structural grid section of scheme two is shown in Figure 2. Type 3 for Selection: The Prestressed Large-Span Steel Trusses The main structure of this coal shed is the prestressed truss-cable structure; longitudinal pipe trusses and longitudinal horizontal bracing members are arranged between the main trusses to ensure longitudinal force transmission, safety and stability. The roof enclosure generally adopts the 'purlins plus color steel plates' scheme. On the whole, the steel consumption of prestressed truss structures is less than that of grid structures and portal steel frames of the same span. With the development of steel structure technology and of calculation software, a variety of dry coal shed structures have been emerging. At present, the prestressed pipe-truss structure, formed by the organic combination of prestressing technology and ordinary steel trusses, can realize the closure of coal sheds with spans of more than 200 m [7]. For example, the span of the closed coal shed of Fangjiazhuang Power Plant is 229 m, the largest among similar buildings. Under this scheme, the steel consumption per unit area of horizontal projection is about 69.08 kg/m². The structural section of scheme three is shown in Figure 3. The prestressed large-span steel truss can span a large space. By setting horizontal string cables, it reduces the thrust at the arch feet and resists wind load; a rough statics sketch of this thrust reduction follows below. The foundation column spacing can be made about 15 m. Compared with a traditional grid foundation, it saves nearly half of the foundation engineering quantity, and its advantage is more obvious in regions with poor geological conditions. The prestressed large-span steel truss sets prestressed cables in the upper clearance area of the arch, balancing the horizontal thrust of the partial arch structures and forming a cable-arch truss structure, which is dominated by arch action and has the characteristics of a string truss. Thus, the vertical stiffness of the horizontal section of the roof is strengthened and steel consumption is saved [8]. The number of prestressed pipe-truss members is small and the structure is simple. All the basic welding work for installing the general steel trusses is completed on the ground, and installation proceeds in separately hoisted sections with high construction safety. Since the joint welding is basically carried out on the ground, it is also convenient for the owners and supervisors to monitor the construction quality. At the same time, the prestressed pipe truss has good sealing and convenient maintenance in the later stage. In this scheme, there is no system of vertical columns in the middle part of the coal yard, so coal piling is not affected and the reserves can be large.
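The thrust-reduction argument above can be made concrete with elementary arch statics. The sketch below uses the textbook formula for the horizontal thrust of a parabolic arch under a uniformly distributed load, H = wL²/(8f), and shows how a horizontal tie cable carrying part of that thrust reduces the reaction transmitted to the foundations. All numerical values are hypothetical and are not taken from the paper.

```python
def arch_thrust(w_kn_per_m: float, span_m: float, rise_m: float) -> float:
    """Horizontal thrust H = w * L^2 / (8 * f) of a parabolic arch under uniform load."""
    return w_kn_per_m * span_m**2 / (8.0 * rise_m)

# Hypothetical numbers: 10 kN/m roof load, 200 m span, 40 m rise.
H = arch_thrust(10.0, 200.0, 40.0)         # = 1250 kN thrust per arch
tie_share = 0.8                            # assume the string cable carries 80 %
foundation_thrust = H * (1.0 - tie_share)  # remaining 250 kN reaches the footing
print(H, foundation_thrust)
```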
With its high safety performance, excessive snow load will not accumulate on the roof, and the middle area does not affect the operation of stacker-reclaimers, shovel trucks and other equipment. However, the steel consumption per unit area is large, the construction cost is high, and on-site processing is needed; the processing is difficult and the construction period is long. Type 4 for Selection: The Two Independent Grid Structures This scheme adopts a closed coal shed made of two independently arranged steel grid structures of three-centered circulars plus single-layer profiled steel sheets of aluminum-coated zinc. An 8-meter-wide channel must be reserved in the middle. The span of each of the two closed coal sheds is about 96 m, and the total projected area is 41,472 m². In this scheme, the steel consumption per unit area of horizontal projection is 45.70 kg/m². The structural section of scheme four is shown in Figure 4. The grid structure has the advantages of wide adaptability, mature technology, convenient processing, easy installation, practicability, economy, and aesthetic value. However, the grid structure has a large horizontal thrust, small column spacing (usually 6-8 m or even smaller), a large foundation engineering quantity and cost, and high requirements for the foundation site. Especially under poor geological conditions, or where surrounding buildings seriously restrict reconstruction projects, the adverse effects of the large reaction force of the grid foundation on structural safety and economy are more obvious. The grid nodes are connected by bolt balls with a large number of nodes, and the connecting points of the bolt rods are prone to electrochemical reaction with the corrosive medium. Coal shed corrosion in the past was mainly internal corrosion of the ball nodes, which was not easy to detect, repair or maintain; some improvement measures have been taken in the current design. The total coal storage capacity of this scheme is only 170,000 tons, about 10% less than that of the other candidate schemes. To meet the 15-day coal storage requirement, it would be necessary to increase the length or span of the closure. Type 5 for Selection: The Coal Shed of the Air Membrane The air membrane is a new type of high-technology building of the new era. This coal shed adopts an independent inflatable membrane structure with a span of 200 m, a length of 216 m and a projected area of 43,200 m². The fully enclosed inflatable membrane structure is a new kind of closed structure that has emerged in recent years. It adopts a special membrane material of high strength and flame retardancy, welded and assembled into a closed shell; the lower edge of the membrane is anchored to the concrete coal retaining wall to form a closed space. Air is injected with blowers, and the pressure difference between indoors and outdoors is maintained so that the membrane surface is tensioned to ensure stiffness, maintaining its shape and resisting external loads as a peculiar structural form [9]. There are many examples of the construction and operation of air-membrane buildings abroad, and construction examples in China are gradually increasing. With growing construction and operation experience, the span of air-membrane closures is also increasing.
The closed coal shed of Wangqu Power Plant is the air-membrane building with the largest span in Asia, reaching 180 m with a length of 198 m. For air-membrane buildings with spans above 180 m, relevant design data and construction experience are still lacking. The fully enclosed air-membrane coal shed offers large space and flexible shapes, with the characteristics of pollution-free operation, a small maintenance workload and long life. The flexible membrane material is light and cheap, and its overall cost is much lower than that of the grid structure and the prestressed pipe-truss structure. Its construction period is short, and construction can proceed in winter. It has good light transmittance, is reusable, and has strong corrosion resistance. Continuous air supply is needed during the operation of the air-membrane coal shed, which consumes a certain amount of electric energy. Compared with the steel-structure coal storage sheds used in the industry, the air-membrane coal storage shed greatly reduces the use of steel and cement; the cost of the concrete foundation alone can be reduced by 60%, while saving nearly 30% of the overall project funds. It is estimated that the operation and maintenance cost of the coal storage shed in Shenba Coal Preparation Plant is about CNY 900,000 a year, whereas a steel-structure coal storage shed of the same level costs around CNY 1.5 million [10]. The following observations summarize the candidate schemes for a closed coal shed of supramaximal span over 200 m. The double-span 'seagull-shaped' steel grid of three-centered circulars plus the membrane framework can be used to seal the coal shed. The membrane framework replaces the conventional purlins plus color steel plates for external sealing, and its load is smaller; using it reduces the load of the external sealing on the steel grid of three-centered circulars, which reduces the amount of steel used in the structure and the size of the foundation piles, and lowers the total cost of the closed coal shed project. The double-span 'arc-shaped' steel grid of three-centered circulars plus the membrane framework can also be used. On the basis of the membrane framework, the shape is changed from 'seagull-shaped' to 'arc-shaped', which effectively solves the problem of snow accumulating in the middle of the coal shed in the 'seagull-shaped' scheme. When the coal shed is sealed by the prestressed large-span steel truss, coal storage in the middle is not affected, but the steel consumption per unit area of horizontal projection is relatively large and the project cost is relatively high. When two independent grid structures are used, the intermediate channel occupies part of the original coal accumulation area, reducing coal storage by around 10%. The air-membrane coal shed is a new type of coal shed that has entered engineering application in recent years. It has certain advantages over other closed coal sheds in terms of environmental protection and cost, and can therefore serve as an option in the design of coal sheds for thermal power plants in the future; however, its adaptability to the environment in terms of wind pressure and snow pressure needs to be further tested and improved in actual operation [11]. Photos of each scheme after implementation are shown in Fig. 5. A rough cost-side comparison of the schemes' steel consumption is sketched below.
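To put the per-square-meter figures quoted for the four steel-based schemes on a common footing, the following sketch multiplies each scheme's reported steel consumption by its projected area to estimate total steel tonnage. This is only an illustrative comparison of the numbers already given in the text; it ignores foundations, membranes and all other cost components.

```python
# (steel consumption kg/m^2, projected area m^2) as reported in the text
schemes = {
    "1 seagull grid + membrane": (52.00, 43_200),
    "2 arc grid + membrane":     (60.00, 43_200),
    "3 prestressed steel truss": (69.08, 43_200),
    "4 two independent grids":   (45.70, 41_472),
}

for name, (kg_per_m2, area_m2) in schemes.items():
    tonnes = kg_per_m2 * area_m2 / 1000.0
    print(f"scheme {name}: about {tonnes:,.0f} t of steel")
# Scheme 4 uses the least steel per m^2 but stores ~10 % less coal;
# scheme 1 combines moderate steel use with the full 190,000 t capacity.
```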
This project is located in the coastal area of Shandong Province. The closed coal shed adopts the structural scheme of the steel grid of three-centered circulars plus the membrane framework. It uses PVDF membrane material as the external enclosure, and reasonable technical measures are adopted to create the supramaximal space meeting the functional requirements. Based on its huge structural system of the steel grid of three-centered circulars plus the membrane framework, it will be the most prominent single structure in the plant after completion. The new closed coal shed embodies the concept of green environmental protection and low-carbon development, and its façade is processed with modern techniques into a simple and spacious form. Accordingly, the design of this project adopts the candidate scheme of the double-span 'seagull-shaped' steel grid of three-centered circulars plus the membrane framework, and fully considers the snow load in the two-span middle valley in winter. For coal sheds in regions with large snow loads, the installation of de-icing devices should also be considered to avoid the adverse impact of excessive snow on the shed. After implementation, this project will become a typical example of a supramaximal membrane framework structure. The grid model of this scheme is shown in Figure 6, and the partitions of the membrane structure are shown in Figure 7. Conclusion With the continuous development and progress of science, technology and material properties, there will be more and more structures with spans over 200 m, and the technology for selecting schemes for supramaximal span structures will be constantly updated. The recommended scheme, the steel grid of three-centered circulars plus the membrane framework, is an innovative product proposed through continuous exploration on the basis of existing technology and experience. It combines the grid structure and the membrane structure for the first time, so as to exploit the advantages of each system and realize the overall benefits of the closed coal shed project. The selected scheme uses PVDF membrane as the external enclosure, adopts reasonable technical measures, creates the large-span space, meets the functional requirements, and reflects the industrial character. The adopted closed coal shed of the steel grid of three-centered circulars plus the membrane framework is an industrial building of the new era with both aesthetics and practicality; while highlighting the main body, it pays close attention to detailed design. Through this discussion of type selection for the 200 m span closed coal shed, the paper provides guidance and reference for the selection of large-span structures of 200 m and above, and focuses on introducing and recommending the membrane framework structure as a new product of new technology.
Corners and collapse: Some simple observations concerning critical masses and boundary blow-up in the fully parabolic Keller-Segel system Our main result shows that the mass $2\pi$ is critical for the minimal Keller-Segel system
\begin{align}\label{prob:abstract}\tag{$\star$}
\begin{cases}
u_t = \Delta u - \nabla \cdot (u \nabla v), \\
v_t = \Delta v - v + u,
\end{cases}
\end{align}
considered in a quarter disc $\Omega = \{\,(x_1, x_2) \in \mathbb{R}^2 : x_1>0,\ x_2>0,\ x_1^2 + x_2^2 < R^2\,\}$ for some $R>0$, in the following sense: For all reasonably smooth nonnegative initial data $u_0, v_0$ with $\int_\Omega u_0 < 2\pi$, there exists a global classical solution to the Neumann initial boundary value problem associated to \eqref{prob:abstract}, while for all $m > 2\pi$ there exist nonnegative initial data $u_0, v_0$ with $\int_\Omega u_0 = m$ so that the corresponding classical solution of this problem blows up in finite time. At the same time, this gives an example of boundary blow-up in \eqref{prob:abstract}. Up to now, precise values of critical masses had been observed in spaces of radially symmetric functions or for parabolic-elliptic simplifications of \eqref{prob:abstract} only. Before discussing qualitative properties of solutions of (1.1), we note that (1.1) is locally well-posed also in nonsmooth domains. While for a four-component system introduced in [17] a well-posedness result has recently been obtained in [14], for (1.1) it mainly suffices to reference the classical work [10]. That is done in Section 2, where we prove Proposition 1.1. Suppose that $\Omega$ is a piecewise $C^{1+\alpha}$ bounded domain in $\mathbb{R}^2$ for some $\alpha \in (0, 1)$ with a finite number of vertices and with nonvanishing interior angles (1.2) (meaning that $\Omega$ is a bounded planar domain of class $\Sigma^{1,\alpha}$ for some $\alpha \in (0, 1)$ in the sense of [7, Definition 2.1]) and $0 \le u_0 \in C^0(\overline\Omega)$ as well as $0 \le v_0 \in \bigcup_{p>2} W^{1,p}(\Omega)$. (1.3) Then there exist $T_{\max} = T_{\max}(u_0, v_0) \in (0, \infty]$ and a pair of functions $(u, v)$ solving (1.1) in $\Omega \times [0, T_{\max})$ both weakly and classically in the sense of Definition 2.1, with the property that $T_{\max}$ is maximal, meaning that this solution cannot be extended beyond $T_{\max}$. In order to describe the critical mass phenomena in more detail, we henceforth let $T_{\max}(u_0, v_0)$ be as given by Proposition 1.1 and for $\Omega$ as in (1.2) introduce the values
$$M_\star(\Omega) := \sup\Big\{\, m > 0 : T_{\max}(u_0, v_0) = \infty \text{ for all } (u_0, v_0) \text{ fulfilling (1.3) with } \textstyle\int_\Omega u_0 < m \,\Big\} \qquad (1.4)$$
as well as
$$M^\star(\Omega) := \inf\Big\{\, m > 0 : \text{there exist } (u_0, v_0) \text{ fulfilling (1.3) with } \textstyle\int_\Omega u_0 = m \text{ and } T_{\max}(u_0, v_0) < \infty \,\Big\}. \qquad (1.5)$$
That is, for all masses smaller than $M_\star(\Omega)$, all solutions exist globally, while for all masses larger than $M^\star(\Omega)$, initial data exist whose corresponding solutions blow up in finite time. This leaves open the possibility that $M_\star(\Omega) < M^\star(\Omega)$. In this case, there would be an intermediate regime in which for some masses all solutions exist globally and for some masses they do not (cf. [8]). Moreover, when $\Omega$ is a disc, we may ask whether these values change when the situation is restricted to radially symmetric settings, i.e. when we additionally require that $u_0, v_0$ are radially symmetric. (1.6) To that end, we also introduce $M_\star(\Omega, \mathrm{rad})$ and $M^\star(\Omega, \mathrm{rad})$, defined as in (1.4) and (1.5) but with (1.3) complemented by the radial symmetry requirement (1.6). Several partial results for the size of these values are available. If $\Omega$ is a smooth, bounded, planar domain, then $M_\star(\Omega) \ge 4\pi$ and $M_\star(\Omega, \mathrm{rad}) \ge 8\pi$, cf. [25]. For such domains and all masses larger than $4\pi$ and not equaling an integer multiple of $4\pi$, unbounded solutions are constructed in [15] (and also more recently in a different way in [9]); these, however, may potentially exist globally in time, so that this result has no direct influence on $M^\star(\Omega)$. If $\Omega = B_R(0) \subseteq \mathbb{R}^2$, $R > 0$, then $M^\star(\Omega, \mathrm{rad}) \le 8\pi$ (and hence also $M^\star(\Omega) \le 8\pi$), cf.
[21] (see also the earlier work [12]). Thus, if $\Omega$ is a disc, then $M_\star(\Omega, \mathrm{rad}) = 8\pi = M^\star(\Omega, \mathrm{rad})$, while for arbitrary smooth, bounded, planar domains $\Omega$ it is up to now only known that $4\pi \le M_\star(\Omega) \le M^\star(\Omega) \le 8\pi$. In [24], the critical mass identity $M_\star(\Omega) = 4\pi = M^\star(\Omega)$ is conjectured for smooth, bounded, planar domains $\Omega$. Moreover, [24, Theorem 1] implies that blow-up has to occur at a single point at the boundary (or not at all) for masses between $4\pi$ and $8\pi$. The actual occurrence of such boundary blow-up has, up to now, not been shown for (1.1). As usual, more is known for parabolic-elliptic simplifications of (1.1); for a detailed discussion of the parabolic-elliptic system, we refer to [28]. Let us also briefly mention that critical mass phenomena (with slightly different flavors) have also been detected if $\Omega = \mathbb{R}^2$ ([5], [31]), if fluid interaction is accounted for ([11, 32]), if the signal is produced indirectly ([30]) or if different boundary conditions are imposed on $v$ ([3], [8]). These results naturally lead to the question whether there is a domain $\Omega$ with $M_\star(\Omega) = M^\star(\Omega)$; that is, whether there is a critical mass distinguishing between global existence and finite-time blow-up also for the fully parabolic system (1.1) in non-radially symmetric settings (and if there is, what its precise value is). Our main result states that for certain domains $\Omega$, we can indeed guarantee that $M_\star(\Omega) = M^\star(\Omega)$. For instance, $2\pi$ is the critical mass for quarter discs. Theorem 1.2. Let $\Omega \subset \mathbb{R}^2$ be a circular sector with central angle $\theta \in (0, \frac{\pi}{2}]$ (and arbitrary positive finite radius). Then $$M_\star(\Omega) = 4\theta = M^\star(\Omega).$$ The proof of Theorem 1.2 is split into two parts. We first show in Section 3 that the solution $(u, v)$ constructed in Proposition 1.1 based on [10] is global whenever the initial mass is sufficiently small (Theorem 1.3). This is achieved by adapting the proof in [25] to non-smooth domains, thereby crucially relying on the Trudinger-Moser inequality (cf. [6, Proposition 2.3]), whose constants depend on the minimal interior angle of the domain. Next, in Section 4, if $\Omega$ is a circular sector and the initial mass is sufficiently large, we construct initial data such that the corresponding solutions blow up in finite time. This is achieved by restricting finite-time blow-up solutions on discs (constructed in [21]) to $\Omega$. Theorem 1.4. Let $\Omega \subset \mathbb{R}^2$ be a circular sector with central angle $\theta \in (0, 2\pi)$ (and arbitrary positive finite radius). Then $$M^\star(\Omega) \le 4\theta.$$ Theorem 1.4 appears to be the first result for (1.1) (or any fully parabolic chemotaxis system considered on a bounded domain) where blow-up is shown to take place on the boundary. For most choices of $\theta \in (0, 2\pi)$, Theorem 1.4 shows that corners of $\Omega$ may be blow-up points, but the choice $\theta = \pi$ makes it clear that blow-up can also happen at smooth parts of the boundary. We also note that our technique fails if one considers (1.1) with (no-flux boundary conditions for $u$ and) homogeneous Dirichlet boundary conditions for $v$. However, for such a system, blow-up at a boundary point probably should not be expected at all: at least for certain parabolic-elliptic simplifications, all blow-up points have been proven to be contained in the interior of the domain ([29]). Local existence: Proof of Proposition 1.1 Throughout this section, we fix a domain $\Omega$ fulfilling (1.2) and $u_0, v_0$ satisfying (1.3). Definition 2.1. • A pair of nonnegative functions $(u, v)$ is called a classical solution of (1.1) if $u, v \in C^0(\overline\Omega \times [0, T)) \cap C^{2,1}((\overline\Omega \setminus V) \times (0, T))$, where $V$ denotes the set of vertices of $\Omega$, and the equations in (1.1) are fulfilled pointwise in $(\overline\Omega \setminus V) \times [0, T)$.
Remark 2.2. Let $T \in (0, \infty)$ and let $(u, v)$ be a weak solution of (1.1). Then (2.1)-(2.4) assert $\Delta v \in L^2(\Omega \times (0, T))$, hence (2.2) and (2.4) imply $$\frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega |\nabla v|^2 = -\int_\Omega v_t \Delta v \qquad \text{a.e. in } (0, T).$$ The following local existence result and extensibility criterion are essentially due to [10]; we note that a local existence result for less regular initial data has been proven as well. According to the extensibility criterion (2.5), if $T_{\max} < \infty$, then the solution has to be unbounded as $t \nearrow T_{\max}$. In order to show (2.5), let us assume that $T_{\max} < \infty$ and that there exists a bound for the solution on $(0, T_{\max})$. Proof. For every $\psi \in C^2(\overline\Omega)$ with $\partial_\nu \psi = 0$ on $\partial\Omega$ and every $\varphi \in C^1(\overline\Omega)$, direct computations show that corresponding weak identities hold for all $T \in (0, T_{\max})$. The main idea is now to fix a point $x \in \overline\Omega \setminus V$, where $V$ denotes the set of vertices of $\Omega$, and $0 < \tau < T < T_{\max}$, choose a cut-off function $\psi \in C^2(\overline\Omega)$ with $\partial_\nu \psi = 0$ on $\partial\Omega$ which equals $1$ in a neighbourhood of $x$ and vanishes on a neighbourhood of $V$ (existence of such $\psi$ can be shown by, e.g., following the proof of [4, Lemma 3.2]), and then to apply various parabolic regularity results to $\psi u$ and to $\psi v$, which solve certain parabolic equations on smooth domains containing $\operatorname{supp} \psi$. In each step, we may choose a new cut-off function $\tilde\psi$ with smaller support, so that regularity information on, say, $\nabla(\psi v)$ translates to information on $\nabla v$ on all of $\operatorname{supp} \tilde\psi$. Proof of Proposition 1.1. In Lemma 2.3, a local weak solution of (1.1) has been constructed, which is also a classical solution by Lemma 2.4. Global existence: Proof of Theorem 1.3 Throughout this section, we assume (1.2) and denote the minimal interior angle of $\Omega$ by $\theta$ (and set $\theta = \pi$ if there are no corners). Moreover, we fix $u_0, v_0$ fulfilling (1.3) as well as the solution $(u, v)$ of (1.1) and its maximal existence time $T_{\max}$ given by Proposition 1.1. We shall show $T_{\max} = \infty$ for sufficiently small $\int_\Omega u_0$, which we will achieve by following the reasoning in [25] and [10]. (We also refer to [2], where these techniques have been adapted to systems with different boundary conditions for $v$.) The approach rests on two crucial observations. The first one, which was first noted in [25] and [10], is that (1.1) admits an energy-type functional. Lemma 3.1. With $\mathcal{F}(u, v) := \int_\Omega u \ln u - \int_\Omega u v + \frac12 \int_\Omega |\nabla v|^2 + \frac12 \int_\Omega v^2$, along the solution the identity $$\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{F}(u, v) = -\int_\Omega v_t^2 - \int_\Omega u \,\bigl|\nabla(\ln u - v)\bigr|^2 \;\le\; 0$$ holds. Proof. This follows by a direct calculation, see for instance [25, Lemma 3.3]. Second, we will rely on the following consequence of the Trudinger-Moser inequality (Lemma 3.2), whose constant depends on the minimal interior angle $\theta$ of $\Omega$. Similarly as in [25, Lemma 3.4], these preparations allow us to show that solutions exist globally whenever the mass of the first component is sufficiently small. Theorem 1.3. If $m := \int_\Omega u_0 < 4\theta$, then $T_{\max} = \infty$; in particular, $M_\star(\Omega) \ge 4\theta$. Proof. Since $m < 4\theta$, we can find $\eta > 0$ such that $\frac{(1+\eta)m}{8\theta} < \frac12$. As integrating the first equation in (1.1) shows $\int_\Omega u = m$ in $(0, T_{\max})$, applying Jensen's inequality and Lemma 3.2 yields a lower estimate for $\mathcal{F}$ in terms of $\int_\Omega u \ln u$, where in the last step we have used $\frac{(1+\eta)m}{8\theta} < \frac12$. As testing the second equation in (1.1) with $1$ reveals $\int_\Omega v \le \max\{\int_\Omega v_0, m\}$, we thus conclude that there is $c_1 > 0$ such that the corresponding bound holds. Together with Lemma 3.1, this shows boundedness of $\int_\Omega u \ln u$ in $(0, T_{\max})$. From [21], we have the following blow-up result on discs: Let $\Omega = B_R(0) \subset \mathbb{R}^2$ with some $R > 0$, and let $m > 8\pi$. Then for all $T > 0$ and each $p > 1$ there exist $\varepsilon > 0$ and $(u_0, v_0) \in I := \{(\hat u_0, \hat v_0) \in C^0(\overline\Omega) \times W^{1,\infty}(\Omega) \mid \hat u_0$ and $\hat v_0$ are radially symmetric and positive in $\overline\Omega\}$ such that $\int_\Omega u_0 = m$, and such that all initial data $(\tilde u_0, \tilde v_0) \in I$ satisfying $\|\tilde u_0 - u_0\|_{L^p(\Omega)} + \|\tilde v_0 - v_0\|_{W^{1,2}(\Omega)} < \varepsilon$ lead to solutions blowing up before time $T$. In restricting these solutions to circular sectors, we have to ensure that the boundary conditions are still satisfied, and note the following elementary fact:
Lemma 4.2. Let $R > 0$, let $\xi, \nu \in \partial B_1(0)$ with $\xi \cdot \nu = 0$ and let $\varphi \in C^1(\overline{B_R(0)})$ be radially symmetric. Then $\partial_\nu \varphi(r\xi) = 0$ for all $r \in (0, R)$. Proof. Without loss of generality, we may assume $\xi = (1, 0)$ and $\nu = (0, 1)$. Then for every $x_1 \in (0, R)$, the function $h \mapsto \varphi(x_1, h)$ is even and hence has derivative $0$ at $h = 0$. Lemma 4.3. Let $\Omega$ be a circular sector with central angle $\theta \in (0, 2\pi)$ and radius $R > 0$, and let $m > 4\theta$. Then there exist $(u_0, v_0)$ fulfilling (1.3) with $\int_\Omega u_0 = m$ and $T_{\max}(u_0, v_0) < \infty$. Proof. Let $(\tilde u, \tilde v)$, with initial data $(\tilde u(\cdot, 0), \tilde v(\cdot, 0)) =: (\tilde u_0, \tilde v_0)$ satisfying $\int_{\tilde\Omega} \tilde u_0 = \tilde m := \frac{2\pi}{\theta} m > 8\pi$, be a radially symmetric classical solution of (1.1) on $\tilde\Omega := B_R(0)$ blowing up at some finite time $T \in (0, \infty)$, as provided by the result from [21] stated above. Moreover, [24, Theorem 3] asserts that $0$ is the only blow-up point of $\tilde u$. We set $(u_0, v_0) := (\tilde u_0, \tilde v_0)|_{\overline\Omega}$ and note that due to the radial symmetry of $\tilde u_0$ we have $\int_\Omega u_0 = \frac{\theta}{2\pi} \tilde m = m$, and that $0$ is also a blow-up point of $u$. We next claim that $(u, v) := (\tilde u, \tilde v)|_{\overline\Omega \times [0, T)}$ is a classical solution of (1.1): that the differential equations, the initial conditions and the boundary conditions on the circular arc are fulfilled follows immediately from the fact that $(\tilde u, \tilde v)$ is a solution of (1.1) in $\tilde\Omega \times [0, T)$, and Lemma 4.2 shows that the boundary conditions are also fulfilled on the remaining (smooth part of the) boundary. Therefore, $(u, v)$ is a solution of (1.1) in the sense of Definition 2.1 and, due to the uniqueness statement in Lemma 2.3, thus has to coincide with the solution given by Proposition 1.1. Proof of Theorem 1.4. This is a direct consequence of (1.5) and Lemma 4.3. At last, we note that Theorem 1.3 and Theorem 1.4 entail Theorem 1.2. Proof of Theorem 1.2. As the minimal interior angle of a circular sector with central angle $\theta$ is $\min\{\theta, \frac{\pi}{2}\}$, Theorem 1.2 results as a combination of Theorem 1.3 and Theorem 1.4.
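For the reader's convenience, the following display records the bookkeeping behind the threshold $4\theta$ in the restriction argument of Lemma 4.3; it is a sketch in the notation of the proof above, and simply compares the masses carried by the sector and by the full disc.

```latex
% Restricting a radial mass \tilde m on the disc B_R(0) to a sector of angle \theta:
\[
  \int_\Omega u_0
  = \frac{\theta}{2\pi} \int_{B_R(0)} \tilde u_0
  = \frac{\theta}{2\pi}\, \tilde m
  \;>\; \frac{\theta}{2\pi} \cdot 8\pi
  \;=\; 4\theta
  \qquad \text{whenever } \tilde m > 8\pi .
\]
```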
Pointwise convergence of wavelet expansions In this note we announce that under general hypotheses, wavelet-type expansions (of functions in $L^p$, $1 \leq p \leq \infty$, in one or more dimensions) converge pointwise almost everywhere, and identify the Lebesgue set of a function as a set of full measure on which they converge. It is shown that unlike the Fourier summation kernel, wavelet summation kernels $P_j$ are bounded by radial decreasing $L^1$ convolution kernels. As a corollary it follows that best $L^2$ spline approximations on uniform meshes converge pointwise almost everywhere. Moreover, summation of wavelet expansions is partially insensitive to order of summation. We also give necessary and sufficient conditions for given rates of convergence of wavelet expansions in the sup norm. Such expansions have order of convergence $s$ if and only if the basic wavelet $\psi$ is in the homogeneous Sobolev space $H^{-s-d/2}_h$. We also present equivalent necessary and sufficient conditions on the scaling function. The above results hold in one and in multiple dimensions. Introduction In this note we present several convergence results for wavelet and multiresolution-type expansions. It is natural to ask where such expansions converge (and whether they converge almost everywhere) and what the rates of convergence are. The answer is that under rather weak hypotheses, one- and multidimensional wavelet expansions converge pointwise almost everywhere and, more specifically, on the Lebesgue set of the function being expanded. We will also give exact rates of convergence in the supremum norm, in terms of Sobolev properties of the basic wavelet or of the scaling function. Wavelets with local support in the time and frequency domains were defined by A. Grossman and J. Morlet [GM] in 1984 in order to analyze seismic data. The prototypes of wavelets, however, can be found in the work of A. Haar [Ha] and the modified Franklin systems of J.-O. Strömberg [St]. To identify the underlying structure and to generate interesting examples of orthonormal bases for $L^2(\mathbb{R})$, S. Mallat [Ma] and Y. Meyer [Me] developed the framework of multiresolution analysis. P. G. Lemarié and Y. Meyer [LM] constructed wavelets in $\mathcal{S}(\mathbb{R}^n)$, the space of rapidly decreasing smooth functions. J.-O. Strömberg [St] developed spline wavelets while looking for unconditional bases for Hardy spaces. G. Battle [Ba] and P. G. Lemarié [Le] developed these bases in the context of wavelet theory. Spline wavelets have exponential decay but only $C^N$ smoothness (for a finite $N$ depending on the order of the associated splines). I. Daubechies [Da1] constructed compactly supported wavelets with $C^N$ smoothness. The support of these wavelets increases with the smoothness; in general, to have $C^\infty$ smoothness, wavelets must have infinite support. Y. Meyer [Me] was among the first to study convergence results for wavelet expansions; he was followed by G. Walter [Wa1, Wa2]. Meyer proved that under some regularity assumptions on the wavelets, wavelet expansions of continuous functions converge everywhere. In contrast to these results, the pointwise convergence results presented here give almost everywhere convergence (and convergence on the Lebesgue set) for expansions of general $L^p$ ($1 \leq p \leq \infty$) functions. We assume rather mild bounds and no differentiability for the wavelet or the scaling function; our conditions allow inclusion of the families of so-called $r$-regular wavelets [Me], as well as some other wavelets. These results parallel L.
Carleson's [Ca] and R. A. Hunt's [Hu] theorems for Fourier series. One difference and slight advantage of wavelet expansions comes from the fact that almost everywhere convergence occurs on a simple set of full measure, namely the Lebesgue set, while almost everywhere convergence for Fourier series is established on a much more elaborate set of full measure. We also give necessary and sufficient conditions for given pointwise (sup-norm) rates of convergence of wavelet or multiresolution expansions, in terms of Sobolev conditions on the basic wavelet or the scaling function. It has been shown previously by Mallat [Ma] and Meyer [Me] that the Sobolev class of a function is determined by the $L^2$ rates of convergence of its wavelet expansion. Necessary and sufficient conditions for $L^2$ rates of convergence which are analogous to our sup-norm conditions have been obtained by de Boor, DeVore, and Ron [BDR], who have also studied sup-norm convergence in more general situations [BR]. Our results on convergence rates can be viewed as a sharpening, in the context of wavelets, of the well-known Strang-Fix [SF] conditions for convergence of multiscale expansions. The results given here hold for multiresolution expansions in multiple dimensions. The proofs, which will appear elsewhere [KKR], involve the kernels of the partial sums of such expansions and the above-mentioned result that such kernels are bounded by rescalings of $L^1$ radial functions. We should add that such bounds for wavelet expansions are nontrivial and arise from cancellations which occur in the sum representations of the partial sum kernels. Naive bounding of the summation kernels by using absolute values in their sum representations fails to yield the needed radial bounds for any class of wavelets. We remark that such bounds on the summation kernel can be obtained more easily by writing it using the orthonormal translates $\varphi(x - k)$ of the scaling function, instead of the wavelets $\psi_{jk}$. However, in proving results for convergence of wavelet expansions (Theorem 2.1(iii)), we wish to avoid making any assumptions about radial bounds for the scaling function. The pointwise and $L^p$ convergence results contained here were obtained independently by the first author and by the second two authors. Results on the Gibbs effect obtained by the first author will appear elsewhere. To start, we define a multiresolution analysis on $L^2(\mathbb{R}^d)$ [Ma, Me]. Definition 1.1. A multiresolution analysis is an increasing chain of closed subspaces $\cdots \subset V_{-1} \subset V_0 \subset V_1 \subset \cdots \subset L^2(\mathbb{R}^d)$ together with a scaling function $\varphi \in V_0$ whose integer translates $\{\varphi(x - k)\}_{k \in \mathbb{Z}^d}$ form an orthonormal basis of $V_0$, and the spaces $V_j$ satisfy the following additional properties: $\bigcup_j V_j$ is dense in $L^2(\mathbb{R}^d)$, $\bigcap_j V_j = \{0\}$, and $f(x) \in V_j$ if and only if $f(2x) \in V_{j+1}$. Associated with the $V_j$ spaces, we additionally define $W_j$ to be the orthogonal complement of $V_j$ in $V_{j+1}$, so that $L^2(\mathbb{R}^d) = \bigoplus_j W_j$. We define $P_j$ and $Q_j = P_{j+1} - P_j$, respectively, to be the orthogonal projections onto the spaces $V_j$ and $W_j$, with kernels $P_j(x, y)$ and $Q_j(x, y)$. Under the assumptions in the above definition and with some additional regularity, it can be proved [Me, Da2] that there then exists a set $\{\psi^\lambda\} \subset W_0$, where $\lambda$ belongs to an index set $\Lambda$ of cardinality $2^d - 1$, such that with $\psi^\lambda_{jk}(x) := 2^{jd/2} \psi^\lambda(2^j x - k)$, the family $\{\psi^\lambda_{jk}\}_{k \in \mathbb{Z}^d, \lambda}$ is an orthonormal basis of $W_j$, and thus $\{\psi^\lambda_{jk}\}_{j, k, \lambda}$ is an orthonormal basis of $L^2(\mathbb{R}^d)$. Definition 1.2. We define the following related expansions: (a) The sequence of projections $\{P_j f(x)\}_j$ will be called the multiresolution expansion of $f$. (b) The scaling expansion (3) of $f$, where the coefficients $a_{jk}$ and $b_k$ are the $L^2$ expansion coefficients of $f$. (c) The wavelet expansion (4) of $f$, where the coefficients $a_{jk}$ are the $L^2$ expansion coefficients of $f$. We remark for part (a) of the above definition that it can be shown that the projections $P_j$ (defined by their integral kernels) extend to bounded operators on $L^p$, $1 \leq p \leq \infty$.
The $L^2$ expansion coefficients in (b) and (c) (defined by integration against $f$) are defined and uniformly bounded for any $f \in L^p$, $1 \leq p \leq \infty$. Definition 1.4. A point $p$ is a Lebesgue point of $f$ if $$\lim_{\epsilon \to 0} \frac{1}{V(B_\epsilon)} \int_{B_\epsilon} |f(p + x) - f(p)| \, dx = 0,$$ where $B_\epsilon$ denotes the ball of radius $\epsilon$ about the origin, and $V$ denotes volume. Such points $p$ are essentially characterized by the fact that the average values of $f$ around $p$ converge to the value of $f$ at $p$, as averages are taken over smaller balls centered at $p$. Note that all continuity points are Lebesgue points, but the converse is not true. A function $f$ is partially continuous if there exists a set $A$ of vectors $a \in \mathbb{R}^d$ with positive measure such that $\lim_{\epsilon \to 0} f(x + \epsilon a) = f(x)$ for $a \in A$. Definition 1.5. The homogeneous Sobolev space of order $s$ is defined by $$H^s_h = \Big\{ f : \|f\|_{h,s}^2 := \int_{\mathbb{R}^d} |\xi|^{2s} |\hat f(\xi)|^2 \, d\xi < \infty \Big\}. \qquad (5)$$ The ordinary Sobolev space $H^s$ is defined as $H^s_h$ in (5), with replacement of $|\xi|^{2s}$ by $(|\xi|^{2s} + 1)$. Under the Fourier transform, the space $H^s_h$ is a dense subspace of the complete weighted $L^2$ space of all measurable $\hat f(\xi)$ with $\|f\|_{h,s} < \infty$. This dense subspace consists of those functions $\hat f(\xi)$ which are also in the regular unweighted $L^2$ space. Convergence rates of wavelet expansions are sensitive to both the smoothness of the wavelet and the smoothness of the function being expanded. For a wavelet $\psi$ of given smoothness, the sensitivity to the Sobolev space of the function being expanded disappears when the function's Sobolev parameter is sufficiently large. Definition 1.6. A family $\psi^\lambda$ of wavelets yields pointwise order of approximation (or pointwise order of convergence) $r$ in the space $H^s$ if for any function $f \in H^s$, the $j$th order wavelet approximation $P_j f$ satisfies $$\|f - P_j f\|_\infty = O(2^{-jr}) \qquad (6)$$ as $j$ tends to infinity. More generally, the wavelets $\psi^\lambda$ yield pointwise order of approximation (or convergence) $r$ if for any function $f$ which is sufficiently smooth (i.e., is in a Sobolev space of sufficiently large order $s$), (6) holds. We will give several necessary and sufficient conditions (in terms of their Fourier transforms and membership in homogeneous Sobolev spaces) on the basic wavelet $\psi$ or the scaling function $\varphi$ for given orders of convergence. In practice, sufficient smoothness for a function $f$ (in the sense of the above definition) will mean that $f$ is in $H^{s+d/2}$ or a higher Sobolev space. Pointwise convergence results With the background given, our main results can now be presented. Theorem 2.1. (i) Assume only that the scaling function $\varphi$ of a given multiresolution analysis is in RB, i.e., that it is bounded by an $L^1$ radial decreasing function. Then for any $f \in L^p(\mathbb{R}^d)$ ($1 \leq p \leq \infty$), its multiresolution expansion $\{P_j f\}$ converges to $f$ pointwise almost everywhere. (ii) If $\varphi, \psi^\lambda \in$ RB for all $\lambda$, then also both the scaling (3) (if $1 \leq p \leq \infty$) and wavelet (4) (if $1 \leq p < \infty$) expansions of any $f \in L^p$ converge to $f$ pointwise almost everywhere. If further $\psi^\lambda$ and $\varphi$ are (partially) continuous, then both expansions converge to $f$ on its Lebesgue set. (iii) If we assume only $\psi^\lambda(x) \ln(2 + |x|) \in$ RB for all $\lambda$, then for $f \in L^p$, its wavelet (for $1 \leq p < \infty$) and multiresolution (for $1 \leq p \leq \infty$) expansions converge to $f$ pointwise almost everywhere. If further $\psi^\lambda$ is (partially) continuous for all $\lambda$, then both the wavelet and multiresolution expansions converge to $f$ on its Lebesgue set. (iv) The last two statements hold for any order of summation in which the range of the values of $j$ for which the sum over $k$ and $\lambda$ is partially complete always remains bounded. In statement (iv) above, the summation over $k$ and $\lambda$ is partially complete for a fixed $j$ if it contains some terms, but not all, with the given value of $j$. By the range of values for which the sum is partially complete we mean the difference of the largest and smallest values of $j$ for which the sum is partially complete. Statement (iv) requires that this range always be smaller than some constant $M$.
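As a concrete illustration of Theorem 2.1(i) in its simplest instance, the sketch below takes the Haar multiresolution analysis on $\mathbb{R}$ (scaling function $\varphi = \chi_{[0,1)}$, which is certainly in RB) and evaluates $P_j f$ at a fixed point numerically; for Haar, $P_j f(x)$ is just the average of $f$ over the dyadic interval of length $2^{-j}$ containing $x$. The test function and the evaluation point are arbitrary choices for the demonstration.

```python
import numpy as np

def haar_projection_at(f, x: float, j: int, n_quad: int = 10_000) -> float:
    """P_j f(x) for the Haar MRA: the mean of f over the dyadic
    interval [k 2^-j, (k+1) 2^-j) containing x."""
    k = np.floor(x * 2**j)
    a, b = k / 2**j, (k + 1) / 2**j
    t = np.linspace(a, b, n_quad)
    return float(np.trapz(f(t), t) * 2**j)   # average = 2^j * integral

f = lambda t: np.sign(np.sin(3 * t)) + t**2  # an L^p function, discontinuous
x0 = 0.3                                     # a Lebesgue point of f
for j in (2, 4, 8, 12):
    print(j, haar_projection_at(f, x0, j))   # values approach f(x0)
print("f(x0) =", f(np.array([x0]))[0])
```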
The above result on convergence of multiresolution expansions applies to spline expansions as well. Given a uniform grid $K$ in $\mathbb{R}$, one might ask whether, given a function $f \in L^2(\mathbb{R}^d)$, the best $L^2$ approximations $P_j f$ of $f$ (by splines of a fixed polynomial order $k$) converge to $f$ pointwise as the grid size goes to $0$. The answer to this is affirmative. Corollary 2.2. For $f \in L^p(\mathbb{R})$ ($1 \leq p \leq \infty$), the order $k$ best $L^2$ spline approximations $P_j f$ of $f$ converge to $f$ pointwise almost everywhere, and more specifically on the Lebesgue set of $f$, as the uniform mesh size goes to $0$. The proof follows from the fact that best spline approximations are partial sums of multiresolution expansions, with some radially bounded (RB) scaling function $\varphi$. This result also extends to multidimensional splines. Technically, the best $L^2$ approximation of $f \in L^p$ only makes sense when $p = 2$, but it can be defined for functions in $L^p$ by continuous extension of the projections $P_j$ from $L^2$ to $L^p$. The following proposition is a consequence of the proof of Theorem 2.1. It has been proved before under somewhat stronger hypotheses, yielding stronger conclusions, in [Me]. Proposition 2.3. Under the hypotheses of case (i), case (ii), or case (iii) of Theorem 2.1, $L^p$ convergence of the expansions also follows for $1 \leq p < \infty$. Thus for wavelet series, and more generally for one- and multidimensional multiresolution expansions, essentially all hoped-for convergence properties hold, regardless of rates of convergence. The basis for Theorem 2.1 is the bound on the kernel of the projection $P_j$ onto the scaling space $V_j$. It can be shown that under any of the hypotheses in Theorem 2.1, the kernel $P_m(x, y)$ has the form $$P_m(x, y) = \sum_k \varphi_{0k}(x)\, \varphi_{0k}(y) + \sum_{j=0}^{m-1} \sum_{k, \lambda} \psi^\lambda_{jk}(x)\, \psi^\lambda_{jk}(y)$$ for $x, y \in \mathbb{R}^d$, with convergence of both sums on the right occurring pointwise, uniformly on subsets a positive distance away from the diagonal $D = \{(x, y) : x = y\}$. The kernel converges to a delta distribution $\delta(x - y)$ in the following sense: Theorem 2.4. Under the assumption that $\varphi \in$ RB or that $\psi^\lambda(x) \ln(2 + |x|) \in$ RB for all $\lambda$, the kernels $P_j(x, y)$ of the projections onto $V_j$ satisfy the convolution bound $$|P_j(x, y)| \leq 2^{jd} H(2^j |x - y|), \qquad (7)$$ where $H(|\cdot|) \in$ RB, i.e., $H(|\cdot|)$ is a radial decreasing $L^1$ function. In one dimension, precise bounds can be obtained for kernels of specific wavelets. Two examples are illustrated in the result below. (a) If $\varphi$ has exponential decay, i.e., $|\varphi(x)| \leq C e^{-a|x|}$ for some positive $a$, then the bound $H$ in (7) can itself be taken to decay exponentially. As a corollary, if a scaling function $\varphi$ has rapid decay (faster than any polynomial), then $P_j(x, y)$ is bounded by a scaled convolution kernel (of the form on the right side of (7)) which has rapid decay. Spline wavelets (see [St, Ba, Le]) satisfy the conditions in part (a), and the wavelets constructed by P. G. Lemarié and Y. Meyer [LM] are an example of wavelets of rapid decay. Rates of convergence We will now give necessary and sufficient conditions on the basic wavelet $\psi^\lambda$ and on the scaling function $\varphi$ for given supremum-norm rates of convergence of wavelet expansions. The conditions on $\psi^\lambda$ (here given in terms of membership in homogeneous Sobolev spaces) can be translated into differentiability and then moment conditions on $\psi^\lambda$ (see the introduction).
Theorem 3.1. Given a multiresolution analysis with either (i) a scaling function $\varphi \in$ RB, (ii) basic wavelets satisfying $\psi^\lambda \ln(2 + |x|) \in$ RB for each $\lambda$, or (iii) a kernel for the basic projection $P$ satisfying $|P(x, y)| \leq H(|x - y|)$ with $H \in$ RB, the following conditions ((a) to (e$'$)) are equivalent: (a) The multiresolution expansion (see Definition 1.2) yields pointwise order of approximation $s$. (a$'$) The multiresolution expansion yields pointwise order of approximation $s$ in every Sobolev space $H^r$ for $r \geq s + d/2$. (b) The projection $I - P_j : H^{s+d/2}_h \to L^\infty$ is a bounded operator, where $I$ is the identity and $d$ denotes dimension. The above conditions on the scaling function can also be given in modified form in the case where the scaling function $\varphi$ has nonorthonormal integer translates. Necessary and sufficient conditions for convergence rates of spline and other nonorthonormal expansions can then be obtained directly from the same arguments as above.
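To see how a pointwise order of approximation like (6) can be read off numerically, the following sketch estimates $r$ for the Haar case from sup-norm errors at successive dyadic scales: if $\|f - P_j f\|_\infty \approx C\,2^{-jr}$, then $\log_2$ of the error ratio between consecutive $j$ approximates $r$. For Haar (one vanishing moment) and a smooth $f$, one expects order $r = 1$; the function choice here is purely illustrative.

```python
import numpy as np

def haar_error_sup(f, j: int, n: int = 2**15) -> float:
    """Approximate ||f - P_j f||_inf on [0,1) for the Haar MRA,
    using midpoint samples and per-cell means as P_j f."""
    t = (np.arange(n) + 0.5) / n
    cells = np.floor(t * 2**j).astype(int)
    vals = f(t)
    means = np.bincount(cells, weights=vals) / np.bincount(cells)
    return float(np.max(np.abs(vals - means[cells])))

f = lambda t: np.sin(2 * np.pi * t)          # smooth test function
errs = [haar_error_sup(f, j) for j in range(3, 10)]
orders = [np.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]
print(orders)   # entries approach 1.0, the Haar approximation order
```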
Enhanced cancer therapy through synergetic photodynamic/immune checkpoint blockade mediated by a liposomal conjugate comprised of porphyrin and IDO inhibitor Cancer metastasis is still a hurdle to good prognosis and quality of life for breast cancer patients. Treatment strategies that can inhibit metastatic cancer while treating primary cancer are needed to improve the therapeutic effect of breast cancer. Methods: In this study, a dual-functional drug conjugate comprised of protoporphyrin IX and NLG919, a potent indoleamine-2,3-dioxygenase (IDO) inhibitor, is designed to combine photodynamic therapy and immune checkpoint blockade to achieve inhibition of both the primary tumor and distant metastases. Liposomal delivery is applied to improve the biocompatibility and tumor accumulation of the drug conjugate (PpIX-NLG@Lipo). A series of in vitro and in vivo experiments were carried out to examine the PDT effect and IDO inhibition activity of PpIX-NLG@Lipo, and subsequently to evaluate its anti-tumor capability in bilateral 4T1 tumor-bearing mice. Results: The in vitro and in vivo experiments demonstrated that PpIX-NLG@Lipo possesses a strong ability to generate ROS and damage cancer cells directly through PDT. Meanwhile, PpIX-NLG@Lipo can induce immunogenic cell death to elicit the host immune system. Furthermore, PpIX-NLG@Lipo interferes with the activity of IDO, which can amplify PDT-induced immune responses, leading to an increasing number of CD8+ T lymphocytes infiltrating the tumor site, finally achieving inhibition of both primary and distant tumors. Conclusion: This work presents a novel conjugate approach to synergize photodynamic therapy and IDO blockade for enhanced cancer therapy through simultaneous inhibition of both primary and distant metastatic tumors. Introduction Breast cancer is the most common cancer among women, and cancer metastases cause 90% of deaths among breast cancer patients worldwide [1]. The metastases are usually undetectable and remain latent for many years following primary tumor removal, which may lead to a poor prognosis and a low five-year survival rate for breast cancer patients [2]. Therefore, treatment strategies that can inhibit metastatic cancer while treating primary breast cancer are needed to improve the prognosis and quality of life of breast cancer patients. Photodynamic therapy (PDT), which utilizes photosensitizers, oxygen and light of a specific wavelength to generate cytotoxic reactive oxygen species (ROS), has attracted substantial research interest as an emerging treatment strategy for cancer therapy over the past decade [3,4]. With the significant advantages of spatiotemporal controllability, minimal invasiveness and irreversible destruction, PDT has been approved for clinical application by the Food and Drug Administration of the United States for the treatment of various solid tumors [5]. Very recently, many studies indicated that PDT could not only damage cancer cells directly to inhibit the primary tumor, but also control distant metastases by initiating antitumor immune responses [6][7][8][9]. It has been verified that PDT can cause immunogenic cell death (ICD) while damaging cancer cells, releasing tumor-associated antigens that stimulate the host immune system and subsequently lead to the proliferation and activation of CD8+ T lymphocytes (CD8+ T cells) [10][11][12][13].
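The ROS output that drives the direct PDT damage described above is typically quantified by following the photobleaching of a probe such as DPBF, whose absorbance decays roughly first-order in irradiation time (the Methods below use DPBF absorbance at 416 nm). The sketch fits such a decay to extract an apparent rate constant; the absorbance series is invented for illustration and is not data from this study.

```python
import numpy as np

def ros_rate_constant(t_s, absorbance):
    """Apparent first-order rate constant k (1/s) from A(t) = A0 * exp(-k t),
    obtained by a log-linear least-squares fit."""
    k, _ = np.polyfit(np.asarray(t_s, float),
                      np.log(np.asarray(absorbance, float)), 1)
    return -k

# Hypothetical DPBF absorbance at 416 nm during 630 nm irradiation:
t = [0, 30, 60, 90, 120]                 # seconds
A = [1.00, 0.74, 0.55, 0.41, 0.30]       # a.u., roughly exp(-0.01 t)
print(f"k = {ros_rate_constant(t, A):.4f} 1/s")  # ~0.01; larger k = more ROS
```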
Nevertheless, the immune response induced by PDT can be severely impaired by the immunosuppressive tumor microenvironment, which forms during tumor development to evade host immune surveillance [14-16]. Recently, checkpoint blockade inhibitors, modulators of the immunosuppressive tumor microenvironment, have been applied to amplify the PDT-mediated immune response and improve the antitumor outcome of PDT [6-8,17-19]. Checkpoint blockade is a promising cancer immunotherapy strategy that has attracted tremendous interest in recent years [20-23]. Among the immune checkpoints, indoleamine 2,3-dioxygenase (IDO) is distinctive in that its inhibitors are small-molecule drugs [24]. IDO, highly expressed in many types of solid tumors, is an immunoregulatory enzyme that catalyzes the oxidative metabolism of tryptophan (Trp) to kynurenine (Kyn) [25]. The consequent depletion of tryptophan and accumulation of kynurenine in the tumor microenvironment blunt the proliferation of T cells and promote the generation and activation of regulatory T cells, thereby helping to form the immunosuppressive microenvironment at tumor sites [26]. Therefore, IDO inhibitors, which can relieve the immunosuppressive tumor microenvironment and elicit a host immune response, show promise for checkpoint blockade [27]. Among IDO inhibitors, NLG919 is a novel candidate drug with potential immunomodulating and antineoplastic activities, with an inhibition constant (Ki) of 7 nM and a half-maximal effective concentration (EC50) of 75 nM [28]. Taken together, the synergistic application of PDT and IDO blockade may be a better treatment strategy to improve inhibition of both primary and distant tumors [6]. Moreover, keeping the ratio and pharmacokinetic properties of the photosensitizer and the IDO inhibitor consistent may contribute to a preferable synergistic antitumor efficacy. Thus, to achieve the dual function of PDT and IDO blockade, the drug conjugate PpIX-NLG was synthesized by linking the IDO inhibitor NLG919 to the photosensitizer PpIX via an ester bond.

Synthesis of PpIX-NLG
To a solution of PpIX (50.0 mg, 0.0888 mmol) and NLG919 (25.1 mg, 0.0888 mmol) in dry DMF (5 mL), EDCI (25.5 mg, 0.1332 mmol) and DMAP (1.6 mg, 0.0133 mmol) were added, and the mixture was stirred for 5 h at 60 °C. Subsequently, the mixture was filtered and the filtrate was collected, concentrated, and purified by silica gel column chromatography using petroleum ether/ethyl acetate (4:1) as the eluent to afford a red solid (48.3 …).

Preparation and characterization of liposomes
PpIX-NLG-loaded liposomes (PpIX-NLG@Lipo) were prepared by the thin-film dispersion method. Briefly, DOPC, cholesterol, and PpIX-NLG at a molar ratio of 8.5:3.5:1 were dissolved in tetrahydrofuran (THF), and the THF was then evaporated until a thin lipid film formed. Subsequently, the film was hydrated in phosphate-buffered saline (PBS) at 40 °C and sonicated in an ultrasonic cleaner for 10 min to obtain nanoscale PpIX-NLG-loaded liposomes (PpIX-NLG@Lipo). NLG919-loaded liposomes (NLG@Lipo) and PpIX-loaded liposomes (PpIX@Lipo) were prepared by the same method.

ROS generation detection
The ROS generated from the free drug were detected using DPBF as a ROS indicator.
Briefly, DMSO solutions of PpIX or PpIX-NLG (5 μM each) containing 62.5 μM DPBF were irradiated with LED light (630 nm, 20 mW/cm²) for predetermined times. After irradiation, the absorbance of DPBF at 416 nm was measured with a UV-vis spectrophotometer to determine ROS generation. ROS generation by PpIX-NLG@Lipo in aqueous solution was detected using ABDA as a water-soluble ROS indicator. First, ABDA (final concentration 80 μM) was mixed with an equal volume of PpIX@Lipo or PpIX-NLG@Lipo at a final concentration of 5 μM. The mixture was then exposed to laser irradiation (630 nm, 50 mW/cm²) for predetermined times, and the absorbance spectrum was recorded with a UV-vis spectrophotometer after irradiation. The decrease in ABDA absorbance at 401 nm was used to evaluate ROS production. The indicator SOSG was used to detect the singlet oxygen generated by PpIX-NLG@Lipo under light irradiation. Briefly, 100 μg of SOSG was diluted in 165 μL of methanol to prepare a 1 mM SOSG stock. Five microliters of SOSG stock were added to a 5 mL sample of a given concentration. The samples were then irradiated with LED light (630 nm, 20 mW/cm²) for various times, and the fluorescence change was recorded at an excitation wavelength of 504 nm and an emission wavelength of 525 nm.

In vitro PDT efficacy
An MTT assay was performed to evaluate the in vitro PDT efficacy and phototoxicity of PpIX-NLG@Lipo. MCF-7 cells (1×10⁴ cells per well) or 4T1 cells (3×10³ cells per well) were seeded in 96-well plates and incubated with PpIX@Lipo, NLG@Lipo, or PpIX-NLG@Lipo. After 24 h of incubation, cells were exposed to LED light irradiation (630 nm, 20 mW/cm²) for 10 min, or left unirradiated, and then incubated for another 24 h. Subsequently, MTT solution was added and incubated for a further 4 h, after which the culture medium was replaced with DMSO. The optical density (OD) of each well was measured at 490 nm with a microplate reader (ELX800, Bio-Tek, USA). The relative cell viability was calculated as viability = (OD sample/OD control) × 100%, with untreated cells as the control. A live/dead cell staining assay was also applied to visualize the PDT outcome of PpIX-NLG@Lipo under light irradiation in 4T1 cells. Briefly, 4T1 cells seeded in 12-well plates at a density of 5×10⁴ cells per well were incubated with NLG@Lipo, PpIX@Lipo, or PpIX-NLG@Lipo at concentrations of 0.625 μM and 1.25 μM for 24 h, followed by LED light irradiation (630 nm, 20 mW/cm²) for 10 min or no irradiation. After a further 24 h of incubation, cells were stained with fluorescein diacetate (FDA) and PI for 10 min, washed three times with PBS, and then observed and photographed with an inverted fluorescence microscope (IX73, Olympus, Japan). Cell apoptosis induced by PpIX-NLG@Lipo was measured with an Annexin V Apoptosis Detection Kit. Briefly, 4T1 cells (1×10⁵ cells per well) seeded in a 6-well plate were incubated with PpIX@Lipo, NLG@Lipo, or PpIX-NLG@Lipo at concentrations of 1.25 μM and 2.5 μM for 24 h and subsequently irradiated with LED light (630 nm, 20 mW/cm², 10 min) or left unirradiated. After another 4 h of incubation, cells were harvested, stained with Annexin V-FITC and PI according to the manufacturer's instructions, and finally analyzed by FCM.
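The viability calculation above is simple arithmetic, and a minimal sketch of how it might be applied to raw plate-reader data follows. This is an illustration only, not the authors' analysis code; the OD values and the blank-subtraction step are hypothetical.

```python
# Minimal sketch of the MTT viability calculation:
# viability (%) = (OD_sample / OD_control) * 100
# Hypothetical OD490 readings; a blank (medium-only) correction is assumed.

def viability_percent(od_samples, od_controls, od_blank=0.0):
    """Viability of treated wells relative to the mean of untreated controls."""
    mean_ctrl = sum(od - od_blank for od in od_controls) / len(od_controls)
    return [100.0 * (od - od_blank) / mean_ctrl for od in od_samples]

od_control = [0.92, 0.95, 0.90]   # untreated wells (hypothetical)
od_treated = [0.31, 0.28, 0.33]   # PpIX-NLG@Lipo + light (hypothetical)
print(viability_percent(od_treated, od_control, od_blank=0.05))
```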
CRT exposure and ATP secretion assay
In vitro CRT exposure induced by PpIX-NLG@Lipo was evaluated by CLSM and FCM. Briefly, 4T1 cells seeded on glass coverslips were incubated with PpIX@Lipo, NLG@Lipo, or PpIX-NLG@Lipo (1.25 μM) for 24 h. Cells were then irradiated with LED light (630 nm) at 20 mW/cm² for 10 min. After a further 2 h of incubation, cells were washed three times with PBS, incubated with an Alexa Fluor 488-CRT antibody for 2 h, stained with DAPI, and observed under CLSM using 405 nm and 488 nm lasers to visualize the nuclei and CRT expression on the cell membrane, respectively. For FCM analysis, 4T1 cells seeded in 6-well plates (1×10⁵ cells per well) were incubated with PpIX@Lipo, NLG@Lipo, or PpIX-NLG@Lipo (1.25 μM) for 24 h. Afterward, cells were irradiated with LED light (630 nm) at 20 mW/cm² for 10 min. After a further 2 h of incubation, the treated cells were harvested, washed twice with ice-cold PBS, incubated with the Alexa Fluor 488-CRT antibody for 2 h, and finally analyzed by flow cytometry (Guava EasyCyte 6-2L, Merck Millipore). For the ATP secretion assay, 4T1 cells (1×10⁵ per well) were seeded in 6-well plates and incubated with PpIX@Lipo, NLG@Lipo, or PpIX-NLG@Lipo (1.25 μM) for 24 h, followed by LED light irradiation (630 nm, 20 mW/cm², 10 min) or no irradiation. After a further 24 h of incubation, the supernatant of each well was carefully collected, dying tumor cells were removed by centrifugation, and the supernatants were used to quantify extracellular ATP secretion with a luciferin-based ATP assay kit.

Cell-based IDO enzymatic activity
A cell-based IDO enzymatic activity assay was performed to measure the IDO-inhibitory effect of NLG@Lipo and PpIX-NLG@Lipo. HeLa cells were seeded in a 96-well plate at a density of 5×10³ cells per well, and recombinant human IFN-γ (final concentration 50 ng/mL) was added to each well to stimulate IDO expression [27]. Meanwhile, various concentrations of NLG@Lipo or PpIX-NLG@Lipo, ranging from 0.1 μM to 20 μM, were added to the cells. After 48 h of incubation, 150 μL of supernatant per well was transferred to a new 96-well plate. For the colorimetric assay, the 150 μL of supernatant was mixed with 75 μL of 30% trichloroacetic acid, and the mixture was incubated at 50 °C for 30 min. After centrifugation at 5000 rpm for 8 min, 80 μL of supernatant was transferred to a new 96-well plate and mixed with an equal volume of Ehrlich reagent (2% p-dimethylaminobenzaldehyde in glacial acetic acid). The final reaction product was measured at 490 nm with a microplate reader. For HPLC detection, the 150 μL of supernatant per well was analyzed directly by HPLC to quantify tryptophan and kynurenine. A HITACHI Chromaster HPLC system (HITACHI, Japan) and a Diamonsil C18 column (4.6 mm × 250 mm, 5 μm; DiKMA, Beijing, China) were used. Acetonitrile (A) and 15 mM sodium acetate solution containing 0.02% acetic acid (B) served as the mobile phase (A:B = 8:92, v/v) at a flow rate of 1.0 mL/min with the column temperature set at 25 °C. UV spectra were recorded from 200 nm to 800 nm, with 218 nm and 225 nm used for quantification of Trp and Kyn, respectively. An MTT assay was performed to evaluate HeLa cell viability during the IDO enzymatic activity assay.
Briefly, after the 150 μL of supernatant per well was transferred to a new 96-well plate as described above, 150 μL of fresh medium containing MTT was added to the cells and incubated for another 4 h. Subsequently, the culture medium was removed and replaced with DMSO. The OD was measured at 490 nm with a microplate reader, and the relative cell viability was calculated as viability = (OD sample/OD control) × 100%, with untreated cells as the control. Western blot analysis was employed to examine IDO expression during the IDO enzymatic activity assay. HeLa cells seeded in a 6-well plate were incubated with 50 ng/mL recombinant human IFN-γ and NLG@Lipo or PpIX-NLG@Lipo for 48 h. The cells were then harvested, lysed, and centrifuged to collect proteins. The protein samples were analyzed for IDO expression by western blotting, with tubulin as the internal control protein.

Bilateral tumor xenograft model and in vivo tumor imaging
A bilateral 4T1 mouse breast cancer model was established for the in vivo evaluations. Female BALB/c mice (5-6 weeks old) were subcutaneously injected with 1×10⁶ 4T1 cells in the right flank and 4×10⁵ 4T1 cells in the left flank. For the in vivo distribution experiment, after the tumor volume reached about 200 mm³, bilateral 4T1 tumor-bearing mice (n=3) were injected i.v. with PpIX-NLG@Lipo at a PpIX-NLG dose of 6 μmol kg⁻¹. Mice were anesthetized with 4% (w/v) chloral hydrate and imaged at 2, 8, and 24 h with a small-animal imaging system (NightOWL LB983, Berthold, Germany; excitation 630 nm, emission 680 nm). Twenty-four hours after injection, the mice were sacrificed and the heart, liver, spleen, lung, kidney, and tumors were removed. The excised organs were imaged with the same system under the same conditions.

In vivo pharmacokinetics study
Sprague-Dawley rats weighing 200 ± 20 g were randomly divided into two groups (n = 3 per group). Five hundred microliters of PpIX-NLG (dissolved in saline containing 3% DMSO and 3% Tween-80) or PpIX-NLG@Lipo was injected into the tail vein at a dose of 2.5 μmol/kg. At predetermined time points (10 min, 30 min, 1 h, 2 h, 4 h, 6 h, 8 h, 12 h, and 24 h), blood samples were collected from the tail vein into heparinized tubes. Blood was centrifuged at 3000 rpm for 5 min to obtain plasma, which was diluted with DMSO before the PpIX-NLG content was analyzed with a fluorescence spectrophotometer (Fluoromax-4, Horiba, Japan).

In vivo IDO enzyme activity
First, IDO expression in the tumor tissue of the bilateral 4T1 tumor-bearing mice was detected by immunohistochemistry (IHC). When the primary tumor reached about 100 mm³, bilateral 4T1 tumor-bearing mice (n=3) were sacrificed, and the tumors were dissected and sectioned for IHC staining with an IDO1 antibody to indicate IDO expression at the tumor site. The Kyn/Trp ratios in the plasma of 4T1 tumor-bearing mice treated with PpIX-NLG@Lipo, an indication of in vivo IDO enzyme activity, were examined by HPLC. After the primary tumor reached about 100 mm³, bilateral 4T1 tumor-bearing mice (n=4) were treated with saline, NLG@Lipo, PpIX@Lipo, or PpIX-NLG@Lipo (6 μmol NLG or PpIX per kg) by intravenous injection once a day for a total of three doses. One day after the last treatment, a plasma sample from each mouse was harvested using heparin as the anticoagulant. The plasma samples were then mixed with methanol for protein precipitation (plasma:methanol, 1:3, v/v) and centrifuged at 12,000 rpm for 20 min. The supernatant was evaporated in a sample concentrator, and the redissolved solution was collected for HPLC quantification of Kyn and Trp under the HPLC conditions described above.
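As a rough illustration of the readout used here, the sketch below computes Kyn/Trp ratios from paired plasma concentration measurements. The concentration values are hypothetical, as is the assumption that HPLC peak areas have already been converted to concentrations via calibration curves; this is not the authors' analysis pipeline.

```python
# Sketch: Kyn (nM) / Trp (uM) ratio as a proxy for in vivo IDO activity.
# Hypothetical per-mouse plasma concentrations, assumed already derived
# from HPLC peak areas via external calibration curves.

plasma = {
    "saline":        [(850.0, 60.0), (910.0, 58.0)],  # (Kyn nM, Trp uM)
    "NLG@Lipo":      [(390.0, 75.0), (420.0, 72.0)],
    "PpIX-NLG@Lipo": [(440.0, 69.0), (460.0, 70.0)],
}

for group, pairs in plasma.items():
    ratios = [kyn / trp for kyn, trp in pairs]
    mean = sum(ratios) / len(ratios)
    print(f"{group}: mean Kyn(nM)/Trp(uM) ratio = {mean:.2f}")
```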
In vivo antitumor effect and mechanism analysis
When the primary tumor reached about 80-100 mm³, the bilateral 4T1 tumor-bearing mice were randomly divided into seven groups (n=6): saline; saline with light irradiation; NLG@Lipo; PpIX@Lipo; PpIX@Lipo with light irradiation; PpIX-NLG@Lipo; and PpIX-NLG@Lipo with light irradiation. Liposomes were injected i.v. at a PpIX or NLG dose of 6 μmol kg⁻¹ every two days for a total of three injections. At 24 h post-injection, mice were anesthetized with 4% (w/v) chloral hydrate and the primary tumors were irradiated with a 630 nm laser (300 mW/cm²) for 10 min. The body weight and the primary and distant tumor sizes of each group were monitored every two days. Tumor volume was calculated as V = (tumor length) × (tumor width)²/2. After the last measurement on day 21, all mice were sacrificed, and the tumors were removed and photographed to show the therapeutic effect of each group directly. To evaluate the immune response to PpIX-NLG@Lipo in the bilateral 4T1 tumor-bearing mice, the CD8+ T cells infiltrating the tumor site were examined by IHC and FCM. The day after the last laser irradiation, the mice were sacrificed and the tumors were removed. For FCM analysis, tumors were mechanically dissociated, and the resulting suspension was passed through 200 μm and 70 μm filters to obtain a single-cell suspension. The cell suspension was stained with CD3-FITC and CD8-PE antibodies at 4 °C for 30 min and then analyzed by FCM to quantify CD8+ T cells at the tumor site. For IHC analysis, tumors were dissected and sectioned for staining with a CD8 antibody to indicate the infiltration of CD8+ T cells.

Safety evaluation
Blood biochemical analysis and histopathological observation of the vital organs (heart, liver, spleen, lung, kidney) were performed to assess the safety of PpIX-NLG@Lipo in vivo. Tumor-bearing mice were randomly divided into two groups (n=5) and injected intravenously with saline or PpIX-NLG@Lipo (PpIX-NLG dose of 6 μmol kg⁻¹). The day after injection, a plasma sample from each mouse was collected for blood biochemical analysis. Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were measured to evaluate liver function, while blood urea nitrogen (BUN) and creatinine (CREA) were examined to assess renal function. The measurements of ALT, AST, BUN, and CREA were processed by the Laboratory Animal Center of Sun Yat-sen University (Guangzhou, China). After the plasma samples were collected, the vital organs of each mouse were removed, fixed with 4% paraformaldehyde, embedded in paraffin, sectioned, and finally stained with H&E according to the manufacturer's instructions. The stained sections were observed and photographed with an inverted fluorescence microscope to determine histopathological changes.

Statistical analysis
Data are shown as mean ± S.D. All statistical analyses were performed with SPSS Statistics 13.0. One-way analysis of variance was used to determine the significance of differences, which were considered significant at *p < 0.05 and very significant at **p < 0.01 or ***p < 0.001.
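A minimal sketch of the two calculations above follows: the length × width²/2 tumor-volume formula and a one-way ANOVA across treatment groups. The caliper measurements are hypothetical, and SciPy's f_oneway is used here in place of SPSS.

```python
from scipy.stats import f_oneway

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """V = length * width^2 / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical day-21 caliper readings (length, width) per group, in mm.
groups = {
    "saline":                [(14, 12), (15, 11), (13, 12)],
    "PpIX@Lipo + light":     [(9, 7), (8, 8), (10, 7)],
    "PpIX-NLG@Lipo + light": [(6, 5), (7, 5), (6, 6)],
}
volumes = {g: [tumor_volume(l, w) for l, w in m] for g, m in groups.items()}
f_stat, p_value = f_oneway(*volumes.values())
print(volumes)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```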
Preparation and characterization of liposomes
To achieve the dual function of PDT and IDO blockade, the drug conjugate PpIX-NLG was synthesized by linking the IDO inhibitor NLG919 to the photosensitizer PpIX via an ester bond (Scheme S1). High-resolution mass spectrometry (HRMS, Figure S1) and FTIR (Figure S2) showed the exact mass and the characteristic functional groups of PpIX-NLG, respectively, indicating that PpIX-NLG was successfully synthesized. The successful synthesis was further confirmed by the UV-vis absorption spectra (Figure 1A), in which PpIX-NLG exhibited absorption at both 278 nm and 410 nm, the characteristic bands of NLG919 and PpIX, respectively. To overcome the low solubility of PpIX-NLG and improve its biocompatibility and tumor accumulation, a liposomal drug delivery system was applied. PpIX-NLG was encapsulated into liposomes (defined as PpIX-NLG@Lipo) by mixing DOPC, cholesterol, and PpIX-NLG at a specific molar ratio according to the thin-film dispersion method [29]. As shown in Figure 1B, PpIX-NLG@Lipo exhibited the same characteristic absorption peaks as free PpIX-NLG in UV-vis spectroscopy, confirming the successful loading of PpIX-NLG into the liposomes. In the fluorescence emission spectra (Figure 1C), PpIX-NLG@Lipo showed strong fluorescence in aqueous solution at 635 nm, similar to that of free PpIX-NLG in DMSO, indicating that the photosensitive properties of PpIX-NLG were not impaired by encapsulation into the liposomes. Transmission electron microscopy (TEM) showed that the as-prepared PpIX-NLG@Lipo had a typical lipid-layer structure and uniform sphere-like morphology with a mean diameter of 100 nm (Figure 1E). The encapsulation efficiency and loading efficiency of PpIX-NLG were calculated to be 87.6% ± 0.23% and 4.25% ± 0.13%, respectively. NLG919-loaded liposomes (noted as NLG@Lipo) and PpIX-loaded liposomes (defined as PpIX@Lipo) were prepared by the same method (Figure 1D). The hydrodynamic diameter of PpIX-NLG@Lipo measured by dynamic light scattering (DLS) was 98 nm, and those of NLG@Lipo and PpIX@Lipo were 100 nm and 102 nm, respectively (Figure 1F). The particle size of PpIX-NLG@Lipo in PBS containing 10% FBS was monitored to evaluate its stability under physiological conditions; as shown in Figure S3, the particle size changed little, indicating the good physical stability of PpIX-NLG@Lipo under physiological conditions.
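The encapsulation and loading efficiencies quoted above are consistent with the conventional definitions, and the sketch below shows that arithmetic. The masses are hypothetical, and the formulas (encapsulated drug/total drug fed, and encapsulated drug/total formulation mass) are the standard ones, which the paper does not state explicitly.

```python
# Sketch of the standard encapsulation-efficiency (EE%) and drug-loading (DL%)
# calculations, with hypothetical masses in mg.

drug_fed = 1.00            # PpIX-NLG added to the formulation
drug_encapsulated = 0.876  # PpIX-NLG recovered in the liposome fraction
lipid_mass = 19.7          # DOPC + cholesterol

ee = 100.0 * drug_encapsulated / drug_fed
dl = 100.0 * drug_encapsulated / (drug_encapsulated + lipid_mass)
print(f"EE = {ee:.1f}%, DL = {dl:.2f}%")  # ~87.6% and ~4.3%, matching the text
```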
ROS generation detection
The ability of a photosensitizer to generate ROS is crucial for the efficiency of its PDT treatment. We therefore first measured the ROS-generating capability of the free drug conjugate PpIX-NLG using DPBF as a ROS indicator, which captures and reacts with ROS, causing a decrease in its characteristic absorption peak at 416 nm [30]. As shown in Figure S4, the absorbance of the PpIX and PpIX-NLG solutions decreased sharply within 5 min of LED light irradiation (630 nm, 20 mW/cm²), whereas the DPBF-only solution showed no obvious change in absorbance, demonstrating that both PpIX and PpIX-NLG generate ROS strongly under light irradiation. Subsequently, the ROS-generating ability of PpIX-NLG@Lipo was detected using ABDA as a water-soluble ROS probe; the decrease in absorbance at 401 nm, the characteristic absorption peak of ABDA, was measured to indicate ROS generation [31]. In Figure 2A and Figure S5, PpIX-NLG@Lipo and PpIX@Lipo both showed an obvious decrease in absorbance under laser irradiation (630 nm, 50 mW/cm²), indicating that both liposomes generate ROS under irradiation in aqueous solution. Moreover, SOSG was employed as a singlet oxygen indicator to detect the singlet oxygen generated by PpIX-NLG@Lipo in aqueous solution; as shown in Figure 2B, the result was consistent with the ABDA measurement. Intracellular ROS generation by PpIX-NLG@Lipo under light irradiation was then observed using DCFH-DA, a non-fluorescent ROS indicator that is hydrolyzed by intracellular esterases and further oxidized to the strongly fluorescent product DCF in the presence of ROS [30]. After 10 min of LED light irradiation, MCF-7 cells (Figure 2C) and 4T1 cells (Figure 2D) treated with PpIX@Lipo or PpIX-NLG@Lipo displayed strong DCF fluorescence, while cells treated with NLG@Lipo or PBS showed little fluorescence. ROS generation induced by PpIX-NLG@Lipo was also visualized by CLSM: as shown in Figure 2E, obvious green DCF fluorescence appeared in the PpIX-NLG@Lipo group after 10 min of light irradiation, whereas cells without irradiation showed negligible DCF fluorescence. Together, these results demonstrate that PpIX-NLG@Lipo generates ROS efficiently both extracellularly and intracellularly.
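The DPBF and ABDA readouts above reduce to tracking relative absorbance (A/A₀) over irradiation time; a minimal sketch of that calculation follows. The absorbance time series is hypothetical.

```python
# Sketch: relative ABDA absorbance (A/A0 at 401 nm) versus irradiation time,
# as a simple readout of ROS generation. Values are hypothetical.

times_min = [0, 1, 2, 3, 4, 5]
abs_401 = [1.00, 0.84, 0.70, 0.59, 0.50, 0.43]  # PpIX-NLG@Lipo + 630 nm laser

a0 = abs_401[0]
for t, a in zip(times_min, abs_401):
    print(f"t = {t} min: A/A0 = {a / a0:.2f} "
          f"({100 * (1 - a / a0):.0f}% ABDA consumed)")
```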
In vitro PDT efficacy and apoptosis analysis
After confirming the extracellular and intracellular ROS generation of PpIX-NLG@Lipo, its in vitro PDT efficacy and phototoxicity were tested by MTT assay and live/dead cell staining. As shown in Figure 3A and Figure 3B, PpIX-NLG@Lipo and PpIX@Lipo showed significant cytotoxicity under LED light irradiation (630 nm, 20 mW/cm², 10 min) in both MCF-7 and 4T1 cells, with slightly enhanced cytotoxicity for PpIX-NLG@Lipo compared with PpIX@Lipo. This may be attributed to reduced PpIX-NLG aggregation in the hydrophobic layer or to solubility changes after conjugation with NLG919. Moreover, almost no cytotoxicity was observed for PpIX-NLG@Lipo or PpIX@Lipo without LED light irradiation, which demonstrates the spatiotemporal controllability of PDT and lays a solid foundation for minimizing systemic toxicity in vivo. Importantly, with or without irradiation, cell viability was not affected by NLG@Lipo at concentrations below 20 μM. The cytotoxicity of the liposomes was also confirmed visually by the live/dead staining assay (Figure 3C). Almost all cells remained live (green fluorescence) in the PBS control and NLG@Lipo groups under irradiation. The PpIX@Lipo- and PpIX-NLG@Lipo-treated cells without light irradiation likewise showed almost entirely green fluorescence, whereas obvious red fluorescence (dead cells) appeared after irradiation. To further determine the mechanism of the strong PDT efficacy induced by PpIX-NLG@Lipo, Annexin V-PI analysis was applied. As illustrated in Figure 3D, significant numbers of cells underwent apoptosis/necrosis 4 h after PDT induced by PpIX@Lipo or PpIX-NLG@Lipo, whereas most cells remained healthy after treatment with PBS or NLG@Lipo plus irradiation. The difference between the Annexin V/PI and live/dead results may be attributed to the different incubation times after LED light irradiation in the two experiments. In addition, cells incubated with PpIX@Lipo or PpIX-NLG@Lipo without light irradiation showed a very high survival rate (>95%), consistent with the live/dead cytotoxicity assay.

CRT exposure and ATP secretion assay
PDT has recently been shown to cause immunogenic cell death (ICD) by inducing apoptosis and necrosis. During ICD, calreticulin (CRT), a chaperone protein abundant in the endoplasmic reticulum, is transported to the cell surface [32], where it serves as an "eat me" signal that promotes presentation of tumor-associated antigens to dendritic cells (DCs). Meanwhile, ATP acts as a "find me" signal that regulates DC-mediated tumor antigen cross-presentation and T-cell polarization, ultimately activating the antitumor immune response [33]. CRT exposure and ATP secretion are therefore regarded as distinct biochemical hallmarks of ICD. Accordingly, after confirming that PpIX-NLG@Lipo induces apoptosis and necrosis under light irradiation, CRT exposure and ATP secretion were evaluated. To determine CRT expression, irradiated 4T1 cells were stained with the Alexa Fluor 488-CRT antibody and DAPI and observed by CLSM (Figure 4A). Both PpIX@Lipo and PpIX-NLG@Lipo induced CRT exposure under irradiation, whereas PBS and NLG@Lipo showed little ability to do so; the FCM results (Figure 4B) were consistent with the CLSM observations. As illustrated in Figure 4C, the supernatant ATP content was significantly increased after light irradiation in the PpIX-NLG@Lipo group compared with the dark-treated group and the PBS control. Taken together, PpIX-NLG@Lipo induces CRT exposure and ATP secretion under light irradiation, indicating that it can act as an ICD inducer to activate the host immune system and laying a solid foundation for subsequent immune stimulation.

In vitro and in vivo IDO enzyme activity
Small-molecule IDO inhibitors, with their ability to reverse the immunosuppressive tumor microenvironment and inhibit tumor growth, are being widely developed by pharmaceutical companies worldwide [34]. We expected that introducing an IDO inhibitor would amplify the PDT-mediated immune response, achieving inhibition of both primary and distant tumors. First, to evaluate the inhibition of IDO enzyme activity by PpIX-NLG@Lipo, IFN-γ was applied to promote IDO expression in HeLa cells [35]. Western blot analysis (Figure 5E) showed that IDO was expressed at significantly higher levels in HeLa cells after IFN-γ pretreatment. Moreover, the HPLC chromatograms of the PBS and IFN-γ-treated groups (Figure 5A, Figure 5B) showed that Trp in the supernatant was converted completely into Kyn under IDO catalysis. Thus, because IFN-γ stimulates higher IDO expression, inhibition of IDO activity can be characterized by inhibition of the Trp-to-Kyn conversion under IFN-γ stimulation.
As shown in Figure 5A, when IFN-γ was co-incubated with PpIX-NLG@Lipo, Trp was no longer fully oxidized to Kyn by IDO; moreover, as the concentration of PpIX-NLG@Lipo increased from 2.5 μM to 20 μM, Kyn decreased while Trp increased, indicating that PpIX-NLG weakens IDO enzyme activity and thereby inhibits the conversion of Trp to Kyn. The IDO-inhibitory capacity of NLG@Lipo was examined in parallel: as shown in Figure 5B and Figure 5C, NLG@Lipo showed excellent inhibition of Kyn production at low concentrations (0.25 μM to 2.5 μM), with a half-maximal effective concentration (EC50) of 0.7 μM. By contrast, PpIX-NLG@Lipo inhibited 50% of IDO activity at an effective concentration of 6 μM, about 8.5-fold higher than that of NLG@Lipo, which is most likely explained by a reduced affinity between PpIX-NLG and IDO after conjugation of PpIX to NLG919. Western blot analysis showed that IDO expression in HeLa cells did not change significantly after co-incubation with drug and IFN-γ, indicating that NLG919 and PpIX-NLG decrease Kyn not by suppressing IDO expression but by inhibiting IDO activity. In addition, the viability of HeLa cells co-treated with the prepared liposomes and IFN-γ was determined at the same time (Figure 5D). Cells in both treatment groups remained viable throughout the experiment, indicating that IFN-γ showed little cytotoxicity and that the IDO activity results were not confounded by cell death. Together, these results demonstrate that PpIX-NLG@Lipo and NLG@Lipo inhibit IDO enzyme activity without affecting IDO expression or cell viability. In this study, high IDO expression in the tumor microenvironment was the precondition for IDO inhibitors to be effective. Thus, immunohistochemistry (IHC) was applied to detect IDO expression at the tumor sites of the bilateral 4T1 mouse breast cancer model. After the model was established, some mice were sacrificed and their tumors removed, and IHC staining of the tumor sections was performed to determine IDO expression in the tumor tissue. As depicted in Figure 5G, brown staining was seen in both the left and right tumors, revealing positive IDO expression in the established bilateral 4T1 mouse breast cancer model [36]. The potency of PpIX-NLG@Lipo in inhibiting IDO activity in vivo was evaluated by examining the Kyn/Trp ratios in the plasma of 4T1 tumor-bearing mice after administration. As shown in Figure 5H, compared with the saline and PpIX@Lipo groups, the Kyn(nM)/Trp(μM) ratio was reduced after treatment with NLG@Lipo or PpIX-NLG@Lipo, in accordance with the in vitro IDO inhibition results. Besides, compared with the PpIX-NLG@Lipo group, a small additional reduction was observed in the NLG@Lipo group; that is, the difference between NLG@Lipo and PpIX-NLG@Lipo in inhibiting IDO activity appeared smaller in vivo than in vitro, perhaps because PpIX-NLG is partly converted back to NLG919 by esterases in the in vivo environment.
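The EC50 values quoted above (0.7 μM for NLG@Lipo and 6 μM for PpIX-NLG@Lipo) come from dose-response fitting; a minimal sketch of one common approach, fitting a Hill (four-parameter logistic) curve with SciPy, is shown below. The response data are hypothetical, and the four-parameter logistic form is an assumption, not the authors' stated method.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Hypothetical % inhibition of Kyn production vs. NLG@Lipo concentration (uM).
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
inhib = np.array([8.0, 22.0, 41.0, 58.0, 80.0, 90.0, 95.0, 97.0])

params, _ = curve_fit(
    hill, conc, inhib,
    p0=[0.0, 100.0, 1.0, 1.0],
    bounds=([-10.0, 50.0, 0.01, 0.1], [20.0, 120.0, 50.0, 5.0]),
)
print(f"fitted EC50 = {params[2]:.2f} uM, Hill slope = {params[3]:.2f}")
```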
In vivo tumor imaging and pharmacokinetics study
Specific accumulation of PpIX-NLG@Lipo at the tumor site is fundamental to its treatment efficiency in vivo, and the fluorescence of PpIX-NLG@Lipo allows the drug distribution to be tracked noninvasively. We therefore examined the distribution of PpIX-NLG@Lipo in the tumor tissues of bilateral 4T1 tumor-bearing mice after intravenous injection using an animal imaging system. As shown in Figure 6A, fluorescence was observed in both the left and right tumor sites 2 h post-injection and gradually intensified until 8 h. The side-lying view shows more clearly that the fluorescence intensity in the tumor region peaked at 8 h (Figure 6B); furthermore, the fluorescence accumulated in the tumor tissue remained obviously strong even 24 h post-injection (Figure 6C, Figure 6D), indicating that PpIX-NLG@Lipo accumulates strongly at the tumor site, which lays a solid foundation for its in vivo application. The in vivo pharmacokinetics were determined in rats after tail-vein injection of PpIX-NLG@Lipo or free PpIX-NLG (dissolved in saline containing 3% DMSO and 3% Tween-80). As illustrated in Figure 6E, PpIX-NLG@Lipo and free PpIX-NLG showed similar blood circulation times. This is likely because the liposomes were prepared only with DOPC and cholesterol, without a PEG shell; in addition, Tween-80 was used to solubilize the free PpIX-NLG, probably producing PpIX-NLG micelles, which may explain the similar pharmacokinetic profiles of PpIX-NLG@Lipo and free PpIX-NLG.

In vivo antitumor effect and mechanism analysis
Given the promising tumor accumulation and in vivo IDO-inhibitory activity of PpIX-NLG@Lipo, its abscopal antitumor activity was evaluated in the bilateral 4T1 murine breast cancer model to assess the synergistic therapeutic effect of PDT and IDO blockade. In this bilateral syngeneic mouse model, 4T1 cells were inoculated s.c. into both the left and right flanks of BALB/c mice. The right-side tumor, designated the primary tumor, received laser irradiation for PDT treatment, while the left-side tumor was designated the distant tumor and was not irradiated. When the primary tumor reached 80-100 mm³, as described in Figure 7A, mice received systemic administration of the different formulations for a total of three injections, followed by laser irradiation (630 nm, 300 mW/cm², 10 min) of the primary tumors. As shown in Figure 7B, the saline-plus-irradiation and PpIX@Lipo-without-irradiation groups showed negligible inhibition of primary tumor growth, indicating that laser alone, and PpIX@Lipo without irradiation, had no obvious effect on tumor growth. Moreover, administration of NLG@Lipo or PpIX-NLG@Lipo without laser irradiation suppressed primary tumor growth through inhibition of IDO activity, indicating that an antitumor immune response can be aroused to some extent by IDO inhibition alone. However, NLG@Lipo exhibited a better tumor-inhibitory effect than PpIX-NLG@Lipo, consistent with the outcomes of the in vivo IDO inhibition experiment. PDT induced by either PpIX@Lipo or PpIX-NLG@Lipo under laser irradiation inhibited tumor growth to a very large degree (Figure 7E); furthermore, after treatment with PpIX-NLG@Lipo, tumor growth in the later stage was slower than with PpIX@Lipo, mainly owing to the synergistic effect of IDO inhibition.
The purpose of the bilateral tumor model was to evaluate the ability of PpIX-NLG@Lipo to inhibit primary and distant metastatic tumors simultaneously. As shown in Figure 7C and Figure 7E, compared with the saline group, laser alone and PpIX@Lipo without irradiation did not obviously inhibit distant tumor growth, while the NLG@Lipo and PpIX-NLG@Lipo groups without irradiation showed slower tumor growth because of the immune response induced by IDO inhibition, in accordance with the primary tumor results. Interestingly, PDT of the primary tumor with PpIX@Lipo inhibited distant tumor growth at the beginning, but the distant tumor volume increased faster at the later stage of the experiment. In sharp contrast, significant inhibition of distant tumor growth was observed in the PpIX-NLG@Lipo-treated group, indicating that synergistic PDT plus IDO inhibition produces a stronger antitumor effect than either monotherapy. To elucidate the mechanism underlying the distant tumor inhibition by PpIX-NLG@Lipo under laser irradiation, we examined the infiltration of CD8+ T cells into the distant tumor site after the three PpIX-NLG@Lipo treatments by FCM and IHC. As displayed in Figure 7F and Figure S6, compared with the saline control group, more CD8+ T cells were generated in the distant tumor after PpIX@Lipo-induced PDT, which may explain the early inhibition of the distant tumor. Furthermore, even more CD8+ T cells were observed in the NLG@Lipo group, consistent with the in vivo IDO inhibition assay, suggesting that NLG919 blocks IDO activity to decrease the Kyn/Trp ratio in vivo and thereby drives increased CD8+ T-cell infiltration into the tumor site for antitumor immunotherapy. Most interestingly, a large number of CD8+ T cells infiltrated the tumor site after PpIX-NLG@Lipo-mediated PDT, more than after either PpIX@Lipo-induced PDT or NLG@Lipo-mediated IDO blockade. This demonstrates that PpIX-NLG@Lipo-mediated PDT stimulates the host immune system to induce CD8+ T-cell production, and that its IDO-blocking activity then strengthens the PDT-induced immune response, resulting in a robust immune response that achieves enhanced distant tumor inhibition. In summary, PpIX-NLG@Lipo showed an obvious therapeutic effect on the primary tumor and inhibited the distant tumor to a considerable extent. CD8+ T-cell infiltration into the distant tumor site was examined to explain the abscopal antitumor effect of PpIX-NLG@Lipo; however, other immune-related factors and immunocytes, such as NK cells, CD4+ T cells, and Treg cells, were not evaluated here, and we plan to address them in future work to fully elucidate the antitumor immune response.

Safety evaluation
To evaluate the systemic toxicity of PpIX-NLG@Lipo in vivo, blood biochemical indexes and the histopathology of the vital organs were determined. ALT and AST were measured to evaluate liver function, while BUN and CREA were tested to estimate renal function.
As shown in Figure 8A, after a single intravenous injection of PpIX-NLG@Lipo, these blood biochemical indexes showed insignificant changes compared with the saline control group, suggesting that systemic administration of PpIX-NLG@Lipo causes negligible liver and kidney toxicity. This was further confirmed by histopathological examination of the heart, liver, spleen, lung, and kidney of mice treated with PpIX-NLG@Lipo after H&E staining (Figure 8B): compared with the saline group, negligible pathological changes were seen in the vital organs, indicating the good safety of a single administration of PpIX-NLG@Lipo. Furthermore, the body weight changes during the in vivo antitumor experiment, another indicator of systemic toxicity (Figure 7D), also suggested inappreciable toxicity of PpIX-NLG@Lipo during PDT.

Conclusion
In summary, we developed a liposomal dual-functional conjugate comprised of PpIX and NLG919 for synergistic PDT and IDO blockade to inhibit the primary tumor and distant metastatic tumors at the same time. The as-prepared liposome PpIX-NLG@Lipo exhibited a uniform size distribution with a mean diameter of 100 nm and accumulated strongly at the tumor site, laying a solid foundation for in vivo application; meanwhile, its fluorescence allowed noninvasive tracking of drug distribution. The in vitro cytotoxicity experiments demonstrated the negligible dark toxicity, good ROS generation, and effective PDT of PpIX-NLG@Lipo. Moreover, the in vitro cell experiments indicated that PpIX-NLG@Lipo causes ICD via apoptosis and necrosis during PDT treatment, which may present tumor-associated antigens to stimulate the host immune system. Furthermore, PpIX-NLG@Lipo blocked IDO function by inhibiting its conversion of Trp to Kyn both in vitro and in vivo. The antitumor experiment in the bilateral 4T1 murine breast cancer model demonstrated that PpIX-NLG@Lipo-mediated PDT damages tumor cells directly to inhibit primary tumor growth while presenting tumor-associated antigens to stimulate the host immune system, and that PpIX-NLG then inhibits the IDO pathway to achieve effective distant tumor inhibition. Therefore, PpIX-NLG@Lipo shows promise for synergistic PDT and IDO blockade in enhanced cancer therapy. Furthermore, such a treatment strategy combining PDT and IDO blockade provides great …
Airbag-associated Severe Blunt Eye Injury Causes Choroidal Rupture and Retinal Hemorrhage: A Case Report

A case of choroidal rupture caused by airbag-associated blunt eye trauma, complicated by massive subretinal hemorrhage and vitreous hemorrhage, that was successfully treated with intravitreal injection of expansile gas and bevacizumab is presented. A 53-year-old man suffered loss of vision in his right eye due to blunt trauma from a safety airbag after a traffic accident. On initial examination, the patient had no light perception in his right eye. Dilated ophthalmoscopy revealed massive subretinal hemorrhage with macular involvement and faint vitreous hemorrhage. We performed intravitreal injection of pure sulfur hexafluoride twice for displacement, after which visual acuity improved to 0.03. For persistent subretinal hemorrhage and suspected choroidal neovascularization (CNV), an intravitreal bevacizumab (1.25 mg/0.05 mL) injection was administered. After 3 weeks, the visual acuity of his right eye recovered to 0.4. For early-stage choroidal rupture-induced subretinal hemorrhage complicated by suspected CNV, intravitreal injection of expansile gas combined with intraocular injection of antiangiogenic drugs appears to be an effective treatment.

Introduction
Choroidal rupture, first described by von Graefe in 1854, is a type of closed-globe injury caused by blunt eye trauma that results in rupture of Bruch's membrane and the retinal pigment epithelium layer due to anterior-posterior compression and horizontal expansion of the eye. During such trauma, the relatively strong collagen-fortified sclera and the relatively flexible retina are not easily ruptured, but the relatively inflexible retinal pigment epithelium layer, Bruch's membrane, and the neighboring choriocapillaris rupture more easily. Choroidal ruptures can be divided into direct and indirect types. The indirect type is more common; the rupture is often located on the side opposite the impact. Approximately 5-10% of all blunt eye injuries are accompanied by indirect choroidal rupture [1]. Indirect choroidal rupture generally occurs at the posterior pole, forming a crescentic shape concentric to the optic nerve. If the choroidal rupture involves the macular area or lies beneath the fovea, it leads to a relatively poor visual outcome [2]. Airbags are inflatable devices designed to reduce car accident-related mortality and morbidity. Nevertheless, with the widespread use of airbags, incidents of airbag-related eye trauma have been reported in recent years, largely in the form of case reports. Trauma intensity varies from mild to severe, with injuries ranging from the anterior to the posterior segment, including corneal abrasion, hyphema, eyelid laceration, traumatic iritis, iris tear, cataract, angle recession, corneal or scleral laceration, chemical keratitis, dislocated lens, orbital fracture, facial nerve palsy, vitreous/retinal hemorrhage, retinal tear or detachment, commotio retinae, macular hole, choroidal rupture, traumatic maculopathy, traumatic optic neuropathy, retinitis sclopetaria, and LASIK flap folds [3].
This case report describes a patient who suffered choroidal rupture with severe subretinal hemorrhage caused by a rapidly inflating airbag during a car accident.

Case Report
The work described has been carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Informed consent was obtained. The patient was a 53-year-old man in good health without any major illnesses. He reported that in a recent car accident, which had occurred 3 days before he presented to the clinic, his right eye was hit by a suddenly inflating airbag. In addition to severe pain, he suffered sudden-onset loss of vision. According to the patient, his car was involved in a full frontal collision with a roadside sign as he was making a left turn; he was not wearing his seatbelt and was driving at 50 km/h. The first clinical examination revealed that his right eye had no light perception. The visual acuity of his left eye was 0.8. The intraocular pressure (IOP) of his right and left eye was 12 and 15 mm Hg, respectively. Slit-lamp examination revealed that the anterior segment was largely normal, without corneal or scleral laceration. Dilated fundus examination of the right eye revealed massive subretinal hemorrhage around the optic disc with macular involvement, in addition to vitreous hemorrhage, while his left eye was normal. Although resolution was poor because of the vitreous hemorrhage, optical coherence tomography (OCT) revealed a hint of macular edema with subretinal fluid and a central retinal thickness of 436 μm (Fig. 1). After a week of observation, the right eye still had no light perception; the patient was therefore advised to have an intravitreal injection of 0.3 mL sulfur hexafluoride (SF6) in the right eye. The patient agreed, and the injection was administered with him in the prone position. Four days after the injection, vision in the right eye had improved: the patient could recognize hand motion at a distance of 60 cm, and the IOP was 12 mm Hg. Dilated fundus examination revealed significantly reduced subretinal macular hemorrhage, which had been displaced to the periphery of the retina. The choroidal rupture was located on the superior and temporal side of the macula. OCT showed that the macular edema had markedly decreased, with the central retinal thickness reduced to 278 μm, although some subretinal fluid remained. Since the intraocular gas had nearly disappeared, a second intravitreal injection of 0.3 mL SF6 in the right eye was recommended. One week after this injection, visual acuity of the right eye had improved to 0.03, but the subretinal and vitreous hemorrhage had not improved at all (Fig. 2), and the central retinal thickness had slightly increased to 319 μm. Because a secondary choroidal neovascularization (CNV) may have occurred, after fully informing the patient of the potential benefits and risks, we scheduled an immediate intravitreal injection of 1.25 mg/0.05 mL bevacizumab in the right eye. Three weeks after the injection, the corrected visual acuity of the right eye had improved to 0.4, with only small amounts of residual subretinal hemorrhage on the superior, inferior, and nasal sides of the eye, and the vitreous hemorrhage had almost entirely disappeared.
Fundus examination revealed a visible scar from the choroidal rupture. OCT showed complete absence of subretinal fluid, and the central retinal thickness had decreased to 210 μm. The choroidal rupture was visible on fluorescein angiography, and no dye leakage was observed in the early or late phase (Fig. 3).

Discussion
Airbags are soft inflatable pouches installed in automobiles. In the event of a collision, or upon detecting sudden deceleration of the vehicle, airbags rapidly inflate within one-tenth of a second to prevent the head, face, or body of the passengers from making direct contact with the steering wheel, dashboard, or windshield, reducing the severity of bodily harm and preventing passengers from being ejected from their seats, thereby also preventing secondary injuries. Airbags are therefore designed to prevent accident-related casualties. Pearlman et al. [3] reported that, between 1991 and 2000, there were a total of 263 airbag-related eye injuries in 101 patients. Anatomically, the most common type was corneal injury, which accounted for 21.6% of all eye injuries and included corneal epithelial defects and chemical keratitis. Hyphema had the second highest incidence, accounting for 17.1% of all injuries, followed by vitreous and retinal hemorrhage at 9.9%, retinal tears and retinal detachment at 5.7%, commotio retinae at 5.3%, and angle recession at 4.2%. There were only 3 incidents of airbag-related choroidal rupture, accounting for 1.1%, so this is generally considered a rare complication. There were 11 cases of open-globe injuries, such as scleral and/or corneal laceration, accounting for 4.2% of the total. Rupture of the eyeball is considered a severe complication, and patients who suffer rupture often have a poor visual prognosis. Despite the range of possible airbag-related eye complications, 45% of major accidents resulted in eyeball rupture even before airbags were widely used; it is therefore fair to say that airbags significantly reduce the severity of accident-related ocular trauma [4]. The main mechanisms of airbag-related eye injury are blunt ocular trauma from the impact of the inflating airbag and chemical keratitis after exposure to the alkaline sodium azide gas released during deflation of the airbag [5]. Corneal epithelial defects and hyphema of the anterior segment are usually self-limiting. However, if the posterior segment is affected, as in retinal detachment, choroidal rupture, macular hole, or traumatic optic neuropathy, permanent loss of vision may result. In our case, the patient suffered choroidal rupture with massive subretinal and vitreous hemorrhage after blunt injury to the right eye; he did not have hyphema, lens displacement, or corneal damage. In addition, the patient's IOP was normal, with neither ocular hypotony nor hypotony maculopathy. In subsequent follow-up examinations, complications such as retinal tear, retinal detachment, or macular hole were not observed. Yang et al. [6] have suggested that Asians have a higher risk of airbag-related eye injuries. This may be related to the orbital anatomy of Asians, as a shallow orbital socket and a less prominent orbital rim may allow airbags to make direct contact with the eyeball.
In addition, compared with Caucasians, Asians tend to have a smaller stature and are therefore likely to sit closer to the steering wheel and the airbag, making them more likely to be hit by a suddenly inflating airbag. Furthermore, wearing a seatbelt while driving may reduce the force with which the eyeball is hit and thus prevent subsequent complications. It would be interesting to evaluate whether the severity of airbag-related eye injuries differs for Asians driving European-, Japanese-, Korean-, and Taiwanese-made vehicles, and whether there is a need to develop airbags tailored for Asians; additional research is needed on this subject. For the treatment of choroidal rupture, no currently available medications or surgical procedures are particularly effective. However, a few case reports have suggested that intravitreal injection of expansile gases, such as SF6 or C3F8 (perfluoropropane), together with tissue plasminogen activator can have a positive therapeutic effect on choroidal rupture with recent subretinal hemorrhage [7]. The therapy has also been shown to be effective in treating subretinal hemorrhage complicating wet age-related macular degeneration [8]. In addition, expansile gas can be injected intravitreally after macular hole and retinal detachment surgery to increase the success rate of the procedure [9]. Although there are no large-scale studies on the treatment of choroidal rupture-associated subretinal hemorrhage, we applied the therapy documented for wet age-related macular degeneration-associated retinal hemorrhage to our patient, in the hope that the expansile gas, combined with the prone position, would displace the hemorrhage beneath the macula to the periphery of the eye and restore the patient's central vision. The result of the procedure was positive. In addition to causing complicated hemorrhage, choroidal rupture results in CNV in about 20% of cases, usually adjacent to the scar left by the rupture. CNV, if left untreated, may cause additional retinal hemorrhage or fibrosis, resulting in loss of vision [10]. Many studies have shown that injection of an anti-vascular endothelial growth factor agent is significantly effective in treating CNV caused by wet age-related macular degeneration, diabetic retinal edema, retinopathy of prematurity, and intraocular inflammation or infection. In addition, intravitreal injection is safe, quick, and inexpensive, and most patients can be treated on an outpatient basis. In 2004, bevacizumab (Avastin) was approved by the Food and Drug Administration (FDA) for the treatment of metastatic colorectal cancer, but intravitreal injection in ophthalmology is still considered an off-label use of the medication. There are only a handful of studies on choroidal rupture-induced CNV [11], and large-scale studies confirming the efficacy of antiangiogenic drugs are currently lacking. In our case, we used the therapeutic dose established for wet age-related macular degeneration, giving the patient a single intravitreal injection of 1.25 mg/0.05 mL bevacizumab. Although fluorescein angiography did not initially provide supporting evidence, the patient's visual prognosis was fortunately satisfactory.
However, CNV may still occur several years after the injury [10], and, therefore, regular follow-ups over a long period of time are necessary to observe and track the progress of the disease. In addition, there have been several documentations of complications arising from intravitreal injections, such as retinal tear and detachment, exogenous endophthalmitis, ocular hypertension, and vitreous hemorrhage [12]. Literature has indicated that intravitreal injection of bevacizumab may also increase the risk of cardiovascular and cerebrovascular complications, for example, acute blood pressure elevation or myocardial infarction [13]. According to the statistics published by Taiwan's National Police Agency [14], in 2014, 307,482 incidences of traffic accidents were documented, which resulted in approximately 2,612 cases of fatalities and 412,436 cases of injuries. These accidents caused severe injuries and were a substantial burden to society in terms of cost. In order to reduce the rate of accidents, important issues such as drafting comprehensive transport policies and regulations and educating the public on the importance of obeying traffic rules also need to be addressed. The widespread use of airbags can indeed reduce the occurrence of accident-related casualties, but it can also result in relatively more cases of airbag-related eye trauma that can cause complications and ruptures of the posterior segment of the eye, leading to severe cases of loss of vision. Therefore, a timely diagnosis, interventional therapy, and follow-up tracking are very important and necessary. Large-scale studies would need to be conducted on the treatment of choroidal rupture in order to determine the best therapeutic approach. However, for early-stage choroidal rupture-induced subretinal hemorrhage and the complications of suspected CNV, intravitreal injection of expandable gas with the patient in the prone position, in addition to intraocular injection of antiangiogenesis drugs, seems to be an effective treatment. Statement of Ethics Written informed consent was obtained from the patient for the publication of this case report and any accompanying images. Disclosure Statement The authors declare that there is no conflict of interest regarding the publication of this paper. No funding was received for this work. a After 2 times intravitreal injection (IVI) of SF6, subretinal hemorrhage in the fundus has diminished significantly. The scar resulting from the choroidal rupture can be seen (arrow); a large amount of hemorrhage is still visible surrounding the scar, and vitreous hemorrhages decreased slightly compared to before. b OCT after the first IVI of SF6: macular edema has improved significantly, only some subretinal fluid is visible on the temporal side of the fovea. The scar from the choroidal rupture (arrow) can be seen; central retinal thickness has reduced to 278 μm. c After the second IVI of SF6, no significant changes can be observed in the fundus; subretinal and vitreous hemorrhage is still visible around the scar. Central retinal thickness in OCT has slightly increased to 319 μm. Fig. 3. a After the intravitreal injection (IVI) of Avastin, the scar from the choroidal rupture can be observed at the temporal side of the macular area; subretinal and vitreous hemorrhage has nearly disappeared. b After the IVI of Avastin, the subretinal fluid disappeared entirely; CRT is 210 μm, and macular edema is absent. c Early phase of fluorescein angiography after the IVI of Avastin. 
d Late phase of fluorescein angiography after the IVI of Avastin; no obvious dye leakage is observed.
HpSlyD inducing CDX2 and VIL1 expression mediated through TCTP protein may contribute to intestinal metaplasia in the stomach

Helicobacter pylori infection is the most important risk factor for gastric intestinal metaplasia (IM). Our previous study demonstrated that infection with H. pylori HpslyD-positive strains is associated with IM. To further investigate the signalling pathway involved in HpSlyD-induced IM, CDX2 and VIL1 expression was determined before and after HpSlyD application. TCTP was knocked down by siRNA or overexpressed by plasmid transfection. An HpSlyD binding protein was used to block HpSlyD's enzymatic activity. The expression of CDX2 and TCTP in gastric diseases was measured by immunohistochemistry. Our results showed that HpSlyD induced CDX2 and VIL1 expression. TCTP protein expression was markedly increased after application of HpSlyD and in an HpSlyD-expressing stable cell line. Downregulation of TCTP protein led to decreased HpSlyD-induced CDX2 and VIL1. Overexpression of TCTP protein increased the expression of CDX2 and VIL1. Co-application of HpSlyD and FK506 led to significant reductions in CDX2, VIL1, and TCTP expression. Immunohistochemistry demonstrated that CDX2 and TCTP expression was higher in HpslyD-positive specimens than in HpslyD-negative ones. Expression of CDX2 was positively correlated with TCTP in HpslyD-positive cases. Our study is the first to show that HpSlyD induction of CDX2 and VIL1 expression mediated through TCTP may contribute to IM in the stomach.

Different H. pylori virulence factors are associated with different histopathological changes of the gastric mucosa. For example, strains carrying cagA and vacA can produce a stronger inflammatory response, which is related to the occurrence of precancerous lesions such as GIM 5. In a previous study, we identified a novel peptidyl-prolyl cis-trans isomerase (PPIase, EC 5.2.1.8) gene associated with gastric carcinogenesis, which encodes the protein H. pylori SlyD (HpSlyD) 6. HpSlyD has the ability to promote cell proliferation, malignant transformation and invasion, and to inhibit apoptosis 7,8. Further study has shown that infection with HpslyD-positive strains may be associated with atrophic gastritis 9. However, the signalling pathway involved in HpSlyD-induced intestinal metaplasia is not yet completely understood.

Caudal-related homeobox 2 (CDX2) is a molecular engine that regulates intestinal differentiation. It can directly promote the expression of a variety of intestinal cell-specific factors, while playing an irreplaceable role in maintaining intestinal cell proliferation, development and differentiation. Under normal conditions, CDX2 expression is restricted to the intestine, but it is ectopically expressed in IM lesions, not only of the stomach but also of the oesophagus and gall bladder, among other locations. CDX2 activation plays a key role in the development of GIM 10. Villin 1 (VIL1) is a structural protein involved in the formation of small intestinal microvilli, and its expression is upregulated in IM. VIL1 is a known transcriptional target of CDX2 11. Both CDX2 and VIL1 play a key role in the development of gastric metaplasia. It has been reported in the literature that H. pylori can affect CDX2 and VIL1 expression 12-14. However, it is unclear whether HpSlyD affects CDX2 and VIL1 expression, and if it does, how it regulates CDX2 and VIL1 transcriptional expression is also unclear.
Translationally controlled tumor protein (TCTP), a highly conserved protein found in eukaryotic cells, is an important tumor-associated protein identified in studies of tumor reversion. In 2007, the journal Nature reported 15 that TCTP controls growth and differentiation in Drosophila, and TCTP overexpression occurs in many human cancers, such as breast cancer and liver cancer 16-21. Recent studies have shown that TCTP is also pivotal in the cell-reprogramming network, acting as a checkpoint, and it regulates the transition points of cell phenotype under a variety of physiological and pathological states 22. It is unclear whether TCTP is involved in the regulation of GIM.

In our previous study, using differential proteomics, we screened for changes in protein expression associated with the expression of HpSlyD in a stable cell line. Among the 21 up-regulated proteins, the most highly elevated was TCTP, suggesting that TCTP may be involved in HpSlyD-mediated regulation (data not shown). However, this speculation needs to be further verified. In this study, we investigated whether HpSlyD could induce CDX2 and VIL1 expression in vivo and in vitro and whether TCTP regulates CDX2 and VIL1 expression induced by HpSlyD, and we aimed to clarify the signalling pathway involved in HpSlyD-induced IM in the stomach.

Materials and Methods

Cell culture and treatment. The human gastric carcinoma cell lines AGS and N87 were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). They were grown in Ham's F-12 medium (HyClone, USA) or Dulbecco's modified Eagle's medium (DMEM; HyClone, USA) supplemented with 10% foetal bovine serum (FBS, Gibco, Australia) in an atmosphere of 5% CO2 at 37 °C. AGS cells were transfected with either SlyD-GFP or GFP plasmids, and stable cell lines were obtained using the methods described by Zhu et al. 8. N-terminally His-tagged SlyD was purified by Ni2+ affinity chromatography as described earlier 7. For all experiments, HpSlyD was used at a concentration of 200 ng/mL. The HpSlyD binding protein tacrolimus (FK506) was purchased from Astellas Ireland Co., Ltd., dissolved in ddH2O at a concentration of 18 mg/ml and stored at −20 °C until use.

RNA extraction and real-time quantitative RT-PCR (qPCR). Total RNA was extracted using TRI Reagent (Ambion, USA) and converted to cDNA using a PrimeScript RT reagent kit (Takara, Japan). Human CDX2 (forward 5′-TTCACTACAGTCGCTACATCACC-3′; reverse 5′-TTGTTGATTTTCCTCTCCTTTGC-3′) and VIL1 (forward 5′-GGCAAGAGGAACGTGGTAGC-3′; reverse 5′-CGGTCCATTCCACTGGATGA-3′) were amplified with SYBR Green (SYBR Premix Ex Taq II, Takara, USA) on an ABI Prism 7500 fluorescence reader. The following PCR parameters were used: 95 °C for 30 seconds, then 40 cycles of 95 °C for 15 seconds and 55 °C for 30 seconds, with a final elongation step at 72 °C for 30 seconds. Each reaction was performed in triplicate and normalized to GAPDH. Relative expression of the target genes was determined using the 2^−ΔΔCt method 23 and expressed as the fold difference relative to untreated control cells. The results are expressed as mean ± SD of representative triplicates.
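As an illustration of the quantification step, the following is a minimal sketch of the 2^−ΔΔCt calculation with GAPDH normalization. It is not code from the paper, and the triplicate Ct values are hypothetical:

```python
import numpy as np

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method."""
    d_ct_treated = np.mean(ct_target) - np.mean(ct_gapdh)        # normalize to GAPDH
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_gapdh_ctrl)
    dd_ct = d_ct_treated - d_ct_control                          # relative to untreated cells
    return 2.0 ** (-dd_ct)                                       # fold change

# Hypothetical triplicate Ct values for CDX2 in HpSlyD-treated vs. untreated AGS cells
fold = relative_expression(
    ct_target=[24.1, 24.3, 24.0],       ct_gapdh=[18.2, 18.1, 18.3],
    ct_target_ctrl=[27.5, 27.4, 27.6],  ct_gapdh_ctrl=[18.0, 18.2, 18.1],
)
print(f"CDX2 fold change vs. control: {fold:.2f}")
```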
Protein extraction and western blot. Western blot analysis was performed using standard techniques. Briefly, cells (2 × 10^6/well) were treated with or without SlyD (200 ng/mL) for 40 hours. Total protein was extracted using a lysis buffer (2% mercaptoethanol, 20% glycerol, and 4% SDS, in 100 mM Tris-HCl buffer, pH 6.8). Equal amounts of total protein (60 µg/lane) were separated and transferred to PVDF membranes (Bio-Rad, Hercules, CA). The membranes were incubated overnight at 4 °C with primary antibodies, including rabbit monoclonal anti-CDX2.

Human tissue specimens and immunohistochemistry. Tissue samples were obtained from 84 individuals with gastritis (GS), 91 individuals with intestinal-type atrophic gastritis (IM-GA) and 58 with gastric cancer (GC) who participated in the Zhuanghe Gastric Diseases Screening Program between 2008 and 2011, comprising 133 men and 100 women, with 149 cases ≤ 60 years of age and 84 cases > 60 years of age. All subjects were histologically diagnosed based on the updated Sydney System for gastritis. This study was approved by the Ethics Committee of the First Affiliated Hospital of China Medical University, Shenyang, China. Written informed consent was obtained from the participants.

Statistical analysis. All analyses were carried out using SPSS for Windows version 16.0. Data are presented as mean ± SD. Differences in the mRNA and protein expression levels of CDX2, VIL1 and TCTP between the treated and non-treated groups were analysed by Student's t-test. The correlations between H. pylori infection in tissue samples and other factors were determined using the two-sided χ2 test. Non-parametric tests were used to analyse the differences in CDX2 and TCTP protein detected by IHC. Correlation analysis was performed between TCTP and CDX2 expression. A value of P < 0.05 was defined as statistically significant.

Results

HpSlyD induces CDX2 and VIL1 expression in gastric epithelial cell lines. The occurrence of gastric IM during H. pylori infection has been reported to depend on the induction of CDX2 expression in gastric epithelial cells 30. Thus, in initial studies, we evaluated CDX2 expression and the expression of another epithelial cell differentiation marker, VIL1, in human gastric cancer cell lines before and after treatment with HpSlyD. AGS or N87 cells were incubated with 200 ng/mL HpSlyD for 40 hours. The level of CDX2 mRNA in the non-treated group was significantly lower than that of the treated group in both cell lines (Fig. 1A). Similarly, mRNAs encoding VIL1 were up-regulated in the treated cells compared with the non-treated cells (Fig. 1B). In addition, CDX2 protein (as well as VIL1 protein) was also expressed at this time point (Fig. 1C-E). CDX2 and VIL1 mRNA expression in AGS cells expressing SlyD-GFP was significantly higher than in control AGS cells and AGS cells expressing GFP alone (Fig. 2A,B). The same differences were also found in the protein expression of CDX2 and VIL1 (Fig. 2C-E). Our results showed that in both gastric epithelial cell lines and in the HpSlyD stably expressing cell line, CDX2 and VIL1 expression was affected by the presence of HpSlyD.

HpSlyD induced TCTP expression in human gastric epithelial cells. In our previous study, we found that TCTP is a highly expressed protein in an HpslyD-GFP stable cell line, suggesting that TCTP may be involved in HpslyD-mediated biological effects. With this information in hand, we next addressed whether HpSlyD can induce increased TCTP expression in AGS, N87, and the HpslyD-GFP stable cell line. As shown in Fig. 3, TCTP expression was markedly increased in AGS and N87 cells treated with 200 ng/mL HpSlyD for 40 hours and in the HpslyD-GFP stable cell line, suggesting that HpSlyD affects TCTP expression in gastric epithelial cells.
HpSlyD induction of CDX2 and VIL1 expression is inhibited by knockdown of TCTP. To further examine whether TCTP regulates CDX2 and VIL1 expression induced by HpSlyD, we conducted a series of studies addressing the role of TCTP in HpSlyD induction of CDX2 and VIL1. AGS, N87, and AGS HpslyD-GFP stably expressing cell lines were transfected with TCTP siRNA or nonspecific siRNA for 6 h and then treated with HpSlyD for another 40 h. As shown in Fig. 4, TCTP-specific siRNA strongly inhibited HpSlyD-induced upregulation of CDX2 and VIL1, suggesting the involvement of TCTP in H. pylori-induced CDX2 signalling. The same result was also seen in both N87 cells and the HpslyD-GFP stably expressing cell line (Fig. 4A-D). Our data demonstrate that TCTP promotes HpSlyD-induced CDX2 and VIL1 expression.

TCTP introduction upregulated the expression of CDX2 and VIL1. The above results showed that TCTP was involved in HpSlyD-induced upregulation of CDX2 and VIL1. Does introduction of the TCTP gene into the cell lines have the same biological effects as HpSlyD? We transfected a TCTP expression plasmid and a control plasmid (Origene, China) into AGS and N87 cells using Lipofectamine 2000 (Invitrogen, USA). TCTP overexpression was evaluated 24 hours after transfection by western blot. As shown in Fig. 5, TCTP introduction upregulated CDX2 and VIL1 expression in both AGS and N87 cells, suggesting the involvement of TCTP in inducing CDX2 signalling. Our data demonstrate that TCTP overexpression promotes CDX2 and VIL1 expression, with the same biological effects as HpSlyD.

The HpSlyD binding protein FK506 blocks HpSlyD-induced expression of CDX2, VIL1, and TCTP in AGS and N87 cells. FK506 can block the function of FK506-binding proteins (FKBPs) by binding to the immunophilin FKBP12 31-35. HpSlyD is a member of the FKBP family. First, we assessed whether FK506 could inhibit HpSlyD enzymatic activity. As shown in Fig. 6, with E. coli SlyD as a positive control, enzymatic activity analysis revealed that PPIase activity was substantially lower in cells treated with HpSlyD + FK506 than in those treated with HpSlyD alone. Therefore, our data suggest that FK506 can suppress the PPIase activity of HpSlyD. We next addressed the effect of FK506 on HpSlyD-induced expression of CDX2, VIL1 and TCTP. As shown in Fig. 7, co-treatment of cells with HpSlyD and FK506 led to significant reductions in CDX2, VIL1 and TCTP expression compared with cells treated with HpSlyD alone, in both the AGS (Fig. 7A-D) and N87 (Fig. 7E-H) cell lines.

HpSlyD is related to the expression of CDX2 and TCTP in different gastric diseases. The above in vitro studies showed that HpSlyD induces CDX2 and VIL1 expression mediated through TCTP. To determine whether a similar phenomenon occurs in vivo, we immunostained tissue from different human gastric diseases with or without HpSlyD infection. The information on the patients included in this study is summarized in Supplementary Table 1. There was no statistically significant difference in age or sex between groups. In the GS group, there was no statistically significant difference in the IS of CDX2 expression, either between H. pylori-positive and -negative cases or between HpslyD-positive and -negative cases (P > 0.05, Fig. 8A-E). The IS of TCTP expression likewise did not differ between H. pylori groups (P > 0.05, Fig. 8C,D,F). These results indicate that HpslyD-positive H. pylori strains do not promote the expression of CDX2 and TCTP in GS.
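The group comparisons of immunostaining scores (IS) above use non-parametric tests, as stated in the Methods (which used SPSS). Here is a minimal sketch of one such comparison using SciPy instead; the IS values below are hypothetical, not the study data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical immunostaining scores (IS) for CDX2 in the GS group
is_hpslyd_pos = [2, 3, 1, 2, 4, 2, 3, 1]
is_hpslyd_neg = [1, 2, 2, 3, 1, 2, 2, 3]

# Two-sided Mann-Whitney U test (a rank-based non-parametric test)
stat, p = mannwhitneyu(is_hpslyd_pos, is_hpslyd_neg, alternative="two-sided")
print(f"U = {stat}, P = {p:.3f}")  # P > 0.05 would mean no significant difference
```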
In the IM-GA group, the IS of CDX2 expression was higher not only in H. pylori-positive cases than in negative cases but also in HpslyD-positive cases than in the negative group (P < 0.001, Fig. 9A,B,E). The same trend was seen in the IS of TCTP expression (P < 0.001, Fig. 9C,D,F). These results show that HpslyD-positive H. pylori strains promote the expression of CDX2 and TCTP in IM-GA. In the GC group, the IS of CDX2 expression was higher in H. pylori-positive specimens than in negative specimens, and higher in HpslyD-positive specimens than in negative specimens (P < 0.05 and P < 0.01, Fig. 10A,B,E). The same trend was seen in the IS of TCTP expression (P < 0.001, Fig. 10C,D,F). These results show that HpslyD-positive H. pylori strains promote the expression of CDX2 and TCTP in GC. We then compared TCTP and CDX2 expression across the different gastric diseases in HpslyD-positive cases. As shown in Fig. 11, the IS of CDX2 and TCTP expression is significantly higher in GC than in IM-GA, and significantly higher in IM-GA than in GS, indicating that HpslyD-positive H. pylori strains promote the expression of CDX2 and TCTP with the development of gastric diseases.

TCTP is positively correlated with CDX2 in H. pylori slyD-positive infection. We next evaluated the relationship between TCTP and CDX2 expression. As shown in Fig. 12, we identified a positive correlation between TCTP and CDX2 levels in HpslyD-positive cases (Spearman's correlation coefficient, r = 0.3644, P < 0.01) but not in HpslyD-negative cases (r = 0.1292, P = 0.4089) or H. pylori-negative cases (r = 0.2585, P = 0.067).
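A minimal sketch of this correlation analysis (again with SciPy in place of the SPSS used by the authors; the paired IS values are hypothetical):

```python
from scipy.stats import spearmanr

# Hypothetical paired immunostaining scores in HpslyD-positive cases
tctp_is = [1, 2, 2, 3, 4, 3, 2, 4, 3, 1]
cdx2_is = [1, 1, 3, 3, 4, 2, 2, 4, 2, 2]

# Spearman's rank correlation between TCTP and CDX2 expression
r, p = spearmanr(tctp_is, cdx2_is)
print(f"r = {r:.4f}, P = {p:.4f}")  # positive r with P < 0.05 indicates correlation
```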
Discussion

In a previous study, we identified HpslyD as a gastric cancer-associated gene 6. Further study showed that infection with slyD-positive H. pylori strains is associated with atrophic gastritis 9. However, the mechanism by which HpslyD provokes metaplastic changes is poorly understood. In this study, we fill this gap with studies showing that HpSlyD induces CDX2 and VIL1 expression both in vitro and in vivo. In addition, this study is the first to confirm that the TCTP-mediated signalling pathway is involved in HpSlyD-induced IM in the stomach. These results provide novel information that contributes to understanding the molecular events that precede the development of gastric diseases caused by H. pylori infection.

Metaplasia is a process whereby a completely differentiated cell transforms into another type of mature cell, stimulated by certain factors in response to environmental changes. IM refers to a series of phenotype changes from stomach epithelium to an intestinal phenotype during the progression from gastritis to atrophic gastritis and sometimes to intestinal-type gastric cancer. This change is caused by an integration of genetic factors, transcription factors, signalling pathways and growth factors. CDX2 is a homeobox transcription factor that is critical for intestinal differentiation 36,37 and is a specific biomarker of the early steps of the gastric carcinogenic cascade, driving the development of IM 38,39. The key role of CDX2 in the metaplastic transformation of the gastric mucosa was categorically demonstrated by the use of two transgenic mouse models with ectopic expression of CDX2 in the gastric epithelium and subsequent development of IM with absorptive, goblet and enteroendocrine cell types 40,41. VIL1 is a structural protein involved in the formation of small intestinal microvilli, and its expression is upregulated in IM. VIL1 is a known transcriptional target of CDX2 11.

Using two kinds of gastric epithelial cells in vitro, we showed that HpSlyD induced CDX2 and VIL1 expression. Furthermore, a similar result was confirmed in an HpslyD stable cell line, which we constructed in previous studies. Therefore, our results indicate that the expression of CDX2 and VIL1 is associated with the presence of HpSlyD.

SlyD, a multifaceted protein, belongs to the PPIase FKBP family and catalyses the intrinsically slow cis-trans isomerization of peptidyl-prolyl bonds (Xaa-Pro) to facilitate the protein folding process 42,43, but its role as a PPIase in vivo is not well understood. Previous functional and interaction studies have shown that HpSlyD is involved in nickel ion integration of urease and hydrogenase 42,44. Our previous studies, in which we constructed a gastric cancer-related H. pylori differential gene library, showed that HpslyD is a high-copy gene in gastric cancer patients and that HpSlyD influences the gastric cell biological processes of cell proliferation, transformation and migration 7,8. Recently, some researchers demonstrated an emerging role of mammalian PPIases in cell differentiation 45, and therefore bacterially derived PPIases may also be involved in phenotype transitions. The present study suggests that HpSlyD regulates CDX2 and VIL1 to promote the IM transition in gastric epithelial cells. This study broadens our understanding of bacterially derived PPIases and provides a theoretical basis for understanding the function of HpSlyD and an in-depth exploration of the pathogenesis of H. pylori.

The molecular mechanism of H. pylori's regulation of CDX2 expression has been reported in the literature, for example by Camilo et al. 12. Thus, CDX2 and VIL1 expression regulated by H. pylori is a relatively complex process involving the interaction of many signalling pathways. However, the signalling pathway involved in HpSlyD-induced CDX2 and VIL1 expression is not yet completely understood.

TCTP is at the heart of the cell-reprogramming network, playing the role of a checkpoint, and is involved in regulating transition points of cell phenotypes under a variety of physiological or pathological states. In vitro, we found that TCTP expression was markedly increased in AGS and N87 cells treated with HpSlyD and in an HpslyD stable cell line, suggesting that HpSlyD also affects TCTP expression in gastric epithelial cells. Meanwhile, we observed that downregulation of TCTP protein led to decreased HpSlyD-induced CDX2 and VIL1 expression, and that overexpression of TCTP increased the levels of CDX2 and VIL1. Co-treatment with HpSlyD and FK506 led to a significant reduction in CDX2, VIL1 and TCTP expression. Furthermore, IHC staining demonstrated that CDX2 and TCTP expression was higher in H. pylori-positive specimens than in H. pylori-negative specimens, and higher in HpslyD-positive specimens than in HpslyD-negative specimens. HpslyD-positive H. pylori strains promote the expression of CDX2 and TCTP with the development of gastric diseases.
In HpslyD-positive specimens, the expression of CDX2 was positively correlated with TCTP. Our results show that HpSlyD induces CDX2 and VIL1 expression mediated through TCTP and contributes to IM and the development of gastric diseases. We can further speculate that HpSlyD activates cell differentiation mediated by transcription factors through TCTP, reprogramming gastric epithelial cells from the gastric phenotype to the intestinal phenotype. This process may also be involved in the malignant transformation of gastric tissue harbouring this chronic and stable infection.

In conclusion, we demonstrated that H. pylori infection leads to increased expression of CDX2 and VIL1, that TCTP enhances this expression, and that these changes are associated with the development of IM and cancer in the gastric mucosa. The results presented in this study show that HpSlyD is a positive regulator of IM progression, and therefore it may be a possible therapeutic target for inhibiting the formation of IM after H. pylori infection. Our results provide novel information for understanding the molecular events that precede the development of gastric IM, reinforcing the role of the HpSlyD-TCTP-CDX2 pathway in the whole process. Our study also provides an important molecular target for the clinical monitoring of H. pylori infection and 'type-based therapy', and provides insight into ideas and strategies for blocking H. pylori-related IM formation and decreasing the risk of progression to gastric cancer.

Figure legend fragments: (E) Boxplot shows that CDX2 expression is significantly higher in H. pylori-positive and HpslyD-positive cases than in negative ones. (F) Boxplot shows that TCTP expression is significantly higher in H. pylori-positive and HpslyD-positive cases than in negative ones. *P < 0.05, **P < 0.01, ***P < 0.001.

(A) Boxplot shows that CDX2 expression in HpslyD-positive cases is significantly higher in GC than in IM-GA, and also significantly higher in IM-GA than in GS. (B) Boxplot shows that TCTP expression in HpslyD-positive cases is significantly higher in GC than in IM-GA, and also significantly higher in IM-GA than in GS. *P < 0.05, **P < 0.01, ***P < 0.001.
(0,2) Elephants

We enumerate massless E_6 singlets for (0,2)-compactifications of the heterotic string on a Calabi-Yau threefold with the "standard embedding" in three distinct ways. In the large radius limit of the threefold, these singlets count deformations of the Calabi-Yau together with its tangent bundle. In the "small-radius" limit we apply Landau-Ginzburg methods. In the orbifold limit we use a combination of geometry and free field methods. In general these counts differ. We show how to identify states between these phases and how certain states vanish from the massless spectrum as one deforms the complex structure or Kähler form away from the Gepner point. The appearance of extra singlets for particular values of complex structure is explored in all three pictures, and our results suggest that this does not depend on the Kähler moduli.

Introduction

The earliest form of model building in string theory consisted of "embedding the spin connection in the gauge group" for a Calabi-Yau compactification of the E_8 × E_8 heterotic string [1]. These correspond to (0,2)-compactifications that happen to have N = (2,2) worldsheet supersymmetry. A natural question to ask concerns the counting of massless states in uncompactified spacetime which are singlets under the unbroken E_8 × E_6 gauge symmetry. This turns out to be a fascinating question that has received rather sporadic attention in the past 25 years.

These massless states correspond to first order deformations of the theory. Marginal deformations must preserve the (0,2) superconformal symmetry [2]. Deformations that preserve the full (2,2) invariance constitute the familiar (2,2) moduli space. Its dimension is constant [3], and in a geometric phase it corresponds to the unobstructed [4] deformations of complex structure and changes in the complexified Kähler form. The remaining moduli that only preserve (0,2) invariance are harder to describe. As a first step, we may count the massless gauge singlets in the four-dimensional effective theory. Each of these is a first order deformation that may be obstructed at higher order.

Unfortunately, the identification of all massless singlets at a generic point in the (2,2) moduli space is well beyond our current abilities. To make progress, we must work at certain limiting points where the spectrum is accessible to available techniques. These include large radius points, Landau-Ginzburg loci and the Gepner points they contain, and orbifolds. At each of these points the techniques used to identify the singlets are rather different, and the resulting description of the space of first order deformations might appear as mysterious as an elephant to the group of proverbial blind men from Indostan. Can these different descriptions be reconciled?

In the large radius limit of a Calabi-Yau phase, the counting of the singlets is quite easy to visualize. The (2,2) singlets manifest themselves as infinitesimal deformations of the complex structure or complexified Kähler form of the Calabi-Yau, while the less familiar (0,2) singlets correspond to first order deformations of the tangent bundle, counted by H^1(End T). This group can jump with complex structure [5]; moreover a "generic" first order (0,2) deformation is expected to be lifted by world-sheet instantons [6]. One might expect, therefore, that the Gepner models corresponding to certain Calabi-Yau threefolds might count the number of (0,2)-deformations differently.
After all, the Gepner model describes physics at some "minimal radius" for the Calabi-Yau threefold, well away from the large radius limit, for a special choice of complex structure. Singlet counts for Gepner models were comprehensively listed in [7].

So perhaps the different aspects of the elephant cannot be reconciled. It may be that the number of singlets varies wildly across the moduli space. We will argue here that this is not the case. The behaviour of the singlets is quite orderly, with relatively modest jumping in the singlet count. As is well studied, the quintic threefold provides a remarkably boring case study, where the number of singlets is fixed except for a handful of singlets associated with extra U(1) gauge symmetries at the Gepner point.

In the case of the quintic, the gauged linear sigma model offers a beautiful explanation of this behaviour. As already noted in [8], the (2,2) GLSM describing a Calabi-Yau complete intersection in a toric variety has natural (0,2)-preserving deformations encoded in a (0,2) superpotential for the gauge theory. The holomorphic parameters of this superpotential encode the "toric" Kähler moduli, the "polynomial" complex structure moduli, as well as a subset of classically unobstructed bundle moduli. This GLSM parameter space was recently studied in some detail in the case of Calabi-Yau hypersurfaces [9]. Remarkably, these GLSM deformations have been argued to correspond to exactly marginal deformations of the (0,2) theory [10-12]. Getting back to the quintic, it is not hard to see that all elements of H^1(End T) can be represented as deformations of the (0,2) GLSM superpotential. Hence, it is not too surprising that the singlet spectrum at the Gepner point simply differs by a few states associated to the un-Higgsing of additional gauge symmetries. More generally, there are certainly models where H^1(End T) is not fully described by the (0,2) GLSM, and the additional singlets, unprotected by GLSM arguments, should suffer the fate of the "generic" large radius singlet and become massive away from the large radius limit.

In this paper we will study various cases which have a little more structure than the quintic. We will show how the bulk of the spectrum stays fixed and can be tracked nicely between the Calabi-Yau and Landau-Ginzburg pictures. We will also see how various massless states can appear in some subspaces of the moduli space. In some cases these extra states can be tracked all the way from the Gepner model to the large radius limit.

The orbifold is an important intermediate step on a path from the Gepner model to the large radius limit. Comparing the orbifold to the large radius limit is extremely well studied in the context of (2,2) theories. Here the relationship between the orbifold and its resolution is now generally known as the McKay correspondence. A McKay correspondence for the (0,2) case has been quite neglected, despite its origin in string theory being just as old [13]. We make some first efforts in this direction here. In particular, at the orbifold limit the massless states can be characterised as "untwisted" or "twisted," and we are able to compute the spectrum of states of both types. In the case of a Calabi-Yau space with a curve of quotient singularities, this involves understanding how the six-dimensional theory determined by the quotient is compactified on the singular curve and leads to a twisted compactification familiar from the study of wrapped D-branes.
This is a first step toward a McKay correspondence, but there are some subtleties we do not resolve here.

We will focus on 4 examples of Calabi-Yau threefolds in a (weighted) projective space, each of which has its own merit:
• a quintic in P^4 is the simplest and most studied case;
• a sextic in P^4_{21111} has extra singlet states at small radius;
• a septic in P^4_{31111} is a blown-up orbifold and demonstrates the (0,2) McKay correspondence;
• an octic in P^4_{22211} exhibits many complications, including extra singlets appearing both at special radii and at special complex structure.

The Landau-Ginzburg locus for the sextic and octic theories has additional singlets in comparison to the large radius computation, which are not associated with an enhanced gauge symmetry. What is the fate of these singlets as we move away from the Landau-Ginzburg locus by turning on a Kähler deformation? The only reasonable possibility is that they acquire a Kähler-dependent mass term, which is indeed allowed by the quantum symmetry of the Landau-Ginzburg orbifold and consistent with the fact that the number of additional chiral singlets is even. This would be challenging to verify directly even at the Gepner point, since it would require us to compute correlators of several twisted states. Luckily, we have a tool at our disposal that would be singularly unhelpful to the six blind men: we can take a look at our elephant in the mirror. Using mirror symmetry we are able to show that the extra singlets do indeed acquire a Kähler-dependent mass.

The extra singlets provide examples of states with Kähler-dependent masses; however, we observe that in all of our examples every large radius singlet, whether it is a (0,2) GLSM deformation or not, remains massless at the Landau-Ginzburg locus. Thus, we have yet to find an example of a "generic" large radius singlet that is lifted by world-sheet instantons. Instantons could well lead to higher order obstructions for these first order (0,2) deformations, but we find it remarkable that an instanton-induced mass term for the non-GLSM singlets, while allowed by symmetries, is not generated. This suggests that there may be a non-renormalization theorem with wider applicability than the one currently known for the subspace of GLSM deformations.

The rest of the article is organized as follows: in sections 2 and 3 we review and develop the technology necessary to study heterotic spectra in Landau-Ginzburg and Calabi-Yau phases. Section 4 is devoted to a comparison of the general results, while section 5 contains specific computations in the examples.

Singlet Spectrum at the Landau-Ginzburg Locus

Describing the massless spectrum of a heterotic vacuum as a function of the moduli is a difficult affair even in string perturbation theory, since it requires a knowledge of the marginal operators in a non-trivial SCFT. At certain points in the moduli space the SCFT may reduce to a solvable theory: for instance, it might be an orbifold of a free theory or a Gepner model. When this holds, the full perturbative string theory is under control: the spectrum is computable, and any scattering amplitude can be reduced to an integral over the moduli space of a punctured Riemann surface. In principle, conformal perturbation theory can then be used to determine these properties in an open neighborhood of the solvable point. Unfortunately, in practice conformal perturbation theory is difficult to carry out in full generality.
For instance, even in an orbifold of T^6, little is known about the dependence of the spectrum and amplitudes on the twisted sector moduli corresponding to Kähler deformations. To make progress it turns out to be useful to sacrifice a little of the ambition: if we cannot determine the spectrum as a function of all the moduli, perhaps we can do so on some suitably nice locus in the moduli space. For instance, in the example of a T^6 orbifold we can look at the dependence of the spectrum on all of the untwisted moduli. This example is perhaps not very exciting, since as far as this dependence is concerned, we are basically dealing with a solvable theory. The Landau-Ginzburg models provide an important class of examples where the massless spectrum can be determined, even though the theory is not a solvable SCFT. In particular, the Landau-Ginzburg description allows us to follow the massless spectrum as parameters in the superpotential are varied.

(2,2) Landau-Ginzburg Generalities

In this section we review some standard results on (2,2) Landau-Ginzburg models [14,15] and their uses in heterotic compactification [16]. A (2,2) Landau-Ginzburg theory with a UV R-symmetry is defined by a Lagrangian for N chiral superfields X_i with canonical kinetic terms and a quasi-homogeneous superpotential W(X) satisfying W(λ^{α_i} X_i) = λ W(X_i). The superpotential coupling is a relevant deformation of the free theory, and under suitable conditions the IR fixed point is believed to be a non-trivial compact (2,2) SCFT, with the UV R-symmetry corresponding to the R-symmetry of the IR theory. In such theories the critical point set of W, i.e. points where dW = 0, is the origin in C^N, and without loss of generality the α_i may be taken to satisfy 0 < α_i ≤ 1/2.

While following the Landau-Ginzburg RG flow is a non-trivial affair, it turns out that a number of properties of the theory, in particular those involving the supersymmetric ground states, are independent of the RG details. Perhaps the most elegant way to encapsulate the accessible IR physics is via a representation of the left-moving N = 2 superconformal algebra in the cohomology of the right-moving supercharge Q of the UV theory [17]. The propagating fields in a (2,2) chiral multiplet X_i and its complex conjugate anti-chiral multiplet X̄_i consist of the boson x_i, its complex conjugate x̄_i, the right-moving fermions ψ_i^+, ψ̄_i^+, and the left-moving fermions γ_i, γ̄_i. Out of these one builds the left-moving N = 2 currents; it is easy to show that these operators are Q-closed up to the equations of motion of the UV theory, and the super-renormalizability of the Lagrangian allows us to evaluate the algebra of these operators via free-field OPEs. The result is a representation of the N = 2 algebra with central charge c = 3 Σ_i (1 − 2α_i). The conformal weights h and charges q of the fundamental fields are given in table 2.1; the table also lists the charges q̄ under the right-moving R-symmetry.
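As a quick illustration of the central charge formula (our own check, not a computation from the paper), one can verify that the four hypersurface examples listed in the introduction all give c = 9 when α_i = w_i/d for weights w_i and degree d. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction

def central_charge(alphas):
    """c = 3 * sum_i (1 - 2*alpha_i) for a (2,2) Landau-Ginzburg model."""
    return 3 * sum(1 - 2 * a for a in alphas)

models = {
    "quintic P4":       (5, [1, 1, 1, 1, 1]),
    "sextic P4_21111":  (6, [2, 1, 1, 1, 1]),
    "septic P4_31111":  (7, [3, 1, 1, 1, 1]),
    "octic P4_22211":   (8, [2, 2, 2, 1, 1]),
}
for name, (d, weights) in models.items():
    alphas = [Fraction(w, d) for w in weights]
    print(name, "c =", central_charge(alphas))  # each model prints c = 9
```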
The left-moving N = 2 algebra in Q cohomology is perfectly suited to studying supersymmetric ground states of the theory. In string theory, these right-moving Ramond sector states correspond to massless fermions in the space-time theory, and in a space-time supersymmetric compactification, knowledge of these states is sufficient to reconstruct the massless spectrum. While in general determining the Q cohomology is a challenge, the super-renormalizability of the Landau-Ginzburg theory reduces the computation to two simple steps [16]: the UV theory is truncated to the fields in the N = 2 algebra of (1), and the Q operator acts on the remaining modes through the superpotential (schematically, {Q, γ^i} = ∂_iW(x), with the remaining fields Q-closed up to the equations of motion).

The Landau-Ginzburg Orbifold

In order to apply these ideas to string compactifications, they must be generalized to Landau-Ginzburg orbifolds. This has been carried out in [16] and extended to a wide class of (0,2) heterotic backgrounds in [18-20]. In this section we review the (2,2) heterotic models. Following Gepner [21], we know that a c = c̄ = 9 (2,2) SCFT with integral q − q̄ charges can be used to construct a space-time supersymmetric E_6 × E_8 heterotic compactification. In Landau-Ginzburg models the natural way to achieve this is to orbifold by exp(2πiJ) [22]. In the Gepner construction, the "internal" theory is tensored with a free theory of 10 left-moving Majorana-Weyl fermions λ^A, as well as a level-one left-moving E_8 current algebra. Adding four (0,1) superfields for the four Minkowski directions leads to the correct central charges for a critical heterotic string, and modular invariance and space-time supersymmetry are ensured by performing a GSO projection on the world-sheet fermion numbers on the left and the right. The right-moving GSO projection is, as usual, responsible for space-time supersymmetry, while the left-moving projection ensures that the manifest SO(10) × U(1) gauge symmetry is enhanced to E_6.

In the (2,2) Landau-Ginzburg theory, the GSO projections are conveniently combined with the orbifold onto integral q: we simply orbifold by g = exp(−iπJ) and project onto suitable fermion numbers, both on the left and the right, mod 2. The massless fermions that are E_6 × E_8 singlets come from the (NS, R) sectors, i.e. the sectors twisted by g^k with k odd. In such a sector the GSO projection amounts to keeping states with integral q. Since the α_i are rational, g^{2d} = 1 for some integer d, and hence there are 2d − 1 twisted sectors. Space-time CPT exchanges the k-th twisted sector with the (2d − k)-th sector, so we need only consider k = 1, 3, ..., d.

Massless fermion states must be in the Q cohomology, and level matching implies that the left-moving energy E must vanish. The E_6 representation, as well as the type of space-time multiplet (vector or chiral/anti-chiral), is determined by the q, q̄ charges. The E_6 representation follows from the standard decomposition of E_6 under SO(10) × U(1); the E_6 singlet states must have q = 0. The type of space-time multiplet can be determined by working out the action of spectral flow on the corresponding (NS,NS) operators. The result is that a massless fermion with right-moving charge q̄ = ±3/2 is a vector; if q̄ = −1/2 it belongs to a chiral multiplet, and if q̄ = 1/2, it belongs to an anti-chiral multiplet.

The algorithm for determining the singlet spectrum is therefore quite simple: for each odd sector in the Landau-Ginzburg orbifold, we must identify states in the Q cohomology with E = q = 0 and q̄ = ±1/2. Since we restrict to the Q cohomology, the left-moving quantum numbers can be determined from the left-moving N = 2 UV algebra. It is useful to note that Q commutes with the left-moving algebra and has a definite right-moving charge q̄ = 1. Thus, in any sector the zero-energy states live in a complex · · · → U_{q̄−1} → U_{q̄} → U_{q̄+1} → · · ·, where the U_{q̄} are states with definite q̄ charge.
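Concretely, once bases for the graded pieces U_{q̄} are chosen, computing the cohomology of such a complex is finite-dimensional linear algebra. The following generic sketch (our own illustration; the matrices are placeholders, not derived from any particular W) computes dim ker − dim im at each position:

```python
import numpy as np

def cohomology_dims(maps, dims):
    """Cohomology dimensions of a complex U_0 -> U_1 -> ... where maps[i]
    is the matrix of the map U_i -> U_{i+1} and dims[i] = dim U_i."""
    ranks = [np.linalg.matrix_rank(m) for m in maps]
    out = []
    for i, d in enumerate(dims):
        r_out = ranks[i] if i < len(maps) else 0   # rank of the map leaving U_i
        r_in = ranks[i - 1] if i > 0 else 0        # rank of the map entering U_i
        out.append((d - r_out) - r_in)             # dim ker minus dim im
    return out

# Placeholder two-step complex: U_0 --A--> U_1 --B--> U_2 with B @ A = 0
A = np.array([[1.0], [0.0], [0.0]])
B = np.array([[0.0, 0.0, 1.0]])
assert np.allclose(B @ A, 0)                       # the Q^2 = 0 condition
print(cohomology_dims([A, B], [1, 3, 1]))          # -> [0, 1, 0]
```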
Quantum Numbers in Twisted Sectors

The remaining question to be addressed is the computation of the E, q and q̄ quantum numbers in the twisted sectors. The E and q quantum numbers of the twisted vacua can be obtained by using the UV N = 2 algebra. In the k-th sector the fields have a twisted moding, with 2h_i = α_i and 2h̄_i = 1 + α_i. The Fock vacuum |k⟩ is defined as the state annihilated by all positive modes. The left-moving quantum numbers of |k⟩ can be computed by working out the one-point functions of J and T via the mode expansion. Using a point-splitting regularization for J, the correlator ⟨k|J(w)|k⟩ yields the left-moving charge q_k of |k⟩; similarly, we obtain the weight h_k via w^2 ⟨k|T(w)|k⟩ = h_k. To obtain the energy, we must transform from the plane to the cylinder and remember to include the contributions from the other left-moving degrees of freedom. In sectors with k even, we find E_k = 0, an answer in line with the (2,2) supersymmetry. For k odd, the result combines the Landau-Ginzburg contribution with that of the free left-moving fermions (the SO(10) factor contributes −5/24).

The remaining quantum number is the right-moving R-charge. The simplest way to determine this is to note that in the UV the right-moving current J̄ is related to the left-moving current via J̄ = J + J_B, where J_B assigns charge +1 to γ_i and −1 to ψ_i. The J_B symmetry is independent of W, and we can evaluate the charge of the twisted vacuum by simply setting W = 0. Combining this with the result for J gives the right-moving charge of the twisted vacuum.

Table 2: Ground state quantum numbers and modings for the quintic. Unless indicated otherwise, here and in what follows F[d] denotes a degree d polynomial in the x_i with respect to the multi-grading of the relevant homogeneous coordinate ring (in this case the x_i just have charge 1). The subscript of the ket indicates the number of linearly independent states of each type.

The map Q : U_{−3/2} → U_{−1/2} acts on an arbitrary state |ψ⟩ ∈ U_{−3/2}, specified by two matrices c_ij and d_ij. What is the kernel of Q? Since W is quasi-homogeneous, dim ker Q ≥ 1 for any W. A Q-closed state is obtained by setting c_ij = d_ij = δ_ij. In fact, for generic W this is the only Q-closed state in U_{−3/2}. Its existence is not surprising: this is the vector multiplet corresponding to the unbroken U(1)_L symmetry. Thus, we find that there are generically 301 massless chiral singlets in the k = 1 sector.

When W is taken to be Fermat, we find that there are five Q-closed states in U_{−3/2}, with d_ij = d_i δ_ij and c_ij = d_ij. Again, this is not a big surprise, since at this point in the moduli space the theory reduces to the corresponding Gepner model. This theory has five unbroken U(1) currents, each of which leads to a massless vector multiplet. So, at the Fermat point we find 305 massless chiral singlets at k = 1. The physics encoded by the change in ker Q is just the supersymmetric Higgs mechanism: the disappearance of a massless vector multiplet is accompanied by the disappearance of a massless E_6 singlet chiral multiplet. The number of massless vector multiplets is given by the number of decoupled components of the Landau-Ginzburg theory. For instance, turning on the unique monomial containing all the fields breaks all but one of the currents and leads to 301 massless singlets.

Finally, we turn to the k = 3 sector. Here there are 25 zero-energy states with q = 0, as shown in (16). Clearly Q = 0 in this sector, so all of these states correspond to massless singlets.
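A back-of-the-envelope check of these counts (our own bookkeeping, assuming, as the table-2 notation suggests, that the q̄ = −1/2 states at k = 1 are parametrized by five degree-4 polynomials F_i[4]):

```python
from math import comb

n_fields = 5
quartics = comb(4 + n_fields - 1, n_fields - 1)  # 70 degree-4 monomials in x_1..x_5
dim_U_half = n_fields * quartics                 # 350: one quartic per gamma-bar mode
dim_U_3half = 2 * n_fields**2                    # 50: the matrices c_ij and d_ij

for name, dim_ker in [("generic W", 1), ("Fermat point", 5)]:
    chiral_k1 = dim_U_half - (dim_U_3half - dim_ker)  # subtract the image of Q
    total = chiral_k1 + 25                            # add the 25 singlets from k = 3
    print(f"{name}: k=1 singlets = {chiral_k1}, total = {total}")
# generic W: k=1 singlets = 301, total = 326
# Fermat point: k=1 singlets = 305, total = 330
```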
Combining these states with the k = 1 singlets, we see that the theory contains 326 massless singlets for generic W and 330 at the Fermat point. Subtracting the 1 + 101 (2,2) moduli, we find 224 singlets that are not associated to extra U(1) gauge symmetries.

The (2,2) Landau-Ginzburg Lagrangian can be deformed to a (0,2) theory by replacing ∂_iW with arbitrary quartic polynomials W_i in the component Lagrangian. Although the G^± generators of the left-moving SCA are no longer conserved, J and T still generate a left-moving U(1)_L × Virasoro algebra. The computation of the ground-state quantum numbers and Q cohomology is unchanged by these deformations.

Universal Structure in the LG Singlet Spectrum

The spectrum of singlets we have seen in the quintic in fact exhibits a pattern that is somewhat universal and will be useful in connecting the Landau-Ginzburg results to calculations in the large radius phase. Consider a Landau-Ginzburg model with N = 5 fields and arbitrary charges 0 < α_i ≤ 1/2 such that Σ_i (1 − 2α_i) = 3. We will consider the k = 1 and k = 3 sectors of any such model and find that some singlet states are universally present. To simplify expressions, we use the modings that follow from (6).

We begin with the k = 1 sector. Inserting these values we find E_1 = −1 and (q_1, q̄_1) = (0, −3/2). These values show immediately that the zero-energy states take the form given in (20). Since ν_i ∝ α_i, functions of x can be classified by their degree, f[p](λ^{α_i d} x_i) = λ^p f(x), so that p determines the charge and energy of f(x). We also use the notation n_i = α_i d ∈ Z. A possible chiral field with α = 1/2 must be distinguished here, so we use a somewhat clumsy notation in which n is the number of fields with α = 1/2. Clearly, 0 ≤ n ≤ 1; if n = 0 the first row is vacuous.

Considering the first term in (20), the zero-energy condition constrains the expansion, which extends so long as there are terms for which the degree of F is positive. All of these yield states at q = 2. The second term potentially yields two types of zero-energy states: states of the first type have (q, q̄) = (0, −3/2) and represent singlets in vector multiplets, while the second term in the first line contributes n states at (q, q̄) = (0, −5/2). The third term also yields two types of zero-energy states: the first of these contributes states at (q, q̄) = (0, −1/2) (chiral matter) and the second at (0, −3/2) (vector multiplets). This exhausts the possible singlet states at k = 1. The action of Q is determined from (15).

We now move to the k = 3 sector. In general we will not enumerate all possible states at q = E = 0 or the action of Q. Rather, we will show that certain states arise universally at q̄ = −1/2. Here a different distinction among the fields by weight is appropriate, with n ≤ m ≤ 2. Inserting these values characterizes the ground state. We will not attempt a complete characterization of all zero-energy states in this sector, but we note that certain states always arise; since we still have ν_i ∝ α_i, we classify functions of x by degree as above. All of these states have (q, q̄) = (0, −1/2) and represent scalars in chiral multiplets. The first term obviously contributes when m is nonzero. The coefficient space for these states is identical to the first row of q̄ = −3/2 states in the k = 1 sector in table 3, and in terms of counting singlets they explicitly "cancel them out."

Table 3: Universal Singlets in Landau-Ginzburg models.

In general there can be other zero-energy states at k = 3.
These can include states with charge (0, −3/2) and nontrivial Q action, so not all of the states listed above are physical. In all cases we have observed, the coefficient spaces of these q̄ = −3/2 states are then repeated as coefficients of q̄ = −1/2 states in other sectors, effectively canceling again. We term this reappearance of k = 1 states in higher sectors the "cascade." We will see an example of this in Section 5.2.3.

The set of states we have listed here is present in any Landau-Ginzburg model where the orbifold simply projects onto integral R-charges. In the case of the quintic this is the complete complement of singlets. In other models there will be additional states in various sectors, but these universally present states will figure in comparing the Landau-Ginzburg phase with the orbifold.

More LG Orbifolds

The methods described above are easily extended to study quotients of the Landau-Ginzburg models discussed above by further discrete symmetries. These will be useful when constructing the mirrors as Landau-Ginzburg models.

A model with charges α_i = n_i/d has a natural non-R action of U(1)^N acting by phases on the worldsheet chiral multiplets Φ_i. This is broken by the superpotential W, and for generic W it is broken to the Z_d by which we took the quotient above. Special nonsingular superpotentials leave a larger discrete subgroup G ⊂ U(1)^N unbroken, and we can construct quotient theories following the same steps as above, with the simplifying feature that the new symmetries act non-chirally. In what follows we will take G to be abelian, and we will not consider general choices of discrete torsion in Landau-Ginzburg orbifolds [23].

The symmetry groups by which we quotient will act by phases on the chiral multiplets. We represent the group elements by N-tuples of rational numbers defined up to integers, so that a vector w represents the action of g ∈ G, generating a cyclic action of order d(w), the smallest positive integer for which d(w)·w is integral. To construct the quotient by a group generated by vectors w^(a), we introduce twisted sectors labeled by 0 ≤ t^(a) ≤ d(w^(a)) − 1. The new symmetry is not an R-symmetry, so the twists of the bosons and fermions in (6) are modified accordingly, as in (29). We also introduce a projection onto states invariant under (29).

The G action in twisted sectors is determined from (29) by the requirement of modular invariance. For a non-chiral symmetry this is the same as the action in the untwisted sector, and the twisted vacua are uncharged. The GSO projection, however, is chiral, and thus sectors with nonzero k will in general carry a G-charge. (We would like to thank B. Wurm for his help in clarifying this point.) The charge is easily computed by working with the UV fields and free OPEs. Since the twisted bosons make no contribution, it is sufficient to consider the action of g on the fermions, which we express through an embedding into the U(1) vectorial symmetry; this U(1) is broken by the superpotential couplings, but since W is G-invariant, we can use the embedding to compute the G-charge. The result is that a twisted vacuum |k, t^(a)⟩ transforms by a phase e^{2πi q_g}.

The quotients of interest to us will be those preserving space-time supersymmetry. This requires that the left-moving spectral flow operator, in the k = 1 sector, be preserved by the projection. We can construct this operator in the free field representation of section 2.1 and find the condition (31) under which it is preserved. Note that since (29) depends on the w_i only modulo integers, this condition is equivalent to the more familiar one in terms of restricting the allowable quotients.
In the twisted sectors, (31) will hold when the w are chosen to satisfy the more stringent condition.

Gepner Models and Mirror Symmetry

For special values of the superpotential couplings, the Landau-Ginzburg model in all of our examples is an exactly solvable theory [24]. Prior to the orbifold of section 2.2, we have a product of (2,2) minimal models. This, of course, allows a calculation of the spectrum of singlets at this point in the moduli space, but this is equivalent to a special case of the Landau-Ginzburg calculation, as discussed above. The utility, for our work, of the Gepner model is that at this point we have a construction [25] of the mirror model as an orbifold, as well as an explicit mirror map in terms of the Gepner construction. This allows us to find the singlet states in the mirror model corresponding to the states we enumerate using the Landau-Ginzburg construction. We can then construct the mirror as a Landau-Ginzburg orbifold and study the dependence of the singlet spectrum on the mirror superpotential. Since deformations of the mirror superpotential are mapped to Kähler deformations in the original model, we can thus predict which singlet states will be lifted by Kähler deformations away from the Landau-Ginzburg locus.

In all of our examples, the exactly solvable superpotential will be a sum of terms of the form x_i^{k_i+2}, corresponding to an A_{k_i+1} minimal model at level k_i, and of the form x_i^{l_i+1} + x_i y_i^2, corresponding to a D_{l_i+2} minimal model at even level k_i = 2l_i, leading to a tensor product of n minimal models. (We hope there will be no confusion between the minimal model levels labeled by k_i and the twisted sectors labeled by k.) Primary fields in the level-k minimal model are labeled Φ^{l,l̄}_{q,s;q̄,s̄}, where 0 ≤ l, l̄ ≤ k, subject to the identifications of q and s modulo 2(k+2) and 4 respectively (and the same for q̄, s̄), as well as Φ^{l,l̄}_{q,s;q̄,s̄} ∼ Φ^{k−l,k−l̄}_{q+k+2,s+2;q̄+k+2,s̄+2}. Fields with even (odd) s create states in the NS (R) sector. (In fact, s = 2 states are not primary, but after the orbifold they do create highest weight states; this is related to the fact that the quotient projects out some modes of the supercurrents in the individual minimal models.) We use Gepner's notation (l q s | l̄ q̄ s̄) for the state created by (35). In standard conventions, the U(1) charge and conformal weight of a state are q/(k+2) − s/2 (mod 2) and [l(l+2) − q^2]/[4(k+2)] + s^2/8 (mod 1).

The minimal model at level k has a partition function assembled from A_{l,l̄}, the appropriate affine modular invariant at level k+2, with a factor of 1/2 reflecting the identification (35). The model enjoys a discrete symmetry G_k = Z_{k+2} × Z_2 under which the state (l q s) has weights q, s. In the associated Landau-Ginzburg model we will be interested in the Z_{k+2} subgroup of this symmetry.

The Gepner construction of a string vacuum as a quotient of the tensor product was introduced in Section 2.2. We add free fields and perform a quotient by Z_d × Z_2^n. The quotient introduces twisted sectors in which q̄_i, s̄_i differ from q_i, s_i by k, and additional twists in which any two s̄ indices are shifted by 2. The gauge symmetry of the model is E_6 × E_8. The SO(10)-neutral scalars in chiral multiplets are states with q̄ = 1, h̄ = 1/2 and q = h = 0. The corresponding fermion states are obtained by applying the spacetime supersymmetry generator, shifting q̄, s̄ by one and leading to q̄ = −1/2. We will denote states in the resulting model by their quantum numbers.

The mirror model is constructed as a further quotient by a subgroup of G, essentially the subgroup under which the spacetime supercharge is invariant.
The quotient introduces twisted sectors in which q̄_j is shifted relative to q_j by 2 t_a m^(a), for a lattice generated by a set of integer vectors m^(a) (together with the associated projection). The result of the construction is [25] a model in which the primary fields are related to those of the original theory by q → −q, s → −s. The orbifold construction can be realized in the Landau-Ginzburg model as a quotient following Section 2.6, with a corresponding phase action on the chiral superfields.

To use mirror symmetry to study the behavior of the singlet spectrum under Kähler deformations away from the Gepner point, we first find the spectrum of singlets in the original model. For each of these we construct the mirror state and identify the twisted sector in the mirror quotient in which it arises. We then consider the mirror Landau-Ginzburg model constructed as an orbifold. The mirror superpotential will admit polynomial deformations related by the monomial-divisor mirror map to the toric Kähler deformations in the original model. In the relevant twisted sectors, we can study the change in Q cohomology when the superpotential is deformed, thus identifying which singlet states are lifted under Kähler deformations away from the Gepner point.

The tangent sheaf

Let T denote the tangent sheaf (or bundle; we will use the terms interchangeably here) of a Calabi-Yau threefold X. We are interested in first order deformations of T, since they correspond classically to massless fields allowing, to first order, a deformation of a (2,2) model to a (0,2) model. Such deformations are given by Ext^1_X(T, T) = H^1(X, End(T)). For a simple argument we refer to [26], chapter 15. Methods of computing H^1(End(T)) have been studied for some time [5,27,28] for Calabi-Yau manifolds in products of projective spaces. Here we give a method that is reasonably direct for complete intersections in toric varieties.

Let V be a compact toric variety and let X be a Calabi-Yau complete intersection within V. That is, we are in the context of Calabi-Yaus as studied in [29,30]. We quickly review the construction of V to fix notation. Let x_0, . . . , x_{N−1} be the homogeneous coordinates on V; that is, we have a homogeneous coordinate ring R in the sense of Cox [31]. We then have a short exact sequence presenting a lattice D of rank r as a quotient of Z^N by a matrix Φ. Each column of Φ can be thought of as a U(1)^r charge vector of the coordinates x_i, so that R has the structure of an r-multigraded ring. The toric variety is given as the quotient of C^N − Z(B), where B is the "irrelevant ideal" in R and Z(B) is the associated subvariety of C^N; B is determined combinatorially from the fan describing V.

Let v denote an element of the lattice D, i.e., an r-vector. If M is a multigraded R-module, then we may shift multi-gradings to form M(v). Let q_i denote the row vectors of the transpose of Φ; that is, q_i represents the multigrading of the homogeneous coordinate x_i. Let T_V be the tangent sheaf of V. Assuming V is smooth, we have the generalization of the Euler exact sequence for a toric variety [32]. Suppose X is a smooth hypersurface in V representing the anticanonical class. Then X is a Calabi-Yau manifold and we have the adjunction exact sequence relating T_X, the restriction T_V|_X, and the normal bundle O_X(Q), where Q denotes the anticanonical grade. Since all the sheaves in (42) are locally free, we may restrict to X and the sequence will remain exact. Combining this with the sequence (44) yields the following fact: the complex O_X^{⊕r} → ⊕_i O_X(q_i) → O_X(Q) is exact everywhere except at the middle term, where the cohomology is isomorphic to the tangent sheaf T_X.
Here $W$ denotes the defining equation for the hypersurface $X$. It is easy to generalize this to the case of a complete intersection. Suppose $X$ is defined by an intersection $W_1 = W_2 = \cdots = 0$, and let each $W_a$ have grade $Q_a$. Then the tangent sheaf is given by the cohomology of
$$\mathcal{O}_X \xrightarrow{\;x_i q_i\;} \bigoplus_i \mathcal{O}_X(q_i) \xrightarrow{\;\partial_i W_a\;} \bigoplus_a \mathcal{O}_X(Q_a).$$
For the remainder of the paper we assume $X$ is a hypersurface.

Before heading into the more complicated $H^1(\operatorname{End}(T_X))$ computation, it will be useful to consider the cohomology of the tangent sheaf itself. It is most convenient to use the language of the derived category $D(X)$ to manipulate the tangent sheaf. The tangent sheaf is equivalent in $D(X)$ to the complex (45), where the middle position of (45) is considered position zero. Suppose we have an object $E^\bullet$ in $D(X)$ represented by a complex of sheaves $\mathcal{E}^p$. We can consider the total cohomology (or hypercohomology) of this complex, $\mathbb{H}^n(E^\bullet)$. There is a spectral sequence with [33]
$$E_1^{p,q} = H^q(X, \mathcal{E}^p) \;\Rightarrow\; \mathbb{H}^{p+q}(E^\bullet).$$
Fortunately it is straightforward to compute the cohomology groups $H^q(V, \mathcal{O}_V(v))$. Actually we need to study these cohomology groups in detail, and we give a relevant method (if not the most efficient) in the appendix. Then one may use exact sequences of the form
$$0 \longrightarrow \mathcal{O}_V(v - Q) \longrightarrow \mathcal{O}_V(v) \longrightarrow \mathcal{O}_X(v) \longrightarrow 0$$
to restrict to $X$.

The dotted line in the spectral sequence (49) represents terms which contribute to $H^1(T_X)$, that is, deformations of complex structure. Since $X$ is a Calabi-Yau threefold we know that $H^1(\mathcal{O}_X) = H^2(\mathcal{O}_X) = 0$. Also, since $Q$ corresponds to an ample divisor, $H^1(\mathcal{O}_X(Q)) = 0$. The contribution to $H^1(T_X)$ from the zeroth row (i.e., $q = 0$) corresponds to the cokernel of the $d_1$ map induced from the complex (45). This is given by elements of $H^0(\mathcal{O}_X(Q))$ which are not multiples of $\partial W/\partial x_i$. This is immediately recognizable as deformations of the defining polynomial $W$ modulo reparametrizations. These are the usual "polynomial deformations" of $X$. The spectral sequence then yields the following little result:

Theorem 1. The non-polynomial deformations of complex structure for a Calabi-Yau hypersurface $X$ in a toric variety are given by $\bigoplus_i H^1(X, \mathcal{O}_X(q_i))$.

This should be compared with a similar result obtained for deformations of complex structure of smooth projective toric varieties [35].

End(T)

We would now like to compute the cohomology groups $H^k(X, \operatorname{End}(T))$. These groups may also be written $\operatorname{Ext}^k_X(T, T)$. The machinery of the derived category is well-suited to compute these cohomology groups, as we discuss. We refer to [36,37] for more details. Given a complex of coherent sheaves $E^\bullet$ and a similar complex $F^\bullet$, we may form an object in $D(X)$ which represents the object $\mathcal{H}om(E, F)$. It is given by the complex whose $n$th term is
$$\mathcal{H}om(E, F)^n = \bigoplus_p \mathcal{H}om(\mathcal{E}^p, \mathcal{F}^{p+n}).$$
If $\varphi \in \mathcal{H}om(E^\bullet, F^\bullet)^n$, then we define the differential of this new complex by
$$d\varphi = d_F \circ \varphi - (-1)^n\, \varphi \circ d_E,$$
as in [38]. The total cohomology of the object $\mathcal{H}om(E, F)$ represents the "hyperext" groups $\operatorname{Ext}(E, F)$.

Let us apply the above to the case of the tangent sheaf. From (45) we have a complex representing $\mathcal{H}om(T, T)$, given by (55), where the dotted line represents position zero. The maps in this complex are derived from $x_i q_i$ and $\partial_i W$ by using (54). This yields

Theorem 2. There is a spectral sequence whose $E_1$ term is given by $E_1^{p,q} = H^q(X, \mathcal{H}om(T, T)^p)$, where $\mathcal{H}om(T, T)$ is given by (55). This converges to $\operatorname{Ext}^{p+q}_X(T, T)$.

This theorem gives a practical method of computing $H^1(\operatorname{End}(T))$, as we discuss in several examples below.

The Quintic

The quintic threefold in $\mathbb{P}^4$ provides a simple example to demonstrate theorem 2. As we will see in this paper, the quintic is deceptively simple and fails to demonstrate most of the interesting phenomena that can happen for counting singlets.
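The line-bundle cohomology that appears in the quintic example reduces to counting monomials on $\mathbb{P}^4$. The following sketch (a minimal illustration of the counting, not code from the paper) tabulates the dimensions used in the next paragraphs: the 126 quintic monomials, the 350-dimensional source of the map $(g_i) \mapsto \sum_i x_i g_i$ whose surjectivity is invoked below, and the familiar count $126 - 25 = 101$ of polynomial deformations of the quintic:

```python
from math import comb

def monomials(n_vars, degree):
    """Number of monomials of the given degree in n_vars variables
    (= dim H^0(P^{n_vars-1}, O(degree)))."""
    return comb(degree + n_vars - 1, n_vars - 1)

quintics = monomials(5, 5)      # dim H^0(O(5)) on P^4
quartics = monomials(5, 4)      # dim H^0(O(4))
print(quintics)                 # 126
print(5 * quartics)             # 350 sources for (g_i) -> sum_i x_i g_i
print(quintics - 5 * 5)         # 126 - dim gl(5) = 101 polynomial deformations
```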
Nevertheless it always provides a good example to start with. For the quintic, the tangent complex (45) becomes
$$\mathcal{O}_X \longrightarrow \mathcal{O}_X(1)^{\oplus 5} \longrightarrow \mathcal{O}_X(5).$$
Before writing down the full spectral sequence we should note that Serre duality gives
$$H^q(X, \mathcal{O}_X(n)) \cong H^{3-q}(X, \mathcal{O}_X(-n))^*.$$
This makes the second and third rows of the spectral sequence copies of rows one and zero written in reverse. We obtain the $E_1$ page accordingly, where we show contributions to $H^1(\operatorname{End}(T))$ with the dotted line. To compute the $E_2$ stage of the spectral sequence requires an explicit determination of the $d_1$ maps in (61). This is not so bad, since the monomials of degree $n$ form a basis (up to the quintic defining equation) of $H^0(\mathcal{O}(n))$. Actually, since the holonomy of the quintic threefold is precisely $SU(3)$, the tangent sheaf is irreducible and so, by Schur's lemma, $H^0(\operatorname{End}(T))$ has dimension one; this fixes the rank of the corresponding $d_1$ map. Since clearly any degree 5 polynomial can be expressed as a sum $\sum_i x_i g_i$ for quartic $g_i$'s, the map $d_1$ is surjective for any $W$. Thus the spectral sequence degenerates at $E_2$ and we obtain $\dim H^1(X, \operatorname{End}(T)) = 224$, in agreement with known results [39] and section 2.4. Moreover, all 224 singlets correspond to (0,2) GLSM deformations.

Relating the Computations

The main point of this work lies in comparing calculations of the singlet spectrum valid at various loci in the moduli space of (2,2) theories. At different loci we apply different techniques, and in comparing the results we can find interesting relations between the calculations.

End(T) and GLSM Deformations

The spectral sequence of section 3.2 is closely related to the (0,2) GLSM holomorphic parameters studied in [9]. In addition to the toric Kähler parameters, the (0,2) superpotential is encoded by the maps $E^i$ and $J_i$ in the complex analogous to (45), where $W \in H^0(\mathcal{O}_V(Q))$ specifies the hypersurface. In order for this to be a complex, this data must satisfy
$$\sum_i E^i J_i = 0.$$
This is the famous (0,2) supersymmetry constraint. A superpotential specified by $E^i, J_i$ leads to the same IR physics as one specified by $E'^i, J'_i$ whenever the two are related by holomorphic field re-definitions. These re-definitions act on various (0,2) multiplets and can be identified with sections of the appropriate line bundles; they must of course be taken modulo gauge transformations and $U(1)_L$ invariance.

The data in $E$, $J$ and $W$ encodes both bundle and polynomial Calabi-Yau deformations. As is familiar from the monomial-divisor mirror map, the latter are nicely described by toric geometry [40] of $V$ and the Newton polytope for $W$. In particular, a choice of $W$ fixes the polynomial complex structure moduli of the Calabi-Yau, as well as re-definitions of the charged matter fields modulo gauge invariance. Supposing we have fixed the complex structure on $X$, we can ask about the remaining (0,2) deformations and field re-definitions. The first order deformations $\delta E^i$ and $\delta J_i$ fit into the first position (recall that the zeroth position is marked by the dotted line) of the complex, and they must satisfy $\delta E^i J_i + E^i \delta J_i = 0$. This complex is just what we get for $H^0(\mathcal{H}om(T, T))$, where $\mathcal{H}om(T, T)$ is given in (55), but with two important differences. Firstly, the sheaves relevant to the GLSM are defined over $V$, while those relevant to the geometric analysis are defined over $X$. This can lead to differences in the counting. For instance, in general $H^0(\mathcal{O}_V(q_i - q_j)) \neq H^0(\mathcal{O}_X(q_i - q_j))$, and the latter can have additional holomorphic sections.
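On the (2,2) locus the constraint holds automatically by quasi-homogeneity of $W$, through the Euler identity $\sum_i q_i x_i\, \partial_i W = (\deg W)\, W$, which is the identity invoked later as "the gauge invariance of $W$". A quick sympy check of the identity for the Fermat sextic (our illustrative choice of $W$, not one singled out by the paper):

```python
import sympy as sp

x = sp.symbols('x0:5')
q = [2, 1, 1, 1, 1]                        # weights of the coordinates
W = x[0]**3 + sum(xi**6 for xi in x[1:])   # Fermat sextic, weighted degree 6

euler = sum(qi * xi * sp.diff(W, xi) for qi, xi in zip(q, x))
print(sp.simplify(euler - 6 * W))          # 0: quasi-homogeneity of W
```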
When the restriction to $X$ has such additional sections, the GLSM superpotential modulo holomorphic field re-definitions over-parametrizes the bundle deformations, since there are automorphisms of the NLSM that cannot be lifted to holomorphic re-definitions of the UV theory. We will see an example of this phenomenon in the septic. Secondly, and more obviously, we are missing the first two terms of (55). These will vanish on $V$, but there are examples (not in this paper) where the restriction to $X$ can give nonzero entries. We should also note another subtlety in the comparison: there is a difference between counting first order solutions to the supersymmetry constraint (65) and demanding that it is satisfied to all orders. The latter leads to GLSM deformations, while the former corresponds to massless states accessible via the GLSM.

Mapping Geometry to the Landau-Ginzburg Theory

It is interesting to compare the geometric computation of the number of singlets with the Landau-Ginzburg description. At a crude level we know that the numbers agree, but can we make a more precise map? We can make some headway via a series of exact sequences as follows. We restrict attention to the case $r = 1$, i.e., there is only one $U(1)$ charge. Define $\mathcal{A}$ via the tangent complex; (45) then gives exact sequences relating $\mathcal{A}$ to $T$ and to the line bundles appearing there. These exact sequences show how the three sources of singlets, $H^1(T)$, $H^2(T)$ and $\operatorname{Ext}^1(T, T)$, combine into $\operatorname{Ext}^1(\mathcal{A}, \mathcal{A})$ (minus one).

Now let $L_1$ and $L_3$ be the "universal" contributions from the Landau-Ginzburg theory at $k = 1$ and $k = 3$ respectively, from table 3. That is, $L_1$ sits in an exact sequence of the corresponding cohomology groups, and $L_3 = \bigoplus_i H^0(\mathcal{O}(q_i))$. These statements are written for the toric variety $V$, but one can show, in the case $r = 1$, that they are also valid on restriction to $X$. The relationship between $L_1$, $L_3$ and $\operatorname{Ext}^1(\mathcal{A}, \mathcal{A})$ is expressed by the fact that two further sequences are exact (for generic $W$). It is easy to see from this that the dimensions are correct, i.e., the singlet count between the Landau-Ginzburg picture and the large radius picture agrees; but we also see that the precise mapping of singlets between these pictures is quite subtle.

Orbifolds and a (0,2) McKay Correspondence

Phases in which $X$ acquires orbifold singularities are an interesting intermediate situation between the large-radius geometric phase and the Landau-Ginzburg phase. Near an orbifold limit point we can distinguish untwisted "bulk" states from twisted states localized near the singular locus. Deep in the orbifold phase, when all sizes in $X$ other than the cycle whose shrinking is responsible for the singularity are taken large, the geometry near the singular locus tends to a limit in which the space transverse to the singular locus is simply $\mathbb{C}^{d-D}/\Gamma$ for a quotient group $\Gamma$ and a singular locus of dimension $D$. In the orbifold limit, the theory acquires a discrete quantum symmetry, and states can be classified by their transformation properties under it. Invariant states, also termed untwisted states, correspond to strings occupying the "bulk" of $X$. Charged, or "twisted", states represent strings localized near the singular locus. The spectrum of massless twisted states can be determined from the local structure of $X$ near the singular locus. When $X$ is a hypersurface in a toric variety, the GLSM provides a simple description of the untwisted sector. Deep in the orbifold phase, some of the chiral fields acquire large expectation values, breaking the gauge group down to a subgroup $U(1)^{r'} \times \Gamma$, $r' < r$.
Fluctuations of these fields acquire large masses through the Higgs mechanism, and integrating them out we find an effective theory of the remaining chiral fields interacting with $r'$ gauge multiplets and via an effective superpotential $\widehat{W}$. Applying the GLSM picture of section 4.1 to this reduced model produces the singlets in the untwisted sector of the model.

To describe the twisted sector we use the fact that twisted states are localized near the singular locus. This allows us to use the local geometry to find a free-field description near a point on the singular locus, following [13]. For the reader's convenience we recall the analysis, recast in the notation we use here, restricting attention for simplicity to the case $\Gamma = \mathbb{Z}_p$. Near a point on the singular locus we pick local coordinates $x_i$, $i = 1, \ldots, 3$, on which the $\Gamma$ action is generated by $x_i \to e^{2\pi i n_i/p}\, x_i$ with $\sum_i n_i = 0 \pmod p$, and consider a free theory with chiral supermultiplets $\Phi_i$ and this (non-R) action. The twisted sectors of this orbifold of a free field theory, our approximation to the twisted sectors of the orbifold phase of $X$, can be described using the techniques of Section 2, simply setting $W = 0$ and $\alpha_i = 0$. We find twisted sectors labeled by $(k, s)$, where $k = 1, \ldots, p-1$, and $s = 0, 1$ distinguishes R ($s = 0$) from NS ($s = 1$) boundary conditions on the left-moving fermions. In the sector $(k, s)$ the boundary conditions on the fields are determined by the twists, while $h_i = 0$ and $\bar h_i = \tfrac12$. With $W = 0$, computing $Q$ cohomology is trivial, but we need to perform the projection onto $\Gamma$-invariant states, as well as the GSO projection. The quotient preserves four worldsheet supercharges (defined to within overall factors), whose superscripts label their charges under the fermion number currents.

The case $D = 0$, an isolated singular point, is simplest. We study an example in section 5.2. The free field theory exhibits (in general) an unbroken $E_6$ gauge symmetry, and the methods we presented lead to predictions for the massless spectrum in twisted sectors. We find $\mathbf{27}$ and $\overline{\mathbf{27}}$ multiplets and their conjugates, along with the $E_6$ singlets related to them by the left-moving supersymmetry, which are (2,2) moduli. These are the subject of the McKay correspondence. In our conventions, chiral $\mathbf{27}$s correspond to Kähler deformations, while chiral $\overline{\mathbf{27}}$s correspond to deformations of complex structure. There will also be $E_6$ singlets in the twisted sectors, for which one can attempt to find an analogous correspondence. Note that the conjugate states will arise in the conjugate sector, so that in general some of these singlet states might be lifted in pairs by deformations resolving the singularity.

In the case $D = 1$, a curve $C$ of singularities, we proceed in stages. Near a point on $C$ we can find local coordinates as above, such that $n_1 = 0$. There are bosonic $x_1$ zero modes in the twisted sectors, so our states will locally be described by functions of $x_1$. A novel feature of this construction is that excluding $x_1$ means the zero mode of $\bar\psi_1$ is no longer $Q$-exact, and we will need to include it in our computations (recall that we compute in the right-moving R sector, where $\psi_1$ satisfies the same untwisted boundary conditions as $x_1$). The free-field quotient preserves an unbroken $E_7$ gauge symmetry given by the embedding $E_7 \times SU(2) \subset E_8$. We can then consider this quotient structure fibered over the (assumed to be large) curve $C$.
The low-energy physics will be described by a nonlinear sigma model on $C$, with the fields determined from the orbifold construction as above. These will couple to the spin connection on $C$, so that the functions of $x_1$ become sections of appropriate bundles. To find the spins we note that the curvature of $C$ breaks $E_7 \to E_6$. Since this fits into the maximal embedding $E_6 \times SU(3) \subset E_8$, we see that $U(1)_C$ must be a subgroup of this $SU(3)$. The fact that it must also commute with an $SU(2)$ subgroup acting in the transverse directions (before taking the quotient) determines it.

Simply compactifying the quotient theory on the curve $C$ will not lead to a supersymmetric spectrum. In fact, the local structure is described not as $C \times (\mathbb{C}^2/\Gamma)$ but as a fibration over $C$ such that the resulting space is Calabi-Yau. In terms of our fields, the transverse coordinates $x_i$ for $i \neq 1$ transform under $J_C$ in such a way that two of the supercharges are invariant. Moreover, the zero mode of $\bar\psi_1$, lying in the cokernel of $\bar Q$ in the nonlinear sigma model, becomes a section of $\Omega$, the cotangent bundle of $C$. When acting on a vertex operator corresponding to a space-time fermion in a massless chiral multiplet (with $\bar q = -\tfrac12$), $\bar\psi_1$ will create the vertex operator for a massless antichiral multiplet ($\bar q = \tfrac12$). Thus fields in the nonlinear theory with charge $\bar q = -\tfrac12$ and spin $q_C$ will produce massless chiral multiplets corresponding to $H^0(C, \Omega^{\otimes q_C})$. Fields with charge $\bar q = \tfrac12$ and spin $q_C$ will produce massless antichiral multiplets corresponding to $H^1(C, \Omega^{\otimes(q_C+1)})$. CPT invariance, the requirement that each chiral multiplet of charge $q_C$ be accompanied by an antichiral multiplet of charge $-q_C$, is then tantamount to Serre duality on $C$.

This gives a matching between counting singlets in the orbifold language and the large radius language which works almost perfectly. Some interesting subtleties concerning dependence on complex structure will be seen in section 5.3.2.

The correspondence between the Landau-Ginzburg picture and the large radius picture proceeds typically as shown in figure 1. The orbifold corresponds to a weighted projective space where we essentially just consider a single $\mathbb{C}^*$-action (i.e., $r = 1$), and so the results of section 4.2 apply. That is, we may relate the Landau-Ginzburg picture to the orbifold picture using the $L_1$ and $L_3$ contributions to the cascade. Then the orbifold may be related to the large radius limit by using the above (0,2) McKay correspondence.

Geometry

The weighted projective space $\mathbb{P}^4_{\{2,1,1,1,1\}}$ with homogeneous coordinates $[x_0, \ldots, x_4]$ has a terminal singularity at $x_1 = x_2 = x_3 = x_4 = 0$. If $X$ is a generic hypersurface of degree 6 then it will not intersect this singularity. The Hodge numbers of $X$ are $h^{1,1}(X) = 1$ and $h^{2,1}(X) = 103$. The analysis of $H^1(X, \operatorname{End}(T))$ looks quite similar to that of the quintic; the spectral sequence gives $\dim H^1(X, \operatorname{End}(T)) = 230$. Adding in singlets from $h^{1,1}$, $h^{2,1}$ and $U(1)$ partners, we would predict a value for the Gepner model of $230 + 1 + 103 + 4 = 338$. The actual value, as given in the table of [7], is 344. We are short by 6.

The Landau-Ginzburg locus

The superpotential is a degree 6 polynomial with weights $\alpha_0 = \tfrac13$ and all other $\alpha_i = \tfrac16$ (we take $i = 1, \ldots, 4$).
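The Hodge number $h^{2,1} = 103$ quoted above (and the 122 of the septic below) can be checked by elementary weighted monomial counting: sections of $\mathcal{O}(6)$ minus the reparametrizations counted by $\bigoplus_i H^0(\mathcal{O}(q_i))$. A sketch of the counting (ours, with the convention that the sum over $H^0(\mathcal{O}(q_i))$ absorbs the overall rescaling of $W$):

```python
from itertools import product

def weighted_monomials(weights, degree):
    """Count monomials x^a with sum(w_i * a_i) = degree."""
    count = 0
    ranges = [range(degree // w + 1) for w in weights]
    for exps in product(*ranges):
        if sum(w * e for w, e in zip(weights, exps)) == degree:
            count += 1
    return count

w = (2, 1, 1, 1, 1)
sections_O6 = weighted_monomials(w, 6)                    # dim H^0(O(6)) = 130
reparam = sum(weighted_monomials(w, wi) for wi in w)      # 27 reparametrizations
print(sections_O6 - reparam)                              # 103 = h^{2,1} of the sextic

w7 = (3, 1, 1, 1, 1)
print(weighted_monomials(w7, 7)
      - sum(weighted_monomials(w7, wi) for wi in w7))     # 122 for the septic below
```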
Working sector by sector, we find the zero-energy states with $q = 0$. In the $k = 1$ sector, $Q: U_{-3/2} \to U_{-1/2}$ has a one-dimensional kernel for generic $W$; the dimension increases to 5 at the Fermat point. This is just what we have already observed in the quintic. So, we find 307 $k = 1$ chiral singlets for generic $W$, and 4 more at the Fermat point. The $k = 3$ states arise from the "cascade" picture described above, while the six $k = 5$ states account for the discrepancy with geometry. As we will see, mirror symmetry shows that these states are lifted once we turn on the Kähler modulus to move away from the Landau-Ginzburg point.

Mirror Symmetry and Kähler Deformations

The Gepner model in this moduli space is given by a (quotient of the) product $A_2 \oplus A_5^{\oplus 4}$ of minimal models at levels $k_i = 4$ and $k_0 = 1$. In addition to the universal "cascade" states in the $k = 1$ and $k = 3$ sectors, there are six singlet states in the $k = 5$ sector. The Gepner model enjoys a discrete $S_6$ symmetry which permutes these singlet states. The Kähler deformation does not break this symmetry, so either all of these singlets will be lifted by the deformation or none will. Explicitly, the six massless singlet states at $k = 5$ are given by permutations of a single state $S$; for clarity we have not cast this into standard form, exhibiting it in a way that makes the twist manifest.

The mirror model is given by a quotient of the type described above. The state mirror to $S$ (obtained by reversing the signs of $q$ and $s$) satisfies the twist condition with $m^{(0)} = (1,1,1,1,1)$, and so will appear in the sector $(k; t) = (11; 1, 1, 2)$. Note that while the mirror model obviously shares the permutation symmetry of the original, this is not evident in its presentation as a quotient. Thus the states related to $S$ by permutations will arise in other twisted sectors. In the Landau-Ginzburg model the discrete group acts via phase rotations of the fields. Constructing the Landau-Ginzburg orbifold, we find in this sector a set of states at $q = 0$ built on the twisted vacuum $|v\rangle$. To find the action of $Q$, note that since $\nu_0 = 0$, the only term that can possibly contribute is $\gamma_0^\dagger (\partial_0 W)_0$. At the Gepner point $\partial_0 W = 2x_0^2$, and expanding this we find that the $q = -\tfrac12$ state contributes to the cohomology at the Gepner point.

The mirror Landau-Ginzburg model has a unique superpotential deformation (related by mirror symmetry to the Kähler deformation of the original model). This corresponds to adding to the Fermat superpotential the unique monomial invariant under the quotient group, $\psi\, x_0 x_1 x_2 x_3 x_4$. This modifies $Q$ as found above, introducing a term $\gamma_0^\dagger\, \psi\, x_1 x_2 x_3 x_4$, which upon expansion renders the kernel trivial. The state $S$, and thus all six $k = 5$ singlet states found above, are lifted for $\psi \neq 0$ by a Kähler-dependent mass term.

Geometry

Let $X_{\mathrm{orb}}$ be a septic hypersurface in $\mathbb{P}^4_{\{3,1,1,1,1\}}$. This weighted projective space has a codimension 4 quotient singularity, but the degree of the hypersurface forces $X_{\mathrm{orb}}$ to pass through this point. $X_{\mathrm{orb}}$ thus has an isolated singularity of the form $\mathbb{C}^3/\mathbb{Z}_3$. $X$ has an interesting relation with the Calabi-Yau threefold $X'$, the resolution of the degree 14 hypersurface in $\mathbb{P}^4_{\{7,2,2,2,1\}}$, as first observed in [41]. One may follow extremal transitions between hypersurfaces in toric varieties [42] by shrinking the Newton polytope and thus growing its polar. That is, one drops terms in the defining equation corresponding to vertices of the convex hull of $P^\circ$. Typically this makes $X$ singular.
The fact that $P$ grows corresponds to a resolution of singularities, which allows $X$ to pass through an extremal transition. We may try to do the same thing with our septic by shrinking the convex hull of (90) to that of (91). The polar of this Newton polytope corresponds to $\mathbb{P}^4_{\{7,2,2,2,1\}}$, and thus $X$ appears to have undergone an extremal transition to $X'$. That said, the defining equation (91) is actually smooth. The supposed transition is not a transition at all, and $X'$ is merely a smooth deformation of $X$.

Actually the septic $X$ should be considered more generic than the degree 14 hypersurface $X'$ in the following sense. All 122 deformations of complex structure of $X$ are seen as polynomial deformations. For $X'$, theorem 1 shows that 15 of the deformations are non-polynomial. This is because $X'$ has an exceptional divisor of the form $C \times \mathbb{P}^1$, where $C$ is a genus 15 curve. As observed in [43], a generic deformation of $X'$ will break this divisor up into 28 rational curves. The latter geometry is seen in a generic $X$.

The analysis of $H^1(X, \operatorname{End}(T))$ proceeds as before via the spectral sequence. This has a new feature compared to the quintic and sextic of the previous sections: the $H^0$ of various line bundles computed in the bottom row of the spectral sequence must be computed on $X$ and not copied from $V$. For example, $H^0(\mathcal{O}_V(4,-1))$ is trivial while $H^0(\mathcal{O}_X(4,-1))$ is one-dimensional. Anyway, this gives $\dim H^1(X, \operatorname{End}(T)) = 288$. We can compare this to the Gepner model for $X'$. Adding in singlets from $h^{1,1}$, $h^{2,1}$ and $U(1)$ partners, we would predict a value for the Gepner model of $288 + 2 + 122 + 3 = 415$, which is correct.

The counting of (0,2) GLSM deformations following [9] naïvely yields 292 parameters associated to $H^1(X, \operatorname{End}(T))$. This is due to the four extra automorphisms counted by $H^0(\mathcal{O}_X(4,-1))$ that cannot be lifted to the GLSM. Modulo this subtlety, we expect that all of the (0,2) singlets identified by the geometric computation can be integrated up to deformations of the (0,2) superpotential.

The Orbifold

As argued in section 4.3, we may analyze $X_{\mathrm{orb}}$ in terms of the toric picture of the weighted projective space. That is, we have a homogeneous coordinate ring $R = \mathbb{C}[x_0, \ldots, x_4]$ with the grading given by the weights $(3,1,1,1,1)$, and the irrelevant ideal is simply $B = (x_0, x_1, \ldots, x_4)$. The spectral sequence of (49), which computes the cohomology of the tangent sheaf, predicts $h^{2,1} = 122$ and $h^{1,1} = 1$. Recall that these are the contributions from the untwisted sector of the orbifold. The value of $h^{2,1}$ is correct, but we need to add one twisted state to $h^{1,1}$ to account for the $\mathbb{C}^3/\mathbb{Z}_3$ singularity. Then we agree with the above.

The corresponding spectral sequence for $\operatorname{End}(T)$ yields $\dim H^1(X_{\mathrm{orb}}, \operatorname{End}(T)) = 280$. Comparing to the above, we see that there must be 8 twisted states to yield the total of 288. So we predict that there are 9 twisted singlet states: 1 contributing to $h^{1,1}$ and 8 to $H^1(\operatorname{End}(T))$. Indeed the free-field calculation of [13] reproduces this. We have here two twisted sectors $k = 1, 2$, and since they are related by conjugation we may restrict attention to $k = 1$. These twisted vacua are invariant under $\mathbb{Z}_3$, i.e., $q_g$ of (31) is zero.

The Landau-Ginzburg analysis

The Landau-Ginzburg phase of the corresponding GLSM is described by a degree 7 superpotential $W(x_0, \ldots, x_4)$, with $\alpha_0 = \tfrac37$ and $\alpha_i = \tfrac17$ for $i = 1, \ldots, 4$.
The zero-energy states with $q = 0$ can be enumerated sector by sector. Once again, for $k = 1$, $Q: U_{-3/2} \to U_{-1/2}$ has a one-dimensional kernel for $W$ generic and a five-dimensional kernel at the Gepner point, which corresponds to the minimal model $D_8 \oplus A_6^{\oplus 3}$. Adding up the states, we find 366 $k = 1$ chiral singlets for generic $W$. In the $k = 3$ sector, $Q$ has a trivial kernel unless $W_{55} = W_{i5} = 0$, but that is a singular superpotential. So, this sector contributes $16 + 21 - 4 = 33$ singlets for any non-singular $W$. This model demonstrates the "cascade" described in 2.5: in the third sector we have, in addition to the $q = -\tfrac12$ states listed in (28), a set of four states of charge $q = -\tfrac32$ and a nontrivial action of $Q$, so that there are fewer massless singlets in this sector than the "universal" prediction. However, these states return in the $k = 9$ sector (not shown) as four states of charge $q = -\tfrac12$ (related by conjugation to the four states at charge $q = \tfrac12$ at $k = 5$). Finally, we consider the $k = 5$ sector. Writing out $W$ explicitly, one finds that the action of $Q$ on an arbitrary state makes all of the $U_{1/2}$ states $Q$-exact for any non-singular $W$, and we find 13 chiral singlets in $k = 5$. This total agrees with the orbifold and large radius phases.

Geometry

The weighted projective space $\mathbb{P}^4_{\{2,2,2,1,1\}}$ with homogeneous coordinates $[x_0, \ldots, x_4]$ has a $\mathbb{Z}_2$ quotient singularity along $x_3 = x_4 = 0$. This may be resolved to yield a toric variety $V_0$ with homogeneous coordinates $[x_0, \ldots, x_5]$, an irrelevant ideal $B = (x_0, x_1, x_2, x_5)(x_3, x_4)$, and grades given by the charge matrix $\Phi$. $X_0$ is an octic hypersurface in $V_0$ with defining equation taken in Fermat form. Mirror symmetry was studied in detail for this example in [44]. The exceptional set in $X_0$ formed by the $\mathbb{Z}_2$-resolution is of the form $E = C \times \mathbb{P}^1$, where $C$ is a genus 3 curve. One has $h^{1,1}(X) = 2$, where the two deformations of $B + iJ$ can be considered to be the overall volume and a size of $E$. One may also show $h^{2,1} = 86$. Of these 86 deformations of complex structure, 83 are obtained by deformations of the polynomial (101). The remaining 3 deformations of complex structure arise from $\dim H^1(\mathcal{O}_X(-2,1)) = 3$, in agreement with theorem 1. The 83 polynomial deformations of complex structure preserve $E = C \times \mathbb{P}^1$. We will see below that the remaining 3 deformations break $E$ apart into 4 disjoint $\mathbb{P}^1$'s, in accord with [43].

The map marked $d_1$ in the $E_1$ stage (102) of the spectral sequence computing $H^1(X_0, \operatorname{End}(T))$ fails to be surjective in this case, which makes for a more interesting analysis compared to the above examples. Let $R_4$ denote the vector space of degree 4 polynomials in the variables $\{x_0, x_1, x_2\}$. Then one can show that $\operatorname{coker}\, d_1$ is a quotient of $R_4$; this is a 6-dimensional space. In particular, if $W$ is in Fermat form, then $\operatorname{coker}\, d_1$ is spanned by six explicit monomials. The map on row one of the spectral sequence is shown to be surjective in the appendix. If one were to replace the map $x_i q_i$ in (45) with a generic map of the right multi-degree, then the map $d_1$ in (102) would become surjective. This means that a deformation to a more generic (0,2)-model kills any massless states that appear at the (2,2)-locus due to a failure of surjectivity of $d_1$. For this generic (0,2)-model the spectral sequence degenerates at the $E_2$ stage, yielding a generic value of $\dim H^1(X_0, \operatorname{End}(T)) = 188$. However, on the (2,2)-locus, where $d_1$ fails to be surjective, $H^1(X_0, \operatorname{End}(T))$ may jump to a higher value.
A precise analysis of this is not too difficult, but the technical details may be a little distracting, so this computation is left to the appendix. The result is that for the Fermat polynomial one finds $\dim H^1(X_0, \operatorname{End}(T)) = 200$, but this value falls back to 188 for a generic $W$, even on the (2,2)-locus. This kind of jumping in $\dim H^1(X_0, \operatorname{End}(T))$ as one varies the complex structure was seen in other examples in [5].

Three of the deformations of complex structure of $X_0$ are obstructed, in the sense that they prevent $X_0$ from being embedded in the toric variety we have considered so far. That said, it is still possible to understand these deformations in terms of a hypersurface in a toric variety, as follows. $V$ is the crepant resolution of the weighted projective space $\mathbb{P}^4_{\{2,2,2,1,1\}}$. It can be viewed as a $\mathbb{P}^3$-bundle over $\mathbb{P}^1$; to be precise, the toric data implies it is the projectivization of a sum of line bundles over $\mathbb{P}^1$, with irrelevant ideal $B = (x_0, x_1, x_4, x_5)(x_2, x_3)$. Let $X_1$ be a Calabi-Yau hypersurface. For a specific complex structure one might consider a defining equation with a parameter $\lambda$, a generic complex number (not equal to one, or else the threefold is singular). From what we have said, $X_0$ and $X_1$ are deformation equivalent. Putting $x_4 = x_5 = 0$ in $X_1$ forces $x_0^4 + x_1^4 = 0$ and thus yields 4 rational curves. These 4 rational curves in $X_1$ are what remains of the genus 3 curve times $\mathbb{P}^1$ in $X_0$ after switching on any of the three non-polynomial deformations of $X_0$. Note that $X_1$ exhibits all 86 deformations of complex structure as polynomial deformations.

Let us compute $H^1(X_1, \operatorname{End}(T))$. In the spectral sequence the map $d_1$ fails to be surjective, similarly to the $X_0$ case. The $d_2$ maps at the next stage of the spectral sequence may or may not be zero, depending on the precise complex structure. The computation is very similar to that for $X_0$ in the appendix; the result is that $d_2$ is zero for the specific equation (108). This fits in very nicely with the picture for $X_0$. Once again we have a generic value of 188 for $\dim H^1(X, \operatorname{End}(T))$, but this can increase for specific complex structures. $X_1$ is "more generic" than $X_0$ and cannot achieve as large a value for $\dim H^1(X, \operatorname{End}(T))$. The numbers of GLSM deformations for $X_0$ and $X_1$ are, respectively, 179 and 180.

The Orbifold

$X_{\mathrm{orb}}$ is the singular octic in the unresolved weighted projective space $\mathbb{P}^4_{\{2,2,2,1,1\}}$. We may compute the untwisted sector easily enough, in a way analogous to section 5.2.2; in the results below, a subscript 0 denotes the untwisted sector. $X_{\mathrm{orb}}$ exhibits a genus 3 curve $C$ of singularities, and $\Gamma = \mathbb{Z}_2$. We now have two twisted sectors with $k = 1$, where $\nu_1 = 0$ and $\nu_a = \tfrac12$ for $a = 2, 3$. Note that the $k = 1$ sectors are self-conjugate, and (31) shows that the twisted vacua are $\mathbb{Z}_2$-invariant. When we compactify on $C$ we now find that massless chiral multiplets are given by holomorphic sections of $\Omega^{\otimes q_C}$. Riemann-Roch and vanishing theorems give $h^0(C, \Omega^{\otimes q_C}) = 1$, $g$, and $(2q_C - 1)(g-1)$ for $q_C = 0$, $1$, and $\ge 2$ respectively. For $q_C = 0$ we thus predict from the twisted sector one massless $\mathbf{27}$ and the single associated Kähler modulus, as well as 3 additional singlets, regardless of the genus of $C$. Putting $g = 3$ for the case at hand, we predict from the $q_C = 1$ states 3 massless $\overline{\mathbf{27}}$s with the associated 3 complex structure moduli, as well as an additional 9 singlets. We also find 6 massless singlets from the $q_C = 2$ states.
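These dimensions follow from Riemann-Roch on $C$: $\deg \Omega^{\otimes q} = q(2g-2)$ and $h^0 - h^1 = \deg - g + 1$, with $h^1 = 0$ once the degree exceeds $2g-2$. A two-line check for $g = 3$ (ours, for illustration):

```python
def h0_canonical_power(g, q):
    """h^0(C, Omega^{tensor q}) for a smooth curve of genus g >= 2."""
    if q == 0:
        return 1                      # constants
    if q == 1:
        return g                      # holomorphic differentials
    return (2 * q - 1) * (g - 1)      # Riemann-Roch; h^1 = 0 for deg > 2g-2

print([h0_canonical_power(3, q) for q in (0, 1, 2)])   # [1, 3, 6]
```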
Adding together the untwisted and twisted states, we obtain $h^{1,1}(X_{\mathrm{orb}}) = 1 + 1 = 2$ and $h^{2,1}(X_{\mathrm{orb}}) = 83 + 3 = 86$, as expected. More interestingly, we have a grand total of 288 singlets; that is, we predict 200 singlets associated with $H^1(\operatorname{End}(T))$. This total agrees with the massless spectrum of the large radius phase for the Fermat complex structure. So we have agreement with the large radius (and Landau-Ginzburg, as we see shortly) phase only for particular values of the complex structure of $X_0$. In particular, our orbifold computation seems, at first sight, not to depend at all on the value of the complex structure of $X_0$. How can we resolve this discrepancy?

We do not know the full resolution of this question, but we can make the following observations. Our free-field analysis of the twisted sector effectively assumes that we were analyzing the normal bundle of $C$ in $X_0$ rather than $X_0$ itself. The deviations of the geometry away from the normal bundle as we move away from $C$ must introduce corrections that we have ignored so far. Which states counted above do we believe are reliably massless? The fields in nontrivial $E_6$ representations, along with the associated Kähler and complex structure moduli, are of course protected by the (2,2) supersymmetry; in addition, the 6 singlet states coming from $q_C = 2$ involve only excitations along $C$, and we do not expect them to be sensitive to the details of the structure of $X$ away from the curve. The 12 additional singlet states coming from $q_C = 0, 1$ involve excitations in the directions transverse to the curve. There is nothing to stop these states being lifted by the additional interactions introduced upon varying the superpotential. Thus we might reasonably expect there to be 12 fewer singlets for a generic complex structure. This agrees perfectly with the large radius picture.

The Landau-Ginzburg locus

Using the GLSM, it is easy to describe the Landau-Ginzburg point of the model for $X_0$. Integrating out the $x_5$ and $p$ fields leads to a theory with fields $x_0, \ldots, x_4$, with $\alpha_{0,1,2} = \tfrac14$ and $\alpha_{3,4} = \tfrac18$. Assigning weights $[2,2,2,1,1]$ to the fields, the Landau-Ginzburg superpotential is a degree 8 polynomial in 5 variables, which we denote $W(x_0, \ldots, x_4)$. We will distinguish the fields with different $\alpha_i$ by taking indices $I, J = 0, 1, 2$ and $a, b = 3, 4$. With these numbers in hand, we classify the zero-energy states with $q = 0$ as shown in table 4. For generic $W$ we will have 244 singlets from $k = 1$, while for Fermat we find 248. As in the example of the quintic, the physical reason for the appearance of these extra singlets is just the Higgs mechanism. The $k = 3$ and $k = 5$ zero-energy states clearly satisfy $Q = 0$, so that all of these correspond to massless singlets.

The $k = 9$ sector states are the CPT conjugates of the states at $k = 7$, and the $Q$ complex is the transpose of the complex for $k = 7$. We can thus equivalently study either sector, and since the structure at $k = 9$ more transparently reflects the description in the previous subsection, we choose this option. The antichiral multiplets from this sector (conjugate to the chiral multiplets at $k = 7$) are determined by the cokernel of $Q$. Using $\nu_I = \tfrac18$ and $\nu_a = \tfrac{9}{16}$, and hence $\bar\nu_I = -\tfrac38$ and $\bar\nu_a = -\tfrac{15}{16}$, the map in this sector is determined by two terms, where we have used the moding information, and in the second term also the absence of $\rho_I$ excitations at $q = -\tfrac12$.
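Before analyzing these two terms, a small consistency check on the modings just quoted: the barred values follow from the $\nu$'s by a half-unit shift reduced into the interval $(-1, 0]$. At least, that reading of the "hence" reproduces both values; the sketch below is our check, not a formula from the paper:

```python
from fractions import Fraction

def shifted(nu):
    """nu - 1/2, reduced mod 1 into the interval (-1, 0]."""
    x = Fraction(nu) - Fraction(1, 2)
    while x > 0:
        x -= 1
    while x <= -1:
        x += 1
    return x

print(shifted(Fraction(1, 8)))     # -3/8
print(shifted(Fraction(9, 16)))    # -15/16
```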
The first of these terms contributes a nontrivial $Q$ action, $Q_1$, whose cokernel agrees with (103). When $W$ is in Fermat form, $Q_2 = 0$ and there is a six-dimensional space of antichiral singlets at $q = \tfrac12$, leaving 19 chiral singlets at $q = -\tfrac12$. When we deform $W$ away from the Fermat form, the cokernel can decrease. For instance, adding a suitable monomial to $W$ leaves $Q_1$ unchanged but adds a contribution $Q_2$ which now acts nontrivially. The three-dimensional image of this is clearly independent of the image of $Q_1$, so this deformation removes three additional pairs of states, leaving 16 chiral and 3 antichiral singlets at $k = 9$. For generic $W$, $Q$ has no cokernel and the singlet spectrum is simply 13 chiral states. To see that this is identical to the calculation in the appendix (assuming the higher differential $d_5$ in the appendix always vanishes), note that the relevant maps agree in general, using the gauge invariance of $W$.

The most obvious physical explanation for this lifting of states is via a simple mass term in the superpotential of the effective theory. The (anti)chiral states in $k = 7$ have their chiral (antichiral) conjugates in the $k = 9$ sector, allowing for a mass term that depends on the untwisted complex structure moduli. To summarize, we find $282 = 2 + 86 + 188 + 6$ singlets for generic $W$, while the Fermat point has an additional $4 + 12$ massless singlets. This agrees with the Gepner model value of 298.

Mirror Symmetry and Kähler Deformations

The singlet spectrum at the Gepner point in this model includes, in addition to the "cascade states" (which, as expected, are not lifted by Kähler deformations) and the expected four additional singlets lifted by the Higgs mechanism under any deformation, a total of 28 additional chiral singlets in the sectors $k = 5, 7, 9$. The Gepner model here is $A_3^{\oplus 3} \oplus A_7^{\oplus 2}$. The model enjoys a discrete permutation symmetry $S_3 \times \mathbb{Z}_2$, and the states form representations of this. This symmetry commutes with the quantum symmetry associated to the Gepner orbifold, so that states within each twisted sector transform into each other. As always, orbits under the discrete symmetry are lifted together under the Kähler deformations away from the Gepner point, both of which are invariant under the symmetry. Explicitly, the 28 states comprise eight orbits of the discrete symmetry: a single orbit of the 3 chiral states in the $k = 5$ sector, orbits comprising the 6 chiral states in the $k = 7$ sector, and orbits comprising the 19 chiral states in the $k = 9$ sector.

The mirror is given by a quotient constructed as before, and one can tabulate the twisted sectors in which the mirrors of each of these states appear. Note that the $S_3$ permutation permutes the $t^{(a)}$, while the $\mathbb{Z}_2$ acts as $(k, t_a) \to (k - 4 t_a,\, t_a + 2 t_a)$. The mirror model has two untwisted (2,2) deformations corresponding to the two toric Kähler deformations of the octic, and we can write a general superpotential for the mirror model incorporating them. In each of the relevant sectors one can then study the change in $Q$ cohomology as these deformations are switched on.

Table 6: Classification of the singlets in the Gepner model.

  Extra singlets at special complex structure : 12
  Extra singlets at special Kähler form : 6
  Extra singlets associated to Gepner U(1)^4 : 4
  Total : 298

Appendix

The differential $d: C^i_B(R) \to C^{i+1}_B(R)$ is given by the obvious inclusion map with some $(-1)^j$ factors to ensure $d^2 = 0$. We will elucidate these signs below. $R$ admits a fine grading valued in $\mathbb{Z}^n$, where the grade of each homogeneous coordinate is a basis vector. Let a subscript $p$ denote this fine grading. Then [46] the graded piece $C^i_B(R)_p$ decomposes over subsets $J \subseteq \{1, \ldots
, l\}$, where $m_J$ is the least common multiple of the $m_j$, $j \in J$, and $\mathrm{neg}(p)$ is the subset of $\{1, \ldots, l\}$ corresponding to negative grades. The differential $d$ maps the $J$ component of $C^*_B(R)_p$ to the $J'$ component as zero unless $J' = J \cup \{j\}$, in which case it is $(-1)^e$, where $j$ is in the $e$th position of $J'$. The local cohomology groups $H^i_B(R)_p$ are defined by the local cochain complex $C^*_B(R)_p$. Note, in particular, that they only depend on $p$ via $\mathrm{neg}(p)$. The definition of Čech cohomology then relates these groups to the sheaf cohomology of line bundles on $V$; we also have $C^0_B(R)_p = R_p$. It is a fact that the covering given by the $U_i$ is affine and thus sufficiently fine to give Čech cohomology. It follows that $H^0(V, \mathcal{O}(v)) \cong R_v$ (since $H^0_B(R) = H^1_B(R) = 0$ here), and that $H^q(V, \mathcal{O}(v)) \cong H^{q+1}_B(R)_v$ for $q \ge 1$.

We now have an explicit method of computing the spectral sequence in theorem 2. Following the notation of [48], it is based on a double complex $K^{p,q}$, where the vector spaces $K^{p,q}$ have localized Laurent monomials as a basis. The vertical maps in the double complex are the Čech boundaries as explained above, and the horizontal maps come from the complex representing the sheaf in question. The map $d_1^{0,1}$ is induced from the map $E$ in (45), which is given by multiplication by $x_i q_i$ (143). Using the Laurent monomial representatives of these cocycles, we easily obtain the map $d_1^{0,1}$. The spectral sequence at the $E_1$ stage is given by (102). Now we have computed all the $d_1$ maps, and we obtain the $E_2$ page. So now we need to compute the $d_2$ maps. Fortunately they are related by Serre duality. Let us focus on $d_2^{0,1}: \mathbb{C}^9 \to \mathbb{C}^6$.

The map $d_2$ will depend upon the defining equation $W$; only certain monomial terms will contribute to $d_2$. For purposes of argument we concentrate first on a single monomial, $x_0 x_2^2 x_3^2 x_5$. The process of computing $d_2$ can be quite formidable in a general spectral sequence, but the explicit representation of elements of the double complex $K^{p,q}$ as Laurent monomials makes the procedure straightforward. Consider, as an example, the monomial $x_0 x_3^{-2} x_4^{-1}$ representing an element of $H^1(\mathcal{O}(-3,1))$ in $K^{0,1}$. The map to $K^{1,1}$ has two contributions. First we map $\check C^1(\mathcal{O}(-3,1))$ to $\check C^1(\mathcal{O}(-2,1))$ by multiplying by $x_i q_i$, which in the case of interest amounts to multiplying by $x_4$. We also map $\check C^1(\mathcal{O}(-3,1))$ to $\check C^1(\mathcal{O}(-1,4))$ by multiplying by $\partial_5 W$. Thus we map $x_0 x_3^{-2} x_4^{-1}$ to
$$\bigl(x_0 x_3^{-2},\; x_0^2 x_2^2 x_4^{-1}\bigr) \in \check C^1(\mathcal{O}(-2,1)) \oplus \check C^1(\mathcal{O}(-1,4)).$$
This is a Čech coboundary, and we can chase it downwards as follows. Recall that these monomial representatives of $\check C^1(\mathcal{O}(q))$ are really 16 copies of the same monomial under the localizations $R_{m_i m_j}$. Computing the chain complex $C^*_B(R)_{(0,0,0,-1,0,0)}$, we see that $x_0 x_3^{-2}$ lies in the coboundary of the 4 copies of the monomial localized to $R_{m_i}$, where $i = 0, \ldots, 3$. Similarly, $x_0^2 x_2^2 x_4^{-1}$ lies in the coboundary of the 4 copies of the negated monomial localized to $R_{m_j}$, where $j = 4, \ldots, 7$. Finally we apply $d_1$ to push our element to $K^{2,0}$: $x_0 x_3^{-2}$ is multiplied by $\partial_5 W$, while $x_0^2 x_2^2 x_4^{-1}$ is multiplied by $-x_4$. Paying attention to signs, the result is $x_0^2 x_2^2$ in both cases. That is, we have a Čech cochain which takes the value $x_0^2 x_2^2$ in all eight patches $R_{m_k}$, $k = 0, \ldots, 7$. The fact that this has the same value in all 8 patches means that it is the localization of a monomial in $R$ itself. This had to be, since $H^1_B(R) = 0$, and allows us to interpret $x_0^2 x_2^2$ as an element of $H^0(\mathcal{O}(Q))$. This chase summarizes the computation of $d_2$; thus, $d_2$ is nonzero.
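As a toy version of this monomial machinery on ordinary $\mathbb{P}^4$ (so $r = 1$ and $B = (x_0, \ldots, x_4)$): $H^0(\mathcal{O}(d))$ is spanned by Laurent monomials with $\mathrm{neg}(p)$ empty, $H^4(\mathcal{O}(d))$ by those with all exponents negative, and the intermediate groups vanish. The sketch below (our illustration, not the paper's code) counts both and checks Serre duality:

```python
from math import comb

def h0(d, n=4):
    """Monomials of degree d in n+1 variables, all exponents >= 0."""
    return comb(d + n, n) if d >= 0 else 0

def hn(d, n=4):
    """Monomials of degree d with all n+1 exponents <= -1
    (the top local cohomology H^{n+1}_B contribution)."""
    return comb(-d - 1, n) if -d - 1 >= n else 0

d = -7
print(hn(d))        # h^4(P^4, O(-7)) = C(6,4) = 15
print(h0(-d - 5))   # Serre duality: h^0(P^4, O(2)) = 15 as well
```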
Given a generic defining equation $W$ with all possible monomials, this $d_2$ map is surjective and the sequence degenerates at the $E_3$ term, so $\dim H^1(X, \operatorname{End}(T)) = 188$. However, suppose we pick the Fermat form of the defining equation $W$. Studying the above computation of $d_2$, it is clear that we can never hit monomials in $K^{2,0}$ of the relevant form. The source of the $d_4$ map is a $\mathbb{C}^6$ subspace of $H^3(\mathcal{O}_X(-Q))^{\oplus 2}$. This third cohomology group is not the isomorphic image of $H^3(\mathcal{O}_V(-Q))$ under restriction, and so we need to work a little harder to describe everything in terms of cohomology of the toric variety, and thus local cohomology. To this end we may use the short exact sequence
$$0 \longrightarrow \mathcal{O}_V(-Q) \longrightarrow \mathcal{O}_V \longrightarrow \mathcal{O}_X \longrightarrow 0$$
to write $\mathcal{O}_X$ in terms of $\mathcal{O}_V$. By using mapping cones, we may rewrite the complex (45) representing the tangent sheaf accordingly, and we can take the Hom of this complex into itself to form a complex for $\operatorname{End}(T)$. The $H^3(\mathcal{O}_X(-Q))^{\oplus 2}$ from above now manifests itself as $H^4(\mathcal{O}_V(-2Q))^{\oplus 2}$, and the interesting map appears in $E_5$. With a little organization one can show, for example, that the Fermat form of $W$ cannot give rise to a nonzero map. Thus, in this case, the spectral sequence degenerates at the $E_2$ stage again, and now $\dim H^1(X, \operatorname{End}(T)) = 185 + 9 + 6 = 200$. That is, we have 12 extra states compared to the generic hypersurface.
Photovoltaic Facade Performance Evaluation

A high-rise building façade with integrated photovoltaic panels, located in the Central European region with temperate climatic conditions, was tested. The PV façade was monitored for three years. Results of the PV system monitoring show that the façade positively influences the energy efficiency of the building and reduces the carbon dioxide emissions from its operation.

Introduction

Energy efficiency and applications of renewable energy technologies represent major issues for sustainable building construction [1,2]. The conversion of solar energy into electricity by photovoltaic panels has found wide application in the building industry [3,4]. PV systems are installed separately or integrated into building constructions. Building-Integrated Photovoltaic Systems (BIPS) [5,6] are designed for new buildings as well as for building renovations [7,8], and are installed in roof and façade applications [9]. The efficiency of a PV system in converting solar energy to electricity is influenced by many factors, such as the availability of solar radiation in the climatic locality, topographical features, and shading by obstructions and neighbouring buildings [10-12]. The BIPS installation depends on the PV system type (materials, grid-tied system), its dimensions and geometry, and its tilt and orientation to the cardinal points [13,14]. Photovoltaic systems are highly efficient for direct solar radiation, but they also respond to diffuse sky radiation. Building-integrated PV modules should be installed in positions of maximum insolation, and they must be ventilated for proper performance. Apart from the main design and installation demands, the aesthetic appearance of the façade also plays an important role in BIPV system integration. Design optimisation and modelling of BIPS have been topics of research projects [15,16].

A post-occupancy survey of a building with an integrated photovoltaic façade was performed. This paper presents the main results of the building PV façade monitoring, which was carried out under a university research project focused on smart region technologies for sustainable development. The studied building is located in the city of Brno in the Czech Republic. The multi-functional building was constructed in 2013 [17].

Method

The post-occupancy evaluation of the studied building focused on monitoring the energy generated by the PV integrated façade. Photovoltaic panels are integrated into the south façade of the high-rise building, figure 1 [18]. This orientation offers the potential of intensive solar irradiation under clear-sky conditions with direct solar radiation; for these conditions the façade is highly efficient. Lower efficiency is observed for diffuse solar radiation under cloudy-sky conditions. The studied building is in a locality with temperate climatic conditions. Outdoor conditions such as global horizontal irradiance (monitored by a Kipp & Zonen CMP pyranometer) and outdoor temperature (local meteo-station data) were monitored for the building evaluation. A three-phase Sunny Tripower inverter is used for the PV system, whose total power is 89.7 kW (figure 3 [20]). The generated electricity is used as a complement to the building's electric energy consumption [21].

Annual profiles of the electric energy generated by the PV façade were evaluated for three years [21]. Results of the monitoring are summarized in figure 6 and Table 1. It is obvious that the façade efficiency varies in dependence on local climatic conditions.
Minimum generation of electric energy occurs in the winter season, from November to February. Maximum energy production occurs between March and September. The PV system's efficiency appears highest in March and April and again in July and August. Solar radiation at the higher solar altitudes of May and June is substantially reflected from the vertical façade compared to lower-altitude situations, which means that the energy conversion is lower in these months. The PV-generated energy has a positive impact on the building's environmental classification. The clean technology of the PV solar panels offers potential for reducing emissions of CO2, SO2 and NOx compared to electricity production from traditional sources [22], Table 2.

Conclusion

The post-occupancy evaluation of the photovoltaic façade proves its energy efficiency. Despite climatic conditions with prevailing overcast days, the façade energy conversion is more than 65 MWh per year; the total electric energy generated by the façade photovoltaic system during the three-year monitoring period therefore exceeds 195 MWh. The PV facade system supports the building's energy efficiency and positively influences the reduction of emissions from the building operation. The utilization of solar radiation for the building's electricity substantially decreases CO2 emissions compared to traditional energy sources.
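Emission reductions of the kind reported in Table 2 follow from multiplying the generated energy by grid emission factors. A minimal sketch of the calculation; the emission factors here are illustrative placeholders, not values from this study:

```python
# Annual PV generation reported for the facade (MWh)
annual_generation_mwh = 65.0

# Assumed emission factors for displaced grid electricity (kg per kWh).
# These are illustrative placeholders, not the factors used in [22].
emission_factors = {"CO2": 0.9, "SO2": 0.001, "NOx": 0.001}

generation_kwh = annual_generation_mwh * 1000.0
for gas, factor in emission_factors.items():
    avoided_t = generation_kwh * factor / 1000.0
    print(f"{gas}: about {avoided_t:.1f} t avoided per year")
```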
The Two Phosphofructokinase Gene Products of Entamoeba histolytica*

Two phosphofructokinase genes have been described previously in Entamoeba histolytica. The product of the larger of the two genes codes for a 60-kDa protein that has been described previously as a pyrophosphate (PPi)-dependent enzyme, and the product of the second, coding for a 48-kDa protein, has been previously reported to be a PPi-dependent enzyme with extremely low specific activity. Here it is found that the 48-kDa protein is not a PPi-dependent enzyme but a highly active ATP-requiring enzyme (kcat = 250 s⁻¹) that binds the cosubstrate fructose 6-phosphate (Fru-6-P) with relatively low affinity. This enzyme exists in concentration- and ATP-dependent tetrameric active and dimeric inactive states. Activation is achieved in the presence of nucleoside triphosphates, ADP, and PPi, but not by AMP, Pi, or the second substrate Fru-6-P. Activation by ATP is facilitated by conditions of molecular crowding. Divalent cations are not required, and no phosphoryl transfer occurs during activation. Kinetics of the activated enzyme show cooperativity with Fru-6-P (Fru-6-P0.5 = 3.8 mM) and inhibition by high ATP and phosphoenolpyruvate. The enzyme is active without prior activation in extracts of E. histolytica. The level of mRNA, the amount of enzyme protein, and the enzyme activity of the 48-kDa enzyme are about one-tenth those of the 60-kDa enzyme in extracts of E. histolytica trophozoites.

The pH dependence and apparent substrate affinities of the cloned enzyme were identical to those of the PPi-PFK in trophozoite extracts, indicating that the product of the cloned gene accounts for most if not all of the PFK activity in E. histolytica trophozoites (3). The smaller gene, which codes for a 48-kDa protein, has been expressed in E. coli as a fusion protein that was found to have a much lower specific activity than that of the larger enzyme (1). Whereas the 60-kDa PFK has been purified from the amoeba, no information concerning the expression of the 48-kDa protein is available. The 48-kDa PFK described in the earlier studies is clearly an expressed product in E. histolytica, because it was cloned from a cDNA library (1). It may have been present in extracts of the organism but did not copurify with the 60-kDa product or with the activity of PPi-PFK (3). Furthermore, if a second activity had been present which represented at least 10% of the total PFK activity, it would have been detected in native gel electrophoresis. The problem in attributing a significant role to the 48-kDa protein in phosphorylation of fructose 6-phosphate (Fru-6-P) is its extremely low specific activity with PPi as a phosphoryl donor. The specific activity of the 60-kDa enzyme is about 2,000-3,000 times higher than that reported for the smaller PFK (3). Thus, if expressed at the same level in the organism, the smaller PFK would be virtually undetectable under normal assay conditions for PPi-PFK. One possibility is that the smaller PFK has a yet to be determined catalytic activity. Another possibility is that the 48-kDa protein represents a regulatory protein, as one observes in the multisubunit structure of plant PPi-PFKs (4). In the instance of the plant enzymes, the catalytic and regulatory subunits copurify. This was shown to be unlikely regarding the two E. histolytica PFKs in the earlier study (3), because no 48-kDa protein was present in the partially purified fractions of the 60-kDa enzyme from E. histolytica.
In the current work, we compare expression of the two forms of E. histolytica PFK in extracts of trophozoites. The 48-kDa PFK has been purified to homogeneity from both native and recombinant sources and has been found to have no detectable activity with PPi as a phosphoryl donor. On the other hand, the enzyme has high activity with ATP as a phosphoryl donor, but only after prior activation with ATP.

EXPERIMENTAL PROCEDURES

Expression Constructs: Two oligonucleotide primers, designed on the basis of the sequence at the 5′- and 3′-ends of the 48-kDa PFK gene and containing additional nucleotides at the 5′-ends to generate NdeI and BamHI restriction sites, were used to amplify by polymerase chain reaction (PCR) a fragment containing the 48-kDa PFK gene from a genomic clone (2). The PCR fragment was then cloned into the pCR-Script SK(+) plasmid using the PCR-Script cloning kit as directed by the manufacturer (Stratagene, La Jolla, CA). The plasmid construct was digested with NdeI and BamHI to isolate the fragment containing the 48-kDa PFK gene. The digested fragment was then cloned into the complementary sites of the pJC45 prokaryotic expression vector (5) (a gift from Dr. Iris Bruchhaus of the Bernard Nocht Institute for Tropical Medicine, Hamburg, Germany). The pJC45 expression vector generates a fusion protein with an additional N-terminal sequence that includes a stretch of 10 consecutive histidine residues. To utilize a second expression system, the above pCR-Script SK(+) plasmid construct containing the 48-kDa PFK gene was digested with NdeI and EcoRI and cloned into the complementary sites of the pALTER-Ex1 plasmid (Promega). The E. histolytica 60-kDa PFK gene cloned into pALTER-Ex1 has been described previously (3).

Enzyme Preparation: The recombinant 60-kDa PPi-PFK was purified as previously described (1). The enzyme preparation was homogeneous on the basis of 10% SDS-PAGE. The enzyme was stored in 50% glycerol at −20 °C. Before being used for kinetic assays, the enzyme was dialyzed against at least 400 volumes of 150 mM KTes, 1 mM EDTA, pH 7.2. The pJC45 vector containing the 48-kDa gene was transformed into BL21(DE3)[pAPlacIQ] E. coli (a gift from Dr. Bruchhaus), and the bacteria were plated onto LB medium agar plates with 100 µg/ml ampicillin, 50 µg/ml kanamycin, and 2% (w/v) glucose at 37 °C. Freshly transformed single colonies were inoculated into LB medium with 100 µg/ml ampicillin, 50 µg/ml kanamycin, and 2% (w/v) glucose and grown at 37 °C until the bacterial culture reached an absorbance of 0.2 at 600 nm. After induction for 3 h in the presence of isopropyl-β-D-thiogalactoside, the recombinant fusion protein was purified using the His-Bind System (Novagen, Madison, WI), following the manufacturer's recommendations for native purification of cytoplasmic proteins.

The 48-kDa PFK lacking the N-terminal polyhistidine sequence and inserted into pALTER-Ex1 was expressed as follows. The plasmid construct was transformed into DF1020 E. coli, which was grown on LB. After induction by 0.4 mM isopropyl-β-D-thiogalactoside for 12-24 h at 30 °C, the cells were harvested by centrifugation at 5,000 × g for 5 min and resuspended in ~2 volumes of ice-cold buffer consisting of 50 mM Tris-HCl, 0.1 mM EDTA, 14 mM β-mercaptoethanol, pH 7.4 (extraction buffer). Phenylmethylsulfonyl fluoride was added to the extraction buffer to a final concentration of 1 mM only during the extraction step. The cells were lysed by sonication and centrifuged to remove debris.
The supernatant was loaded on a 15-ml column of N-6-aminohexylcarboxymethyl-ATP-Sepharose (ATP-Sepharose) (6) preequilibrated with the extraction buffer. The column was then washed with extraction buffer until the absorbance of the flow-through was below 0.02 at 280 nm. The enzyme was then eluted with extraction buffer plus 1 mM ATP. Elution fractions were pooled and concentrated to a volume of ~10 ml, and the enzyme was exchanged simultaneously into 20 mM Tris-HCl, 0.1 mM EDTA, 14 mM β-mercaptoethanol, pH 7.2, using a membrane filtration apparatus. The concentrated protein was then applied to a Mono Q HR 5/5 anion exchange column on a fast protein liquid chromatography system (Amersham Pharmacia Biotech) preequilibrated with the same buffer. The enzyme was eluted with a linear gradient of 0-0.5 M NaCl in the same buffer. The enzyme, which eluted at ~100 mM NaCl, was homogeneous on the basis of 10% SDS-PAGE. The purified enzyme was stored in 50% glycerol at −20 °C. Prior to kinetic assays, ultrafiltration was used to exchange the preparation into assay buffer.

The 48-kDa PFK lacking the added N-terminal polyhistidine was also purified by chromatography using Blue Sepharose (Cibacron Blue F3G-A, immobilized on Sepharose CL-6B, Sigma). Harvested bacteria were resuspended in extraction buffer plus 1 mM phenylmethylsulfonyl fluoride. Cells were then lysed as described above for the ATP affinity column purification. After centrifugation, the supernatant was loaded onto a 100-ml column of Blue Sepharose preequilibrated with extraction buffer. The column was then washed with extraction buffer until the absorbance of the flow-through was below 0.02 at 280 nm. The enzyme was then eluted with extraction buffer containing 1 mM ATP. Elution fractions were pooled and concentrated to a volume of ~10 ml, and the buffer was changed simultaneously to the Mono Q buffer described in the ATP affinity purification section. The concentrated protein was then purified on a Mono Q anion exchange column as described for the ATP affinity purification procedure. The resultant enzyme preparation was homogeneous on the basis of 10% SDS-PAGE. The enzyme was stored and prepared for activity analysis as described above. The Blue Sepharose method yielded ~10-fold greater amounts of pure enzyme per unit column volume than the ATP-Sepharose procedure. Using Blue Sepharose, the overall yield from the lysate was ~55%.

Assay of the 48-kDa PFK: To measure activity, the 48-kDa PFK was first activated by preincubating under standard activation conditions unless otherwise indicated. The standard activation conditions were 4 µM 48-kDa PFK and 2 mM ATP in 150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA at 30 °C for 30 min. Aliquots of the preincubations were then diluted 10-fold in a standard dilution buffer (2 mM ATP and 20 mM Fru-6-P in 150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA), unless otherwise indicated, and fixed amounts of the dilution were added to assay cuvettes to start the reaction. The reactions were conducted under standard assay conditions (1 mM ATP and 20 mM Fru-6-P in the aforementioned assay medium at 30 °C in a 1-ml assay cuvette) unless otherwise indicated. Activity was determined spectrophotometrically by measuring the decrease of absorbance at 340 nm. The measured rate of the first 60 s of the reaction was recorded. For the determination of kinetic constants, one of the two substrates (a nucleoside triphosphate and Fru-6-P) was kept saturating while the other substrate was varied from 0.1 to 10 Km.
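Saturation curves of this kind are commonly fitted by nonlinear least squares; the sketch below fits the Hill model v = Vmax·Sʰ/(S0.5ʰ + Sʰ) with scipy, in the spirit of the GraFit analysis used in this study. The data points are invented for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(s, vmax, s05, h):
    """Hill model for cooperative substrate saturation."""
    return vmax * s**h / (s05**h + s**h)

# Invented Fru-6-P concentrations (mM) and initial velocities -- illustration only.
s = np.array([0.5, 1, 2, 4, 8, 16, 32])
v = np.array([0.04, 0.13, 0.35, 0.55, 0.80, 0.93, 0.98])

popt, pcov = curve_fit(hill, s, v, p0=[1.0, 4.0, 1.5])
vmax, s05, h = popt
print(f"Vmax = {vmax:.2f}, S0.5 = {s05:.2f} mM, Hill h = {h:.2f}")
```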
The magnesium ion concentration was kept 4 mM higher than the concentration of nucleoside triphosphate in all assays containing nucleotide to ensure that virtually all of the nucleotide existed as the magnesium complex. All nucleoside triphosphate solutions were determined to contain less than 0.1% PP_i. Nucleoside triphosphate decomposition in the assay cuvette to its nucleoside monophosphate and pyrophosphate constituents was undetectable. Kinetic estimates in this study were obtained using unweighted linear or nonlinear least-squares regressions to the Michaelis-Menten and Hill models using the GraFit graphical analysis program. All assays were repeated at least twice, and standard errors of intercepts and slopes were all less than 10%. The k_cat values were calculated assuming one active site per subunit and a subunit mass of 48 kDa.
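GraFit is a commercial package; the same unweighted nonlinear least-squares fits can be reproduced with standard open-source tools. The following is a minimal sketch in Python (the substrate concentrations and rates are illustrative placeholders, not data from this study):

    # Minimal sketch of the unweighted nonlinear least-squares fits described
    # above (a stand-in for the GraFit analysis); the data arrays below are
    # hypothetical placeholders, not measurements from this study.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    def hill(s, vmax, k_half, n):
        return vmax * s**n / (k_half**n + s**n)

    s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # [S], mM
    v = np.array([0.02, 0.08, 0.30, 0.90, 1.90, 2.60, 2.90])  # rate, arb. units

    # curve_fit with default weights performs an unweighted fit
    (vmax_mm, km), _ = curve_fit(michaelis_menten, s, v, p0=[3.0, 4.0])
    (vmax_h, k_half, n_h), _ = curve_fit(hill, s, v, p0=[3.0, 4.0, 2.0])
    print(f"Michaelis-Menten: Vmax = {vmax_mm:.2f}, Km = {km:.2f} mM")
    print(f"Hill: Vmax = {vmax_h:.2f}, [S]_0.5 = {k_half:.2f} mM, n_H = {n_h:.2f}")

    # k_cat as in the text: Vmax over the molar enzyme concentration,
    # with one active site per 48-kDa subunit.  For a hypothetical
    # 1 ug/ml of enzyme in the cuvette:
    e_molar = 1e-3 / 48000.0   # (g/L) / (g/mol) = mol/L, about 2.1e-8 M
    # k_cat = (Vmax expressed in M/s) / e_molar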
ATP-dependent phosphorylation of sugar substrates other than Fru-6-P was determined by measuring the generation of ADP. The 48-kDa PFK was first activated using standard activation conditions. The enzyme was then diluted 10-fold in 150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA, and identical quantities were subsequently added to assay cuvettes containing the same components plus 0.2 mM NADH, 1 mM ATP, 5 mM phosphoenolpyruvate (PEP), 5 units each of lactate dehydrogenase and pyruvate kinase, and 10 mM of the indicated sugar substrate. Initial velocities were determined spectrophotometrically by measuring the decrease of absorbance at 340 nm.

Antibody Preparation and Purification-Antibodies against the 60-kDa PFK and the histidine-tagged 48-kDa PFK were raised in New Zealand White rabbits. Approximately 200 µg of enzyme with adjuvant was injected at 2, 4, and 8 weeks, and blood was drawn 3 days after the last injection. For further purification where required, each preparation was purified by passing the polyclonal antibody-containing serum through a column of the respective PFK linked to CNBr-activated Sepharose 4B. In the case of the preparation of the 48-kDa PFK Sepharose column, the enzyme without the histidine tag was used. The columns were washed extensively with 0.1 M Tris-HCl, 0.3 M NaCl, pH 8.0, until the absorbance of the flow-through was below 0.01 at 280 nm. Specific antibodies were then eluted successively in five steps with buffers of decreasing pH from 7.0 to 2.3 containing 150 mM NaCl. Fractions were neutralized after elution. Specificity of eluted fractions was determined by Western blot analysis using dilutions of the elution fractions as primary antibodies. Antibodies against both the 48-kDa PFK and the 60-kDa PFK that eluted at pH 5.5 and pH 4.3 had the greatest specificity and were pooled and used in all subsequent analyses.

Northern Blot and Quantitation of the mRNA Level of the Two PFK Genes-E. histolytica total RNA was isolated from an amoebal cell sediment containing 1-2 × 10^8 cells with the Qiagen DNA/RNA isolation kit. Denatured RNA isolated from trophozoites and RNA markers were then separated on a 1.2% agarose gel and transferred to a nylon membrane. The membrane was then air dried and exposed to UV light to cross-link the RNA to the membrane. After prehybridization, membranes were hybridized with 32P-labeled cDNA probes (1 × 10^6 cpm/ml) prepared by restriction enzyme digestion of the plasmids containing the PFK genes. For quantitation of the mRNA level of the two PFK genes, slot blot analysis was performed. A standard was constructed by a series of 2-fold dilutions of the DNA for each of the PFK genes, from 1 ng down to 1/64 ng. The DNA standard and 30-50 µg of E. histolytica total RNA were slot blotted onto nylon membranes. The membranes were air dried and UV cross-linked. Hybridization was then performed as described above for the Northern blots. The content of PFK mRNA within the total RNA was determined by comparing the intensity of the signal from the total RNA with that of the DNA standard, and the ratio of the mRNA levels of the two PFK genes was determined.

Western Blot Analysis-Optimal dilution of the affinity-purified antisera for Western blot was determined by dot blot. E. coli cell extract and protein molecular weight markers were used as negative controls. When quantitation was required, a series of dilutions of known amounts of the two PFKs ranging from 1 to 100 ng was run in adjacent lanes of the gel. Negative controls and protein samples were separated by 10% SDS-PAGE and then transferred onto nitrocellulose membranes in 25 mM Tris, 200 mM glycine, 20% methanol at 24 V. Washing and detection were performed following the Amersham Pharmacia Biotech ECL Western blotting protocols using goat anti-rabbit immunoglobulin conjugated with horseradish peroxidase.

E. histolytica Cultivation-E. histolytica trophozoites (strain HM-1:IMSS) were grown axenically in TYI-S-33 medium (7) at 35°C. Routine cultures were maintained in 15-ml borosilicate glass tubes and transferred every 3 or 4 days. To obtain sufficient cells for 48-kDa PFK purification, trophozoites were cultured in 600-ml Nunclon triple flasks (Fisher Scientific).

48-kDa PFK Purification from E. histolytica-Amoebae from 3-day-old cultures (4 × 10^8 cells) were detached from the surface of the flasks by chilling at 4°C for 30 min. Cells were then harvested by centrifugation at 500 × g for 5 min, washed twice with phosphate-buffered saline at pH 7.2, and resuspended in 3 ml of the extraction buffer described for ATP affinity purification of the recombinant enzyme. The cells were then lysed by sonication. After centrifugation, the lysate was loaded onto a 3-ml ATP-Sepharose column preequilibrated with extraction buffer. The column was washed with 20 volumes of extraction buffer, and the enzyme was subsequently eluted using the same buffer plus 1 mM ATP. Each elution fraction was analyzed by Western blot for both PFKs.

Molecular Sizing-Molecular mass determinations were carried out on a fast protein liquid chromatography system fitted with a Superdex 200 HR 10/30 column (Amersham Pharmacia Biotech). A standard curve was constructed using a mixture containing 200 µg each of cytochrome c (12.4 kDa), carbonic anhydrase (29 kDa), glycerol-3-phosphate dehydrogenase (70 kDa), and alcohol dehydrogenase (150 kDa) in a medium of 20 mM Tris-HCl, 1 mM EDTA, and 14 mM β-mercaptoethanol, pH 7.2. The standard mixture, E. coli PFK (142 kDa), and E. histolytica PFK samples were chromatographed individually on a Superdex 200 column preequilibrated with the buffered medium plus or minus additions as indicated.

Other Methods-Gel electrophoresis of proteins was carried out using a 10% polyacrylamide support according to the system of Laemmli (8). Protein concentrations were determined by Bradford's dye-binding assay with bovine serum albumin as the standard (9). All chemicals and enzymes were purchased from Sigma.
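As a gloss on the molecular sizing calibration above (our annotation, not part of the original methods): over a gel filtration column's fractionation range, the logarithm of molecular mass is approximately linear in elution volume, so the four standards define a line

    log10 M ≈ a·V_e + b,

from which the native mass of a PFK sample is read off at its measured elution volume. This interpolation underlies the dimer and tetramer assignments reported under "Results."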
RESULTS

Purification of the 48-kDa PFK-In an attempt to repeat the findings of Bruchhaus et al. (1), who were able to detect very low PP_i-PFK activity with a recombinant 48-kDa PFK protein bearing a histidine tag, the 48-kDa PFK was prepared as described in their report. A homogeneous protein with the predicted mass of 50 kDa as judged by SDS-PAGE was purified successfully (not shown); however, no PP_i-PFK activity under the conditions described previously could be detected at any point during the purification. Because the relatively high concentration of imidazole used to elute the enzyme from the nickel column (400 mM) might have denatured the enzyme, the CD spectrum of the preparation was compared with that of the homogeneous 60-kDa PP_i-PFK. The spectra were nearly identical, suggesting that the global structure of the 48-kDa protein was maintained. All attempts at dialyzing the eluted protein into a lower salt buffer resulted in irreversible precipitation. Several other methods of elution from the nickel column were attempted, including various concentrations of imidazole and gradients of imidazole and EDTA. However, all of these methods also failed to produce an enzyme with detectable PP_i-PFK activity. The yield of the 48-kDa PFK fusion protein by this method was nevertheless sufficient to raise polyclonal antibodies, which were used to detect the native protein expressed without the histidine tag during its subsequent purification.

Conventional PFK purification procedures were attempted to isolate the 48-kDa PFK without the N-terminal histidine tag, using the antibody to follow the 48-kDa protein at each step of the procedure. The recombinant enzyme did not bind to phosphocellulose, which is commonly used for the purification of PP_i-PFKs (3, 10), under a variety of conditions. No PP_i-PFK activity was detected at any point in the purification process or in cell extracts using the assay conditions described by Bruchhaus et al. (1). The inability to duplicate the previously reported activity measurements suggested that the 48-kDa PFK gene product does not utilize PP_i to phosphorylate Fru-6-P and prompted the trial of alternative purification methods. Because this laboratory commonly uses both N-6-aminohexylcarboxymethyl-ATP-Sepharose and Blue Sepharose for the purification of various ATP-dependent PFKs (11-13), these media were tried with the recombinant 48-kDa PFK. The protein was found to bind to both ATP-Sepharose and Blue Sepharose and was eluted from both types of medium with 1 mM ATP in the eluting buffer. Subsequent Mono Q anion exchange chromatography of the eluate from either procedure yielded homogeneous enzyme with a size by SDS-PAGE equivalent to the calculated 47.6-kDa mass (not shown). The recombinant enzyme purified by ATP-Sepharose chromatography was identified with the crude antibodies that had been raised against the purified, histidine-tagged recombinant 48-kDa PFK. For the isolation of specific 48-kDa PFK antibodies, the ATP-Sepharose-purified enzyme was linked to CNBr-activated Sepharose as described under "Experimental Procedures."

Catalytic Properties of the 48-kDa PFK

Activation-The affinity chromatography isolation procedure indicated that the 48-kDa PFK interacts with ATP. This observation suggested a reexamination of the activity in the presence of ATP.
In such experiments it was observed that when assays with relatively high concentrations of enzyme were allowed to proceed for 30 or more minutes, a very gradual increase in ATP-dependent activity occurred, suggesting activation in the assay cuvette. This led to preincubation of the enzyme with the various components of the assay mixture. The testing of the assay components led to the significant finding that ATP-dependent PFK activity can only be detected when relatively high concentrations of enzyme and ATP are preincubated together before the enzyme is added to the assay mixture (details discussed below). Addition of the same amount of enzyme to the assay mixture without prior incubation with ATP resulted in no activity, even when the ATP concentration in the assay mixture was high. In such cases no activity is detected because the enzyme concentration in the reaction mixture is too low to become activated. Consistent with this hypothesis, when the enzyme is preincubated with ATP at too low an enzyme concentration, no activity results on adding an equivalent amount of enzyme to the assay mixture. The dependence of the activation process on the concentrations of enzyme and ATP is shown in Fig. 1. The enzyme and ATP concentrations in the preincubation mixtures that result in half-maximal activity are 0.72 µM and 0.21 mM, respectively. The time course of activation was measured at saturating concentrations of both ATP and enzyme; maximal activity is attained after 5 min of preincubation, as shown in Fig. 2A. To determine whether the temperature of the preincubation had any effect on the resultant rate of the enzyme, the preincubation mixtures were incubated at various temperatures before the resultant activity was measured. The temperature optimum for the preincubation is 30°C (Fig. 2B). Based on these results, preincubations for all standard kinetic assays were subsequently conducted using 4 µM enzyme and 2 mM ATP in 150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA at 30°C and lasted for at least 30 min.

The enzyme concentration dependence of the activation process was investigated further using polyethylene glycols (PEGs). PEGs have been shown to have an associative effect on macromolecular solutes in aqueous solution without specifically interacting with them (14), and aggregating systems have been shown to be shifted to higher degrees of association by increasing PEG concentration (15). Inclusion of PEG in preincubation mixtures allowed the 48-kDa PFK to be activated at preincubation enzyme concentrations that were too low to support activation in the absence of PEG (Fig. 3). The activation process was enhanced by increasing concentration and size of PEG in the preincubations, with the enhancement peaking at 20% PEG. PEG apparently encourages native self-association of the 48-kDa PFK into the activated state by increasing the local protein concentration in solution. The 48-kDa PFK does not require the MgATP complex for activation, because it is activated maximally without Mg^2+ in the preincubation buffer. Maximal activation of the 48-kDa PFK can also be achieved when it is incubated with other nucleoside triphosphates (Table I). GTP and ITP, as well as the pyrimidine nucleotides UTP and CTP, are all as effective as ATP at activating the enzyme for measuring ATP activity in the resultant assay mixture. The nonhydrolyzable ATP analog AMP-PNP and ADP can also activate the enzyme, both being at least 60% as effective as ATP.
Incubation with AMP, the cosubstrate Fru-6-P, or the product orthophosphate results in no activation at all. Interestingly, PP_i, despite lacking the nucleotide moiety entirely, is quite capable of activating the 48-kDa PFK to achieve ATP-dependent activity, being 75% as effective as ATP in a 30-min preincubation.

Inactivation-Once activated, the enzyme spontaneously inactivates upon simple dilution. This inactivation can be seen during the PFK assay, where one observes a decrease in the rate about 100 s after the start of the reaction, the result of diluting activated enzyme from the concentrated preincubation mixture into the assay. The inactivation proceeds as a first-order reaction. To characterize the dilution effect, the enzyme was activated by preincubation at the optimal conditions and subsequently diluted in 150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA, with or without 2 mM ATP. Aliquots were then taken from each dilution mixture at increasing time points and added to assay mixtures to measure the activity (Fig. 4). The first-order inactivation rate constant without additions was 0.09 min^-1, and it decreased by nearly half, to 0.05 min^-1, when the enzyme was diluted in buffer containing 2 mM ATP with all other preincubation and dilution conditions identical. Diluting in buffer containing both substrates at concentrations that produce maximal PFK activity (2 mM ATP and 20 mM Fru-6-P) decreases the rate of inactivation substantially (not shown). This experiment is complicated by the fact that the reaction is proceeding under these conditions; the rate of inactivation measured at early times, before significant reaction had taken place, gave a rate constant of 0.016 min^-1.

[Figure 1. Enzyme and ATP dependence of activation. After activation, the activity was measured under standard assay conditions; the data were fitted using the Michaelis-Menten model to estimate the preincubation enzyme concentration that achieves half the maximal rate. A, enzyme concentration dependence: the enzyme was incubated at concentrations from 0.1 to 28.5 µM in KTes (pH 7.2) assay buffer containing 10 mM ATP in 20-µl volumes for 30 min at 30°C; fixed amounts of enzyme from the preincubations were then diluted 10-fold in standard dilution buffer, and identical volumes were taken from each dilution and added to assay cuvettes. B, ATP dependence: the enzyme was incubated in KTes (pH 7.2) assay buffer at a fixed concentration of 4 µM in separate tubes containing ATP concentrations increasing from 0.1 to 4 mM; incubations were carried out in 20-µl volumes at 30°C, and identical amounts of enzyme were taken from each tube after 30 min of incubation and assayed under standard assay conditions. Extending the incubation a further 90 min did not increase the activation of the enzyme.]

[Figure 2. Time and temperature dependence of PFK activation. A, time dependence: the enzyme was first prepared at 4 µM in assay buffer at 30°C; ATP was added to a final concentration of 2 mM, and aliquots of the activated enzyme were subsequently added to assay cuvettes using the standard dilution method at time points from 1 s to 2 h after the addition of ATP, with the reactions then measured under standard assay conditions. B, temperature dependence: fixed concentrations of enzyme at 4 µM were activated at various temperatures in assay buffer containing 2 mM ATP in 100-µl volumes for 30 min.]

[Figure 4. Inactivation by dilution. The enzyme was first activated under standard activation conditions. Aliquots of enzyme were then each diluted 10-fold in assay buffer (150 mM KTes (pH 7.2), 3 mM MgCl2, 1 mM EDTA) or in assay buffer with 2 mM ATP. At time points from 0 to 120 min, identical amounts of enzyme were assayed under standard conditions. The rate constants (k) were calculated as the negative slope of the first-order plot of the natural log of the rate against time. Only time points within the first 20 min of the ATP-buffer-diluted mixture were used, because the reaction reaches an equilibrium after that time. The reversible first-order reaction with ATP was fitted using a first-order exponential decay equation.]
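To spell out the first-order analysis (our annotation; the rate constants are those quoted above):

    v(t) = v_0 e^(-k t),    so    ln v(t) = ln v_0 - k t,

and k is the negative slope of a plot of ln(rate) against time. The corresponding half-life of the activated state is t_1/2 = ln 2 / k: about 7.7 min for dilution into buffer alone (k = 0.09 min^-1), about 14 min with 2 mM ATP (k = 0.05 min^-1), and about 43 min with both substrates present (k = 0.016 min^-1).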
As a result of these experiments, all kinetic assays were performed by diluting the activated enzyme into assay buffer containing 2 mM ATP and 20 mM Fru-6-P whenever dilution was necessary.

Aggregation State-The above experiments on activation and inactivation suggested that the state of polymerization of the molecule determines its activity. To determine the aggregation state of the native 48-kDa PFK as well as of the activated enzyme, a size exclusion chromatography experiment was performed. The polymerization state was determined for the native enzyme, the enzyme activated with ATP, and the enzyme incubated with the cosubstrate Fru-6-P alone, which does not activate the enzyme. For the enzyme-ATP experiment, the concentrations of ATP and enzyme found to activate the enzyme maximally were used in the preincubation mixtures before chromatography. For the Fru-6-P experiment, the concentration of Fru-6-P in the preincubation mixture was 20 mM, the concentration determined to achieve maximal ATP-PFK activity in the kinetic assay (see below). The results (not shown) of the molecular sizing experiments indicate that both the unactivated enzyme and the enzyme incubated with Fru-6-P alone eluted as a single peak at a position similar to that of glycerol-P dehydrogenase (70 kDa), indicating that the 48-kDa PFK exists as a dimer under these conditions. The enzyme preincubated with ATP eluted as a single peak at a position near that of the E. coli ATP-PFK (140 kDa), suggesting that it exists as a tetramer after activation.

Kinetic Properties-The 48-kDa PFK was found to be a highly active ATP-utilizing enzyme with a k_cat value of 250 s^-1, which is almost three times the maximum activity of the E. coli ATP-PFK (16) and about three-fourths the maximum PP_i-PFK activity of the E. histolytica 60-kDa enzyme. No PP_i-PFK activity (at a 0.01% level of detectability) was observed when PP_i (at 2 mM) was used as the phosphoryl donor in the assay, and ATP activity was not inhibited by this concentration of PP_i. To determine whether the 48-kDa PFK could phosphorylate other sugars using ATP as a phosphoryl donor, the production of ADP was measured when the enzyme was incubated with other sugar compounds. Fru-1-phosphate, glucose, glucose 1-phosphate, glucose 6-phosphate, mannose, and ribose 5-phosphate could not substitute for Fru-6-P in the kinase assay. Similar to many other ATP-PFKs, the 48-kDa PFK shows cooperative kinetics with respect to Fru-6-P, with a Hill constant (n_H) of 2.3 and a relatively high Fru-6-P half-saturation value of 3.8 mM (Fig. 5A). With regard to ATP, the kinetic estimates of the E. histolytica 48-kDa PFK compare favorably with those of the ATP-PFK from E. coli (16). The 48-kDa PFK has an apparent affinity for ATP (K_m = 0.12 mM) (Fig. 5B) that is higher than that of the E. coli ATP-PFK (K_m = 0.21 mM).
The k_cat/K_m value with ATP (k_cat/K_m ≈ 2,200 s^-1 mM^-1) of the 48-kDa PFK is significantly higher than that of the E. coli enzyme (k_cat/K_m ≈ 390 s^-1 mM^-1) (16). The presence of PEG in the assay mixture was found to have a modest effect on the apparent affinity for Fru-6-P: the apparent K_m value for Fru-6-P was ~33% lower with the inclusion of PEG, with the effect consistent across different concentrations and sizes of PEG. There was little effect on cooperativity or on the maximal velocity of the Fru-6-P saturation profiles (data not shown).

Although clearly an ATP-utilizing enzyme, the 48-kDa PFK exhibited many characteristics not typical of ATP-PFKs. The pH optimum was relatively acidic, which is more characteristic of PP_i-PFKs. The highest activity in the presence of subsaturating Fru-6-P (2.5 mM) was observed between pH 6 and 7, with only 30% activity at pH 7.5 and 15% activity at pH 8.5. Activity measurements below pH 6.0 were compromised by the limited activity of one or more of the auxiliary enzymes used in the coupled assay. In contrast, E. coli and all known mammalian PFKs have alkaline pH optima. Also unusual was the proficiency of the enzyme in using other nucleotides as substrates relative to ATP (Table II). The apparent affinity and the activity at low substrate concentrations were even higher for GTP than for ATP. The 48-kDa PFK still showed cooperativity with Fru-6-P with each of the nucleotides as cosubstrate, and the apparent affinity for Fru-6-P remained relatively high with each of the nucleotides tested. In comparison, the E. coli ATP-PFK has k_cat/K_m values for GTP and ITP that are an order of magnitude lower than the value for ATP (16). Although the 48-kDa PFK did not require Mg^2+ for activation, it was required for catalytic activity, as in other PFKs. Substituting Mn^2+ in the assay resulted in only 16% of the activity observed with Mg^2+, whereas no activity was detectable when substituting Ca^2+ or Zn^2+.

The 48-kDa PFK appears to be inhibited by ATP at high ATP concentrations. This inhibition was evident at low concentrations of the cosubstrate Fru-6-P and disappeared at saturating Fru-6-P (Fig. 5B). ATP inhibition has been demonstrated in other ATP-PFKs: mammalian PFK has a separate ATP inhibitory site (17), whereas E. coli PFK displays mechanism-based, nonallosteric inhibition by ATP (18). The mechanism of ATP inhibition in the 48-kDa PFK remains to be elucidated. Cooperativity in the interaction with Fru-6-P increased at higher ATP concentration (Fig. 5A). The mechanism of the cooperative interaction appears to be allosteric, but it remains to be resolved; it may be related to association/dissociation behavior, but the failure of PEG to eliminate cooperativity argues against this interpretation. PEP, which is known to inhibit other PFKs (12, 19), is an inhibitor of the 48-kDa PFK (Fig. 6). PEP decreased the apparent Fru-6-P affinity substantially, although it had a limited effect on cooperativity (n_H) and no effect on ATP binding (Table III). The steady-state PEP concentration in E. histolytica is not known. We investigated many other compounds for their ability to regulate the activity of the 48-kDa PFK and have so far found no other effectors. Activity with each potential effector was measured both at half-saturating (2.5 mM) and saturating (20 mM) concentrations of Fru-6-P.
The apparent Fru-6-P affinity of the 48-kDa PFK was not affected by the metabolites AMP, ADP, GDP, cAMP, orthophosphate, sodium ion, ammonium ion, phosphocreatine, citrate, fructose 2,6-bisphosphate, 3-phosphoglycerate, or glucose 6-phosphate (all at 1 mM), metabolites that have been demonstrated to modulate PFK in other organisms. Other compounds examined for their ability to regulate the 48-kDa PFK included phosphoglycolate, lactate, calcium ion, and calmodulin; no effects were seen.

mRNA Levels for the 60-kDa and 48-kDa PFKs in E. histolytica Trophozoites-Total RNA isolated from trophozoites was used for Northern blots to determine the expression of the two PFKs at the transcriptional level. Using a cDNA probe of the 48-kDa PFK gene, blots of the E. histolytica total RNA showed a single band of about 1.3 kilobases. The blot with the 60-kDa gene probe showed a single band of ~1.6 kilobases. Both were the expected sizes as determined from the lengths of the genes and the short untranslated region sequences. A slot blot was used to compare the expression of the two E. histolytica PFK genes at the mRNA level. By comparing the intensity of the signal from the total RNA with that of the signal from the standard DNA series, as described under "Experimental Procedures," the amounts of the mRNA of the two PFK genes within total RNA were compared (not shown). In 30 µg of E. histolytica total RNA, there were 250 pg of 60-kDa PFK mRNA and only 16 pg of 48-kDa PFK mRNA. Taking into account the sizes of the two transcripts, the mRNA level of the 60-kDa PFK gene is about 10 times higher than that of the 48-kDa PFK gene.
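To make the conversion explicit (our annotation; the ~1.6- and ~1.3-kb transcript lengths quoted above are used as proxies for the relative mass per molecule):

    (60-kDa mRNA)/(48-kDa mRNA) ≈ (250 pg / 1.6 kb) / (16 pg / 1.3 kb) ≈ 156 / 12.3 ≈ 13,

i.e., roughly an order of magnitude more 60-kDa PFK transcript on a molar basis, consistent with the ~10-fold figure quoted above.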
PFK Content in E. histolytica Trophozoites-The relative quantities of the two PFK enzymes in amoebal extracts were compared by Western analysis using known amounts of recombinant 48-kDa and 60-kDa PFKs as standards, as described under "Experimental Procedures" (not shown). In trophozoites the 48-kDa PFK enzyme was present at about one-tenth the level of the 60-kDa PFK. These data are consonant with the data on the mRNA levels. The native 48-kDa PFK enzyme was readily isolated from trophozoite extracts by chromatography on ATP-Sepharose as described under "Experimental Procedures."

The ATP-Sepharose isolation procedure indicated no apparent association between the two PFKs in trophozoites: the 60-kDa PFK was never detected by Western blot at any point during chromatography except in the initial effluent containing proteins that do not bind to ATP-Sepharose. The search for possible interaction between the two PFKs was motivated by the observation of a multisubunit structure in plant PP_i-PFKs (4), in which catalytic and regulatory subunits copurify. No copurification was observed, nor was coprecipitation of the two enzymes from trophozoite extracts seen when either specific antibody was used. Furthermore, assays of purified 60-kDa PP_i-PFK were not influenced by the presence of an equal amount of purified 48-kDa ATP-PFK, nor was there any effect when the two enzymes were preincubated together; similarly, no effect on the 48-kDa PFK was seen in the reverse experiments. Thus any direct interaction between the two proteins is very unlikely.

A reinvestigation of trophozoite extracts showed that ATP-PFK activity could be detected without prior activation. ATP-dependent PFK activity in amoebae is about 10-fold lower than PP_i-dependent activity (0.43 unit of ATP activity versus 4.1 units of PP_i activity in 100 µl of trophozoite extract), corresponding to the relative amounts of the two PFK enzymes detected by Western analysis. To ensure that the measured ATP-PFK activity was not an artifact of the 60-kDa PFK acting on PP_i produced in other metabolic pathways, amoebal extracts were dialyzed exhaustively to eliminate all small metabolites. Moreover, ATP-PFK activity was readily measured at high Fru-6-P concentrations (20 mM) yet was totally undetectable at 1.5 mM Fru-6-P, a saturating concentration of the sugar phosphate for the 60-kDa PFK. This indicates that the ATP-PFK activity detectable only at the higher Fru-6-P concentration did not reflect the 60-kDa PFK acting on contaminating PP_i, because such contamination would have been detectable at 1.5 mM Fru-6-P. In the study that first identified the PP_i-PFK enzyme of E. histolytica, Reeves et al. (20) also detected ATP-PFK activity in trophozoite homogenates. Those investigators were unable to characterize the E. histolytica ATP-PFK activity further because of activity losses during purification, and Reeves later concluded that the observed ATP-PFK activity was an artifact (21). That is clearly not the case, as demonstrated here. An interesting finding was that the amoebal ATP-PFK activity was not increased by preincubation of amoebal extracts with 2 mM ATP, even after the small metabolites had been eliminated by dialysis. In contrast, the trace of ATP-PFK activity in bacterial extracts containing recombinantly expressed E. histolytica 48-kDa PFK increased dramatically after incubation with 2 mM ATP (data not shown).

DISCUSSION

The two PFKs of E. histolytica display distinct phosphoryl donor specificities. The 60-kDa PFK is a PP_i-dependent enzyme and is responsible for all detectable PP_i-PFK activity in trophozoite extracts (3). The 48-kDa PFK, contrary to previous reports, demonstrates no detectable PP_i-PFK activity when produced recombinantly. It is in fact a highly active ATP-utilizing PFK that is also able to use other nucleotides efficiently for catalysis. However, the apparent K_m value for Fru-6-P of the 48-kDa PFK is more than 20-fold greater than previously measured intracellular Fru-6-P concentrations (0.16 ± 0.06 mM) in amoebae (20), indicating that without a positive effector this enzyme may have limited physiological activity unless one invokes compartmentalization. Considering that the 60-kDa PFK has been shown to account for the glycolytic flux in amoebal extracts (21), and that mRNA, protein, and activity levels all indicate that the 60-kDa PFK is present in trophozoites in 10-fold greater amounts than the 48-kDa enzyme, the significance of the ATP-PFK in the glycolysis of trophozoites remains in question. However, E. histolytica has a complex life cycle, and the ATP-PFK may function in other stages of that cycle. The findings in this study, as well as results from earlier studies from this laboratory (3), argue against the possibility that the two PFKs of E. histolytica associate or affect each other in some regulatory manner.
The two PFKs do not copurify when either protein is isolated; a protein 47 kDa in size was not seen in the active PFK fractions during native PP_i-PFK purification; immunoprecipitation of the trophozoite cell extract with antibodies against the 60-kDa PFK did not precipitate a protein close to 47 kDa in mass; and the activity of either the 60-kDa PFK or the 48-kDa PFK was unaffected by the presence of its counterpart in the assay mixture or in preincubations (data not shown).

The E. histolytica 48-kDa PFK is an unusual ATP-utilizing PFK that is only active after incubation at high enzyme concentrations with ATP. This activation appears to be the result of a change in the state of aggregation of the enzyme upon binding ATP rather than of a catalytic event such as ATP hydrolysis or the formation of a phosphoenzyme complex. The activation can be reversed by simply diluting the enzyme-ATP incubation, and the enzyme-ATP preincubation mixture eventually reaches an equilibrium. These results indicate that activation does not involve a permanent alteration of either the enzyme or the ATP molecule. The enzyme can also be activated by other nucleoside triphosphates, AMP-PNP, ADP, and even PP_i, indicating that a specific ATP modification is not involved in activation. These observations suggest that the nucleoside moiety is not essential for activation and that the last two phosphoryl groups of ATP are the most critical features. Closer analysis reveals PP_i to be a better activator than ADP and AMP-PNP, both of which deviate from ATP in the terminal polyphosphate region. This polyphosphate moiety is completely absent in AMP and orthophosphate, and incubation with these compounds does not activate the enzyme at all. The Michaelis constant for ATP derived from the ATP-dependent activation assay (K_m = 0.21 mM) is similar to that observed in the substrate-dependence assay (K_m = 0.12 mM), which is consistent with ATP binding at the same site for both activation and catalysis. However, our results show that the adenosine moiety seems to be of little relevance in activation, whereas PP_i activates the enzyme to nearly maximal levels. Although PP_i can activate the enzyme, it is not a substrate, nor does it inhibit ATP activity. These results introduce the possibility that the 48-kDa PFK binds PP_i for activation at a site other than the substrate binding site. It is also possible that the activators produce their effects by chelating some unknown inhibitor; however, this is unlikely because all activation assays were performed in the presence of 1 mM EDTA. Although the enzyme cannot be activated by the cosubstrate Fru-6-P alone, the sugar phosphate does provide some protection against inactivation when present together with ATP. The dependence of the activation process on protein concentration suggests that the reversible activity loss is associated with association-dissociation behavior of the protein. This was supported by the molecular crowding experiments with PEG, which showed that crowding increased the rate of activation. Finally, molecular sizing experiments show that the inactive PFK exists as a dimer that associates into an active tetramer upon incubation with ATP. The 48-kDa PFK in dialyzed amoebal extracts was found in an activated state. This observation introduces several possibilities for the native activation state of the enzyme.
Although it is possible that ATP was not eliminated from the trophozoite extracts by dialysis, it is more likely that another activator exists in the amoeba that is either too large or too tightly associated to be removed by dialysis. Alternatively, the 48-kDa PFK may be activated in trophozoites by some other means, such as subcellular localization. Whatever the mechanism of native activation, it is likely that in vivo the 48-kDa PFK displays substantially different kinetic features.

The two PFKs of E. histolytica have a low sequence identity of about 17%, although there are many identical residues in the presumed active site. Phylogenetic studies of PFK sequences place the two E. histolytica PFKs in a large group of proteins, most of which have been described as PP_i-PFKs, that is distinct from the typical ATP-PFKs such as those found in E. coli as well as in all metazoans. The 60-kDa enzyme falls into a monophyletic subgroup that contains a number of other well characterized PP_i-PFKs, including those of plants (22, 23). On the other hand, the 48-kDa PFK sequence from E. histolytica falls into a monophyletic subgroup within the PP_i-PFK group that also contains the Treponema pallidum and Borrelia burgdorferi proteins and the peroxisomal ATP-PFK of Trypanosoma brucei (22, 23). Of the other three members of the group, the T. pallidum gene product has not been characterized, and preliminary studies of the B. burgdorferi product have found neither ATP- nor PP_i-PFK activity (24). The T. brucei ATP-PFK is a homotetramer with a subunit mass of 50 kDa and is not regulated by the metabolites that modulate the activity of ATP-PFKs in other organisms (25). The members of this group of four proteins share common sequences in the presumed region where the phosphoryl transfer reaction takes place. The two sequences are GGDG and PKTIDND, which may be contrasted with GGDD and PKTIDND in almost all well characterized PP_i-PFKs and GGDG and PGTIDND in the ATP-PFKs of E. coli and all metazoans. Recently we have shown that mutation of the second Asp in the GGDD sequence of the E. histolytica 60-kDa PP_i-PFK to Gly changes the specificity to that of an ATP-PFK (26). The last residue in the GGDG sequence thus appears to be a particularly important determinant of the phosphoryl donor specificity of all PFKs.
Inflationary paradigm in trouble after Planck2013

Recent results from the Planck satellite combined with earlier observations from WMAP, ACT, SPT and other experiments eliminate a wide spectrum of more complex inflationary models and favor models with a single scalar field, as reported by the Planck Collaboration. More important, though, is that all the simplest inflaton models are disfavored statistically relative to those with plateau-like potentials. We discuss how a restriction to plateau-like models has three independent serious drawbacks: it exacerbates both the initial conditions problem and the multiverse-unpredictability problem, and it creates a new difficulty that we call the inflationary "unlikeliness problem." Finally, we comment on problems reconciling inflation with a standard model Higgs, as suggested by recent LHC results. In sum, we find that recent experimental data disfavors all the best-motivated inflationary scenarios and introduces new, serious difficulties that cut to the core of the inflationary paradigm. Forthcoming searches for B-modes, non-Gaussianity and new particles should be decisive.

The Planck satellite data reported in 2013 [1] shows with high precision that we live in a remarkably simple universe. The measured spatial curvature is small; the spectrum of fluctuations is nearly scale-invariant; there is a small spectral tilt, consistent with there having been a simple dynamical mechanism that caused the smoothing and flattening; and the fluctuations are nearly Gaussian, eliminating exotic and complicated dynamical possibilities, such as inflationary models with noncanonical kinetic energy and multiple fields. (In this Letter, we will not discuss the marginal deviations from isotropy on large scales reported by the Planck Collaboration [2].) The results not only impose tight quantitative constraints on all cosmological parameters [3], but, qualitatively, they call for a cosmological paradigm whose simplicity and parsimony match the nature of the observed universe.

The Planck Collaboration attempted to make this point by describing the data as supporting the simplest inflationary models [4,5,6]. However, the models most favored by their data (combined with earlier results from WMAP, ACT, SPT and other observations [7]) are simple by only one criterion: an inflaton potential with a single scalar field suffices to fit the data. By several other important criteria described in this Letter, the favored models are anything but simple. Namely, they suffer from exacerbated forms of the initial conditions and multiverse problems, and they create a new difficulty that we call the inflationary "unlikeliness problem." That is, the favored inflaton potentials are exponentially unlikely according to the logic of the inflationary paradigm itself. The unlikeliness problem arises even if we assume ideal initial conditions for beginning inflation, ignore the lack of predictive power stemming from eternal inflation and the multiverse, and make no comparison with alternatives. Thus, the three problems are all independent, all emerge as a result of the data, and all point to the inflationary paradigm encountering troubles that it did not have before. We further speculate about how recent results from the Large Hadron Collider (LHC) suggesting a standard model Higgs could create yet another problem for inflation. Our analysis is based on considering the "favored" models according to the current observations.
(Here and throughout the Letter we use the ranking terminology of the Planck Collaboration.) Although the simplest inflationary models are "disfavored" relative to these by 1.5σ or more, it is too early in some cases to declare them "ruled out." We discuss in the conclusions how forthcoming searches for B-modes, non-Gaussianity and new particles could amplify, confirm, or resolve the problems for inflation.

Which inflationary models survive after Planck2013?

Planck2013 has added impressively to previous results in three ways. First, it has shown that the non-Gaussianity is small. This eliminates a wide spectrum of more complex inflationary models and favors models with a single scalar field. This restriction to single-field models is what justifies focusing on the plot of r (the ratio of tensor to scalar fluctuations) versus n_s (the scalar spectral index), since it is optimally designed to discriminate among the single-field possibilities. In terms of the r-n_s plot, a second contribution of Planck2013 [1] has been to independently confirm the results obtained previously by combining WMAP with other observations. The data disfavors by 1.5σ or more all the simplest inflation models: power-law potential and chaotic inflation [8], exponential potential and power-law inflation [9], inverse power-law potential [10,11]. Third, the r-n_s plot favors instead a special subclass of inflationary models with plateau-like inflaton potentials. These models - simple symmetry breaking [5,6,12], natural (axionic) [13], symmetry breaking with non-minimal (quadratic) coupling [14,15,16], R^2 [17], hilltop [18] - are simple in the sense that they all can be formulated (in some cases via changes of variable [19,20,21,22]) as single-field, slow-roll models with a canonical kinetic term in the framework of Einstein gravity [1]. A distinctive feature of this subclass of models, following from the Planck2013 constraint on r (r_0.002 < 0.12 at 95% CL), that will be important in our analysis is that the energy scale of the plateau, M_I^4, is at least 12 orders of magnitude below the Planck scale ~M_Pl^4 [1]:

    M_I^4 ≲ 10^-12 M_Pl^4 (r_*/0.12)    (1)

at 95% CL, where the bound follows from the measured scalar amplitude A_s and where r_* is the value of r evaluated at Hubble exit during inflation of the mode with wave number k_*.
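As a consistency check on Eq. (1) (our annotation; the numbers are standard, and conventions for the Planck mass differ between papers): the single-field slow-roll relations A_s = V/(24π^2 ε m_Pl^4) and r = 16ε give

    V = (3π^2/2) A_s r m_Pl^4 ≈ 3.9 × 10^-9 m_Pl^4 (r/0.12)

for A_s ≈ 2.2 × 10^-9, where m_Pl ≈ 2.4 × 10^18 GeV is the reduced Planck mass. Re-expressed in terms of M_Pl ≈ 1.2 × 10^19 GeV, the factor (m_Pl/M_Pl)^4 ≈ 1.6 × 10^-3 brings this to V ≲ 6 × 10^-12 M_Pl^4 for r ≤ 0.12, i.e., V^1/4 ≲ 2 × 10^16 GeV, the "12 orders of magnitude" statement in the text.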
A classic example that we will consider first is the original new inflation model [5,6] based on a Higgs-like inflaton, φ, and potential V(φ) = λ(φ^2 - φ_0^2)^2, as illustrated in Fig. 1a. The plateau region is the range of small φ ≪ φ_0. Other examples, illustrated in Figs. 1b and 1c, will then be considered.

[Figure 1: Plateau-like models favored by Planck2013 data: (a) Higgs-like potential V with standard Einstein gravity that has both a plateau at φ ≪ φ_0 (solid red) and power-law behavior at φ ≫ φ_0 (dashed blue), where N_max is the maximum number of e-folds of inflation possible for the maximal range ∆φ; (b) unique plateau-like model (solid red) for a semi-infinite range of φ if perfectly tuned, compared to a continuum of power-law inflation models (dashed blue) without tuning; (c) periodic (axion-like) plateau potential (solid red) for φ plus a typical power-law inflation potential (dashed blue) for a second field ψ; (d) designed inflationary potential with a power-law inflation segment or false-vacuum segment (dotted green) grafted onto a plateau model (solid red). In-figure labels include "first inflation" and "second inflation."]

An obvious difference between plateau-like models like this and the simplest inflationary models, like V(φ) = λφ^4, is that the simplest models require only one parameter and absolutely no tuning of parameters to obtain 60 or more e-folds of inflation, while the plateau-like models require three or more parameters and must be fine-tuned to obtain even a minimal amount of inflation. For V(φ) = λφ^4, all that is required is that φ ≥ M_Pl, where M_Pl is the Planck mass. However, the fine-tuning of parameters is a minor issue within the context of the more serious problems described below that undercut the inflationary paradigm altogether.

How do plateau-like inflationary models affect the initial conditions problem?

As originally imagined, inflation was supposed to smooth and flatten the universe beginning from arbitrary initial conditions after the big bang [4]. However, this view had to be abandoned as it was realized that large inflaton kinetic energy and gradients within a Hubble-sized patch prevent inflation from starting. While some used statistical mechanical reasoning to argue that the initial conditions required for inflation are exponentially rare [23,24], the almost "universally accepted" [25] assumption for decades, originally due to Linde [8,26,27,28,29,30,31,32,33,34], has been that the natural initial condition when the universe first emerged from the big bang and reached the Planck density is having all the different energy forms of the same order. For the inflaton, this means (1/2)φ̇^2 ~ (1/2)(∇φ)^2 ~ V(φ) ~ M_Pl^4. Roughly speaking, the assumption is based on the notion that all these forms of energy density span the same range, from zero to M_Pl^4, so it is plausible to have them of the same order at a time when the total energy density is M_Pl^4. Evolving forward in time from these initial conditions, V(φ) almost immediately comes to dominate the energy density and triggers inflation before the kinetic and gradient energy can block it from starting.

After Planck2013, the very same argument used to defend inflation now becomes a strong argument against it. Because the potential energy density of the plateau, M_I^4, is bounded above and must be at least a trillion times smaller than the Planck density to obtain the observed density fluctuation amplitude, the only patches that can exist at the Planck time have (1/2)φ̇^2 ~ (1/2)(∇φ)^2 ≫ V(φ). In particular, beginning from these revised initial conditions and evolving forward in time, the kinetic energy decreases as 1/a^6 and the gradient energy as 1/a^2, where a(t) is the Friedmann-Robertson-Walker scale factor. Hence, beginning from roughly equal kinetic and gradient energy, gradients and inhomogeneities quickly dominate, and the combination blocks inflation from occurring. To quantify the problem: for inflation to initiate, there must be a seed region at the Planck density (t = t_Pl) that remains roughly homogeneous until inflation begins (t = t_I) and whose radius r(t) has expanded to a size at least equal to a Hubble radius, H^-1(t_I), at the time inflation initiates. After Planck2013, this requires, by simple comparison of the scales M_Pl/M_I ~ 10^3 · (10^16 GeV/M_I) as constrained by Planck2013, that there exist homogeneous initial volumes before inflation begins whose size is of order M_Pl/M_I ~ 10^3 in units of the initial Hubble radius, i.e., initial smoothness on the scale of a billion or more Hubble volumes [25]!
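The counting behind the "billion Hubble volumes" goes roughly as follows (our sketch; it assumes the pre-inflationary patch is dominated by gradient energy, which redshifts like radiation, ρ ∝ a^-4):

    a_I/a_Pl ~ (M_Pl/M_I),    H^-1(t_I)/H^-1(t_Pl) ~ (ρ_Pl/ρ_I)^1/2 = (M_Pl/M_I)^2,

so a region destined to span one Hubble radius at t_I must begin with radius ~ (M_Pl/M_I)^2 / (M_Pl/M_I) = M_Pl/M_I ~ 10^3 initial Hubble radii, i.e., it must be smooth across ~ (10^3)^3 = 10^9 Hubble volumes at the Planck time.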
In sum, by favoring only plateau-like models, the Planck2013 data creates a serious new challenge for the inflationary paradigm: the universally accepted assumption about initial conditions no longer leads to inflation; instead, inflation can only begin to smooth the universe if the universe is unexpectedly smooth to begin with!

Is a plateau-like potential likely according to the inflationary paradigm?

All inflationary potentials are not created equal. The odd situation after Planck2013 is that inflation is only favored for a special class of models that is exponentially unlikely according to the inner logic of the inflationary paradigm itself. The situation is independent of the initial conditions problem described above: even assuming ideal conditions for initiating inflation, the fact that only plateau-like models are favored is paradoxical, because inflation requires more tuning, occurs for a narrower range of parameters, and produces exponentially less plateau-like inflation than the now-disfavored models with power-law potentials. This is what we refer to as the inflationary "unlikeliness problem."

To illustrate the problem, we continue with the classic plateau-like model V(φ) = λ(φ^2 - φ_0^2)^2. Like most plateau-like inflationary models, the plateau terminates at a local minimum, and then the potential grows as a power law (~λφ^4 in this case) for large φ. The problem arises because within this scenario the same minimum can be reached in two different ways, either by slow-roll inflation along the plateau or by slow-roll inflation from the power-law side of the minimum. It is easy to see that inflation from the power-law side requires less tuning of parameters, occurs for a much wider range of φ, and produces exponentially more inflation. The constraints on an inflationary model are determined by the amount of inflation (N ~ 60); the scale of density fluctuations (δρ/ρ ~ 10^-5); and the condition called "graceful exit" (which ensures that inflation ends locally and marks the start of reheating). Using the well-known slow-roll approximations, N ~ V/V'' and δρ/ρ ~ V^3/2/V' (in Planck units), these constraints can be specified both for plateau-like inflation, where V ≈ λφ_0^4 - 2λφ_0^2 φ^2, and for power-law (~λφ^4) inflation [35]. One immediately observes that the first constraint imposes no parameter-tuning constraints on power-law models but does require fine-tuning for plateau-like models. For the plateau-like model, inflation occurs if φ lies in the range

    ∆φ(plateau) ≲ φ_0^2/M_Pl,

and the maximum number of e-folds is

    N_max(plateau) ~ (φ_0/M_Pl)^2.

By comparison, coming from the power-law side of the same potential, inflation occurs for the range ∆φ(power-law) ≲ λ^-1/4 M_Pl, so that ∆φ(power-law) ≫ ∆φ(plateau), where we have followed convention in confining the power-law range to those values of φ for which V(φ) is less than the Planck density, and used the fact that λ must be of order 10^-15 to obtain the observed density perturbation amplitude on large scales. Also, the maximum integrated amount of inflation on the power-law side is

    N_max(power-law) ~ λ^-1/2 ~ 10^7 ≫ N_max(plateau).
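A sketch of where these estimates come from (our annotation, in reduced Planck units, M_Pl = 1, using the standard slow-roll quantities):

    ε ≡ (1/2)(V'/V)^2 < 1,    N ≃ ∫ (V/V') dφ.

On the plateau, V ≈ λφ_0^4 - 2λφ_0^2 φ^2 gives V'/V ≈ -4φ/φ_0^2, so ε < 1 only for φ ≲ φ_0^2, and the integral yields N ~ φ_0^2 up to a logarithm. On the power-law side, V ≈ λφ^4 gives V'/V = 4/φ and N ≈ φ^2/8; with the range capped at V < 1, i.e., φ ≲ λ^-1/4, this gives N_max ~ λ^-1/2/8 ~ 10^6-10^7 for λ ~ 10^-15.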
Obviously, given the much larger field range for φ and the larger amount of expansion, inflation from the power-law side is exponentially more likely according to the inflationary paradigm; yet Planck2013 forbids the power-law inflation and only allows the unlikely plateau-like inflation. This is what we call the inflationary unlikeliness problem. Although we have demonstrated the principle so far for only a single potential, the completion of most scalar field potentials, plateau-like or not, entails power-law or exponential behavior at large values of φ. There are notable examples that have no power-law completion, such as axion and moduli potentials. However, as discussed in Sec. 5, unless all scalar fields defining our vacuum are of this nature, inflation from a scalar field with power-law or exponential behavior is exponentially more likely; but this is disfavored by Planck2013.

Therefore, post-Planck2013 inflationary cosmology faces an odd dilemma. The usual test for a theory is whether experiment agrees with model predictions. Obviously, inflationary plateau-like models pass this test. However, this cannot be described as a success for the inflationary paradigm, since, according to inflationary reasoning, this particular class of models is highly unlikely to describe reality. The unlikeliness problem is an alarm warning us that a paradigm can fail even though observations favor a class of models, if the paradigm itself predicts that this class of models is unlikely.

Is Planck2013 data compatible with the multiverse?

A well-known property of almost all inflationary models is that, once inflation begins, it continues eternally, producing a multiverse [36,37] in which "anything that can happen will happen, and it will happen an infinite number of times" [38]. A result is that all cosmological possibilities (flat or curved, scale-invariant or not, Gaussian or not, etc.) and any combination thereof are equally possible, potentially rendering inflationary theory totally unpredictive. Attempts to introduce a measure principle [39,40,41,42,43,44] or an anthropic principle [45,46,47] to restore predictive power have met with difficulty. For example, the most natural kind of measure, weighting by volume, does not predict our universe to be likely: younger patches [48,49] and Boltzmann brains/babies [50,51] are exponentially favored. Planck2013 results lead to a new twist on the multiverse problem that is independent of the initial conditions and unlikeliness problems described above. The plateau-like potentials selected by Planck2013 are in the class of eternally inflating models, so the multiverse and its effects on predictions must be considered. In a multiverse, each measured cosmological parameter represents an independent test of the multiverse, in the sense that one could expect large deviations from any one of the naive predictions; the more observables one tests, the greater the chance of many-σ deviations from the naive predictions. Hence, it is surprising that the Planck2013 data agrees so precisely with the naive predictions derived by totally ignoring the multiverse and assuming purely uniform slow-roll down the potential.

Is there any escape from these new problems?

In the previous sections we introduced three independent problems stemming from the Planck2013 observations: a new initial conditions problem, a worsening multiverse-unpredictability problem, and a novel kind of discrepancy between data and paradigm that we termed the unlikeliness problem. It is reasonable to ask: is there any easy way to escape these problems? One approach that cannot work is the anthropic principle, since the new problems discussed in this Letter all derive from the fact that Planck2013 disfavors the simplest inflationary potentials, while there is nothing anthropically disadvantageous about those models or their predictions. The multiverse-unpredictability problem has been known for the three decades before Planck2013 and thus far lacks a solution; for example, weighting by volume and bubble counting, the most natural measures by the inner logic of the inflationary paradigm, fail. By contrast, one might imagine that the unlikeliness problem first brought on by Planck2013 could be evaded by a different choice of potential. Above we used as an example the potential V(φ) = λ(φ^2 - φ_0^2)^2, which has a plateau for φ ≪ φ_0 and a power-law form for φ ≫ φ_0.
Here it was clear that inflation from the power-law side is exponentially more likely, because inflation occurs for a wider range of φ and generates exponentially more accelerated expansion. An alternative, in principle, is to have a plateau at large φ and no power-law behavior, as sketched in Fig. 1b. The problem with this is that the desired flat behavior, marked in red, is a unique form that only occurs for a precise cancellation order by order in φ (if one imagines V expanded in a power series in φ). Within the inflationary paradigm, this perfect cancellation is not only ultra-fine-tuned but also uncalled for, since there are infinitely many power-law inflationary completions of the potential (blue-dashed) in which V increases as a power of φ. The single plateau possibility is extremely unlikely compared to the continuum of blue-dashed possibilities. Yet now Planck2013 disfavors everything except the unlikely plateau case. Examples of this type include the Higgs inflationary model with non-minimal coupling f(φ)R, with f(φ) = M_Pl^2 + ξφ^2 [16,52,53,54,55,56], and the f(R) = R + ξR^2 inflation model [17], where R is the Ricci scalar, once they are converted by changes of variable to a theory of a scalar field φ in the Einstein frame. Note that a plateau only occurs if f(φ) or f(R) is precisely cut off at quadratic order, while there is no reason why there should not be higher order terms; yet the addition of any one higher order term is enough to ruin plateau inflation.
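To make the change of variables concrete (our annotation; this is the standard textbook Einstein-frame result in reduced-Planck-mass conventions, not a formula from this Letter): for f(R) = R + ξR^2, the Einstein-frame scalar field has the potential

    V(φ) = (M_Pl^2/8ξ) (1 - e^(-√(2/3) φ/M_Pl))^2,

which flattens to the constant plateau M_Pl^2/(8ξ) as φ → ∞. Adding, say, a hypothetical cubic term ~R^3 to f(R) would generically spoil this exponential flattening at large φ, which is the sensitivity to higher order terms noted above.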
Furthermore, the only reason for grafting onto a plateau model rather than some other potential shape is the foreknowledge that the plateau model fits the Planck2013 data. That means, effectively, that what was supposed to be the predicted output of the model has now been used as an input in its design. It does not make sense to apply the unlikeliness criterion to models in which the very same volume and initial conditions test criteria were already "wired in" as input. In fact, not only has the likeliness criterion been used as input, but all the Planck2013 data (tilt, tensor modes, spatial curvature, non-Gaussianity) have been used in selecting to graft onto a plateau potential rather than some other shape of potential. If the only way the inflationary paradigm will work is by delicately designing all the test criteria and data into the potential, this is trouble for the paradigm.

More trouble for inflation from the LHC? Thus far, we have only focused on recent results from Planck2013, but recent measurements of the top quark and Higgs mass at the LHC and the absence of evidence for physics beyond the standard model could be a new source of trouble for the inflationary paradigm and big bang cosmology generally [57,58]. Namely, the current data suggest that the current symmetry-breaking vacuum is metastable, with a modest-sized energy barrier ((10¹² GeV)⁴) protecting us from decay to a true vacuum with large negative vacuum density [59]. This conclusion is speculative, since it assumes no new physics for energies less than the Planck scale, which is unproven. Nevertheless, this is the simplest interpretation of the current data and its consequences are dramatic; hence, we consider the implications here. The predicted lifetime of the metastable vacuum is large compared to the time since the big bang, so there is no sharp conflict with observations. The new problem is explaining how the universe managed to become trapped in this false vacuum, whose barriers are tiny (by a factor of 10²⁸!) compared to the Planck density, when it is obviously much more probable for the field to lie outside the barriers than within them. However, if the Higgs field lies outside the barrier, its negative potential energy density will tend to cancel the positive energy density of the inflaton and block inflation from occurring, unless one assumes large-field inflation and a certain kind of coupling between the inflaton and the Higgs [60,61]. Even in the unlikely case that the Higgs started off trapped in its false vacuum and inflation began, the inflaton would induce de Sitter-like fluctuations in all degrees of freedom that are light compared to the Hubble scale during inflation. These tend to kick the Higgs field out of the false vacuum, unless the Hubble constant during inflation is smaller than the barrier height [62]. Curiously, a way to evade the kick-out is if all inflation (not just the last 60 e-folds) occurs at low energies where the de Sitter fluctuations are smaller than the barrier height. This would be possible if the only possible inflaton potentials are plateau-like with sufficiently low plateaus: the very same potentials that have the initial conditions and multiverse problems.

Discussion

In testing the validity of any scientific paradigm, the key criterion is whether measurements agree with what is expected given the paradigm.
In the case of inflationary cosmology, this test can be divided into two questions: (A) are the observations what is expected, given the inflaton potential X? (here the analysis assumes classical slow-roll, no multiverse, and ideal initial conditions); and (B) is the inflaton potential X that fits the data what is expected according to the internal logic of the paradigm? In order to pass, both questions must be answered in the affirmative. The Planck2013 analysis, like many previous analyses of cosmic parameters, focused on Question A. Based on tighter constraints on flatness, the power spectrum and spectral index, and non-Gaussianity, the conclusion from Planck2013 was that single-field plateau-like models are the simplest that pass, and they pass with high marks. However, our focus in this Letter has been Question B: are plateau-like models expected, given the inflationary paradigm? Based on the very same tightened constraints from Planck2013, we have identified three independent issues for plateau-like models: a dangerous new type of initial conditions problem, a twist on the multiverse problem, and, for the first time, an inflationary unlikeliness problem. The fact that a single data set like Planck2013 can expose three new problems is a tribute to the quality of the experiment and serious trouble for the paradigm. Future data can amplify, confirm, or defuse the three problems. Detecting tensor modes and constraining the non-Gaussianity to be closer to zero would ease the problems, provided the r-n_s values are consistent with a simple power-law potential. Given the Planck2013 value for the tilt (n_s = 0.9603 ± 0.0073), the only simple chaotic model that can be recovered is m²φ², predicting 0.13 ≲ r ≲ 0.16 (depending on the value of N). Alternatively, if the observed r lies at 0.01 or below, power-law models are ruled out and all three current problems remain. Yet a third possibility is finding no tensor modes or detecting non-negligible non-Gaussianity (e.g., f_NL ∼ 8 is well within Planck2013 limits but inconsistent with plateau models); measurements like these would create yet more problems for the inflationary paradigm and encourage consideration of alternatives.
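The quoted m²φ² window can be checked with a back-of-envelope calculation; the sketch below assumes the textbook slow-roll formulas for V ∝ φ^p, namely n_s = 1 − (p + 2)/(2N) and r = 4p/N, rather than the full Planck likelihood analysis:

```python
# Slow-roll predictions of the quadratic potential for 50-60 e-folds.
p = 2
for N in (50, 60):
    ns = 1 - (p + 2) / (2 * N)
    r = 4 * p / N
    print(f"N = {N}: n_s = {ns:.3f}, r = {r:.3f}")
# N = 50 gives n_s = 0.960, r = 0.160; N = 60 gives n_s = 0.967, r = 0.133,
# matching the 0.13-0.16 range quoted in the text.
```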
Charged residues between the selectivity filter and S6 segments contribute to the permeation phenotype of the sodium channel. The deep regions of the Na(+) channel pore around the selectivity filter have been studied extensively; however, little is known about the adjacent linkers between the P loops and S6. The presence of conserved charged residues, including five in a row in domain III (D-III), hints that these linkers may play a role in permeation. To characterize the structural topology and function of these linkers, we neutralized the charged residues (from position 411 in D-I and its homologues in D-II, -III, and -IV to the putative start sites of S6) individually by cysteine substitution. Several cysteine mutants displayed enhanced sensitivities to Cd(2+) block relative to wild-type and/or were modifiable by external sulfhydryl-specific methanethiosulfonate reagents when expressed in TSA-201 cells, indicating that these amino acids reside in the permeation pathway. While neutralization of positive charges did not alter single-channel conductance, negative charge neutralizations generally reduced conductance, suggesting that such charges facilitate ion permeation. The electrical distances for Cd(2+) binding to these residues reveal a secondary "dip" into the membrane field of the linkers in domains II and IV. Our findings demonstrate significant functional roles and surprising structural features of these previously unexplored external charged residues.

INTRODUCTION

Voltage-gated Na+ channels are responsible for initiating action potentials in excitable tissues including heart, muscle, and nerve by selectively transporting Na+ ions across the surface membrane (Hille, 1992). Mutagenesis experiments have demonstrated that the P loops, contributed by each of the four domains (Marban et al., 1998), contain the major determinants of ion permeation and channel block (Noda et al., 1989; Pusch et al., 1991; Terlau et al., 1991; Backx et al., 1992; Heinemann et al., 1992a; Satin et al., 1992; Perez-Garcia et al., 1996; Li et al., 1997). P-loop residues critical for these processes have been identified (Heinemann et al., 1992a; Chiamvimonvat et al., 1996a,b; Tsushima et al., 1997a) and have been proposed to lie deep within the external vestibule of the channel protein (Fig. 1). Disulfide trapping studies and single-channel recordings have further revealed that this region of the channel is highly asymmetrical (Chiamvimonvat et al., 1996b; Benitah et al., 1997; Tsushima et al., 1997b). While the functional and structural aspects of this deep region of the Na+ channel pore have been extensively studied, little is known about the flanking regions of the P loops. Inspection of the primary sequence of Na+ channels reveals that the linkers between the P segments and S6 on the carboxyl-terminal side of the selectivity filter contain a number of charged residues; in particular, the domain III linker contains five charges in a row (Fig. 1). Most of these charged residues are highly conserved from jellyfish to human. This striking pattern leads to several immediate questions. Do these charged residues play similar structural and functional roles as those located deeper in the pore, or are they simply peripheral residues of little functional importance? Do these residues participate in the process of ion permeation; e.g., by increasing the local effective Na+ concentration at the external mouth of the pore and/or by affecting ionic selectivity?
To address these questions, we neutralized each of the charged residues in these P loop-S6 (P-S6) linkers (from position 411 in D-I and its analogues in D-II, -III, and -IV to the putative start sites of S6; see also Fig. 1) individually by cysteine substitution. Side-chain accessibility of cysteine-substituted mutants was probed by sensitivity to Cd2+ blockade and by reactivity to sulfhydryl-specific methanethiosulfonate (MTS) reagents using whole-cell patch-clamp recordings. Single-channel recordings were performed to probe changes in channel conductance with charge neutralization and to assess the electrical distance for Cd2+ binding to the cysteine-substituted charged residues. Our results demonstrate unsuspected functional and structural features of this previously unexplored region of the Na+ channel.

MATERIALS AND METHODS

Site-directed Mutagenesis and Heterologous Expression

Mutagenesis was performed on the rat skeletal muscle sodium channel α subunit (μ1-2; Trimmer et al., 1989) gene cloned into the pGFP-IRES vector (Johns et al., 1997) using PCR with overlapping mutagenic primers as previously described. All mutations were confirmed by dideoxynucleotide sequencing. Wild-type (WT) and mutated channels were expressed in TSA-201 cells (a transformed HEK 293 cell line stably expressing the SV40 T-antigen) by addition to the cells of 1 μg/60-mm dish of DNA encoding the α subunit using the Lipofectamine Plus transfection kit (GIBCO-BRL). Transfected cells were incubated at 37°C in a humidified atmosphere of 95% O2-5% CO2 for 48-72 h for channel protein expression before electrical recordings. Given that the α subunit suffices for permeation, β1 subunits were not routinely coexpressed. Nevertheless, we verified that E1551C, a representative domain IV mutant, was unaltered in its selectivity, Cd2+ blocking affinity, or MTS susceptibility when coexpressed with the β1 subunit.

Electrophysiology

Electrophysiological recordings were performed using the whole-cell or cell-attached single-channel variants of the patch-clamp technique (Hamill et al., 1981) with an integrating amplifier (Axopatch 200A; Axon Instruments). Transfected cells were identified under epifluorescent microscopy using the green fluorescent protein as a reporter. For whole-cell recordings, pipettes were fire-polished with a final tip resistance of 1-3 MΩ when filled with the internal recording solution (see below). All recordings were performed at room temperature. Single-channel currents were measured in the presence of 20 μM fenvalerate (Dupont) to promote long channel openings (Holloway et al., 1989; Backx et al., 1992). Fenvalerate is particularly useful for permeation studies as it does not alter unitary conductance or selectivity (Chiamvimonvat et al., 1996b). Data were sampled at 10 kHz and low-pass filtered (four-pole Bessel, −3 dB at 2 kHz). Electrodes for unitary recordings were fire-polished to a final resistance of 5-10 MΩ and coated with Sylgard.

Data Analysis and Statistics

Half-blocking concentrations (IC50) for Cd2+ were determined by least-squares fits of the dose-response data to the following binding isotherm using the Levenberg-Marquardt algorithm: I/I0 = 1/{1 + ([Cd2+]/IC50)^n}, where I and I0 are the peak currents measured from a step depolarization to −10 mV from a holding potential of −100 mV in the presence and absence of Cd2+, respectively, and n is the Hill coefficient (assumed to equal 1 for a single binding site for Cd2+).
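The IC50 fit just described can be illustrated with a short script; this is a minimal sketch rather than the authors' analysis code, and the dose-response points below are hypothetical:

```python
# Least-squares fit of I/I0 = 1/(1 + ([Cd2+]/IC50)^n) with the
# Levenberg-Marquardt algorithm (scipy's default for unconstrained fits).
import numpy as np
from scipy.optimize import curve_fit

cd_um = np.array([10., 30., 100., 300., 1000., 3000.])    # [Cd2+] (uM)
i_ratio = np.array([0.95, 0.88, 0.65, 0.42, 0.18, 0.07])  # fractional current

def isotherm(c, ic50, n):
    return 1.0 / (1.0 + (c / ic50) ** n)

(ic50, n), _ = curve_fit(isotherm, cd_um, i_ratio, p0=(200.0, 1.0))
print(f"IC50 = {ic50:.0f} uM, Hill coefficient n = {n:.2f}")
```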
Current-voltage relationships were obtained by holding cells at −100 mV and stepping from −60 to +50 mV in 10-mV increments. Reversal potentials were calculated by fitting the current-voltage relationship to a Boltzmann distribution function, I = Gmax(Vt − Vrev)/{1 + exp[(V1/2 − Vt)/k]}, where I is the peak INa at a given test potential Vt, Vrev is the reversal potential, Gmax is the maximal slope conductance, V1/2 is the half point of the relationship, and k is the slope factor.

Figure 1. (A) The regions between the fifth and sixth transmembrane segments (S5 and S6, respectively) in each of the four domains form the P loops. These P loops (thickened lines) dip toward the central axis of the channel to form part of the pore. The deeper portions of the P segments have been previously studied (approximately enclosed by dashed line). The regions investigated in this report correspond to the outer region of the pore (enclosed by solid line). (B) Primary sequence of the P segments of the rat skeletal muscle (rSkM1) Na+ channels. Residues enclosed in dashed- and solid-line boxes correspond to the same areas bracketed in A. Charged amino acid residues neutralized by cysteine substitutions in this study are shown as bold letters. Residues previously shown to be critical for ionic selectivity are in bold italics.

For single-channel analysis, amplitude histograms were fitted to the sum of Gaussians using a nonlinear least-squares method. Slope (single-channel) conductance was obtained by linear fit of the current-voltage relationship. The fraction of the transmembrane electric field that Cd2+ traversed (i.e., the electrical distance, δ) to reach its binding site was estimated by making a logarithmic plot of the ratio of unblocked and blocked unitary current amplitudes as a function of membrane potential, followed by linear fits (Woodhull, 1973; Backx et al., 1992; Chiamvimonvat et al., 1996b). Steady-state activation (m∞) curves were derived from the relation m∞ = g/gmax, where the conductance g was obtained from the current-voltage relationship by scaling the peak current (I) by the net driving force using the equation g = I/(Vt − Erev), where Vt is the test potential. For steady-state inactivation (h∞), we recorded the current in response to a test depolarization to −20 mV (Itest), which immediately followed a 500-ms prepulse to a range of voltages. h∞ was estimated as a function of the prepulse voltage by the ratio Itest/I, where I is the current measured in the absence of a prepulse. Steady-state gating parameters were estimated by fitting data to Boltzmann functions using the Marquardt-Levenberg algorithm in a nonlinear least-squares procedure: m∞ or h∞ = 1/{1 + exp[(Vt − V1/2)/k]}, where Vt is the test potential, V1/2 is the half point of the relationship, and k (= RT/zF) is the slope factor. Data reported are mean ± SEM. Statistical significance was determined using a paired Student's t test at the 5% level.

RESULTS

Changes in Cd2+ Sensitivity of Na+ Channels by Cysteine Substitutions of P-S6 Linker Charged Residues

We assessed the side-chain accessibility of P-S6 residues to the aqueous phase by examining the Cd2+ sensitivity of single cysteine mutants. 12 of 16 single cysteine mutants expressed functional channels. D1248C, R1250C, K1252C, and E1259C did not express in 5-10 rounds of transfection, with and without exposure to 10 mM dithiothreitol to exclude a spontaneous internal disulfide bridge that might render the channels nonconducting (Benitah et al., 1996; Tsushima et al., 1997b).
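As context for the gating comparisons that follow, here is a minimal sketch (hypothetical currents and reversal potential; not the authors' code) of how steady-state activation parameters like those in Table I are extracted using the Methods formulas above; note that with this form of the Boltzmann equation the fitted k is negative for activation:

```python
# Chord conductance g = I/(Vt - Erev), normalization to g_max, and a
# Boltzmann fit of the form m_inf = 1/{1 + exp[(Vt - V1/2)/k]}.
import numpy as np
from scipy.optimize import curve_fit

vt = np.arange(-60.0, 51.0, 10.0)               # test potentials (mV)
i_peak = np.array([-0.03, -0.16, -0.67, -1.90, -2.86, -2.90,
                   -2.58, -2.20, -1.80, -1.40, -1.00, -0.60])  # peak INa (nA)
erev = 65.0                                      # reversal potential (mV)

g = i_peak / (vt - erev)                         # chord conductance
m_inf = g / g.max()                              # normalize to maximal g

def boltzmann(v, v_half, k):
    return 1.0 / (1.0 + np.exp((v - v_half) / k))

(v_half, k), _ = curve_fit(boltzmann, vt, m_inf, p0=(-30.0, -6.0))
print(f"V1/2 = {v_half:.1f} mV, slope factor k = {k:.1f} mV")
```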
We first characterized gating in each of these mutants and found hyperpolarizing shifts of 5-7 mV for steady-state inactivation (h∞) in four instances (Table I). Given the small magnitude of these changes, we next turned our attention to permeation. Fig. 2 summarizes the half-blocking concentration for Cd2+ of each of the functional cysteine mutants. All mutated channels but three (K415C, E1251C, E1254C) showed enhanced Cd2+ sensitivity (P < 0.05) when compared with WT rSkM1 channels. Because Cd2+ is presumably binding to the introduced sulfhydryls, thereby blocking Na+ flux through the pore physically and/or electrostatically, the observation of enhanced Cd2+ block indicates that the side chains of these residues line the aqueous lumen of the pore.

Figure 2. Summary of Cd2+ sensitivity of outer pore cysteine mutants. (A) Representative raw current traces of WT, R411C, E765C, E1253C, and D1545C elicited by depolarization to −10 mV from a holding potential of −100 mV in the absence and presence of Cd2+, as indicated. The degree of sodium current blockade by Cd2+ was significantly (P < 0.05) greater for the cysteine mutants than for WT channels. Control peak current amplitudes shown were 3.1, 1.1, 1.5, 0.6, and 1.5 nA for WT, R411C, E765C, E1253C, and D1545C, respectively, and have been scaled for comparison. (B) Plot of IC50 for Cd2+ block of cysteine-substituted mutants. Broken lines indicate the level of wild-type sensitivity. *Mutant channels statistically different (P < 0.05) from WT. n.e., no expression.

Modification by MTS Reagents

One explanation for the unaltered Cd2+ sensitivity of K415C, E1251C, and E1254C is that the side chains of these residues are buried within the channel protein and are not exposed to the aqueous phase. However, it is also possible that Cd2+ indeed binds to the substituted cysteines of these "Cd2+-insensitive" mutants, but that such binding does not reduce Na+ flux due to the relatively small size of Cd2+ as a blocker (ionic radius = 1.1 Å). To distinguish between these possibilities, we employed the hydrophilic sulfhydryl-reactive methanethiosulfonate derivatives MTSEA (positively charged) and MTSES (negatively charged) as molecular probes. These agents introduce bulky adducts via a mixed disulfide bond (MTSEA, 66 Å³; MTSES, 90 Å³) such that successful modification of an accessible substituted cysteinyl near the pore is more likely to influence permeation (Akabas et al., 1992, 1994a; Kürz et al., 1995; Pascual et al., 1995; Perez-Garcia et al., 1996; Li et al., 1997). MTSEA and MTSES modifications of the side chains of the cysteine mutants introduce positive and negative charges, respectively, thereby permitting the study of the effects of restoration and reversal of the native charges (Chiamvimonvat et al., 1996a; Li et al., 1997). Fig. 3 summarizes the effects of MTS reagents on peak sodium currents (INa) of WT and cysteine mutant channels. In these experiments, saturating concentrations of MTS reagents (2.5 mM MTSEA or 10 mM MTSES) were applied to the channels by external perfusion for 10-15 min followed by washout. Consistent with previous reports, WT channels were modified by neither MTSEA nor MTSES, indicating that an accessible cysteine is required for these agents to be effective (Chiamvimonvat et al., 1996a,b; Perez-Garcia et al., 1996). Sodium currents through all mutant channels, except E1251C, E1254C, and E1551C, were significantly reduced after treatment with MTSEA.
In contrast, application of the negatively charged MTSES increased the current carried by E765C and D1547C channels. INa of D762C also increased after MTSES modification, but the increase did not reach statistical significance (0.05 < P < 0.1). Fig. 4 depicts the time course of sulfhydryl modification of cysteine-substituted channels upon addition of MTSEA or MTSES. Representative current traces before (○) and after (●) modification are also shown (Fig. 4, left). Application of MTSEA (Fig. 4 A) or MTSES (B) decreased or enhanced INa of E765C channels, respectively. MTS modifications were irreversible even after extensive washout of the reagents (5-10 min with ≈50 ml control bath solution). To further verify that sulfhydryl modification was complete, we also examined the sensitivity of INa to Cd2+ blockade after treatment with MTSES, since Cd2+ is known to bind with much higher affinity to free sulfhydryls than to oxidized sulfhydryls (Torchinsky, 1981). Indeed, Cd2+-sensitive E765C channels became insensitive to Cd2+ after modification with MTSES (Fig. 4 B, inset). Cd2+ sensitivity of MTSEA-modified E765C channels was not assessed due to the small size of the residual currents. It was, however, tested in other mutant channels with measurable currents after modification (i.e., D1545C, D1547C, and E1551C). As anticipated, these channels became Cd2+ insensitive after modification by MTSEA (data not shown). Similar analysis of the other cysteine mutants was used to verify successful sulfhydryl modification by the MTS derivatives (data not shown). Interestingly, application of MTSEA to the Cd2+-insensitive K415C channels led to complete elimination of sodium current (Fig. 4 C), suggesting that this residue is indeed accessible from the external medium. In contrast, the addition of MTSES to this construct did not affect INa (Fig. 4 D). Unlike K415C, the Cd2+-insensitive E1251C and E1254C channels were not modified by either MTSEA or MTSES. It is possible that MTS agents may have reacted with the substituted cysteines of these channels without producing any functional consequences. However, we are unable to distinguish these changes from side-chain effects per se. Cd2+ sensitivity of these mutants after MTS modification was not assessed, as they were by themselves insensitive to Cd2+ block (Fig. 2).

Single-Channel Conductance

One putative role of the superficial negative charges studied in this report is that these residues may constitute another outer cluster of vestibular charge that functions to increase the local effective Na+ concentration at the external pore mouth, thereby supplementing the rings of charge closer to the selectivity filter (Chiamvimonvat et al., 1996a). We performed single-channel recordings to investigate whether channel conductance is affected by neutralization of these charged residues.

Figure 5. Single-channel recordings of representative cysteine mutants. (A) Representative single-channel currents elicited by step depolarizations between −100 and −40 mV with 20 μM fenvalerate in the bath. Addition of 500 μM (R411C and E1253C) and 1 mM (E765C and E1551C) Cd2+ resulted in rapid unresolved block of single Na+ channels, apparent as a reduction in unitary currents. (B) Single-channel current-voltage relationships for each of the cysteine mutants shown in A in the absence (●) and presence (○) of Cd2+. The slope conductance was obtained by linear regression through the data. While the channel conductances of E765C, E1253C, and D1547C channels were significantly (P < 0.05) reduced compared with WT, that of R411C was unaffected (P > 0.05). Single-channel conductances of other cysteine mutants are summarized in Table II. (C) Logarithmic plot of the ratio of the unblocked (IC) and blocked (IB) single-channel current amplitudes against voltage allows estimation of the slope (zδF/RT) and the fractional electrical distance (δ) for Cd2+ binding to these residues.
Fenvalerate was added in the bath to promote long-lasting channel openings (see MATERIALS AND METHODS). At the whole-cell level, fenvalerate did not alter ionic permeability and reversal potential when added to WT channels (data not shown). It is also known not to affect unitary conductance compared with unmodified channels (Holloway et al., 1989; Backx et al., 1992; our unpublished observations). Therefore, it is reasonable to assume that the permeation properties of fenvalerate-modified channels closely resemble those of the native channels and that the channel pore conformation is not significantly altered. Fig. 5 A shows typical unitary currents of representative mutant channels from each of the four domains (R411C, E765C, E1253C, and E1551C). Fig. 5 B shows the corresponding current-voltage relationships of these single channels and their slope conductances (see MATERIALS AND METHODS). Unitary conductances of all charge-neutralized mutants studied are summarized in Table II. Single-channel recordings were not attempted on E1560C channels because of their low level of expression (<5 pA/pF). In general, neutralization of negatively charged P-S6 residues, with the exception of E1254C and E1523C, resulted in decreased conductance, consistent with an electrostatic effect on conductance. Unlike K1237 of the DEKA locus, whose neutralization doubled single-channel conductance (Chiamvimonvat et al., 1996b), neutralization of the positively charged residues R411, K415, and R1558 did not enhance Na+ conductance through the channel.

Fractional Electrical Distances for Cd2+ Binding of Cd2+-sensitive Cysteine Mutants

The voltage dependence of unitary current blockade by Cd2+ has been used to estimate the relative depths of substituted cysteines in the pore (Backx et al., 1992; Chiamvimonvat et al., 1996b). We next determined the voltage dependence and hence the fractional electrical distances (δ) for Cd2+ block of Cd2+-sensitive mutants. The addition of Cd2+ to R411C, E765C, E1253C, and E1551C channels led to rapid unresolved blocking events appearing as reductions in unitary current (Fig. 5 A). Fig. 5 B shows the corresponding current-voltage relationships recorded in the presence of Cd2+ (○). A logarithmic plot of the ratio of the unblocked and blocked unitary current amplitudes of these channels as a function of the membrane potential allows estimation of their electrical distances (Fig. 5 C) (Woodhull, 1973; Backx et al., 1992; Chiamvimonvat et al., 1996b). Similar measurements were also made for Cd2+-sensitive D762C, D1545C, D1547C, and R1558C channels. The fractional electrical distances of all the residues examined (●) are summarized in Fig. 6. δ values of selected pore residues that are known to be located deeper in the pore, close to the selectivity filter region (domain I: E403; II: I757, E758; III: D1241; IV: D1532), are also shown for reference (□; Chiamvimonvat et al., 1996b).
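The electrical-distance estimate amounts to a simple linear regression; the sketch below uses hypothetical unitary currents and assumes the external-blocker sign convention (more block at hyperpolarized potentials), with z = 2 for Cd2+ and RT/F ≈ 25.7 mV:

```python
# Woodhull-style estimate: the slope of ln(IC/IB) versus voltage is
# -z*delta*F/RT for an external cationic blocker, so delta = -slope*(RT/F)/z.
import numpy as np

v = np.array([-100., -80., -60., -40.])     # test potentials (mV)
i_c = np.array([2.10, 1.75, 1.40, 1.05])    # unblocked unitary current (pA)
i_b = np.array([0.60, 0.62, 0.64, 0.66])    # unitary current in Cd2+ (pA)

y = np.log(i_c / i_b)                        # ln of unblocked/blocked ratio
slope, intercept = np.polyfit(v, y, 1)       # linear fit; slope in 1/mV

rt_over_f = 25.7                             # RT/F (mV) near room temperature
z = 2                                        # valence of the Cd2+ blocker
delta = -slope * rt_over_f / z
print(f"fractional electrical distance delta ~ {delta:.2f}")
```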
The charged residues investigated in this study (except E765C and R1558C) had a relatively shallow voltage dependence of Cd2+ block, consistent with their more superficial locations as predicted by the alignment of the primary amino acid sequence. Interestingly, the more carboxy-terminal domain II residue E765 was located deeper in the electric field than D762. Both residues were in turn deeper than E758. A similar trend was also notable in domain IV: R1558 was deeper than E1551, which was in turn deeper than D1547.

Ionic Selectivity

Certain P-loop residues have been identified as critical determinants of ionic selectivity (Heinemann et al., 1992a; Chiamvimonvat et al., 1996b; Tsushima et al., 1997a). In particular, neutralization of the domain III lysine residue in the DEKA "filter" dramatically renders Na+ channels nonselective among group IA monovalent (Li+, Na+, K+, Cs+, and NH4+) and group IIA divalent (Ca2+, Mg2+) cations (Heinemann et al., 1992a; Chiamvimonvat et al., 1996b; Tsushima et al., 1997a). To determine whether the P-S6 outer charges are involved in conferring ionic selectivity to the Na+ channels, we measured the reversal potential (Erev) of WT and mutated channels under mixed ionic conditions. Neutralization of charged residues in these P-S6 linkers did not significantly alter Erev compared with WT channels (Table III). Consistent with these results, perfusion with monovalent cations such as Li+, K+, Cs+, and NH4+ did not produce currents significantly different from WT (data not shown). These observations indicate that these external charges residing outside the conventional pore region are not critical determinants of ionic selectivity, despite their significant role in channel conductance.

DISCUSSION

We have previously combined electrophysiological and mutagenesis techniques to explore functional and topological features of the Na+ channel pore on both the amino- and carboxyl-terminal sides of the immediate putative DEKA "selectivity" ring. In brief, we demonstrated that the Na+ channel pore structure is highly asymmetrical (Chiamvimonvat et al., 1996b; Perez-Garcia et al., 1996) as well as flexible (Benitah et al., 1997; see also Tsushima et al., 1997b). Also, in contrast to K+ channels, the Na+ channel P segments descend and ascend in the pore, but do not span the selectivity region to the cytoplasmic side. In the present study, we exploited the same strategy to extend our study of the Na+ channel pore to charged residues located between the carboxyl-terminal side of the outer ring of charge (E403, E758, D1241, and D1532) and the S6 membrane-spanning segments (i.e., P-S6 linkers) (Fig. 1). The goal was to determine the structural and functional roles of these previously unexplored regions of the P segments.

Accessibility of Cysteine Mutants to Cd2+ and MTS Reagents

All of the functional P-S6 linker cysteine mutants studied but three exhibited heightened sensitivity to current blockade by the group IIB metal Cd2+ relative to WT. However, these Cd2+-sensitive mutants (two- to fivefold enhancements) were generally not as sensitive as those located putatively deeper in the pore, which, when mutated to cysteine, often display 10-100-fold increased sensitivity (Backx et al., 1992; Chiamvimonvat et al., 1996b; Perez-Garcia et al., 1996; Li et al., 1997; Tsushima et al., 1997b).
These observations could result from their more superficial locations: Cd2+ binding may result in current blockade either by complete physical occlusion of the pore or by electrostatic repulsion preventing entry of Na+ ions into the pore (or both), depending on the local geometry of the area surrounding the inserted cysteine. If Cd2+ binding occurs in more superficial or open locations, the bound Cd2+ is more likely to be displaced by competing ions, thereby giving rise to the intermediate sensitivities observed in these mutants. Reductions in INa by MTSEA modification of many mutants and the increase in INa by MTSES observed in D762C, E765C, and D1547C channels could result from simple electrostatic effects on the permeation pathway as a result of charge restoration or reversal, respectively; in contrast, the complete elimination of current by MTSEA modification (a charge restoration) of R411C, K415C, and R1558C and the lack of effects of MTSES (a charge reversal) on INa of these constructs cannot be explained by simple electrostatic theory. In addition, the differential responses (or lack thereof) of negatively charged neutralized mutants other than D762C, E765C, and D1547C to MTSES, despite the susceptibility of the same mutants to the smaller MTSEA, also require more complex interpretations, as discussed below. Covalent modification of channel proteins by MTS compounds, with alteration of the current magnitude, is dependent on a number of factors, including the size and charge of the agent (Akabas et al., 1992), the linker length, the locations of the inserted cysteine and the final docking site for the moiety linked to MTS, and the micro-environment (e.g., whether it is hydrophobic or charged) (Li et al., 1999a). For instance, addition of a bulky adduct to a cysteinyl residue located in a constricted region of the pore is likely to result in a reduction of peak current by steric hindrance regardless of the charge. This was indeed the case for MTS modifications of many of the deep P-loop residues (e.g., I: Y401, W402, E403; II: I757; III: W1239C, M1240C; IV: W1531C) (Chiamvimonvat et al., 1996a,b; Perez-Garcia et al., 1996). However, this is obviously not the case for the P-S6 linker residues. Successful modification by MTSES (90 Å³ bulk), as confirmed by loss of Cd2+ sensitivity (data not shown), did not produce current reduction. One possible explanation is that the attached ethylsulfonate (MTSES) moiety is anchored near the pore via the ethyl alkyl linker, but is prevented from entering the permeation pathway by anionic exclusion. On the other hand, the positively charged ethylammonium (MTSEA) moiety, when attached at the introduced cysteine (including R411C, K415C, and R1558C), could be attracted to the pore, thereby blocking it despite being smaller in size than MTSES. Cysteine-scanning mutagenesis has the advantage of allowing assessment of side-chain accessibility as well as post-translational protein modifications at specific sites. However, this technique also makes certain basic assumptions that critically influence data interpretation. First, it is assumed that the side chain of the substituted cysteine lies in an orientation similar to that of the native wild-type residue. Aqueous-limited sulfhydryl-specific modifying agents should therefore react more readily with ionized cysteine sulfhydryls exposed to the aqueous phase (i.e., the lumen of the channel) than with nonionized sulfhydryls buried within the lipid membrane or protein.
Any changes in current or channel function upon such reaction are then used as an indication of whether the residue in question is accessible. Nevertheless, it is possible that application of sulfhydryl modifiers could result in trapping of "abnormal" or atypical channel states. Also, successful modification may not necessarily lead to changes in function, as mentioned earlier. Cysteine-scanning mutagenesis also assumes that amino acid replacements do not result in global or nonspecific alterations of the structure and function of the protein of interest, and that any elevation in Cd2+ sensitivity of the substituted channels arises entirely from the inserted cysteine. However, mutations may expose endogenous cysteine(s) that is (are) inaccessible in the native channel, which in turn may underlie changes in sensitivity to Cd2+ blockade and sulfhydryl modification observed in some mutant channels (Sunami et al., 1999).

Functional Roles in Ion Permeation

The Na+ channel pore is known to contain two rings of charge: an inner NH2-terminal or DEKA ring (I: D400, II: E755, III: K1237, and IV: A1529 in rSkM1) and an outer COOH-terminal ring (I: E403, II: E758, III: D1241, and IV: D1532 in rSkM1). These charge rings are separated by three to four neutral residues in the ascending portion of the P loops, or the so-called SS2 region (Noda et al., 1989; Mikala et al., 1993). Residues from both rings were found to profoundly affect ionic selectivity and channel conductance when neutralized (Heinemann et al., 1992a; Chiamvimonvat et al., 1996a,b; Perez-Garcia et al., 1996; Tsushima et al., 1997a). In the present study, we determined the effect of charged residues located farther away from these rings (≈10-20 residues to the COOH-terminal end of the outer ring) in the P-S6 linkers on conductance and selectivity of the channel. Although the P-S6 charged residues did not influence ionic selectivity (Table III), they nevertheless affect channel conductance. Unitary recordings revealed that the neutralization of six of eight negative charges led to reduction in conductance. Although these changes in conductance were not as dramatic as neutralization of the domain I aspartate (i.e., D400) from the DEKA ring, which led to an ≈90% decrease in conductance (Chiamvimonvat et al., 1996b), an ≈40% reduction was routinely observed in each of these mutants (in the most extreme case, E765C, a >60% decrease). Assuming no major changes in the pore structure induced by the mutations, these negatively charged residues may work in concert to concentrate permeant ions (i.e., Na+) at the external mouth, thereby supplementing the two inner rings of charge to optimize ion conduction. To confirm that these residues indeed alter surface charge in the external pore by electrostatically interacting with Na+, examination of channel conductances over a range of permeant ion concentrations would be required. If a pure electrostatic mechanism were operative, the maximal conductances should converge at high external permeant ion concentration (Chiamvimonvat et al., 1996a).

Structural Inferences from Single-Channel Recordings

The electrical distances of domain II pore residues reveal a striking pattern: they ascend (I757 and E758), and then descend (D762 and E765) back into the pore.
Assuming that no significant structural or conformational changes of the pore are induced by the mutations or by Cd2+ binding to the substituted cysteine, one possibility is that this region of the pore (i.e., the DII linker) may reverse direction and dip back into the membrane. Fig. 7 shows a schematic representation of such possible orientations of the domain II P-S6 linker. This could occur by forming a partial ring that extends horizontally at a tilted angle. One should, however, recognize that electrical distances do not directly translate into physical distances, particularly in regions where the transmembrane electric field gradient is not linear. Nevertheless, our data raise new possibilities about the local topology of the domain II P segment, since many of the previous pore mutations studied in this domain either did not express or were inaccessible, making its topology relatively uncertain. Similar to DII, residues in DIV also show a "reverse" pattern: residues in this segment ascend (D1532, D1545, and D1547), and then descend (E1551 and R1558). Further mutagenesis experiments are required to determine whether the same pattern is also observed in domains I and III. It is also noteworthy that a total of only nine residues (from I757 to E765) in DII span an electrical distance of ≈0.15, whereas 27 residues (from D1532 to R1558) in DIV span a relatively short electrical distance of ≈0.08, suggesting the former are more extended. Such domain-specific topological arrangements provide further evidence for the asymmetrical structure of the Na+ channel pore (Chiamvimonvat et al., 1996b; Benitah et al., 1997).

Contribution to Gating

The Na+ channel outer pore may undergo conformational changes in some forms of slow inactivation (Balser et al., 1996; Benitah et al., 1999), analogous to C-type inactivation observed in K+ channels, which clearly involves dynamic rearrangements of outer pore residues (Liu et al., 1996). In fact, certain P-loop residues have been reported to affect Na+ channel slow inactivation (Tomaselli et al., 1995). Our data (Table I) show that several of these P loop-S6 linker residues also affect gating properties when mutated. Further investigations of the roles of these residues in channel gating are currently underway.

Toxin Pharmacology

Guanidinium toxins such as tetrodotoxin (TTX), saxitoxin, and μ-conotoxin (μ-CTX), whose 3-D structures are known, are useful molecular tools to investigate the Na+ channel pore structure. These toxins are site 1 Na+ channel blockers that block Na+ ion flux by physically occluding the pore (Catterall, 1988). Our preliminary data showed that none of the superficial P-S6 charged residues drastically affected TTX block when neutralized (Li et al., 1999b), consistent with the toxin's binding site being located in the deeper region of the pore (Noda et al., 1989; Backx et al., 1992; Heinemann et al., 1992b; Satin et al., 1992; Perez-Garcia et al., 1996). In contrast, μ-CTX is much larger in size (Lancelin et al., 1991) and is therefore more likely to interact with some of the surface residues investigated in this study. Indeed, we have successfully identified two critical determinants for μ-CTX block in domain II, D762 and E765, that when neutralized and charge-reversed dramatically reduced the toxin sensitivity by 100- and 200-fold, respectively (Li, R.A., P. Velez, G.F. Tomaselli, E. Marbán, manuscript submitted for publication).
Figure 7. A schematic representation of the proposed "double-dip" orientation of the domain II P-S6 linker. This segment of the channel may reverse direction, dipping back into the pore. It could do so by forming a partial ring that extends horizontally at a tilted angle. Figures are not drawn to scale. Electrical distances may not directly correlate with actual physical distances (see text).

Further toxin-channel analyses will allow more detailed molecular modeling of this region of the channel.

Summary

In summary, the negatively charged residues located in the P-S6 linkers are critical for determining the wild-type channel conductance, possibly by enriching the local effective Na+ concentration at the external pore mouth. These residues also play significant roles in toxin binding and modulation of channel gating. We conclude that this unexplored outermost region, previously thought to be remote from the pore, contributes significantly to both structural and functional properties of the Na+ channel.
Cerebral Autoregulation during Postural Change in Patients with Cervical Spinal Cord Injury—A Carotid Duplex Ultrasonography Study

Patients with a spinal cord injury (SCI) frequently experience sudden falls in blood pressure during postural change. Few studies have investigated whether the measurement of blood flow velocity within vessels can reflect brain perfusion during postural change. By performing carotid duplex ultrasonography (CDU), we investigated changes in cerebral blood flow (CBF) during postural changes in patients with a cervical SCI, determined the correlation of CBF change with presyncopal symptoms, and investigated factors affecting cerebral autoregulation. We reviewed the medical records of 100 patients with a cervical SCI who underwent CDU. The differences between the systolic blood pressure, diastolic blood pressure, and CBF volume in the supine posture and after 5 min at 50° tilt were evaluated. Presyncopal symptoms occurred when the blood flow volume of the internal carotid artery decreased by ≥21% after tilt. In the group that had orthostatic hypotension and severe CBF decrease during tilt, the body mass index and physical and functional scores were lower than in other groups, and the proportion of patients with a severe SCI was high. The higher the SCI severity and the lower the functional score, the higher the possibility of cerebral autoregulation failure. CBF should be assessed by conducting CDU in patients with a high-level SCI.

Introduction

Significant cardiovascular and autonomic dysfunction is a common consequence of high-level spinal cord injuries (SCIs) [1]. Patients with an SCI frequently experience a sudden reduction in blood pressure (BP) upon postural change, which is characterized by dizziness, light-headedness, or even syncope. Orthostatic hypotension (OH) is very difficult to control and severely impairs the quality of life [2]. The symptoms associated with OH significantly interfere with the rehabilitation of patients with an SCI. BP reduction that is diagnostic of OH occurs in 74% of SCI patients, and symptoms of OH (such as lightheadedness or dizziness) occur in 59% of individuals with an SCI during physiotherapy [3]. Possible mechanisms underlying OH in SCI patients include changes in sympathetic activity, altered baroreflex function, lack of skeletal muscle pumping activity, cardiovascular deconditioning, and altered salt and water balance [4,5]. However, some people are relatively insensitive to low BP and can maintain consciousness despite low arterial pressure [4]. In these individuals, despite low perfusion pressure, there is a change in cerebral autoregulation (CA) that maintains cerebral blood flow (CBF). Therefore, CA, and not systemic BP, is the predominant factor responsible for the symptoms of OH [6]. Few studies have investigated whether the measurement of blood flow velocity within vessels can reflect brain perfusion during postural change. The blood flow velocity in the middle cerebral artery (MCA) is highly correlated with CBF under conditions of varying mean arterial pressure and cerebrovascular conductance [7,8]. We believe that transcranial Doppler (TCD), which has been mainly used so far, can evaluate cerebral blood flow velocity but cannot evaluate the cross-sectional area of blood vessels; thus, there is a limit to accurately measuring CBF with TCD.
Several studies have shown that the blood flow volume (BFV) of the internal carotid artery (ICA) can be reliably measured, and that it has a close correlation with CBF values [9][10][11]. A previous study used carotid duplex ultrasonography (CDU) as a tool for CBF measurement in patients with brain injury, and it revealed a significant correlation with BFV compared to when TCD was used [12]. Few studies have measured CBF in patients with an SCI and evaluated cerebral hemodynamics during postural changes. In this study, we assessed CBF by using CDU to measure changes in the internal carotid blood flow during postural changes. The purpose of this study was to investigate, with CDU, the changes in cerebral blood flow volume (CBFV) after postural change, using a tilt table in patients with tetraplegia due to a cervical spinal cord injury (CSCI). We aimed to confirm the correlation of CBFV with symptoms of OH and evaluate the necessity of applying this test in actual clinical practice. In addition, we aimed to determine whether the degree of the SCI was related to changes in CBF when standing, and to elucidate the factors that affect CA.

Study Design

This study was a retrospective review of the medical records of all patients who underwent CDU after a CSCI at the Department of Rehabilitation Medicine, Chungnam National University Hospital, from 1 January 2018 to 31 December 2019. Patients with complete or incomplete tetraplegia due to a CSCI who were 18 years of age or older were included in the study. We excluded cases with insufficient medical records or insufficient test results to explain hemodynamic changes, cases with a history of fatal cardiovascular disease, and cases of unstable SCIs. Finally, the medical records of 100 patients were reviewed.

Protocol of Carotid Duplex Ultrasonography

We conducted CDU during head-up tilt in patients with a CSCI. The ICA was studied on both sides, and intravascular flow volumes were calculated using a 9-MHz linear array transducer. BFV measurements were automatically calculated by the built-in software of the ultrasound device (Siemens ACUSON, Siemens Healthcare, Erlangen, Germany). For flow-volume measurements, the head was turned 25°-40° to the contralateral side, and a straight segment of the ICA, at least 2 cm above the carotid bulb, was selected (Figure 1). Measurements were performed on a horizontal segment in the sagittal plane. The arterial diameter was calculated as a vertical line through the lumen between the echogenic intimal layers. The value obtained from the test is the volume flow rate, which was automatically calculated from the cross-sectional area of the blood vessel and the time-averaged mean velocity: volume flow rate = area (cm²) × time-averaged mean velocity (cm/s). The peak systolic velocity, end-diastolic velocity, time-averaged mean velocity, vessel diameter, and vessel area were all measured during the test. All patients were allowed to rest on the examination table for 5 min before the test. CBFV, BP, and heart rate (HR) were measured in the supine position, immediately after the patient was tilted by 50°, and 5 min after the tilt. To increase the reliability of the test, CBF was measured three times in each position, and the mean value was used. In addition, the presence or absence of presyncopal symptoms, such as dizziness, light-headedness, nausea, and blurry vision, in each position, was recorded.
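A small worked example of the volume-flow formula above (all vessel values hypothetical):

```python
# flow = cross-sectional area x time-averaged mean velocity, in L/min.
import math

diameter_cm = 0.48    # ICA diameter
tamv_cm_s = 25.0      # time-averaged mean velocity

area_cm2 = math.pi * (diameter_cm / 2.0) ** 2   # vessel cross-section (cm^2)
flow_ml_s = area_cm2 * tamv_cm_s                # cm^3/s equals mL/s
flow_l_min = flow_ml_s * 60.0 / 1000.0          # convert to L/min
print(f"area = {area_cm2:.3f} cm^2, flow = {flow_l_min:.3f} L/min")
```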
Data Acquisition and Data Analysis

The data obtained during the ICA duplex ultrasonography were systolic blood pressure (SBP), diastolic blood pressure (DBP), HR, average BFV, and presence of OH symptoms in each position. In order to determine whether the patient experienced OH symptoms when BFV (L/min) was reduced to some extent during the tilt, after obtaining the BFV difference (BFV difference (%) = (BFV(supine) − BFV(tilt 50° or 5 min))/BFV(supine) × 100) between the value measured in the supine posture and the value measured after 5 min at 50° of tilt, the point with the highest sensitivity and specificity was obtained by applying the receiver operating characteristic (ROC) curve to the relationship between the BFV difference and the presence of symptoms. It was assumed that there would be symptoms of OH when CBF fell below that point. We calculated the difference between the SBP, DBP, and CBFV values measured in the supine position and the values measured after 5 min at 50° tilt. Based on the presence of OH or ΔCBF, the patients were divided into four groups. The presence of OH was determined according to criteria defined by The American Autonomic Society and the American Academy of Neurology (OH was defined as a decrease in SBP of at least 20 mm Hg or a decrease in DBP of at least 10 mm Hg within three minutes of standing up). If there was a difference in BFV beyond the set point of the presyncopal symptoms mentioned above, it was marked as ΔCBF+; otherwise, it was marked as ΔCBF−. As shown in Figure 2, patients in group 1 (G1) had OH and decreased CBF (OH+, ΔCBF+), those in group 2 (G2) had OH but preserved CBF (OH+, ΔCBF−), those in group 3 (G3) did not have OH but had decreased CBF (OH−, ΔCBF+), and those in group 4 (G4) had neither OH nor decreased CBF (OH−, ΔCBF−). Data on age, height, body mass index (BMI), duration of injury (DOI), American Spinal Cord Injury Association impairment scale (AIS) grade, sex, neurological level of injury (NLI), and underlying disease (such as diabetes mellitus (DM) and hypertension (HTN)) were collected in each group. To confirm the relationship between functional status and CA, the motor score (MS), sensory score (SS), and Korean spinal cord independence measure (K-SCIM) score were assessed. In addition, the presence or absence of presyncopal symptoms after 5 min at 50° tilt was noted in each group.
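The grouping rules just described reduce to a few comparisons; a minimal sketch (not the study's code; the 21% set point is the value the study later derives from its ROC analysis, passed here as a parameter):

```python
# Assign a patient to G1-G4 from supine/tilt blood pressures and ICA flow.
def classify(sbp_supine, sbp_tilt, dbp_supine, dbp_tilt,
             bfv_supine, bfv_tilt, setpoint_pct=21.0):
    # OH by the consensus criteria: SBP falls >= 20 mm Hg or DBP >= 10 mm Hg
    oh = (sbp_supine - sbp_tilt) >= 20 or (dbp_supine - dbp_tilt) >= 10
    # dCBF+ when the percent fall in BFV reaches the chosen set point
    drop_pct = (bfv_supine - bfv_tilt) / bfv_supine * 100.0
    dcbf = drop_pct >= setpoint_pct
    return {(True, True): "G1 (OH+, dCBF+)", (True, False): "G2 (OH+, dCBF-)",
            (False, True): "G3 (OH-, dCBF+)", (False, False): "G4 (OH-, dCBF-)"}[(oh, dcbf)]

# Example: SBP 120 -> 92, DBP 80 -> 72, BFV 0.30 -> 0.21 L/min
print(classify(120, 92, 80, 72, 0.30, 0.21))   # -> G1 (OH+, dCBF+)
```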
Statistical Analysis

The ROC curve was used to determine whether the patient experienced symptoms when the BFV (L/min) decreased to a certain extent at 5 min after 50° tilt. A one-way analysis of variance was used to compare differences in age, height, BMI, DOI, MS (upper extremity, UE), MS (lower extremity, LE), total MS, SS (light touch, LT), SS (pin prick, PP), total SS, and K-SCIM among the four groups. For post-hoc analysis, the Scheffé test was used to determine whether the differences among the groups were significant. The data are reported as mean ± standard error. Crossover analysis and the chi-squared test were used to evaluate differences in the distributions of AIS grades, sex, NLI, DM, HTN, and the presence of presyncopal symptoms among the four groups. All statistical analyses were performed with IBM Statistical Product and Service Solutions software, version 26.0 (IBM Corporation, Armonk, NY, USA). Statistical significance was set at p < 0.05.

The Relationship between the Decrease in CBFV (L/min) after Tilt and Presence of Presyncopal Symptoms

Of the 100 patients included in the study, 40 complained of presyncopal symptoms during tilt. The ROC curve of the difference in CBFV after tilt in the presence of presyncopal symptoms is shown in Figure 3, and the data are further presented in Table 1. According to the analysis of the ROC curve, presyncopal symptoms occurred when the CBFV decrease was more than 21% after tilt, with a sensitivity of 0.875 and a specificity of 0.967 (Table 1).

Figure 2. Recruitment of patients and division into groups for data analysis. OH, orthostatic hypotension; CBF, cerebral blood flow; SBP, systolic blood pressure; DBP, diastolic blood pressure.
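Choosing the set point "with the highest sensitivity and specificity" is equivalent to maximizing the Youden index along the ROC curve; a sketch with synthetic data (the study's per-patient values are not reproduced here):

```python
# Pick the BFV-drop cutoff that maximizes sensitivity + specificity - 1.
import numpy as np

bfv_drop = np.array([5, 8, 12, 15, 18, 22, 25, 30, 34, 40], dtype=float)
symptoms = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # presyncope (1 = yes)

best_cut, best_j = None, -1.0
for cut in np.unique(bfv_drop):
    pred = bfv_drop >= cut                   # classify "symptomatic" by cutoff
    sens = (pred & (symptoms == 1)).sum() / (symptoms == 1).sum()
    spec = (~pred & (symptoms == 0)).sum() / (symptoms == 0).sum()
    if sens + spec - 1 > best_j:
        best_cut, best_j = cut, sens + spec - 1
print(f"best cutoff = {best_cut:.0f}% BFV drop (Youden J = {best_j:.2f})")
```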
There was no statistically significant difference in age, height, and DOI among the groups, but there was a significant difference in BMI among the groups (Table 2). The difference in AIS grades among the groups was also significant (Table 3). In G1, the number of patients with AIS grades A, B, and C was 26 out of 32 (81.25%), which was much higher than that of the other groups. The proportions of the AIS grades in G1 were 92.31% for AIS grade A (12 out of 13 patients), 75.00% for AIS grade B (3 out of 4), 57.89% for AIS grade C (11 out of 19), and 9.38% for AIS grade D (6 out of 64). In G4, 54.69% of patients (35 out of 64) had an AIS grade D injury, which was higher than the proportion of AIS grades A, B, and C injuries. Sex, NLI, and presence of underlying disease (DM and HTN) were not significantly different among the groups (Table 3). When comparing the functional scores among the four groups, there were significant differences in the MS, SS, and K-SCIM scores (p < 0.05, Table 4).
The Incidence of Presyncopal Symptoms in Each Group

Presyncopal symptoms were found in approximately 96.88% of patients in G1, 5.56% in G2, 40% in G3, and 10% in G4 (Figure 4). In G2, even though BP dropped during tilt, CBF was maintained, and symptoms therefore rarely occurred.

Discussion

To our knowledge, this is the first study to use CDU to investigate changes in CBF during postural change in patients with a CSCI. The main findings are as follows: 1. Presyncopal symptoms occurred when the BFV of the ICA decreased by ≥21% after tilt in patients with a CSCI. 2. In G1, i.e., patients who had OH and a severe CBF decrease during tilt (because CA did not occur), the BMI was lower than that in G4 (patients who had neither OH nor a CBF decrease); physical and functional scores such as MS (UE), MS (LE), MS (total), SS (LT), SS (PP), SS (total), and K-SCIM scores were low; and the proportions of AIS grades A, B, and C were high. 3. In G2, i.e., the group of patients who had no decrease in CBF even though there was OH during tilt (because CA occurred), presyncopal symptoms rarely occurred (5.56%).

The mechanism that underlies OH after an SCI remains unclear. According to some studies, if there is a problem in the pathway from the motor center to the sympathetic nerves due to a high-level SCI, the activation of the sympathetic nervous system through baroreceptors and chemoreceptors fails. Consequently, when external factors that can affect BP, such as postural changes, occur, the mechanism of maintaining normal BP does not work. Therefore, in the early stages of an SCI, OH is a result of decreased sympathetic nerve response, venous vasodilation, abdominal muscle paralysis, and inadequate secretion of hormones [4,13]. However, over time, the distal sympathetic preganglionic neuronal conduction pathways of patients with an SCI adapt, and OH improves through the partial recovery of sympathetic nerve function, increased secretion of vasoconstrictors, and increased susceptibility of blood vessels to these vasoconstrictors [14]. In addition, studies have shown that compensation through the kidneys is an important part of the long-term adaptation mechanism for sympathetic nervous system abnormalities [15]. Another study reported increased sensitivity to arginine vasopressin in patients with an SCI [16]. On the other hand, animal experiments in one study showed that neither the sympathetic nervous system nor arginine vasopressin had any effect on BP fluctuations after an SCI [17].
In 1991, a study using TCD suggested that the difference in CA, which regulates CBF, rather than changes in BP, would be involved in symptomatic improvement in patients with sympathetic nervous system abnormalities due to an SCI [6]. Since then, several studies have investigated CBF control in patients with a high-level SCI using TCD [18,19]. CA involves myogenic, metabolic, and neurogenic control mechanisms, as well as systemic factors [20,21], and while static CA is well preserved in patients with a high-level SCI, dynamic CA is severely altered [22,23]. If CA fails, irreversible neuronal cell death can occur [24]. Therefore, the importance of CA for maintaining CBF during tilt has been recognized. A previous study has also assessed whether CBF increased when midodrine, a drug for treating OH, was administered [25].

In the aforementioned studies of CBF in patients with an SCI, TCD was used to assess the flow velocities, but the cross-sectional areas of the blood vessels could not be evaluated; therefore, there is a limit to measuring the exact CBFV with TCD. For this reason, we used CDU to measure blood flow, and we measured ICA BFV as an estimate of CBF. Previous studies have measured ICA BFV to estimate CBF [9-12], confirming that it correlates significantly with CBFV measured by TCD [12]. To our knowledge, this study is the first to measure changes in CBF in patients with an SCI by performing CDU, and we found that presyncopal symptoms occurred when ICA blood flow was reduced by ≥21% with the patient in a tilted position. This confirms once again that CA, which modulates CBF, determines the presence or absence of OH symptoms.

The incidence of OH after an SCI is related to the level of injury, degree of damage, time spent lying in bed, physical function, etc. The more severe the paralysis, the higher the incidence rate of OH [26].
Consistent with this, in our study, the physical and functional scores in the group in which CA failed were significantly lower than those in the group in which CBF was maintained, and the proportion of AIS grades A, B, and C (i.e., the proportion of patients with a relatively severe SCI) was high in the former group. Regarding the correlation with presyncopal symptoms, CBF was preserved in G2, and the symptom-free rate was, therefore, high (only 5.56% had OH symptoms). In this group, even when OH was confirmed, there was no need to discontinue rehabilitation treatment with tilt, and it was not necessary to administer drugs such as midodrine. In contrast, in G3, there was no decrease in BP during tilt, but CBF decreased by more than 21%, resulting in presyncopal symptoms in 40% of patients and warranting consideration of an anti-OH drug. Since CBF is often reduced in patients with OH symptoms due to CA failure, it is appropriate to measure CBF as well as BP in clinical practice and to administer drugs if CBF decreases by more than 21%. Additionally, in clinical practice, patients with a high-level SCI often discontinue rehabilitation treatment, such as tilt-table training, because of a reduction in BP. In these patients, if CA is ascertained by measuring CBF and there are no symptoms, treatment can be continued and progress can be observed even if there is a drop in BP. Overall, to confirm orthostatic tolerance, both BP and CBF should be assessed in order to properly implement the treatment for OH.

Limitations

CDU was used to assess CBF in this study. Previous studies have measured regional CBF according to changes in BP using TCD, and some studies have shown that the ICA/MCA region is more sensitive to orthostatic challenges than the vertebral artery/posterior cerebral artery region [25,27,28]. Further studies are needed to determine the extent to which ICA blood flow can reflect total CBF. In addition, several factors, such as blood viscosity/hematocrit and intracranial pressure, may affect blood flow velocity, but these factors were not controlled in this study. Future studies will need to take these factors into consideration. When performing CDU, it is important to measure the cross-sectional area of the artery at a certain location and time. In this study, CDU was performed by more than one person; thus, there might have been inter-observer differences in the data obtained, depending on the proficiency and skills of the observers. Moreover, this study is a retrospective, non-blinded study; the examiner already knew about the patient's condition when performing the examination, and this may have introduced some bias. Furthermore, the differences in DOI were not considered. A future study that correlates changes in CBF with DOI in individual patients can help elucidate the cerebral hemodynamics over time. Patients with an SCI suffer from voiding problems, for which most of them take medications such as α-blockers, which can cause OH. This study did not ascertain whether patients were taking such medications, and this should be considered in future studies. Another limitation is that only the presence or absence of presyncopal symptoms was noted; their severity, which would be more useful for determining the degree of orthostatic tolerance, was not considered. In order to obtain a more accurate conclusion, future studies should include more patients and have a longer follow-up period.
There is no precise guideline on whether to administer an anti-OH drug if the SBP falls but CBF is preserved. In future studies, following up changes in CBF and the prognosis after treatment of OH will be helpful in establishing guidelines for further treatment.

Conclusions

This study used CDU to confirm changes in CBF during postural changes in patients with a CSCI. If CBFV decreased by more than 21%, presyncopal symptoms occurred. However, even in the presence of OH, if CBFV was preserved (i.e., if CA occurred), patients were less symptomatic. The higher the SCI severity and the lower the functional score, the higher the possibility of CA failure. It is, therefore, necessary to use CDU to assess CBF in patients with a high-level SCI, in order to ensure proper administration of drugs and smooth rehabilitation, and to ultimately improve the quality of life of these patients.
The Effects of Restrictive Fluid Resuscitation on the Clinical Outcomes in Patients with Sepsis or Septic Shock: A Meta-Analysis of Randomized-Controlled Trials

This study aims to assess the impact of a restrictive resuscitation strategy on the outcomes of patients with sepsis and septic shock. This meta-analysis was conducted in accordance with the recommendations from the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) guidelines. A systematic search was performed in databases, including PubMed, Web of Science, EMBASE, and the Cochrane Library, covering the period from the inception of each database to August 2023, with no limitations on the language of publication. Outcomes assessed in the meta-analysis included mortality, duration of intensive care unit (ICU) stay in days, duration of mechanical ventilation in days, acute kidney injury (AKI) or the need for renal replacement therapy (RRT), and length of hospital stay in days. Overall, 12 studies met the inclusion criteria and were included in the present meta-analysis. The findings of this study indicate that although the risk of mortality was lower with fluid restriction than in the control group, the difference was statistically insignificant (risk ratio (RR): 0.98; 95% confidence interval (CI): 0.9-1.05; P value: 0.61). Additionally, the duration of mechanical ventilation was significantly shorter in the restrictive fluid group than in its counterparts (mean difference (MD): -1.02; 95% CI: -1.65 to -0.38; P value: 0.003). There were no significant differences found in relation to the duration of ICU stay, the incidence of AKI, the requirement for RRT, or the length of hospital stay measured in days.

Introduction and Background

Sepsis poses a significant global health challenge, with approximately 49 million new cases and 11 million associated deaths reported annually [1]. Sepsis is characterized by life-threatening organ dysfunction resulting from an uncontrolled response to infection, while septic shock is an advanced stage of sepsis characterized by severe circulatory dysfunction and a notably sharp drop in blood pressure, which can ultimately lead to organ dysfunction and failure due to inadequate blood flow and oxygen delivery, carrying a higher risk of mortality compared to sepsis alone [2]. The primary approach to treating sepsis in its initial stages involves administering intravenous antibiotics and fluids, controlling the source of infection, and providing necessary supportive care [3]. In the context of septic shock, the foremost component of hemodynamic support is fluid administration, with a crucial focus on optimizing preload. However, recent observations have raised concerns about the potential harm of overly aggressive fluid resuscitation, as an excessive positive fluid balance has been associated with increased mortality in intensive care units (ICUs) [4,5]. While various hemodynamic protocols have been studied in randomized controlled trials during the early hours of septic shock treatment [6], there is limited evidence concerning the practical aspects of fluid administration in the later stages.
Numerous physiological studies have highlighted the unreliability of static preload indices, such as central venous pressure (CVP), in assessing fluid responsiveness, especially in septic patients [7]. In contrast, dynamic preload indices like pulse pressure variation (PPV) or changes in stroke volume during passive leg raising (PLR) have proven to be highly dependable for evaluating fluid responsiveness, provided that the appropriate conditions for accurate PPV measurement are met [8].

Hospitals worldwide administer over 200 million liters of 0.9% sodium chloride intravenously every year [9]. Given this substantial volume of fluid administration, experts argue that each type of fluid possesses its own therapeutic index and recommend implementing active fluid restriction and deresuscitation strategies for septic patients after the initial resuscitation phase [9]. Pharmacists can play a crucial role in these strategies, from selecting and dosing fluids appropriately to overseeing pharmacist-driven deresuscitation protocols [10]. Despite the conflicting data surrounding fluid resuscitation in septic patients, there is an ongoing need for studies to determine the ideal volume and timing of fluid resuscitation, both in the initial resuscitation phase (within the first six hours) and beyond (during the restriction phase).

Although there is a consensus on the importance of adequate fluid therapy in sepsis, and despite numerous recent clinical trials exploring fluid management in sepsis, the optimal fluid management strategy remains contentious and unclear, lacking definitive guidelines for the ideal fluid resuscitation approach in critically ill septic patients. This study aims to assess the impact of a restrictive resuscitation strategy on outcomes in patients with sepsis and septic shock.

Review

Methods

This meta-analysis was conducted in accordance with the recommendations from the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) guidelines [11].

Search Strategy

A systematic search was performed in databases, including PubMed, Web of Science, EMBASE, and the Cochrane Library, covering the period from the inception of each database to August 2023, with no limitations on the language of publication. The keywords used for searching relevant articles included "restrictive resuscitation," "sepsis," and "standard care," along with their synonyms and Medical Subject Heading (MeSH) terms. Additionally, the reference lists of all included studies were manually screened to identify relevant studies.

Study Selection and Eligibility Criteria

We included studies that were randomized controlled trials (RCTs) comparing restrictive resuscitation with standard care or other resuscitation approaches. The study population comprised adult patients with sepsis or septic shock. We excluded observational studies, reviews, editorials, and expert opinions. Eligible records were imported into ENDNOTE software (version X9). After removing duplicate studies, two reviewers independently conducted an initial assessment of the titles and abstracts. Articles meeting the criteria underwent a comprehensive full-text screening. In cases where there were differences in opinion between the two reviewers, resolution was achieved through discussion, consensus, or involving a third author. Subsequently, studies not relevant to the research criteria were excluded, with explicit reasons for their exclusion documented.
Data Extraction

Data from the included studies were extracted using a predesigned data collection table created in a Microsoft Excel spreadsheet. Data extraction encompassed the following elements: author name, year of publication, sample size, participant characteristics, and outcomes. Outcomes assessed in the meta-analysis included mortality, duration of intensive care unit (ICU) stay in days, duration of mechanical ventilation in days, acute kidney injury (AKI) or the need for renal replacement therapy (RRT), and length of hospital stay in days.

Quality Assessment

Two reviewers independently assessed the potential bias in the included studies using the quality assessment tool from the Cochrane Collaboration. Seven aspects of bias were examined: (1) generation of random sequences (selection bias), (2) concealment of allocation (selection bias), (3) masking of participants and staff (performance bias), (4) blinding of outcome evaluation (detection bias), (5) incomplete outcome information (attrition bias), (6) selective reporting (reporting bias), and (7) other factors (follow-up duration, baseline characteristics).

Data Analysis

All data were analyzed using REVMAN software (version 5.4.1) to determine pooled effects. For continuous outcomes, the mean difference (MD) with a 95% confidence interval (CI) was calculated, and, for categorical outcomes, the risk ratio (RR) was reported with a 95% CI. A significance level of P < 0.05 was considered to indicate a significant difference. Heterogeneity among the study results was reported as I-square. I-square values of 0% to 25% indicated low heterogeneity, 25% to 50% moderate heterogeneity, 50% to 90% substantial heterogeneity, and 75% to 100% considerable heterogeneity.

After a comprehensive search, a total of 548 records were imported into ENDNOTE software. After removing 52 duplicates, the abstracts and titles of the remaining 406 studies were assessed. The full text of 23 studies was obtained, and a detailed assessment was performed based on the predefined inclusion and exclusion criteria. Through reading of the full text, 12 studies met the inclusion criteria and were included in the present meta-analysis. Figure 1 shows the PRISMA-P flowchart of study selection. Table 1 shows the characteristics of the included studies. Figure 2 shows the quality assessment of the included RCTs.

Mortality

All 12 studies compared mortality between the restrictive resuscitation and standard care groups. As shown in Figure 3, the risk of mortality was higher in the standard care group than in the restrictive resuscitation group, but the difference was statistically insignificant (RR: 0.98; 95% CI: 0.9 to 1.05; P value: 0.61). Heterogeneity was insignificant.

Number of Days in Ventilation

Eight studies were included in the pooled analysis of the number of days in ventilation. As shown in Figure 4, the mean number of ventilation days was significantly lower in the restrictive fluid group than in the other group (MD: -1.02; 95% CI: -1.65 to -0.38; P value: 0.003). Heterogeneity was insignificant.

Length of Stay in ICU

Six studies were included in the pooled analysis of the mean length of stay in the ICU. As shown in Table 2, no significant difference was found in the mean length of ICU stay between the two groups (MD: -0.04; 95% CI: -0.46 to 0.39; P value: 0.87). Significant heterogeneity was reported among the study results.
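As an illustration of the pooling described in the Data Analysis subsection, the sketch below computes a fixed-effect inverse-variance pooled risk ratio and the I-square statistic from per-study estimates. The input numbers are placeholders, not data from the included trials:

```python
import numpy as np

# Placeholder per-study log risk ratios and standard errors (not trial data)
log_rr = np.array([-0.22, 0.10, -0.35, 0.05])
se = np.array([0.10, 0.12, 0.15, 0.08])

w = 1 / se**2                               # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)     # fixed-effect pooled estimate
se_pooled = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

q = np.sum(w * (log_rr - pooled) ** 2)      # Cochran's Q
df = len(log_rr) - 1
i2 = 0.0 if q == 0 else max(0.0, (q - df) / q) * 100  # I^2 in percent

print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f}), I^2 = {i2:.0f}%")
```

The same inverse-variance machinery applies to continuous outcomes by pooling per-study mean differences instead of log risk ratios and skipping the final exponentiation.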
Acute Kidney Injury (AKI) or Need for Renal Replacement Therapy (RRT)

Seven studies assessed the impact of restrictive fluid resuscitation on AKI or RRT. As shown in Table 2, the risk of AKI or RRT was lower in patients randomized to the restrictive fluid resuscitation group than in the other group, but the difference was statistically insignificant (RR: 0.89; 95% CI: 0.77 to 1.03; P value: 0.12). No significant heterogeneity was reported among the study results.

Hospital Length of Stay

Through a pooled analysis of four studies that assessed the duration of hospital stays, no significant difference was found in the mean length of hospital stay (MD: 0.80; 95% CI: -0.50 to 2.11; P value: 0.23). However, it is worth noting that there was substantial heterogeneity observed among the outcomes of these studies.

Discussion

This meta-analysis was conducted to assess the effect of fluid restriction on patients with sepsis and septic shock. The findings of this study indicate that although the risk of mortality was lower with fluid restriction than with other approaches, the difference was statistically insignificant. Additionally, the duration of mechanical ventilation was significantly shorter in the restrictive fluid group than in its counterparts. The meta-analysis conducted by Reynolds et al., which comprised eight RCTs, reported similar findings [24].

Our study's finding of a shorter period of mechanical ventilation when employing a restricted volume approach in septic patients aligns with previous research that has indicated the benefits of such an approach in reducing the duration of mechanical ventilation in cases of acute lung injury [25] and preventing excessive pulmonary fluid buildup in acute pancreatitis [26]. These findings not only support earlier systematic reviews that suggested a trend toward reduced mechanical ventilation duration but also strengthen the evidence base by incorporating new data that surpasses the threshold of statistical significance [24,27]. Notably, our study did not identify any concerning signals related to adverse effects such as acute kidney injury (AKI), digital ischemia, or increased vasopressor requirements associated with a restrictive resuscitation approach beyond the initial six-hour resuscitation phase. Consequently, implementing a restricted fluid strategy appears to be not only safe but also justified after the initial six-hour resuscitation bundle.

Pharmacists have the opportunity to play an active part in managing fluid levels and reducing excess fluids in critically ill patients. Research has indicated that a multidisciplinary approach, which emphasizes limiting fluid intake and promoting urine production during the initial 72 hours after recovering from shock, is linked to a more balanced fluid state, more days without needing intensive care, and a decrease in hospital mortality rates [10]. The findings from this meta-analysis add further confirmation of the safety and potential advantages of these approaches and offer a foundation for further investigation into controlled fluid management for patients with sepsis.
This meta-analysis contributes to the existing body of research on the safety of fluid restriction. It does so by delving into previously identified safety indicators, including factors such as ICU stay, acute kidney injury (AKI), and the need for renal replacement therapy (RRT). The results showed that there were no statistically significant differences between the two groups in terms of any of these events.

It is also crucial for healthcare professionals to recognize that there is still uncertainty about the pros and cons of intravenous (IV) fluid therapy, especially during the initial stages of resuscitation. It is worth noting that a significant multicenter study exploring a strategy of limited fluid use in patients undergoing elective major abdominal surgery actually revealed poorer outcomes, despite smaller studies and observational data supporting this approach [28]. Currently, several large-scale randomized trials are underway to investigate various hemodynamic resuscitation protocols for patients in septic shock. These trials are expected to fill the current knowledge gap and provide valuable insights for both patients and healthcare providers.

The current meta-analysis has certain limitations. First, the mortality outcome was primarily influenced by a single, larger RCT [18], highlighting the need for additional high-quality studies with minimal bias to generate more robust conclusions. Second, in all these studies, fluid restriction was implemented either during or after the six-hour resuscitation period (in which all studies administered at least 30 ml/kg of fluids within the initial six hours). Consequently, these studies were unable to evaluate the recommendation from the Surviving Sepsis Campaign, which advises administering 30 ml/kg of fluids within the initial three hours of fluid resuscitation for septic patients. One current trial compares restricted and liberal fluid strategies in sepsis (NCT05453565), and its results may help validate the findings of this meta-analysis.

Conclusions

In conclusion, this meta-analysis, based on a rigorous selection of studies, revealed that while there was a trend towards reduced mortality with restrictive fluid management in septic patients, this difference did not reach statistical significance. Notably, restrictive fluid strategies were associated with a shorter duration of mechanical ventilation, aligning with previous research. Importantly, no concerning signals of adverse effects were identified beyond the initial resuscitation phase. However, the analysis found no significant differences in ICU length of stay, risk of AKI or need for RRT, and hospital length of stay. It is evident that more high-quality studies are needed to establish conclusive findings, but these results contribute to the ongoing discussion on fluid management in sepsis and septic shock, emphasizing the potential benefits of a restrictive approach in specific clinical contexts.

FIGURE 1: PRISMA-P flowchart showing the study selection process
KAHRP dynamically relocalizes to remodeled actin junctions and associates with knob spirals in Plasmodium falciparum-infected erythrocytes

The knob-associated histidine-rich protein (KAHRP) plays a pivotal role in the pathophysiology of Plasmodium falciparum malaria by forming membrane protrusions in infected erythrocytes, which anchor parasite-encoded adhesins to the membrane skeleton. The resulting sequestration of parasitized erythrocytes in the microvasculature leads to severe disease. Despite KAHRP being an important virulence factor, its physical location within the membrane skeleton is still debated, as is its function in knob formation. Here, we show by super-resolution microscopy that KAHRP initially associates with various skeletal components, including ankyrin bridges, but eventually colocalizes with remnant actin junctions. We further present a 35 Å map of the spiral scaffold underlying knobs and show that a KAHRP-targeting nanoprobe binds close to the spiral scaffold. Single-molecule localization microscopy detected ~60 KAHRP molecules/knob. We propose a dynamic model of KAHRP organization and a function of KAHRP in attaching other factors to the spiral scaffold.

Red blood cells infected with the human malaria parasite Plasmodium falciparum acquire thousands of small protrusions that render their initially smooth surface bumpy (Gruenberg et al., 1983). These protrusions, termed knobs, play a pivotal role in the pathophysiology of falciparum malaria. They form a platform on which parasite-encoded adhesins, such as the immune-variant PfEMP1 antigens, are presented and anchored to the membrane skeleton (Warncke & Beck, 2019). As a result, parasitized erythrocytes attain cytoadhesive properties and sequester in the deep vascular bed of inner organs, which, in turn, can lead to severe sequelae including impaired tissue perfusion, hypoxia, and local microvascular inflammation followed by barrier dysfunction (Lee et al., 2019; Smith et al., 2013). Knobs also play an important role in reorganizing and stiffening the cell envelope, leading to rounder shapes and reduced deformability (Fröhlich et al., 2019; Zhang et al., 2015).

Previous studies have suggested that knobs are supramolecular structures composed of parasite-encoded factors and components of the erythrocyte membrane skeleton (Warncke & Beck, 2019). A central building block is the parasite-encoded knob-associated histidine-rich protein (KAHRP). Parasite mutants lacking the corresponding gene are knobless and do not cytoadhere in flow (Crabb et al., 1997). KAHRP features several low-affinity interaction domains, allowing it to self-aggregate and to bind to actin, ankyrin, spectrin, and the cytoplasmic domain of PfEMP1 antigens (Cutts et al., 2017; Kilejian et al., 1991a; Oh et al., 2000; Pei et al., 2005; Waller et al., 1999; Warncke & Beck, 2019; Weng et al., 2014). However, spatial and temporal aspects of the assembly are still debated. In particular, there are dissenting views as to where knobs form: at the actin junctional complex (Oh et al., 1997, 2000), the ankyrin bridge (Cutts et al., 2017), or in the mesh formed by spectrin filaments (Looker et al., 2019). In a recent development, it has been shown that knobs are underlaid with a spiral-like scaffold (Watermeyer et al., 2016). However, the molecular composition of this spiral is still unclear, as is the overall architecture of knobs.
Recent advances in super-resolution fluorescence microscopy and image processing (Hell & Wichmann, 1994; Schnitzbauer et al., 2018) have offered new opportunities to interrogate the membrane cytoskeletal organization of erythrocytes at the nanometer scale (Pan et al., 2018). To gain insights into the structural organization of knobs, we implemented several high-resolution imaging techniques, including two-color stimulated emission depletion (STED) microscopy with a two-dimensional pairwise cross-correlation analysis, photo-activated localization microscopy (PALM) with single-molecule counting, and electron tomography combined with specific nanoprobe labeling and stereological computer simulations. The combination of these imaging platforms allowed us to map the physical localization of KAHRP within the membrane skeleton and on the spiral scaffold underlying knobs over different stages of parasite development.

| Analysis of the membrane skeleton of red blood cells by super-resolution microscopy

To investigate the organization of knobs and their interaction with the membrane skeleton, we used a recently developed imaging protocol based on lysing erythrocytes and exposing their plasma membrane and the membrane skeleton on a planar substrate (Figure 1a) (Looker et al., 2019). The exposed membranes were subsequently stained with antibody combinations, for example, against the N-terminus (actin-binding domain) of ß-spectrin (spectrin B2) and protein 4.1R (Figure 1b,c), before being imaged by STED microscopy. The STED images were subsequently processed, and defined fluorescence clusters were observed. Using custom-made algorithms, the cluster distribution densities, their sizes in full width at half maximum (FWHM), the nearest neighbor distances, and the spatial relationship between the two targets were calculated (Figure 1c) (see Experimental Procedures).

We first validated our imaging setup by determining the size and spatial arrangement of membrane skeletal components of uninfected erythrocytes and comparing the findings with previous results. Major components of the red blood cell membrane skeleton are spectrin and actin filaments (Figure 1a). The spectrin filaments consist of α- and ß-spectrin, which form α2ß2 heterotetramers by the head-to-head association of two αß dimers (Lux, 2016; Machnicka et al., 2014). The N-termini of the ß-spectrin subunits are positioned at the tail ends of the heterotetramer and contain two calponin homology (CH) domains for binding to actin protofilaments consisting of 6-8 actin monomers in each of the two strands (Lux, 2016; Machnicka et al., 2014). Protein 4.1R strengthens the spectrin-actin interaction (Lux, 2016; Machnicka et al., 2014). Groups of up to six spectrin heterotetramers can attach to an actin protofilament, resulting in a pseudohexagonal meshwork (Lux, 2016). Ankyrin binds to the C-terminal domain of ß-spectrin and connects integral membrane proteins with the actin-spectrin network in an ankyrin complex (Lux, 2016; Machnicka et al., 2014). The measured mean actin cluster size in FWHM was 45 ± 7 nm (mean ± SD; number of determinations, N = 357; number of independent cells analyzed, n = 23) and that of the N-termini of ß-spectrin was 48 ± 8 nm (N = 2,220, n = 61) (Figure S1).
These values are slightly larger than the reported physical dimension of the protofilaments of ~37 nm (Lux, 2016) and might be explained by the lateral localization of the spectrin binding sites and the additional sizes of the primary and secondary antibody trees used to detect the two targets. As a second validation step, we determined the nearest neighbor distances for selected targets, which are largely defined by the length of the spectrin filaments. Spectrin filaments in erythrocytes have measured lengths of ~50-100 nm and, in the fully stretched conformation, of ~200 nm (Lux, 2016; Pan et al., 2018). The average nearest neighbor distances of the targets protein 4.1R, N-terminus of ß-spectrin, C-terminus of ß-spectrin (center of spectrin filaments), and ankyrin followed a Gaussian distribution in each case and were on average 92 ± 25 nm (N = 1,741, n = 45), 111 ± 34 nm (mean ± SD; N = 2,021, n = 45), 112 ± 31 nm (N = 3,131, n = 32), and 110 ± 30 nm (N = 2,884, n = 45), respectively (Figure 1d and Figure S2). These values indicate a slightly stretched membrane skeleton in the exposed membranes.

To measure the distance between different fluorophores, we used a two-dimensional pairwise cross-correlation analysis between the signals recorded in the two different fluorescent channels from the same sample. The resulting coefficient is expected to be, at zero intermolecular distance, >1 for two colocalizing targets, <1 for two excluding targets, and ~1 for two independent, randomly distributed targets. Cross-correlation between protein 4.1R and other components of the actin junctional complex, including tropomodulin, actin, and the N-terminal domain of ß-spectrin (spectrin B2), revealed maximal values >1.5 at an intermolecular distance of 0-6 nm (the first binning interval in the cross-correlation analysis, see Experimental Procedures for details), which quickly declined to ~1 within ~100 nm (Figure 1e and Figure S3). In comparison, cross-correlations between protein 4.1R and components of the ankyrin bridge, including ankyrin and the C-terminal domain of ß-spectrin (spectrin A12), revealed minimal values at zero intermolecular distance, which rose to a value of ~1 within ~30 nm (Figure 1e and Figure S3).

FIGURE 1 Workflow of STED imaging and method validation. (a) Cartoon depicting the preparation of exposed membranes by hypotonic shock of erythrocytes immobilized on a glass slide. RBC, red blood cell. (b) Current model of the spectrin/actin network of erythrocytes (see main text for further details). (c) Separate two-color STED images of uninfected erythrocytes stained with an antibody against the N-terminus (actin-binding domain) of ß-spectrin (spectrin B2, left panel) and protein 4.1R (right panel). Scale bar, 0.5 µm. Middle panel, zoom-in of overlaid STED images showing both fluorescence clusters. Right panel, the same image but resampled. Scale bar, 50 nm. (d) Distribution of the nearest neighbor distance of protein 4.1R. The histogram was fitted using a Gaussian function (red line). N = 1,471, n = 45. (e) Calculated two-dimensional cross-correlations between protein 4.1R and tropomodulin (yellow; n = 28), actin (red; n = 13), N-terminus of ß-spectrin (spectrin B2, purple; n = 36), C-terminus of ß-spectrin (spectrin A12, medium purple; n = 18), and ankyrin (orange; n = 29). The means ± SEM of n cells from at least three different donors are shown. Representative STED images underpinning the cross-correlations are depicted in Figure S3.
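A minimal sketch of the nearest-neighbor analysis described above: given 2-D cluster coordinates, nearest-neighbor distances are computed with a KD-tree and their histogram is fitted with a Gaussian. The coordinates are simulated, and scipy is assumed to be available:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
points = rng.uniform(0, 2000, size=(300, 2))   # simulated cluster centers, nm

tree = cKDTree(points)
d, _ = tree.query(points, k=2)                  # k=1 is the point itself
nnd = d[:, 1]                                   # nearest-neighbor distances

hist, edges = np.histogram(nnd, bins=30)
centers = (edges[:-1] + edges[1:]) / 2

def gauss(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

(a, mu, sigma), _ = curve_fit(gauss, centers, hist,
                              p0=[hist.max(), nnd.mean(), nnd.std()])
print(f"mean NND = {mu:.0f} nm, FWHM = {2.355 * abs(sigma):.0f} nm")
```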
These results agree with the known molecular architecture of the membrane skeleton, with protein 4.1 stabilizing the connection between the N-termini of ß-spectrin and the actin protofilaments at the vertices of the cytoskeletal meshwork and tropomodulin and adducin capping the pointed and barbed ends of actin filaments, respectively (Lux, 2016). Ankyrin and the C-termini of ß-spectrin localize to the edges of the meshwork at a distance of 30-35 nm from the actin junctional complex in the native skeleton (Lux, 2016; Pan et al., 2018) (Figure 1b). Thus, the imaging protocol reveals the established architecture of the spectrin/actin network and appears to be suitable to investigate parasite-induced changes of it. Further details on the spatial resolution of the experimental setup are provided in Figure S4.

Comparable results were also obtained when KAHRP-stained exposed membranes were imaged by direct stochastic optical reconstruction microscopy (dSTORM) (FWHM of 46 ± 12 nm [N = 3,383; n = 14]; distribution density of 22 ± 11 µm−2 [n = 14]) (Figure 2a and Figure S5). The appearance of the KAHRP clusters was also unaffected by whether or not specimens were fixed with paraformaldehyde (Figure S6b). In contrast, adding even minute amounts of glutaraldehyde as a fixative substantially reduced the staining efficiency and the overall quality of the super-resolution images (Figure S6b,c), consistent with previous reports (Mehnert et al., 2019). Two-color STED, using the two KAHRP antibodies, revealed the expected colocalization of the corresponding signals and, accordingly, a high coefficient of 5.9 at 0-6 nm distance in the cross-correlation analysis (Figure 2b and Figure S7). Colocalization at 0-6 nm distance was also observed between KAHRP and tropomyosin and actin (cross-correlation coefficient >4) and, with coefficients between 1.5 and 1.7, between KAHRP and the N-terminus of ß-spectrin and protein 4.1R (Figure 2b and Figure S7). The two actin-associated proteins adducin and tropomodulin displayed an independent spatial correlation with KAHRP, with coefficients of ~1 (Figure 2b and Figure S7). In comparison, KAHRP was found to be anticorrelated with the C-terminus of ß-spectrin and ankyrin. In these cases, the cross-correlation coefficients reached a minimum at 0 nm distance before approaching a value of ~1 at larger distances (Figure 2b and Figure S7). Two-color STED imaging further confirmed colocalization of KAHRP with PfEMP1 and PF3D7_0532400 (Cutts et al., 2017; Oberli et al., 2014), with cross-correlation coefficients >2 at 0-6 nm distance, but not with the Maurer's cleft-associated histidine-rich protein 1 (MAHRP1) (Spycher et al., 2003) or the Maurer's cleft residential skeleton binding protein 1 (SBP1) (Blisnick et al., 2000) (Figure S8).

| Repositioning of KAHRP during parasite development

Previous in vitro studies have revealed that KAHRP is a multidomain protein with disordered regions that can interact with actin and spectrin and also with ankyrin (Cutts et al., 2017; Warncke & Beck, 2019). We therefore wondered whether the observed strong correlation of KAHRP with actin and the N-terminus of ß-spectrin and the anticorrelation with ankyrin and the C-terminus of ß-spectrin in trophozoites represents only a snapshot of a more dynamic process. To address this hypothesis, we repeated the analysis, but this time using highly synchronized parasites at 12 ± 2, 16 ± 2, and 20 ± 2 hr postinvasion.
Cross-correlation analyses of two-color STED images showed maximal values at 0-6 nm distance between KAHRP and targets of both the actin junctional complex (N-terminus of ß-spectrin) and the ankyrin bridge (C-terminus of ß-spectrin and ankyrin) at the earliest time point investigated (Figure 3a and Figure S9a). Comparable results were found for KAHRP at 16 ± 2 hr postinvasion (Figure 3b and Figure S9b). However, 20 ± 2 hr postinvasion, the spatial association between KAHRP and ankyrin dramatically changed and the colocalization became statistically independent, with a constant cross-correlation coefficient of ~1 (Figure 3c and Figure S9c). In comparison, the high cross-correlation at 0-6 nm distance between KAHRP and the N-terminus of ß-spectrin remained.

| 3D structure of knob spirals

To better understand the structural organization of knobs, we applied cryo-electron tomography (cryo-ET) and subtomogram averaging to ghosts prepared from infected erythrocytes, preserved by rapid freezing in vitreous ice (Figure 4 and Figure S11a; Watermeyer et al., 2016). Although we observed spirals with 5 or more turns, the additional turns could not be resolved in our map because the peripheral regions were less ordered (Figure S11) and, thus, were averaged out. We next performed local classification of the densities of the spiral (Figure S11d). The observed intraspiral densities localized between the second and third turn of the spiral.

FIGURE 3 Dynamic colocalization of KAHRP with membrane skeletal components during early parasite development. (a) Exposed membranes were prepared from highly synchronized Plasmodium falciparum cultures 12 ± 2 hr postinvasion and stained with an anti-KAHRP antiserum and antisera against the N-terminus of ß-spectrin (spectrin B2, purple; n = 15), the C-terminus of ß-spectrin (spectrin A12, medium purple; n = 18), and ankyrin (orange; n = 22). The calculated two-dimensional cross-correlations between KAHRP and the membrane skeletal components investigated are shown. The means ± SEM of n cells from at least three different donors are shown. STED images underpinning the cross-correlations are depicted in Figure S9. (b) As in (a) but investigating parasites 16 ± 2 hr postinvasion. C-terminus of ß-spectrin (spectrin A12, medium purple; n = 28), N-terminus of ß-spectrin (spectrin B2, purple; n = 31), ankyrin (orange; n = 28), and adducin (dark green; n = 19). (c) As in (a) but investigating parasites 20 ± 2 hr postinvasion. N-terminus of ß-spectrin (spectrin B2, purple; n = 39), ankyrin (orange; n = 21), and tropomyosin (blue; n = 27). (d) Temporal colocalization of KAHRP with ankyrin. The cross-correlation coefficients at 0-6 nm distance between ankyrin and KAHRP were analyzed as a function of the time postinvasion (black data points). The mean ± SEM is shown. The ratios of the KAHRP-spectrin B2 cross-correlation coefficients at 0-6 nm distance to the corresponding KAHRP-ankyrin values are shown as a function of parasite development (cyan data points).

| KAHRP associates with the spiral scaffold

Previous studies have suggested that KAHRP forms an electron-dense coat around the spiral scaffold (Looker et al., 2019; Watermeyer et al., 2016). This conclusion was drawn from cryo-ET analysis of schizont skeletons labeled with a KAHRP antibody and a 10-nm gold-conjugated secondary antibody. However, the localization precision of conventional indirect immuno-labeling is limited by the combined sizes of the primary and secondary antibodies of up to 30 nm (Vicidomini et al., 2018).
To better resolve the location of untagged KAHRP in relation to the spiral scaffold, we looked for alternative labeling techniques and realized that KAHRP contains three stretches of six or more histidines within its N-terminal domain, which might be targeted by Ni2+-NTA nanoprobes. Ni2+-NTA nanoprobes are ~10-fold smaller than antibody trees and can deliver a gold particle or a fluorescent dye within 1.5 nm of the target (Reddy et al., 2005). To explore the possibility of labeling KAHRP with nanoprobes, we initially tested, by two-color STED microscopy, whether a Ni2+-NTA ATTO nanoprobe colocalizes with an anti-KAHRP antibody (Figure 2b). We then labeled membrane ghosts with a Ni2+-NTA gold nanoprobe and mapped the positions of the gold particles relative to the knob spirals by cryo-ET (Figure 5). The gold particles appeared offset outward in relation to the spiral blade. We subsequently modeled the blade of the spiral as an Archimedean conical spiral (Figure 5d), using the parameters determined from the cryo-ET mapping depicted in Figure 4d. We next simulated the binding of the gold particles to the spiral, with the distance from the center of mass of the gold particle to the spiral varying from 0 to 10 nm. The simulated binding profiles were subsequently compared with the experimental data using a least-squares regression analysis. A distance of 6 nm from the gold particle center of mass to the spiral wall yielded the best fit between simulated and experimental data (Figure 5e,f). This value is in good agreement with the distance between the center of mass of the gold particle and the histidine stretches targeted by the nanoprobe. The agreement in the inclination angle of the simulated and experimental binding profiles (Figure 5f) further supports the hypothesis that the gold particles indeed associate with the spiral surface. Overall, the labeling data suggest that KAHRP is an important component of the knob spirals.

Clusters of KAHRP labeling via the Ni2+-NTA-5 nm gold nanoprobe were also observed in membrane ghosts prepared from ring stages 18 ± 2 hr postinvasion (Figure 5g). However, the tomograms revealed no spiral structures, and the arrangement of the gold particles in the clusters appeared planar rather than spatial (Figure 5g).
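The fitting procedure described above can be illustrated with a short sketch: an Archimedean conical spiral is parameterized, hypothetical gold positions are generated at a fixed radial offset from the blade, and the offset is recovered by a least-squares scan. Only the four turns and the 53 nm basal diameter come from the text; every other parameter, and the radial-offset simplification of the distance to the blade wall, is illustrative:

```python
import numpy as np

def conical_spiral(turns=4, r_base=26.5, height=20.0, n=400):
    """Points along an Archimedean conical spiral blade (all units nm).

    Four turns and the 53 nm basal diameter (radius 26.5 nm) come from the
    text; the 20 nm height is an illustrative placeholder.
    """
    t = np.linspace(0.0, 1.0, n)
    theta = 2 * np.pi * turns * t
    r = r_base * (1 - t) + 0.5          # radius shrinks toward the apex
    return np.c_[r * np.cos(theta), r * np.sin(theta), height * t]

def mean_sq_error(gold, spiral):
    """Mean squared nearest distance from each gold particle to the spiral."""
    d = np.linalg.norm(gold[:, None, :] - spiral[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

def push_outward(points, offset):
    """Shift points radially outward in the xy-plane by 'offset' nm
    (a simplification of the distance-to-blade-wall measure)."""
    shifted = points.copy()
    radial = shifted[:, :2] / np.linalg.norm(shifted[:, :2], axis=1,
                                             keepdims=True)
    shifted[:, :2] += offset * radial
    return shifted

rng = np.random.default_rng(2)
spiral = conical_spiral()

# Hypothetical gold particles: spiral points offset 6 nm outward plus noise
gold = push_outward(spiral[rng.choice(len(spiral), 50)], 6.0)
gold += rng.normal(0.0, 1.0, gold.shape)

# Scan candidate offsets 0-10 nm; the error minimum recovers the offset
for offset in range(11):
    print(offset, round(mean_sq_error(gold, push_outward(spiral, offset)), 2))
```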
Here we have, for the first time, imaged its dynamic localization by F I G U R E 6 Single-molecule localization microscopy of KAHRP. (a) Schematic illustration of KAHRP including structural features and interactions domains. The domains harboring epitopes for the monoclonal antibody mAB18.1, the peptide antiserum, and the monoclonal antihistidine antibody are indicated. mEOS2 was inserted between residues 207 and 208. (b) Representative scanning electron microscopic images of erythrocytes infected with the parental line FCR3 or the genetically engineered clone G8 expressing a genomically encoded KAHRP/mEOS2 fusion protein. Scale bar, 1 µm. The knob density per cell was determined and assessed by box plot analyses. Box plots show the individual data points, with the median (black horizontal line), the mean (red horizontal line) and the 25% and 75% quartile range being shown. Error bars indicate the 10th and 90th percentile. Statistical significance was assessed using the Mann-Whitney rank-sum test. (c) Representative dSTORM image of exposed membrane prepared from G8 trophozoites stained with the monoclonal anti-KAHRP antibody mAB18.2. Scale bar, 1 µm. Zoom-ins of boxed areas are shown below. Scale bar 50 nm. (Cutts et al., 2017;Waller et al., 1999;Warncke & Beck, 2019). This includes binding to ankyrin, spectrin, and actin (Cutts et al., 2017;Kilejian et al., 1991a;Oh et al., 2000;Pei et al., 2005;Weng et al., 2014). Current models on the organization of knobs have assumed a static association of KAHRP with these components. Our data suggest a much more dynamic picture. Since our study is timeresolved, we were able to show that the interaction of KAHRP with ankyrin and the C-terminus of ß-spectrin (previously referred to as a ternary KAHRP-spectrin-ankyrin complex, Cutts et al., 2017) is temporal and occurred only during the early ring-stage development (Figure 3), whereas the later trophozoite stage displayed a profound anticorrelation between KAHRP and the two components of the ankyrin bridge ( Figure 2). In comparison, the colocalization of KAHRP with some components of the actin junctional complex persisted throughout intraerythrocytic development, as shown by two-color STED microscopy and pairwise cross-correlation analyses (Figures 2 and 3). The temporal co-localization of KAHRP with ankyrin bridge components would suggest a model in which KAHRP reorganizes its own clusters during knob formation or, alternatively, it might reflect a step in the parasite-induced reorganization and disassembly of the spectrin/actin network. Reorganization of KAHRP might be mediated by the changing phosphorylation and/or acetylation pattern of KAHRP during the intraerythrocytic cycle (Cobbold et al., 2016;Pease et al., 2013), thereby altering the affinity of the protein to components of the membrane skeleton. An initial affinity to multiple membrane skeletal components has the clear advantage of creating a concentration gradient away from the parasite, which allows passive transport by diffusion and does not require yet active transport, for example, along the long actin filaments that form in later stages. Once accumulated at the red blood cell membrane with sufficient concentration, KAHRP then could reorganize to its final destination. Previous models on knob organization have placed KAHRP and the spiral scaffold in the mesh formed by spectrin filaments (Looker et al., 2019), close to the ankyrin bridge (Cutts et al., 2017) or at the actin junction (Oh et al., 1997(Oh et al., , 2000. 
However, these models did not consider the possibility of dynamic rearrangements, involving a transient state at the ankyrin bridge during early ring-stage development. Our finding of a spatial correlation between KAHRP and actin and the N-terminus of ß-spectrin in trophozoites suggests that KAHRP eventually assembles at actin junctional complexes (Figures 2 and 3). While the actin junction may nucleate knob formation, we do not think that mature knobs maintain the anatomy of the actin junction. Previous cryo-ET has revealed that the parasite mines the actin from protofilaments to generate long actin filaments connecting the knobs with the Maurer's clefts and serving as cables for vesicular trafficking of PfEMP1 and other parasite factors to the red blood cell surface (Cyrklaff et al., 2011(Cyrklaff et al., , 2012(Cyrklaff et al., , 2016. In addition, the spectrin filaments stretch and the mesh size increases (Shi et al., 2013), which, in turn, contributes to membrane stiffening (Fröhlich et al., 2019;Lai et al., 2015). Consistent with dissembled actin junctions in knobs, we observed an independent spatial behavior between KAHRP and the actin filament capping factors adducin and tropomodulin (Figure 2b). In comparison, KAHRP strongly correlated with tropomyosin (Figure 2b), which might suggest that the long actin filaments extending from knobs are stabilized by tropomyosin at least at sites where they are close to, or connected with, KAHRP. Our study further sheds new light on the number of KAHRP molecules per knob. According to our quantitative PALM experiments, knobs contain on average 60 ± 30 KAHRP molecules (range 10-150). This number is more than one order of magnitude higher than previously reported (Looker et al., 2019) and would suggest that KAHRP is a major numeric component of knob protrusions. We considered the possibility that tagging the endogenous, genomically encoded KAHRP with mEOS2 affected trafficking or function of the resulting fusion protein. However, we do not think that such effects influenced the outcome or the interpretation of our results as the corresponding mutant displayed a knobby phenotype, although the knob density was slightly lower according to scanning electron microscopy than that of the parental FCR3 line (Figure 6b), whereas KAHRP cluster sizes and distribution densities were comparable according to dSTORM imaging (Figures S5 and S13e). Previous studies have posited the hypothesis of KAHRP coating the spiral scaffold underlying knobs (Looker et al., 2019;Watermeyer et al., 2016). This assumption arose from cryo-ET of schizont skeletons labeled with an anti-KAHRP antibody and a 10-nm goldconjugated secondary antibody (Watermeyer et al., 2016). However, the combined sizes of the antibody tree and the gold particle of ~40 nm complicates the interpretation of the results. To improve the localization precision of KAHRP, we used an Ni 2+ -NTA nanoprobe. Ni 2+ -NTA nanoprobes are 10 times smaller than antibody trees and can deliver a gold particle or a fluorescent dye within 1.5 nm of the target (Reddy et al., 2005). Each Ni 2+ ion coordinates with two histidines and tight binding is achieved when three adjacent Ni 2+ -NTA groups bind to six consecutive histidines (Reddy et al., 2005). KAHRP contains three stretches of 6 or more histidines within its N-terminal domain, making the protein a potential target for Ni 2+ -NTA nanoprobes. 
We demonstrated the feasibility of this approach by showing a strong cross-correlation at zero distance and, hence, colocalization of a Ni2+-NTA ATTO nanoprobe with an anti-KAHRP antibody in two-color STED microscopy of exposed membranes prepared from trophozoites (Figure 2b). We then showed that a Ni2+-NTA gold nanoprobe binds at, or close to, the spiral scaffold underlying knobs, using stereological computer simulations to interpret the binding profile. The stereological computer simulations were guided by an improved 35 Å resolution density map of the knob complex, which described the geometry of the spiral, defining metric values for its height, width, length, basal radius, and inclination angle. In this context, it is worth mentioning that the basal radius of the spiral and the KAHRP fluorescence cluster size have comparable values.

The 35 Å map of the knob spiral corroborates previous observations (Watermeyer et al., 2016), such as the presence of stick-like densities that seem to anchor the spiral to the lipid bilayer. In addition, our study revealed some additional features, including the intraspiral densities between the second and third turn, which might stabilize the spiral, and the crown-like extra densities at the periphery of the spiral base, which might be involved in connecting the spiral with filamentous structures. We further noted a high degree of variability between spirals with regard to the number of turns and the amount of peripheral proteins, possibly reflecting different knob configuration stages and/or functions.

Although our nanoprobe-based stereological approach cannot replace high-resolution structural information, it nevertheless provided the first experimental evidence of KAHRP localizing at, or close to, the spiral blade. On the basis of these findings, we propose that KAHRP is a component or an associated factor of the spiral scaffold and is distributed equally along the spiral length. However, we do not think that the spiral is solely made of KAHRP. Given a molecular weight of 62,682 Da for the processed KAHRP and a volume conversion factor of 1/825 nm³ Da−1 (Erickson, 2009), the ~60 KAHRP molecules detected per knob would occupy a volume of only ~4.6 × 10³ nm³, which seems too small to account for the entire spiral scaffold.

In summary, our findings suggest a dynamic and stage-dependent association of KAHRP with components of the membrane skeleton (Figure 7a). We propose that KAHRP initially uses multiple binding sites in the red blood cell cytoskeleton to quickly accumulate at the membrane, but later assembles at actin junctional complexes. We further propose that the anatomy of the junctional complex is lost as the knob matures, being replaced by a KAHRP-containing spiral scaffold from which long, tropomyosin-enforced, uncapped actin filaments extend towards the Maurer's clefts and which stabilizes the actin-mined membrane skeleton by bundling the N-terminal ends of ß-spectrin filaments at the cost of an increased mesh size and membrane hardening (Figure 7a,b).

| Antibodies and nanoprobes

Antisera and nanoprobes used in this study are listed in Table S1.

| Oligonucleotides

Oligonucleotides and primers used in this study are listed in Table S2.

| Preparation of exposed membranes

Glass bottom culture dishes (MatTek Corporation) were treated with 2% 3-aminopropyl triethoxysilane (APTES) in 95% ethanol for 10 min, washed with 95% ethanol, and then incubated at 100℃ for 15 min before the dishes were incubated with 1 mM bis-sulfosuccinimidyl suberate in phosphate-buffered saline (PBS) at room temperature.

FIGURE 7 Graphic model depicting the organization of KAHRP. (a) Dynamic association of KAHRP with ankyrin bridges.
We propose that KAHRP initially binds to both the ankyrin bridge forming a ternary complex with ankyrin and spectrin (Cutts et al., 2017) and to the actin junctional complex during the early ring-stage development. We further propose that KAHRP then re-positions to the actin junctional complex as the parasite matures. Concomitantly, the knob spiral forms and the actin junctional complexes reorganize. Reorganization of the actin junctional complex would include uncapping and mining the actin protofilaments to form long filaments connecting the knobs with the Maurer's clefts (Cyrklaff et al., 2011). (b) Model depicting KAHRP as an associated factor of the spiral scaffold. According to the model, KAHRP would "glue" additional components to the spiral scaffold. This would include PfEMP1, the spectrin filaments (via a quaternary complex consisting of the N-terminus of ß-spectrin and protein 4.1R), and the long actin filaments. A "glue-like" function would be consistent with the multimodular-binding properties of KAHRP (Warncke & Beck, 2019). The KAHRP symbol indicates one or several KAHRP molecules (Shi et al., 2013). Dishes were subsequently washed with PBS and then treated with 0.1 mg/ml phytohemagglutinin E (PHAE) in PBS as described (Shi et al., 2013). Dishes were then washed with PBS and blocked with 0.1 M glycine for 15 min, washed once more with PBS, and kept at 4℃ until further use. Uninfected erythrocytes or magnetic enriched P. falciparum-infected erythrocytes were immobilized on the functionalized dishes followed by washes with a hypotonic phosphate buffer (10 mM sodium phosphate pH 8, 10 mM NaCl) followed by washes with water (Dearnley et al., 2016). Exposed membranes were either unfixed, or fixed with 4% paraformaldehyde in PBS or 4% paraformaldehyde and 0.0065% glutaraldehyde in PBS for 15 min, washed with PBS, and blocked in PBS containing 3% BSA. Exposed membranes were incubated with primary antibodies overnight at 4℃ and with secondary antibodies for 40 min at room temperature. All incubations and washes were performed in 3% BSA in PBS. | STED microscopy Super-resolution images were recorded using a STED/RESOLFT microscope (Abberior Instruments, Germany) equipped with 488 nm, 594 nm, and 640 nm excitation (Ex) laser sources and 775 nm STED laser lines and an Olympus microscope with a 100× oil immersion objective (UPLSAPO 1.4NA oil, 0.13 mm WD). The STED laser power was adjusted to ~40%. Fluorescence emitted from Abberior star 580 conjugated secondary antibodies were recorded in the Ex 594 channel ("green images") and fluorescence emitted from Abberior star red conjugated secondary antibodies were recorded in the Ex 640 channel ("magenta image"). See Table S1 for antibodies used in this study. The pixel size was 15 nm and the pixel dwell time was 10 µs. Deconvolution of 2D-STED images was performed using the Imspector imaging software (Abberior Instruments GmbH) and the Richardson-Lucy algorithm with default settings and regularization parameter of 1 × e −10 . 3D-STED images were deconvoluted using the Huygens Professional 20.04 software (Scientific Volume Imaging B.V.). The differences between sections in z-direction were 150-200 nm. In the case of mEOS 2.1 imaging by confocal microscopy, the sample was excited using the 488 nm laser line, and the emission was detected using a 500-550 nm bandpass filter. 
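As a minimal sketch of this analysis, the following Python code upsamples two channels by linear interpolation and radially averages their normalized cross-correlation. Note one assumption: for compactness it computes the correlation by FFT rather than by the explicit pairwise-distance tabulation the authors describe; for images, the two are equivalent up to binning. It is intended for small regions of interest, since 10× upsampling inflates memory 100-fold.

```python
import numpy as np
from scipy.ndimage import zoom

def pair_cross_correlation(img_a, img_b, pixel_nm=15.0, upsample=10, bin_nm=6.0):
    """Radially averaged pair cross-correlation; ~1 means no correlation."""
    # Linear interpolation (order=1) onto a ~1.5 nm grid, as in the text.
    a = zoom(img_a.astype(float), upsample, order=1)
    b = zoom(img_b.astype(float), upsample, order=1)
    # Circular cross-correlation via FFT, normalized so that an uncorrelated
    # pair of images gives ~1 at all radii.
    spec = np.fft.fftn(a) * np.conj(np.fft.fftn(b))
    xcorr = np.fft.fftshift(np.real(np.fft.ifftn(spec))) / (a.mean() * b.mean() * a.size)
    # Radial averaging into ~6 nm bins out to half the field of view.
    cy, cx = np.array(xcorr.shape) // 2
    yy, xx = np.indices(xcorr.shape)
    r_nm = np.hypot(yy - cy, xx - cx) * pixel_nm / upsample
    r_max = (min(xcorr.shape) // 2) * pixel_nm / upsample
    nbins = int(r_max // bin_nm)
    edges = np.arange(nbins + 1) * bin_nm
    idx = np.digitize(r_nm.ravel(), edges)
    valid = (idx >= 1) & (idx <= nbins)
    sums = np.bincount(idx[valid], weights=xcorr.ravel()[valid], minlength=nbins + 1)
    counts = np.bincount(idx[valid], minlength=nbins + 1)
    radii = 0.5 * (edges[:-1] + edges[1:])
    return radii, sums[1:] / counts[1:]
```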
| Cluster size and distribution
To study cluster statistics, we determined the number of peaks in a given image. The number of peaks divided by the image area defines the cluster density. For each peak, we found the nearest neighbor; the distribution of distances between nearest neighbors was used to estimate the average distance between clusters. For isolated peaks (no other peak within 2 pixels, or 30 nm), we cropped a 2 × 2-pixel area and fitted the resulting profile with a Gaussian distribution. The full width at half maximum (FWHM) was calculated from the standard deviation (FWHM = 2.355 SD) estimated by the Gaussian fit.
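A minimal Python sketch of these cluster statistics follows, using a 1D Gaussian fit for simplicity where the text describes fitting the cropped 2D profile; input peak positions and profiles are assumed to come from upstream peak detection.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import curve_fit

def cluster_stats(peaks_xy, image_area_um2):
    """Cluster density and nearest-neighbor distances from peak positions."""
    density = len(peaks_xy) / image_area_um2            # clusters per square µm
    d, _ = cKDTree(peaks_xy).query(peaks_xy, k=2)       # k=2: first hit is the point itself
    return density, d[:, 1].mean(), d[:, 1].std()

def fwhm_from_profile(x_nm, intensity):
    """Fit a Gaussian to an isolated peak profile; FWHM = 2.355 * SD."""
    gauss = lambda x, a, mu, sd, c: a * np.exp(-(x - mu) ** 2 / (2 * sd ** 2)) + c
    p0 = [intensity.max() - intensity.min(), x_nm[np.argmax(intensity)],
          20.0, intensity.min()]
    (a, mu, sd, c), _ = curve_fit(gauss, x_nm, intensity, p0=p0)
    return 2.355 * abs(sd)
```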
| Cryo-electron tomography, image processing, and subtomogram averaging
Imaging was performed on an FEI Titan Krios microscope fitted with a Gatan Quantum energy filter and a Gatan K2 Summit direct detector, operated by SerialEM software (Schorb et al., 2019). A total of 43 tomographic series were acquired using a dose-symmetric scheme (Hagen et al., 2017), with a tilt range of ±60°, a 3° angular increment, and defoci between −5 and −6 µm. The acquisition magnification was 42,000×, resulting in a calibrated pixel size of 3.39 Å. The electron dose for every untilted image was increased to around 10 e⁻ Å⁻², and tilt images were recorded as 10-frame movies in counting mode at a dose rate of approximately 0.62 e⁻ Å⁻² s⁻¹ and a total dose per tomogram of around 110 e⁻ Å⁻². Motion correction of tilt-series movies was performed using MotionCor2 (Zheng et al., 2017). Tilt series were aligned on the basis of the gold fiducials using the IMOD package (Kremer et al., 1996). Contrast transfer function (CTF) estimation was performed using defocus values measured by Gctf (Zhang, 2016) for each projection. Tomograms were reconstructed from CTF-corrected, aligned stacks using weighted back projection in IMOD. Subtomogram averaging was performed using the Dynamo package (Castaño-Díez et al., 2012). To define initial subtomogram positions, the centers of cubic voxels were manually picked on bin4 tomograms (489 particles), using the Dynamo Catalogue system (Castaño-Díez et al., 2017). Initial alignment was done manually on

| Labeling with a Ni²⁺-NTA-gold nanoprobe
Membrane ghosts were prepared as described above. Ghosts were subsequently extracted with NP-40 substitute (Sigma-Aldrich) (10 mM Na phosphate pH 7.4, 2 mM MgCl₂, 1 mM EDTA, 1 mM DTT, and 0.2%, 0.4%, or 1.0% NP-40 substitute where indicated) for 5-10 min at 4℃ before being centrifuged at 17,000 × g for 15 min at 4℃. The pellet was collected and resuspended in a small volume of the NP-40 substitute buffer. Two to three microliters were subsequently placed on a 100-mesh copper grid with a pioloform (1.5% w/v) support film (Peurla et al., 2019) that had been precoated with carbon, glow discharged, and functionalized with 0.1% poly-L-lysine solution for 5 min. The grids were washed in PBS and incubated for 10 min at room temperature in blocking buffer (1% fish skin gelatin, 20 mM Tris/HCl pH 7.4), and a drop containing the Ni²⁺-NTA-5 nm gold nanoprobe diluted 1:30 in blocking buffer was placed on the grid and incubated for 30 min at room temperature. Grids were rinsed several times with washing buffer (20 mM Tris/HCl pH 7.4, 150 mM NaCl, 8 mM imidazole), followed by three washes with water. Samples were embedded in a mixture of 1% methylcellulose and 1% uranyl acetate, and the grid was air-dried overnight. For electron tomography, a Tecnai F20 transmission electron microscope, operated with a tungsten field emission gun at 200 kV, was used at a defocus of 1 µm. Tilt series were recorded within maximal angles of −60° to +60° with 1-2° increments, using SerialEM software for data collection. The digital images were recorded on a CCD Eagle 4K × 4K camera at a nominal magnification of 25,000×, corresponding to a pixel size of 0.8919 nm. Tilt series were processed and reconstructed into 3D tomograms using the IMOD software package and fiducial markers (Kremer et al., 1996). The radial distance from the spiral central axis (r) and the distance in z from the spiral top were determined for each associated gold particle. To do so, the center of the spiral was determined and the top was marked with a yellow star. The spiral scaffold structure was subsequently divided into vertical zones along the z-axis, with one z-interval consisting of five layers (equivalent to five pixels, or 4.459 nm, in thickness).

| Stereological computer simulation
We simulated the knob spiral as an Archimedean conical spiral. The equations for the spiral are defined by two parameters: the slope of the spiral surface m and the radius r. The slope was estimated from the y-z cross-section of an average tomogram of the knob spiral. The radius r was computed from the width of the fourth turn of the spiral (corresponding to ϕ = 5π on the left side and 6π on the right side). The 3D coordinates of points on the spiral are described by x = rϕ·cos(ϕ), y = rϕ·sin(ϕ), z = −m·r·ϕ. The spiral surface is defined by two contours: an upper contour defined by x, y, z, and a lower contour defined by x, y, z − z_thick, where z_thick (= 8 nm) is the height of the spiral strip. To assign particles to the spiral, we distributed them using randomly distributed ϕ (between π and 8π) and z values (between the upper and the lower contour). Next, we added Gaussian noise around the x, y values with a variance similar to that of the experimentally observed data set. The binding profile was then plotted by binning the z-coordinates in 2-nm bins. For each bin, the mean and SD of the radial distance from the spiral center line and of the z-coordinate values were estimated.
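The following is a minimal Python sketch of the simulation just described; the values for r, m, and the noise level are illustrative placeholders, since the fitted parameters are not given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spiral_particles(n, r=2.0, m=0.35, z_thick=8.0, noise_sd=2.0):
    """Distribute n particles uniformly on an Archimedean conical spiral strip.

    x = r*phi*cos(phi), y = r*phi*sin(phi), z = -m*r*phi, with phi in [pi, 8*pi];
    particles sit between the upper contour and a lower contour offset by z_thick.
    r, m, and noise_sd are placeholders, not the values fitted in the study.
    """
    phi = rng.uniform(np.pi, 8 * np.pi, n)
    x = r * phi * np.cos(phi) + rng.normal(0, noise_sd, n)  # lateral Gaussian noise,
    y = r * phi * np.sin(phi) + rng.normal(0, noise_sd, n)  # mimicking measurement error
    z = -m * r * phi - rng.uniform(0, z_thick, n)           # anywhere within strip height
    return x, y, z

def binding_profile(x, y, z, bin_nm=2.0):
    """Mean and SD of radial distance from the spiral axis, in 2-nm z bins."""
    r_axis = np.hypot(x, y)
    edges = np.arange(z.min(), z.max() + bin_nm, bin_nm)
    idx = np.digitize(z, edges)
    return [(edges[i - 1] + bin_nm / 2, r_axis[idx == i].mean(), r_axis[idx == i].std())
            for i in np.unique(idx) if np.count_nonzero(idx == i) > 1]

x, y, z = simulate_spiral_particles(489)  # same particle count as the picked data set
profile = binding_profile(x, y, z)
```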
To fit our model, we computed the weighted mean squared error between the experimental data and the model predictions (for uniform binding on the spiral surface) for different radial distances (0-10 nm) from the center of mass of the particles to the spiral wall. A separation distance of ~6 nm yielded the best fit between simulated and experimental data. This separation distance is in the range of the distance between the center of mass of the gold particle and the His-tag (Reddy et al., 2005).

| Generation of a kahrp/mEOS2 mutant
For the generation of the KAHRP-mEOS2.1 mutants, the entire kahrp gene was amplified by PCR from FCR3 gDNA and cloned into the pL6-B vector (Ghorbal et al., 2014). A DNA fragment between the AflII and AleI restriction sites (amino acids 208-308 of KAHRP) was recodonized (GeneArt, ThermoFisher Scientific). The fragment was then used to replace the corresponding sequence in the kahrp coding sequence cloned in the pL6-B vector, using In-Fusion cloning (Takara Bio). Afterward, the recodonized mEos2.1 coding sequence was inserted into the AflII site, again by In-Fusion cloning. The guide RNAs were cloned into the BtgZI site of pL6-B. The guide RNAs were designed such that they would target the region between the AflII and AleI restriction sites of the endogenous kahrp gene; for this reason, this region was replaced by a recodonized version in the transfection vector. The final transfection vector was verified by sequencing analysis. Transfections were performed under standard conditions, using 75 µg each of the plasmid and the Cas9-expressing vector (Ghorbal et al., 2014). Integration events were verified by PCR of genomic DNA and sequencing analysis. Clones were obtained by limiting dilution, and integration was again confirmed by PCR and sequencing of the resulting PCR fragments and the corresponding mRNA. Primers used for cloning and sequencing are listed in Table S2.

| Single-molecule localization microscopy
LabTek chambers (ThermoFisher Scientific) were covered with 0.1 mg/ml concanavalin A for 60 min and washed with water and PBS before magnet-purified infected erythrocytes were allowed to settle on them for 10 min. After washing with PBS, cells were fixed with 4% paraformaldehyde in PBS for 10 min. Paraformaldehyde was removed and cells were washed and kept in PBS at 4℃ until imaging. Samples were imaged within 48 hr of preparation. Single-molecule localization microscopy (SMLM) (Sauer & Heilemann, 2017) was performed on a home-built microscope in total internal reflection fluorescence (TIRF) illumination mode (Karathanasis et al., 2020). In brief, an inverted microscope (Olympus IX71) with a 100× oil immersion objective (PLAPO 100× TIRFM, NA ≥ 1.45, Olympus) was equipped with three laser modules (LBX-405-50-CSB-PP, Oxxius; Sapphire 568 LP, Coherent; LBX-638-180, Oxxius). The fluorescence signal was filtered with a bandpass filter (BrightLine HC 590/20, AHF) and collected using an EMCCD camera (iXon Ultra X-10971, Andor). mEOS2 was photoconverted with 405 nm light (0-8 mW/cm²) and excited at 568 nm (0.26 kW/cm²), using an EMCCD integration time of 100 ms, a preamplifier gain of 1, and an electron-multiplying gain of 200. Quantitative PALM experiments were recorded until no more emission events were observed for mEOS2. SMLM images were reconstituted using the localize module of Picasso (Schnitzbauer et al., 2017), using a min. net gradient of 3,000 and a filter for the width of point spread functions set to a range of 0.4-1.6. Single-molecule emission events that occurred in consecutive frames were linked into one event. Single protein clusters were selected and the number of mEOS2 blinking events per cluster was determined. To determine the number of KAHRP proteins per protein cluster, the average number of detection events of mEOS2 was calibrated with single mEOS2 proteins attached to a poly-lysine surface (Fricke et al., 2015).
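A minimal sketch of this counting logic, with hypothetical inputs: consecutive-frame localizations are collapsed into single emission events, and per-cluster event counts are divided by the mean event number measured for single, surface-immobilized mEOS2.

```python
import numpy as np

def link_consecutive_frames(frames):
    """Collapse localizations appearing in consecutive frames into single events."""
    frames = np.sort(np.asarray(frames))
    if frames.size == 0:
        return 0
    return 1 + int(np.sum(np.diff(frames) > 1))  # a gap > 1 frame starts a new event

def molecules_per_cluster(events_per_cluster, events_per_single_mEOS2):
    """Copy-number estimate: cluster blink events / mean events of a single calibrant."""
    calib = float(np.mean(events_per_single_mEOS2))
    return np.asarray(events_per_cluster, dtype=float) / calib

# Hypothetical numbers for illustration only:
cluster_events = [180, 95, 240]             # linked emission events per knob cluster
single_calibration = [2.8, 3.4, 3.1, 2.9]   # events per surface-bound mEOS2
print(molecules_per_cluster(cluster_events, single_calibration))
```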
Previous studies have shown that the blinking parameters of poly-lysine-bound mEOS2 molecules are comparable to those seen for mEOS2 fusion proteins in cells, likely because the barrel structure of the fluorescent protein protects the chromophore from the environment (Durisic et al., 2014; Fricke et al., 2015; Krüger et al., 2017; Lee et al., 2012).

| dSTORM imaging (Heilemann et al., 2008)

| Statistical analysis
Data are presented as follows: FWHM (nm), mean ± SD; cluster density (µm⁻²), mean ± SD; nearest-neighbor distance (nm), mean ± SD; number of KAHRP molecules per knob, mean ± SD; and cross-correlation coefficient, mean ± SEM. N indicates the number of determinations and n indicates the number of cells investigated, from at least three different donors. See the main text and/or the figure legends for further information. Statistical analyses were performed using the SigmaPlot (v.13, Systat) software. Statistical significance was determined using the two-tailed t test or the Holm-Sidak one-way ANOVA test, where indicated.

ACKNOWLEDGMENTS
Heidelberg University Hospital. We thank Sebastian Wurzbacher for performing scanning electron microscopy. We thank Marina Müller and Atdhe Kernaja for their excellent technical assistance. We are very grateful to Hans-Peter Beck for kindly providing us with rabbit sera against PF3D7_0532400 and MAHRP1, Denise Mattei for the guinea pig anti-PfEMP1-ATS antiserum, and Catherine Braun-Breton for the rat anti-SBP1 antiserum. Open Access funding enabled and organized by Projekt DEAL.

CONFLICT OF INTEREST
The authors declare that they have no conflict of interest regarding the publication of this research.

DATA AVAILABILITY STATEMENT
All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supporting Information.
Developing Undergraduates ’ Awareness of Metacognitive Knowledge in Writing Through Problem-based Learning Metacognitive awareness can improve students’ writing proficiency. Engaging and supporting students in the writing process can increase their metacognitive awareness. This study investigates the effects of a problem-based learning approach on the awareness of metacognitive knowledge of Nigerian undergraduates in writing. An intact class of second-year students in an English composition course participated in the study. The study was conducted over a period of 12 weeks. Quantitative and qualitative methods were used in data collection. A metacognitive questionnaire was administered before and after the PBL treatment. Semi-structured interview was also carried out at the end of the treatment. The results showed significant effects of the PBL approach on the participants’ awareness of metacognitive knowledge of task requirements, personal learning process, strategy use, text and accuracy, problem solving and discourse features. The findings from the interview revealed that the nature of the ill-structured problem, which is related to their real life, and the interactions during the PBL process increased the participants’ awareness of metacognitive knowledge. The findings further showed that PBL approach could be adopted by ESL instructors and teachers to increase students’ awareness of metacognitive knowledge which in turn can enhance their writing proficiency. Introduction Acquiring writing proficiency has been a difficult task for undergraduate students especially in a second (L2) or foreign language (FL) context (Barkaoui, 2007).Over the years, researchers have been analysing students' writing processes and strategies in order to provide solutions to the students' writing problems ( Bitchener & Basturkmen, 2006;Crossley, Kyle & McNamara, 2016;Paltridge, 2004;Raoofi, Chan, Mukundan & Rashid, 2014).Various factors that influence students' writing skills have been identified by different scholars (Mu, 2005;Xiao, 2007).One of the contributing factors is awareness of metacognitive knowledge.In a writing process, awareness of metacognitive knowledge allows writers to be aware of the attributes, structures and demands of the different genres (Harris, Santangelo & Graham, 2010).It also allows writers to be aware of how to regulate their cognitive process in writing, their knowledge of writing process and the demands of different writing genres (Wong, 1999) through conscious use of strategies, namely planning, monitoring and evaluating. Scholars suggest that metacognitive awareness is the main factor that separates high-level writers from low-level ones (Tsai, 2009;Wei, Shang & Briody, 2012).Many studies have been carried out to investigate the relationship between awareness of metacognitive knowledge and the writing proficiency of students and discovered positive relationships between the two (Kasper, 1997;Yanyan, 2010).Yanyan (2010) investigated the relationship between metacognitive knowledge and writing proficiency of Chinese freshmen.She found that students with a higher metacognitive knowledge base performed better in their writing than those with lower metacognitive knowledge.Similarly, Wei et al. 
Similarly, Wei et al. (2012) found that high-level writers employ metacognitive skills in their writing process, especially during planning and reviewing, more effectively than low-level writers: the former generate complete ideas and are more concerned with the needs of the audience and the demands of specific genres. Therefore, to develop students' writing ability, these scholars emphasise the need to develop students' awareness of metacognitive knowledge. Students need to be aware of their writing purposes and processes and to learn to actively set and regulate their own cognitive goals associated with writing in order to become good writers (Kasper, 1997). Researchers also highlight the need to adopt instructional approaches that develop students' metacognition to enable them to become successful writers (Xinghua, 2010). Graham and Harris (2009) suggest that instructors should adopt approaches that engage students in the writing process and allow them to work together to learn strategies for planning, revising and editing their writing.

The term metacognitive knowledge describes the knowledge a learner has about him/herself, the learning task or the learning process. Wenden (1998) classified metacognitive knowledge into three different but related types of knowledge: person knowledge (general knowledge that learners have acquired about themselves as learners, which may facilitate or hinder their learning, such as age, language aptitude and motivation); task knowledge (knowledge about the purpose of a task, including knowledge about the nature of a particular task and information about a task's demands, such as the knowledge and skills needed to complete it); and strategy knowledge (strategies to employ in order to manage, direct and regulate learning).

In relation to writing, Kim (2013) subcategorises metacognitive knowledge into six components: metacognitive knowledge of task, personal learning process, strategy, text and accuracy, problem solving, and discourse features. Metacognitive knowledge of task is the awareness of various aspects relevant to a writing task, such as the purpose of the writing task and the characteristics of the genre of writing. She describes metacognitive knowledge of personal learning process as the awareness of various aspects of learning to write in English, such as individual ways to improve L2 writing proficiency by oneself or through instruction. Metacognitive knowledge of strategy she describes in general as the awareness of effective strategies in L2 writing, for example, strategies to compensate for a lack of vocabulary knowledge. Metacognitive knowledge of text and accuracy involves the awareness of the use of discourse markers and accurate textual features in writing. Metacognitive knowledge of problem solving is the awareness of means of problem solving when confronted with difficulty in writing, for instance, sentence formation and the management of time limitations. Finally, metacognitive knowledge of discourse features is the awareness of the characteristics of discourse in English and the L1 in writing and speaking. Based on the suggestions of Graham and Harris (2009), the present study adopts a problem-based learning approach in order to improve students' awareness of metacognitive knowledge in writing.
Problem-based learning (PBL) is a student-centred approach (Wilkerson & Gijselaers, 1996) in which students assume the major responsibility for their learning by deciding and discovering for themselves what they will learn and how they will learn it. In the PBL approach, a problem related to the students' real lives is given as a trigger for the students' inquiry. This leads the students to discover the relevant knowledge and skills required to solve or understand the problem. Working in groups to discuss the problem, the students also develop their collaborative and cooperative learning skills (Mardziah H. Abdullah & Tan, 2008).

Many studies have employed the PBL approach to develop students' metacognitive awareness in science-related fields. For example, Downing, Kwong, Chan, Lam, and Downing (2009) investigated the effect of the PBL approach on the metacognitive skills of students in China and found that it was effective in developing the students' metacognition. Similarly, Tosun and Senocak (2013) demonstrated the effectiveness of the approach in increasing the metacognitive awareness of chemistry students. Numerous other studies have found the PBL approach effective in developing students' learning skills, such as critical thinking skills (Yuan et al., 2008), problem-solving skills (Bigelow, 2004), language skills (Norzaini Azman & Shin, 2012) and motivation to learn (Barrows, 2002; Tasoglú & Bakaç, 2010).

Although studies have shown the importance of engaging and supporting students in developing their metacognition in the writing process, many of the instructional methods adopted in Nigerian classrooms do not engage or support students in this way (Muodumogu & Unwaha, 2013). Researchers have identified various problems in the writing of Nigerian undergraduate students, most of which are attributed to the students' lack of awareness of metacognitive knowledge. For example, many students are not aware of the skills required to achieve their writing goals, such as grammar and rhetoric (Bodunde & Sotiloye, 2013). Despite the importance of metacognitive knowledge in the development of writing proficiency, there is a lack of research aiming to develop students' awareness of metacognitive knowledge in the Nigerian context through a problem-based learning approach.

Therefore, the objective of this study is to investigate the effect of a problem-based learning (PBL) approach in developing students' awareness of metacognitive knowledge in writing. The following research questions were formulated to guide the study: a) What are the effects of PBL on undergraduates' awareness of metacognitive knowledge? b) How does the PBL approach improve undergraduates' awareness of metacognitive knowledge?
Design of the Study
The study employed a quasi-experimental research design. The independent variable was the PBL treatment, which was incorporated into the participants' writing process. The dependent variable was the participants' awareness of metacognitive knowledge in writing, which was measured twice: before the PBL treatment and at the end of the treatment. The PBL treatment was given to the participants in two cycles. In each cycle, the participants were given an ill-structured problem on which to work collaboratively and for which to propose viable solutions within three weeks. With tutor facilitation, the participants generated possible solutions, brainstormed, and identified available information related to the problem. They also identified learning issues, namely things about which they needed to find more information. Thereafter, they divided the learning issues among themselves and identified resources to look up or consult. They gathered the information through self-directed learning and finally proposed viable solutions. A debriefing session was conducted by the tutor at the end of each cycle to discuss writing- and PBL-related issues with the participants.

Participants of the Study
This study was conducted at a college in North-Eastern Nigeria. An intact class of 18 second-year undergraduates taking a compulsory English Composition course for one semester participated in the study. Before enrolling in the course, the participants had studied an Introduction to Composition course in which they acquired the basic knowledge of writing in English. The participants were of mixed gender and their ages ranged from 24 to 38 years. They had no experience of collaborative learning, as it was not practised in the institution. The participants shared the same first language and culture. They were assigned to three groups to carry out the PBL activities.

Instruments
Two instruments were used in data collection: a metacognitive questionnaire and a semi-structured interview. The metacognitive questionnaire was adapted from Kim (2013). It comprised 29 closed-ended items designed on a 5-point Likert scale. The questionnaire elicits information regarding the participants' metacognitive knowledge of task requirements, personal learning process, strategy use, text and accuracy, problem solving, and discourse features (see Appendix). To test the reliability of the questionnaire, a pilot study was conducted prior to the actual study. The Cronbach's alpha coefficients for the pilot and actual studies were 0.96 and 0.97, respectively.

To triangulate the data collected from the questionnaire, a semi-structured interview was conducted with all the participants at the end of the PBL treatment. The interview enabled the participants to express and share their experiences of PBL in relation to their awareness of metacognitive knowledge in writing. The interviews were audio-recorded, transcribed and categorised based on emerging themes.

Results and Discussion
To analyse the quantitative data obtained from the pre- and post-treatment questionnaires, SPSS Version 22.0 was used to calculate descriptive statistics. To address the first research question, the means of the pre- and post-treatment scores were compared using the Wilcoxon signed-rank test, because the data were not normally distributed due to the small sample size (Mayers, 2013).
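As a rough illustration of this analysis outside SPSS, the sketch below runs a paired Wilcoxon signed-rank test and computes Cronbach's alpha in Python; the scores shown are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Hypothetical pre/post scores for one subscale (n = 18 participants).
pre = rng.integers(20, 36, 18).astype(float)
post = pre + rng.integers(2, 10, 18)      # simulated post-treatment gain

stat, p = wilcoxon(pre, post)             # paired, non-parametric comparison
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```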
The descriptive analysis of the metacognitive questionnaire showed that the participants' awareness of metacognitive knowledge increased after they went through the PBL approach: the mean scores of all the aspects of metacognitive knowledge tested in the questionnaire increased, as shown in Table 1. The mean score for metacognitive knowledge of task requirements in the post-treatment (M = 40.16) is higher than that of the pre-treatment (M = 31.83). The mean score for personal learning process is also higher in the post-treatment (M = 22.16) than in the pre-treatment (M = 17.33). Metacognitive knowledge of strategy use has a low mean score in the pre-treatment (M = 7.33) but a higher one in the post-treatment (M = 10.11). The mean score for metacognitive knowledge of text and accuracy is 11.88 in the pre-treatment and rises to 17.00 in the post-treatment. For metacognitive knowledge of problem solving, the mean score is lower in the pre-treatment (M = 14.38) and higher in the post-treatment (M = 17.44). Finally, the mean score for discourse features is 15.33 in the pre-treatment and rises to 17.66 in the post-treatment.

To answer the first research question, concerning the effect of the PBL approach on the participants' awareness of metacognitive knowledge, a Wilcoxon signed-rank test was run. The results showed significant differences in all the components of metacognitive knowledge before and after the PBL sessions: task requirements (z = -3.73, p = .000), personal learning process (z = -3.73, p = .000), strategy use (z = -3.55, p = .000), text and accuracy (z = -3.59, p = .000), problem solving (z = -3.63, p = .000), and discourse features (z = -3.42, p = .001). Table 1 summarises these results. The analysis shows that PBL significantly increased the participants' awareness of metacognitive knowledge in writing, suggesting that the PBL approach prompts participants to reflect and think about what they already know and motivates them to write. The findings are similar to those of Downing et al.'s (2009) study, which showed the effectiveness of the PBL approach in improving students' metacognition. Furthermore, they confirm the findings of Yanyan (2010), which showed positive correlations between metacognitive awareness and writing proficiency.

To find out how the PBL approach improved the participants' awareness of metacognitive knowledge, and thus to answer the second research question, a semi-structured interview was conducted with all the participants at the end of the PBL treatment. From the interview responses, three themes were identified. The themes revealed that the group interactions during the PBL process encouraged the participants to retrieve task knowledge, and that the approach allowed the participants to become aware of their knowledge of their personal learning process. Finally, PBL gave the participants a new perspective on the writing stages.
The PBL Approach Encourages Thinking and Retrieval of Task Knowledge
The interview findings revealed that the participants were used to writing alone. None of them had experience of the PBL approach or of any writing class in which students write collaboratively with the help of a teacher. Whenever a writing activity was given, the majority of them relied on their own ability. As a result, they faced problems while writing alone, including limited knowledge required for the completion of the writing task, such as ideas relevant to the topic, grammar and appropriate vocabulary. Having gone through the PBL approach, more than three-quarters of the participants reported that the approach encouraged them to think and retrieve knowledge, which allowed them to develop and present their ideas clearly and logically.

A number of things helped the participants to acquire knowledge relevant to their writing activities, such as the nature of the ill-structured problem and the support they received from their tutor and peers during the process. Because all the participants had first-hand experience of the ill-structured problem, it was easier for them to contribute and generate ideas to enrich the content of the writing. One of the participants, Yunus, explained that the ill-structured problems given during the PBL activities motivated him to think and generate many ideas to write about, because the problems were related to their real-life situations. In addition, the complexity of the ill-structured problems and the extensive reading and self-directed learning involved in the PBL process allowed the participants to view the problems from various perspectives. This prompted their thinking and allowed them to expand on one another's ideas.

Ummi explained that the extensive reading and self-directed learning involved in the PBL approach helped her to acquire new knowledge relevant to her writing. Yunus also felt that after the PBL activities he could generate more ideas than he used to, because he learned to improve the content of his writing through the interactions he had during the PBL sessions. He also learned to generate more ideas from the group discussions and to consult other sources to gain new knowledge or ideas that would improve his writing:

As for the content, I think now I can write better because I would think of more ideas and I have learned to refer to others materials while I have to write about something not just what I already know alone. So it has improved the content of my writing.

Amina also believed that she learned a number of vocabulary items from her group members. She gave the example of the word "dexterous", whose meaning she did not know before but learned during their group discussion:

My vocabulary has increased. I heard a word from a group member, "dexterous" I didn't know the word before, but I heard it from a member in the group. You know you can't just take a dictionary and keep on reading the words inside it. But when you hear a word then you try to find out the meaning.

Amina also learned to think and generate new ideas relevant to her writing topic as a result of the support given by her group members. Whenever she was stuck in the process, her group members helped, either by suggesting new ideas or by explaining ideas mentioned previously. The interactions allowed Amina to consider various perspectives while writing on a topic. This helped her to improve her writing, unlike the occasions when she was writing alone.
When you generate ideas yours alone, you may get stuck in an idea, so you can ask a member to help you by explaining more. If that person explains, you can put it down in your own words. So I think this PBL approach is an effective way of improving writing.

Another factor that helped the participants to share their ideas freely and acquire new knowledge from one another was the support given by the tutor during the process. Habib explained that their tutor encouraged them to speak their minds during the PBL sessions, reminding them that there is no such thing as a wrong or right answer in PBL. Habib further explained that because he was not used to speaking English in public, he sometimes felt nervous during the sessions; however, the tutor's encouragement helped him to express his ideas and improve his speaking skills.

This theme shows that PBL allowed the participants to retrieve the knowledge required to achieve their writing goals, such as knowledge of the writing topic, grammatical knowledge and vocabulary. This knowledge encompasses Kim's (2013) metacognitive knowledge of task, discourse features, and discourse markers and accurate textual features. The findings are in line with those of Downing et al.'s (2009) study, which showed the effectiveness of the PBL approach in helping students to identify and select important information in the learning situation.

PBL Increases Awareness of Personal Knowledge
Kim (2013) describes metacognitive knowledge of personal learning process as the awareness of various aspects of learning to write. The interview findings revealed that PBL helped the participants to identify factors that may positively influence their writing, such as their motivation and attitude. The PBL approach increased the participants' motivation and changed their attitudes towards writing in English; it also stimulated their interest and increased their confidence in writing. These outcomes resulted from several factors. For example, the participants' different areas of expertise allowed them to help one another in the process. In addition, the ill-structured problems were related to the participants' real-life situations, so they could easily generate many ideas. Another reason is that the participants worked in groups in which they were all familiar with one another. A further factor that encouraged and motivated the participants is the fact that there is no right or wrong answer in the PBL process; they could therefore easily share their views and support one another without anxiety. All of this helped the participants to overcome their writing difficulties and change their attitudes towards writing. For example, Khadija explained that the PBL approach motivated her and changed her attitude towards writing in English. Before she participated in the study, she did not like writing at all, partly because she used to write in English only for academic purposes such as assignments, tests or examinations. However, after going through the PBL process, her motivation and positive attitude towards writing in English increased, because during the PBL sessions she was supported by her group members, which made her realise how interesting writing is. As a result, she even started writing on her own in her leisure time, not only for academic purposes.
Before I don't like writing at all; I don't use to write anyhow. The only thing I used to write is when it comes to exams or test, they give us something to write and I write. But now I like it. My attitude towards writing has changed. Now I write my diary every day. At my leisure time I will just pick up my jotter form a topic and start writing on it so that I see if I can develop my writing skills.

Ishaq explained that the PBL approach increased his motivation towards writing in English. Before this, he had no interest in writing because he considered it a difficult activity, as he lacked ideas. However, his attitude changed after going through the PBL process: he began to develop an interest in writing in English and learned to generate more ideas to write about, because the ill-structured problem was interesting. He even concluded that generating ideas for writing was not as difficult as he had thought. He explained:

Really, it motivates me to write because before I don't like writing because it is difficult but now I began to have interest and see the simplicity of writing. Especially, the second ill-structured problems, the one talking about students engagement with the social network, it was more interesting and motivated me to generate more ideas to write.

These findings also concur with those of Downing et al.'s (2009) study, which showed the effectiveness of the PBL approach in developing students' confidence and motivation and in improving their attitudes towards, and interest in, their academic activities.

The PBL Approach Gives a New Perspective on the Writing Stages
The interview further revealed that the PBL approach gave the participants a new perspective on the writing stages: planning, drafting and editing. The participants explained that the PBL approach increased their awareness of the importance of the writing stages and of how to carry out each stage. This includes their metacognitive knowledge of strategy, problem solving, text and accuracy, and discourse features. For example, regarding planning, more than half of the participants explained that they had been taught that planning is one of the stages of writing; however, when writing alone they did not usually plan, brainstorm or outline ideas before writing, because they were not aware of the importance of planning in improving their writing quality. Nevertheless, having gone through the PBL approach, their awareness of the need to plan for successful writing increased. Abdul revealed that before he went through the PBL approach, he did not bother with brainstorming or organising his ideas. Through the PBL process, he learned how to outline his ideas and to make a rough draft before engaging in the actual writing. During the PBL sessions, the groups spent the two-hour session planning their essays, and everyone contributed different ideas.
Organisation is another challenging part of writing because it requires the participants to think of an appropriate sequence in which to present and support their ideas logically. More than half of the participants did not think about organising their ideas or paragraphs when writing alone; many of them did not know how, or why, they should organise their ideas. However, when they participated in the PBL activities, they learned that organisation is an important aspect of writing and learned to take their time to organise their ideas. As there were many group members, it was easier for them to suggest better ways to present ideas in the writing. Binta explained that she learned from her group members how to organise ideas clearly:

Because, they say two heads are better than one. If you want to put down your points and you don't know how to do that, you have someone with you and that person can assist you present it clearly.

In the actual writing process, the participants revealed that the PBL approach helped them to keep track of the writing progress from one aspect to another and to identify any problem hindering the writing process. Ishaq explained that during the PBL process his group was conscious of time: they used to remind themselves whenever they were about to exceed the time allocated for a session, or when they spent too much time unnecessarily on a particular aspect of the writing. The PBL process helped them to focus on the important things to do.

The editing process is another stage of writing that the participants found difficult and boring. However, the findings showed that the PBL approach gave the participants a new perspective on editing: it allowed them to become aware of the need to edit and revise their writing, as well as to check punctuation and spelling. When writing alone, more than half of the participants did not edit their work after drafting, and about one-quarter of the participants only read their essays and made minimal corrections. One of the reasons was lack of time; other participants explained that they did not see the need for editing because they thought that their writing was good. However, the interactions during the PBL sessions gave the participants a new perspective on editing, because during editing they could improve not only spelling and punctuation but also the content and organisation of their writing.

Ummi believed that the PBL approach helped her to improve her editing. She became more aware than before of the need to edit her work. Previously, she made only minor corrections of grammatical mistakes; through the PBL sessions, she learned to plan her work and go through the first draft in order to make appropriate changes and rewrite where necessary.

Similarly, Ishaq did not bother to check his spelling and punctuation after writing; he did not consider it important because he thought all his spelling and punctuation were correct. During the PBL sessions, he realised the importance of checking spelling and punctuation, because some mechanical errors can only be detected while proofreading:

You know, it also affects my spelling because I usually, before, I didn't mind to check my spellings after writing but with this approach I learned that spelling is of paramount importance as well.
The third theme encompasses metacognitive knowledge of strategy, problem solving, text and accuracy, and discourse features. It revealed how the participants learned to carry out their writing successfully by following the writing stages. In general, the findings revealed that the PBL approach increased the participants' awareness of metacognitive knowledge in writing due to the nature of the ill-structured problem, tutor support and peer collaboration. The findings are in line with the propositions of Graham and Harris (2009) and Xiao (2007), which emphasise the need for peer collaboration and teacher support throughout the planning, drafting and editing processes in order to develop students' metacognition in writing. Furthermore, the findings of this study concur with Ruan (2005), who showed that students' metacognitive knowledge can be developed through classroom instruction.

Conclusion
The findings of the study revealed that PBL can be used to increase students' awareness of metacognitive knowledge in writing. When metacognitive awareness increases, writing performance will also improve. This supports the view that engaging students in a writing process in which they can be helped by both teachers and peers improves the students' metacognitive awareness (Xiao, 2007).

There are some limitations to this study. The study was limited to a small number of participants and was conducted over a short period. To address these limitations, further studies can be conducted with larger numbers of students, particularly in public universities in Nigeria, and over a longer period. Despite the limitations, various benefits can be derived from the findings. The study contributes to the field of ESL writing by providing empirical evidence supporting the effectiveness of the PBL approach in developing students' awareness of metacognitive knowledge. Language instructors and teachers can adopt the approach in their writing classrooms to develop students' metacognition and writing performance. They should provide learners with opportunities to engage actively with peers in the writing process. Eventually, students will change their attitudes towards ESL writing, realise their potential as they take charge of their learning, and appreciate learning through self-discovery and the use of real-life problems.

Table 1. Wilcoxon signed-rank test for pre- and post-treatment metacognitive scores (n = 18)
Toxicity of clothianidin to common Eastern North American fireflies

Background: Previous research suggests that fireflies (Coleoptera: Lampyridae) are susceptible to commonly used insecticides. In the United States, there has been a rapid and widespread adoption of neonicotinoid insecticides, predominantly used as seed coatings on large-acreage crops like corn, soy, and cotton. Neonicotinoid insecticides are persistent in soil yet mobile in water, so they have the potential to contaminate firefly habitats both in and adjacent to application sites. As a result, fireflies may be at high risk of exposure to neonicotinoids, possibly jeopardizing this already at-risk group of charismatic insects.

Methods: To assess the sensitivity of fireflies to neonicotinoids, we exposed larvae of the Photuris versicolor complex and Photinus pyralis to multiple levels of clothianidin-treated soil and monitored feeding behavior, protective soil chamber formation, intoxication, and mortality.

Results: Pt. versicolor and Pn. pyralis larvae exhibited long-term intoxication and mortality at concentrations above 1,000 ng g⁻¹ soil (1 ppm). Under sub-lethal clothianidin exposure, firefly larvae fed less and spent less time in protective soil chambers, two behavioral changes that could decrease larval survival in the wild.

Discussion: Both firefly species demonstrated sub-lethal responses in the lab to clothianidin exposure at field-realistic concentrations, although Pt. versicolor and Pn. pyralis appeared to tolerate higher clothianidin exposure relative to other soil invertebrates and beetle species. While these two firefly species, which are relatively widespread in North America, appear somewhat tolerant of neonicotinoid exposure in a laboratory setting, further work is needed to extend this conclusion to wild populations, especially in rare or declining taxa.

INTRODUCTION
In the United States alone, insects are estimated to provide over $50 billion in ecological services (Losey & Vaughan, 2006). Fireflies have great popular appeal and aesthetic and cultural value, but they also contribute to the biological control of some pest species, including slugs and snails, which can be important agricultural pests (Godan, 1983; Lewis, 2016). Human activities, however, have put these services at risk by triggering global insect declines (Sánchez-Bayo & Wyckhuys, 2019). Some charismatic groups such as fireflies (Coleoptera: Lampyridae) may be at elevated risk of at least localized extinction due to ongoing human activities such as heavy pesticide use in and around their habitats. Despite broad agreement that pesticides can pose a serious extinction threat to fireflies, there is a very poor understanding of the direct toxicity of insecticides to fireflies. The most commonly applied classes of insecticides (neonicotinoids, pyrethroids, and organophosphates) are broadly neurotoxic to most insect taxa (Sparks, 2013), so fireflies are unlikely to be an exception. Indeed, full-strength organophosphate and neonicotinoid formulations are toxic to larvae of the aquatic fireflies Luciola cruciata and Luciola lateralis, respectively (Tabaru et al., 1970; Lee et al., 2008). Unfortunately, there have been no studies assessing how terrestrial firefly larvae respond to residual concentrations of these insecticides in soil, a likely route of exposure. Larvae of many common firefly species in North America are soil-dwellers that intimately interact with soil as they forage for prey and form protective molting chambers (Buschman, 1984; Lewis, 2016).
These larvae inhabit forested, suburban, and agricultural soils, where neonicotinoid insecticides are often applied directly, or via coatings on crop seeds, to protect against pests (Knoepp et al., 2012; Douglas & Tooker, 2015; Simon-Delso et al., 2015). In these habitats, neonicotinoid concentrations in soil can range from less than 5 ng g⁻¹ to over 4,000 ng g⁻¹ (Knoepp et al., 2012; Schaafsma et al., 2015; Pearsons et al., 2021), concentrations that could plausibly influence the behavior and survival of firefly larvae (Lee et al., 2008). Some indirect evidence suggests that firefly larvae are susceptible to neonicotinoids, because adult lampyrid densities have been found to be lower where neonicotinoid-coated seeds were planted (Disque et al., 2019); however, to our knowledge, there have been no direct evaluations of how terrestrial firefly larvae respond to neonicotinoid-treated soil. To assess the direct sensitivity of fireflies to neonicotinoid insecticides, we measured feeding behavior, development, and survival of larvae of two common North American firefly species, the Photuris versicolor species complex and Photinus pyralis (Linnaeus 1767), exposed to clothianidin-treated soil. We focused on clothianidin, one of the most widely used seed- and soil-applied neonicotinoids and the primary metabolite of another commonly applied neonicotinoid, thiamethoxam (Douglas & Tooker, 2015). Generally applied to combat sucking and chewing insects, clothianidin disrupts insect central nervous systems, leading to paralysis and death (Simon-Delso et al., 2015). We exposed larvae to multiple field-realistic levels of clothianidin-treated soil for 30 to 100 days with the expectation that they would be sensitive to clothianidin at concentrations that have been detected in firefly habitats.

Firefly collection and colony care
We ran toxicity assays on three separate cohorts of fireflies (Lewis, 2016). Both species spend 1-2 years in the soil as larvae and feed on soil invertebrates: Pt. versicolor are thought to feed on a diversity of soil invertebrates, whereas Pn. pyralis larvae are considered specialists on earthworms (McLean, Buck & Hanson, 1972; Buschman, 1984; Lewis, 2016). After collection, we housed individual larvae in 16-oz clear plastic deli containers (11.5-cm diameter × 8-cm tall) lined with moist filter paper. Every 1-2 weeks, we provided each larva with one piece of cat food (Grain-Free Real Chicken Recipe Dry Cat Food, Whole Earth Farm, Merrick Pet Care Inc., Amarillo, TX, USA) that had been softened in DI water for 1 h (McLean, Buck & Hanson, 1972). After 24 h, we removed the cat food and replaced the filter paper. Occasionally there was extensive fungal growth on the cat food, which could be fatal to Pt. versicolor larvae; in these instances, we gently wiped larvae with DI water and a delicate task wipe and then transferred them to clean containers.

Early-instar Pt. versicolor and Pn. pyralis cohorts were reared from eggs laid in July 2020. On the evening of 10 July 2020, we collected 3 male and 2 female Pt. versicolor adults and 3 mated Pt. versicolor females. Flying Pn. pyralis males were collected and identified based on their characteristic "J" flash pattern (Lewis, 2016), while female Pn. pyralis were collected from nearby patches of short grass and were identified based on their flash pattern and their morphological similarity to the Pn. pyralis males (Lewis, 2016). Female Pt. versicolor were collected near Pn. pyralis females and were identified based on their green-shifted flash color and morphology (Lewis, 2016).
Additional Pn. pyralis males were collected to provision the mated Pt. versicolor females. We collected Pt. versicolor and Pn. pyralis in a residential area (State College, Centre Co., PA; 40°47′03″N, 77°52′25″W) into two separate 16-oz deli-container "nurseries" kept at ambient room temperature (20-22 °C); each nursery contained a handful of moist sphagnum moss on top of moist soil (2 in deep; silt loam, collected from certified organic fields at the Russell E. Larson Agricultural Research Center at Rock Springs, PA, USA; 40°42′52″N, 77°56′46″W). Both Pn. pyralis females mated within a few minutes of collection. Female Pt. versicolor and Pn. pyralis laid eggs within the following 3 days (50+ Pt. versicolor eggs and 100+ Pn. pyralis eggs; we did not attempt more accurate counts to avoid damaging the eggs). Under ambient temperatures (20-22 °C), first-instar larvae of both species began to emerge three weeks after the eggs were laid (5 August 2020). We kept Pt. versicolor larvae in the nursery chambers for two weeks and then, after we observed significant cannibalism among larvae, moved them into individual soil-lined 1-oz polypropylene portion containers. As with larvae collected and reared from 2019, developing Pt. versicolor were fed moistened cat food (Grain-Free Real Chicken Recipe Dry Cat Food, Whole Earth Farm, Merrick Pet Care Inc., Amarillo, TX, USA) in addition to pieces of freeze-killed Lumbricus terrestris (Josh's Frogs, Owosso, MI, USA). Consistent with the hypothesis that Pn. pyralis larvae are earthworm specialists, Pn. pyralis larvae did not feed on cat food but did feed gregariously on freeze-killed L. terrestris. Unlike Pt. versicolor, Pn. pyralis failed to thrive in isolation, so they were kept in the nursery chamber until the start of the toxicity assay.

Toxicity assay on late-instar Photuris versicolor
We started the toxicity assay with late-instar Photuris versicolor on 22 June 2020. We used 1-oz polypropylene portion containers holding 8 g of dry soil (same soil source as the nursery chambers) as our assay containers. To the soil in each assay container, we added 0.5 mL of the appropriate clothianidin stock solution, allowed the acetone to completely evaporate, and then added 2 mL of DI water to moisten the soil and achieve clothianidin concentrations of 0, 10¹, 10², 10³, or 10⁴ ng g⁻¹ soil. We chose this concentration range (10¹-10⁴ ng g⁻¹ soil) to encompass the range of neonicotinoid concentrations in soil that have been measured in potential firefly habitats (Knoepp et al., 2012; Schaafsma et al., 2015; Pearsons et al., 2021).
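As a quick arithmetic check of this dosing scheme, the required stock strength follows directly from target concentration × soil mass / dose volume; the derived stock concentrations below are a sketch, not values reported in the text.

```python
# Back-of-envelope check of the soil dosing described above: a 0.5 mL acetone
# stock dosed onto 8 g of dry soil must contain target * soil_mass / dose_volume
# of clothianidin.
soil_g, dose_ml = 8.0, 0.5
for target_ng_per_g in (10, 100, 1_000, 10_000):
    stock_ng_per_ml = target_ng_per_g * soil_g / dose_ml
    print(f"{target_ng_per_g:>6} ng/g soil -> stock {stock_ng_per_ml / 1000:g} µg/mL")
# -> 0.16, 1.6, 16, and 160 µg/mL, respectively
```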
During the toxicity assay, we fed larvae once a week by carefully transferring individuals out of the assay containers into clean containers lined with moistened filter paper and containing a piece of moistened cat food. After 24 h, we returned fireflies to the assay containers and noted if the cat food had obvious signs of feeding (Fig. 1B). Feeding activity for each week was measured as a simple binary (0 = no obvious signs of feeding, 1 = obvious signs of feeding). At each status check, we noted if a firefly had constructed a protective soil chamber, then carefully dismantled the chamber to check larval status. Larvae often re-built soil chambers by the next day; if a larva built soil chambers on multiple consecutive days (feeding days as an exception), we noted this behavior as a "period of chamber formation." Assay containers were kept in a dark drawer except when doing daily checks, and we misted containers with DI water as needed to keep the soil from drying out. Toxicity assay on early-instar Photuris versicolor The toxicity assay with early-instar Photuris versicolor was similar to the assay with late-instar larvae, except we added half the amount of soil (4 g) and half the volume of clothianidin stock solutions (0.25 mL) to each assay container to achieve the same clothianidin concentrations (0, 10¹, 10², 10³, and 10⁴ ng g⁻¹ soil). All early-instar Pt. versicolor were less than 3 months old and weighed between 3 and 15 mg. On 17 Sept 2020, we started trials with early-instar Pt. versicolor (three replicates at each concentration, 15 larvae in total), feeding them cat food once a week and recording their status at 1, 4, and 24 h, and every day for 10 d, then twice a week for an additional 90 d. Unlike for late-instar Pt. versicolor, we fed early-instars by directly placing moistened cat food in the assay containers (we removed the food 24 h later after noting if food had been damaged [1] or not [0]). Toxicity assay on early-instar Photinus pyralis As with the early-instar Pt. versicolor assay, the Photinus pyralis assay was run in 1-oz polypropylene portion containers containing 4 g of soil with 0.25 mL doses of clothianidin stock solutions (to achieve 0, 10¹, 10², 10³, and 10⁴ ng g⁻¹ soil). All early-instar Pn. pyralis were less than 3 months old and weighed between 0.6 and 2.4 mg. On 17 Sept 2020, we started the assay on early-instar Pn. pyralis, exposing larvae in sets of five (five larvae per container, three replicates at each concentration, 75 larvae in total), and recorded their status at 1, 4, and 24 h, and every day for 10 d, then at least twice a week for an additional 20 d. We terminated the Pn. pyralis assay earlier than the Pt. versicolor assays due to an acarid mite infestation, which rapidly increased larval mortality across all doses. During the assay, we fed Pn. pyralis pieces of earthworm (L. terrestris) in the same manner that early-instar Pt. versicolor were fed cat food. Statistical analysis We performed all statistical analyses in R (v4.0.4) (R Core Team, 2021). For each firefly cohort, we calculated median toxic concentrations (TC₅₀) and median lethal concentrations (LC₅₀) at 24 h, 7 d, and 30 d of exposure using probit analysis (LC_PROBIT from the "ecotox" package; Robertson et al., 2017; Hlina et al., 2019); for TC₅₀ estimates, we included both sub-lethal and lethal responses, while LC₅₀ estimates were based on mortality alone. To assess long-term survivorship across clothianidin levels, we used the Kaplan-Meier method ("survival" functions SURVDIFF and PAIRWISE_SURVDIFF; Therneau, 2021; Therneau & Grambsch, 2000). To determine how clothianidin exposure affected firefly behavior, we used non-parametric Mann-Whitney U tests (WILCOX.TEST) to compare feeding frequency and soil-chamber construction across clothianidin doses; we made pairwise comparisons using Wilcoxon rank sum tests with continuity corrections (PAIRWISE.WILCOX.TEST). As firefly larvae reduce feeding before pupation (McLean, Buck & Hanson, 1972), we excluded the two feeding events preceding pupation for feeding assessments.
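A minimal R sketch of the dose-response and survivorship workflow just described (an illustration under stated assumptions, not the authors' script; the data frames `assay` and `larvae` and their column names are hypothetical):

library(ecotox)    # LC_probit: probit dose-response estimation (Hlina et al., 2019)
library(survival)  # Surv, survdiff: Kaplan-Meier survivorship comparisons

# `assay`: one row per dose group at a checkpoint, with columns dose (ng/g),
# n (larvae exposed), dead, and affected (dead plus sub-lethal responses).
lc50 <- LC_probit((dead / n) ~ log10(dose), p = 50,
                  weights = n, data = subset(assay, dose > 0))
tc50 <- LC_probit((affected / n) ~ log10(dose), p = 50,
                  weights = n, data = subset(assay, dose > 0))

# `larvae`: one row per larva, with day of death or censoring (day),
# an event indicator (status), and its dose group.
survdiff(Surv(day, status) ~ factor(dose), data = larvae)

# Feeding frequency across doses: pairwise Wilcoxon rank-sum tests
# with continuity correction (Mann-Whitney U for two groups).
pairwise.wilcox.test(larvae$feed_freq, factor(larvae$dose))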
24 h, 7 d, and 30 d TC₅₀ and LC₅₀ estimates Dose-response curves and estimated TC₅₀ and LC₅₀ indicate that Photuris versicolor and Photinus pyralis were surprisingly tolerant of exposure to clothianidin (Table 2 and Figs. 2-4). Reliable TC₅₀ and LC₅₀ estimates were limited by our small sample sizes and low acute mortality within the tested concentration range. Overall, TC₅₀ values ranged from 500 ng g⁻¹ to 2,000 ng g⁻¹ while LC₅₀ values exceeded our test limit (above 10,000 ng g⁻¹). Firefly survival Clothianidin exposure significantly reduced long-term firefly survival at high concentrations (Fig. 5). Between one and four hours after initial exposure, half of the late-instar Pt. versicolor larvae and 87% of the early-instar Pn. pyralis larvae exposed to the highest clothianidin concentration (10,000 ng g⁻¹) began to exhibit toxic responses. By 24 h, all six late-instar Pt. versicolor exposed to the highest clothianidin concentration (10,000 ng g⁻¹) exhibited a toxic response (Fig. 2A); these larvae never recovered and died by day 84. Photuris larvae were somewhat tolerant to lower clothianidin concentrations (10 ng g⁻¹ or 100 ng g⁻¹), and neither late- nor early-instar larvae exposed to low concentrations had significantly lower 100 d survival probability compared to controls (Figs. 5A-5B). All Pt. versicolor in the control treatment either pupated (2 out of 6 late-instar larvae) or survived through day 100 (4 out of 6 late-instar larvae, all three early-instar larvae). (Caption for Figs. 2-4: Toxic responses after (A) 24 h, (B) 7 d, and (C) 30 d, and lethal responses after (D) 24 h, (E) 7 d, and (F) 30 d. Black dots in each panel represent mean responses at each insecticide concentration; the shaded area represents the 95% confidence interval for each curve. Blue diamonds represent the response of the control group. Dotted lines in each panel mark the 50% toxic response or mortality threshold.) Although the experiment was terminated at 30 d due to the mite infestation, early-instar Pn. pyralis exposed to 1,000 ng g⁻¹ clothianidin showed a marginal, non-significant reduction in survivorship (P = 0.07), while Pn. pyralis exposed to 10,000 ng g⁻¹ clothianidin showed significantly reduced survivorship (P < 0.0001) compared to controls (Fig. 5C).
Soil chambers, molting, and pupation of late-instar Photuris versicolor The 14 late-instar Photuris larvae that survived as larvae through day 100 went through 1 to 5 periods of consecutive days when they regularly formed protective soil chambers (median = 2 periods) and spent anywhere from 1 to 20 total days in soil chambers (median = 9 d). Larvae exposed to 10,000 ng g⁻¹ clothianidin never constructed soil chambers, while larvae exposed to 1,000 ng g⁻¹ clothianidin spent significantly fewer days in soil chambers than larvae exposed to 10 ng g⁻¹ (P = 0.01; Fig. 7). Formation of protective soil chambers did not correspond with molting or pupation, and all recorded molting and pupation events occurred outside soil chambers, on the soil surface. Late-instar Pt. versicolor larvae only molted once or twice, irrespective of how frequently or for how long they built soil chambers (larvae that survived through 100 days; frequency: R²adj = −0.09, F₁,₁₀ = 0.10, P = 0.76; duration: R²adj = −0.02, F₁,₁₀ = 0.81, P = 0.39). Six of the thirty late-instar Pt. versicolor larvae pupated, five of which successfully eclosed within 35 d of starting the assay (two controls, one at 10 ng g⁻¹, two at 100 ng g⁻¹) and one of which was unsuccessful (1,000 ng g⁻¹). The unsuccessful larva failed to shed its last-instar exoskeleton and died during the pupal stage. At 35 d, three of the larvae exposed to the highest clothianidin concentration (10,000 ng g⁻¹) were still alive, but none of these larvae ever entered a pupal stage. Of the individuals that successfully eclosed, three were lab-reared from eggs laid in 2019 (3 out of 5) while only two were wild-collected (2 out of 25). DISCUSSION Photuris versicolor complex and Photinus pyralis larvae did not significantly respond to clothianidin concentrations at or below 100 ng g⁻¹ soil, but both firefly species exhibited significant toxic responses to higher concentrations. Although some of the larvae exposed to 10,000 ng clothianidin g⁻¹ soil showed a toxic response within four hours of exposure, compared to other soil invertebrates, larvae of these two firefly species were relatively tolerant to clothianidin-treated soil (Cloyd et al., 2009). The one other study which tested neonicotinoid toxicity to fireflies observed 13% survival of aquatic Luciola lateralis larvae after 24 h of exposure to 10⁵ ng thiamethoxam mL⁻¹ in water (Lee et al., 2008); these results suggest that fireflies as a group may be somewhat tolerant to neonicotinoid exposure, although this is likely a tenuous conclusion because it is based on just two studies that represent less than 0.2% of all described firefly species. Tolerance to neonicotinoids may partly explain why populations of Pt. versicolor and Pn. pyralis do not appear to be declining as fast as rarer firefly species, which may be more sensitive to neonicotinoid exposure. Pt. versicolor and Pn. pyralis may tolerate clothianidin exposure due to multiple behavioral, morphological, and biochemical processes that could limit their sensitivity to clothianidin (Alyokhin et al., 2008). Behavioral avoidance of neonicotinoids has been observed across insect orders and beetle families (Easton & Goulson, 2013; Fernandes et al., 2016; Korenko et al., 2019; Pisa et al., 2021), and the results of the current study provide some support for behavioral avoidance of neonicotinoids by Lampyridae.
Although firefly larvae could not avoid dermal exposure to the treated soil in our arenas, they may have decreased oral exposure by limiting construction of their soil chambers. To form soil chambers, Pt. versicolor larvae manipulate soil with their mouthparts (Buschman, 1984), providing a potentially more toxic pathway for neonicotinoid exposure (Decourtye & Devillers, 2010). Because neonicotinoids are repellent to other beetle species (Easton & Goulson, 2013), neonicotinoid-treated soil could have repulsed firefly larvae, possibly explaining reduced chamber formation above 1,000 ng clothianidin g⁻¹ soil. Alternatively, sub-lethal neonicotinoid exposure may simply decrease the ability of fireflies to construct soil chambers. Choice-based avoidance studies could be used to test whether avoidance or direct toxicity drove the decreased time late-instar Pt. versicolor spent constructing and inhabiting soil chambers at high clothianidin concentrations. In addition to behavioral avoidance, specific morphological and metabolic characteristics of fireflies may protect Pt. versicolor and Pn. pyralis larvae from toxic clothianidin exposure. Unlike many other soil invertebrates (e.g., earthworms and mollusks), firefly larvae have a comparably protective cuticle that may act as an effective barrier against neonicotinoid uptake (Decourtye & Devillers, 2010; Wang et al., 2012). Even when clothianidin is absorbed, insects can resist target-site exposure by quickly detoxifying and/or excreting neonicotinoids (Olson, Dively & Nelson, 2000; Alyokhin et al., 2008). Although there has been no work on neonicotinoid metabolism by fireflies, Pt. versicolor and Pn. pyralis may upregulate detoxification enzymes after clothianidin exposure, similar to an aquatic firefly species after exposure to benzo[a]pyrene (Zhang et al., 2021). Additionally, Pt. versicolor and Pn. pyralis may be tolerant to clothianidin if neonicotinoids have a low binding affinity to target sites on firefly neurons. Neonicotinoids primarily target nicotinic acetylcholine receptors (nAChRs), which regulate cation movement and neuron firing in response to acetylcholine levels (Matsuda, Ihara & Sattelle, 2020). Neonicotinoid insecticides agonistically bind to these receptors, forcing ion channels open and leading to spasms and eventual paralysis (Simon-Delso et al., 2015). As neonicotinoids have broad activity across insect orders (Matsuda, Ihara & Sattelle, 2020), it is unlikely that clothianidin has a low binding affinity for nAChRs of Pt. versicolor and Pn. pyralis. There is also the unlikely possibility that extensive neonicotinoid use has exerted selection pressure on the firefly populations in central Pennsylvania to evolve resistance to clothianidin. The way neonicotinoids are currently used is a perfect storm for developing insecticide resistance (Tooker, Douglas & Krupke, 2017), and while most concern has focused on resistance development in herbivorous pest species, biocontrol agents and other predatory arthropods can develop insecticide tolerance and resistance in response to heavy insecticide use (Bielza, 2016; Mota-Sanchez & Wise, 2021). Although insecticide resistance is thought to be rare among biocontrol agents, lady beetles (Coleoptera: Coccinellidae), in particular, have been found to develop resistance to a variety of broad-spectrum insecticides, including neonicotinoids (Tang et al., 2015).
Insecticide resistance has not been studied in many non-pest species (including lampyrids), but if the selection pressure is high enough, firefly populations could evolve increased tolerance or even resistance to neonicotinoid insecticides. Differences in any of these potential mechanisms are likely driving the differences in tolerance between the two firefly species, most notably the dramatically reduced feeding response of Pn. pyralis to clothianidin exposure. Although this difference could have been exacerbated by mite pressure and the smaller body size of early-instar Pn. pyralis, it is possible that Pn. pyralis has higher uptake, higher active-site affinity, or lower metabolism of clothianidin as compared to Pt. versicolor. Despite their relative tolerance to clothianidin exposure, field-realistic neonicotinoid concentrations may still pose a chronic threat to Pt. versicolor and Pn. pyralis. Although residual neonicotinoid concentrations in soil are often below 100 ng g⁻¹ (Schaafsma et al., 2016; Radolinski et al., 2019; Pearsons et al., 2021), concentrations can regularly exceed these levels after agricultural applications (as high as 594 ng g⁻¹ 23 days after planting neonicotinoid-coated seeds; Radolinski et al., 2019), after turf applications (3× higher than in agronomic settings; Armbrust & Peeler, 2002), and after soil drenches to manage hemlock wooly adelgid (over 4,000 ng AI g⁻¹ soil; Knoepp et al., 2012). Such high concentrations are well within the acutely toxic and chronically lethal range for Pt. versicolor and Pn. pyralis larvae (Table 2). Encounters with such high concentrations are likely to be even more lethal under field conditions, as firefly larvae that exhibited toxic responses in the laboratory would be vulnerable to predation and starvation, two risks that can increase mortality from insecticides (Kunkel, Held & Potter, 2001). Additionally, further work is needed to assess whether neonicotinoid exposure can exacerbate other stressors affecting firefly populations (e.g., light pollution) or whether neonicotinoids pose a significant risk to firefly eggs or adults. As observed with other predatory beetle species (Cycloneda sanguinea [Coccinellidae] and Chauliognathus flavipes [Cantharidae]; Fernandes et al., 2016), firefly larvae exhibited reduced feeding activity in response to high neonicotinoid exposure. Firefly larvae that feed less frequently may have lower eclosion success, and those that do eclose may have lower reproductive success. Additionally, the prey that fireflies encounter in neonicotinoid-contaminated environments likely provide an additional neonicotinoid exposure route. Photinus larvae primarily feed on earthworms, which have been found to contain neonicotinoid concentrations above 200 ng g⁻¹ when collected from soybean fields that were planted with neonicotinoid-coated seeds (Douglas, Rohr & Tooker, 2015) and 700 ng g⁻¹ when collected from treated cereal fields (Pelosi et al., 2021). Firefly larvae of other species are known to feed on slugs (Barker, 2004), which can also contain high doses of neonicotinoids (500 ng g⁻¹), leading to disrupted biological control provided by carabid beetles (Douglas, Rohr & Tooker, 2015).
Compounded with reduced prey availability in habitats where neonicotinoids are used (Ritchie et al., 2019; Tooker & Pearsons, 2021), decreased feeding activity and high risks of further neonicotinoid exposure through contaminated prey may explain why adult lampyrid densities are significantly lower where clothianidin has been used as a seed coating (Disque et al., 2019), even if acute mortality is low. Adult fireflies may also encounter neonicotinoid residues while resting on sprayed vegetation or during oviposition into soil (Pisa et al., 2021), although the risk of such exposure does not appear to have been explored. Despite low acute mortality, the sublethal effects of clothianidin were surprising, as some Pt. versicolor larvae survived in a severely intoxicated state (not feeding, not building protective soil chambers, only occasionally moving legs and/or mandibles) for over two months. A similar phenomenon has been observed in European wireworms (Agriotes spp. [Coleoptera: Elateridae]) after exposure to clothianidin, with individuals surviving and even recovering from a severely intoxicated state that can last months (Van Herk et al., 2007; Vernon et al., 2007). For pests like Agriotes spp., such sub-lethal effects of clothianidin exposure could still decrease crop damage but may exacerbate the risk of Agriotes spp. developing neonicotinoid resistance. For predators like Pt. versicolor, this long-term intoxication may limit their potential to provide biological control beyond what would be expected based on population declines. CONCLUSIONS As larvae of the two firefly species that we studied appear to be somewhat tolerant to clothianidin-treated soil, neonicotinoids alone may not be a significant direct factor in firefly declines in North America, at least for common species. Nevertheless, firefly populations around the world appear to be suffering from other stressors (e.g., habitat loss, reduced prey availability, light pollution), and ecological research has demonstrated that animal populations exposed to multiple stressors can suffer disproportionately more than they would from a single stressor (Relyea & Mills, 2001). Therefore, continued widespread contamination of larval firefly habitats with neonicotinoids may hold potential to exacerbate the influence of other stressors on declining firefly populations. We encourage researchers with access to other species of fireflies, particularly those with declining populations in areas where neonicotinoids are commonly used, to explore their toxicological responses to insecticides.
Multiple Myeloma with Biclonal Gammopathy Accompanied by Prostate Cancer We report a rare case of multiple myeloma with biclonal gammopathy (IgG kappa and IgA lambda type) in a 58-year-old man with prostate cancer who presented with lower back pain. Through computed tomography (CT) imaging, an osteolytic lesion at the L3 vertebra and an enhancing lesion of the prostate gland with multiple lymphadenopathies were found. On whole-body positron emission tomography-computed tomography (PET-CT), an additional osteoblastic bone lesion was found in the left ischial bone. A prostate biopsy was performed, and adenocarcinoma was confirmed. Decompression surgery of the L3 vertebra was conducted, and the pathologic result indicated that the lesion was a plasma cell neoplasm. Immunofixation electrophoresis showed the presence of biclonal gammopathy (IgG kappa and IgA lambda). Bone marrow plasma cells (CD138-positive cells) comprised 7.2% of nucleated cells and showed kappa positivity. We started radiation therapy for the L3 vertebra lesion, with a total dose of 3,940 cGy, and androgen deprivation therapy as treatment for the prostate cancer. INTRODUCTION Multiple myeloma is a malignant disease of plasma cells that manifests as disease in the bone marrow, monoclonal protein in the blood and/or urine, and evidence of end organ damage that can be attributed to the underlying plasma cell proliferative disorder [1]. Biclonal gammopathies are a group of disorders characterized by the production of 2 distinct monoclonal proteins. The presence of 2 monoclonal proteins may be because of the proliferation of 2 clones of plasma cells, each producing an unrelated monoclonal immunoglobulin, or it may result from the production of 2 monoclonal proteins by a single clone of plasma cells [2]. Although there are some reports of synchronous occurrence of multiple myeloma and prostate cancer [3,4], there are no reports of multiple myeloma with biclonal gammopathy accompanied by prostate cancer. Here, we report a rare case of multiple myeloma with biclonal gammopathy accompanied by prostate cancer, which was treated successfully by androgen deprivation therapy for the prostate cancer and surgery combined with radiation therapy for the plasma cell neoplasm. CASE REPORT A 58-year-old man visited the orthopedic surgery department of a local clinic in January 2008, with a 10-day history of lower back pain and tingling sensation of the left gluteal region and thigh. At the local clinic, spinal magnetic resonance imaging (MRI) was performed, and the clinician suspected a bone tumor. The patient was then transferred to the Hematology/Oncology Department of Eulji University Hospital for further evaluation. The patient had no history of trauma. In addition to the back pain and tingling sensation of the gluteal region and thigh, the patient complained of urinary symptoms, including hesitancy, mild voiding difficulties, and residual urine sensation. A physical examination revealed lower back tenderness, but other findings were normal. Spinal MRI revealed an osteolytic extension lesion with cortical pinning on the left half of the L3 vertebra, including the left transverse process (Fig. 1). Abdominal and chest computed tomography (CT) revealed an enhancing lesion of the prostate gland. Multiple metastatic lymphadenopathies were discovered in the paraaortic, aortocaval, and common iliac lymph nodes as well as in the left pelvic wall. A whole-body positron emission tomography-computed tomography (PET-CT) was performed (Fig. 2A).
In the prostate gland, the standardized uptake value (SUV) was 3.45 for the lesion showing fluorodeoxyglucose (FDG) uptake; the maximum SUV was 4.57 for the lesion showing FDG uptake in the L3 vertebral body and transverse process; and the SUV was 2.87 for the lesion showing FDG uptake in the left iliac bone. Multiple lesions showing FDG uptake were observed in the paraaortic, aortocaval, prevertebral, and left common iliac regions, with an SUV range of 2.27-4.1. A prostate biopsy was performed under transrectal ultrasonographic guidance, and adenocarcinoma was confirmed in the pathologic review (Fig. 3A). Because of the patient's severe back pain, decompression surgery of the L3 vertebra and a biopsy of the lesion were performed. The biopsy results characterized the lesion as a plasma cell neoplasm (Fig. 3B). Blood analysis yielded the following values: serum beta-2 microglobulin, 0.17 mg/dL; serum IgG, 1,436.6 mg/dL (reference interval: 870-1,700 mg/dL); IgA, 445.9 mg/dL (reference interval: 110-410 mg/dL); and IgM, 123.62 mg/dL (reference interval: 35-220 mg/dL). Serum protein electrophoresis showed an M-peak in the gamma fraction (serum M-protein was 217 mg/dL), and immunofixation electrophoresis revealed the presence of biclonal gammopathy (IgG kappa and IgA lambda). Urine immunofixation electrophoresis showed a dark band for kappa antisera (Fig. 4). In the 24-hr urine samples, the total protein was 151.2 mg/day and urine protein electrophoresis indicated tubular proteinuria with Bence-Jones proteinuria. Immunohistochemical staining of the bone marrow was performed with CD138, kappa, and lambda, and the bone marrow was positive for CD138 and kappa staining. Bone marrow plasma cells (CD138-positive cells) comprised 7.2% of nucleated cells (Fig. 5). Plain radiographic examination of the whole body did not show any abnormalities other than that in the L3 vertebra. Finally, we diagnosed the patient with multiple myeloma showing biclonal gammopathy accompanied by stage IV prostate cancer (due to an ischial bone metastatic lesion). We started radiation therapy on the L3 vertebra plasma cell neoplasm, with a total dose of 3,940 cGy, and androgen deprivation therapy with bicalutamide (50 mg/day) and goserelin (3.78 mg/month) as well as bisphosphonate (90 mg/month) for prostate cancer treatment. After 27 months, we performed a whole-body PET-CT (Fig. 2B), which revealed no abnormal FDG uptake in the intra-abdominal lymph nodes, prostate gland, and left iliac bone. Uptake in the L3 vertebral body increased to an SUV of 3.10, but this change was considered a post-radiotherapy change. PSA decreased to 0.05 ng/mL, which was within the reference interval. Follow-up serum protein electrophoresis revealed an M-peak (serum M-protein was 260 mg/dL) and immunofixation electrophoresis revealed the presence of biclonal gammopathy (IgG kappa and IgA lambda). The patient was administered 10 mg amitriptyline to control the neurologic pain in the left thigh and gluteal region. Nineteen months later, the neurologic pain disappeared, and medication was discontinued. The patient is doing well without evidence of tumor recurrence at 37 months after the initial diagnosis and treatment. DISCUSSION We reported a rare case of multiple myeloma with biclonality (IgG kappa and IgA lambda monoclonal proteins) in a 58-year-old man diagnosed with prostate cancer. The incidence of the simultaneous occurrence of prostate cancer and hematolymphoid malignancies has been reported to be 1.2% [5].
Biclonal gammopathy accounts for approximately 1% of monoclonal gammopathies [2]. To the best of our knowledge, this is the first report of simultaneous prostate cancer and multiple myeloma with biclonal gammopathy. We initially had difficulty in determining the correct diagnosis of multiple myeloma in this patient. The diagnosis of myeloma requires 1) 10% or more clonal plasma cells on bone marrow examination or biopsy-proven plasmacytoma, 2) presence of serum and/or urinary monoclonal protein (except in patients with true nonsecretory multiple myeloma), and 3) evidence of end-organ damage (hypercalcemia, renal insufficiency, anemia, or bone lesions) related to the underlying plasma cell disorder [1]. In this case, the bone marrow plasma cell distribution was 7.2%, and there was no evidence of end-organ damage such as anemia, hypercalcemia, or renal insufficiency, except for a single L3 vertebral osteolytic plasma cell neoplasm. Furthermore, the amount of serum M-protein was very small (217 mg/dL). In addition, we performed immunohistochemical staining of bone marrow specimens for CD138, kappa, and lambda and confirmed positive kappa staining. These findings are indicative of bone marrow plasma cell clonality, and therefore, we diagnosed the patient with multiple myeloma. In many cases, serum electrophoresis produces only a single band on the acetate strip, and the biclonal gammopathy may therefore not be apparent [2,6]. In this case, bone marrow plasma cells showed only positive kappa staining and urine immunofixation also showed a single dark band for kappa antisera. The pathogenesis of biclonal gammopathy is unknown, but several potentially related environmental factors have been identified. Biclonal gammopathy may be due to the proliferation of 2 clones of plasma cells, each producing an unrelated monoclonal immunoglobulin, or it may result from the production of 2 monoclonal proteins by a single clone of plasma cells [7]. Unfortunately, we did not conduct a FISH or chromosome study. If these studies had been performed, more information about plasma cell clonality might have been obtained. Monoclonal gammopathies occur in approximately 1-3% of the normal population [2]. Thus, in the present case, it is possible that the IgA lambda monoclonal protein originated from another cell clone that was unrelated to the myeloma. We do not know whether these 2 diseases, multiple myeloma and prostate cancer, occurred independently or if one disease influenced the development of the other. There are only a few reports of multiple myeloma or monoclonal gammopathy accompanied by prostate cancer [3,4,8,9]. Kao et al. [3] suggested a possible impact of immunosuppression from multiple myeloma, together with chemokines released by circulating myeloma cells, including insulin-like growth factor-1 (IGF-1), interleukin-6 (IL-6), stromal cell-derived factor-1 (SDF-1), and vascular endothelial growth factor (VEGF), on the progression of prostate cancer. Kahr et al. [8] reported a patient with testicular plasmacytoma after chemical castration for prostate cancer. They suggested that surgical stress may have exacerbated the clinical course of the myeloma, partly because of elevated IL-6 levels after surgery, which would stimulate the growth of myelomas.
Kyle and Lust [9] have also reported that repeated antigenic stimulation of the reticuloendothelial system, genetic susceptibility for the development of plasma cell dyscrasia in patients with a positive family history, Epstein-Barr virus infection, lymphoid growth factors (such as IL-6), impairment of T-cell function, and lack of suppression of B cells by T cells play a role in the development of gammopathy. Most studies suggested that immune mechanisms may affect the development of other malignancies, but none of these mechanisms was clearly identified. In summary, we report a rare case of multiple myeloma with biclonal gammopathy accompanied by prostate cancer. In addition to its rarity, this case may provide some insight into the pathogenesis of plasma cell disorder and prostate cancer.
Ferumoxytol and CpG oligodeoxynucleotide 2395 synergistically enhance antitumor activity of macrophages against NSCLC with EGFR L858R/T790M mutation Purpose: Drug resistance is a major challenge for epidermal growth factor receptor (EGFR)-tyrosine kinase inhibitor (TKI) treatment of lung cancer. Ferumoxytol (FMT) drives macrophage (MΦ) transformation towards an M1-like phenotype and thereby inhibits tumor growth. CpG oligodeoxynucleotide 2395 (CpG), a toll-like receptor 9 (TLR9) agonist, is an effective therapeutic agent to induce anticancer immune responses. Herein, the effect of co-administered FMT and CpG on MΦ activation for treating non-small cell lung cancer (NSCLC) was explored. Methods: The mRNA expression levels of M1-like genes in RAW 264.7 MΦ cells stimulated by FMT, CpG, or both FMT and CpG (FMT/CpG) were evaluated by quantitative reverse transcription PCR (qRT-PCR). Then, the effects of FMT/CpG-pretreated MΦ supernatant on apoptosis and proliferation of H1975 cells were detected by flow cytometry, and the expression of EGFR and its downstream signaling pathway in H1975 cells was explored by western blotting. Finally, an H1975 cell xenograft mouse model was used to study the anti-tumor effect of the combination of FMT and CpG in vivo. Results: FMT and CpG synergistically enhanced M1-like gene expression in MΦ, including tumor necrosis factor-α, interleukin (IL)-12, IL-1α, IL-1β, IL-6, and inducible nitric oxide synthase (iNOS). FMT/CpG-pretreated MΦ supernatant inhibited proliferation and induced apoptosis of H1975 cells, accompanied by down-regulation of cell cycle-associated proteins and up-regulation of apoptosis-related proteins. Further studies indicated that the FMT/CpG-pretreated MΦ supernatant suppressed p-EGFR and its downstream AKT/mammalian target of rapamycin signaling pathway in H1975 cells. Furthermore, FMT/CpG suppressed tumor growth in mice, accompanied by a decline in the EGFR-positive tumor cell fraction and increased M1-phenotype macrophage infiltration. Conclusion: FMT acted synergistically with CpG to activate MΦ, suppressing proliferation and promoting apoptosis of NSCLC cells via EGFR signaling. Thus, combining FMT and CpG is an effective strategy for the treatment of NSCLC with EGFR L858R/T790M mutation. Introduction The incidence of lung cancer has been increasing yearly, with a mortality rate ranking first among malignant tumors. 1 EGFR-tyrosine kinase inhibitors (TKIs) are effective for the treatment of lung cancer patients with EGFR mutations; 2-4 however, drug resistance develops in most cases after 9-13 months. 5 Although molecular targeted therapy improves patient survival as compared to conventional treatment approaches such as radiotherapy, chemotherapy, and surgery, the 5-year overall survival rate is still <15%. 6 Therefore, there is an urgent need to develop alternative treatment strategies for non-small cell lung cancer (NSCLC) harboring resistance mutations. Immunotherapy has shown great potential for cancer treatment. 7-9 Macrophages (MΦ) are heterogeneous immunocytes accounting for a significant proportion of the tumor microenvironment (TME). 10 Activated MΦ can be divided into two subtypes according to function: 11,12 M1, which mediates inflammation and the anti-tumor immune response by producing pro-inflammatory factors such as tumor necrosis factor (TNF)-α and IL-12; and M2, which secretes high levels of immunosuppressive mediators to promote tumor growth, invasion, and metastasis.
NSCLC is associated with greater infiltration of MΦ than other cancers, 13 and the density of M2 MΦ was negatively correlated with the survival of lung cancer patients who had undergone surgery. 14 In addition, M2 MΦ infiltration reduces the sensitivity of EGFR-TKIs in lung adenocarcinoma. 15,16 In contrast, gefitinib inhibits M2-like polarization of MΦ in Lewis lung cancer, 17 which represents a promising approach to modulating the TME for lung cancer treatment. However, although restoring the immunological activity of MΦ has substantial antitumor effects, treatment of NSCLC with EGFR mutations (particularly T790M) by steering MΦ to the M1 phenotype has not been reported. Ferumoxytol (FMT) is a versatile nanoparticle that has been widely used as a drug carrier. 18-21 However, its potential as an immune activator to treat tumors has been overlooked. A recent study showed that iron released by lysed red blood cells induces the conversion of MΦ into a proinflammatory phenotype capable of directly killing tumor cells; in addition, an abundance of iron-loaded MΦ is correlated with reduced tumor size in NSCLC patients. 13 Moreover, in our previous work, FMT combined with the TLR3 agonist poly(I:C) significantly suppressed the growth of melanoma by shifting macrophages to a tumoricidal phenotype. 22 This suggests that activation of MΦ by FMT may be an effective strategy for the treatment of lung cancer, although it has not been investigated in NSCLC with EGFR T790M mutation. CpG oligodeoxynucleotide 2395 (CpG) is an artificially synthesized oligodeoxynucleotide containing unmethylated CpG motifs that triggers a pro-inflammatory immune response by interacting with Toll-like receptor (TLR) 9 in MΦ. 23,24 The antitumor effect of CpG has been demonstrated in several malignancies including stage I/II melanoma, 25 glioblastoma, 26,27 and non-Hodgkin's lymphoma. 28 However, not all clinical studies on CpG reported improved patient outcomes. For instance, CpG did not enhance the therapeutic effect of erlotinib in patients with advanced recurrent EGFR-positive NSCLC. 29 On the other hand, CpG in combination with first-line taxane and platinum chemotherapy prolonged survival in patients with advanced NSCLC, 30 indicating that CpG may potentially maximize the benefits of immunotherapy in patients with EGFR T790M mutation when used in combined therapy. We investigated this possibility in the present study using the human NSCLC cell line H1975, which carries EGFR L858R/T790M mutations, and a xenograft mouse model. Although treatment with FMT and CpG alone or in combination did not have a direct inhibitory effect on tumor cells, the supernatant of MΦ pretreated with both FMT and CpG (FMT/CpG) promoted apoptosis and inhibited proliferation in tumor cells by suppressing the expression of EGFR signaling pathway components. These results provide the first demonstration that, in addition to traditional chemotherapy and molecular targeted therapy, MΦ activation by FMT/CpG is an effective strategy for the treatment of NSCLC with EGFR L858R/T790M mutations. Cell culture and reagents The TLR9 agonist CpG (class C ODN 2395) was purchased from InvivoGen (San Diego, CA, USA; #tlrl-2395-5). FMT was a gift from Prof. Ning Gu. 31 The cells were passaged at 70-80% confluence. RAW 264.7 cells were seeded in 24-well plates and FMT (100 μg/mL) and CpG (2.5 μg/mL) were added to the medium; the MΦ supernatant was collected after stimulation for 12 h. H1975 cells were seeded in a 24-well plate.
When the cells were attached, the supernatant was discarded and the cells were treated with FMT (100 μg/mL), CpG (2.5 μg/mL), or the supernatant of MΦ grown for 12 h with or without FMT/CpG. H1975 cells were cultured for 48 h at 37°C in a 5% CO₂ incubator and used for cell proliferation, apoptosis, and Western blotting experiments. RNA isolation and quantitative reverse transcription-PCR (qRT-PCR) Total RNA was extracted from harvested cells using TRIzol reagent (Thermo Fisher Scientific; #15596018). The mRNA was reverse transcribed into cDNA using the HiScript II 1st Strand cDNA Synthesis Kit (Vazyme Biotech Co., Ltd, Nanjing, China; #R211) according to the instructions. qRT-PCR was performed in SYBR Green PCR Master Mix (Thermo Fisher Scientific; #4309155) on a Step One Plus system (Bio-Rad, Hercules, CA, USA) according to the manufacturer's instructions. The relative expression levels of target genes were calculated with the 2⁻ΔΔCt method relative to the endogenous control glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Primer sequences are shown in Table 1. Cell viability assay The effect of FMT and CpG on NSCLC progression was examined using Cell Counting Kit (CCK)-8 (Dojindo Laboratories, Kumamoto, Japan; #CK04). The cells were seeded at a density of 5×10³/well in 96-well plates and incubated overnight at 37°C in an atmosphere of 5% CO₂. The cells were then treated with fresh medium containing different concentrations of FMT, CpG, and FMT/CpG for 48 h. The supernatant was discarded and replaced with fresh medium containing 10% CCK-8 solution, followed by incubation for 1-4 h. The absorbance at 450 nm was measured using a multi-well spectrophotometer (BioTek, Winooski, VT, USA). There were six replicate wells for each sample and the experiment was repeated three times. Flow cytometry (FCM) analysis of cell apoptosis Cell apoptosis was detected by FCM using the Annexin V-Alexa Fluor 488/PI Apoptosis Assay kit (FcMACS, Nanjing, China; #FMSAV488-100). H1975 cells were collected, washed twice in phosphate-buffered saline at 4°C, and resuspended in binding buffer. A 100-μL aliquot of the cell suspension was transferred to a flow tube, and 5 μL Annexin V-Alexa Fluor 488 and 10 μL propidium iodide (PI) were added, followed by mixing and incubation at 4°C for 15 min in the dark. The cell suspension was sorted on a FACSCalibur instrument (BD Biosciences, San Jose, CA, USA). Early and late apoptotic cell fractions (Annexin V-positive and Annexin V/PI double-positive, respectively) were quantified. FCM analysis of cell proliferation H1975 cells were labeled with 10 μM 5(6)-carboxyfluorescein diacetate N-hydroxysuccinimidyl ester (CFSE; Thermo Fisher Scientific; #65-0850-84) and incubated in CFSE staining solution for 15 min at 37°C. An equal volume of culture medium (containing serum) was added to the cells along with the CFSE staining solution, followed by incubation for 5 min. The CFSE-containing solution was removed and the cells were washed twice with an equal volume of culture medium. The fluorescently labeled cells were used for subsequent experiments. Cell proliferation was detected by FCM. The proliferation index was calculated using ModFit LT software (Verity Software House, Topsham, ME, USA) as the total number of divisions divided by the number of proliferating parent cells. 33,34 Colony formation assay H1975 cells were seeded at a density of 1×10³/well in 12-well plates and incubated overnight at 37°C in an atmosphere of 5% CO₂.
The supernatant was discarded, and the cells were cultured for 2 weeks under different treatment conditions. The medium was refreshed at appropriate intervals, which were determined according to the pH of the culture supernatant. The experiment was terminated when macroscopic clones appeared in the culture plates, which were then washed twice with PBS, fixed in 4% formalin for 10-15 min, dried, and stained with 0.1% crystal violet for 15 min. The number of colonies was counted under a microscope and those with >50 cells were defined as positive. Tumor xenograft studies H1975 cells were resuspended in 100 μL sterile PBS, and 2×10⁶ cells were subcutaneously injected into the right flank of 5-6-week-old female BALB/c nude mice (Shanghai Laboratory Animal Research Center, Shanghai, China). When the tumors reached a volume of 100 mm³, the animals were randomly divided into two groups (n=5 each) that were intratumorally injected with PBS or FMT (10 mg/kg) and CpG (2.5 mg/kg) every 2 days for 10 days. Body weight, maximum tumor length (L), and minimum tumor width (W) were recorded every 3 days. The tumor volume was calculated with the following formula: V = (L × W²)/2. At the end of the experiment, the mice were euthanized and the tumors were excised, washed with PBS, and fixed in formalin for immunohistochemistry. The protocol was approved by the Committee on the Ethics of Animal Experiments of Nanjing Medical University and conformed to the Guidelines for the Care and Use of Laboratory Animals. Immunohistochemistry Tumor tissue specimens were fixed with 4% paraformaldehyde, embedded in paraffin, and cut into 4-μm-thick sections that were deparaffinized with xylene, rehydrated in a graded series of ethanol for 5 min, washed three times with PBS, and then blocked with serum for 30 min. The sections were incubated overnight at 4°C with primary antibody, washed three times with PBS, incubated with biotinylated secondary antibody for 30 min at 37°C, and then washed with PBS, followed by staining with 3,3′-diaminobenzidine at room temperature for 10 min in the dark. After staining with hematoxylin for 2 min, the sections were subjected to hydrochloric acid/alcohol differentiation, dehydrated with ethanol and xylene, dried, and photographed under a microscope. Statistical analysis Statistical analyses were performed using SPSS 19.0 software (SPSS Inc, Chicago, IL, USA). Data were compared by one-way ANOVA or Student's t-test. All statistical analyses were conducted at the significance level of α=0.05, and the least significant difference (LSD) or Dunnett's test was used for post hoc analysis of ANOVA. Characterization of FMT The polymer coating the outer layer of FMT was synthesized by terminal aldehyde group reduction and hydroxycarboxymethylation of dextran T10. As indicated in Figure 1A, the average diameter of the synthesized dextran T10-coated FMT was about 7 nm, and dynamic light scattering showed that the hydration particle size of FMT was 35 nm (Figure 1B). The molecular formula of the FMT external material is shown in Figure 1C, with H or COOH as the R group. 31,32 FMT and CpG synergistically promote M1-like gene expression in MΦ In addition, the mRNA level of the M1-related co-stimulatory molecule cluster of differentiation 86 (CD86) and inducible nitric oxide synthase (iNOS) was also enhanced by FMT/CpG as compared to treatment with each agent alone. Thus, FMT and CpG synergistically promote MΦ activation towards a tumoricidal phenotype, with upregulation of M1-like genes.
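Two quantitative steps in the Methods above reduce to one-line computations; as a hedged R sketch for orientation (the Ct values and caliper measurements are invented placeholders, not the study's data):

# Relative expression by the 2^-ddCt method against the GAPDH control.
# Hypothetical Ct values for one target gene in control vs treated samples:
ct <- data.frame(group  = c("control", "treated"),
                 target = c(24.1, 21.6),   # Ct of target gene
                 gapdh  = c(15.2, 15.3))   # Ct of GAPDH control
dct  <- ct$target - ct$gapdh               # per-sample delta-Ct
ddct <- dct - dct[ct$group == "control"]   # delta-delta-Ct vs control
fold_change <- 2^(-ddct)                   # control = 1; treated ~ 6.1-fold

# Xenograft tumor volume from caliper length L and width W (mm):
tumor_volume <- function(L, W) (L * W^2) / 2
tumor_volume(L = 10, W = 8)                # 320 mm^3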
FMT/CpG-pretreated MΦ supernatant reduces NSCLC cell viability To determine whether FMT and CpG directly suppress NSCLC growth, H1975 cells were treated with FMT, CpG, or a combination of both for 48 h. FMT showed no obvious toxicity to H1975 cells (Figure 3A). CpG reduced cell viability in a dose-dependent manner, although the effect was nonsignificant at low doses (Figure 3B). In addition, the rate of inhibition upon treatment with FMT combined with CpG was only 13% (Figure 3C). These results demonstrate that FMT/CpG shows negligible cytotoxicity towards H1975 cells. As demonstrated previously, co-treatment with FMT and CpG synergistically induced MΦ activation towards a tumoricidal phenotype. To investigate whether this phenotypic switch can lead to inhibition of tumor cell growth, H1975 cells were treated with FMT/CpG, the supernatant of MΦ grown for 12 h without any stimulation (−FMT/CpG MΦS), or the supernatant of MΦ pretreated with FMT/CpG for 12 h (+FMT/CpG MΦS). After 48 h, cell viability was detected. As shown in Figure 3C, the viability of H1975 cells exposed to +FMT/CpG MΦS was only 58.9% compared with the control group, while that in the −FMT/CpG MΦS group was 80.48%. Altogether, FMT/CpG-pretreated MΦ supernatant has a significant inhibitory effect on H1975 cell viability. FMT/CpG-pretreated MΦ supernatant induces apoptosis of NSCLC cells The decrease in the viability of H1975 cells may be a combined effect of tumor cell apoptosis and inhibited proliferation. First, H1975 cells were treated with FMT/CpG, −FMT/CpG MΦS, or +FMT/CpG MΦS for 48 h and cell apoptosis was analyzed by FCM. The results in Figure 4A and B show that FMT/CpG and −FMT/CpG MΦS induced a low level of apoptosis in H1975 cells (9.18% and 10.6%, respectively). In comparison, the rate of apoptosis was significantly higher in cells exposed to FMT/CpG-pretreated MΦ supernatant than in control cells (46.2% vs 5.01%). To investigate the mechanism underlying this effect, we examined the expression of several apoptosis-related proteins by Western blotting and found that FMT/CpG-pretreated MΦ supernatant increased the expression levels of the apoptosis-related proteins Bax, Cleaved Caspase-3, and Cleaved PARP in H1975 cells (Figure 4C and D). FMT/CpG-pretreated MΦ supernatant suppresses NSCLC cell proliferation Next, the proliferation of H1975 cells pre-labeled with CFSE and subjected to different treatments was analyzed by FCM. As indicated in Figure 5A and B, the proliferation of H1975 cells in the control group slowed by the fourth generation, with an average proliferation index of 7. Abnormal activation of EGFR signaling promotes proliferation and decreased apoptosis of tumor cells. 39 Next, Western blotting was used to detect whether the FMT/CpG-pretreated macrophage supernatant affects the expression of EGFR and its downstream signaling pathway proteins in H1975 cells. As shown in Figure 6A and B, compared with the control group, the expression levels of p-EGFR, p-AKT, and p-mTOR proteins in H1975 cells treated with −FMT/CpG MΦS were partially inhibited, but not as potently as in the +FMT/CpG MΦS group. Compared with the FMT/CpG group, incubation with FMT/CpG-pretreated macrophage supernatant more markedly downregulated the levels of the above proteins. Collectively, the results indicated that FMT/CpG-pretreated macrophage supernatant significantly promoted cell apoptosis and inhibited cell proliferation by downregulating phosphorylation of EGFR and its downstream AKT/mTOR proteins in H1975 cells.
FMT and CpG synergistically inhibit tumor growth in a xenograft mouse model We next investigated whether the combination of FMT and CpG has anti-tumorigenic effects in vivo using an H1975 cell xenograft mouse model. As shown in Figure 7A and B, co-administration of FMT and CpG significantly suppressed tumor growth in tumor-bearing mice. Ki-67, an antigen present in the nuclei of proliferating cells, is widely used to evaluate the proliferative activity of tumors. 40 Immunohistochemical analysis revealed that the percentage of Ki-67- and EGFR-positive tumor cells in the FMT/CpG group was markedly lower than that in the control group (Figure 7D and E), whereas the percentage of M1 macrophages stained with F4/80 and iNOS in tumor tissues was significantly up-regulated (Figure 7F). In addition, the mice showed no obvious weight loss or treatment-related death, indicating that the combined treatment was non-toxic (Figure 7C). Thus, FMT acts synergistically with CpG to suppress tumor growth in vivo. Discussion Osimertinib, a third-generation EGFR-TKI that has been approved for the treatment of EGFR T790M-positive NSCLC patients, acts by inhibiting p-EGFR and downstream signaling. 41 However, with changes in the TME and the continuous transformation of cancer cells, the emergence of new drug-resistant mutations is inevitable. 42 Various strategies have been developed to overcome resistance to EGFR-TKI treatment, including combination chemotherapy, 43 antitumor vascular therapy, 44 and immunotherapy based on inhibitors of programmed cell death-1 and its ligand. 45 However, the toxicity of standard drugs and the high cost of the latter two approaches limit their universal application. As such, there is an urgent need for safer, more effective, and affordable therapeutic strategies to improve the outcome of NSCLC patients with EGFR mutation. NSCLC is characterized by a large number of MΦ in the TME. 13 Given their diversity and plasticity, treatment approaches that induce a phenotype switch in tumor-associated MΦ can be effective against NSCLC. Although MΦ are known to be affected by EGFR-TKIs, 15,16 it is unclear whether inducing their transformation can inhibit the progression of NSCLC with EGFR T790M mutation. FMT is a nanomaterial with good biocompatibility and low toxicity that has been approved by the US Food and Drug Administration for the treatment of anemia. 46,47 CpG has been used for cancer therapy in clinical trials. 30 Many studies have demonstrated the advantages of combined immunotherapy over monotherapy in cancer treatment, 48,49 and our previous studies also showed that combined treatment with FMT and the TLR3 agonist poly(I:C) synergistically induces macrophage activation for melanoma regression. 22 Since they are both activators of MΦ, in this study we investigated whether FMT and CpG used in combination can suppress NSCLC by inducing MΦ activation. IL-12 blocked the migration and invasion of lung adenocarcinoma cells 50 and suppressed lung tumor growth, thereby prolonging the survival of lung cancer-bearing mice, 51 while TNF-α is a cytotoxic protein produced by MΦ that can directly induce apoptosis in NSCLC cells. 52,53 In the present study, we found that FMT/CpG was relatively non-toxic to H1975 cells, whereas FMT/CpG-pretreated MΦ supernatant inhibited H1975 cell proliferation and induced apoptosis, which would be related to the anti-tumor effects of IL-12 and TNF-α as well as other cytotoxic cytokines generated by MΦ induced by FMT/CpG.
The caspase family plays critical regulatory roles in cell apoptosis. 51,54 In our study, FMT/CpG-pretreated MΦ supernatant significantly upregulated the protein levels of apoptosis-related Cleaved Caspase-3 and its substrate Cleaved PARP in H1975 cells, indicating that apoptosis of H1975 cells occurred through activation of the caspase-3 pathway. Abnormal activation of the EGFR signaling pathway leads to sustained growth of lung cancer cells. 39 Consistent with this, we observed that p-EGFR and the downstream factors p-AKT and p-mTOR were downregulated in H1975 cells exposed to FMT/CpG-pretreated MΦ supernatant. In addition, the growth of subcutaneous xenograft tumors in mice was inhibited by treatment with FMT/CpG, which was accompanied by a decline in the EGFR-positive tumor cell fraction and increased M1-type macrophage infiltration. Thus, the combination of the two agents synergistically activated MΦ through upregulation of tumoricidal genes and induction of apoptosis, which was achieved by inhibiting the phosphorylation of EGFR and its downstream signaling in NSCLC cells (Figure 8). (Figure 8 caption: Schematic illustration of synergistic MΦ activation by FMT and CpG for the treatment of NSCLC with EGFR T790M mutation. Abbreviations: EGFR, epidermal growth factor receptor; FMT, ferumoxytol; MΦ, macrophages; NSCLC, non-small cell lung cancer.) Although systemic administration of TLR9 agonists has been unsuccessful, 55,56 local intratumoral injection of CpG ODN is still an effective strategy for tumor treatment through activation of innate responses. 28 Gallotta et al. found that delivery of a TLR9 agonist through the airways could effectively render lung tumors permissive to PD-1 blockade by promoting optimal CD4+ and CD8+ T-cell interplay, 57 which characterizes a strategy for applying localized TLR9 stimulation to a tumor type not accessible for direct injection. Our previous studies showed that, compared with combined treatment with FMT/poly(I:C), systemic administration of FP-NPs surface-functionalized with poly(I:C) better inhibited lung metastasis. 22 Based on the above findings, we here initially explored the effects of FMT combined with CpG on NSCLC with EGFR T790M mutation. Interestingly, similar to the previous results, we found that the combination of FMT and CpG significantly inhibited tumor growth in mice by synergistically activating macrophages. Although this route of administration has some limitations in clinical application, it provides a good reference for us to explore novel methods for CpG delivery in the future. Conclusion FMT combined with CpG induced the activation of MΦ towards a tumoricidal phenotype; this was accompanied by increased apoptosis and suppression of cell proliferation and EGFR signaling in NSCLC cells. Our findings for the first time provide evidence of FMT/CpG as a novel and effective treatment for NSCLC with EGFR T790M mutation.
HIV infection is not associated with perioperative blood loss in patients undergoing total hip arthroplasty Background Patients with HIV have a higher prevalence of thrombocytopenia than those without HIV infection, increasing their risk of substantial perioperative blood loss (PBL) during total hip arthroplasty (THA). This study aimed to evaluate PBL risk factors in HIV-infected patients undergoing THA. Methods Eighteen HIV+ patients (21 hip joints) and 33 HIV− patients (36 joints) undergoing THA were enrolled in this study. PBL, comprising total blood loss (TBL), dominant blood loss (DBL), and hidden blood loss (HBL), was calculated using the Gross equation. Risk factors for post-THA PBL in both patient populations were evaluated using multivariable linear regression. Results At baseline, the HIV+ patients were younger, more likely to be male and to have elevated hemoglobin and albumin levels, and had lower erythrocyte sedimentation rates than HIV− patients. There were no differences in the T-lymphocyte subsets or coagulation function between the two groups. Age and albumin level were identified as potential HBL risk factors after THA, and albumin level was associated with higher TBL. The unadjusted linear regression analysis showed that HBL and TBL were significantly higher in HIV+ patients than in HIV− patients. However, after adjusting for other factors, no differences in DBL, HBL, or TBL were observed between HIV− and HIV+ patients. Conclusion PBL was similar in both groups undergoing THA, regardless of HIV-infection status. THA is a safe and effective procedure in HIV+ patients. Introduction With the widespread use of highly active antiretroviral therapy (HAART), HIV has been transformed from a devastating disease into a chronic condition [1,2]. Moreover, the incidence of HIV-related deaths and HIV-associated opportunistic infections has notably decreased. As a result, previously unrecognized complications of long-term HIV infection are having an increased impact on patients' quality of life [3,4]. Osteonecrosis of the femoral head (ONFH) has been recognized as an important complication of long-term HIV infection [5-9]. The estimated incidence of ONFH among patients with HIV ranges from 0.3 to 3.7 cases per 1000 person-years, which is much higher than the estimated incidence among the general population [10,11]. Moreover, asymptomatic ONFH is also common among patients with HIV. Previous studies reported that 4.4% of asymptomatic patients with HIV exhibited signs of ONFH on MRI, while the rate was 1.7% among those without HIV [10,12]. Total hip arthroplasty (THA) is an effective surgical treatment for ONFH, with perioperative blood loss (PBL) being a key factor in postoperative recovery. However, in HIV+ patients, the virus invades the platelets, leading to decreased platelet counts and secondary coagulation dysfunction; the overall prevalence of thrombocytopenia in this population is 4.5-26.2% [13]. The peripheral destruction of platelets and decreased platelet production are two important mechanisms of thrombocytopenia in HIV+ patients [14,15]. Although PBL is a major concern in THA for HIV+ patients, published studies have not estimated PBL in these patients. Hence, this study investigated PBL risk factors among HIV-infected patients undergoing THA. Study design From August 2020 to April 2021, we continuously enrolled patients with or without HIV infection who underwent primary THA surgery at the Shenzhen Third People's Hospital.
The inclusion criteria for HIV+ patients were as follows: (1) diagnosis of HIV by laboratory examination, including HIV 1/2 antibody screening, flow cytometry for T lymphocytes, and HIV-RNA nucleic acid viral load, in combination with a personal history; (2) history of receiving HAART for more than 6 months before the procedure; (3) femoral neck fracture or femoral head necrosis caused by osteoarthritis, drugs, alcohol, etc.; and (4) age ≥ 18 years. We included HIV− patients who were (1) aged ≥ 18 years and (2) had a femoral neck fracture or femoral head necrosis caused by osteoarthritis, drugs, alcohol, etc. We excluded patients with (1) a previous history of hip surgery at the same site; (2) infection or tumor at the surgical site of the hip joint; or (3) a history of abnormal nutritional, immune, or coagulation function.

Perioperative process
HIV+ patients had received standardized HAART before the procedure. Ten patients had received TDF + 3TC + EFV for more than 2 years. Two patients were treated with TDF + 3TC + LPV/r for 2-3 years. Two patients were treated with TDF + 3TC + RAL for about 1 year, and four patients were treated with ABC + 3TC + RAL for 1-2 years. All operations were performed by a senior surgeon. After general anesthesia and endotracheal intubation, the posterolateral approach was used to perform THA. The same acetabular cup and femoral component (Betacup®; Link, Germany) were used in both groups; 5% glucose injection plus 0.5 g of tranexamic acid was administered 5-10 min before and during the procedure in both groups. All patients were injected subcutaneously with 4100 IU of low-molecular-weight heparin calcium 12 h before the procedure, and once a day from 12 h postoperatively. Intravenous transfusion of leukocyte-reduced red blood cells, virus-inactivated plasma, or apheresis platelets was used as appropriate for patients who needed a blood transfusion during the operation. Postoperatively, negative-pressure drainage tubes were placed at the surgical site. The tubes were removed 24-48 h after surgery.

Data collection
Baseline data (age, sex, height, weight, and body mass index) were routinely collected preoperatively. T-lymphocyte subsets (absolute counts of CD3+ T cells, CD4+ T cells, and CD8+ T cells) were also assessed preoperatively. Other clinical information, including levels of hemoglobin, albumin, and high-sensitivity C-reactive protein (hs-CRP), hematocrit (HCT), erythrocyte sedimentation rate (ESR), and indicators of coagulation function (prothrombin time, international normalized ratio, and fibrinogen concentration), was measured before and after the operation. Nutrition status was assessed using the European Society for Clinical Nutrition and Metabolism guidelines [17].

Statistical analysis
Continuous data are presented as means and standard deviations or as medians and interquartile ranges. Student's t-test was used to analyze between-group differences if the distribution was normal as assessed by the Shapiro-Wilk test. The Mann-Whitney U test was applied for non-normal continuous data. Categorical data are presented as frequencies and percentages, and Fisher's exact test was used to compare between-group differences. Univariable analysis was conducted to assess the potential risk factors associated with PBL.
Multivariable linear regression analysis was used to evaluate blood loss differences between patients with and without HIV infection; independent variables were chosen from the univariable analysis (those with P values < 0.05). All analyses were conducted using SPSS for Windows (version 22.0; SPSS, Chicago, IL, USA). P < 0.05 was considered statistically significant.

Patient baseline characteristics
A total of 51 patients were enrolled in the study, including 18 (21 hips) who were HIV+ and 33 (36 hips) who were HIV− (Fig. 1). Table 1 shows the baseline characteristics and preoperative clinical information for the HIV− and HIV+ patients. Among the HIV− patients, 55.6% were female, whereas female patients accounted for only 9.5% of the HIV+ patients. The average ages of the two groups were 60.97 (HIV−) and 43.19 (HIV+) years. The HIV+ patients were more likely to have higher hemoglobin and albumin levels and lower ESR and drainage volume than the HIV− patients. There were no differences in blood transfusion volume, T-lymphocyte subsets, or coagulation function indicators between the two groups.

Univariable analysis of post-THA blood loss risk factors
Table 2 shows the single-factor analysis of potential PBL risk factors for all patients. HIV infection was significantly associated with blood loss, including DBL, HBL, and TBL (all P < 0.05). Compared with the HIV− group, the levels of DBL, HBL, and TBL were higher in the HIV+ group. Moreover, diabetes and BMI group were found to be associated with DBL. Age, hemoglobin (HB), albumin (ALB), and hs-CRP levels were potential risk factors for HBL. ALB and hs-CRP levels were associated with TBL. Table 3 shows the linear regression analysis of blood loss in both patient groups. The unadjusted linear regression showed that HBL and TBL were significantly higher in HIV+ patients than in HIV− patients. However, when adjusted for variables with P values < 0.05 in the univariable analysis, no between-group differences in DBL, HBL, or TBL were observed.

Discussion
In this study, we enrolled 18 patients (21 joints) who were HIV+ and 33 (36 joints) who were HIV−. Multivariable linear regression failed to demonstrate any significant difference in PBL between the two groups. With the increasing maturity of artificial joint technology, THA has become a safe and effective surgical treatment for end-stage hip disease [18]. In recent years, HIV+ patients have received HAART, and awareness of the complications of HAART, including osteoporosis and avascular necrosis of the femoral head, is also increasing [19,20]. Currently, THA is a routine surgical treatment for HIV-infected patients with end-stage hip disease. In patients undergoing THA, excessive PBL may lead to increased incidences of postoperative fever, anemia, hypoproteinemia, wound and joint infections, lower-extremity deep venous thrombosis, pulmonary embolism, and other complications. These complications affect the patient's short-term recovery and long-term joint function. Thus, orthopedists need to provide correct, active, and effective control of PBL. Platelets have been reported to interact with the HIV-1 virus, viral membrane proteins, or dysregulated circulating inflammatory molecules resulting from HIV-1 infection [21]. Thus, long-term HIV infection often leads to thrombocytopenia and idiopathic thrombocytopenic purpura (ITP). A study conducted in Brazil revealed that 63.6% of HIV patients had ITP, and 25.5% had platelet production deficiencies that were secondary to HIV infection [22].
An earlier study reported that cross-reactivity between viral envelope glycoprotein 120 and platelet glycoprotein IIIa promotes platelet capture and lysis in the reticuloendothelial system of the spleen or early apoptosis, both of which result in ITP [14]. Moreover, during the advanced stages of HIV infection, the HIV-1 virus impairs the signaling of colony-forming units associated with megakaryocyte growth and further disrupts megakaryocytic maturation [23]. However, with the widespread use of HAART, the HIV-1 viral load can be stably suppressed, transforming HIV infection from a disease of high mortality into a chronic disease. Many studies have found that the prevalence of thrombocytopenia decreases with higher CD4+ T-lymphocyte counts; the prevalence drops to 0 in patients with CD4+ T-lymphocyte counts > 350 cells/μL [24,25]. In our study, thrombocytopenia and its corresponding complications were not observed. Thus, the difference in PBL between HIV− and HIV+ patients was not significant. Several management measures are used to prevent PBL, which may be operative, hemostatic, or blood-related. During the intraoperative period, surgeons need to ensure accurate anatomical positioning, timely hemostasis, and minimal peeling of the surrounding tissues. Reducing soft tissue and bone damage not only reduces intraoperative DBL but also accelerates the postoperative recovery of muscle function. Soft tissue injury can be minimized by avoiding repeated pulling of the muscles and other soft tissues [18]. During the postoperative period, early active rehabilitation can accelerate the recovery of muscles and other soft tissues. Through appropriate active exercise combined with passive exercise, significant reductions in HBL and improved patient prognoses can be achieved [26]. Tranexamic acid is an effective hemostatic agent for reducing intraoperative bleeding. This agent can be applied locally within 10 min of creating an incision, or prior to incision closure, to achieve less HBL without any adverse effects during the perioperative period [27,28]. Blood management is another important measure for reducing PBL. Orthopedic surgeons should evaluate the preoperative hemoglobin status of patients to avoid anemia, which weakens the body postoperatively and increases infection and mortality rates; they should also apply measures to shorten the postoperative recovery period and the hospital stay [29]. The present study has several limitations. First, our study involved a small number of patients; future studies need larger sample sizes to evaluate the validity of the present results. Second, the underlying mechanisms of the higher unadjusted PBL observed in HIV+ patients are unknown, and exploring them is necessary. Third, this study was a single-center investigation, which could reduce the generalizability of the results. Lastly, sex imbalance was inevitable because the HIV+ patients in this study were mostly male homosexuals, while the proportion of men in the HIV− group was notably lower.

Conclusion
In conclusion, there were no differences in post-THA blood loss between patients with and without HIV infection. This study also confirmed that THA is a safe and effective modality in HIV+ patients; the procedure is not associated with an increased risk of bleeding in HIV+ patients and has the same level of surgical effectiveness in HIV+ and HIV− patients.
Dose dependence of the erosion of graphite under high-temperature ion irradiation

In this paper, the dependence of the erosion of graphite on ion dose under a high-energy ion flux at a temperature of 2050 °C is investigated. It is shown that high doses of irradiation stimulate diffusion processes that lead to the removal of carbon atoms from the bulk of the sample, significantly altering the morphology of graphite at depths far exceeding the penetration depth of the irradiating ions.

Introduction
Graphite is used as a plasma-facing element in a number of thermonuclear and plasma technology facilities [1-4]. However, the behavior of graphite under high-energy ion beams at high temperatures has not been sufficiently studied. In this work, the behavior of graphite under intensive deuterium ion flow irradiation at high temperatures is investigated.

Experimental setup
The investigation was conducted on the "COating Deposition and MATerial Testing" stand (CODMATT) (figure 1) [5]. Experimental parameters were as follows: residual gas pressure < 2·10⁻⁶ Torr; ion flux 2-2.3·10¹⁷ ion/s. The average penetration depth of deuterium ions under these parameters is 120 nm. Small-grain graphite was used as the material under investigation. Samples of 14×16×2 mm³ were polished using a sequence of sandpapers with grain sizes of 40 to 3 µm, then cleaned in an ultrasonic bath with ethanol and annealed in plasma. A Central Sector (CS) on the surface of the target was analysed, where the average power density was 36 MW/m² and the average ion flux density was 1.7·10²² ion/(s·m²), with a variation of ±16 % over an area of 1.3 mm in radius. The samples were irradiated with doses from 5.25·10²⁰ to 6.3·10²¹ ion/cm² at a temperature of 2050 °C.

Results and discussion
Photographs of the surfaces of graphite samples after varying doses of irradiation are shown in figure 2. As can be seen, surface porosity increased with increasing dose. Several areas that differed significantly in pore size, concentration, and depth were distinguishable. Electron microscope analysis has shown that pore formation was hindered on the surface of graphite grains with graphene layers oriented at small angles relative to the irradiated surface. CS mass loss and, consequently, the sputtering coefficient were measured by weighing the samples before and after irradiation, and also by measuring the depth of the graphite layer removed from the CS relative to the unsputtered surface after each dose of irradiation. The analysis of the measurement results has shown that the quantity of removed material and, as a consequence, the sputtering coefficient decrease steadily with increasing ion dose. The ratio of the loss of graphite material in the CS measured through weighing to the loss measured through profilometry started at approximately 3 for a dose of 5.25·10²⁰ ion/cm² and ended at 3.5 at 6.3·10²¹ ion/cm². For example, at a dose of 6.3·10²¹ ion/cm², the mass loss obtained via weighing and the mass loss calculated through depth were 4.72 mg and 1.33 mg, respectively. This fact leads to the assumption that, in the entire range of irradiation doses used in this research, irregularities develop in the bulk of the material that lead to the formation of a porous layer. Such development of porous layers could possibly be attributed to two radiation-stimulated effects acting simultaneously: development of porosity into the bulk of the sample and carbon atom transport to the surface.
Both of these processes could be driven by the evolution of structural disorder in the material towards the surface of the sample. This leads to stimulation of the development of the porous layer into the bulk of the material and to diffusion of carbon atoms to the surface. Sputtering depths obtained via weighing and via measurement of the CS relative to the non-sputtered areas of the sample (figure 3) increase with the irradiation dose. At the same time, the "mass" depth grows faster with increasing irradiation dose than the "profile" depth. It is also worth noting that, as can be seen in figure 2, the porosity of the surface layer also increases at higher irradiation doses. If, starting from a certain stage of sputtering, the thickness of the porous layer and its porosity were constant, the sputtering depth and mass of sputtered material measured by the two methods would have been the same, and the related sputtering coefficients would have been similar. However, as mentioned above, the depth and mass of sputtered material measured by weighing grow faster than the respective values measured by profilometry. This means that: 1) sputtering of material from the surface of the porous layer is partially compensated by a diffusion flux of carbon atoms to the surface, and 2) the thickness of the porous layer increases with irradiation dose. After the initial irradiation period (5.25·10²⁰ ion/cm²), the ratio of "mass" depth to "profile" depth is kept constant, meaning that the rate of growth of the porous layer was constant for consecutive doses, and the density of all newly formed layers and related sputtering coefficients were the same (figure 4). This conclusion allows determining the rate of growth of porous layers depending on the irradiation dose, and the density of graphite in such layers. If the increase in thickness of the porous layer at a certain stage of irradiation is ΔL_p, S is the surface area of the CS, and the masses of the removed material measured by weighing and profilometry are M_m and M_p, respectively, the density of the part of the porous layer added at this stage of irradiation is determined as:

C_{pore} = C_{graf} - \frac{M_m - M_p}{S \, \Delta L_p} \quad (1)

The density of a porous layer addition after irradiating the sample with a dose between 5.25·10²⁰ and 1.05·10²¹ ion/cm², calculated using expression (1), appears to be 1.49 g/cm³, and after irradiation with a dose between 4.73·10²¹ and 6.3·10²¹ ion/cm², 1.30 g/cm³. These results show that the density of a porous layer is much smaller than that of non-irradiated graphite, which is near 1.8 g/cm³, and decreases with the irradiation dose. Assuming that the sputtering rate of bulk porous layer areas irradiated by the ion flow is significantly lower than the sputtering rate of the surface porous layer areas, the sputtering coefficient of the latter at each stage of sputtering can be determined. In this calculation, Y_pore and Y_S are, respectively, the sputtering coefficients of the pore surfaces and of the non-irradiated surface; C_graf and C_pore are, respectively, the density of graphite and the density of a porous layer; and N_c and N_i are, respectively, the number of carbon atoms sputtered from the irradiated surface and the number of irradiating ions at the current stage of irradiation. The calculations show that the sputtering coefficients of the CS measured both by mass and by depth decrease at higher irradiation doses, but there is a tendency to flatten out at approximately 0.16 at./ion and 0.05 at./ion for the coefficients measured through weighing and profilometry, respectively, after an irradiation dose of 3.15·10²¹ ion/cm².
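To make equation (1) concrete, here is a minimal numeric sketch. Note that the form of equation (1) is itself a reconstruction from the surrounding definitions, and the measurement values below are hypothetical; only the density of non-irradiated graphite (about 1.8 g/cm³) and the CS radius (1.3 mm) come from the text.

import math

C_graf = 1.8             # density of non-irradiated graphite, g/cm^3
S = math.pi * 0.13 ** 2  # CS area, cm^2 (radius 1.3 mm)
delta_L_p = 0.05         # porous-layer thickness increase, cm (hypothetical)
M_m = 4.0e-3             # mass loss by weighing, g (hypothetical)
M_p = 3.2e-3             # mass loss by profilometry, g (hypothetical)

C_pore = C_graf - (M_m - M_p) / (S * delta_L_p)
print(f"density of porous-layer addition: {C_pore:.2f} g/cm^3")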
Conclusion
The erosion and modification of the surface layer of small-grain graphite under irradiation at a temperature of 2050 °C by deuterium ions with an energy of 7.5 keV and an ion flow power density of 30 MW/m², for doses of up to 6.3·10²¹ ion/cm², have been investigated. It was found that a porous layer with a thickness of 2000 µm is formed on the irradiated surface after irradiation by a dose of 6.3·10²¹ ion/cm², while the level of the irradiated surface retreats to a depth of 165 µm relative to the non-irradiated surface, and the average penetration depth of ions into graphite is approximately 120 nm. The sputtering coefficient, which starts at 0.38 at the first stage of irradiation, decreases at higher doses and virtually stabilizes at 0.17 for doses of 3.15·10²¹ ion/cm² and higher. A proposition is made that graphite irradiation at a temperature of about 2000 °C stimulates diffusion of carbon atoms from the bulk of the graphite, leading to the growth of the porous layer at higher doses and, consequently, to a deceleration of the growth of the "crater".
A guide to measuring expert performance in forensic pattern matching

Decisions in forensic science are often binary. A firearms expert must decide whether a bullet was fired from a particular gun or not. A face comparison expert must decide whether a photograph matches a suspect or not. A fingerprint examiner must decide whether a crime scene fingerprint belongs to a suspect or not. Researchers who study these decisions have therefore quantified expert performance using measurement models derived largely from signal detection theory. Here we demonstrate that the design and measurement choices researchers make can have a dramatic effect on the conclusions drawn about the performance of forensic examiners. We introduce several performance models – proportion correct, diagnosticity ratio, and parametric and non-parametric signal detection measures – and apply them to forensic decisions. We use data from expert and novice fingerprint comparison decisions along with a resampling method to demonstrate how experimental results can change as a function of the task, case materials, and measurement model chosen. We also graphically show how response bias, prevalence, inconclusive responses, floor and ceiling effects, case sampling, and number of trials might affect one's interpretation of expert performance in forensics. Finally, we discuss several considerations for experimental and diagnostic accuracy studies: (1) include an equal number of same-source and different-source trials; (2) record inconclusive responses separately from forced choices; (3) include a control comparison group; (4) counterbalance or randomly sample trials for each participant; and (5) present as many trials to participants as is practical.

When examiners get these decisions right, their opinions can help law enforcement and the courts to convict the guilty and exonerate the innocent. When examiners get these decisions wrong, however, their evidence may contribute to wrongful convictions (Garrett & Neufeld, 2009) or failures to identify key suspects in criminal investigations. Landmark reports from the National Research Council (2009), the President's Council of Advisors on Science and Technology (2016), and the American Association for the Advancement of Science (2017) have put a spotlight on such errors, but there is disagreement about how best to quantify them (Albright, 2021, 2022; Dror, 2020; Dror & Rosenthal, 2008; Koehler, 2013, 2017). Here we offer a descriptive guide to measuring human performance in forensic pattern matching disciplines using the framework of signal detection theory.
This guide is intended for cognitive scientists and applied researchers who are interested in measuring human decision making in forensic pattern matching disciplines, for forensic scientists who are interested in understanding how and why scientists measure performance in ways that may deviate from decision-making frameworks used in practice, and for others in the legal system interested in making sense of scientific studies on human performance in forensic science. We begin this guide by introducing signal detection theory and explaining how it can be applied in the specific domain of fingerprint identification. We then introduce several measurement models commonly used by cognitive scientists to quantify human performance: proportion correct, sensitivity, specificity, diagnosticity ratio, d-prime (dʹ), A-prime (Aʹ), and empirical area under the curve (AUC). Following this, we use a resampling method to explore a range of common scenarios that arise when measuring performance in forensic pattern-matching domains and examine how these scenarios might affect measurement.

Of course, we are not the first to explore signal detection theory and we do not intend for this to be a comprehensive introduction to the topic. There are many texts that provide in-depth background to the framework, and we encourage interested readers to seek these out (e.g., Macmillan & Creelman, 2005). This paper is rather a demonstration of how signal detection models can be applied to real-world data and decisions, such as those made by forensic experts like fingerprint examiners. Within this context, we show that some models can drastically distort performance in certain scenarios and therefore the conclusions that are drawn. While this guide is intended to be more descriptive than prescriptive, we also offer practical considerations for some of the measurement problems arising within each scenario. These considerations can help guide the design and critical evaluation of studies of human performance in forensic pattern matching and other contexts where signal detection models are applied to real-world human performance data.

Signal detection in forensic science
People are frequently required to choose between two options when navigating the world: Is that person a threat or not? Am I pregnant or not? Is there a fire or not? These are binary decisions; the answer is either yes or no, and one's judgment is either true or false. Many decisions in forensic pattern matching are also dichotomous. For instance, a firearms expert must determine if a bullet was fired from a particular gun or not. An expert in facial comparison must determine whether a photograph matches a suspect or not. A fingerprint examiner must determine whether a fingerprint was left by a suspect or not. Forensic examiners may at times make non-binary judgments. For example, when analyzing blood spatter, an examiner may need to determine what type of weapon was used, where it made contact, and from what direction. These kinds of non-binary determinations are beyond the scope of this paper. Quantifying performance for binary decisions, however, is central to research on expert decision-making in forensic science: Are experts better than novices? How does procedure A compare to procedure B? How well is this examiner performing (Smith & Thompson, 2019)? Signal detection is helpful for answering these questions.
According to signal detection theory, performance is characterized by how well a system (an individual, group, technique, or department) can distinguish between what it seeks to find (the signal) and what it seeks to filter out (the noise). The ability to tell signal from noise is known as discriminability. In radar operation, for instance, the signal consists of dots on a screen that represent enemy warships, whereas the noise consists of dots on a screen that represent everything else. In forensic pattern matching, an examiner uses their physical senses to judge the degree of similarity between trace evidence and a reference sample (Albright, 2021). They must decide whether the evidence originated from the same source or not, but there is also a correct answer to this question; the evidence either originated from the source in reality or it did not. Thus, signal is when the trace evidence and the reference sample come from the same source. Noise is when the evidence and the reference sample are not from the same source.

To measure how well a person can discriminate between signal and noise, their judgments can be compared with the ground truth. Although no one can know the truth of trace evidence in casework for certain, researchers can develop materials that have been obtained in a way that identifies the source. Controlled experiments allow researchers to gather data on how well people's judgements align with ground truth. These data could be used for the scientific study of human performance, and for operational purposes such as diagnostic accuracy studies, proficiency tests, system-level black box studies, or ongoing management and quality assurance. With this information, signal detection theory then offers a way of quantifying how well observers can detect whether trace evidence came from the same source as a reference sample.

A key benefit of a signal detection approach to measuring human performance is that it allows one to distinguish between accuracy and response bias (Macmillan & Creelman, 1990, 2005). Accuracy refers to the number of correct decisions whereas response bias refers to favoring one outcome over another, such as saying 'signal' more often than 'noise'. Imagine that 1% of the population has a particular disease. It is possible for a doctor to be 99% accurate in their diagnoses simply by saying that every patient does not have the disease. Likewise, if this doctor wanted to detect every person with the disease, they could simply say that every patient has the disease, in which case they would correctly detect the disease 100% of the time. These are examples of extreme response bias, yet in both instances there is no indication that the doctor can effectively diagnose the disease; they have not demonstrated that they can discriminate signal from noise (the presence vs. absence of the disease). Accuracy is confounded by response bias. Using signal detection theory and discriminability resolves this issue. We return to this idea later.

Several studies of forensic performance have employed signal detection theory in their analyses (e.g., Busey et al., 2022; Carter et al., 2020; Growns & Kukucka, 2021; Searston et al., 2016; Tangen et al., 2011; Thompson et al., 2013). More recently, Arkes and Koehler (2022), and Smith and Neal (2021), have also called for widespread adoption of signal detection theory in cognitive forensic research. However, even if many cognitive and forensic researchers do adopt this approach, the models, materials, and study designs that they adopt will differ. To understand and communicate the value of forensic decisions, we need reproducible methods and robust models. In the next section, we run through how to measure performance with signal detection theory using a recent experiment that we conducted with fingerprint experts as an example.

Signal detection in fingerprint identification
Police departments employ fingerprint examiners to identify the source of fingerprints discovered at crime scenes. In this situation, the question is whether this fingerprint came from a specific suspect's finger or not. Throughout their careers, fingerprint examiners spend thousands of hours comparing and inspecting highly structured prints, and then present their findings to factfinders in criminal and civil cases. Prior research has shown that fingerprint examiners outperform novices on a range of perceptual and cognitive fingerprint tasks (Robson et al., 2021; Searston et al., 2016; Searston & Tangen, 2017a, 2017b; Tangen et al., 2011; Thompson et al., 2014; Thompson & Tangen, 2014). Here, we report the results of an original experiment comparing the latent fingerprint matching performance of qualified, court-practicing fingerprint experts to untrained novices using a signal detection framework.

We conducted an experiment to investigate differences in fingerprint comparison performance between professional examiners and novices (preregistered project: https://osf.io/h4tjq/wiki/home). Participants were 44 fingerprint experts and 44 age-, gender-, and education-matched novices. We presented each participant with 24 pairs of prints collected from actual case files that were selected as being very difficult to distinguish. We also knew the ground truth of each fingerprint pair, i.e., whether or not they matched. Moreover, each yoked expert-novice pair was presented with a unique set of 12 same-source prints and 12 different-source prints that were randomly sampled from a larger pool of 48 fingerprint pairs.

On a 12-point scale ranging from 1: sure different to 12: sure same, we asked participants to rate the extent to which they thought the prints came from the same finger or different fingers. Every rating from 1 to 6 was therefore coded as a different-source judgment and ratings from 7 to 12 were coded as a same-source judgment. Participants did not receive feedback on any trials. The task is illustrated in Fig. 1. Note that we are not necessarily recommending that forensic casework decisions be made using a confidence rating such as this. These scales are, however, useful for cognitive scientists and applied researchers interested in measuring human perceptual performance, and how different groups or examiners may differ from one another in their abilities.

To establish how well the fingerprint examiners and novices performed on our latent fingerprint matching task, we can use a signal detection framework. There are four possible choice outcomes when each judgment is matched against the ground truth (see Fig. 2). An examiner can correctly say that two fingerprints from the same source are the "same" (a hit), or incorrectly say they are "different" (a miss); the examiner can also correctly say that two prints from different sources are "different" (a correct rejection), or incorrectly say they are the "same" (a false alarm). Performance may be judged in a variety of ways by adjusting how we tally up the number of decisions that fall into each of these categories.

A person whose judgments consist of only hits and correct rejections performs perfectly, whereas a poor performer frequently makes misses and false alarms. However, telling fingerprints apart, as with most real-world decisions, is not always clear cut. In a set of fingerprints with a large amount of variation, sometimes two prints deposited by the same finger can look very different from one another, and sometimes two prints from different fingers can appear very similar. Same-source pairs and different-source pairs can be confused. This ambiguity can be represented probabilistically via two overlapping signal and noise distributions (see Fig. 3). The less an examiner confuses signal for noise, and noise for signal, the less overlap there is between these distributions, and the better the examiner's performance.

Additionally, a person's response bias, or criteria for making decisions, can range from highly liberal (errs on the side of saying same-source) to highly conservative (errs on the side of saying different-source). A more conservative response bias produces more correct rejections, but also more misses. A more liberal criterion produces more hits, but also more false alarms.

Fig. 1. An example trial from a fingerprint matching experiment. On each trial, participants were shown a crime-scene print on the left and a candidate print on the right. They were asked to rate their confidence from 1 (sure different) to 12 (sure same). In some cases, the two prints were from the same person; in others, the prints were similar but originated from two different people.

Fig. 2. When comparing fingerprints, there are four possible outcomes. When an examiner decides whether two prints came from the same source or not, their decision is compared to the ground truth. If two prints originated from the same source and the examiner says "same", the outcome is a hit, but if they say "different", the outcome is a miss. In contrast, if two prints originated from different sources and a person says "different", the outcome is a correct rejection, but if they say "same", the outcome is a false alarm. If one uses a 12-point rating scale, decisions can be collapsed such that ratings from 1 to 6 are coded as a different-source judgment and ratings from 7 to 12 are coded as a same-source judgment.

The overlap between signal and noise, and the response bias, determine the proportion of hits, misses, correct rejections, and false alarms of a human observer. In practice, the response bias (or decision threshold) that examiners adopt is consequential. For example, Thompson (2023) has demonstrated, using a signal detection model, that even small shifts in response bias can dramatically impact the likelihood of conviction or acquittal.
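As a concrete illustration of this scoring scheme, the short sketch below codes 12-point ratings against ground truth into the four outcomes just described. The trial data are made up for illustration; only the 1-6 / 7-12 cut-off comes from the experiment above.

def score(rating, same_source):
    # rating: 1-12 confidence; same_source: ground truth of the pair.
    says_same = rating >= 7  # 1-6 -> "different", 7-12 -> "same"
    if same_source:
        return "hit" if says_same else "miss"
    return "false_alarm" if says_same else "correct_rejection"

trials = [(11, True), (3, True), (2, False), (9, False)]  # hypothetical
counts = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
for rating, same in trials:
    counts[score(rating, same)] += 1
print(counts)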
We can classify the decisions from the earlier fingerprint task into hits, correct rejections, misses, and false alarms for experts (Table 1) and novices (Table 2). Values in Tables 1 and 2 show that both experts and novices performed better than chance, but both groups made mistakes on occasion. Experts generally fared better than novices, making fewer mistakes overall in terms of both misses and false alarms.

Delving deeper into Tables 1 and 2, we can express performance with several diagnostic accuracies: Positive predictive value (PPV) is the likelihood that the fingerprints came from the same source when the response was "same." In our study, the PPV for the experts was 86%. The negative predictive value (NPV), on the other hand, is the likelihood that the fingerprints came from different sources when the response was "different." In our study, the NPV for the experts was 80%. Predictive values can be useful to factfinders because they express the reliability of the decision made by a forensic practitioner (Mickes, 2015; Smith & Neal, 2021). However, the predictive values in Table 1 do not necessarily reflect the operational decision-making ability of examiners because the task was completed under time constraints, without the usual tools that fingerprint examiners have at their disposal, and outside a broader system. Moreover, for the purposes of determining human performance, sensitivity and specificity are more relevant. Unlike PPV and NPV, sensitivity and specificity are conditioned on ground truth rather than the examiner's judgments, and hence describe their validity.

Together, sensitivity and specificity express how well a person or group can distinguish signal from noise. Sensitivity indicates how likely it is that a person will say "same source" when the fingerprints actually come from the same person. Specificity indicates how likely it is that a person will say "different source" when two fingerprints come from different people. When sensitivity is higher than specificity, the response bias is relatively liberal, whereas response bias is conservative when specificity is higher than sensitivity.

Tables 1 and 2 indicate the relatively comparable sensitivity of experts (77%) and novices (70%), but specificity is higher for experts (88%) than for novices (48%). Expert examiners clearly perform better than novices overall, but examiners are also more careful than novices to declare that two fingerprints are from the same source. Sensitivity and specificity each provide only a partial picture of performance. Consider a scenario in which experts had higher specificity than novices, but lower sensitivity. It would be unclear which group is better. Frequently, researchers will want to distil performance into a single value to unambiguously compare individuals or groups.
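For readers who want the arithmetic spelled out, here is a minimal sketch of these diagnostic accuracies computed from outcome counts. The counts below are hypothetical but were chosen to be roughly consistent with the expert values reported above (sensitivity 77%, specificity 88%, PPV 86%, NPV 80%).

def diagnostics(hits, misses, false_alarms, correct_rejections):
    sensitivity = hits / (hits + misses)                      # P("same" | same source)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    ppv = hits / (hits + false_alarms)                        # P(same source | "same")
    npv = correct_rejections / (correct_rejections + misses)  # P(different source | "different")
    return sensitivity, specificity, ppv, npv

# 44 experts x 12 same-source and 12 different-source trials = 528 of each.
print(diagnostics(407, 121, 63, 465))  # ~ (0.77, 0.88, 0.87, 0.79)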
Common measurement models of performance
The aim of this section is to introduce some commonly used single-value models of human performance. Rather than simply presenting data from the comparison task above to discuss these models, we used a resampling method to generate more stable estimates of performance and come to a better sense of the performance variation. We first obtained the means and standard deviations for the confidence ratings of the experts and novices for the same-source and different-source trials in the fingerprint matching task described earlier. We then randomly sampled datapoints from beta distributions based on these means and standard deviations. Specifically, we sampled confidence ratings for 12 same-source and 12 different-source trials for 44 hypothetical experts and 44 hypothetical novices 100 times over, as if we had conducted the study many times. In Fig. 4, we present the data for proportion correct, diagnosticity ratio, dʹ, Aʹ, and empirical AUC. The rainclouds represent the distribution of scores across the 100 resampled experiments.

Proportion correct
Perhaps the most used measure of performance is proportion correct (or percent correct), which is a tally of the total number of hits and correct rejections divided by the total number of trials. Several forensic expertise studies have used proportion correct as a measure of ability (Bird et al., 2010; Searston & Tangen, 2017a; Tangen et al., 2011; Thompson & Tangen, 2014; White, Dunn et al., 2015). Proportion correct provides an intuitive sense of performance by showing how many times a person answered correctly. As seen in Tables 1 and 2, experts (83%) have a higher proportion correct than novices (59%) on the fingerprint discrimination task. Experts were incorrect on 17% of trials whereas novices were incorrect on 41% of the trials. Figure 4A also shows the distribution for proportion correct across the resampled expert (M = 79%) and novice (M = 57%) data.

Fig. 3. A representation of the relative proportion of hits, misses, false alarms, and correct rejections for signal and noise distributions that overlap. The vertical line in each panel depicts the decision threshold or response bias. In A, the system has no response bias. In B, the system has a conservative response bias and so there are more correct rejections and misses. In C, the response bias is liberal, resulting in a greater number of hits and false alarms.

Diagnosticity ratio
Diagnosticity ratio is a model of performance that combines sensitivity and specificity into one value. More specifically, it is a ratio of the odds of a same-source decision on same-source trials relative to the odds of a same-source decision on different-source trials. To compute the ratio, sensitivity is divided by the inverse of the specificity (i.e., the hit rate divided by the false alarm rate). Diagnosticity ratios have been used in several forensic studies, most notably in comparisons between sequential and simultaneous eyewitness lineups (see Wells & Lindsay, 1985; Steblay et al., 2011).

Taking the data from Tables 1 and 2 reveals that the diagnosticity ratio of experts in the fingerprint matching task was 6.50, and for novices it was 1.35. Experts clearly performed better than the novices according to these ratios. Figure 4B, however, shows a tendency for these odds ratios to take on extreme values, especially if performance is not collapsed across participants. Extreme values occur when the false alarm rate or hit rate approach zero or one. Mickes et al. (2014) and Wixted and Mickes (2018) have also articulated why a diagnosticity ratio is frequently a poor measure of performance for a variety of other reasons, including the influence of response bias. Moreover, the diagnosticity ratio is more closely related to PPV than to discriminability (Wixted & Mickes, 2018). For these reasons, we do not see much utility in using a diagnosticity ratio to gauge forensic performance, and so we do not discuss it further.

dʹ (d-prime)
Performance in signal detection theory is conceptualized as two overlapping Gaussian distributions, one representing signal and the other noise. In any task, however, a decision-maker uses some sort of threshold or criteria to make a decision. In the fingerprint discrimination task, for example, experts erred on the side of caution, saying "different" more frequently than "same", whereas novices said "same" more frequently than "different". If an observer shifts their response bias, however, this can alter the number of hits, misses, false alarms, and correct rejections (see Fig. 3). By extension, the values for proportion correct and diagnosticity change as well. Signal detection models such as dʹ (d-prime), Aʹ, and empirical AUC have been devised to account for differences in response bias.

One of the most widely used signal detection measures is dʹ (Green & Swets, 1966). It is a measure of the distance between the signal and noise distributions, such as those depicted in Fig. 3. If the mean of the signal distribution is one standard deviation away from the mean of the noise distribution, then dʹ is equal to one. A higher dʹ means that the distance between the two distributions is greater, indicating that the observer can better distinguish noise and signal. To calculate dʹ, the standardized false alarm rate is subtracted from the standardized hit rate (Macmillan & Creelman, 2005). By standardizing these values, the model factors in response bias by assuming that the values fall on hypothetical normal distributions. The variances of these distributions are also assumed to be homogeneous. In fact, dʹ monotonically maps onto proportion correct when there is an equal ratio of noise and signal trials (i.e., match and no-match trials) and there is no response bias. dʹ has been used as the key measurement model in several forensic matching studies (e.g., Estudillo et al., 2021; Towler et al., 2017; Vogelsang et al., 2017).

A common issue with using dʹ is that the performance estimates are unbounded, potentially being infinite or undefined when the hit rate or false alarm rate are equal to zero or one. There are a few ways of addressing this issue. One solution is to aggregate data from several participants. In doing so, the chance of obtaining a value of zero or one is reduced since a larger sample is used to calculate the hit and false alarm rates. However, this solution is not viable if a researcher is interested in the performance of each individual, nor does it guarantee that the values will be appropriate after aggregation.
At least two computational solutions are also possible. Before computing dʹ (or even the diagnosticity ratio), a correction can be made. One option is Macmillan and Kaplan's (1985) recommendation of converting all rates of zero to 0.5/n and all rates of one to (n − 0.5)/n, where n is equal to the number of signal or noise trials. Another option is the log-linear method (Hautus, 1995), where 0.5 is added to the number of hits and false alarms and 1 is added to the total number of signal and noise trials. For further discussion of the costs and benefits of these methods, see Stanislaw and Todorov (1999). In this paper, we adopted Macmillan and Kaplan's (1985) correction method. Figure 4C shows the distribution for dʹ across the resampled expert and novice data.

Aʹ (A-prime)
Aʹ (A-prime; Pollack & Norman, 1964) has been proposed as a non-parametric alternative to dʹ. However, the assumption that it is non-parametric has been challenged (Macmillan & Creelman, 1996; Verde et al., 2006; Pastore et al., 2003). Aʹ is nevertheless often used when the signal and noise distributions are presumed to have unequal variance. Rather than modelling the distance between the signal and noise distributions, Aʹ uses a ROC function to model performance. ROC analyses have long been relied on to measure discriminability in medicine (Lusted, 1971; Metz, 1978; Pepe, 2000) and some have suggested they be used for evaluating forensic decision-making (Gronlund et al., 2014; Mickes et al., 2012). Few forensic pattern matching studies use Aʹ, but it is popular in more basic categorization research (see Zhang & Mueller, 2005).

A ROC curve is a two-dimensional plot of the hit rate (y-axis) and false alarm rate (x-axis). Aʹ is calculated using only a single hit rate and false alarm rate, which define a single point on the two-dimensional plane. A series of quasi-ROC curves (lines) can pass through this point, each with a different gradient, which extrapolate how the hit rate and false alarm rate might change with response bias. The area underneath each of these lines is a polygon with a certain area. Aʹ is defined as "the average of the maximum area and minimum area under the proper ROC curve constrained by the hits and false alarms." The greater the area beneath a ROC curve, the better the performance. If the hit rate is one and the false alarm rate is zero, the area underneath the ROC curve would fill up the entire plane and Aʹ is equal to one. Performance is perfect. In signal detection terms, there is no overlap between signal and noise. For a person performing at chance levels, the ROC curve would be a straight line running from the bottom left to the top right of the plane, with an area of .5 underneath. An area of .5 is equivalent to signal and noise distributions that overlap completely. Zhang and Mueller (2005) made a correction to the original computation of Aʹ. This correction computes the area differently depending on where the false alarm rate and hit rate intersect. We use this corrected method here. In instances where the false alarm rate was greater than the hit rate, we used the inverse of both values and subtracted the output from one. Figure 4D shows the distribution for Aʹ across the resampled expert and novice data.
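A minimal sketch of both measures follows: dʹ with the Macmillan and Kaplan (1985) correction described above, and Aʹ in its classic Pollack and Norman (1964) form. Note that the paper itself uses Zhang and Mueller's (2005) corrected Aʹ, which differs from the classic formula shown here, and the hit and false-alarm counts are hypothetical.

from statistics import NormalDist

def corrected_rate(k, n):
    # Macmillan & Kaplan (1985): replace rates of 0 and 1 with
    # 0.5/n and (n - 0.5)/n, respectively.
    if k == 0:
        return 0.5 / n
    if k == n:
        return (n - 0.5) / n
    return k / n

def d_prime(hits, n_signal, false_alarms, n_noise):
    z = NormalDist().inv_cdf  # standardize via the inverse normal CDF
    return z(corrected_rate(hits, n_signal)) - z(corrected_rate(false_alarms, n_noise))

def a_prime(h, f):
    # Classic A' for h >= f; invert both rates otherwise.
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

print(d_prime(11, 12, 1, 12))    # hypothetical: 11/12 hits, 1/12 false alarms
print(a_prime(11 / 12, 1 / 12))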
Empirical AUC
Aʹ and dʹ are examples of theoretical discriminability because they are not solely based on empirical data. Performance at one decision threshold is extrapolated to others by making assumptions. Using theoretical estimates of performance allows one to account for changing response biases given only a single hit rate and false alarm rate. However, these assumptions can be erroneous. The empirical area under the curve (AUC) is a signal detection model of performance that does not rely on underlying assumptions about signal and noise.

Like Aʹ, empirical AUC is based on a ROC curve, but its computation requires a hit rate and false alarm rate at multiple decision thresholds rather than just one. Recall that the participants in our fingerprint task from earlier provided a response about whether two prints came from the same source using a 12-point scale (1 = "sure different", 12 = "sure same"). This response scale allows us to compute a hit rate and false alarm rate at 12 different points, forming a curve when plotted on a two-dimensional plane. The area under a ROC curve represents performance; an area of one indicates perfect performance whereas a value of .5 indicates chance performance. Empirical AUC can be computed in several ways, but we used the pROC package (Robin et al., 2011) in R, which uses the trapezoidal rule. Conceptually, this method involves adding together the areas of several trapezoids using the points along the ROC curve. Figure 4E shows the distribution for empirical AUC across the resampled expert and novice data.

Several forensic performance studies have used empirical AUC as a key measurement model (Mickes et al., 2012; White, Phillips et al., 2015; Wixted & Mickes, 2018). Models of discriminability such as dʹ and Aʹ are useful when one knows the underlying distribution of signal and noise. However, for real-world decisions, some scholars advocate for atheoretical models like empirical AUC (Wixted & Mickes, 2018) because no assumptions need be made about how signal and noise vary in reality.

Summary and considerations
There are several factors that researchers might consider when deciding which performance model to use. If underlying latent variables are of interest, then dʹ and Aʹ may be preferable, whereas an empirical measure like AUC may be more suitable when one is interested in real-world performance. If a parametric measure is unsuitable, then Aʹ and AUC may be preferable (whether Aʹ is truly non-parametric, however, has been questioned). As continuous measures of similarity (e.g., 1 to 12) are more sensitive than dichotomous measures (e.g., same-source/different-source), empirical AUC may be preferred over dʹ or Aʹ. That said, a confidence rating scale may not reflect the options available to examiners in routine casework. The underlying assumptions of each model can have important implications for how researchers measure performance and the conclusions they might draw (for discussion, see Brady et al., 2023). There are also other performance models that we have not explored here (see, for example, Macmillan & Creelman, 1990; Rotello et al., 2008; Verde et al., 2006).
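As a minimal sketch of the idea, the function below builds an empirical ROC curve by sweeping a decision threshold across the 12-point scale and integrates it with the trapezoidal rule. This is a simplified stand-in for what the pROC package does, not pROC itself, and the ratings are hypothetical.

def empirical_auc(same_ratings, diff_ratings):
    points = [(0.0, 0.0), (1.0, 1.0)]
    for c in range(12, 0, -1):  # sweep the threshold across the rating scale
        hit = sum(r >= c for r in same_ratings) / len(same_ratings)
        fa = sum(r >= c for r in diff_ratings) / len(diff_ratings)
        points.append((fa, hit))
    points.sort()
    # Trapezoidal rule over consecutive ROC points.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

same = [12, 10, 9, 9, 7, 5]  # hypothetical ratings on same-source trials
diff = [2, 3, 3, 5, 6, 8]    # hypothetical ratings on different-source trials
print(empirical_auc(same, diff))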
Whatever decision a researcher makes with respect to these models, the reasons for that decision, as well as how the model was computed and corrected, should be made explicit. For example, were the data aggregated among participants? Were extreme values corrected? How was the AUC calculated? We generally recommend that all analytic decisions and data be made as transparent as possible so that researchers can better understand and replicate each other's work. These choices should also ideally be made prior to viewing and analyzing the data to reduce selective reporting of results (Chin et al., 2019), a practice known as preregistration (see also the case for Registered Reports outlined by Chin et al., 2020). For the remainder of this paper, we explore how various performance models are affected by participant response patterns and experimental design variables.

Resampling method
For the remainder of the paper, we explore a variety of scenarios researchers may frequently encounter. We use a data resampling method to graphically demonstrate how performance models (e.g., proportion correct, dʹ, Aʹ, and AUC) are affected by variables such as response bias, prevalence, inconclusive responses, ceiling effects, case sampling, and number of trials. We will discuss how these variables can affect interpretations about expert performance in forensics. Knowing the kinds of situations that can have a significant impact on performance can help researchers make more informed decisions about which models to use and how to avoid drawing inaccurate conclusions from their observations. The general methodology was quite similar in each section. It extends the data resampling method described earlier. We obtained the means and standard deviations for the expert and novice confidence ratings in the fingerprint discrimination task introduced above. We used these values to define distributions from which we then randomly sampled data points (confidence ratings) for 44 experts and 44 novices many times over, effectively simulating the experiment 100 times. However, in each section we also modify a parameter by either gradually varying the means, varying the number of trials, or removing or replacing certain values, to see how these changes affect the performance models. Of course, each demonstration rests on the data and distributions from the fingerprint comparison task, but we intentionally chose to base our work on this dataset to ensure our methods have direct relevance to real-world human performance studies in forensics.

Response bias
Rachel wants to compare experts and novices on a fingerprint matching task like the one described earlier. She presents 12 pairs of prints that are from the same source, and 12 pairs of prints that are from different sources, to 44 experts and 44 novices. She notices, though, that many experts are very hesitant to say "same," so on most trials, they say "different." Novices, on the other hand, don't seem to have much of a bias in either direction. Do differences in response bias present a problem?

When introducing the various models of performance in previous sections, we noted that dʹ, Aʹ, and empirical AUC, which are grounded in signal detection theory, are supposed to account for response bias, whereas measures like proportion correct do not. There are many forensic disciplines in which examiners invariably exhibit a response bias. Fingerprint examiners, for instance, tend to err toward saying "different source" more than "same source" (Tangen et al., 2011), whereas firearms examiners appear to be more liberal in their responses (Mattijssen et al., 2020). For the purposes of evaluating human performance in pattern matching, however, response bias must be disentangled from accuracy because response bias can confound accuracy (Smith & Neal, 2021).

To what extent does a more liberal response bias (saying "same" more), or a more conservative response bias (saying "different" more), affect estimates of performance? We put this claim to the test by gradually either increasing or decreasing the response bias of the expert participants. We gradually varied their mean confidence ratings for the same-source and different-source cases (see Fig. 5). The middle plot depicts expert and novice performance with values corresponding to the actual experiment. The plots further to the left illustrate expert performance as their responses become more conservative; for each successive plot, the mean confidence ratings for the same-source and different-source trials both decrease by .5 on the 12-point scale. The plots further to the right illustrate the change in performance as responses become more liberal; the mean confidence ratings for both same-source and different-source trials increase by .5 for each successive plot.

Figure 5 shows that experts consistently outperform novices across all performance models despite changes to their response bias. However, the model that one uses can influence the extent of the expert-novice difference somewhat. For example, the expert-novice differences as measured by Aʹ and dʹ appear least affected by changes to response bias. Proportion correct, on the other hand, appears to slightly underestimate the expert-novice difference as the experts' response bias becomes more pronounced. Empirical AUC for experts also appears to increase slightly as responding becomes more conservative.

Summary and considerations
Our data indicate that the chosen model had little effect on performance despite shifts in the experts' response bias. That said, dʹ and Aʹ appeared slightly more robust than proportion correct and empirical AUC to these shifts. There are, however, caveats to these results. The experts in our original dataset were generally quite confident in their decisions on our 12-point scale, meaning that the underlying signal and noise distributions for our data were not precisely what some theoretical models might assume. Experts also significantly outperformed novices on the task, and while this is not surprising in expert-novice research, a large performance gap can mean that small changes in either group's performance appear less striking. Relatedly, the shifts we made to the experts' response bias may not have been extreme enough to greatly affect performance estimates. Although not obvious in our case, signal detection models (like dʹ, Aʹ, and empirical AUC) generally have more utility than summary metrics like proportion correct when dealing with extreme response bias. We explain why this is so in the introduction. Moreover, the problem of conflating response bias and accuracy can be compounded by differences in prevalence. We turn our attention towards such a scenario next.
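Before moving on, here is a minimal sketch of the resampling procedure described above: ratings are drawn from beta distributions matched to a group's mean and SD, rescaled to the 1-12 scale, and then scored. The group parameters below are hypothetical, not the actual values from the experiment.

import random

def beta_params(mean, sd):
    # Convert a mean and SD on [0, 1] into beta shape parameters.
    k = mean * (1 - mean) / sd ** 2 - 1
    return mean * k, (1 - mean) * k

def sample_rating(mean_1_12, sd_1_12):
    m, s = (mean_1_12 - 1) / 11, sd_1_12 / 11  # rescale the 1-12 scale to [0, 1]
    a, b = beta_params(m, s)
    return 1 + 11 * random.betavariate(a, b)

# One simulated expert: 12 same-source and 12 different-source trials.
same = [sample_rating(9.5, 2.0) for _ in range(12)]  # hypothetical parameters
diff = [sample_rating(3.0, 2.0) for _ in range(12)]
hit_rate = sum(r >= 7 for r in same) / 12
fa_rate = sum(r >= 7 for r in diff) / 12
print(hit_rate, fa_rate)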
Prevalence rates

Sam intends to compare fingerprint matching experts and novices. However, he only has a limited number of fingerprints from different sources. He decides to present participants with more trials from the same source than from different sources. Sam also expects that experts will respond more conservatively than novices (saying "different" more frequently than "same"), whereas novices will be more liberal in what they consider to be from the same source. When comparing experts and novices, does an unequal proportion of same-source and different-source trials pose a problem?

The effect of response bias can be exacerbated when the ratio of same-source and different-source trials is unequal. Growns and Kukucka (2021) have shown that when the proportion of same-source trials is low, the proportion of misses increases. However, when the proportion of same-source trials is high, the proportion of false alarms increases. Imagine a situation in which 90% of trials come from the same source and only 10% come from different sources. Saying "same" on every trial (i.e., an extremely liberal response bias) would allow a person with no knowledge of or expertise with fingerprints to be 90% correct in their decisions. In contrast, a competent examiner who responds somewhat conservatively, to avoid false alarms, may have a smaller proportion correct than this novice merely because the majority of print pairs originated from the same source.

In the real world, the prevalence of signal to noise may vary significantly. In baggage screening, for instance, potentially dangerous items appear in only a fraction of cases (Van Wert et al., 2009; Wolfe et al., 2005, 2007). Several forensic studies (e.g., Growns & Kukucka, 2021; Growns et al., 2022; Papesh et al., 2018; Weatherford et al., 2021) have demonstrated that performance can vary significantly based on the proportion of same-source to different-source trials. In fields like fingerprint identification and forensic face matching, the ground truth proportion of same-source versus different-source cases cannot be known for certain. However, an unequal number of signal trials and noise trials will affect measures of performance because sensitivity and specificity are given different weighting.

We resampled data from distributions based on the means and standard deviations for confidence in the fingerprint matching experiment described earlier. Note that the experts had a somewhat conservative response bias in this task whereas novices were more liberal. For each hypothetical participant, confidence ratings for 24 trials were sampled. However, we either increased or decreased the number of same-source trials relative to different-source trials. We present the results in Fig. 6. The middle plot depicts performance when the number of same-source and different-source trials was equal (12 of each). As the plots move to the left, the number of trials from the same source decreases by three, and it increases by three moving right. The leftmost plot depicts performance when just three of the 24 trials (12.5%) were from the same source, whereas the rightmost plot depicts performance when 21 of the 24 trials (87.5%) were from the same source.
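A stripped-down version of this prevalence manipulation can be sketched as follows. The means, standard deviation and conservative criterion are placeholders chosen for illustration, not the experiment's actual values; the rank-based AUC function is a standard construction, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(signal, noise):
    """P(signal rating > noise rating), counting ties as half."""
    s, n = signal[:, None], noise[None, :]
    return np.mean(s > n) + 0.5 * np.mean(s == n)

def simulate(n_same, n_total=24, mu_same=9.0, mu_diff=4.0, sd=2.5, crit=7.5):
    same = rng.normal(mu_same, sd, n_same)             # same-source ratings
    diff = rng.normal(mu_diff, sd, n_total - n_same)   # different-source ratings
    # A conservative criterion (above the 6.5 midpoint) mimics the experts' bias.
    pc = (np.sum(same > crit) + np.sum(diff <= crit)) / n_total
    return pc, empirical_auc(same, diff)

for n_same in (3, 12, 21):
    pcs, aucs = zip(*(simulate(n_same) for _ in range(100)))
    print(n_same, round(np.mean(pcs), 3), round(np.mean(aucs), 3))
```

Run with these placeholder values, proportion correct drops as same-source trials come to dominate (the conservative criterion produces more misses), while the rank-based AUC stays essentially flat, which is the qualitative pattern the text describes.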
Figure 6 shows that for proportion correct, experts performed much closer to novice levels when most trials came from the same source.Conversely, the difference between experts and novices increased when there were more different-source trials.Changing the ratio of trials had relatively little effect on dʹ, Aʹ and empirical AUC, but the group means at extreme ends became more variable. Summary and considerations The proportion of same-source versus different-source trials in an experiment can have a significant impact on an individual's or a group's performance when using proportion correct.To eliminate the confound that response bias may have on performance, researchers could ensure that the proportion of same-source and different-source trials is roughly equal.Relative to proportion correct, signal detection models such as dʹ, Aʹ or empirical AUC are also less affected when there is an unequal proportion of same-source versus different-source trials. Inconclusive responses Chloe wishes to compare experts and novices on a fingerprint matching task.She offers three response options to participants: same-source, different-source, and inconclusive. A response that is inconclusive would suggest that based on the information provided, the participant cannot determine whether the prints match.Because there are three response options, Chloe finds it challenging to apply signal detection theory to her data and is concerned that the different inconclusive response rates may present confusion. In forensic disciplines such as firearms, shoe marks, handwriting, and fingerprints, it is typical for examiners to reach an inconclusive judgment.When an examiner cannot determine whether evidence originated from the same source as a reference sample, the evidence is deemed inconclusive.The option to respond "inconclusive" allows examiners to refrain from making a determination and mitigates the possibility of a false identification or false exclusion (Arkes & Koehler, 2022).So far, we have discussed scenarios where examiners only have two options: same source or different source.What happens when a third option is introduced?We can take the expert data from the prior fingerprint matching experiment and classify all ratings of six and seven (ratings of least confidence) as "inconclusive" to determine how inconclusive judgments could impact diagnostic accuracy (see Table 3). Comparing Tables 1 and 3 reveals that the addition of an inconclusive response option boosts both the PPV and NPV.This makes sense given that the more challenging trials in which examiners were less confident have been removed from the calculation.As indicated previously, these predictive values may be of most relevance to factfinders since they convey a sense of how certain one should be about a conclusion.Permitting examiners to respond with "inconclusive" may be advantageous in court settings since it appears to increase confidence that a determination (if conclusive) will be in line with ground truth.However, this trade-off is offset by the fact that some cases that would have been correctly judged to be from the same source or from different sources are now classified as inconclusive. 
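The predictive values under discussion reduce to simple ratios of counts. A minimal sketch follows, with hypothetical counts standing in for the Table 1 and Table 3 entries, which are not reproduced in this text.

```python
# Hypothetical counts after recoding low-confidence ratings as "inconclusive".
same_id, same_inc, same_excl = 180, 40, 20   # same-source trials
diff_id, diff_inc, diff_excl = 15, 45, 200   # different-source trials

ppv = same_id / (same_id + diff_id)        # P(same source | "identification")
npv = diff_excl / (diff_excl + same_excl)  # P(diff source | "exclusion")
ippv = same_inc / (same_inc + diff_inc)    # P(same source | "inconclusive")
print(round(ppv, 2), round(npv, 2), round(ippv, 2))
```

Because the inconclusive option siphons off the hardest trials, PPV and NPV computed over the remaining conclusive decisions rise, exactly as the comparison of Tables 1 and 3 shows.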
The inconclusive positive predictive value (IPPV) in Table 3 suggests that when an examiner says "inconclusive," the chance that the two impressions came from the same source is 47%, which means there is a 53% chance that the fingerprints came from different sources (inconclusive negative predictive value; INPV).Factfinders might therefore be less confident in an instance where an examiner says "inconclusive."When a researcher has data on how many trials an examiner responds with "inconclusive", then we recommend that this be taken into account when measuring PPV and NPV because these data are relevant.If inconclusive was a response option and not chosen, then a conclusive same-source or different-source decision should be more convincing. The values for sensitivity and specificity in Table 3 are lower than in Table 1 because the inconclusive responses (in Table 3) are added to the tally in the denominator but not the numerator when calculating each performance estimate.That is, an inconclusive decision reflects neither a hit (numerator for calculating sensitivity) nor a correct rejection (numerator for calculating specificity), but they are nonetheless a decision to be counted in the base rates for each true state.There is currently some debate over whether or not inconclusive responses should be counted as errors (see Arkes & Koehler, 2022;Biedermann & Kostoglou, 2021;Dror & Scurich, 2020;Morrison, 2022).Regardless of their classification, inconclusive decisions are decisions nonetheless, and a full consideration of examiners' performance ought to take them into account.Researchers interested in validating a decision-making system, forensic methodology, or new processes in a particular forensic laboratory, for example, may be concerned by these changes to sensitivity and specificity because they suggest that including an inconclusive response option can artificially inflate or reduce error rates.Where inconclusive responses have been allowed, one solution to quantifying performance is simply to subdivide the remaining outcomes.Using Table 3, of the same-source trials that were not identified, 56% were misses and 44% were inconclusive.Of the different-source trials that were not excluded, 36% were false positives and 64% were inconclusive.However, this solution does not make it easy to compare performance between examiners, groups, or techniques. 
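The denominator logic described above, and the subdivision of the remaining outcomes, can be written out in a few lines. The counts are the same hypothetical ones as in the previous sketch, repeated so this snippet stands alone; they are not the paper's Table 3 values.

```python
same_id, same_inc, same_excl = 180, 40, 20   # hypothetical same-source outcomes
diff_id, diff_inc, diff_excl = 15, 45, 200   # hypothetical different-source outcomes

# Inconclusives stay in the denominator but never enter the numerator.
sensitivity = same_id / (same_id + same_inc + same_excl)
specificity = diff_excl / (diff_id + diff_inc + diff_excl)
print(round(sensitivity, 3), round(specificity, 3))

# Subdividing the non-identified same-source outcomes, as in the text:
not_identified = same_inc + same_excl
print(round(same_excl / not_identified, 2),   # share that were misses
      round(same_inc / not_identified, 2))    # share that were inconclusive
```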
Consider that by responding "inconclusive" to every case they see; examiners can avoid making any mistakes at all (in the sense of true misses and false alarms).Though this example is extreme, we can turn to a real-world study by Bird and colleagues (2010) in which professional handwriting examiners were compared to novices in their ability to distinguish between genuine and disguised handwriting samples.In this situation, it would be important to determine how effectively each group can distinguish genuine samples (signal) from disguised samples (noise).In each case, however, participants were given three response options: identify (same source), exclude (different source), or inconclusive (unable to identify or exclude).Professional examiners were correct on 73% of the trials and responded "inconclusive" on 23% of the trials.Novices were correct on 80% of trials and responded "inconclusive" on 8% of trials.The number of correct responses by novices was higher than that of examiners; did novices therefore do better than examiners?Or did examiners do better because they made errors only 4% of the time, whereas novices made errors 12% of the time?Based on the above findings, Bird and colleagues drew the conclusion that handwriting expertise requires the ability to determine when there is sufficient information to make a determination.However, a researcher could simply instruct novices to make a conclusive decision only when they are extremely confident, which would likely increase the number of inconclusive decisions.If judging sufficiency were an ability, it ought to depend much less on the characteristics or instructions of the task.This behavior might be better characterized as a willingness or bias to say 'inconclusive'.The performance of a confident participant, who is more likely to make a call, and a cautious participant, who is less likely to make a call, depends a lot on how inconclusive decisions are factored into the overall evaluation of performance.We explored how inconclusive responses affect performance estimates (see Fig. 7).Once again, we used real expert and novice fingerprint matching data, but gradually increased the experts' propensity to respond with "inconclusive". The data underlying the leftmost plot in Fig. 7 offers a baseline where none of the examiner or novice responses were replaced with "inconclusive."However, the plots further to the right depict performance as more ratings are substituted for "inconclusive."All confidence values between 5.5 and 7.5 were coded as inconclusive for the plots second from the left.All ratings between 4.5 and 8.5 were coded as inconclusive for the plots third from the left, and so on.Importantly, trials where experts responded with "inconclusive" were not included in computing performance. Figure 7 illustrates that when inconclusive trials are eliminated from the calculation of performance models, performance appears to improve.Trials judged to be inconclusive are challenging by definition, so removing them makes performance look better.When participants are able to provide inconclusive responses, it would therefore be misleading to use signal detection models, as any value would be affected by how willing a participant is to make a call.Signal detection theory relies on a binary outcome, but this is no longer true when inconclusive decisions are allowed. Moreover, a judgment of similarity (e.g., same-source vs. 
different-source) is distinct from a judgment of whether sufficient evidence exists to make a call (conclusive evidence vs. inconclusive evidence).

Even without knowing what someone would have decided if they had not said "inconclusive," it could be assumed that when someone says "inconclusive," they are truly undecided about whether the evidence comes from the same source or a different source. Given this assumption, we can now include all trials, including inconclusive decisions, when calculating performance. In Fig. 8, we display the same data as in Fig. 7, but all inconclusive responses have now been substituted with a confidence rating of 6.5 (the midpoint between 1 and 12) when calculating empirical AUC, and a score of 0.5 for accuracy (the midpoint between 0 and 1). None of the novice responses were substituted with "inconclusive" as these serve as a baseline. Moving further right in Fig. 8, an increasingly wider range of expert responses were re-labeled as "inconclusive."

For all models depicted in Fig. 8, expert performance now decreases as the proportion of inconclusive responses increases, approaching novice levels with each consecutive plot. Whereas removing all inconclusive trials can inflate performance, treating an inconclusive response as a coin toss in the mind of the observer reduces performance. An examiner may not be completely on the fence when they say "inconclusive," so adding randomness when a person is better than chance can only harm performance. When there is a choice of whether to include or exclude evidence, an inconclusive response does not necessarily mean that an examiner is completely unsure. Rather, it indicates that they have low confidence in making an accurate decision and may be taking precautions to avoid false positives and misses.

Summary and considerations

Including an inconclusive response option when measuring performance is problematic because performance can be artificially inflated or reduced depending on how the inconclusive responses are handled. Deciding whether two impressions came from the same source is different from deciding whether there is sufficient evidence to make such a decision. The latter judgment is about the preponderance of evidence whereas the former is about source. Researchers might therefore want to collect a forced-choice response (either on a dichotomous scale or a continuous scale) about source (e.g., same source/different source, identification/exclusion) separately from a response about whether there is sufficient evidence. Inconclusive responses can, for example, be collected before or after the source question using either a two-response question (i.e., conclusive or inconclusive) or a three-response question that reflects casework (i.e., exclusion, identification, inconclusive). Including both questions allows one to learn how many cases (and which ones) evoke an inconclusive response, and therefore to compute predictive values that are useful to factfinders; performance can then also be measured using signal detection theory in a way that is not undermined by the presence of a third response option.

Fig. 7 Expert (purple) and novice (green) performance as if experts become progressively more likely to say "inconclusive." Rainclouds depict the distributions of participants' scores across 100 'simulated' experiments where each drop depicts a group mean. The connected red points represent the average of the group means. In the plot on the far left, experts never respond with "inconclusive". As the plots move right, experts respond "inconclusive" more often: the interval of confidence ratings coded as "inconclusive" increases by two points (on a 12-point scale) for each successive plot. Trials rated as "inconclusive" were excluded from performance calculations. Dashed lines represent chance performance.

Task difficulty

Matt wants to see how well a group of examiners performs compared to novices on a fingerprint matching test with 24 cases, half of which are from the same source and half are from different sources. The expert group is correct in virtually every instance, and he's impressed by their performance. But then Matt gives the same task to a group of undergraduate students, and he finds that they also do extremely well. Matt is now unsure whether the experts are actually better than the novices, or whether his test is a poor assessment of their abilities.

Forensic proficiency tests have come under fire in recent years for being too simple. For instance, professional examiners performed exceptionally well on proficiency tests created by Collaborative Testing Services (CTS), but it was later found that even people with no formal experience with fingerprints could identify many of the test cases correctly (Smith, 2019). Many forensic proficiency tests are too easy (Koehler, 2017). To be considered an expert in a field, one must be able to perform better than untrained individuals in situations where the expert claims to be proficient. When experts and novices achieve similar results on a test, either the experts are not true experts, or the task is not a good measure of their expertise.

We explored how the performance of experts and novices can differ as a test becomes easier (see Fig. 9). We again used the means and standard deviations from the fingerprint task from earlier. Baseline performance is illustrated by the leftmost plots in Fig. 9. Each plot moving to the right shows performance as if the task were made easier. For the same-source trials, the mean for each group was increased to be 25% closer to 12 for each successive plot, and the means for the different-source trials were gradually decreased to be 25% closer to 1.

Experts outperformed novices in the real world (leftmost plots in Fig. 9) across all performance models. However, as the trials become easier on average, both experts and novices perform closer to ceiling, and they become virtually indistinguishable across all measurement models. A test's ability to differentiate between experts and novices is compromised if everyone, experts and novices alike, can achieve a similar level of performance. A similar problem can arise if the test is too difficult, as both experts and novices may struggle to pass it. Because the groups are not as easily distinguishable as they should be, the test fails as a measure of skill.
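The ceiling effect just described is easy to reproduce in miniature. In the sketch below the group means are invented placeholders, not the fingerprint experiment's values; each step pulls the means 25% closer to the scale endpoints, mirroring the manipulation behind Fig. 9.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc(mu_same, mu_diff, sd=2.5, n=12):
    """Proportion correct with the scale midpoint (6.5) as the decision rule."""
    same = rng.normal(mu_same, sd, n)
    diff = rng.normal(mu_diff, sd, n)
    return (np.sum(same > 6.5) + np.sum(diff <= 6.5)) / (2 * n)

mu_s_e, mu_d_e = 9.0, 4.0   # hypothetical expert means (same/different source)
mu_s_n, mu_d_n = 7.5, 5.5   # hypothetical novice means
for step in range(4):
    print(step, round(pc(mu_s_e, mu_d_e), 2), round(pc(mu_s_n, mu_d_n), 2))
    # Make the task easier: push means 25% closer to the endpoints 12 and 1.
    mu_s_e += 0.25 * (12 - mu_s_e); mu_d_e -= 0.25 * (mu_d_e - 1)
    mu_s_n += 0.25 * (12 - mu_s_n); mu_d_n -= 0.25 * (mu_d_n - 1)
```

Within a few steps both groups approach 1.0, at which point the expert-novice gap is no longer visible in any summary score.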
Summary and considerations An expert is someone who consistently outperforms the great majority of other people in a given domain.If a test fails to differentiate between experts and novices, either the experts are not really experts, or the test is a poor indicator of domain expertise.We cannot know which of these propositions is true if a test is given to a group of experts but not to a novice control group.Researchers might consider including a control group when measuring performance in forensic domains, or when validating a test.Likewise, pilot testing the difficulty of test trials helps ensure that a test is neither too easy nor too challenging.Because everyone, including non-experts, will appear to perform well on a simple test, it will be impossible to identify genuine expertise when it exists.A test that is overly difficult will fail to detect true expertise when it exists, as everyone, including experts, will appear to perform poorly.A good test reveals variance in performance. Trial sampling Brooklyn designs an experiment to see if her training intervention can increase people's performance on a matching task.She gives each participant in her study the same fixed sequence of 24 trials before training, and then tests them again after training with a completely different fixed sequence of 24 trials.Brooklyn discovers that the group improved from pre-to post-test, but she is uncertain about whether the source of these improvements was the training intervention or because different cases were used at pre-and post-test. So far, we have focused on situations in which experts are compared to novices.In contrast, Brooklyn tests a single group of novices both before and after her training intervention.The implications of this section are relevant for any study, but they are especially important when dealing with a small sample size or a limited number of trials.Confounds can be caused by differences in the test cases given to different groups or at different times.In Brooklyn's case, we don't know if the trials given before training are harder or easier than the ones given at the end of training.A disproportionate number of easy (or hard) trials in one test, but not another, can skew conclusions about the effectiveness of the training intervention.Maybe the subjects were lucky enough to be tested with more difficult trials first, followed by easy trials after training, or vice versa. To demonstrate, we again used the novice means and standard deviations from the earlier fingerprint experiment as the basis for our resampling method.In Fig. 10, novice performance at pre-test is displayed in green, and performance at post-test is displayed in purple.The leftmost plot shows a situation where neither group had particularly easy trials and the two distributions overlap almost entirely.This makes sense as we are assuming that the training intervention had no effect on performance.However, we systematically increased the number of easy trials for the post-test group in each successive plot moving right.Specifically, all participants responded with 1 ("sure different") on a certain number of non-match trials.We increased the number of easy trials by one for each plot moving right.In the rightmost plot, four of the 24 trials that the post-test group received were easy.The values displayed above each plot indicate the p-value for the median t-test statistic across the 100 resampled experiments. 
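The resampling procedure behind Fig. 10 can be approximated in a few lines. The 62% base accuracy and the "always correct" coding of easy trials are placeholders of ours, not the novices' actual rate or the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def group_pc(n_easy, n_participants=44, n_trials=24, p_correct=0.62):
    """Mean accuracy per participant; 'easy' trials are always answered correctly."""
    normal = rng.binomial(n_trials - n_easy, p_correct, n_participants)
    return (normal + n_easy) / n_trials

# 100 resampled "experiments": no true training effect, but 3 easy post-test trials.
p_values = [ttest_ind(group_pc(0), group_pc(3)).pvalue for _ in range(100)]
print(np.median(p_values))  # often below .05 despite no real improvement
```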
Figure 10 reveals that even if a training intervention did not improve performance in reality, a small increase of just three easy trials out of 24 (12.5%) at post-test can result in a significant difference in performance between pre-test and post-test more than half the time.In the case of empirical AUC, just two trials were required to produce a consistent significant difference.Using fixed sequences and not providing a control group may result in unintended differences in the cases presented to different groups or conditions.These differences leave the door open for confusion, which limits what researchers can infer about performance. Summary and considerations Fixed sequences can make it difficult for researchers to determine if a difference between two groups, or two timepoints, was caused by an interesting effect or intervention, or whether it was simply due to a difference in the trials presented.There are a variety of solutions to this problem.First, as mentioned in the preceding section, researchers could include a control group in their study design.Even if fixed sequences are used, differences in the trials can be controlled for if participants who receive training are compared to another group that receives no training at all, or another intervention altogether.Researchers can also counterbalance which fixed sequence a participant sees at each timepoint.Some can receive Sequence A at pre-test and Sequence B at post-test, whereas others can receive B first and then A. Alternatively, one can generate a completely unique, random sequence of trials for each participant, which can be sampled from a larger pool.Though this sampling method may increase noise, it will increase the generalizability of a study's results because they would be based on a broader cross-section of trials.Researchers could even randomize sequences whilst also ensuring that a member of each group is presented with the same randomized sequence as a member of the other group.This method ensures that results are generalizable while also reducing noise.In fact, in the earlier fingerprint experiment, members in each matched expert-novice pair were shown the same unique, randomized sequence of trials.With all this said, however, fixed sequences can be useful for detecting differences with the greatest sensitivity, particularly in research on individual differences (see Mollon et al., 2017). Number of trials Jason is aware of Brooklyn's earlier predicament; he knows that the disparities between pre-and post-test performance could be due to a few particularly easy trials.Jason wants to avoid this problem, so he decides to increase the number of trials in his experiment.He suspects that the effect of the easy trials will be negated in the aggregate because these easy trials will have proportionally less impact on overall performance. To rule out the possibility that any apparent differences between groups or timepoints are due to chance, researchers may choose to include additional trials in their test.However, participants in high-profile tests like those discussed earlier may only be exposed to fewer than a dozen trials.When evaluating performance, having fewer trials means that performance estimates are less reliable so researchers should be less confident in their conclusions and generalizability of such studies. 
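The underlying intuition is simple sampling variance: an observed accuracy based on n trials fluctuates around the true ability with a spread that shrinks roughly as one over the square root of n. A minimal sketch, assuming a true 62% ability:

```python
import numpy as np

rng = np.random.default_rng(3)
for n_trials in (12, 24, 48, 96, 192):
    # Spread of a participant's observed accuracy around a true 62% ability.
    estimates = rng.binomial(n_trials, 0.62, 10_000) / n_trials
    print(n_trials, round(estimates.std(), 3))
```

With 12 trials the standard deviation of the estimate is around .14 of the accuracy scale; even at 96 trials it is still near .05, which is why many trials are needed before a handful of easy items stops mattering.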
We explored how many trials might be required to counteract the influence that a small number of easy trials can have on pre-post performance gains.We used the mean and standard deviation of novice performance in the earlier fingerprint task and assumed that the hypothetical training intervention has no effect on performance.Thus, the groups should not have differed statistically with randomly sampled cases.Four easy trials were sampled at post-test whereas no easy trials were included at pre-test.To mimic the easy trials, we simply ensured that four of the different-source trials received a rating of 1 ("sure different") from each hypothetical participant.The data are presented in Fig. 11 where the green distributions represent pre-test performance, and the purple distributions represent post-test performance.For each plot moving right, we doubled the total number of trials that we sampled. Figure 11 shows a lot more variation in performance when there are fewer trials.In addition, the gap between the distributions widens when the total number of trials is small and narrows when the total number of trials increases.Easy trials have less influence on performance when they account for only a small proportion of overall trials.However, up to 96 trials are needed for performance to become non-significant more than half of the time for proportion correct, dʹ and Aʹ.For AUC, up to 192 trials are required.In other words, even a small number of easy trials in one test set and not another can have a major impact on performance, and this issue might not be easily solved by simply adding more trials. Summary and considerations When only a few trials are used to assess performance, the presence of a few easy trials in one group or at one timepoint can significantly affect performance.Increasing the total number of trials can lessen the effect that these trials have, but many trials are required.To circumvent any potential problems posed by specific sequences of trials, researchers may choose to include as many trials as is practical in conjunction with more stringent case selection procedures, such as counterbalancing and random sampling of trials for each participant. Take Home Messages • Determining when and what performance metric to use: Courts are interested in positive and negative predictive values because they characterize the reliability of an examiner's judgment.In the hands of a human examiner, however, the general validity of a forensic method is best described by its sensitivity, specificity, and discriminability, which come from signal detection models that take these values into account.• Transparent research practices: Specifying design and data analytic decisions prior to analyzing one's data (preregistration) as well as making materials and data as transparent as possible, will help ensure the reliability of research in forensic science and its practical impact.• Response bias and prevalence: Discriminability metrics (dʹ, Aʹ, empirical AUC) are more robust to response bias effects and unequal ratios of same-source and different-source cases compared with metrics that simply count the number of correct vs. incorrect judgments (i.e., proportion correct). 
• Inconclusive responses: The handling of inconclusive judgments can greatly affect estimates of forensic examiners' performance. Consider collecting inconclusive judgments separately from forced-choice judgments about source. Collecting data for these two claims separately allows one to compute discriminability and response bias irrespective of an inconclusive decision, and to compute PPV and NPV taking into account inconclusive decisions.
• Task difficulty: Without an appropriate comparison group, it is impossible to determine whether a test was too easy or too difficult for the target expert population (e.g., fingerprint examiners), rendering a single performance estimate for this population worthless. This issue can be resolved by including a comparison group (e.g., non-experts).
• Trial sampling: Fixed trial sequences, where different participant groups see different sets of cases, can introduce spurious effects in some circumstances. To remedy this issue, consider randomly selecting trials from large case sets for each participant. Other solutions include counterbalancing and yoking participant trial sequences.
• Number of trials: Experiments with only a small number of trials can produce unreliable performance estimates. Increasing the number of trials is a simple method for improving performance estimation.

Conclusions

Our goal with this paper was to give useful descriptive guidance for anyone interested in creating and assessing studies that model the performance of forensic examiners and their procedures. We have explained how signal detection theory can be used to conceptualize performance, outlined several commonly used models of performance, and reported new data on the performance of novices and expert fingerprint examiners. We then used these results to define distributions from which we could resample data and explore how different models of performance hold up in a variety of scenarios.

In doing so, we offer considerations and solutions to many issues that researchers frequently face when studying forensic examiner performance. Many of these considerations complement advice offered by other scholars such as Martire and Kemp (2018). Partitioning examiners' decision outcomes into estimates of positive and negative predictive values, for example, would be most useful in contexts where the probability of the true state (e.g., the prints were made by the same source) given the examiner's judgment (e.g., when the expert says "same source") is of primary interest. The PPV and NPV are most likely to be useful when the true state is unknown, like when evaluating the credibility of a forensic examiner's source-attribution testimony in court. On the other hand, separating examiners' decisions into estimates of sensitivity and specificity, and/or combining these values using the performance models discussed here, is a better way to answer general questions about the validity of a forensic technique, method, or decision-making system. For these kinds of questions, we need to know the probability of the examiners' decision (e.g., the expert says "same source") given the true state (e.g., the prints were made by the same source). If researchers and forensic examiners can agree on how to measure and model performance, make their data available to others, and make their analytic decisions transparent, then it will be possible to gain a better understanding of expert forensic pattern matching ability as well as how to best communicate errors in forensic decision-making to factfinders.

Fig. 5 Expert (purple) and novice (green) performance varying the experts' response bias. Rainclouds depict the distribution of participants' scores across 100 'simulated' experiments. Each drop depicts a group mean. The connected red points represent the average of the group means. The middle plot depicts real-world data.
Fig. 6 Expert (purple) and novice (green) performance varying the prevalence of same-source and different-source cases. Rainclouds depict the distributions of participants' scores on 24 trials across 100 'simulated' experiments. Each raindrop depicts a group mean. The connected red points represent the average of the group means.
Fig. 8 Re-presentation of expert (purple) and novice (green) data from Fig. 7 as if experts are gradually more willing to say "inconclusive," but with inconclusive responses now coded as half correct and half incorrect. Rainclouds depict the distributions of participants' scores across 100 'simulated' experiments with each drop depicting a group mean.
Fig. 9 Expert (purple) and novice (green) performance varying the difficulty of the cases. Rainclouds depict the distributions of participants' scores across 100 'simulated' experiments with each drop depicting a group mean. The connected red points represent the average of the group means. The leftmost plot depicts real-world fingerprint data.
Fig. 10 Pre-test novice group (green) and post-test novice group (purple) for a training study where the training intervention had no effect on performance. Rainclouds depict the distributions of participants' scores across 100 'simulated' experiments with each drop depicting a group mean. The connected red points represent the average of the group means.
Fig. 11 Pre-test novice group (green) and post-test novice group (purple) for a training study where the training intervention had no effect on performance, but the post-test case set had four easy trials. Rainclouds depict the distributions of participants' scores across 100 'simulated' experiments and each drop depicts a group mean.
Table 1 Binary classification table for expert participants
Table 2 Binary classification table for novice participants
Table 3 Classification table for expert participants (all ratings of 6 or 7 are coded as inconclusive responses)
2024-03-16T06:17:51.693Z
2024-03-14T00:00:00.000
{ "year": 2024, "sha1": "c5d89637bca86449fb55630185eba3147cd494bc", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.3758/s13428-024-02354-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4ad5147f45a3aebc19d11a3cb2af2f2994d5aa6d", "s2fieldsofstudy": [ "Law", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
232429498
pes2o/s2orc
v3-fos-license
State Feedback and Synergetic controllers for tuberculosis in infected population Abstract Tuberculosis (TB) is a contagious disease which can easily be disseminated in a society. A five state Susceptible, exposed, infected, recovered and resistant (SEIRs) epidemiological mathematical model of TB has been considered along with two non‐linear controllers: State Feedback (SFB) and Synergetic controllers have been designed for the control and prevention of the TB in a population. Using the proposed controllers, the infected individuals have been reduced/controlled via treatment, and susceptible individuals have been prevented from the disease via vaccination. A mathematical analysis has been carried out to prove the asymptotic stability of proposed controllers by invoking the Lyapunov control theory. Simulation results using MATLAB/Simulink manifest that the non‐linear controllers show fast convergence of the system states to their respective desired levels. Comparison shows that proposed SFB controller performs better than Synergetic controller in terms of convergence time, steady state error and oscillations. | INTRODUCTION Tuberculosis (TB), having two major types, MDR tuberculosis and XDR tuberculosis, is caused by a bacterium called mycobacterium. It is one of the top 10 causes of death across the world [1], which affects mainly the human lungs apart from other parts like brain, bones, kidney and spine. It is a transferable disease that can spread over a population. Figure 1 shows that when an infected individual exhales, mycobacterium is transferred to the air that can affect the healthy individuals in the surrounding. The other reasons of TB are bad living conditions, malnourishment, smoking, and so forth. Microbacterium Tuberculosis (Mtb) patients who show resistant to anti-TB drugs, Isoniazed and Rifampicin, are termed as MDR patients and who show resistant to any injectable anti-TB drug are termed as XDR. These types of patients do not respond to 6 or 9 months' treatment. They may take about 2 years of treatment with high toxic anti-TB medicines to fully recover. These types of Mtb patients are real threat to control and prevent the TB from its spread [2]. Individuals who are HIV positive and infected from TB have 20%-40% more chances to develop active TB which is a leading cause of death HIV positive patients [2]. According to the WHO annual report 2019, globally 1.7 billion people are infected with Mtb and approximately 10 million people suffer from TB every year. About 50-500 people per million population are infected across the world. The male-female ratio of TB is 2:1. It can affect anybody but is more dangerous for the adults. Developing countries are highly burdened from TB because of poverty, bad living conditions, unavailability of treatment facilities and malnourishment. The spread of TB can be curtailed by timely diagnosis, treatment, improvement of the health facilities and introducing health conscious activities in the society [1]. Early diagnosis of the disease decreases both social and medical impacts. Surveillance of TB can be done by using Google trends [3] and by observing its counter medication [4]. Spread of TB can also be controlled by giving health education to the society. Effect of different health education methods on secondary and primary school students in northern province of Jiangsu has been discussed in [5]. Bio-mathematics has played a very important role in the development of the mathematical models of epidemic diseases including TB. 
In previous research works, several mathematical models of TB have been developed. Typically, the stability of the model is examined first; a preventive control in the form of vaccination, together with treatment of the infected class, is then established, and an objective function is defined. An optimal control law for the prevention and control of infectious TB is then derived [6,7]. Discrete TB models with two different infectious compartments have been discussed in [8,9]. Stability analysis and bifurcation of TB models, and complex-system modelling of TB in Nigeria, have been discussed in [10,11]. Computer modelling of drug-sensitive Mycobacterium tuberculosis and modelling using regression analysis have also been studied [12,13]. The influence of multiple reinfections on TB dynamics is discussed in [14]. Stability analysis of a five-state TB model is presented in [15]. Optimal control techniques have been applied to four- and five-state non-linear mathematical models of tuberculosis using Pontryagin's maximum principle [16]. The dynamic behaviour of a four-state TB transmission model and an optimal controller for its treatment are discussed in [17].

The Synergetic control technique is built around macro-variables, one for each control input, which aggregate the tracking errors of the states of interest [18]. It has been applied to the tracking of infected cells during anti-viral therapy [19], to control the growth of cancer cells [20], to systems of non-linear equations [21], to a magnetic levitation system [22], to minimize HIV concentration in blood plasma [23], and to stabilize medium-voltage microgrids [24]. In State Feedback controller design, the output tracks the desired reference signal asymptotically provided that the reference signal and its derivative are bounded.

In this research work, an updated mathematical model of TB has been considered in order to design two non-linear controllers for preventing the spread of TB and reducing the number of infected individuals through timely treatment and vaccination. These non-linear controllers, a Synergetic controller and a State Feedback (SFB) controller, have been designed for the treatment of the infected population and the vaccination of the susceptible population. A schematic diagram of the proposed closed-loop control system is shown in Figure 2.

The rest of the article is organized as follows: Section 2 describes the non-linear mathematical model of TB considered for this research, and Section 2.2 details the problem statement. Section 3 describes the design of the proposed non-linear controllers. Section 4 presents the simulation results and the comparison of the proposed controllers, and the final section concludes the article.

| NON-LINEAR TUBERCULOSIS MODEL

TB is an airborne disease that spreads from person to person when aerosolized bacteria exhaled by an infected individual are inhaled by others.

| SEIR tuberculosis model

There are a number of mathematical models for the transmission of TB. The SIR model [28] incorporates three state variables: susceptible, infected and recovered. A more recent model of the transmission of Mtb is a five-state SEIR model [29], which describes the transmission of Mtb in a human host while taking into account the effect of MDR and XDR strains without making the model complicated.
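Before the model is written out, the sketch below shows how such a five-compartment system can be integrated numerically. Important caveat: the paper's own Equations (2)-(6) are not reproduced in this text, so every functional form below is a generic mass-action assumption assembled from the parameter list given later in this section, and all parameter values are arbitrary illustrations, not the authors' model or data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (names follow the text's parameter list).
lam, beta, mu = 0.02, 0.5, 0.01                  # births, force of infection, death
eps, sigma, rho, delta = 0.1, 0.05, 0.03, 0.2    # assumed transition rates

def seirs(t, y):
    S, E, I, R, Rs = y
    dS = lam - beta * S * I - mu * S + rho * R   # recovered may become susceptible
    dE = beta * S * I - (eps + mu) * E           # exposed progress at rate eps
    dI = eps * E - (sigma + mu) * I              # infected may become resistant
    dR = delta * Rs - (rho + mu) * R             # recovery from the resistant class
    dRs = sigma * I - (delta + mu) * Rs
    return [dS, dE, dI, dR, dRs]

sol = solve_ivp(seirs, (0, 50), [0.9, 0.05, 0.03, 0.01, 0.01], dense_output=True)
print(sol.y[:, -1])  # compartment sizes after 50 time units
```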
In this model, human population is classified into five classes; Infected (x 1 ), Susceptible (x 2 ), Exposed (x 3 ), Recovered (x 4 ) and Resistant (x 5 ). Size of the human population N can be written as where the recruitment to the susceptible class is taken by birth rate (λ). Size of each class varies due to natural death rate and rates at which the individuals become susceptible, exposed, infected, recovered and resistant. The model is given below. Parameters that are used in the model are described as recruitment by births (λ), force of infection (β ), human death rate (μ), active TB disease induction rate (α), MDR TB disease induction rate (α 1 ), humans recovery rate from MDR TB (δ), rate at which exposed become infectious (ϵ), rate at which infected becomes resistant (σ) and rate at which recovered becomes susceptible (ρ). | Problem statement There are number of optimal control strategies for the prevention and control of TB, but it still is one of the leading causes of the death worldwide. There is no nonlinear controller purposed for the prevention and control of TB so far in the literature. This model is nonlinear due to presence of terms x 2 x 3 and x 1 x 3 in Equations (3) and (4), respectively. Therefor designing a non-linear controller would be a good option to cater for the spread of TB, as non-linear controllers usually show better convergence, lesser steady state error and negligible oscillations and undershoots/ overshoots. | NON-LINEAR CONTROLLERS DESIGN FOR SEIR TB MODEL We have considered SEIR TB model given by Equations (2)-(6) in order to design the controllers. Two nonlinear controllers, Synergetic and State Feedback controller, are to be designed for treatment and vaccination of infected population. The control inputs u 1 and u 2 give the number of infected and susceptible individuals for the treatment and vaccination respectively. | Synergetic controller design Synergetic controller is to be designed for the system to track some state of the system to its desired level. Synergetic control technique will be used to design the control input u 1 and u 2 . We have taken two macro-variables, since the number of input variables are two, defined as and The error of each state is defined below which is the difference between actual value and reference value of that state. F I G U R E 2 Schematic diagram for close loop control system BILAL ET AL. -85 All the states would track the desired value if the errors converge to zero respectively. Taking the time derivative of Equation (10), we have Since reference value of each state is constant, so their time derivatives will be zero, we get Taking time derivative of Equations (7) and (8), gives The macro-variables σ 1 and σ 2 are supposed to satisfy the dynamic evaluation presented by the following equation where T represents the convergence rate of σ 1 and is a positive constant. Putting down the values of σ 1 and _ σ 1 from Equations (7) and (13) respectively and solving for u 1 , we get Now, putting down the value of σ 2 and _ σ 2 from Equations (8) and (15) respectively and solving for u 2 , we get The control input u 1 and u 2 in Equations (17) and (18) are the required controls obtained through the Synergetic control technique which gives the number of infected and susceptible individuals to be treated and vaccinated, respectively. 
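The derivation above hinges on one design rule: force each macro-variable to obey T dσ/dt + σ = 0, then solve the resulting algebraic equation for the control. A toy scalar version, with an invented plant and invented numbers (not the paper's TB dynamics), shows the mechanics:

```python
# Toy illustration of the synergetic design rule T*dsigma/dt + sigma = 0 on a
# scalar system x_dot = f(x) + u; the plant below is invented for clarity.
T, dt, x_ref = 0.5, 0.01, 10.0
x = 100.0                                 # e.g., an initial infected count
for step in range(int(5 / dt)):
    f = 0.3 * x * (1 - x / 150.0)         # invented open-loop dynamics
    sigma = x - x_ref                     # macro-variable = tracking error
    u = -f - sigma / T                    # solves T*(f + u) + sigma = 0
    x += dt * (f + u)
print(round(x, 3))  # x has converged to x_ref at rate ~exp(-t/T)
```

The choice of T trades convergence speed against control effort, which is the role the text assigns to the convergence-rate constant.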
To prove asymptotic stability of Equation (16), we consider the Lyapunov candidate function as Taking the time derivative of Equation (19), we get Putting down the value of _ σ from Equation (16), we get Using Equation (19), we can write When t is zero V 3 becomes equal to V o which is its initial value. Hence the dynamical system is exponentially stable using Lyapunov theory. | State Feedback controller design In order to design the State Feedback controller, we take x 1 as output of the system that is Taking the time derivative of Equation (24), we have The state x 1 will track the desired value if the error e 1 will converge to zero. Therefore, taking time derivative of Equation (24) and putting down the value of _ x 1 from Equation (2), Error e 1 will converge to zero if Lyapunov candidate function of error e 1 given by Equation (26) is negative definite. For this purpose, we keep where F 1 is positive constant. Equation (26) becomes Solving Equation (27) for u 1 , we have The control input u 1 in Equation (29) is the required one obtained through State Feedback control technique which gives the number of infected individuals to be treated. In similar way, we can design u 2 by choosing x 3 as output Taking time derivative of the Equation (30), we have Taking time derivative of e 3 given by Equation (5), we have Using the value of _ x 3 given by Equation (3), we have Error e 3 given by Equation (5) will converge to zero if Lyapunov candidate function of error e 3 given by Equation (33) is negative definite. For this purpose, we keep Equation (33) becomes Solving Equation (34) for u 2 , we have The control input u 2 in Equation (36) is the required control obtained through the State Feedback control technique which gives the number of susceptible individuals to be vaccinated. | For infected class Responses of the proposed controllers for the infected class of people have been shown in the Figure 3. It has been observed that convergence time for the SFB controller and Synergistic controller are 1.5 and 20 years respectively and there is no steady state error and oscillations shown by any of the proposed controller. | For susceptible class Responses of the proposed controllers for susceptible class have been shown in Figure 4. The convergence time of the SFB controller and Synergetic controller are 3 years and 1 year, respectively. | Control signal u 1 and u 2 of proposed controllers The two control signals from proposed controllers given by Equations (17) and (29) and Equations (18) and (36) are shown in Figures 5 and 6, respectively. The control input u 1 is the signal for the treatment of the infected class which for SFB tracks infected class to zero after 1 year. The control input u 1 of the Synergetic controller tracks the infected class to zero after 1.5. The area under the curve u 1 gives the total number of infected individuals to be given treatment for the cure of TB. The control input u 2 is the vaccination of the susceptible class. As vaccination of TB is the continuous process, each control signal from proposed controller depicts continuous process of vaccination, but with different number of individuals to be vaccinated. The area under the curve u 2 gives the total number of susceptible individuals to be given vaccination.Comparison of two non-linear controllers: SFB and Synergetic controllers for infected individuals (x 1 ) and susceptible individuals (x 3 ) in terms of convergence time, steady state error (SSE) is given in the Table 2. 
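The State Feedback design can be sketched the same way: cancel the plant dynamics and impose de/dt = -F e, which makes the Lyapunov function V = e^2/2 decay since dV/dt = -F e^2. The plant below is the same invented scalar system as in the previous sketch, with a Gaussian disturbance (mean 10, variance 100) added to anticipate the robustness test reported next; none of these numbers come from the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(4)
F, dt, x_ref = 2.0, 0.01, 10.0
x = 100.0
for step in range(int(5 / dt)):
    f = 0.3 * x * (1 - x / 150.0)         # same invented plant as above
    d = rng.normal(10.0, 10.0)            # mean 10, std 10 (variance 100)
    u = -f - F * (x - x_ref)              # cancels f, enforces de/dt = -F*e
    x += dt * (f + u + d)
# Settles near x_ref, with jitter and a small offset from the disturbance's
# nonzero mean (roughly mean(d)/F); a larger F shrinks that offset.
print(round(x, 2))
```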
The controllers' responses have also been checked against uncertainties or disturbances in the system. The disturbance can take the form of an increase or a decrease in the number of infected individuals, due to migration of infected individuals into the population from nearby areas or out of the population to other areas, respectively. For simulation purposes, this disturbance is modelled as white Gaussian noise with mean 10 and variance 100, as shown in Figure 7. Figures 8 and 9 exhibit the responses of the SFB and Synergetic controllers to this disturbance, comparing the infected individuals under its effect. Both controllers respond well to the disturbance. When there is an increase in the number of infected and susceptible individuals, the response of each controller indicates that a higher number of infected and susceptible individuals should be given treatment and vaccination, respectively. The SFB controller is not much affected by the disturbance, as it shows a similar convergence time with negligible oscillations. The convergence time of the Synergetic controller is also similar to before, but it takes more time to converge than the State Feedback controller.

| CONCLUSION

In this research work, a community-based five-state mathematical model of TB, the SEIR epidemiological model, has been considered. This model is unique in the sense that it includes all five classes: infected, susceptible, exposed, recovered and resistant. SFB and Synergetic controllers have been designed for the prevention and control of this contagious disease. Asymptotic stability of the system has been proved using Lyapunov theory. Simulations of the proposed controllers have been performed in MATLAB/Simulink. The results make clear that the SFB controller behaves better in terms of convergence time, steady-state error and oscillations than the proposed Synergetic controller.
2021-04-01T06:17:21.507Z
2021-03-30T00:00:00.000
{ "year": 2021, "sha1": "f2a0484612d8ce106b86613a62c49156875863f5", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/syb2.12013", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "56045c5d560dc634fe859bd88e37089e8c1d53b0", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
232370389
pes2o/s2orc
v3-fos-license
Commentary: on the effects of health expenditure on infant mortality in sub-Saharan Africa: evidence from panel data analysis Background This commentary assesses critically the published article in the Health Economics Review. 2020; 10 (1), 1–9. It explains the effects of health expenditure on infant mortality in sub-Saharan Africa using a panel data analysis (i.e. random effects) over the year 2000–2015 extracted from the World Bank Development Indicators. The paper is well written and deserve careful attention. Main text The main reasons for inaccurate estimates observed in this paper are due to endogeneity issue with random effects panel estimators. It occurs when two or more variables simultaneously affect/cause each other. In this paper, the presence of endogeneity bias (i.e. education, health, health care expenditures and real GDP per capita variables) and its omitted variable bias leads to inaccurate estimates and conclusion. Random effects model require strict exogeneity of regressors. Moreover, frequentist/classic estimation (i.e. random effects) relies on sampling size and likelihood of the data in a specified model without considering other kinds of uncertainty. Conclusion This comment argues future studies on health expenditures versus health outcomes (i.e. infant, under-five and neonates mortality) to use either dynamic panel (i.e. system Generalized Method of Moments, GMM) to control endogeneity issues among health (infant or neonates mortality), GDP per capita, education and health expenditures variables or adopting Bayesian framework to adjust uncertainty (i.e. confounding, measurement errors and endogeneity of variables) within a range of probability distribution. Background There is a growing concern of the importance of population health and its contribution to the national economy, but the issue of infant mortality remains a major concern in most of the developing economies including Sub-Sahara Africa [1,2]. One of the possible reasons for high infant mortality in sub-Saharan Africa could be low level of public health expenditure, low level of female education, poverty, poor sanitation, lack of safe drinking water and other basic utilities such as telecommunications and electricity [3]. This commentary aims to critique and correct the shortcomings observed in the panel data analysis about the effects of health care expenditures on infant mortality in 46 sub-Saharan African countries over the period 2000-2015. The paper is well written and deserves careful attention. Main text The effects of health expenditure on infant mortality in sub-Saharan Africa has been published in Health Economics Review, 10 (5) by Kiross et al. [4], using macro data, relying on frequentist/classic methods. A study like this, among others preferred to use total health expenditure, public health expenditure and private health expenditure to explore its impact on life expectancy, infant mortality and under-five mortality, offering different conclusion (See, [1,[5][6][7][8][9]) with no consensus. The reasoning behind it is that, frequentist/classic estimation techniques depend entirely on sampling, and the likelihood of the data point given to the model without considering any kinds of uncertainty. Further, frequentist approach (i.e. random effects models) employed by Kiross et al. [4], among others (e.g., [1,5]) leads in point estimates of parameter values, standard errors, CIs (confidence interval) and P-values arising from hypothesis tests (See, [10]). 
For instance, in the authors' article, the P-value at the 5% significance level reported by Kiross et al. [4] represents the probability of the data having occurred under the specified model (i.e. the random effects model), assuming that the null hypothesis is true. Moreover, the overall model estimates relied on the coefficients of the variables and on fixing the parameter values (i.e. the ß coefficients of the explanatory variables) using maximum likelihood, so the final results and policy implications were derived under unacknowledged uncertainty. Likewise, the estimation in Kiross et al. [4] ignored the true range of uncertainties (both model and parameter uncertainty), and unobserved variables such as individuals' true disease and nutrition status, as well as other confounders (i.e. number of physicians, corruption and misuse of public health funds), were not taken into account, although these are very common in sub-Saharan Africa.

Kiross et al. [4] found that an increase in total health expenditure (external, public and private) was significant in reducing infant mortality in sub-Saharan Africa. Their findings do not make clear whether the decline in infant and neonatal mortality is primarily attributable to changes in health care expenditures or to other confounding factors. Kiross et al. [4] used health care expenditures (public or private), real GDP per capita, primary school enrollment rate as a proxy for education, population and other variables in their random effects regressions. Nevertheless, random effects models cannot overcome the problem of endogeneity (i.e. omitted variable bias and reverse causality) arising among the health (i.e. infant mortality), education, GDP per capita and health care expenditure variables (see [11,12]). Endogeneity occurs when two or more variables simultaneously affect/cause each other. In other words, the education, GDP per capita, health (i.e. infant or neonatal mortality) and health expenditure variables in the regression model may be correlated with the error term. This endogeneity bias can produce inconsistent estimates, leading to misleading conclusions. The endogeneity bias observed in Kiross et al. [4] can be addressed by using instrumental variables, two-stage least squares, or the system generalized method of moments [13-16]. The Generalized Method of Moments (GMM) is a well-known methodology for avoiding endogeneity bias; it uses instruments which are correlated with the endogenous regressors but uncorrelated with the error term [14].

It is also known that among countries with similar levels of development, as in sub-Saharan Africa, similar levels of public health spending can coincide with significantly different health outcomes. For example, between 2010 and 2014, the average public health expenditure as a percentage of GDP was 2.5% in both Tanzania and Zambia [17]. Yet infant mortality was higher in Zambia (48.8 per 1000 live births) than in Tanzania (39.1 per 1000 live births). The key argument here is that, with the same level of public resources (i.e. 2.5% of GDP), one country can generate better health outcomes than another. This also raises the question of whether the greater public health spending suggested by Kiross et al. [4] can buy better health outcomes (i.e. reduce infant or neonatal mortality) in sub-Saharan Africa. Wagstaff [18] argued that if extra funds are applied only to health care itself, such as more staff at hospitals and adequate stocking of medications (i.e. panadol, amoxicillin, etc.), without complementary services (e.g.
a lack of road networks to hospitals and clinics), the impact of extra public health expenditure on health outcomes (infant and neonatal mortality) as suggested by Kiross et al. [4] may be little or none. This implies that increases in public health expenditure need to be complemented by spending in other sectors (water works, road networks and education) to reduce both infant and neonatal mortality in sub-Saharan Africa. Such increases also need to be accompanied by policies, institutions, instruments (e.g., Public Expenditure Review and Management) and efforts to combat corruption (see [18]). Based on the facts outlined above, it is clear that the conclusion of Kiross et al. [4], namely that increasing government health care financing over the coming years will be crucial in reducing mortality and improving health outcomes in sub-Saharan Africa, remains subject to uncertainty. This uncertainty may also arise from the failure of random effects models to control for endogeneity and other omitted variable bias.

Conclusion

To address the aforementioned weaknesses in Kiross et al. [4], the use of dynamic panel system Generalized Method of Moments (GMM) would be preferred to overcome the endogeneity and omitted variable bias present among the health (infant or neonatal mortality), real GDP per capita, education and health expenditure variables. Similarly, the use of a Bayesian framework would be important for capturing the uncertainty of the effects of health expenditures (public and private) on infant mortality in sub-Saharan Africa (see [7]). The framework takes full account of uncertainties related to models, controls for confounding and unmeasured variables, and supports decision making informed by both prior information (i.e. hypotheses held before observing the data) and the new evidence obtained [10]. The take-home message for readers and reviewers is that random effects models require strict exogeneity of the regressors; in the presence of endogenous variables, they yield inaccurate estimates and misleading conclusions. Further, the Bayesian framework allows authors to make use of prior knowledge or beliefs about the specific question being studied, as well as the new evidence collected specifically for the study [10]. It also enables policy makers to use their own judgments about a sufficient level of evidence to make a policy decision [10]. The framework involves the probability that the true effect (i.e. the effect of health expenditure on infant mortality) falls into a particular range of values. Future studies examining the effects of health expenditures (i.e. public or private) on health outcomes (i.e. infant and neonatal mortality) should either use dynamic system Generalized Method of Moments (GMM) to control endogeneity and omitted variable bias or adopt a Bayesian framework that provides a clear picture of parameter uncertainty, adjusting for confounding, endogeneity and measurement error within a range of probability distributions (credible intervals).

Abbreviations
GMM: Generalized method of moments; GDP: Gross domestic product
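To make the endogeneity argument above concrete, here is a minimal Python simulation; the variable names, coefficients and instrument are hypothetical illustrations, not estimates from Kiross et al. [4] or any of the cited studies. It shows how reverse causality between spending and mortality biases a naive regression, and how an instrument restores a consistent estimate via two-stage least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process with simultaneity:
# spending is correlated with the mortality shock u, so it is endogenous.
z = rng.normal(size=n)                               # instrument: shifts spending only
u = rng.normal(size=n)                               # unobserved shock to mortality
spending = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
mortality = -1.0 * spending + u                      # true causal effect is -1.0

# Naive OLS of mortality on spending (biased: spending correlates with u)
X = np.column_stack([np.ones(n), spending])
beta_ols = np.linalg.lstsq(X, mortality, rcond=None)[0]

# Two-stage least squares: project spending on the instrument first
Z = np.column_stack([np.ones(n), z])
spending_hat = Z @ np.linalg.lstsq(Z, spending, rcond=None)[0]
X2 = np.column_stack([np.ones(n), spending_hat])
beta_2sls = np.linalg.lstsq(X2, mortality, rcond=None)[0]

print(f"true effect -1.00 | OLS {beta_ols[1]:.2f} | 2SLS {beta_2sls[1]:.2f}")
```

With these settings the naive estimate is biased toward zero (about -0.74 in expectation), while 2SLS recovers roughly -1.0. System GMM applies the same logic in a dynamic panel, constructing instruments internally from lags of the variables.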
Comparative Dynamics: Healthy Collectivities and the Pattern Which Connects

In this paper, I introduce the notion of "comparative dynamics" and the importance of connectivity as an essential and vital underlying principle for healthy collectivities. Such a notion resonates with Gregory Bateson's idea of the "pattern which connects," suggesting not only the functional importance of connectivity as an aspect of a healthy organization at some given scale, but also connectivity as an important principle, which is the basis for how all living patterns are connected together. This paper ends with some reflections on why and how teachers experience stress and burnout as an absence of connectivity while highlighting its importance in the well-being of teachers in healthy learning organizations.

The pattern which connects is a metapattern. It is a pattern of patterns. It is that metapattern which defines the vast generalization that, indeed, it is patterns which connect. (Bateson, 1979)

Introduction

The historiography of dynamical systems, as Aubin and Dalmedico (2002) suggest, has shown that a "great surge of interest in dynamical systems theory" has emerged over time, especially since the 1970s, and is stretching well beyond the usually-taken-for-granted boundaries of mathematics, moving into other areas of study, as well as the popular press. To be sure, however, dynamical phenomena, broadly speaking, have been studied for centuries, but the late 20th century has shown itself to be a rather important time for many scholars and researchers across various disciplines, including, of course, mathematics, but also the natural and social sciences, and the arts. In the past three decades, researchers in geology, biology, ecology, psychology, sociology, economics and organizational theory, to name a few disciplines, slowly have grown more attentive to the complex and apparently disorderly nature of what have since become known as complex systems or non-linear dynamical systems. Of the many different conceptualizations of dynamical phenomena that have emerged during the 20th century-for instance, general systems theory, dissipative systems theory, chaos theory, and self-organized criticality-the term "complex adaptive systems" (CAS) currently stands as one of the better known terms under the umbrella of non-linear dynamical systems. At the heart of CAS is the idea that large numbers of agents interacting locally give rise to their own structures, self-organizing in such a fashion so as to bring forth the possibility of larger dynamically coherent, persistent patterns. That is, in the absence of an overall "blueprint", globally emergent patterns can arise through local interactions for the on-going movement and unfolding of the system itself. Studies of dynamical phenomena suggest that the concept of connectivity is an important aspect of, and for, complex phenomena in terms of coherence and communication, for instance. To begin, I invoke the notion of "comparative dynamics" and explore how the concept of connectivity plays an important part in the "health" of complex organizational collectivities.

Comparative Dynamics

Following other branches of comparative inquiry like "comparative anatomy," "comparative literature," "comparative education," and so on, the notion of "comparative dynamics" comes to mind as a way to help frame an approach to compare and understand the dynamics and dynamical patterns of a variety of different phenomena.
As the word suggests, a "comparison" involves a likening of things where certain characteristics are highlighted for their similarities or differences between those things, with the aim of showing certain relative qualities. In addition, the other term, "dynamics," is concerned with the dynamical forces that make something happen. As a branch of physics, dynamics addresses the relation between the forces of a system and the ways in which the patterns of the system change temporally and spatially. The focus of a comparative dynamics approach is, thus, on the similarities and differences of dynamical patterns that arise from within particular, and across various, scales of organizations of dynamical patterns (Stanley, 2004; 2005). For instance, the gait of a healthy human being and that of a person living with Huntington's disease (Hausdorff et al., 1997) and the dynamics of the heart, especially congestive heart failure and atrial fibrillation (Goldberger et al., 2002), are illustrations of patterns that are manifestations of both "complicated" and "complex" patterns. Although the kind of phenomena of interest here are not necessarily restricted to "complex" phenomena, what ought to be kept in mind is the need to compare patterns from the same kinds of phenomena. That is, the comparison ought to be between complex systems and complex systems, or simple systems and simple systems, and so on. But more importantly, what ought to be remembered is that complex phenomena do bring forth a wide variety of different patterns, including seemingly simple, predictable and regular patterns. This, I wish to claim, is a reflection of the nature and kind of connectivity present. As an analogy, the research in the area of Boolean networks by Stuart Kauffman (1995) illustrates the same idea. Whether by way of buttons tied with strings or light bulbs connected in a wired network, Kauffman has observed how particular network patterns create specific "portraits" of Boolean nets. More specifically, on one hand, sparsely connected networks manifest highly ordered patterns and, on the other hand, highly connected networks tend toward chaos. "Fine tuning" the network, however, allows the system to enter into a "phase transition regime" (Kauffman, 1995, p. 80) between order and chaos. In other words, the system has the capacity to move into a region that lies near the "edge of chaos." Often when references to complex systems are made, comparisons and analogies across different kinds and scales of organization are invoked, comparing, for instance, bird flocks with termite colonies, human riots with bee swarms, and traffic jams with the growth of cancer cells. And so, for this reason, the term "comparative dynamics" has emerged. Comparisons that others have invoked to understand the nature of complex systems also have prompted medical researchers to think about healthy physiological systems in terms of CAS. Although the terms "health" and "healthy" may be subjective, there appears to be a connection between different physiological structures in terms of diverse dynamical tendencies (Kelso & Engstrøm, 2006) for states of "health" and the presence of particular dynamical patterns for those states. The introduction of the term "comparative dynamics" has emerged also from a realization that, under certain conditions, the dynamics of a particular complex phenomenon might give rise to tendencies for particular kinds of patterns which can be described as "healthy" or "unhealthy."
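Kauffman's observation about connectivity can be made concrete with a short simulation, offered here as a rough sketch in the spirit of his random Boolean networks rather than a reproduction of his experiments; the network size and number of steps are arbitrary choices. Flip one node and watch whether the disturbance dies out or spreads:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_boolean_net(n, k):
    """Each node reads k randomly chosen inputs through its own
    randomly drawn Boolean truth table (a Kauffman NK-style net)."""
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, size=(n, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    # Encode each node's k input bits as an index into its truth table.
    idx = np.zeros(len(state), dtype=int)
    for j in range(inputs.shape[1]):
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx]

def divergence(n=200, k=2, steps=50):
    """Normalized Hamming distance after perturbing one node: it shrinks
    in the ordered regime and grows in the chaotic one."""
    inputs, tables = random_boolean_net(n, k)
    a = rng.integers(0, 2, size=n)
    b = a.copy()
    b[0] ^= 1                         # flip a single node
    for _ in range(steps):
        a = step(a, inputs, tables)
        b = step(b, inputs, tables)
    return np.mean(a != b)

for k in (1, 2, 5):
    d = np.mean([divergence(k=k) for _ in range(20)])
    print(f"K={k}: mean divergence after 50 steps = {d:.3f}")
```

With unbiased random truth tables, runs at K=1 typically show the perturbation dying out (order), runs at K=5 show it spreading widely (chaos), and K=2 sits near the critical "edge of chaos" regime in between, which is the "fine tuning" Kauffman describes.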
The notion of an organizational dynamic described as "healthy" or "unhealthy" certainly can be illustrated through examples from the field of human physiology-for example, the physiological organization of the human heart and the human gait. Moreover, terms related to notions of health are, in fact, already used in more popular parlance to describe particular human relationships, as with the notions of divorce, sick ideas and toxic workplace environments (Frost, 2003). Through the notion of comparative dynamics, therefore, the concept of dynamical organizational health can be extended to other scales of complex organization, which include the biological body, as well as other socially-, culturally-, politically-, and ecologically-organized bodies. But, even more, such organizations must not only be connected for purposes of coherence, for instance, but must be connected to one another to sustain life. Connectivity, therefore, could be said to be a matter of and for vitality. In other words, in matters of health and disease (or illness), connectivity is not merely crucial-in one sense of the word "vital"-but is important to the overall picture of health of all living beings, as the etymology of "vitality" ought to remind us. Connectivity, therefore, "speaks"-albeit quietly when living beings are healthy and quite "loud and clear" in times of distress, illness and disease-in particular ways that reflect the nature of health for all living beings.

Why Connectivity?

Our own collusion with the world-out of sheer necessity-brings human beings into a complex set of connections, both with themselves through self-reflexivity and with the world. As such, through one's perceptions of the world, human beings are already complicit in the creation of some sense of complexity-at-work. This is reflected, for example, in the various and diverse perspectives and ideas that human beings bring to one another in conversation. In a manner of speaking, therefore, it also may be through one's "mindset" of the world that one becomes disconnected from the world through particular conceptualizations of that world. Thus, a different mindset that draws one's attentions to the presence of various kinds of connections might give one cause to be suspicious of any kind of cutting up of the world that one might do. As Ralph Stacey (2003) argues, human beings cannot self-regulate in isolation from the rest of the world: human beings require other human beings to come into contact with one another and to form relationships. In other words, connections are a matter of survival for human beings. Such a notion, of course, requires interaction, iterated temporally and spatially, with one another. As such, connectivity gives rise to globally emergent and interconnected phenomena. Social organizations are specific manifestations of connectivities that are always and already embedded within and across other various patterns of varying scales of organization, e.g., other families, communities or neighborhoods, municipalities and other settings of state. In other words, interactions alone, as temporal processes, shape on-going emerging patterns that give rise to varieties of patterns within and across many different scales of organizations. Such is the case in contexts of learning organizations like classrooms and schools.

Healthy Collectivities

Health is not a condition that one introspectively feels in oneself.
Rather, it is a condition of being involved, of being in the world, of being together with one's fellow human beings, of active and rewarding engagement in one's everyday tasks. (Gadamer, 1996)

Wendell Berry (1995) has remarked that we must be seriously diseased for all of the talk that we hear about "health". Such a view may be cynical, naïve, narrow, unhelpful or even false, but it is hard to ignore, considering the nature of so many global problems. In fact, it is hard not to notice the many problems with the world and ourselves in times of great dis-ease. Whereas health, as its etymological root suggests, is concerned with notions like "healing" and "wholeness", disease and illness can, very much, make human beings conscious of the disconnectedness and isolation that come from a sense of unhealthiness (Ratson, 2003). In other words, health is not simply about biological bodies and how one feels in and with those bodies; it is ultimately about connectedness. More traditional views of life, death, health and illness, however, are rooted in the everyday assumption of body-as-object that fills a particular space not shared by any other body. It follows, therefore, that our bodies, distinguishable from all other bodies, are thought of as containers. Thus, the origin and location of disease, as one might typically experience it, is rooted in the physical body, and is traditionally thought to be a malfunction of certain "building blocks" in the body which no longer work as they should. Conventional thought, therefore, suggests that illness is the result of some outside disturbance to one's inner structure. Of course, this sense of illness is not something that is accepted by everyone, although such expressions of illness are common experiences felt by many people. This traditional view of health, as Dossey (1982) suggests, frames illness and disease as matters that can be treated in isolation from everything else. Moreover, as Ratson (2003, p. 15) remarks, "Modern medicine has advanced to the point where doctors can virtually ignore us and still do a pretty good job." One might conclude, therefore, that doctors as well need not invest much time to advance some kind of relationship with their patients. Disease and illness, thus, are things that happen to us as isolated and isolatable beings in the world at any time. Dossey continues:

Bodies, as in the classical view of atoms, stand alone, both in space and in time. Although they form patterns, at heart they are single units in a deep, fundamental sense. Connectedness is seen only in terms of interaction of quintessentially separate bits and pieces. (p. 141)

This particular view of the world is proving to be rather limited and limiting. As the concern here is for the living, the notion of life as a property of single bodies does not fit well with the view that life is an emergent property of the entire universe, where all things are interconnected with one another. As such, there seems to be a certain measure of blindness, in a manner of speaking, of the greater connectivity in the world. Thus, in some sense, some healing is needed so that the usually invisible connections that hold us together bring forth a greater whole, as the notion of health suggests. In other words, at the heart of a view of healthy organizations of all scales is this notion of connections that all-at-once manifest various kinds of dynamical patterns. As Gregory Bateson (1979) writes, we are dealing with "patterns which connect."
Considering the adaptive nature of living phenomena, the notion of living organizations as learning organizations is not far away. Thus, it seems quite appropriate to consider, in a broad manner that addresses the nature of all kinds of living organizations, how and why the notion of connectivity might be an important principle for dynamical patterns of healthy organizations. What does a healthy learning organization, comprised as it is of (healthy) learners, look like?

Learning and Healthy Social Organizations

Many social organizations seem to be touting a shift toward or a greater emphasis on human relationships-our ability to bring forth particular kinds of connections that serve the possibility for healthy learning organizations to emerge and unfold. The essence of such a move is perhaps more toward being a more cohesive organization. For instance, "closeness" speaks to a kind of intimacy as when one is physically/emotionally close to someone else. But sometimes people in intimate or meaningful relations "drift apart." This suggests that relationships may reflect different "strengths." Framed in this manner, "complicated" or "mechanical" organizations have weak connections or perhaps, quite simply, none at all. At the other end of the spectrum, a relation could be so strong that the possibility for action becomes rather limited, and a lock-step, rigid, predictable pattern emerges. It is, in fact, the relationships "in the middle" of these two extremes that are the kinds of relations that make for healthy organizations, where adaptation can happen and a kind of "dancing" between people is possible. It is not so much that one should be concerned only with attempts to move toward the "middle" of these two extremes-at one end a "lock-step" relation and at the other end no relation at all-so that healthy relations in a healthy organization appear. Living systems inevitably manifest a wide range of different patterns-a reflection of their robustness and ability to "hang on." In certain human physiological patterns, for example, elderly human beings often have stable physiologies even though they may be frail. As Timothy Buchman (2002) writes, "It is not that aged patients have maladaptive responses to stress-rather their adaptive responses are inadequate." As we age, therefore, the connectedness that "breaks down" or weakens gives rise to patterns that show either excessive order or uncorrelated randomness. In between the predictable stability of homeostatic processes and random fluctuations is a pattern of optimal connectedness which can be expressed in patterns of great variability. Moreover, such patterns of optimal connectedness are often noticed for particular forms: fractals (Gleick, 1988). As such, one might recognize a healthy well-connected organization by attending to its form: is it fractal? But healthy organizations are not merely healthy because they are manifestations of particular forms. They are also connected to, and with, other healthy forms because living organizations need other living organizations to survive and sustain themselves. Thus, patterns of healthy organizations are the same patterns which give life to everything. But, even more, it is the pattern of all living things that are connected to one another in a massively entangled web of life.
Intimations for the Health of Teachers and Education

The implications for learning and healthy learning organizations, therefore, suggest a need to be attentive to the kinds of connections which appear simultaneously across many scales all-at-once. In the broad context of education, therefore, and specifically the project of schooling, the notions of healthy learning organizations and comparative dynamics open up the possibility for some compellingly different stances and perspectives for thinking about a number of different aspects of education and learning. These potentially include learning and its relation to the identity, practices and knowledge of learners; classroom dynamics; the framing and understanding of school subjects; curriculum design; pre-service programs for new educators; the influences of community and physical space; and leadership, to name some. Given that the concern here is for "connectivity", the remaining remarks in this paper will focus on the importance and relevance of the concept and notion of connectedness for education and, in particular, teachers. Like the phrase, "No man is an island," no school, no classroom, no teacher ever stands alone. As Morrison (2002) remarks, "In schools, children are linked to families, teachers, peers, societies, and groups; teachers are linked to professional associations, other teachers, other providers of education, workplace placements for children, support agencies like psychological and social services, policy-making bodies, funding bodies, the courts and police services, and so on" (p. 18). Inasmuch as a school may be seen as being and having a particular "body," it would seem clear that schools are relational patterns with connections within themselves and with the world at large. And, in such a world, where there is often too much going on in terms of communication, say, the possibility for stress and teacher burnout is enormous, and the demands of a highly connected network create much chaos and ill-health. Moreover, as Gabor Maté (2004, p. 34) writes, there are three factors that "universally lead to stress": uncertainty, the lack of information and the loss of control. These three factors, in fact, are present in the lives of all people with chronic stress. Moreover, these three factors speak to an "absence that the [teacher] perceives as necessary for survival" (p. 34). Put differently, there is a loss of connectedness. Teachers-especially new and pre-service teachers-speak of these matters in a myriad of different ways. For myself, I have not only experienced these matters first hand, but as a teacher-educator in a pre-service teacher education program, I have seen this in the program's teacher candidates. It is quite hard to miss. After all, as human beings, we do tend to notice occasions of dis-ease rather than health. There is, for some, a rather pervasive idea that matters related to teaching and learning should rest upon the ideas of certainty, clarity and absolute control. These are, in fact, the exact opposite of what Maté talks about in terms of stress and burnout. This swing of the proverbial pendulum, however, does not make things any better. In fact, there is a need for some kind of "middle ground" where teachers and all involved might find greater health. Thus, some measure of connectivity is required, but not too little nor too much.
In addition, while a certain degree of connectivity is required for healthy self-organizing structures, like a teacher, classroom or school, the connectivity of any healthy organization must reflect a "distributed knowledge system" (Morrison, 2002, p. 18). In other words, the location and control of information, knowledge, and meaning must not be centrally located in a command-and-control environment, but distributed, shared and circulated through the organization itself. To be sure, as if it were possible, no teacher, principal, student, parent or ministerial body can be the holder of all knowledge. In fact, some measure of democracy must be present where all can and must participate in the co-creation of the larger organization. And, moreover, self-organization must be a part of that process. If one does not feel in control, then one is most likely being controlled. Put differently, there is a lack of autonomy and, yet, paradoxically speaking, all people need to be connected in some way, although autonomy does not simply imply disconnectedness and being able to do whatever one wants. Of course, one should not think of leadership as gate-keeping, directing, or preserving the senior figure of a school, but rather as something which is distributed, shared and circulated. But such matters require flexibility and adaptability for survival. They also require feedback within, and iteration throughout, the organization to function well enough. Certainly, there are many other principles at work in an organization that play a part in its health. For now, however, the importance of connectivity, expressed in particular ways, cannot be overlooked as it is a vital principle of, and for, living organizations. Whether there is too little or too much connectivity, dis-ease and/or toxic relations are bound to be present. But where there is just enough, we should find a robust organization living on the edge of chaos.
Utilization of Blood Cockle Shell (Anadara granosa) Waste and Silica Sand in Manufacturing Calcium Silicate as a Filler in the Paper Making Industry

This research aims to utilize blood cockle shell waste and silica sand as a new innovation in the manufacture of calcium silicate mineral, which was used as a filler in the paper making industry. In this study, calcium silicate mineral filler (CaSiO3) was manufactured by a solid-state reaction technique at a temperature of 1000 °C. Several researchers have examined blood cockle shells as a raw material for making calcium silicate, so further research was needed to determine whether the calcium silicate produced can be used in paper making. Sample characterization was carried out with Fourier Transform Infrared spectroscopy (FTIR), Keyence microscopy and X-Ray Diffraction (XRD). The results showed that the calcium silicate contained the wollastonite-1A phase (CaSiO3), with the highest-intensity peak at an angle of 2θ = 26.8225°. FTIR analysis also showed the calcium silicate structure, with the formation of Si-O-Si and O-Si-O functional groups at a wavenumber of 460 cm-1, Si-O-Ca at 962 cm-1 and O-Si-O at 901 cm-1. The microstructure analysis using the Keyence microscope showed that the sample granules were globular in shape with a particle size of approximately 14 μm. All the paper testing results, consisting of brightness, bulk, tearing, bursting, tensile and folding endurance, showed that the calcium silicate filler fulfills the TAPPI International standards.

I. INTRODUCTION

Paper companies always innovate in creating quality products at relatively inexpensive costs. One of the market demands for paper quality is high brightness and bulk. The most widely used fillers in the paper industry are Ground Calcium Carbonate (GCC) and Precipitated Calcium Carbonate (PCC), with GCC accounting for the largest consumption. To meet market needs related to high quality, it is necessary to conduct laboratory-scale research, especially on supporting raw materials for paper making such as mineral fillers. Mineral filler is the second most important raw material after fiber and is used in several grades of paper [1]. Mineral fillers are used in printing paper and writing paper to improve the optical and printing properties of the paper. Mineral filler can act as a cellulose fiber substitute at a low price, which reduces production costs. Mineral fillers can also increase opacity and brightness and improve sheet formation [2]. One innovation regarding fillers is to obtain good-quality fillers without additional chemicals and, as far as possible, from waste. Several studies by previous researchers have shown that blood cockle shells can be used to manufacture the calcium silicate filler mineral (CaSiO3), which can be used as a filler in paper making [3]. Blood cockle shells are a mineral source from marine animals; the milled shells have a high carbonate content. The main components of blood cockle shell are 66.7% CaO, 7.88% SiO2, 0.03% Fe2O3, 22.28% Al2O3 and 1.25% MgO [4]. The blood cockle shell (Anadara granosa) has the potential to produce calcium silicate (CaSiO3) because it contains CaCO3, which produces CaO when calcined at temperatures above 800 °C [5]. Calcium silicate has previously been synthesized from natural silica and natural carbonates, namely quartz sand and limestone [6].
That study used a solid-state reaction method with a calcination temperature of 1100 °C, and its results indicated that calcium silicate appears at temperatures of 850 °C and 1100 °C. Another researcher [7] studied the synthesis and characterization of calcium silicate; the results showed that the microstructure of natural calcium silicate after calcination at 850 °C had different chemical phases and compositions, and that after calcination at 900 °C the phase composition and crystallinity increased. The physical properties of the calcium silicate studied showed that density, shrinkage and fracture strength increased with increasing temperature, while porosity and water absorption decreased. Based on the research conducted by these previous researchers, the authors are interested in making calcium silicate filler from blood cockle shells with a solid-state reaction method. The manufacturing process is carried out by reacting calcium oxide extracted from the shells with silica sand. The calcium silicate produced is then characterized for structure and morphology and applied in paper making.

II. MATERIALS AND METHODS

The materials used include blood cockle shells (Anadara granosa), distilled water, commercial pure silica sand, Bayclin bleach liquid and PC-101 dispersant. Paper sheeting requires materials such as short-fiber Leaf Bleached Kraft Pulp (LBKP) with 4-5% consistency, paper chemicals such as fillers of the PCC, GCC and CaSiO3 types with 68% consistency, Alkyl Ketene Dimer (AKD) additives, OBA-type brightening chemicals with 3-4% consistency, positively charged tapioca starch with 2-3% consistency, retention chemicals and water. The tools used had been calibrated, including a furnace, an Excalibur UMA 600 FTIR, a Malvern Mastersizer 2000, a Keyence microscope, an oven, a Hanna PH2211 pH meter, an Elrepho 2000 brightness tester, an X'Pert Powder XRD (PANalytical), a dispersion mill, press tools, a desiccator, an agitator, tensile strength testing equipment, folding endurance, tearing strength and bursting resistance testers, and a laboratory sheet paper making machine. Other tools were a large 8-liter plastic beaker, a stirrer, glassware commonly used in laboratories, analytical scales, paper thickness test kits, paper cutters, alumina balls, a medium-size porcelain cup, a ring press, a power press, sieves, a mortar, an analytical balance, aluminum foil, heat-resistant gloves, gloves, a magnetic stirrer, coating plates, and blotting paper (standard 200 mm², weight 250 ± 10 g/m² and thickness 0.508 ± 0.013 mm) as absorbent.

A. Sample Preparation

The first process was the preparation of the blood cockle shells and silica sand by cleaning them with running water and then drying them in an oven. After drying, the shells were ground with a mortar and then sieved, with a 60 mesh sieve for the blood cockle shells and a 100 mesh sieve for the silica sand, to obtain blood cockle and silica sand powders. The blood cockle powder was then calcined at 800 °C for 2 hours. The calcined blood cockle powder was milled in a dispersant mill for 10 minutes and sieved with a 100 mesh sieve. The calcined blood cockle powder was mixed with the silica sand powder in a 1:1 composition in an alumina ball mill for 1 hour. The mixture was then calcined in a furnace at a temperature of 1000 °C for 2 hours. The calcined product was ground in a dispersant mill and sieved with a 100 mesh sieve to obtain calcium silicate powder.
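As a quick arithmetic check on the preparation route, the sketch below works through the stoichiometry of the two steps (CaCO3 to CaO on calcination, then CaO + SiO2 to CaSiO3); it assumes the 1:1 composition refers to mass and uses standard atomic weights, so the figures are illustrative rather than taken from the paper:

```python
# Molar masses from standard atomic weights (g/mol)
M = {"Ca": 40.08, "Si": 28.09, "O": 16.00, "C": 12.01}

M_CaCO3 = M["Ca"] + M["C"] + 3 * M["O"]   # ~100.1 g/mol
M_CaO = M["Ca"] + M["O"]                  # ~56.1 g/mol
M_SiO2 = M["Si"] + 2 * M["O"]             # ~60.1 g/mol

# Step 1: calcination of shell powder, CaCO3 -> CaO + CO2
cao_from_shell = 100.0 * M_CaO / M_CaCO3
print(f"100 g of pure CaCO3 yields {cao_from_shell:.1f} g of CaO")

# Step 2: solid-state reaction, CaO + SiO2 -> CaSiO3 (1:1 molar)
f_cao = M_CaO / (M_CaO + M_SiO2)          # mass fraction of CaO for a 1:1 molar mix
print(f"stoichiometric mix: {100 * f_cao:.1f} wt% CaO, {100 * (1 - f_cao):.1f} wt% SiO2")
```

An exact 1:1 molar ratio corresponds to roughly 48 wt% CaO and 52 wt% SiO2, so a 1:1 mass mix is close to, but slightly CaO-rich relative to, the exact stoichiometry (and the real shell-derived powder is not pure CaO in any case).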
B. Characterization of Calcium Silicate

The functional group test was carried out according to the FTIR instrument's standard test method. The calcium silicate sample was mixed with 0.15 g of potassium bromide and pressed into pellets. Measurements were made with an FTIR spectral resolution of 4 cm-1 over the wavenumber range 400-4000 cm-1 with 32 scans. The morphology test was carried out according to the standard test method of the Keyence microscope instrument manual. One gram of calcium silicate powder was dispersed in 100 ml of water and stirred with a stirrer for about 15 minutes. The suspension was dropped onto a test glass and observed with a Keyence microscope at magnifications of 400 and 800 times. Crystallinity analysis was carried out using CuKα radiation (1.506 Å), with the X-ray tube operated at 45 kV and 30 mA. The measured diffraction range (2θ) was 10-90°, with a scan step of 0.05°/minute. The peaks in the diffractogram were then identified using the search-match method against the standard data contained in the ICDD database.

C. pH Measurements

The test was carried out according to the ASTM E70-97 standard test method and the manual of the Hanna PH2211 pH meter. A 0.05% calcium silicate solution was prepared and measured with the pH meter by dipping the pH meter electrode into the solution.

D. Particle Size Measurements

The test was carried out according to the Mastersizer 2000 instrument manual. Ten grams of calcium oxide or calcium silicate powder was weighed into a 100 ml glass beaker and dispersed by adding 100 ml of distilled water. The sample was fed into the Mastersizer 2000 container until the concentration indicator reached 20% (as seen on the monitor screen). The Mastersizer 2000 was run by pressing the run button on the monitor screen, and the results appeared on the monitor screen after about 10 seconds. Particle size measurements for the GCC and PCC fillers were carried out with the same procedure.

E. Brightness Measurements

The test was carried out according to TAPPI standard test method T 425 om-08 and the Elrepho (L&W) color manual. Solid calcium silicate was pressed in a ring press at a pressure of 210 kPa for 5 seconds, and the brightness was measured on the flat surface using the Elrepho 2000. Brightness measurements for the GCC and PCC fillers were carried out with the same procedure.

F. Application of Calcium Silicate in Paper Sheets

Paper chemicals were prepared at the following dosages: 7,000 ppm of OBA-type chemical, 100 ppm of coloring agent, 20% filler, 0.8% positively charged tapioca starch, 500 ppm of retention chemicals, 0.8% of the additive Alkyl Ketene Dimer (AKD), and 600 g of LBKP pulp (for paper of 75 or 80 g/m²). The LBKP pulp was mixed with water to a consistency of 3.5-4% and stirred at 1,200 rpm. The filler, tapioca starch, OBA brightening chemical, paper dyes (blue and violet) and AKD were added and stirred for 2 minutes with an agitator at the same speed. The mixture was then diluted to a consistency of 0.8-1% while stirring, and finally the retention chemical was added. Sheets of paper were formed using a laboratory-scale paper sheet making machine. The same procedure was performed to make paper sheets with the PCC and GCC fillers.

III. RESULTS AND DISCUSSION

The initial characterization carried out after the manufacture of calcium silicate (CaSiO3) from blood cockle shells and silica sand consisted of the measurement of functional groups by FTIR, morphology by Keyence microscopy and crystallinity by XRD. The pH, particle size and brightness parameters were observed as well. These three parameters were the initial stage in determining whether or not a mineral filler is suitable for making paper.
After that, the work continued by observing the quality of paper sheets made with the calcium silicate filler compared with the GCC and PCC fillers. The main reason for using calcium silicate (CaSiO3) in paper making is that, besides acting as a filler between cellulose fibers, it also helps to reduce the use of fiber and improve the optical properties of the paper in the papermaking process. In addition, the filler can increase the surface smoothness of printed paper and improve paper quality [8]. Calcium silicate (CaSiO3) was made from blood cockle shells and silica sand by reacting the calcium carbonate contained in the blood cockle shells with the silica in the sand by the solid-state method, with a calcination process at 1000 °C for 2 hours. The principle of the solid-state reaction is that the cations and/or anions of one structure must move or be exchanged, by several mechanisms, into another structure to form new compounds [9]. The calcination process at a temperature of 1000 °C was intended to cause changes in the microstructure such as changes in pore size, grain growth, increased density and mass shrinkage. Previous researchers [10] made calcium silicate using a mixture of shells and rice husks, and the results of that study showed that calcium silicate appeared at 1000 °C after 2 hours.

A. FTIR Analysis

The observations in Fig. 1 show the formation of Si-O-Si and O-Si-O functional groups at a wavenumber of 460 cm-1, Si-O-Ca at 962 cm-1 and O-Si-O at 901 cm-1, consistent with the structure of calcium silicate.

B. Keyence Microscope Analysis

As shown in Fig. 2, observations with Keyence microscopy were carried out for microstructure analysis of the resulting calcium silicate particles. The particle shape and particle size greatly affect the quality of a mineral filler. The PCC filler has a needle-like particle shape with a size of approximately 3.55 μm, while the GCC filler has rhombohedral (cubic) particles with a size of approximately 2.36 μm. Needle- or rod-shaped fillers are preferred in paper making because they better match the elongated shape of the cellulose fibres, which can increase the surface smoothness and porosity of the paper made. However, a round, rhombohedral or granular shape does not mean that a material cannot be used as a filler: titanium dioxide (TiO2) filler has a round particle shape but its quality is still good for paper making. In the Keyence microscope observations in Fig. 2, at magnifications of 400 and 800 times, the calcium silicate particles were globular/spherical in shape with a particle size of approximately 14 μm.

C. XRD Analysis

The resulting diffraction pattern consisted of diffraction peaks whose relative intensity varies with the value of 2θ. The peaks obtained from the measurement data were then matched with the X-ray diffraction standards of the Joint Committee on Powder Diffraction Standards (JCPDS). Matching the measured diffraction peaks with the JCPDS standard showed that the X-ray diffraction pattern of the calcium silicate calcined at 1000 °C contained the wollastonite-1A phase (CaSiO3), with matching rows of diffraction peaks over the range 2θ = 11.4841-63.7146° and the highest-intensity peak at an angle of 2θ = 26.8225°. The wollastonite-1A phase is a crystalline phase with a triclinic structure. The sharp peaks indicate that the calcium silicate mineral filler has a crystalline solid structure.
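As a back-of-envelope check on the reported peak position, Bragg's law converts the 2θ value into a lattice spacing; the short sketch below uses the CuKα wavelength quoted in the Methods:

```python
import math

lam = 1.506          # CuK-alpha wavelength quoted in the Methods (angstrom)
two_theta = 26.8225  # strongest wollastonite-1A peak (degrees)

# Bragg's law: n * lambda = 2 * d * sin(theta), taking n = 1
theta = math.radians(two_theta / 2)
d = lam / (2 * math.sin(theta))
print(f"d-spacing of the 2-theta = {two_theta} deg peak: {d:.3f} angstrom")
```

This gives a d-spacing of about 3.25 Å for the strongest reflection.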
Crystalline solids have a neat and regular arrangement of atoms and molecules, whereas amorphous solids have random and irregular patterns of atoms and molecules. The sharper the diffraction peaks, the more crystalline the solid; conversely, the broader the diffraction peaks, the more amorphous the solid [13]. Differences in intensity values between the measurement results and the literature depend on the atoms or ions present and their distribution in the unit cell of the material [14]. The more crystal planes contained in the sample, the stronger the diffracted intensity produced. Each peak that appears in the XRD pattern represents a crystal plane with a certain orientation on the three-dimensional axes [15]. Table 3 shows the parameter analysis for the three different fillers. These three parameters were the initial stage in determining whether or not a mineral filler is suitable for making paper. From the results of the three parameter measurements, the pH, particle size and brightness values of the calcium silicate fulfilled the standard, and the results were not much different from those of the GCC and PCC fillers. The pH value of the filler affects the paper making process [16]. The pH value must be alkaline; if the filler has an acidic or out-of-standard pH, foam will form in the paper making process. Likewise, the particle size and brightness also affect the paper making process: filler particles of non-uniform size and low brightness will produce paper of rough quality and low brightness. Table 4 shows the paper measurement parameters for the application of the calcium silicate (CaSiO3), GCC and PCC fillers in the manufacture of paper sheets. The values listed in the table are the averages of three measurements (triplicate) for each parameter. Each measurement was carried out according to the appropriate test method. The paper measurement results were used as a reference for comparing the fillers in the application of paper sheet making. The standard used for the paper quality parameter tests was the TAPPI International Standard. The grammage test was carried out to determine the uniformity of the paper samples that had been made. The results showed that the three types of filler gave a nearly uniform grammage value with an average of 75 g/m².

E. Paper Quality Test Parameters

The paper made with the GCC and PCC fillers had a brightness of 79.34% ISO and 80.53% ISO respectively, while the brightness with calcium silicate was only 75.16% ISO. According to Wirawan [17], fillers can increase the brightness and optical properties of paper sheets. The low brightness of the paper with calcium silicate filler was caused by the raw materials of the filler, blood cockle shells and silica sand, which themselves have low brightness; therefore the brightness of the resulting filler decreased. However, the brightness of paper made with the calcium silicate filler still fulfilled the TAPPI International standards, so it can be used in paper making. Bulk is the ratio of paper thickness to its grammage, expressed in cm³/g. In the paper industry, bulk is associated with the thickness of the paper made. Paper companies compete to make paper with the same basis weight but higher bulk.
One of the innovations made is to find a filler with a large particle size in order to obtain paper with high bulk. From Table 4 it was found that the bulk of the paper sheets made with the calcium silicate filler, 1.80 cm³/g, is the highest of the three. Tearing strength is the force in grams-force (gf) or millinewtons (mN) required to tear a sheet of paper. There are three factors that affect tearing strength, namely the total number of fibers broken during tearing, the fiber length, and the number and strength of bonds between fibers [17]. With respect to fillers, it is known that the type of filler used in paper making greatly influences the tearing strength of the paper. According to Kurniati [18], increasing the amount of filler will reduce the tearing strength of the paper. This is possible because the filler that fills the pores between the fibers prevents bonding between the fibers. In the tests performed here, the same filler dosage was used for all three types of filler, so the dosage of filler cannot be used as a point of comparison in this discussion. Table 4 shows that the paper sheets made with the calcium silicate filler had a lower tearing strength than those made with the GCC and PCC fillers. This is probably caused by differences in the interactions that occur between the fiber and the filler. However, the tearing strength still fulfilled the TAPPI International standards. Bursting resistance is influenced by the use of fillers, the composition of long and short fibers, and the use of positively charged starches [19]. From Table 4, it can be seen that the paper sheets made with the PCC filler had a higher bursting resistance than those made with the calcium silicate and GCC fillers. This is probably caused by differences in the interactions that occur between the fiber and the filler. The tensile strength of paper is influenced by the type of fiber used (long or short fibers), the fiber grinding process and the use of positively charged starches [19]; the filler only slightly affects the paper's tensile strength. From Table 4 it can be seen that the paper sheets made with the calcium silicate filler have a lower tensile strength than those made with the GCC and PCC fillers. This may be due to differences in paper thickness (bulk): the paper sheets made with the calcium silicate filler have a greater bulk, of 1.80 cm³/g, while the paper sheets made with the PCC and GCC fillers have bulks of 1.67 cm³/g and 1.68 cm³/g respectively. The thickness of the paper affects the tensile strength of the paper sheet made. Folding endurance is affected by the use of long fibers and positively charged starch, and is only slightly influenced by the filler. The higher the dose of long fibers used, the higher the folding endurance; conversely, increasing the filler will reduce the paper's folding endurance. This is possible because the filler that fills the pores between the fibers prevents bonding between the fibers [19]. From Table 4 it can be seen that the paper sheets made with the calcium silicate filler have almost the same folding endurance as those made with the GCC and PCC fillers.
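Since bulk is caliper divided by grammage, the reported bulk values can be translated into an implied sheet thickness at the roughly 75 g/m² grammage used here; the conversion itself is just unit analysis:

```python
def caliper_um(bulk_cm3_per_g, grammage_g_per_m2):
    # bulk (cm^3/g) = caliper (um) / grammage (g/m^2), by unit analysis
    return bulk_cm3_per_g * grammage_g_per_m2

for name, bulk in [("CaSiO3", 1.80), ("GCC", 1.68), ("PCC", 1.67)]:
    print(f"{name}: bulk {bulk} cm^3/g at 75 g/m^2 -> ~{caliper_um(bulk, 75):.0f} um caliper")
```

At equal grammage, the calcium silicate sheet is therefore roughly 9-10 µm thicker than the GCC and PCC sheets, which is consistent with the tensile strength discussion above.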
IV. CONCLUSIONS

The observations with FTIR and the X-ray diffractometer (XRD) showed that the calcium silicate sample contained the wollastonite-1A (CaSiO3) phase, with Si-O-Si bending functional groups at a wavenumber of 460 cm-1, Si-O-Ca at 962 cm-1 and O-Si-O at 901 cm-1 in the FTIR spectra, and matching rows of diffraction peaks over the 2θ = 11.4841-63.7146° range in the XRD patterns. The sharp peaks showed that the calcium silicate filler mineral has a crystalline solid structure. Observation with the Keyence microscope showed that the calcium silicate microstructure was globular in shape with a particle size of 14.55 μm. The calcium silicate produced had a pH, particle size and brightness of 10.3, 14.55 μm and 78.66% ISO respectively, which fulfilled the TAPPI International standard. From all the tests carried out (brightness, tearing, bursting, tensile and folding endurance), the calcium silicate filler had lower values than the GCC and PCC fillers but still fulfilled the TAPPI International standard. Therefore, calcium silicate has the potential to be a cost-effective alternative source of filler.
Cost-Effectiveness of Icosapent Ethyl (IPE) for the Reduction of the Risk of Ischemic Cardiovascular Events in Canada

Background: Despite the use of statins, many patients with cardiovascular disease (CVD) have persistent residual risk. In a large Phase III trial (REDUCE-IT), icosapent ethyl (IPE) was shown to reduce the first occurrence of the primary composite endpoint of cardiovascular death, nonfatal myocardial infarction, nonfatal stroke, coronary revascularization, or hospitalization for unstable angina.

Methods: We conducted a cost-utility analysis comparing IPE to placebo in statin-treated patients with elevated triglycerides, from a publicly funded, Canadian healthcare payer perspective, using a time-dependent Markov transition model over a 20-year time horizon. We obtained efficacy and safety data from REDUCE-IT, and costs and utilities from provincial formularies and databases, manufacturer sources, and Canadian literature sources.

Results: In the probabilistic base-case analysis, IPE was associated with an incremental cost of $12,523 and an estimated 0.29 more quality-adjusted life years (QALYs), corresponding to an incremental cost-effectiveness ratio (ICER) of $42,797/QALY gained. At a willingness-to-pay of $50,000 and $100,000/QALY gained, there is a probability of 70.4% and 98.8%, respectively, that IPE is a cost-effective strategy over placebo. The deterministic model yielded similar results. In the deterministic sensitivity analyses, the ICER varied between $31,823-$70,427/QALY gained. Scenario analyses revealed that extending the timeframe of the model to a lifetime horizon resulted in an ICER of $32,925/QALY gained.

Conclusion: IPE represents an important new treatment for the reduction of ischemic CV events in statin-treated patients with elevated triglycerides. Based on the clinical trial evidence, we found that IPE could be a cost-effective strategy for treating these patients in Canada.

Introduction

In Canada, cardiovascular disease (CVD) is the second leading cause of death after cancer and a leading cause of hospitalization. 1,2 Ischemic heart disease (IHD), the most common form of CVD, is the first cause of years of life lost and the second leading cause of disability-adjusted life years lost. 3,4 Early detection and management of CVD risk factors have contributed to reducing the burden of CVD in the last few decades, but despite the widespread use of statins, many patients have persistent residual CV risk. 5 Studies have shown that rates of CV events remain high even among patients who are receiving recommended treatments for CV prevention. [6][7][8][9][10] In these patients, an elevated triglyceride level is believed to be an independent marker for an increased risk of ischemic events. [11][12][13][14][15] Medications commonly used to reduce triglyceride levels are extended-release (ER) niacin and fibrates; however, these have not proven to be efficacious in reducing CV events. [16][17][18][19][20] Icosapent ethyl (IPE) is a member of a new class of drugs that acts in multiple ways to reduce CV risk. Studies suggest that IPE may impact atherosclerotic processes, resulting in reduced development, slowed progression, improved endothelial functions, and increased stabilization of atherosclerotic plaque. [21][22][23] It may also have anti-inflammatory, antioxidative, plaque-stabilizing, and membrane-stabilizing properties.
[24][25][26] In a randomized, placebo-controlled trial, the Reduction of Cardiovascular Events with Icosapent Ethyl-Intervention Trial (REDUCE-IT), the primary composite endpoint event (ie, cardiovascular death, nonfatal myocardial infarction (MI), nonfatal stroke, coronary revascularization, or unstable angina) occurred in 17.2% of the patients in the IPE group versus 22.0% in the placebo group (hazard ratio [HR], 0.75; 95% confidence interval [CI], 0.68 to 0.83; P<0.001). 27 Additionally, in the Effect of VASCEPA® on Improving Coronary Atherosclerosis in People With High Triglycerides Taking Statin Therapy (EVAPORATE) trial, IPE demonstrated a regression of coronary plaque, suggesting an anti-atherosclerotic effect. 23 Treatment advances are allowing many individuals who would have died of CVD in the past to now live longer with the disease. 28 In Canada, in 2013, approximately 2.4 million Canadian adults lived with diagnosed heart disease and over 740,000 had a history of stroke. 29,30 Although a reduction in mortality due to CVD has increased life expectancy for Canadians, a longer life lived in poor health is not necessarily indicative of improved health outcomes. Besides causing considerable difficulties for patients and affecting their quality of life (QoL), CVD also has a significant economic cost. 31 Not only does CVD affect the health system, it also affects the overall economy through missed work and lower productivity. In Canada, total costs for CVDs were estimated to be $12 billion in 2008. 31 The objective of the current study was to assess the economic impact of IPE in the reduction of ischemic CV events in Canada based on the results of REDUCE-IT, which evaluated the brand name IPE VASCEPA. 27 VASCEPA is a new drug, approved in Canada to reduce the risk of CV events in statin-treated patients (primary and secondary prevention). Preventing or mitigating CV events can have a significant positive impact on healthcare costs and patient well-being.

Cost-Utility Analysis

We conducted a cost-utility analysis (CUA), from a Canadian publicly funded health care payer perspective, according to the most recent guidelines for the economic evaluation of health technologies published by the Canadian Agency for Drugs and Technologies in Health (CADTH) in 2017. 32 We expressed the results of the analysis as the cost per quality-adjusted life year (QALY) gained. 33 Our target population is in accordance with REDUCE-IT, which compared the effects of IPE 4 grams daily versus placebo in men and women with established CVD or with diabetes mellitus (DM) and other CVD risk factors, despite stable statin therapy and reasonably well-controlled levels of low-density lipoprotein cholesterol (LDL-C). 27 All participants used a stable dose of a statin ± ezetimibe. The 2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults provided guidance on the appropriate intensity of pharmacological treatment to reduce CVD, defining the intensity of statin therapy on the basis of the average expected LDL-C response to a specific statin and dose. 34 Table 1 displays examples of high-, moderate-, and low-intensity statin therapy based on the 2013 ACC/AHA Guideline. In REDUCE-IT, the distribution of patients according to their intensity of statin therapy was high in 30.9%, moderate in 62.7%, and low in 6.4% of patients.
The proportion of patients in the trial who used ezetimibe was 6.4%, 27 and we assumed that the distribution of specific statins within treatment intensity levels was evenly distributed between all treatments. Consistent with the most recent Canadian HTA review of a CUA model for proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors by CADTH, we considered a 20-year time horizon in the base-case analysis. 35 We discounted costs and effects incurred after one year at a 1.5% annual discounting rate, as per the most recent CADTH guidelines. 32 We based the CUA on a probabilistic time-dependent Markov transition model, comparing IPE to placebo for the reduction of ischemic CV events. We cycled patients through the Markov model in one-year cycles to predict the long-term risk of major CV events through five different health states (Figure 1; Supplement 1). Patients entered the model in the CV event-free (CEF) state, where we assumed that they were at risk of a non-fatal CV event (CVE), death from CV causes (DCV), or death from other causes (DOC). Patients with a CVE remained in a post-non-fatal CV event (post-CVE) state, where we assumed that they were at risk of subsequent events. In terms of CVEs, we considered the incidence and distribution of each individual outcome included in the primary composite endpoint: CV death, nonfatal MI, nonfatal stroke, coronary revascularization, and unstable angina. We applied a half-cycle correction to more accurately reflect the continuous nature of the state transitions (the assumption being that transitions occur, on average, half-way through each cycle instead of at the beginning of the cycle). We obtained efficacy and safety outcomes from REDUCE-IT. 27 We retrieved utility and disutility data from the literature, while we acquired costs, presented in 2019 Canadian dollars (CAD), from the manufacturer, provincial formularies and pharmacists' databases, and Canadian literature sources.

Efficacy and Safety

We applied the individual hazard ratios (HRs) of each event from REDUCE-IT (Figure 2) to reconstitute the Kaplan-Meier curves based on individual patient-level data (IPD), appraising the proportional hazards assumption for all HRs used in the model (Supplement 2). We extrapolated survival rates for the placebo group over the 5-year time horizon using parametric survival models, evaluating the best fit based on the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) statistics, using the flexsurv R package for time-to-event data, and by visual fit to the Kaplan-Meier curves (Supplement 3). In the absence of observed data after the initial 5-year period, we assumed subsequent event rates were equal to that of the placebo group (ie, a HR of 1), as opposed to a less conservative scenario where the benefit of IPE continued to accrue over time. The treatment-induced adverse effects (AEs) included in this analysis were peripheral edema, constipation, atrial fibrillation, and serious bleeding. 27 We only took into account AEs that were statistically significantly (p < 0.05) more frequent with IPE, with the exception of serious bleeding (p = 0.06), as it is considered an important safety parameter.
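As a minimal sketch of the step just described, the snippet below converts a constant annual hazard and the trial hazard ratio into per-cycle transition probabilities under the proportional hazards assumption; the baseline hazard here is a placeholder, not the value fitted from the REDUCE-IT data:

```python
import math

h_placebo = 0.048   # illustrative constant annual event hazard (placeholder)
hr_ipe = 0.75       # primary composite endpoint HR from REDUCE-IT

def annual_prob(hazard):
    # Convert a constant hazard to a one-year transition probability.
    return 1 - math.exp(-hazard)

p_placebo = annual_prob(h_placebo)
p_ipe = annual_prob(hr_ipe * h_placebo)   # proportional hazards
print(f"annual event probability: placebo {p_placebo:.3f}, IPE {p_ipe:.3f}")
```

In the actual model the placebo hazard is time-dependent, taken from the parametric fits above, and the HR is set to 1 after year 5.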
The baseline utility represented the population under study, which comprised 70.7% of patients with established CVD and 29.3% with DM and at least one additional CVD risk factor. In our model, patients experienced an acute disutility in the year after their event, after which they would experience a chronic post-event utility. We also conducted a structured literature review to identify the disutility associated with each AE included in our model (Table 3). Costs We used drug prices paid by the public payer (lower-cost alternatives) for IPE and all comparators. We obtained unit costs from the Ontario Drug Benefit Formulary/Comparative Drug Index, as of March 28, 2019 (Table 4). 36 Given their respective recommended doses, the proportion of patients using ezetimibe (6.4%), and the distribution of different statins per treatment intensity, we estimated the average annual costs of standard of care (SOC) + placebo and SOC + IPE to be $148.55 and $3728, respectively. Since we assumed that CVE rates were equal in both groups after the 5-year trial period, we also applied the annual costs of SOC + placebo to both groups thereafter. The annual costs related to CV events are depicted in Table 5 (see Supplement 4 for more detailed cost data). We inflated the average annual per-patient healthcare cost of complications to 2019 prices using the healthcare component of the consumer price index (CPI). 37 In our model, we assumed that 75% of revascularizations were percutaneous coronary interventions (PCI) and 25% were coronary artery bypass grafting (CABG), based on the recommendation of clinical experts. According to Kaul et al, revascularization rates in the United States (US) were almost 3 times greater than in Canada. 38 Therefore, for each coronary revascularization in the model, we only considered 35.7% to be performed in Canada. We estimated the subsequent years' cost associated with coronary revascularization by using the proportional difference between the first year's and subsequent years' costs of CABG and PCI in a US study. 39,40 Lastly, we counted acute care costs associated with episodes of hospitalization at the initial onset of fatal MI and stroke, similarly to a published Canadian cost-utility analysis on hypertension. 41 The annual medical costs associated with follow-up and monitoring are presented in Table 6 (see Supplement 4 for more detailed cost data), which were based on a number of assumptions (Supplement 5). For the IPE group, we only considered the medical appointment to evaluate response and adverse events, and the initial fasting lipid panel. We obtained the mean cost of adverse events (Table 7) from the Costing Analysis Tool of the Ontario Case Costing (OCC) database, which provides patient-level costs for inpatient and ambulatory care cases in Ontario for the years 2010-2018. 39 Base Case Consistent with recent CADTH guidelines, 32 we derived the base case analysis results from an analysis of uncertainty using a probabilistic model (see Supplement 5 for model assumptions). We generated probabilistic analyses (PAs) by simultaneously sampling from estimated probability distributions of model parameters (Table 8), performing a total of 5000 simulations. We generated descriptive statistics based on the simulated values for costs, QALYs, incremental costs, and incremental QALYs, and also constructed cost-effectiveness acceptability curves (CEACs).
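The probabilistic machinery can be sketched in a few lines: sample parameters, accumulate incremental costs and QALYs per simulation, and summarise cost-effectiveness across willingness-to-pay thresholds. For brevity, the hypothetical distributions below are centred directly on the reported incremental results ($12,523 and 0.29 QALYs); the actual analysis samples each underlying input from Table 8 and re-runs the Markov model 5000 times.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 5000

# Hypothetical sampling distributions centred on the reported increments;
# the real PA samples the model inputs, not the increments themselves.
d_cost = rng.normal(12523, 1500, n_sim)   # incremental cost (CAD)
d_qaly = rng.normal(0.29, 0.08, n_sim)    # incremental QALYs

print(f"Median probabilistic ICER: ${np.median(d_cost / d_qaly):,.0f}/QALY")

# One CEAC point per threshold: probability that net monetary benefit > 0
for wtp in (50_000, 100_000, 200_000):
    p_ce = ((wtp * d_qaly - d_cost) > 0).mean()
    print(f"P(cost-effective at ${wtp:,}/QALY) = {p_ce:.1%}")
```

Sweeping the threshold over a fine grid rather than three points traces out the full acceptability curve reported in the results.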
Uncertainty We performed both deterministic sensitivity and probabilistic scenario analyses to assess the impact of each parameter on the base case results (see details in Supplement 6). Model Outputs We calculated incremental cost-effectiveness ratios (ICERs) as the incremental cost/QALY gained. As per the most recent CADTH guidelines, we presented the median probabilistic ICER. 32 We evaluated the cost-effectiveness of IPE versus placebo based on the established willingness-to-pay (WTP) threshold of $50,000/QALY, which is a commonly accepted threshold in Canada. Base Case Analysis The results of the base case analysis are presented in Table 9. IPE was associated with an incremental cost of $12,523 and an additional 0.29 QALYs gained compared to placebo. The mean probabilistic ICER was $42,797/QALY gained. Expected discounted costs by treatment and cost categories are shown in Table 10. The majority of the total incremental costs comes from the cost of IPE; however, the increase in medication costs is partially offset by the reduction in costs associated with first and subsequent CV events. The gain in QALYs is driven largely by the decrease in CV events observed with IPE (Table 11). The disutility associated with AEs has only a minimal impact on the gain in QALYs. A scatter plot depicting the cost/QALY gained for each of the 5000 simulations is shown in Figure 3. The CEACs for WTP thresholds from $0 to $200,000 are presented in Figure 4. At a WTP of $50,000, there is a 70.4% probability that IPE is cost-effective relative to placebo. At a WTP of $100,000, there is a 98.8% probability that IPE is cost-effective relative to placebo. Uncertainty Analyses The results of the deterministic sensitivity and probabilistic scenario analyses are presented in Supplement 6. The deterministic model yielded comparable results (ICER = $40,529/QALY gained) due to similar incremental costs and QALYs gained. From the deterministic sensitivity analyses, we found that the results were most sensitive to age, to the percentage of CV death among primary endpoints, and to the percentage of coronary revascularization in subsequent events. Based on the ranges tested, the ICERs varied between $31,823 and $70,427/QALY gained. The results of the probabilistic scenario analyses indicated that the time horizon can have a large impact on the results, as extending the model to a lifetime horizon would result in an ICER of $32,925/QALY gained, while limiting the timeframe to 5 years would result in an ICER of $253,227/QALY gained. The results of all other scenario analyses are detailed in Supplement 6. Discussion The objective of this study was to assess the economic impact of IPE in the reduction of ischemic CV events in Canadian statin-treated patients with CVD, or with DM plus additional risk factors, and elevated triglycerides. Based on a CUA, from a publicly funded healthcare payer perspective over a 20-year time horizon, the probabilistic base-case analysis revealed that treatment with IPE was associated with an ICER of $42,797/QALY gained relative to placebo. At a WTP of $50,000 and $100,000/QALY gained, there is a probability of 70.4% and 98.8%, respectively, that IPE is a cost-effective strategy for this patient population. The largest driver of the increased costs seen with IPE is the cost of the medication itself; however, as IPE reduces the occurrence of CV events, 27 it may not only reduce the costs associated with these events, but also improve the QALYs gained in these patients.
Though IPE may increase the risk of certain AEs, 27 the disutilities associated with these AEs are minimal. This is the first economic evaluation specifically on IPE from a Canadian healthcare payer perspective; however, evaluations have recently been conducted on omega-3 fatty acid therapies in similar patient populations in other countries. Kodera et al (2018) performed a cost-effectiveness study comparing eicosapentaenoic acid (EPA; IPE is a highly purified form of EPA) plus statin therapy versus statin therapy alone in Japanese patients with hypercholesterolemia using data from the JELIS trial. 42 The original study revealed that adding EPA to statin therapy significantly reduced the risk of major coronary events. 43 In the cost-effectiveness analysis conducted over a 30-year period from a public healthcare funder perspective in Japan, the authors concluded that EPA plus statin therapy showed acceptable cost-effectiveness in secondary prevention (¥5.5 million/QALY gained, or ~$68,669 CAD) but not primary prevention (¥29.6 million/QALY gained, or ~$369,564 CAD). 42 In addition, Philip et al (2016) developed a cost-effectiveness model extrapolating the results of the same JELIS trial to a US population. 44 In their analysis conducted over a 5-year time horizon, the authors reported both cost savings and improved utilities with EPA plus statin therapy versus statin monotherapy. 44 Gao et al conducted a cost-effectiveness analysis in Australia with a 25-year time horizon, which also used data from REDUCE-IT. 45 The authors also found that icosapent ethyl was associated with both higher costs and benefits, with an ICER of 59,036 Australian dollars (AUD)/QALY gained (~$56,913 CAD), though this was not cost-effective according to their WTP threshold of 50,000 AUD/QALY. 45 This study differs from our current analysis, not only because it was conducted from the perspective of another country with different healthcare cost prices and considerations, but also in terms of model inputs; of note, however, the key difference between the study by Gao et al and our current analysis was the difference in costs between IPE and placebo (16,805 AUD [~$16,200 CAD] versus $12,523 CAD in our study), whereas the difference in QALYs between treatments was identical (0.29). This may be explained by the longer time horizon used in the study by Gao et al (25 versus 20 years in our study). In the US, Weintraub et al performed a cost-effectiveness analysis also based on the results of the REDUCE-IT clinical trial. They found that, compared with standard care, IPE had an 89.4% probability of costing less than $50,000 per QALY gained when using SSR cost and a 72.5% probability of costing less than $50,000 per QALY gained when using the wholesale acquisition cost (WAC). These results are in line with what we found in our study. Lastly, after completing a systematic review of cost-effectiveness studies on treatment strategies for the secondary prevention of CVD, Marquina et al concluded that omega-3 polyunsaturated fatty acids were cost-effective relative to standard care in most of the included studies, with ICERs ranging from 57,128 to 139,082 US dollars (~$75,286 to $183,289 CAD). 46 Some limitations are related to our model assumptions (Supplement 5). We assumed that the baseline characteristics obtained from REDUCE-IT (eg, age and gender distribution, the proportion of patients using ezetimibe, and the distribution of statin treatment intensity levels) would be consistent with the target patient population in Canada.
We also assumed event rates were equal in both treatment groups after the initial 5-year period, considering our 20-year time horizon, as we did not have follow-up data beyond this timepoint. Lastly, our assumptions on revascularization rates, the proportion of patients expected to receive a PCI versus a CABG, and the proportion of the patients' management expected to be done by a primary care physician were based mainly on clinical expert consultation or limited data. The estimated ICER was very sensitive to the selected time horizon. Because this is a cost-per-QALY analysis, limiting the time horizon to, for example, 5 years instead of 20 years would largely underestimate the long-term QALY gains achieved with the treatment. IPE represents an important new advanced treatment for the reduction of ischemic CV events in statin-treated patients with elevated triglycerides. Based on the clinical trial evidence, this CUA suggested that IPE could be a cost-effective strategy in Canada, based on the conventionally quoted threshold of $50,000/QALY gained.
Expression and serum levels of the neural cell adhesion molecule L1-like protein (CHL1) in gastrointestinal stromal tumors (GIST) and its prognostic power Introduction: Diagnosis of gastrointestinal stromal tumors (GIST) is based on the histological evaluation of tissue specimens. Reliable systemic biomarkers are lacking. We investigated the local expression of the neural cell adhesion molecule L1-like protein (CHL1) in GIST and determined whether soluble CHL1 proteoforms could serve as systemic biomarkers. Material and Methods: Expression of CHL1 was analyzed in primary tumor specimens and metastases. Fifty-eight GIST specimens were immunohistochemically stained for CHL1 on a tissue microarray (TMA). Systemic CHL1 levels were measured in sera derived from 102 GIST patients and 91 healthy controls by ELISA. Results were statistically correlated with clinicopathological parameters. Results: CHL1 expression was detected in GIST specimens. Reduced tissue expression was significantly associated with advanced UICC stages (p = 0.036) and unfavorable tumor localization (p = 0.001). CHL1 serum levels are significantly elevated in GIST patients (p < 0.010). Elevated CHL1 levels were significantly associated with larger tumors (p = 0.023), advanced UICC stage (p = 0.021), and an increased Fletcher score (p = 0.041). Moreover, patients with higher CHL1 serum levels displayed a significantly shortened recurrence-free survival independent of other clinicopathological variables. Conclusion: Local CHL1 expression and serum CHL1 levels show an inverse prognostic behavior, highlighting the relevance of proteolytic shedding of the molecule. The results of the study indicate a potential role of serum CHL1 as a diagnostic and prognostic marker in GIST. INTRODUCTION Gastrointestinal stromal tumors (GISTs) present a broad clinical spectrum of symptoms due to their various localizations in the gastrointestinal tract and malignant behavior. They are the most common mesenchymal neoplasms in the gastrointestinal tract and are typically identified by expression of the tyrosine kinase receptors c-kit (CD117) and platelet-derived growth factor receptor alpha (PDGFRa) [1-4]. Several prognostic parameters were identified and included in clinical staging, such as primary tumor size, nodal and metastatic status, mitotic rate and localization of the tumor [5]. GISTs are often diagnosed incidentally during endoscopic or surgical procedures as well as during the evaluation of patients suffering from unspecific abdominal symptoms or upper gastrointestinal bleeding [6]. Until now, diagnosis and prognosis have been based on histopathological examination of biopsy or surgical samples, since no serum markers exist. Hence, there is an urgent need for novel prognostic markers and potential therapeutic target molecules to improve diagnosis and treatment strategies. GISTs originate from the interstitial cells of Cajal located in the nerve plexus of the muscularis in the gut wall [3,7]. Therefore, several neuronal molecules are expressed in GIST. In other tumors, neuronal adhesion molecules of the L1 family like L1 (CD171), NrCAM and Neurofascin have been extensively investigated [8-13]. However, little is known about the neural cell adhesion molecule L1-like protein (CHL1), another L1 family member. CHL1 is a multidomain type 1 membrane glycoprotein of the immunoglobulin superfamily which has physiologic functions in the development of the nervous system analogous to those of L1 [14].
Two isoforms of CHL1 are expressed, of which isoform 2 is characterized by the additional mini-exon 8. The physiological relevance of these isoforms is unknown. CHL1 participates in nerve cell regeneration and cortical development, proliferation and migration of neurons, and acts as a survival factor for motoneurons [15,16]. Similar to L1, which is detectable in ascites or blood serum of patients in its soluble form, the ectodomain of CHL1 is cleaved from the cell surface by a disintegrin and metalloproteinase 8 (ADAM8) and beta-secretase (BACE1) [17-19]. This ectodomain promotes neurite outgrowth and suppresses neuronal cell death [20]. An increasing number of studies have demonstrated a role of CHL1 in cancer growth, invasion and migration for different entities. He et al. described a downregulation of CHL1 in breast cancer and an association with lower tumor grading. On the other hand, overexpression seemed to suppress the invasion and proliferation of tumor cells [21]. Manderson et al. reported that, compared to healthy ovarian tissue, gene expression of CHL1 is elevated in serous epithelial ovarian cancer [22]. Furthermore, CHL1 expression in esophageal and lung cancer is significantly correlated with a favorable outcome [23,24]. In addition, a functional role as a tumor suppressor in the Akt pathway in esophageal cancer was found [25]. Another recent study also demonstrated a tumor-suppressive function of CHL1 in neuroblastoma [26]. In addition, a down-regulation of CHL1 via overexpression of miR-21-5p promotes the propagation and invasion of tumor cells in colon adenocarcinomas [27]. Our group has recently described a significant role of CHL1 expression in non-small-cell lung cancer and a correlation with overall survival of the patients [24]. Hence, the neuronal adhesion molecule CHL1 might also play a role in the genesis of GIST. This study investigates the expression of CHL1 and evaluates its association with clinical and pathological aspects in GISTs. Furthermore, we investigated the potential of shed CHL1 as a peripheral tumor marker in sera of GIST patients. CHL1 is expressed in GIST and the majority is proteolytically cleaved To investigate the expression of CHL1 in GIST, we used qPCR to determine the relative mRNA expression levels of CHL1 and its isoforms 1 and 2 in eight GIST primary tumors (PT). In all examined samples, CHL1 transcripts were detected. Furthermore, it was shown that, with one exception, both CHL1 isoforms are expressed in GIST (Figure 1A). On the protein level, we further confirmed CHL1 expression in five primary tumors and two distant metastases. All samples displayed a distinct CHL1 expression in Western blot analysis. Interestingly, the soluble proteolytic CHL1 fragments with molecular weights of 165 kDa and 125 kDa were the most prevalent, while full-length CHL1 with a molecular weight of 185 kDa was only weakly detectable (Figure 1B). Reduced local CHL1 expression correlates with advanced tumor stages After confirmation of CHL1 expression on the RNA as well as the protein level in GIST, we analyzed 58 samples of primary GIST on a TMA. The staining pattern of the CHL1 immunohistochemistry showed a predominantly membranous expression of the CHL1 molecule in GIST (Figure 2). Although some cytoplasmic staining was sometimes seen, this was always associated with a much higher staining level at the membranes. Primary tumors were CHL1-positive in 44.8% of cases (n = 26).
Low tissue expression was significantly associated with advanced TNM stages (stages I and II versus III and IV, p = 0.036). In addition, a correlation with favorable tumor localization (gastric versus small and large intestine or esophagus; p = 0.039) was observed. The Miettinen score failed to show a significant association with local CHL1 expression by a small margin (p = 0.078). When performing a Kaplan-Meier survival analysis, neither recurrence-free survival nor overall survival reached significance (p = 0.113 and p = 0.387, respectively). However, a trend towards a reduced recurrence-free survival in GIST with decreased local expression was seen (Figure 3A and 3B). No further correlation between CHL1 protein expression levels and other clinicopathological parameters was found (Table 1). Serum CHL1 levels are elevated in GIST patients, indicating advanced tumor stages and reduced recurrence-free survival To further investigate the finding of predominantly soluble CHL1 fragments by Western blot in human samples, we analyzed the sera of 102 GIST patients by ELISA. A total of 91 sera of healthy volunteers were used as controls. Systemic CHL1 levels were significantly elevated in GIST patients (n = 102, median 11.6 ng/ml, standard deviation (SD) ±4.7 ng/ml) compared to healthy controls (n = 76, mean 8.5 ng/ml, SD ±5.1 ng/ml, p = 0.001; Figure 4A). A receiver operating characteristic curve was used to establish the sensitivity-specificity relationship for CHL1 levels (Figure 4B). The Youden index determined the cut-off level at 11.0 ng/ml (75th percentile at 14.5 ng/ml). For this cut-off, a sensitivity of 72.3% and a specificity of 52.7% with an area under the curve (AUC) of 0.690 were calculated. For the correlation of clinicopathological data with systemic CHL1 levels, patients were divided into a low-level (< 11.0 ng/ml) and a high-level serum CHL1 group (≥ 11.0 ng/ml). Cross tabulation showed a significant association of increased systemic CHL1 levels with advanced tumor sizes (pT3 and pT4 compared to pT1 and pT2; p = 0.023), higher UICC classification (I and II versus III and IV, p = 0.021), and an increased Fletcher score (very low and low risk compared to intermediate and high risk; p = 0.041). The correlation with the Miettinen score missed significance by a small margin (p = 0.067). Other clinicopathological parameters did not show any significant differences. For further details, see Table 1. Survival curves plotted by the Kaplan-Meier analysis demonstrated a significant correlation of serum CHL1 levels with recurrence-free survival (p = 0.010; Figure 3C), while overall survival did not show a significant association (p = 0.197; Figure 3D). Multivariate analysis failed to show an independent effect of systemic CHL1 levels for a cut-off at 11.0 ng/ml. When using the 75th percentile at 14.5 ng/ml as cut-off for systemic CHL1 levels, log-rank analysis demonstrated an increase of the prognostic value for recurrence-free survival (p < 0.001; Figure 3E). Overall survival still failed to reach significance (p = 0.290; Figure 3F). Multivariate analysis further confirmed an independent prognostic effect of systemic CHL1 levels on recurrence-free survival (p = 0.004) with a cut-off at 14.5 ng/ml. In addition, the mitotic count or grading of the GIST had a significant impact on recurrence-free survival (p = 0.030). However, none of the other clinicopathological variables was of prognostic significance in multivariate analysis (Table 2).
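The cut-off derivation described above can be reproduced in a few lines: compute the ROC curve, then pick the threshold maximising the Youden index J = sensitivity + specificity - 1. The serum values below are synthetic draws using the reported group means and spreads, so the resulting AUC and cut-off only approximate the published figures.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic serum CHL1 values (ng/ml); 1 = GIST patient, 0 = control.
rng = np.random.default_rng(1)
levels = np.concatenate([rng.normal(11.6, 4.7, 102), rng.normal(8.5, 5.1, 91)])
labels = np.concatenate([np.ones(102), np.zeros(91)])

fpr, tpr, thresholds = roc_curve(labels, levels)
j = tpr - fpr                       # Youden index J = sens + spec - 1
best = np.argmax(j)
print(f"AUC = {roc_auc_score(labels, levels):.3f}")
print(f"Optimal cut-off = {thresholds[best]:.1f} ng/ml "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```

The Youden criterion weighs sensitivity and specificity equally; a screening-oriented marker panel might instead fix a minimum sensitivity and read the corresponding threshold off the same curve.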
DISCUSSION Until now, clinicopathological parameters, as well as c-kit and PDGFRa mutation status, have served as prognostic markers in GIST. Diagnostic or serum markers with prognostic relevance for GISTs have not yet been reported. Hence, we wanted to investigate whether CHL1, which has been found in other solid malignancies, is expressed by GISTs and might serve as a potential diagnostic and prognostic marker in tissue and blood. In this study, we were able to demonstrate that human GIST expresses CHL1 on the mRNA as well as on the protein level. We further demonstrated by immunohistochemistry that the majority of CHL1 displayed a membranous localization and that 44.8% of the GISTs yielded a significant local CHL1 expression. Reduced CHL1 tissue expression was associated with advanced tumor stages and a localization distant from the stomach, indicating a worse tumor biology. In other entities, effects of CHL1 on biological functions of cancer cells have also been reported. A recent study demonstrated that inhibiting CHL1 expression by microRNA-21 increases the invasiveness of tumor cells in colon adenocarcinomas [27]. Similar results have been shown for breast cancer cells, in which CHL1 deficiency led to tumor formation, and a knockdown of CHL1 expression led to increased proliferation and invasion [21]. Of note, patient-derived GISTs predominantly showed soluble proteolytic CHL1 fragments with molecular weights of 165 and 125 kDa in our study. Hence, our results implicate that the majority of the predominantly membranous CHL1 in human GISTs is cleaved. The ectodomain of CHL1 is a substrate of a disintegrin and metalloproteinase 8 (ADAM8) and beta-secretase 1 (BACE1), regulating cellular interactions [19,20,28]. Hence, one might hypothesize that cleavage of CHL1 along with other cell adhesion molecules leads to a functional downregulation that decreases cell adhesion. Soluble CHL1 or the remaining intracellular domain might then foster tumor cell migration and invasiveness. In accordance with this hypothesis, an association of decreased CHL1 expression with distant and lymph node metastases, as well as with reduced overall survival, was reported for esophageal cancers [25]. The rate of CHL1 cleavage directly correlates with the amount of soluble CHL1 proteoforms, which are released into the environment and subsequently emitted into the blood stream. The possible role of mini-exon 8 in CHL1 isoform 2 has yet to be conclusively investigated. Interestingly, we were able to detect a significantly higher amount of soluble CHL1 in the sera of GIST patients as compared to controls. For a cut-off at 11.0 ng/ml, a good sensitivity (72.3%) and low specificity (52.7%) with an AUC of 0.690 were found, indicating a sufficient detection of patients with GISTs. Other studies have investigated the role of anoctamin-1 (ANO1)-positive circulating tumor cells as a potential biomarker in GIST, reporting a sensitivity of 64.2% and a specificity of 88.1% [29,30]. The latter study also investigated the predictive role of systemic carcinoembryonic antigen (CEA) in GISTs, describing a sensitivity of 69.5% and a specificity of 30.6% [30]. Hence, serum CHL1 might have a role in a panel of different serum markers in screening for GIST.
Especially due to the non-invasiveness of obtaining a blood sample as opposed to taking a tissue biopsy, which might also be hampered by missing the tumor tissue completely or by the collection of insufficient usable material, liquid biopsy approaches are of particular interest. Moreover, we were able to link increased CHL1 serum levels in GIST patients with advanced tumor stages and high-risk tumors according to the UICC classification and the Fletcher score. This is in line with a recent study by Kotani et al., describing a significant association between elevated blood secretion of CHL1 and tumor size in a lung cancer xenograft mouse model [31]. To further support the prognostic importance of serum CHL1, GIST patients with high systemic CHL1 levels demonstrated a significantly shorter recurrence-free survival as compared to patients with low systemic CHL1 levels, independent of other clinicopathological factors. In line with the latter finding, we found a trend towards earlier recurrences in patients with reduced local CHL1 expression. The divergent prognostic effect of local and systemic CHL1 levels seems to be a result of local CHL1 cleavage releasing CHL1 fragments into the blood stream. Interestingly, the CHL1 levels were not significantly increased in metastasized patients, leading to the hypothesis that CHL1 shedding might primarily occur in the primary tumor. However, no significant association of local CHL1 expression with survival was found in univariate analysis. In addition, systemic CHL1 levels (cut-off at 11.0 ng/ml) failed to reach an independent prognostic value in multivariate analysis. The latter results might be caused by the small cohort analyzed, and further studies are needed to verify our data. Nevertheless, using the 75th percentile as cut-off for systemic CHL1 levels, a clear independent prognostic effect was observed. Thus, the adverse prognostic effect of increased serum CHL1 caused by increased local CHL1 cleavage might reflect a loosening of cell adhesions within the GIST, intensifying the spread of tumor cells and thereby inducing earlier recurrences. In conclusion, we demonstrated for the first time that GISTs express CHL1 and that a significant amount of local CHL1 undergoes cleavage. Reduced membranous CHL1 expression is associated with advanced tumor stages and unfavorable localizations of GIST. In addition, we were able to show that systemic CHL1 levels are increased in GIST patients, and an inverse prognostic effect of local and systemic CHL1 levels was found. Finally, we were able to link systemic CHL1 levels with a shortened recurrence-free survival independent of other clinicopathological parameters. Hence, serum CHL1 levels might have the potential to serve as a diagnostic and prognostic marker for GIST. Further studies with larger patient cohorts must validate the data of this preliminary study. Patients The study was approved by the Medical Ethical Committee, Hamburg, Germany. Written informed consent was obtained from all patients before study inclusion. All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Clinical data like sex, age at diagnosis, location and size of the tumor, Fletcher and Miettinen score, resection margin, metastasis, UICC stage according to the 8th edition, presence of recurrence, as well as date and cause of death were obtained from a combination of clinical and pathological record review, reports of outside medical records and communication with patients and with their attending physicians. Overall survival was calculated from the date of surgery to the date of death or last follow-up. None of the patients received neo-adjuvant therapy. Tissue samples and tissue microarray Fifty-eight patients with surgically resected GISTs, who were treated at the University Medical Center Hamburg-Eppendorf over a period of ten years, were chosen retrospectively. All resected GISTs were diagnosed by immunohistochemical analyses (CD117, CD34, DOG1) and mutation analysis (KIT/PDGFRA) and were interpreted as well as staged by two independent pathologists on a routine basis. Tissues were fixed in 4% buffered formalin and embedded in paraffin. Haematoxylin-eosin stained sections were cut from selected primary tumor blocks with representative tumor regions. Tissue cylinders with a diameter of 600 µm were punched out of the original donor block and arrayed on a new paraffin block using a semi-automated tissue arrayer. Subsequently, 5-µm sections of the complete TMA were cut using the Paraffin Sectioning Aid System (Instrumedics, Hackensack, NJ, USA). Immunohistochemistry The CHL1 staining protocol for paraffin tissues was optimized in an extensive multistep procedure on various benign and malignant tissues, modifying the staining protocol until selective staining with the lowest background signal was established [32,33]. Freshly cut TMA sections were immunostained on one day and in one experiment. Slides were deparaffinized and exposed to heat-induced antigen retrieval for 5 minutes in an autoclave at 121°C in pH 7.8 Tris-EDTA-citrate buffer. The primary antibody specific for CHL1 (goat, polyclonal antibody; R&D Systems, USA; cat# AF2126; dilution 1:450) was applied at 37°C and pH 9.1 for 60 minutes. Bound antibody was then visualized using the EnVision Kit (Dako, Glostrup, Denmark) according to the manufacturer's directions. The staining intensity and the fraction of positive tumor cells were scored for each tissue spot, as described recently [34,35]. Specimens were considered immunopositive for CHL1 if ≥ 30% of the tumor cells had clear evidence of immunostaining. Two independent investigators (EG and MT) performed the immunohistochemical analysis and scoring without knowledge of the patients' identities or clinical statuses. Serum samples For this study, 102 patients with GIST were chosen retrospectively. The serum samples were collected across Germany, Austria and Switzerland with the aid of a self-help organization for GIST patients called "Lebenshaus". We selected patients on the basis of availability of specimens and did not stratify them due to the rare occurrence of the disease and different treatment strategies. As a control group, 91 blood bank donors were included in the study. Due to missing clinicopathological data for 21 patients, a total of 81 patients were included in the correlation and survival analyses.
ELISA for the detection of soluble human CHL1 For the detection of soluble CHL1 (s-CHL1), 96-well flexible microtiter plates (Costar 9019) were coated overnight with 50 µl per well of 10 µg/ml capturing antibody (monoclonal murine IgG1 anti-human CHL1 antibody, Clone 316223, R&D Systems®, MN, USA). Wells were blocked with 3% w/v bovine serum albumin (BSA; Fraction V, 98% purity, Sigma-Aldrich, Munich, Germany) in PBS/T (PBS containing 0.05% v/v Tween) for 45 min and then incubated for 1 h with human sera diluted 1:5 in PBS. After five washes with PBS/T, bound protein was detected with the biotin-conjugated goat mAb BAF2126 (anti-human CHL1 antibody, Lot Number URF01, R&D Systems®, MN, USA) followed by streptavidin-conjugated peroxidase using TMB as substrate. The color reaction was stopped by the addition of 10 mM H2SO4 and analyzed at 450 nm using an ELISA reader. Human CHL1-Fc protein (Catalog Number 2126-CH, R&D Systems®, MN, USA) served as an internal standard for the assay. To ensure that the immunoassay was suitable for measuring clinical serum samples, reproducibility, linearity, and cross-reactivity were examined. The assay showed negligible cross-reactivity to L1, displayed excellent linearity with serial dilutions and showed < 10% coefficient of variation in intra- and inter-assay variability studies. CHL1 Western blot analysis For analysis of CHL1 protein expression, tissues were taken up in ice-cold RIPA buffer (Pierce Biotechnology, Rockford, IL, USA) containing Halt Protease Inhibitor Mix (Pierce Biotechnology, Rockford, IL, USA) and lysed using a Dounce homogenizer. Lysates were centrifuged at 20,000 g at 4°C for 15 min and the supernatants were collected. Protein concentrations were estimated using the BCA Protein Assay (Pierce Biotechnology, Rockford, IL, USA). Lysates were then subjected to SDS-PAGE and Western blot analysis as previously described [36]. A polyclonal antibody raised against the extracellular domain of human CHL1 (CHL1/L1CAM-2, AF2126; R&D Systems, USA) was used. Statistical analysis SPSS for Windows (Version 24, SPSS Inc., Chicago, IL, USA) was used for statistical analysis. The cut-off level for serum CHL1 quantification was determined using the Youden index. Correlations were performed using cross tables, and statistical analysis was done with the Chi-square test. Survival curves of the patients were plotted using the Kaplan-Meier method and analyzed using the log-rank test. Median survival was not reached for either disease-free or overall survival. For that reason, survival data are presented as mean values. Cox regression analysis was used for univariate and multivariate analyses to assess the independent influence of CHL1 tissue expression or serum CHL1 levels and other covariates on recurrence and survival. P values less than 0.05 were considered statistically significant. Author contributions Karl-F. Karstens: study design, data collection, data analysis and interpretation, manuscript draft; Eugen Bellon: data collection, critical revision of manuscript; Adam Polonski: data collection, data analysis and interpretation; Gerrit Wolters-Eisfeld: data collection, data analysis and interpretation, critical revision of manuscript; Nathaniel Melling: critical revision of manuscript; Matthias Reeh: critical revision of manuscript; Jakob R. Izbicki: critical revision of manuscript; Michael Tachezy: study design, data collection, data analysis and interpretation, critical revision of manuscript.
ACKNOWLEDGMENTS We thank the patients who willingly and generously provided data and samples for research purposes. We would like to thank Prof. Melitta Schachner for providing CHL1-Fc protein.
The association between infant and young child feeding practices and diarrhoea in Tanzanian children Background Diarrhoea is a leading cause of child mortality in Tanzania. The association between optimal infant feeding practices and diarrhoea has been reported elsewhere, but the evidence has been limited to promote and advocate for strategic interventions in Tanzania. This study examined the association between infant and young child feeding (IYCF) practices and diarrhoea in Tanzanian children under 24 months. Methods The study used the Tanzania Demographic and Health Survey data to estimate the prevalence of diarrhoea stratified by IYCF practices. Using multivariable logistic regression modelling that adjusted for confounding factors and cluster variability, the association between IYCF practices and diarrhoea among Tanzanian children was investigated. Results Diarrhoea prevalence was lower in infants aged 0-5 months whose mothers engaged in exclusive breastfeeding (EBF) and predominant breastfeeding (PBF) compared to those who were not exclusively and predominantly breastfed. Infants aged 6-8 months who were introduced to complementary foods had a higher prevalence of diarrhoea compared to those who received no complementary foods, that is, infants who were exclusively breastfed at 6-8 months. Infants who were exclusively and predominantly breastfed were less likely to experience diarrhoea compared to those who were not exclusively and predominantly breastfed [adjusted odds ratio (AOR) 0.31, 95% confidence interval (CI) 0.16-0.59, P < 0.001 for EBF and AOR = 0.30, 95% CI 0.10-0.89, P = 0.031 for PBF]. In contrast, infants aged 6-8 months who were introduced to complementary foods were more likely to experience diarrhoea compared to those who received no complementary foods (AOR = 2.91, 95% CI 1.99-4.27, P < 0.001). Conclusions The study suggests that EBF and PBF were protective against diarrhoeal illness in Tanzanian children, while the introduction of complementary foods was associated with the onset of diarrhoea. Strengthening IYCF (facility- and community-based) programmes would help to improve feeding behaviours of Tanzanian women and reduce the diarrhoea burden in children under 2 years. Background The early childhood period is considered one of the most important phases of life, determining the quality of health, behaviour and well-being across the lifespan [1]. The early years also reflect a time of high vulnerability of the child to various adverse health outcomes such as diarrhoea [2]. In the past two decades, significant progress has been made in reducing diarrhoeal morbidity and mortality among children under 5 years old; however, recent reports have shown that diarrhoea is still a leading cause of under-five mortality in many developing countries [3,4]. In sub-Saharan African (SSA) countries, including Tanzania, children are more likely to die before their fifth birthday from illnesses such as diarrhoea and upper respiratory tract infections than their counterparts in developed countries [5-7]. Additionally, under-five deaths in developing countries are mainly attributable to preventable infectious diseases (including diarrhoea), for which appropriate breastfeeding, improved water and sanitation, routine vaccination programs and strong operational policies can play significant roles [8,9].
Optimal breastfeeding practices protect children against diarrhoea-related diseases [10,11], and mothers who optimally breastfeed their babies have been shown to have a reduced risk of type 2 diabetes mellitus and breast and ovarian cancers [5,12]. A recent review indicated that only approximately 40% of infants aged 0-5 months in SSA countries were exclusively breastfed, reflecting gaps in optimal infant feeding practices in many communities [5]. The study also suggested that many infants were introduced to complementary foods too early in SSA countries. While the introduction of complementary foods to infants is crucial for infant growth and development, previous studies have indicated that the poor conditions in which those foods are prepared, stored or fed to the child, as well as the immunological status of the child, may contribute to the onset of diarrhoea [13-17]. Previous reports from Tanzania have indicated that the prevalence of exclusive breastfeeding (EBF) and continued breastfeeding at 1 year was high (50 and 92%, respectively) [18], and the median duration of any breastfeeding was 21 months, suggesting that some mothers exclusively breastfed for more than 6 months [19]. Nonetheless, the relationship between EBF and continued breastfeeding at 1 year and diarrhoea has not been investigated in Tanzania. In 2016, the Global Burden of Disease study indicated that diarrhoeal diseases contributed to the burden of under-five deaths and disabilities, accounting for 5.4% of total deaths and 5.9% of disability-adjusted life years in Tanzania [20,21]. Additionally, the prevalence of diarrhoea among children under 5 years varied from 6% [22] to 22% [23] in Tanzania. Studies that investigated the impact of infant feeding practices on diarrhoea in SSA (e.g. Nigeria [10]) and Asia (e.g. Vietnam [24] and Bangladesh [7]) have been published, where EBF was protective against diarrhoeal illness. However, evidence on the association between infant and young child feeding (IYCF) practices and diarrhoea has been limited in Tanzania [25], where studies have primarily focused on HIV-exposed infants [26,27]. A country-specific study focused on the relationship between IYCF practices and diarrhoea would be essential for public health experts to promote and advocate for strategic investments in IYCF programmes to improve child health outcomes in Tanzania. Additionally, public health programmes are largely funded by international agencies in Tanzania [28], and findings from this study will also be of interest internationally, particularly in the context of the renewed focus on the child nutrition agenda at the global, regional and national levels [29]. The present study aimed to investigate the association between IYCF practices (i.e. EBF; early/timely initiation of breastfeeding; predominant breastfeeding; timely introduction of solid, semi-solid and soft foods; and continued breastfeeding at 1 year) and diarrhoea in Tanzanian children under 24 months. Data sources The study used the Tanzania Demographic and Health Survey (TDHS, 2010) data, collected by the National Bureau of Statistics, Dar es Salaam, Tanzania, with technical assistance from the Inner City Fund (ICF) International, Maryland, USA. The TDHS collects maternal and child information that includes socio-demographics and child and female reproductive characteristics, obtained from a nationally representative sample of households.
Using standardised face-to-face questionnaires, the TDHS collected information on IYCF practices from eligible women of childbearing age (15-49 years). For the IYCF practices, questions in the survey tool included information on breastfeeding and complementary feeding practices, as well as the duration and diversity of the infant food provided. A weighted total sample of 10,139 eligible women was interviewed, yielding a 96% response rate. Additional information on the data source (including the sampling procedure and methodology for data collection) is described elsewhere [19,30]. Outcome variable The study outcome was diarrhoea, defined as the passage of three or more loose or liquid stools per day, and was based on maternal recall for each child under 5 years of age in the household during the 2 weeks preceding the survey. Consistent with previous studies [18,31], this analysis used information from the most recent live birth, aged less than 24 months, living with the respondent, to reduce recall bias. Measurement of diarrhoea was based on the child age group for each IYCF practice [10]. Exposure variables The exposure variables for this study were the IYCF indicators, assessed based on the World Health Organization (WHO) definitions for assessing IYCF practices [32], and illustrated in code after this list:
Early or timely initiation of breastfeeding: the proportion of children 0-23 months of age who were put to the breast within 1 h of birth.
Exclusive breastfeeding: the proportion of infants 0-5 months of age who received breast milk as the only source of nourishment, allowing oral rehydration solution, drops or syrups of vitamins and medicines.
Predominant breastfeeding: the proportion of infants 0-5 months of age who received breast milk as the main source of nourishment, allowing water, water-based drinks, fruit juice, oral rehydration solution, drops or syrups of vitamins and medicines.
Continued breastfeeding at 1 year: the proportion of children 12-15 months of age who were fed breast milk.
Bottle feeding: the proportion of infants 0-23 months of age who received any liquid (including breast milk) or semi-solid food from a bottle with a nipple/teat.
Introduction of solid, semi-solid and soft foods: the proportion of infants 6-8 months of age who received solid, semi-solid or soft foods.
Early initiation of breastfeeding and EBF were incorporated into the analyses because of their association with reduced morbidity among children under 5 years [33,34]. Predominant breastfeeding, bottle feeding, continued breastfeeding at 1 year and the introduction of solid, semi-solid and soft foods were included because of their effect on the increased risk of diarrhoeal morbidity and mortality among children under 5 years [10,35-37]. A number of potential confounding variables were considered in the analyses based on previous studies [10,24] and data availability. These variables were categorised as socioeconomic factors (mother's employment, maternal education and household wealth), a health service factor (frequency of antenatal care visits), individual factors (maternal age, child age and gender), and household factors (urban or rural residence, drinking water source and type of toilet facility). A detailed description of these variables is provided in Table 1.
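As a concrete illustration of how these age-banded indicators translate into code, the snippet below computes three of them on a toy child-level table; the column names are invented for the example and do not correspond to actual TDHS variable names, and the toy data ignore survey weights.

```python
import pandas as pd

# Toy child-level records; column names are hypothetical, not TDHS variables.
df = pd.DataFrame({
    "age_months":      [2, 4, 7, 13, 3, 8, 14],
    "breastfed":       [1, 1, 1, 1, 1, 0, 1],
    "only_breastmilk": [1, 0, 0, 0, 1, 0, 0],   # ORS/drops/syrups still allowed
    "solid_foods":     [0, 0, 1, 1, 0, 1, 1],
})

ebf = df.loc[df.age_months < 6, "only_breastmilk"].mean()       # EBF, 0-5 months
intro = df.query("6 <= age_months <= 8")["solid_foods"].mean()  # solids, 6-8 months
cbf = df.query("12 <= age_months <= 15")["breastfed"].mean()    # continued BF, 1 year
print(f"EBF: {ebf:.0%}; introduction of solids: {intro:.0%}; continued BF: {cbf:.0%}")
```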
The household wealth index was calculated as a score of household assets (such as ownership of transportation devices, ownership of durable goods and household facilities), which was derived from a principal component analysis conducted by the National Bureau of Statistics, Dar es Salaam, Tanzania, and ICF International [19]. Source of drinking water was categorised as 'Improved' and 'Not improved'. Improved drinking water sources included residences where water was piped into the dwelling/yard, access to a public tap or standpipe, a tube well or borehole, a protected well, a protected spring, rainwater and/or bottled water. Not improved water sources comprised access to an unprotected well, an unprotected spring, a tanker truck or cart with a drum, surface water, sachet water and/or another source [38]. Type of toilet was categorised as 'Improved' and 'Not improved'. Improved toilets included flush or pour-flush toilets piped to the sewer system, a septic tank or a pit latrine; ventilated improved pit latrines; pit latrines with a slab; and/or composting toilets. Not improved toilets comprised flush or pour-flush toilets not piped to a sewer, septic tank or pit latrine; pit latrines without a slab or open pits; bucket or hanging toilets; and no toilet facility or the use of a bush or field [38]. Statistical analysis The exposures were expressed as dichotomous variables, where respondents (women aged 15-49 years) who exclusively breastfed were coded as '1' and those who did not were coded as '0'. The same analytical approach was employed for the other IYCF indicators. Preliminary analyses involved frequency tabulations of confounding variables (i.e. socioeconomic, health service, individual and household factors) in the TDHS, followed by an estimation of the prevalence of IYCF indicators, as well as the combination of IYCF practices and diarrhoea (and their corresponding 95% confidence intervals). Univariable and multivariable logistic regression analyses were used to investigate the associations between IYCF practices and diarrhoea, adjusted for potential confounders. Regression models were restricted to the youngest living child aged < 24 months living with the respondent (women aged 15-49 years) to reduce recall bias. Prevalence estimates were calculated with the 'svy' function to allow for adjustment for the cluster sampling design, and regression modelling used the 'xtlogit' function. All analyses were performed in Stata software version 14.0 (Stata Corporation, College Station, TX, USA). Ethics Measure DHS/ICF International obtained ethical approval from the Medical Research Coordinating Committee (MRCC), the national health research coordinating body in Tanzania. All questionnaires used for the DHS were reviewed and approved by the ICF International Institutional Review Board (IRB) to ensure they met the US Department of Health and Human Services regulations for the protection of human participants, as well as by the host country's IRB, to ensure compliance with national laws. The datasets used are available to apply for online, and approval was obtained from Measure DHS/ICF International for the analysis. Results Of the mothers surveyed (N = 10,139), 43.4% initiated breastfeeding within the first hour of birth and 48.6% exclusively breastfed for the first 6 months of their infants' life (Table 2). Only 10.1% of infants aged 0-5 months were predominantly breastfed, while 3.7% were bottle-fed.
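Before turning to the regression results, a rough Python analogue of the cluster-adjusted logistic regression described above is sketched below: a logit model with standard errors clustered on the sampling unit. The data are simulated and the variable names invented; the published analysis was run in Stata with its survey machinery, which additionally handles sampling weights and stratification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated infant-level data; 'psu' mimics the DHS primary sampling unit.
rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "diarrhoea": rng.integers(0, 2, n),
    "ebf":       rng.integers(0, 2, n),
    "rural":     rng.integers(0, 2, n),
    "psu":       rng.integers(0, 40, n),
})

# Logit with cluster-robust standard errors to reflect the survey design.
fit = smf.logit("diarrhoea ~ ebf + rural", data=df).fit(
    disp=0, cov_type="cluster", cov_kwds={"groups": df["psu"]})
print(np.exp(fit.params))       # adjusted odds ratios
print(np.exp(fit.conf_int()))   # 95% CIs on the odds-ratio scale
```

Exponentiating the coefficients and their confidence bounds yields the AOR and 95% CI format in which the study reports its findings.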
Many infants aged 6-8 months were introduced to complementary foods (72.2%), while the majority of mothers continued breastfeeding at 1 year (89.6%). When examining IYCF indicators in relation to the prevalence of diarrhoea, infants under 6 months of age who were exclusively and predominantly breastfed had a lower prevalence of diarrhoea compared to their counterparts who were not exclusively and predominantly breastfed (Table 2). In contrast, children aged 12-15 months who continued breastfeeding at 1 year had a higher prevalence of diarrhoea compared to their counterparts. Similarly, infants aged 6-8 months who were introduced to complementary foods had a higher prevalence of diarrhoea compared to those who received no complementary foods, that is, infants who were exclusively breastfed at 6-8 months. There was no difference in diarrhoea prevalence among children who were put to the breast within the first hour of birth and those who were not (Table 2). Multivariable analyses revealed that infants aged 0-5 months who were exclusively breastfed were less likely to experience diarrhoea compared to those who were not exclusively breastfed [adjusted odds ratio (AOR) 0.31, 95% confidence interval (CI) 0.16-0.59, P < 0.001]. Infants who were predominantly breastfed were significantly less likely to experience diarrhoea compared to those who were not predominantly breastfed (AOR = 0.30, 95% CI 0.10-0.89, P = 0.031) [Table 3]. The analysis showed that infants who received solid, semi-solid and soft foods at age 6-8 months were approximately three times more likely to experience diarrhoea compared to their counterparts who received no complementary foods, that is, infants who were exclusively breastfed at 6-8 months (AOR = 2.91, 95% CI 1.99-4.27, P < 0.001). Children who continued breastfeeding at 1 year and those aged 0-23 months who were bottle-fed were more likely to experience diarrhoea, but these findings were not statistically significant (Table 3). Discussion The present study found that the prevalence of diarrhoea was lower among infants whose mothers engaged in EBF and predominant breastfeeding (PBF) practices. The prevalence of diarrhoea was higher in children who continued breastfeeding at 1 year compared to those who had ceased breastfeeding at the same age. Children who were bottle-fed had a higher prevalence of diarrhoea compared to those who were not bottle-fed. Similarly, the prevalence of diarrhoea was higher in infants aged 6-8 months who received complementary foods compared to those who did not, that is, infants who were exclusively breastfed at 6-8 months. Our analyses revealed that EBF and PBF were protective against diarrhoeal illness in Tanzanian children aged 0-5 months, after adjustment for confounding variables. In contrast, infants aged 6-8 months who received complementary foods were approximately three times more likely to experience diarrhoea compared to those who received no complementary foods. The study showed that the prevalence of diarrhoea was higher among infants who were not exclusively breastfed. Also, the likelihood of an infant aged under 6 months experiencing diarrhoea was higher among those not exclusively breastfed compared to those who were exclusively breastfed.
This finding is consistent with evidence from other developing countries, including Bangladesh [7], Vietnam [24] and Nigeria [10], where EBF was protective against diarrhoeal illness among infants of the same age, even after adjusting for poor sanitation and unsafe drinking water sources [10,24]. Similarly, a meta-analysis of 18 studies also reported the protective effects of EBF in reducing the burden of diarrhoeal illness [6]. Reasons for the protective effect of EBF on diarrhoea are based on the fact that EBF limits the infant's exposure to contaminated liquids and foods, as well as the immunological activities of breast milk that protect the infant's gastrointestinal tract from invading micro-organisms [5,17]. Additionally, studies have reported that breast milk also stimulates the innate immune system [39,40] and the epigenetic programme of the infant, activities which are also essential for the prevention of infections [41]. Our study suggested that PBF was protective against diarrhoea among Tanzanian children, and this was consistent with previous studies from Nigeria [10], Bangladesh [7] and other SSA countries with high diarrhoea mortality [25]. However, studies from developing countries (such as Ethiopia and Burkina Faso) have shown that PBF increased the likelihood of the infant experiencing diarrhoea [6,24,42-44], as parents or caregivers often insist that water (often 'unimproved' and a primary source of infection) induces suckling or quenches an infant's thirst after breastfeeding [45-47]. Although PBF has the potential to increase the likelihood of the infant experiencing diarrhoea because of the introduction of food-based fluids, Rajiv and colleagues have argued that promoting both EBF and PBF may be more beneficial than promoting EBF over PBF. The authors also noted that there is limited evidence to differentiate the benefits of EBF from those of PBF [11]. Our analysis underscores the fact that both EBF and PBF may have similar impacts on diarrhoea prevention among children aged 0-5 months in Tanzania. Nevertheless, efforts must be made to promote EBF over PBF, as the possibilities of water or water-based food contamination in Tanzania are high because of limited access to improved drinking water sources, improved sanitation and high-quality food storage facilities for mothers from low SES groups and those living in urban slums or rural areas [48]. The study showed that continued breastfeeding at 1 year and bottle feeding were associated with a higher likelihood of the infant experiencing diarrhoea, but the relationship was not statistically significant. Previous studies reported that continued breastfeeding at 1 year and bottle feeding were associated with an increased likelihood of the child experiencing diarrhoea [10,25]. Importantly, the impact of continued breastfeeding and bottle feeding on diarrhoea could be attributed to the introduction of contaminated complementary food/water or the use of a contaminated bottle with a teat or nipple, given that the age intervals used in assessing those indicators correspond to a time when the infant has commenced complementary foods [10,43,49]. Similarly, the WHO considers bottle feeding a breastfeeding indicator because of the association between bottle feeding and increased diarrhoeal morbidity and mortality [32,37].
The present study suggested that the timely introduction of solid, semi-solid and soft foods was associated with an increased likelihood of the infant experiencing diarrhoea. Prior studies have shown that the introduction of complementary foods in many developing countries (including Tanzania) predisposes the infant to diarrhoea because of the unsafe conditions in which the foods are handled, prepared and stored [14,50], and this may be the case in our study. In Tanzania, maize is the most commonly used complementary food for infants, but maize and maize-based foods have been shown to contain a high concentration of fumonisin, a diarrhoea-causing mycotoxin [51-53]. A number of locally relevant approaches have been suggested to limit infants' exposure to highly fumonisin-contaminated maize in Tanzania. These include maize sorting, dehulling of maize (removal of the hull from the seed), limiting the daily intake of maize and/or replacing maize with other, less fumonisin-contaminated cereals such as sorghum or finger millet [51]. However, there are significant challenges to the full implementation of those strategies. For example, resource-constrained farmers are unwilling to sort out low-quality maize from their annual harvest [54,55], and households may prefer whole maize over the dehulled maize product [55]. Our finding implies that mothers should not only be encouraged to introduce complementary foods to their infants in a timely manner, but also that childhood nutrition intervention strategies must ensure that those foods are safe and nutritionally balanced, consistent with the WHO recommendation. Policy implications Our study has policy and practical implications for public health experts, decision-makers and researchers, as it further highlights the important role of IYCF practices in the context of diarrhoeal disease in a developing country. Accordingly, facility-based interventions such as the Baby-Friendly Hospital Initiative (BFHI) to adequately promote and support optimal IYCF practices are essential to reduce the diarrhoea burden. However, a report from Tanzania has shown that the BFHI is weak. That is, of the 3600 maternity service centres in the country, only 18 health facilities were certified baby-friendly centres in 2015 [56], indicating that many health facilities and professionals may be unaware of the BFHI. This gap in the full implementation of the BFHI (shown to promote and support optimal IYCF behaviours [57-61]) suggests that more work is needed to improve the breastfeeding behaviours of Tanzanian mothers. Broadening the BFHI in Tanzania will not only improve EBF and reduce diarrhoea morbidity and mortality but could also increase household productivity due to a reduction in child sick days [5]. Furthermore, the World Breastfeeding Trends Initiative for Tanzania reported that the national-level policy framework and programme coordination to promote and support breastfeeding practices are 'adequate'. Nevertheless, the full implementation of breastfeeding programmes at the health facility and community levels, as well as linkages between breastfeeding programmes at the national and subnational levels, are fragile [56].
Interventions to improve IYCF practices and reduce the diarrhoea burden in Tanzania should include improvement in IYCF information for health professionals at the health system level, strengthening routine postnatal care follow-up for mothers (such as subsidised or free maternal health care services) and scaling up community-based strategies (such as Baby-Friendly Community Initiatives, BFCI, and reinforcing linkages between facility- and community-based programmes). Strengthening compliance with the Employment and Labour Relations Act 2004 and increasing domestic funding for IYCF programmes will also have impacts on breastfeeding practices [56,62]. A detailed description of those facility- and community-based initiatives to improve IYCF behaviours in Tanzania has been provided elsewhere [56]. In 2016, the Government of the United Republic of Tanzania launched the National Multisectoral Nutrition Action Plan (NMNAP) for the years spanning 2016-2021, with the improvement of breastfeeding practices and the introduction of complementary foods being part of the rationale for this initiative [62]. Tanzania's commitment to improving childhood nutrition has been long-lasting, dating back to the 1960s. However, the issue has been to translate government commitment into evidence-based interventions that are appropriately financed, well-implemented and tracked [62]. To reduce the burden of diarrhoea associated with the introduction of complementary foods in Tanzania, a number of strategic approaches will be needed. These may include increased translation of government commitment into well-resourced, well-coordinated, measurable and culturally appropriate initiatives, and collaborative efforts between government agencies (such as health, agriculture and education) [62] and industry partners [63] to reduce the contamination associated with complementary foods. Additionally, studies that investigate the nutritional status of children may be warranted to inform childhood nutrition initiatives and policies in Tanzania.
Study limitations and strengths
A number of methodological limitations should be considered. First, the study used cross-sectional data, and a clear temporal association between IYCF and diarrhoea cannot be established because the exposure and outcome data were collected simultaneously. Second, the exposure and outcome variables were based on self-report, which is a source of recall or measurement bias. The analysis may have overestimated or underestimated the association between IYCF practices and diarrhoea, as the mother may have incorrectly reported the number of loose stools passed by the child. Nonetheless, our findings are consistent with previous evidence from developing countries on IYCF practices and diarrhoea [7,10,24]. Third, unmeasured confounding factors, such as the impact of culture and of family structure and dynamics, may be a limitation of the study. Fourth, data on the duration and severity of diarrhoea were unavailable, information that would have provided an in-depth understanding of the scope of protection from EBF and PBF and of the pattern of diarrhoea in the context of IYCF practices. Finally, seasonal patterns exist for diarrhoea in SSA countries (including Tanzania) [64]. However, our study was limited to calendar years; using monthly data would have provided more detailed findings. Despite these limitations, the study has strengths.
First, we believe that selection bias is unlikely to affect the study findings, given the nationally representative sample of participants and the high response rate (96%). Second, the study used the TDHS data, which were collected with standardised questionnaires to ensure comparability across geographies in Tanzania. Finally, the study provides additional evidence on the impact of optimal IYCF practices on diarrhoea, the second most common cause of death among children under 5 years, to continue the advocacy for interventions that seek to promote child survival.
Conclusion
The study found that diarrhoea prevalence was higher among children whose mothers practised suboptimal IYCF in Tanzania. EBF and PBF were protective against diarrhoeal illness, while the introduction of solid, semi-solid and soft foods to infants aged 6-8 months was associated with an increased onset of diarrhoea in Tanzanian children. Strengthening facility-based (e.g. BFHI and postnatal care follow-up) and community-based (e.g. BFCI) linkages would help to improve IYCF and reduce under-five mortality from diarrhoea in Tanzanian children.
2018-02-04T01:25:38.917Z
2018-01-30T00:00:00.000
{ "year": 2018, "sha1": "07669e79ba7c5aefce88d756e1c49f5ef664bc24", "oa_license": "CCBY", "oa_url": "https://tropmedhealth.biomedcentral.com/track/pdf/10.1186/s41182-018-0084-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "07669e79ba7c5aefce88d756e1c49f5ef664bc24", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237953810
pes2o/s2orc
v3-fos-license
Delayed Post-Traumatic Spinal Cord Infarction with Quadriplegia: A Case Report
INTRODUCTION
Traumatic spinal cord infarction is a rare condition that encompasses various types of vertebral artery injuries, including occlusion, dissection, intimal tears, stenosis, and pseudo-aneurysm formation [1-4]. Delayed post-traumatic spinal cord infarction is a devastating complication that has been reported in children and adults after injuries without vertebral bone fractures [5-7]. Spinal cord ischemia has been reported to occur after aortic surgery, profound arterial hypotension, and intense exercise [2,4,8], but delayed traumatic spinal cord infarction after motor vehicle accidents has not been reported. This case involved a 52-year-old male driver who developed severe complications, including progressive loss of strength in all four extremities and loss of sensation below the shoulder.
CASE REPORT
A 52-year-old man who was involved in a motor vehicle collision was transported to the trauma center by emergency medical services. His mental state was alert, but he had exhibited loss of consciousness for about 1 minute. He complained of neck pain and a tingling sensation in his upper arms. A neurological examination revealed an extremity motor strength of Grade 4 and hyperesthesia in both upper arms, but a sensory examination revealed normal responses to temperature, light touch, pressure, and vibration. CT and MRI of the brain and cervical spine showed multiple degenerative changes that were mild at C3-C5 and within the normal limits for soft tissue, as well as no bony fractures (Fig. 1). The laboratory results showed no specific findings, and after 4 hours of observation, the patient's symptoms were relieved. The medical staff explained the necessity of close observation and monitoring of symptoms in an inpatient setting despite symptom improvement, but the patient requested to be discharged. The patient revisited the emergency center 56 hours after the accident. His vital signs were as follows: blood pressure, 176/85 mmHg; pulse rate, 64 beats per minute; respiratory rate, 20 breaths per minute; body temperature, … After approximately 20 minutes, voluntary movement of the lower extremities became difficult. A neurological examination revealed quadriplegia, in which the motor strength of the upper and lower extremities was classified as Grade 1, and a sensory examination revealed the absence of sensation of pain, temperature, and pressure bilaterally below level C4. Cervical spine CT angiography and magnetic resonance angiography (MRA) suggested narrowing of the vertebral artery and the absence of flow. In addition, a proximal internal carotid artery showed severe stenosis (Fig. 2). The patient was suspected to have cervical spinal cord infarction and underwent diffusion-weighted imaging (DWI) of the cervical spine. MRI showed increased signal intensity at the vertebral level from C3 to C5 (Fig. 3).
The patient immediately received heparinization and anticoagulation treatment, but spontaneous respiration deteriorated, and the oxygen saturation decreased after 8 hours. The patient's respiration gradually worsened, and his mental state changed to a deeply drowsy state. Intubation was performed, and ventilator care was initiated to maintain respiratory control. After the patient was admitted to the intensive care unit, brain MRI with DWI was performed on the next day. The medical staff could not find any central nervous system problems in areas such as the cerebellum or brain stem that could explain the patient's loss of consciousness and respiratory failure (Fig. 4). On the 14th day, the patient's mental state became alert and his vital signs were stable. Tracheostomy was performed, and spontaneous respiration was maintained with an oxygen supply and nebulizer treatment. On the 20th day, the patient was transferred to a rehabilitation hospital; he continued to exhibit quadriplegia with sensory loss below the C4 level after 6 months.
DISCUSSION
Most cases of traumatic neck pain are associated with cervical sprains, strains, and contusions due to minor crashes or collisions. These symptoms of neck pain usually improve gradually, but delayed traumatic cervical spinal cord infarction with such a rapid course is a devastating condition that has not yet been described [2,6-10]. The differential diagnosis of acute progressive paraplegia after trauma includes spinal cord injury without radiographic abnormality (SCIWORA); this acronym was introduced by Pang and Wilberger, who used it to refer to clinical symptoms of traumatic myelopathy without radiographic, CT, or MRI abnormalities [11,12]. In children, several cases of spinal cord infarction or ischemia after minor trauma have been reported [1,6]. A case series of children with spinal cord infarction without vertebral fractures concluded that hypotension and fibrocartilaginous embolism are the principal etiological factors. However, several cases of spinal cord infarction in adults have suggested that injuries may be caused by avulsion of the perforating vessel, vasospasm of the artery of Adamkiewicz (radicular artery), or transient ischemia in areas of borderline perfusion resulting from flexion to extension of the spinal cord [3,4,8]. Ischemia of the spinal cord is believed to be the underlying mechanism of spinal cord injuries without radiographic abnormalities; in the subacute phase, spinal cord ischemia typically manifests on MRI as focal cord swelling and "white pencil-like" high signal intensities on T2-weighted images [6,7,9]. Anatomic transection or instability of the spinal cord, or large hematomas (greater than 1/2 of the spinal cord diameter), have a poor prognosis and feature paresis or paralysis clinically [7,9]. As shown in this case report, the vascular territory is also an important factor in the anatomy of spinal cord infarction, which may manifest as abnormal findings of the vertebral artery on cervical spine CT angiography and MRA. The mechanism and timing of these vertebral circulation infarcts in spinal cord injury patients remain unclear [4,7,11,12]. If a patient complains of temporary or rapidly recovering neck pain or back pain without any initial neurological problem or abnormal physical examination findings, consideration should be given to avoiding activities that may increase the risk of exacerbation or recurrence.
Otherwise, the patient may experience paraplegia or death due to a clinically silent spinal cord injury. Therefore, physicians should be aware of the possibility of spinal cord injuries and utilize diagnostic tools such as CT angiography, MRA, and MRI with DWI for early-stage diagnosis, and the medical staff should explain the necessity of close observation and monitoring of symptoms in an inpatient setting for at least 48 hours, even if symptoms improve.
2021-09-27T19:48:54.157Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a8fab7bb7424fcc810cedbeae51ec28600ab1147", "oa_license": "CCBYNC", "oa_url": "https://www.jtraumainj.org/upload/pdf/jti-2021-0004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4e9f7cddd7dcf112c9721deb56579fa4c066b968", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
244606058
pes2o/s2orc
v3-fos-license
Self-Organising (Kohonen) Maps for the Vietnam Banking Industry
This is the first study to use the self-organising (Kohonen) map technique, an artificial neural network based on a non-supervised learning algorithm, to categorise Vietnamese banks into super-class groups. Drawing on unbalanced yearly data from 2008 to 2017, this study identifies two super-class groups (one and two). While group one consists of joint stock banks, group two consists of commercial state and joint stock banks. Using the non-structural indicator, the Lerner index, to capture market power, and the data envelopment analysis technique to measure bank performance, our results show significant differences in the Lerner scores (which represent bank market power) of the two groups of banks. Differences in the Lerner scores provide evidence of a group of strong banks that is isolated from other banks. This implies that this strong bank group has the potential to be monopolist and impairs Vietnam's competitive banking environment. The reason is that group two banks may be more profitable due to greater market power, whereas group one banks may struggle to cut costs to remain viable. These findings provide a better understanding for bank executives, policymakers and regulators of the Vietnam banking industry, and help ensure an efficient and competitive Vietnam banking environment.
Introduction
After the 2008 global financial crisis (GFC), Vietnam banks faced unprecedented challenges, including economic recession, credit growth rate stagnation and extraordinary levels of non-performing loans (KPMG 2013). On the 1st of March 2012, the Vietnam government issued the "Restructuring Financial Institutions 2011-2015" programme as a response to these financial challenges (Decision no.254/QD-TTg). The restructuring programme was designed to bring the Vietnam banking system into line with international standards (Le 2014; Nguyen et al. 2014, 2016a). The main features of the restructuring programme were: (i.) permit foreign ownership of local banks with a maximum 20% share; (ii.) support all local banks to register shares on the Vietnam stock exchange; (iii.) require all commercial banks to have at least 3000 billion VND in bank capital and meet capital adequacy requirements (minimum 9% in 2010); and (iv.) encourage merger and acquisition (M&A) activity to improve the competitiveness and performance of the Vietnam banking industry (Hoang et al. 2016). Restructuring, achieved largely through M&A, has had a significant impact on the competitive environment and performance of the Vietnam banking industry in several ways. First, as scholars note, M&As reduce the total number of banks and thus increase market concentration. This study adopted the SOM technique because it differs from other methods in that it does not distinguish a priori between state-owned and joint stock commercial banks. This study also examined the dynamic financial status of Vietnam's banks using the SOM trajectory technique. Tracking bank financial trajectories is crucial because it enables experts to assess companies' current financial conditions and observe financial developments over time. This study adds to the literature in several ways. First, most previous studies on bank market power and performance differentiate between state-owned and joint stock commercial Vietnam banks (Nguyen and Nghiem 2018; Vu and Turnell 2010). However, that research does not show any significant differences in bank market power and performance between state-owned and joint stock commercial banks.
This study used the SOM technique to categorise Vietnam banks into super-class groups. This is the first study to use a categorising methodology to examine differences in bank market power and performance in the Vietnam banking industry. In addition, this is the first study to build financial trajectories of the Vietnam banking industry. Second, no study has tested differences in the bank market power and performance of various Vietnam bank groups (super-class bank groups) identified using the SOM trajectory technique. This study fills this knowledge gap by comparing the market power (using the Lerner index) and performance (using the cost-efficiency score) of the super-class bank groups in Vietnam.
The SOM Technique for Tracking Financial Trajectories
Du Jardin and Séverin (2012) note that the major shortcoming of snapshot forecasting techniques (such as linear, nonlinear or classification regression) is that their time horizon is very short: it does not exceed one year. This presents a major problem when evaluation is limited to a single year but debt is repaid over a much longer period, such as several years (Du Jardin and Séverin 2011). The reason is that borrower default or bankruptcy may transpire more than a year after evaluation. For example, company executives might ask investors to give them more time to improve their financial health but, after the grace period, may find it impossible to recover, leading to bankruptcy. When forecasting beyond one year, the accuracy of snapshot forecasting techniques decreases dramatically. For example, Altman's model is 95% accurate in one-year forecasting, but accuracy drops to 48% for three-year forecasts (Du Jardin and Séverin 2011). To overcome these limitations, researchers developed a combination technique that uses SOM (or Kohonen map) to track a company's financial trajectory, also known as the SOM trajectory technique (Du Jardin and Séverin 2012). The primary purpose of SOM here is to improve the accuracy of forecasting over a specific period, not just the accuracy of snapshot forecasting (Du Jardin and Séverin 2011). There are several advantages associated with using the SOM technique to build a financial trajectory. First, the SOM trajectory is a user-friendly visualisation technique for exploring financial reports (Chen 2012). The SOM technique enables researchers and executives to observe a company's changing position on a trajectory; in short, the major difference is that SOM allows a dynamic view of changing financial status rather than snapshot forecasting (Chen 2012; Du Jardin and Séverin 2011). Second, in the medium term, the SOM trajectory method has the advantage that it can forecast and detect financial threats (Du Jardin and Séverin 2011). This method enables executives to measure their company's financial health and take immediate corrective actions. The technique also assists experts in identifying downward financial trends over time, and enables them to anticipate the risk of bankruptcy. No studies have used the SOM trajectory technique to build the financial trajectories of Vietnam banks. This study is the first to examine the financial status of Vietnam's banks using financial trajectory patterns. Moreover, this study is the first to use the SOM trajectory technique to categorise Vietnam banks into super-class groups. These groups can then be used to benchmark market power and performance between super-class bank groups.
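The core of the trajectory idea can be stated in a few lines of code: given an already-trained map (a set of codebook vectors), each year's financial vector is assigned to its best-matching unit, and the trajectory is simply that sequence of map positions. The sketch below is a schematic with invented data and names, not a full implementation of the procedure.

```python
# Schematic of a SOM financial trajectory: the sequence of best-matching
# units (BMUs) for one company's yearly feature vectors. Data are invented.
import numpy as np

codebook = np.random.default_rng(3).normal(size=(16, 6))   # trained 4x4 map, 6 ratios
grid_xy = [(i % 4, i // 4) for i in range(16)]             # neuron index -> 2D position

def bmu(x: np.ndarray) -> int:
    """Index of the codebook vector closest (Euclidean) to feature vector x."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

yearly_vectors = np.random.default_rng(4).normal(size=(10, 6))  # 2008-2017 stand-ins
trajectory = [grid_xy[bmu(x)] for x in yearly_vectors]
print(trajectory)   # e.g. [(2, 1), (2, 1), (3, 0), ...] -- the path drawn on the map
```

A bank that stays in one region of the map over the decade has a stable financial position; a drifting path flags a changing one, which is the diagnostic use of the trajectory described above.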
Measuring Bank Market Power
In the literature there are two common ways of quantifying bank competition levels: (1) the structural and (2) the non-structural approaches (Ab-Rahim 2017; Liu et al. 2012). The structural model is based on the "structure-conduct-performance" and "efficient structure" theories (Ab-Rahim 2017). Both theories argue that market concentration determines the level of competition in the market (Adjei-Frimpong et al. 2016). These theories assume that banks in markets composed of a few large players can charge a higher price for their financial products than banks in markets with many players (Liu et al. 2012). The structural approach often uses the Herfindahl-Hirschman index (HHI) or the concentration (CRk) score to assess the level of competition (Nguyen and Nghiem 2018; Nguyen et al. 2016b). The non-structural model essentially depends on the New Empirical Industrial Organisation (NEIO) theory to evaluate levels of bank competition in emerging markets (Nguyen and Nghiem 2018). The non-structural approach assumes that entry/exit barriers and the behaviour of existing players (or banks) affect the competitive environment (Liu et al. 2012). Under this approach, indicators used to determine competition levels include the H-statistic, the Lerner index and the Boone index (Nguyen and Nghiem 2018; Nguyen et al. 2016b). The major disadvantage of the H-statistic and the Boone index is that neither indicator measures market power continuously (Nguyen and Nghiem 2018). This is because estimating these indicators requires data from the whole study period to enable identification of different types of competition (such as monopolistic competition, a monopoly or perfect competition) (Nguyen and Nghiem 2018). In contrast, the Lerner index indicates market power at a continuous, individual level (Adjei-Frimpong et al. 2016; Nguyen and Nghiem 2018). The Lerner index represents market power, whereby higher market power suggests lower levels of competition (Nguyen et al. 2016b). The Lerner index has the following advantages:
• Compared with other concentration measures that evaluate competition at the industry level, the Lerner index can be used to appraise each bank and provides a continuous measurement for each year (Abel and Roux 2017; Nguyen and Nghiem 2018). Hence, the results can be used as a response variable in a subsequent analysis to evaluate the determinants of bank market power (Delis and Pagoulatos 2009).
• The indicator also reflects an individual bank's profitability, because it is measured by the 'output price-cost margin' divided by the output price. The 'output price-cost margin' can be used to assess profitability. Hence, higher bank market power implies higher profitability.
As a result of these advantages, scholars have used the Lerner index to quantify the market power of Vietnam's banks (see, for example, Nguyen 2018; Nguyen and Nghiem 2018; Nguyen et al. 2016b). While Nguyen and Nghiem's (2018) study is the only one that tests differences in market power between commercial state and joint stock Vietnam banks, their results show that the difference is insignificant. No study has evaluated the market power of various bank groups (the super-class bank groups) identified using the SOM trajectory technique. This study thus compares the market power of the super-class bank groups in Vietnam using the Lerner index.
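To make the Lerner calculation concrete, the minimal sketch below computes bank-year Lerner scores from hypothetical price and marginal-cost series; the bank names and figures are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the Lerner index L = (P - MC) / P for a panel of banks.
# All numbers below are hypothetical and for illustration only.

def lerner(price: float, marginal_cost: float) -> float:
    """Lerner index for one bank-year: (P - MC) / P."""
    return (price - marginal_cost) / price

# price = total revenue / total assets; MC comes from a translog cost function
panel = {
    ("Bank_A", 2016): (0.085, 0.062),
    ("Bank_A", 2017): (0.081, 0.060),
    ("Bank_B", 2016): (0.079, 0.071),
}

for (bank, year), (p, mc) in panel.items():
    print(f"{bank} {year}: Lerner = {lerner(p, mc):.3f}")
```

A score near 1 would flag monopoly-like pricing power and a score near 0 near-perfect competition, mirroring the interpretation given above.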
Measuring Bank Performance
Bank performance is of interest to bank executives and policymakers, as well as academic researchers (Kočišová 2014; Nguyen et al. 2016a). This is because the banking industry has a central role in national development (Nguyen et al. 2016a). Efficient banks stabilise the banking industry and a nation's monetary system (Kočišová 2014). Moreover, bank executives are always interested in benchmarking or comparing their bank's performance with the top operating bank to improve their operations (Nguyen et al. 2016a). Cost-efficiency (CE) is commonly used to quantify bank performance. The CE method captures how well banks manage their costs compared with the optimal costs achieved by best-practice banks (Kočišová 2014). SFA and DEA are two common methods applied to quantify bank performance (Ab-Rahim 2017; Kočišová 2014). SFA is a parametric technique that estimates a function of expense, income or production (Ab-Rahim 2017). The function defines the relationships between inputs, outputs and environmental factors. In contrast, DEA is a non-parametric technique that uses a linear programme to estimate bank efficiency (Ab-Rahim 2017; Barth et al. 2013). The DEA technique has several advantages over the SFA method:
• In comparison to SFA, DEA performs well with small sample sizes. This is because the SFA method is a statistical method that needs a large dataset to produce unbiased coefficient estimates (Adjei-Frimpong et al. 2014; Gardener et al. 2011).
• The SFA method uses a mathematical formula to measure efficiency. The accuracy of the efficiency scores depends on the suitability of the chosen functional form (Barth et al. 2013). In contrast, the DEA technique uses the linear programming approach to estimate efficiency scores. In short, researchers using the DEA technique do not have to choose a functional form (Barth et al. 2013).
Researchers have measured bank performance in Vietnam using the DEA method (see, for example, Gardener et al. 2011; Nguyen et al. 2014, 2016a). Other scholars have used the SFA method (see, for example, Nguyen and Nghiem 2018; Nguyen et al. 2016b). This study uses the DEA method to measure bank performance. The DEA method uses a linear programme to measure efficiency scores for single, individual observations (Barth et al. 2013). Thus, DEA can identify efficient units, or best-practice units. The method also identifies inefficient units and assists in improving them (Adjei-Frimpong et al. 2014). In addition, our dataset consists of 258 observations, a relatively small dataset. The SFA method is inappropriate here because it can generate biased coefficient estimates with a small dataset. Therefore, DEA is the best choice to avoid the aforementioned problems associated with the SFA method (Adjei-Frimpong et al. 2014; Nguyen et al. 2014). The non-parametric DEA method operates under the constant returns to scale (CRS) assumption (Nguyen et al. 2014). However, the DEA method has been modified under the variable returns to scale (VRS) assumption (Kočišová 2014). The VRS assumption allows for banks working at a less than optimal/efficient scale because of an imperfectly competitive environment, economic constraints and a strict regulatory system (Adjei-Frimpong et al. 2014; Nguyen et al. 2014). Vu and Turnell (2010) note that Vietnam commercial state and joint stock banks need to reduce their costs to achieve optimal size and become more cost-efficient. Nguyen et al. (2014) contend that some Vietnam banks may operate at an inefficient scale.
Thus, the CRS assumption is not a suitable option for measuring the efficiency of Vietnam banks. To avoid the impact of suboptimal scales in several banks, we use the VRS assumption. No study has compared the cost-efficiency scores of Vietnam banks within super-class groups (as categorised by the SOM trajectory technique). This study fills this knowledge gap, using the paired t-test to compare efficiency scores (measured by DEA) within the super-class bank groups.
Data Information
As Nadeem et al. (2017) argue, panel data covering fewer than 10 years may generate biased results. This is because statistical conclusions cannot be drawn reliably if the data study period is too short (Vu and Turnell 2010). For this study, our bank data are from 2008 to 2017 to ensure a 10-year study period. Bank financial data were obtained from the Bloomberg database and bank websites. Macroeconomic indicators, such as GDP and the inflation rate, were sourced from the World Bank database. Five of the listed banks did not provide financial data during the period: PVcom bank, Seabank, Bao Viet bank, Co-op Bank and Vietcapital bank. In addition, several banks had missing data. SCB did not provide data in 2011. The Bac-A bank had no financial data from 2008 to 2010. Vietbank only had data for 2016 and 2017. Thus, there were only 27 banks with unbalanced panel data (258 observations) over the study period. The 27 banks represent approximately 85% of the Vietnam banking industry. This study excluded data from the nine foreign-owned banks (HSBC, Standard Chartered, ANZ, Shinhan Bank, Public Bank Bhd, Hong Leong Bank, Woori, UOB and CIMB).
Developing Financial Trajectories Using the SOM Technique
This section describes the SOM technique used to track bank financial trajectory patterns. The unsupervised SOM technique is a feed-forward artificial neural network that includes input and output layers (Samarasinghe 2006). The output neural layer is usually a low-dimensional grid; that is, a one- or two-dimensional grid. Each unit of the input layer is linked to all neurons in the output layer by a weight. Appendix B.1 shows the training process for the SOM with n neurons (input layer) and m neurons (output layer). This study used the same methodology as Chen's (2012) study to investigate Vietnam banks' financial trajectory patterns. Building the trajectory included static and dynamic phases, as follows. In the static phase, Vietnam banks' financial statement data from every year (2008-2017) were screened using the SOM technique. After screening, each bank was located within specific neurons in the 2D SOM map. Each year was given a different 2D SOM map to represent a bank's location. Cluster analysis was applied to optimise groups of neurons. In the dynamic phase, the ten 2D SOM maps, one from every year, were overlapped into one 2D map. Bank locations revealed changes over time. For example, bank A, which was in group 1 in year 1, moved to group 2 in year 2 and stayed in group 2 in year 3. Therefore, bank A's trajectory was determined by connecting the bank's locations from year 1 to year 3. By observing the trajectory of every bank in the banking industry, we identified trajectory patterns and categorised the banks into super-class groups (Chen 2012; Du Jardin and Séverin 2011).
Measuring Vietnam Banks' Market Power
Our study used the non-structural Lerner index to quantify the market power of Vietnam's domestic banks.
The Lerner index measures the gap between the price the bank charges (interest rate and fees) and its marginal cost (MC) of total assets, relative to price. This relationship can be expressed mathematically using Equation (1) (Demirguc-Kunt and Martínez-Pería 2010):

Lerner_it = (P_it − MC_it) / P_it, (1)

where P_it is the output price of the i-th bank at time t, computed as total revenue (non-interest income plus interest income) divided by total asset value; and MC_it is the marginal cost of the i-th bank at time t, determined using the derivative of the translog cost function (see Appendix B.2). The Lerner index values range from −1 to 1. If a bank's Lerner index is closer to 1, this indicates that the bank has greater market power and is considered a monopolist bank. In contrast, if the Lerner index is closer to zero, this implies greater competition; a value of 0 indicates perfect competition. When the Lerner index is negative, this indicates that a bank has reduced its prices to below cost due to external influences, such as economic crises (Abel and Roux 2017). After the market power of each Vietnam bank (represented by the Lerner index score) was determined at time t using Equation (1), the paired t-test was employed to compare the market power of the super-class bank groups.
Measuring Vietnam Bank Performance
Our study used the DEA technique with the VRS assumption to compute bank performance. The CE score was used to determine the performance of each individual bank. CE scores were calculated using a linear programme (see Appendix B.3). These scores range from 0 to 1; banks with higher CE scores are more cost-efficient. After the domestic bank performance score was calculated for each bank at time t, the paired t-test was used to compare the CE scores within the super-class groups of Vietnam banks.
Using the SOM Technique to Detect Banks' Financial Locations in 2D SOM Maps
The first step in determining a bank's trajectory involves locating it within a 2-dimensional (2D) SOM map, based on the bank's financial information. Appendix A Table A2 summarises Vietnam domestic banks' financial data over the study period. The R program (Version 3.4.2) with Wehrens and Buydens' (2007) "kohonen" software package was used to perform the SOM technique (R Core Team 2017). All banks' financial data in the balance sheet report are used as input variables for the SOM technique. These variables were selected based on prior studies (Chen 2012; Chen et al. 2013; Du Jardin and Séverin 2011, 2012). A 4 × 4 SOM grid was selected to capture the banks' locations. The 'hexagonal' structure was chosen because it results in more neighbouring connections (Samarasinghe 2006). The SOM was trained repeatedly with 100 iterations and a learning rate between 0.01 and 0.02. The Euclidean distance was used to decide the winning neuron, and the Gaussian neighbourhood function was used to alter weights smoothly across distance. This procedure was repeated multiple times until the SOM was thoroughly trained. The training processes (for the period of 2008 to 2017) showed that the mean distance reached a maximum and then dropped to a minimum (see Appendix A Figure A1). These outcomes indicated that the 4 × 4 neuron grid fitted the data closely. As a result, the final SOM maps show the locations of Vietnam's domestic banks in the 2D maps (see Figure 1). There were some empty neurons and some neurons that contained many banks. In short, the SOM results provide a clear picture of which banks are categorised into each specific neuron.
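The study itself uses the R "kohonen" package; purely as an illustration of the training loop just described, the short Python sketch below hand-rolls a 4 × 4 SOM with a Gaussian neighbourhood and a decaying learning rate. The rectangular grid geometry (standing in for the hexagonal layout), the synthetic data and all parameter values are simplifying assumptions, not the study's actual configuration.

```python
# Hand-rolled SOM sketch: 4x4 map, Euclidean winner, Gaussian neighbourhood.
# Data and parameters are synthetic stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(258, 6))          # stand-in for 258 bank-year records (6 ratios)

grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)  # 4x4 map
W = rng.normal(size=(16, X.shape[1]))  # one codebook vector per output neuron

n_iter, lr0, lr1, sigma0 = 100, 0.02, 0.01, 2.0
for t in range(n_iter):
    frac = t / n_iter
    lr = lr0 + (lr1 - lr0) * frac          # learning rate decays from 0.02 to 0.01
    sigma = sigma0 * (1.0 - 0.9 * frac)    # shrinking Gaussian neighbourhood radius
    for x in rng.permutation(X):
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # Euclidean winner
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)         # squared map distances
        h = np.exp(-d2 / (2.0 * sigma**2))                 # Gaussian neighbourhood
        W += lr * h[:, None] * (x - W)                     # pull codebooks toward x

# Occupancy of the 16 neurons: some end up empty, some crowded, as in Figure 1.
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
print(np.bincount(bmus, minlength=16))
```

Training one such map per year and overlaying the resulting bank positions is exactly the static-then-dynamic procedure described earlier.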
Sixteen neurons were categorised into six groups, shown in orange, purple, red, brown, green and blue (see Figure 1). The mean within-group sum of squares (WSS) was used to select the optimum number of clusters. The optimum number was chosen as the point with the largest change in WSS value (Waidyarathne and Samarasinghe 2014). The WSS value dropped dramatically when the number of clusters increased from one to two (see Appendix A Figure A2). This suggested that the 16 neurons should be categorised into two clusters. The dendrograms also suggested that groups two to six should be merged into one cluster (see Appendix A Figure A3). As a consequence, groups two to six were clustered to create a super-class group of banks (named group two banks). Group one is a super-class bank group in its own right (named group one). In short, super-class bank groups one and two represent two clusters of the 16 neurons; in total, these cover 27 banks. There were only commercial joint stock banks in group one. Group two contained four commercial state banks and several joint stock banks. The mean total assets over the 10 years was 38,053 billion VND for group one banks and 280,194 billion VND for group two banks (see Appendix A Table A3). In other words, group two banks were larger (had greater total assets) than group one banks.
Dynamic Evaluation Phase
In the second step, the ten 2D mapping plots (see Figure 1) for the period 2008 to 2017 were overlapped on each other. Each bank was observed and connected from neuron to neuron, or group to group, to find its specific trajectory. The trajectory pattern results are shown in Figure 2. Table 1 shows how Vietnam domestic banks are located within super-class groups one and two for the period of 2008 to 2017. Some banks remained in the same group over the entire study period. Ten banks maintained their position in group one: BacAbank, ABBank, NVB, OCB, Viet A bank, Nam A Bank, Vietbank, KienLongBank, PGbank and Saigonbank. Similarly, nine banks remained in group two: BIDV, Agribank, Vietinbank, VCB, SCB, Sacombank, Mbbank, Techcombank and ACB (see Figure 2 and Table 1).
Measuring Market Power for Group One and Two Banks
Our study used the non-structural Lerner index to assess the market power of Vietnam domestic banks. Appendix A Table A4 lists the financial information used to quantify the Lerner index for the period of 2008 to 2017. Data were analysed using STATA software (version R15) (StataCorp 2017). The Lerner index results are displayed in Table 2.
The average Lerner index score of Vietnam domestic banks for the period of 2008 to 2017 was 0.210 (see Table 2). Figure 3 shows the Lerner index scores for both group one and two banks for the period of 2008 to 2017. The average Lerner score of group two banks was higher than both the average Lerner score of group one banks and the average for all banks over this period (see Figure 3a). The average Lerner index for group two banks was 0.252, which was higher than that of group one banks (0.162) by approximately 56% (see Figure 3b). This outcome implies that group two banks are stronger (have greater market power) than group one banks. The paired t-test was used to examine the statistical significance of the difference in the Lerner indexes between group one and two banks. The null hypothesis (H0) is that the mean difference in the Lerner indexes between group one and two banks is zero. The t-test result was statistically significant at the 1% level (t-value equals −6.83). This result indicates a rejection of the null hypothesis. In other words, there was a statistically significant difference in bank market power between group one and two banks. Our result contradicts Nguyen and Nghiem's (2018) study, which showed that the difference in market power between commercial state and joint stock banks was not statistically significant. The contradictory results imply that the grouping approach that categorises Vietnam banks into state and joint stock banks may be inappropriate when testing differences in market power between these banks. In contrast, the SOM technique, which uses an unsupervised algorithm, can better capture differences in bank market power and categorise Vietnam domestic banks into two super-class groups: group one and two. Group two banks (with a mean Lerner index value of 0.252) have greater market power than group one banks (with a value of 0.162). Group two banks also have greater total assets than group one banks (see Appendix A Table A3).
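As an illustration of the paired comparison used here (and again for the efficiency comparison below), the sketch runs a paired t-test on two matched series of group-level scores; the arrays are invented placeholders, not the study's Lerner or CE data.

```python
# Minimal sketch of the paired t-test used to compare matched yearly group scores.
# The yearly score arrays below are hypothetical placeholders.
from scipy import stats

group_one = [0.15, 0.17, 0.16, 0.14, 0.18, 0.16, 0.15, 0.17, 0.16, 0.18]  # yearly means
group_two = [0.24, 0.26, 0.25, 0.23, 0.27, 0.25, 0.26, 0.24, 0.25, 0.27]

t_stat, p_value = stats.ttest_rel(group_one, group_two)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.01 would reject H0 (zero mean difference) at the 1% level,
# matching the interpretation applied to the Lerner comparison above.
```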
Measuring the Efficiency of Group One and Two Banks
The non-parametric DEA technique under the VRS assumption was used to compute bank performance (represented by CE scores). Appendix A Table A5 provides the statistical data used to compute CE scores for the period of 2008 to 2017. The dataset was estimated using the R program (version 3.4.2) with the "Benchmarking" package (R Core Team 2017); Bogetoft and Otto's (2018) "Benchmarking" software package was employed to estimate CE scores using the DEA method. Table 3 shows the CE scores of the banks. The mean CE score was 0.855, which indicates that Vietnam domestic banks could reduce their costs by 14.5% (from 100% to 85.5%) while maintaining the same outputs. Figure 4 shows the CE scores for the two groups of banks. The mean CE scores for both groups of banks were quite similar (see Figure 4a). The average CE score for group two banks was 0.858: this is about 1% higher than the average CE score of group one banks (0.850) (see Figure 4b). The paired t-test was used to test the difference between the CE scores of the two groups of banks. The null hypothesis (H0) was that the difference in CE scores between group one and two banks equals zero. The t-value was insignificant at all conventional levels; the p-value of the paired CE test was 0.5817. These results fail to reject the null hypothesis. In short, the differences in CE scores between group one and two banks were insignificant. These findings echo previous studies (Nguyen and Nghiem 2018; Vu and Turnell 2010), whose authors found no significant differences between the CE scores of commercial state-owned and joint stock banks in Vietnam.
Conclusions and Policy Implications
This study used the SOM technique to categorise Vietnam domestic banks into two super-class groups (one and two). While differences in the Lerner scores (which represent market power) between the two super-class bank groups were statistically significant at the 1% level, the CE scores (which represent performance) were statistically indistinguishable.
The difference in market power between group one and two banks contradicts Nguyen and Nghiem's (2018) results. This study shows that the SOM technique (with an unsupervised algorithm) can better capture differences in bank market power and thus can be used to divide Vietnam domestic banks into two groups, consisting of weak banks (group one) and strong banks (group two). Using the SOM technique provides academics with a new approach based on an unsupervised algorithm. This is different from previous studies, which have divided domestic Vietnam banks into commercial state and joint stock banks (Nguyen and Nghiem 2018; Vu and Turnell 2010). Hence, this study has argued that two groups of banks with different levels of market power exist side-by-side in the Vietnam banking industry. The group of strong banks tends towards monopoly. The existence of these two groups of banks (weak and strong banks) indicates that the competitive domestic banking environment in Vietnam may be at risk. The reason is that group two banks may be more profitable due to greater market power, whereas group one banks may struggle to cut costs to remain viable. Group two banks (the larger banks) occupy the dominant position in this environment and will continue to expand (Tabak et al. 2012; Wang 2015). In such an environment, group two banks may end up acquiring group one banks. This explains why the number of banks fell from 43 to 32 over the study period of 2008 to 2017. Policymakers and regulators must take this phenomenon (group two banks acquiring group one banks) into consideration when issuing policies, in order to maintain an optimal number of banks and ensure the stability and competitiveness of the Vietnam banking system. As Le (2014) notes, the ideal number of banks to achieve stability in the Vietnam banking system is between 15 and 17. In 2017, there were 32 banks; to meet the ideal number, half would need to be merged. Our SOM results showed that 70% of Vietnam banks (19 of 27 banks) maintained their positions in either super-class group one or group two. This result indicates that banks tend to maintain their financial position in the industry, and that bank market power persists over time. This indicates the existence of some rigidity in the banking industry, which may make it difficult for weak banks to compete. Future research could consider whether bank market power persists over a long period of time. These findings will help policymakers and regulators avoid rigidity and ensure an efficient and competitive banking environment.
Appendix A
The MC_it of the i-th bank (at time t) is estimated using the first derivative of Equation (A3), given as Equation (A4). The coefficients a_0, b_1, b_2, b_3, b_4 and d_3 are estimated from Equation (A3) and plugged into Equation (A4) to compute MC_it of the i-th bank at time t.
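Equations (A3) and (A4) are not reproduced here, but the step they describe can be sketched generically: estimate a translog cost function, differentiate with respect to output, and evaluate at each bank-year. The specification below (one output, simplified interaction terms) and every coefficient name are assumptions for illustration; the study's exact translog form may differ.

```python
# Hedged sketch: marginal cost from a generic translog cost function,
# MC = (TC/Q) * dlnTC/dlnQ. The functional form is an illustrative assumption.
import numpy as np

def marginal_cost(tc, q, w1, w2, coef):
    """coef holds translog coefficient estimates (here named b1, b2, b3, b4)."""
    dlnTC_dlnQ = (coef["b1"]
                  + coef["b2"] * np.log(q)      # output squared term
                  + coef["b3"] * np.log(w1)     # output x input-price-1 interaction
                  + coef["b4"] * np.log(w2))    # output x input-price-2 interaction
    return (tc / q) * dlnTC_dlnQ

# Hypothetical bank-year observation and coefficient estimates:
coef = {"b1": 0.62, "b2": 0.03, "b3": 0.01, "b4": 0.02}
mc = marginal_cost(tc=5.2e3, q=8.0e4, w1=0.05, w2=0.012, coef=coef)
print(f"MC = {mc:.4f}")  # this MC then feeds the Lerner index (P - MC)/P
```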
2021-10-16T15:08:10.615Z
2021-10-13T00:00:00.000
{ "year": 2021, "sha1": "e6399a3cab3bc73fa12b3d86e5a5e9920f2ff824", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1911-8074/14/10/485/pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "18814f2a6a32b104cf57ee272da177a88611a9d2", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
260232780
pes2o/s2orc
v3-fos-license
Effect of resonant magnetic perturbations including toroidal sidebands on magnetic footprints and fast ion losses in HL-2M
Externally applied resonant magnetic perturbations (RMPs), generated by magnetic coils located outside the plasma (referred to as RMP coils), provide an effective way to control the edge localized mode (ELM) in tokamak devices. Due to the discrete nature of the toroidal distribution of these window-frame coils, toroidal sidebands always exist together with the fundamental harmonics designed for ELM control. In this work, the MARS-F code (Liu et al 2000 Phys. Plasmas 7 3681) is applied to investigate the detailed features of the RMP spectra considering both the dominant harmonic (n = 2) and the associated sideband (n = 6), and the impact of the combined fields on magnetic footprints as well as on the fast ion losses for a reference double-null scenario in the HL-2M device. It is found that the sum of the n = 2 and n = 6 RMP fields splits the footprint and widens the footprint area, as compared to the single-n (n = 2) harmonic case. The resistive plasma response breaks the up–down symmetry of the footprint pattern on the outer divertor plates, which is otherwise symmetric assuming vacuum RMP fields. Considering fast ion losses, a threshold value exists for the initially launched radial position of test particles, as well as for the RMP coil current, before the loss occurs. When the threshold criterion is satisfied, the combined n = 2 and n = 6 RMP fields enhance the fast ion loss rate by ∼20%, as compared to that of the n = 2 component alone. These results illustrate the important role of the sidebands of RMP fields on the magnetic footprints and fast ion losses in tokamak plasmas.
Introduction
Externally applied magnetic field perturbation was proposed to control the heat flux on plasma-facing components (e.g. the divertor plate) [1-4]. Resonant magnetic perturbation (RMP), produced by externally applied magnetic coils (referred to as RMP coils), is a mature method to control the type-I edge localized mode (ELM), with the purpose of reducing the transient heat flux on the divertor plate in tokamak devices. The efficiency of the RMP technique for suppressing and/or mitigating ELMs has been demonstrated in many devices, such as DIII-D [5], JET [6], MAST [7], KSTAR [8], EAST [9], ASDEX Upgrade [10] and HL-2A [11]. The RMP coil efficiency depends on both the amplitude and spectrum of field perturbations, which are mainly determined by the plasma equilibrium parameters and by the RMP coil configuration [12-22]. The RMP technique will be applied to control ELMs in the International Thermonuclear Experimental Reactor (ITER) [23]. For a given equilibrium with fixed RMP coil geometry parameters, a significant aspect to consider when optimizing the RMP field spectrum is the toroidal phasing of the RMP coil current. The resistive plasma response usually plays an important role in optimizing the RMP field spectrum [24-26]. In addition, due to the discrete nature of the toroidal distribution of window-frame RMP coils, toroidal sidebands consistently coexist with the dominant harmonic designed for ELM control [27,28]. It has been demonstrated that the sidebands of perturbation play a part in controlling plasma behaviors such as magnetohydrodynamic (MHD) instabilities [29].
Hence, it is expected that RMP sidebands can also impact the magnetic footprints on the divertor target plate and/or fast ion redistribution/losses [27,30-39] during ELM control. Precise prediction of magnetic footprints is essential for controlling the heat load pattern on the divertor target surface [40]. The magnetic footprint pattern on the divertor plate surface is easily affected by 3D field perturbations [41-48], in which the plasma response plays an essential role. In addition to the heat flux induced by thermal particles, the losses of fast ions can also induce a heat load on the plasma-facing materials [33,49,50]. Furthermore, loss of fast ions degrades the plasma performance by decreasing the plasma heating power or by affecting MHD instabilities (e.g. the resistive wall mode [51]). Reduction of fast ion losses during the application of an RMP field to control ELMs is crucial for maintaining the high performance of tokamak plasmas. Understanding fast ion loss mechanisms related to realistic RMP fields is important for simultaneously addressing the fast ion loss issue and the ELM control problem. However, systematic investigations of the RMP field features taking into account toroidal sidebands, and of their effect on the magnetic footprint and fast ion losses, are relatively scarce. Such investigations are required for a deeper understanding, interpretation and optimization of experiments on current devices, which is the motivation of this study. In this work, the MARS-F [52] code is applied to study the features of the RMP fields including both the dominant (i.e. n = 2) and sideband (i.e. n = 6) harmonics, and the effect of RMP fields on magnetic footprints and fast ion loss. The latter is carried out by utilizing the REORBIT module [53,54] developed within MARS-F. MARS-F, using the linearized resistive single-fluid MHD model in full toroidal geometry, has predicted plasma response fields that quantitatively agree with experimental measurements [55,56]. The optimized coil phasing for suppressing or mitigating ELMs obtained from MARS-F computations is consistent with the experimental values for many devices, such as MAST [24], ASDEX Upgrade [57], DIII-D [25], EAST [58] and HL-2A [28]. The remainder of the paper is organized as follows. Section 2 describes the formulation of the employed modeling approach. A brief introduction to the HL-2M tokamak is presented in section 3. Section 4 presents the reference equilibrium and the RMP coil configuration in the HL-2M device. Section 5 reports features of the RMP fields for the single n = 2 case and for the case of combined n = 2 and n = 6 spectra. The effect of RMPs on magnetic footprints is discussed in section 6. The influence of RMPs on fast ion losses is described in section 7. The numerical sensitivity test is reported in section 8. Section 9 draws the conclusion and discussion.
Formulation
The MARS-F code [52] is applied to compute the total magnetic field perturbations produced by the external RMP coils by solving the following linearized resistive single-fluid MHD equations in Eulerian form for the plasma region:
inΩξ = v + (ξ·∇Ω)Rφ̂,
iρnΩv = −∇P₁ + J₁×B + J×B₁ − ρ[2ΩẐ×v + (v·∇Ω)Rφ̂] − ρκ|k_∥|v_th,i[v + (ξ·∇)V₀]_∥,
inΩB₁ = ∇×(v×B) + (B₁·∇Ω)Rφ̂ − ∇×(ηJ₁),
inΩP₁ = −v·∇P − ΓP∇·v,
J₁ = ∇×B₁,
where k_∥ = (n − m/q)/R is the parallel wave number (with q being the safety factor), η is the plasma resistivity, Γ = 5/3 is the adiabatic index, V₀ = RΩφ̂ is the equilibrium toroidal flow, and the subscript ∥ denotes the component parallel to the equilibrium field. For the vacuum regions and RMP coils, the curl-/divergence-free equations and electromagnetic equations are self-consistently numerically resolved, respectively [15]. Here, R is the plasma major radius. φ̂ and Ẑ are the unit vectors along the toroidal angle and the vertical direction of the cylindrical coordinates (R, ϕ, Z), respectively.
The variables B₁, J₁ and P₁ represent the perturbations of the equilibrium magnetic field B, plasma current J and pressure P, respectively. ρ and Ω are the plasma mass density and toroidal rotation frequency, respectively. The plasma displacement and the perturbed velocity are denoted by ξ and v, respectively. κ is a coefficient describing the parallel sound wave damping strength [59,60]. n and m are the toroidal and poloidal mode numbers of the perturbation, respectively. v_th,i is the thermal velocity of bulk ions. Parallel sound wave damping plays an important role in high-pressure plasmas (which is the case in our study). A large coefficient (κ = 1.5, as assumed in this work) allows proper capture of this damping physics. For plasmas at very low pressures, however, the parallel sound wave damping becomes less important, and the plasma response in this case is insensitive to the choice of the damping coefficient κ. In addition, the REORBIT module was recently implemented in the MARS-F code [53] to study the influence of field perturbations on magnetic footprints [22] on the divertor target surface and on fast particle confinement/loss [21,37,61]. REORBIT is employed to trace the field lines and the guiding-center drift orbit trajectories of fast ions in this work. For fast ion orbit tracing, the REORBIT module time-advances the guiding-center drift orbit equations for test particles directly in the MARS-F curvilinear coordinates. The magnetic perturbation fields are employed in the internal representation (the raw format) as computed by MARS-F, without mapping to other coordinate systems, thus allowing high fidelity in treating the 3D perturbation by REORBIT. The module also allows inclusion of particle collision and electric field effects, but these are not included for fast ion tracing in this study, meaning that the particle energy and magnetic moment are both conserved during the tracing. Particle loss is therefore induced purely by the 3D RMP fields.
The HL-2M tokamak
HL-2M is a newly constructed copper-conductor tokamak device at the Southwestern Institute of Physics, with a plasma current exceeding 1 MA in 2022. The main goal of HL-2M is to achieve high-performance plasmas, thus providing support for the critical requirements of ITER operation and for the physics design and optimization of future reactor-scale devices. These include, e.g., the plasma physics related to fast ions, high-beta high-bootstrap-current operations and flexible divertor configurations (snowflake, tripod). The main parameters of HL-2M are presented in table 1. The designed maximum plasma current is 3 MA and the toroidal magnetic field is up to 3 T [62]. The present design value of the total auxiliary heating power is 27 MW (with future upgrades expected), including 15 MW neutral beam injection (NBI), 8 MW electron cyclotron resonance heating and 4 MW lower hybrid current drive [63,64]. A brief summary of the first plasma experimental results and recent modeling progress for HL-2M was reported in [64]. For the NBI system, the three beam lines are designed with 5 MW power per line, with the birth energy of the neutral beam being about 80 keV. Two lines are in the co-current direction, while the third one is in the counter-current direction [65].
Equilibrium and RMP coil configuration
Since HL-2M also aims to find power exhaust solutions for future reactors, the double-null divertor configuration is one of the primary operational scenarios.
This motivates our choice of the double-null shape in this study. Furthermore, the kinetic profiles for the reference equilibrium are carefully designed utilizing the OMFIT code [66], in order to avoid internal kinks and to maximize the plasma current with the purpose of raising the plasma density limit. In addition, the toroidal magnetic field B_0 = 1.8 T is adopted with the purpose of providing support for future steady-state scenario discharges in HL-2M [63]. The radial profiles of the plasma pressure, density, toroidal rotation frequency and safety factor for the reference equilibrium are plotted in figure 1. The safety factor values at the magnetic axis and plasma edge are q_0 = 1.27 and q_a = 3.13, respectively, which exclude the unstable internal kink and edge-peeling instabilities. The plasma current is I_p = 1.53 MA. The normalized plasma pressure is β_N ≡ β(%)/[I_p(MA)/a(m)B_0(T)] = 2.42, with μ_0 = 4π × 10^-7 H m^-1, minor radius a = 0.65 m, and β = 2μ_0⟨p⟩/B_0^2 being the thermal pressure normalized by the magnetic pressure. Note that β_N = 2.42 is much smaller than the no-wall beta limit (β_N^limit ≃ 3.5) for the n = 2 external kink mode. ⟨A⟩ denotes the volume average of variable A. The on-axis toroidal rotation frequency is Ωτ_A = 0.01, with τ_A being the on-axis Alfvén time. The major radius of the geometric center is R_0 = 1.78 m and the on-axis density is n_e = 6.1 × 10^19 m^-3. In this work, the Spitzer model for plasma resistivity is assumed, with the on-axis Lundquist number S = 1.67 × 10^8 based on the assumed on-axis electron temperature T_e0 = 5.85 keV.

The plasma boundary of the reference equilibrium, the RMP coil location, the surfaces of the limiter and the vacuum vessel wall are all shown in figure 2(a). There are 2 × 8 in-vessel RMP coils in the HL-2M device. Each RMP coil spans about 36° along the toroidal angle ϕ, with two adjacent RMP coils being separated by a 9° gap in ϕ. The total toroidal coverage by the RMP coils is thus about 80%. The current of a single RMP coil, consisting of four turns, is up to 10 kAt. The predominant n = 1, 2 and 4 field perturbations can be produced by the RMP coil system. The relative toroidal phase difference between the upper and lower RMP coil currents, ∆Φ ≡ Φ_U − Φ_L for a given toroidal harmonic n, is the essential degree of freedom for controlling the edge plasma, allowing tuning of the spectrum of field perturbations [12,15,28]. Furthermore, toroidal sideband harmonics are intrinsically produced due to the discrete nature of the coil distribution of the RMP coil system, as shown in figure 2. In the modeling, the exp(−inϕ) variation for the nth component of the coil current is assumed. The amplitude and toroidal phasing of the effective coil current for a specific toroidal harmonic n are obtained by Fourier decomposition of the assumed RMP coil current distribution (figure 2(b)) along the toroidal angle. Due to their likely weak effect on the plasma, sidebands with rather high harmonic numbers (i.e. n ⩾ 14) are not considered here. However, the main sideband (i.e. n = 6) is taken into account in this work, and a coil current of 10 kAt is assumed for all RMP coils. The RMP coil configuration, as shown in figure 2(b), yields ∼10.5 kAt and ∼5.7 kAt effective coil currents for the n = 2 and n = 6 components, respectively. This coil configuration produces a coil phasing ∆Φ close to the optimal one for the dominant n = 2 component, as will be shown later.
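To make the sideband bookkeeping concrete, the effective currents per harmonic can be reproduced with a few lines of analysis. The sketch below is illustrative rather than the MARS-F preprocessing itself: the two-coils-positive/two-coils-negative sign pattern and the Fourier normalization are assumptions, chosen here because they reproduce the quoted n = 2 and n = 6 values.

```python
import numpy as np

# 8 RMP coils per row: each spans 36 deg, separated by 9 deg gaps (45 deg pitch).
N_COILS, SPAN, PITCH = 8, np.deg2rad(36.0), np.deg2rad(45.0)
I0 = 10.0  # coil current in kAt

# Assumed sign pattern producing a dominant n = 2 field: + + - - + + - -
signs = np.array([+1, +1, -1, -1, +1, +1, -1, -1])
centers = PITCH * np.arange(N_COILS)

def effective_current(n):
    """|I_n| from c_n = (1/pi) * integral of I(phi) exp(-i n phi) dphi,
    for a piecewise-constant (top-hat) coil current distribution."""
    tophat = 2.0 * np.sin(n * SPAN / 2.0) / n        # Fourier factor of one coil
    phase_sum = np.sum(signs * np.exp(-1j * n * centers))
    return abs(I0 * tophat * phase_sum / np.pi)

for n in (2, 6, 10, 14):
    print(f"n = {n:2d}: effective current ~ {effective_current(n):.1f} kAt")
# -> ~10.5 kAt (n = 2) and ~5.7 kAt (n = 6), consistent with the text; the
#    n = 10 sideband vanishes for this coil width and n = 14 is already weak.
```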
Figure 3. The computed resonant radial field amplitude per unit RMP coil current for (a) the dominant toroidal mode number (i.e. n = 2) and (b) the secondary harmonic (i.e. n = 6), while scanning the coil phasing ∆Φ_n=2 and ∆Φ_n=6 for the n = 2 and n = 6 components, respectively. The dotted and solid curves denote the vacuum RMP field and the total RMP field including the resistive plasma response, respectively. The red 'O' symbols show the corresponding coil phasing for the n = 2 (i.e. ∆Φ_n=2 = −90°) and n = 6 (i.e. ∆Φ_n=6 = 90°) components for the coil configuration in figure 2(b). The Lundquist number is S ≃ 1.67 × 10^8 on the magnetic axis, with the Spitzer resistivity model being adopted.

n = 2 and n = 6 RMP field perturbations

To characterize the features of the RMP field perturbations, three quantities b^1_res, b^1_m and b_⊥ are defined, which denote the resonant radial field perturbation at the corresponding rational surface, the poloidal Fourier harmonic m of the radial field perturbation for the given toroidal component n, and the field perturbation perpendicular to the flux surface, respectively, with b^1_m ≡ (J B_1 · ∇s)_m and b_⊥ ≡ B_1 · ∇s/|∇s|. J = (∇s · ∇χ × ∇ϕ)^-1 is the Jacobian of the chosen equal-arc flux coordinate system, with s ≡ ψ_p^1/2, χ and ϕ being the radial coordinate, general poloidal angle and toroidal angle, respectively. ψ_p is the normalized poloidal flux. b^1_res is defined as the amplitude of b^1_m at the corresponding rational surface. b^1_res at the last rational surface is often used as an indicator for optimizing the RMP coil configuration (i.e. poloidal location, coil phasing, etc) for ELM control [18].

Figures 3(a) and (b) show the single-fluid resistive plasma response to the externally applied vacuum RMP fields, while scanning the toroidal phasing ∆Φ for the specific toroidal harmonics of n = 2 and n = 6, respectively. In both cases, the plasma response affects the dependence of the pitch-resonant radial field amplitude |b^1_res| at the last rational surface (i.e. q = 3) on ∆Φ and reduces the maximum value of |b^1_res|_q=3, as compared to the vacuum RMP fields. For the RMP coil configuration shown in figure 2, the computed field perturbations |b^1_res|_q=3 for the predominant (n = 2) and secondary (n = 6) harmonics are 0.4 G kAt^-1 and 0.3 G kAt^-1 in the vacuum case, respectively. When the resistive plasma response is taken into account, the amplitude of |b^1_res|_q=3 for the n = 6 harmonic (∼0.14 G kAt^-1) is about 35% of that for the n = 2 component (∼0.4 G kAt^-1). Here, the toroidal phasing differences between the upper and lower rows of the coil current, ∆Φ_n=2 = −90° and ∆Φ_n=6 = 90° (labeled by the red circles in figure 3), are adopted for the n = 2 and n = 6 fields, respectively, based on the RMP coil current configuration shown in figure 2(b). In the following sections, the aforementioned toroidal phasing is fixed. In this work, Gauss (referred to as G) is taken as the unit of the magnetic field perturbation, with 1 G representing 10^-4 T.

The poloidal spectrum including both the resonant and non-resonant parts of the n = 2 RMP is shown in figure 4. The contour plots in figures 4(a) and (b) represent the response field harmonics b^1_m(ψ_p), plotted as a function of the poloidal number m (x-axis) and the radial coordinate ψ_p (y-axis). The symmetry breaking with respect to the m = 0 plane is enhanced by the resistive plasma response (figure 4(b)), as compared to the vacuum case (figure 4(a)).
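As an aside, the harmonics b^1_m plotted in figure 4 are, in essence, a poloidal Fourier transform of the Jacobian-weighted normal field on each flux surface. A minimal numerical sketch of this decomposition is given below; the synthetic test perturbation is invented purely to exercise the transform, with the two amplitudes chosen to echo the values quoted in the text.

```python
import numpy as np

# Poloidal decomposition b1_m = (1/2pi) * integral of J(B1 . grad s) e^{-i m chi} dchi,
# evaluated with an FFT on one flux surface. Synthetic test field: an m = 6
# resonant component plus an m = 10 non-resonant one (amplitudes in G/kAt).
M = 256
chi = 2.0 * np.pi * np.arange(M) / M
jac_b_dot_grads = 1.5 * np.exp(1j * 6 * chi) + 1.86 * np.exp(1j * 10 * chi)

c = np.fft.fft(jac_b_dot_grads) / M   # c[m] approximates b1_m for m = 0..M-1
print(abs(c[6]), abs(c[10]))          # -> 1.5, 1.86 by construction
```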
The plasma response results in an overall field amplification, by a factor of about three when the maximum value of b^1_m in the (ψ_p, m) domain is compared with that for the vacuum case. For instance, the b^1_m amplitude of the non-resonant harmonic with m = 10 is about 1.86 G kAt^-1, which is about three times larger than its vacuum counterpart of 0.67 G kAt^-1. Figure 4(c) clearly shows that the amplitude of the resonant poloidal harmonics is also amplified by the plasma response. For the m = 6 harmonic, the maximum value of b^1_m along the radial coordinate is enhanced to ∼1.5 G kAt^-1 by the plasma response. For the other resonant harmonics (m = 3, 4, 5), amplification of the perturbation amplitude still occurs. The amplification of the amplitude of the poloidal harmonics is mainly induced by the finite plasma pressure [12]. On the other hand, the plasma also screens the pitch-resonant radial magnetic perturbation (i.e. |b^1_res|) at the corresponding rational surfaces, due to the shielding currents resulting from the plasma response. For example, at the rational surfaces q = 1.5, 2 and 2.5, due to the strong screening effect from the toroidal plasma flow in the core and middle regions, |b^1_res| is greatly reduced by the resistive plasma response (figure 4(d)). However, the screening effect of the plasma response on |b^1_res| almost vanishes in the edge region (such as the computed |b^1_res| at the q = 3 rational surface shown in figure 4(d)), due to the combined influence of slow plasma flow, the relatively large plasma resistivity and the strong amplification of b^1_m=3 by the plasma response.

Figure 4. The poloidal spectra b^1_m(ψ_p) of the n = 2 RMP fields for (a) the vacuum radial field and for (b) the total radial field, including the resistive plasma response, plotted along the poloidal mode number m (x-axis) and the radial coordinate ψ_p (y-axis). The '+' symbols denote the resonant harmonics at the locations of the corresponding rational surfaces (i.e. q = m/n = 3/2, 4/2, 5/2, 6/2). (c) Comparison of the radial profiles of the resonant harmonics for the vacuum case (in red) and for the case including the plasma response (in blue). (d) The pitch-resonant radial perturbation at the corresponding rational surfaces denoted by the vertical lines in (c), for the vacuum (in red) and the total (in blue) fields. ∆Φ_n=2 = −90° is adopted here.

The full poloidal spectrum of field perturbations for the n = 6 component is reported in figures 5(a) and (b). The overall amplitude of |b^1_m| in the 2D-spectrum domain is increased from 0.38 G kAt^-1 for the vacuum case to 0.46 G kAt^-1 for the case including the resistive plasma response. The pattern of the poloidal spectrum is not significantly changed by the plasma response, except for the pitch-resonant radial fields at the locations of the corresponding rational surfaces and the radial perturbations of non-resonant harmonics with m ≳ 18. For the resonant harmonics, the maximum values of |b^1_m| along the radial coordinate are slightly amplified by the plasma response (figure 5(c)). The |b^1_m| value of the n = 6 perturbations in the core region is very small, as expected; this is due to the higher toroidal mode number and the associated stronger screening effects from more rational surfaces on the external vacuum field, as compared to the n = 2 case. For the pitch-resonant radial field, the plasma response plays an overall screening role at all rational surfaces.
At the outermost two rational surfaces (q = m/n = 17/6, 18/6), |b^1_res| is about 0.12 G kAt^-1 and 0.14 G kAt^-1, respectively, which are about half of their vacuum counterparts. We note here that a total of 151 poloidal harmonics, with m = −75, ..., 75, are adopted in the computations, in order to guarantee numerical accuracy. We have checked that |b^1_res| vanishes at all the corresponding rational surfaces when the ideal plasma response is considered (not shown here).

Figure 6 shows the total normal field perturbations b_⊥ inside the plasma in the (R, Z)-plane, including the plasma response, related to the cases shown in figures 4 and 5. Figures 6(a)-(f) show that the plasma response field on the high-field side (HFS) is weaker than that on the low-field side (LFS), consistent with the discussion in [20]. For the n = 2 and n = 6 harmonics (figures 6(c) and (f)), the plasma response produces a finite field perturbation near the top and bottom of the torus. The plasma response fields near the upper null region are much stronger than those near the lower one. The maximum values of the overall amplitude of b_⊥ for the n = 2 and n = 6 components are ∼17.4 G kAt^-1 and 7.0 G kAt^-1, respectively. Figures 6(g) and (h) present the mixture of the n = 2 and n = 6 components, which is defined as Σ_n=2,6 b_⊥ exp(−inϕ). Here, b_⊥ is complex, and ϕ = 0 is assumed in figures 6(g) and (h). Figures 6(g) and (h) show that the overall patterns of the real and imaginary parts of the superposed perturbations are mainly determined by the n = 2 component. However, due to local cancellation or enhancement at certain special locations, the overall pattern of the magnitude of the mixed |b^1_n| is slightly different from that of the n = 2 component. The maximum value of the mixed |b^1_n| is slightly enhanced, to 24.1 G kAt^-1 (figure 6(i)).

In order to study the detailed features of the superposed b^1_n, the distribution of b^1_n on the last closed flux surface (LCFS) is reported in figure 7. As expected, the real (i.e. Re(b^1_n)) and imaginary (i.e. Im(b^1_n)) parts of b^1_n vary periodically along the toroidal angle for the case with a pure toroidal component (i.e. n = 2). The corresponding amplitude of b^1_n varies along the equal-arc poloidal angle χ and is constant along ϕ at a fixed χ. On the LCFS, the field perturbation near the upper null (i.e. χ ∼ 100°) is much larger than that near the lower null (i.e. χ ∼ −100°), as shown in figure 7(c). The added n = 6 sideband significantly modifies the patterns of Re(b^1_n) and Im(b^1_n), as shown in figures 7(d) and (e). The sum of the n = 2 and n = 6 RMPs results in a substantial periodic variation of the b^1_n amplitude along ϕ, as expected. This implies that the effect of the mixed total perturbation with two toroidal components on the magnetic topology differs from that of the single-n (n = 2) harmonic case.

Effect of n = 2 and n = 6 RMP fields on magnetic footprints

Next, the Poincaré maps of magnetic field lines for the case of the n = 2 RMPs alone and for the case with the sum of the n = 2 and n = 6 RMPs are shown in figure 8. Here, the RMP perturbations are computed assuming the resistive plasma response (with the Spitzer resistivity model). For the case including the n = 2 RMPs, the dominant magnetic islands occur at the q = 2.5 and q = 3 surfaces, which correspond to the resonant harmonics m/n = 5/2 and m/n = 6/2, respectively. Furthermore, a secondary island chain with helicity m/n = 11/2 occurs at ψ_p ≃ 0.96.
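For orientation, a Poincaré map of this kind is produced by integrating the field-line equations dR/dϕ = R B_R/B_ϕ and dZ/dϕ = R B_Z/B_ϕ through the total (equilibrium plus perturbed) field, recording one puncture point per toroidal transit. The sketch below illustrates the procedure with an invented stand-in field (a circular equilibrium plus a small m = 5, n = 2 helical kick resonant at q = 2.5); the actual maps use the MARS-F response field.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in axisymmetric field plus a small helical perturbation (illustrative only).
R0, B0, q = 1.78, 1.8, 2.5

def bfield(R, Z, phi):
    r = np.hypot(R - R0, Z)
    theta = np.arctan2(Z, R - R0)
    Bphi = B0 * R0 / R
    Bpol = r * Bphi / (q * R0)                 # simple circular-surface model
    BR = -Bpol * np.sin(theta) + 1e-4 * np.cos(5 * theta - 2 * phi)
    BZ = Bpol * np.cos(theta)
    return BR, BZ, Bphi

def rhs(phi, y):
    R, Z = y
    BR, BZ, Bphi = bfield(R, Z, phi)
    return [R * BR / Bphi, R * BZ / Bphi]      # dR/dphi, dZ/dphi

# Integrate 200 toroidal transits; puncture the phi = 0 plane once per transit.
transits = 200
sol = solve_ivp(rhs, [0, 2 * np.pi * transits], [R0 + 0.40, 0.0],
                t_eval=2 * np.pi * np.arange(transits), rtol=1e-9, atol=1e-11)
punctures = sol.y.T                            # (R, Z) Poincare points
print(punctures[:5])
```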
When the n = 6 component is added, the magnetic surfaces near the edge (i.e. ψ_p ≃ 0.98) are further broken and a stochastic region forms. The n = 6 fields produce additional magnetic island chains at the rational surfaces q = 2.66 and q = 2.83, which correspond to the resonant harmonics m/n = 16/6 and 17/6, respectively. In addition, at the q = 5.5 (i.e. ψ_p ≃ 0.96) surface, corresponding to m/n = 11/2, the n = 6 component induces a distortion of the island structures generated by the n = 2 RMP fields. Note that small island structures exist between the ψ_p = 0.97 and ψ_p = 0.98 surfaces when the n = 6 RMPs are included. The n = 6 RMPs thus play a remarkable role in modifying the magnetic field topology near the edge region. Given the difference between the Poincaré maps of field lines for the aforementioned two cases, the n = 6 component is expected to impact the magnetic footprints on the divertor target plates, as studied in the following. Note that a 1 kAt RMP current is assumed here in order to clearly display the individual magnetic islands in the plasma edge region. If a 10 kAt coil current were to be adopted, the magnetic surfaces in the edge region would be substantially broken, without individual islands being visible.

Figure 6. Real (left three panels), imaginary (middle three panels) and amplitude (right three panels) of the normal perturbations for the total RMP fields including the resistive plasma response. The top, middle and bottom three panels correspond to the n = 2, n = 6 and superposition of the n = 2 and n = 6 components, respectively. We adopt the coil phasing ∆Φ_n=2 = −90° and ∆Φ_n=6 = 90° for the n = 2 and n = 6 harmonics, respectively. Here, the toroidal angle ϕ = 0 is chosen for plotting.

As shown in figure 10(d), the plasma response significantly reduces the minimum value of ψ_p reached by field lines, compared with that for the vacuum RMPs, which implies an increase in the particle loss region inside the plasma. When the resistive plasma response is included, the additional n = 6 sideband also has a substantial influence on the trajectories of the field lines, as shown in figure 10(b). In addition, for the convenience of describing magnetic footprints in the following, the distance along the limiter (denoted by L) away from the reference point (labeled by 0 in figure 10(g)) is defined. For the cases studied here, the computations show that the RMP fields mainly impact the magnetic footprints on the upper outer and lower outer divertor plates, referred to as 'FT1' and 'FT2', respectively. Hence, the effects of the RMP field perturbations on the 'FT1' and 'FT2' footprints are presented in the following. Here, the RMP spectra in figure 6 are adopted, assuming a 10 kAt RMP coil current, with the coil phasing ∆Φ_n=2 = −90° and ∆Φ_n=6 = 90° for the n = 2 and n = 6 harmonics, respectively. In the footprint plots, the two axes denote the 'limiter distance' L defined in figure 10(g) and the toroidal angle, respectively.

Figure 11 reports the influence of the vacuum RMP fields on the magnetic footprints on the outer divertor plates. For the case with n = 2 RMPs, the overall minimum value of ψ_p,min reached by field lines is about ψ_p,min = 0.95 (figure 11(a)). However, when the n = 6 sideband is added, ψ_p,min is reduced to 0.93, as shown in figure 11(b) for 'FT1'. In addition, the n = 6 sideband extends the width of the footprint, in terms of the limiter distance L, by 75%, from ∼4 cm (figure 11(a)) to ∼7 cm (figure 11(b)).
The sum of the n = 2 and n = 6 field perturbations results in an additional secondary periodic variation of the magnetic footprint along the toroidal angle (figure 11(b)). The 'FT2' footprint pattern is similar to that of 'FT1' for the chosen RMP coil configuration and the reference equilibrium with an up-down symmetric boundary shape. The overall minimum value of ψ_p,min for 'FT2' is almost the same as that for 'FT1', implying symmetry of the heat load between the upper outer (figures 11(a) and (b)) and lower outer (figures 11(c) and (d)) divertor plates when the vacuum RMP fields are considered. However, due to the very slight up-down asymmetry of the limiter surface, the width of the footprint for 'FT2' is slightly different from that for 'FT1'. For 'FT2', the width of the footprint is ∼4.8 cm (figure 11(c)) for the case with the n = 2 vacuum field perturbation, which is extended to ∼7.7 cm as the n = 6 sideband is added (figure 11(d)).

The magnetic footprints for the case including the resistive plasma response are plotted in figure 12. For 'FT1', the width of the footprint is ∼8 cm when the n = 2 perturbation alone is assumed (figure 12(a)), which is about double that for the vacuum case. In addition, the minimum value of the overall ψ_p,min is ψ_p,min ∼ 0.92, which is much smaller than the corresponding value for the vacuum case (ψ_p,min ∼ 0.95). When an additional n = 6 component is taken into account, the 'lobe' structure is split (figure 12(b)), which implies an extension of the footprint in the toroidal direction at the distance parameter L ∼ 418 cm. For the footprints near the lower outer divertor plate (figures 12(c) and (d)), the additional n = 6 sideband induces an extension of the magnetic footprint width and modifies the periodic variation along the toroidal angle, similar to the vacuum field case (figure 11(d)). Interestingly, the plasma response breaks the symmetry of the footprint pattern between the two outer divertor plates. This symmetry breaking is mainly due to the asymmetric features of the total RMP field perturbations in the top and bottom regions, as shown in figures 6(c), (f) and (i). For the lower outer divertor plate, the overall pattern of the magnetic footprint is similar to that for the vacuum case, due to the weak plasma response fields in the bottom region of the torus. Furthermore, the plasma response generally reduces the minimum value of the overall ψ_p,min reached by field lines.

Effect of RMP fields on fast ion losses

In addition to their influence on the magnetic footprints, RMP fields also have significant effects on fast ion confinement and losses. Fast ion losses further impact the heat load on the plasma-facing materials. In the following, the synergistic effect of the n = 2 and n = 6 RMP fields on the fast ion losses is investigated. Externally applied RMPs commonly induce flux surface breaking near rational surfaces and the formation of magnetic islands in the presence of the resistive plasma response. Island overlapping often causes magnetic field line stochasticity inside the plasma. It is therefore important to study how the flux surface breaking impacts the fast ion losses. As a contrast, we also consider the ideal plasma response in figure 13. In an ideal plasma, RMPs mainly induce distortion of the magnetic field lines without changing the flux surface topology. Consequently, we expect differences in the fast ion losses due to RMPs when assuming the ideal versus the resistive plasma response model. These differences indeed appear in figure 13.
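The REORBIT tracing behind these comparisons advances the guiding-center drift equations for each test particle. A minimal collisionless sketch in the lowest-order drift approximation is given below; it is an illustrative stand-in rather than REORBIT itself, and the circular equilibrium, particle parameters and numerical tolerances are all assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lowest-order guiding-center pusher: parallel streaming plus the combined
# grad-B/curvature drift of a low-beta field. With no collisions or electric
# field, the kinetic energy and magnetic moment are conserved by construction.
QE, MD = 1.602e-19, 3.344e-27            # charge (C) and mass (kg) of a deuteron
R0, B0, q = 1.78, 1.8, 2.5               # stand-in circular equilibrium

def bvec(x):
    R = np.hypot(x[0], x[1]); Z = x[2]
    eR = np.array([x[0], x[1], 0.0]) / R
    ephi = np.array([-x[1], x[0], 0.0]) / R
    r = np.hypot(R - R0, Z); th = np.arctan2(Z, R - R0)
    epol = -np.sin(th) * eR + np.cos(th) * np.array([0.0, 0.0, 1.0])
    return (B0 * R0 / R) * (ephi + (r / (q * R0)) * epol)

def rhs(t, y):
    x, vpar, mu = y[:3], y[3], y[4]      # mu = vperp^2 / (2B), per unit mass
    B = bvec(x); Bm = np.linalg.norm(B); b = B / Bm
    eps = 1e-6
    gradB = np.array([(np.linalg.norm(bvec(x + eps * e)) -
                       np.linalg.norm(bvec(x - eps * e))) / (2 * eps)
                      for e in np.eye(3)])
    vd = (MD / QE) * (vpar**2 + mu * Bm) * np.cross(B, gradB) / Bm**3
    return [*(vpar * b + vd), -mu * np.dot(b, gradB), 0.0]

# 60 keV deuteron, pitch vpar/v = 0.7, launched at the outboard mid-plane.
v = np.sqrt(2 * 60e3 * QE / MD)
x0 = np.array([R0 + 0.5, 0.0, 0.0])
mu0 = (1 - 0.7**2) * v**2 / (2 * np.linalg.norm(bvec(x0)))
sol = solve_ivp(rhs, [0, 2e-4], [*x0, 0.7 * v, mu0], max_step=1e-8, rtol=1e-8)
print(sol.y[:3, -1])                     # final guiding-center position
```

A loss criterion then amounts to checking, along the trajectory, whether the guiding center crosses the limiter surface; this is how the loss statistics discussed below are accumulated.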
Figure 13 reports the final positions of 8100 test particles, and the fast ion losses along the poloidal angle, after a ∼5 ms simulation in the presence of the RMP fields including either the ideal or the resistive plasma response. Numerically, we find that most of the losses occur within the first 5 ms, which is why we choose this timescale in our simulations to study the key features of the fast ion losses. We note that the same timescale (5 ms) was also chosen to study the effect of RMPs on fast ion transport through a full orbit simulation [39]. Here, the test fast ions are launched at a given radial position (labeled by the normalized poloidal flux ψ_p0) and from the outboard mid-plane, with a uniform distribution on a 90 × 90 phase-space mesh in the particle pitch (0.3 < λ < 1) and energy (10 keV < E < 80 keV). The initial fast ions thus cover the majority of the particle population produced by the co-current NBI, which is assumed for the reference equilibrium design in this work.

Figure 13 shows that the fast ion losses induced by the RMP fields including the resistive plasma response are much larger than those for the case including the ideal plasma response. For instance, the resistive plasma response enhances the fast ion losses from ∼0.086% to ∼1.3% at the poloidal angle χ ∼ 0°. The final distribution of test particles in the poloidal cross section is radially extended for both of the above cases. The lost fast ions mainly concentrate at four poloidal locations: the upper outer divertor plate, the outboard mid-plane and the two lower target plates. This indicates that the breaking of flux surfaces induced by the resistive plasma response is essential for the fast ion losses here. This also implies that the n = 6 sideband of the RMP fields with the resistive plasma response may impact the fast ion losses, since the additional n = 6 component enhances the breaking of magnetic surfaces near the edge and widens the stochastic region, as shown in figure 8. For the case of the ideal plasma response, however, the pitch-resonant radial fields of the vacuum RMPs are fully screened at the rational surfaces, which leaves the magnetic topology unchanged and only induces a distortion of the flux surfaces. The fast ion redistribution in the (R, Z)-domain is similar for the two considered response models, which suggests that the field perturbations excluding the pitch-resonant component mainly induce fast ion redistribution or transport inside the plasma for the studied case. The particle loss mechanism related to the fractional resonance between the RMP fields and fast ions is not analyzed here; it has been reported in [33] and the references therein. In figure 13, only the n = 2 RMPs are considered, in order to distinguish the difference between the ideal and the resistive plasma response in terms of affecting the fast ion losses.

For a detailed analysis of the effect of the RMP fields on fast ion losses, the trajectories of three representative test particles are shown in figure 14. This shows that the sum of the n = 2 and n = 6 perturbations enhances the radial drift of test fast ions, as compared to the case of the single-n (n = 2) harmonic. It is evident that the field perturbation can change the type of particle orbit (e.g. from an initially trapped particle to a passing one), as shown in figure 14(a). Test particles #1 and #2 are still well confined in the presence of the n = 2 total field perturbation, but are lost when the n = 6 sideband is taken into account, as shown in figures 14(a) and (b), respectively.
Figure 14(c) shows that the added n = 6 sideband affects the deposition of lost fast ions. For the cases of the n = 2 RMP field and of the combined n = 2 and n = 6 components, the lost fast ions hit the outboard mid-plane and the upper outer divertor plate, respectively. Figures 14(d) and (e) show that the extreme radial positions (i.e. s^2 ≡ ψ_p) reached by the test particle on the HFS and LFS vary periodically under the n = 2 RMP fields, over the simulation timescale. These two confined fast ions are lost on a timescale of ∼1 ms when the additional n = 6 component is included. Before the loss, the added n = 6 component induces a gradual increase in the maximum ψ_p reached by the particle, with the final loss occurring as the particle moves across the LCFS. Figure 14(f) shows that the n = 6 sideband induces the loss of the fast ion on a shorter timescale than with the n = 2 component alone. The fast ion can move from the plasma region to the vacuum region and/or enter back into the plasma region, due to the magnetic drift. Figure 14 shows that the n = 6 sideband induces an additional radial drift of fast ions, as compared to the n = 2 RMP alone case. This is mainly due to the enhanced field line stochasticity in the plasma edge region, as shown in figure 8.

The additional radial drift due to the n = 6 sideband, as discussed above, does not significantly impact the final configuration-space positions of the test fast ions inside the plasma, as shown in figure 15(a). However, figure 15(b) shows that the fast ion losses rise greatly due to the additional n = 6 RMPs, as compared to the case considering the n = 2 RMPs alone. For instance, the fast ion loss near the upper outer divertor plate (i.e. χ ∼ 100°) is enhanced from 0.95% for the latter to 8.8% for the former.

Figure 15. Fast ion losses (in percentages with respect to the total number of test particles) along the poloidal angle on the limiter surface. We adopt the coil phasing ∆Φ_n=2 = −90° and ∆Φ_n=6 = 90° for the n = 2 and n = 6 harmonics, respectively. A 10 kAt RMP coil current is assumed.

For a more detailed analysis of the dependence of the fast ion losses on the pitch and kinetic energy of the particles, figure 16 reports the effect of the RMP fields on the fast ion loss region in phase space. The fast ions initially launched in the 'loss region' are eventually lost due to the RMP fields. This shows that the n = 2 RMP fields mainly induce losses of fast ions near the top-right triangular area in phase space, where the particles are passing ones. Fast ions with a relatively high kinetic energy, E > 40 keV, and a large pitch, λ > 0.6, are likely to be lost. This suggests that the motions of passing fast ions are more strongly affected by the RMP field than those of the trapped ones. The combined n = 2 and n = 6 field perturbations significantly extend the loss region. The kinetic energy boundary of the 'loss region' is extended down to ∼10 keV, and the pitch boundary is slightly modified, when the intrinsic n = 6 sideband fields are included. Since the fast ion orbit drift increases with the particle energy, higher-energy particles are more prone to loss if they drift outwards. Moreover, the 3D trajectory of the particle drift orbit can also resonate with the magnetic perturbation. Such a resonance induces radial transport or even particle loss [67].
Since the fast ion drift orbit depends on both the particle energy and pitch, the resonance condition can only be satisfied in certain regions of the particle phase space, for a given magnetic perturbation structure and magnitude. This is a possible reason for the occurrence of the threshold reported in figure 16(a). Furthermore, the additional n = 6 RMP enhances the stochastic region near the plasma edge, resulting in a larger loss of fast ions with relatively lower energy. This can explain the results in figure 16(b). The effect of the increased stochastic region on the fast ion transport/loss is similar to that for thermal particles [68]. The loss map method, introduced in [69], has been shown to be useful for detailed analysis of fast ion losses related to the distribution in pitch-energy space, and will also be considered in our future studies.

In the above simulations, a 90 × 90 particle mesh in pitch-energy space is adopted. However, simulations with a 20 × 30 particle mesh in phase space also capture the key features of the fast ion losses due to RMP fields in this work, including: (i) the fast ion loss rate (section 8); (ii) the threshold in particle energy and pitch for fast ion losses; and (iii) the loss pattern along the limiter surface. A reduction in the total number of test particles is computationally much more efficient, especially for the following parameter scans.

Figure 16. The initial locations of the lost fast ions (in yellow) in the pitch-energy phase space for the case of (a) including only the n = 2 RMP fields and of (b) including both the n = 2 and n = 6 components, corresponding to the two cases in figure 15. The initial radial position ψ_p0 = 0.95 is assumed. Here, the ∼5 ms timescale simulation is carried out.

Figure 17. Loss rate of fast ions as a function of (a) the initially launched radial position and (b) the assumed RMP coil current, for the case with only an n = 2 component (in blue) and with both the n = 2 and n = 6 components (in red). Here, the resistive plasma response is included in the RMP fields. A 10 kAt RMP coil current is assumed in (a), and the initial position of test particles ψ_p0 = 0.95 is adopted in (b). The coil phasing ∆Φ_n=2 = −90° (∆Φ_n=6 = 90°) for the n = 2 (n = 6) RMPs is fixed during the parameter scan.

The dependences of the fast ion loss rate on the initial radial position ψ_p0 and on the RMP coil current are studied assuming 600 test particles (i.e. on the 20 × 30 phase-space grid), with the results reported in figure 17. Here, the loss rate is defined as the ratio of the number of lost fast ions to the total number of test particles. Figure 17 clearly shows that a threshold (denoted by ψ_0,t) of ψ_p0 exists for the fast ion losses induced by the RMP perturbations. For the case of the n = 2 RMP fields, ψ_0,t is about 0.9, which is larger than that (ψ_0,t ≃ 0.875) assuming the combined n = 2 and n = 6 perturbations. This shows that the intrinsic n = 6 sideband extends the radial region in which fast ions can be lost by the RMP fields. For the fast ions initially located in the region ψ_p0 < ψ_0,t, the RMP fields mainly induce redistribution, instead of losses, for the case studied here. Studies of fast ion redistribution in phase space and in configuration space will be carried out in the future. In this work, we mainly focus on the features of the fast ion losses. The fast ion loss rate is significantly increased by the combined field perturbations, as compared to the case of the single n = 2 RMPs.
For particles deposited at ψ_p0 = 0.98, adding the n = 6 sideband increases the fast ion losses from ∼18% to ∼48%. Another key parameter affecting the fast ion loss is the magnitude of the RMP coil current (I_RMP) at a given toroidal phasing between the upper and lower coil currents. The threshold of the RMP coil current, I_RMP,t, for fast ion loss is about 5.3 kAt for the combined n = 2 and n = 6 RMP fields, which is smaller than that (∼7 kAt) with the n = 2 harmonic alone. The occurrence of the threshold in the RMP coil current for fast ion loss is likely due to the fact that the stochastic region near the plasma edge must extend radially to the initial position of the test particles. Assuming a 10 kAt RMP coil current, the fast ion loss rate reaches ∼30% for the superposed perturbations, which is much larger than that (∼10%) for the n = 2 field case. The sideband-induced enhancement of the fast ion loss rate is consistent with that observed in the DIII-D tokamak [27]. The threshold signature of I_RMP,t agrees with the experimental observations in the KSTAR device [35,70]. For various initial positions of the launched test particles and different magnitudes of the RMP coil current, the n = 6 sideband significantly enhances the fast ion loss rate once ψ_p0 or I_RMP exceeds the critical value.

Numerical sensitivity test

In the above computations, the initial poloidal angle of the test particles is fixed at χ = 0 (i.e. at the outboard mid-plane). However, the fast ion loss rate is also related to the initial poloidal location, in addition to the initial radial location. In order to test the dependence of the main conclusions on the initial poloidal location, figure 18(a) shows an example of launching test field lines at different poloidal angles (χ = −100°, 0°, 50°), while the initial radial position (i.e. ψ_p0 = 0.95) is fixed. At χ = 50°, the field line connects directly to the limiter before completing one whole toroidal period in the plasma, which implies that open field lines occur at this location. For the case of χ = 0°, the field lines are in the marginal region for forming closed field lines. For the case with the n = 2 RMP fields, the fast ion loss rate when launching fast ions at χ = 50° is about 34%, which is much larger than that with χ = 0° (figure 18(b)). This is mainly due to the fact that the magnetic topology at χ = 50° differs from that at χ = 0°. For the case with χ = −100° (i.e. a region below the outboard mid-plane), the field lines are at the boundary of forming open field lines, while for the fast ions, the equivalent initial radial position ψ_p0 at the outboard mid-plane is larger than 0.95 due to the particle's outward magnetic drift on the LFS. As a result, the fast ion loss rate for the case with χ = −100° is much larger than that for the reference case with χ = 0°. However, we emphasize that the intrinsic n = 6 sideband enhances the fast ion loss rate by ∼20%, independent of the choice of the initial poloidal location for launching the test particles (figure 18(b)). Furthermore, this implies that the relative enhancement of the fast ion loss rate by the combined n = 2 and n = 6 components is insensitive to the initial toroidal location of the test particles, due to the toroidal periodicity of the Poincaré maps of field lines. However, for a given RMP, the absolute loss rate of fast ions will depend on the initial 3D positions of the test particles.
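The bookkeeping behind such scans is straightforward; a minimal sketch is given below, with the tracer `trace_to_loss` standing in for the REORBIT integration (its name and signature are assumptions introduced purely for illustration).

```python
import numpy as np

def loss_rate(trace_to_loss, psi_p0, n_pitch=20, n_energy=30, t_end=5e-3):
    """Launch test ions on a uniform pitch-energy mesh at radius psi_p0 and
    return the fraction lost within t_end. `trace_to_loss` stands in for the
    guiding-center integration and must return True if the particle crosses
    the limiter surface before t_end."""
    pitches = np.linspace(0.3, 1.0, n_pitch)
    energies_keV = np.linspace(10.0, 80.0, n_energy)
    lost = sum(trace_to_loss(psi_p0, lam, E, t_end)
               for lam in pitches for E in energies_keV)
    return lost / (n_pitch * n_energy)

# Usage sketch: scan the launch radius, as in figure 17(a).
# for psi0 in np.linspace(0.85, 0.98, 10):
#     print(psi0, loss_rate(my_tracer, psi0))
```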
Further numerical sensitivity tests are carried out to study the dependence of the fast ion loss rate on the assumed number of test particles (figure 19). The loss rate of fast ions with 8100 particles is 9.1%, which is slightly lower than the value of 9.5% obtained with 600 particles. This shows that the loss rate depends only weakly on the number of test particles. Hence, the chosen number (N = 600) of test particles in figure 17 is sufficient to capture the key features of the fast ion losses induced by the RMP fields. For quantitative predictions of the fast ion loss rate in the HL-2M scenario, a realistic fast ion distribution (e.g. from TRANSP modeling) should be considered and large-scale modeling, launching a large number of test particles, is required; this is left for a future study.

Conclusion and discussion

In this work, the MARS-F code is applied to study important aspects of the plasma response to the dominant toroidal component (n = 2) and the secondary sideband (n = 6) of the RMP fields, for the double-null scenario of the new HL-2M tokamak device. The amplitude of the n = 6 pitch-resonant radial field perturbation reaches 35% of that of the n = 2 component at the q = 3 rational surface for the reference case. The total RMP fields, including the resistive plasma response, are much larger in the top region of the torus than in the bottom region, for both the n = 2 and n = 6 components.

We find that the n = 6 sideband modifies the 3D structure of the field perturbations in configuration space, as compared to the case including the n = 2 harmonic alone. In particular, the additional n = 6 harmonic induces a substantial periodic variation of the amplitude of the total perturbation along the toroidal angle, which is otherwise constant for the n = 2 RMP field alone. The n = 6 sideband induces secondary magnetic islands and extends the stochastic region near the plasma edge, as shown in the Poincaré maps of field lines. As a result, adding the n = 6 harmonic significantly modifies the pattern of the magnetic footprints, as compared to the case considering the n = 2 fields alone. When the plasma response is included, adding the n = 6 harmonic results in the splitting of the 'lobe' structure of the upper outer magnetic footprint and extends the width of the lower outer footprint. Interestingly, the plasma response breaks the symmetry of the footprint patterns between the two outer divertor plates.

Furthermore, we show that the fast ion losses are sensitive to the magnetic topology near the plasma edge, by comparing the cases of the ideal and resistive plasma response. The n = 6 sideband of the RMP fields enhances the fast ion losses. For the studied case, threshold values exist for inducing fast ion losses, in terms of the initial radial position of the test particles and of the RMP coil current. The existence of the n = 6 sideband reduces the aforementioned threshold values and greatly enhances the fast ion loss rate once the threshold criterion is satisfied, as compared to the case including the dominant n = 2 component alone. The n = 6 sideband also extends the 'loss region' in phase space where fast ions will be lost in the presence of field perturbations. Although a relatively small number of test particles is assumed in this work, the role of the sideband of the RMP fields in fast ion losses and the threshold feature of the RMP coil current for fast ion losses are consistent with the experimental observations on the DIII-D and KSTAR devices, respectively [27,35].
However, quantitative prediction of the fast ion loss properties (the loss rate, the phase-space dependence, as well as the deposition location of lost particles) requires knowledge of the realistic equilibrium distribution in both the particle phase space and the 3D configuration space. We leave this to a future study using more dedicated particle tracing codes such as ASCOT [71]. Moreover, this study does not account for the influences of the electric field, recombination and charge exchange on fast ion confinement and losses. These aspects are left for future investigations, especially concerning their quantitative comparison with experimental data. In addition, we emphasize that the qualitative finding from this study, namely that the toroidal sidebands of the RMP field play important roles in the divertor magnetic footprints and fast ion losses in HL-2M, should also generally apply to other tokamak devices. In particular, these new findings can have significant implications for RMP applications in future devices, where linear plasma response modeling efforts have often neglected sideband effects.

Disclaimer

This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
Comparison of overfed Xupu and Landes geese in performance, fatty acid composition, enzymes and gene expression related to lipid metabolism

Objective: The aim of this study was to compare the overfeeding performance, fatty acid composition, blood chemistry, and lipid metabolism-related enzyme activities and gene expression of overfed Xupu and Landes geese.

Methods: Sixty male Xupu geese (80 d) and sixty male Landes geese (80 d) were selected. After a one-week pre-overfeeding period, Xupu and Landes geese were overfed three meals of 550 and 350 g/d, respectively, of a high-carbohydrate diet in the first week of the overfeeding period. The next week, geese were given four meals of 1,200 and 850 g/d, respectively, over 8 to 14 d. Finally, geese were given five meals of 1,600 and 1,350 g/d, respectively, for the last two weeks.

Results: After overfeeding for 28 d, and compared with Landes geese, the liver weight and liver-to-body weight ratio of Xupu geese decreased (p<0.05), while the final weight, slaughter weight, total weight gain, abdominal fat weight, and feed-to-liver weight ratio increased (p<0.05). The levels of elaidic acid (C18:1t9), oleic acid (C18:1n-9), eicosenoic acid, and arachidonic acid in the liver of Xupu geese significantly increased (p<0.05), and the levels of myristic acid and stearic acid significantly decreased (p<0.05), while methyl eicosanoate significantly increased (p<0.05). Xupu geese had higher plasma concentrations of triglyceride and very low density lipoprotein cholesterol (p<0.05), and decreased activities of alanine aminotransferase, aspartate aminotransferase, and lipase (LPS) (p<0.05). Landes geese had higher LPS activity (p<0.05), but lower cholinesterase activity (p<0.05), when compared with Xupu geese. The mRNA expression levels of the fatty acid desaturase (FADS), elongase of long-chain fatty acid 1 (ELOVL1), ELOVL5, and acyl-CoA:cholesterol acyltransferase 2 (ACAT2) genes were significantly upregulated (p<0.05) in Landes geese when compared with Xupu geese.

Conclusion: This study demonstrates that the liver production performance of Landes geese was better than that of Xupu geese to some extent, which may be closely related to LPS activity, as well as to the expression of FADS, ELOVL1, ELOVL5, and ACAT2.

INTRODUCTION

With the occurrence of African swine fever in China, the consumption of safer poultry meat has increased dramatically [1,2]. When compared with other poultry meat, goose meat has beneficial characteristics, including high protein and low fat content [3]. Moreover, goose liver has a high capacity for fat accumulation and is used for the production of "foie gras" in poultry production [4,5]. Unlike mammalian fatty liver, the main components of foie gras are unsaturated fatty acids (UFA), which have been shown to help protect against cardiovascular and cerebrovascular disease in humans [6,7].

The Landes goose (Anser anser) originated in the southwestern Landes province of France. Landes geese have a beneficial liver capacity in that the liver can increase 5- to 10-fold in size over the course of a short-term overfeeding procedure. As a result, the Landes goose has become the world's most famous breed for producing fatty liver products [8]. The Xupu goose (Anser cygnoides domesticus) is an indigenous breed of western Hunan Province in China and has been included in the list of National Livestock and Poultry Genetic Resources Protection in China [9,10]. When compared with other domestic goose breeds, the Xupu goose has the best capacity for fat accumulation in the liver.
To this end, Fournier et al [11] showed that the difference in liver size between two breeds of geese may be closely related to heredity. This difference may also be attributable to a wide range of other factors, including sex, nutrition, housing density, and housing environment. At present, many more studies have been conducted on Landes geese than on Xupu geese, and comparative studies between the two breeds remain scarce. Here, we sought to investigate the overfeeding performance, plasma biochemistry indices, fatty acid composition, and lipid metabolism-related enzyme activities and gene expression of Xupu and Landes geese. These data not only form a basis for studying the mechanisms of liver fat deposition, but also provide a theoretical reference for the breeding of fatty liver geese.

MATERIALS AND METHODS

Experimental design, diets, and animal management

All experiments were approved by the Institutional Animal Care and Use Committee of Hunan Agricultural University (Hunan, China). All methods and procedures were performed in accordance with the approved guidelines provided by the regional Animal Ethics Committee.

Sixty male Xupu geese (4,947.00±377.54 g) and sixty male Landes geese (4,751.00±244.73 g) were selected for this study. All geese were maintained under the same feeding and management conditions. Xupu geese were provided by the Hunan Hongyu Xupu Goose Industry Development Co., Ltd. (Huaihua City, Hunan Province, P. R. China) and Landes geese were provided by the Hunan Fugoose Industry Development Co., Ltd. (Chenzhou City, Hunan Province, P. R. China). At 80 d of age, a one-week pre-overfeeding period began. During this time, food intake was progressively increased to enlarge the volume of the digestive tract and to initiate metabolic adaptation to overfeeding. At the end of the pre-overfeeding period, all geese were force-fed a carbohydrate diet consisting of 98% boiled maize, 1.0% plant oil, 0.5% salt, and 0.5% multivitamin (multivitamin provided per kilogram of diet: vitamin A, 80,000,000 IU; vitamin D, 6,000,000 IU; vitamin E, 40,000 IU; vitamin B1, 12,000 mg; vitamin B2, 50,000 mg; vitamin B6, 500 mg; vitamin B12, 2,000 mg; vitamin K3, 3,000 mg; vitamin C, 16,000 mg; pantothenic acid, 3,000 mg; folic acid, 2,000 mg; nicotinic acid, 4,000 mg; biotin, 2,000 mg; methionine, 10,000 mg; lysine, 8,000 mg; tryptophan, 800 mg; arginine, 1,800 mg; serine, 8,000 mg; alanine, 18,000 mg). In total, this diet provided 3,370 kcal/kg, with a composition of 90 g of protein/kg and 4.5 g of fat/kg. Xupu geese, having a greater capacity for overfeeding ingestion, were fed by the operator to the maximum of their ingestion potential. The feed intakes of Xupu and Landes geese were different because of their different body weights. Xupu and Landes geese were overfed three meals of 550 and 350 g/d, respectively, of a high-carbohydrate diet in the first week of the overfeeding period. The next week, geese were given four meals of 1,200 and 850 g/d, respectively, over 8 to 14 d. Finally, geese were given five meals of 1,600 and 1,350 g/d, respectively, for the last two weeks.

This overfeeding experiment was conducted from July to August in 2018. The overfeeding room was maintained within a temperature range of 28°C to 34°C and a humidity range of 65% to 70%. All geese were divided into two groups according to breed and reared on the ground (4 m × 6 m). Each goose was labeled with a ring on its right foot.
An overfeeding machine was used in our study and was operated by the same person each time overfeeding occurred. During the overfeeding period, all geese had ad libitum access to water.

Sample collection

The initial weight of each goose was recorded before the first meal of overfeeding. After 28 d of overfeeding, all geese were feed-deprived overnight for 12 h. During this time, geese had ad libitum access to water. On the following morning, the geese were weighed and blood samples were taken by puncture of the occipital venous sinus. Blood samples were maintained at room temperature for 1 h, and plasma was obtained by centrifugation at 3,000×g for 20 min at 4°C. After blood sampling, the geese were killed by exsanguination and each liver was quickly removed and weighed. Liver samples were immediately taken from the ventromedial portion of the main lobe (right lobe) of eight Xupu geese and eight Landes geese with similar body weight gain, immediately frozen in liquid nitrogen, and stored at -80°C until later analysis of enzyme activities and mRNA levels. Initial weight, final weight, slaughter weight, and liver weight were recorded for each goose to calculate the total weight gain, body weight gain rate, liver-to-body weight ratio, and feed-to-liver weight ratio. The indicators were calculated as follows: total weight gain = final weight − initial weight; body weight gain rate (%) = total weight gain/initial weight × 100; liver-to-body weight ratio (%) = liver weight/slaughter weight × 100; feed-to-liver weight ratio = total feed intake/liver weight.

Fatty acid composition

The fatty acid composition of the liver was determined according to a previously described method [12]. Briefly, total lipids were extracted from the liver tissue using petroleum ether/anhydrous diethyl ether (1:1, v/v). Methyl esters of the lipids were prepared by saponification with a solution of KOH in methanol (4 mol/L). The organic layer was aspirated for fatty acid analysis using an Agilent 7890N gas chromatograph equipped with a flame ionization detector (Agilent Technologies, Santa Clara, CA, USA) and a CP-Sil 88 fused-silica open tubular capillary column (100 m × 0.25 mm; Agilent Technologies, USA). The gas chromatograph temperature program was as follows: an initial temperature of 140°C for 5 min, a temperature increase of 3°C/min to 220°C, and a hold at 220°C for an additional 40 min. The injector and detector temperatures were maintained at 240°C and 260°C, respectively. Hydrogen was used as the carrier gas at a flow rate of 40 mL/min. Individual fatty acid peaks were identified by comparing their retention times with those of the standards (Cat#: 189191AMP; Sigma Chemicals, St. Louis, MO, USA). The results were expressed as grams per 100 g of total identified fatty acids.

Lipid metabolism enzyme activities in liver and plasma

Approximately 0.5 g of liver sample was used to prepare the tissue homogenate. Tissues were diluted 1:9 (w/v) in ice-cold 154 mmol/L sodium chloride solution and homogenized using an Ultra-Turrax homogenizer (T10BS25; IKA, Baden-Württemberg, Germany). The resulting homogenates were then centrifuged at 3,500×g at 4°C for 10 min. The supernatant and plasma were used to determine the activities of cholinesterase (CHE), lipase (LPS), lipoprotein lipase (LPL), and hepatic lipase (HL), and the content of non-esterified free fatty acids (NEFA). All activities and contents were determined with the corresponding commercially available diagnostic kits (Nanjing Jiancheng Bioengineering Institute, China) according to the manufacturer's instructions, using a microplate reader (Multiskan GO; Thermo Fisher Scientific, USA).
Protein concentration in the supernatant of the liver homogenate was measured using a protein assay kit (A0452; Nanjing Jiancheng Institute of Bioengineering, China).

Total RNA extraction, reverse transcription, and quantitative real-time polymerase chain reaction

Total RNA was isolated from liver tissues using a TaKaRa MiniBEST Universal RNA Extraction Kit (Takara, Osaka, Japan) according to the manufacturer's protocol. The concentration and integrity of the RNA were determined using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Hudson, NH, USA) and 1% agarose gel electrophoresis, respectively. Only RNA specimens with an A260/A280 ratio of 1.8 to 2.0 and an A260/A230 ratio ≥2.0 were used for subsequent analyses. Total RNA from each sample was reverse transcribed into cDNA using the PrimeScript RT reagent Kit with gDNA Eraser (Takara, Japan), and the cDNA was then diluted 1:10 with nuclease-free water before being used for quantitative real-time polymerase chain reaction (qPCR). The primer pairs for the amplification of the adipocyte fatty acid binding protein (FABP4), stearoyl-CoA desaturase (SCD), fatty acid desaturase (FADS), elongase of long-chain fatty acid 1 (ELOVL1), acyl-CoA:cholesterol acyltransferase 2 (ACAT2), LPL, fatty acid synthase (FASN), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and beta-actin (β-actin) genes were designed from GenBank sequences using Primer Premier 5.0 and obtained from Shanghai ShengGong Biological Company (Shanghai, China), as shown in Table 1.

qPCR was performed using SYBR Green Master Mix (Vazyme Biotech, Nanjing, China) in a CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). The PCR system consisted of 2 μL of diluted cDNA template (1:9), 12.5 μL of SYBR Premix Ex Taq II, 1 μL of PCR forward primer (10 μmol/L), 1 μL of PCR reverse primer (10 μmol/L) and 8.5 µL of sterilized distilled water. The PCR program was as follows: 95°C for 30 s, followed by 35 cycles of denaturation at 95°C for 5 s and annealing/extension at 60°C for 30 s. Dissociation curves of the products were generated by increasing the temperature of the samples incrementally from 55°C to 95°C as the final step of the PCR. The GAPDH and β-actin genes were used as dual internal standards for normalizing the transcript abundance of mRNA expression. The relative expression levels of target genes were calculated by the 2^-ΔΔCt method as described by Livak and Schmittgen [13].

Statistical analyses

All data were analyzed using SPSS 21.0 (2015, IBM-SPSS Inc., Chicago, IL, USA). The variability of all data is expressed as the standard error of the mean (SEM). Differences between mean values were compared using the independent-samples t-test and were considered significant at p<0.05.

RESULTS

Overfeeding performance

The results for the overfeeding performance of Xupu and Landes geese are presented in Tables 2 and 3. When compared with Landes geese, the final weight, total weight gain, slaughter weight, abdominal fat weight and feed-to-liver weight ratio of Xupu geese significantly increased (p<0.05). In Xupu geese, the liver weight and liver-to-body weight ratio both decreased (p<0.05). There were no significant differences in the initial weight or body weight gain rate between Xupu and Landes geese (p>0.05).

Blood chemistry

As shown in Table 5, the blood chemistry analyses for Xupu and Landes geese revealed no significant differences in either TC or LDL-C (p>0.05).
However, the plasma levels of TG and VLDL-C were significantly higher (p<0.05) in Xupu geese when compared with Landes geese. When compared with Landes geese, Xupu geese had a significant decrease (p<0.05) in HDL-C content, as well as in ALT and AST activities.

Lipid metabolism-related enzyme activities

The results for the lipid metabolism-related enzyme activities in the plasma and liver of Xupu and Landes geese are presented in Table 6. When compared with Landes geese, the LPS activities in the plasma and liver of Xupu geese both significantly decreased (p<0.05). However, a significant enhancement (p<0.05) of CHE activity in the liver of Xupu geese was observed. There were no significant differences in either plasma or liver LPL or HL activities, or in the NEFA content, between Xupu and Landes geese (p>0.05).

Gene expression

The mRNA expression levels of lipid metabolism-related genes in the livers of Xupu and Landes geese are shown in Table 7. The mRNA expression levels of FADS, ELOVL1, ELOVL5, and ACAT2 in the liver of Xupu geese were significantly downregulated (p<0.05) relative to those in the liver of Landes geese. There were no significant differences in the mRNA expression of FABP4, SCD, LPL, FASN, and retinol binding protein 7 in the liver between Xupu and Landes geese (p>0.05).

DISCUSSION

In our study, we found that the livers of Xupu geese were smaller and the feed-to-liver weight ratio of Xupu geese was higher than those of Landes geese. Since Xupu geese have a larger body size, the liver-to-body weight ratio showed the opposite result. The abdominal fat weight of Xupu geese was higher than that of Landes geese. A possible reason for this result is that a large amount of fat was transferred from the liver to extrahepatic tissues (e.g., abdominal adipose tissue) in Xupu geese. The above results confirm that Landes geese are the best breed globally for producing fatty liver products.

Overfeeding with a carbohydrate-rich diet results in high de novo lipogenesis. The present study was the first to compare the fatty acid composition of the fatty liver in Landes and Xupu geese. There were both similarities and differences between the two breeds. The main products of fatty acid synthesis were 16:0, 18:0, and above all 18:1. Together these fatty acids accounted for more than 90% of the hepatic TG fatty acid content, and the proportion of 18:1 alone was more than 50% in both breeds. These results were consistent with the fatty liver compositions found previously in both geese and ducks [14,15]. Relative to Landes geese, Xupu geese had an increase in the proportion of 18:1, but a decrease in the proportion of 18:0, in their fatty liver. This finding was in agreement with those of both Cazeils et al [16] and Hatsugai et al [17]. Simultaneously, the proportions of monounsaturated fatty acids and polyunsaturated fatty acids (PUFA) in the fatty liver of Xupu geese were significantly higher than those in Landes geese. Moreover, the proportion of saturated fatty acids in the fatty liver of Xupu geese was significantly lower than that of Landes geese. These results were consistent with a previous study, in which both Xupu and Landes geese were fed for 21 days [18]. Given these findings, it is clear that the overfeeding production performance and fat deposition of Xupu geese were inferior to those of Landes geese. However, the proportion of UFA in the fatty liver of Xupu geese was significantly higher than that of Landes geese, which may indicate that Xupu fatty liver is better for human health.
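For reference, the relative-expression values underlying the gene comparisons above follow the 2^-ΔΔCt calculation of Livak and Schmittgen [13]. A minimal sketch is shown below; the Ct numbers are invented placeholders, and using the mean Ct of GAPDH and β-actin (equivalent to the geometric mean of their expression) is one common way to combine dual reference genes.

```python
import numpy as np
from scipy import stats

def rel_expr(ct_target, ct_ref, calib_target, calib_ref):
    """2^-ddCt: dCt = Ct(target) - Ct(reference); ddCt is taken relative to
    the mean dCt of the calibrator group (here, Xupu geese)."""
    dct = np.asarray(ct_target) - np.asarray(ct_ref)
    dct_calib = np.mean(np.asarray(calib_target) - np.asarray(calib_ref))
    return 2.0 ** -(dct - dct_calib)

# Placeholder Ct values for one gene (e.g. FADS); the reference Ct is the
# mean Ct of the two internal standards (GAPDH and beta-actin).
xupu_t, xupu_r = [24.1, 24.5, 23.9, 24.3], [18.0, 18.2, 17.9, 18.1]
landes_t, landes_r = [22.6, 22.9, 22.4, 22.8], [18.1, 18.0, 18.2, 17.9]

fold_xupu = rel_expr(xupu_t, xupu_r, xupu_t, xupu_r)      # ~1 by construction
fold_landes = rel_expr(landes_t, landes_r, xupu_t, xupu_r)
t, p = stats.ttest_ind(fold_xupu, fold_landes)            # independent-samples t-test
print(fold_landes.mean(), p)
```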
Fatty liver occurs in geese or ducks when fat synthesis exceeds fat secretion. In response to overfeeding, de novo hepatic lipogenesis increases dramatically; TG do not fully enter the secretion pathway, and a large proportion of TG remains stored in the liver [11,19]. During overfeeding, part of the newly synthesized TG in the liver is incorporated into hepatic lipoproteins, mainly VLDL, which can be secreted into the blood and used (or stored) in extrahepatic tissues. We found that the plasma contents of TG and VLDL in Xupu geese were higher than those in Landes geese. These results were in agreement with previous studies, which showed a higher VLDL concentration in Poland geese than in Landes geese, even though the liver weight of the former was less than that of the latter [11]. Xu et al [20] found that the plasma concentrations of TG and VLDL were higher in Sichuan White geese than in Landes geese. In the present study, overfeeding with a high-energy corn diet for 28 d induced elevations in the concentration of plasma HDL, which was in accordance with the findings of previous studies in broiler chickens and geese [21,22]. It is clear that the transfer of fat from the liver to extrahepatic tissue is one of the reasons why the liver weight of Xupu geese was smaller, but their abdominal fat heavier, than that of Landes geese. These results suggest that the mechanism behind goose fatty liver formation is mainly attributable to an imbalance between the storage and secretion (as plasma lipoproteins) of newly synthesized endogenous lipids and exogenous lipids in the cytoplasm. During overfeeding with a high-energy corn diet, plasma ALT and AST are mainly derived from the liver, resulting in higher ALT and AST activities. Our study found that the plasma activities of ALT and AST in Xupu geese were lower than those in Landes geese, indicating that Landes geese suffered a higher degree of liver damage due to long-term overfeeding. The results of this experiment were in agreement with those of Zhu et al [21], who showed that long-term overfeeding induced liver cell inflammation. Therefore, we support the opinion that goose hepatic adaptation to overfeeding is of notable importance. Our study found that the activity of LPS in the plasma and liver of Landes geese was increased relative to that of Xupu geese. Pancreatic lipase plays an important role in fat absorption. Kobayashi et al [23] showed that lipase activity was elevated by diets with a higher fat content. Krogdahl [24] also reported that, when given a high-fat diet, birds had higher lipase activity than birds fed a low-fat diet. Much past work has found that the activity of LPL was positively correlated with the weight of fatty liver in geese or ducks; moreover, LPL was useful in the selection of Landes geese breeders with a higher susceptibility to liver steatosis [6,25,26]. Despite this, we found no difference in the activity of LPL in either plasma or liver between Xupu and Landes geese, suggesting that further study is needed. Collectively, our previous study [18] and the current data indicate that the development of severe steatosis was closely related to genes involved in lipid synthesis, packaging, secretion, transportation, deposition, or metabolism, including FADS, ELOVL1, ELOVL5, and ACAT2 [27,28]. It is well known that the FADS gene plays a key role in the synthesis of long-chain polyunsaturated fatty acids (LC-PUFAs) and the metabolism of essential fatty acids [29-33].
ELOVL1 plays an important role in the elongation of very long-chain saturated fatty acids and very long-chain monounsaturated fatty acids. ELOVL5 is mainly responsible for the elongation of 18-carbon fatty acids. ACAT2 catalyzes the conjugation of cholesterol and long-chain fatty acids to form cholesterol esters and plays an important role in the absorption, storage, transport, and apolipoprotein metabolism of cholesterol. Consistent with this, we observed that the expression of FADS, ELOVL1, ELOVL5, and ACAT2 in the liver of Landes geese was increased significantly relative to Xupu geese. Osman et al [34] showed that FADS expression levels gradually increased after overfeeding and, moreover, that the induction of FADS promoted the generation of LC-PUFAs in goose fatty liver. This increase appeared to be well coordinated with the size of the fatty liver in Landes geese, which suggests that the FADS, ELOVL1, ELOVL5, and ACAT2 genes are important to the development of goose fatty liver. In conclusion, the results of the present study showed that the overfeeding performance of Xupu geese was inferior to that of Landes geese, which may be related to the activity of LPS and the expression of FADS, ELOVL1, ELOVL5, and ACAT2. However, the proportions of UFA in Xupu fatty liver were significantly higher than those in Landes geese. Taken together, these results provide new insights into the cultivation of high-quality fatty liver geese.
2020-01-16T09:04:52.326Z
2020-01-13T00:00:00.000
{ "year": 2020, "sha1": "e5ae083df352d93912e6ea49e832108a791140d8", "oa_license": "CCBY", "oa_url": "https://www.animbiosci.org/upload/pdf/ajas-19-0842.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9fabb26cf681849ff3a7403fd1345e0cf8fcc0a2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
213330625
pes2o/s2orc
v3-fos-license
Bismuth sulphide decorated ZnO nanorods heterostructure assembly via controlled SILAR cationic concentration for enhanced photoelectrochemical cells
The current study investigates Bi2S3 thin films coated on ZnO NRAs at varying cationic concentrations through the successive ionic layer adsorption and reaction (SILAR) technique. XRD patterns reveal that Bi2S3 is successfully synthesised and exhibits an orthorhombic structure on the wurtzite ZnO NRAs. The band gap energy (Eg) of the Bi2S3/ZnO NRAs shows a notable red shift with increasing cationic concentration. The photocurrent density increases significantly as the concentration increases from 1 mM to 3 mM before decreasing at higher concentrations due to agglomeration of Bi2S3 NPs and the formation of recombination centres. The hybrid photoanode Bi2S3/ZnO NRAs at 3 mM exhibits the highest photocurrent value (1.92 mA cm−2), which is about six times greater than that of plain ZnO NRAs (0.337 mA cm−2). A high photoconversion efficiency of 1.65% at +0.5 V versus Ag/AgCl is obtained by the Bi2S3/ZnO NRAs (3 mM) in comparison with pristine ZnO NRs, mainly due to the stepwise band-edge alignment and the significant enhancement of morphological and optical properties. The study reveals that controlling the cationic concentration can potentially improve the photoconversion efficiency. Introduction Traditional sources of energy such as petroleum, coal and natural gas are non-renewable and are bound to deplete over time. Heavy reliance on these sources may lead to a severe global energy crisis, which should be avoided at all costs. One of the best alternatives is to rely on solar energy. Photoelectrochemical cells (PECs), which convert solar energy into chemical energy, have attracted much attention since Fujishima and Honda first demonstrated them in their experiment using TiO2 as a photoanode [1]. Later, metal oxide-based photoanodes were extensively developed through successful research. Among the various metal oxides for PEC applications, ZnO has been highly preferred as an alternative to TiO2 for its distinctive characteristics such as moderate band gap energy (3.37 eV), high electron mobility, small electrical resistance, and worldwide abundance [2]. However, ZnO has a critical drawback: its photocurrent production is noticeably limited owing to its relatively large band gap energy, which restricts visible light absorption. Over time, great efforts have been made to overcome these disadvantages, including organic dye doping [3], heterostructure nanocomposite synthesis [4,5], and sensitising with organic dyes [6] and quantum dots [7]. Among them, the synthesis of inorganic-based heterostructure nanorods can be a promising strategy to overcome this ZnO drawback due to the high surface area of the resulting composite materials. The arrangement of the band gap energy positions in a heterostructure is also imperative: a properly aligned band structure leads to a vital enhancement of the photocurrent density in PECs, since it introduces the fastest and easiest path for photogenerated electrons. Many narrow band gap metal sulphides, such as lead sulphide (PbS) [8], cadmium sulphide (CdS) [9], silver sulphide (Ag2S) [10], copper (I) sulphide (Cu2S) [11,12] and bismuth sulphide (Bi2S3) [13], can be used as photosensitisers of ZnO NRs.
For this purpose, Bi2S3 was selected due to its versatility and its possible applications in solar cells [14], photocatalysis [15], photodiode arrays [16], infra-red (IR) spectroscopy and photoelectrochemical cells [17]. Bi2S3 can potentially help PECs generate a good photocurrent response due to its wide range of visible light absorption (band gap of ~1.3–1.7 eV), but it can also accelerate the recombination rate of (e−-h+) pairs. Therefore, combination with ZnO could potentially suppress the recombination rate and substantially improve PEC performance. In order to assemble Bi2S3 on ZnO nanorods, several approaches have been applied, such as the hydrothermal method [17,18], chemical bath deposition (CBD) [19] and the successive ionic layer adsorption and reaction (SILAR) technique [20]. Among them, the SILAR technique is the easiest and most cost-effective method to produce Bi2S3 coatings on ZnO at ambient temperature. The cationic concentration is one of the important parameters that can affect the amount of Bi2S3 deposited on the ZnO NRAs and hence the overall photoresponse of the nanocomposite photoelectrode. The study reported in [20] revealed that increasing the cationic concentration caused a blue shift of the band gap energy due to the increased film thickness, which reduces the number of defects and accordingly decreases the density of localised states. Therefore, this study seeks to synthesise cascade-structured Bi2S3/ZnO NRAs/ITO at varying cationic solution concentrations using the SILAR technique, with the purpose of enhancing the photoconversion efficiency of ZnO NRs for PEC application. Synthesis of ZnO nanorod arrays (NRAs) Highly oriented, perpendicular ZnO nanorods were synthesised at 110°C for four hours on an indium tin oxide (ITO) substrate by the hydrothermal technique, as demonstrated in earlier reports [21,22]. Synthesis of Bi2S3/ZnO nanorod arrays Bi2S3 thin films on ZnO NRAs were synthesised using the SILAR method. Bismuth nitrate (Bi(NO3)3) solutions of different molarities and a 0.03 M sodium sulphide (Na2S) solution were used as the cationic and anionic precursor solutions, respectively. Excess unreacted precursors were removed using de-ionised water (DIW) (18.2 MΩ·cm). The ZnO NRAs were successively immersed for 60 s into the Bi-containing solution, followed by rinsing with de-ionised water. Then, the resultant film was immersed into the sulphide solution for 60 s, followed by another rinse in DIW. These four steps were considered one cycle, and each Bi2S3 thin film produced was synthesised over seven cycles. The concentration of Bi(NO3)3 was varied between 1 mM and 10 mM in order to investigate the effect of the Bi(NO3)3 concentration on the morphological structure, optical properties and photoconversion efficiency of the Bi2S3/ZnO NRAs/ITO photoanode.
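The deposition loop described above is simple enough to write down explicitly; the following Python sketch only enumerates the four-step cycle as stated in the text (the function and variable names are ours, not laboratory software).

CYCLES = 7
IMMERSION_S = 60  # seconds in each precursor bath

def silar_run(cationic_mM):
    # One SILAR cycle = cationic immersion, rinse, anionic immersion, rinse.
    steps = []
    for cycle in range(1, CYCLES + 1):
        steps.append(f"cycle {cycle}: immerse in {cationic_mM} mM Bi(NO3)3 for {IMMERSION_S} s")
        steps.append(f"cycle {cycle}: rinse in de-ionised water")
        steps.append(f"cycle {cycle}: immerse in 0.03 M Na2S for {IMMERSION_S} s")
        steps.append(f"cycle {cycle}: rinse in de-ionised water")
    return steps

for step in silar_run(cationic_mM=3):  # 3 mM proved optimal in this study
    print(step)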
Characterisation of Bi2S3/ZnO nanorod arrays The morphological structure and elemental composition of the ZnO NRAs and Bi2S3/ZnO NRAs were examined using field emission scanning electron microscopy (FESEM) and energy dispersive x-ray spectroscopy (EDX) (JEOL JSM-7600F). High-resolution transmission electron microscopy (HRTEM) with selected area electron diffraction (SAED) and electron energy-loss spectroscopy (EELS) modes (Tecnai TF20 x-twin, FEI) was used to investigate the structure, crystalline phase, lattice fringes and chemical composition. The crystalline nature of the composite films was examined by x-ray diffraction (XRD) analysis using a Philips PM1730 diffractometer at 40 kV and 40 mA, and the data were interpreted using the PANalytical X'Pert HighScore software. The scanning range was kept within 20°–80° 2θ with a scanning rate of 5° min−1. Raman spectroscopic analysis was carried out using an Alpha 300 R Raman spectrometer (WITec GmbH, Ulm, Germany) at a 532 nm laser excitation wavelength and 5 s integration time. X-ray photoelectron spectroscopy (XPS) analysis was performed using an x-ray photoelectron spectrometer (ULVAC-PHI Quantera II) with a monochromatic Al Kα source (1486.6 eV) and an x-ray beam size of ~100 μm. Absorbance spectra of the produced thin films were measured using a UV-Vis spectrophotometer (UV-2600, Shimadzu) over the wavelength range 200 nm–800 nm. The Eg values of the bare ZnO NRAs and the coated Bi2S3/ZnO NRAs photoanodes prepared at various cationic concentrations were estimated from the absorption measurements using the Tauc equation [21,23]: αhν = A(hν − Eg)^n, where α is the absorption coefficient, hν is the energy of the incident photon, A is a constant, Eg is the optical band gap energy (eV), and the value of n depends on the type of transition (n = ½ and n = 2 correspond to direct and indirect transitions, respectively). Here n is equivalent to ½, as ZnO is a well-known direct band gap semiconductor [23,24]. The Eg values for the Bi2S3/ZnO photoanodes deposited at different cationic precursor concentrations were estimated by extrapolating the linear portion of (αhν)² versus incident photon energy (hν) to zero. The performance analysis of the photoelectrochemical cells (PECs) was carried out in a three-electrode electrochemical cell set-up that included the bare ZnO NRAs/ITO or the binary heterostructured Bi2S3/ZnO NRAs/ITO at different cationic concentrations as the semiconducting working electrode, a platinum wire as the counter electrode and a saturated Ag/AgCl electrode as the reference electrode. The PEC performance was measured using linear sweep voltammetry (LSV) (Autolab PGSTAT204/FRA32M module) at 20 mV s−1 with a binary mixture of 0.1 M Na2S and 0.1 M Na2SO3 as the electrolyte solution. All synthesised working electrodes were illuminated using a halogen lamp with 100 mW/cm² radiation intensity. The light and dark currents were both estimated by periodically cutting the irradiation from the light source every 2 s at a constant frequency. The photoconversion efficiency was calculated based on the following equation [22]: η (%) = Jph × (E°rev − |Eapp|) / Pin × 100, where Jph is the photocurrent density (mA/cm²), Eapp is the employed (applied) potential, E°rev is the standard reversible redox potential of water electrolysis according to the normal hydrogen electrode (NHE), signified as 1.23 V, and Pin refers to the power intensity of the illumination source (mW/cm²).
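As an illustration of the Tauc extrapolation described above, the short NumPy sketch below fits the linear region of (αhν)² versus hν and reads Eg off the energy-axis intercept; the spectrum and the fitting window are synthetic stand-ins for real UV-Vis data.

import numpy as np

h_nu = np.linspace(1.5, 3.5, 200)                           # photon energy (eV)
eg_true = 2.4                                               # synthetic direct band gap
alpha = np.sqrt(np.clip(h_nu - eg_true, 0.0, None)) / h_nu  # toy absorption coefficient

tauc = (alpha * h_nu) ** 2                    # (alpha*h*nu)^2, direct-gap form
mask = (h_nu > 2.6) & (h_nu < 3.2)            # chosen linear region of the plot
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
eg_est = -intercept / slope                   # intercept with the energy axis
print(f"Estimated Eg: {eg_est:.2f} eV")       # recovers ~2.40 eV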
Results and discussions X-ray diffraction measurements were done to characterise the crystal structure of bare ZnO NRAs/ITO and ZnO NRAs/ITO decorated with Bi2S3 as a function of the cationic solution concentration. Figure 1 exhibits the x-ray diffraction patterns of bare ZnO NRAs and Bi2S3/ZnO NRAs prepared at various concentrations of the cationic solution in the 1 mM to 10 mM range. It can be observed that the ZnO NRAs grown via the hydrothermal method exhibit a hexagonal wurtzite phase (JCPDS card No. 00-003-0888), while the prepared Bi2S3/ZnO NRAs/ITO at different concentrations show an orthorhombic structure (JCPDS card No. 03-065-3884), matching the former reports by [25,26]. The distinguishable XRD peaks indicate that the Bi2S3 deposition did not affect the crystalline nature of ZnO, and no impurities were detected in the composites. The outcomes evidently confirm the successful formation of Bi2S3 on the ZnO NRAs. Figure 2 displays the Raman spectra of ZnO NRAs and Bi2S3/ZnO/ITO at the optimal cationic concentration (3 mM). Many distinct vibration peaks of the synthesised samples, situated at 102, 152, 236, 437.5, 480, 575.5 and 710 cm−1, can be observed clearly. The two vibration peaks at around 102 and 437.5 cm−1 are associated with the E2(low) and E2(high) phonon vibration modes of ZnO NRAs, respectively [27,28]. The peak at 575.5 cm−1 is ascribed to oxygen vacancies, and the low intensity of this peak indicates a low defect density in the ZnO thin film [13]. Meanwhile, the remaining peaks, located at 152, 236, 480 and 710 cm−1, correspond to Bi2S3, in agreement with the Raman peaks demonstrated by [13,26,28]. The XPS analysis was used to explore the chemical components of the heterostructured Bi2S3/ZnO/ITO (optimal sample, 3 mM). The binding energy at 284.60 eV was indexed to the C 1s transition in all XPS spectra and used as a reference to standardise the binding energies of the other elements in the thin film [29]. The XPS survey profile in figure 3(a) for the standard Bi2S3/ZnO/ITO (400 °C) shows that the heterostructured photoanode comprised only four elements: Zn, O, Bi and S. The O 1s profile can be fitted to three distinct peaks positioned at 529.64, 531.19 and 532.58 eV, respectively, demonstrating that the sample possessed three distinct O species, as illustrated in figure 3(c). The peak at 531.19 eV is attributed to the oxidation state of O bound to Zn in the crystal lattice of ZnO, while the peak at 532.58 eV is correlated with surface OH− groups [30]. Meanwhile, the peak observed at 529.64 eV corresponds to O in Bi2O3. However, the x-ray diffraction and Raman analyses were unable to identify the existence of Bi2O3 in the heterostructured Bi2S3/ZnO/ITO photoanode thin film; thus, it is more reasonable to suggest that the trace of Bi2O3 formed from the water used as the solvent for the cationic and anionic precursors [30]. The XPS peaks of Zn 2p in figure 3(b) show two peaks located at 1045.06 eV and 1022.02 eV, attributed to Zn 2p1/2 and Zn 2p3/2, respectively, proving that Zn exists in the Zn2+ state [30,31]. On the other hand, figure 3(d) shows two signals at 158.84 and 163.55 eV, which can be ascribed to Bi 4f7/2 and Bi 4f5/2, respectively, verifying the existence of Bi in the Bi3+ state [29,32]. Figure 3(a) also shows two peaks, observed at 162.26 and 160.97 eV, which can be ascribed to S 2p1/2 and S 2p3/2, respectively, indicating the existence of sulphide species (S2−) within the thin film. The morphological surfaces of the bare ZnO NRAs and the Bi2S3/ZnO NRAs photoanodes at different concentrations of cationic precursor were analysed using FESEM. Figures 4(a)–(d) illustrate the FESEM images of the plain ZnO NRAs and the Bi2S3/ZnO NRAs at 1 mM, 3 mM and 10 mM of cationic precursor, respectively. The bare ZnO NRAs are well aligned, with a distinct hexagonal phase on the ITO substrate and an average diameter of ~43.6 ± 2 nm. The Bi2S3 thin film prepared from the 1 mM cationic concentration shows non-uniform aggregates, with the formation of small grains of Bi2S3 NPs on the top of the ZnO NRs, and no alteration of the hexagonal wurtzite structure of the ZnO NRAs is observed.
When the cationic concentration was increased to 3 mM, the Bi2S3 nanoparticles became distinct and covered the entire surface of the ZnO NRs uniformly, with an average diameter of ~112.35 ± 2 nm. The FESEM images of the Bi2S3/ZnO NRAs photoanode at this concentration show better surface porosity, which could enhance the surface area of the prepared photoanode. As the cationic concentration was increased above 3 mM, the Bi2S3 nanoparticles became clustered together on the ZnO NRs and blocked the spaces between the nanorods. Hence, it can be suggested that 3 mM was the optimum cationic precursor concentration to provide a large surface area for PEC application. Figure 5(b) demonstrates the smooth surface of the rod before the Bi2S3 deposition. After the deposition, a uniform distribution of Bi2S3 nanoparticles with an average diameter of around ~7 nm is clearly observed on the surface of the ZnO NRAs. Figure 6 illustrates the HRTEM image of Bi2S3/ZnO NRs (3 mM) with d-spacings of the lattice fringes of 0.26 and 0.31 nm, corresponding to wurtzite ZnO NRs (002) and orthorhombic Bi2S3 (023), respectively. The findings are in good agreement with the XRD patterns discussed earlier and with other reports [25,33]. The elemental mapping of Bi2S3/ZnO NRs (3 mM) was examined with EDS measurements. Zn, O, Bi and S elements were detected (figure 7), and the atomic ratio of Bi to S was 2:3, close to the stoichiometric composition of Bi2S3, verifying the successful deposition of Bi2S3 on the ZnO NRAs. The atomic percentages of Bi and S increased remarkably with increasing cationic concentration, as illustrated in the inset table of figures 7(a),(b). The obtained results agree with the findings reported by Ying Wang and co-workers [34]. Thus, it can be deduced that the ZnO NRAs have been effectively coated with Bi2S3, without the presence of other elements, which demonstrates the excellent quality of the acquired sample. The optical properties of the synthesised Bi2S3/ZnO NRAs photoanodes were studied employing a UV-Vis spectrophotometer in the wavelength range 300 nm–800 nm. Figure 8 displays the absorption spectra of the uncoated ZnO NRAs and the coated Bi2S3/ZnO NRAs/ITO photoanodes prepared from different cationic concentrations (1 mM to 10 mM). It can be noted that ZnO absorbs in the UV range of the spectrum (~385 nm). However, the absorption edges of the synthesised photoanodes extended after the deposition of Bi2S3 on the ZnO NRAs. As the cationic concentration increased from 1 to 10 mM, there was a red shift of the absorption edge to longer wavelengths: the amount of Bi2S3 deposited on the ZnO NRAs increased, which led to an increase in nanoparticle size and a reduction in the optical band gap energy, as demonstrated in the insets of figure 8. This impact, known as the 'quantum confinement effect' [29], exhibits comparable behaviour in other materials of this nature [35]. The Eg values for the Bi2S3/ZnO photoanodes deposited at different cationic precursor concentrations were estimated by extrapolating the linear portion of (αhν)² versus incident photon energy (hν) to zero, as illustrated in figure 8(B).
A significant decrease in the Eg value was observed with increasing cationic concentration, due to the weakening of the quantum confinement effect as the amount of Bi2S3 deposited on the ZnO NRAs increased. The Eg value of Bi2S3/ZnO NRAs/ITO decreased from 3.22 eV to 1.90 eV as the cationic concentration rose from 1 mM to 10 mM. The calculated values agree with earlier studies [13]. Furthermore, the increase in the cationic precursor concentration led to an increase in the thickness of the thin film. As a result, the homogeneity of the obtained thin films increased and the intensity of localised states in the band gap decreased, thus reducing the Eg value, as reported by [36]. The obtained Eg values are considered optimal for PEC application, suggesting that Bi2S3 is a promising photosensitiser candidate for ZnO NRAs. PEC performance of heterostructured Bi2S3/ZnO NRAs/ITO The PEC performance of the assembled Bi2S3/ZnO NRAs as a function of varied concentrations was investigated in a three-electrode configuration cell. The linear sweep voltammogram plots of the Bi2S3/ZnO NRAs are displayed in figure 9. It is noticed that the photocurrent density (Jph) obtained at +0.5 V applied potential increased from 0.337 mA cm−2 to 1.92 mA cm−2 as the cationic concentration was increased from 0 mM to 3 mM, but then decreased when prepared using higher concentrations of the Bi3+ cationic precursor. The photoconversion efficiency of Bi2S3/ZnO NRAs/ITO as a function of cationic concentration at +0.5 V applied voltage versus Ag/AgCl rose very significantly with increasing Bi(NO3)3 concentration, reaching a value of 1.65% at 3 mM. Further increments of the cationic concentration caused a decrease in the photoconversion efficiency as a result of the reduction in the photocurrent density. The calculated photoconversion efficiency at 3 mM represents a significant improvement in comparison with the plain ZnO NRAs (0.25%), as presented in figure 10. These results can be explained in terms of two significant factors: the light absorption and the band alignment of the photoanode components. As the cationic concentration was increased from 1 mM to 3 mM, the absorption of the incident solar photons increased, as illustrated previously, and more electron-hole pairs were formed; thus the Jph and η% values increased. In addition, increasing the cationic concentration to 3 mM led to an increase of the photoanode surface area, as evidenced by the FESEM images (figure 4). Moreover, as the cationic concentration was increased, the absorption edge showed a red shift, which induced the production of electron-hole pairs and contributed to the significant increase of both the Jph and η% values. In contrast, increasing the cationic concentration beyond 3 mM led to a reduction in Jph and η%, as demonstrated in table 1. This behaviour is primarily ascribed to an increase in the recombination rate of photogenerated electron-hole pairs in the PEC, with the excess Bi2S3 nanoparticles acting as recombination centres [37,38]. Additionally, increasing the concentration of Bi ions beyond 3 mM might cause an excessive amount of Bi2S3 to be coated, which could block the spaces between the ZnO NRs and induce recombination processes. This can lead to a reduction in the Jph value and also in the photoconversion efficiency of the photoanode. Nevertheless, the highest value of photocurrent density was achieved by the sample prepared using 3 mM of Bi(NO3)3, which was about six times greater than that of pure ZnO NRs.
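For readers reproducing the efficiency figures, the formula above reduces to a one-line Python function; note that the applied potential should be expressed on the reversible hydrogen electrode scale before use, so plugging in the raw +0.5 V vs. Ag/AgCl value below is illustrative only and does not reproduce the reported 1.65%.

def pec_efficiency(j_ph_mA_cm2, v_app_V, p_in_mW_cm2=100.0, e_rev_V=1.23):
    # eta(%) = Jph * (E_rev - |V_app|) / P_in * 100
    return j_ph_mA_cm2 * (e_rev_V - abs(v_app_V)) / p_in_mW_cm2 * 100.0

print(f"eta = {pec_efficiency(1.92, 0.5):.2f} %")  # ~1.40 % with these raw inputs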
Conclusion Heterostructured Bi2S3/ZnO NRs were synthesised using the SILAR method, and their properties upon varying the cationic concentration were examined for photoelectrochemical applications. A significant improvement was noted in the photocurrent density of Bi2S3/ZnO NRs (3 mM), which reached 1.92 mA cm−2 with a photoconversion efficiency of 1.65%, about seven times greater than that of the bare ZnO NRAs (0.25%). In contrast, when a cationic concentration higher than 3 mM was employed for synthesis, the photoconversion efficiency was reduced dramatically due to the formation of recombination sites.
2020-01-23T09:06:23.698Z
2020-02-10T00:00:00.000
{ "year": 2020, "sha1": "ec95dae2583006886a6020acdf195d42ed13e1a5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ab6e2e", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2963919074366d0fc00234a329d26bb98a678fd2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
237243275
pes2o/s2orc
v3-fos-license
Vaccine innovation prioritisation strategy: Findings from three country-stakeholder consultations on vaccine product innovations
As part of the Vaccine Innovation Prioritisation Strategy (VIPS), three immunization-stakeholder consultations were conducted between September 2018 and February 2020 to ensure that countries' needs drove the prioritization of vaccine product innovations. All consultations targeted respondents with immunization program experience. They included: (1) an online survey to identify immunization implementation barriers and desired vaccine attributes in three use settings, (2) an online survey to identify and evaluate the most important immunization challenges for ten exemplar vaccines, and (3) in-depth interviews to better understand the perceived programmatic benefits and challenges that could be addressed by nine innovations and to rank the innovations that could best address current challenges. The first consultation included responses from 442 participants in 61 countries, representing 89% of the 496 respondents who correctly completed at least one section of the online survey. For facility-based settings, missed opportunities for vaccination due to reluctance to open multidose vaccine vials was the barrier most frequently selected by respondents. In community-based (outreach) and campaign settings, limited access to immunization services due to geographic barriers was most frequently selected. Multidose presentations with preservative or single-dose presentations were most frequently selected as desired vaccine attributes for facility-based settings while improved thermostability was most frequently selected for outreach and campaign settings. The second online survey was completed by 220 respondents in 54 countries. For the exemplar vaccines, vaccine ineffectiveness or wastage due to heat or freeze exposure and missed opportunities due to multidose vial presentations were identified as the greatest vaccine-specific challenges. In-depth interviews with 84 respondents in six countries ranked microarray patches, dual-chamber delivery devices, and heat-stable/controlled temperature chain qualified liquid vaccines as the three innovations that could have the greatest impact in helping address current immunization program challenges. These findings informed the VIPS prioritization and provided broader application to designing immunization interventions to better meet country needs.
Introduction Immunization programs in low- and middle-income countries face challenges with current vaccine products, such as the need for refrigerated storage and transport, complex preparation and administration requirements, and multidose container presentations; these challenges can lead to higher vaccine wastage, safety issues, and missed vaccination opportunities [1,2]. Global immunization coverage has plateaued over the last decade. Despite the fact that, as a result of population growth, more children than ever are receiving three doses of diphtheria, tetanus, and pertussis vaccine before their first birthday, in 2019 there were at least 20 million children who were un- or under-vaccinated [3,4]. There is increasing recognition of the need to employ targeted solutions to extend vaccine access to reach the unreached and increase equitable coverage of vaccines [5]. The global COVID-19 crisis has further highlighted the need for vaccine product innovations that enable vaccines to reach underserved populations, particularly during rapid, large-scale responses. Vaccine product innovations (e.g., on primary containers, delivery technologies, heat-stable and freeze-stable formulations, packaging, labeling, and supply systems technologies) are powerful tools that could help overcome vaccine coverage and equity shortfalls. Such innovations have the potential to simplify logistics, increase the acceptability and safety of immunization, minimize missed opportunities, and facilitate outreach of vaccines [2,5,6]. In the 2016–2020 Supply and Procurement Strategy of Gavi, the Vaccine Alliance (Gavi), the need to drive product innovation to better meet country needs and support Alliance goals on coverage and equity was defined as one of the strategic priorities to create healthy markets for vaccines and other immunization products in the countries Gavi supports [7]. Under this priority, a key activity was the alignment of partners and the setting of a common agenda on vaccine product innovation.
To lead this effort, the Vaccine Innovation Prioritisation Strategy (VIPS) was launched in 2017 by Gavi, the World Health Organization (WHO), the Bill & Melinda Gates Foundation, the United Nations Children's Fund (UNICEF), and PATH, known collectively as the VIPS Alliance [8]. At its inception, the goal of VIPS was to articulate a clear and aligned perspective on priority vaccine product innovations and communicate these priorities to donors, immunization program partners, and technology and vaccine developers, to help inform priority setting and investment decisions. This goal was achieved in May 2020 on the completion of a comprehensive evaluation process, which culminated in the prioritization of three innovative vaccine technologies: microarray patches (MAPs), heat-stable and controlled temperature chain (CTC) qualified vaccines, and barcodes on primary packaging. The prioritized technologies represent a diversified portfolio, with innovations at varying stages of the product development pathway that address different programmatic challenges. Details on the innovations evaluated, as well as the methodology and process leading to the prioritization, are described elsewhere [8] and summarized in the accompanying article, A Global Collaboration to Advance Vaccine Product Innovations - the Vaccine Innovation Prioritisation Strategy [9]. Briefly, the VIPS prioritization process consisted of two phases, of which the first began in April 2018 and evaluated 24 innovation types. These 24 innovations were assessed for their ability to address general immunization program challenges, their applicability to one or more vaccines, and their potential impacts on health, coverage and equity, safety, and economic costs in comparison with the technologies currently in use. This first evaluation phase resulted in a shortened list of nine innovation types assessed to have attributes offering the greatest potential public health value. These nine innovations were further analyzed against a specific set of representative vaccine antigens during a second evaluation phase, occurring between June 2019 and May 2020. In this second phase, each innovation was assessed in combination with the vaccines it could apply to and evaluated against the vaccine-specific challenges it could address; its potential impact on health, coverage and equity, safety, economic costs, and the environment; as well as its technical readiness and commercial feasibility. Innovations that apply to all vaccines were also evaluated using similar criteria. The VIPS process involved in-depth consultations with a diverse set of country- and global-level stakeholders, including industry and regulators. It also involved the development and application of a qualitative analytical framework capable of evaluating a variety of technologies at different stages along the product development continuum, from technology ideation to implementation. Establishing a better understanding of countries' needs was intended as the foundation of VIPS. As such, between 2018 and 2020 the VIPS Alliance conducted three consultations with varied country decision-makers and Expanded Programme on Immunization (EPI) staff to inform the prioritization process. Opinions collected from these stakeholders through the consultations were critical inputs to that process. This article describes the methodology used, the results, and the conclusions from these three country-stakeholder consultations.
Materials and methods The surveys and interview tools underwent pre-testing by potential respondents prior to being finalized and used. No incentives were provided to the respondents for participation in the consultations. Every effort was made to obtain maximum geographic and economic diversity in the responses, ensuring that countries from a broad range of regions and income levels were targeted for participation. The results from all consultations were analyzed in Microsoft Excel. Online survey on general immunization barriers The first stakeholder consultation was conducted between September 2018 and January 2019 to identify general immunization implementation barriers (i.e., across vaccine types, formulations, and presentations, and not specific to a certain vaccine) that could be addressed by vaccine product innovations. The target audience for the survey was EPI managers, procurement staff, logistics/supply chain staff, data managers, senior policymakers (including National Immunization Technical Advisory Groups), health care service providers, implementing partners (nongovernmental organizations, civil society organizations), UNICEF and WHO country/regional office staff, and in-country research/university partners. This consultation was carried out by means of an online survey offered in four languages (i.e., English, French, Spanish, Russian), which was widely distributed via online professional forums, relevant networks across all WHO regions, and targeted emails to potential respondents, including vaccine-focused distribution lists (i.e., TechNet-21, BID Learning Network, and Africa Resource Centre) [10-12]. Clinton Health Access Initiative (CHAI) staff facilitated completion of the survey by health care service providers without internet access in Uganda and Kenya. The survey asked each respondent to select, from a list of 18 implementation barriers, the 5 they thought were most important in preventing improvements in vaccine coverage and equity. Respondents were asked to select the barriers in the context of three vaccine use settings: routine facility-based immunization, routine community-based (outreach) immunization, and campaigns including outbreak response. A second question asked them to select, from a list of 15 vaccine product attributes, the 5 that they thought could best help address the identified implementation barriers in the same use settings. The pre-populated lists of country implementation barriers and vaccine product attributes given to respondents in this survey were developed through literature review and expert inputs by VIPS Alliance members; only barriers that could be addressed by vaccine product innovations, and similarly only vaccine product attributes that could address the barriers, were included in the lists (e.g., barriers related to immunization financing were not included). Information was also collected through open-ended questions on additional barriers and desirable vaccine product attributes. See Supplementary Table 1 for the detailed survey questions. The survey responses on the implementation barriers and vaccine product attributes were analyzed by use setting. Due to a software issue with the online survey, some respondents selected more than five barriers or vaccine attributes per setting. We therefore excluded the data of respondents who provided more than five barriers or vaccine product attributes for the use setting being evaluated in a given analysis.
The rankings of the implementation barriers and the vaccine product attributes were then compared to evaluate whether the key implementation barriers selected by respondents could be addressed by the most frequently selected vaccine product attributes. Online survey on vaccine-specific immunization challenges A second online survey was conducted between November 2019 and February 2020 to identify vaccine-specific immunization challenges that could be addressed by the nine innovations short-listed by VIPS. The survey was conducted in five languages (i.e., English, French, Spanish, Portuguese, Russian). Immunization experts with knowledge of vaccination strategies and existing vaccine products from Gavi-supported and non-Gavi-supported countries were invited by email to complete this online survey. The survey was shared with potential respondents through distribution lists of country immunization experts managed by Gavi, PATH, CHAI, and WHO regional and country offices. The questions in this second online survey focused on ten exemplar vaccines, which were identified as part of the second evaluation phase of VIPS [8]. These vaccines were selected to be representative of the broader vaccine landscape based on vaccine type, formulation, and presentation. During the survey design, an initial list of challenges was compiled for each of the ten vaccines, based on the priority immunization implementation barriers identified in the first survey. These initial lists were then further refined through consultation with vaccine delivery program experts at WHO. When completing the survey, respondents were asked for inputs concerning only the vaccines that they had experience with. For each vaccine evaluated, the respondent had to select the challenges from the list provided that applied to the vaccine; they also had the opportunity to suggest additional challenges not included in the provided list. Then, from the challenges they had selected (including any they added), the respondent was asked to short-list and rank the three most important ones. If the respondent identified fewer than three challenges for the vaccine, they were asked to rank all the challenges they had identified. Supplementary Table 2 shows the second online survey questionnaire. Given that barcodes address a unique set of challenges compared with the other innovations evaluated, which focus on vaccine preparation and administration challenges, the survey included separate questions that informed the evaluation of barcodes. These questions focused on electronic systems for vaccine inventory and electronic patient record keeping, in order to gather data on the current use of electronic systems as well as country interest in, and readiness for, the use of barcodes on primary containers. These questions are also shown in Supplementary Table 2. During data analysis, we tabulated, by vaccine, the number of respondents who selected a given challenge as one of their three most important challenges. We included responses from respondents who selected at least one and up to three of the challenges for any of the vaccines. We also tabulated the responses to the questions on electronic systems. In-depth interviews to evaluate the VIPS short-listed innovations The third country consultation took place between November 2019 and February 2020, in parallel with the second online survey, and consisted of in-person interviews.
These interviews were conducted in six countries in Africa and Asia to gather feedback from decision-makers and immunization staff on the nine short-listed VIPS innovations. The countries included in the consultations were selected based on the availability of PATH and CHAI staff to conduct the interviews and the willingness and availability of country stakeholders to participate. The nine short-listed innovations of focus in this consultation were classified as either vaccine-specific (i.e., applicability is vaccine dependent) or vaccine-agnostic (i.e., relevant to all vaccines). The vaccine-specific innovations were compact, prefilled, auto-disable devices (CPADs); dual-chamber delivery devices; MAPs; solid dose implants (SDIs); freeze damage resistant liquid vaccines; and heat-stable/CTC qualified liquid vaccines. The vaccine-agnostic innovations were sharps injury protection (SIP) syringes, combined vaccine vial monitors with threshold indicators (VVM-TIs), and barcodes on vaccine primary containers. For each innovation, the aim was to understand the perceived benefits of the innovation and the challenges that could hinder its adoption, the specific vaccines for which the innovation would be most useful, and the interest in eventual adoption and use. Each respondent was also asked to select the three innovations they thought would have the greatest impact in helping address their immunization program's current needs and priorities. Interview respondents were purposively selected because they were known to have experience with, and knowledge of, immunization systems and strategies as well as vaccine management. These respondents were selected according to two profiles: the first group consisted of those with decision-making authority or influence over vaccine purchase decisions (referred to as decision-makers). This group included EPI program managers at the national and regional levels and advisors to the EPI, such as members of National Immunization Technical Advisory Groups and Interagency Coordinating Committees. The second group consisted of immunization staff working within the national programs whose roles included managing and administering vaccines at the district or health facility levels. The in-depth interviews were conducted by PATH and CHAI staff, who coordinated with the EPI managers in each country to identify the respondents for the survey, using the participant inclusion criteria outlined above. Interviewers were trained beforehand to ensure consistency in conducting the interviews. The ministries of health in each country approved the in-depth interviews. The PATH Research Determination Committee determined that this activity did not meet the definition of research involving human subjects, so the survey did not require ethical approval. Before answering questions on each innovation, the respondent was familiarized with the use of the innovation without being provided with information on its potential benefits and challenges. Where applicable, commercially available examples or prototypes of the innovation were shown to the respondent. Technology cards were also presented, with images that described the purpose of each innovation and how it is used. A short video clip was then shown to demonstrate the use of some of the technologies where their use was not deemed intuitive based on the description provided.
After familiarization with each innovation, the interviewer asked semi-structured, open-ended questions, using a research-oriented approach to questioning, to engage the respondent in conversation and explore the anticipated benefits and trade-offs of the innovation based on the respondent's experience and opinions. The respondent provided their views on the benefits and challenges of the innovation, and the interviewer probed to ask whether there were more benefits or trade-offs the respondent wanted to mention, without leading them towards specific benefits or trade-offs. If, after this probing, the respondent said they had no additional benefits or trade-offs to mention, the interviewer moved to the next question, irrespective of the number of benefits or trade-offs the respondent had mentioned. Additionally, for innovations that are vaccine-specific, the respondent was asked to provide examples of vaccines they believed could benefit from their use. This process was repeated until the respondent had evaluated each of the nine innovations. After the evaluation of the last innovation, the respondent was asked to select and rank the three most preferred innovations based on what they believed would have the greatest impact in helping address their immunization program's current challenges. The questions used in the interviews are shown in Supplementary Table 3. Four different orders of presenting the innovations were used and rotated between interview participants to avoid biasing the quality of responses through interview fatigue. Responses were documented on tablets or smartphones using Open Data Kit software [13]. The interviews were audio-recorded with the permission of respondents to allow checking of the accuracy of data entry after the interviews were completed. For ease of data entry during the interviews, anticipated benefits and challenges were available for interviewers to select in the Open Data Kit interface, along with space to enter additional comments provided by the respondents. The respondents could not see this interface. Vaccines that could benefit from use with each innovation were also pre-populated in the data form, with additional spaces provided to allow entry of other vaccines that might be mentioned by the respondents. During data analysis, for each innovation, the number of respondents stating each benefit and challenge was counted, as well as the vaccines for which the innovation would be particularly useful. Data were aggregated and analyzed for all countries. The results were also disaggregated by role (immunization staff vs. decision-makers). The overall ranking of innovations, based on the innovations that respondents believed would have the greatest impact in helping address their immunization program's current challenges, was derived using a weighted-scores approach: an innovation ranked as a first choice was given a weight of 3 points, a second choice a weight of 2 points, and a third choice a weight of 1 point. A weighted-scores approach was used for ranking the innovations given that all respondents had selected their top three innovations and ranked them by order of anticipated impact.
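A minimal Python sketch of this weighted-scores tally follows; the innovation names are from the study, but the vote data are made up for illustration.

from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}  # first, second, third choice

def weighted_ranking(votes):
    """votes: list of (innovation, rank) pairs with rank in {1, 2, 3}."""
    scores = Counter()
    for innovation, rank in votes:
        scores[innovation] += WEIGHTS[rank]
    return scores.most_common()  # highest weighted score first

example_votes = [
    ("MAPs", 1), ("dual-chamber delivery devices", 2), ("heat-stable/CTC vaccines", 3),
    ("heat-stable/CTC vaccines", 1), ("MAPs", 2), ("barcodes", 3),
]
for innovation, score in weighted_ranking(example_votes):
    print(f"{innovation}: {score} points")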
Results Online survey on general immunization barriers The first online survey was completed by 496 individuals, of whom 442 (89%) correctly selected at most five barriers or vaccine product attributes, per the survey instructions, for at least one of the delivery settings. These 442 respondents were from 61 Gavi-supported and non-Gavi-supported countries. Seventy-five percent of these respondents were from Gavi-supported African countries. Eighty percent of the countries represented in the survey had fewer than 10 respondents. The summary of survey respondents by organization is presented in Supplementary Fig. 1 and shows that the majority of respondents (55 percent) were ministry of health staff at different levels of the health system, including the service delivery level. For each setting (i.e., routine facility-based, outreach, and campaigns), a total of 268, 254, and 298 respondents, respectively, selected between one and five of the most important barriers preventing improvements in immunization coverage. The numbers of respondents selecting each of the barriers for each use setting are shown in Table 1. Missed opportunities for vaccination due to reluctance to open multidose vials was the barrier selected by the most respondents (126/268) for routine facility-based immunization. Limited access to immunization services due to geographic barriers (e.g., remote populations) was selected by the most respondents as the greatest barrier for both outreach and campaign settings (147/254 and 126/298, respectively). There were broader parallels in the priority implementation barriers between outreach and campaign settings because, for both settings, social barriers (e.g., limited access to immunization services for marginalized populations, such as those living in urban slums, single mothers, orphans and vulnerable children, certain ethnic/religious groups, refugees, etc.) were the second-most selected. For routine facility-based settings, inadequate infrastructure (e.g., buildings and electricity) for vaccine and immunization equipment storage at delivery points was the second-most selected barrier. Similarly, for each setting (i.e., routine facility-based, outreach, and campaigns), a total of 309, 306, and 324 respondents, respectively, selected between one and five vaccine product attributes that could help address the implementation barriers. The numbers of respondents selecting each vaccine product attribute are shown in Table 2. Prevention of missed opportunities (e.g., through multidose presentations with preservative or single-dose presentations) was selected by the most respondents (223/309) as the desirable product attribute for routine facility-based settings, which aligns with missed opportunities being the implementation barrier selected by the most respondents for this setting (as per Table 1). The ability to withstand heat exposure was selected by the most respondents as the desired attribute to meet the challenges faced in outreach and campaigns (176/306 and 197/324, respectively). Such an attribute could enable vaccines to reach populations that typically have limited access to immunization services due to geographic barriers. Therefore, the desired vaccine attributes identified by survey respondents align with the barriers they most frequently selected. In addition to the attributes listed in the survey, several vaccine product attributes were mentioned as desirable by survey respondents. These included needle-free vaccine presentations (i.e., oral, nasal spray, MAPs, aerosols), combination/multiple-antigen vaccines, reducing the number of doses in the regimen/vaccine schedule, and improved thermostability, including shelf-stable vaccine products that do not require cold chain storage.
Table 1 note: The values in this table are the number of respondents selecting each implementation barrier as one of the top five implementation barriers, out of the barriers provided, by use setting: routine facility-based immunization, outreach, and campaigns. The survey was correctly completed by 442 respondents, but not all respondents provided responses for each survey section (focused on immunization barriers or vaccine attributes) and for each use setting. As a result, the number of respondents included in each sub-analysis differs. We indicate the barriers that were selected by the most respondents using this key: a, barrier selected by the most respondents for the use setting; b, barrier selected by the second-most respondents for the use setting; c, barrier selected by the third-most respondents for the use setting. Table 2 note: The values in this table are the number of respondents selecting each desired vaccine attribute as one of the top five desired vaccine attributes, out of the attributes provided, by use setting: routine facility-based immunization, outreach, and campaigns. The survey was correctly completed by 442 respondents, but not all respondents provided responses for each survey section (focused on immunization barriers or vaccine attributes) and for each use setting. As a result, the number of respondents included in each sub-analysis differs. We indicate the vaccine attributes that were selected by the most respondents using this key: a, vaccine attribute selected by the most respondents for the use setting; b, vaccine attribute selected by the second-most respondents for the use setting; c, vaccine attribute selected by the third-most respondents for the use setting. Online survey on vaccine-specific immunization challenges The second survey was completed by 220 stakeholders from 54 countries, including global- and regional-level stakeholders. The global- and regional-level stakeholders accounted for 26 percent of the respondents, and about half of the respondents were from Gavi-supported African countries. Of the countries with respondents participating in the survey, 85 percent had fewer than 10 respondents completing the survey. See Supplementary Figure 2 for a summary of survey respondents by organization. Stakeholder rankings of vaccine-specific challenges are shown in Table 3. As seen in Table 3, fewer responses were received for newer vaccines and vaccines used in specific regions (such as vaccines against yellow fever, rabies, and typhoid), as the survey guided participants to only provide responses for those vaccines with which they had experience. Vaccine ineffectiveness or wastage due to damage by freeze exposure was the most frequently selected challenge for pentavalent, inactivated polio, human papillomavirus (HPV), and hepatitis B birth dose vaccines. Vaccine ineffectiveness or wastage due to heat exposure was the most frequently selected challenge for measles-containing, rotavirus, typhoid conjugate, and rabies vaccines. Both challenges (vaccine ineffectiveness or wastage due to either heat or freeze exposure) were selected as the top two challenges for five of the ten vaccines evaluated. A subset of respondents answered the questions included to provide information for evaluating the barcode innovation; they reported that, in the public immunization systems of the countries where they primarily work, 58 percent (75/130) currently use an electronic system for vaccine inventory and 22 percent (28/128) for patient records.
Of the respondents who indicated that the public immunization program in their country does not currently use electronic systems, or who did not know, 91 percent (50/55) responded that transitioning to an electronic system would be beneficial for vaccine inventory, and 92 percent (92/100) responded that it would be beneficial for patient vaccination records. These results suggest that there is strong interest from survey respondents in electronic systems, for which barcodes could improve the accuracy of data entry. However, to realize the full potential offered by barcodes on primary packaging, a transition to electronic inventory and health records would be required down to the health facility level, which could be a challenging process in many low- and middle-income countries because of the equipment costs, training needs, and other requirements.

This second survey identified the immunization challenges that apply to exemplar vaccines, providing insight on which vaccine product attributes might offer broad cross-vaccine benefits. Some vaccine-specific challenges selected across multiple assessed vaccines are consistent with the generic barriers most selected in the first survey (e.g., vaccine ineffectiveness/wastage due to heat exposure), while others (e.g., vaccine ineffectiveness/wastage due to freeze exposure) were not strongly highlighted by the first survey. The results also informed the assessment of barcodes and showed that while some countries have initiated the transition to electronic inventory management, even fewer have initiated the transition to electronic patient records.

In-depth interviews to evaluate VIPS short-listed innovations

A total of 84 respondents were interviewed across six countries: Ethiopia (n = 15), Nepal (n = 15), Nigeria (n = 21), Senegal (n = 15), Uganda (n = 17), and Mozambique (n = 1). A total of 55 immunization staff and 29 decision-makers completed the surveys.

Perceived benefits of the innovations

Tables 4 and 5 show the perceived benefits mentioned for each of the nine short-listed innovations. For the vaccine-specific innovations, easier preparation or easing of logistics was the most frequently mentioned benefit identified for CPADs (75/84 respondents), dual-chamber delivery devices (71/84), MAPs (76/84), and SDIs (66/84), as shown in Table 4. Most respondents (78/84) identified the benefit of freeze damage resistant liquid vaccines as preventing vaccine damage/wastage due to accidental freezing. For CTC qualified liquid vaccines, 56/84 respondents mentioned allowing vaccines to be kept out of the cold chain as a benefit, while 55/84 respondents mentioned preventing vaccine damage/wastage due to suspected heat exposure as a benefit. There was general consistency in the types of perceived benefits mentioned by decision-makers and immunization staff, but there were some differences in the rankings between these two groups. For example, for MAPs, improving ease of use was the most mentioned benefit by immunization staff, while increased acceptability to vaccine recipients or caregivers was the most mentioned benefit by decision-makers.

Table 4 (caption and note): Perceived benefits identified for the vaccine-specific innovations and the number and percentage of respondents mentioning the benefits of each innovation (in-depth interviews of 84 total respondents: 55 immunization staff (IS) and 29 decision-makers (DM)). Respondents did not receive any pre-populated lists and provided these benefits based on the information shared about each innovation; they could provide as many benefits as they desired. Percentages show the proportion of each group (n = 55 IS or n = 29 DM) mentioning each benefit; blank cells indicate the benefit was not mentioned by any respondent. Key: a = benefit mentioned by most respondents; b = second-most; c = third-most.
The perceived benefits identified for the vaccine-agnostic innovations were aligned with the main purpose or feature of each innovation (Table 5). For VVM-TIs, the benefit mentioned by the most respondents (41/84) was preventing vaccine damage/wastage. For SIP syringes, the benefit mentioned by the most respondents (75/84) was reducing needle-stick injuries, while for barcodes, the benefit mentioned by the most respondents (48/84) was improving the ability to track information or have information about vaccines.

Table 5 (caption and note): Perceived benefits identified for the vaccine-agnostic innovations and the number (%) of respondents mentioning the benefits (in-depth interviews of 84 total respondents: 55 immunization staff (IS) and 29 decision-makers (DM)). Respondents did not receive any pre-populated lists and provided these benefits based on the information shared about each innovation; they could provide as many benefits as they desired. Percentages show the proportion of each group (n = 55 IS or n = 29 DM) mentioning each benefit; blank cells indicate the benefit was not mentioned by any respondent. Key: a = benefit mentioned by most respondents; b = second-most; c = third-most.

Perceived challenges of the innovations

For the vaccine-specific innovations, cost implications, including overall costs or price per dose, were most frequently mentioned by respondents as a perceived challenge associated with adoption of these innovations (Table 6). Cold chain volume implications and complexity of use were mentioned as perceived challenges associated with CPADs and dual-chamber delivery devices. The need for community sensitization was mentioned by many respondents as a perceived challenge of MAPs, as the innovation may be less acceptable to vaccine recipients or caregivers given the novel vaccination technique. This challenge was also mentioned second-most frequently by respondents for SDIs, another innovation involving a novel vaccination technique. Complexity of using the delivery device innovations was a challenge that tended to be mentioned by immunization staff across these innovations, and training needs were mentioned as a challenge across most of these innovations by decision-makers.

Table 6 (caption and note): Perceived challenges facing the implementation of the vaccine-specific innovations and the number (%) of respondents mentioning the challenges (in-depth interviews of 84 total respondents: 55 immunization staff (IS) and 29 decision-makers (DM)). Respondents did not receive any pre-populated lists and provided these challenges based on the information shared about each innovation; they could provide as many challenges as they desired. Percentages show the proportion of each group (n = 55 IS or n = 29 DM) mentioning each challenge; blank cells indicate the challenge was not mentioned by any respondent. Key: a = challenge mentioned by most respondents; b = second-most; c = third-most.

As shown in Table 7, focusing on challenges for the vaccine-agnostic innovations, the need to procure appropriate equipment was the most mentioned challenge for barcode adoption, followed by the complexity of using the innovation. Cost implications were most frequently mentioned as a challenge for SIP syringes and VVM-TIs.
The second-most mentioned challenges were the time required/complexity of use for SIP syringes and training requirements for VVM-TIs.

Vaccines for which the innovations could be most useful

Table 8 shows the total number of respondents mentioning the vaccines or class of vaccines for which they believe each innovation would be most useful. Dual-chamber delivery devices, MAPs, and SDIs were identified as innovations that would be most useful for measles-containing and Bacillus Calmette-Guérin vaccines. This is consistent with the observation that they would remove the need for reconstitution of lyophilized vaccines and, due to their single-dose format, avoid missed opportunities due to reluctance to open a multidose vial and prevent wastage of unused reconstituted, preservative-free vaccine. MAPs and SDIs were also mentioned as most useful for inactivated poliovirus vaccine (IPV), pentavalent, and HPV vaccines. CPADs were perceived as being most useful for IPV, pentavalent, and pneumococcal conjugate vaccine, even though they would not address the highest-ranked challenges of vaccine ineffectiveness or wastage due to heat or freeze exposure identified for IPV and pentavalent vaccine in the second online survey (Table 3). Freeze damage resistant liquid formulations were identified as being most useful for pentavalent vaccine, IPV, and tetanus toxoid-containing vaccine. Heat-stable/CTC qualified liquid vaccines were perceived as being most useful for HPV, pentavalent, and IPV.

The respondents were also asked about the immunization delivery setting and the target population for which the innovations would be most useful. However, most respondents said the innovations were useful for all settings and all eligible vaccine target populations and did not generally prioritize one setting or population over another. These results are not reported in the tables.
Additional information gathered about the innovations

While answering the open-ended questions, respondents provided feedback about some of the innovations beyond their benefits and challenges. For MAPs, immunization staff mentioned their preference for smaller MAPs without applicators. Similarly, for SDIs, respondents reported that they preferred the version with a disposable applicator over the one with a reusable applicator. Respondents also stated that they desired innovations that could combine multiple vaccines to reduce the number of vaccinations. For heat-stable/CTC qualified liquid vaccines, decision-makers provided general feedback that the minimum number of days in CTC use needed to be longer than the current three days and should be at least seven days. For SIP syringes, decision-makers preferred the version with a retractable needle over the one with a needle shield, due to safety concerns given that the version with the needle shield requires manual manipulation too close to the needle; the shield getting in the way during injections was also a concern. They also commented that if SIP syringes were procured, they should be available to all health programs to avoid syringe diversion to health programs other than immunization. Respondents also suggested combining innovations, such as heat-stable/CTC qualified liquid vaccines with VVM-TIs, CPADs with heat-stable vaccines, and CPADs with SIP features.

Ranking of the innovations

As shown in Fig. 1, which displays the weighted ranking of the innovations, respondents suggested that MAPs, dual-chamber delivery devices, and heat-stable/CTC qualified liquid vaccines would have the greatest impact in helping address their immunization programs' current challenges. The ranking of innovations was broadly consistent between decision-makers and immunization staff. These results also align with the results of the first online survey. For instance, both MAPs and dual-chamber delivery devices can prevent missed opportunities as single-dose presentations, which was selected as the most desirable vaccine attribute for routine facility-based immunization. The ability to withstand heat exposure, which can be achieved through heat-stable/CTC qualified liquid vaccines, was the most desired vaccine attribute for outreach and campaign settings.
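The weighting scheme behind the ranking in Fig. 1 is not described in the text, so the following sketch is only an assumption: it applies a common positional (Borda-style) weighting to hypothetical top-three rankings to show how such a weighted ranking could be aggregated.

```python
# Hypothetical sketch of a positional ("Borda-style") weighted ranking.
# The report does not state the weighting behind Fig. 1, so the weights
# and example rankings below are assumptions for illustration only.
from collections import defaultdict

# Each respondent ranks their top three innovations (1st, 2nd, 3rd).
rankings = [
    ["MAPs", "dual-chamber devices", "heat-stable/CTC vaccines"],
    ["heat-stable/CTC vaccines", "MAPs", "SIP syringes"],
    ["MAPs", "heat-stable/CTC vaccines", "barcodes"],
]
weights = [3, 2, 1]  # assumed points for 1st/2nd/3rd place

scores = defaultdict(int)
for ranking in rankings:
    for position, innovation in enumerate(ranking):
        scores[innovation] += weights[position]

# Innovations sorted by aggregate weighted score, highest first.
for innovation, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{innovation}: {score}")
```

Under an assumed scheme like this, an innovation ranked first by a few respondents can be overtaken by one ranked second or third by many, which is why a weighted aggregate can differ from a simple count of first-place votes.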
Summary of the in-depth interviews to evaluate VIPS short-listed innovations

The learnings from the in-depth interviews provided critical perspectives from country stakeholders on the possible benefits and challenges associated with each innovation, as well as where the greatest potential and interest lie. The detailed information obtained on the perceived benefits and challenges and the most useful vaccine-innovation pairings will also inform follow-on activities for the prioritized innovations, and could inform continued development of all assessed innovations, given that critical feedback was obtained on issues such as product profile considerations, cost considerations, and training requirements for product introduction.

Limitations of the surveys

A key limitation of the two online surveys was the low participation from respondents in non-Gavi-supported, middle-income countries and in the Americas, Eastern Europe, and the Eastern Mediterranean, despite targeted efforts to elicit responses from these regions. A few countries also had proportionately higher response rates to the online surveys than others. Due to the online format, the reach of the survey was also limited by access to suitable devices and a stable internet connection. The second online survey's design was further limited in that the vaccine-specific challenges were not evaluated by setting or delivery strategy (i.e., routine facility-based vs. outreach vs. campaigns), in the interest of keeping the survey length manageable and maximizing the completion rate. This prevented a more comprehensive comparison with results from the first survey. A limitation of the in-depth interviews was that they were conducted in only six Gavi-supported countries (five from Africa and one from South East Asia) due to limited resources and time for partners to conduct, and for EPI programs to participate in, the in-person, in-depth interviews. While the three consultations included responses from many countries, we do not report results disaggregated by country or region, as these consultations were designed as global surveys and were not powered for country- or regional-level sub-analyses.

Table 7 (caption and note): Perceived challenges identified for the vaccine-agnostic innovations and the number (%) of respondents mentioning the challenges (in-depth interviews of 84 total respondents: 55 immunization staff (IS) and 29 decision-makers (DM)). Respondents did not receive any pre-populated lists and provided these challenges based on the information shared about each innovation; they could provide as many challenges as they desired. Percentages show the proportion of each group (n = 55 IS or n = 29 DM) mentioning each challenge; blank cells indicate the challenge was not mentioned by any respondent. Key: a = challenge mentioned by most respondents; b = second-most; c = third-most.
Conclusions

Understanding countries' immunization challenges that could be addressed through vaccine product innovations was a foundation of the VIPS process, and the insights generated through the three consultations with varied country stakeholders informed the VIPS prioritization process. The first phase of VIPS utilized an analytical framework with specific indicators to assess an initial list of 24 innovation types, which were short-listed to nine innovations based on their breadth of potential public health benefits or unique benefits and applicability to several vaccines. The results from the first survey on general immunization barriers were used to provide a qualitative weighting to the indicators that addressed the most important barriers identified by countries. In the second VIPS phase, the nine short-listed innovations were further assessed with representative vaccines, based on a more complete analytical evaluation framework. Innovations were prioritized based on indicators addressing the most important challenges identified by countries for a majority of vaccines (from the second survey) and on the level of interest (the innovation's ranking) from country stakeholders (from the in-depth interviews). The results of these three country consultations strongly influenced the final VIPS prioritization of MAPs, heat-stable/CTC qualified vaccines, and barcodes on vaccine primary packaging, as they were an important component of a broader evaluation of each innovation's potential impact.

Table 8 (caption and note): The numbers in the table are the number of respondents mentioning the vaccines or class of vaccines for which each of the vaccine-specific innovations would be most useful (abbreviations: IS, immunization staff; DM, decision-makers). Respondents did not receive any pre-populated lists and provided these vaccines based on the information shared about each innovation; they could mention as many vaccines as they desired. Blank cells indicate the vaccine was not mentioned by any respondent. Key: a = vaccines or class of vaccines mentioned for the innovation by most respondents; b = second-most; c = third-most. Numbered footnotes 1-6 in the table mark vaccine-innovation pairings that were mentioned by respondents but not assessed by VIPS due to concerns about technical feasibility (for heat-stable liquid/CTC qualified vaccines, SDIs, dual-chamber delivery devices, and MAPs with specific vaccines).
Factors considered in the broader evaluation of the innovations included potential public health benefit and impact on coverage and equity, safety, total costs, and the environment, as well as each innovation's technology readiness and commercial feasibility. The VIPS Alliance also desired to prioritize a balanced portfolio of different innovation profiles (e.g., in terms of risk to success based on the stage in the product development pathway and the resources required to bring the innovation to market or scale) [14].

Two of the three innovations prioritized at the end of the VIPS process, MAPs and heat-stable/CTC qualified vaccines, were identified in the in-depth interviews as two of the top three innovations that could have the greatest impact in helping address current immunization program challenges. Prioritization of these innovations aligns with the outcomes of the two online surveys, since both address the most challenging immunization barriers by virtue of their valuable vaccine product attributes. For MAPs, these attributes include being a single-dose presentation, which would reduce missed opportunities for vaccination and vaccine wastage; potential for enhanced thermostability, thereby facilitating outreach; and not needing reconstitution, hence avoiding reconstitution-related safety issues. For heat-stable/CTC qualified vaccines, these attributes include the ability to withstand heat exposure and reduced cold chain requirements during outreach. Regarding the prioritization of barcodes, while the in-depth interview participants did not rate them highly against the other delivery and formulation innovations, the second online survey respondents expressed strong interest in transitioning from paper-based to electronic systems for patient vaccination records and vaccine inventories, in which barcodes can play a facilitative role. Their prioritization is also intended to support the ongoing efforts of UNICEF, Gavi, and other stakeholders to improve traceability of vaccine products, including COVID-19 vaccines, for LMICs.

Although dual-chamber delivery devices were ranked second (after MAPs) in the in-depth interviews, they were not prioritized by VIPS in order to achieve a diversified portfolio. Both dual-chamber delivery devices and MAPs offer similar benefits (e.g., reducing missed opportunities and avoiding reconstitution errors), and both face significant technical and manufacturing challenges and are in early stages of development. However, there is more catalytic work, including investments, underway for MAPs that can be harnessed to move this innovation forward.

The final VIPS prioritization is an important first step towards driving product innovation to better meet LMICs' needs, but significant work is still needed to achieve uptake of any innovation, as stated country preferences do not imply country adoption once an innovation becomes available for use, and there are numerous other barriers preventing adoption and scale-up. To better understand these barriers and identify factors impacting country adoption of innovations, the VIPS Alliance analyzed four commercially available vaccine product innovations and augmented the evaluation with interviews with 17 experts. The findings are summarized in an accompanying VIPS article titled Strategies for vaccine-product innovation: creating an enabling environment for product development-to-uptake in low- and middle-income countries [15].
The article also highlights actions that should be undertaken in parallel to product development to incentivize sustainable investment and prepare the pathway for uptake and impact. Recognizing the substantial work that lies ahead, the VIPS Alliance is now developing and implementing end-to-end strategies for each of the three prioritized innovations, including 5-year action plans to accelerate their development and uptake. Activities in the action plans [8] include prioritizing vaccine applications for development; assessing the innovations' full economic value and health impact and understanding willingness-to-pay; clarifying potential demand; identifying and addressing research gaps and needs for implementation research; defining investment cases and the need for new procurement/financing mechanisms; and understanding the need for additional push funding. As a key component of these 5-year action plans, the VIPS Alliance will ensure sustained engagement with country- and regional-level stakeholders. This engagement will be essential to clarify and confirm key assumptions regarding use case scenarios, product preferences, potential demand, and willingness to pay for these innovations, to keep country priorities and preferences central to the design of, and investment in, these innovations, and to secure successful programmatic impact.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: The authors confirm that the work submitted for consideration has not been published previously and is not under consideration for publication elsewhere. The submission has been approved by all authors. If the manuscript is accepted by this journal, it will not be published elsewhere in the same form.
Influence of platelet-rich plasma (PRP) analogues on healing and clinical outcomes following anterior cruciate ligament (ACL) reconstructive surgery: a systematic review

Purpose To systematically review the effect of PRP on healing (vascularization, inflammation and ligamentization) and clinical outcomes (pain, knee function and stability) in patients undergoing ACL reconstruction, and to compare the preparation and application of PRP. Methods Independent systematic searches of online databases (Medline, Embase and Web of Science) were conducted following PRISMA guidelines (final search 10th July 2021). Studies were screened against inclusion criteria and risk of bias was assessed using the Critical appraisal skills programme (CASP) Randomised controlled trial (RCT) checklist. Independent data extraction preceded narrative analysis. Results 13 RCTs were included. The methods of PRP collection and application were varied. Significant early increases in the rate of ligamentization and vascularisation were observed, alongside early decreases in inflammation. No significant results were achieved in the later stages of the healing process. Significantly improved pain and knee function were found, but no consensus was reached. Conclusions PRP influences healing through early vascularisation, culminating in higher rates of ligamentization. Long-term effects were not demonstrated, suggesting the influence of PRP is limited. No consensus was reached on the impact of PRP on pain, knee stability and resultant knee function, providing avenues for further research. Subsequent investigations could incorporate multiple doses over time, more frequent observation and comparisons of different forms of PRP. The lack of standardisation of PRP collection and application techniques makes comparison difficult. Due to considerable heterogeneity (I² > 50%), a formal meta-analysis was not possible, highlighting the need for further high-quality RCTs to assess the effectiveness of PRP. The biasing towards young males highlights the need for a more diverse range of participants to make the findings more applicable to the general population. Trial registration CRD42021242078, 15th March 2021, retrospectively registered.

Introduction

Anterior cruciate ligament (ACL) rupture is a common injury, with an incidence of 68.6 per 100,000 person-years [1]. It often requires surgical intervention and rigorous physiotherapy to allow patients to return to their normal activities. Particularly prevalent in sport, excessive knee rotation can cause ACL rupture, leading to knee pain and instability [2]. The avascular nature of ligamentous tissue and atrophy of the stabilising muscles result in long recovery times [3]. This inhibits activities of daily living (ADLs) and can result in prolonged periods of absence from work or sport.

Platelet-rich plasma (PRP) is a biological augmentation reported to aid healing in damaged cartilage, tendon and ligamentous structures [4]. Produced by blood separation, PRP consists of autologous mixtures of concentrated platelets and growth factors. Associated with recruitment and proliferation of cells and stimulation of angiogenesis [5], early studies have shown improvements in healing and subsequent recovery, indicating potential use for injuries that require longer recovery periods [6]. Further to this, PRP has been reported to reduce pain during recovery in orthopaedic settings [7]. Pain is a major barrier to returning to previous levels of function, either through its effect on mental well-being or on the ability to complete rehabilitation exercises [8].
In an injury that requires substantial physiotherapy, PRP could improve recovery by facilitating better adherence to physiotherapy protocols, improving knee stability and function. Much of the literature investigates PRP use in the context of oral and orthopaedic procedures; however, the literature has failed to reach a consensus on its efficacy in ACL reconstruction [9]. This study aims to systematically review the current literature to determine the influence of PRP on the healing and clinical outcomes of ACL reconstructive surgery and to compare the methods of PRP collection and application.

Methodology

The review was conducted in accordance with the Preferred reporting items for systematic review and meta-analysis (PRISMA) guidelines. The systematic review was registered on PROSPERO; registration number CRD42021242078. The protocol was designed after consultation with a medical librarian. Studies were then selected based on eligibility criteria (Fig. 1).

Inclusion
• Randomised controlled trial reporting the effect of PRP on ACL reconstructive surgery outcomes
• Published in English between 2005 and 2021
• Evaluated knee pain, stability, function, vascularisation, inflammation or ligamentization
• Study conducted on humans of any age or gender

Exclusion
• Reviews and conference abstracts
• Not discussing ACL reconstruction or PRP

Search strategy

A systematic search of Medline, Embase and Web of Science (Figs. 2, 3 and 4, respectively) using key words was conducted on 10th July 2021. Results were exported to EndNote and independently de-duplicated by both authors. Study titles and abstracts were then screened against the eligibility criteria independently by both authors. Discordance led to inclusion for full-text screening to ensure no study was prematurely excluded. Full-text screening of the remaining studies was conducted independently by both authors and disagreements were remedied by discussion.

Quality assessment

All included studies were critically appraised by both authors independently using the Critical appraisal skills programme (CASP) RCT checklist [10], with disagreements remedied by discussion. Studies were classed as "good" if they answered yes to 9-11 of the CASP criteria and "satisfactory" if they answered yes to 6-8 questions.

Data extraction

Data extraction was conducted in duplicate and inputted into a piloted spreadsheet according to the outcomes measured: pain, knee stability, knee function, vascularization, inflammation and ligamentization. The authors, date, year of publication, PRP collection and application method, outcome measured, method of analysis, participant characteristics and results were recorded. Inconsistencies were resolved via discussion between authors.

Analysis

A meta-analysis was considered for the pain and knee function outcomes. Given comparable quantitative data, analysis would have involved using a fixed-effects model to calculate relative risk (RR) and 95% confidence intervals. Following this, a funnel plot and I² statistics would have been used to evaluate statistical heterogeneity. Review Manager V.5.3 would then have been utilised to create Forest plots to summarise the meta-analysis results. Where meta-analysis was not possible, a narrative analysis was conducted for each outcome. A minimal illustration of the planned pooling computation is sketched below.
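As an illustration of the planned computation (and not of the included trials' data), the sketch below pools hypothetical two-arm event counts with a fixed-effects (inverse-variance) model on the log relative risk scale and derives Cochran's Q and I²; the study counts are invented for demonstration.

```python
# Minimal sketch of the planned analysis: fixed-effects (inverse-variance)
# pooling of relative risks with Cochran's Q and I-squared. The event
# counts are invented for demonstration, not data from the included RCTs.
import math

# (events_prp, n_prp, events_control, n_control) per study - hypothetical.
studies = [(5, 20, 9, 20), (4, 25, 8, 24), (6, 30, 10, 31)]

log_rrs, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1 / var)  # inverse-variance weight

pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Cochran's Q and I-squared; I-squared > 50% suggests considerable
# heterogeneity, arguing against formal pooling.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rrs))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled RR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```

An I² above 50%, as found for the included studies, flags considerable heterogeneity and argues against formal pooling, which is why a narrative analysis was used instead.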
Results

The search identified 212 studies, reduced to 156 after de-duplication. Following title and abstract screening, 126 studies were found not to be investigating ACL reconstruction, leaving 30 studies. The full texts were reviewed against the inclusion and exclusion criteria: 14 studies were removed for not being RCTs, one was not published in English, and two did not report our outcomes of interest. This left 13 studies, as summarised in Fig. 1. The remaining studies underwent critical appraisal (Table 2). Studies were grouped into overarching healing or clinical themes before export into a data extraction table. Outcomes were grouped as follows: vascularization, inflammation, ligamentization, pain, knee stability and knee function. Methods of PRP collection and application were summarised. A summary of paper characteristics and results is presented in Table 1.

Quality assessment

CASP appraisal of methodology involved consideration of blinding protocols, demographic analysis between groups and standardisation of care amongst groups. A number of studies displayed deficits in aspects of their methodology. Four studies, Cerevellin et al. [11], Lopez-Vidriero et al. [12], Mirzatolooei et al. [13] and Vogrin et al. [14], did not blind all participants and study personnel to treatment. A further three studies, de Almeida et al. [15], Mahdi and Halwas Jhale [16] and Vadala et al. [17], did not state whether all study personnel and participants were blinded. Four studies, Mahdi and Halwas Jhale [16], Mirzatolooei et al. [13], Vadala et al. [17] and Walters et al. [18], made no reference to demographic analysis between groups, so it was not possible to determine whether the groups were similar at the start of the study. Only Walters et al. [18] did not make reference to postoperative protocols, so it is impossible to say whether the groups were treated equally throughout the entire study. When appraising data from the studies, only three, de Almeida et al. [15], Rupreht et al. [19] and Seijas et al. [20], provided confidence intervals, meaning no precision of intervention effect was reported for the remaining studies. Determination of interventional benefit against cost or harms was not possible for eight studies. Mirzatolooei et al. [13], Rupreht et al. [19], Seijas et al. [20], Vogrin et al. [14], Vogrin et al. [21] and Walters et al. [18] did not state adverse outcomes or reference any costs. Vadala et al. [17] and Valenti Nin et al. [22] reported no difference in adverse outcomes between groups but made no reference to the cost of the intervention and reported no significant difference between groups in any of the outcome measures. Application of findings to the relevant patient population was scrutinised: three studies, Mirzatolooei et al. [13], Vadala et al. [17] and Valenti Nin et al. [22], showed no significant difference between groups for any outcome at any time, suggesting these interventions did not provide greater value than existing treatments. All studies except Walters et al. [18] had samples that were biased towards young, athletic males. Table 2 provides an in-depth critical analysis of the included studies using CASP RCT grading. Scores for each individual study are provided in Table 1.

Six studies, Valenti Nin et al. [22], Rupreht et al. [19], Cerevellin et al. [11], de Almeida et al. [15], Vogrin et al. [21] and Mahdi and Halwas Jhale [16], employed methods to maximise PRP retention at the application site. The methods used were peritendon and fat pad sutures, suturing PRP into the internal aspect of the graft, and avoiding the use of arthroscopic fluid. Only Lopez-Vidriero et al.
[12] trialled multiple applications of the intervention, with oral supplementation provided once daily for a 90-day period (Table 3).

Pain

Two studies found that PRP caused a significant reduction in pain. De Almeida et al. [15] (n = 27) found that PRP significantly reduced pain immediately post-op, with a score of 3.8 ± 1.0 (± SD) compared to 5.1 ± 1.4 for the control (p = 0.02). Similarly, Seijas et al. [20] (n = 43), who measured up to 24 months post-op, observed significantly reduced pain at 1 month, with 0.63 compared to 2.58 for the control (p < 0.0001), and at 2 months, with 0.54 compared to 2.21 for the control (p = 0.0048). The remaining four studies, of which the maximum period of follow-up was 2 years, reported no significant difference in pain at any interval.

Knee stability

Two studies found that PRP significantly improved knee stability. Vogrin et al. [14] (n = 45), who observed up to 6 months post-op, found significantly improved knee stability at 3 months post-op, with 4.9 ± 1.8 mm displacement compared to 6.1 ± 2.1 mm for the control (p = 0.035). This was also found at 6 months post-op, with 4.7 ± 1.9 mm compared to 6.7 ± 2.1 mm for the control (p = 0.003). Additionally, Mahdi and Halwas Jhale [16] (n = 27) found significantly reduced laxity at 12 weeks, with 12/14 participants having ≤ 5 mm displacement compared to 6/13 for the control (p = 0.033). The remaining three studies, with a maximum follow-up of 2 years, showed no significant improvement in knee stability with the application of PRP.

Knee function

Two studies observed significantly improved knee function with PRP use. Lopez-Vidriero et al. [12] (n = 68), observing up to 90 days post-op, reported a significant improvement at 60 days, with 62.5 ± 11.7 compared to 55.5 ± 11.1 for the control (p = 0.029). Further to this, Cerevellin et al. [11] (n = 40) measured up to 12 months post-op and found significantly better scores at 12 months, with 97.8 ± 2.5 (± SD) compared to 84.5 ± 11.8 for the control (p = 0.041). The remaining five studies, with a maximum follow-up of 2 years, found PRP had no significant effect on knee function at any time.

Vascularization and cellularity

Three studies (n = 109), Rupreht et al. [19], Mahdi and Halwas Jhale [16] and Vogrin et al. [21], investigated the effect of PRP application on vascularisation of different parts of the ligament graft. Rupreht et al. [19] investigated the tibial tunnel using a 1.5 T MRI scanner to assess the contrast enhancement gradient (Genh). Vogrin et al. [21] investigated the tibial osteoligamentous interface and intra-articular graft using contrast-enhanced MRI signal intensity. Mahdi and Halwas Jhale [16] used T1W-FatSat-Gad to assess vascularisation and PDW-FatSat signal grades for cellularity at the site of osteoligamentous integration (the fibrous interzone, FIZ) in the femoral tunnel. Significance was set at p < 0.05. Two studies observed significantly increased levels of vascularisation with PRP administration. Rupreht et al. [19] (n = 41) measured up to 6 months post-op. They reported significantly increased vascularization at 1 month, with a mean of 2.07 compared to 1.41 for the control (p = 0.019), and at 2.5 months, with a mean of 1.64 compared to 1.15 for the control (p = 0.008). Vogrin et al. [21] (n = 41) measured up to 12 weeks post-op and found significantly increased vascularization of the osteoligamentous interface at weeks 4-6, with 0.33 ± 0.09 vs. 0.16 ± 0.009 for the control (p < 0.001).
Mahdi and Halwas Jhale [16] (n = 27), who had a minimum follow-up of 12 weeks, found PRP had no significant effect on vascularisation or cellularity at the femoral FIZ.

Inflammation

Valenti Nin et al. [22] (n = 100) found no significant difference between groups in CRP up to 10 days post-op or in knee perimeter (PER) 24 h post-op.

Ligamentization

Two papers found that PRP significantly increased the rate of ligamentization. Seijas et al. [28] (n = 98), measuring up to 12 months post-op, showed significantly increased remodelling at 4 months, with 39 participants reaching moderately hyperintense as opposed to 23 for the control (p = 0.003). This was also seen at 6 months, with 46 reaching moderately hyperintense or higher compared to 32 for the control (p = 0.0001). Lopez-Vidriero et al. [12] (n = 68) measured pre-op and 90 days post-op and displayed significantly improved graft maturation at 90 days, with 21 participants attaining grade 3 or higher as opposed to 13 for the control (p = 0.05). Valenti Nin et al. [22] (n = 100) found no significant difference in ligamentization with PRP application at 6 months post-op.

Discussion

Success of ACL reconstructive surgery can be measured via clinical features or healing parameters. Literature spanning over a decade has offered insight into whether the use of biological augmentation can enhance these outcomes. These effects were first described in animal models, where PRP appeared to stimulate the healing processes. In human trials, research into the effects of PRP administration has been conducted in many clinical contexts, with varied results. Following surgery, patients are most concerned with pain levels and knee functionality; hence these outcomes have been widely investigated in the literature. Radiological outcomes provide another measure, allowing for a more rounded and comprehensive analysis of the effect of PRP. This systematic review aimed to collate the clinical and radiological results of RCTs to evaluate whether PRP would benefit those undergoing ACL reconstruction.

Of the clinical outcomes chosen for evaluation, pain, knee stability and function were most widely reported on, encompassing nine of the 13 included RCTs, with radiological considerations in six. Six studies [11, 12, 15, 18, 20, 22] evaluated pain, two of which found PRP reduced VAS scores in the early post-op period [15, 20]. Five studies [13, 14, 16, 17, 22] evaluated knee stability, two of which found PRP reduced anterior-posterior knee laxity [14, 16]; no significant difference was found beyond 6 months. Seven studies [11, 12, 15-18, 22] evaluated knee function, two of which found PRP to improve overall knee function [11, 12]; no significant difference was found beyond 12 months. Three studies [16, 19, 21] evaluated vascularization and cellularity, two of which found PRP to increase the rate of vascularization [19, 21]; no significant difference was found beyond 12 weeks, and PRP was not found to cause significant change in cellularity levels. Two studies [19, 22] measured the effect of PRP on inflammatory parameters, with only one showing a significant reduction in inflammation [19]. Three studies [12, 22, 28] evaluated ligamentization, two of which found PRP to increase the rate of ligamentization [12, 28]; no significant difference was found beyond 6 months.

The inflammatory response following injury or surgery leads to pain, release of inflammatory cytokines and oedema [29]. Pain is subjective, making it a difficult parameter to monitor in a standardised manner. VAS provides some mediation but cannot negate variation in individual pain thresholds.
As pain influences the ability to carry out rehabilitation protocols [30], it becomes a barrier to full recovery. Reductions in pain would enable improved proprioceptive and strength rehabilitation, therefore potentially increasing the speed and success of recovery. PRP has been shown to reduce pain levels [31]. Of the six studies that measured VAS, de Almeida et al. [15] demonstrated a significant reduction in pain at 24 h post-surgery, concurring with the results of Seijas et al. [20] at 1 and 2 months post-surgery. Although a number of the remaining studies suggested that PRP reduced pain, this failed to reach statistical significance; hence it cannot be conclusively stated whether PRP has a significant effect on pain. As PRP has not been proven to influence pain, further research must be conducted to evaluate any analgesic role it may play.

The method of PRP collection may influence its ability to affect healing and hence clinical parameters. Ten papers [11, 13, 14, 16-21] used centrifugation whilst de Almeida et al. [15] used filtration. De Almeida et al. [15] produced significant results in VAS whilst 3 of the 4 centrifugation studies that measured VAS showed no significant change. These different methods may have an effect on the efficacy of PRP; however, this is difficult to demonstrate as there are insufficient data on the use of filtration. More studies using filtration should be conducted to ascertain whether this produces any significant change.

Inflammation has been shown to facilitate angiogenesis and the production of hyper-vascular granulation tissue during healing [34]. However, inflammatory cytokines and oedema have also been reported to have adverse effects on recovery from ACL reconstruction. It has been shown that inflammatory cytokines can lead to atrophy of the surrounding muscles, which adversely affects knee stability and function [32]. Previous studies, such as Anitua et al. [33], have suggested that PRP has an anti-inflammatory effect, which could imply useful applications in ACL reconstruction. Of the two studies that measured inflammation, Valenti Nin et al. [22] found that PRP had no effect on CRP or knee swelling, whilst Rupreht et al. [19] found significantly reduced knee oedema at 1 month post-op. These results appear contradictory, but this may be due to the different times at which the measurements were taken. Valenti Nin et al. [22] only observed up to 10 days post-op, whereas Rupreht et al. [19] measured at 1, 2.5 and 6 months. This could mean that the anti-inflammatory effects of PRP are delayed and hence were not picked up by Valenti Nin et al. [22]. The lack of significant results beyond one month could be because PRP has only short-term effects or because the inflammatory phase is receding, meaning tangible results will not be seen later in the studies. This is supported by Janssen and Scheffler [35], who state that the proliferative phase is over after 4-12 weeks. Alternatively, these effects could be dependent upon the composition of PRP. Azcarate et al. [36], who furthered the work of Valenti Nin et al. [22], added another intervention group: PRP without leukocytes. This form was found to significantly reduce CRP and swelling, suggesting that it had more potent anti-inflammatory effects. This provides another avenue for research, suggesting that the composition of PRP could alter its influence over different outcomes.
As inflammation plays a key role in vascularisation and healing, the anti-inflammatory effects of PRP could be counterproductive. However, it is excessive inflammation that limits healing, and therefore finding the balance between the angiogenic effects and limiting excessive inflammation could be integral for PRP to be beneficial in this area.

Further to this, it has been reported that platelet concentrations should be over 1 × 10⁶/µL [37], roughly five times baseline (whole blood ≈ 2 × 10⁵/µL), for PRP to be effective. Five studies [14-16, 18, 22] reported platelet content within PRP concentrations, with only de Almeida et al. [15] indicating an average above 1 × 10⁶/µL. This may explain the significant reduction in VAS reported by de Almeida et al. [15] immediately post-op compared to the lack of a significant difference found by Valenti Nin et al. [22] 24 h post-op, who reported an average platelet concentration of 837 × 10³/mm³ (equivalent to 8.37 × 10⁵/µL). As the remaining eight studies did not record platelet levels, it is impossible to investigate whether their non-significant results could be attributed to using PRP with insufficient platelet concentrations. Further studies should be completed to investigate how platelet concentrations influence the level of clinical benefit within this setting; a worked check of this threshold is sketched below.
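As a worked check of this threshold (the de Almeida value below is assumed for illustration, since only "above 1 × 10⁶/µL" is reported; the Valenti Nin value is taken from the text):

```python
# Worked check of the reported effectiveness threshold: > 1e6 platelets/uL,
# about five-fold over a whole-blood baseline of ~2e5/uL [37]. The
# Valenti Nin value is from the text (1 mm^3 = 1 uL); the de Almeida value
# is an assumed example, since only "above 1 x 10^6/uL" is reported.
THRESHOLD_PER_UL = 1_000_000
BASELINE_PER_UL = 200_000

print(f"Required fold enrichment: {THRESHOLD_PER_UL / BASELINE_PER_UL:.0f}x")

reported = {
    "de Almeida et al. [15] (assumed)": 1_250_000,
    "Valenti Nin et al. [22]": 837_000,
}
for study, platelets_per_ul in reported.items():
    status = "meets" if platelets_per_ul > THRESHOLD_PER_UL else "below"
    print(f"{study}: {platelets_per_ul / BASELINE_PER_UL:.1f}x baseline, "
          f"{status} threshold")
```

The comparison makes explicit why under-concentrated preparations may fail to show a clinical effect even when the collection procedure is otherwise sound.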
Vascularisation is essential for conversion to a functional ligament. In the bone tunnels, it aids osteointegration, which anchors the ligament and provides a stable attachment. In the intra-articular portion, it converts the tendinous structure to resemble the native ACL. Upon grafting, the tendon tissue undergoes remodelling to acquire ligament characteristics, such as higher levels of irregular collagen and proteoglycan bundles that remodel to produce a densely packed, parallel, uniform morphology [38]. Therefore, ligamentization is essential for the graft to achieve the strength and durability required to fulfil the role of the ACL. Hence, the rates of vascularisation and ligamentization have a direct impact on recovery time, improving knee stability and function. This could benefit those looking for a quicker recovery, such as high-level athletes or those who cannot afford time away from work to recover. Studies of the angiogenic effects of PRP have produced positive results, such as those investigating neovascularisation in cardiac muscle [39], suggesting PRP has the potential to improve vascularisation in our setting. Rupreht et al. [19] and Vogrin et al. [21] found no significant increase in intra-articular vascularisation. The lack of significant data could be attributed to decreased retention of the PRP gel at the intra-articular portion of the graft, limiting its angiogenic influence. Significantly increased levels of bone tunnel vascularisation were observed at 4-6 weeks post-op, with no benefit seen beyond this point. This is supported by the results of Silva and Sampaio [40], who observed no significant difference in vascularisation 3 months post-op. This suggests that PRP is no longer able to influence proceedings. This could be a dose issue, as only single applications of PRP were used in the included studies with the exception of Lopez-Vidriero et al. [12]. Alternatively, much like the effects seen in inflammatory outcomes, this may be due to a transient role of new vasculature. This is supported by Janssen and Scheffler [35], who state that vascularisation occurs in the proliferative phase (4-12 weeks) and recedes once it has played its role.

Therefore, our question becomes not just how much vascularisation is present, but how early it is occurring. Of the 12 studies that surgically administered PRP, seven [11, 15, 16, 19, 21, 22, 28] described attempts at PRP retention at the site of application. Of these seven studies, all except Valenti Nin et al. [22] reported significant effects of PRP. In comparison, of the five studies [13, 14, 17, 18, 20] that did not attempt to retain PRP, three [13, 17, 18] did not report significant effects. This discrepancy may be owed to the length of time PRP is retained at the site, which influences its ability to yield healing benefits. A comparison of the effects of PRP with and without retention efforts could inform on the significance of this factor.

Following vascularisation, ligamentization begins at around 12 weeks and continues throughout recovery [41]. Faster ligamentization means that the graft will be able to function as an ACL earlier, decreasing the time taken to transition through the rehabilitation programme. Lopez-Vidriero et al. [12] demonstrated early benefits in ligamentization at 90 days post-op, whilst Seijas et al. [28] demonstrated a more prolonged benefit, with significantly higher levels at both 4 and 6 months. This suggests PRP increases the early rate of ligamentization. However, results from Valenti Nin et al. [22] showed no significant difference at the 6-month mark, congruent with later research by Azcarate et al. [36]. Whilst the results of Lopez-Vidriero et al. [12] and Seijas et al. [28] are in agreement, it should be highlighted that comparison of these results proves difficult due to the different forms of PRP used: Lopez-Vidriero et al. [12] provided oral supplementation of PRP, Seijas et al. [28] used simple injectable PRP, and Valenti Nin et al. [22] used a PDGF gel. On the other hand, the use of different preparations of PRP may help in determining which form is most suited to improving ligamentization rates. In addition, only Lopez-Vidriero et al. [12] provided more than one instance of application. This study showed significant improvement in knee function and ligamentization, which could suggest that repeat applications provide more substantial benefit. Unlike inflammation and vascularisation, ligamentization has been found to continue well beyond 6 months, and the lack of significant data beyond this stage could suggest a dose-related issue. Repeat applications of PRP gel should be trialled to determine whether this could augment its influence, or further investigation into the oral supplement should be conducted to determine whether this is a superior form and whether specific preparations of PRP are best suited to this parameter.

The ultimate goal of reconstructing the ACL is to restore stability and function of the knee. However, studies suggest that of those with ACL injuries, only 55% returned to their competitive sport of choice [42]. Success of the surgery depends on a combination of the ability to take part in rehabilitation, affected by pain and inflammation, and the rate of graft remodelling, based on vascularization and ligamentization. Stability is a measure of the ability of the ACL to limit anterior-posterior translocation of the tibia in relation to the femur. Joint laxity has been associated with dysfunctional and injury-prone ACLs [43]; hence a stable joint is key to recovery and return to sport. Vogrin et al. [14] showed reduced laxity at 3 and 6 months using KT-2000 arthrometers whilst
Mahdi and Halwas Jhale [16] also showed significantly reduced laxity at 12 weeks post-op via the Lachman test. The remaining three studies found no significant difference in stability when measured using KT-1000 arthrometers. The lack of consistent, statistically significant differences between PRP and control groups means that it cannot be stated that PRP exerts influence over the anterior-posterior laxity of the knee joint. The contrasting results provide an avenue for further research into the influence of PRP on anterior laxity, as an improvement in this outcome could be directly beneficial to those undergoing ACL reconstruction.

On the other hand, knee function provides the yardstick by which all patients will measure the success of their procedure. Thus, demonstration of a tangible difference supplied by PRP is a key outcome. Generally, no significant improvement in IKDC scores was seen when compared to the control. However, Lopez-Vidriero et al. [12] did report a significant improvement compared to control at 60 days post-op, but not at any other date of investigation. In addition, Cerevellin et al. [11], recording significantly improved VISA-P scores at 12 months, could indicate some influence on knee function post-op. However, with five of the seven studies showing no statistically significant difference in knee function between groups, this indicates PRP does not have a significant effect on knee function post-op. This being said, it is important to consider that the differing results may be due to the time intervals at which the IKDC questionnaires were completed. Lopez-Vidriero et al. [12] was the only study to record prior to 12 weeks post-op, doing so on three occasions. The significant result of Lopez-Vidriero et al. [12] resides early in the recovery process, indicating that PRP may have an effect that wears off by the later stages; hence, it would not have been observed at the time intervals used by the other studies. This early influence mirrors the effects demonstrated in vascularisation, inflammation and ligamentization. Whilst a strong relationship is not established between PRP and improved knee function, there is room for further investigation in this area. Varying the frequency of PRP applications may offer an interesting addition to these experiments, determining whether a single application is limiting its effect. This may explain why the benefits are only seen early in the studies and is supported by Tavassoli et al. [44], who showed the efficacy of PRP increased after multiple injections over time. In addition, whilst it may appear that earlier and more regular analysis of knee function would be an improvement for subsequent studies, the restrictive nature of the rehabilitation protocols [45] may limit the amount of knee function that can be evaluated in the earlier stages of healing.

Limitations of study

The strengths of this study include the use of RCTs, so the data collected are of high quality. The use of three major e-databases ensured much of the applicable literature was found via our search. Two authors independently performed the search and critically appraised the included studies in detail using the CASP criteria, limiting individual error. Comparison between studies proved to be difficult due to the varying methods of PRP application. There appears to be no standardised approach, meaning caution must be taken when comparing results.
We suggest that a standardised protocol for the collection and application of PRP, with assessment of platelet concentration, would make inter-study comparison more reliable. In addition, measurement intervals were inconsistent amongst the studies. As most of PRP's significant impacts occurred early in the healing process, some studies may miss the intervention effect due to prolonged periods between measurements. Using published data on when processes occur, e.g., vascularisation, may inform on when data collection should be conducted. This study aimed to determine PRP effects following ACL reconstruction for application to the wider population. Unfortunately, the literature is heavily biased towards a younger, male, athletic population. Whilst it has been established that the incidence of ACL rupture is higher in the athletic population than the general population [46], it has also been shown that females have a 1.5 times greater risk of rupture when compared to male athletes [47]. Therefore, even though the athletic component of the distribution is representative of ACL injury distribution, the lack of female representation makes the results less applicable as a result of potential selection bias. Future research may benefit from incorporating female participants at a higher frequency. The detailed critical appraisals (Table 2) show further biases of the studies, including the lack of information regarding blinding and demographic data. The specification of "English language" in our inclusion criteria has exposed our study to selection bias. The lack of research into unpublished work and exclusive use of electronic databases will have increased exposure to publication bias. Conclusions PRP appears to exert early influence on healing in the form of vascularisation and granulation tissue formation, culminating in higher rates of ligamentization. Reductions in swelling and CRP levels suggest PRP could be of benefit in the acute stages of recovery. However, long-term effects were not demonstrated, suggesting the influence of PRP to be limited. With no consensus reached on the impact of PRP on pain, knee stability and resultant knee function, future research on these areas must be conducted before a conclusion can be made. Future research may benefit from standardising PRP, incorporation of multiple doses, measurement of platelet levels and increased frequency of observation. Alternatively, more comprehensive comparison between different forms of PRP could indicate which form is best suited to each outcome. Authors' contributions JM contributed to study conception, data collection, data analysis, drafted and reviewed final manuscript. KHK contributed to study conception, data collection, data analysis, drafted and reviewed final manuscript. IA drafted and reviewed final manuscript. FD drafted and reviewed final manuscript. AM contributed to study conception, drafted and reviewed final manuscript. Funding Not applicable. Appendix. 4 Web of Science search strategy: all search terms shown along with combinations of terms using the "and" or "or" function, followed by the number of results at each stage. Answers to the 11 CASP critical appraisal questions provided with reasoning and final score; studies were deemed adequate if they answered "yes" to nine or more questions and partially adequate if they answered "yes" to six to eight questions. Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Conflicts of interest The authors have no relevant financial or non-financial interests to disclose. The authors have no conflicts of interest to declare that are relevant to the content of this article. The authors have no financial or proprietary interests in any material discussed in this article. All authors certify that they have no affiliations with or involvement in any organisation or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript. Ethics approval As no patients were directly involved, no ethics approval was required. Consent to participate Not applicable. Consent for publication Not applicable. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Clinicoepidemiological and patch test profile of patients attending the contact dermatitis clinic of a tertiary care hospital in North India: A 7-year retrospective study Introduction: Allergic contact dermatitis (ACD) is a growing concern due to the increased routine use of cosmetics and topical medications and exposure to a large number of allergens on a day-to-day basis. Patch testing is a reliable method for detecting the causative antigens in suspected cases. Aims and Objectives: To assess the demographic profile, pattern of ACD, and patch test profile of suspected cases of ACD attending the contact dermatitis clinic of our department. Materials and Methods: It was a retrospective study in which all the data enrolled in the contact dermatitis clinic of our department over a 7-year period were analyzed. Patch testing was done primarily using the Indian Standard Series of 20 antigens, and other batteries were used depending on patient requirement and availability. Results: A total of 582 patients were enrolled in the contact dermatitis clinic over a period of 7 years. Hand eczema was the most common pattern, seen in 268 cases, followed by feet eczema, hand and foot eczema, facial eczema, forearm and leg eczema and photoallergic contact eczema. A total of 177 patients (30.4%) gave positive patch test results, with nickel sulfate being the most common allergen identified, followed by potassium dichromate, cobalt sulfate, paraphenylenediamine, neomycin sulfate, and fragrance mix. Conclusion: Common allergens identified in our study were more or less similar to studies from other parts of India. However, due to the unique climate of the valley, the profile of parthenium sensitivity was low in our study when compared to the rest of the country. in the pediatric population. [5][6][7] An acute response is often characterized by macular erythema, papules, vesicles, or bullae, depending on the intensity of the allergic response. Chronic ACD usually manifests as fissured, scaly, and lichenified dermatitis with or without accompanying papulovesicles. [8] A large number of allergens are present in our environment and are encountered daily in the form of cosmetics, skin care products, hair dyes, medications, accessories, jewellery, cement, plants, and so on. Nickel, found in the metal industry and in household objects, along with fragrances and preservatives, accounts for the most common allergens responsible for a significant number of cases of ACD globally. Allergens such as chromates (present in cement, paints, and coolants) and paraphenylenediamine (PPD) (in hair dyes) follow subsequently, but are more incriminated in occupational settings. Other commonly encountered antigens include rubber additives such as mercaptobenzothiazoles (in rubber gloves, shoes, etc.), cobalt, and plant-derived substances such as colophony, turpentine, and essential oils. [9] Accordingly, ACD is seen in a large number of occupational groups, with the frequency and pattern varying from one group to another. Occupational contact dermatitis ranks first among occupational diseases in many countries, resulting in significant morbidity and lost work days. [10] Patch testing is a reliable method for detecting the causative antigen(s) in suspected cases. The allergens included in standard series vary from country to country based on local experience.
Knowledge of the allergen responsible for ACD goes a long way in reducing morbidity in such cases by identifying the incriminating allergen and can thus help minimize the impact of ACD in affected individuals. [11] With this background, we attempted to assess the demographic profile, pattern of ACD, and patch test profile of suspected cases of ACD attending the contact dermatitis clinic of our department over a 7-year period. Materials and Methods It was a retrospective study in which all the patient data enrolled in the contact dermatitis clinic of our department since its inception were analyzed. These data had been collected from those patients who attended the outpatient section of our dermatology department with suspected ACD and had then been referred to the contact dermatitis clinic, where all the data were collected and maintained in proper files. A detailed history including the demographic data, occupational details, and exposure to different allergens was taken, followed by clinical examination and relevant photographs for documentation. The various patterns of ACD observed were categorized into groups as follows: hand eczema, involving primarily the dorsal and palmar aspects of the fingers and hands up to the wrists; foot eczema, involving primarily the dorsal and plantar aspects of the feet up to the ankle joint; hand and foot eczema, in which simultaneous involvement of both hands and feet was noticed; facial eczema, in which the eczema was seen primarily affecting the convex surfaces of the face, eyelids, lips, and periorificial areas; forearm and leg eczema, where primary involvement was of the forearms and legs with nil or minimal concurrent involvement of the hands and feet; photoallergic contact eczema, involving primarily the photoexposed areas such as the face, V area of the neck, and dorsal aspects of both hands and forearms, with well-demarcated margins where the skin is covered with clothing and with sparing of the Wilkinson's triangle, upper eyelids, and area under the chin; and air-borne contact dermatitis (ABCD), affecting primarily the exposed areas of the face, V area of the neck, hands, and forearms, the Wilkinson's triangle, both eyelids, the nasolabial folds, and the area under the chin. The involvement of both light-exposed and protected areas helps differentiate ABCD from photo-related dermatitis. Disseminated eczema was used for patients with extensive involvement of the whole body, rarely proceeding to erythroderma. Nonspecific eczema was used for all such types of eczema which were not extensive but did not fit any of the above-mentioned patterns of eczema and had a variable presentation. All patients (irrespective of age) were included in the study. However, patients on oral corticosteroids and other immunosuppressants, and pregnant and lactating females, were excluded. Those patients who had active dermatitis were patch-tested 2 weeks after their clinical symptoms subsided. Doubtful cases (requiring distinction from fungal infections, psoriasis, and other simulating dermatoses) were subjected to investigations like KOH mount and skin biopsy wherever necessary. Patch testing was done by the Finn chamber method using the Indian Standard Series (ISS) of 20 antigens recommended by the Contact and Occupational Dermatoses Forum of India, and other batteries such as the ISS of 25 antigens, cosmetic and fragrance series, and footwear series, depending on patient requirement and availability of the batteries. Cosmetic agents in the "as is" form were not used for patch testing.
The patch tests were applied on the non-hairy upper back of the patients. The results were read on day 2 (48 h) and day 4 (96 h) according to the guidelines laid down by the International Contact Dermatitis Research Group. [12] The day 4 reading was taken as the final grade of positivity and was interpreted for clinical relevance. In doubtful cases, a day 7 reading was also taken. All forms of topical and systemic medication were stopped 2 weeks prior to patch testing. Informed consent was taken from the patients prior to patch testing and before taking any photograph for record purposes. The relevance of positive patch test results was determined clinically using the COADEX system, [13,14] in which current and old relevance means that the patient has been exposed to the allergen during the current and previous episodes of dermatitis, respectively, and there is improvement of the disease after cessation of exposure. When relevance is difficult to assess and no traceable relationship is found between the positive test and the disease, the relevance is termed doubtful. The data of the entire 7-year period from January 2012 to December 2018 were tabulated, compiled, and subjected to statistical analysis. Results A total of 582 patients were enrolled in the contact dermatitis clinic over a period of 7 years. Of these, 371 were females (63.75%), while 211 were males (36.25%), giving a male:female ratio of 1:1.7. The mean age of the study population was 34.70 ± 11.27 years, with age ranging from 9 to 68 years. In all, 310 (53.26%) patients had a rural background, while 272 (46.73%) were from urban areas. The mean disease duration was 5.11 ± 1.2 years, with a range of 4 months to 12 years. The pattern of clinical disease noticed in our study population was divided into various groups as mentioned in the methodology. Table 1 shows the number of patients with different clinical patterns of eczema, with hand eczema being the most common pattern, seen in 268 cases (46.05%) [Figure 1], followed by feet eczema in 81 cases (13.92%) [Figure 2], hand and foot eczema in 70 cases (12.03%), facial eczema in 47 cases (8.08%), forearm and leg eczema in 41 cases (7.04%), photoallergic contact eczema in 29 cases (4.98%), ABCD in 17 cases (2.92%), nonspecific eczema in 17 cases (2.92%), and disseminated eczema in 12 cases (2.06%). Occupation-wise distribution of the study population included farmers, construction workers, housewives, students, office workers, food handlers, artisans, and medical/paramedical workers, in that order, as enumerated in Table 2. A total of 177 patients (30.4%) gave positive patch test results to the various allergens used. A total of 242 positive reactions were seen in these 177 patients, among which 120 patients gave a single positive reaction, 38 patients gave positive reactions to two allergens, and the remaining 19 patients had more than two positive patch test reactions. Of the total 242 positive patch test reactions, 233 were elicited from the ISS of 20 and 25 antigens, while the remaining 9 were elicited from allergens in the footwear and cosmetic series other than those included in the ISS [Table 3]. Of the 233 positive reactions elicited from the ISS of 20 and 25 antigens, nickel sulfate turned out to be the most common allergen, identified in 49 cases, followed by potassium dichromate in 28 cases, cobalt sulfate in 26 cases, PPD in 23 cases, neomycin sulfate in 16 cases, and fragrance mix in 15 cases.
Figure 3 shows a patient with a positive patch test reaction to nickel sulfate and cobalt sulfate, while Figure 4 shows a patient with a positive patch test reaction to fragrance mix. Other allergens seen were mercaptobenzothiazole (10 cases), parthenium (10 cases), thiuram mix (10 cases), formaldehyde (9 cases), colophony (8 cases), Peru balsam (6 cases), paraben mix (4 cases), nitrofurazone (4 cases), black rubber mix (3 cases), wool alcohol (3 cases), 4-tert-butylphenol formaldehyde resin (3 cases), epoxy resins (2 cases), benzocaine (2 cases), mercapto mix (1 case), and polyethylene glycol (1 case). Figure 5 shows a patient with a positive patch test reaction to mercaptobenzothiazole and thiuram mix. Discussion Clinical manifestations of ACD are highly varied, depending on the degree and frequency of contact with the allergen, the nature of the putative allergen, and host-related factors. The clinical presentation varies from patient to patient, often posing a diagnostic challenge to the treating dermatologist. In our study, the most common allergen identified was nickel sulfate, which accounted for 49 (20.24%) of the 242 positive patch test reactions seen in our study group, followed by potassium dichromate, accounting for 28 (11.57%) positive patch test reactions. Both these allergens have also been identified as the most common allergens in other studies from the Kashmir valley. [15,16] Nickel is present ubiquitously in the environment and was the most common allergen identified in females in our study. The early development of nickel sensitivity in our population can be attributed to the common use of nickel-plated accessories and jewellery, especially in females. As most of the population is Muslim, young girls are seen covering their heads with scarves and using nickel-plated pins to hold the scarf in position. Also, ear piercing is done in almost all girls at a young age, and they are found wearing artificial jewellery in the form of ear rings, necklaces, rings, and bracelets. These jewellery items and other accessories like eyeglass frames, belt buckles, pins, clips, zippers, coins, and keys may release nickel, as there is poor quality control on the manufacture of these items in our country. [17] Most of the cases of nickel positivity had current relevance to the use of nickel-plated items and jewellery. Potassium dichromate was the second most common allergen identified in our study. It was the most common allergen identified in males in our study population. Most of the patients giving positive patch test reactions to potassium dichromate were construction workers, while the rest were involved in other occupations but would occasionally do small construction work at their houses or shops to save money. Other possible sources of exposure to chromates included the use of paints, woods, glass, and cleaning products. Potassium dichromate has also been identified as a common allergen in other studies. [18][19][20][21] Cobalt sulfate was the third most common allergen identified in our study population. It accounted for 26 (10.74%) positive patch test reactions. Cobalt is an invariable contaminant of nickel and is also found in cement. [22,23] Some patients with cobalt sensitivity in our study, especially females, had a concomitant allergy to nickel as well [Figure 3]. PPD was identified as the allergen in 23 (9.5%) cases, with almost equal incidence in both sexes.
The most common source of PPD in our study population was attributed to the use of hair dyes and dyes used occupationally by some artisans. Also, the use of henna tattoos, which also contain PPD, is very common in the local population of the valley (especially among young girls). [24] Other important sensitizers in our study population included neomycin sulfate and fragrance mix, which accounted for 16 (6.6%) and 15 (6.19%) positive patch test results, respectively. Neomycin is available freely as an over-the-counter topical medication in the local markets of the valley (especially in combination with other drugs). Neomycin and gentamicin have already been reported as important allergens in many other studies. [11,16,25,26] Another important allergen identified in our study was fragrance mix, similar to some other studies from North India. [16,21] The increased use of cosmetics, toiletries, and skin care products was thought to be responsible for the higher number of positive reactions to fragrance mix in our study. Parthenium, an important allergen throughout India, [11,21] was only rarely encountered in our study. The lower positivity to parthenium seen in our study, and reported previously, [15] has been attributed to the cold climate of the valley, where the parthenium weed does not survive the subzero temperatures of winter. Four of the cases identified in our study had a history of travel to areas outside the valley. However, the possibility of cross-reaction to other members of the Compositae family could not be ruled out, as other plants belonging to the same family, like dahlias, sunflowers, and dandelions, grow in abundance in the valley. More than 200 species of the Compositae family [27] containing a large number of allergens [28] have been reported to cause ACD. Other studies from the valley have identified contact dermatitis in saffron [29] and tulip workers [30], as these plants also grow in specific seasons in the valley. However, it was not possible for us to do patch testing with plant series in patients attending our contact dermatitis clinic on a routine basis due to the nonavailability of these batteries, which forms an important limitation of our study. Also, "as is" testing for certain cosmetics and food items was not done in our study. Conclusion Knowing the common allergens in a geographic area helps the clinician identify the causative factors easily. Such retrospective studies are important for establishing cumulative data from a particular geographical area, as there can be variation in allergen distribution, which can affect the patch test profile. Common allergens identified in our study, such as nickel sulfate, potassium dichromate, cobalt sulfate, and PPD, are more or less similar to studies from other parts of India. However, due to the unique climate of the valley, the profile of parthenium sensitivity was low in our study when compared to the rest of the country. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Visualization of in vivo metabolic flows reveals accelerated utilization of glucose and lactate in penumbra of ischemic heart Acute ischemia produces dynamic changes in labile metabolites. To capture snapshots of such acute metabolic changes, we utilized focused microwave treatment to fix metabolic flow in vivo in hearts of mice 10 min after ligation of the left anterior descending artery. The left ventricle was subdivided into short-axis serial slices and the metabolites were analyzed by capillary electrophoresis mass spectrometry and matrix-assisted laser desorption/ionization imaging mass spectrometry. These techniques allowed us to determine the fate of exogenously administered 13C6-glucose and 13C3-lactate. The penumbra regions, which are adjacent to the ischemic core, exhibited the greatest adenine nucleotide energy charge and an adenosine overflow extending from the ischemic core, which can cause ischemic hyperemia. Imaging analysis of metabolic pathway flows revealed that the penumbra executes accelerated glucose oxidation, with remaining lactate utilization for the tricarboxylic acid cycle for energy compensation, suggesting unexpected metabolic interplays of the penumbra with the ischemic core and normoxic regions. because at this time point, blood glucose concentration is high enough to trace metabolic pathways of in vivo organs by MS 2. 13C3-lactate (27 μg/g body weight, in saline) was administered by retro-orbital injection 1 min before LAD ligation 3. We have confirmed the elevation of blood lactate concentration to 7-8 mM at this time point, which is within the physiological range (Supplementary Fig. S5 online) 4. Fixation of heart metabolites by FMW We used a laboratory microwave instrument (MMW-05, Muromachi Kikai, Tokyo) designed for the euthanasia of laboratory mice and rats (Supplementary Fig. S1 online). This instrument differs from kitchen units, particularly in its maximal power output (5 kW) and in having a tightly focused microwave beam that directs the microwave energy to a specific anatomical location on the animal. Mice were anesthetized with isoflurane and placed into a transparent water-jacket holder (Muromachi Kikai, MH28-HZ). The cone parts of the holder were filled with ~1 mL of water to help elevate the temperature of the heart as uniformly as possible using microwave energy. Care was taken not to introduce air bubbles. The holder was then inserted into the instrument in a position such that microwave irradiation was targeted on both the brain and the heart. For reliable fixation it is important to maintain the animal in the correct position; i.e., the mouse should be straight with its nose touching the top of the cone. The holder is set at the position shown in Supplementary Fig. S1, panel D; the back end of the holder was set at 43 mm from the entrance of the insertion slot. We found this condition optimal for B6/J mice (8-week-old males). Microwave irradiation at 5 kW for 0.96 s elevated the temperature of the heart to above 80°C, which is sufficient to inactivate metabolic enzymes such as acetylcholine esterase 5. To evaluate the effectiveness of the FMW fixation method, we compared it to two other procedures: i) rapid freezing, in which hearts were isolated immediately after thoracotomy, then frozen in liquid N2 (the total procedure takes ~20 s); and ii) delayed freezing, in which hearts were isolated 10 min after cervical dislocation to allow postmortem degradation.
Preparation of tissue sections for metabolome and MALDI-imaging analyses After FMW, hearts were dissected with a surgical knife at room temperature, embedded in a super cryo-embedding medium (SCEM, Section Lab Co. Ltd, Hiroshima, Japan), frozen in liquid N2, and stored at -80°C. We prepared five sets of short-axis (transverse) tissue sections, where each set consisted of a thick 450 µm "block" for quantification of metabolites and an adjacent 8 µm thin "section" for MALDI imaging analyses (see Supplementary Fig. S6 online). The apical two thirds of the left ventricle in sham-operated hearts was subdivided into four short-axis blocks. The thin sections were cut with a cryomicrotome (CM3050, Leica Microsystems) and thaw-mounted on an indium tin oxide-coated glass slide (Bruker Daltonics, Germany) at -16°C. Heart tissues subjected to FMW tended to be more fragile than those treated by other methods, often making tissue sectioning challenging. However, embedding the tissue with SCEM medium, which did not interfere with the ionization efficiency of metabolites, helped achieve successful sectioning. Capillary electrophoresis-electrospray ionization (CE-ESI)-MS Quantitative metabolome analysis was performed using CE-MS 6. Briefly, to extract metabolites from the tissue, the frozen tissue block embedded in SCEM medium, together with internal control compounds (see below), was homogenized in ice-cold methanol (500 μL) using a manual homogenizer (Finger Masher (AM79330); Sarstedt, Tokyo, Japan), followed by the addition of an equal volume of chloroform and 0.4 times the volume of ultrapure water (LC/MS grade; Wako). The suspension was then centrifuged at 15,000 g for 15 min at 4°C. After centrifugation, the aqueous phase was ultra-filtered using an ultrafiltration tube (Ultrafree-MC, UFC3 LCC NB; Human Metabolome Technologies, Tsuruoka, Japan). The filtrate was concentrated with a vacuum concentrator (SpeedVac; Thermo, Yokohama, Japan); this condensation process helps quantitate trace levels of metabolites. The concentrated filtrate was dissolved in 50 µL of ultrapure water and used for CE-MS. All CE-MS experiments were performed using an Agilent CE system equipped with an air pressure pump, an Agilent 6520 Accurate Q-TOF mass spectrometer, an Agilent 1200 series isocratic high-performance LC pump, a 7100 CE system, a G1603A Agilent CE-MS adapter kit, and a G1607A Agilent CE-MS sprayer kit (Agilent Technologies). Isomeric species, such as glucose 1-phosphate, glucose 6-phosphate, and fructose 6-phosphate, can be hard to distinguish by MS/MS per se. However, we took advantage of the CE/ESI/MS system 6, in which these species elute at different retention times in capillary electrophoresis due to their differential mobility and/or chemical properties. The system separates isobaric or isomeric compounds effectively. Electrospray interface and MS conditions ESI-MS was conducted in negative ion mode, and the capillary voltage was set at 3.5 kV. The nitrogen nebulizer pressure was set at 10 psi, and a flow rate of drying nitrogen gas (heater temperature 240°C) was maintained at 4 L/min. Automatic recalibration of each acquired spectrum was performed using the reference masses of reference standards; namely, the TFA anion (m/z 112.985587) and the HP-0921 compound anion (m/z 1033.988109) (G1969-8500, API-TOF Reference Mass Solution Kit, Agilent Technologies). We confirmed that the mass error was within 10 ppm for the targeted metabolites.
Exact mass data were acquired at a rate of 1.4 spectra/s over a 61-1050 m/z range (0.713 s duty cycle). Quantification of metabolites by internal and external standards We used both internal (added to the tissue before extraction) and external (used to produce calibration curves for each compound) standard compounds for concentration calculation. The detailed method is as follows. Internal standard (IS) compounds We used 2-morpholinoethanesulfonic acid (MES) and 1,3,5-benzenetricarboxylic acid (trimesate) as ISs for anionic metabolites. These compounds are not present in the tissues; thus, they serve as ideal standards. Loss of endogenous metabolites during sample preparation was corrected by calculating the recovery rate (%) for each sample measurement. External standard (ES) compounds An external calibration curve was used to calculate the absolute abundance of metabolites. Before sample measurement, we measured a mixture of authentic compounds of the target metabolites at three different concentrations (50, 10 and 5 µM for adenine nucleotides, and 500, 100 and 50 µM for lactate) in ultrapure water to generate calibration curves. Quantification (amount of metabolites, nmol/mg tissue or nmol/mg protein) was performed by comparing the IS-normalized peak areas against the calibration curves (a numerical sketch of this calculation is given at the end of this passage). Liquid chromatography-tandem mass spectrometry The amounts of non-labeled and 13C6-labeled glucose in the heart were quantified using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Briefly, a triple-quadrupole mass spectrometer equipped with an electrospray ionization (ESI) ion source (LCMS-8030; Shimadzu Corporation, Kyoto, Japan) was used in the negative-ESI and multiple reaction monitoring (MRM) modes. The samples were resolved on a PC-HILIC S3 column (150 × 2.0 mm i.d., 4 μm particle size) and separated using mobile phase A (water) and mobile phase B (acetonitrile) at a flow rate of 0.8 mL/min and a column temperature of 40°C. Non-labeled and 13C6-labeled glucose were quantified by ion transitions from m/z 179 to m/z 89 and from m/z 185 to m/z 92, respectively. Matrix coating and MALDI-IMS acquisition Prior to matrix coating, the tissue slices were placed in desiccant for 10 min and allowed to equilibrate to room temperature. We used 9-aminoacridine as the matrix (10 mg/mL, dissolved in 80% ethanol) and manually spray-coated tissue sections with the solution using an artistic air-brush (Procon Boy FWA Platinum 0.2-mm caliber airbrush, Mr. Hobby, Tokyo, Japan). We maintained a distance of ~5 cm between the air-brush and the target during matrix coating, and allowed sections to dry between coating cycles to minimize delocalization of target compounds. MALDI-IMS was performed using an Ultra Flextreme MALDI-time-of-flight (TOF) mass spectrometer (Bruker Daltonics, Leipzig, Germany) equipped with an Nd:YAG laser. Accurate MS and MS/MS analyses were performed with a prototype "Mass microscope" (Shimadzu Corporation, Kyoto, Japan). For both instruments, the laser power was optimized to minimize in-source decay of phosphate nucleotides. Data were acquired in the negative reflectron mode with raster scanning using a pitch distance of 100 μm. Each mass spectrum was the result of 300 laser shots at each data point. Signals between m/z 50 and 1000 were collected. Image reconstruction was performed using FlexImaging 4.0 software (Bruker Daltonics).
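As a hedged illustration of the internal/external standard quantification described above, the sketch below fits a through-origin external calibration curve and applies an internal-standard recovery correction. All numeric values, the extraction volume of 50 µL aside, and the function names are illustrative assumptions, not the paper's data or code:

```python
# Illustrative sketch of IS-corrected, calibration-curve quantification.
# All peak areas and the tissue mass below are invented for the example.
import numpy as np

def calibration_slope(conc_uM, peak_areas):
    # Least-squares fit of a through-origin line: area = slope * concentration.
    conc = np.asarray(conc_uM, float)
    area = np.asarray(peak_areas, float)
    return float(conc @ area) / float(conc @ conc)

def quantify_nmol_per_mg(sample_area, is_area_sample, is_area_spiked, slope,
                         extract_volume_uL=50.0, tissue_mg=10.0):
    recovery = is_area_sample / is_area_spiked   # IS recovery rate (fraction)
    corrected_area = sample_area / recovery      # correct extraction losses
    conc_uM = corrected_area / slope             # apply external calibration
    pmol = conc_uM * extract_volume_uL           # 1 uM in 1 uL equals 1 pmol
    return pmol / 1000.0 / tissue_mg             # pmol -> nmol, per mg tissue

# Example: a metabolite calibrated at 5/10/50 uM standards (areas are made up)
slope = calibration_slope([5, 10, 50], [1.1e4, 2.2e4, 1.05e5])
print(quantify_nmol_per_mg(4.0e4, is_area_sample=9.0e3,
                           is_area_spiked=1.0e4, slope=slope))
```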
Peaks of specific metabolite molecules were assigned by accurate MS analyses with an ion trap TOF instrument (see Supplementary Table S2 online). It is not possible to separate 13C2-glu and 13C3-gln by MALDI imaging. To solve this problem, we utilized CE/ESI/MS analyses to determine the 13C2-glu and 13C3-gln content in adjacent sections from the same sample. By doing so, we confirmed that the level of 13C3-gln was below the level of detection. This led us to conclude that the m/z value, 149.13, in negative ion mode represents 13C2-glu and that the sample is less likely to be contaminated with 13C3-gln. MALDI imaging of metabolites in tissue sections normalized by CE-MS-based quantitative data To construct apparent content maps for a specific metabolite, we modified a previously reported method 8. Briefly, the apparent content of a specific metabolite at the $i$-th spot of tissue, $Q_i$, was estimated as $Q_i = \bar{C} \cdot I_i / \tilde{I}$, where $\bar{C}$ denotes the mean value of the metabolite content determined using CE/ESI/MS in the corresponding tissue block, $I_i$ is the maximum intensity of the mass spectra within a specified range at the $i$-th spot, and $\tilde{I}$ is the median of the maximum intensities of the metabolite from all the spots. Statistical analysis Measurements are reported as mean ± SEM. For single comparisons, we performed an unpaired two-tailed Student's t-test; for multiple comparisons, we used an analysis of variance (ANOVA) followed by Tukey's correction for post hoc comparisons. Significance was considered at P < 0.05. Statistical analyses were performed using SPSS software (SPSS Inc., Chicago, IL, USA). Animal welfare According to the American Veterinary Medical Association recommendations (AVMA Guidelines for the Euthanasia of Animals: 2013 Edition), high-energy microwave irradiation is a humane method for euthanizing small laboratory rodents, with unconsciousness achieved in less than 100 ms and a complete loss of brain function in less than 1 s. During mouse heart fixation, the heartbeat completely ceased within 1 s. Therefore, we consider FMW fixation of the heart to be humane and to satisfy the criteria provided by the ethical review at each research institute. Figure S2. Quantitative imaging of cardiac metabolites by Q-IMS. Application of matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS), combined with focused microwave irradiation (FMW) to rapidly fix tissue metabolism, accurately and reproducibly visualized regional contents of in vivo endogenous metabolites and spatial differences of metabolites in mice. MALDI-IMS reconstructed with capillary electrophoresis-mass spectrometry (CE-MS)-based data could evaluate the fluctuation of metabolites between different mouse tissue slices more quantitatively. (Panel labels: H&E-stained; remnant blood in the left ventricle.) Figure S3. Sections of heart tissue stained with hematoxylin and eosin following focused microwave irradiation at 5 kW for 0.94 sec, rapid-freezing and delayed-freezing methods. Figure S7. Tandem MS analysis to identify glutamate and NADH. The chemical structures show the assignments of the diagnostic fragments. Comparisons of tissue MS/MS spectra with ion peaks at m/z 146 (negative-ion mode) and m/z 664 (negative-ion mode) obtained from tissue or from glutamate and NADH standards, respectively. The similarity of the two spectra was used to assign the metabolites as glutamate and NADH. Figure S11. Pathway-tracing analysis using 13C6-glucose in the hearts. Five blocks obtained from LAD-ligated hearts were used for capillary electrophoresis-mass spectrometry (CE-MS) quantification of 13C-containing metabolites. Data obtained from different blocks are represented with pie charts. Two sets of results obtained from independent experiments are shown. Table S1. List of metabolites that can be visualized by capillary electrophoresis-mass spectrometry (CE-MS) or matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS). Note that the number of metabolites that can be visualized by IMS is smaller than that measurable by CE-MS.
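The Q-IMS normalization above reduces to a few lines of code. The sketch below is a minimal illustration under stated assumptions: the array shapes and intensity values are invented, and `apparent_content_map` is a hypothetical helper, not from the paper:

```python
# Minimal sketch of the Q-IMS normalization Q_i = C_mean * I_i / median(I).
import numpy as np

def apparent_content_map(spot_max_intensity, block_mean_content_nmol_mg):
    """Scale MALDI-IMS ion intensities to apparent metabolite content.

    spot_max_intensity: 2D array of per-spot maximum ion intensities
    block_mean_content_nmol_mg: mean content of the metabolite in the
        corresponding tissue block, as quantified by CE/ESI/MS
    """
    median_intensity = np.median(spot_max_intensity)
    return block_mean_content_nmol_mg * spot_max_intensity / median_intensity

# Example with synthetic intensities for one tissue block
rng = np.random.default_rng(0)
intensities = rng.uniform(0.5, 2.0, size=(64, 64))
content_map = apparent_content_map(intensities, block_mean_content_nmol_mg=3.2)
print(content_map.shape, content_map.mean())
```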
IGFormer: Interaction Graph Transformer for Skeleton-based Human Interaction Recognition Human interaction recognition is very important in many applications. One crucial cue in recognizing an interaction is the interactive body parts. In this work, we propose a novel Interaction Graph Transformer (IGFormer) network for skeleton-based interaction recognition via modeling the interactive body parts as graphs. More specifically, the proposed IGFormer constructs interaction graphs according to the semantic and distance correlations between the interactive body parts, and enhances the representation of each person by aggregating the information of the interactive body parts based on the learned graphs. Furthermore, we propose a Semantic Partition Module to transform each human skeleton sequence into a Body-Part-Time sequence to better capture the spatial and temporal information of the skeleton sequence for learning the graphs. Extensive experiments on three benchmark datasets demonstrate that our model outperforms the state-of-the-art by a significant margin. Introduction Human interaction recognition plays a significant role in a wide range of applications [1,26,36,31]. For example, it can be used in visual surveillance to detect dangerous events such as "kicking" and "punching". It can also be used for robot control in human-robot interaction. This paper addresses human interaction recognition from skeleton sequences [28,15]. Compared with RGB videos, skeleton sequences provide only the 3D coordinates of human joints, which are more robust to unconventional and variable conditions, such as unusual viewpoints and cluttered backgrounds. Compared with single-person action recognition, one additional crucial cue in recognizing a human interaction is the interactive body parts of the interactive persons. For example, the interactive hands of two persons are critical in understanding a "shaking hands" interaction. Generally, the interactive body parts in interactions demonstrate semantic correlations and correspondence. For example, in the interaction of "Taking a photo" shown in Fig. 1 (a), the hands of one person holding the camera and the hands of the other making a "yeah" gesture demonstrate a strong correlation. Similarly, in "Shaking hands" shown in Fig. 1 (b), the interactive hands of the two persons correspond to each other. In these cases, exploring the semantic correlation between the interactive body parts is crucial for interaction understanding. In addition, for some interactions, the interactive body parts demonstrate distance evolution. For example, the hands of the two persons gradually approach each other when the two persons are "shaking hands". Measuring the distance between body parts of the interactive persons can provide additional useful information, complementary to the semantic correlation, for better interaction recognition. Inspired by the above observations and the successful application of Transformers in many fields [4,5,41,37], we propose a novel Transformer-based model named Interaction Graph Transformer (IGFormer) for interaction recognition from skeleton sequences. In particular, the proposed IGFormer contains a Graph Interaction Multi-head Self-Attention (GI-MSA) module, which aims at modeling the relationship of the interactive persons at both the semantic and distance levels to recognize actions. More specifically, the GI-MSA module learns a semantic-based interaction graph and a distance-based interaction graph to represent the mutual relationship between body parts of the interactive persons.
The semantic-based graph is learned by the attention mechanism in a data-driven manner to capture the semantic correlations of the interactive body parts. The distance-based graph is constructed by measuring the distance between pairs of body parts to excavate the distance information between interactive body parts. The two interaction graphs are combined to complement and refine each other, making the model suitable for modeling different interactions. To feed skeleton sequences to the IGFormer, one straightforward solution is to transform each skeleton sequence to a pseudo-image and divide the image into a sequence of patches, similar to the manner of ViT [5]. However, this may destroy the spatial relationship among the skeleton joints in each body part, which could hinder effective modeling of the interactive body parts for interaction recognition. To tackle this problem, we propose a Semantic Partition Module (SPM) to transform the skeleton sequence of each subject into a new format, i.e., a Body-Part-Time (BPT) sequence, each element of which is the representation of one body part during a short period. The BPT sequence encodes the semantic information and temporal dynamics of the body parts, enhancing the capability of the network for modeling interactive body parts for interaction recognition. We summarize the contributions of this paper as follows: -We introduce a Transformer-based model named IGFormer, which contains a novel GI-MSA module to learn the relationships of the interactive persons at both the semantic and distance levels for skeleton-based human interaction recognition. -We introduce a Semantic Partition Module (SPM) transforming each skeleton sequence into a BPT sequence to enhance the modeling of interactive body parts. -We conduct extensive experiments on three challenging datasets and achieve state-of-the-art performance. Fig. 2. (a) The overall architecture of the proposed IGFormer for skeleton-based human interaction recognition. Given the skeleton sequences of two subjects, they are first fed into the Semantic Partition Module (SPM) to generate two Body-Part-Time (BPT) sequences. The BPT sequences are then fed into the Interaction Transformer Block (ITB) for interactive learning. The ITB contains three main components: two self-encoding (SE) modules, the proposed GI-MSA module and two two-layer Feed-Forward Networks (FFN). Finally, a global average pooling followed by a softmax classifier is applied to the outputs of the last ITB to predict interaction labels. (b) The structure of the proposed Graph Interaction Multi-head Self-Attention (GI-MSA) module (DSIG: distance-based sparse interaction graph, SDIG: semantic-based dense interaction graph). Skeleton-based Action Recognition Conventional deep learning-based methods model the human skeleton as a sequence of joint-coordinate vectors [18,28,7,30,35,13] or a pseudo-image [14,9,10,11,6], which is then fed into RNNs or CNNs to predict the actions. However, representing the skeleton data as a vector sequence or a 2D grid cannot fully express the dependency between correlated joints, since the human skeleton is naturally structured as a graph. Recently, GCN-based methods [12,29,23] consider the human skeleton as a graph whose vertices are joints and whose edges are bones, and apply graph convolutional networks (GCN) on the human graph to extract correlated features. These methods achieve better performance than RNN- and CNN-based methods, and have become the mainstream in skeleton-based action recognition.
However, these methods consider each person as an independent entity and cannot effectively capture human interaction. In this work, we focus on skeleton-based human interaction recognition and propose to model the interactive relationship of persons at both the semantic and distance levels. Human Interaction Recognition Human interaction recognition [36,31,27] is a sub-field of action recognition. Compared with single-person action recognition, human interaction methods should not only be able to model the behavior of each individual but also capture the interaction between them. Yun et al. [34] evaluated several geometric relational body-pose features, including joint features, plane features and velocity features, for interaction modeling, and found that joint features outperform the others, whereas velocity features are sensitive to noise. Ji et al. [8] built poselets by grouping joints that belong to the same body part of each individual to describe the interaction of each body part. Recently, Perez et al. [24] proposed a two-stream LSTM-based interaction relation network called LSTM-IRN to model the intra-relations of body joints from the same person and the inter-relations of the joints from different persons. However, LSTM-IRN ignores the distance evolution of body parts, which is considered important prior knowledge for human interaction recognition. Different from the above-mentioned methods, we model the interaction relationship of interactive humans as two interaction graphs, which are constructed at the semantic and distance levels, respectively, to capture the semantic correlation and distance evolution between body parts. Visual Transformer The Transformer was first proposed in [32] for the machine translation task and has since been widely adopted in various natural language processing (NLP) tasks. Inspired by its successful application in NLP, the Transformer has been applied to computer vision and has demonstrated its scalability and effectiveness in many vision tasks. Vision Transformer (ViT) [5] was the first pure Transformer architecture for image recognition and obtained better performance and generalization than traditional convolutional neural networks (CNNs). After that, Transformer-based models with carefully designed and complicated architectures have been applied to various downstream vision tasks, such as object detection [40], semantic segmentation [38] and video classification [2]. In skeleton-based action recognition, Plizzari et al. [25] proposed ST-TR to model the dependencies between joints by substituting the graph convolution operator with the self-attention operator. Different from ST-TR, we focus on human interaction modeling and propose a novel self-attention-based GI-MSA module to model the correlations between body parts of interactive persons. Interaction Graph Transformer One important cue in recognizing human interaction is the interactive body parts. In this section, we introduce the Interaction Graph Transformer (IGFormer), which contains a Graph Interaction Multi-head Self-Attention (GI-MSA) module to model the interactive body parts at both the semantic and distance levels for skeleton-based interaction recognition. The proposed IGFormer is also equipped with a Semantic Partition Module (SPM), which aims at retaining the semantic and temporal information of each body part within the input skeleton sequences for better learning of the interactive body parts. The overall architecture of the proposed IGFormer is shown in Fig. 2 (a).
Given the skeleton sequences of two interactive subjects $S_m, S_n \in \mathbb{R}^{T \times J \times C}$, where $T$ and $J$ represent the numbers of frames and joints in each frame, respectively, and $C = 3$ represents the dimension of the 3D coordinates of each joint, we first feed the two skeletons into the proposed SPM to generate two Body-Part-Time (BPT) sequences, $H_m, H_n$, which are then fed into a stack of Interaction Transformer Blocks (ITBs) for interaction modeling. Finally, a global average pooling followed by a softmax classifier is applied to the output of the last ITB to predict the interaction class. More specifically, each ITB contains three components: two shared-weight self-encoding (SE) modules, the Graph Interaction Multi-head Self-Attention (GI-MSA) module, and two Feed-Forward Networks (FFN). Each SE module is a standard one-layer Transformer [5], which aims at modeling the interaction among the body parts within each individual skeleton. The two outputs of the SE are fed into the GI-MSA to model the interactive body parts and generate an enhanced representation for each interactive person. Finally, each output of the GI-MSA is fed to a Layer Normalization (LN) followed by a FFN. We add an addition operation between the output of the GI-MSA and the FFNs to improve the representation capability of the model. The ITB can be formulated as follows: $H_{me} = \mathrm{SE}(H_m)$, $H_{ne} = \mathrm{SE}(H_n)$; $\hat{H}_{me}, \hat{H}_{ne} = \text{GI-MSA}(H_{me}, H_{ne})$; $\hat{H}_{mo} = \mathrm{FFN}(\mathrm{LN}(\hat{H}_{me})) + \hat{H}_{me}$, $\hat{H}_{no} = \mathrm{FFN}(\mathrm{LN}(\hat{H}_{ne})) + \hat{H}_{ne}$, where $H_{me}$ and $H_{ne}$ denote the outputs of the SE, $\hat{H}_{me}$ and $\hat{H}_{ne}$ denote the outputs of the GI-MSA module, and $\hat{H}_{mo}$ and $\hat{H}_{no}$ are the outputs of the ITB. The two SE modules in the first ITB take the Body-Part-Time (BPT) representations of the two interactive subjects, i.e., $H_m$ and $H_n$, as input. The inputs of the SE in each following ITB are the outputs of the previous ITB. In the following subsections, we introduce the proposed SPM and GI-MSA in detail. Semantic Partition Module Different from natural 2D images, which can be directly divided into a sequence of patches to feed to the Transformer [5], human skeleton sequences are represented as a set of 3D joints. Transforming the 3D skeleton sequences to 2D pseudo-images and passing them through a vision Transformer such as ViT [5] may result in losing the temporal dependency between frames as well as the correlation between joints. To better retain both the spatial and temporal information of the skeleton sequences, we propose the SPM to transform the skeleton sequence of each subject into a sequence of BPT. Each element in the BPT is the representation of one body part during a short temporal period. The overall architecture of the proposed SPM is shown in Fig. 3. There are three main steps in the SPM, i.e., partitioning, resizing, and projection, which are explained below. Partitioning. Given the skeleton sequences of the interactive persons $S_m, S_n \in \mathbb{R}^{T \times J \times C}$, we first divide each skeleton sequence into $B = 5$ body parts, i.e., left arm, right arm, left leg, right leg and torso, according to the natural structure of the human body. After the partitioning operation, each body part of each subject is represented as $S_{m,p}, S_{n,p} \in \mathbb{R}^{T \times J_p \times C}$, where $p \in \{1, \dots, B\}$ and $J_p$ is the number of joints of body part $p$. Resizing. Different body parts may have different numbers of joints. In order to adapt these body parts to the input of the Transformer, we adopt linear interpolation to resize the spatial dimension $J_p$ of all body parts to the same dimension $P$, i.e., $S_{m,p}, S_{n,p} \in \mathbb{R}^{T \times P \times C}$. After the resizing operation, all $B$ body parts have the same dimension. Projection.
The projection operation aims to transform the resized body parts of each person into a BPT sequence to feed to the Transformer. Specifically, we apply a 2D convolution with a kernel size of $P \times P$ on $S_{m,p}$ and $S_{n,p}$ to generate 2D feature maps, respectively. The size of each output feature map is $L \times D$, where $L = \lceil (T + 2 \times \text{padding} - P + 1)/\text{stride} \rceil$ and $D$ denotes the number of output channels. "padding" and "stride" denote the padding size and the stride of the convolutional filter. Each 2D feature map can then be split into a sequence of $L$ steps, where each step is a feature vector of dimension $D$. The projection can be formulated as follows: $e_{m,p,1}, e_{m,p,2}, \dots, e_{m,p,L} = \mathrm{Split}(\mathrm{Conv}(S_{m,p}))$, $e_{n,p,1}, e_{n,p,2}, \dots, e_{n,p,L} = \mathrm{Split}(\mathrm{Conv}(S_{n,p}))$, where $e_{m,p,j}, e_{n,p,j} \in \mathbb{R}^D$ denote the embeddings of body part $p$ at temporal step $j$ for interactive persons $m$ and $n$, respectively, $j \in [1, \dots, L]$, and $D$ is the dimension of the embedding. $L$ is the number of time steps of each body part. After projection, we concatenate the embeddings of all the $B$ body parts step by step for all the $L$ time steps to generate a sequence with $M = B \times L$ time steps. The sequence is referred to as the BPT sequence. As shown in Fig. 3, the BPT sequence can be considered as a combination of $L$ sub-sequences, each of which is formed by the features of the $B$ body parts. We denote the BPT sequences generated from the skeleton sequences of the two interactive persons as $H_m, H_n \in \mathbb{R}^{M \times D}$. A learnable positional encoding [5] is added to $H_m$ and $H_n$ to form the inputs of two shared-weight Self-Encoding (SE) modules, which are standard one-layer Transformers [5]. The output sequences of the SE are denoted as $H_{me}, H_{ne} \in \mathbb{R}^{M \times D}$, which are then fed to the Graph Interaction Multi-head Self-Attention (GI-MSA) module to model the interactive body parts and generate an enhanced representation for each interactive subject.
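To make the SPM pipeline concrete, here is a minimal PyTorch sketch of the partition/resize/projection steps. The joint grouping, zero padding, the shared projection weights across parts, and the time-major interleaving order are illustrative assumptions; the class name `SPM` and its arguments are ours, not the authors' code:

```python
# Sketch of the Semantic Partition Module: partition -> resize -> projection.
# Joint groupings and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPM(nn.Module):
    def __init__(self, parts, P=16, D=768, stride=10):
        super().__init__()
        self.parts, self.P = parts, P            # parts: B lists of joint indices
        # A shared P x P convolution projects each resized part to D channels.
        self.proj = nn.Conv2d(3, D, kernel_size=P, stride=(stride, 1))

    def forward(self, skel):                     # skel: (T, J, 3)
        T = skel.shape[0]
        part_tokens = []
        for idx in self.parts:                   # partitioning
            x = skel[:, idx, :].permute(2, 0, 1).unsqueeze(0)    # (1, 3, T, J_p)
            x = F.interpolate(x, size=(T, self.P), mode="bilinear",
                              align_corners=False)               # resizing
            f = self.proj(x)                     # (1, D, L, 1): conv spans width P
            part_tokens.append(f[0, :, :, 0].t())                # (L, D)
        # interleave the B part embeddings at each of the L time steps -> (B*L, D)
        return torch.stack(part_tokens, dim=1).flatten(0, 1)

# Toy usage with a 15-joint skeleton split into 5 parts of 3 joints each
parts = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14]]
spm = SPM(parts, P=4, D=32, stride=5)
print(spm(torch.randn(60, 15, 3)).shape)         # (L*B, D) tokens
```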
Graph Interaction Multi-head Self-Attention To accurately recognize human interaction, one critical cue is the interactive body parts. Considering the semantic correspondence and the distance characteristics that may exist in the interactive body parts, we propose a Graph Interaction Multi-head Self-Attention (GI-MSA) module to model the interactive body parts as two interaction graphs, as shown in Fig. 2 (b). Specifically, GI-MSA contains a Semantic-based Dense Interaction Graph (SDIG) and a Distance-based Sparse Interaction Graph (DSIG). The SDIG is learned by exploring the semantic correlations of the interactive body parts in a data-driven manner, while the DSIG is constructed based on the prior knowledge that physically close body parts of the interactive persons are generally interactive body parts and should be connected. With the SDIG and DSIG, the proposed GI-MSA models the interaction relationships of humans in both the semantic and distance spaces to capture critical interactive information. Finally, the representation of each individual is enhanced by aggregating interactive features from the other person. Semantic-based Dense Interaction Graph In order to capture the semantic correlations between interactive body parts of people (e.g., the hands holding the camera of one person and the hand with "yeah" of the other person in the action of "taking a photo"), we construct a Semantic-based Dense Interaction Graph (SDIG) for each interactive person. We take the learning of the SDIG of person $m$ (denoted as $\mathrm{SDIG}_{m \to n}$) as an example. As shown in Fig. 2 (b), given the representations of two interactive persons $H_{me}, H_{ne} \in \mathbb{R}^{M \times D}$, which are the outputs of the SE module, we first transform $H_{me}$ into the latent space by a linear transformation function $T_Q$: $H^Q_{me} = T_Q(H_{me}) = H_{me} W^Q$, where $H^Q_{me} \in \mathbb{R}^{M \times D}$ is the transformed query feature and $W^Q \in \mathbb{R}^{D \times D}$ is the weight matrix. Then, we propose a context transformation function $\mathcal{C}$ to transform the representation of the other person $H_{ne}$ into a high-level space as the key features: $H^K_{ne} = \mathcal{C}(H_{ne}) = (H_{ne} + H^{tc}_{ne} + H^{sc}_{ne}) W^K$, where $H^K_{ne} \in \mathbb{R}^{M \times D}$ is the key features, $W^K \in \mathbb{R}^{D \times D}$ is the learned weight matrix, and $H^{tc}_{ne}, H^{sc}_{ne} \in \mathbb{R}^{M \times D}$ are the temporal and spatial contexts of $H_{ne}$. To compute $H^{tc}_{ne}$ and $H^{sc}_{ne}$, we first compute $H^{tc}_{ne,p}$ and $H^{sc}_{ne,t}$, which denote the temporal context of each body part $p$ and the spatial context at time step $t$ in $H_{ne}$, as follows: $H^{tc}_{ne,p} = \frac{1}{L} \sum_{j=1}^{L} H_{ne,p,j}$, $H^{sc}_{ne,t} = \frac{1}{B} \sum_{i=1}^{B} H_{ne,i,t}$, where $L$ denotes the number of temporal steps of each body part in $H_{ne}$, and $B$ is the number of body parts. $H_{ne,p,j} \in \mathbb{R}^D$ and $H_{ne,i,t} \in \mathbb{R}^D$ denote the feature encoding of body part $p$ at time step $j$ and the feature encoding of body part $i$ at time step $t$ in the sequence $H_{ne}$, respectively. By stacking the temporal context of all $B$ body parts and repeating it $L$ times, and repeating the spatial context of each time step $B$ times and stacking the repetitions of all $L$ time steps, respectively, we obtain $H^{tc}_{ne}, H^{sc}_{ne} \in \mathbb{R}^{M \times D}$. Finally, $\mathrm{SDIG}_{m \to n}$ can be obtained by performing the matrix multiplication operation between $H^Q_{me}$ and $H^K_{ne}$: $\mathrm{SDIG}_{m \to n} = H^Q_{me} (H^K_{ne})^{\top}$, where $\top$ is the transpose operation and $\mathrm{SDIG}_{m \to n} \in \mathbb{R}^{M \times M}$. $\mathrm{SDIG}_{n \to m}$ can be obtained in a similar way, and the learnable weight matrices $W^Q$ and $W^K$ are shared for learning both $\mathrm{SDIG}_{m \to n}$ and $\mathrm{SDIG}_{n \to m}$. Distance-based Sparse Interaction Graph In addition to modeling the interaction relationship at the semantic level, we also compute the distance correlation between body parts of the interactive persons. The DSIG is a predefined graph and can be constructed in the data pre-processing stage. The idea of the DSIG is to leverage the distance between body parts to construct an adjacency matrix that contains the connection information between body parts of the interactive persons. More specifically, if the distance between two body parts of the interactive persons is small, then the two body parts are connected. Given the original skeleton sequences of two interactive humans $S_m, S_n \in \mathbb{R}^{T \times J \times C}$, we first divide the skeleton sequences into $B$ body parts $S_{m,p}, S_{n,p} \in \mathbb{R}^{T \times J_p \times C}$ via the same partitioning process as in the SPM. To estimate the distance between body parts, we first compute the representations of the body parts by averaging the coordinates of the joints within each body part: $\bar{S}_{m,p} = \frac{1}{J_p} \sum_{i=1}^{J_p} S_{m,p}[i]$, $\bar{S}_{n,p} = \frac{1}{J_p} \sum_{j=1}^{J_p} S_{n,p}[j]$, where $\bar{S}_{m,p}, \bar{S}_{n,p} \in \mathbb{R}^{T \times C}$ are the representations of body part $p$ of the two interactive persons, respectively. $S_{m,p}[i]$ and $S_{n,p}[j]$ denote the $i$-th joint in $S_{m,p}$ and the $j$-th joint in $S_{n,p}$, respectively. $J_p$ is the number of joints within body part $p$. We downsample the temporal dimension of $\bar{S}_{m,p}$ and $\bar{S}_{n,p}$ from $T$ to $L$ and stack the downsampled representations of all body parts, following the same ordering as the BPT sequence, to obtain $\bar{S}_m, \bar{S}_n \in \mathbb{R}^{M \times C}$; the pairwise distances are then recorded as $A_{m \to n}[a, b] = \lVert \bar{S}_m[a] - \bar{S}_n[b] \rVert_2$, where $a, b \in [1, \dots, M]$ and $A_{m \to n} \in \mathbb{R}^{M \times M}$ records the distance between the body parts of the two people. We finally connect each time step $a$ in human $m$ to the $k$ nearest time steps in human $n$ to build $\mathrm{DSIG}_{m \to n} \in \mathbb{R}^{M \times M}$ as below: $\mathrm{DSIG}_{m \to n}[a, b] = 1$ if $A_{m \to n}[a, b] \le A^k_{m \to n}[a]$, and $0$ otherwise (Eq. (9)), where $A^k_{m \to n}[a]$ is the $k$-th smallest value in the $a$-th row of $A_{m \to n}$. $\mathrm{DSIG}_{n \to m} \in \mathbb{R}^{M \times M}$ is built in a similar way to encode the distance between each part of interactive person $n$ and all body parts of person $m$.
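A compact NumPy sketch of the DSIG construction reads as follows. The uniform temporal downsampling, the Euclidean metric, and the time-major token ordering are assumptions consistent with the text rather than details taken from the authors' code:

```python
# Sketch of the distance-based sparse interaction graph (DSIG).
# Downsampling scheme, metric, and token ordering are assumptions.
import numpy as np

def dsig(skel_m, skel_n, parts, L, k):
    """skel_m, skel_n: (T, J, 3) skeletons; parts: B lists of joint indices."""
    def tokens(skel):
        # mean over the joints of each part -> (B, T, 3)
        tracks = np.stack([skel[:, idx, :].mean(axis=1) for idx in parts])
        t_idx = np.linspace(0, skel.shape[0] - 1, L).astype(int)  # T -> L
        # time-major ordering to match the BPT sequence: (L, B, 3) -> (M, 3)
        return tracks[:, t_idx, :].transpose(1, 0, 2).reshape(-1, 3)

    pm, pn = tokens(skel_m), tokens(skel_n)
    A = np.linalg.norm(pm[:, None, :] - pn[None, :, :], axis=-1)  # (M, M)
    kth = np.sort(A, axis=1)[:, k - 1:k]         # k-th smallest per row
    return (A <= kth).astype(np.float32)         # binary adjacency matrix

rng = np.random.default_rng(0)
parts = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14]]
g = dsig(rng.normal(size=(60, 15, 3)), rng.normal(size=(60, 15, 3)),
         parts, L=25, k=15)
print(g.shape, int(g.sum(axis=1).min()))         # (125, 125), at least k ones per row
```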
Interaction-based Feature Generation

Given the semantic- and distance-based interaction graphs, we aggregate the interactive information of the graphs with the individual features of the interactive persons to generate an enhanced representation for better interaction recognition, as shown in Fig. 2(b). Specifically, we first transform the input individual representation $H_{ne}$, which is the output of the SE module for person $n$, into the value features $H^V_{ne}$:

$$H^V_{ne} = H_{ne} W^V,$$

where $H^V_{ne} \in \mathbb{R}^{M \times D}$ and $W^V$ is the weight matrix. Then we perform matrix multiplication between $H^V_{ne}$ and the combination of $\mathrm{DSIG}_{m \to n}$ and $\mathrm{SDIG}_{m \to n}$, followed by an addition with $H_{me}$, to obtain the interactive representation of person $m$:

$$\hat{H}_{me} = R(\mathrm{DSIG}_{m \to n}, \mathrm{SDIG}_{m \to n})\, H^V_{ne} + H_{me},$$

where $\hat{H}_{me} \in \mathbb{R}^{M \times D}$ and $R$ is the combination function, in which $\alpha$ is a trainable scalar that adjusts the intensity of each graph, enabling the network to adaptively balance the distance evolution and the semantic correlation of body parts. Similarly, $\hat{H}_{ne}$ can be obtained in the same way. We define the above steps of generating $\hat{H}_{me}$ and $\hat{H}_{ne}$ from $H_{me}$ and $H_{ne}$ as Graph Interaction Self-Attention (GI-SA), and the multi-head extension concatenates the per-head outputs:

$$\hat{H}_{me} = \mathrm{Concat}(\hat{H}_{me,1}, \ldots, \hat{H}_{me,h}) W_m, \qquad \hat{H}_{ne} = \mathrm{Concat}(\hat{H}_{ne,1}, \ldots, \hat{H}_{ne,h}) W_n,$$

where $h$ is the number of heads, $\hat{H}_{me,i}, \hat{H}_{ne,i} \in \mathbb{R}^{M \times d}$ are the output representations of the $i$-th head of GI-SA, $W_m, W_n$ are the weight matrices, and $H_{me,i}, H_{ne,i} \in \mathbb{R}^{M \times d}$ are the $i$-th head representations of $H_{me}$ and $H_{ne}$. (A compact code sketch of this computation is given at the end of the datasets description below.)

Datasets

SBU [34] is a two-person interaction dataset that contains eight classes of human interactions: approaching, departing, pushing, kicking, punching, exchanging objects, hugging, and shaking hands. Seven participants (pairing up into 21 different permutations) performed all eight interactions. In total, the dataset contains 282 short videos. Each video contains the 3D coordinates of 15 joints per person at each frame. Following [34], we use the 5-fold cross-validation protocol to evaluate our method.

NTU-RGB+D [28] is a large-scale action dataset containing 56,578 skeleton sequences from 60 action classes. Each action is captured by 3 cameras at the same height but from different horizontal angles. Each human skeleton contains the 3D coordinates of 25 body joints. There are two standard evaluation protocols for this dataset: 1) Cross-Subject, where half of the subjects are used for training and the remaining ones for testing, and 2) Cross-View, where two cameras are used for training and the third one for testing. This dataset contains 11 human interaction classes: punch/slap, pat on the back, giving something, walking towards, kicking, point finger, touch pocket, walking apart, pushing, hugging, and handshaking. The maximum number of frames in each sample is 256.

NTU-RGB+D 120 [15] extends NTU-RGB+D with an additional 57,367 samples from 60 extra action classes. In total, it contains 113,945 skeleton sequences from 120 action classes. There are two standard evaluation protocols for this dataset: 1) Cross-Subject, where half of the subjects are employed for training and the rest are left for testing, and 2) Cross-Setup, where half of the setups are used for training and the remaining ones for testing. In addition to the 11 interaction classes in NTU-RGB+D, this dataset contains 15 additional human interaction classes: hit with object, wield knife, knock over, grab stuff, shoot with gun, step on foot, high-five, cheers and drink, carry object, take a photo, follow, whisper, exchange things, support somebody, and rock-paper-scissors, resulting in a total of 26 interaction classes.
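As promised above, here is a compact, hedged sketch of one GI-SA head. Two details are our own assumptions added for readability rather than the paper's exact formulation: the combination function R is written as alpha * DSIG + SDIG (the extracted text defines alpha but omits the formula for R), and a softmax is used to normalize the fused graph before aggregation. The key-side context terms are also folded out for brevity.

```python
import torch
import torch.nn as nn

class GISAHead(nn.Module):
    # One head of Graph Interaction Self-Attention (GI-SA), sketched under
    # the assumptions stated in the lead-in paragraph.
    def __init__(self, dim):
        super().__init__()
        self.Wq = nn.Linear(dim, dim, bias=False)  # shared for both directions
        self.Wk = nn.Linear(dim, dim, bias=False)
        self.Wv = nn.Linear(dim, dim, bias=False)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # trainable graph weight

    def forward(self, H_me, H_ne, dsig_mn):
        # Semantic graph from query/key similarity (context terms omitted).
        sdig_mn = self.Wq(H_me) @ self.Wk(H_ne).transpose(0, 1)  # (M, M)
        graph = self.alpha * dsig_mn + sdig_mn                   # assumed form of R
        # Aggregate the other person's value features, then add the residual.
        return H_me + graph.softmax(dim=-1) @ self.Wv(H_ne)      # (M, D)
```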
In both the NTU-RGB+D and NTU-RGB+D 120 datasets, for samples with fewer than 256 frames, we repeat the sample until it reaches 256 frames.

Implementation Details

Transformer Architecture. We use a variant of ViT-Base [5] as the backbone of our proposed IGFormer model. The original ViT-Base model contains 12 Transformer layers with a hidden size of 768 (D = 768); the dimension of each MLP layer is four times the hidden size. However, due to the small number of samples in the human interaction recognition datasets, a lighter model is more suitable to avoid overfitting. Therefore, we reduce the number of Transformer layers to 3 (N = 3) and initialize them with the pre-trained weights of the first three layers of the ViT-Base model. We also remove the classification token (CLS) and adopt average pooling to obtain the final representation from each sequence of patches. We set the patch size P in the Resizing step of SPM to 16 and the stride of the convolution in the Projection step to 10, which results in BPT sequences with M = 125 for each person in all datasets; in each body part, L equals 25. k in Eq. (9) is set to 15.

[Table 1. Performance comparison of different types of inputs and different lengths of the sequences on NTU-RGB+D and NTU-RGB+D 120; columns report X-Sub/X-View accuracy on NTU 60 and X-Sub/X-Set accuracy on NTU 120. "PI" denotes "Pseudo-Image".]

Training Details. The experiments are conducted on an NVIDIA P100 GPU. We adopt the SGD algorithm with Nesterov momentum of 0.9 as the optimizer. The initial learning rate is set to 0.01 and is divided by 10 at the 30th and 40th epochs. The training process is terminated at the 60th epoch; the batch size is 32.

Ablation Study

In this section, we conduct extensive ablation studies on both the NTU-RGB+D and NTU-RGB+D 120 datasets to validate the effectiveness of the proposed SPM (Section 3.1) and GI-MSA (Section 3.2) modules.

Impacts of SPM. We compare two different representations of the skeleton sequences as the input of the proposed IGFormer to validate the effectiveness of the proposed SPM. The first is the Pseudo-Image representation, which has been widely used in CNN-based models [9,10,11] by transforming each 3D skeleton sequence into a 2D pseudo-image. We define the numbers of frames T and joints J of a skeleton sequence as the width and height of the image and then perform a linear projection on the image as in ViT [5]. The second representation is the BPT sequence, which is generated by the proposed SPM. Moreover, skeletons are transformed into different sequence lengths by changing the stride of the convolutional projection in ViT and SPM, to validate the robustness of the proposed SPM under different input configurations. The experimental results are shown in Table 1. We observe that the BPT representation outperforms the Pseudo-Image representation under all three configurations, which validates the effectiveness of the proposed SPM. We also evaluate a baseline that models each skeleton joint as a token of the Transformer sequence and fuses the features of the two persons, but its performance drops by 2.2% compared with our SPM on X-Sub of NTU-RGB+D.

GI-MSA versus Input/Late Fusion. We design two interaction learning baselines, Input Fusion and Late Fusion, to compare with our proposed GI-MSA module. The Input Fusion baseline merges the BPT sequences of the two subjects into a single sequence and passes it through a standard Transformer to learn the interactions between the two subjects.
The Late Fusion baseline feeds the BPT sequences of the two subjects individually through a Transformer model to extract their representations, which are then fused to model the interaction. As shown in Table 2, the performance of both the Input Fusion and Late Fusion methods is worse than that of our proposed IGFormer on both datasets, demonstrating the efficacy of the proposed GI-MSA module for interaction learning.

Impacts of SDIG and DSIG. We evaluate the impacts of the different components of the proposed GI-MSA, including the SDIG, the DSIG, and the spatial and temporal contexts used for learning the SDIG. Here, we employ IGFormer without the GI-MSA module as our baseline. Based on the results in Table 3, we draw three conclusions: (1) Both the spatial and temporal contexts in Eq. (5) are important for learning key contextual features, i.e., the performance drops significantly when either of them is removed. (2) GI-MSA containing only the SDIG already improves the performance of human interaction recognition, which validates the effectiveness of the proposed SDIG. (3) The DSIG, which serves as prior knowledge of human interaction, does not perform well on its own but provides extra information for interaction learning, leading to improved performance when combined with the SDIG.

Impacts of the Number of ITB Layers. Our IGFormer is built by stacking several Interaction Transformer Blocks (ITBs) to enhance the capability of interaction modeling. Here, we evaluate the influence of different numbers of ITBs on the performance of IGFormer. As shown in Table 4, stacking 3 layers of ITB achieves the best results on both NTU-RGB+D and NTU-RGB+D 120; further increasing the number of ITBs degrades the accuracy due to overfitting.

Impacts of Joint Noise on Human Interaction. The skeletons in NTU-RGB+D are usually noisy; e.g., some joints are missing. We evaluate the performance of our IGFormer on X-Sub of NTU-RGB+D by adding zero-mean noise to the skeleton sequences. IGFormer achieves 93.6%, 93.1%, 92.0%, and 90.4% accuracy when the standard deviation (σ) is set to 0 cm, 1 cm, 2 cm, and 4 cm, respectively, which demonstrates that IGFormer is robust against input noise.

Comparison with State-of-the-Art Methods

The experimental results on the interaction classes of the SBU, NTU-RGB+D, and NTU-RGB+D 120 datasets are shown in Table 5 (performance comparison on SBU, NTU-RGB+D, and NTU-RGB+D 120). The proposed IGFormer achieves state-of-the-art performance compared with other skeleton-based human interaction recognition methods. Benefiting from the proposed SPM and GI-MSA modules, IGFormer outperforms the CNN- and RNN-based methods by a large margin. IGFormer also outperforms the state-of-the-art GCN-based method, CTR-GCN [3], by 2.0% and 2.2% on X-Sub and X-View of NTU-RGB+D, and by 2.2% and 2.1% on X-Sub and X-Set of NTU-RGB+D 120. Compared with the baseline Transformer-based method, the ViT baseline, our IGFormer achieves 3.4% and 3.2% gains on X-Sub and X-View of NTU-RGB+D, and 3.9% and 4.0% gains on X-Sub and X-Set of NTU-RGB+D 120.

Conclusion

In this work, we presented IGFormer, which employs a GI-MSA module to model the interaction of persons as graphs. GI-MSA learns an SDIG and a DSIG to capture the semantic and distance correlations between the body parts of interactive persons. We also presented an SPM to transform each human skeleton sequence into a BPT sequence that retains the interactive information of body parts. The proposed IGFormer outperformed state-of-the-art methods on three datasets.
TREX-2-ORC Complex of D. melanogaster Participates in Nuclear Export of Histone mRNA

The TREX-2-ORC protein complex of D. melanogaster is necessary for the export of the bulk of synthesized poly(A)-containing mRNA molecules from the nucleus to the cytoplasm through the nuclear pores. However, the role of this complex in the export of other types of RNA remains unknown. We have shown that TREX-2-ORC participates in the nuclear export of histone mRNAs: it associates with histone mRNPs, binds to histone H3 mRNA at the 3'-terminal part of the coding region, and participates in the export of histone mRNAs from the nucleus to the cytoplasm.

Gene expression consists of several stages: synthesis of mRNA, formation of a mature mRNP particle, and export of mRNA from the nucleus to the cytoplasm through the nuclear pores. The export receptor NXF1 is recruited to mRNA through adapter proteins, which can bind to mRNA during transcription, splicing, 3'-end formation, and intranuclear transport of mRNP to the nuclear pores. NXF1 forms a heterodimer with the p15 protein (the Mex67-Mtr2 heterodimer in yeast). The nuclear export receptor heterodimer mediates translocation of mRNA through the nuclear pore by interacting with FG repeat-containing nucleoporins.

One of the key regulators of mRNA export is the TREX-2 protein complex. Homologous TREX-2 complexes have been characterized in many eukaryotes, including yeast and humans. TREX-2 is capable of binding to mRNA, associates with nuclear pores, and is required for mRNA export from the nucleus in various organisms [1-7]. The TREX-2 complex of D. melanogaster consists of the Xmas-2, PCID2, ENY2, and Sem1p proteins [8]. The Xmas-2 protein serves as a platform for the assembly of the complex, and the other proteins interact with it. Previously, our team purified the joint complex consisting of TREX-2 and ORC (Origin Recognition Complex) proteins and demonstrated the role of ORC subunits in the export of mRNA [9]. The ORC complex was first described in budding yeast as a complex that binds to origins of replication and is involved in the recruitment of the Mcm2-7 complex. Later, homologous complexes were discovered in other organisms [10]. However, it was found that, in higher eukaryotes, the ORC complex and its individual subunits perform many different functions not related to replication; in particular, the interaction of ORC proteins with various RNAs was shown [11]. We showed that ORC proteins interact with the mRNP particle and that this interaction is mediated by the TREX-2 complex. ORC subunits were shown to interact with the NXF1 export receptor and are required for the binding of NXF1 to the mRNP particle. Knockdown of ORC components, as well as knockdown of TREX-2 components, leads to disruption of poly(A)-containing RNA (mRNA) export from the nucleus. Thus, it was shown that the TREX-2-ORC protein complex of D. melanogaster recruits the export receptor NXF1 into mRNP particles and is a key participant in the export of the bulk of the synthesized mRNA molecules [9]. The aim of this study was to investigate the involvement of TREX-2-ORC in the export of non-polyadenylated mRNAs encoded by replication-dependent histone genes.
The mRNAs of replication-dependent histone genes (hereinafter, histone mRNAs) are synthesized at the beginning of the S phase of the cell cycle and are the only eukaryotic mRNAs that do not undergo polyadenylation [12]. Histone pre-mRNAs do not contain introns; their maturation involves only capping and endonucleolytic cleavage of the 3'-end. At the 3'-end, histone pre-mRNAs contain a stem-loop structure and an HDE element, which together direct the assembly of the specific processing machinery [13]. The SLBP protein (stem-loop binding protein) binds to the stem-loop, and the 5'-end of the U7 snRNA is associated with the HDE sequence. The U7 snRNA forms the U7 snRNP together with a protein complex consisting of the Sm spliceosomal proteins, in which two proteins, SmD1 and SmD2, are replaced by the Lsm10 and Lsm11 proteins. In mammals, the Lsm11 protein interacts via its extended N-terminus with the N-terminal region of the FLASH protein. The common surface of FLASH with Lsm11 recruits the HCC (histone cleavage complex) processing complex, containing the factors Symplekin, CPSF73, CPSF100, CPSF160, WRD33, and Cst64, to the U7 snRNP. Histone pre-mRNA is cleaved by the CPSF73 endonuclease at a distance of 4-5 nucleotides from the stem-loop. After cleavage, mature histone mRNAs are rapidly exported from the nucleus to the cytoplasm. Despite the specific structure of histone mRNAs, it was shown that they are also exported by the NXF1 receptor [14,15]. In this work, we showed that the TREX-2-ORC complex is associated with histone mRNP particles, binds to histone H3 mRNA at the 3'-end part of the coding region, and is involved in non-polyadenylated histone mRNA export.

TREX-2-ORC Complex Is Associated with Histone mRNP Particles

This study was aimed at investigating the interaction of the TREX-2-ORC complex with histone mRNP particles. Previously, the ORC subunits Orc1, Orc3, Orc4, Orc5, and Orc6 were found in the purified joint TREX-2-ORC complex [9]. The interaction of the complex with histone mRNPs was tested for the TREX-2 subunits Xmas-2, PCID2, and ENY2 and for the ORC proteins Orc3, Orc5, and Orc6. For this purpose, the method of coprecipitation of RNP particles with antibodies (RIP) from the nuclear extract of D. melanogaster S2 cells was used. Antibodies to the TREX-2 and ORC protein components effectively precipitated the mRNAs of the histone genes H2A and H3 (Fig. 1a), indicating the interaction of the TREX-2-ORC complex with mRNP particles of these histones. Similar data were obtained for H4 and H2B mRNAs. The interaction of TREX-2-ORC with the poly(A)-containing mRNA of the β-tubulin gene, which was demonstrated previously [9], served as a positive control.

The interaction was further confirmed using an alternative approach: mRNP particles containing the H3 gene transcript were isolated from the nuclear extract of S2 cells using a biotinylated antisense RNA probe to histone H3 mRNA (Fig.
1b). The antisense RNA probe specifically associates with the corresponding endogenous RNA in the extract, precipitating the RNP particle. The mRNP particles bound to the RNA probe were precipitated from the extract using streptavidin-agarose, and mRNP proteins were analyzed by Western blotting with antibodies to the TREX-2 and ORC components. Precipitation with an RNA probe antisense to YFP RNA served as a negative control. Subunits of the TREX-2 and ORC complexes (the proteins Xmas-2, PCID2, ENY2, Orc3, and Orc6), as well as the processing factors CPSF73 and CPSF100, coprecipitated with H3 mRNP but were absent in the negative control. Thus, the TREX-2-ORC complex is able to associate with histone mRNP particles.

TREX-2-ORC Associates with H3 mRNA at the 3'-End Part of the Coding Region

To localize the region of H3 mRNA with which TREX-2-ORC is associated, biotinylated sense RNA probes corresponding to various regions of the H3 RNA sequence were incubated with the nuclear extract of S2 cells, and the proteins bound to the RNA probes were detected. The probes corresponded to the full-length transcript (H3), the N- and C-terminal regions of the 411-nt coding region (CDS1: 1-204 nt and CDS2: 204-411 nt, respectively), and the 3'-untranslated region of the H3 pre-mRNA (3'UTR: 412-595 nt) (Fig. 1c). The Xmas-2, PCID2, Orc3, and Orc6 proteins associated predominantly with the CDS2 sequence. The processing factors CPSF73 and CPSF100, used as control factors, were strongly associated with the 3'UTR probe, which contains a hairpin structure and the HDE element directing the assembly of the processing apparatus; this confirms the specificity of the identified interactions. ENY2 coprecipitated with both the CDS2 and 3'UTR RNA probes, which can be explained by the ability of ENY2 to be recruited into various protein complexes. The proteins did not interact with the control RNA probe. Thus, the TREX-2-ORC complex proteins associate with the coding sequence of histone H3 RNA at the 3'-end part of the coding region.

TREX-2-ORC Is Involved in Histone mRNA Export

Next, we studied the involvement of TREX-2-ORC in the export of histone mRNA from the nucleus to the cytoplasm. The Xmas-2 protein is a platform with which the other proteins of the TREX-2-ORC complex are associated: PCID2 is associated with the Xmas-2 region near the GANP domain, and ENY2 is associated with the CID domain of Xmas-2. Of all the ORC subunits of the complex, Orc3 binds most strongly to Xmas-2, interacting together with ENY2 with the C-terminus of Xmas-2 [16,17].

Therefore, to reduce the level of the TREX-2-ORC complex, Xmas-2 was knocked down in S2 cells using RNA interference. The level of Xmas-2 expression significantly decreased, whereas the expression levels of the PCID2, ENY2, Orc3, and Orc6 subunits did not change noticeably (Fig. 2a). Using RIP analysis, we tested whether Xmas-2 knockdown disrupts the interaction of TREX-2-ORC with histone mRNPs. Xmas-2 knockdown led to a decrease in the amount of both Xmas-2 and the PCID2, ENY2, Orc3, and Orc6 proteins associated with histone mRNAs, that is, to a decrease in the binding of TREX-2-ORC to histone mRNP particles. The results are shown for H3 mRNA (Fig. 2b).
Next, we studied how disruption of the interaction of histone H3 mRNA with the TREX-2-ORC complex affects its export from the nucleus to the cytoplasm. To do this, we analyzed the distribution of H3 mRNA in S2 cells upon knockdown of Xmas-2 and Orc3 (the key subunits for the interaction of the complex with mRNP and NXF1) using fluorescent in situ hybridization (FISH) with a DIG-labeled antisense RNA probe to the coding sequence of the H3 gene, stained with anti-DIG rhodamine (Figs. 2d, 2e). Since histone mRNA is synthesized during the S phase and undergoes degradation after the end of the S phase, the FISH signal corresponding to H3 mRNA was detected only in some cells. In the control experiment, the FISH signal of H3 mRNA was detected mainly in the cytoplasm of the cells. Knockdown of NXF1 resulted in retention of H3 mRNA in the nucleus, as reported in the literature [15]. Knockdown of Xmas-2 and knockdown of Orc3 also led to disturbances of mRNA export in a fraction of the cells with the H3 FISH signal (about 20-30%), namely, to accumulation of RNA in the nucleus or its uniform distribution between the nucleus and the cytoplasm (Figs. 2d, 2e). Thus, a decrease in the amount of TREX-2-ORC bound to histone mRNP particles leads to disruption of the nuclear export of their mRNA. The fact that export disruption is observed only in some cells can be explained by the fact that, in addition to the TREX-2-ORC complex, other adapters (the SR proteins 9G8 and SRp20 [18] and the ALYREF protein [19]) are involved in the recruitment of NXF1 to histone mRNA.

We have previously shown that the TREX-2-ORC complex associates with the major export receptor NXF1 and serves as an adapter for the binding of NXF1 to poly(A)-containing mRNAs [9]. Histone mRNAs are exported by NXF1 [15]. We have shown that the TREX-2-ORC complex is part of the mRNP particle and binds to non-polyadenylated histone mRNAs. The fact that knockdown of the complex-forming subunits of TREX-2-ORC leads to disruption of the binding of the complex to histone mRNAs and disruption of their export in a fraction of S2 cells suggests that TREX-2-ORC also serves as an adapter for the NXF1-dependent export of non-polyadenylated histone mRNAs, possibly participating in export at a certain stage of expression regulation during the cell cycle.

Previously, several adapters were discovered through interaction with which the NXF1 export receptor is recruited to histone mRNA. In mammalian cells and Xenopus oocytes, the SR proteins 9G8 and SRp20 were shown to bind to a specific transport cis-element in the mouse H2a coding region [18]. In human cells, the major export adapter ALYREF binds to histone mRNA at a region located upstream of the cleavage site, with a peak at 50 nt [19]. According to our data, TREX-2-ORC also preferentially binds to histone mRNA at the 3'-terminal part of the coding region. However, while the binding of ALYREF to histone mRNA occurs cotranscriptionally and is associated with the landing of the processing apparatus [19], TREX-2-ORC may be recruited to histone mRNP particles at a later stage of the mRNP export pathway in the nucleus, from transcription sites to nuclear pores, rather than during transcription. This can be inferred from the data showing that knockdown of TREX-2 components does not affect the processing of poly(A)-containing model mRNAs [20].

FUNDING

This work was supported by the Russian Science Foundation (project no. 22-24-00721).
ETHICS APPROVAL AND CONSENT TO PARTICIPATE

This work does not contain any studies involving human or animal subjects.

Fig. 1. TREX-2-ORC associates with histone mRNA. (a) Results of immunoprecipitation of H2A, H3, and β-tubulin mRNP particles with antibodies to the TREX-2-ORC subunits Xmas-2, PCID2, ENY2, Orc3, Orc5, and Orc6; IgG was used as a negative control. Results are presented as a percentage of the starting material. All data are presented as mean ± standard deviation; Student's t-test was used to compare control and experiment. * indicates statistical significance at p < 0.05, ** indicates statistical significance at p < 0.01. (b) Western blot analysis of co-precipitation of H3 mRNP particles with a biotinylated antisense RNA probe to histone H3 mRNA, using antibodies to Xmas-2, PCID2, ENY2, Orc3, and Orc6. An RNA probe with an antisense sequence to YFP RNA was used as a negative control (Control). (c) Western blot analysis of protein coprecipitation with biotinylated RNA probes to the histone H3 mRNA fragments (full-sized H3, CDS1, CDS2, and 3'UTR), using antibodies to Xmas-2, PCID2, ENY2, Orc3, and Orc6. An RNA probe with an antisense sequence to YFP RNA was used as a negative control (Control).

Fig. 2. Effect of knockdown of TREX-2-ORC subunits on histone mRNA export. (a) Western blot analysis of protein levels after knockdown of Xmas-2 in S2 cells, using the corresponding antibodies. Staining with an antibody to lamin Dm0 was used as a loading control. (b) Results of the immunoprecipitation of histone mRNPs after Xmas-2 knockdown with antibodies to TREX-2-ORC subunits, using IgG as a negative control. Results are presented as a percentage of the starting material. All data are presented as mean ± standard deviation; Student's t-test was used to compare control and experiment. * indicates statistical significance at p < 0.05, ** indicates statistical significance at p < 0.01. (c) Western blot analysis of protein levels after Xmas-2, Orc3, and NXF1 knockdown, using antibodies to the corresponding proteins. Staining with antibodies to lamin Dm0 was used as a loading control. (d) FISH hybridization with a DIG-labeled antisense RNA probe to H3 mRNA after knockdown of the TREX-2-ORC components Xmas-2 and Orc3 and of NXF1 (Xmas-2i, Orc3i, NXF1i) in S2 cells. The effect of GFP knockdown is shown as a negative control. (e) Proportion of S2 cells with impaired mRNA export after the knockdowns of Xmas-2, Orc3, NXF1, or GFP (control).
Enhancing Electron Coherence via Quantum Phonon Confinement in Atomically Thin Nb3SiTe6

The extraordinary properties of two-dimensional (2D) materials, such as the extremely high carrier mobility in graphene and the large direct band gaps in transition metal dichalcogenide MX2 (M = Mo or W; X = S, Se) monolayers, highlight the crucial role quantum confinement can have in producing a wide spectrum of technologically important electronic properties. Currently, one of the highest priorities in the field is to search for new 2D crystalline systems with structural and electronic properties that can be exploited for device development. In this letter, we report on the unusual quantum transport properties of the 2D ternary transition metal chalcogenide Nb3SiTe6. We show that the micaceous nature of Nb3SiTe6 allows it to be thinned down to one-unit-cell-thick 2D crystals using the microexfoliation technique. When the thickness of a Nb3SiTe6 crystal is reduced below a few unit cells, we observe an unexpected, enhanced weak-antilocalization signature in magnetotransport. This finding provides solid evidence for the long-predicted suppression of the electron-phonon interaction caused by the crossover of the phonon spectrum from 3D to 2D [4].

Recent advances in microexfoliation techniques [1] have made it possible to produce 2D atomic layered crystals that were previously inaccessible to the community. This has significantly expanded the breadth of research on low-dimensional physics, as exemplified by the discoveries of exotic quantum phenomena in graphene [5] and MX2 monolayers [6,7]. There is currently a significant effort in the field to extend the palette of 2D systems to more complex ternary materials, with the expectation that the additional elemental complexity will extend the parameter space and possibly give rise to more exotic phenomena. In this paper, we report the experimental observation of the long-predicted suppression of the electron-phonon interaction caused by 2D confinement in a new ternary 2D system: few-layer Nb3SiTe6 crystals.

Nb3SiTe6 was discovered two decades ago but has scarcely been studied [8]. As shown in Figs. 1a and 1b, the structure of Nb3SiTe6 is similar to that of MX2 (e.g., MoS2); both are formed from stacks of sandwich layers, with comparable van der Waals (vdW) gaps. In MX2, each X-M-X sandwich layer is composed of edge-sharing trigonal MX6 prisms (Figs. 1c and 1d), whereas the Te-(Nb,Si)-Te sandwich layer of Nb3SiTe6 consists of face- and edge-sharing NbTe6 prisms, with Si ions inserted into the interstitial sites among these prisms (Figs. 1a and 1b). We have grown Nb3SiTe6 single crystals using chemical vapor transport; the lateral dimension of the crystals can reach a few mm (see the inset in Fig. 1e). Sharp (0k0) x-ray diffraction peaks of these crystals (Fig. 1e) confirm their layered structure and excellent crystallinity. Our resistivity measurements along the in-plane ($\rho_{//}$) and out-of-plane ($\rho_\perp$) directions, as shown in Fig.
1f, indicate that Nb3SiTe6 is a quasi-2D metal, with the anisotropy ratio $\rho_\perp/\rho_{//}$ increasing from 9 at 300 K to 22.5 at 2 K (Fig. 1f, left inset). Our DFT-PBE band structure calculations revealed that the metallicity of Nb3SiTe6 originates from the specific bonding state of the Nb ions, as detailed in the supplementary information. From the projected band structure and density of states shown in Fig. 2a, it can be seen that the valence bands crossing the Fermi level are derived from Nb-4d orbitals, indicating that the transport properties of Nb3SiTe6 are dominated by Nb-4d electrons. Moreover, the decrease of dimensionality was found to lead to an unambiguous reconstruction of the electronic band structure (Figs. 2b and 2c). When the thickness approaches a single sandwich layer, the valence bands become much narrower and the gap between the conduction and valence bands is doubled (~0.8 eV), while the Fermi level still crosses the valence bands.

Atomically thin Nb3SiTe6 crystals can be obtained on Si/SiO2 substrates via the microexfoliation technique that has been widely used for graphene and MX2 [1,9]. The flake thickness was first estimated from the color contrast under an optical microscope, and the precise thickness was then measured by an atomic force microscope (AFM). Thin layers of ~3-5 nm are easily produced, and flakes as thin as only one unit cell (bi-layer) are accessible, as illustrated by the micrograph of a large (> 10 µm) bi-layer flake in Figs. 3a and 3b. We further characterized Nb3SiTe6 thin flakes through Raman spectroscopy and transmission electron microscopy (TEM). Compared to MX2, which displays only the $E^1_{2g}$ and $A_{1g}$ phonon modes [10], Nb3SiTe6 exhibits more Raman modes (Fig. 3c), which may be attributed to its relatively complex lattice structure. Unfortunately, we are unable to identify these Raman modes due to the lack of theoretical studies of the phonon spectra. However, we observed noticeable red- and blue-shifts caused by the decrease of flake thickness for the 165 and 225 cm$^{-1}$ modes, respectively (see Fig. 3d). Often, such Raman mode shifts reflect the variation of the phonon spectrum with reducing dimensionality, as seen in MX2 systems [10]. Our Raman measurements on < 4 nm flakes were unsuccessful, since these flakes were easily damaged even by very low laser intensities. The stability of Nb3SiTe6 thin flakes was demonstrated by TEM observations. As shown in Fig. 3e, all the [010]-zone electron diffraction spots can be indexed according to the crystal structure of Nb3SiTe6, consistent with excellent crystallinity. Moreover, the atomic-resolution image shows no visible amorphous structure (Fig. 3f) even after a few weeks of exposure to the ambient environment, confirming the stability of few-layer Nb3SiTe6.

To characterize the electronic properties of Nb3SiTe6 thin layers, we fabricated Nb3SiTe6 nanodevices using electron-beam lithography. Although we are able to thin Nb3SiTe6 crystals down to one unit cell, successful devices with good electrical contacts can be prepared only on flakes with thickness ≥ 6 nm. The inset of Fig. 4b shows an image of a typical device. Overall, we observed a systematic increase of resistivity with decreasing thickness, as shown in Fig. 4a. For relatively thick flakes (> 12 nm), the resistivity presents a temperature dependence similar to that of the bulk sample. When the thickness is decreased below 12 nm, the resistivity displays marked changes in its temperature dependence.
These thinner samples are more steeply temperature dependent and develop a low-temperature upturn in the resistivity that becomes more pronounced with decreasing thickness. In general, there are several mechanisms that can produce a low-temperature resistivity upturn, including Anderson localization, the Kondo effect, and weak localization (WL). Through systematic measurements and analyses of the magnetoresistance (MR), we have determined that the observed resistivity upturn at low temperature is primarily attributable to Anderson localization, and not to the other two mechanisms. On one hand, the positive MR in all Nb3SiTe6 devices rules out the Kondo effect [11] and WL [12-15] (see Figs. 4b and 4c), phenomena associated with negative MR. On the other hand, with decreasing flake thickness, the field dependence of the MR evolves from superlinear to sublinear behavior, with a zero-field dip developing gradually (Fig. 4c). This is reminiscent of the signature of weak antilocalization (WAL) in systems with strong spin-orbit coupling (SOC) [13-15], or of disordered conductors with electron-electron interaction (EEI), which causes corrections to the density of states at the Fermi level [15,16]. Fortunately, these two mechanisms can be distinguished by measurements of the angular dependence of the magnetoresistance (AMR). The EEI correction to the conductance is not sensitive to the magnetic field orientation [16,17], while WAL is [15] (see the supplementary information). Our AMR data, together with the systematic analyses of the MR data for all samples given below, point to a WAL scenario. As seen in Fig. 4d, when the applied field is low (e.g., 0.2 T), the rotation of the field in the x-z plane (see the inset to […]

While the observation of WAL in Nb3SiTe6 is not surprising, due to the relatively strong SOC induced by the heavy elements Nb and Te, the enhancement of the WAL signature with reducing sample thickness is unexpected. WAL results from destructive quantum interference of electron waves and causes enhanced conductivity [13-15], and therefore seems incompatible with Anderson localization. As we show below, their coexistence can be understood in terms of an anomalous enhancement of the WAL channel that is superimposed on the Anderson localization background.

To gain more insight into the thickness-dependent evolution of WAL in Nb3SiTe6 flakes, we have fitted the magnetotransport data using the Hikami-Larkin-Nagaoka (HLN) model [18] and extracted the quantum coherence length $l_\phi$ and the spin relaxation length $l_{SO}$, which characterize the distance scales associated with quantum coherence and SOC, respectively (see supplementary information). With decreasing sample thickness from 15 nm to 6 nm, the extracted $l_{SO}$ shows very weak thickness dependence. In contrast, $l_\phi$ increases by 50% (Fig. 4e) as the thickness is reduced, implying that the observed enhancement of WAL is due to the larger coherence length in the thinner Nb3SiTe6 flakes. We believe that the primary source of the enhanced $l_\phi$ in our system is weakened inelastic electron-phonon (e-ph) scattering [13]. Other scattering channels, such as electron-electron (e-e), electron-impurity, and interfacial scattering, generally strengthen with decreasing sample thickness [19,20]. These, of course, also contribute to the overall dephasing rate, so we would naively expect $l_\phi$ to decrease, or at least remain roughly constant, as the sample thickness is reduced.
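For readers who want to reproduce this kind of analysis, below is a hedged Python sketch of fitting the simplified one-channel HLN formula (valid in the strong spin-orbit limit) to magnetoconductance data. The prefactor convention (alpha close to -1/2 for a single WAL channel) and the initial guesses are illustrative assumptions; the full HLN expression used in the supplementary fits also involves the spin-orbit field, which is omitted here.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

e = 1.602176634e-19           # elementary charge, C
h = 6.62607015e-34            # Planck constant, J*s
hbar = h / (2 * np.pi)

def hln_walc(B, alpha, l_phi):
    """Simplified HLN magnetoconductance correction (strong spin-orbit limit).
    B in tesla, l_phi in metres; alpha ~ -0.5 for one WAL channel (assumed)."""
    B_phi = hbar / (4 * e * l_phi**2)     # dephasing field
    return alpha * (e**2 / (np.pi * h)) * (digamma(0.5 + B_phi / B)
                                           - np.log(B_phi / B))

# B_data (T) and dsigma_data (sheet conductance change, S) come from measurement:
# popt, _ = curve_fit(hln_walc, B_data, dsigma_data, p0=[-0.5, 100e-9])
# print("l_phi = %.0f nm" % (popt[1] * 1e9))
```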
Elastic scattering from static impurities, disorder, and interfaces can also result in a decrease of $l_\phi$ through its impact on the electronic diffusion constant. The fact that $l_\phi$ increases significantly as the system is brought through the crossover from 3D to 2D can only be attributed to a dimensional suppression of the inelastic e-ph scattering rate. The detailed characteristics of the variation of the e-ph interaction with thickness are further revealed by the temperature dependence of $l_\phi$, which is obtained from the fitting of the magnetotransport data at different temperatures (see supplementary information). As shown in Fig. […]

In lower dimensions, the e-ph interaction can be modified by the quantum confinement of both the electronic band structure and the phonon spectrum. As shown in Fig. 2, the reduction of thickness leads to significant electronic band flattening in the monolayer. Band narrowing is predicted to suppress the e-ph interaction [21,22]. However, our band structure calculations did not show significant band narrowing in bilayer and thicker Nb3SiTe6 flakes (see Fig. 2b). Therefore, the suppression of e-ph scattering in our thinner samples cannot be attributed to 2D confinement of the electronic band structure. Instead, we believe that confinement plays a much more important role through its modification of the phonon spectrum, which in turn affects the coherence length. As we show in the supplementary information, the phonon wavelength […] Previous quantum transport studies on ultra-thin films did not find evidence for the suppression of e-ph scattering with reducing dimensionality [26-29]. However, those studies were made on polycrystalline films that typically have a significant grain-boundary scattering rate, distinct from our crystalline, atomically thin Nb3SiTe6 layers. In addition, those materials may be much more sensitive to interfacial scattering due to their isotropic lattice structures. In contrast, the highly anisotropic sandwich-like lattice structure of Nb3SiTe6 leads to weak interlayer electronic coupling (Fig. 1f). Thus the scattering from the crystal/substrate interface, which is likely responsible for the observed resistivity upturn at low temperature, may be quickly "screened" by the adjacent few layers near the substrate, so that WAL can occur in layers farther away from the interface.

To summarize, we have produced atomically thin analogs of the ternary chalcogenide Nb3SiTe6. Using quantum interference as a probe of the electron-phonon scattering rate, we find clear evidence for the long-predicted dimensionality suppression of e-ph interactions resulting from the crossover from a 3D to a 2D phonon spectrum. Such a phonon dimensionality effect may also play an important role in the quantum transport properties of other layered crystalline materials.

Methods

The Nb3SiTe6 bulk single crystals were synthesized from a stoichiometric mixture of the starting elements using chemical vapor transport. During the growth, the temperature was set at 950 ˚C and 850 ˚C for the hot and cold ends of the zoned tube furnace, respectively. The composition and structure of these single crystals were confirmed using energy-dispersive x-ray spectroscopy and x-ray diffraction measurements. To exclude any possible secondary phase in our single crystals, in addition to the careful characterization of the bulk single crystals, we double-checked the tiny pieces of single crystals left on the scotch tapes used during the exfoliation. For TEM measurements, a Nb3SiTe6 thin flake was transferred to a Ni/C grid.
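To see why a flake of roughly 10 nm can confine thermal phonons, a rough estimate of the dominant phonon wavelength is instructive. The sound velocity used below is an illustrative placeholder, not a measured value for Nb3SiTe6; the point is only that the wavelength exceeds the flake thickness at low temperature, so the phonon spectrum becomes quasi-2D.

```python
# Rough estimate of the dominant thermal phonon wavelength, lambda ~ h*v_s/(k_B*T).
h, k_B = 6.62607015e-34, 1.380649e-23   # J*s, J/K
v_s = 3.0e3                             # m/s, assumed acoustic sound velocity

for T in (2, 10, 50):                   # temperatures in kelvin
    lam = h * v_s / (k_B * T)           # metres
    print(f"T = {T:>2} K: lambda ~ {lam * 1e9:.0f} nm")
# -> ~72 nm at 2 K and ~14 nm at 10 K, both larger than a ~10 nm flake,
#    versus ~3 nm at 50 K, where the phonons again behave as 3D.
```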
The Nb3SiTe6 nanodevices were fabricated using standard electron-beam lithography followed by deposition of Ti (5 nm)/Au (50 nm). The transport measurements for both bulk samples and nanodevices were performed in a physical property measurement system (PPMS). The electronic band structure calculations were performed using density functional theory in the framework of the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof [30] parameterization. […] θ is the angle between the out-of-plane axis (z-axis) and the magnetic field vector. The AMR at 9 T is multiplied by a factor of 0.3. The solid lines show the fits of the 9 T and 3 T data to the cos²θ dependence. Inset: schematic of the measurement setup. The magnetic field rotates within the x-z plane defined by the current (x-axis) and the out-of-plane axis (z-axis). e, The coherence length $l_\phi$ and spin-orbit relaxation length $l_{SO}$ extracted from the fitting (see supplementary information for details). The error bars for $l_{SO}$ are too small to […]
Enhanced Cloud Security by Combining Virtualization and Policy Monitoring Techniques

Cloud computing means accessing services through the Internet on a pay-per-usage model. Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) are available in the cloud. Cloud-based products eliminate the need to install and manage rich client applications. Cloud service providers help companies avoid high infrastructure installation and maintenance costs. The customer is charged only for the resources consumed, as in utility-based computing. Data is a valuable asset: in business, decisions are made from the data that is available. Security is one of the major problems of the cloud, where data can be stored anywhere, in any part of the cloud. This paper details the challenges and issues of cloud computing and solutions to overcome those issues.

Introduction

Cloud computing is defined as a large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet [1]. The cloud provides preconfigured infrastructure at lower cost, generally follows the Information Technology Infrastructure Library, can manage increased peak-load capacity, uses the latest technology, and provides consistent performance that is monitored by the service provider. Resources are allocated dynamically as and when needed. Cloud computing reduces capital expenditure (CAPEX) and offers high computing power at lower cost. Upgrading hardware or software requirements is also easy in the cloud, without disturbing the current work. Scalability and maintenance are easy in the case of the cloud. Users can easily rent or lease the services offered by cloud computing vendors and are charged on a pay-per-usage basis, like utility services. It is easy to scale an application deployed in the cloud, and the cloud takes away the risks of managing resources [18]. Overall, the cloud gives good performance at lower cost instead of requiring large capital investment. Apart from IaaS, SaaS, and PaaS, XaaS is possible in the cloud, with users charged on a utility-computing, pay-per-usage model. Besides having valuable features such as performance, scalability, flexibility, and adaptability, the cloud has some serious issues such as security and availability.

The rest of the paper is organized as follows: Section 2 discusses cloud providers, Section 3 discusses cloud computing challenges and issues, Section 4 discusses the proposed method, and Section 5 concludes.

Cloud Providers

Cloud computing […]

Platform as a Service (PaaS)

Platform as a Service (PaaS) offers a high-level integrated environment to build, test, and deploy custom applications. Generally, developers will need to accept some restrictions on the type of software they can write in exchange for built-in application scalability.
An example is Google's App Engine, which enables users to build Web applications on the same scalable systems that power Google's own applications. Other PaaS examples include Web application frameworks such as Python Django (Google App Engine) and Ruby on Rails (Heroku), Web hosting (Mosso), and proprietary platforms (Azure, Force.com).

Software as a Service (SaaS)

The user buys a subscription to a software product, but some or all of the data and code reside remotely. SaaS delivers special-purpose software that is remotely accessible by consumers through the Internet with a usage-based pricing model. In this model, applications can run entirely on the network, with the user interface living on a thin client. Salesforce is an industry leader in providing online CRM (Customer Relationship Management) services. Live Mesh from Microsoft allows files and folders to be shared and synchronized across multiple devices. Beyond the companies listed above, many other companies have started offering cloud computing services.

Cloud Computing Challenges and Issues

Cloud security is one of the major issues. In general, security means focusing on confidentiality, integrity, and availability. But is that sufficient? Cloud computing provides services such as Infrastructure as a Service, Platform as a Service, Software as a Service, or Anything as a Service through the Internet on a pay-per-usage model, like utility computing. In the cloud, data can be stored anywhere; users do not need to know exactly where the service they access is hosted. But security and privacy concerns require that the cloud meet regulatory compliance and not violate any user's privacy.

[…] itself, "provide a model for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving an Information Security Management System". So it has its own impact on the applicability of these standards in the area of cloud security.

c. European Network and Information Security Agency (ENISA) [13]: an EU agency that suggested a cloud computing information assurance framework adopting the ISO 27000 series of standards. In their publication Cloud Computing: Benefits, Risks and Recommendations for Information Security, they made suggestions in the areas of personnel security, supply chain assurance, operational security, identity and access management, asset management, data and services portability, business continuity management, physical security, environmental controls, legal requirements, and legal recommendations to the European Commission.

d. Information Technology Infrastructure Library (ITIL) [21]: one of the service management frameworks adopted by major IT organizations. Security management is a constant and continuous process, which can be specified in the SLA of the service provider. Identifying risks and improvements to overcome those risks is part of continual service improvement. Security management in ITIL is based on ISO/IEC 27002.

e. National Institute of Standards and Technology (NIST): NIST [3][20] has released two drafts, "Guidelines on Security and Privacy in Public Cloud Computing" and "The NIST Definition of Cloud Computing" (Draft Special Publications 800-144 and 800-145), giving a broad set of standards and guidelines that, although oriented to the US Government, are widely adopted by most IT industries.

Likewise, many standards bodies and organizations focus their work on their own countries' needs. Their work gives us valuable guidance and suggestions to improve security controls and architecture [4][5][10][11].
Users generally expect their service at optimal cost with good performance. Besides providing service at optimal cost, a cloud provider should give reliable, trustworthy, scalable services by adopting legal and regulatory compliance in addition to the CIA security triad. If security is not ensured, it will raise problems for the other features as well. So enhancing the security of the cloud is very important for making full use of its attractive features.

Proposed Method

Cloud computing means accessing services through the Internet. Apart from IaaS, SaaS, and PaaS, XaaS is possible in the cloud, and users are charged on a pay-per-usage, utility-computing model. Fig. 2 shows the general cloud computing model. The cloud uses virtualization as its key technology [8][9]. When an end user submits a requirement, a separate virtual machine is created to run their specific application. On a single host machine, multiple virtual machines can run to utilize the resources. Fig. 3 shows the virtualization technique used for the cloud. Virtualization [6] allows multiple instances of the same application to run on one or more cloud resources, which automatically provides scalability when more users want to run their applications. It gives each user the impression that their application is running on a dedicated virtual machine, and end users cannot see other users' data, so proper isolation of virtual machines is important. Fig. 4 shows multiple VMs running on a single host machine with a hypervisor.

Multiple instances of an application can run on a single server or on many servers, provided by a trusted cloud service provider. A cloud user selects a provider based on its quality of service and makes a service level agreement covering, for instance, uptime, response time, throughput, scalability, and security issues. A cloud service can be deployed directly on a cloud provider without CAPEX. Many cloud providers exist; Amazon, Microsoft, IBM, and Google are among the major ones. Fig. 5 shows service deployment on an external cloud provider, CloudBees [7][17], which in turn utilizes resources from Amazon. Many providers give a free tier for deploying services in the cloud; once the limit is crossed, the user is charged based on usage. So anybody who wants to start a company can do so without CAPEX, and the technology is flourishing.

An automated monitoring tool along with virtualization will address the security problems of the cloud. The monitoring tool checks for port scanning as well as service scanning and protocol scanning, and it can inspect incoming requests for a service along with the route IP from which each request was generated (a simple sketch of such a monitor is given below). Fig. 6 shows a system running some unknown services. Combining authentication and authorization (JAAS) with service policy monitoring, besides updating virtual machines periodically, will enhance the security of the cloud. A virtual machine can be allocated for each user requirement; isolating each virtual machine from the others is important. Whenever a hidden service, an unknown service, or another vulnerability is found, a report can be generated, and appropriate steps can be taken based on the vulnerability assessment report. By taking appropriate snapshots, we can bring the system back to a previously running state in case of failure. Easy rebuilding of a virtual machine is possible, along with all the currently running applications.
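A minimal sketch of such an automated service monitor follows, assuming a simple TCP connect scan and a hypothetical watchlist of expected ports; a real deployment would combine this with a vulnerability scanner and feed the results into the policy engine.

```python
import socket
from datetime import datetime

# Hypothetical watchlist: ports a VM is expected to expose. Anything else
# that accepts a connection is flagged as an unknown/hidden service.
EXPECTED_PORTS = {22, 80, 443}

def scan_vm(host, ports=range(1, 1025), timeout=0.5):
    """Return the set of open TCP ports on a VM (simple connect scan)."""
    open_ports = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.add(port)
    return open_ports

def report(host):
    unknown = scan_vm(host) - EXPECTED_PORTS
    if unknown:
        print(f"[{datetime.now()}] {host}: unknown services on ports {sorted(unknown)}")
    else:
        print(f"[{datetime.now()}] {host}: no unexpected services")

# report("10.0.0.5")   # example VM address (illustrative)
```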
This is because virtual-to-physical and physical-to-virtual migration is easy; teleporting is one of the nice features of virtualization. From the images taken, virtual machines can easily be rebuilt, and cloning a virtual machine is also very easy. So when many requests come in, multiple instances can be generated by cloning the already-running service instance machine, meeting the scalability requirement. Moreover, because virtualization is the key technology, effective utilization of resources is one of the most promising features of cloud computing. Since many virtual machines can run on a single host machine with the help of a hypervisor, power is saved, which points the way to green computing. Some of the key points for maintaining the security of the cloud are listed below:

• If any VM is intruded upon, or any kind of error occurs, it can be isolated separately without affecting other VMs, since isolation is possible between virtual machines.

• By carefully monitoring resource utilization (CPU, memory, network, ports), an intruded virtual machine can be identified and isolated.

• Appropriate steps can then be taken to recover the VM, or it can be destroyed for security purposes.

Beyond security, improvements in availability and performance are possible, as is moving a service from one provider to another; bringing a service back to one's own infrastructure is also easy with virtualization, using the virtual-to-physical migration concept.

Conclusion

Nowadays, most vendors provide free usage tiers for deploying services in their clouds, with certain limits such as hours of usage, hard disk space, storage, data transfer, or number of end users. Cloud users and developers have to choose a vendor based on its service level agreements, security service standards, and compliance. Even though the cloud has some serious issues involving security, privacy, and social and political concerns [14][15], cloud computing is going to be one of the major technologies of the future. Cloud users should understand that their own networks, systems, applications, and data are moving to an unknown network, which poses serious threats to security and privacy. Using virtualization, servers, resources, networks, desktops, applications, operating systems, and storage can be virtualized. One of the major concerns in the future is computing with less power. With virtualization, apart from flexibility, scalability, security, utilization of underutilized or idle resources, manageability, and cost-effectiveness, cloud computing takes less power, since more than one virtual machine can run on a single physical machine. The monitoring technique discussed above, along with virtualization, helps the provider achieve full security of the virtual machines. Since virtualization is the key technique of cloud computing, enhanced secure cloud services can be achieved with the help of automated monitoring techniques. In the future, like IaaS, PaaS, and SaaS, Anything as a Service (XaaS) will be possible and can be achieved through virtualization. The challenges and issues of cloud computing can be overcome by combining JAAS, virtualization, and an automated service monitoring tool with a proper SLA agreement.
Construction and Practice of a Cultivation System for Innovative Talents in Cyberspace Security

In June 2015, the Academic Degrees Committee of the State Council of China and the Ministry of Education decided to add the first-level discipline of "cyberspace security" under the category of "engineering". The establishment of this first-level discipline is aimed at implementing the national security strategy; it is not only conducive to speeding up the training of high-level talents for cyberspace security but also puts forward higher requirements for discipline construction and the personnel training system. Based on an analysis of the concept and connotation of cyberspace security, this paper summarizes more than 10 years of experience of Chengdu University of Information Technology in the construction of the information security specialty and gives some suggestions on the construction and practice of a cultivation system for innovative talents in cyberspace security.

INTRODUCTION

In 2008, US Presidential Decree No. 54 defined cyberspace: cyberspace is an overall area in the information environment composed of independent and interdependent information infrastructure and networks, including the Internet, computer systems, and embedded processor and controller systems. Cyberspace has gradually developed into the fifth strategic space after land, sea, air, and outer space. Cyberspace security studies security threats and protection issues in cyberspace. In a confrontation environment, it studies the threats faced by information in the production, transmission, storage, and processing links and the corresponding protective measures, as well as the threats to, and protection mechanisms of, the network and the system itself. Cyberspace security includes not only the integrity, confidentiality, and availability of information studied in traditional information security, but also the security and credibility of the infrastructure of cyberspace.

With the improvement of social informatization and the rapid development of information technology, the network has been deeply integrated with traditional industries, and the Internet penetration rate in China is rising rapidly. As of June 2020, the number of Internet users in China was 940 million, the Internet penetration rate was 67.0%, and the number of national and regional top-level domain names ".CN" was 23.04 million; the number of online shopping users was 749 million, and the number of online government service users reached 773 million. Since 2013, China has been the world's largest online retail market for seven consecutive years; in the first half of 2020, online retail sales reached 5,150.1 billion yuan. Network security is related not only to the national economy and people's livelihood but also to national security: it involves the country's political, military, and economic affairs and affects national security and sovereignty. To build a national information security system and ensure the security of important departments and infrastructure, a large number of cyberspace security professionals are needed.
In order to fully implement Xi Jinping's Thought on Socialism with Chinese Characteristics for a New Era and follow the spirit of the National Medium- and Long-term Education Reform and Development Plan (2010-2020) and the National Medium- and Long-term Talent Development Plan (2010-2020), colleges and universities should set up professional talent training programs reasonably, innovate talent training models, and establish a talent view oriented toward comprehensive development and application. Strengthening the school-running characteristics of cyberspace security and related majors such as information security, and adapting them to the needs of the new situation, has important practical and social significance.

THE CONSTRUCTION OF A CULTIVATION SYSTEM FOR INNOVATIVE TALENTS IN CYBERSPACE SECURITY
The talent training system of colleges and universities generally includes the subject system, the curriculum system, the teaching system, and the teaching quality monitoring and guarantee system. The professional talent training program is the compass for achieving the goal of talent training, the basic way for colleges and universities to realize talent training specifications, and an important guarantee for comprehensively improving the quality of talent training. The information security and other cyberspace-security-related majors of Chengdu University of Information Technology pay close attention to the improvement of education and teaching quality and continuously strengthen professional construction and teaching reform. After more than ten years of construction, reform, and innovation, a distinctive information security major has been formed. It has played a demonstrative and leading role in the training of information security talents in local colleges and universities, and it is a provincial-level "first-class discipline construction" discipline in Sichuan. The training goal of this major is to cultivate, in line with the actual needs of China's socialist economic construction, students who develop in an all-round way morally, intellectually, physically, and aesthetically; who possess information security technical knowledge and reasoning ability as well as abilities in information security detection, system design and maintenance, security assurance, and information security application development; who possess professional competence and a correct attitude, independent learning ability, organizational and collaborative working ability, engineering management ability, and comprehensive competitiveness; in short, information security application engineering talents with an innovative spirit and international awareness. By 2020, Chengdu University of Information Technology had trained 14 cohorts of undergraduates and 13 cohorts of postgraduates in information security, totaling more than 3,000 undergraduates and about 200 graduate students.

EXPLORATION AND PRACTICE OF A CULTIVATION SYSTEM FOR INNOVATIVE TALENTS IN CYBERSPACE SECURITY
The cyberspace security major builds on multiple disciplines such as mathematics, computer science, and cryptography, and it is strongly application-oriented. In the practice of the innovative talent training system, the construction of the relevant university faculty should be strengthened, and cross-penetration of multidisciplinary knowledge should be promoted.
Universities should adopt a diversified teaching model, attach importance to the construction of relevant course groups, emphasize the cultivation of students' practical ability, strengthen teaching supervision, and summarize and optimize existing experience, so as to further improve the quality of talent training and reform the innovative talent training system.

Faculty Building
The construction of the faculty is a prerequisite for ensuring the quality of talent training and improving academic standards, and it has a decisive influence on the development of higher education. In recent years, Chengdu University of Information Technology has issued supporting policies to encourage teachers to continue their studies; it actively introduces talents with high academic qualifications and senior professional titles, adopting a "one person, one discussion" approach and providing talent introduction fees and scientific research start-up funds; and it hires front-line engineers with rich engineering experience from enterprises and institutions to serve as "dual-qualified" tutors, who come to the school regularly or irregularly to teach students and exchange experience. Through these measures, the university has built a high-quality teaching team with a reasonable age and knowledge structure, rich engineering experience, and continuous innovation capability.

Diversified Teaching Mode
The information security major is highly practical. On the basis of strengthening the study of theoretical knowledge, different forms of practical activities are carried out according to the direction of training. In specific practical teaching, the traditional lecture-based method is replaced by a combination of teaching modes such as problem-oriented learning, project learning, experiential learning, games and simulation learning, flipped classrooms, online or blended learning, group debate or discussion, peer-based teaching, group learning, team presentations, and role-playing; these highlight actual combat, enhance students' practical combat ability, and improve the effect of practical teaching. Chengdu University of Information Technology holds four competitions every year: the Campus Information Security Technology Competition, the Sichuan University Student Information Security Technology Competition, the Geek Challenge, and the Class Competition. At the same time, it actively organizes and supports students to participate in nationwide information-security-related competitions, so that students are immersed in the atmosphere of competition throughout the year and are exposed to the latest information security technology with practical application value. Taking 2017 as an example, students majoring in information security won 8 awards at the provincial level and above, with 38 winners, and the program gained a certain degree of recognition across the country.

Course Group and Characteristic Course Construction
Course group construction is the basic way to cultivate students' professional ability, and it plays a decisive role in cultivating students' professional quality. Based on basic computer courses such as "C Language Programming", "Computer Network", and "Database Principles and Applications", Chengdu University of Information Technology has built a course group with a reasonable structure, clear levels, and mutual connection between courses.
At present, an information security technology course group has been established, covering two professional directions: security detection and vulnerability research. Professional courses in the security detection direction include "PHP Programming", "Network Attack and Defense", "Advanced Attack and Defense Technology", and "Code Security Audit"; courses in the vulnerability research direction include "Assembly Language Programming", "Virus Principles and Prevention", "Reverse Engineering Technology", and "Vulnerability Principles and Mining". These courses are forward-looking and practical; they have been developed in cooperation with many domestic companies and are ready to be promoted to other universities. In addition, the "Applied Cryptography" course has been recognized as an excellent course of Sichuan Province and an excellent resource-sharing course of Sichuan Province.

Optimize Course Assessment Methods
In the assessment process, Chengdu University of Information Technology divides the assessment of most courses into a regular/process assessment and a final assessment. The assessment may include, but is not limited to, the following methods: class discussion, attendance, oral reports, homework, mid-term assessment, individual defense, group defense, course essays, design reports, design plans, internship reports, experimental reports, operational skills, presentation of works, oral examination, closed-book written examination, and open-book written examination. In principle, the higher the proportion of regular grades, the richer the content of the regular assessment, and the more the outputs are closely related to the course content (such as displayed works, experimental reports, and course papers). At present, 7 courses, including "C Language Programming", "Database Principles", and "Data Structures", are subject to process assessment: after each chapter is completed, students are periodically assessed online. This has changed the traditional "one exam decides everything" assessment in the final examination and has formed a sound assessment mechanism.

Engineering Practice Reform
Under the traditional two-semester model, students' practical exercise time is very limited; they can train in engineering practice only through computer exercises, internships, and experiments, while employers place ever higher requirements on "relevant work experience". Therefore, Chengdu University of Information Technology makes engineering practice a compulsory professional course, divided into Engineering Practice 1-5 corresponding to semesters 2-5, and arranges related engineering practice topics around the 12th week of each semester, with code inspection, demonstration, and defense arranged every week. Taking the penetration testing direction of information engineering as an example, engineering practice follows the principle of being "combat-oriented and moderately advanced". Its content covers C language programming practice; mastering the basic concepts of security and becoming familiar with the principles of common Web vulnerabilities; mastering the common tools of penetration testing; understanding the concepts of user permissions, port forwarding, and intranets (domain environments, peer-to-peer networks), and becoming familiar with common intranet attack methods, privilege escalation, and forwarding gateway or routed traffic; and mastering the writing of Python scripts, penetration attacks, permission maintenance, trace removal, and defense methods.
After several years of practice, students' engineering practice ability has improved, and their professional practice level and employment quality have risen significantly.

CONCLUDING REMARKS
Since the "PRISM" incident in 2013, cyberspace security has become a hot spot of worldwide concern, and it has become an important part of the national security strategy. Following the approval by the Ministry of Education in 2001 of the establishment of an information security major, "cyberspace security" was added as a national first-level discipline in June 2015. The establishment of the first-level discipline is aimed at implementing national security strategies; it not only helps to accelerate the training of high-level cyberspace security talents but also places higher requirements on discipline construction and the talent training system. Based on an analysis of the concept and connotation of cyberspace security, this article has summarized more than 10 years of experience in information security professional construction at Chengdu University of Information Technology and has given suggestions on the construction and practice of a cultivation system for innovative talents in cyberspace security. This paper is supported by Sichuan Science and Technology Program 2019YFG0408 and 2018GZDZX0011.
Searching for road deformations using mobile laser scanning

Millions of people use roads every day all over the world. Roads, like many other structures, have an estimated durability. In Poland many roads were built at the turn of the 20th and 21st centuries, mainly for light cars. Many of these roads now carry traffic, including heavy goods vehicles, that was not predicted when the traffic was first estimated. This creates many problems with their technical condition, and the infrastructure must be improved. Treatments can be problematic, because restoring the original properties of a road requires workers to restrict traffic, as cars may cause a lot of damage to the construction. In this article the authors present a method for estimating the condition of a road using the MLS (Mobile Laser Scanning) measurement technique, based on a mobile platform equipped with the Riegl VMZ-400 scanning system. Post-processing of the data is restricted to extracting the scan lines and analysing the road's condition in order to estimate its parameters. In conclusion, the authors present the advantages and disadvantages of mobile laser scanning of roads, and discuss the possibility of determining factors which describe the safety level of the roads.

Introduction
Millions of people use roads every day all over the world. Roads, like many other structures, have an estimated durability. In Poland many roads were built at the turn of the 20th and 21st centuries, especially for light cars [1]. Many of these roads carry traffic and heavy goods vehicles which were not predicted when the traffic was first estimated. This creates many problems with technical condition, and the infrastructure must be improved. Treatments can be problematic, because restoring the original properties of the road requires workers to restrict traffic, as cars may cause a lot of damage to the construction. Another issue, apart from repairing roads, is traffic accidents. Most of them are caused by the incompetence of drivers who exceed speed limits; and even when the speed limit is respected, the technical condition of the roads combined with bad atmospheric conditions can cause very problematic situations. To reduce accidents, it is very important to implement the safety policy proposed by the transportation office. To create a safety plan, many factors have to be taken into consideration, such as the environment. Noise pollution in cities is a problem created by roads; a solution is to predict the level of traffic noise, which is possible by studying the average level of traffic over a period of time (for example, traffic volume or average traffic speed) [2]. Because the technical condition of roads is so important, we must remember that roads behave dynamically: when developing a proper safety plan and action plans, the condition of the roads cannot be ignored. Apart from environmental factors, an inventory has to be conducted. It is possible to measure road condition using manual or automatic techniques. Manual techniques are very time-consuming, so the best solution is to use innovative measuring techniques which are very efficient and accurate at detecting defects [3]. In this study mobile laser scanning was used. The dataset was acquired with the Riegl VMZ-400 scanning system. The workflow was designed to acquire the data, post-process them, and analyse them to create a defect map in which cracks in the road were detected.
Data acquisition and data processing
To acquire the spatial data, the Riegl VMZ-400 hybrid system was used. "Hybrid" means that the VZ-400 scanner can be used successfully both as a terrestrial and as a mobile laser scanner. This helps to gather information about the entire scanned object and leaves no areas without points, which is a significant consideration during planning. It is especially important when searching for deformations, because all details of the object have to be carried through to the next steps of data processing. The scanning car obviously cannot drive into every area, so using terrestrial laser scanning as a supplement is a good solution. In circumstances where the global accuracy should be very high, the TLS data could be used as the reference [4] for the mobile data.

The mobile scanner is placed on a platform. Using an IMU (Inertial Measurement Unit) and a GNSS receiver, the scanning system is created, assisted by a DMI (Distance Measurement Indicator). To adjust all the components into one system, the offsets of the devices' centres have to be estimated with very high accuracy. The Riegl VMZ-400 has two mounting configurations: vertical and horizontal (Fig. 2 shows the horizontal mounting [5]). To prepare a proper calibration providing correct measurements, the correlations between the devices have to be evaluated. The evaluation is based on the directions of the axes of the system (the X-axis should be turned in the driving direction, Z towards the ground, Y to the right) and on the roll, pitch, and yaw values (Fig. 3). The evaluation of the collected data was made after the trajectory alignment, based on GNSS reference network stations (ASG-EUPOS); in this paper, however, the global accuracy of the data was not as important as the accuracy of the deformations (e.g. cracks), their size, and the precision of the points. The result of the scanned data is presented in Fig. 4 (the result of road scanning using the Riegl VMZ-400 mobile system). It has to be mentioned that, when accuracy is important, not only should the reference network be created but the GPS positioning should also be evaluated [9], [10], [11,12].

After evaluation of the data, the filtering step was performed. The terrain filter used is based on a hierarchic model of the data in which numerous estimations are made: for each point it is established whether the point is located on the plane described as the ground. The algorithm starts with a level represented by the X/Y plane of the project coordinate system and estimates robust planes in the subsequent stages. Points marked during these stages are labelled as not belonging to the ground. With this operation it is possible to remove noise from cars which interrupted the measurements, from vegetation if any, and noise which cannot otherwise be defined; a minimal sketch of such a plane-based ground filter is given below. The last preparation step before searching for cracks and ruts is to triangulate the data, a semi-automatic process which creates surfaces by connecting the points with triangles. The results of processing the point cloud are presented in Fig. 5.

Experiment results
The main goal of this paper was to propose an application for searching for cracks in roads using mobile laser scanning. During the analysis of the data, the authors found that, based on the amplitude searching method, it is possible to indicate ruts or road infrastructure such as manholes. The results of this classification are presented in Fig. 7 (road classification: green, places not affected by the traffic; aqua, ruts; purple, lines and cracks).
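Returning to the filtering step described above, the following Python sketch illustrates the underlying idea: fit a plane to the point cloud by iterative least squares and label points far from the plane as non-ground. This is a minimal sketch under our own simplifying assumptions (a single dominant road plane and a fixed distance threshold); the software used in the study implements a considerably more elaborate hierarchic variant.

import numpy as np

def fit_plane(points):
    # Least-squares plane z = a*x + b*y + c through the given points.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def ground_filter(points, threshold=0.05, iterations=5):
    # Iteratively re-fit the plane using only points close to it.
    # Returns a boolean mask: True = ground, False = off-ground
    # (cars, vegetation, undefined noise).
    mask = np.ones(len(points), dtype=bool)  # start: all points assumed ground
    for _ in range(iterations):
        a, b, c = fit_plane(points[mask])
        # Vertical distance of every point from the current plane estimate.
        dist = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
        mask = dist < threshold  # keep only points near the plane
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    road = rng.uniform(0, 10, size=(1000, 2))
    cloud = np.c_[road, 0.01 * rng.standard_normal(1000)]   # flat road surface
    cloud[:50, 2] += rng.uniform(0.5, 2.0, size=50)          # cars / vegetation
    ground = ground_filter(cloud)
    print(f"{ground.sum()} ground points, {(~ground).sum()} filtered out")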
Fig. 6 shows the amplitude differences. To precisely indicate places with cracks and ruts, amplitude values must be estimated for areas that are not affected by the traffic [15]. Fig. 6 presents the amplitude values of terrain not affected by traffic, terrain affected by the weight of traffic (based on this property the rut paths can be estimated), and cracks in the road. Interestingly, painted lines on the roads have the same amplitude as cracks. Given that, it is crucial to classify the road using the reflectivity function of the laser beam; a minimal sketch of such an amplitude-based classification is given after the conclusions below.

Conclusions
Mobile laser scanning (MLS) can be very useful for searching for road deformations. During the experiment the authors showed that the application can provide valuable evidence of a road's technical condition. A full analysis of the roads (for example, to determine the safety factor of the paths) needs to combine geodetic measurement methods (such as photogrammetric methods, terrestrial laser scanning, and mobile laser scanning). Only by applying a complex and professional approach to the problem of estimating the roads' condition can we indicate the technical and organizational threats and help to improve safety for all road users [13], [17-19]. It has to be mentioned that if the density of the point cloud is too low to analyse, it is possible to use a high-resolution point cloud collected with a method other than laser scanning [20], or to use different monitoring methods [21].
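To make the amplitude-based classification described above concrete, the sketch below assigns each point of a point cloud to one of three classes using two amplitude thresholds. The threshold values and the class structure are our own illustrative assumptions; in practice the thresholds would be calibrated from reference areas not affected by traffic, as the text explains.

import numpy as np

# Illustrative amplitude thresholds (arbitrary units); in a real survey
# they would be calibrated from reference surfaces not affected by traffic.
RUT_MAX = 40.0      # amplitudes below this: compressed rut paths
CRACK_MAX = 20.0    # amplitudes below this: cracks or painted lines

def classify_by_amplitude(amplitudes):
    # Return an integer class per point:
    # 0 = unaffected road surface, 1 = rut, 2 = crack or painted line.
    classes = np.zeros(len(amplitudes), dtype=int)
    classes[amplitudes < RUT_MAX] = 1
    classes[amplitudes < CRACK_MAX] = 2
    return classes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    amp = np.r_[rng.normal(60, 5, 800),   # unaffected surface
                rng.normal(30, 3, 150),   # ruts
                rng.normal(10, 2, 50)]    # cracks / painted lines
    labels = classify_by_amplitude(amp)
    for cls, name in enumerate(["unaffected", "rut", "crack/line"]):
        print(name, int((labels == cls).sum()))

Note that, as observed in the study, amplitude alone cannot separate painted lines from cracks; a further reflectivity-based step would be needed for that distinction.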
Sequencing of SARS-CoV-2 genome using different nanopore chemistries

Abstract. Nanopore sequencing has emerged as a rapid and cost-efficient tool for diagnostic and epidemiological surveillance of SARS-CoV-2 during the COVID-19 pandemic. This study compared the results from sequencing the SARS-CoV-2 genome using R9 vs R10 flow cells and a Rapid Barcoding Kit (RBK) vs a Ligation Sequencing Kit (LSK). The R9 chemistry provided a lower error rate (3.5%) than the R10 chemistry (7%). The SARS-CoV-2 genome includes few homopolymeric regions; the longest homopolymers were composed of 7 (TTTTTTT) and 6 (AAAAAA) nucleotides. The R10 chemistry resulted in a lower rate of deletions in thymine and adenine homopolymeric regions than the R9, at the expense of a larger rate (~10%) of mismatches in these regions. The LSK gave a larger yield and longer reads than the RBK; it also resulted in a larger percentage of aligned reads (99 vs 93%) and in a complete consensus genome. The results from this study suggest that the LSK preparation library provided longer DNA fragments, which contributed to a better assembly of the SARS-CoV-2 genome, despite an impaired detection of variants on an R10 flow cell. Nanopore sequencing could be used in epidemiological surveillance of SARS-CoV-2.

Key points
• Sequencing the SARS-CoV-2 genome is of great importance for pandemic surveillance.
• Nanopore offers a low-cost and accurate method to sequence the SARS-CoV-2 genome.
• Ligation sequencing is preferred over the rapid kit based on transposases.

Introduction
The human pathogen severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in Wuhan City (China) in early 2020 and causes COVID-19, a disease responsible for over one million deaths in less than one year since then. SARS-CoV-2 has spread over the world and has forced marked social, medical, and economic adaptations. Complete genome sequences published in January 2020 (Wu et al. 2020) enabled the development of real-time reverse transcription-polymerase chain reaction (RT-PCR) assays for SARS-CoV-2 detection that have served as the diagnostic standard during the ongoing COVID-19 pandemic (van Kasteren et al. 2020). Sequencing the genome of SARS-CoV-2 has provided relevant information on the mutation rate of the virus, its spreading dynamics, and its zoonotic origin (Boni et al. 2020). Genomic surveillance of SARS-CoV-2 is a key tool for knowing which lineages of the virus are circulating in each country, how often new sources of virus are introduced from other geographical areas, whether control measures are succeeding, and how the virus evolves in response to interventions. Sequencing also provides invaluable insights when linked with detailed epidemiological data for investigating the evolution of the pandemic. All these aspects play a key role in surveillance of the pandemic. Joint efforts have contributed to creating a nomenclature system for the different lineages of the coronavirus. Recent documentation of reinfections has demonstrated that different lineages of SARS-CoV-2 can infect the same person (Tillett et al. 2020). Sequencing the genome of the coronavirus is necessary to confirm those reinfections and exclude medical relapses. Fast and reliable sequencing of samples in hospitals is of major importance for this epidemiological surveillance.
Oxford Nanopore Technologies (ONT, Oxford, UK) has developed several strategies for fast sequencing of the SARS-CoV-2 genome that may be essential for quick diagnosis and for monitoring community transmission of the coronavirus. The objective of this study was to evaluate the performance of different chemistries and sequencing strategies using ONT sequencing of the SARS-CoV-2 genome, in terms of sequencing accuracy, detection of variants, and quality of the genome assembly.

Sample collection
A panel of clinical samples obtained during initial diagnostics of essential personnel from Madrid city hall services (police, firemen, emergency and health care workers, etc.) was included in this study. The sample with the lowest Ct value (19.24) from an in-house version of the recommended E-gene real-time RT-PCR (Corman et al. 2020) was selected for sequencing. RNA was isolated anew from stored clinical samples using the IndiSpin® Pathogen kit (Indical Bioscience, Leipzig, Germany). cDNA synthesis and generation of 400-bp amplicons were conducted using a protocol published by the ARTIC Network (Quick 2020), with primers from V2 (https://github.com/artic-network/artic-ncov2019/blob/master/primer_schemes/nCoV-2019/V2/nCoV-2019.tsv). The selected sample had the largest DNA concentration (15 ng/μl), measured using the Qubit fluorometer (ThermoFisher Scientific, Waltham, MA, USA).

DNA sequencing
The MinION device was used for ONT sequencing. Two ONT kits were used to prepare the DNA libraries: the Rapid Barcoding Kit (SQK-RBK004) and the Ligation Sequencing Kit (SQK-LSK109). The first library was sequenced using an R9.4 flow cell by loading 175 ng onto the SpotON flow cell, according to the manufacturer's instructions. The other library was sequenced using the R10 flow cell with the same amount of DNA. Both flow cells (FC) ran until exhaustion; most of the reads were obtained during the first two hours of the run. The flow cells were controlled and monitored using the MinKNOW software (version 4.0.20, ONT). Reads were basecalled using Guppy version 4.0.11 (community.nanoporetech.com) and the high accuracy version of the flip-flop algorithm.

Assembly
The SARS-CoV-2 Wuhan strain Hu-1 genome (MN908947) was used as the reference. The reads were aligned against the SARS-CoV-2 genome using the Minimap2 aligner (Li 2016), a general-purpose alignment program that maps DNA or long mRNA sequences against a reference database. The total coverage of the genome for both sets of reads was calculated from the alignments using the genomeCoverageBed utility of the bedtools suite (Quinlan and Hall 2010), quantile-normalized, and smoothed using a window width of 200 bp.

Variant calling
Variant calling and genotyping from the alignment files were performed using LoFreq (Wilm et al. 2012), VarScan (Koboldt et al. 2012), and Pilon (Walker et al. 2014). LoFreq is a fast and sensitive variant caller for inferring single nucleotide variants (SNVs) and indels from next-generation sequencing data; it makes full use of base-call qualities and other sources of error inherent in sequencing. VarScan employs a robust heuristic/statistical approach to call variants that meet desired thresholds for read depth, base quality, variant allele frequency, and statistical significance; this program requires pre-processing the alignment file to generate an mpileup-format file. Pilon identifies small variants with high accuracy compared to state-of-the-art tools and is unique in its ability to accurately identify large sequence variants, including duplications, and to resolve large insertions. Deviations from the reference were analyzed.
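The coverage post-processing described in the Assembly subsection above is straightforward to reproduce. The following Python sketch reads a per-base coverage table (such as the three-column output of genomeCoverageBed -d: chromosome, position, depth) and applies a 200-bp rolling mean. The file name is a placeholder, and the quantile-normalization step is omitted for brevity; this is a minimal sketch, not the exact pipeline used in the study.

import numpy as np

def load_depth(path):
    # Parse genomeCoverageBed -d output: "chrom<TAB>pos<TAB>depth" per line.
    depth = []
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            depth.append(int(fields[2]))
    return np.asarray(depth, dtype=float)

def smooth(depth, window=200):
    # Rolling mean over a fixed window, matching the 200-bp smoothing
    # described in the text ('same' mode keeps the original length).
    kernel = np.ones(window) / window
    return np.convolve(depth, kernel, mode="same")

if __name__ == "__main__":
    depth = load_depth("coverage_per_base.txt")  # placeholder file name
    smoothed = smooth(depth, window=200)
    # log2(x + 1), as used for the coverage plot, avoiding zeros.
    log_cov = np.log2(smoothed + 1)
    print(f"mean depth: {depth.mean():.1f}x, min log2 coverage: {log_cov.min():.2f}")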
De-novo assembly
The de novo assembly was performed using Canu (Koren et al. 2017). Canu is a fork of the Celera assembler designed for high-noise single-molecule sequencing (such as the Oxford Nanopore MinION). In order to improve the genome assembly, we used Pilon to automatically improve the draft assemblies. Pilon requires as input a FASTA file of the assembly along with the BAM files of reads aligned to that FASTA file; it uses read-alignment analysis to identify inconsistencies between the input assembly and the evidence in the reads. The newly assembled genomes were compared with the reference genome using Gepard (Krumsiek et al. 2007) in order to thoroughly check the new assemblies.

Quality check
After quality control of the ONT reads, a total of 16,991 (N50 = 409) and 9,658 (N50 = 1,059) good-quality reads (Table 1) were retained from the R9 and R10 FC, respectively. The R9 yielded 6.48 Mb and the R10 7.73 Mb. ONT runs may yield a much larger number of bases; in this case, however, the amount of DNA was limiting. The R9 yielded 90% of its reads within the first two hours, whereas 40% of the reads were sequenced during the same time on the R10 FC. The GC content distribution was computed for both runs using LongQC (Fukasawa et al. 2020). Longer reads are expected to show a sharper distribution, because they have a smaller deviation due to their longer sequences. The reads were also chunked into 150-bp fragments, which showed the same average GC content although with slightly larger variability. This latter strategy is more robust to sequencing or sample differences, and should be comparable to other data if the same target (biological replicates) is sequenced.

Alignment
Brief statistics of the alignments are shown in Table 1. A larger proportion (98.86%) of reads from R10 were aligned to the reference genome: almost 7% of reads from R9 were not aligned to the reference genome, versus only 1.1% of R10 reads. The alignments show a good quality of reads for the process; however, a larger number of non-sense reads (Fukasawa et al. 2020) were detected from the R9 FC. Non-sense reads are defined as unique reads that cannot be mapped onto sequences of any other molecules in the same library. This concept is similar to unmappable reads; mappability, however, depends on the reference. According to Fukasawa et al. (2020), the non-sense read fraction should be less than 30%; if the fraction of non-sense reads is too high, it might indicate either that sequencing had some issues or simply that coverage is insufficient. Fig. 2 shows the mismatches per aligned read against the SARS-CoV-2 reference genome. The mismatch distributions showed a mode of 3.5% and 7% of read length for R9 and R10, respectively. The R9 FC showed a lower rate of mismatches than R10, although this might be driven by the shorter length of its reads, a consequence of the Rapid Barcoding Kit (SQK-RBK004) used to prepare this library, as this kit requires a transposase fragmentation. The library loaded onto the R10 FC was prepared using the Ligation Sequencing Kit (SQK-LSK109), which does not fragment the DNA and is optimized for throughput; this might explain the slightly larger yield from the library loaded onto the R10 FC. The read accuracy at the homopolymeric sites of each FC was then evaluated.
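A per-read mismatch rate like the one plotted in Fig. 2 can be computed directly from a BAM file of Minimap2 alignments. The sketch below uses pysam and the standard NM tag (edit distance to the reference); note that NM counts mismatches plus indels, so this is an approximation of the pure mismatch rate, and the BAM file name is a placeholder.

import pysam

def per_read_mismatch_rates(bam_path):
    # Yield NM / aligned_length for each mapped read; NM is the edit
    # distance set by the aligner (mismatches + indels).
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_unmapped or read.query_alignment_length == 0:
                continue
            if not read.has_tag("NM"):
                continue
            yield read.get_tag("NM") / read.query_alignment_length

if __name__ == "__main__":
    rates = sorted(per_read_mismatch_rates("r9_vs_reference.bam"))  # placeholder
    if rates:
        median = rates[len(rates) // 2]
        print(f"{len(rates)} mapped reads, median error rate {100 * median:.1f}%")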
Homopolymeric sites in the SARS-CoV-2 reference genome were located with SeqKit (Shen et al. 2016). The longest homopolymeric regions in the SARS-CoV-2 reference genome were composed of thymine (7 nucleotides) and adenine (6 nucleotides), whereas the longest guanine and cytosine homopolymers were 3 nucleotides. Mismatches (Fig. 3A) and deletions (Fig. 3B) at homopolymer sites with length >2 were considered. The R9 chemistry produced a lower number of mismatches at the homopolymeric sites but was more prone than R10 to produce deletions, mainly at thymine and adenine homopolymeric sites. The R10 chemistry produced a larger rate of mismatches but a lower rate of deletions, although a deletion rate >20% was observed at homopolymers <4 nucleotides. Both chemistries showed larger accuracy at adenine and thymine homopolymeric sites than at cytosine and guanine sites. Fig. 4 shows the plot of log2 of the normalized, smoothed coverage plus 1 (to avoid zeroes), generated using GNUPLOT (Williams and Kelley 2011). Most regions showed a coverage above 50×, and a coverage >200× was obtained for many regions regardless of the FC type. Both FC types had lower coverage in the same regions. This indicates that the primers used produce good coverage of the whole SARS-CoV-2 genome, although the 19k-20k region and the terminal region showed lower coverage than the rest of the genome.

Variant calling
Indels and SNP (single nucleotide polymorphism) variants were identified using three software packages: LoFreq, Pilon, and VarScan. There was large variability in the number of detected variants between the different programs; LoFreq, for example, detected a larger number of SNP variants from R9 than from R10 (25 vs 9). Common SNPs detected in both FC were consistent, with 6-7 SNPs detected in common in both FC in all the analyses. Figs. 5 and 6 show the Venn diagrams of common SNPs by FC and by software, respectively. The common SNP variants were detected at frequency >0.5, as shown in Fig. 7 (allele frequencies of SNPs located in the SARS-CoV-2 genome detected by VarScan, LoFreq, and Pilon). Requiring a large frequency (>0.60) for the variants detected from nanopore long reads was consistent with the SNP variants detected by the several software packages. Eight indels were reported in common from R9 and R10; however, none of them were found at frequency >0.5 (Fig. 8). Forty-seven other indels were detected only from the R9 FC, and 31 indels only from R10. The combination of these strategies (i.e., requiring a large variant frequency and selecting the variants reported in common by different software) seems to be a reliable strategy for variant calling from error-prone long reads; a minimal sketch of this filtering is given below.

De-novo assembly
Figs. 9 and 10 show the dotplot comparisons between the SARS-CoV-2 reference genome and the assemblies from the R9 and R10 FC. Note that the R9 assembly is much shorter and sparser than the R10 assembly, with a larger number of contigs.
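The consensus strategy described above (keep a variant only if it is reported by several callers and its allele frequency exceeds a threshold) is easy to express in code. The following Python sketch operates on pre-parsed variant dictionaries rather than raw VCF files, so the input format is our own simplification; the positions, alleles, and frequencies in the demo are invented for illustration only.

def consensus_variants(callsets, min_callers=3, min_af=0.5):
    # callsets: list of dicts mapping (pos, ref, alt) -> allele frequency,
    # one dict per variant caller (e.g. LoFreq, VarScan, Pilon).
    # Keep variants reported by at least `min_callers` callers whose mean
    # allele frequency is at least `min_af`.
    kept = {}
    for key in set().union(*callsets):
        freqs = [cs[key] for cs in callsets if key in cs]
        if len(freqs) >= min_callers and sum(freqs) / len(freqs) >= min_af:
            kept[key] = sum(freqs) / len(freqs)
    return kept

if __name__ == "__main__":
    # Invented example calls: (position, ref, alt) -> allele frequency.
    lofreq  = {(241, "C", "T"): 0.97, (3037, "C", "T"): 0.95, (11083, "G", "T"): 0.22}
    varscan = {(241, "C", "T"): 0.96, (3037, "C", "T"): 0.94}
    pilon   = {(241, "C", "T"): 0.98, (3037, "C", "T"): 0.93, (14408, "C", "T"): 0.40}
    kept = consensus_variants([lofreq, varscan, pilon], min_callers=2)
    for (pos, ref, alt), af in sorted(kept.items()):
        print(f"{pos}{ref}>{alt}  AF={af:.2f}")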
Discussion
A clinical sample with RT-qPCR Ct = 19.84 for SARS-CoV-2 was used in this study to amplify and sequence the coronavirus genome using nanopore long reads. Two chemistries and two different protocols were used: a library prepared with the Rapid Barcoding Kit (SQK-RBK004) was sequenced on an R9 FC, whereas a library prepared with the Ligation Sequencing Kit (SQK-LSK109) was sequenced on an R10 FC. Both runs yielded similar coverage of the SARS-CoV-2 reference genome, and the R9 FC showed a smaller number of mismatches against the reference genome. The Rapid Barcoding Kit resulted in shorter sequences, as expected, since it uses a transposase fragmentation to insert the barcodes. The alignments showed a large number of variants, but most of them at low frequency. Setting a threshold of 0.50 for the variant frequency led to a consistent number of 6 to 7 variants per FC and library preparation kit, regardless of the software used. Despite the larger number of mismatches from R10, the consensus sequence yielded the same detected variants as R9. Previous attempts to sequence the SARS-CoV-2 genome have provided accurate results for new variant discovery using rapid workflows based on the ARTIC protocol (Li et al. 2020; Moore et al. 2020); other studies also performed amplicon sequencing using ONT Flongle flow cells. A de novo assembly was attempted from both runs. In this case, the R10 FC yielded a more complete genome than the R9 run. We interpret that the larger size of the fragments and the larger number of Mb obtained explain this better behavior, which is mainly due to the library preparation with the Ligation Sequencing Kit rather than to the R10 chemistry. Bull et al. (2020) achieved highly accurate consensus-level sequences, with SNVs detected at >99% sensitivity and >99% precision. Our results show that nanopore sequencing offers a robust consensus sequence regardless of the chemistry used; this may help to optimize time and future protocols when sequencing SARS-CoV-2 using ONT. Although the R10 FC is expected to achieve higher accuracy at homopolymeric regions than the R9 FC, the errors are corrected in the consensus sequence. It must also be pointed out that a de novo assembly is not strictly necessary for genomic epidemiology surveillance: mapping against a reference genome can be used instead, which also facilitates variant discovery. All called mutations had already been described in the GISAID database (https://www.gisaid.org/) by the date the sample was collected, except a mutation at position 21,727 with a C:T substitution in the S protein, which implies an amino-acid change (S:P). This mutation was not observed further in the databases; hence we hypothesize either that it is a sequencing error or that the transmission of this mutation was unsuccessful due to the strict lockdown imposed in Madrid from March to May 2020. The consensus sequences from both FC were submitted to the pangolin (v2.0) website tool. Both consensus sequences belonged to the B1 lineage (Boni et al. 2020), which carries the D614G mutation. This variant is thought to have arisen in Italy at the beginning of the pandemic in Europe and to have spread across Europe and later overseas. The mutation has been related to larger infectivity (Korber et al. 2020) and has previously been detected in other studies in Spain (Díez-Fuertes et al. 2020).

Fig. 10. Dotplot comparison of the SARS-CoV-2 reference (x-axis) vs. the R10 assembly (y-axis).

It must be pointed out that an important limitation of this study is that each library preparation kit was tested on only one of the FC chemistries. It would have been informative to test the performance of SQK-LSK109 on R9 and of SQK-RBK004 on R10; this was not possible due to the limited availability of DNA material. Nonetheless, we were able to extract some conclusions regarding the benefits and drawbacks of each chemistry and library preparation protocol with respect to their suitability for sequencing the SARS-CoV-2 genome.
The results from this study suggest that the R10 chemistry does not improve the quality of the sequenced SARS-CoV-2 genome, and that the Ligation Sequencing Kit is preferred for whole-genome sequencing, as it yielded a higher sequencing depth and a much better genome assembly. This should be the kit of choice, particularly at low initial DNA concentrations as in this study. Nanopore sequencing offers a quicker-turnaround option for genomic epidemiology surveillance.
Novel Anti-bacterial Activities of β-defensin 1 in Human Platelets: Suppression of Pathogen Growth and Signaling of Neutrophil Extracellular Trap Formation

Human β-defensins (hBDs) are antimicrobial peptides that curb microbial activity. Although hBDs are primarily expressed by epithelial cells, we show that human platelets express hBD-1, which has both predicted and novel antibacterial activities. We observed that activated platelets surround Staphylococcus aureus (S. aureus), forcing the pathogens into clusters that have a reduced growth rate compared to S. aureus alone. Given the microbicidal activity of β-defensins, we determined whether hBD family members were present in platelets and found mRNA and protein for hBD-1. We also established that hBD-1 protein resides in extragranular cytoplasmic compartments of platelets. Consistent with this localization pattern, agonists that elicit granular secretion by platelets did not readily induce hBD-1 release. Nevertheless, platelets released hBD-1 when they were stimulated by α-toxin, an S. aureus product that permeabilizes target cells. Platelet-derived hBD-1 significantly impaired the growth of clinical strains of S. aureus. hBD-1 also induced robust neutrophil extracellular trap (NET) formation by target polymorphonuclear leukocytes (PMNs), a novel antimicrobial function of β-defensins that had not previously been identified. Taken together, these data demonstrate that hBD-1 is a previously unrecognized component of platelets that displays classic antimicrobial activity and, in addition, signals PMNs to extrude DNA lattices that capture and kill bacteria.

Introduction
When bacteria enter the circulatory system, platelets are among the first cells they encounter [1]. It is well known that bacteria directly and indirectly induce platelet activation [2], but emerging evidence indicates that platelets also alter the activity of bacteria via the release of microbicidal proteins. Collectively, these are termed platelet microbicidal proteins (PMPs) and include chemokines, fibrinopeptides, and thymosin β-4 [3]. Platelets deliver PMPs to sites of infection, where they exert direct antimicrobial effects and potentiate the antibacterial properties of leukocytes [3,4,5,6]. Whether PMPs represent the full repertoire of platelet antimicrobial peptides is not known. Other cells use defensins to counter invading bacteria [7]. Defensins are divided into α, β, and θ family members that differ in structure, activity, and sites of expression [7]. Human α-defensins are very abundant in the microbicidal granules of polymorphonuclear leukocytes (PMNs) and Paneth cells, whereas β-defensins (hBDs) are widely expressed in epithelium [7,8,9,10,11,12]. This indicates that hBDs serve as a first line of innate defense against invasive pathogens [13]. Their primary mode of action is to insert into cell membranes, which allows hBDs to permeabilize and kill bacteria [14]. In general, hBD-1 is constitutively expressed, while hBDs 2-4 are induced in response to infectious or inflammatory stimuli. Other mechanisms of bacterial killing have recently been identified, including the formation of neutrophil extracellular traps (NETs) by PMNs [15]. NETs are lattices of DNA, histones, and granule enzymes that are released when stimulated PMNs undergo a unique form of cell death [15,16]. These DNA-rich NET complexes capture and kill bacteria. Platelets have been reported to induce NET formation by PMNs in sepsis [17].
The signaling factors expressed by platelets that induce NET formation were not identified, however. Here, we explored mechanisms by which platelets directly kill bacteria and, in parallel, actuate NET formation by PMNs. We found that human platelets store hBD-1 in extragranular compartments, identifying a previously unknown platelet antimicrobial factor. Platelets release hBD-1 when they are exposed to lytic toxins, and hBD-1 retards the growth of clinical strains of S. aureus. In addition, we demonstrate for the first time that hBD-1 induces NET formation by human PMNs.

Ethics Statement
All studies were approved by the University of Utah Institutional Review Board committee. Written informed consent was provided by study participants and/or their legal guardians.

Platelet and Bacterial Interaction Studies
Platelets were freshly isolated from human subjects as previously described and exposed to CD45 positive selection, which effectively depletes contaminating leukocytes [18,19]. The leukocyte-depleted platelet preparations were resuspended in M199 culture medium. Unless otherwise indicated, S. aureus were clinical isolates from two sepsis patients and were stored at −80 °C. Twenty-four hours prior to each study, a portion of the bacteria were expanded on blood agar plates overnight at 37 °C until they reached the stationary growth phase. The bacteria were resuspended in phosphate-buffered saline (PBS), and their concentration was determined by colorimetry (VITEK Colorimeter, bioMérieux, Inc., Durham, N.C.). The S. aureus were then resuspended in M199 culture medium. For each study, S. aureus (3×10^7 total) was incubated in the presence or absence of freshly isolated platelets (1×10^8/ml) for four hours in M199 culture medium unless otherwise indicated. After this incubation period, which provided an environment for exponential growth, the bacteria were serially diluted and 100 µl of each dilution was plated on blood agar plates. The bacteria were grown overnight at 37 °C, and the number of colony-forming units (CFUs) was counted the next morning. In select studies, S. aureus was incubated in the presence of recombinant hBD-1 (PeproTech, Rocky Hill, NJ) or hBD-1 that was captured from platelet lysates. Recombinant hBD-1 was also pre-incubated with an anti-hBD-1 antibody (Abnova, Taipei City, Taiwan) or control IgG for 1 hour prior to being added to S. aureus. Based on preliminary studies testing the effectiveness of the anti-hBD-1 antibody in blocking hBD-1-induced NET formation, a final concentration of 20 µg/ml was chosen for all studies. For the capture of hBD-1, immunoprecipitates were prepared from platelets (4×10^9 total) that were lysed at 4 °C in RIPA buffer (1× PBS, 1% NP-40, 0.5% sodium deoxycholate, and 0.1% SDS). Insoluble cellular debris was cleared from the lysates by repeated centrifugation (13,000 × g, 10 minutes) at 4 °C. Cleared lysates were incubated for 3 hours (4 °C) with an antibody against hBD-1 (sc-20797, Santa Cruz Biotechnology, Santa Cruz, CA) or a rabbit IgG control (sc-2027, Santa Cruz). The antibodies were subsequently purified with protein A- and G (A/G)-coated agarose beads (4 °C, overnight). The A/G beads were then centrifuged, pelleted, and washed with PBS. After 3 washes, immunoprecipitated proteins were eluted by adding 200 µl of glycine (100 mM, pH 2.5) to the beads for 10 minutes, followed by centrifugation and isolation of the bead-free supernatant. The supernatant containing the eluted proteins was neutralized (pH 7.2) with TRIS and then incubated with S. aureus to measure growth as described above.
mRNA Expression Analyses
Megakaryocytes or platelets were lysed in Trizol Reagent (Invitrogen, Carlsbad, CA) and RNA was extracted as previously described [18,19]. Glycogen was added to the aqueous phase before precipitation with isopropanol to optimize RNA yields. The RNA was treated with DNAse (DNA-free Kit, Ambion, Austin, Texas), precipitated with ethanol, and dissolved in 12 µl of RNAse-free water. Identical isolation methods were used to isolate RNA from HeLa cells, which served as a positive control for hBD family members. RNA (1 µg) from HeLa cells or platelets was used to generate cDNA to characterize the expression of hBD-1, 2, and 3 using procedures similar to those previously published [18,19]. Integrin αIIb was used as a positive control for megakaryocyte- and platelet-specific RNA. The relative abundance of the defensin family members was measured by real-time PCR. The specificity of the amplicons for hBD-1, 2, and 3 was verified by agarose gel electrophoresis and subsequent sequencing of the products. Primer sets for these studies were as follows: hBD-1, forward GTCGCCATGAGAACTTCCTACC, reverse CTGCGTCATTTCTTCTGGTCAC; hBD-2, forward GACTCAGCTCCTGGTGAAGCTC, reverse ATGAGGGAGCCCTTTCTGAATC; hBD-3, forward CAGCGTGGGGTGAAGCCTAGCA, reverse TTTCTTCGGCAGCATTTTCGGC; and αIIb, forward ACACTATTCTAGCAGGAGGGTTGG, reverse CAGGGCTCAGTCTCTTTATTAGGC.

Protein Expression Analyses
hBD protein expression was determined by immunocytochemistry and ELISA. The immunocytochemical studies were performed as previously described in platelets that were fixed immediately after isolation [20]. The intracellular pattern of hBD-1 expression was determined as previously described [20] using an antibody against hBD-1 (sc-20797; Santa Cruz) or its control (rabbit IgG, sc-2027; Santa Cruz). Wheat germ agglutinin (WGA) (Alexa 555, Invitrogen), which stains granules and membranes of platelets [20], or phalloidin (Alexa 488, Invitrogen) was used as a counterstain. To quantify hBD-1 protein levels, platelets were incubated with vehicle or a variety of agonists that included thrombin (Sigma-Aldrich, St. Louis, MO), thrombin receptor activating peptide (TRAP, Sigma), platelet activating factor (PAF; Avanti Polar Lipids, Alabaster, AL), Escherichia coli-derived lipopolysaccharide (LPS; Invivogen, San Diego, CA), or S. aureus-derived α-toxin (List Biological Laboratories, Campbell, CA). In select studies, platelets were pre-incubated with taxol (Molecular Probes, Eugene, OR) or nocodazole (Sigma) for 30 minutes prior to agonist stimulation (Figure S4). At the end of the incubation period, intact platelets were pelleted and lysed in RIPA buffer, and the supernatants were collected. Cell lysates and supernatants were added in duplicate to commercially available ELISA plates specific for each of the defensin family members (Alpha Diagnostics International, San Antonio, TX).

Author Summary
Platelets are small cells in the bloodstream whose primary function is to stop bleeding. In addition to their clotting functions, we show that human platelets stall bacterial growth. This inhibitory property of platelets is due to β-defensin 1, a small antimicrobial protein that kills bacteria. β-defensin 1 also induces white blood cells to discharge spider-like webs that trap and kill bacteria. Together, these findings indicate that human platelets use β-defensin 1 to fight off bacterial infection.
As previously described [18,21], hBD-1 protein was also measured in platelet membranes and intracellular organelles that were isolated by centrifugation of lysates on sucrose gradients.

Transmission Electron Microscopy
For the ultrastructural analyses, platelets and S. aureus were cultured in suspension, followed by fixation in 2.5% glutaraldehyde in PBS buffer for at least 24 hours. The platelets and bacteria were then washed with 0.1 M phosphate buffer (pH 7.4), followed by dH2O, by centrifugation at 800 × g (10 min). The samples were post-fixed with 2% osmium tetroxide (60 min), washed twice with dH2O, dehydrated through a graded series of acetone concentrations (50%, 70%, 90%, 100%; 2×10 min each), and embedded in Epon. Thin sections were examined with a JEOL JEM-1011 electron microscope after uranyl acetate and lead citrate staining. Digital images were captured with a side-mounted Advantage HR CCD camera (Advanced Microscopy Techniques, Danvers, MA).

NET Formation
The basic protocol for isolating PMNs from whole blood and imaging NETs was described in detail previously [16]. In brief, PMNs (2×10^6/ml) were placed on glass coverslips coated with poly-L-lysine and incubated with recombinant hBD-1, hBD-2, or hBD-3 or its vehicle for one hour at 37 °C. In some experiments, the recombinant defensins were pre-incubated with polymyxin B (10 µg/ml) for 15 minutes at room temperature before addition to PMNs. Recombinant hBD-1 was also pre-incubated with control immunoglobulin or a neutralizing anti-hBD-1 antibody for one hour before addition to PMNs. In addition, PMNs were incubated with platelet proteins released from hBD-1 or IgG immunoprecipitates, as described above. LPS was used as a positive control for NET formation [16]. To examine the role of reactive oxygen species in hBD-1-dependent NET formation, PMNs were pre-treated (30 minutes) with diphenylene iodonium (DPI, 20 µM) before addition of hBD-1. PMN-derived NETs were detected with a non-cell-permeable DNA dye (Sytox Orange, Molecular Probes), while a cell-permeable dye (Syto Green, Molecular Probes) was used to visualize nuclei [16].

Statistics
Using a minimum of three studies, we calculated the mean ± standard error of the mean (SEM) for bacterial growth or hBD-1 protein levels for the relevant figures. ANOVAs were conducted to identify differences among multiple experimental groups and, if differences existed, a Student-Newman-Keuls post-hoc procedure was used to determine the location of the difference. p<0.05 was considered statistically significant; a minimal sketch of this quantification and analysis is given below.

Platelets Encapsulate S. aureus and Inhibit Its Growth
S. aureus expresses several proteins, such as clumping factor A and fibronectin-binding protein A, that mediate aggregation and activation of platelets [22,23,24]. Platelets also regulate the activity of S. aureus; in particular, there is evidence that platelets inhibit the growth of S. aureus [25,26]. To examine this potential inhibitory role more closely, we incubated S. aureus with and without unactivated human platelets. In the absence of platelets, S. aureus grew uniformly in suspension culture over a four-hour period (Figure 1A and 1B, left panels). In the presence of platelets, however, S. aureus were encapsulated in discrete clusters that were surrounded by platelets (Figure 1A-C). The majority of platelets in the culture were vacuolated, and many of the platelets were visibly lysed. Although bacterial growth was noticeably impeded in the presence of platelets (Figure 1B), ingestion of S. aureus by platelets was rare.
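The growth quantification and statistics described in the Methods above amount to a small amount of arithmetic: convert plate counts into CFU/ml using the dilution factor and plated volume, then compare groups with a one-way ANOVA. The Python sketch below uses scipy.stats.f_oneway; the plate counts are invented for illustration, and a post-hoc test such as Student-Newman-Keuls (not available in common Python libraries) or Tukey's HSD would then locate the differences.

from scipy import stats

def cfu_per_ml(colonies, dilution_plated, plated_volume_ml=0.1):
    # CFU/ml of the original culture = colony count / (dilution plated *
    # volume plated); 0.1 ml corresponds to the 100 ul plated per dilution.
    return colonies / (dilution_plated * plated_volume_ml)

if __name__ == "__main__":
    # Invented plate counts at a 10^-5 dilution, three replicates per group.
    groups = {
        "S. aureus alone":       [210, 198, 225],
        "S. aureus + platelets": [95, 102, 88],
        "S. aureus + rhBD-1":    [70, 81, 76],
    }
    cfu = {name: [cfu_per_ml(c, 1e-5) for c in counts]
           for name, counts in groups.items()}
    for name, values in cfu.items():
        mean = sum(values) / len(values)
        sem = stats.sem(values)
        print(f"{name}: {mean:.3g} +/- {sem:.3g} CFU/ml (mean +/- SEM)")
    # One-way ANOVA across the three groups.
    f, p = stats.f_oneway(*cfu.values())
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")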
Quantitative assessment demonstrated that platelets significantly inhibited the growth of two strains of S. aureus that were isolated from patients diagnosed with sepsis (Figure 2A-B and Figure S1). Platelets did not inhibit the growth of common laboratory strains of S. aureus (Figure S1). Likewise, the addition of thrombin to the cultures did not enhance the inhibitory effects of platelets on bacterial growth, which held steady for 8 hours but relinquished after 24 hours (data not shown).

Platelets Express and Release β-defensin 1
Platelets release a variety of antimicrobial mediators; however, it was not known whether defensins are part of this pool. Therefore, we screened for β-defensins in platelets and found mRNA for family member 1, but not 2 or 3 (Figure 3A). Precursor megakaryocytes also contained mRNA for β-defensin 1 (Figure S2). Consistent with these findings, hBD-1 protein was readily detected in quiescent platelets (Figure 3B and 3C). hBD-1 distributed to platelet compartments that were distinct from WGA staining of membrane regions (Figure 3B). We previously showed that WGA also co-localizes with the α-granule protein P-selectin [20]. In keeping with this staining profile, we were unable to detect hBD-1 in granules that were purified by subcellular fractionation (Figure S3). Furthermore, agonists that induce α-granule secretion (i.e., thrombin, TRAP, or PAF) did not appreciably elicit hBD-1 release from platelets (Figure 3C). We also found that microtubules, which modulate α-granule secretion [27], remained intact in platelets that were co-incubated with S. aureus (Figure S4A). Interruption of microtubular function with taxol or nocodazole had no effect on hBD-1 release in response to α-toxin, a pore-forming exotoxin derived from S. aureus (Figure S4B). Similarly, inhibition of microtubular function did not prevent platelets from limiting the growth of S. aureus (next section and Figure S4C).

Platelet-derived β-defensin 1 Impedes the Growth of S. aureus
We determined whether platelet-derived hBD-1 inhibits bacterial growth, an activity reported for recombinant hBD-1. As expected, we found that recombinant hBD-1 retards S. aureus growth over a 4-hour period (Figure 4A). Similar inhibitory responses were observed when hBD-1 was captured from platelets by immunoprecipitation and then incubated with S. aureus (Figure 4B). In contrast, there was no inhibition by control immunoglobulin immunoprecipitates (Figure 4B). To investigate whether endogenous hBD-1 protein inhibits bacterial growth, platelets were pre-incubated with a neutralizing anti-hBD-1 antibody or control immunoglobulin before the addition of bacteria. In these studies, platelets had a significantly reduced capacity to limit the growth of S. aureus in the presence of the hBD-1 neutralizing antibody (Figure 4C).

β-defensin 1 Induces NET Formation by PMNs
We next asked whether hBD-1 has novel antimicrobial activities and focused on the formation of NETs, a defensive function used by PMNs to trap and kill microbes in the extracellular milieu [16]. We first examined whether platelet-derived hBD-1 could induce NET formation. hBD-1 harvested from platelet immunoprecipitates, but not control immunoprecipitates, induced robust NET formation (Figure 5A). Consistent with this finding, recombinant hBD-1 induced NET formation (Figure 5B).
A neutralizing anti-hBD-1 antibody, but not control immunoglobulin, blocked hBD-1-dependent NET formation (Figure 5B) while having no effect on LPS-induced NET formation (data not shown). Unlike hBD-1, hBD-2 and hBD-3 did not induce appreciable NET production over a spectrum of concentrations (Figure 6 and data not shown). Inhibition of NADPH oxidase activity, a critical enzyme in the formation of reactive oxygen species (ROS), blocked LPS- and hBD-1-induced NET formation compared to controls (Figure 7A). Polymyxin B, however, did not alter hBD-1-induced NET formation, indicating that the effect of hBD-1 was not due to residual LPS contamination in the hBD-1 preparation (data not shown). Finally, hBD-1 significantly increased neutrophil elastase activity in the absence of appreciable cell death (Figure 7B and 7C).

Discussion
In this report we show that platelets express hBD-1, which localizes to extragranular regions within the cell and is discharged into the supernatant in response to bacterial toxins. Platelet-derived hBD-1 directly inhibits the growth of strains of S. aureus isolated from patients with sepsis, suggesting that platelets may use hBD-1 to limit the growth of other bacteria as well. Moreover, hBD-1 induces PMNs to extrude NETs, identifying a new function of defensins in host defense. Together, these findings provide new insights into the antibacterial activity of platelets and establish a mechanism by which platelets may trigger NET formation. Although NET formation likely contributes to pathogen containment in humans, it is possible that under certain circumstances (i.e., clinical deterioration) it can lead to potentially deleterious pathologic platelet-leukocyte interactions in sepsis [17]. Mammalian defensins are small cationic peptides that have activity against a broad range of pathogens [7]. α-defensins are abundant in the microbicidal granules of PMNs, and defensin alpha 1, also known as human neutrophil peptide-1, has been detected at the mRNA level by gene expression profiling in megakaryocytes [28]. β-defensins are generally present in skin and mucosal epithelia. β-defensins are phylogenetically older, and new family members continue to be identified, with approximately 40 potential coding regions in the human genome [13]. hBD-1 was the first β-defensin discovered [29]. Although hBD-1 expression is primarily restricted to epithelium, it has been detected in peripheral blood [30] and was originally isolated from plasma filtrates of patients with end-stage renal disease [29]. Here, we demonstrate that platelets express and release hBD-1 protein in response to S. aureus-derived toxins. Platelets have also recently been shown to express and release hBD-3 protein [31]. Interestingly, our data suggest that platelets may accumulate hBD-1 and hBD-3 protein through distinct mechanisms. In this regard, we detected mRNA for hBD-1 in precursor megakaryocytes and platelets, suggesting that megakaryocytes transfer hBD-1 to platelets during thrombopoiesis. We did not, however, detect hBD-3 mRNA in platelets under the conditions of our experiments. This suggests that platelets may endocytose hBD-3 as they circulate in the bloodstream. Consistent with this possibility, Tohidnezhad and colleagues detected significant amounts of hBD-3 protein in plasma and platelets [31]. Unlike β-defensins 2-4, hBD-1 is constitutively produced by most epithelial cells [7,8,9]. It has also been detected in keratinocytes [32].
Thus, hBD-1 is positioned and ready to mediate innate host defense against pathogens in the gut, respiratory tract, oral cavities, and skin [7,32,33]. Megakaryocytes invest platelets with hBD-1 mRNA, and the basal expression of hBD-1 protein in platelets indicates that it may also serve critical functions in defense against pathogens that gain access to the bloodstream. Platelets are immediate responders and the most abundant cell type to accumulate at sites of intravascular infection, which include infective endocarditis, suppurative thrombophlebitis, mycotic aneurysm, septic endocarditis, catheter and dialysis access site infections, and vascular prosthesis and stent infections [1,2,3]. Sequestration of bacteria and localized release of hBD-1 and other microbicidal proteins by platelets may limit the growth of pathogens at infected areas, providing time for leukocytes to gather and kill remaining bacteria. S. aureus is one of the most common pathogens encountered by humans and a primary cause of infective endocarditis [2]. Our studies show that when platelets contact S. aureus, they encircle the pathogen and force it into encapsulated clusters. This may entrap and sequester the bacteria, reducing or preventing intravascular dissemination. The surrounding platelets are structurally altered, as evidenced by empty vacuoles. While hBD-1 is released under these conditions, our studies suggest that the mechanism is distinct from traditional secretory pathways in platelets, since the defensin is basally localized in submembrane cytoplasmic domains rather than granules. In this regard, circumferential bands of microtubules are readily detected beneath the plasma membranes of platelets (Figure S3 and data not shown). This implies that S. aureus induces hBD-1 release by a mechanism that is distinct from granule secretion, which is accompanied by microtubule reorganization, possibly by directly forming pores in platelet membranes. Indeed, some platelets were visibly lysed by S. aureus, which is perhaps a terminal event that discharges intracellular contents, including hBD-1, into the extracellular milieu. Furthermore, we found that α-toxin, which forms pores in cell membranes, evoked hBD-1 release by platelets in a microtubule-independent fashion, while receptor-mediated agonists (i.e., thrombin, TRAP, or PAF) that typically induce α-granule secretion did not induce release of the defensin. Extragranular storage may guard against inappropriate release of hBD-1, which could have unwarranted cytotoxicity. Similar expression and release patterns are observed in epithelial cells, where hBD-1 is secreted independent of degranulation [33].

Figure 5 (legend; opening truncated in source). (A) NETs (arrows) induced by hBD-1 immunoprecipitates that were captured from human platelets, detected by live cell imaging as previously described [16]. (B) NET formation in untreated PMNs (control) or PMNs incubated with 100 ng/ml of recombinant hBD-1 alone or in the presence of anti-hBD-1 or its control IgG. Figure 5A and 5B are representative of three independent experiments. doi:10.1371/journal.ppat.1002355.g005

Figure 6. β-defensin 1, but not other β-defensin family members, induces PMNs to form NETs. PMNs were left untreated (control) or incubated with 100 ng/ml of recombinant hBD-1, hBD-2, or hBD-3. After 60 minutes, NETs (arrows) were detected by live cell imaging as previously described [16]. Increasing concentrations of hBD-2 or hBD-3 failed to induce NET formation (data not shown). Images are representative of three independent experiments. doi:10.1371/journal.ppat.1002355.g006

As part of the response to infection, host cells often internalize and kill bacteria. Thrombin-activated platelets are reported to engulf S. aureus [34,35], but it is not clear that they form phagocytic killing chambers like PMNs and macrophages [36]. Internalization of S. aureus by platelets was rare under the conditions of our experiments (data not shown), and inhibition of platelet microtubule function, which facilitates S. aureus uptake in other cells [37], did not affect bacterial growth. This suggests that platelets limit S. aureus growth by quarantining the bacteria and locally releasing a variety of microbicidal proteins. Additional studies are required to determine if platelets display similar activities against S. aureus in more complex milieus that contain plasma and other cells. It will also be important to decipher why platelets limit the growth of clinical, but not laboratory, strains of S. aureus. In this regard, phenotypic and genotypic characterization of persistent S. aureus and determination of virulence signatures, which may or may not induce hBD-1 release from platelets, will be particularly informative. α-toxin induces the release of PMPs that possess staphylocidal activity [40]. Similarly, we demonstrate that platelets release hBD-1 in response to α-toxin, indicating that hBD-1 is readily releasable from a cytoplasmic platelet compartment. PMPs and hBD-1 may have cooperative antibacterial activities. Like PMPs, we found that hBD-1 purified from platelets is capable of inhibiting the growth of S. aureus. PMPs permeabilize bacterial membranes in what appears to be a voltage-dependent manner [3], while defensins insert into bacterial membranes, inducing membrane depolarization and activation of lytic enzymes that permeabilize lipid bilayers [14]. Their precise modes of action at the bacterial cell wall, however, may not overlap completely. In side-by-side comparisons, Yeaman and colleagues [41] demonstrated that PMPs and neutrophil α-defensins disrupt S. aureus cytoplasmic membranes by distinct mechanisms. If the same holds true for β-defensins, PMPs and hBD-1 may attack bacteria from separate vantage points, influencing the susceptibility of different bacterial strains to platelets. It is also possible that hBD-1 inhibits bacterial activity by cooperating with other factors that are yet to be identified. This notion is supported by the fact that neutralization of hBD-1 activity was only partially protective in preventing platelets from inhibiting bacterial growth. It has been suggested that defensins have other functions besides direct microbial killing. Neutrophil-derived α-defensins modulate agonist-induced platelet aggregation and secretion [42]. β-defensins are chemotactic for monocytes, macrophages, and dendritic cells [7,43] and have been shown to differentially regulate the expression of numerous cytokines in human mononuclear cells [44]. Specifically, hBD-1 induces the expression of interleukin 8 and monocyte chemotactic protein-1 [44]. It is well appreciated that platelets induce the expression of chemokines and other inflammatory gene products [45,46]. Whether or not platelet-derived hBD-1 influences chemokine synthesis in target leukocytes is not known. Here, we show for the first time that hBD-1, but not other hBD family members, induces NET formation by PMNs, demonstrating an antimicrobial activity distinct from direct killing that amplifies the antibacterial properties of PMNs.
NETs are web-like DNA structures that trap and kill bacteria. The DNA backbones of NETs are studded with histones, neutrophil elastase, myeloperoxidase, and bactericidal/permeability-increasing protein, which together can degrade virulence factors and kill ensnared pathogens [15]. Although we demonstrate that platelet-derived hBD-1 induces NET formation, additional work is needed to dissect the exact roles of hBD-1 in eliciting NET formation by platelets that contact PMNs in human diseases such as sepsis. Previous studies have shown that LPS-stimulated platelets induce PMNs to form NETs through mechanisms that remain unclear [17,47]. Although hBD-1 may be involved in this process, LPS alone does not directly induce hBD-1 release by platelets (Figure S5). This suggests that multiple factors, or more potent lytic agents such as α-toxin, may be required to trigger hBD-1 release from platelets that subsequently stimulates NET formation by PMNs. Human BD-1 may also work in concert with S. aureus, which has recently been shown to directly induce NET formation by PMNs [48]. In summary, hBD-1 is basally expressed and released by platelets exposed to α-toxin. It is likely that hBD-1 cooperates with other PMPs to influence bacterial activity and growth, although their release patterns and potency against bacteria may be distinct from one another. By itself, hBD-1 directly inhibits the growth of gram-positive bacteria. Further, it has the novel capacity to engender PMNs to form NETs. The binary actions of hBD-1 are likely to play important roles in sepsis and other infectious diseases where platelets and neutrophils work together to trap, kill, and clear invading pathogens.

Figure S1. Platelets impede the growth of clinical S. aureus strains. Platelets were incubated with two laboratory strains of S. aureus or two strains of S. aureus (one non-methicillin-resistant and one methicillin-resistant) isolated from patients diagnosed with sepsis. The bars represent the mean ± SEM (n = 3) of the fold increase in S. aureus growth (240 minutes) over baseline (horizontal line). The single asterisk indicates a significant increase (p < 0.05) in growth over baseline. The double asterisks indicate a significant (p < 0.05) reduction in S. aureus growth in the presence of platelets when compared to S. aureus growth by itself. (EPS)

Figure S2. Megakaryocytes express β-defensin 1 mRNA. The curves show a representative real-time PCR result for integrin αIIb and hBD-1 in megakaryocytes and platelets. The table below the figure identifies the Ct values for integrin αIIb and hBD-1 from two megakaryocyte cultures. (TIF)

Figure S3. β-defensin 1 is located in an extragranular compartment in platelets. Platelets were exposed to nitrogen cavitation, and their organelles (i.e., mitochondria and granules), membranes, and remaining cytosolic constituents were separated by sucrose gradients as previously described [49]. Each compartment was subsequently lysed in RIPA buffer and hBD-1 levels were determined. This graph shows a representative experiment from two independent studies. (EPS)

Figure S4. The antibacterial activities of β-defensin 1 are not dependent on microtubular reorganization. (A) High-resolution electron micrograph of a platelet that was co-incubated with S. aureus. The platelet ultrastructure is preserved, with an intact microtubule coil (white arrow) and the absence of platelet projections; however, the platelet displays other features of activation that include increased vacuoles (black arrow). Scale bar = 1 µm.
This micrograph is representative of numerous other platelets. (B) hBD-1 protein in lysates and supernatants of platelets that were stimulated with α-toxin (50 ng/ml) in the presence of Taxol (50 µM), Nocodazole (Nocod; 10 µM), or their vehicle (control) for 30 minutes. The bars represent the mean ± SEM of three independent experiments. (C) S. aureus was cultured for 240 minutes (gray bars) in the presence or absence of platelets that were pretreated with Taxol (50 µM) or Nocodazole (Nocod; 10 µM). The bars in the panel represent the mean ± SEM (n = 3) of the fold increase in S. aureus growth (240 minutes) over baseline (horizontal line). The single asterisk indicates a significant increase (p < 0.05) in growth over baseline. The double asterisks indicate a significant (p < 0.05) reduction in S. aureus growth in the presence of platelets that were treated with or without Taxol or Nocodazole. (TIF)

Figure S5. LPS does not induce platelets to release β-defensin 1. hBD-1 protein in lysates and supernatants from platelets that were stimulated with LPS (100 ng/ml) for 30 minutes. The bars represent the mean ± SEM of three independent experiments. (EPS)
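To make the growth-inhibition readout concrete, the following Python sketch computes the fold increase in S. aureus growth over baseline and the mean ± SEM reported in Figures S1 and S4C. This is a minimal illustration, not the study's analysis pipeline; the function names and all CFU values are hypothetical.

```python
import math

def fold_increase(cfu_baseline, cfu_240min):
    """Per-replicate fold increase in S. aureus growth over the baseline inoculum."""
    return [b / a for a, b in zip(cfu_baseline, cfu_240min)]

def mean_sem(values):
    """Mean and standard error of the mean (SEM) for n replicates."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var) / math.sqrt(n)

# Illustrative CFU/ml counts (n = 3), NOT data from the study.
baseline       = [1.0e5, 1.2e5, 0.9e5]
alone_240      = [8.1e5, 9.4e5, 7.6e5]   # S. aureus cultured alone
with_platelets = [2.3e5, 2.9e5, 2.1e5]   # S. aureus co-incubated with platelets

for label, counts in [("alone", alone_240), ("with platelets", with_platelets)]:
    m, sem = mean_sem(fold_increase(baseline, counts))
    print(f"{label}: fold increase = {m:.2f} +/- {sem:.2f} (mean +/- SEM)")
```

A fold increase near 1 in the platelet condition would correspond to the flat bars at the baseline line in Figure S1.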
2018-04-03T02:52:39.294Z
2011-11-01T00:00:00.000
{ "year": 2011, "sha1": "692756de79fa58eb853bfa046db3879b7af89578", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1002355&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "692756de79fa58eb853bfa046db3879b7af89578", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235695458
pes2o/s2orc
v3-fos-license
Pathogenic Spectrum and Resistance Pattern of Bloodstream Infections Isolated from Postpartum Women: A Multicenter Retrospective Study

Purpose: Bloodstream infections (BSIs) cause morbidity and mortality in postpartum patients, resulting in poor prognosis for both mother and neonate. Gram-negative bacteremia is a public health threat, with high mortality among vulnerable populations and significant global economic costs, and both gram-negative bacteremia and antimicrobial resistance are increasing. This study retrospectively analyzed the pathogen distribution and drug sensitivity among postpartum patients with BSIs to identify appropriate antibacterial agents for perioperative therapy.

Material and Methods: All bacteremia cases between January 2015 and December 2020 from three Health Centers for Women and Children in Chongqing, China, were retrospectively reviewed. Clinical data were collected from medical records and charts. Blood samples were cultured with the BD BACTEC FX200 system. Bacterial and fungal species and bacterial susceptibility were identified by a BD Phoenix™ M50 automated detection system.

Results: In total, 274 pathogenic strains were isolated from 272 blood samples. Excluding 25 suspected contamination strains, 248 blood samples yielded 249 microorganisms, including 214 gram-negative bacteria (85.9%), 34 gram-positive bacteria (13.6%), and 1 fungus (0.5%). Escherichia coli (E. coli) was the most frequently isolated pathogen, both overall and among gram-negative bacilli (73.5%). Among the gram-positive cocci, Streptococcus agalactiae (n = 9) accounted for 3.6% of all isolates. Laboratory-confirmed anaerobic infections comprised 9.2% of cases (n = 23). Additionally, 47.4% of postpartum patients with BSIs suffered premature rupture of membranes (PROM), a suspected infection risk factor. Drug sensitivity levels remained unchanged for less commonly used drugs, but resistance increased against commonly used drugs. Specifically, E. coli resistance against fourth-generation cephalosporins increased during the study period.

Conclusion: E. coli is the most common gram-negative bacillus in postpartum patients with BSIs, and the increased detection of anaerobic bacteria suggests that control of genital tract inflammation before delivery is necessary. Effective drug resistance monitoring, including the prevention of inappropriate antibiotic applications, remains necessary to alleviate bacterial resistance.

Introduction

Despite national and international efforts to improve maternal outcomes, infection-related maternal morbidity and mortality rates remain a major health care concern. Postpartum infections, a subset of maternal infections that occur between delivery and the 42nd day postpartum [1], represent the fifth most common cause of maternal death, behind only hypertensive disorders, obstetric hemorrhage, abortion/ectopic gestations, and obstetric embolism [2,3]. An improved understanding of postpartum infection is key to achieving the sustainable development goals (SDGs) and executing strategies targeting the reduction of preventable maternal and neonatal mortality. The most common postpartum infections include endometritis, urinary tract infections, surgical site infections, bloodstream infections (BSIs), and wound infections [1,4]. In particular, BSIs represent common and serious clinical infections and are among the top seven causes of overall mortality in Europe and the 11th leading cause of death in the USA [5]. The fatality rate of BSIs, based on the general population of mainland China, is approximately 20% [6].
The early diagnosis and rational use of effective antibacterial drugs are key steps in treating patients with BSI. The delayed or inappropriate use of antibacterial drugs can lead to deterioration and complications [5]. Additionally, the pathogenic spectrum and pattern of antimicrobial resistance associated with BSIs often differ across affected regions due to differences in epidemiological and geographic features and in the clinical use of antibacterial drugs in different regions over time [7,8]. Understanding the prevalence of different types of microorganisms and their antimicrobial resistance characteristics will better guide physicians, improve infection control, and inform policymakers in various countries and regions when making evidence-based decisions to overcome antimicrobial resistance [9,10]. This multicenter, retrospective study was performed to determine the distribution of pathogens and drug sensitivities in puerperal, microbiologically identified BSI, focusing on the antimicrobial resistance of Escherichia coli. This information can guide antimicrobial stewardship programs and infection control activities in hospitals.

Data Collection

By reviewing the medical records of postpartum patients, we retrospectively collected data from the information systems of three hospitals and their laboratories between January 2015 and December 2020. The basic information gathered included gestational age, obstetric history, laboratory values, delivery information, and maternal age, as shown in Table 1. Patients with acute pulmonary embolism, amniotic fluid embolism, adverse drug reactions, drug fever, viral infection, autoimmune conditions, and transfusion reactions were excluded from this study. We reviewed the electronic medical records to determine whether the patient received any antimicrobial agents during the 15 days prior to delivery or underwent any recent surgeries before the onset of bacteremia. The primary focus of infection was determined based on the clinical presentation and a final diagnosis made by the primary care clinician.

Ethics Statement

This study was performed as a multicenter, non-interventional, retrospective study conducted at three Health Centers for Women and Children, in accordance with all relevant regulations in Chongqing, China. The study protocol was reviewed and approved by the Institutional Review Board/Independent Ethics Committee at each study site, including Chongqing Health Center for Women and Children, Wanzhou Health Center for Women and Children, and Yongchuan Health Center for Women and Children. Although informed consent was not obtained, patients were given the opportunity to decline permission for their clinical records to be used for research (opt-out consent provision). Patient privacy and confidentiality of data were maintained in accordance with the Declaration of Helsinki.

Microorganism Identification and Antimicrobial Susceptibility Testing

Two aerobic and two anaerobic blood samples (5 mL each) were collected aseptically from two different peripheral veins and incubated in the BD BACTEC™ FX200 (BD Company, USA) automated blood culture system for 7 days before being reported as showing no growth. The detected positive strains were inoculated onto blood agar plates (BAPs) and chocolate agar plates (Pangtong Medical, Chongqing, China) and incubated aerobically for 18–24 hours in a humid chamber (Thermo Fisher Scientific, USA) containing 5% CO2 at 37 °C for subculture. Pure colonies of isolated bacteria were emulsified in 2 mL of 0.85% normal saline.
Bacterial suspensions with an optical density of 0.5–0.6 at a 600 nm wavelength were considered to have acceptable bacterial concentrations. The suspensions were then added to the reagent panels and transferred to the BD Phoenix™ M50 (BD Company) identification system for bacterial identification and susceptibility testing. Antibiotic breakpoints were defined using the European Committee on Antimicrobial Susceptibility Testing (EUCAST) guidelines version 5.0 and Clinical and Laboratory Standards Institute (CLSI) document M100-S27 [11,12].

Data Quality Control

All data collection and recording steps were monitored. The reagents were checked for expiration dates and maintained under appropriate storage conditions, including acceptable temperature and humidity. Standard operating procedures (SOPs) were prepared and strictly followed. The quality of the culture media and antimicrobial susceptibility testing was verified by using standard quality control strains: E. coli ATCC 25922, Staphylococcus aureus ATCC 29213, Enterococcus faecalis ATCC 29212, and Klebsiella pneumoniae ATCC 700603. The inoculum densities of the bacterial suspensions were standardized to a 0.5 McFarland standard for susceptibility testing.

Definition

Patients who developed fever >38.0 °C or hypothermia <36.0 °C underwent blood cultures, procalcitonin, and C-reactive protein testing. In the case of potential common skin commensals, such as coagulase-negative staphylococci, Propionibacterium acnes, Clostridium, Corynebacterium diphtheriae, Acinetobacter bacilli, or other common environmental bacteria, BSI was defined as the laboratory-confirmed growth of a potential pathogen in at least two consecutive positive blood cultures drawn on two separate occasions, to eliminate the possibility of contamination during the collection or cultivation of blood samples [13].

Data Analysis

All analyses were performed using SPSS™ software, version 21.0 (IBM Corporation, Armonk, NY, USA). The frequencies and proportions of isolates resistant to the tested antibiotics and other categorical data were calculated. Continuous variables were analyzed by Student's t-test or by the Mann-Whitney U-test, depending on the sample distribution. Normally distributed data are described as the mean ± standard deviation (M ± SD), whereas non-normally distributed data are described as the median and range.

Patient Characteristics

For the study period from January 2015 to December 2020, a total of 272 non-duplicate BSI-positive samples from postpartum patients were enrolled, and the patient characteristics are summarized in Table 1. In the overall cohort, all participants met the criteria for one or more postpartum infections. In total, 274 pathogenic strains were isolated from the 272 blood samples. Excluding 25 suspected contamination strains, laboratory-confirmed BSI was diagnosed in 248 samples, yielding 249 organisms. The isolated organisms were predominantly gram-negative bacilli, regardless of maternal age, gestational week, or delivery mode. According to the data, 220 participants (88.7%) delivered through cesarean section, and 106 cases (48.2%) suffered from premature rupture of membranes (PROM), with E. coli being the most commonly isolated pathogen. Additionally, the onset of fever among participants typically occurred within two days after delivery, according to Table 1.
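As an illustration of the statistical workflow described above (Student's t-test for normally distributed variables, Mann-Whitney U otherwise, with the matching summary statistics), the following Python sketch applies a normality pre-check to decide between the two tests. The study's analysis was performed in SPSS; this sketch, including the Shapiro-Wilk pre-test and all example values, is an assumption for illustration only.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Compare two continuous variables: Student's t-test if both samples pass a
    normality check, otherwise the Mann-Whitney U-test, with matching summaries."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        test, p = "Student's t-test", stats.ttest_ind(a, b).pvalue
        summary = [f"{x.mean():.1f} ± {x.std(ddof=1):.1f} (M ± SD)" for x in (a, b)]
    else:
        test, p = "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue
        summary = [f"{np.median(x):.1f} (range {x.min():.1f}-{x.max():.1f})"
                   for x in (a, b)]
    return test, p, summary

# Illustrative maternal-age values for two hypothetical groups, not study data.
group1 = [28, 31, 26, 33, 29, 30, 27]
group2 = [32, 35, 30, 36, 34, 31, 33]
test, p, summary = compare_groups(group1, group2)
print(test, f"p = {p:.3f}", summary)
```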
E. coli Dominates BSI in Postpartum Patients

Of a total of 249 isolated microorganisms, excluding coagulase-negative Staphylococcus (CoNS) strains that were suspected of representing contamination, 85.9% of isolates were gram-negative bacteria (n = 214). Considering that E. coli (n = 183) and anaerobic bacteria (n = 23) were the top two most frequently isolated pathogens in BSI among postpartum women, we analyzed the incidence rate of these two pathogens in patients suffering from PROM. As shown in Figure 2A, a total of 171 blood culture samples from postpartum patients who underwent cesarean section (n = 220) were identified as E. coli, 55.5% of which were from those who underwent cesarean section complicated by PROM (n = 95). The number of postpartum patients who underwent vaginal delivery (n = 26) and were infected with E. coli was 12, 41.6% of whom underwent vaginal delivery combined with PROM (n = 5). These findings suggest that E. coli is the most common organism in BSI among postpartum patients, regardless of delivery mode, and the mechanisms through which E. coli causes BSI require further elucidation. Similarly, Figure 2B reveals no significant difference in the incidence of anaerobes in BSI between the cesarean section group and the vaginal delivery group in the context of PROM.

Anaerobe Detection in BSI Among Postpartum Patients Increased Over the 6-Year Study Period

The pathogenic spectrum can change in response to inappropriate antibiotic applications or the development of resistance. We therefore observed the trends in the pathogenic spectrum from 2015 to 2020. As shown in Figure 3A, the detection rate of both gram-negative and gram-positive microorganisms in BSI peaked overall in 2020. In addition, the detection rate of E. coli peaked in 2018 and again in 2020, as shown in Figure 3B. A slight downtrend in Klebsiella pneumoniae infections and an uptrend in Streptococcus agalactiae infections were observed for BSI over the 6-year study period. A clearly increasing tendency in the detection of anaerobes was observed between 2015 and 2020, suggesting that additional attention should be paid to BSI caused by anaerobes among postpartum patients.

Higher Resistance Rate of E. coli Against Fourth-Generation Cephalosporins

We then examined the sensitivity of the gram-negative E. coli isolates to commonly used antibacterial drugs, as shown in Figure 4. Overall, E. coli exhibited general sensitivity to commonly used antibiotics, including 100% sensitivity to carbapenem antibiotics and over 50% sensitivity to aminoglycoside and quinolone antibiotics. The drug resistance rate of E. coli against cefepime was approximately 20%, whereas only an approximately 6% resistance rate was observed against ceftazidime. We then specifically analyzed the sensitivity trends for E. coli in response to the major, widely used antibiotics over the study period. As presented in Figure 5, the trend of resistance against ampicillin/sulbactam and cefazolin declined (Figure 5A and B), and a fluctuating trend of resistance against ceftazidime and cefepime was observed between 2015 and 2019 (Figure 5C and D). The growing proportion of critical patients and the increasing diversity of diseases may have contributed to the increasing trend in resistance observed for the year 2020. Notably, a higher resistance rate of E. coli against cefepime than against ceftazidime was observed during the 6-year period, which warrants attention.
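The annual resistance-rate trends plotted in Figure 5 reduce to a simple tabulation of susceptibility interpretations per year. The following Python sketch shows one way to compute them; the record layout and all entries are hypothetical, not the study's dataset.

```python
from collections import defaultdict

# Each record: (year, antibiotic, interpretation), where interpretation is
# "R" (resistant), "I" (intermediate), or "S" (susceptible).
# These records are illustrative only.
records = [
    (2015, "cefepime", "S"), (2015, "cefepime", "R"),
    (2016, "cefepime", "S"), (2016, "cefepime", "S"),
    (2019, "cefepime", "R"), (2020, "cefepime", "R"),
    (2020, "cefepime", "S"), (2020, "ceftazidime", "S"),
]

def resistance_rate_by_year(records, antibiotic):
    """Percentage of isolates resistant to one antibiotic, per year."""
    tested = defaultdict(int)
    resistant = defaultdict(int)
    for year, drug, interp in records:
        if drug == antibiotic:
            tested[year] += 1
            resistant[year] += (interp == "R")
    return {y: 100.0 * resistant[y] / tested[y] for y in sorted(tested)}

print(resistance_rate_by_year(records, "cefepime"))
```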
Discussion

The present data showed that gram-negative microorganisms (especially E. coli) dominated BSIs during the past 6 years at our study centers. Additionally, an increasing trend in the detection of anaerobic bacteria in BSIs was observed among the postpartum patients in this study. Another point worthy of attention is the increasing resistance rate of E. coli against fourth-generation cephalosporins compared with resistance to third-generation cephalosporins during the study period. BSI represents an important cause of morbidity and mortality among postpartum patients and remains a substantial clinical problem. In this study, cases in which microorganisms were detected in at least two sets of positive blood cultures were defined as BSI, and the results suggested that E. coli has emerged as the predominant causative organism for BSIs, which is consistent with a recent study from China (BRICS report of 2016-2017) that identified E. coli as the major BSI-causing pathogen [14]. However, reports from Malawi in Africa [15] revealed Salmonella typhi and Streptococcus pneumoniae as the major BSI-causing pathogens. Another report, from Rwanda, demonstrated a varying pathogenic profile for BSIs, with Staphylococcus aureus and Klebsiella pneumoniae reported as the leading causative organisms [16]. These discrepancies might be attributable to cultural factors, health-seeking behaviors, health care coverage, the periods during which the studies were conducted, and other socioeconomic factors. Although the most frequently detected pathogens remain consistent, their ranks may shift over time. In the present retrospective study, E. coli continued to rank the highest among detected bacteria, but the detection rate of anaerobic bacteria gradually increased to second place (23.9%), according to the data for the year 2020. Anaerobic bacteria remain an important cause of bloodstream infections, and the detection rate of anaerobes in blood cultures ranges from 1% to 17% across all bacteremic episodes, depending on patient age, condition, and geographic location [17,18]. Recently, a report evaluating the gut bacterial community composition in pregnant women concluded that Prevotella is significantly more abundant in the guts of pregnant women compared with those of non-pregnant women, based on 16S amplicon sequencing performed on stool samples [19]. In this study, we found that 33.3% of anaerobic bacterial infections in BSIs were identified as Prevotella bivia, followed by Bacteroides fragilis (13.3%) and Gardnerella vaginalis (13.3%). Another study, performed at a university hospital, identified Bacteroides fragilis (39.9%) and Clostridium species (32.8%) as the major anaerobic bacteria in BSIs [20]. A different retrospective study revealed that 43.9% of anaerobic bacteria belonged to the genus Bacteroides, 7% to the genus Fusobacterium, and 2.1% to the genus Prevotella [21]. Currently, studies analyzing anaerobic bacteria in postpartum infections with BSI are scarce; additional attention should be paid to BSI caused by anaerobes among postpartum women, and more effort should be spent identifying the potential sources of anaerobe infections and guiding clinicians in rational drug use according to population characteristics in the future. The vaginal microbiome composition changes when women become pregnant, and in this study, two postpartum patients were confirmed to have Gardnerella vaginalis infection, a well-recognized colonizer of the female genital tract.
Recent studies have shown that bacterial vaginosis (BV), which is dominated by Gardnerella vaginalis and can be caused by a number of anaerobic organisms, is linked with pelvic inflammatory disease, and BV-associated bacteria have been related to an increased risk of spontaneous abortions, preterm PROM, postpartum endometritis, and post-cesarean wound infections [22-25]. Thus, the effective diagnosis and treatment of BV may be significant for reducing the perinatal infection rate. Another issue worthy of attention is the seriousness of extended antimicrobial resistance among gram-negative bacteria, and gram-negative organisms have been identified as a critical priority on the World Health Organization's global priority list of antibiotic-resistant bacteria [26]. The limited ability to treat such infections and the lack of new antimicrobial agent development can make BSIs associated with antibiotic-resistant bacteria challenging to treat. Better knowledge of antimicrobial resistance patterns will guide the actions of local and regional bodies as they work to counter antimicrobial resistance. Our antibiogram analysis indicated that E. coli isolates were generally susceptible to carbapenem antibiotics (100%), and over 80% of E. coli isolates showed sensitivity to piperacillin/tazobactam (98.4%), amikacin (94.0%), ceftazidime (89.1%), and amoxicillin/clavulanic acid (84.2%). An increase in the number of critical patients and pregnancy complications may have contributed to the increased resistance rate of E. coli against third- and fourth-generation cephalosporins in 2020. Recently, heteroresistance, a phenomenon in which the treated bacteria contain subpopulations with lower susceptibility to the antibiotic than the dominant population, has been detected in clinical isolates [27,28] and found to mediate antibiotic treatment failure [29]. In our study, the resistance rate of E. coli to third-generation cephalosporins was 6.0%, compared with a resistance rate of 20.8% against fourth-generation cephalosporins, raising the possibility that drug resistance heterogeneity contributes to this phenomenon. Increasing efforts are warranted to determine and counteract the development of antibiotic heteroresistance in E. coli isolated from blood culture samples of postpartum women. Another finding of our study is that the onset of fever among participants typically occurred within two days after delivery, prompting us to ask whether effective prophylactic antibiotic use might affect the incidence of BSI in the context of the continued rise in the cesarean birth rate and the increased risk of surgical site infections after cesarean birth compared with vaginal birth. Indeed, ACOG has recently released a Practice Bulletin on the role of prophylactic antibiotics in labor and delivery. Timing is critically important because the goal is to achieve adequate tissue levels of antibiotics prior to pathogen exposure [30]. Smaill [31] claimed that the incidence of wound infections, endometritis, and serious infectious complications would be reduced by 60% to 70% by the use of prophylactic antibiotics in women undergoing cesarean section. A previous study reported that the rate of infectious morbidity was similar among women who received prophylactic antibiotics within 30 min and 30-60 min before skin incision [32].
Prophylactic antibiotics might be beneficial for women, but uncertainty regarding the consequences for the baby remains, which should be examined in future prospective research studies. One limitation of this study is that it analyzed antibiograms of BSI caused by gram-negative bacteria from a limited set of centers, which may not represent other populations, and the conclusions may therefore be difficult to generalize. Another limitation is the lack of molecular characterization of the observed resistance. Due to the low-resource setting, resistance analyses were not performed for anaerobic bacteria, which can cause significant puerperal sepsis. The results suggest that future research on BSI-causing pathogens should be expanded to cover other regions to obtain a more comprehensive understanding of the pathogenic profile and antimicrobial resistance patterns across regions, combined with molecular characterization of the resistance mechanisms. Additionally, studies examining effective biomarkers capable of distinguishing infections caused by gram-negative or gram-positive organisms could facilitate the early initiation of appropriate antimicrobial therapy for better patient outcomes and are warranted in the future.

Conclusion

Our data showed that E. coli has dominated BSIs among postpartum patients over the past 6 years. Attention should be paid to the increasing trend in the detection of anaerobic bacteria in BSIs among postpartum women, and improved methods for identifying anaerobic bacteria and performing drug sensitivity testing should be implemented. The increasing resistance rate of E. coli against fourth-generation cephalosporins in BSIs is also worthy of additional attention.
2021-07-02T05:18:33.036Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "bfdb241626d226bd3801a9a442facdc79eb0afe1", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=70969", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bfdb241626d226bd3801a9a442facdc79eb0afe1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
126083092
pes2o/s2orc
v3-fos-license
Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks

Recently, autonomous mobile robots that collect information at disaster sites have been under development. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. For this objective, the robots have to build maps as well as estimate their self-location. This is called a SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution method. However, when the robot moves in an environment like a corridor, which contains few point-type features, the accuracy of the self-location estimated from the landmarks decreases, causing distortions in the map. In this research, we propose AUKF-SLAM that uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in an unknown environment by AUKF-SLAM using line segment-type and point-type landmarks.

INTRODUCTION

In recent years, autonomous mobile robots that gather information at disaster sites have been actively developed. It is difficult to provide a map in advance for disaster sites where people cannot enter. Therefore, the robot needs to realize autonomous behavior not only in known environments with maps but also in unknown environments without maps. To enable robots to move in unknown environments, the self-location must be estimated and, at the same time, the map must be generated. This is called a Simultaneous Localization and Mapping (SLAM) problem [1,2]. As methods to solve the SLAM problem, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), which adopt a stochastic framework, have been proposed, and an Augmented UKF (AUKF) has also been proposed to treat the systematic errors included in odometry, such as measurement errors in the wheel diameters and the tread [3,4]. In these methods, features in the environment, such as corners where walls intersect vertically, are registered as landmarks. Whenever the robot moves, it acquires a landmark and corrects its self-location from the positional relationship between the robot and the landmark. However, when the robot moves in an environment like a corridor, which contains few point-type features, the accuracy of the self-location estimate decreases and the resulting map is distorted. Several studies have used line segments in SLAM [5-8]. Accordingly, the robot registers walls, which are line segment features in the environment, as landmarks and uses them for the estimation of its self-location. In this research, SLAM is performed indoors with an AUKF, adopting walls as landmarks in addition to corners. Experiments are conducted using an autonomous mobile robot, and it is verified that the line segment-type landmark is useful. In this paper, the SLAM problem using an AUKF is first described. A conventional point-type landmark and the proposed line segment-type landmark are then explained. Finally, experiments using an actual mobile robot are conducted to check the usefulness of the line segment-type landmark. Figure 1 shows the proposed AUKF-SLAM model. In figure 1, (x_t, y_t, θ_t) is the position and posture of the robot at time t, and (R_R, R_L, T, S) are the right wheel radius, the left wheel radius, the tread, and the sensor mounting offset.
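The composition of the augmented state can be made concrete with a short sketch. The following Python snippet assembles a state vector from the robot pose, the kinematic parameters (R_R, R_L, T, S), and the landmark entries defined in the next paragraph; the function name, ordering, and numeric values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_state(pose, kinematics, point_lms, line_lms):
    """Assemble the augmented AUKF state: robot pose (x, y, theta), kinematic
    parameters (R_R, R_L, T, S), then landmark entries. Point landmarks
    contribute (l_x, l_y, l_s); line landmarks contribute their two endpoints
    (l_x1, l_y1, l_x2, l_y2)."""
    parts = [pose, kinematics] + list(point_lms) + list(line_lms)
    return np.concatenate([np.asarray(p, float) for p in parts])

x_t = build_state(
    pose=(0.0, 0.0, 0.0),
    kinematics=(0.05, 0.05, 0.30, 0.10),   # R_R, R_L, T, S in meters (assumed)
    point_lms=[(2.0, 1.0, 0.0)],           # one corner landmark
    line_lms=[(0.0, 2.0, 5.0, 2.0)],       # one wall landmark (two endpoints)
)
print(x_t.shape)  # pose(3) + kinematics(4) + point(3) + line(4) = (14,)
```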
Let the ith point-type (corner) landmark estimated by the robot be (l_xi, l_yi, l_si). Also, let the kth line segment-type (wall) landmark estimated by the robot be (l_x1k, l_y1k, l_x2k, l_y2k). Then, using the position, orientation, and kinematic parameters of the robot together with the landmark information, the state x_t of the robot is expressed by

x_t = (x_t, y_t, θ_t, R_R, R_L, T, S, l_x1, l_y1, l_s1, …, l_x1k, l_y1k, l_x2k, l_y2k, …)^T    (1)

SLAM solution method using AUKF

The state x_t is represented by the mean and covariance of a normal distribution to reduce the influence of noise. In the AUKF, those values are corrected by prediction and updating, and the appropriate estimated values are calculated. The following equations are defined as the motion model and the measurement model, respectively, which are necessary for the estimation:

x_t = g(x_{t-1}, u_t) + δ_t    (2)
z_t = h(x_t) + ε_t    (3)

g and h in each model are nonlinear functions. z_t is the measured value and u_t is the control input. δ_t is a motion noise vector, and ε_t is a measurement noise vector. The noise is assumed to be zero-mean Gaussian and additive.

Line segment-type and point-type landmarks

The landmark is a feature used by the robot to estimate its self-location and create a map. From the sensor data measuring the environment, a wall is used as a line segment-type landmark, and a corner at which walls intersect is used as a point-type landmark. Increasing the types of landmarks alleviates the shortage of landmarks available to the robot for estimation, and is therefore expected to allow the robot to cope with a wider range of environments. Here, we describe the acquisition method and the measurement model of each landmark.

Line segment-type landmark

In the case of line segment-type landmarks, the sensor data must be grouped by wall. The grouping method is explained with figure 2. Assume that group A, whose ith point is a terminal, has been determined. Compare group A with a point group composed of the points from the (i+1)th point to N_next points ahead. If the angle θ_next formed by the two straight lines is less than a threshold value, and the distance d_next between the ith point and the (i+1)th point is less than a threshold value, unify them into one group. On the other hand, when either θ_next or d_next exceeds its threshold value, the group starting from the (i+1)th point is treated as belonging to a different straight line. After grouping is completed, the positions of both ends of the obtained straight line (l_x1, l_y1, l_x2, l_y2) are registered as a line segment-type landmark. The relationship between the robot and a line segment-type landmark is shown in figure 3. The measured value z_t obtained from the line segment-type landmark consists of the distance r_t from the robot to the landmark and the relative angle φ_t. Therefore, the measured value is expressed by the following equation:

z_t = (r_t, φ_t)^T    (4)

Using the position (x_t, y_t) and attitude θ_t of the robot, and also using the linear parameters (a, b, c) of the line segment-type landmark, the measurement function h is expressed by the following equation:

h(x_t) = ( |a x_t + b y_t + c| / √(a² + b²),  atan2(b, a) − θ_t )^T    (5)

Here, the linear parameters a, b, c of the line segment-type landmark used in equation (5) are expressed, using the positions of both ends of the obtained straight line (l_x1, l_y1, l_x2, l_y2), by the following equations:

a = l_y2 − l_y1,  b = l_x1 − l_x2,  c = l_x2 l_y1 − l_x1 l_y2    (6)

Point-type landmark

The position detection of point-type landmarks is explained with figure 4. First, the wall information acquired when the line segment-type landmarks were measured is compared. Next, the angle θ_AB formed between the walls is calculated using the linear parameters of the walls.
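The grouping criterion and the line-landmark measurement can be sketched in a few lines of Python. This mirrors equations (5) and (6) as reconstructed above; the threshold values, the look-ahead length, and the bearing sign convention are assumptions for illustration, not the paper's tuned parameters.

```python
import math

def line_params(p1, p2):
    """Line a*x + b*y + c = 0 through endpoints p1, p2 (equation (6))."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1), (x1 - x2), (x2 * y1 - x1 * y2)

def segment_angle(p1, p2):
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def group_points(points, n_next=5, theta_max=math.radians(10), d_max=0.15):
    """Group an ordered laser scan into wall segments. A point extends the
    current group if it is close to the previous point (d_next < d_max) and
    the look-ahead segment stays roughly collinear (theta_next < theta_max).
    The look-ahead index is clipped at the end of the scan."""
    groups, current = [], [points[0]]
    for i in range(1, len(points)):
        p = points[i]
        d_next = math.dist(current[-1], p)
        if len(current) >= 2:
            j = min(i + n_next, len(points) - 1)
            theta_next = abs(segment_angle(current[0], current[-1])
                             - segment_angle(p, points[j]))
            theta_next = min(theta_next, 2 * math.pi - theta_next)
            ok = d_next < d_max and theta_next < theta_max
        else:
            ok = d_next < d_max
        if ok:
            current.append(p)
        else:
            if len(current) >= 2:
                groups.append((current[0], current[-1]))  # segment endpoints
            current = [p]
    if len(current) >= 2:
        groups.append((current[0], current[-1]))
    return groups

def line_measurement(pose, a, b, c):
    """Expected (r_t, phi_t) for a line landmark, as in equation (5)."""
    x, y, theta = pose
    r = abs(a * x + b * y + c) / math.hypot(a, b)
    phi = math.atan2(b, a) - theta   # wall-normal bearing; sign convention assumed
    return r, phi
```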
When the angle θ_AB is around 90 deg, the intersection of the walls is estimated as a point-type landmark. Assuming that the straight line of group A is a_A x + b_A y + c_A = 0 and the straight line of group B is a_B x + b_B y + c_B = 0, the estimated point-type landmark position (l_x, l_y) is expressed as follows:

l_x = (b_A c_B − b_B c_A) / (a_A b_B − a_B b_A),  l_y = (a_B c_A − a_A c_B) / (a_A b_B − a_B b_A)    (7)

The relationship between the robot and a point-type landmark is shown in figure 5. The measured value z_t obtained from the point-type landmark consists of the distance r_t from the robot to the landmark, the relative angle φ_t, and its direction s_t. Therefore, the measured value is expressed by the following equation:

z_t = (r_t, φ_t, s_t)^T    (8)

The measurement function h is expressed, using the position (x_t, y_t) and attitude θ_t of the robot and the position (l_x, l_y) and direction l_s of the point-type landmark, by the following equation:

h(x_t) = ( √((l_x − x_t)² + (l_y − y_t)²),  atan2(l_y − y_t, l_x − x_t) − θ_t,  l_s )^T    (9)

Figure 6 shows the robot used in this research. The experimental robot is a front-wheel-steering type mobile robot driven by the rear wheels. An encoder is attached to the driving wheels, and odometry is computed from the encoder readings.
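The corner detection and its range-bearing measurement translate directly into code. The following Python sketch implements the intersection of the two wall lines and the point-landmark measurement as reconstructed in equations (7) and (9); the angular tolerance and function names are assumptions.

```python
import math

def intersect_lines(A, B, eps=1e-9):
    """Corner position from wall lines A: a_A x + b_A y + c_A = 0 and
    B: a_B x + b_B y + c_B = 0, as in equation (7)."""
    (aA, bA, cA), (aB, bB, cB) = A, B
    det = aA * bB - aB * bA
    if abs(det) < eps:
        return None                       # walls (nearly) parallel: no corner
    lx = (bA * cB - bB * cA) / det
    ly = (aB * cA - aA * cB) / det
    return lx, ly

def is_corner(A, B, tol=math.radians(10)):
    """Accept the intersection as a corner only when the walls meet at
    roughly 90 degrees (theta_AB close to pi/2)."""
    theta_A = math.atan2(-A[0], A[1])     # direction of line A: vector (b, -a)
    theta_B = math.atan2(-B[0], B[1])
    d = abs(theta_A - theta_B) % math.pi
    return abs(d - math.pi / 2) < tol

def point_measurement(pose, lx, ly, ls):
    """Expected (r_t, phi_t, s_t) for a point-type landmark (equation (9))."""
    x, y, theta = pose
    r = math.hypot(lx - x, ly - y)
    phi = math.atan2(ly - y, lx - x) - theta
    return r, phi, ls
```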
Furthermore, in near 470 s, the self-location was greatly corrected and the standard deviations became small by remeasuring the highly reliable landmark detected at the start. It was confirmed by this experiment that the line segment-type landmark was very useful in the environment where it was difficult to obtain the point-type features. Conclusion In this paper, we have described AUKF-SLAM, line segment-type and point-type landmarks. Next, we conducted experiments using an actual robot and verified that line segment-type landmarks were useful. In the future, we will conduct an experiment to verify the estimated kinematic parameters of the robot.
2019-04-22T13:11:05.507Z
2018-03-16T00:00:00.000
{ "year": 2018, "sha1": "be41be8090733abfda960a5bb79297fc241eb627", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/962/1/012018", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bfdaa86fc7cbb30154fa39e044bc985de6e09cf2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
236945885
pes2o/s2orc
v3-fos-license
Impact of Hypoxia over Human Viral Infections and Key Cellular Processes

Oxygen is essential for aerobic cells, and thus its sensing is critical for the optimal maintenance of vital cellular and tissue processes such as metabolism, pH homeostasis, and angiogenesis, among others. Hypoxia-inducible factors (HIFs) play central roles in oxygen sensing. Under hypoxic conditions, the α subunit of HIFs is stabilized and forms active heterodimers that translocate to the nucleus and regulate the expression of important sets of genes. This process, in turn, induces several physiological changes intended to adapt to these new and adverse conditions. Over the last decades, numerous studies have reported a close relationship between viral infections and hypoxia. Interestingly, this relationship is somewhat bidirectional, with some viruses inducing a hypoxic response to promote their replication, while others inhibit hypoxic cellular responses. Here, we review the cellular responses to hypoxia and discuss how HIFs can promote a wide range of physiological and transcriptional changes in the cell that modulate numerous human viral infections.

Introduction

Hypoxia is described as a reduction in the normal levels of oxygen due to decreased availability or delivery of this gas to cells and tissues [1]. Under low oxygen levels, such as below 6%, or a partial pressure (pO2) of 40 mmHg or less, tissues and cells can initiate a hypoxic response to adapt to this new condition [1]. Importantly, hypoxic conditions occur in several diseases, such as ischemic heart disease, ischemic stroke, and solid tumors, among others [2,3]. Under these low-oxygen situations, numerous physiological changes occur in tissues and cells so as to rapidly adapt to this adverse condition. For instance, an increase in the respiration rate and its depth will occur at the whole-organism level. At the cellular level, metabolic switches take place, shifting from aerobic to anaerobic enzymatic pathways, followed by a significant decrease in energy generation, which can be accompanied by the induction of a wide array of genes [1,2,4,5]. Hypoxia-inducible factors (HIFs) are transcriptional regulators that modulate the expression of hypoxia-dependent genes [1,2,4,5]. HIF-1 was first described in 1992 in human hepatocellular carcinoma cells (Hep3B) and was immediately characterized as a critical regulator of oxygen tension levels [6]. This transcription factor was later found to belong to the PER-ARNT-SIM (PAS) family and was then characterized as a heterodimeric DNA-binding protein complex composed of a constitutively expressed β subunit and an oxygen-dependent α subunit [7,8]. Three isoforms of the α subunit have been described since then, namely HIF-1α, HIF-2α, and HIF-3α, giving rise to three different proteins, HIF-1, HIF-2, and HIF-3, respectively [9,10]. When the α and β subunits dimerize, the HIFα/β heterodimer becomes transcriptionally active, translocates to the nucleus, and binds to hypoxia response elements (HREs), which are DNA consensus sequences present in the regulatory regions of HIF-target genes [1,7,8]. It is important to note that for this to occur, the α subunit of HIF must be stabilized, which happens under hypoxic conditions. In this scenario, degradation of the α subunit of HIF is inhibited thanks to the obstruction of the von Hippel-Lindau-containing (pVHL-containing) ubiquitin E3 ligase complex function, which otherwise targets the α subunit for degradation [7,8].
Alternatively, inhibition of the factor inhibiting HIF-1 (FIH-1) can also stabilize HIF. Under normoxic conditions (normal oxygen conditions), FIH-1, which is an asparaginyl hydroxylase, hydroxylates an asparagine residue in the C-terminal transactivation domain (C-TAD), which prevents the recruitment of transcriptional coactivators that bind to this region within HIF-1α. Notably, this process inhibits the association of HIF-1α with the transcriptional coactivator CBP/p300 and, hence, the transcriptional activation of HIF-1α [7,8]. Importantly, prolyl-hydroxylase domain proteins (PHDs) hydroxylate two proline residues of HIF-1α (P402 and P564) [7,8]. Once these residues are converted to hydroxyproline, pVHL can bind to HIF-1α, leading to its degradation [7,8]. PHDs require oxygen as a co-substrate, and thus under hypoxic conditions, their enzymatic activity is inhibited, leading to HIF-1α stabilization [11]. Importantly, oxygen-independent regulation of HIF-1α also occurs. For instance, the activation of phosphatidylinositol 3-kinase (PI3K) increases HIF-1α stabilization [12]. This is mediated by an increase in HIF-1α protein translation through the PI3K-target protein, protein kinase B (Akt), and the downstream component mammalian target of rapamycin (mTOR) [12]. Noteworthy, the latter disrupts the binding of the eukaryotic translation initiation factor 4E binding protein (4E-BP1) to the eukaryotic translation initiation factor 4E (eIF-4E) via phosphorylation of 4E-BP1 [13]. This modification relieves the inhibition of cap-dependent mRNA translation and results in enhanced HIF-1α protein synthesis [13]. Additionally, mTOR can induce the phosphorylation of p70S6 kinase (S6K), which also promotes HIF-1α protein synthesis [13]. Moreover, some growth factors, such as insulin-like growth factor (IGF-1) and epidermal growth factor (EGF), activate receptor tyrosine kinases, which in turn activate PI3K, Akt, and the FKBP-rapamycin-associated protein (FRAP) [14]. Subsequently, FRAP induces the expression of HIF-1α under normoxic conditions [14]. RAS (rat sarcoma) protein can also be activated by growth factors, and under these conditions, the RAS/RAF/MEK/ERK kinase cascade is stimulated [12]. Extracellular signal-regulated kinase (ERK) phosphorylates 4E-BP1, S6K, and MAP kinase-interacting kinase (MNK), which can also directly phosphorylate eIF-4E [12]. ERK is also known to phosphorylate CBP/p300. As a result of all these signaling events, the mRNA translation of HIF-1α and its transcriptional activity are increased [12]. Additionally, HIF-1α has been shown to be directly phosphorylated by the p44/p42 mitogen-activated protein kinases (MAPK) (ERK1/2) in vitro and in vivo. Furthermore, mass spectrometry analyses of in vitro phosphorylated recombinant HIF-1α identified two HIF-1α serine residues, S641 and S643, as ERK1/2 targets. Moreover, inhibition of this phosphorylation impaired the nuclear accumulation and activity of HIF-1α [13,15]. Recently, it has been reported that the proviral integration site of the Moloney murine leukemia virus 1 (PIM1) kinase is able to directly phosphorylate HIF-1α at threonine 455 and HIF-2α at serine 435. These phosphorylations inhibit the ability of PHDs to bind and hydroxylate HIF-1α, thus promoting the protein stabilization of these transcription factors [16]. Reactive oxygen species (ROS) are also capable of inducing HIF-1α under normoxic conditions.
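The oxygen-dependent branch of this regulation can be summarized as a simple decision logic. The following Python sketch is a toy model of the PHD/pVHL and FIH-1 switches described above; the thresholds and outcomes are deliberate simplifications for illustration, not measured biochemical values.

```python
def hif1a_fate(o2_percent, phd_active=True, fih_active=True):
    """Toy model of the oxygen-dependent regulation of HIF-1alpha.
    Thresholds are arbitrary illustrations, not experimental parameters."""
    hypoxic = o2_percent < 6.0            # hypoxia taken as < 6% O2 (see text)
    if not hypoxic and phd_active:
        # PHDs use O2 as co-substrate: P402/P564 hydroxylation -> pVHL binding
        return "degraded (prolyl-hydroxylated, pVHL-targeted)"
    if not hypoxic and fih_active:
        # FIH-1 hydroxylates the C-TAD asparagine -> CBP/p300 not recruited
        return "stable but transcriptionally inactive (FIH-hydroxylated)"
    # Hypoxia (or inhibited hydroxylases): HIF-1alpha accumulates, dimerizes,
    # translocates to the nucleus, and activates HRE-driven transcription
    return "stabilized and transcriptionally active"

for o2 in (21.0, 5.0, 1.0):
    print(f"{o2:>4}% O2 -> {hif1a_fate(o2)}")
```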
Furthermore, exogenous hydrogen peroxide (H2O2) and glucose oxidase (GOx), which generates H2O2, can increase HIF-1α and HIF-2α stabilization [18]. Interestingly, the use of 2,3-dimethoxy-1,4-naphthoquinone (DMNQ), a redox cycler, has also been described to increase the levels of HIF-1α under normoxic conditions [18]. The mechanisms eliciting the increased levels of HIF-1α by ROS have been reported to be mediated by superoxide (SO) generated by xanthine/xanthine oxidase, which partially inhibited pVHL binding to HIF-1α [18]. Additionally, under normoxia, siRNA targeting of Mn-SOD has been shown to increase HIF-1α levels by inhibiting pVHL-HIF binding, which in turn inhibits the activity of PHDs [18]. There is also evidence that PHDs can be directly inhibited by the presence of SO [18]. On the other hand, nitric oxide (NO) has been reported to have a stimulatory effect on the accumulation of HIF-1α. Different studies have described that NO donors, macrophage-derived NO, NO synthase transfection, and increased endogenous NO via iNOS all induce HIF-1α accumulation and the transcription of HIF-1 target genes [18,19]. However, this effect has been shown to be concentration-dependent, as some concentrations of NO actually have an inhibitory effect on HIF-1α stabilization [18]. It has been reported that PI3K and Akt are essential signaling components in NO-induced HIF-1α stabilization [18]. Additionally, S-nitrosation may contribute to HIF-1α stabilization in a concentration-dependent manner [18]. Interestingly, S-nitrosoglutathione (GSNO), which donates nitrosonium, induces S-nitrosation of HIF-1α, inhibits the interaction of pVHL with HIF-1α in the presence of PHD1, PHD2, and PHD3 under normoxic conditions, and induces HIF-1α accumulation [18]. Importantly, different studies have reported a direct relationship between inflammatory diseases and hypoxia over the last decades. For instance, HIF-1α has been reported to play a relevant role in the function of immune cells, which may affect the host's response to pathogens [20]. Additionally, this transcription factor has been reported to be involved in the outcome of many infections, such as those mediated by bacteria, parasites, and fungi [21-23]. Moreover, hypoxia may have a considerable effect on numerous viral infections, consistent with several viruses inducing a hypoxic response upon infection to promote their replication or stabilizing HIF-1α with concomitantly enhanced viral gene expression [24,25]. Below, we review and discuss cellular responses to hypoxia and how viruses can modulate hypoxia-related genes in their favor.

Cellular Responses to Hypoxia

Numerous studies have detailed how hypoxic conditions induce a wide range of physiological and transcriptional changes in cells. Consistently, HIFs have been shown to target a wide array of genes with different functions and to induce distinct cellular responses to hypoxic conditions, which are detailed below.

Regulation of Glucose Metabolism

Cell metabolism in mammalian cells is highly dependent on oxygen, and thus changes in the availability of this gas generate important changes in catabolic and anabolic processes. Under hypoxic conditions, cells will need to switch to anaerobic glycolysis to adapt to this scenario, consequently increasing glucose conversion to lactate [1]. In this regard, HIF-1, but not HIF-2, plays an important role in promoting the transcription of key enzymes involved in glucose metabolism.
For instance, the transcription of the glucose transporters (GLUT) GLUT1, GLUT3 and, to a lesser extent, GLUT4 and GLUT10 has been reported to be promoted by HIF-1 to offset low ATP availability [26,27]. An increase in the expression of GLUT transporters is positively correlated with the increased expression of other glycolytic enzymes, which suggests an interplay between HIF-1, GLUTs, and glucose-metabolism-related enzymes associated with host metabolic reprogramming in diseases such as cancer and those mediated by oncogenic viruses [28][29][30]. Glycolytic pathway enzymes, such as phosphofructokinase-1 (PFK-1), 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase (PFK-2/F-2,6-BPase), phosphoglycerate kinase 1 (PGK-1), and lactate dehydrogenase A (LDHA), among others, are all upregulated by hypoxia through HIF-1 [31][32][33][34]. Furthermore, mRNA expression of hexokinase 2 (HK II), which catalyzes the phosphorylation of glucose to glucose-6-P, correlates with HIF-1α protein in hepatocellular carcinoma cells and metastatic liver cancer, and HK II and HIF-1α protein expression co-localize in these cancer cells [35,36]. Furthermore, during viral infections, such as those produced by the hepatitis C virus (HCV), HK II is overexpressed, and in cells infected with the human respiratory syncytial virus (hRSV), HIF-1α suppression induced a decrease in HK II protein levels [37,38]. Moreover, PFK-1, which phosphorylates fructose-6-P to fructose-1,6-bisphosphate (FBP), is also activated under hypoxic conditions by HIF-1-induced PFK-2/F-2,6-BPase, and the downregulation of the latter in cancer cells has been reported to inhibit tumor growth [39,40]. Genes encoding other glycolytic enzymes, such as aldolase (ALDO) and enolase (ENO), are also directly upregulated by HIF through HREs in their promoter regions [41]. Under hypoxic conditions, PGK1 and pyruvate kinase M (PKM) catalyze the transfer of a phosphoryl group to ADP, leading to ATP production [42]. Interestingly, PKM2, an isoform of PKM, has been described to act as a coactivator of HIF, and the expression of this glycolytic enzyme is induced by mTOR-mediated HIF-1 stabilization [43,44]. Finally, another target of HIF-1 in the glycolysis pathway is glyceraldehyde-3-phosphate dehydrogenase (GAPDH), which catalyzes the oxidation of glyceraldehyde-3-phosphate and the reduction of NAD+ to NADH, and has been reported to be associated with enhanced aggressiveness of some tumors [45,46] (Figures 1 and 2).

Additionally, hypoxia affects oxidative phosphorylation, where HIF-1 has been reported to modulate the cytochrome c oxidase subunit 4 (COX4) isoforms, which allows a more efficient use of the available oxygen in the cell. A switch from COX4-1 to COX4-2 is elicited by the upregulation of the transcription of both the COX4-2 and LON (a mitochondrial protease required for COX4-1 degradation) genes, which ultimately leads to increased COX4-2 protein synthesis and increased COX4-1 proteolysis, respectively [47]. Moreover, pyruvate dehydrogenase (PDH) is inhibited through its phosphorylation by pyruvate dehydrogenase kinase 1 (PDK1) under conditions of prolonged hypoxia, shunting pyruvate away from the mitochondria and thereby reducing the activity of the tricarboxylic acid (TCA) cycle, as an adaptive response to prevent the generation of toxic levels of ROS [48].
HIF-1 will also redirect the use of glutamine towards citrate by upregulating isocitrate dehydrogenase-1 (IDH1) to maintain fatty acid synthesis through reductive carboxylation [49] (Figures 1 and 2). Importantly, hypoxia can induce acidosis due to an increase in lactic acid production by the cell [50,51]. Noteworthy, proteins associated with pH regulation are also targets of HIF. For instance, HIF-1 induces the expression of the Na+/H+ exchanger 1 (NHE1), which mediates proton efflux from the cell [52]. Moreover, carbonic anhydrases IX and XII (CAIX and CAXII), which are also targets of HIF-1 [53], play essential roles in acidifying the extracellular environment, which is necessary to maintain an intracellular alkaline pH [50]. Additionally, the monocarboxylate transporter 4 (MCT4), which catalyzes proton-coupled transport of lactate, is induced by hypoxia (Figures 1 and 2) [51].

Figure 1. Cellular responses to hypoxia are mediated by HIFs, which modulate several key cellular processes. HIF-1 regulates oxidative phosphorylation by acting on COX4 to promote efficient oxygen use, and pyruvate dehydrogenase (PDH) is inhibited. Lipid metabolism is also modulated by HIFs via upregulation of genes involved in lipogenesis and lipid storage and inhibition of genes involved in triacylglycerol degradation. Additionally, under hypoxic conditions, cells may favor anaerobic glycolysis through the upregulation of several genes involved in the glycolytic pathway. HIF-1 has also been related to the regulation of intracellular pH, promoting an alkaline internal environment by inducing the proton efflux transporters NHE and MCT4 and regulating the carbonic anhydrases IX and XII (CAIX and CAXII). Apoptosis is also affected by hypoxia, through increased cytochrome c (cyt c) release, stabilization of p53, and induction of the BCL2 interacting protein 3 (BNIP3). Additionally, hypoxia has different effects on cell proliferation: HIF-1α interaction with MYC leads to the expression of hexokinase 2 (HK II), pyruvate dehydrogenase kinase 1 (PDK1), and vascular endothelial growth factor A (VEGFA), and HIF-2 induces the target of rapamycin complex 1 (mTORC1) by inducing different growth factors. In contrast, cellular proliferation is suppressed when HIF-1α displaces MYC from target genes, such as CDKN1A, MSH2, and NBS1, and reduces MYC promoter occupancy. Hypoxia has also been described to cause the specific expression of microRNAs (hypoxamiRs) involved in the tight regulation of HIF-1 function. Furthermore, angiogenesis is also affected by hypoxia to augment oxygen levels: HIF-1 and HIF-2 upregulate the expression of angiopoietin 1 and 2 (ANGPT1 and ANGPT2), platelet-derived growth factor B (PDGFB), nitric oxide synthase (NOS), placental growth factor (PGF), and vascular endothelial growth factor (VEGF). Lastly, erythropoiesis is influenced by HIF-1 and HIF-2, which cause an increase in the uptake of iron through the upregulation of transferrin (TF), ferroportin (FPN), and ceruloplasmin (CP), altogether promoting EPO production to boost erythrocyte production. Green arrows indicate induction or upregulation, while red arrows indicate inhibition or downregulation.

Figure 2. Metabolism regulation under hypoxic conditions. Cellular metabolism is affected by hypoxic conditions. Under low oxygen conditions, HIF-1 increases the transcription of enzymes involved in glucose catabolism. Similarly, HIF-1 enhances the anaerobic lactate pathway by increasing the transcription of lactate dehydrogenase A (LDHA).
Furthermore, HIF-1 promotes the transcription of glucose transporters to offset low ATP availability. Conversely, pyruvate dehydrogenase kinase 1 (PDK-1) is induced, causing phosphorylation and inactivation of pyruvate dehydrogenase (PDH) to regulate the levels of acetyl-CoA. Additionally, HIF-1 upregulates isocitrate dehydrogenase-1 (IDH-1) and aconitase expression, causing a reduction in acetyl-CoA availability and reducing the activity of the tricarboxylic acid (TCA) cycle to decrease ROS generation by the mitochondria. Furthermore, hypoxia promotes the transcription of the LON gene, reducing the levels of cytochrome c oxidase subunit 4 isoform 1 (COX4-1). Altogether, HIF-1 upregulates COX4-2, allowing a cellular environment for optimal use of oxygen during oxidative phosphorylation. Additionally, hypoxic conditions have been reported to affect lipid metabolism. Lipogenesis is upregulated by HIF-1 through the induction of several target genes, thus promoting fatty acid and triacylglycerol (TAG) synthesis. Additionally, endocytosis of lipoproteins is regulated during hypoxia through the overexpression of the low-density lipoprotein (LDL) receptor-related protein 1 (LRP1) and the very low-density lipoprotein (VLDL) receptor (VLDLR). Moreover, IDH1 is regulated by HIF-1 indirectly, which helps maintain fatty acid synthesis through reductive carboxylation. Lipid storage is upregulated by the induction of acylglycerol-3-phosphate acyltransferase 2 (AGPAT2) and lipin-1. Moreover, HIF-1 and HIF-2 inhibit enzymes that participate in fatty acid degradation via downregulation of the expression of several target genes involved in this process. Green arrows indicate induction or upregulation, and red arrows denote inhibition or downregulation.

Regulation of Lipid Metabolism

At present, there is accumulating evidence indicating that lipid metabolism is modulated by HIF and that this supports some cellular processes during hypoxia in healthy cells, as well as in tumors [54]. Importantly, the transcription factor PPARγ has been reported to be activated by HIF-1 and to promote fatty acid (FA) and triacylglycerol (TAG) synthesis [55]. Moreover, the fatty acid-binding proteins (FABP) 3 and 7 are also upregulated by HIF and support lipogenesis in tumor cells [56]. Furthermore, endocytosis of lipoproteins is also regulated by HIF-1α under hypoxic conditions, through the overexpression of low-density lipoprotein receptor-related protein 1 (LRP1) and the very low-density lipoprotein receptor (VLDLR) [57,58]. HIF-1 has also been reported to redirect glutamine towards citrate by an indirect upregulation of IDH1 to maintain FA synthesis through reductive carboxylation (Figures 1 and 2) [50]. In addition, α-ketoglutarate, which acts as a substrate for citrate production and FA synthesis, is also upregulated by HIF-1 through the induction of the expression of glutaminase 1 (GLS1) and the E3 ubiquitin ligase SIAH2 [54,59]. Moreover, fatty acid synthase (FAS), the primary enzyme involved in lipogenesis, is also upregulated through Akt and HIF-1 activation under hypoxic conditions, leading to an Akt-mediated activation of the sterol regulatory element-binding transcription factor 1 (SREBP-1) [60]. On the other hand, TAGs have been reported to be stored in lipid droplets (LDs) during hypoxia through HIF-1-mediated induction of the expression of acylglycerol-3-phosphate acyltransferase 2 (AGPAT2) and lipin-1, which participate in the formation of these structures, as a mechanism to avoid lipotoxicity [61,62].
AGPAT2 mediates the conversion of lysophosphatidic acid (LPA) to phosphatidic acid (PA), which in turn is converted into diacylglycerol (DAG) by lipin-1; these two products are then used as precursors for TAGs [61,62]. Moreover, the accumulation of lipids within the cell during hypoxia is accompanied by the inhibition of enzymes that participate in FA degradation. In this regard, in cancer cells, HIF-1 and HIF-2 have been reported to downregulate the expression of peroxisome proliferator-activated receptor γ coactivator-1α (PGC-1α), carnitine palmitoyltransferase 1A (CPT1A), and medium- and long-chain acyl-CoA dehydrogenases (MCAD and LCAD), which participate in β-oxidation, thereby inhibiting this process [63,64].

Regulation of Erythropoiesis and Angiogenesis

Erythropoiesis and angiogenesis are also strongly influenced by hypoxia [65]. Interestingly, these processes are regulated by both HIF-1 and HIF-2 [65]. To meet the iron needs for synthesizing hemoglobin, erythropoiesis induces changes in iron metabolism to increase its availability [66]. Importantly, HIFs upregulate the expression of genes involved in iron homeostasis, such as ceruloplasmin (encoded by the CP gene), which oxidizes Fe2+ to Fe3+, and transferrin (TF), which transports serum iron in its ferric form (Fe3+) [67][68][69]. On the other hand, both HIFs and hypoxic conditions can downregulate hepcidin (encoded by the HAMP gene), which results in increased cell surface expression of ferroportin (FPN), an iron exporter [70,71]. Additionally, erythropoietin (EPO) is also upregulated by HIFs [72], with this hormone playing an important role in producing red blood cells in hematopoietic organs (Figure 1) [65,70].

Angiogenesis is a process that is required during wound healing and inflammation for the generation of newly formed blood vessels. HIFs are known to target several genes involved in these processes [73][74][75]. For instance, HIF-1 induces the transcription of angiopoietin 1 and 2 (ANGPT1 and ANGPT2), placental growth factor (PGF), nitric oxide synthase (NOS), and platelet-derived growth factor B (PDGFB), among others [75][76][77]. Each of these proteins plays a specific role in forming new blood vessels, and thereby the regulation of these factors is necessary for maintaining an optimal vasculature [1]. Interestingly, some of these factors, such as ANGPT-1, do not have known HRE sites, which has led to the belief that HIF induces these factors indirectly under hypoxic conditions [77]. Additionally, vascular endothelial growth factor (VEGF), which directs the migration of mature endothelial cells towards hypoxic areas, is also induced by hypoxia [78]. HIF-1 activates VEGF by directly binding to an HRE present in the VEGF gene [78]. Furthermore, it has been described that multiple VEGF receptors are also regulated by hypoxia [79]. Additionally, a study reported that HIF-1 induces extracellular matrix invasion and tube formation by promoting autonomous endothelial cell activation (Figure 1) [80].

Regulation of Apoptosis and Cell Proliferation

Apoptosis and cell proliferation are two cellular processes that are regulated, among other factors, by hypoxia. Indeed, hypoxia induces hyperpermeability of the inner mitochondrial membrane, which causes cytochrome c release and, consequently, the inhibition of the electron transport chain, which decreases membrane potential and reduces mitochondrial-derived ATP production [81,82]. This, in turn, activates the BCL2-associated X protein (Bax), causing cytochrome c (cyt c) release into the cytosol [82].
In a chemically-induced apoptosis model, it has been shown that cells lacking VHL are sensitive to apoptosis but that the reintroduction of this protein renders the cells resistant to apoptosis [81,83]. Furthermore, it has been reported that under hypoxic conditions HIF-1α can interact with and stabilize p53, which induces programmed cell death by regulating proteins such as Bax [81,84]. This interaction was shown to be mediated by the mouse double minute 2 homolog (Mdm2), a p53 ubiquitin ligase that negatively regulates p53 [81,84]. Interestingly, HIF-1α has been described to upregulate p53 levels by inhibiting Mdm2-mediated degradation, while HIF-2α has been reported to have the opposite effect on p53, inhibiting this protein in an Mdm2-independent manner [81,84,85]. Moreover, a relationship between HIF-1 and the proapoptotic BCL2 interacting protein 3 (BNIP3) has also been established [81]. This latter protein, which can bind and inhibit the anti-apoptotic proteins Bcl-2 and Bcl-xL, was reported to be upregulated under hypoxic conditions in different cell lines [86,87]. The notion that upregulation of apoptosis under hypoxic conditions may occur in a HIF-1α-dependent manner is further supported by the finding that cells lacking HIF-1 are unable to produce large amounts of BNIP3, which correlates with reduced cell death rates [87]. Interestingly, the promoter of BNIP3 contains an HRE [81,88].

Different studies support the notion that cellular proliferation is suppressed by hypoxia in several cell types and, more specifically, that HIF-1α stabilization under hypoxic conditions produces cell cycle arrest through the inhibition of the pro-oncoprotein MYC [89]. Importantly, MYC controls the G1/S cell cycle transition, inhibits the expression of p21 and p27, and promotes cellular proliferation by upregulating the expression of glycolytic enzymes and protein synthesis, thus promoting cell growth [89,90]. Interestingly, it has been reported that HIF-1α binds to the SP1 transcription factor and displaces MYC from multiple target genes, such as CDKN1A, MSH2, and NBS1 [89,91,92]. Furthermore, it was found that HIF-1α can disrupt the association between MYC and the MYC-associated factor X (MAX) and the Myc-interacting zinc finger protein 1 (MIZ1), which causes a reduction in MYC promoter occupancy at several target genes [89,93]. Furthermore, under chronic hypoxia, it was reported that HIF-1α promoted MYC degradation [94,95]. Intriguingly, transformed cells expressing HIF-2α exhibit enhanced MYC activity, a rapid entry into the S phase of the cell cycle, and increased MYC promoter occupancy [89,93]. Additionally, HIF-2α may stimulate the target of rapamycin complex 1 (mTORC1), leading to the induction of cellular proliferation under hypoxic conditions [96]. In this study, it was found that the upregulation of mTORC1 by HIF-2α occurs through the induction of the focal adhesion kinase (FAK) family interacting protein of 200 kDa (FIP200) gene, which is a HIF-2α target gene [96]. In turn, FIP200 interacts with tuberous sclerosis complex 1 (TSC1), which induces the disruption of the TSC1-TSC2 complex, leading to mTORC1 activation [96]. Additionally, it was found that HIF-2α could also upregulate mTORC1 activity by inducing the expression of growth factors, such as transforming growth factor-alpha (TGF-α), platelet-derived growth factor subunit B (PDGFB), and IGF-1, which lead to Akt and mTORC1 activation [97].
Interestingly, the expression of MYC in tumors and its interplay with HIF-1α seem to play an important role in cellular proliferation in transformed cells. High levels of MYC have been reported to sequester and bind MAX, relieving the inhibition imposed by HIF-1α [98]. Additionally, some findings support the notion that HIF-1α can cooperate with MYC to induce the expression of certain genes, including hexokinase 2 (HK II), pyruvate dehydrogenase kinase 1 (PDK1), and vascular endothelial growth factor A (VEGFA) [98]. For a more comprehensive view of the effects of HIFs and hypoxia on tumor cell proliferation, we suggest the reviews written by Keith et al. [89] and Gordan et al. [99].

miRNAs Regulated by Hypoxia

Several miRNAs have been described to be involved in the cellular response to hypoxia [100,101]. miRNAs are defined as small non-coding RNA molecules of approximately 22 nucleotides [101]. These molecules usually bind to the 3′ UTR of mRNAs and modulate their stability and translation, generally reducing translated protein levels [102]. Therefore, they may control HIF expression under hypoxic conditions. Several miRNAs regulated by hypoxia (hypoxamiRs) have been described, and many of these are tissue-specific, except for the 'master hypoxamiR' miR-210, which is robustly and ubiquitously induced by cells regardless of the tissue [103]. Interestingly, such miRNAs may contribute positive or negative feedback to the stabilization of HIFs. For instance, miR-210 forms a positive feedback loop with HIF-1α: this factor drives miR-210 expression, which in turn inhibits the activity of PHDs, the most commonly described HIF-1α inhibitors, thus increasing HIF-1α stabilization [104,105]. Additionally, miR-424 has also been reported to elicit a positive feedback loop with HIF-1α, as endothelial cells under hypoxic conditions express this hypoxamiR, which was shown to inhibit cullin 2, a protein necessary for the assembly of the ubiquitin ligase system, thereby stabilizing HIF-1α [106] (Figure 1). In contrast, miR-155, which is also induced by HIF-1α, binds to the mRNA of HIF-1A and consequently decreases HIF-1 protein levels [107]. Furthermore, miR-18a has been reported to be upregulated by hypoxia in human endothelial cells and also targets HIF-1A mRNA, thus reducing HIF-1α protein levels [108]. Interestingly, members of the miR-17/92 cluster may also target HIF-1A in a direct and dose-dependent manner; however, HIF can suppress the expression of this miRNA cluster under hypoxic conditions (Figure 1) [109]. Altogether, the role of miRNAs and their relationship with the cellular responses to hypoxia is intricate and has been extensively reviewed by Bertero and Serocki [100,101].

Taken together, multiple key cellular pathways are directly affected and modulated by hypoxia, which in turn causes a wide range of cellular responses aimed at adapting to these new adverse conditions. In this context, HIFs play a key role as master regulators of numerous physiological changes under low oxygen levels. Interestingly, the study of these transcription factors and their relationship with multiple cellular responses opens the opportunity to target them as possible candidates for treating different diseases.
For instance, VEGF and ANGPT-1 play an important role in ischemic diseases, as they are able to stimulate the remodeling of collateral blood vessels, leading to increased blood flow to different tissues [110], and their regulation via HIF is an interesting approach for developing new therapeutic strategies.

Effect of Hypoxia on Viral Infections

In recent years, numerous studies have reported a relationship between viruses and hypoxia [25]. However, this response varies widely among viruses, as in some cases hypoxia may promote viral replication, while in others it downregulates this process [24,25]. For instance, hypoxic states have been reported to aid the Epstein-Barr virus (EBV, HHV-4) and the Kaposi's sarcoma-associated herpesvirus (KSHV, HHV-8) in switching from a latent to a lytic mode. Indeed, low oxygen levels have been shown to increase the expression of Zta, a protein that mediates this switch between latent and lytic infection in EBV, in a B-lymphoblastoid cell line [111]. This, in turn, increased the number of copies of the viral DNA in the infected cells (Table 1) [111]. Additionally, HIF-1α has been described to induce EBV lytic gene expression by binding to Zp, the promoter of the latent-lytic switch gene BZLF1 [112].

Table 1. Reported effects of hypoxia or HIF-1α on viral infections (virus; observation; outcome; model; reference).
- EBV: hypoxia increases the expression of the EBV transcription factor Zta in a B-lymphoblast cell line; increases EBV lytic gene expression; in vitro [113].
- KSHV: HIF-1α interacts with the promoter of the viral ORF34 gene in Hep3B cells and induces vGPCR expression in B cells; in vitro [114].
- KSHV: hypoxia, with the viral inducer TPA, produces an increase in the levels of IL-6 expression in KSHV-infected primary effusion lymphoma B-cell lines; in vitro [116].
- John Cunningham virus (JCV): HIF-1α interacts with the JCV control region in glial cells; increases JCV replication; in vitro [117].
- Adenovirus: hypoxia decreases the expression of the adenovirus protein E1A; decreases adenovirus replication in HEK293 and numerous cancer cell lines; in vitro [118].
- HBV: hypoxia leads to HBV genome copy number reduction in HepG2 cells; in vitro [119].
- DENV: hypoxia increases DENV replication; in vitro [120].
- HIV: HIF-1α interacts with Vpr, inducing HIV gene expression in human kidney cells; increases HIV production; in vitro [121].
- HIV: hypoxia induces an IL-7-driven upregulation of GLUT1 expression, favoring glucose uptake in T cells; increases HIV production; in vitro [122].
- HIV: hypoxia decreases CDK9/cyclin T1 and Sp1 expression, producing a reduction in HIV Tat expression; reduction in HIV production; in vitro [123].
- VSV: hypoxia reduces VSV mRNA at early time points after infection of HeLa cells, but at later times after infection VSV reduces eIF2α phosphorylation, promoting viral protein synthesis; increase in VSV replication; in vitro [124].
- SARS-CoV-2: decrease of SARS-CoV-2 infection through a reduction of the release of viral particles in lung epithelial cells; decrease of SARS-CoV-2 replication in human pulmonary artery smooth muscle cells; in vitro [126].

Moreover, HIF-1α has been reported to induce the expression of KSHV lytic genes. Under hypoxic conditions, HIF-1α binds to the latency-associated nuclear antigen (LANA) of KSHV. In turn, the HIF-1α:LANA complex binds to an HRE present in the promoter region of the viral Rta gene, eliciting increased expression of its product, which plays an important role as a lytic replication and transcriptional activator [113]. Likewise, hypoxia induces the transcription of the KSHV lytic gene ORF34, namely through HIF-1α or HIF-2α, which bind to an HRE located in the promoter region of this gene (Table 1) [114].
Furthermore, it has been shown that under hypoxic conditions HIF-1α can elicit a metabolic reprogramming in KSHV-infected B cells by inducing the viral G protein-coupled receptor (vGPCR), which is considered a lytic gene needed for KSHV viral reactivation [127]. This protein has also been described as a potent activator of several cell proliferation pathways, such as MAP kinase signaling, and of angiogenesis [128]. Additionally, this protein plays an important role in tumorigenesis due to its capacity to activate MEK/ERK pathways [128]. Furthermore, the expression of vGPCR is associated with global transcriptional regulation, the generation of ROS, and enhanced glucose uptake [127]. Similarly, hypoxia induces lytic replication of KSHV through the viral inducer 12-O-tetradecanoylphorbol-13-acetate (TPA) [115], which elicits an increase in interleukin-6 (IL-6) levels and stimulates spindle cell growth, while likely activating angiogenic factors (Table 1) [115]. Interestingly, an oncolytic mutant herpes simplex virus type 1 (HSV-1 G207) with a deletion in the γ34.5 gene has been reported to display increased replication under hypoxic conditions [116]. This phenomenon would be partly due to an increase in the expression of GADD34, a mammalian factor that can complement the mutation, particularly under hypoxic conditions (Table 1) [116]. The John Cunningham virus (JCV) has also been reported to be affected by hypoxic conditions. HIF-1α has been shown to bind to the JCV non-coding regulatory region (control region) in the viral genome and activate early and late viral gene promoters, which may help JCV replicate (Table 1) [117].

Hypoxia has also been described to upregulate infection by some RNA viruses. For instance, genome replication of the dengue virus (DENV), but not virus entry or RNA translation, was enhanced under hypoxic conditions and was mediated through HIF-1α/2α, the serine/threonine kinase AKT, and ROS in liver cells, monocytes and epithelial cells (Table 1) [120]. Additionally, it has been reported that the activity of the promoter of the human immunodeficiency virus (HIV) is upregulated by a heterodimer formed by the viral protein Vpr and HIF-1α, which binds to the GC-rich binding domains in the long terminal repeat (LTR) [121]. Furthermore, under hypoxic conditions, an interleukin-7 (IL-7)-induced increase in the expression of GLUT1 has been observed, which leads to an increase in glucose uptake favoring HIV-1 infection [122] (Table 1). On the other hand, replication of the vesicular stomatitis virus (VSV) is higher under low oxygen levels than under normoxic conditions, both in a tumor environment and when HeLa cells are cultured under hypoxia. These studies showed that early after infection viral mRNAs are reduced, but later on VSV can surpass the inhibition of viral protein synthesis by overcoming the hypoxia-associated increase in the phosphorylation of the eukaryotic initiation factor 2α (eIF-2α) (Table 1) [124].

In contrast, some viruses have been described to be downregulated under hypoxic conditions. For instance, low oxygen levels have been reported to induce a reduction in the expression of the adenovirus protein E1A through a post-transcriptional mechanism [118]. Importantly, this protein is necessary for driving the cell into the S phase of the cell cycle, suggesting that a hypoxia-induced G1 arrest leads to reduced viral replication (Table 1) [118].
Interestingly, another study showed that E1A mRNA levels are not affected under hypoxic conditions; thus, the reduction in E1A protein levels under hypoxia may occur without a reduction in E1A mRNA levels [129]. Additionally, it has been reported that the angiotensin-converting enzyme 2 (ACE2), a cellular receptor key to the infection process mediated by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and which mediates viral entry into target cells, is decreased upon HIF-1α accumulation, which could translate into reduced infection and viral replication [126,130]. This phenomenon may occur because hypoxic conditions increase the levels of angiotensin II, which in turn inhibits the synthesis of ACE2 [125]. Consistently, populations that live at high altitudes and are chronically exposed to hypoxic conditions seem to exhibit milder pathology after SARS-CoV-2 infection [131]. Another study also supports a role for HIF-1α in the replication cycle of SARS-CoV-2, with the mRNA and protein levels of the viral entry factors ACE2 and TMPRSS2 being downregulated in cells cultured under hypoxic conditions [125]. Moreover, this study evaluated the effect of HIF-1α after viral entry by inducing hypoxia 8 h post-infection, which showed a 90% reduction of viral RNA levels and a decreased release of viral particles compared with normoxic conditions (Table 1) [120]. In contrast, other studies have reported induction of ACE2 gene expression by hypoxia [126,132,133], which may be explained by regulation at the transcriptional or translational level, may be cell-type specific, and may also depend on the existence of metabolic phenotypes that modulate HIF signaling [134]. Furthermore, under hypoxic conditions, the deoxyribonuclease 1 (Dnase 1) gene is overexpressed and the enzyme is encapsidated into hepatitis B virus (HBV) particles, which induces viral DNA degradation and thus promotes the formation of genome-free HBV virions (Table 1) [119]. Interestingly, downregulation of HIV-1 replication under low oxygen conditions has also been reported. However, this reduction was not due to HIF-1α expression, as it was transitory at 1% O2 and absent at 3% O2. Noteworthy, HIV transcription is induced by the viral protein Tat, and under conditions of 3% O2 this process was found to be reduced compared with 21% O2 [123]. This reduction observed at 3% O2 is suggested to be mediated by a decreased activity of the holoenzyme complex CDK9/cyclin T1 and of the transcription factor Sp1 (Table 1) [123].

Noteworthy, heme oxygenase-1 (HO-1), an inducible enzyme expressed in response to physical and chemical stress, has been shown to be induced under hypoxic conditions. Human dermal fibroblasts that undergo hypoxia display a strong stabilization of HO-1 mRNA [135]. Additionally, different studies describe that HO-1 is stimulated by hypoxia in several cell types, such as astrocytes, cardiomyocytes, Chinese hamster ovary (CHO) cells, and vascular smooth muscle cells [136][137][138][139]. Interestingly, pharmacological induction of HO-1 by cobalt protoporphyrin (CoPP) has been reported to impair the propagation of herpes simplex virus type 2 (HSV-2) in vitro [140]. Similarly, HO-1 induction by CoPP has been reported to inhibit the replication of the human respiratory syncytial virus (hRSV) and lung inflammation in vivo [141]. Other viruses are also affected by HO-1 expression [141,142]. Thus, hypoxia may elicit antiviral effects through other host factors, such as HO-1.
Taken together, hypoxia not only has a significant impact on the regulation of key cellular responses but also modulates infection mediated by several human viral pathogens. Interestingly, pathogens may benefit from such low oxygen levels or from the cell responses elicited under these conditions. Indeed, hypoxia affects the replication cycle of several viruses and may promote their replication or activation from latent states, eliciting lytic gene expression. Importantly, hypoxic conditions, or the induction of cellular responses to hypoxia, can also negatively impact viral processes, such as some related to SARS-CoV-2. Interestingly, vadadustat, an α-ketoglutarate analog that acts as a PHD inhibitor and is thus capable of stabilizing HIF-1α, is currently in clinical trials to treat acute respiratory distress syndrome (ARDS) in COVID-19 patients (https://clinicaltrials.gov/ct2/show/NCT04478071, accessed on 4 July 2021). Thus, hypoxia, and the cellular response elicited by this condition, arises as an attractive approach for exploring the development of new antiviral treatments. We foresee that more clinical trials evaluating the use of drugs capable of regulating the expression of HIF-1α are likely to occur in the near future due to the accumulating evidence regarding the interplay between hypoxia and human viral infections.

Virus Induction of Hypoxia Responses

Several studies have reported an upregulation of HIF-1α during viral infections [25]. However, the mechanisms underlying viral modulation of this transcription factor are quite variable and depend on several factors, which are depicted in Figures 3 and 4 and described below.

Figure 4. Viral modulation of HIF-1α inhibitors. Several viruses have been reported to activate HIF-1α and thus promote the expression of hypoxia-inducible genes that are part of the hypoxic response. This is accomplished through the stabilization of HIF-1α and avoidance of its proteasomal degradation. The E6 oncoprotein from HPV-16, LMP-1 from EBV, LANA from KSHV, and HBx from HBV have all been shown to inhibit HIF-1α ubiquitination and its targeting to the proteasome. The influenza A virus has been reported to directly inhibit the activity of FIH-1, while also inhibiting the proteasome. KSHV miRNAs inhibit PHD1 expression, while the EBV EBNA-3 and EBNA-5 proteins inhibit PHD1 and PHD2, respectively, promoting HIF-1α stabilization. Lastly, hRSV, the HPV-18 E2 protein, the EBV LMP-1 protein, and dsDNA from HIV have been described to increase mitochondrial ROS production (hRSV through NO and LMP-1 through H2O2), which regulates HIF-1 stabilization by inhibiting PHD function. In contrast to the viruses mentioned above, NDV was found to promote proteasomal degradation of HIF-1α.

Activation of HIF-1α Mediated by Viral Kinases

Different studies have reported that viruses may activate HIF-1α via kinases. For instance, the oncoproteins E6 and E7 of the high-risk human papillomavirus type 16 (HPV-16) have been described to induce cellular HIF-1α protein accumulation by activating the ERK1/2 and PI3K/Akt signaling pathways [143] (Figure 3). Additionally, the latent membrane protein 1 (LMP1) of EBV has been reported to increase HIF-1α expression through the p44/42 MAPK pathway [144].
Moreover, this viral protein induced the promoter activity of HIF1A; this was accomplished by LMP1 through its C-terminal activating region 1 (CTAR-1), engaging ERK1/2 and NF-κB signaling, and consequently led to the induction of transcription of the HIF-1α gene (Figure 3) [145]. Furthermore, KSHV induction of HIF-1α is also mediated by kinases. Paracrine activation of mTOR has been reported to promote the upregulation of HIF-1, which was accomplished by viral GPCR activation through multiple signaling pathways, such as those associated with ERK and AKT [146]. These pathways, in turn, act on the tuberous sclerosis complex proteins TSC1 and TSC2, leading to a derepression of mTOR and, thereby, to the induction of HIF-1α [146]. Moreover, vGPCR was shown to be able to induce the phosphorylation of the HIF-1α regulatory/inhibitory domain via the p38 and mitogen-activated protein kinase (MAPK) signaling pathways, leading to an upregulation of this transcription factor (Figure 3) [146]. Additionally, the human cytomegalovirus (HCMV or HHV-5) has been described to increase the stabilization of HIF-1α through its chemokine receptor US28, which stimulates Gq protein alpha subunit (Gαq)-dependent signaling cascades, leading to calcium/calmodulin-dependent protein kinase II (CaMKII) activation and subsequent stimulation of AKT/mTOR [147]. Interestingly, the activation of HIF-1α after HCMV infection has also been reported to occur through the PI3K/AKT pathway (Figure 3) [148].

Activation of HIF-1α Mediated by Viral Impairment of HIF-1α Inhibitors

HIF-1α stabilization requires the inhibition of molecules such as the pVHL ubiquitin E3 ligase complex, PHDs, or FIH-1. Because of the relevance of these interactions, several viruses target this pathway as a strategy to induce HIF-1α upon infection. For instance, the HPV-16 oncoprotein E6 enhances HIF-1α activation by protecting it from proteasomal degradation [152]. To accomplish this, E6 attenuates pVHL binding to HIF-1α and consequently promotes HIF-1α accumulation (Figure 4) [152]. Additionally, KSHV's LANA has been described to function as a component of the EC5S ubiquitin complex and to target pVHL and p53 for degradation, thereby inhibiting pVHL- and p53-mediated HIF-1α degradation [155]. Interestingly, miRNAs encoded in the KSHV genome downregulate the expression of PHD1, which leads to the stabilization of HIF-1α (Figure 4) [156]. Moreover, the influenza A virus (IAV) has been reported to inhibit the proteasome and to decrease the expression of FIH-1, both of which impair the degradation of HIF-1α, leading to its stabilization (Figure 4) [157]. Finally, the HBV protein X (HBx) has been shown to stimulate HIF-1α protein stabilization by inhibiting the pVHL-mediated proteasomal degradation pathway [158]. Furthermore, this viral protein causes the deacetylation of the oxygen-dependent degradation domain of HIF-1α [159]. This, in turn, induces the dissociation of PHDs and pVHL from HIF-1α, leading to the stabilization of this transcription factor [159]. This process is probably mediated by the metastasis-associated protein 1 (MTA1) and histone deacetylases (HDACs), whose expression is enhanced by HBx (Figure 4) [159].

Activation of HIF-1α by Reactive Oxygen Species

Some viruses have also been reported to activate HIF-1α through reactive species, including ROS, NO, and H2O2.
For instance, the E2 viral protein of human papillomavirus type 18 (HPV-18) has been reported to localize to mitochondrial membranes and induce the production of mitochondrial ROS without causing cell death [160]. The presence of these ROS correlated with the stabilization of HIF-1α and the induction of HIF-1 target genes (Figure 4) [156]. Additionally, it has been shown that the induction of HIF-1α by the EBV LMP1 protein is mediated through H2O2 production [144]. This was shown in LMP1-transfected Ad-AH cells treated with catalase, an H2O2 scavenger that strongly suppresses LMP1-induced production of H2O2; HIF-1α induction, in this case, was completely blocked upon this treatment (Figure 4) [144]. Furthermore, it has been shown that the human respiratory syncytial virus (hRSV) can stabilize HIF-1α through NO. Human bronchial epithelial cells infected with this virus release NO, which was reported to stabilize HIF-1α. Consistently, the inhibition of NO blocked the expression of HIF-1α and HIF-1 target genes in these cells (Figure 4) [161]. Moreover, cytosolic double-stranded DNA generated during HIV replication in CD4+ T cells has been described to induce mitochondrial ROS-dependent HIF-1α stabilization [162]. Additionally, the viral protein Vpr was described to induce ROS by increasing H2O2 levels, which in turn leads to HIF-1α stabilization upon HIV infection (Figure 4) [163].

In summary, hypoxic cellular responses and viral infections have a bidirectional relationship, with hypoxia upregulating several processes related to the replication cycle of viruses, and with viruses being able to induce a hypoxic response in the infected cells. Interestingly, at present, multiple studies are describing how viruses target different cellular pathways to upregulate HIF-1α. Importantly, the stabilization of this transcription factor may positively affect viral infection and thus opens the possibility of pharmacological inhibition of HIFs as a new antiviral treatment.

Virus Inhibition of Hypoxic Responses

Viruses Downregulating HIF-1α

Interestingly, there is also evidence that some viruses may downregulate the stabilization of HIF-1α. For instance, the Newcastle disease virus (NDV) has been described to downregulate HIF-1α by targeting it to the proteasomal pathway, inhibiting HIF-1α protein accumulation in various cell lines upon infection and producing an overall decrease in the transcription of HIF-1 target genes (Figure 4) [164]. Additionally, infection of reovirus-permissive tumor cells and reovirus-resistant tumor cells with the mammalian orthoreovirus (MRV, reovirus) was described to downregulate HIF-1α protein levels. Interestingly, this effect was also induced by UV-inactivated virus, which indicates that this downregulation is independent of virus replication [165]. Moreover, transfection of the reovirus genome into human tumor cell lines reduces HIF-1α protein levels, even in the presence of polyinosinic-polycytidylic acid (poly I:C), a synthetic double-stranded RNA analog that acts as a pathogen-associated molecular pattern (PAMP), thus indicating that viral RNA plays a key role in the downregulation of HIF-1α [165]. Taken together, some viruses are also able to downregulate the hypoxic cellular response and modulate the effect of HIF-1α, thus impacting viral infections overall and varying their impact on the host.
This observation opens the possibility for new studies that may focus on assessing how the stabilization of this transcription factor by different host and viral molecules, or by mimicking hypoxia, may contribute to viral control and treatment.

Conclusions

Cellular oxygen sensing mechanisms are essential for helping cells regulate critical functions during hypoxic situations and for modulating the outcome of different diseases. The studies reviewed and discussed here highlight the key role of HIF transcription factors in guaranteeing homeostasis across multiple cellular processes through the modulation of the expression of sets of genes that encode components of the response to hypoxia. Notably, there is at present accumulating evidence suggesting that oxygen-sensing components in the cell, and their response to varying oxygen levels, can significantly affect viral infections, either by directly impacting their replication cycles or by regulating the immune responses elicited by them. This relationship between viruses and the hypoxic cellular response suggests potential pharmacological modulation of key oxygen-sensing components, such as HIF-related pathways, as therapeutic targets against several viruses and the diseases they cause. However, special care must be taken in defining whether HIF stabilization or inhibition exerts beneficial or harmful effects in response to viral infection, as HIF may promote viral replication in some cases, whereas it can inhibit this process in other situations. Therefore, more studies are needed to understand how different viruses respond to hypoxia-related factors and how viruses may modulate oxygen-sensing pathways in their favor. It is crucial that in vitro studies take into account oxygen microenvironments, since not all organs receive the same amount of oxygen in the organism, and changes in oxygen levels may have different effects on viruses depending on viral cell tropism, as hypoxia may promote the replication of viruses in some cells or inhibit their fitness in others. Therefore, refined studies and experimental settings could aid in translating these findings into effective antiviral therapies.

Currently, HIF-1α inhibitors are being tested in clinical trials to treat cancers and anemia, among other conditions. These could be assessed as potential antiviral drugs or as adjuvant therapies to traditional treatments that are not fully effective. It is important to note that before the COVID-19 pandemic, no ongoing clinical trials had evaluated the potential therapeutic effects of HIF-1 stabilizers in viral infections. However, in the search for new antivirals to combat SARS-CoV-2, this situation has accelerated multiple investigations in this regard in humans. Vadadustat, an α-ketoglutarate analog, is currently being evaluated in clinical trials to treat acute respiratory distress syndrome (ARDS) in COVID-19 patients (https://clinicaltrials.gov/ct2/show/NCT04478071, first posted on 20 July 2020). Interestingly, because MRV is known to inhibit HIF-1α, infection with this virus has been evaluated as a potential treatment for cancer [166]. Several studies report that MRV inhibits the accumulation of HIF-1α in infected cells and regulates gene expression in tumors in vitro and in vivo [166][167][168]. Furthermore, a Phase I clinical trial to evaluate the safety of treatment with this virus has provided satisfactory results and allowed the initiation of Phase II and Phase III clinical trials against multiple tumor types [166,169].
However, data so far from clinical trials have shown that this MRV-based therapy does not inhibit the growth of, or induce regression in, all types of tumors [166,169]. Thus, further preclinical and clinical trials are needed to establish the potential of MRV as an effective treatment for different types of cancer. In the short term, this approach should provide valuable information on whether this type of procedure works as a new or additional therapy for treating viral infections. Furthermore, this strategy could be beneficial for treating chronic viral diseases, increasing the availability of drugs to be used, which is urgently needed for many viruses, particularly for those displaying resistance to conventional antiviral therapies.

Funding: This work was funded by ANID-Millennium Science Initiative Program-ICN09_016: Millennium Institute on Immunology and Immunotherapy (P09/016-F; ICN09_016) and FONDECYT grant #1190864 from the Agencia Nacional de Investigación y Desarrollo (ANID). This work was also supported by the Regional Government of Antofagasta through the Innovation Fund for Competitiveness FIC-R 2017 (BIP Code: 30488811-0). MF and ET are ANID fellows #21191390 and #21211300, respectively.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.
The need for antidepressant withdrawal support services: Recommendations from 708 patients

Approximately half of the tens of millions of people currently taking antidepressants will experience withdrawal symptoms when they try to reduce or come off them. Nearly half of these describe their symptoms as severe in surveys. Many prescribing doctors seem ill-informed and unprepared to provide effective discontinuation advice and support, often misdiagnosing withdrawal as a relapse of depression or anxiety. 708 members of online support groups for people on antidepressants, from 31 countries, completed a sentence in an online survey: 'A public health service to help people come off antidepressants should include ................'. Two independent researchers categorised their responses into themes, and then reached consensus via discussion. Seven themes emerged: 'Prescriber Role', 'Information', 'Other Supports/Services', 'Strong Negative Feelings re Doctors/Services etc.', 'Informed Consent When Prescribed', 'Drug Companies' and 'Public Health Campaign'. The most frequently mentioned requirements of the Prescriber Role were that prescribers be properly informed, provide small doses/liquid/tapering strips, develop a withdrawal plan and believe patients about their withdrawal experiences. The most frequently recommended other services were psychotherapy/counselling, support groups, patient led/informed services, nutrition advice, 24-hour crisis support and 'holistic/lifestyle' approaches. Many respondents were angry about how uninformed their doctors were and how they had been treated.

Antidepressant prescribing

Annual antidepressant prescribing in the U.K. recently doubled in ten years (Iacobucci, 2019). A government enquiry found that 7.3 million adults (17% of the population) had been prescribed at least one prescription of antidepressants in 2017-2018 in England alone, with particularly high rates for women, poorer people and older people (Public Health England, 2019). The latest government figures for England show that between October 2021 and September 2022 there were 84.8 million antidepressant prescriptions prescribed, an increase of 2.9 million (3.6%) on the preceding year (NHS-BSA, 2022).

Increased prescribing is due not only to increases in the number of people being prescribed antidepressants but also to individuals being prescribed them for longer (Kendrick, 2021). For example, between September 2017 and September 2022 the number of people in England prescribed antidepressants rose by 23%, but the number of prescriptions rose by 28% (NHS-BSA, 2022). By 2011, half of antidepressant users in England had been taking antidepressants for more than two years (Johnson et al., 2012). Average duration of use has doubled since the mid-2000s in the U.K. (NHS Digital, 2018).
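As a quick sanity check of the prescribing figures just cited, the following is a minimal sketch (not from the paper itself); the derived percentages are rounded reconstructions from the reported numbers.

```python
# Minimal check of the prescribing statistics quoted above
# (reconstructed from the reported figures; not the authors' raw data).

prescriptions_2022 = 84.8e6          # Oct 2021 - Sep 2022 (NHS-BSA, 2022)
increase = 2.9e6                     # increase on the preceding year
prescriptions_2021 = prescriptions_2022 - increase
print(f"Year-on-year growth: {increase / prescriptions_2021:.1%}")  # ~3.5%, reported as 3.6%

# People vs. prescriptions, Sep 2017 - Sep 2022: prescriptions grew faster than
# the number of people, implying more prescriptions per person, i.e. longer use.
people_growth, rx_growth = 1.23, 1.28
print(f"Implied growth in prescriptions per person: {rx_growth / people_growth - 1:.1%}")  # ~4%
```

The second calculation makes the longer-duration point concrete: if prescriptions rise 28% while the number of people rises 23%, each person is, on average, receiving about 4% more prescriptions.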
In the U.S., half of antidepressant users have been taking them for at least five years (Mojtabai and Olfson, 2014).

Antidepressant withdrawal

Recent reviews have revealed the prevalence, severity and duration of antidepressant withdrawal. One review (Davies and Read, 2019) was undertaken for the All-Party Parliamentary Group for Prescribed Drug Dependence in the UK, to inform a government enquiry by Public Health England (2019). Fourteen studies, using a range of methodologies, found that, on average, 56% of people experience withdrawal symptoms when withdrawing from, or reducing, antidepressants. Four surveys assessing severity found that an average of 46% of people described their withdrawal symptoms as 'severe'. Of the ten studies measuring duration, seven found that a substantial proportion of patients have withdrawal symptoms for weeks, months or, more rarely, several years. The most recent review concurs that withdrawal from antidepressants 'represents a quite frequent and burdensome outcome', despite several authors having conflicts of interest relating to drug companies and using the outdated industry euphemism of 'discontinuation syndrome' (Fornaro et al., 2023).

Antidepressant withdrawal is characterised by physical and emotional symptoms (Davies and Read, 2019; Fornaro et al., 2023; Moncrieff, 2020; Read et al., 2014, 2019) that can emerge days, weeks or months after stopping antidepressants, and which sometimes surpass the problems for which the drugs were prescribed (Cosci and Chouinard, 2020; Davies and Read, 2019; Fava, 2021; Fava et al., 2018; Horowitz and Taylor, 2019; Jha et al., 2018). Many doctors are unaware of this evidence and therefore do not understand that their patients are presenting with withdrawal symptoms (Cosci and Chouinard, 2020; Davies et al., 2018; Hengartner and Plöderl, 2018). They often, therefore, misdiagnose relapse of depression or anxiety instead (Framer, 2021; White et al., 2021). Most of a small sample of British GPs reported that their knowledge about withdrawal was inadequate (Read et al., 2020). Furthermore, international surveys of antidepressant users find that less than 1% are told anything about withdrawal by the prescriber (Read et al., 2018, 2020).

Cosci and Chouinard (2020) reviewed the literature on withdrawal symptoms of a range of psychiatric drugs, including antidepressants, and concluded that 'The likelihood of withdrawal manifestations that may be severe and persistent should be taken into account in clinical practice and also in children and adolescents' (p. 283). In 2019 Public Health England produced a report on 'Dependence and withdrawal associated with some prescribed medications: An evidence review'. It acknowledged the full extent of problems withdrawing from antidepressants, and recommended targeted withdrawal services and a helpline to assist people coming off antidepressants and other psychiatric drugs, and more accurate national guidelines. The Royal College of Psychiatrists published a new, evidence-based position statement (Iacobucci, 2019; Royal College of Psychiatrists, 2019) which recommended that patients should be informed of 'the potential in some people for severe and long-lasting withdrawal symptoms on and after stopping antidepressants'. They also produced guidance on 'Stopping Antidepressants', recommending that long-term antidepressants are stopped 'over a period of months or longer' and that tapering plans 'should allow you to reduce the dose at a rate that you find comfortable - as slowly as you need to avoid distressing withdrawal symptoms' (Burn et al., 2020).
The UK's National Institute for Health and Care Excellence also updated its guidelines (National Institute of Clinical Excellence (NICE), 2022), pointing out that:

- 'withdrawal can sometimes be more difficult, with symptoms lasting longer (in some cases several weeks, and occasionally several months)'
- 'withdrawal symptoms can sometimes be severe, particularly if the antidepressant medication is stopped suddenly.'

Facebook groups

Prescribers' lack of knowledge about withdrawal symptoms (Read et al., 2020) means many patients are left without meaningful professional support when reducing or coming off antidepressants. Because their doctors struggle to help them withdraw safely, or misdiagnose their withdrawal symptoms as relapse (Hengartner and Plöderl, 2018), many patients turn to the internet for guidance and support. A recent study of 13 of these withdrawal Facebook groups reported a total membership of over 67,000, mostly women, increasing at about 25% annually (White et al., 2021). The most common reason for pursuing online support was failed clinician-led withdrawal attempts. For example, Surviving Antidepressants has around 13,000 members, with 6,000 case reports publicly accessible on the site, and receives about 750,000 visits monthly. Forum posts have been used to estimate the longevity and time of onset of withdrawal from antidepressants (Stockmann et al., 2018) and instances of drug-induced withdrawal anxiety and mood disorders (Belaise et al., 2012).

A previous publication reported the quantitative findings from the same online survey of Facebook group members that is used in the current paper. It found that 71% of respondents experienced their doctors' advice regarding stopping an antidepressant as 'unhelpful' (57% 'very unhelpful') (Read et al., 2023). The main reasons endorsed were: 'Recommended a reduction rate that was too quick for me', 'Not familiar enough with withdrawal symptoms to advise me' and 'Suggested stopping antidepressants would not cause withdrawal symptoms'.

The current study explores the participants' views in more depth, by reporting the qualitative findings about what sorts of services and support they recommend should be provided by public health services.

The survey

The anonymous online survey, for adults (18 or over), was entitled the 'International Online Survey of Members of Peer Support Groups About Their Experiences of Withdrawing from Antidepressants' (Read et al., 2023). The first page informs potential participants that:

'The purpose is to understand the experience of coming off antidepressants so we can inform UK health services (and other health services around the world) what sort of services need to be provided. This is your opportunity to share your experience of coming off these medications and what you have learned in the process to help others going through similar experiences.

We are interested in people around the world who have the following experiences:

1. You have stopped an antidepressant in the past
2. OR you are currently trying to stop an antidepressant with the help of an online peer support group
3. OR you tried to stop an antidepressant in the past and had to go back on the antidepressant and you are now seeking help to taper safely from an online peer support group.'
The survey asks quantitative and qualitative questions about why people had tried to withdraw, withdrawal symptoms experienced, the process of withdrawal, the role of online support groups, and how useful a list of potential services would be. The current paper reports responses to the following item:

Please complete the following sentence: 'A public health service to help people come off antidepressants should include ................'

Procedures

The study was approved by the University of East London's Research Ethics Sub-Committee (Application ID: ETH2021-0120). The administrators of 15 online support groups for people taking antidepressants were asked to inform their members about the survey. Thirteen of these had been the subject of a previous study (White et al., 2021). We do not know how many administrators did publicise the survey, as our colleague undertaking this part of the project and liaising with the groups, Ed White, has sadly died. The survey, which used the Qualtrics platform, was online from May 2021 to April 2022.

Data analysis

The methodology used to summarise the data was content analysis (Bengtsson, 2016), which aims to give direct voice to participants, with minimal interpretation from the researchers. One researcher read the first half (1-354) of the 708 responses and another read the second half. Each developed themes and subthemes independently of the other researcher. One proposed 11 themes, with 27 subthemes. The other identified eight potential themes and 16 subthemes. Discussion between the two researchers led to agreement on nine themes and 29 subthemes. It was decided that only themes or subthemes with at least ten examples should be included.

One of the researchers then scored all 708 responses on the agreed themes and subthemes. One of the original themes ('Need for more research') did not reach the threshold of ten. Another theme ('Support or information for family or employer') became two separate subthemes of the theme 'Other support/services', scoring services for families and employers separately, since both passed the threshold of ten. Three subthemes of the 'Other support/services' theme (Meditation/Acupuncture/Homeopathy; Lifestyle/Holistic; Exercise/Yoga) were integrated into one, called 'Holistic/Lifestyle'. The subthemes 'Patient-led Services' and 'Services with Patient Input' were merged into one, 'Patient Led/Informed Services'. This left seven themes and 27 subthemes.

The theme 'Strong negative feelings towards doctors/services etc.' was considered sufficiently subjective to warrant two researchers rating it independently of each other. In 577 cases both raters scored the participant as not fitting the theme; in 66 both scored participants as fitting the theme; and another 65 participants were adjudged to fit the theme by one of the two. This represents a 90.8% agreement rate, and a kappa inter-rater reliability score (which allows for agreement by chance) of 0.62, classified as 'substantial agreement' (Landis and Koch, 1977). Following discussion, 26 of those considered to fit the theme by only one of the two researchers were excluded, leaving 105 participants deemed to fit this theme.
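To make the inter-rater agreement figures above concrete, the following is a minimal sketch (not from the paper); the split of the 65 single-rater cases between the two raters is not reported, so an approximately even split is assumed here.

```python
# Minimal sketch reproducing the reported inter-rater agreement statistics.
# Assumption (not stated in the paper): the 65 cases scored as fitting the
# theme by only one rater are split roughly evenly between the two raters.

both_no, both_yes = 577, 66
only_rater1, only_rater2 = 33, 32                    # assumed split of the 65 disagreements
n = both_no + both_yes + only_rater1 + only_rater2   # 708 respondents

# Observed agreement
po = (both_no + both_yes) / n                        # ~0.908, reported as 90.8%

# Chance-expected agreement for Cohen's kappa: kappa = (po - pe) / (1 - pe)
r1_yes = both_yes + only_rater1                      # rater 1 'fits theme' count
r2_yes = both_yes + only_rater2                      # rater 2 'fits theme' count
pe = (r1_yes * r2_yes + (n - r1_yes) * (n - r2_yes)) / n**2

kappa = (po - pe) / (1 - pe)
print(f"agreement = {po:.1%}, kappa = {kappa:.2f}")  # ~90.8%, ~0.62
```

With an even split of the disagreements, this reproduces both the 90.8% raw agreement and the reported kappa of 0.62.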
In exploring possible relationships between themes or subthemes and demographics we ran 136 tests [34 (27 subthemes + 7 themes) × 4 (gender, age, USA vs others, UK vs others) = 136] and, therefore, applied a p value of 0.0004 for a result to be deemed significant, according to the Bonferroni method, in order to correct for Type 1 errors (false positives) (Armstrong, 2014).

All respondents had tried to stop or reduce one or more antidepressants. Half (50.8%) were 'currently reducing/tapering'. A third (32.8%) had tried at least twice to come off an antidepressant and failed. One in eight (12.4%) had tried unsuccessfully five or more times.

When asked 'When first prescribing the antidepressant medication did the doctor tell you anything about withdrawal effects from stopping the medication?' 93.5% responded 'no', 4.3% 'yes' and 2.2% 'don't know/can't remember'.

When asked 'Did the antidepressants help you with the problem they were prescribed for?' 48.7% responded 'yes', 32.8% 'no' and 18.5% 'don't know/not sure'.

Themes and subthemes

The largest theme (349 responses) delineates the expectations that patients have of prescribers. By far the most frequent recommendation is that doctors need to be better informed. The second largest theme shows that the two types of information needed, and currently missing, are about withdrawal symptoms and about how to help patients withdraw gradually and safely. The third theme reports the numerous other services that are needed.

Table 1 lists the final set of seven themes, and the 27 subthemes that were spread across the first three, largest, themes. Tables 2-4 give examples of the 27 subthemes. Table 5 provides examples of the four smaller themes. The gender, age and country of participants are given.

What is striking when reading the survey responses is how angry, frustrated, disappointed, and let down people feel (Theme 4; Table 5). These feelings ranged from being disbelieved by their prescriber that they were in withdrawal to their own disbelief at the lack of accurate information provided to them and the poor level of care they were given once they became ill with withdrawal symptoms. None of the relationships between the themes or subthemes and gender, age or country (UK or USA) reached the p = .0004 level required because of multiple testing (see Methods). Only two relationships came close to reaching the threshold. The 147 UK respondents were more likely than participants from all other countries combined to mention the need for 'small doses/liquid/tapering strips' (20.1% vs 10.5%; X2 = 10.0, p = .002). They were also more likely (14.1% vs 4.5%) to recommend 'crisis support/24-hour/hotline' (X2 = 17.6, p < .001).

Complex responses

Many respondents gave lengthy, thoughtful responses that contributed to multiple subthemes and illustrated how the various issues related to one another. We present just two examples here:

'Emergency helpline if you are feeling suicidal or the urge to self harm. People who have or are successfully tapering to talk to for support rather than a counsellor who still doesn't really get it.'

And:

'Validating the patients experience with side effects that the drugs caused, educated and empathetic doctors who create customized …'

Table 3 Examples of theme 2: Information.
How to withdraw (226):
- 'Full information on how to come off slowly and why this is so important.' (F, USA)
- 'Information about how to come off (how the reduction steps should be).' (45, F, Germany)
- 'Doctors/nurses that are up-to-date with withdrawal methods.' (F, UK)
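The multiple-testing arithmetic can be checked directly; a minimal sketch is below. The chi-square cell counts are reconstructed from the reported percentages (20.1% of 147 UK respondents vs 10.5% of the remaining 561) and are therefore approximate, and scipy's default Yates continuity correction may make the statistic differ slightly from the value reported.

```python
# Bonferroni-corrected threshold for the 136 exploratory tests, plus the
# style of chi-square test reported for UK vs non-UK respondents. Cell counts
# are reconstructed from reported percentages and are approximate.
from scipy.stats import chi2_contingency

alpha = 0.05
n_tests = (27 + 7) * 4                 # 34 themes/subthemes x 4 demographics
threshold = alpha / n_tests
print(f"per-test threshold: {threshold:.4f}")   # ~0.0004

uk_yes, uk_no = 30, 117                # ~20.1% of 147 UK respondents
other_yes, other_no = 59, 502          # ~10.5% of 561 others
chi2, p, dof, _ = chi2_contingency([[uk_yes, uk_no], [other_yes, other_no]])
print(f"chi2 = {chi2:.1f}, p = {p:.3f}, significant: {p < threshold}")
```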
(continuing Table 3, 'How to withdraw'):
- 'Very clear tapering instructions for doctors like "5% or less of the already reduced previous dose, not of the original dose, every 10 to 14 days", and that it may take years.' (F, USA)
- 'Explanation of what to expect and how to use coping skills and supplements.' (F, Belgium)

Counselling:
- 'Better access to psychological support on a one to one but open ended.' (F, UK)
- 'Affordable psychologist services to give practical (non pharma drug) coping techniques.' (F, Canada)
- 'Counselling (both one to one and in groups).' (F, UK)
- 'Specialist counselling that is built around the unique features of withdrawal symptoms.' (F, UK)
- 'Trauma counselling as all of life's trauma is relived in the withdrawal process, family therapy to educate the family and friends of the process and save relationships.'

Support Group (70):
- 'Small local support groups for both withdrawer and any family member/friend supporting the person withdrawing.' (M, UK)
- 'Group meetings in person for people who are tapering.' (F, USA)
- 'Support groups to learn better coping skills.' (F, NZ)
- 'Support group of others who are or have discontinued psych meds.'

Peer-led/informed Services (60):
- 'Someone who has gone through it themselves, that knows the hell we are going through.' (F, Canada)
- 'Well-informed practitioners, ideally those who have experienced antidepressant withdrawals themselves.' (F, USA)
- 'It should definitely include support from people that have lived this nightmare.' (F, UK)
- 'Understanding support perhaps from people who have been through tapering and withdrawal themselves and not a medic who blindly believes what the pharmaceutical companies say.' (F, Ireland)
- 'Peer support if possible....from trained people who have been through withdrawal.' (F, Australia)
- 'I would have more faith in someone who has successfully tapered themself as I feel they would be more understanding and empathetic.' (F, Italy)
- 'Input from patients - not just western educated MDs and "big pharma".' (F, USA)
- 'In person support groups, could be peer run but have invited speakers plus access to pharmacists etc who can advise and support the group.'

Discussion

Respondents felt that currently doctors do not know much about withdrawal, don't believe them, and don't know how to support them. They also thought that services do not provide the sort of measures needed to help people withdraw successfully, including pharmacological preparations required for a gradual tapering process, and other sorts of support considered useful. They described the importance of knowledge about withdrawal and of other sources of support that has been available for some years now but has not been disseminated or implemented.

Uninformed doctors

Perhaps the most powerful finding from this study is that there is a clear divide between prescribers' knowledge and beliefs about antidepressants, and the distressing experiences of their patients. Several of the subthemes of themes 1, 4 and 5 show that respondents were highly critical of their doctors and the service received, starting at the initial prescribing (Read et al., 2016). Respondents felt their prescribers were unaware of the extent to which antidepressants cause withdrawal, leaving patients feeling initially misinformed and subsequently disbelieved. Respondents also mentioned that they would like doctors to be better informed about tolerance effects from antidepressants, closely related to withdrawal effects, as well as the myriad side effects and consequences of antidepressant use over the long term (Fava, 2020; Fava et al., 2020; Read et al., 2017).
The results of our survey mirror previous surveys. A survey of 319 UK antidepressant users, by the All-Party Parliamentary Group for Prescribed Drug Dependence (APPG-PDD, 2018), revealed 'a deep deficit in the current understanding of the potential harms of antidepressants by doctors and psychiatrists'. Most patients (64%) had not received any information on withdrawal or other risks, 9% had been told to stop cold turkey and 40% had been advised to withdraw over 'a few weeks'. Another survey, of 752 British antidepressant users, confirmed that patients often find the information and support offered by prescribers when they are trying to withdraw to be very varied (Read, Gee et al., 2018) and frequently inadequate (Read, Grigoriu et al., 2020). A smaller (n = 158) British analysis of submissions to parliament (Guy et al., 2020) found: '…. a lack of information given to patients about the risk of antidepressant withdrawal; doctors failing to recognise the symptoms of withdrawal; doctors being poorly informed about the best method of tapering prescribed medications; patients being diagnosed with relapse of the underlying condition or medical illnesses other than withdrawal; patients seeking advice outside of mainstream healthcare, including from online forums; and significant effects on functioning for those experiencing withdrawal.'

Two other surveys, of 1,829 New Zealand patients (Read et al., 2018) and 867 patients from 31 countries (Read, 2020), both found that only 1% had been told anything about withdrawal when first prescribed antidepressants.

Table 4 examples (Pharmacists):
- 'Fully trained pharmacists who are aware that you can order liquid suspensions or "specials".' (F, UK)
- 'Going slow (working with pharmacy to lower doses at each step of the way).' (F, Canada)

Theme 2 shows, unsurprisingly, that the two most important areas of information prescribers are thought to require concern withdrawal symptoms and how to support patients when they decide to reduce or withdraw (see Table 3). A few respondents stressed the importance of the integrity of the source of information, which should be based on patient experience and/or research, rather than on drug company claims.

A survey of British GPs (Read, Renton et al., 2020) is consistent with patients' views about GPs' knowledge. Almost half of the GPs underestimated the prevalence of withdrawal and most thought they did not have adequate knowledge about withdrawal effects and would like more training or information.

Besides expecting doctors to be better educated so that they can impart accurate information and appropriate advice, respondents had other recommendations. They suggested that if doctors were better trained in withdrawal identification and management, they would be more likely to meet another patient expectation, to be believed about their withdrawal symptoms rather than have them dismissed or misdiagnosed as a relapse of the original problem. Being disbelieved was experienced as invalidating and adding to the burden patients have to deal with on top of the withdrawal process itself. Furthermore, acknowledgement seems to be a prerequisite for more compassionate support, something mentioned by numerous respondents.
Respondents also recommended that doctors be able to arrange the small doses, liquids or tapering strips so essential for gradual withdrawal, especially in the final stages of hyperbolic tapering (Horowitz and Taylor, 2019, 2021). They should develop a clear plan for withdrawal, individualised to the patient's situation and needs and the specific drug(s) involved. Some respondents emphasised the need for regular monitoring. These three recommendations were the most endorsed in an earlier, quantitative question in the same survey, completed by about 1,000 respondents, asking how helpful were eight specific aspects within and beyond the doctor's office: 'access to smaller doses' (88% 'very helpful'); 'a personalised, flexible reduction plan' (79%); and 'regular follow-up to monitor reduction' (72%) (Read et al., 2023). A few respondents in the current study recommended that monitoring continue after withdrawal is complete. This highlights findings that withdrawal syndromes can be protracted, and can last for months and sometimes years after stopping medication (Cosci and Chouinard, 2020; Framer, 2021; Hengartner et al., 2020), likely because adaptations of the brain to antidepressants can take months or years to resolve after stopping (Horowitz et al., 2022). Withdrawal effects may occasionally not begin until weeks or months after medication is ceased, for reasons that are not well understood (Stockmann et al., 2018).

Table 5 Examples of the four smaller themes (fragment):
- 'The truth!!!' (35, F, Germany)
- 'Trying every avenue before going on them!!' (42, F, Australia)
- 'Educated help! Dr's and the pharmacist seem to know nothing helpful and their comments and advise regarding the effects of the medication is more harmful than good, not to mention quick to make one feel like an idiot.' (F, Canada)
- 'A check on pharmaceutical companies to be honest and open about their products, and force them to write withdrawal dangers ON THE BOX!'

INFORMED CONSENT WHEN PRESCRIBED (87):
- 'Honest communication of how long and how severe withdrawal side effects can be when first offered the drug.' (M, UK)
- 'Comprehensive warning provided at the time of initial prescription about the possible long-term harmful effects of the drug as well as potential difficulties coming off of the drug. without this information provided upfront at the time of initial prescription a patient is not equipped to give informed consent. i cannot know for sure but it seems to me that had i known beforehand about withdrawal syndrome and its variations i would have reconsidered ever taking the drug in the first place. every time of the many times psychotropic meds were prescribed to me, they were presented as entirely safe.' (NB, Denmark)
- 'It should start at the front end with prescribing and made very clear that these drugs can be extremely tough to come …' (33, M, Canada)
- 'Should be told of problems before issuing.' (M, Mexico)

DRUG COMPANIES (17):
- 'At some point in time somebody is going to take this fight to where it needs to be: at the throat of big pharma. I hope to live to see that day.' (M, Spain)
- 'Health professionals who learn from their patients, not from drug companies.' (F, UK)
- 'It is actually a crime that pharma companies can have these drugs on the market for 30++ years and not make it blatantly obvious the effects they can have on the human body.' (33, M, Canada)

PUBLIC HEALTH CAMPAIGN (16):
- 'Public information campaign about withdrawal symptoms and side effects of SSRI medication.' (M, Netherlands)
- 'Education of the mass population (no one believes us, its so disheartening).' (F, Australia)
- 'Public service announcements regarding dangers of AD usage.'

Other services and supports

Theme 3 produced 13 recommendations beyond the role of the doctor, most frequently counselling/psychotherapy, support groups, services led or designed by patients, nutritional advice and 24-hour crisis support/hotlines (see Table 4). This is broadly consistent with the following rates of responses to the earlier question in the survey (Read et al., 2023): 'Online support group supervised by a professional' (68% 'very helpful'), 'Individual therapy/counselling' (65%), 'Support/education for carers and/or families' (62%), 'Telephone/online, video/online chat help line' (62%). The open-ended question, however, highlighted needs not covered by our eight specific suggestions, including the importance of services run, or informed, by patients with experience of withdrawal, specialist services, financial support/services, residential 'detox' facilities, involving pharmacists and providing information for employers.

Anger

Respondents indicated their disillusionment with the system, their pain, and their losses, which in many cases had been highly detrimental to their lives, relationships and health. Responses were sarcastic, disbelieving and highly critical, demonstrated by capital letters, exclamation marks and swearing. Levels of trust between prescriber and patient are clearly eroded and that damage is hard to put right. Some, understandably, focussed their anger on drug companies (Theme 5; Table 5).
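To make the arithmetic behind hyperbolic tapering concrete, the sketch below generates a schedule in the style respondents quoted ('5% or less of the already reduced previous dose every 10 to 14 days'). It is purely illustrative, not clinical guidance; the starting dose, reduction fraction, interval and stopping threshold are arbitrary assumptions, but the output shows why such tapers can take years and require very small doses near the end.

```python
# Illustrative hyperbolic-style taper: each step removes a fixed fraction of
# the *previous* dose (not the original), so absolute reductions shrink as
# the dose falls. All parameters are arbitrary assumptions for illustration.

def taper_schedule(start_mg, fraction=0.05, interval_days=14, stop_mg=0.5):
    dose, day, steps = start_mg, 0, []
    while dose > stop_mg:
        steps.append((day, round(dose, 3)))
        dose *= (1 - fraction)   # 5% of the already-reduced previous dose
        day += interval_days
    return steps

schedule = taper_schedule(start_mg=20.0)
print(f"{len(schedule)} steps over ~{schedule[-1][0] / 365:.1f} years")
for day, dose in schedule[:3] + schedule[-3:]:
    print(f"day {day:4d}: {dose} mg")  # final doses need liquids/tapering strips
```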
Progress and barriers

The information missing from the depressing picture painted by our respondents is, belatedly, now readily available for those willing to use it. Our introduction summarised reviews about the incidence and severity of withdrawal symptoms. Guidelines for how to withdraw gradually and safely have been published (Horowitz and Taylor, 2019, 2021, 2022; Royal College of Psychiatrists, 2020). It would be helpful, however, if the Royal Colleges of Psychiatrists and General Practitioners would take institutional responsibility for actively disseminating the information. The International Institute for Psychiatric Drug Withdrawal (www.iipdw.org) has produced a free video on how to safely withdraw, and translated it into five languages, featuring two of the current authors (SL, MH) (IIPDW, 2022). Facebook groups, from which our respondents were recruited, are full of information and advice (White et al., 2021), which has been invaluable to thousands of people while professional services have been so slow to act. An array of resources has been made available by The Lived Experience Advisory Panel for Prescribed Drug Dependence, which consists of people (including one of the current authors, SL) with both lived and professional experience of dependence and withdrawal from prescribed psychiatric and painkilling drugs (https://leap4pdd.org). A recent review has even been able to 'outline a preliminary rubric for determining the risk of withdrawal symptoms for a particular patient, which may have relevance for determining tapering rates' (Horowitz et al., 2022; Taylor et al., 2021). The Maudsley Prescribing Guidelines has sections on safe tapering of major drug classes (Taylor et al., 2021). A textbook dedicated to safe deprescribing of psychiatric medication, including antidepressants, the Maudsley Deprescribing Guidelines, is forthcoming (Horowitz and Taylor, 2023).

Until recently, the availability of psychotherapy has seldom been addressed in the withdrawal literature, with the exception of Fava and Belaise (2018). 2019 saw the publication of 'Guidance for Psychological Therapists: Enabling conversations with clients taking or withdrawing from prescribed psychiatric drugs' (Guy et al., 2019). A manual for a psychotherapeutic approach to problems related to antidepressant dependence was published in 2021 (Fava, 2021).

Meanwhile, however, doctors, their professional bodies, politicians, health service managers and insurance companies around the world continue to ignore the suffering of, quite literally, millions of people. One partial exception is the UK, where the Royal College of Psychiatrists, after considerable pressure, published a helpful and broadly accurate position statement in 2019, followed, as mentioned, by some excellent information sheets aimed at the public (RCP, 2020); however, they have neglected to educate psychiatrists or GPs.

The previously mentioned comprehensive report published by Public Health England (PHE, 2019) recognised the need for both local services to support withdrawal and, in the short term, whilst waiting for such services to come on stream, a time-limited helpline to support those already impacted. A commissioning framework suggesting (not mandating) the implementation of local services was published by NHS England in early 2023 (NHS England, 2023). A request for funding for a National Helpline, however, was turned down in the government's most recent spending review.
Following the PHE review, a Withdrawal Services Working Group was convened to define patient needs for the implementation of PHE's recommendations. One of the recommendations is a dedicated Prescribed Medication Specialist associated with each doctor's practice who can ensure informed consent by advising the patient accurately on the risks and benefits of antidepressants (and other dependence-forming drugs), develop an exit plan and tapering schedule, and advise and monitor the patient's progress. The probability of this being implemented is unknown.

Meanwhile the National Institute for Health and Care Excellence (2022) recently published a guideline on safe prescribing and withdrawal, but failed, despite much urging, to include detail on how to recognise withdrawal responses or provide step-by-step guidance on how to taper someone safely in clinical practice, so doctors relying on these national government guidelines still won't know how to do this. Furthermore, even if this detail were to be included, there is a long lag time for new guidance to be implemented, so widespread education campaigns will be needed to upskill clinicians.

A partial explanation for the lack of action, for so long, is the powerful and pervasive influence of the pharmaceutical industry over medical and mental health practitioners, politicians, health care managers and the media (Davies, 2022; Healy, 2012; Moncrieff, 2006, 2022). This was mentioned by numerous respondents. One small, but relevant, example comes from the survey of GPs in the UK, mentioned earlier. Only 7% acknowledged that their clinical practice had been influenced by contact with drug company reps. However, when asked how much other GPs were influenced, they reported that 82% of their colleagues had been influenced (Read, Renton et al., 2020).

Finally, it must be noted that although the question asked was about services, several respondents felt their lives had been so damaged by withdrawal experiences that they stressed the need for a public health campaign to warn the public about the dangers of these drugs and the difficulties experienced in coming off them. This emphasises the gap between what people were led to believe about these drugs and the actual experience of taking them, noting that the vast majority of studies on these drugs are short term (several weeks) and mostly conducted by their manufacturers (Munkholm, 2019).

Limitations

The population surveyed was not a randomised sample and may not be representative of the wider population of people using antidepressants. All respondents had the emotional and financial resources needed to find online advice on antidepressant withdrawal. Although respondents lived in many different countries, the vast majority identified as 'White/Caucasian' and two thirds had a university degree. The majority being female is less of a limitation since antidepressants are prescribed far more often to women than men.
The fact that our sample was selected from withdrawal websites means respondents' experiences are not likely to represent the withdrawal experiences of everyone who stops antidepressants. However, the point of our analysis is to highlight the needs of those who have more severe difficulties, and we know these are not uncommon (Davies & Read, 2019). Moreover, it is notable that almost half of our respondents (49%) believed that antidepressants had helped them, despite the negative experiences of withdrawal reported by most participants, and hence they were not a group who held generally negative views about psychiatric treatment.

Conclusion

The recommendations of our respondents, based on bitter experience, and those of the previous surveys with which they concur, should no longer be ignored. They should be implemented as soon as possible (Davies et al., 2023). It is as simple, and as difficult, as that.

In particular, there is an urgent need to educate clinicians to recognise withdrawal and distinguish it from relapse, as well as to have detailed knowledge of how to safely stop antidepressants. Patients need to be given access to formulations of medications (such as liquids or compounded medications) that allow for safe and individualised tapering and, for some, specialist services that allow tapering to be carefully monitored and adjusted. The consequence of not providing these services is loss of faith in the medical system.

Declaration of Competing Interest

MAH and JM are collaborating investigators on the RELEASE trial funded by the Medical Research Future Fund in Australia (MRFAR000079). MAH is co-founder of a digital health clinic which helps people to safely stop unnecessary antidepressants in Canada. MAH has been commissioned to write the Maudsley Deprescribing Guidelines in Psychiatry by Wiley Blackwell. JM is a co-investigator on the NIHR-funded REDUCE trial and collects modest royalties from five books about psychiatric drugs. JR and SL report no conflicts of interest.

Table 2 Examples of theme 1: Prescriber role.
- 'Safe and slow tapering guidelines which can be adjusted at any point when it gets harder at the lower doses.'
- 'Help with working out a tapering schedule.'
- 'Literature for family and friends to read about what we are experiencing mentally and physically.'

Table 4 Examples of theme 3: Other support/services.
2023-06-23T14:00:20.960Z
2023-06-22T00:00:00.000
{ "year": 2023, "sha1": "6b85ab3a51b4c5fbcb1e578159bbd693f4eac696", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.psychres.2023.115303", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b1db49fe297a099c0494d4842a043ee8fcd03069", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
15891038
pes2o/s2orc
v3-fos-license
Associations of high HDL cholesterol level with all-cause mortality in patients with heart failure complicating coronary heart disease

Abstract

The aim of the present study was to evaluate the association between HDL cholesterol level and all-cause mortality in patients with ejection fraction reduced heart failure (EFrHF) complicating coronary heart disease (CHD). A total of 323 patients were retrospectively recruited. Patients were divided into low and high HDL cholesterol groups. Between-group differences and associations between HDL cholesterol level and all-cause mortality were assessed. Patients in the high HDL cholesterol group had higher HDL cholesterol level and other lipid components (P < 0.05 for all comparisons). Lower levels of alanine aminotransferase (ALT) and high-sensitivity C-reactive protein (Hs-CRP), and a higher albumin (ALB) level, were observed in the high HDL cholesterol group (P < 0.05 for all comparisons). Although left ventricular ejection fraction (LVEF) was comparable (28.8 ± 4.5% vs 28.4 ± 4.6%, P = 0.358), the mean mortality rate in the high HDL cholesterol group was significantly lower (43.5% vs 59.1%, P = 0.007). HDL cholesterol level was positively correlated with ALB level, while inversely correlated with ALT, Hs-CRP, and NYHA classification. Logistic regression analysis revealed that after extensive adjustment for confounding variates, HDL cholesterol level remained significantly associated with all-cause mortality, although the magnitude of association was gradually attenuated, with an odds ratio of 0.007 (95% confidence interval 0.001-0.327, P = 0.012). A higher HDL cholesterol level is associated with better survival in patients with EFrHF complicating CHD, and future studies are necessary to demonstrate whether increasing HDL cholesterol level will confer a survival benefit in these populations of patients.

Introduction

Heart failure (HF) is a leading cause of morbidity and mortality around the world, and coronary heart disease (CHD) is a major cause of HF. [1,2] Although percutaneous coronary artery intervention (PCI) and coronary artery bypass grafting (CABG) have substantially improved patients' outcomes, the survival rate in patients with CHD complicated with ejection fraction reduced HF (EFrHF) is still extremely low, with a mortality rate of up to 50% within 5 years of symptom onset. [3] High-density lipoprotein (HDL) cholesterol has well-known cardio-protective effects in CHD via enhancement of reverse cholesterol transport and anti-inflammatory and antioxidative properties. [4,5] In addition, it has been reported that among HF patients, HDL cholesterol level in nonsurvivors was significantly lower than in survivors, [6] suggesting that a reduced HDL cholesterol level may portend a poor survival rate in HF patients. Nevertheless, that study included patients with mixed etiologies of HF, and whether a reduced HDL cholesterol level is associated with all-cause mortality in EFrHF complicating CHD is unknown. In the present study, we retrospectively recruited studied participants who had been angiographically diagnosed with CHD and echocardiographically diagnosed with left ventricular EF (LVEF) ≤ 35%.
Results from the present study provide firsthand evidence regarding the impact of HDL cholesterol level on all-cause mortality in Chinese populations with EFrHF complicating CHD, and these results may also provide information for future prospective studies aiming to improve the outcomes of patients with EFrHF complicating CHD by modifying HDL cholesterol level.

Studied participants

Studied participants were retrospectively selected and enrolled from the Medical Document System of Guangdong General Hospital. The inclusion criteria were twofold: angiographically diagnosed CHD treated with PCI and/or CABG, and LVEF ≤ 35% as assessed by echocardiography. In brief, significant coronary artery stenosis is defined as ≥70% luminal stenosis of the left anterior descending artery (LAD), left circumflex artery (LCA), or right coronary artery (RCA), and ≥50% luminal stenosis of the left main (LM) artery. The present study was approved by the Clinical Research Ethic Committee of Guangdong General Hospital. Since this was a retrospective study, no informed consent was obtained from studied participants. We searched for studied participants in the Medical Document System of Guangdong General Hospital from the year 2010 to 2015. Initially, 345 patients were selected according to the above-mentioned inclusion criteria; after full screening, those with missing HDL cholesterol data were excluded, and a total of 323 patients were included in the final analyses.

Data collection

Demographic, anthropometric, medical, and laboratory data were collected from the Medical Document System of Guangdong General Hospital, recorded in the case report form, and re-checked. Follow-up was conducted in the first quarter of 2016. In the process of obtaining pre-specified clinical endpoints in terms of myocardial infarction, ischemic stroke, and cardiovascular and noncardiovascular mortality, we found that >90% of studied participants or their immediate relatives could not recall or elaborate on the above pre-specified clinical endpoints, nor the accurate dates on which events occurred. Therefore, we finally defined all-cause mortality as the present study's sole endpoint, since the vital status of each participant could be determined definitely. Moreover, most HF prognostic models have been developed for the outcome of all-cause mortality, which is considered indisputable and unbiased. [7]

Statistical analysis

Continuous variables were expressed as mean ± SD when normally distributed and otherwise as median and interquartile range. Normality of distribution for continuous variables was tested using the Kolmogorov-Smirnov test. The independent Student t test or the nonparametric Mann-Whitney U test was used to assess between-group differences as appropriate. Categorical variables were expressed as number and frequency of cases, and between-group differences were analyzed by chi-square analysis. Pearson correlation analysis was used to assess the relationships of HDL cholesterol level with high-sensitivity C-reactive protein (Hs-CRP, using a Ln transformation as LnHs-CRP), alanine aminotransferase (ALT, using a Ln transformation as LnALT), and albumin (ALB), and Spearman rank analysis was used to assess the relationship between HDL cholesterol level and cardiac function indexed by the New York Heart Association (NYHA) classification.
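For illustration, the between-group testing logic just described can be sketched as follows. This is a minimal sketch, not the authors' analysis script: the data are simulated placeholders, and a Kolmogorov-Smirnov check on standardized values stands in for the normality test.

```python
# Sketch of the between-group comparison logic: test normality with
# Kolmogorov-Smirnov, then choose Student's t test or Mann-Whitney U.
# `low_hdl` and `high_hdl` are simulated placeholder arrays for a continuous
# variable (e.g., ALB) in the two HDL cholesterol groups; sizes are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
low_hdl = rng.normal(32.5, 4.7, size=160)
high_hdl = rng.normal(34.9, 4.5, size=163)

def compare_groups(a, b, alpha=0.05):
    normal = all(
        stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b)
    )
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_groups(low_hdl, high_hdl)
print(f"{test}: P = {p:.4f}")
```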
Univariate regression analysis was used to assess the associations between factors and all-cause mortality, and a forward stepwise strategy of multivariate regression analysis was used with an entry level of P < 0.20 in univariate regression analysis. In brief, other factors known to have a significant impact on HF survival, such as age, diabetes, beta-blocker, angiotensin-converting enzyme inhibitor/angiotensin receptor blocker (ACEI/ARB), spironolactone, heart rate (HR), pulmonary artery pressure (PAP), and LVEF, were also included in the different models' analyses as appropriate. Odds ratios (OR) and their associated 95% confidence intervals (CI) are presented. All statistical analyses were conducted in SPSS 19.0 (SPSS Inc., Chicago, IL).

Comparisons by categories of HDL cholesterol level

According to the criteria of the National Cholesterol Education Program (NCEP) Adult Treatment Panel (ATP) III, [8] the cutoff value of low HDL cholesterol level is 1.03 mmol/L, and therefore studied participants were divided into ≤1.03 mmol/L and >1.03 mmol/L groups. As shown in Table 1, there were substantial differences in clinical characteristics between these two groups. Compared with participants in the low HDL cholesterol group, participants in the high HDL cholesterol group were older and had a lower frequency of severe HF (P < 0.05). Interestingly, apart from a significantly higher HDL cholesterol level (1.23 ± 0.24 mmol/L vs 0.79 ± 0.14 mmol/L, P < 0.001), other lipid components including TC, LDL cholesterol, ApoA1, and ApoB100 were also significantly higher in the high HDL cholesterol group (P < 0.05 for all comparisons). Between-group differences in ALT, ALB, and Hs-CRP were also observed (P < 0.05 for all comparisons). No significant differences were observed in the remaining variables. Echocardiographic examination revealed that LVEF in both groups was comparable (28.8 ± 4.5% vs 28.4 ± 4.6%, P = 0.358), and both groups also had comparable degrees of mitral regurgitation (MR), aortic regurgitation (AR), and tricuspid regurgitation. (Table 1: Comparisons by categories of HDL cholesterol levels.)

Relationship between HDL cholesterol level and parameters of interest

Relationships between HDL cholesterol level and parameters of interest were analyzed. As presented in Fig. 1, HDL cholesterol level was positively correlated with ALB level (r = 0.248, P < 0.001), whereas it was inversely correlated with Ln-ALT and Ln-Hs-CRP, with correlation coefficients of -0.146 (P = 0.013) and -0.207 (P = 0.003), respectively. Spearman rank correlation showed that HDL cholesterol level was inversely correlated with NYHA classification, with a correlation coefficient of -0.161 (P = 0.004).

Logistic regression models

Univariate regression analysis. As presented in Table 2, univariate regression analysis showed that a host of factors (including hypertension, higher NYHA classification, statin usage, higher serum levels of creatinine, ALT, NT-proBNP and Hs-CRP, lower serum levels of hemoglobin, HDL cholesterol, and ALB, as well as more severe MR) were significantly associated with all-cause mortality in patients with EFrHF complicating CHD (P < 0.20).

Multivariate regression analysis. As presented in Table 3, multivariate regression analysis of the effects of HDL cholesterol on all-cause mortality revealed that, after extensive adjustment for potential confounding variates, HDL cholesterol level remained significantly associated with all-cause mortality from Model 1 through Model 6, although the magnitude of association was gradually weakened.
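A rough sketch of this two-stage modeling strategy (univariate screening at P < 0.20, then a multivariable logistic model reporting ORs with 95% CIs) is given below. The DataFrame `df`, its column names, and the candidate list are hypothetical, and simple univariate screening stands in for SPSS's full forward stepwise procedure.

```python
# Sketch of the modeling strategy: screen candidates univariately at P < 0.20,
# then fit a multivariable logistic model for all-cause mortality ('death').
# `df` and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

candidates = ["hdl", "age", "diabetes", "nyha", "alb", "hs_crp", "lvef"]

def univariate_p(df, var):
    model = sm.Logit(df["death"], sm.add_constant(df[[var]])).fit(disp=0)
    return model.pvalues[var]

def fit_mortality_model(df):
    selected = [v for v in candidates if univariate_p(df, v) < 0.20]
    model = sm.Logit(df["death"], sm.add_constant(df[selected])).fit(disp=0)
    ci = model.conf_int()
    # Exponentiate coefficients to report odds ratios with 95% CIs
    return pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "P": model.pvalues,
    })
```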
Discussion

There are a substantial number of factors associated with the prognosis of HF, and among them lipid profiles are the most attractive and intensively investigated. Previous clinical studies largely focused on the associations between TC level and mortality, [7,9,10] and the associations between HDL cholesterol level and mortality were less well studied. Our present study provides the first information about the association between HDL cholesterol level and all-cause mortality in patients with EFrHF complicating CHD. The results indicate that a higher HDL cholesterol level is associated with a better survival outcome. In addition, HDL cholesterol level is positively correlated with ALB level, an important nutritional index, whereas it is inversely correlated with Hs-CRP, ALT, and NYHA classification. It has been reported that HF is a chronic inflammatory state and most HF patients have elevated Hs-CRP levels, [11] suggesting that the survival benefits conferred by HDL cholesterol in HF patients might act via its anti-inflammatory property; the inverse correlation between HDL cholesterol level and cardiac function indicates that patients with a low HDL cholesterol level might be prone to developing more severe HF.

Dyslipidemia is a major risk factor for atherothrombotic diseases such as CHD, and reducing TC and LDL cholesterol levels by statin therapy has been demonstrated to be independently associated with favorable outcomes in CHD secondary prevention. [12] Nonetheless, the efficacy of statin therapy in improving outcomes in HF patients is controversial, and two large randomized controlled trials have shown that rosuvastatin therapy did not confer a survival benefit in HF patients despite reductions in TC and LDL cholesterol levels. [13,14]

Biologically and physiologically, cholesterol is an essential component of cellular membranes and various hormones. Furthermore, cholesterol is a major energy resource, particularly in conditions like cachectic HF. In recent decades, many clinical studies have revealed that a low TC level is independently associated with poor prognosis in HF, and the underlying mechanisms are multifactorial. Indeed, most HF patients require more energy generation because of tachycardia and dyspnea, which lead them to a cachectic condition. Furthermore, HF patients commonly suffer intestinal edema, which adversely affects the absorption of nutrients such as glucose and protein. These pathophysiological alterations in HF lead patients not only to a metabolically demanding condition but also to a malnutrition status, such as the reduced ALB level in our studied patients (33.8 ± 4.7 g/L). Moreover, insufficiency of circulating cholesterol-rich lipoproteins results in endotoxin elevation leading to excessive inflammation, [15,16] such as the increased Hs-CRP level in our studied patients (median level: 9.9 mg/L). Therefore, it is conceivable that a higher cholesterol level is beneficial for HF patients.

Fewer studies have investigated the associations between HDL cholesterol level and the prognosis of HF, and the results of previous studies are inconsistent. For example, Sakatani et al [6] reported that in HF patients with mixed etiologies, HDL cholesterol level in survivors was significantly higher than in nonsurvivors, and no differences were observed in other lipid components. Nevertheless, Rauchhaus et al [7] reported that TC level in survivors was significantly higher than in nonsurvivors, and no significant difference in HDL cholesterol level between survivors and nonsurvivors was observed.
Horwich et al [10] also observed that in HF patients, 48% of whom had an ischemic etiology, both TC and LDL cholesterol levels, but not HDL cholesterol level, were significantly higher in survivors as compared with nonsurvivors. Different from these studies, we enrolled HF patients with a solely ischemic etiology, and our results indicated that, compared with nonsurvivors, survivors appeared to have higher HDL cholesterol and ApoA1 levels, and no differences in other lipid components like TC and LDL cholesterol levels between survivors and nonsurvivors were observed. Conceivably, differences in study design, clinical characteristics, and HF etiology of studied patients might account for these discrepancies.

(Abbreviations, Table 2: ACEI/ARB = angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, ALB = albumin, CHD = coronary heart disease, Cr = creatinine, Hb = hemoglobin, HDL = high-density lipoprotein, HR = heart rate, LDL = low-density lipoprotein, LnALT = alanine aminotransferase, LnHs-CRP = high-sensitivity C-reactive protein, LnHs-TnT = high-sensitivity troponin T, LnNT-proBNP = N-terminal pro B type natriuretic protein, LVEF = left ventricular ejection fraction, MR = mitral regurgitation, Na = sodium, PAP = pulmonary artery pressure, SBP = systolic blood pressure, TC = total cholesterol.)

Compared with the low HDL cholesterol group, the mortality rate in the high HDL cholesterol group was significantly lower, and after extensive adjustment for potential confounding factors including clinical characteristics, laboratory parameters, number and management of coronary artery stenoses, echocardiographic indexes, and medications at discharge, a higher HDL cholesterol level remained independently associated with lower odds of all-cause mortality in patients with EFrHF complicating CHD. The mechanisms underlying the survival benefits of a higher HDL cholesterol level are multifactorial. On the one hand, patients with a higher HDL cholesterol level might be in a better nutritional status, as reflected by higher TC, LDL cholesterol, ApoA1, ApoB100, and ALB levels. [17] On the other hand, HDL cholesterol has an anti-inflammatory property, and a higher HDL cholesterol level is beneficial for ameliorating systemic inflammation, as reflected by a lower Hs-CRP level. [18] Furthermore, HDL cholesterol has other cardio-protective properties, like reverse cholesterol transport, and theoretically these effects may also be associated with survival benefits. The liver is a major organ synthesizing cholesterol, and therefore we evaluated the relationship between HDL cholesterol level and ALT level, a sensitive and specific marker of liver function. We observed that HDL cholesterol level was inversely correlated with ALT level. With respect to the close relation between liver function and cholesterol synthesis, it is conceivable that impaired liver function might also play a role in the prognosis of HF via the mechanism of reduced cholesterol generation.

There are strengths and limitations of our present study. Different from previous studies, we solely enrolled HF patients with an ischemic etiology, which may avoid inconsistency in patients' clinical characteristics. Moreover, this was the first study to evaluate the effects of HDL cholesterol level on all-cause mortality of HF in Chinese patients. However, since this was a retrospective study, we could not infer a causal relationship between HDL cholesterol level and all-cause mortality.
Second, we did not obtain data on long-term medication usage to adjust for the potential effect of medication treatment on the outcome. Third, we did not adjust the association between HDL cholesterol and the study outcome for body mass index or body weight because of lacking data on these two parameters. Both body mass index and body weight are closely correlated with lipid profiles including HDL cholesterol; therefore, it is possible that the association between HDL cholesterol and the study outcome might be over- or underestimated, which deserves further evaluation in a prospective study. Finally, we had to alter our study design because of lacking data on pre-specified clinical endpoints and the accurate dates on which events occurred.

Conclusion

Studies have shown a close relationship between total cholesterol level and the prognosis of HF; however, the association between HDL cholesterol level and all-cause mortality of HF complicated by CHD is less well studied. Our present study reveals that a higher HDL cholesterol level is associated with a better survival outcome in these populations of patients. Understanding the association between HDL cholesterol level and the prognosis of HF could provide a new therapeutic target for improving clinical outcomes in HF patients.
2018-04-03T04:13:02.659Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "ea1ee52b5a4eaab209f52796d7bd7252ce1ad62d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000003974", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ea1ee52b5a4eaab209f52796d7bd7252ce1ad62d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226217543
pes2o/s2orc
v3-fos-license
Iodine-125 radioactive particles antagonize hyperprogressive disease following immunotherapy

Abstract

Rationale: Increasing evidence has shown that immune checkpoint inhibitors are associated with hyperprogressive disease (HPD). HPD usually results in dramatically reduced survival duration, which limits the opportunity to administer other therapies. Patient concerns: A heavily pretreated lung adenocarcinoma patient experienced rapid progression of a rib metastasis soon after immune checkpoint inhibitor-based combination therapy. Diagnoses: On the basis of radiographic and pathological findings, the patient was diagnosed with HPD. Interventions: We treated the patient with iodine-125 radioactive particle implantation to the metastatic lesions in the chest wall. Outcomes: The metastatic lesions shrank significantly 1 month later. Lessons: Early detection and adequate treatment are essential for prolonged survival when HPD occurs.

Introduction

Currently, immune checkpoint inhibitors (ICIs) have evolved into standard treatment modalities in advanced non-small cell lung cancer. They have also shown clear survival benefits as single-agent or combination therapy when compared with standard chemotherapy in treatment-naive or previously treated patients. [1-7] However, increasing evidence has shown that these new immunotherapy drugs are associated with some novel tumor response patterns, such as delayed responses, pseudoprogressions, and hyperprogressive disease (HPD). [8,9] Although the definition and incidence of HPD varied across studies, it has consistently been associated with a dismal prognosis. [10-13] However, the management of HPD has not been specifically addressed. Here, we present a heavily pretreated lung cancer patient who experienced HPD during ICI therapy and was successfully treated with iodine-125 (125I) radioactive particle implantation.

Case presentation

In August 2017, a 47-year-old nonsmoking Chinese man was referred to our hospital with a hard, immovable, and non-tender mass in the right supraclavicular fossa, approximately 2 × 2 cm in size. The patient had an Eastern Cooperative Oncology Group performance status score of 0. He reported no systemic disease. A contrast-enhanced total-body computed tomography (CT) scan (head to pelvis) revealed a mass and obstructive pneumonia in the upper lobe of the right lung. Along with this, lymphadenopathy (short axis > 15 mm) in the right upper mediastinum, supraclavicular fossa, and posterior cervical triangle, and an osteolytic lesion in the right fifth rib, were also seen. CT-guided biopsy of the lung mass revealed poorly differentiated adenocarcinoma. The adenocarcinoma cells were positive for CKpan, CK7, CD56 (focal), and CK5 (focal), but negative for TTF-1, CgA, Syn, P63, and P40. The positive expression rate of Ki67 was 80% to 90%. Genomic analysis revealed no sensitizing mutations in the epidermal growth factor receptor gene or in the anaplastic lymphoma kinase gene. Starting in September 2017, he received 3 lines of systemic chemotherapy before ICI treatment (paclitaxel and carboplatin plus bevacizumab for 6 cycles, followed by 4 cycles of bevacizumab monotherapy, with a partial response and a progression-free survival of 6 months; pemetrexed and cisplatin for 6 cycles, with a partial response and a progression-free survival of 4 months; and anlotinib for 157 days with subsequent progressive disease). In addition, he also received brachytherapy with 125I radioactive particle implantation in the primary lung mass.
Tumor response was evaluated according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. In March 2019, a contrast-enhanced total-body CT scan (head to pelvis) demonstrated a partial response of the primary lung mass after 125I radioactive particle implantation, stable disease of the right fifth rib metastasis, and progressive disease of the axillary lymph node metastases. Given the patient's performance status (Eastern Cooperative Oncology Group performance status 1), palliative treatment with docetaxel and toripalimab, the first domestic recombinant, humanized programmed death receptor-1 (PD-1) monoclonal antibody, approved for use in refractory metastatic melanoma in China on December 17, 2018, was planned. Docetaxel and toripalimab were administered at 75 mg/m² and 240 mg, respectively, every 3 weeks. After administration of the first dose, the patient began to experience a gradual worsening of his persistent right-sided chest pain. Two weeks later, a physical examination revealed a palpable right-sided chest wall mass. An enhanced CT scan showed progression of the rib metastasis. The metastatic tumor cells widely infiltrated the thoracic wall, invading the subcutaneous tissue (Fig. 1, 1st evaluation). Biopsy of the lesion revealed a poorly differentiated adenocarcinoma. Next-generation sequencing showed no targetable oncogenic alterations. Immunohistochemical analysis of programmed death-ligand 1 expression using the murine 22C-3 antibody revealed a tumor proportion score (TPS) of 0%. The primary lung tumor and metastatic lymph nodes remained stable. Immediately, the patient underwent CT-guided 125I radioactive particle implantation for the treatment of chest pain. Thereafter, the pain gradually subsided over the following weeks. One month later, a CT scan confirmed that the metastatic lesions in the chest wall had shrunk significantly (Fig. 1, after brachytherapy). The patient then received systemic treatment with nanoparticle albumin-bound paclitaxel and gemcitabine every 4 weeks for 6 cycles. The disease has remained stable for more than 7 months to date.

In the present case, a sudden worsening of chest pain and a palpable chest wall mass, initially suspected to be a hematoma due to a pathological rib fracture, or pseudoprogression, were observed. However, the subsequent biopsy findings of poorly differentiated adenocarcinoma confirmed the diagnosis of hyperprogression, presenting as extensive tumor infiltration of the thoracic wall. The current HPD criteria are insufficient to perfectly evaluate the growth speed of non-measurable lesions. However, in the present case, the tumor growth speed accelerated suddenly and greatly after PD-1 inhibitor initiation, as indicated by the serial CT scans performed 6 weeks before baseline, at baseline, and 2 weeks later (Fig. 1). Previous studies consistently suggested that HPD always resulted in dramatically reduced survival duration and limited the opportunity to administer other therapies. [9-16] Fortunately, our patient was diagnosed early via biopsy and treated in a timely manner using CT-guided 125I radioactive particle implantation, which was essential for symptom relief and prolonged survival. In addition, the HPD in our case presented as oligoprogression, which was crucial in maximizing the benefit from salvage local therapy. The docetaxel used with the PD-1 inhibitor in our case did not thwart HPD. This reminds us that a close follow-up schedule should be established for patients receiving ICI-based combination therapy.
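For context, one common quantitative way the HPD literature operationalizes 'growth speed' is the tumor growth rate (TGR), comparing the pre- and post-immunotherapy periods and flagging HPD when the post/pre ratio is at least 2. The sketch below follows that general formulation from the TGR literature, not this case report; the diameters, intervals and threshold are hypothetical assumptions for illustration.

```python
# Illustrative tumor growth rate (TGR) calculation in the style used by some
# HPD definitions: assume exponential growth of tumor volume, with volume
# proportional to diameter**3, and express TGR as %/month. All inputs are
# hypothetical and not taken from this case report.
import math

def tgr_percent_per_month(d0_mm, dt_mm, months):
    tg = 3.0 * math.log(dt_mm / d0_mm) / months   # volume growth constant
    return 100.0 * (math.exp(tg) - 1.0)

pre = tgr_percent_per_month(d0_mm=30, dt_mm=33, months=1.5)   # before ICI
post = tgr_percent_per_month(d0_mm=33, dt_mm=48, months=0.5)  # after ICI
print(f"TGR pre = {pre:.0f}%/month, post = {post:.0f}%/month")
print(f"HPD by TGR-ratio criterion: {post / pre >= 2}")
```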
In this report, we described a case of lung cancer showing rapid progression of a rib metastasis soon after ICI-based combination therapy. To the best of our knowledge, this is the first case in which HPD was successfully treated with 125I radioactive particle implantation. Early detection and adequate treatment are essential for prolonged survival. Further studies are needed to elucidate the molecular and immunological bases of HPD to improve the management of patients receiving ICI therapy.

Consent

Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
2020-10-29T09:02:39.073Z
2020-10-30T00:00:00.000
{ "year": 2020, "sha1": "04a5ab39c92395915e92453bdc08778aab04e245", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000022933", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b2f1b939794f575427ec9d45eaab9aa16f9a300a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203620823
pes2o/s2orc
v3-fos-license
A New Regulatory Mechanism Between P53 And YAP Crosstalk By SIRT1 Mediated Deacetylation To Regulate Cell Cycle And Apoptosis In A549 Cell Lines

Background

Yes-associated protein (YAP) is downstream of the Hippo signaling pathway, which regulates several cellular processes. P53 is a key transcriptional regulator that responds to a variety of cellular stresses and regulates key cellular processes such as DNA repair, cell-cycle progression, angiogenesis, and apoptosis. Overexpression of YAP antagonizes P53 activity and targets its expression. However, the mechanism that underlies the post-transcriptional crosstalk between P53 and YAP has not been well dissected.

Methods

We performed an integrated analysis and found that SIRT1 is a key candidate that connects YAP and P53 by modulating their acetylation.

Results

We found that YAP promotes P53 deacetylation and promotes cell survival by inhibiting P53-induced G0/G1 arrest and apoptosis in A549 cells. Conversely, P53 enhances YAP acetylation and decreases A549 cell survival by strengthening YAP acetylation-induced G0/G1 arrest and apoptosis both in vitro and in vivo.

Conclusion

Our results demonstrate that SIRT1 is responsible for the deacetylation of specific residues of YAP and P53, and reveal for the first time a new regulatory mechanism of P53 and YAP crosstalk by SIRT1-mediated deacetylation, which may be involved in lung tumorigenesis.

Introduction

Yes-associated protein (YAP) is a downstream effector molecule of a newly emerging pathway called the Hippo pathway. 1 YAP and TAZ, two closely related transcription co-activators, are regulated by Hippo kinases and adaptor proteins. 2 The Hippo pathway is evolutionarily conserved and a central regulator of organ size and tissue homeostasis. It responds to a variety of extracellular and intracellular signals, reading a broad range of mechanical cues, from shear stress to cell shape and extracellular matrix rigidity, which it then translates into cell-specific transcriptional programs. 3 It is heavily involved in the control of cell proliferation, organ size and shape during development, stem cell maintenance, metastasis, tissue regeneration, apoptosis, senescence, and differentiation. 4 It is also regulated by other factors, such as cell density and polarity, metabolism and DNA damage. [5-7] Hippo crosstalks with other signaling players, such as JAK-STAT3 and Fat signaling, so that it resembles a network rather than a linear pathway. 8,9

P53 protein is a well-known tumor suppressor that regulates cellular homeostasis, as well as several signaling pathways involved in the cell's response to stress. 10 Moreover, it becomes activated through several post-translational modifications, such as phosphorylation, sumoylation, acetylation and prolyl-isomerization. 11 P53 is a modular protein harboring four functionally distinct domains: the N-terminal transactivation domain, which is essential for binding to transcription factors and regulators of P53 activity; 12 the core DNA-binding domain (DBD), which allows binding to DNA; the oligomerization domain (OLD), which is relevant for the tetramerization of P53; and the C-terminal domain or regulatory domain (RD), which is involved in post-translational modifications (phosphorylation, acetylation, ubiquitination and sumoylation). 12 Recent studies have shown that the P53 and Hippo pathways are physically and functionally connected.
They have been shown to modulate common transcriptional programs and pathways that preserve cellular and tissue homeostasis in healthy conditions. ChIP assay results indicate that YAP binds directly to the p53 promoter to enhance its expression, which results in P53-dependent cell-cycle arrest and apoptosis. 13 Nuclear YAP induces p21, Bax and Caspase 3 expression and inhibits the anti-apoptotic factors Bcl-2 and Bcl-xL. 13 Besides this transcriptional crosstalk between P53 and YAP, recent studies have revealed another mechanism that underlies the crosstalk between P53 and YAP. Central to the Hippo pathway is a core kinase cascade of the tumor suppressors MST1/2 and LATS1/2, and the adaptor proteins SAV1 and MOB1/2. 14 These proteins form a conserved kinase cassette, "Hippo", which typically functions by phosphorylating and inactivating the transcriptional co-activators YAP and TAZ. 15 Recently, LATS2 and its paralog LATS1 have been shown to contribute to the tumor suppressive features of P53, also under basal conditions. 16 These findings suggest the complexity of the various mechanisms underlying P53 and YAP crosstalk.

SIRT1 is a protein involved in the deacetylation of key histone lysine residues, including histone H3 lysine 9 (H3K9) and histone H4 lysine 16 (H4K16), thus regulating the gene expression that governs cell fate. 17 Besides SIRT1's epigenetic role in regulating the chromatin state for the expression of specific genes, it also deacetylates many transcription factors in a NAD+-dependent manner, including P53. 18 Many key cellular events are regulated through the SIRT1-P53 interaction. 19 Elevated SIRT1 deacetylates activated P53, allowing cells with damaged DNA to proliferate and thus promoting tumor development. 19 One recent study showed that SIRT1 deacetylates the YAP2 protein in hepatocellular carcinoma (HCC) cells, and SIRT1-mediated deacetylation increases the YAP2/TEAD4 association. This leads to YAP2/TEAD4 transcriptional activation and upregulated cell growth in HCC cells. 20 Although an acetylation/deacetylation cycle of nuclear YAP exists downstream of the Hippo signaling pathway, 21 the mechanism that underlies this post-translational crosstalk between P53 and YAP, and the role of this P53-SIRT1-YAP axis in the control of cell cycle transition and apoptosis, is still unknown.

We utilized A549 cell lines and examined the importance of SIRT1's involvement in the post-translational crosstalk between P53 and YAP. We then dissected the feedback loop between these two signaling pathways in maintaining cell cycle arrest and apoptosis. Our study clarifies the effect of SIRT1-induced deacetylation of P53 and YAP on cell growth and identifies the mechanisms responsible for these effects in A549 cells, while shedding new light on the post-translational interaction between P53 and YAP, which may be involved in lung tumorigenesis.

Cell Culture And Reagents

The human A549 cells were purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The cells were maintained at 37°C with 5% CO2 in a humidified atmosphere and grown in Dulbecco's modified Eagle's medium (HyClone Laboratories; GE Healthcare, Logan, UT, USA) supplemented with 10% fetal bovine serum (FBS; Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA). Methyl methane sulfonate (MMS) and 2,6-diisopropylaniline (DIPA) were purchased from Sigma.
RNA Interference
A549 cells were transfected with siRNA using Lipofectamine 2000 reagent (Invitrogen, CA, USA) according to the manufacturer's instructions. siRNAs specific for P53, YAP and SIRT1 were purchased from Santa Cruz (Santa Cruz Biotechnology, CA, USA; sc-29435, sc-38637 and sc-40986). The control siRNA for P53, YAP and SIRT1 was also purchased from Santa Cruz (sc-37007). GAPDH was employed as the internal control. The expression of candidate genes was measured by SYBR Green (Takara Biotechnology Co., Ltd., Dalian, China), and real-time PCR assays were performed using an ABI-7300 (Applied Biosystems, Shanghai, China). The relative gene expression was calculated by the 2^(−ΔΔCt) method (a worked numerical sketch is given below).

Cell Cycle Progression Assay
Cell cycle analysis was performed by flow cytometry. A549 cells were harvested, fixed, treated with RNase A (50 μg/mL) and stained with propidium iodide (10 μg/mL). Cellular DNA content was analyzed using flow cytometry (FACS Canto II, BD Biosciences, USA). Approximately 10,000 cells were acquired for each analysis, and results were analyzed using ModFit LT™ software (version 2) and displayed as a histogram.

Apoptosis Assay
Cell apoptosis was detected by Annexin V-FITC/PI double-staining. Annexin V-FITC/PI staining was used for the quantitation of early and late apoptotic cells. A549 cells were stained with annexin V-FITC (0.2 mg/mL) and PI (0.05 mg/mL) for 20 mins and were examined by flow cytometry (FACS Calibur, BD Biosciences) using Cell Quest Pro software, with excitation by a 488 nm laser and emission at 530 nm. A minimum of 10,000 cells was analyzed per sample and illustrated as a dot plot using Flowing software.

Western Blotting
The total protein was extracted from cells after transfection using RIPA buffer. The extract was centrifuged for 15 mins at 4°C at 14,000 × g. The upper supernatant was then collected, and protein concentration was assessed by the bicinchoninic acid (BCA) method. The protein was then electrophoresed in SDS-polyacrylamide gels (Invitrogen, CA, USA) and transferred to polyvinylidene difluoride (PVDF) membranes (Millipore, Bedford, MA, USA). The membranes were blocked with 5% non-fat milk and incubated with the primary antibody overnight. TBST buffer was used to wash the membrane three times. The membrane was then incubated with the corresponding horseradish peroxidase (HRP)-conjugated secondary antibody for 1 hr. Images were captured with a ChemiDoc XRS system.

Immunofluorescence
A549 cells were cultured on a glass coverslip and fixed with 4% paraformaldehyde in PBS at room temperature. After treatment with 0.2% Triton X-100 in PBS, the cells were incubated with blocking solution (5% bovine serum albumin in TBS) before incubation with primary Abs for 1 hr at room temperature. Cells were washed with PBS and incubated for 1 hr with Alexa 488- or 546-conjugated secondary Abs. After PBS washes, coverslips were mounted and viewed on a Carl Zeiss confocal microscope equipped with LSM510 software.

Ingenuity Pathway Analysis
Protein-protein interaction networks were built by Core Analysis and Network Analysis of the online IPA® software package (Version 8.7; https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis). The network size parameters were set in order to optimize visualization and analyze the biologically relevant background. Network analysis can provide a quick way to assess the data of interest within regulatory networks. Using IPA Core Analysis and Network Analysis, we built a P53 Acetylation network, a YAP Acetylation network, a P53 Signaling pathway and a YAP Signaling pathway. We used the Cytoscape software to visualize the interactions of these networks. 22
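To make the relative-expression calculation in the RNA Interference section above concrete, the following is a minimal Python sketch of the 2^(−ΔΔCt) method. All Ct values are hypothetical and for illustration only; GAPDH is the internal reference, as in the text.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation used in the
# RNA-interference experiments above. The Ct values below are hypothetical;
# GAPDH serves as the internal reference, as stated in the text.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of a target gene vs. control by the 2^(-ddCt) method."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: SIRT1-knockdown sample vs. control siRNA (hypothetical Ct values)
fold = relative_expression(ct_target=26.4, ct_ref=18.1,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(f"SIRT1 relative expression: {fold:.2f}-fold vs. control")
```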
In Vivo Tumorigenesis In Nude Mice
Animal experiments were approved by the Ethical Committee of Animal Research at Chinese PLA General Hospital. The experimental protocol was established according to the associated national guidelines from the Ministry of Science and Technology of China. The effects of YAP on in vivo tumorigenic ability were investigated by a tumor xenograft experiment. A total of 1 × 10^6 A549 cells with different treatments (WT, YAP or P53 overexpression (OE), and YAP or P53 knockdown (KD)) in 0.2 mL RPMI 1640 medium were subcutaneously injected into the dorsal flanks of 4–6-week-old male BALB/c nu/nu mice. The mice were maintained in a barrier facility on HEPA-filtered racks and fed an autoclaved laboratory rodent diet. Each experimental group contained 10 mice. Tumor size was monitored with a calliper during tumor growth and measured every 3 days. After 5 weeks, mice were killed and tumors were excised and weighed. Tumor volumes were calculated as follows: volume = (D × d²)/2, where D is the longest diameter and d is the shortest diameter (a short calculation sketch follows this Methods passage).

Statistical Analysis
All the quantitative data are presented as means ± standard deviation (SD). The statistical significance levels for all tests were set as *P < 0.05, **P < 0.01 and ***P < 0.001. For multiple group comparisons, ANOVA with post hoc Dunnett's test was used. Student's t test was used to perform comparisons between two groups. All analyses were performed using GraphPad Prism 5 software (GraphPad Software, Inc., La Jolla, CA, USA). All experiments were repeated three times.
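As a small illustration of the xenograft measurements described above, the following Python sketch applies the stated tumor-volume formula, volume = (D × d²)/2, to a series of caliper readings. The readings themselves are hypothetical.

```python
# Minimal sketch of the xenograft tumor-volume calculation described above,
# volume = (D * d^2) / 2, with D the longest and d the shortest diameter.
# The measurements below are hypothetical caliper readings in millimetres.
def tumor_volume(d_long_mm: float, d_short_mm: float) -> float:
    """Tumor volume in mm^3 from two orthogonal caliper diameters."""
    return d_long_mm * d_short_mm ** 2 / 2.0

# Example growth series measured every 3 days (hypothetical values)
readings = [(6.0, 4.5), (8.2, 6.1), (11.0, 8.4)]
for day, (D, d) in zip(range(0, 9, 3), readings):
    print(f"day {day}: {tumor_volume(D, d):.1f} mm^3")
```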
Integrated Analysis Identified Crosstalk Between The Acetylation Of YAP And P53
To dissect the crosstalk between the acetylation of YAP and P53 signaling, we performed an integrated analysis on the basis of Ingenuity Pathway Analysis (IPA) and content-analysis-based literature reviews, and constructed the "P53 Acetylation network" (based on 289 published papers retrieved from a search using the key word "P53 Acetylation"), the "YAP Acetylation network" (based on 22 papers retrieved from a search for "YAP Acetylation"), the "P53 Signaling pathway" and the "YAP Signaling pathway" (based on IPA) ( Figure 1A-C). In the P53 Acetylation network, the histone acetyltransferases P300 and CBP can acetylate P53 and increase its activation. In contrast, several histone deacetylase (HDAC) family members, including SIRT1, HDAC1, HDAC2, HDAC3, HDAC6, and HDAC8, can deacetylate P53 and inhibit its activation. 21 Interestingly, P53 can also suppress SIRT1 via HIC1 and Myc, forming a negative feedback loop ( Figure 1A). In the YAP Acetylation network, the histone acetyltransferases P300 and CBP can acetylate YAP. On the contrary, several HDAC family members, including SIRT1 and HDAC1, can deacetylate YAP and inhibit its activation 23 ( Figure 1B). Interestingly, YAP can also enhance SIRT1 via Myc, forming a positive feedback loop. Together, these results suggest that the common acetylases and deacetylases between P53 and YAP may form a steady-state network, which is maintained primarily by the crosstalk between P53 and YAP ( Figure 1C).

YAP Promotes P53 Deacetylation
To investigate the regulation of P53 acetylation in response to DNA damage, we treated A549 cells with the SN2 alkylating agent methyl methane sulfonate (MMS) and used an anti-P53 Ab to immunoprecipitate endogenous P53 from lysates of MMS-treated A549 cells; an IgG antibody served as the control. We performed immunoblotting using an anti-pan-AcK Ab, which specifically recognizes acetylated lysine residues. Other DNA damage agents, such as H2O2, cisplatin, N-methyl-N'-nitro-N-nitrosoguanidine, and nitrosomethylurea, did not lead to the acetylation of YAP (data not shown). The time- and dose-dependent kinetics of the P53 acetylation assays, as well as the γH2AX levels, showed that MMS treatment induced marked increases in levels of acetylated P53; treatment with 1 mM MMS for 2 hrs was the optimal condition for subsequent experiments (Figure 2A). These results show that endogenous P53 is acetylated in response to MMS treatment. To confirm the crosstalk between acetylation of YAP and P53, YAP was then overexpressed in A549 cells under MMS treatment. Interestingly, overexpression of YAP induced marked decreases of acetylated P53 and γH2AX levels with MMS treatment ( Figure 2C). Conversely, when YAP was knocked down by short hairpin RNAs (shRNAs) in A549 cells, the P53 acetylation levels were enhanced both with and without MMS treatment ( Figure 2C). The expression level of YAP was also examined by Western blotting ( Figure 2B). Collectively, these findings suggest that YAP can inhibit P53 acetylation. To further determine the modulating role of YAP in P53 acetylation, we screened the acetylases and deacetylases with P53 as a substrate (Figure 1). We found that P300, CBP, and PCAF are the major acetylases for P53, and HDAC1, HDAC2, HDAC3, HDAC6, HDAC8, and SIRT1 are the major deacetylases for P53. Among these, SIRT1 is the major candidate for further analysis. SIRT1 is an important responder to MMS and is upregulated after MMS treatment. 24 It has been well established that SIRT1 can decrease P53 acetylation and transcriptional activity. 19 More importantly, it has been found that YAP induced SIRT1 activation through the pro-proliferation effector MYC. 25 Therefore, we hypothesized that the suppressive effect of YAP on P53 acetylation is dependent on SIRT1. To confirm this, we examined the regulation of SIRT1 by YAP under MMS treatment. As expected, MMS treatment induced upregulation of SIRT1, and overexpression of YAP led to a further enhancement ( Figure 2D). In contrast, YAP depletion decreased the level of SIRT1 ( Figure 2D). We also knocked down SIRT1 by siRNA and analyzed the expression level of SIRT1 by Western blotting ( Figure 2B). The results indicate that knockdown of SIRT1 can enhance the acetylation of P53, and knockdown of YAP displayed a further enhancement. In contrast, overexpression of SIRT1 can reduce the acetylation of P53 ( Figure 2E). Together, our results indicate that YAP induces deacetylation of P53 by activating SIRT1.

YAP Increases The Deacetylation Of P53 And Promotes Cell Survival
Deacetylation of P53 has a profoundly negative impact on the capacity of P53 to induce the expression of target genes involved in the cell cycle and apoptosis. We then investigated whether YAP-induced deacetylation of P53 can affect P53's function in the cell cycle and apoptosis. First, we checked the protein levels of P53 targets and downstream effectors involved in cell cycle modulation (P21 and P27) and apoptosis (BIM and PUMA) after overexpression or knockdown of YAP under MMS treatment. Consistent with the above findings, we found that MMS treatment enhanced the expression of P53 targets, and YAP inhibition induced further enhancement ( Figure 3A).
In contrast, overexpression of YAP attenuated the MMS-induced promoting effect on the expression of P53 targets ( Figure 3A). Second, we tested the role of YAP in the regulation of mRNA expression of P53 targets. Consistent with the results of protein expression, P21, P27, BIM, and PUMA were mostly upregulated under MMS treatment, and showed even more elevation after YAP depletion, while they were downregulated after the overexpression of YAP ( Figure 3B). Third, we assessed the effect of YAP-induced P53 deacetylation on cell phenotypes, especially cell cycle modulation and apoptosis. As expected, MMS treatment induced G0/G1 arrest in A549 cells. YAP deletion enhanced this effect, while YAP overexpression attenuated this effect ( Figure 3C). In addition, the apoptosis rate of A549 cells increased after MMS treatment and was further enhanced by YAP depletion, while weakened by YAP overexpression ( Figure 3D). We further evaluated the in vivo effectiveness of YAP in mice bearing tumors originating from A549 cells. As expected, YAP OE promoted tumor growth, and P53 KD further enhanced this effect ( Figure 3E and F). These results demonstrate that YAP decreases P53 transcriptional activation and promotes cell survival by inhibiting P53-induced G0/G1 arrest and apoptosis.

P53 Enhances YAP Acetylation
To investigate whether YAP underwent a qualitative change in response to MMS treatment, we performed similar time- and dose-dependent kinetics of YAP acetylation assays. The findings showed that MMS treatment also resulted in increased acetylated YAP, as well as increased γH2AX levels, and that treatment with 1 mM MMS for 2 hrs was the optimal condition for subsequent experiments ( Figure 4A). An IgG antibody again served as the control. These results show that endogenous YAP is also acetylated in response to MMS treatment, and that MMS treatment induced DNA damage. To confirm the crosstalk between acetylation of P53 and YAP, P53 was then overexpressed in A549 cells under MMS treatment. P53 overexpression significantly elevated the acetylated YAP and γH2AX levels with MMS treatment ( Figure 4C). Conversely, P53 depletion resulted in marked decreases of YAP acetylation levels in A549 cells both with and without MMS treatment ( Figure 4C). The expression level of P53 was also examined by Western blotting ( Figure 4B). Collectively, these findings suggest that P53 can enhance YAP acetylation. We also confirmed these findings by immunofluorescence assays in A549 cells. This was consistent with previous evidence showing that MMS treatment induced YAP nuclear translocation: P53 knockdown resulted in a further enhancement, while P53 overexpression decreased the YAP nuclear translocation in A549 cells both with and without MMS treatment ( Figure 4D). Those results have been demonstrated in our previous studies. 20 To understand the mechanism underlying these observations, we screened the acetylases and deacetylases with YAP as a substrate (Figure 1). It is well known that P300 and CBP are the major acetylases for YAP and SIRT1 is its major deacetylase. More importantly, previous work has shown that P53 inhibited SIRT1 activation by suppressing the pro-proliferation effector MYC, 26 and that SIRT1 regulates YAP-mediated cell proliferation and chemoresistance in hepatocellular carcinoma. 20 Therefore, we hypothesized that SIRT1 is also required for the promoting effect of P53 on YAP acetylation. We then assessed the regulation of SIRT1 by P53 under MMS treatment.
As expected, MMS treatment induced upregulation of SIRT1, and knockdown of P53 displayed a further enhancement ( Figure 4E). In contrast, P53 overexpression decreased the level of SIRT1 ( Figure 4E). Consistently, overexpression of P53 enhanced the level of YAP acetylation, and knockdown of SIRT1 displayed a further enhancement; in contrast, SIRT1 overexpression decreased the level of YAP acetylation ( Figure 4F). Taken together, these results indicate that P53 induces acetylation of YAP by inactivating SIRT1.

P53 Increases The Acetylation Of YAP And Cell Death
We then investigated whether P53-induced acetylation of YAP can affect YAP function in the cell cycle and apoptosis. First, we checked the protein levels of YAP downstream effectors involved in cell cycle modulation and apoptosis (CTGF, Cyclin E, MYC, and DIPA) after overexpression or knockdown of P53 under MMS treatment ( Figure 5A). Consistently, A549 cells with MMS treatment exhibited decreased protein expression of these YAP downstream effectors, and P53 overexpression led to further inhibition ( Figure 5A). In contrast, P53 knockdown reversed the MMS-induced suppressive effect on the expression of these YAP downstream effectors ( Figure 5A). Second, we tested the role of P53 in the regulation of mRNA expression of these YAP downstream effectors. Consistent with the results of immunoblotting, CTGF, Cyclin E, MYC, and DIPA were mostly downregulated under MMS treatment and showed greater inhibition after P53 overexpression, while they were substantially elevated after knockdown of P53 ( Figure 5B). Third, we examined the effect of P53-induced YAP acetylation on cell phenotypes. As expected, MMS treatment induced G0/G1 arrest in A549 cells, and P53 overexpression enhanced this effect, while P53 deletion attenuated this effect compared to controls ( Figure 5C). Additionally, the apoptosis rate of A549 cells increased after MMS treatment and was further enhanced by P53 overexpression, while weakened by P53 depletion ( Figure 5D). We further evaluated the in vivo effectiveness of P53 in mice bearing tumors originating from A549 cells. As expected, P53 OE inhibited tumor growth, and YAP KD further enhanced this inhibitory effect ( Figure 5E and F). These results further indicate that P53 decreases YAP transcriptional activation and inhibits cell survival by strengthening YAP acetylation-induced G0/G1 arrest and apoptosis.

SIRT1 Is Responsible For Deacetylation Of YAP And P53
SIRT1 is a class III deacetylase. Overexpression of SIRT1 could enhance the level of YAP and decrease the level of P53, while knockdown of SIRT1 displayed the opposite result ( Figure 6A). This indicated that SIRT1 is responsible for YAP and P53 expression. We also confirmed these findings by immunofluorescence assays in A549 cells, showing that MMS treatment induced YAP nuclear translocation and SIRT1 overexpression resulted in further enhancement, while SIRT1 knockdown decreased the YAP nuclear translocation in A549 cells both with and without MMS treatment ( Figure 6D). P53 was the first non-histone protein found to be acetylated; its acetylation competes with ubiquitination, sumoylation, and methylation in modifying the same lysine residues. When P53 is deubiquitinated by HAUSP, P53 can be acetylated at four well-known lysines (K120, K370, K373, and K382) by CBP and P300, leading to the transactivation of various P53 transcriptional targets. 27,28 Thus, we investigated whether SIRT1 deacetylates P53 at these lysines.
The total P53 protein was first immunoprecipitated and then immunoblotted with acetylated lysine antibodies recognizing acetyl-P53(K382), acetyl-P53(K373), acetyl-P53(K370), and acetyl-P53(K120). After MMS treatment, there were significant increases of acetylation at the K370, K373, and K382 lysines in A549 cells, whereas no significant change was observed at K120 ( Figure 6B). These results provided evidence that MMS is involved in promoting acetylation of specific residues in P53. We further evaluated the impact of SIRT1 on the acetylation of these residues in P53 and found that overexpression of SIRT1 inhibited the acetylation at all four residues ( Figure 6B). Conversely, SIRT1 depletion elevated acetylation at the K370, K373, and K382 residues ( Figure 6B), whereas no significant changes were observed at K120. These observations demonstrate that SIRT1 is involved with P53 acetylation of specific residues, including K370, K373, and K382. The crosstalk between YAP and P53 also encouraged us to dissect the involvement of SIRT1 in YAP acetylation. The nuclear acetyltransferases CBP and P300 are responsible for YAP acetylation, which occurs on the specific and highly conserved C-terminal lysine residues K494 and K497. Thus, we hypothesized that SIRT1 is also involved in YAP acetylation at these two residues. To test this, the total YAP protein was first immunoprecipitated and then immunoblotted with acetylated lysine antibodies that recognize acetyl-YAP(K494) and acetyl-YAP(K497). Consistent with a previous finding that MMS-induced DNA damage causes acetylation of K494 and K497 of YAP, 29 we also observed increased K494 and K497 acetylation in YAP under MMS treatment. When SIRT1 was overexpressed in A549 cells, the K494 acetylation was significantly inhibited, whereas the K497 acetylation exhibited no changes ( Figure 6C). In contrast, knockdown of SIRT1 substantially enhanced K494 acetylation in A549 cells both with and without MMS treatment. These observations indicate that SIRT1 is involved in YAP acetylation of specific residues, especially K494.

Discussion
We have demonstrated previously undescribed post-translational crosstalk between the tumor suppressor protein P53 and YAP. This regulatory mechanism provides a molecular explanation for how the cell can integrate the divergent functional consequences of P53 and YAP activation. Using bioinformatics analyses, we found putative crosstalk between acetylation of YAP and P53. Through numerous biochemical experiments, we showed that YAP enhanced P53 deacetylation and inhibited P53 transcriptional activation, which prevented cell G0/G1 arrest and apoptosis. In contrast, the acetylation of YAP was promoted by P53, which led to decreased YAP transcriptional activation and promoting effects on cell G0/G1 arrest and apoptosis. Of note, the deacetylase SIRT1 was shown to be responsible for this negative feedback between the P53 and YAP signaling pathways. Furthermore, we showed that SIRT1 was involved with the P53 acetylation of specific residues (K370, K373, and K382), and the YAP acetylation of the K494 residue. Together, these results demonstrate competition between P53 and YAP for limiting quantities of SIRT1 and provide a new paradigm of crosstalk between P53 and YAP which may be involved in lung tumorigenesis. The implications are numerous. It is well known that both the Hippo pathway and P53 act as tumor suppressors to induce senescence and apoptosis.
The tumor-suppressive action of the Hippo pathway is mediated by its canonical function of inhibiting YAP and TAZ oncogenic activation, while P53 functions as a tumor suppressor in response to stress conditions. 30 The cooperation between the wild-type P53 protein and the Hippo components, including YAP, can induce cell cycle arrest and apoptosis, contravening tumor transformation and progression. It was recently revealed that P53 and YAP share a common transcriptional program showing a significant overlap with gene signatures primarily involved in cell cycle regulation. 31 Our results suggest that one factor influencing cell survival is the ability of P53 to decrease YAP transactivation and thus promote the transcriptionally dependent induction of cell G0/G1 arrest and apoptosis. Consistently, YAP functions in a similar way. It appears likely that the outcome of crosstalk between P53 and YAP depends on the nature of the intrinsic function of the proteins, the stimuli, the growth conditions, and the cell type. Sequestration of SIRT1 is likely to be an increasingly common mechanism for reducing acetylation of targets, including many transcription factors such as P53, E2F1, FOXO, NF-κB, c-Myc, and YAP. 32 The interaction of SIRT1 with tumor-suppressor proteins and oncoproteins implicates its role in cancer development and progression. 33 SIRT1 deacetylates the YAP2 protein, and that deacetylation upregulates the YAP2/TEAD4 association, leading to YAP2/TEAD4 transcriptional activation and cell growth in HCC cells. 20 The key role of SIRT1 is exhibited through its specific interaction with P53 via P53 deacetylation at the C-terminal lysine-373 and -382 residues in an NAD+-dependent manner. 34 This action decreases P53-mediated transcriptional activity and reduces the expression of its targets, such as p21 (a cell cycle inhibitor) and PUMA (a modulator of apoptosis). 19 Therefore, SIRT1 can inhibit P53-dependent cell cycle arrest and apoptosis, attenuating cell death mechanisms while enhancing the DNA repair mechanism to facilitate the maintenance of genomic stability. This has a promoting effect on cell survival and proliferation. 19 Thus, both P53 and YAP are substrates of SIRT1, and function as competitors for SIRT1. This crosstalk between P53 and YAP governs the balance of cell fate decisions in normal human cells. However, the P53 coding gene is frequently mutated in human cancers (50-70% of cases), and most of these mutations are of the missense type. 35 Knock-in mice with P53 missense mutations provided evidence that some mutant P53 proteins exert pro-tumorigenic activities. 36 We propose that the P53-SIRT1-YAP axis may be perturbed by somatic mutations of P53, resulting in the transformation of a normal cell into a cancerous cell. The cell lines used in this study are adenocarcinomic human alveolar basal epithelial cells carrying wild-type P53. Thus, further work is needed to investigate the dysregulation of this newly identified regulatory axis.

Conclusion
The present study demonstrated a novel post-translational crosstalk between P53 and YAP, potentially through competition for the deacetylase SIRT1. Notably, this P53-SIRT1-YAP axis is important for cell cycle transition and apoptosis, and dysregulation of either P53 or YAP can lead to transcriptional activation by the other competitor and the corresponding cell phenotypes.
Given the high mutation rate of P53 in cancers, it is possible that novel lung cancer therapies based on reactivation of wild-type P53 function might benefit from cooperation with YAP to promote a favorable outcome.
The critical importance of mask seals on respirator performance: An analytical and simulation approach

Filtering facepiece respirators (FFRs) and medical masks are widely used to reduce the inhalation exposure of airborne particulates and biohazardous aerosols. Their protective capacity largely depends on the fraction of these that are filtered from the incoming air volume. While the performance and physics of different filter materials have been the topic of intensive study, less well understood are the effects of mask sealing. To address this, we introduce an approach to calculate the influence of face-seal leakage on filtration ratio and fit factor based on an analytical model and a finite element method (FEM) model, both of which take into account time-dependent human respiration velocities. Using these, we calculate the filtration ratio and fit factor for a range of ventilation resistance values relevant to filter materials, 500–2500 Pa∙s∙m−1, where the filtration ratio and fit factor are calculated as a function of the mask gap dimensions, with good agreement between analytical and numerical models. The results show that the filtration ratio and fit factor decrease markedly with even small increases in gap area. We also calculate particle filtration rates for N95 FFRs with various ventilation resistances and two commercial FFR exemplars. Taken together, this work underscores the critical importance of forming a tight seal around the face as a factor in mask performance, where our straightforward analytical model can be readily applied to obtain estimates of mask performance.

Introduction
Filtering facepiece respirators (FFRs) and medical masks are widely used to reduce inhalation exposure of potentially harmful airborne particles. Medical masks and FFRs (collectively referred to here as masks) have also been recommended by the World Health Organization (WHO) as infection prevention and control measures by health care workers, including for protection against respiratory diseases such as COVID-19 [1], SARS, seasonal influenza, pandemic influenza and avian influenza [2]. The material and filtration efficiency of masks have been the focus of many studies, including examination of the repeatability and comfort of masks made of melt-blown fabric and nanofibers [3,4] and studies of filtration efficiency for different mask designs and non-conventional mask materials [5][6][7][8]. However, face-seal mask leakage is also an important penetration route for aerosol particles [9][10][11][12], though far less examined. For example, mask classification testing standards [13][14][15][16] are implemented under a tight fit of the mask, and face-seal gaps are excluded as test factors in these standards. However, factors associated with repetitive use [17,18], non-compliant wearing [19], and change of body position [20] can enlarge gaps at the interface between the mask and the face (as shown in Fig 1), which will degrade the overall mask performance. For example, Chen et al. [21] conducted experiments on the penetration fraction of aerosols with different particle sizes through the face-seal gaps at a constant flow rate. Their results demonstrated that as the flow rate increases, the proportion of aerosol leaking through the face-seal gaps increases. Cho et al. [22] further studied the penetration rate of N95 masks, with experimental results indicating that the majority of particle penetration occurs through face-seal gaps. Rengasamy et al.
[23] further studied the penetration rate of particles of different sizes in the presence of face-seal gaps, which indicated that smaller particles have more inward leakage than larger particles. In addition, fugitive flow through gaps in N95 and P100 masks [24] and the influence of the contact pressure on the size of face-seal gaps [25] have also been examined. In short, while previous studies examining mask gaps have been primarily experimental, often for specific mask classifications/mask models/face-seal gap dimensions, an analytical examination would provide a more universal understanding of the importance of mask sealing for a range of mask types, materials and dimensions. In this work we examine the fundamental physics driving the reduction in filtration ratio as a function of mask gaps and filter material permeabilities. While inertial and adhesion forces will dictate particle sedimentation even in the presence of gaps, this being a complex relationship between particle size, charge and a mask's geometry, we seek to examine the airflow itself to give a conservative basis for understanding reductions in overall filtration ratio. In doing so we develop an analytical model incorporating time-dependent breathing rates representing an adult male and female, corroborated using finite element method (FEM) simulations. By having a clear understanding of the critical relationship between mask gaps and the fraction of incoming air that enters via the filter material, we can provide first-order estimates of filtration ratio across a wide range of potential mask materials and gap dimensions, the latter being a function of material compliance, geometry and user factors. Our results highlight the critical importance of forming a good seal around the mask edges, especially true for lower-permeability (higher pressure-drop) filter materials. The results of this study will enable both mask designers and manufacturers to provide conservative estimates of filtration ratio, while reinforcing the importance of good mask fit for users.

Respiration model
The primary purpose of this work is to quantify the air flow fraction that bypasses the filter material, requiring a physiologically relevant measure of human breath flow. Since the flow rate varies with time over a breath cycle, we adapt a time-dependent respiratory model to use as the velocity boundary conditions of the inlet/outlet (i.e. the mouth and nostrils). The respiratory flow rate, Q_x, is expressed as [26]

Q_x(t) = α_x sin(β_x t),    (1)

where the subscript x can be replaced by in or out, representing the inhalation and exhalation, respectively. The parameters α_x and β_x can be obtained from Eqs 2 and 3. The parameters and their expressions used to calculate α_x and β_x are given in Table 1.
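The following Python sketch illustrates the time-dependent respiration boundary condition described above, assuming the sinusoidal form of Eq 1. The α and β values are illustrative stand-ins rather than the Table 1 values, with β chosen so that one full breath lasts 4 s (half-period π/β = 2 s, consistent with the integration intervals used later in the text).

```python
import numpy as np

# Minimal sketch of the respiration model, Q_x(t) = alpha_x * sin(beta_x * t).
# alpha and beta below are illustrative assumptions, not the Table 1 values.
beta = np.pi / 2.0      # rad/s -> 2 s inhalation, 2 s exhalation
alpha = 0.5e-3          # m^3/s peak flow (illustrative, ~0.5 L/s)

t = np.linspace(0.0, 4.0, 401)
Q = alpha * np.sin(beta * t)   # >0 during inhalation, <0 during exhalation

# Tidal volume = integral of Q over the inhalation half-cycle [0, pi/beta]
inhale = t <= np.pi / beta
tidal_volume = np.trapz(Q[inhale], t[inhale])
print(f"Illustrative tidal volume: {tidal_volume * 1e3:.2f} L")
```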
Governing equations of the mask and the gap
A representative conceptual model of the mask with a gap is shown in Fig 2. The boundaries include the mask material, the gap between mask and face, the mouth, and the face. Intuitively, an input of air flow from the mouth increases the pressure differential between the interior and exterior of the mask (P_1 > P_2) and drives air flow through both the semi-permeable filter and the gap. The spatially-averaged mask and gap velocities are represented by v_mask and v_gap, respectively. Similarly, v_mouth denotes the average velocity of the air through the 'mouth' boundary. Masks are generally composed of non-woven fabric outer layers and melt-blown fabric filter layers that can be considered (on a macroscopic level) as uniform porous layers, where fluid mechanics principles can be used to analyze the behavior of fluid flow through porous media. In examining flow, the magnitude of the Reynolds number determines the appropriate mathematical formulation. In this study, v_mask < 1 m·s⁻¹ (calculated by Darcy's law of Eq 5; for commercial masks, ΔP < 350 Pa [29][30][31], and the ventilation resistance μ_air·d_mask·κ⁻¹ ≈ 1 000 Pa·s·m⁻¹ [4]), the air density ρ_air ≈ 1.29 kg·m⁻³, the magnitude of the effective length (L_eff, i.e., the mean hydraulic diameter of pores) is O(10⁻⁵) m [32], and the dynamic viscosity of the air is μ_air ≈ 1.79×10⁻⁵ kg·m⁻¹·s⁻¹. Therefore, the Reynolds number of the mask can be expressed as

Re = ρ_air v_mask L_eff / μ_air.    (4)

For Re < 4, Darcy's law can be applied to describe flow through porous material [33], with

v_mask = κ ΔP / (μ_air d_mask),    (5)

where ΔP is the pressure drop across the mask thickness, d_mask is the mask thickness and κ is the value of Darcy's permeability for a given mask material (in m²). Flow through the gap(s) can be described via Bernoulli's equation (excluding gravitation [34]), with

P_1 + ½ ρ_air v_outside² = P_2 + ½ ρ_air v_gap².    (6)

Here v_outside denotes the velocity of the air outside the mask, which is set to zero. Eq 6 therefore simplifies to

ΔP = P_1 − P_2 = ½ ρ_air v_gap².    (7)

We then combine Eqs 5 and 7 to obtain the relationship between v_mask,x and v_gap,x, with

v_mask,x = κ ρ_air v_gap,x² / (2 μ_air d_mask),    (8)

which relates the velocity through the mask to the gap air velocity as a function of the mask and air properties. The subscript x can be replaced by in or out to indicate inhalation and exhalation, respectively. The total volumetric flow rates of inspiration and expiration are equal to the volumetric flow rate through the mask and the gap, so we rewrite Eq 1 to account for these separately, with

Q_x = Q_mask,x + Q_gap,x = v_mask,x A_mask + v_gap,x A_gap,    (9)

where Q_mask,x and Q_gap,x are the flow rates through the mask and gap, respectively, and A_mask and A_gap similarly are the surface areas of the mask and gap. Substituting Eq 8 into Eq 9 yields

Q_x = [κ ρ_air A_mask / (2 μ_air d_mask)] v_gap,x² + A_gap v_gap,x.    (10)

Eq 10 is a time-dependent quadratic equation. Therefore, v_gap,x can be solved in terms of time:

v_gap,x = [μ_air d_mask / (κ ρ_air A_mask)] × [−A_gap ± √(A_gap² + 2 κ ρ_air A_mask Q_x / (μ_air d_mask))].    (11)

It is worth noting that this equation has two solutions (due to the ± symbol), where only the solution with the same sign as Q_x should be retained. The value of v_mask,x can then be solved by substituting v_gap,x into Eq 8. The time-dependent form of v_gap,x (in Eq 11) and v_mask,x (in Eq 8) is appropriate in the case of time-dependent inspiration/expiration flow rates, since this takes into account the sinusoidal nature of human breath. However, to obtain the time-independent expressions of v_mask,x and v_gap,x (i.e., with constant air flow), Q_x in Eq 11 can simply be replaced with a constant breathing flow rate (in m³·s⁻¹); a numerical sketch of this constant-flow case is given below.
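As a quick numerical illustration of Eqs 8, 10 and 11 in the constant-flow case just mentioned, the following Python sketch solves the quadratic for v_gap,x and recovers the corresponding mask velocity. The flow rate, gap ratio and mask area are illustrative assumptions consistent with the ranges quoted in the text, and the printed output depends strongly on these assumed values; only the root with the same sign as Q_x is retained, as noted above.

```python
import math

# Illustrative parameters (assumed values, consistent with ranges in the text)
rho_air = 1.29        # kg/m^3
mu_air  = 1.79e-5     # kg/(m*s)
d_mask  = 2.51e-3     # m, mask thickness
A_mask  = 0.031       # m^2, representative mask surface area (assumed)
kappa   = mu_air * d_mask / 1000.0   # so mu*d/kappa ~ 1000 Pa*s/m, as quoted above
A_gap   = 0.015 * A_mask             # a 1.5% gap (assumed)
Q       = 0.5e-3      # m^3/s, assumed constant inspiratory flow

# Eq 10: a*v^2 + A_gap*v - Q = 0, with a = kappa*rho*A_mask / (2*mu*d)
a = kappa * rho_air * A_mask / (2.0 * mu_air * d_mask)
v_gap = (-A_gap + math.sqrt(A_gap**2 + 4.0 * a * Q)) / (2.0 * a)  # root with sign of Q
v_mask = kappa * rho_air * v_gap**2 / (2.0 * mu_air * d_mask)     # Eq 8

print(f"v_gap  = {v_gap:.3f} m/s")
print(f"v_mask = {v_mask:.2e} m/s")
# Mask flow fraction = the filtration ratio defined in the next subsection
print(f"mask flow fraction = {v_mask * A_mask / Q:.3f}")
```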
Filtration performance
In this study we define the filtration ratio, η_x, as the fraction of total airflow that passes through the mask material, with

η_x = V_mask,x / V_tot,x,    (12)

where V_tot,x is the total air flow through the mask and gap, equivalent to the respiratory in/outflow, and V_mask,x and V_gap,x are the volumes of air flow through the mask and the gap, respectively. V_gap,x is the integral of v_gap,x and A_gap over time, with

V_gap,x = A_gap ∫ v_gap,x dt.    (13)

From time t_0 to t_1, the value of V_tot,x is given by:

V_tot,x = ∫_{t_0}^{t_1} Q_x dt.    (14)

Substituting Eqs 13 and 14 into Eq 12 and rewriting this in terms of the average efficiency between timepoints t_a and t_b, this is expressed as

η̄_x = 1 − [A_gap ∫_{t_a}^{t_b} v_gap,x dt] / [∫_{t_a}^{t_b} Q_x dt].    (15)

The overall total filtration ratio η can then be found using the total ratio of mask vs. gap flow rates over one breath cycle, including both inspiration and expiration components, given by

η = (V_mask,in + V_mask,out) / (V_tot,in + V_tot,out),    (16)

where η is the filtration ratio over one breath cycle, with [t_0, t_1] and [t_1, t_2] denoting the inhalation and exhalation periods. The mask width (W_mask) and the mask height (H_mask) were measured as 18.6 ± 1.5 cm and 16.7 ± 1.3 cm for two different types of P1/N95 masks and two different types of surgical masks. A_mask = W_mask H_mask is the surface area of the mask. The fit factor, η_mask, is distinct from the filtration ratio, since the mask material will only capture an (ideally high) fraction of the particulate and aerosol matter passing through the filter. The filtration ratio thus represents an upper bound for the fit factor, with

η_mask = (η_n C_p V_mask) / (C_p V_tot) = η_n η,

where η_n is the nominal (manufacturer-indicated) filtration efficiency mask rating (i.e. 0.95 for an N95 mask) and C_p is the concentration of airborne particles (note that this cancels out in the right-hand side of the equation).

Analytical method
Analytical solutions were computed in MATLAB (R2019b, Mathworks Inc., Natick, MA). The equations and parameters in Eqs 1-3 and Table 1 are used to calculate the Q_x vs. t curves of a representative adult male and female [26][27][28]. These Q_x vs. t curves are then used in Eq 10 to solve the quadratic equation of v_gap,x in terms of time, where the x subscript can be replaced by in or out to indicate inhalation and exhalation, respectively. The calculated v_gap,x values are then substituted into Eq 16 to calculate the filtration ratio η. It is worth noting that for the different sexes and breathing stages, the integration intervals [t_0, t_1] (for inhalation) and [t_1, t_2] (for exhalation) in Eq 16 are time variant, with the sign of Q_x switching between exhalation and inhalation phases. Therefore, according to Eq 1, the integration interval [t_a, t_b] in Eq 15 should be half of the period (i.e. t_b − t_a = π/β_x) to represent the complete inhalation process, which also applies to the exhalation process.

Finite element method
The FEM solution is computed in the COMSOL Multiphysics 5.5 environment with LiveLink™ for MATLAB (COMSOL Inc., MA, USA). Fig 3 shows the FEM model corresponding to the analytical boundary conditions in Fig 2, which couples the use of a mask material with defined permeability with the use of the time-dependent flow rates in a modeled geometry. Whereas the actual geometry of a worn physical mask will vary greatly as a function of user and mask factors, we utilize this simplified model in order to provide generalized results that can provide useful relationships between gap size and resultant filtration ratio. As shown in Fig 3, the geometry of the model is designed in COMSOL Multiphysics according to the geometric dimensions of a representative mask, with W_mask = 0.186 m, and W_mouth = 0.0463 m [35], the width of an adult's mouth and nostrils. For the purpose of this work, we use the word 'mouth' to refer to inspiration/expiration pathways, which in practice include the mouth and nostrils.
For the analytical model, W_mouth is implicitly equivalent to the width W_mask, since the velocities will be a function of a uniform pressure differential between the mask interior and exterior. Therefore, we also verify the case of W_mouth = W_mask in the FEM method. Whereas these dimensions have discrete units in the simulation model, we normalize these with W_gap, denoted as W_gap = σW_mask, where σ is the ratio of the gap to the mask dimensions, in order to yield a useful and generalizable measure of η. We set the mask thickness to a discrete representative value, T_mask = 2.51 mm [4], where we vary instead the overall permeability of the mask material κ as an independent parameter. For simplicity, a representative distance between the mask and the mouth, T_air, is represented by T_air = 10 T_mask. As shown in Fig 3a, the Darcy's Law and Laminar Flow physics modules are applied to the mask domain (darker gray) and air domain (light gray), respectively. Fig 3a describes the geometric elements corresponding to each boundary condition. The coupling between the two physics modules is achieved by defining the pressure coupling at the boundary between the Laminar Flow module and the Darcy's Law module (shown as the yellow boundary in Fig 3a). Therefore, the pressure of the Laminar Flow module at the wall is the same as the inlet pressure boundary condition of the Darcy's Law module. For the inlets/outlets represented by purple boundaries, the boundary conditions are defined as pressure equal to zero to represent the atmospheric zero (gauge) pressure outside the mask; the dashed purple line in Fig 3a represents the inlet/outlet boundary adjoining the gap region, whereas the solid purple line represents the inlet/outlet adjoining the mask domain. We make this boundary distinct from the yellow line on the interior of the mask as no pressure coupling is used on the mask exterior, given that the latter is defined as having a zero (gauge) pressure. The red boundary in Fig 3a represents the mouth and nostrils, which is the inlet/outlet boundary. The velocity boundary condition v_mouth,x is defined as Q_x/A_mouth. The materials of the mask and air domain are set to Porous Matrix and Air, respectively. The air material is defined by the built-in properties of COMSOL Multiphysics (ρ_air = 1.204 kg·m⁻³ and μ_air = 1.0884 Pa·s, at 20°C). The porosity in the porous matrix properties is set to 0.9 [36], and the permeability κ is defined according to the different mask materials examined in this study. For Darcy's law in Eq 5, v_mask is a function of the permeability, κ, as well as d_mask. To make the study applicable to masks with various permeabilities and thicknesses, we introduce the mask ventilation resistance term, R, which is inversely proportional to the permeability and incorporates the thickness of the mask material, with

R = μ_air d_mask / κ,    (17)

with units of Pa·s·m⁻¹. The range of R in this study is set to 500-2500 Pa·s·m⁻¹, representative of the range of permeability values in typical mask materials [37]. To study masks with different R, first the corresponding κ are calculated for the various R according to Eq 17 (a short conversion sketch is given below). Then each κ is input into the COMSOL numerical model as the permeability property of the porous matrix. The value of σ then varies from 0 to 0.05 (with a step size of 0.002) to represent normalized gap areas between 0 and 5% of the mask area.
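The conversion from ventilation resistance R to the Darcy permeability κ fed to the FEM model follows directly from Eq 17; the following Python sketch shows the parameter pairing. The d_mask and μ_air values follow the text, while treating the sweep as a stand-alone pre-processing step is our assumption, not code from the paper.

```python
# Minimal sketch of the parameter sweep described above: converting ventilation
# resistance R (Pa*s/m) to Darcy permeability kappa via Eq 17, kappa = mu*d/R,
# and pairing each R with the sigma sweep fed to the FEM model.
mu_air = 1.79e-5           # kg/(m*s), dynamic viscosity of air (from the text)
d_mask = 2.51e-3           # m, mask thickness (from the text)

resistances = range(500, 2501, 500)            # Pa*s/m, spanning 500-2500
sigmas = [i * 0.002 for i in range(26)]        # 0 to 0.05 in steps of 0.002

for R in resistances:
    kappa = mu_air * d_mask / R                # Eq 17 rearranged for kappa
    print(f"R = {R:4d} Pa*s/m  ->  kappa = {kappa:.3e} m^2, "
          f"{len(sigmas)} gap ratios per sweep")
```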
The mesh of the model is refined from a maximum element size of 2 × 10⁻⁴ m to a minimum of ≈9.1 × 10⁻⁵ m to ensure the mesh is dense enough for the smallest modelled gap area (σ = 0.002, that is, W_gap ≈ 4 × 10⁻⁴ m); deviations in this mesh size parameter below 2 × 10⁻⁴ m do not alter the filtration ratio. The mesh distribution for σ = 0.05 is shown in S1 Fig. The time-dependent solver is used to solve the model. After the integral calculation according to Eq 16, MATLAB code is used to process the exported data. Fig 3b shows a representative velocity distribution with σ = 0.05 and R = 900 Pa·s·m⁻¹, where the highest velocities occur in the vicinity of the gap at the right side in Fig 3b. The simulation model here comprises a mask geometry with a single-sided gap. The pressure distribution of the single-sided gap model along the length of the mask is more uneven than for masks with gaps on two sides, especially in the case of a modelled condition with a finite-width mouth, resulting in a deviation from the analytical model when the gap increases (see S2 Fig). On the contrary, a far smaller difference (mean value <5%) is observed between the double-sided gap model and the analytical model, which is attributed to the more uniform pressure distribution over the internal mask interface. We nevertheless show results for a single-sided gap in our figures in order to highlight the maximum deviation possible from our analytical model in the case of a finite-width mouth serving as the inlet/outlet boundary condition inside the mask.

The impact of the face-seal leakage
The contours of efficiency η as a function of mask resistivity and gap size ratio (R and σ) using both the analytical method (Eq 16) and FEM simulations are shown in Fig 5, where η is the ratio of the air flow through the mask to the total air flow (the total air flow through the mask and the gap). Fig 5a shows the analytical results for a representative male and female, respectively. Fig 5b shows the corresponding simulation results with W_mouth = W_mask + W_gap. Each result did not show a significant difference in η between male and female (mean difference < 3%). As shown in Fig 5, the trend is that η decreases for a given R with increasing gap size σ, where the degradation in η vs. σ occurs at a higher rate for higher R values. This indicates that a larger fraction of the airflow is directed through the mask gaps with higher R values, especially when σ is larger. This phenomenon is evident in Eq 8, where κ (permeability) and R are inversely correlated, and where a decrease in κ results in a lower airflow velocity through the mask. Further, as per Eq 11, v_gap,x ∝ κ⁻⁰·⁵, which means that a decrease of κ will simultaneously result in an increase of the gap airflow velocity. Comparing Fig 5a and 5b, we can observe no significant difference between the analytical results and the simulation results (mean difference < 1.5%), demonstrating the validity of the analytical approach in modelling a simplified system. Since the analytical model in Eq 16, however, is a generalized equation that is not specific to a particular geometry, we seek to understand how a spatially limited inlet/outlet condition (i.e. a mouth) might impact the relative airflow pathways through the mask vs. the gap. Interestingly, this simulation approach results in a marginally higher efficiency for small gap dimensions, but significantly lower efficiency for much larger gap dimensions.
Nevertheless, a quantitative examination of Fig 5c shows that when the inlet/outlet condition is set to a value representative of a mouth width (W_mouth = 0.0463 m), there is only a small difference from the prior analytical/simulation results when the face-seal gap is small (i.e. σ < 0.015, mean difference < 3%), with the most significant deviation for σ > 0.02 (mean difference > 5%). Limiting our analysis to an intermediate gap size range with 0.015 < σ < 0.02 for values of R < 1500 Pa·s·m⁻¹, which is the case for the example mask resistivities given later in this work, there is also still good agreement with the analytical results (mean difference < 3%). This deviation at higher σ values is ultimately caused by the concentration of airflow in the middle of the mask, with a resulting uneven pressure distribution across the mask interior (see S3 Fig). Therefore, the pressure is unevenly distributed along the coupling boundary between laminar flow and Darcy's law (i.e., the yellow boundary in Fig 3a). However, in the analytical model, the coupling boundary is idealized as having equal pressure along the boundary. In the case of a fixed volume flow rate, v_mouth increases as the mouth size decreases. The increased v_mouth results in increased local pressure near the mask and gap interior (see S3 Fig). As per Eq 8, the increased local pressure will increase the velocity through the gap and mask, whereas the change in v_mask,x is greater because v_mask,x ∝ v_gap,x² while v_gap,x < 1 m·s⁻¹. For all cases however, we find that the ventilation resistance has surprisingly little effect on the filtration ratio for small gap dimensions (σ < 0.015). Fig 6 shows the dependences between η and σ for representative 3M 1860 and 1870+ respirator masks, which are in common use in medical environments [38][39][40]. Here R for the 3M 1860 is 928 Pa·s·m⁻¹, and R = 1272 Pa·s·m⁻¹ for the 3M 1870+ [4]. The solid lines in Fig 6a and 6b show the corresponding model results.
We use the simulation model from Fig 5c here, since this is the most representative of the boundary conditions of the two simulation cases for a physical mask. Fig 7a shows the fit factor of airborne particles calculated via Eq 17, shows results for 1860 and 1870+ masks at 0 < σ < 0.016. We show results in this range (as opposed to 0 < σ < 0.05) to limit our analysis to the range of the common mask classifications [3][4][5][6]. The black dashed lines in the figure are the particle filtration rate corresponding to different mask classifications [42,43], where sufficiently large gap dimensions will reduce a given N95 mask's equivalent performance to these lower standards. The error bars here reflect the N95 standard that > 95% of particles are captured in a perfectly sealed mask (i.e. a filter material with a 100% capture rate would also conform to this standard). Notably, when σ is negligible (σ < 0.005, less than 0.5% gap size), the effect of gap on the filtration rate is minimal with reduction in filtration ratio of < 2.1%). The reduction in fit factor for larger gap sizes, however, is steep, with the masks capturing at most 80% of particulates for gap ratios greater than~1.5% (lower than the FFP1 standard), regardless of the initial filtration efficiency of the mask. Whereas PLOS ONE The critical importance of mask seals on respirator performance: An analytical and simulation approach mask, from 500 to 2500 Pa�s�m -1 . Regardless, the curves for these different resistances all show similar trends, with decreasing fit factor for larger values of σ. When R = 500 Pa�s�m -1 , the fit factor drops from 95% to 50.81% between 0 < σ < 0.05. However, in line with prior results showing decreased performance for larger resistance, for R = 2500 Pa�s�m -1 the filtration ratio is reduced from 95% to 21.9%. This is consistent with the prediction in Eq 11 that the reduced permeability results in an increased v gap x Also of note is the reduction in the decreasing error bar with increasing σ, since its magnitude scales with the fraction of airflow passing through the mask material. Conclusion In this article we quantified the impact of mask seal gaps on filtration ratio and fit factor via analytical and simulation approaches, with application to mask materials with a wide range of ventilation resistances and mask gap areas. Our results show that the face-seal leakage has a significant impact on the fraction of airflow passing around the filter material, where both increased ventilation resistance and mask gap dimensions degrade mask performance. For mask areas on the order of~200-300 cm 2 , for example, gap dimensions corresponding to just 1.5% of the mask area (~4 cm 2 ) can result in approximately 20% of the airflow bypassing the filter for typical materials. This unfiltered 20% has an outsized impact on performance; since the relationship between infection risk and viral particle load is non-linear, 80% of air being filtered equates to much less than an 80% infection risk reduction [44]. Compared with previously reported methods that were specific to particular mask geometries and experimental protocols, this analytical and FEM evaluation can be applied universally to readily provide first-order estimates of mask performance, and therefore permissible gap dimensions. Moreover, this approach is equally applicable to full face respirators, surgical masks and respiratory protection masks. 
Our results highlight the critical importance of not only reducing gap dimensions, where gaps of greater than 1% can result in significant fractions of airflow bypassing the filter material, but also of maximally reducing ventilation resistance (though not at the expense of particulate filtering performance). Choosing engineered nanofiber materials as opposed to the more typical melt-blown filter layer [3,4,45,46], for example, may help to make mask performance less sensitive to small gaps. Interestingly, we also find that very small gap sizes (less than 0.5% of the mask area) are likely to have a minimal impact on fit factor, potentially allowing future designs to strike a balance between comfort and fit factor in select usage scenarios. In addition, the simulation and analytical models developed here have the potential to assess the impact of usage-associated changes in filter characteristics, where the accumulation of particles and moisture can alter the filter's flow resistivity. Our results show that an increase in resistivity, for example, results in a decrease in the fraction of air that passes through the filter material, such that the performance of a pristine filter is likely to deteriorate with use in the presence of mask gaps. Whereas the analytical model demonstrates the scaling relationship between bypass ratio, gap dimensions and mask materials for an idealized mask setup, the simulation modelling approach might also be further utilized to examine the impact of non-uniform accumulation of particulate matter via alteration of the localized flow resistivity parameters.
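As a closing illustration of how the analytical model can be applied, the following Python sketch chains the pieces together: an assumed sinusoidal breath (Eq 1), the instantaneous gap-velocity solution (Eqs 10-11), and the whole-cycle filtration ratio (Eq 16). All parameter values, including the breath amplitude and frequency, are illustrative assumptions, so the printed η values characterize the model under those assumptions only.

```python
import numpy as np

# End-to-end sketch of the analytical pipeline (Eqs 1, 10, 11 and 16).
rho, mu, d = 1.29, 1.79e-5, 2.51e-3   # air density/viscosity, mask thickness
A_mask = 0.186 * 0.167                # W_mask * H_mask, m^2
R = 928.0                             # e.g. a 3M 1860-like resistance, Pa*s/m
kappa = mu * d / R                    # Eq 17
alpha, beta = 0.5e-3, np.pi / 2.0     # assumed breath: Q = alpha * sin(beta * t)

def v_gap(Q, A_gap):
    """Root of Eq 10 carrying the same sign as Q (Eq 11)."""
    a = kappa * rho * A_mask / (2.0 * mu * d)
    root = (-A_gap + np.sqrt(A_gap**2 + 4.0 * a * np.abs(Q))) / (2.0 * a)
    return np.sign(Q) * root

for sigma in (0.005, 0.015, 0.03):
    A_gap = sigma * A_mask
    t = np.linspace(0.0, 2.0 * np.pi / beta, 2001)   # one full breath cycle
    Q = alpha * np.sin(beta * t)
    Vg = np.trapz(np.abs(v_gap(Q, A_gap)) * A_gap, t)   # gap volume over cycle
    Vt = np.trapz(np.abs(Q), t)                          # total breathed volume
    print(f"sigma = {sigma:.3f}: eta = {1.0 - Vg / Vt:.3f}")   # Eq 16
```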
Volatile Anesthetic Depression of Ca²⁺ Entry Into and Glutamate Release from Cultured Cerebellar Granule Neurons

Background: Volatile anesthetics (VAs) are known to have actions on a variety of ligand- and voltage-gated ion channels, and thereby inhibit neuronal function. VA effects mediated by actions on voltage-gated Ca channels (VGCCs) were determined by studying their effects on the depolarization-induced rise in intracellular Ca²⁺ transients and the consequent glutamate release in cultured neonatal rat cerebellar granule neurons.
Methods: Using a glutamate dehydrogenase-coupled assay for glutamate release, and fura-2 to measure intracellular [Ca²⁺] ([Ca²⁺]i), neurons at 37°C were depolarized by a rapid increase in [K⁺]o from 5 to 55 mM. Actions of halothane, isoflurane, enflurane, and sevoflurane were compared with effects of altered [Mg²⁺]o, and with specific blockade of L-, P/Q- and/or N-type VGCCs by nicardipine, ω-agatoxin IVA, and ω-conotoxin GVIA, respectively. Whole-cell patch-clamp studies of VGCC Ba²⁺ currents in these same neurons were also performed at 22°C.
Results: Clinical VA concentrations dose-dependently depressed both peak [Ca²⁺]i and glutamate release by 35-70%. With N- and/or L-type VGCC blockade, VAs caused a further marked decrease in [Ca²⁺]i transients. VAs depressed whole-cell patch-clamped Ba²⁺ currents in these granule cell neurons by 35-40%.
Conclusions: VAs depress Ca²⁺ entry by inhibiting a variety of VGCCs, and thereby reduce neuronal glutamate release. This action may contribute to the mechanism of anesthesia as well as provide protection during ischemic insults that cause neuronal injury.

In mediating loss of consciousness and abolishing response to painful stimuli, volatile anesthetics (VAs) appear both to enhance inhibitory neurotransmission mediated by GABAA receptors (1,2) and to inhibit excitatory synaptic transmission (3)(4)(5). In the latter case, the effects of VAs appear to be associated with decreased release of glutamate, mediated in part by a reduction in Ca²⁺ influx (3)(4)(5)(6). In adrenally-derived PC12 cells, VAs inhibited the K⁺-depolarization-induced rise in intracellular Ca²⁺ due to influx via L- and N-type voltage-gated Ca channels (VGCCs) (7). At clinical concentrations and physiologic temperatures, VAs cause parallel and dose-dependent reductions in Ca²⁺ transients and glutamate release from isolated cerebral synaptosomes induced by depolarization with 30 mM K⁺ (8). These decreases are similar to those seen with reductions in external Ca²⁺ and are consistent with VA-mediated inhibition of Ca²⁺ entry via VGCCs (8). However, findings that inhibition of Na channels also decreases neurotransmitter release have raised the possibility that VAs primarily prevent the Na channel-mediated depolarization that opens VGCCs (9)(10)(11). In addition to possibly mediating components of the anesthetic state, VAs may also mediate neuronal protection (12). The bulk of brain energy utilization is related to glutamatergic neuronal activity (13). A presynaptic decrease in Ca²⁺ entry will decrease glutamate release and thereby decrease post-synaptic Ca²⁺ entry via glutamate activation of NMDA (N-methyl-D-aspartate) receptors, decreasing neuronal energy consumption. In addition, the inhibition of neuronal VGCCs may directly decrease the cellular Ca²⁺ overload that activates autolysis or apoptosis (14).
Cerebellar granule (CG) neurons cultured from neonatal rat are a well-established model of uniform cells which can be employed for toxicological and pharmacological investigation of synaptic transmission (15,16). Pharmacological and toxicological investigations, as well as molecular cloning, have defined the diversity of Ca channels in vertebrate neurons. Using a series of potent channel inhibitors and toxins, five distinct high-voltage-activated (HVA) Ca2+ currents (L-, P-, Q-, N-, and R-type) have been identified and are present in rat CG neurons (17). Four distinct ion-conducting α1 Ca channel subunits have been defined by molecular biologic techniques that correspond to these electrophysiological types: CaV1.2 (L-type, α1C), CaV2.1 (P- and Q-type are splice variants, α1A), CaV2.2 (N-type, α1B), and CaV2.3 (R-type, α1E) (18). We have previously demonstrated the ability of VAs to depress Ca2+ currents through all four channel types when expressed in Xenopus oocytes (19). The present study was undertaken to determine to what extent VAs alter [Ca2+]i transients in CG neurons mediated by these multiple types of VGCCs, and the resulting glutamate release evoked by the Ca2+ entry.

Cell Isolation and Culture
Following the National Institutes of Health (NIH) Guide for the care and use of laboratory animals and a protocol approved by the University of Virginia Animal Research Committee, CG neurons were prepared by a modification of the method of Novelli et al. (15) using cerebella isolated from 5- to 7-day-old Sprague-Dawley rat pups. The tissue was coarsely chopped and trypsinized (Type III) for 45 minutes at 37℃, followed by addition of DNase I and trypsin inhibitor and gentle centrifugation. The supernatant was discarded, the pellet was triturated, and after 5 minutes MgCl2 (2.5 mM) and CaCl2 (0.1 mM) were added to the solution. The neuronal suspension was filtered through 70 µm mesh and recentrifuged for 2 minutes. Neurons from the resuspended pellet (2 × 10^6) were plated onto poly-L-lysine-coated glass coverslips (11 mm × 22 mm) and cultured in basal Eagle's medium with 10% fetal calf serum, 2 mM glutamine, 100 µg/ml gentamicin and 25 mM K+. Glial cell proliferation was prevented by addition of 10 µM cytosine arabinoside 24 hours after plating. Granule neurons were maintained in 5% CO2 : 95% air at 37℃ and were used 4 to 10 days after isolation. A small series of neurons was grown in solution containing 5 mM KCl. Biochemical reagents, buffers and toxins were obtained from Sigma Chemical Company (St. Louis, MO) unless otherwise indicated. Halothane was obtained from Halocarbon Laboratories (Riveredge, NJ), sevoflurane from Abbott Laboratories (North Chicago, IL), and isoflurane and enflurane from Anaquest/Ohmeda (Liberty Corner, NJ).

[Ca2+]i Measurement in Cultured Granule Neurons
For measurement of cytosolic Ca2+ concentration ([Ca2+]i), neurons on coverslips were incubated at 37℃ for 20 minutes in basal medium containing 3 µM fura-2-AM, 16 µM BSA and (in mM): NaCl 153, KCl 3.5, NaHCO3 5, KH2PO4 0.4, MgSO4 1.2, CaCl2 1.3, glucose 5, N-tris(hydroxymethyl)methyl-2-aminoethanesulfonic acid (TES) 20, with pH adjusted to 7.4. In some experiments HEPES was substituted for TES with no alteration in behavior. After washing the neurons twice in fura-2-free solution, coverslips were inserted into a holder, placed in a 2 ml cuvette, and washed twice more with 2 ml fresh medium.
[Ca2+]i was determined at 37℃ in a PTI (Photon Technology International, Monmouth Junction, NJ) DeltaScan luminescence spectrofluorometer equipped with a cuvette warmer and magnetic stirrer to ensure adequate mixing during each experiment. Ca2+ influx into neurons was initiated with addition of 33 µl of 3.0 M KCl, which increased [K+]o from 5 to 55 mM. Fluorescence at 510 nm was determined for alternating excitation wavelengths of 340 and 380 nm, with fluorescence (340/380) ratios collected every 0.5 to 1.9 seconds for 60 to 180 seconds. Subsequent calibration was carried out by determining maximum and minimum fluorescence ratios using 10 µM ionomycin for maximum (Ca2+-saturated) values and 10 mM ethylene glycol-bis(β-aminoethyl ether)-N,N,N',N'-tetraacetic acid (EGTA) for minimum values for each coverslip. [Ca2+]i was calculated according to the standard formula using a Ca2+-fura-2 Kd of 224 nM (20).

Measurement of Neuronal Glutamate Release
Glutamate release was measured in CG neurons at 4-7 days using a glutamate dehydrogenase (GluDH)-coupled assay (Boehringer Mannheim GmbH, Germany). As described (21,22), 50 U/ml GluDH was employed in the presence of 1 mM NADP+ to catalyze the formation of α-ketoglutarate and the fluorescent species NADPH from glutamate. NADPH fluorescence was excited at 340 nm and measured at 460 nm using the PTI spectrofluorometer. The coverslip of granule neurons was washed in buffer solution and then incubated at 37℃ for 5 min in a 2 ml cuvette containing (in mM): 145 NaCl, 5 KCl, 1.3 MgCl2, 1.5 CaCl2, 1.2 NaH2PO4, 10 glucose, and 20 HEPES, pH 7.4. As in the [Ca2+]i transient study, glutamate release was activated by adding KCl to achieve a final concentration of 55 mM, while monitoring the change in NADPH fluorescence for 300 seconds at a sampling rate of 1-2 Hz. The fluorescence signal in this setting increased to a value 10-20% above the baseline fluorescence, with a typically stable plateau being reached within 10 seconds. To calibrate the fluorescent response to glutamate release, studies were performed with direct addition of NADPH or glutamate under identical conditions. Addition of NADPH to the cuvette solution to obtain 0.2, 0.5, and 1.0 µM concentrations resulted in abrupt increases in the fluorescence signal of 0.92 ± 0.21, 1.94 ± 0.86, and 3.1 ± 1.31 × 10^5 counts per second (cps), respectively (± SD, n=5). When glutamate was added (in aliquots of 0.5 mM solution) to solutions containing 50 U/ml GluDH, the fluorescence signal increased with an exponential time course (time constant ~60 seconds at 37℃). When the added [glutamate] was 0.2, 0.5 and 1.0 µM (equimolar to the increases in [NADPH]), the respective steady-state increases in fluorescence signals were 0.89 ± 0.19, 1.83 ± 0.29, and 2.89 ± 0.30 × 10^5 cps, or ~95 percent of the NADPH values. The close agreement of the fluorescence signal between the same quantity of NADPH and glutamate suggests that the glutamate reaction producing NADPH proceeded to completion. In control experiments, when [glutamate] was abruptly increased to 0.2, 0.5 and 1.0 µM in the NADPH buffer and GluDH mixture in the presence of halothane or isoflurane, no difference was observed in the rate or extent of fluorescence increase from that seen in their absence.
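Referring back to the [Ca2+]i calibration described above: the "standard formula" cited there, presumably the usual ratiometric relation of Grynkiewicz and colleagues, converts the background-corrected 340/380 ratio R into free calcium using the EGTA-derived R_min, the ionomycin-derived R_max, and the 380 nm fluorescence of Ca2+-free (S_f) and Ca2+-bound (S_b) dye:

[\mathrm{Ca}^{2+}]_i \;=\; K_d \cdot \frac{R - R_{\min}}{R_{\max} - R} \cdot \frac{S_{f,380}}{S_{b,380}}, \qquad K_d = 224~\mathrm{nM}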
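The glutamate calibration above implies a simple linear conversion from the NADPH fluorescence step to the amount of glutamate metabolized in the cuvette. The minimal Python sketch below is ours, with illustrative function names; it ignores the local glutamate uptake discussed next, so it reflects only the glutamate actually seen by GluDH:

import numpy as np

def cps_per_uM(conc_uM, signal_cps):
    # Least-squares slope through the origin for the calibration points.
    c = np.asarray(conc_uM, dtype=float)
    s = np.asarray(signal_cps, dtype=float)
    return float(np.dot(c, s) / np.dot(c, c))

def glutamate_nmol(delta_f_cps, slope_cps_per_uM, cuvette_ml=2.0):
    # Fluorescence step (cps) -> concentration (uM) -> amount (nmol).
    conc_uM = delta_f_cps / slope_cps_per_uM
    return conc_uM * cuvette_ml  # uM x mL = nmol, since uM x L = umol

# Glutamate calibration points quoted above:
slope = cps_per_uM([0.2, 0.5, 1.0], [0.89e5, 1.83e5, 2.89e5])
print(round(slope))                            # ~3.1e5 cps per uM
print(round(glutamate_nmol(1.0e5, slope), 2))  # a 1.0e5 cps step -> ~0.65 nmol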
Upon addition of KCl and depolarization of the neurons, there was a sudden increase in the NADPH fluorescence signal, followed by a much smaller and slower increase, which typically stabilized by 5 to 15 seconds (Figures 1b, 2b, 3b). The increase in the fluorescence signal was typically on the order of 0.4-1.5 × 10^5 cps, reflecting a final metabolism of 0.2-0.4 nmoles of released glutamate. The maximum value varied with the degree of confluence and coverage of neurons on the coverslips. Compared to the addition of glutamate in solution, the stabilization of the fluorescence signal in the presence of depolarization-induced glutamate release from neurons was far more rapid. Such rapidity suggests that there must be rapid release of a high concentration of glutamate (>40 nmoles yielding >20 µM), followed by rapid arrest (≤1 second) of glutamate release, as well as substantial local glutamate uptake into neurons, which would then cause cessation of NADPH production in the first few seconds. A high concentration of glutamate (>1 mM) with rapid reuptake has been predicted to be found in synaptic clefts (23), while a high-capacity system for glutamate uptake in neurons (24) could account for the rapid stabilization of the signal. In additional previously reported control experiments, GABAA receptors (chloride ion channels) were blocked using 100 µM bicuculline, GABAB receptors were activated by 10 µM baclofen, NMDA glutamate receptors were inhibited by D-(-)-2-amino-5-phosphonovaleric acid (AP-5), or intracellular Ca2+ was mobilized by 5 mM caffeine. None of these separate interventions had any significant action on the depolarization-evoked Ca2+ transient or glutamate release (25). In that same study, Na channel blockade by 10 µM tetrodotoxin caused an 11% decrease in the Ca2+ transient peak (with a non-significant decrease in glutamate release), while intracellular Ca2+ immobilization with 10 µM ryanodine caused a 14% decrease in glutamate release (with no effect on the Ca2+ transient).

Anesthetic and Drug Administration
Prior to either type of experimental study, CG neurons were incubated in the cuvette for 5 minutes in VA-equilibrated solution, which was generated by bubbling filtered VA-containing air that had passed through anesthetic vaporizers (Ohmeda, Madison, WI) calibrated to deliver the specified percent vapor in air. VA vapor concentrations were approximately 0.8 and 1.6 times the minimum alveolar concentration (MAC) value for rats (the concentration at which 50% of rats do not respond to painful stimulation) (26). As periodically verified by gas chromatography, 0.75 and 1.5% halothane yielded aqueous concentrations of 0.25 and 0.5 mM; 1.3 and 2.5% isoflurane yielded 0.23 and 0.42 mM; 2% and 4% sevoflurane yielded 0.22 and 0.43 mM; and 1.7 and 3.5% enflurane yielded 0.35 and 0.70 mM. Solutions were sampled at 37℃ and aqueous concentrations typically varied by ±10%. VA-containing air continually flowed through the cuvette head-space to prevent VA loss to the atmosphere. Control solutions were bubbled with filtered air only. A five-minute incubation was sufficient to achieve a stable effect of the anesthetics, nicardipine or ω-agatoxin-IVA (Aga-IVA; Alexis Biochemical, San Diego, CA); a 20-minute prior exposure to ω-conotoxin-GVIA (Ctx-GVIA) was found to be necessary to achieve its maximum effect on either [Ca2+]i or glutamate release.

Whole-Cell Patch-Clamp Studies
For electrophysiological studies, neurons grown on coverslips under conditions identical to those for the spectrofluorometric studies were placed at the bottom of a recording chamber mounted on an inverted microscope where bathing solutions could be exchanged.
Prior to establishing the whole-cell recording configuration, the external bathing solution contained (in mM): 140 NaCl, 5 KCl, 2 CaCl2, 1 MgCl2, 10 HEPES, adjusted to pH 7.4 with 1 N NaOH. The patch pipette solution contained (in mM): 108 CsMeSO4, 10 CsCl, 9 EGTA, 24 HEPES, 4 Mg-ATP, 0.3 GTP, adjusted to pH 7.3 with 1 N CsOH. Once whole-cell recording was achieved, the bathing solution was replaced with one that would eliminate potentially interfering K and Na currents (in mM): 160 TEA-Cl, 5 BaCl2, 10 HEPES, pH 7.3 with 1 N CsOH. Standard whole-cell voltage-clamp methods were employed using the Axopatch 200 patch clamp amplifier (Axon Instruments, Foster City, CA). Data acquisition was performed using a pClamp system version 5.5.1 (Axon Instruments) coupled with an IBM-compatible, 386-based microcomputer. Patch electrodes were prepared from borosilicate glass 1B150F-3 (World Precision Instruments), heat polished, and had a resistance of less than 5 MΩ when filled with internal solution. All experiments were conducted at room temperature (20-22℃). Four to six minutes after initiating the whole-cell recording configuration, neurons were typically voltage-clamped at -80 mV to establish a stable baseline and to maximize currents by reducing steady-state inactivation. To define the current-voltage relation, IBa was triggered by step depolarizations 70 msec in duration from -40 to +40 mV. After control measurements the preparation was superfused for 4 to 6 minutes with solution preequilibrated with either halothane or isoflurane (produced by bubbling the solution at room temperature) and the measurements were repeated. Anesthetic solution was washed out for 5-8 minutes before recording recovery currents. Standard P/n analysis was used to estimate and subtract leakage and capacitative currents. The higher solubility of the anesthetics at room temperature produced aqueous concentrations approximately 60-70% higher than those at 37℃. In other experiments a depolarization to -10 mV was applied for 400-450 msec. A 9.5 sec depolarization to 0 mV was applied to duplicate the prolonged depolarization obtained with application of 55 mM K+.

Statistics and Analysis
For neurons on the same day in culture and growing at a similar density, KCl depolarization elicited extremely uniform control responses for either [Ca2+]i transients or glutamate release, varying by less than 8%. Results of [Ca2+]i measurements are reported and compared as absolute values and also as fractions of same-day control, while glutamate measurements are only reported in the latter format. Unless otherwise indicated, results are expressed as sample mean ± standard error (SEM). Results were compared among anesthetics and drugs by ANOVA and the Protected Least Significant Difference (PLSD) test.
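A minimal sketch of this analysis, a one-way ANOVA followed by protected (Fisher) least-significant-difference pairwise tests, is given below; the software originally used is not stated in the text, and the group values here are placeholders rather than data from the study:

import itertools
import numpy as np
from scipy import stats

def anova_plsd(groups, labels, alpha=0.05):
    # Omnibus one-way ANOVA across all groups.
    f, p = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
    if p >= alpha:
        return  # "protected": no pairwise tests unless the ANOVA is significant
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    df_err = n_total - k
    # Pooled within-group variance (the ANOVA mean square error).
    mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / df_err
    for (i, gi), (j, gj) in itertools.combinations(list(enumerate(groups)), 2):
        se = np.sqrt(mse * (1.0 / len(gi) + 1.0 / len(gj)))
        t = (np.mean(gi) - np.mean(gj)) / se
        p_ij = 2.0 * stats.t.sf(abs(t), df_err)
        print(f"{labels[i]} vs {labels[j]}: t = {t:.2f}, p = {p_ij:.4f}")

# Placeholder fractions-of-control, three coverslips per condition:
anova_plsd(
    [np.array([0.52, 0.48, 0.55]),
     np.array([0.60, 0.57, 0.63]),
     np.array([1.00, 0.97, 1.02])],
    ["halothane", "isoflurane", "control"],
)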
Control Studies
As shown by the control response in Figure 1a, a sudden increase in [K+]o to 55 mM caused an increase in [Ca2+]i from 80-110 nM to over 800 nM, subsequently declining to less than one-half the peak value within 20 seconds, with a stable plateau of 160-210 nM reached at 50-90 seconds (not shown). … The more modest effects of Ctx-GVIA in depressing glutamate release, compared to Aga-IVA or nicardipine (Figure 2c), agree with previous reports suggesting that N-type channels contribute modestly to glutamate release in these neurons (22,30). When two VGCC toxins were combined, the [Ca2+]i transient decreased even more, to 40-45% of control (Figure 2a, c). However, the less than additive effect suggests that additive inhibition of the [Ca2+]i influx may be due in part to overlapping drug sensitivity (22). When 1 µM nicardipine was combined with either Aga-IVA or Ctx-GVIA there was more profound depression of the plateau of [Ca2+]i. When two VGCC types were blocked, glutamate release (NADPH signal) showed a fractional depression similar to that observed for peak [Ca2+]i (Figure 2c). When the three agents were applied together, the KCl-evoked Ca2+ transient and glutamate release were similarly reduced, to 28 ± 11 and 30 ± 13 percent of control (n=4), respectively.

Volatile Anesthetic Effects on [Ca2+]i and Glutamate Release
VAs did not alter the basal [Ca2+]i, but markedly depressed the K+ depolarization-induced [Ca2+]i transient, decreasing both peak and plateau values. In Figure 3d and e, the VA-induced decrease in the glutamate release signal is plotted against the decrease in the peak of the Ca2+ transient. The depressant effect of equivalent anesthetic concentrations is similar and clearly concentration-dependent, in that 1.6 MAC caused approximately twice the depression of that seen with 0.8 MAC of the anesthetic. To determine if specific VGCC channels were altered by the anesthetics, the decreases in the Ca2+ transients were observed when 2.5% isoflurane or 1.5% halothane was combined with L-type VGCC block by 1 µM nicardipine and/or N-type VGCC block by 100 nM Ctx-GVIA. The combination of the depressant effect of 1.5% halothane or 2.5% isoflurane with either L-type or N-type VGCC blockade caused a further decrease of the [Ca2+]i transient by an additional 20-40% compared to the drug or toxin by itself (Figure 4a-c). Therefore, more than one type of VGCC is blocked by the anesthetics. Conversely, since nicardipine or Ctx-GVIA can further augment the depression produced by the anesthetics, such additional depression also implies that not all L- or N-type channels are completely inhibited by 1.5% halothane or 2.5% isoflurane. In the presence of combined L- and N-type channel block, the additional anesthetic-induced decrease in the Ca2+ transient was particularly profound: 1.5% halothane or 2.5% isoflurane further reduced the Ca2+ transient mediated by non-L-, non-N-type VGCCs (40 percent of control) to only 4-6 percent of control (Figure 4d). Even 0.75% halothane or 1.3% isoflurane resulted in a significant decrease, to 18-28 percent of control. In contrast to neurons grown in 25 mM K+, the Ca2+ transient observed in neurons grown in 5 mM K+ media was much smaller, increasing from a basal [Ca2+]i of 49 ± 17 nM to a peak of 175 ± 80 nM, and subsequently decreasing to a plateau of 107 ± 35 nM (n=21). The smaller Ca2+ transient was depressed by nicardipine and Ctx-GVIA to 30 and 34 percent of control, respectively, suggesting that L- and N-type Ca channels contributed similarly to Ca2+ entry. 1.5% halothane depressed the smaller transient peak [Ca2+]i and plateau to ~46 and ~50 percent of control, respectively (n=3), similar to the inhibition seen in neurons cultured in 25 mM K+.

Anesthetic Effects on Whole-Cell Ca2+ Currents
To determine if the depression of [Ca2+]i transients caused by anesthetics was reflected by a similar effect on voltage-gated Ca2+ currents, these were examined by the whole-cell patch-clamp technique. Ba2+ was substituted for Ca2+ as the charge-carrying ion to enhance the inward current.
Under control conditions with a -80 mV holding potential, the peak IBa elicited by a test pulse to -10 or 0 mV averaged -141 ± 91 pA (n=29), with substantial variability in amplitude present among the neurons studied. Peak IBa was significantly and reversibly diminished by 2.5% isoflurane to 68 ± 8 percent of control (P<0.001, n=11, Figure 5a) and by 1.5% halothane to 67 ± 11 percent of control (P<0.02, n=5). Anesthetic treatment had no obvious effect on the voltage dependence of the current-voltage relationship (Figure 5b); however, this observation should be interpreted with caution since there are multiple VGCC types in these neurons. Although the amplitude and time course of IBa varied among the neurons over a 450 ms depolarization, the current decrease seen with application of either 2.5% isoflurane or 1.5% halothane showed little difference from that seen by simply scaling the control response (Figure 5c, d), suggesting no VA effect on the rate of inactivation of the inward currents. To determine the relative sensitivity of L- and N-type VGCCs to the anesthetics, we exposed CG neurons to the combination of 1 µM nicardipine and 100 nM Ctx-GVIA. Since high concentrations of divalent cations, particularly Ba2+, are known to inhibit N-type Ca channel blockade by Ctx-GVIA (31), neurons were pretreated with Ctx-GVIA for 30 minutes before recording. With the elimination of N-type Ca channels, 2.5% isoflurane inhibited the remaining IBa (140 ± 41 pA; n=8) by 40 ± 3 percent, slightly more than the 32% depression of IBa in the absence of the toxin pretreatment (P<0.05). In the presence of both 100 nM Ctx-GVIA and 1 µM nicardipine, 2.5% isoflurane decreased the remaining peak IBa from the control value of 143 ± 20 pA by 37 ± 3% (n=5), suggesting that the unblocked P/Q- and R-type currents were inhibited by VAs to a similar degree as the N- and L-type channels.

Figure 5. Anesthetic Actions on Ca Channel Currents in Cultured Rat CG Neurons at 21℃. Neurons at 4-8 days in culture were whole-cell patch clamped with pipette solution containing 130 mM Cs+, 9 mM EGTA, 4 mM MgATP, and 0.3 mM GTP, in the presence of 5 mM Ba2+ and 160 mM TEA extracellularly. a, Current (IBa) in response to a depolarization from -80 mV to -10 mV, with capacity and leak correction. Traces are indicated for control, 2.5% isoflurane-equilibrated solution, and after 10 minutes of recovery from anesthetic. b, Current-voltage relation for the neuron studied in a. c and d, Capacity- and leak-corrected IBa in response to a more sustained depolarization from -80 mV to 0 mV, for control and in the presence of 2.5% isoflurane (c) or 1.5% halothane (d). Open circles indicate the control response scaled by 0.63 for isoflurane and 0.50 for halothane. Recovery currents are indicated by the dotted traces.

Since the increase of 50 mM K+ results in a sustained depolarization during the [Ca2+]i transient experiments, we examined anesthetic action on more prolonged depolarizations of 9.5 seconds. In this setting, 2.5% isoflurane clearly caused fractionally greater depression of the later IBa components. As evident from the difference current in Figure 6a, the isoflurane-sensitive IBa showed a small initial inactivation, followed by a sustained component with minimal inactivation. When 1 µM nicardipine was applied for comparison, the nicardipine-sensitive L-type channel component of IBa was primarily a non-inactivating current of smaller magnitude than the isoflurane-sensitive current component (Figure 6b).
The sustained block of IBa by nicardipine is clearly compatible with the marked depression of the plateau of the [Ca2+]i transient by this agent. With brief depolarizations, transmitter release is reported to vary as a steep power function of Ca2+ entry (34,35). The lower-order dependence seen here probably reflects the fact that sustained depolarization prolongs Ca2+ entry, so that [Ca2+]i equilibrates with the vesicular release machinery and glutamate release becomes a more linear function of Ca2+ entry. The Ca2+ transient and the associated glutamate release in these neurons appear to be almost exclusively due to entry of Ca2+ via VGCCs. Prior treatment with ryanodine or caffeine, in an attempt to deplete intracellular stores, did not alter the KCl-induced Ca2+ transient or glutamate release (25). Specific VGCC blockade by drug and/or toxin is also consistent with virtually exclusive dependence on VGCC Ca2+ entry for the [Ca2+]i increase that activates glutamate release. The use of Ctx-GVIA, nicardipine, and Aga-IVA was associated with similar decreases in the [Ca2+]i transient peak. In other neurons, N-type current block was associated with only modest depression of glutamate release (22,30), as observed in these experiments (Figure 2c); and P/Q-type channel blockade was associated with the greatest decrease in neurotransmitter release (36). Electrophysiological studies (17,37) have shown the presence of five distinct Ca channels with differing toxin sensitivity in CG neurons in culture. We used Ctx-GVIA, Aga-IVA, and nicardipine for the respective block of N-, P/Q-, and L-type VGCCs. Nicardipine block of the later plateau of [Ca2+]i is consistent with the present and previous voltage-clamp studies (38) showing that L-type current demonstrates less inactivation than the other VGCCs and would be expected to promote a more sustained Ca2+ transient. Combined nicardipine and Ctx-GVIA showed a slightly less than additive effect on [Ca2+]i transients and plateau, probably indicating that some VGCCs in CG neurons have a mixed pharmacology, i.e. sensitivity to both Ctx-GVIA and DHP antagonists (39). When the three toxins were combined, the [Ca2+]i transient was depressed to ~25% of control, which is similar to the remaining R-type current observed in CG neurons after similar blockade (17). Ongoing R-type VGCC activity permitted a small ongoing glutamate release, suggesting that it too supports neurotransmitter release (40). In granule neurons, VAs clearly inhibit the currents carried by these channels. It is important to note that the recorded glutamate release occurs at the synaptic nerve endings or boutons, while the Ca2+ transient reflects Ca2+ increases in the cell body as well as the presynaptic endings; the recorded Ca2+ currents represent primarily Ca2+ entry in the cell body. There may be differences in the distribution of the different types of VGCCs between the cell body and the nerve endings, and that difference may account for the minimal effect of Ctx-GVIA on glutamate release (11% decrease), in spite of its larger effect on the Ca2+ transient and Ca2+ current (~45% decrease). In addition, while CG neurons are the most common neuron in the CNS, it is possible that CG neurons may be more or less susceptible to interventions or injury than cortical pyramidal cells, cerebellar Purkinje cells, other neurons, or glial cells. These results demonstrate that therapeutic concentrations of VAs at physiological temperature markedly inhibit the elevation of [Ca2+]i induced in cultured CG neurons by depolarization with 55 mM KCl.
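To put the earlier cooperativity point in numbers: if glutamate release scaled as the n-th power of Ca2+ entry, a 50% block of entry would depress release far more than 50% for n > 1. A small illustrative calculation (the values of n are ours, not from the study) follows:

def release_depression(ca_block_fraction, n):
    # If release ~ (Ca2+ entry)^n, a fractional block of Ca2+ entry gives
    # a release depression of 1 - (1 - block)^n.
    return 1.0 - (1.0 - ca_block_fraction) ** n

for n in (1, 2, 4):
    print(n, round(release_depression(0.5, n), 2))
# n=1 -> 0.50, n=2 -> 0.75, n=4 -> 0.94; only n near 1 is consistent with the
# roughly parallel ~50% depressions of peak [Ca2+]i and glutamate release.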
Figure 7 shows schematically the presynaptic ending and the Ca2+-induced release of glutamate. These effects are complemented by the depression of Ca2+ currents measured here in whole-cell patch-clamp experiments. The lower VA concentrations (0.75% halothane, 1.7% enflurane, 1.3% isoflurane, 2% sevoflurane) employed approximate human MAC (EC50) values, at which movement in response to a surgical stimulus is inhibited in 50 percent of patients. For rats, the species from which the tissues were derived, these lower concentrations are approximately 0.8 MAC, and resulted in depression of the peak [Ca2+]i transient to about 50% of the same-day control level; 1.6 MAC caused even greater depression, to about 20-30% of control. Both halothane and isoflurane caused substantial but incomplete inhibition of N-type and L-type Ca channel-mediated entry into these neurons, since addition of the specific VGCC blocker for either channel still caused further blockade in the presence of VA. The effect of VAs on the [Ca2+]i peak and plateau in neurons pretreated with both nicardipine and Ctx-GVIA, or on the IBa in similarly treated voltage-clamped CG neurons, showed marked inhibition of the remaining response, suggesting that P/Q- and R-type VGCCs are relatively sensitive to the VAs. After specific blockade of one or more of the various VGCC types, the VAs always had an effect on the remaining channels, consistent with our previous observation that the VAs inhibited the various types of VGCC to a similar extent when the specific associated α1 subunit was expressed in oocytes (α1A = P/Q-type; α1B = N-type; α1C = L-type; α1E = R-type) (19). To delineate more clearly the relationship between VA actions on the [Ca2+]i transient and glutamate release, the average fractional decrease in glutamate release was plotted versus the average decrease in the peak [Ca2+]i transient for each anesthetic concentration studied (Figure 3d, e). Clearly, there is a parallel decrease, although for certain agents the relation between the peak of [Ca2+]i and the glutamate release appears to be somewhat less than unity. Nevertheless, the effect of VAs is strikingly similar to the glutamate release/[Ca2+]i transient relation observed with other methods of blocking Ca2+ entry. Although some variation in the inactivation rate of IBa was observed in various neurons, the depression was similar for either isoflurane or halothane. The fractional inhibition of IBa at room temperature was somewhat less than the decrease in the measured Ca2+ peak transient, which may represent inherently different sensitivity at the two temperatures. The different charge-carrying ions could also contribute to the variation, since with Ba2+ there will be no Ca2+-dependent inactivation. In vivo studies of experimental animals implicate inhibition of VGCCs in certain behavioral components of the anesthetic state. Funnel-web spider toxin, which blocks P-type and other high-voltage-gated Ca channels, can cause lethargy and stupor in mice (41); blockade of N-type channels by spinally administered ω-conotoxin MVIIA (SNX-111, ziconotide) has distinct antinociceptive actions (42). Non-specific VGCC blockade by Cd2+ as well as L-type channel block by verapamil have been found to enhance the anesthetic potencies of ethanol, pentobarbital, ketamine, and other anesthetics in mice (43,44).
While Mg2+ was employed here as a tool to alter synaptic release, high serum [Mg2+] reduces anesthetic requirement; a four-fold increase of serum Mg2+ in rats reduces the anesthetic requirement for halothane by ~50% (45). The neuroactive agent riluzole (6-(trifluoromethoxy)benzothiazol-2-amine) also decreases glutamate release and decreases anesthetic requirement (46), an effect that appears to be mediated in part by depression of Ca2+ entry through N- and P/Q-type VGCCs (47). VA-mediated decreases in Ca2+ entry and glutamate release may also have important implications beyond analgesia and immobilization. Glutamate release is responsible in large part for the neuronal death that accompanies brain ischemia or anoxia, since blockade of glutamate release markedly reduces cell death (48). Glutamatergic neuronal activity is responsible for the bulk of brain energy utilization measured as glucose consumption (13), primarily due to post-synaptic activation and its greater energy use (49). The glutamate activation of post-synaptic NMDA channels in particular may activate more sustained Ca2+ entry and intracellular accumulation. Insofar as depression of glutamate release should decrease the metabolic activity of the brain, presynaptic depressant effects could contribute to VA-induced protection, as they do for riluzole. The VA inhibition of VGCC-mediated Ca2+ entry could account in part for the decrease in glutamate release observed with ischemia (50), accounting in part for the apparent protective effect of VAs in some experimental models of brain ischemia (12,51). Depression of presynaptic Ca2+ entry itself may also contribute to neuronal protection within the central nervous system. For example, blockade of N-type VGCCs in the CNS can reduce neuronal injury in models of both global (52) and focal cerebral ischemia (53). Such inhibition of CNS neuronal Ca channels may contribute to the potential neuroprotective effects of VAs, which have been described in a variety of previous studies and correlate in part with the decrease in cerebral metabolic activity (54-57). Consistent with these observations are those demonstrating decreased glutamate release from anoxic cortical brain slices in the presence of halothane and enflurane (58), or from the ischemic hippocampal region of rats with isoflurane. It is evident from the tracings that after glutamate release was maximal, the [Ca2+]i continued to increase due to the prolonged K+-induced depolarization. Such an ongoing increase in Ca2+, caused by the hyperkalemia that accompanies ischemia and hypoxia, could contribute to neuronal cell death. In summary, the commonly employed VAs at the equivalent concentrations used in this study had remarkably similar depressant actions on Ca2+ channel-mediated entry into these CG neurons and on the associated glutamate release. Such actions may contribute to certain protective actions of the VAs in the brain as well as to the behavioral and anesthetic effects of these drugs.
2019-04-08T13:11:59.964Z
2017-03-06T00:00:00.000
{ "year": 2017, "sha1": "68c487509981c2a6d55245c246ea7ac2c2a6b635", "oa_license": "CCBY", "oa_url": "http://www.japmnet.com/uploadfile/2017/0320/20170320073313294.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "ab4a5f8ec4b34a2801433ff8bc7407bbeaa5d911", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
227996704
pes2o/s2orc
v3-fos-license
The Status Quo and Reform Thinking of the Talent Training Mode of Biology Teachers in Middle School

Improving teachers' quality and ability has long been a focus of society as a whole. With the progress of the times and the rapid development of biology, higher requirements are being placed on the cultivation of middle school biology teachers. How to cultivate large numbers of high-quality middle school biology teachers with good ethics and outstanding abilities is a problem worth exploring. The traditional training mode for biology normal students suffers from problems such as backward teaching ideas, unreasonable teaching arrangements and uneven teaching levels. In view of these problems, normal universities should take a series of reform measures to promote the professional development of middle school biology teachers. This paper therefore summarizes the necessity, current situation and existing problems of reform of the talent training mode, and puts forward a series of reform measures concerning learning, innovation and reflection. It thus provides a reference for future reform of the talent training mode of middle school biology teachers.

As the saying goes, if a country wants to prosper, it must value its teachers. Education bears on the future prospects of the Chinese nation, and middle school teachers, whose duty is "preaching, teaching and dispelling doubts", play an important role in cultivating the pillars of society and developing education. As an important compulsory course in middle school, biology needs excellent teachers who, while imparting knowledge, stimulate students to love life and care about society, so as to cultivate the essential biological core literacy. Therefore, in reforming the training mode of middle school biology teachers, we should pay attention not only to teachers' knowledge and teaching skills but also to the construction of teachers' quality and ethics, which is the only way to cultivate middle school biology teachers with the spirit of the new era.

The Necessity of Reforming the Training Mode of Middle School Biology Teachers
Education reform is a hot research topic, and so is the reform of the training mode of middle school teachers. First of all, it is very important to understand why to reform. It is necessary to reform and gradually improve the training mode of middle school biology teachers: this is not only a requirement of the new era, but also an inevitable demand of the development of the biology discipline and of teachers themselves.
The Training Mode Reform of Middle School Biology Teachers is a Requirement of the New Era
The impact of an education plan is far-reaching. With the rapid development of information technology, international competition is becoming increasingly fierce and the demand for high-quality talents is particularly urgent. Many international organizations are reforming their talent training strategies in order to obtain long-term development advantages (Zeng, 2019). At the national education conference, President Xi proposed that we should speed up the modernization of education, build a powerful country in education and run education to the people's satisfaction. Consequently, it is necessary for middle school biology teachers to help students establish a good outlook on life and nature in order to cultivate socialist builders and successors with all-round development of morality, intelligence, physique, aesthetics and labor. In the new era, the concept of middle school education is being updated, and the talent training mode for biology normal students also needs to be reformed. Therefore, it is an inevitable requirement of the new era to improve their overall level and cultivate high-quality biology teachers under the guidance of the fundamental task of "cultivating people by virtue".

The Development of the Biology Discipline Requires Reform of the Training Mode of Middle School Biology Teachers
Biology is closely related to information and engineering technology, and has a great impact on many aspects of social life such as medical health, environmental protection and animal husbandry. Facing the new requirements of improving national quality in the new era, the middle school biology curriculum standard was revised in 2020 on the basis of the 2017 experimental version, inheriting and developing the content system of nearly ten years and emphasizing the implementation of the core literacy of biology. It helps middle school students cultivate their life concept, scientific thinking, scientific inquiry and social responsibility (Ministry of Education, 2020). The reform of curriculum standards and the updating of biology teaching materials have placed new requirements on teachers' teaching ability, which requires normal colleges to reform the training mode for biology normal students and implement the requirements of developing biological core literacy.

The Reform of the Training Mode of Middle School Biology Teachers Needs a Standardized Teaching Team
President Xi places great expectations on teachers, hoping that they can achieve the goal of teaching and educating people (Han et al., 2020). If teenagers are to fasten well the first button of their lives, middle school teachers are the "button fasteners". If teenagers are to realize the Chinese dream of the great rejuvenation of the Chinese nation, middle school teachers are the "dream builders". Teachers' words and deeds have a profound impact on middle school students' study and life. The exemplary nature of teachers' work requires enhanced standardization of teacher training. In order to cultivate the biological core literacy of middle school students imperceptibly, it is necessary to improve the training mode of middle school biology teachers and build a high-quality modern middle school teaching team. Only with high-level biology teachers can we have high-quality biological science education in secondary schools.
Current Situation of Middle School Biology Teachers' Training
The training quality of middle school biology teachers bears on the improvement and development of the quality of biology education. Many countries are actively seeking new modes for training middle school biology teachers, which will lay a good foundation for students who may enter biology-related industries in the future. The improvement of talent training quality will enhance national cultural soft power, so as to gain significant advantages in long-term international competition. The following is a brief introduction to the current situations and future development trends of the cultivation of middle school biology teachers at home and abroad.

The Present Situation of the Middle School Biology Teachers' Training Mode in China
The development course of middle school biology teachers' training in China has been quite tortuous. Our biology curriculum was first introduced from abroad after the Opium War. In 1902, the Qing Government stipulated that biology students in normal colleges should learn pedagogical knowledge including the "biology teaching method". After more than a century of practice and development, China has formed a relatively standard curriculum system of "biology teaching theory" and vigorously cultivated biology normal students (Cui et al., 2016). After entering the 21st century, the cultivation mode of middle school biology teachers has become more institutionalized and specialized. In 2011, the Teacher Education Curriculum Standards (Trial) vigorously promoted the reform and innovation of the curriculum concept of teacher education, so as to cultivate high-quality and professional teacher talents. The biology curriculum structure of normal colleges follows the growth law of normal students. Through the continuous study and practice of different types of courses, as shown in Table 1, normal students can gradually grow into qualified middle school biology teachers. With the development of the era, the Ministry of Education has launched a series of plans for training "outstanding teachers". There are higher requirements for the cultivation of biology normal students, so that they can forge ahead towards the goal of "being great teachers" and strive to become teachers with ideals and beliefs, moral sentiment, solid knowledge and benevolence. Only when middle school biology teachers teach well can students have a good biological foundation. Each country has its corresponding education laws and policies to regulate and guarantee the training plan of teachers. While studying biology, normal students should also study other subjects across disciplines, such as environmental science and other humanities courses. Besides, the cultivation of biology teachers tends towards higher educational attainment, and the focus of training is shifting to the postgraduate stage (Liu et al., 2013). With the continuous progress of science and technology, the training modes of secondary school biology teachers in various countries are adapting to the trend of the era, constantly innovating and developing.

The Development Trend of the Middle School Biology Teachers' Training Mode
What kind of middle school biology teachers need to be trained in the future? How should they be trained? These questions should be explored according to the actual situation of the era. However, the future development ideas of various countries show similar trends.
First, the concept of STEAM is in vogue (Wang, 2020). STEAM, first proposed in the United States, is a comprehensive form of education integrating science, technology, engineering, art and mathematics. Biology has multi-level internal relationships with other disciplines. Interdisciplinary education can expand the knowledge of middle school biology teachers, help them better grasp the knowledge logic of biology and benefit the comprehensive development of talents.

Second, attention has been paid to the "normal nature" of middle school biology teachers. In the curriculum arrangements for normal university students, the proportion of practical courses in biology teaching is increasing. Some countries have even increased the years of education and the teaching practice hours for undergraduate biology normal university students, so as to better exercise their teaching ability. Besides, China is also exploring a holistic and continuous training mode spanning undergraduate and master of education students (Ministry of Education, 2018). Some colleges are trying to train biology normal students with "4+2" or "3+2" study-year modes (Tian, 2011).

Third, lifelong learning is receiving more emphasis. Research on middle school biology teaching theory should keep up with the pace of the era. The training of biology teachers should pay more attention to the development of their lifelong teaching career and reform the cooperative training mode between middle schools and universities (Pan, 2018). Teachers must take biology teaching as a lifelong pursuit. In this process, teachers should take the initiative to find problems in biology teaching and innovate ways to solve them with enthusiasm and confidence. More importantly, biology teachers should practice in research, study in reflection and realize their self-development in long-term teaching.

The reform of the talent training mode of middle school biology teachers is closely related to many factors such as the government and society. Most fundamentally, however, colleges and universities should take the main responsibility for biology teaching reform and innovation in the talent training mode. The cultivation of middle school biology teachers should take "what kind of teachers to train" as the goal and innovate new reform paths for "how to cultivate teachers". Colleges are actively adapting to the development of society and seeking new modes to improve the quality of personnel training, but there is still a long way to go.

Problems in the Training Mode of Middle School Biology Teachers
The promotion of quality education and the implementation of the new curriculum reform have had profound impacts on teaching in many schools. But there are still some problems in the training mode for biology normal students in some universities.
The Educational Concept of Colleges Needs to be Improved
If a university's education goal is unclear and the corresponding system is not implemented exactly, it will easily lead to a disconnect between the curriculum and the teaching arrangements for the cultivation of biology normal students (Wang, 2009). Some colleges do not respect the subjectivity of students' learning, and the teaching method is mainly lecture-based. Even such important courses as pedagogy and psychology are still taught in large classes. Teachers lecture from the podium, while whether students listen depends on their own conscientiousness. It is difficult to arouse students' enthusiasm for learning educational theories without teacher-student interaction. Colleges should establish a student-oriented educational idea, focus on students' learning, strengthen the ideological construction of biology normal students, innovate ideas and train high-quality and innovative biology teachers for the new era. Today, with the rapid development of the Internet, colleges should also adopt information-based biology teaching ideas, give full play to the advantages of information technology and make use of diversified teaching platforms, such as massive open online courses and micro-courses, to combine classroom teaching with online learning. Diversified teaching methods can better arouse students' interest in and love of biology teaching.

There are Defects in Teaching Arrangements in Colleges
1) The education curriculum focuses on biological professional courses but neglects biological normal (teacher-training) courses. The proportion of education and teaching courses and of middle school biology teaching practice courses is too small, and the training of students' teaching is insufficient (Lu, 2016). There is a lag in curriculum setting in some colleges; the new middle school biology curriculum standard and the new biology textbooks need to be taken into account. The gap between theoretical study and practical operation may leave normal students unable to do their best in future biology teaching.
2) In biology education practice, biology interns show problems such as a lack of basic skills, a lack of teachers' sense of responsibility and mission, and maladjustment of mentality. Besides, due to the lack of effective communication and norms, biology teaching practice can run out of control and become a mere formality, failing to achieve the desired results (Zhao, 2019).
3) The teaching evaluation of biology normal students is quite simple, and there is a lack of examination of their mental health and communication ability. Simple internship reports and grades cannot effectively represent students' abilities and are not conducive to students' reflection on their own progress and shortcomings. In a word, normal universities should carefully examine their own weak points and rationally adjust the educational curriculum system for biology normal students to help them become more professional.

There is an Imbalance in the Development of Schools
Different colleges have different training modes. Some schools are ahead in the reform of the training mode of middle school biology teachers. For example, in the setting of elective courses, they provide students with different elective directions, such as a developmental psychology course, a blackboard writing training course, a biology curriculum standard study course, a middle school biology teaching method course, etc.
Students can make targeted choices and enrich their knowledge. However, some schools have few or no optional subjects, which is not conducive to the comprehensive development of students. In addition, within the same school, the level of students is also uneven. Some biology normal students do not have enough knowledge and skills in biology teaching. Their actual teaching ability, class management ability and communication ability cannot meet the requirements of middle schools. With the increasing number of graduating biology normal students, the training quality of middle school biology teachers cannot meet the needs of employers, which creates a particular contradiction between supply and demand. As a result, the employment pressure on biology normal students is gradually increasing (Zhang, 2020).

Thoughts on the Training Mode Reform of Middle School Biology Teachers
The original training mode for biology normal students can no longer meet the requirements of talent development in the information environment of the new era, and reform is already under way. The necessity and urgency of reform are self-evident, but there is no definite answer to the questions of how to reform and where to start. More importantly, there is no fixed formula of rules and regulations for the reform. Colleges should follow the general requirement of the Ministry of Education to train large numbers of high-quality professional teachers with noble teachers' morality, a solid professional foundation, outstanding teaching ability and self-development ability. On the basis of meeting the actual development of local biology disciplines and respecting students' needs, colleges must reform the training mode for biology normal students. Generally speaking, the talent training mode of middle school biology teachers can be improved in the following aspects.

To Train Middle School Biology Teachers Good at Learning
In order to cultivate excellent biology normal students, it is far from enough to teach biological knowledge in class. Students should not only master rich knowledge of education and teaching, but also have independent learning methods, apply them in practice and have a sense of lifelong learning.
1) To impart knowledge in an all-round way. Abundant knowledge and skills are essential for an excellent middle school biology teacher. In addition to general education courses and biological professional courses, students majoring in biological science also need to take courses such as Biology Teaching Theory in Middle School and Biology Curriculum Standards Analysis in Middle School, and to gain exercise through on-campus practice teaching and teaching biology classes in middle schools. Through the systematic study of this knowledge, normal students can gradually establish the role consciousness of biology teachers and master teaching skills from the shallow to the deep.
With the popularization and application of multimedia technology in teaching, teachers should be able to skillfully make and use electronic courseware. Especially during this year's epidemic, biology teachers have had to teach online. The use of live broadcast platforms and the production of recorded lessons pose new challenges to them. Consequently, biology normal students should keep up with the pace of educational informatization and improve their TPACK ability, that is, the ability to integrate subject teaching knowledge and technology. They should also pay more attention to teaching theory and methods in the information technology environment and realize professional development through accumulation (Kui et al., 2017). In addition, the use of PPT courseware saves some of teachers' writing time, as it can directly display long text passages from biology textbooks and so improve the efficiency of biology teaching. However, some biological knowledge, such as the model diagram of DNA molecules and the calculation of genetic laws, is better written out by teachers and demonstrated to students bit by bit, which makes it easier for students to understand and remember than simple display. Therefore, it is also very important to offer calligraphy and painting courses to improve teachers' blackboard writing skills and writing ability (Zong et al., 2019).
2) To promote independent learning. Confucius once said, "Those who know it are not as good as those who love it, and those who love it are not as good as those who delight in it." Biology is a subject full of interest. Students should be good at discovering the fun in biology learning and learn it actively and independently.
First, colleges and universities should update their educational concepts to create a free and harmonious learning environment for students. It is necessary to build as many kinds of intelligent classrooms as possible. Small-class teaching and multi-screen displays form a good teaching atmosphere, which is beneficial to discussion between teachers and students.
Second, lecturers should change their teaching methods appropriately. The lack of learning initiative among biology normal students cannot be separated from their own reasons, but it is also related to teachers' traditional teaching methods. It is necessary to strengthen discussion among students and respect students' dominant position. Teachers can extend some biology hot topics to encourage students to actively explore deeper knowledge. In addition, the hybrid biology teaching mode combining "online" and "offline" teaching helps cultivate students' self-directed learning ability and effectively solves the prominent problems of a dull and inefficient traditional classroom atmosphere (Zhang et al., 2020). However, when designing online courses, colleges should pay attention to the proper use of teaching platforms, considering the characteristics of biology and the needs of students. They should not rely too much on online forms, so as to avoid placing an extra burden on students (Kang, 2019).
Third, students can establish learning communities to help each other. Independent study does not mean studying alone. Establishing a biology study group can help members find sparks of wisdom in the collision of thinking. The group can also do practical research on a middle school biology education topic. If there is any difficulty, they can consult the literature or ask teachers for help. In the process of solving the problem, both the group and the individual can grow.
3) To pay more attention to the cultivation of practical ability. It is not enough to have theoretical knowledge. Biology normal students need to learn how to carry out biology teaching and to have a broad stage on which to perform freely. Colleges should adhere to a practice orientation, build a multi-level, multi-platform practical education system and cultivate the core literacy of biology normal students through diversified activities (Li, 2019). It is necessary to invite front-line education researchers and middle school biology teachers with rich teaching experience to give lectures and guidance. Regularly holding teaching skills competitions, such as lecture contests and teachers' morality speech contests, can strengthen the skills training of normal students in a situational and open environment (Wang et al., 2017). "Second classroom" teaching training, on the one hand, can prepare students psychologically to enter the role of middle school biology teachers in advance. On the other hand, it broadens students' vision and improves their enthusiasm and initiative in developing teaching skills, laying a foundation for practical biology teaching in secondary schools in the future (Lin et al., 2018).
4) To establish the concept of lifelong learning. Biology normal students should have a consciousness of lifelong learning. First of all, biology is a science, continuously inherited and developed on the basis of previous studies. Biology teachers should establish an awareness of hot topics in biology. Biology knowledge is being updated, biology curriculum standards are developing, biology textbooks need to be replaced, and middle school biology teachers should keep pace with the era to learn new knowledge and new ideas in their professional field. After starting work, they should participate in academic exchanges and improve their teaching level (Wu, 2017). Secondly, the STEAM education concept has become an education trend, and new learning forms such as the "flipped classroom" and the "e-book bag" are constantly appearing, which pose new challenges to the professional ability of middle school biology teachers. Biology normal students should not stop at their undergraduate studies, but should hone their practice in future teaching. They should keep advancing towards the goal of becoming excellent middle school biology teachers.

To Train Middle School Biology Teachers with an Innovative Spirit
One of the characteristics of teachers' labor is creativity, mainly because the objects of education are special and complex individuals. Teachers cannot treat different students with similar templates. Especially in today's era of advocating the protection of students' personalities, modern middle school biology teachers should give full play to their creativity to make biology classes colorful and cultivate more energetic students. Accordingly, in teacher training, colleges should change the training system for biology teachers and reform the previous mode that focused on imparting professional theoretical knowledge. The cultivation mode for biology teachers should also focus on stimulating creative thinking, learning innovative skills, and forming an innovative personality (Guo, 2019).
1) To cultivate creative thinking. Creative thinking is the core of creativity. Cultivating the creative thinking of biology students will help them grow into creative teachers. It is necessary to create a biology teaching environment conducive to the development of personality. Reasonable arrangement of inquiry teaching and a democratic classroom atmosphere can influence students subtly and encourage them to express their views freely. It should be noted that in inquiry teaching, teachers play a guiding and supporting role and should reasonably steer the conversation with students to train their initiative and creativity (Hiltunen et al., 2020). In addition, creative courses can be set up to cultivate divergent thinking. Open teaching content can train students' hands-on and thinking abilities by appropriately adding biology inquiry experiments to biology teaching. In the training of biology teaching skills, students should be guided to use different methods to introduce lessons, ask questions and design teaching processes, so as to experience the application of various skills. Students can also let their imagination fly and make biological teaching aids from common materials in daily life, such as a paramecium model made from a shoe mat or a cell model made from a hollowed-out grapefruit peel. Viewing problems from different angles is beneficial to developing students' creative thinking.
2) To master innovative teaching methods. Firstly, carrying out creative teaching activities requires mastering a great deal of knowledge as a foundation: not only biological professional theory, but also the principles underlying various phenomena in society and some interdisciplinary knowledge. Only with sufficient knowledge can imagination be given full play. Secondly, students should learn creative methods and consciously train themselves with various techniques, such as brainstorming, divergent thinking, and inference and hypothesis. Teachers should learn to use mind maps to clarify the contextual logic of biology. Finally, creative skills should be used flexibly. When doing teaching design in practice, biology normal students should not copy reference books and online resources, but should adapt flexibly according to students' actual situations and their own abilities. For example, in the middle school classroom teaching of "organic matter produced by photosynthesis of green plants", a teacher made different teaching arrangements for the classic experiment of J. von Sachs (Zhu, 2011). She adopted reverse thinking, first assuming that starch is produced by the photosynthesis of leaves. Then questions such as how to detect the starch, how to remove the chlorophyll and how to verify the role of light inspire students to think and deepen their understanding through discussion.
3) To shape an innovative personality. An innovative personality cannot be formed overnight. Middle school biology teachers should make sustained professional development plans, use creative techniques flexibly on the basis of creative thinking, and build biological science literacy in daily life (Eda et al., 2018). First, it is important to keep curiosity and insight into life: curiosity stimulates the desire to explore, and because biology is closely related to life and society, curiosity helps students find biological beauty in ordinary life. Second, biology teachers should maintain a good attitude, know themselves correctly, and be good at adjusting their mindset. Day-after-day teaching work may produce a sense of professional slackness; teachers should take biology teaching as a personal interest and keep creative enthusiasm for teaching to enhance their sense of professional accomplishment (Yao, 2020). Third, teachers should think critically and view hot topics in biology and new progress in education and teaching from multiple angles; by internalizing them into biology teaching, teachers will form their own teaching characteristics.

To Train Middle School Biology Teachers with Reflective Spirit

1) To reflect after class. "Learning without thought is labor lost." Many biology courses are complicated, and each class covers a great deal of content. If students do not review after class, they cannot master the context and key points of the knowledge. They should seek out additional internet resources to expand their knowledge after class; discussing difficult problems with classmates in QQ or WeChat groups will also deepen their understanding of key points. Students should not only review what they have learned but also learn to adjust their learning mindset (Jim, 2019), enhancing their self-discipline, self-confidence, and self-efficacy through self-assessment so as to maintain enthusiasm and confidence in learning.

2) To reflect on practice. Reflection after practice is even more important. It is necessary to find and solve problems in the practical teaching of biology in middle schools. Through cognition, practice, re-cognition, and re-practice, the teaching level can rise step by step in a virtuous circle. Biology normal students should ask themselves three questions after each teaching or speaking exercise: What advantages do I have compared with other students? What are my shortcomings? What actions should I take to improve my biology teaching skills? Students should be good at reflection, learn from each other's strengths, and put the lessons into action to make more progress next time.

At the same time, normal colleges should standardize their management arrangements with middle schools, cooperate with them to implement appropriate teaching arrangements, and conduct professional assessment of students' learning and teaching skills (Yin, 2019). Students' abilities cannot be judged only by a few numbers on transcripts. Multidimensional evaluation, drawing on judgments from teachers, classmates, the students themselves, and society, together with varied evaluation instruments such as rating scales, electronic portfolios, and collections of student work, can diversify teaching evaluation and give a comprehensive picture of students' actual biology teaching ability, which helps students better understand themselves and develop their individuality.
3) To reflect in research. Teachers should also be researchers, and the cultivation of middle school biology teachers should give equal weight to pedagogy and academic research. In the teaching practice of normal students, middle school biology teachers teach students biological knowledge while students feed back various educational materials. Teachers should take the initiative to identify problems in actual teaching and carry out research on biology teaching topics. Such task-based research can help middle school biology teachers grasp the essence of teaching efficiently and accumulate rich teaching experience and wisdom (Zhang et al., 2020). Through research and reflection, middle school biology teachers will establish their own personal style, influence students imperceptibly in teaching, and form their own teaching art.

At the undergraduate stage, normal students can set up a biology study group to explore research on a particular middle school biology teaching topic and consciously reflect on their own practice to solve practical teaching problems. In addition, while reading classic education monographs, biology normal students can also read excellent works by modern educators, such as 40 Years Old, Start to Learn to Do Education, a collection of educational essays, and Understanding Teaching and Cases of Biology Concepts in Senior High School, which reflects actual biology teaching at Beijing No. 19 Middle School. These newer works can help students broaden their horizons and learn from the teaching experience of excellent middle school biology teachers, and can also inspire them to record their own educational journey. Through writing about education, they can grow into excellent middle school biology educators during the continual process of reflection.

Conclusion

The traditional mode of training middle school biology teachers can no longer meet the requirements of talent development, and the new mode of personnel training is still being explored; countries all over the world are looking for new paths. Normal colleges and universities in our country should undertake the important task of mode innovation. In view of the problems in the traditional training mode for middle school biology teachers, they should take a series of measures, combined with the characteristics of biology, to comprehensively develop new ideas and methods of biology teacher training. In terms of learning, colleges should impart knowledge comprehensively, advocate independent learning methods, attach importance to the cultivation of practical ability, and help biology normal students establish the concept of lifelong learning. In terms of innovation, it is necessary to cultivate students' creative thinking and teach innovative teaching methods so as to shape students' innovative personalities. As for reflection, colleges and universities should pay attention not only to after-class and practical reflection by biology normal students but also help them reflect through research, so as to realize the professional development and career advancement of future middle school biology teachers.
Although the reform of the personnel training mode for middle school biology teachers should be led by normal universities, it cannot be "run by higher education institutions alone". The reform also requires the government to do a good job of top-level design and secondary schools to improve in-post service training. The three parties should work together to cultivate biology teachers who are good at learning and have innovative and reflective spirits. More importantly, our country should strengthen communication and exchange with the international community and learn from beneficial experience to train high-quality, innovative middle school biology teachers with interdisciplinary knowledge and ability.

There may be difficulties in the process of reform. However, colleges will face and solve them with courage and determination, striving to form a cultivation mode for middle school biology teachers with unique local advantages and Chinese characteristics. Biology normal students should also constantly improve their teaching ability and level, promoting the steady improvement of biology teaching quality in middle schools and greeting the promising future of middle school biology teaching with a high-spirited attitude. Only when industries such as medicine, agronomy, and scientific research have good talent reserves can our country have an inexhaustible source of development power.
Silymarin and management of liver function in non-alcoholic steatohepatitis: a case report

Non-alcoholic fatty liver disease (NAFLD) and its progressive form, non-alcoholic steatohepatitis (NASH), are the main cause of chronic liver disease in the general population, characterized by fat accumulation in hepatocytes (steatosis) and abnormalities in liver biochemical analyses. To date, no pharmacological agents have been approved for NAFLD or NASH treatment. However, silymarin, the active ingredient in milk thistle, has been used over recent decades for the treatment of several liver diseases. In this case report, treatment with silymarin 140 mg three times daily showed moderate efficacy and a good safety profile in the management of NASH and liver function: it decreased serum AST and ALT levels over the treatment period with no side-effects, supporting silymarin as a promising supplemental intervention that can normalize liver activity in NAFLD and NASH. This article is part of the Special Issue "Current clinical use of silymarin in the treatment of toxic liver diseases: a case series": https://www.drugsincontext.com/special_issues/current-clinical-use-of-silymarin-in-the-treatment-of-toxic-liver-diseases-a-case-series

Introduction

Non-alcoholic fatty liver disease (NAFLD) is the leading cause of incidental elevation of liver enzymes worldwide, with a global prevalence of ~23-25% of the adult population. 1,2 The clinical features of the disease range from simple steatosis with a benign prognosis to non-alcoholic steatohepatitis (NASH) and cirrhosis, with increased mortality and morbidity. NAFLD and NASH are known to have a close and bidirectional association with obesity, diabetes and metabolic syndrome. [2][3][4] Silymarin, a milk thistle extract, has long been used for the management of liver disorders due to its assumed hepatoprotective and antioxidant properties. In particular, silymarin may counteract lipid peroxidation and radical-induced damage, which are probable mechanisms of liver injury in NASH. 5 This case report supports silymarin treatment in patients with diabetes, obesity and NASH to manage abnormal liver enzyme activity.

Ethics statement

This manuscript has been prepared according to CARE guidelines. No patient consent was required as no information is reported that could allow identification of the patient.

Case report

In January 2022, a 50-year-old woman was referred to our department (Department of Medicine & Gastroenterology, Saudi German Hospital Jeddah, Jeddah, Kingdom of Saudi Arabia) by an endocrinologist due to deranged liver enzyme levels discovered incidentally during check-up examinations. At presentation in January 2022, the general physical examination was unremarkable: no palmar erythema or vascular spiders were observed, whilst the abdominal examination revealed a palpable, relatively soft liver and mild splenomegaly. Blood and liver function tests revealed elevated liver enzymes, especially alanine transaminase (ALT or GPT), aspartate transaminase (AST or GOT), and γ-glutamyltransferase (GGT) (Table 1). In addition, an abnormal lipid profile with high triglycerides (324 mg/dL) was observed. The fibrosis 4 index (FIB-4), a non-invasive liver fibrosis marker used in the diagnosis and management of liver disease, was elevated (6.4), indicating a high risk of liver fibrosis. Hepatitis B surface antigen, anti-hepatitis C virus antibodies and autoantibodies were negative.
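For context, the FIB-4 index is computed from routine laboratory values as FIB-4 = (age × AST) / (platelet count × √ALT), with AST and ALT in U/L and platelets in 10⁹/L. The following minimal sketch illustrates the arithmetic; the input values are hypothetical, chosen only to reproduce a score near the reported 6.4, and are not the patient's actual results.

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)).

    Units: AST and ALT in U/L, platelets in 10^9/L.
    """
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Hypothetical values chosen only to illustrate the calculation;
# a result >= 3.25 flags a high risk of advanced fibrosis.
score = fib4(age_years=50, ast_u_l=120, alt_u_l=95, platelets_10e9_l=96)
print(f"FIB-4 = {score:.1f}")  # ~6.4 with these illustrative inputs
```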
An abdominal ultrasound was performed and showed bright hepatomegaly and mild splenomegaly (spleen 14 cm in length). In addition, the portal vein measured 14 mm in diameter, and no ascites were observed. Accordingly, the patient was diagnosed with NASH and, in January 2022, she was advised to start treatment in the form of weight reduction and dyslipidaemia control. For weight loss, liraglutide was prescribed at an initial dose of 0.6 mg daily and maintained at 3 mg daily. In addition, simvastatin 20 mg daily was prescribed to control hyperlipidaemia. To normalize liver enzyme activity, silymarin (Legalon®) 140 mg three times daily was recommended, as it is frequently reported to have antioxidant, hepatoprotective and anti-fibrotic properties. Moreover, silymarin is known to be well tolerated, with only minimal adverse events. Four months after the start of silymarin treatment (May 2022), the follow-up visit showed a gradual decrease in ALT and AST levels, and 3 months later (August 2022) normalization of liver enzyme levels was reached, with a reduction also in GGT levels (Table 1). The lipid profile also improved over the treatment period, with blood triglyceride levels reduced to within normal limits (<150 mg/dL). In addition, treatment adherence was good, and the patient did not develop any adverse events or drug interactions due to polytherapy.

Discussion

Worldwide, NAFLD is one of the leading causes of deranged liver enzymes and chronic liver disease, with a prevalence of ~23-25% in the general population, increasing up to 55% in those with type 2 diabetes. 6 Different specialists frequently encounter this intricate metabolic disorder in clinical practice, including primary care physicians, gastroenterologists, endocrinologists, gynecologists and radiologists. 7 The clinical spectrum of NAFLD is characterized by the presence of steatosis in >5% of hepatocytes; with liver biopsy, it is possible to differentiate NAFLD from NASH. 4 This distinction is crucial as NAFLD generally has a benign prognosis, whereas NASH progresses histologically and can lead to cirrhosis and liver dysfunction. 1,8 According to previous clinical and experimental studies, oxidative stress is thought to be one of the main mechanisms involved both in hepatic damage in NAFLD and in its progression to NASH. Indeed, lipid peroxidation is induced by the abnormal synthesis of reactive oxygen species (ROS), which in turn leads to inflammation and fibrogenesis. In addition, ROS induce liver fat accumulation and activate several intracellular pathways, causing hepatocyte apoptosis. 9 NAFLD and NASH are usually considered hepatic manifestations of metabolic syndrome; thus, the main risk factors for their development are increased weight, diabetes, insulin resistance, hypertension and hyperlipidaemia. 4,5 Given the close correlation between NAFLD and metabolic alterations, in 2020 an international panel of experts proposed a change in disease terminology to more accurately reflect its pathogenesis. 10 Hence, metabolic-associated fatty liver disease is the new naming terminology for NAFLD, identified by hepatic steatosis along with the presence of overweight or obesity, diabetes, or other metabolic dysfunctions. 10,11 Liver biopsy is generally recommended as the 'gold standard' method for the diagnosis and quantification of disease damage; however, it is an invasive, expensive technique with a significant sampling error. 12
As a potential alternative, the FIB-4 index is a liver fibrosis marker for the assessment of liver disease, acting as a 'red flag'. 13 A FIB-4 cut-off of ≥3.25 is currently validated for metabolic-associated fatty liver disease. 13 Our patient had a FIB-4 index of 6.4, reflecting a high risk of advanced liver fibrosis. Although various medications and supplements have been suggested as NAFLD therapies (e.g. vitamin E for advanced fibrosis without diabetes mellitus and pioglitazone for those with NASH and diabetes 14 ), no effective drug treatment is available for NAFLD and NASH to date. 3,12 The milk thistle extract silymarin is a complex mixture of plant-derived compounds, mostly flavonolignans, flavonoids and polyphenolic molecules. The main flavonolignans identified in the silymarin complex are silibinin, silichristin, isosilibinin and silidianin, with silibinin being the most prevalent and biologically active isomer. 15 Different pharmacological actions of the silymarin complex have been observed, including antioxidant, anti-inflammatory and anti-fibrotic effects. 15 In particular, silymarin demonstrated antioxidant properties by protecting the hepatocyte membrane from ROS-induced damage and counteracting toxin uptake. 12,16 Both preclinical and clinical studies reported the efficacy of silymarin in the management of NAFLD and in reducing progression to NASH. 4 Silymarin has also been found to increase the enzymatic activity of superoxide dismutase in lymphocytes and erythrocytes, 12 and to significantly decrease serum ALT and AST levels. 12,17 Silymarin also reduced hepatic fat accumulation, as demonstrated by changes in the hepatorenal brightness index on ultrasonography imaging. 12 In this case report, the use of silymarin 140 mg three times daily demonstrated significant hepatoprotective effects in non-alcoholic steatohepatitis, as shown by decreased serum ALT, AST, and GGT levels, and ensured good treatment adherence during long-term liver function testing and follow-up (Table 1). These outcomes are in line with those reported by other studies and support silymarin as markedly effective for NAFLD and NASH treatment. In addition, it is crucial to promptly start silymarin therapy and consider its long-term use as needed. 4

Conclusion

NAFLD and its progressive form (NASH) are the most common causes of chronic liver disease worldwide, identified by steatosis in the liver and alterations in liver biochemical analyses. To date, no pharmacological options have been approved for the treatment of NAFLD or NASH. However, silymarin, the active derivative of milk thistle, has long been used for the management of liver disorders due to its assumed hepatoprotective and antioxidant properties. In this case report, treatment with silymarin 140 mg three times daily showed moderate efficacy and a good safety profile in the management of NASH and liver function, as it decreased serum ALT, AST, and GGT levels over the treatment period with no side-effects, supporting silymarin as a promising supplemental intervention to normalize liver activity in NAFLD and NASH.
An Ethical Issue After the Nuclear Accident in Fukushima: Young People's Perspectives of Thyroid Cancer Screening and its Harms

Overdiagnosis of thyroid cancer has become a major global medical issue. Ultrasound-based thyroid cancer screening has promoted overdiagnosis, and recent international recommendations indicate that such screening should not be conducted, even after a nuclear accident. The Fukushima thyroid cancer screening program was initiated in 2011 as a health policy after the nuclear accident, although the risk for radiation-induced thyroid cancer was unlikely given the low radiation levels. However, the thyroid cancer screening program has continued at 2-year intervals with a relatively high participation rate and is now in its fifth round. Therefore, it is crucial to clarify whether those targeted for screening understand the disadvantages of screening and to identify factors that influenced their decision to participate.

Background

Screening for various diseases, including cancer, has both benefits and harms, such as overdiagnosis. Therefore, provision of adequate information is important in making decisions about whether to undergo screening. The harm associated with overdiagnosis is greater in children and adolescents than in adults, meaning that factors that aid decision making regarding screening for children and adolescents require more consideration [1]. However, after a nuclear accident, screening of radiation-related cancers, especially radioiodine-induced thyroid cancer, may be conducted for residents as part of a health survey. Media and social networks may amplify residents' radiation-related fears, and screening is often regarded as necessary for scientific and social reasons rather than out of concern about the harmful effects of screening [2,3]. More than 10 years have passed since the Fukushima Daiichi Nuclear Power Plant accident that occurred immediately after the Great East Japan Earthquake and tsunami disaster in March 2011. It was difficult to communicate sufficiently with residents regarding the health risks associated with radiation exposure for some time after the Fukushima nuclear accident [3,4]. The level of radiation exposure among Fukushima residents was expected to be much lower than that among residents around Chernobyl after the 1986 accident [5]. However, based on experiences following the Chernobyl accident, residents in Fukushima were concerned about an increased risk of thyroid cancer among children and adolescents [6,7]. Thyroid ultrasound examination was started in Fukushima as part of the Fukushima Health Management Survey for all Fukushima residents (about 380,000 people) under age 18 years at the time of the accident [8]. This project started in October 2011, 6 months after the accident, and was intended to monitor children's health following the disaster. This timing meant that it was not possible to communicate with residents regarding the significance of the examination, including the risks associated with radiation exposure in Fukushima, the characteristics of juvenile thyroid cancer, and the benefits and harms of thyroid cancer screening [7,9]. The thyroid ultrasound examination in Fukushima has been conducted as a screening at 2-year intervals since 2011 [8], but was initially conceptualized as a support program for residents who were worried about the health effects of radiation exposure [7]. However, there was a lack of communication of key information before the examination, especially regarding overdiagnosis.
The screening program used an opt-out approach. All potential participants were notified about the venue and date of the examination, and examinations were also performed during school classes for school students [10]. A global perspective against thyroid cancer screening emerged during the 10 years in which the thyroid ultrasound examination has been conducted in Fukushima. For example, the rate of overdiagnosis following thyroid cancer screening in South Korea was described as an "epidemic", and thyroid ultrasonography has been reported to increase the prevalence of thyroid cancer worldwide [11,12]. Based on global reports, the US Preventive Services Task Force stated that thyroid cancer screening was not recommended for asymptomatic adults [13]. In addition, the International Agency for Research on Cancer (IARC) stated in 2018 that this screening was not recommended even after a nuclear accident [14]. A major reason screening is not recommended is that evaluations suggested the harms of overdiagnosis from thyroid cancer screening outweigh the benefits. However, four rounds of the thyroid ultrasound examination have been conducted in Fukushima over the past 10 years, and a fifth round with opt-out style screening is progressing despite these global trends. This examination has resulted in more than 200 thyroid cancer diagnoses (116 in the first round, 70 in the second, and 31 in the third) [15]. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) concluded that the significant increase among screening participants relative to that expected was because of overdiagnosis and not the result of low-dose radiation exposure [5]. There is concern that this examination will continue to cause issues related to overdiagnosis. However, once initiated, a screening program is difficult to cancel because of conflicts of interest and economic policies, as with the mass screening of newborns for neuroblastoma in Japan [16] and thyroid cancer screening in South Korea [11]. The thyroid cancer screening program in Fukushima was also complicated because it started after the nuclear accident. As overdiagnosis is an emerging problem in recent health policy and practice [17], it is important to address its ethical implications. For example, it is necessary to clarify how well young people (as potential screening participants) understand the complex background and international changes in perspective regarding the harms of overdiagnosis following the thyroid ultrasound examination. The examination participation rate was high at 81.7% in the first round of screening, but showed a gradual decline to 71.0% in the second round and 61.3% in the third round [18]. However, detailed examination of the participation rate showed that the participation rate of school students remained close to 90% even in the third round, whereas that in the age group after graduating from high school dropped sharply. This phenomenon would not be expected if decision-making about receiving screening were based on anxiety about radiation health risks. It is therefore essential to understand how residents perceived the health risks of radiation exposure and the harms of thyroid cancer screening when they decided whether to undergo screening. This information is important in reconsidering the thyroid ultrasound examination program in Fukushima, as well as for future thyroid monitoring following a nuclear accident.
After a disaster, various research activities, including health surveys, tend to be conducted, but sometimes these activities are not beneficial for residents living in the affected areas [19,20]. It is an ethically important issue to determine why individuals make ongoing decisions to participate in potentially harmful examinations. To clarify the factors that influenced the decision to undergo thyroid cancer screening in Fukushima, we conducted a questionnaire survey among young people in Fukushima, including those who received the thyroid examination and those who did not. In addition, we sent the questionnaire to young people of the same generation in a neighboring prefecture who were not potential thyroid examination participants as a comparative control.

Methods

We conducted an anonymous survey to clarify decision-making processes related to the thyroid ultrasound examination and associated ethical issues. The survey investigated three main areas: 1) reasons why examinees took the thyroid ultrasound examination; 2) perceptions of the significance, benefits, and harms of the thyroid ultrasound examination; and 3) the impact of the examination being conducted during school classes.

Respondents

We sent an anonymous questionnaire to 2000 randomly selected young people (aged 16-20 and 25-30 years as at January 1, 2020) who lived in Fukushima Prefecture (mostly examination subjects) and the neighboring Miyagi Prefecture (mostly non-subjects). Most residents of these two prefectures were thought to have been affected by the Great East Japan Earthquake. The questionnaire was distributed by mail on January 7, 2020, and returned questionnaires postmarked by February 15, 2020, were considered valid. Approval to conduct the survey was obtained from the Fukushima Medical University Ethics Committee (approval number: 2019-180). The questionnaire included a written explanation that returning a completed questionnaire would be considered provision of consent because the survey was completed anonymously and respondents were therefore not identifiable. Of the 601 young people (30.1%) who responded, 594 (29.7%) responses from those whose gender and age matched the population registry were considered valid. This questionnaire survey was planned and conducted while the authors were affiliated with Fukushima Medical University.

Questionnaire content

The questionnaire comprised five categories: 1) basic characteristics, 2) decision-making about participating in the thyroid ultrasound examination, 3) recognition of the benefits and harms of thyroid cancer screening, 4) risk perception relating to radiation-related health risks, and 5) the impact of conducting the examination during school classes. The first category covered participants' age and sex, plus a question asking whether they were a subject (all Fukushima residents under age 18 years at the time of the Fukushima accident). The second category was only completed by those who were subjects of the thyroid ultrasound examination in Fukushima. This category contained four questions: i) Do you know the meaning of the thyroid ultrasound examination? If you answer yes, please describe it. ii) Did you take the thyroid ultrasound examination in the last 2 years? iii) Who made the decision whether to take the examination? iv) Why did you take/not take the examination?
The third category asked all participants about: i) knowledge of the benefits and harms of the thyroid ultrasound examination; ii) the magnitude of the benefits and harms of the thyroid ultrasound examination on a 5-point scale (more beneficial, beneficial, coequal, harmful, more harmful); and iii) knowledge about the IARC recommendation regarding thyroid cancer screening after a nuclear accident. For the analysis, "more beneficial" and "beneficial" were classified as "perceived beneficial", and "harmful" and "more harmful" were classified as "perceived harmful." For the fourth category, we measured participants' risk perception of the potential health effects of radiation exposure based on their responses to two questions, with responses on a 4-point scale from very unlikely to very likely [21,22]. We investigated the possibility of delayed effects by asking, "What do you think is the likelihood of damage to your health (e.g., cancer onset) in later life as a result of your current level of radiation exposure?" The second question concerned the possibility of genetic effects: "What do you think is the likelihood that the health of your future (i.e., as-yet unborn) children and grandchildren will be affected as a result of your current level of radiation exposure?" For the analysis, "very unlikely" and "unlikely" were classified as lower risk perception, and "likely" and "very likely" were classified as higher risk perception. For the fifth category, we asked all participants to rate four statements regarding their perception of the impact of the examination conducted at school on a 5-point scale from likely to unlikely. The statements were: i) Examination at school (during classes) makes you perceive it as a good thing; ii) Examination at school makes you believe it is somewhat mandatory; iii) Examination at school makes it difficult to refuse taking the examination; and iv) The presence of people who were not attending the school examination made you feel as if there was something wrong.

Analysis

We conducted chi-square tests or non-parametric analyses for questions comparing risk perception, knowledge of the harms of the examination, and basic characteristics between the subject and non-subject groups, and between those who received the examination (examinees) and those who did not (non-examinees); a worked example of this group comparison is sketched after the first Results paragraph below. Reasons for receiving or not receiving the examination and the impact of the school-based examination were explored using descriptive statistics.

Results

Table 1 presents a summary of the results of the comparisons of knowledge and perception between examinees and non-examinees and between subjects and non-subjects. There was no difference in the male to female ratio between examinees and non-examinees or between subjects and non-subjects. However, the mean age of the non-subjects was higher than that of the subjects, and the mean age of non-examinees was higher than that of examinees. This was because subjects were aged 18 years or younger at the time of the nuclear accident; therefore, respondents aged 25-30 years were all non-subjects, whereas respondents aged 16-20 years included both subjects and non-subjects. The results showed that 40.5% of respondents who were subjects of the thyroid ultrasound examination did not know the meaning of the examination. In addition, there was no significant difference in this knowledge between examinees and non-examinees.
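To make the group comparison described in the Analysis section concrete, the following minimal sketch runs a chi-square test of independence on a 2×2 table of screening knowledge by examinee status using SciPy. The cell counts are hypothetical placeholders (the paper reports only proportions, not raw counts), so the printed p value is illustrative rather than the study's result.

```python
from scipy.stats import chi2_contingency

# Rows: examinees, non-examinees; columns: knew meaning, did not know.
# Counts are hypothetical placeholders, not the study's actual data.
table = [
    [130, 90],  # examinees
    [105, 70],  # non-examinees
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p value above 0.05 would mirror the paper's finding of no
# significant difference in knowledge between the two groups.
```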
Furthermore, about half of those who knew the meaning of the examination provided incorrect descriptions; for example, "This examination measures the radiation level of the thyroid gland" (data not shown). Regarding decision-making about taking the examination, 54.2% of examinees indicated the decision was made by their parents, and 15.3% made the decision themselves. In contrast, 57.0% of non-examinees made the decision themselves, and 9.0% reported that their parents made the decision. We believe this corresponds to the decline in the examination participation rate after graduating from high school. Common reasons for taking the examination (multiple-choice question) were: 1) the examination was done at school (29.6%); 2) concern about radiation effects (13.5%); and 3) wanting to be reassured by undergoing the examination (11.1%). Common reasons for not taking the examination were: 1) time-consuming (11.1%) and 2) no worry about radiation exposure (11.1%). However, no participant indicated "harms of the examination" as a reason for not receiving the examination. Table 1 also shows that only 16.5% of respondents in the examinee and non-examinee groups knew about the harms of the thyroid ultrasound examination. The proportion of those who knew about the harms was slightly smaller in the examinee group, but the difference between the groups was not significant. In addition, few people knew about the IARC recommendation against thyroid cancer screening, and there was no significant difference between groups. Furthermore, both examinees and non-examinees tended to overestimate the benefits compared with the harms of screening. The results regarding knowledge about the benefits and harms of the thyroid examination were similar for the non-subjects. In terms of risk perception of health effects caused by radiation exposure, there was no significant difference between examinees and non-examinees in responses to the questions concerning delayed and genetic effects. However, the risk perceptions of both delayed and genetic effects were significantly higher in non-subjects than in subjects (higher delayed risk perception: 31.3% vs. 48.3%; higher genetic risk perception: 21.2% vs. 36.8%). Figure 1 shows the results of the analysis of items covering the impact of a school-based examination. In summary: i) about 80% of participants perceived the examination as good because it was a school-based examination; ii) around 78% of participants considered the school-based examination somewhat mandatory; iii) 70% of participants said that their preference not to take the examination was not respected; and iv) about half of the participants perceived something wrong about those who did not take the examination. There were no differences in these opinions between the subject and non-subject groups, or between the examinee and non-examinee groups.

Discussion

Because the risk for cancer due to radiation depends on the radiation dose, various activities to measure radiation doses were conducted to respond to residents' health concerns after the Fukushima nuclear accident [23]. Radiation doses were expected to be extremely low in Fukushima [5], meaning that implementation of thyroid cancer screening did not logically correspond to the concerns about the health effects of radiation exposure.
However, it has been noted that the high participation rate in the thyroid ultrasound examination in Fukushima may reflect concerns about the health effects of radiation exposure among the parents of examination subjects, especially mothers [7,9]. In contrast, this study directly asked young people (the subjects themselves) for their reasons for taking the examination, and the most common reason was that it was performed at school. This does not support the suggestion that many people took the examination to relieve anxiety about the risk of radiation-induced thyroid cancer. This was also supported by our result that there was no significant difference in risk perception about radiation health effects (especially the risk for developing cancer, such as delayed effects) between examinees and non-examinees. Furthermore, we found that among those who took the examination, slightly more than half reported their parents made the decision, whereas among those who did not take the examination, slightly more than half made the decision themselves. It has been reported that parents, especially mothers, were anxious about an increase in thyroid cancer after the Fukushima nuclear accident [6,7]. Therefore, parents' intentions may influence the decision to take the examination, in addition to the impact of the examination being conducted at school. To support proper decision-making as to whether young people should take the screening examination, it is essential that they understand the significance, benefits, and harms of the examination. Our results showed that 40.5% of the thyroid ultrasound examination subjects did not know the meaning of the examination, and there was no significant difference in this knowledge between examinees and non-examinees. In addition, the free-form answers showed that 47.8% of respondents who claimed this knowledge misunderstood the meaning of the examination (e.g., "This examination measures the radiation level of the thyroid gland"). This highlighted that the subjects of the thyroid ultrasound examination, even examinees, often did not understand the meaning of the examination correctly. Most subjects were also unaware of the harms of thyroid cancer screening, and the percentage of those who knew there were harms associated with thyroid cancer screening was the same for examinees and non-examinees. It was also demonstrated that both examinees and non-examinees considered the benefits to be greater than the harms. However, no respondents cited these harms as a reason for not taking the examination. These results suggest that potential thyroid examinees are somehow encouraged to undergo the examination in the decision-making process. Recently, it has been reported that examinees underestimate the disadvantages and overestimate the benefits of screening, testing, and intervention [24,25]. It has also been noted that recognition of harms is important in decision making about breast cancer screening [26,27]. A qualitative study involving Korean women reported that many women were unaware of the potential harm of overdiagnosis associated with thyroid ultrasound screening [28]. We found that risk perceptions of health effects due to radiation exposure were significantly higher in non-subjects than in subjects. This suggests that non-subjects (who were relatively far from the nuclear accident site) overestimated the radiation health risk compared with subjects.
Similar results were shown in a survey by the National Institute of Advanced Industrial Science and Technology, in which the risk perception of the general population in Tokyo was significantly higher than that of residents in the evacuation area in Fukushima [29]. These results indicate that the subjects had opportunities to access information and education on radiation health risks. In contrast, there was a non-significant difference in awareness of the existence of harms of thyroid cancer screening between subjects and non-subjects. Moreover, the benefits of thyroid cancer screening were overestimated relative to the harms in both subjects and non-subjects. Subjects had received an explanation letter with guidance about taking the thyroid examination, whereas non-subjects did not receive a guidance letter. However, the benefit-harm perception did not differ between subjects and non-subjects. This suggests that the explanation and communication provided through the letter before decision-making about taking the examination were insufficient. This study showed that holding the examination at school led to the perception that the examination was a good thing or mandatory, and that preferences not to take the examination were not respected. Moreover, this perception was not specific to the subjects of the thyroid ultrasound examination but was the same for non-subjects, which suggests that it is a common tendency among young Japanese people. Our data revealed that conducting the examination at school during classes could influence decision-making about taking the examination. This was supported by the sharp decline in the thyroid ultrasound examination participation rate after graduating from school [18]. These results indicate that the school-based examination became a "default" because an opt-out method was used. The impact of opt-in and opt-out methods on decision making has been reported for organ donation [30,31]. For example, the organ donation rate is high in countries with opt-out conditions and low in countries with opt-in conditions. Even given the major benefits of organ transplants, perspectives on choosing opt-out or opt-in methodology are divided. From an ethical perspective, care must be taken in adopting the opt-out method if a program has potential harms. Because thyroid cancer screening has been shown to have major harms (e.g., overdiagnosis), it should not be performed in an opt-out, default-setting manner. This study had several limitations. First, we identified the subjects/non-subjects and examinees/non-examinees based on responses to category 1. Based on the respondents' age and address, most of these responses were thought to be correct, although some respondents might have mistakenly answered based on another examination rather than the thyroid examination. Second, because the response rate in the present study was relatively low (30%), the representativeness of the target populations is uncertain. However, the finding that a majority of respondents were unaware of the disadvantages of the thyroid examination as a cancer screening tool is unlikely to be affected by response-rate bias. Third, because the age of non-subjects was higher than that of the subjects, the risk perception of health effects due to radiation exposure might have been influenced by age. However, there is no report of a large difference in radiation health risk perception between the ages of 16-20 and 25-30 years.
Conclusions

We summarized the decision-making process and potential influencing factors for young subjects of the thyroid ultrasound examination in Fukushima in Figure 2. After a nuclear accident, residents are naturally concerned about the health effects of radiation exposure and tend to have high risk perception of radiation-related cancer. However, our results indicated no relationship between these factors and participation. Since screening as a health survey tends to be conducted as a policy to resolve such concerns, opt-out style screening may foster the misunderstanding that screening benefits outweigh the harms, especially for school-based examinations. This is not limited to thyroid cancer screening, as screening to detect disease may not be beneficial for residents following a disaster [10,19]. To respond properly to the anxieties of affected residents, it is essential to communicate key aspects to them, including: 1) the meaning of the examination, 2) the balance between the benefits and harms of the examination, 3) the characteristics of the target disease, and 4) voluntary participation. In addition, based on the principles of the post-disaster code of conduct, investigations and surveys after a disaster should be conducted in an opt-in style.

Declarations

Ethics approval and consent to participate: This study was approved by the Fukushima Medical University Ethics Committee, Fukushima, Japan (approval no. 2019-180). The questionnaire included an explanation that mailing a completed response was indicative of informed consent because the survey was completed anonymously and therefore no individuals were identifiable. All methods were performed in accordance with the Ethical Guidelines for Medical and Biological Research Involving Human Subjects by the Ministry of Education, Culture, Sports, Science and Technology of Japan.

Consent for publication: Not applicable.

Availability of data and material: Data other than those shown in the tables, figures, and supplement are not publicly available.

Conflicts of interest/Competing interests: We declare no competing interests.

Funding

Figure 1. Impact of the school-based thyroid examination. The percentage of "Yes" responses is shown for the following items related to recognition of a school-based examination: i) 80% of participants perceived the examination as a good thing; ii) 78% of participants considered the school-based examination somewhat mandatory; iii) 70% of participants said that their preference not to take the examination was not respected; and iv) about half of the participants felt there was something wrong about those who did not take the examination. The remaining percentages for each item reflect responses of "No," "no opinion," or "no answer."
Phenolics Profile and Antioxidant Activity Analysis of Kiwi Berry (Actinidia arguta) Flesh and Peel Extracts From Four Regions in China

The kiwi berry (Actinidia arguta) has been widely studied because of its rich phenolic, flavonoid, and vitamin C contents. Numerous reports have demonstrated that fruit peels contain higher phenolic content and antioxidant activity than flesh. In this study, the phytochemical contents and antioxidant activities of peel and flesh extracts of six kiwi berry varieties from four regions in China (namely, Dandong, Benxi, Taian, and Tonghua) were analyzed. Antioxidant activity was determined using the peroxyl radical scavenging capacity (PSC) and cellular antioxidant activity (CAA) assays. The phenolic, flavonoid, and vitamin C contents of kiwi berry peel were 10.77, 13.09, and 10.38 times higher than those of kiwi berry flesh, respectively. In addition, the PSC and CAA values of kiwi berry peel were higher than those of kiwi berry flesh. The separation and quantification of phenolics were performed by high-performance liquid chromatography with diode-array detection and tandem mass spectrometry (HPLC-DAD-MS/MS), and the results showed that protocatechuic acid, caffeic acid, chlorogenic acid, and quinic acid were the major phenolic compounds. In conclusion, this study indicated that kiwi berry peel is a rich source of antioxidants. These data are of great significance for the full development and utilization of kiwi berries from these four regions of China to produce nutraceutical and functional foods.

INTRODUCTION

The consumption of fruits and vegetables has been encouraged because they are an excellent source of biologically active compounds, including phenolics, carotenoids, tocopherols, and anthocyanins (Celestino and Font, 2020). Increasing the intake of fruits and vegetables significantly reduces the risk of cancer (Fielding and Walsh, 1994), coronary heart disease, and other chronic diseases (Joshipura, 2001). According to a recent revision of its taxonomy, Actinidia is a member of the Actinidiaceae family and contains 54 species (Huang et al., 2004). Among the various types of Actinidia, kiwi berry (Actinidia arguta) is a relatively new type of commercially grown fruit, called the "mini kiwi" or "baby kiwi" (Williams et al., 2003). Kiwi berry possesses a delicious taste and health-promoting properties. In addition, kiwi berries are extremely abundant in phenolics (Fisk et al., 2006), flavonoids (Park et al., 2014), vitamin C (Latocha, 2015), carotenoids, chlorophylls (Nishiyama et al., 2005), proteins, and minerals (Bieniek and Dragańska, 2013; Latocha, 2015). Phenolics reduce the risk of many chronic diseases (Liu, 2013). In addition, flavonoids have anti-inflammatory, anti-allergic, anticarcinogenic, and anti-ulcer properties (Nayak et al., 2011). Vitamin C is considered the most important vitamin because of its significant antioxidant activity (Forastiere et al., 2000). In this study, both phenolic and flavonoid contents were determined using Liu's methods (Guo et al., 2012). The vitamin C content was measured using high-performance liquid chromatography (HPLC) analysis (Mekky et al., 2018). The separation and quantification of phenolics were performed by the HPLC-DAD-MS/MS system (Lang et al., 2020). The phenolics profiles and antioxidant activities of fruits differ, and fruits with high antioxidant activity usually contain more antioxidants (Guo et al., 1997).
It is worth noting that the peel and seed extracts of some fruits have higher antioxidant activities than the corresponding flesh extracts (Sun et al., 2021); mango peel, for example, is a rich source of phenolics, carotenoids, and anthocyanins (Ajila et al., 2007b). The skin of the kiwi berry, in contrast, is mostly smooth and hairless, which makes the whole fruit suitable for direct consumption without removing the skin (Jackson and Harker, 1997). The intake of antioxidants from kiwi berries is nearly three times higher than that from kiwifruit (Horák et al., 2019). A previous study reported the phenolics, ascorbate, and antioxidant potency of peel and flesh extracts of kiwi berry (Latocha, 2015); however, the peroxyl radical scavenging capacity (PSC) and cellular antioxidant activity (CAA) have not yet been investigated. To gain insight into the composition and antioxidant activities of kiwi berry flesh and peel extracts cultivated in four regions of China (namely, Dandong, Benxi, Taian, and Tonghua), the objectives of this study were (1) to determine the total phenolic content (TPC), total flavonoid content (TFC), and vitamin C content of free and bound fractions of flesh and peel extracts; (2) to identify and quantify the free and bound phenolic contents of flesh and peel extracts; and (3) to determine the PSC and CAA values of free and bound fractions of flesh and peel extracts of six common kiwi berry varieties. Since peels are not currently used for commercial purposes, they are discarded as waste and become a source of pollution (Vicenssuto and Castro, 2020). This study of the phenolics profile and antioxidant activities of commercial kiwi berry cultivars in China concluded that kiwi berry peel extract has more potential than flesh extract as a health supplement rich in natural antioxidants and deserves further research.

Sample Preparation

Kiwi berries generally germinate in mid-April, enter a peak growth period from late May to mid-June, reach full bloom in late June, and mature around August-October. Therefore, from August to October, kiwi berries in most areas of China reach commercial maturity. In this study, kiwi berries were collected at the commercial maturity stage from Dandong (LD-241, LD-121), Benxi (Huairou), Taian (Changjiangyihao), and Tonghua (Longcheng, Liaofeng) in China. More than 50 fruits of nearly the same size without any disease or pest damage were randomly collected for each variety from the four regions. The samples were placed in cooler containers and immediately transported to the laboratory. Each kiwi berry variety was randomly divided into three groups (each of approximately 100-150 g) as three replicates for each experiment. Fruits of each variety were washed, manually peeled, and mixed. After the flesh and peel were separated, the weights of flesh and peel relative to the whole fruit, the soluble solids, and the pH were immediately determined. The remaining samples were frozen at −20°C for no longer than 1 week, until phenolic extraction.

Extraction of Free and Bound Phenolics

Phenolics were extracted using a previously reported method with some modifications (Gorinstein et al., 2009; Guo et al., 2012). In brief, 100 g of fresh kiwi berry sample was extracted with 100 ml of 80% acetone for 10 min, and the mixture was mashed and then filtered through filter paper. The filtrate was collected after centrifugation at 2,500 × g for 10 min and filtered through filter paper. All extractions were performed twice.
The supernatants were pooled and evaporated at 45°C, and the extract was reconstituted to 20 ml with distilled water to obtain the free phenolic fraction. All residues were collected in centrifuge tubes and digested with 20 ml of 4 N NaOH with shaking for 1 h under nitrogen at room temperature (23°C). The pH of the mixture was then adjusted to 2.0 using concentrated HCl, and the mixture was extracted with ethyl acetate five times. The ethyl acetate was removed by rotary evaporation, and the extract was reconstituted to 20 ml with distilled water to obtain the bound phenolic fraction. The samples were frozen in liquid nitrogen and stored at −80°C until use.

Total Soluble Solids (°Brix)

Total soluble solids (TSS) from accurately weighed kiwi berry flesh and peel samples (5 g) were measured using a digital refractometer (Atago Co., Ltd., Tokyo, Japan) and recorded as degrees Brix (°Brix), which is numerically equal to the percentage of sugar and other dissolved solids in the solution.

Total Phenolic Content Determination

Total phenolic content was measured using the colorimetric Folin-Ciocalteu method, as previously reported (Guo et al., 2012). Gallic acid was used as the standard, and TPC was expressed as mg of gallic acid equivalents per 100 g fresh weight (mg GAE/100 g FW). Data are reported as the mean ± standard deviation (SD) of three replicates.

Total Flavonoid Content Determination

Total flavonoid content was determined using the sodium borohydride/chloranil (SBC) protocol. Catechin was used as the standard, and TFC was expressed as mg of catechin equivalents per 100 g fresh weight (mg CE/100 g FW). Data are reported as the mean ± SD of three replicates.

Vitamin C Content Determination

The vitamin C content was measured using HPLC analysis with some modifications (Mekky et al., 2018). Vitamin C was extracted from 100 g of kiwi berry with a mixture of 1% meta-phosphoric acid and 1% perchloric acid. Liquid chromatography was used to identify and quantify vitamin C (Agilent 1290II-6460). Detection was performed using an Agilent SB-C18 column (2.1 × 100 mm, 1.8 μm). The mobile phase consisted of A (0.1% formic acid in water) and B (0.1% formic acid in acetonitrile). The vitamin C content was expressed as mg per 100 g fresh weight (mg/100 g FW). Data are reported as the mean ± SD of three replicates.

Identification of Phenolic Compounds of Kiwi Berry

The separation and quantification of phenolics were performed on an HPLC-DAD-MS/MS system (Agilent 1290II-6460), following a previously reported method with some changes (Lang et al., 2020). The chromatographic column was an Agilent SB-C18 column (2.1 × 100 mm, 1.8 μm) held at 30°C. Different kiwi berry phenolics were separated by gradient elution. The mobile phase consisted of A (0.1% formic acid in water) and B (0.1% formic acid in acetonitrile). The flow rate was 0.2 ml/min, the injection volume was 20 μl, and the autosampler was kept at room temperature. The gradient elution program was as follows: 0-4 min, 10% B; 4-6 min, 10-21.5% B; 6-16 min, 21.5-28% B; 16-25 min, 28-50% B; 25-28 min, 50-95% B; and 28-30 min, 95% B. Mass spectrometry acquisition parameters were: electrospray ionization source; ion source temperature, 350°C; negative ion mode; capillary voltage, 3.5 kV; nebulizing gas flow, 10 L/min; nebulizing gas pressure, 45 psi; mode, MS2 scan; scan time, 300 ms; scan step, 0.1 amu; and fragmentor, 120 V.
Data acquisition and processing used the Agile 2 integrator software for automatic integration, and mass spectra were extracted as the average spectrum at 10% peak height. The real-time chromatogram was collected at 280 nm by diode-array detection. Monomer peaks were identified by retention time and MS/MS fragments. Chlorogenic acid was used as the standard for phenolic acids, and quercetin-3-O-glucoside was used as the standard for the flavanols and flavonols. The content of each phenolic was calculated from the corresponding peak area and the calibration curves of the standards.

Determination of Total Antioxidant Activity Using a Rapid Peroxyl Radical Scavenging Capacity Assay

Antioxidant activities were determined using the PSC method, as previously described (Adom and Liu, 2005). PSC values were expressed as μmol vitamin C equivalents per 100 g fresh weight (μmol VCE/100 g FW). Data are reported as the mean ± SD of three replicates.

Determination of Total Antioxidant Activity Using a Cellular Antioxidant Activity Assay

The CAA assay was conducted as previously described (Liu, 2007). In this study, HepG2 cells at passages 20-29 were used as a model to evaluate the CAA values of flesh and peel extracts. Quercetin was used as the standard, and CAA values were expressed as μmol of quercetin equivalents per 100 g fresh weight (μmol QE/100 g FW). Data are reported as the mean ± SD of three replicates.

Statistical Analysis

All data are reported as the mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was performed to determine the overall effect of different treatments, and Duncan's test was used for multiple comparisons. All analyses were performed using SPSS software 19.0 (SPSS Inc., Chicago, IL, United States) with a significance level of 0.05 (two-tailed p value). To relate the phenolics profile to the antioxidant activities, multivariate correlation was conducted by partial least squares (PLS) regression using Unscrambler 10.1 (Camo Process AS, Oslo, Norway). In the PLS model, the predictors (X variables) were the contents of the phenolics profile and the responses (Y variables) were the PSC and CAA values; a sketch of an equivalent open-source analysis appears after the CAA results below.

Kiwi Berry Conventional Quality

The six kiwi berry varieties had different shapes and sizes. The conventional quality of the varieties from the Dandong, Benxi, Taian, and Tonghua regions is shown in Table 1 and Supplementary Figure 1. Changjiangyihao had the greatest fruit weight (14.09 g), and LD-121 the least (5.35 g). The percentage of edible skin differed among varieties. Overall, although the sampling date, color, and shape differed, no differences were observed in the TSS and pH of the six kiwi berry varieties from the four regions.

Total Phenolic Content of Kiwi Berry

The TPC of the flesh (Figure 1A) and peel (Figure 1B) extracts of the six kiwi berry varieties is shown in Figure 1. The free-TPC was significantly higher than the corresponding bound-TPC in both the flesh and peel extracts. As shown in Figure 1A, the free-TPC of flesh extracts ranged from 41.47 mg GAE/100 g FW (Liaofeng) to 94.57 mg GAE/100 g FW (Longcheng), and the bound-TPC of flesh extracts ranged from 11.67 mg GAE/100 g FW (LD-241) to 55.08 mg GAE/100 g FW (Huairou).
As shown in Figure 1B, the free-TPC of peel extracts ranged from 276.31 mg GAE/100 g FW (LD-241) to 446.74 mg GAE/100 g FW (Liaofeng), whereas the bound-TPC of peel extracts ranged from 37.30 mg GAE/100 g FW (Changjiangyihao) to 94.45 mg GAE/100 g FW (LD-121). In general, the peel TPC was considerably higher than the corresponding content in the flesh of all six kiwi berry varieties. Total Flavonoid Content of Kiwi Berry The TFC of the flesh (Figure 2A) and peel (Figure 2B) extracts of six kiwi berry varieties is shown in Figure 2. The free-TFC was significantly higher than the corresponding bound-TFC in both the flesh and peel extracts. As shown in Figure 2A, the average free-TFC of flesh extracts was 23.95 mg CE/100 g FW, whereas the average bound-TFC of flesh extracts was 5.22 mg CE/100 g FW. As shown in Figure 2B, the average free-TFC of peel extracts was 91.69 mg CE/100 g FW, and the average bound-TFC of peel extracts was 23.13 mg CE/100 g FW. The total-TFC of flesh extracts ranged from 22.23 mg CE/100 g FW (LD-241) to 36.34 mg CE/100 g FW (LD-121). The total-TFC of peel extracts ranged from 68.38 mg CE/100 g FW (Changjiangyihao) to 155.54 mg CE/100 g FW (LD-121). Comparatively, the TFC of peel extracts was considerably higher than that of the corresponding flesh extracts among all six kiwi berry varieties. Kiwi Berry Vitamin C Content The vitamin C content of flesh and peel extracts of the six kiwi berry varieties is shown in Table 2. The vitamin C contents of flesh extracts ranged from 6.82 mg/100 g FW (LD-121) to 25.67 mg/100 g FW (Longcheng). The average vitamin C content of flesh extracts was 15.63 mg/100 g FW and varied by 3.76-fold among the six varieties. The vitamin C content of peel extracts ranged from 56.36 mg/100 g FW (LD-241) to 102.07 mg/100 g FW (Liaofeng). The average vitamin C content of peel extracts was 84.93 mg/100 g FW and varied by 1.81-fold in these varieties. The vitamin C content of peel extracts was considerably higher than that of the corresponding flesh extracts among all six kiwi berry varieties. Phenolic Composition of Kiwi Berry In this study, the composition and content of phenolic monomers were identified and quantified by HPLC-DAD-MS/MS analysis. As shown in Figure 3, 10 monomers (4 phenolic acids, 3 flavanols, and 3 flavonols) of kiwi berry phenolics were identified. The contents of the 10 monomers in the flesh and peel extracts of the 6 kiwi berry varieties are shown in Table 3. Four phenolic acids, i.e., protocatechuic acid, caffeic acid, chlorogenic acid, and quinic acid, were the predominant phenolics in kiwi berry. The free phenolic acid contents were significantly higher than the corresponding bound phenolic acid contents in both the flesh and peel extracts. The average free protocatechuic acid content of flesh extracts was 25.61 μg/g FW. The average bound protocatechuic acid content of flesh extracts was 8.29 μg/g FW. The total protocatechuic acid contents of flesh extracts ranged from 28.77 μg/g FW (LD-241) to 41.41 μg/g FW (Longcheng). The average free protocatechuic acid content of peel extracts was 56.15 μg/g FW. The average bound protocatechuic acid content of peel extracts was 24.61 μg/g FW. The total protocatechuic acid contents of peel extracts ranged from 67.05 μg/g FW (Huairou) to 96.16 μg/g FW (Longcheng). The average free caffeic acid content of flesh extracts was 17.69 μg/g FW. The average bound caffeic acid content of flesh extracts was 7.97 μg/g FW.
The total caffeic acid contents of flesh extracts ranged from 16.37 μg/g FW (Changjiangyihao) to 37.25 μg/g FW (LD-241). The average free caffeic acid content of peel extracts was 64.47 μg/g FW. The average bound caffeic acid content of peel extracts was 22.16 μg/g FW. The total caffeic acid contents of peel extracts ranged from 72.12 μg/g FW (LD-121) to 109.44 μg/g FW (Huairou). The average free chlorogenic acid content of flesh extracts was 19.50 μg/g FW. The average bound chlorogenic acid content of flesh extracts was 7.29 μg/g FW. The total chlorogenic acid contents of flesh extracts ranged from 14.23 μg/g FW (Longcheng) to 37.40 μg/g FW (LD-121). The average free chlorogenic acid content of peel extracts was 52.79 μg/g FW. The average bound chlorogenic acid content of peel extracts was 22.76 μg/g FW. The total chlorogenic acid contents of peel extracts ranged from 60.44 μg/g FW (LD-121) to 99.30 μg/g FW (Changjiangyihao). The average free quinic acid content of flesh extracts was 10.58 μg/g FW. The average bound quinic acid content of flesh extracts was 4.28 μg/g FW. The total quinic acid contents of flesh extracts ranged from 10.45 μg/g FW (Liaofeng) to 17.93 μg/g FW (Longcheng). The average free quinic acid content of peel extracts was 38.61 μg/g FW. The average bound quinic acid content of peel extracts was 16.84 μg/g FW. The total quinic acid contents of peel extracts ranged from 33.62 μg/g FW (Changjiangyihao) to 69.84 μg/g FW (Liaofeng). The phenolic acid contents of the peel extracts were considerably higher than those of the corresponding flesh extracts among all six kiwi berry varieties. Peroxyl Radical Scavenging Capacity of Kiwi Berry The PSC values of the flesh (Figure 4A) and peel (Figure 4B) extracts of the six kiwi berry varieties are shown in Figure 4. The free PSC values were significantly higher than the corresponding bound PSC values for both the flesh and peel extracts. As shown in Figure 4A, the free PSC values of flesh extracts ranged from 456.64 μmol VCE/100 g FW (LD-121) to 1053.60 μmol VCE/100 g FW (Longcheng), whereas the bound PSC values of flesh extracts ranged from 77.81 μmol VCE/100 g FW (Changjiangyihao) to 184.62 μmol VCE/100 g FW (Huairou). As shown in Figure 4B, the free PSC values of peel extracts ranged from 2855.71 μmol VCE/100 g FW (LD-241) to 4630.56 μmol VCE/100 g FW (Liaofeng), whereas the bound PSC values of the peel ranged from 116.40 μmol VCE/100 g FW (Longcheng) to 264.46 μmol VCE/100 g FW (Huairou). The PSC values of the peel were considerably higher than those of the flesh of all six kiwi berry varieties. Cellular Antioxidant Activity of Kiwi Berry The CAA values of the flesh and peel extracts of the six kiwi berry varieties are shown in Table 4. Only the CAA of the free extracts was determined, because the CAA of the bound extracts was too low to be determined. In the "no phosphate-buffered saline (PBS) wash" protocol, the CAA values of flesh extracts ranged from 49.73 μmol QE/100 g FW (LD-121) to 161.42 μmol QE/100 g FW (Longcheng). The CAA values of peel extracts ranged from 161.08 μmol QE/100 g FW (LD-241) to 297.34 μmol QE/100 g FW (Liaofeng). In the "PBS wash" protocol, the CAA values of flesh extracts ranged from 14.20 μmol QE/100 g FW (Changjiangyihao) to 33.58 μmol QE/100 g FW (LD-121). The CAA values of peel extracts ranged from 29.42 μmol QE/100 g FW (Changjiangyihao) to 71.37 μmol QE/100 g FW (LD-241).
The CAA values of the peel extracts were higher than those of the corresponding flesh extracts in both the "no PBS wash" and "PBS wash" protocols. Multivariate Correlation by Partial Least Squares Regression Among the Phenolics Profile and Antioxidant Activities The multivariate correlation used PLS regression models of various groups of the phenolics profile (e.g., phenolics, flavonoids, and vitamin C as well as individual phenolic compounds) against the antioxidant activities (PSC and CAA) of the six kiwi berry varieties, as shown in Figure 5. Figure 5A shows the PLS plots of the kiwi berry extracts, where, with two factors, 85.02% of the variation in the phenolics profile explained 93.79% of the variation in the antioxidant activities. As shown in Figure 5B, this PLS model could explain up to 90% of the data variability, and clustering of the different varieties of kiwi berry could be observed across all the data. Free-TPC, vitamin C, free (+)-gallocatechin, and free chlorogenic acid were positively correlated with the free-PSC and no PBS wash-CAA values. Some phenolic monomers, such as free quercetin-3-O-galactoside and free quercetin-3-O-rutinoside, showed a moderate contribution to the free-PSC and no PBS wash-CAA values. Moreover, bound-TPC was positively correlated with the bound-PSC value. However, no phenolics profile correlated with the PBS wash-CAA values. Because of the PBS wash, the CAA value cannot reflect the total antioxidant activities of the phenolics profile both inside and outside the cell. Total Phenolic Content of Kiwi Berry Phenolics are strong antioxidants that are found in many fruits and vegetables (Liu, 2013). Phenolics reduce the risk of many chronic diseases and chemoresistance (Jing, 2008), and have anti-carcinogenic and anti-inflammatory effects, protecting against certain types of cancer (Kamiloglu, 2015). In this study, the free-TPC was significantly higher than the corresponding bound-TPC, which is consistent with a previous study (Sun et al., 2002). Free phenolics reduce the oxidative stress response in cells and are rapidly released and absorbed in the stomach and small intestine (Chandrasekara and Shahidi, 2011). Bound phenolic extracts protect the digestive system against cancers because they are retained after digestion in the stomach and intestines and are released in the colon during fermentation by colonic bacteria (Adom and Liu, 2002). In this study, the free-TPC of the peel extract (344.45 mg GAE/100 g FW) was higher than the free-TPC of the flesh extract (64.32 mg GAE/100 g FW). These results are consistent with a previous study (Latocha, 2015), in which the peel extract TPC (212.10 mg GAE/100 g FW) was higher than the flesh extract TPC (21.60 mg GAE/100 g FW). Moreover, the bound-TPC of the peel extract (64.50 mg GAE/100 g FW) was higher than the bound-TPC of the flesh extract (28.48 mg GAE/100 g FW). These results are consistent with previous findings that the phenolic content in the peel is significantly higher than that in the flesh of all studied varieties of A. arguta (Latocha, 2015). The kiwi berry phenolic content is more than three times that of kiwifruit (Leontowicz, 2016). Therefore, the kiwi berry peel extract is an excellent source of phenolics and may play an important role in preventing chronic diseases in humans. Total Flavonoid Content of Kiwi Berry Flavonoids are widely distributed phenolics with health-related properties that are based on their antioxidant activity (Benavente-García, 1997).
Flavonoids prevent cardiovascular diseases and certain cancers (Nayak et al., 2011; Romagnolo and Selmin, 2012). Previously, TFC was determined using aluminum chloride (AlCl₃; Froehlicher et al., 2009); however, due to anthocyanin interference, the detection of the flavonoid content was not accurate. Furthermore, the p-dimethylaminocinnamaldehyde method previously used has a low detection value (Gorinstein et al., 2009). Here, the SBC assay has been used to measure the total flavonoids, including flavonols, flavones, flavanols, flavanones, and anthocyanidins, which is an effective and comprehensive way to determine the TFC. Flavonoids also exist in both free and bound forms (Sheng et al., 2019). Similar to phenolics, the role of bound flavonoids cannot be ignored. The bound flavonoids may be absorbed by the intestinal membrane and partially converted to glucuronic acid and sulfate conjugates (Liu, 2007). The TFC of kiwifruit (Actinidia deliciosa) has previously been reported to be 36.30 mg CE/100 g FW (Saeed et al., 2019), which is significantly lower than that of the kiwi berry (A. arguta) peel extract (114.82 mg CE/100 g FW) in this study. The flavonoid content in kiwi berry peel extracts was also higher than that in the corresponding flesh extracts. Moreover, kiwifruit has abundant hair on the surface of the peel, rendering the skin unsuitable for consumption. In contrast, the kiwi berry skin is mostly smooth and hairless, which makes the whole fruit more suitable for direct consumption without removing the skin (Park et al., 2014). Therefore, the kiwi berry is a richer source of various phytochemicals than kiwifruit. Kiwi Berry Vitamin C Content Vitamin C is considered the most important vitamin because it has significant antioxidant activity (Forastiere et al., 2000); for example, it protects cells from oxidative stress (Zhang et al., 2014) and has an important role in protecting the body against cardiovascular diseases (Padayatty et al., 2003). Moreover, vitamin C is one of the main antioxidants in body fluids; for example, it may relieve asthma symptoms in children (Kim et al., 2009). In this study, the vitamin C content was higher than that of other fruits, such as orange (51 mg/100 g FW), blackcurrant (52-122 mg/100 g FW), strawberry (29-48 mg/100 g FW; Iqbal et al., 2004), and traditional kiwifruit (Baranowska-Wójcik and Szwajgier, 2019). The vitamin C content of the purple star apple is also higher in the peel extract than in the flesh extract (Moo-Huchin et al., 2015). Phenolic Composition of Kiwi Berry Protocatechuic acid, caffeic acid, chlorogenic acid, and quinic acid were the predominant phenolics in kiwi berry and could be detected in both free and bound extracts. Some studies have indicated that protocatechuic acid has chemopreventive potential, inhibiting chemical carcinogens in vitro and producing pro-apoptotic and anti-proliferative effects in different systems (Tseng et al., 2000). Caffeic acid and chlorogenic acid showed strong ABTS and DPPH free radical scavenging ability (Gulcin, 2006). Due to its antioxidant properties, quinic acid plays a role in preventing the development and progression of atherosclerotic disease (Hung et al., 2006). These results suggest that both the peel and flesh extracts of kiwi berry are of considerable value. Comparatively, the concentration of phenolic acids in peel extracts was higher than that in the corresponding flesh.
Therefore, kiwi berry peel extracts may contribute significantly to human health as a functional food. The contents of (+)-gallocatechin, proanthocyanidin B2, and proanthocyanidin C1 were detected in the peel and flesh extracts of kiwi berry. Flavanols have shown in vitro antioxidant activity and potential anticancer ability (Rodriguez-Ramiro et al., 2011). The contents of quercetin-3-O-glucoside, quercetin-3-O-rutinoside, and quercetin-3-O-galactoside were also detected in the peel and flesh extracts of kiwi berry. Flavonols have a significant effect on reducing the risk of heart disease, especially atherosclerotic diseases (Ren et al., 2017). Many studies have illustrated that both intrinsic and extrinsic factors, such as genotype, environmental variation, maturity, and postharvest storage conditions, lead to differences in phenolic composition among the species (Pincemail et al., 2012; Ruiz et al., 2013). The different harvesting locations of the kiwi berries were a direct cause of environmental differences. Therefore, further studies on the mechanisms of the influence of the environment on the antioxidants in kiwi berry are needed in the future. Peroxyl Radical Scavenging Capacity of Kiwi Berry Antioxidant activity is important to human health, and it is also important to choose a suitable method to determine the antioxidant activity of bioactive compounds (Zang et al., 2021). In this study, the PSC assay (Adom and Liu, 2005) was used to detect the total antioxidant activity of different parts of the six kiwi berry varieties. Previously, antioxidant activity has mainly been detected using the 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing ability of plasma (FRAP), and 2,2-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) methods (Froehlicher et al., 2009; Lim et al., 2013). However, the FRAP measurement is conducted at pH 3.6, which is not a common condition in the human body. The PSC assay is performed in a neutral environment similar to the human body (pH 7.4). Whereas chemically synthesized free radicals are used in the DPPH method, the PSC assay uses the peroxyl radical, which occurs naturally in the human body. According to a previous study, the PSC values of kiwi berry are higher than those of other fruits (Adom and Liu, 2005). In contrast to the bound extracts, the free extracts were a significant source of bioactive compounds. A previous study showed that kiwi berry peel is an important source of bioactive compounds and makes an important contribution to human health (Latocha, 2015).
The TPC, TFC, and vitamin C content of kiwi berry peel extracts were up to 10.77, 13.09, and 10.38 times higher, respectively, than those of the corresponding flesh extracts, indicating that the peel extracts had higher antioxidant activity than the flesh extracts and that the health benefits of the whole fruit depend on its flesh-to-peel ratio (Łata et al., 2005; Łata, 2007). These data suggest that kiwi berry peel might be a valuable material for the production of functional foods. Pearson's correlation analysis was used to determine the correlations between the TPC, TFC, vitamin C content, and PSC values. The correlation between TPC and PSC values was positive (flesh free extracts R² = 0.924, p < 0.01; peel free extracts R² = 0.976, p < 0.01; flesh bound extracts R² = 0.890, p < 0.01; and peel bound extracts R² = 0.880, p < 0.01). Moreover, the correlation between vitamin C content and PSC values was also positive (flesh extracts R² = 0.870, p < 0.01; peel extracts R² = 0.899, p < 0.01). However, there were no significant correlations between the TFC and PSC values. The correlation analysis suggested that phenolics and vitamin C significantly contributed to the in vitro antioxidant activity. Cellular Antioxidant Activity of Kiwi Berry The CAA assay represents a marked improvement over traditional chemical antioxidant activity assays, as it simulates some in vivo cellular processes (Wolfe et al., 2008). The CAA values of the "PBS wash" protocol differed significantly from those of the "no PBS wash" protocol; the results are similar to those reported for other common fruits (Wolfe et al., 2008). The PBS wash may remove the extracellular antioxidant capacity and thus reduce the measured overall cellular antioxidant capacity (Liu, 2007). In this study, the kiwi berry CAA value was higher than that of blueberry (Wang et al., 2017), suggesting that kiwi berry extract may have better antioxidant activity. Meanwhile, the peel extract CAA values were higher than those of the corresponding flesh extracts, in agreement with the correlation of the PSC values and TPC. These results strongly suggest that the kiwi berry peel has strong antioxidant activity and might be a good source for producing functional foods. Pearson's correlation analysis was used to determine the correlations between TPC, vitamin C content, and CAA values. The CAA values of the "no PBS wash" protocol were significantly correlated with the TPC (flesh extracts R² = 0.909, p < 0.01; peel extracts R² = 0.969, p < 0.01) and vitamin C content (flesh extracts R² = 0.919, p < 0.01; peel extracts R² = 0.940, p < 0.01). However, when cells were washed with PBS, the CAA values did not show any significant association with TPC and vitamin C content. Natural antioxidants present in fruits have received much attention because of their assumed safety and potential nutritional and therapeutic value (Gao et al., 2020). Fruit peel is a powerful source of natural antioxidants, but it is usually discarded as waste in both home consumption and the food industry. Fruit peels possess higher antioxidant compound contents and antioxidant activity than fruit flesh (Wolfe et al., 2003; Ajila et al., 2007a). Therefore, fruit peel extracts, particularly those with high antioxidant activity, may be rich sources of antioxidants and deserve further study.
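The Pearson correlations reported above relate per-variety contents to antioxidant activity. A minimal sketch of this step is shown below; scipy's pearsonr is our substitute for the SPSS procedure named in the Methods, and the six paired values are hypothetical stand-ins, not the study's data (the real per-variety values are in Figures 1 and 4). Note that for a single-predictor correlation the reported R² is simply the square of Pearson's r.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical free-TPC (mg GAE/100 g FW) and free-PSC (umol VCE/100 g FW)
# values for six varieties; replace with the measured per-variety pairs.
tpc = np.array([41.47, 94.57, 60.20, 55.10, 70.30, 64.30])
psc = np.array([500.0, 1053.6, 700.0, 640.0, 820.0, 760.0])

r, p = pearsonr(tpc, psc)  # Pearson's r and the two-tailed p value
print(f"r = {r:.3f}, R^2 = {r ** 2:.3f}, p = {p:.4f}")
```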
According to these results, the high contents of phenolics and vitamin C and the high antioxidant activity of kiwi berries, especially the peels, indicate that they may impart health benefits when consumed; opportunities for the food industry to develop ingredients for the formulation of functional food products are anticipated. CONCLUSION Kiwi berry is considered a superfood and one of the most nutritious fruits. This study aimed to assess and summarize the phenolics profiles and antioxidant activities of six kiwi berry flesh and peel extracts, exploring their potential functional properties in foodstuffs. Ten phenolic monomers were analyzed qualitatively and quantitatively. The results showed that the various groups of phenolics in both kiwi berry flesh and peel extracts are present mainly in the free form. Moreover, kiwi berry peel possessed higher TPC, TFC, and vitamin C contents than kiwi berry flesh. Similarly, kiwi berry peel extracts showed higher antioxidant activities. The multivariate correlation analysis showed that various groups of the phenolics profile are the main contributors to the antioxidant activity. Therefore, kiwi berry peel may be utilized as a potential source of natural biologically active compounds. The potential application of kiwi berry peel extract in food, medicine, and cosmetics requires further research and discussion. In addition, the study provides detailed information on the phenolics profiles and antioxidant activities of flesh and peel extracts, which will help researchers better understand the nutritional quality of kiwi berries and help food manufacturers develop tailor-made health products in China. Further extensive research is still required in this field to attract food industrialists to add value and develop kiwi-based food products. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
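As a companion to the PLS analysis described in the Statistical Analysis section, the sketch below reproduces the same X-to-Y setup in scikit-learn; the paper used Unscrambler 10.1, so this library choice, the two-factor setting, and the random stand-in data are our assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: phenolics profile (predictors), Y: PSC and CAA values (responses).
# Random stand-ins for 12 samples (6 varieties x flesh/peel) and 8 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 8))
Y = X @ rng.normal(size=(8, 2)) + 0.1 * rng.normal(size=(12, 2))

pls = PLSRegression(n_components=2).fit(X, Y)  # two PLS factors, as in Figure 5A
print(f"Y-variance explained with two factors: {pls.score(X, Y):.3f}")
```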
2021-07-02T13:27:26.880Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "e11ade2d4ea42ea49906102fc8497883d13d03d5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2021.689038/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e11ade2d4ea42ea49906102fc8497883d13d03d5", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
226539483
pes2o/s2orc
v3-fos-license
Mathematical Literacy in Setting Model Eliciting Activities Nuanced Ethnomathematics Mathematical literacy in the era of disruption is one of the main demands for students in developing new literacy. Mathematical literacy is the capacity of individuals to formulate, apply, and interpret mathematics in various contexts. In this case, it will be easier for students if mathematics learning bridges mathematics in daily life, based on local culture, and school mathematics. The local cultural objects used in the problem sheets are objects found in Malaka District, NTT. Model eliciting activities (MEAs) are one solution for developing collaboration capabilities and building connections between mathematics and real life. The purpose of this study was to find out whether MEAs learning with ethnomathematics nuances is better than DL learning. This research is quantitative research. The data collection methods in this study were observation, tests, documentation, and literature studies. The data were analyzed using the proportion test and the independent t test. Classes were chosen by random sampling: class VIII G was the experimental class that received MEAs learning with ethnomathematics nuances, consisting of 32 students, and class VIII B was the control class that received Discovery Learning (DL), consisting of 32 students. The results showed that the mathematical literacy skills of students who received MEAs learning with ethnomathematics nuances were complete individually and classically; this was indicated by the results of the one-sample t test and the proportion test, for which H0 was rejected. The average mathematical literacy ability of the experimental class students (78.38) and the corresponding proportion were better than those of the control class (71.38). Starting from the perspective of ethnomathematics, literacy is best understood as the integration of school and cultural contexts through the process of cultural dynamics. I. INTRODUCTION In the current era of disruption, it takes people who have the skills to develop concepts and the ability to communicate, collaborate, and think critically, creatively, and innovatively. To arrive at this, mathematical literacy skills are one solid foundation. As revealed by [5], teachers and educational institutions must be able to strengthen educational institutions in various aspects, such as curriculum, systems, management, models, strategies, and learning approaches, by strengthening 21st century mathematical literacy skills. One of these is strengthening the literacy skills of teachers and educational institutions, from old literacy (reading, writing, counting) to new literacy (data, technology, human resources / humanism). Mathematical literacy ability can be defined as the ability of students to formulate, use, and interpret mathematics in various contexts to solve everyday life problems effectively [13]. This answers the demand that students of mathematics should not only be capable of counting but also have logical reasoning skills and be critical in solving problems, because problem solving is not just about routine questions in textbooks but rather about the problems faced every day. The results of students in Programme for International Student Assessment (PISA) mathematics literacy in Indonesia are still low: Indonesia ranks 64 out of 72 countries. From 2012 to 2015, the PISA score for mathematical skills rose 11 points, from 375 to 386. This means that Indonesia is below the international average.
The results shown by the international study are not much different from what was revealed by the mathematics teachers at a Junior High School in Malaka Regency, East Nusa Tenggara. The teachers found that most students have difficulty solving story problems, are less able to interpret questions and model them in mathematical contexts, and have difficulty connecting mathematical problems to daily life; consequently, the approaches that arise in solving problems are not as expected. This clearly has a huge impact on developing mathematical literacy, especially in strengthening old literacy towards new literacy. The lack of students' mathematical literacy skills can be seen in the PISA study of [8], which found that personal and contextual factors influence performance on the PISA mathematics test, where the questions are related to mathematical literacy. The aspects of mathematical literacy skills used are based on PISA 2012: (1) Communication, (2) Mathematising, (3) Representation, (4) Reasoning and Argument, (5) Devising Strategies for Solving Problems, (6) Using Symbolic, Formal and Technical Language and Operations, and (7) Using Mathematics Tools. Observations show that learning is never associated with the daily lives of students, in this case the local culture associated with mathematics, and rarely develops group learning, which is one of the recommendations of the 2013 curriculum. Model Eliciting Activities (MEAs) can be one solution to the problems faced. MEAs are a learning approach to understanding, explaining, and communicating the concepts contained in a presentation through mathematical modeling [11]. Then [1] explains that MEAs are appropriate for building mathematical connection skills between mathematics and real life. Model eliciting activities give increased attention to expanding students' ability to be actively involved, collaboratively, in rich mathematical experiences. Model eliciting activities consist of 4 main parts, namely problem sheets, readiness questions, problems, and sharing solutions through presentation activities (Yu & Chang, 2011). The results of [4] show that the mathematical representation ability of students in model eliciting activities learning is better than the mathematical representation ability of students given learning with scientific approaches. To create learning that is meaningful and creative and that makes mathematical concepts easy to understand, mathematics learning should be associated with the contextual life of students, namely through the use of local culture that is directly in touch with students. This is in accordance with what was stated by [7], that school mathematics needs to expand its parameters and become more inclusive of mathematics in the world inhabited by students. One way to do this is to include aspects of ethnomathematics, culture-based mathematics. Another point is that mathematics must be seen as a cultural product. Furthermore, as explained by [15], mathematics learning very much needs to provide a bridge between mathematics in the everyday world, based on local culture, and school mathematics. The same was expressed by [3]: cultural values contribute greatly to students' learning processes, help them better understand study material, increase motivation, and ultimately improve their achievement in mathematics.
This study aims (1) to find out whether the proportion of completeness of the experimental class reaches 75% and (2) to find out whether the average mathematical literacy ability of students who receive MEAs learning with ethnomathematics nuances is better than that of students who receive discovery learning. II. METHODS This research is quantitative research, conducted at a Junior High School in Central Malaka, Malaka Regency, NTT, in the 2018/2019 Academic Year. The problems on the given problem sheets use local cultural objects as contextual problems related to students' daily lives in the Malaka District. The local cultural objects, or ethnomathematics, were collected based on documentation and various media sources. The research at the school took place in April-May 2019. The data collection methods used in this study were observation, tests, documentation, and literature studies. Classes were selected by random sampling, provided that the population was homogeneous and normally distributed and that the averages of the two classes were statistically the same before treatment. The researcher determined the classes based on mid-semester test results at the school, assigning an experimental class that received MEAs learning with ethnomathematics nuances and a control class that received discovery learning. The completeness limit (KKM) is based on the average value achieved by student groups [14]. The KKM in this study is x̄ + 2 SD, which gives a completeness limit of 69. The analysis then proceeded with the normality test using the Kolmogorov-Smirnov test and the homogeneity test using Levene's test. Individual completeness was tested using the one-sample t test, classical completeness using the proportion test, and the difference in averages using the independent t test. III. RESULTS AND DISCUSSION The ethnomathematics explored in Malaka district was used for the problem sheets in MEAs learning with ethnomathematics nuances. The objects used are the shapes found in traditional house buildings, the containers for siri pinang (Kabi and Koba) that are used to entertain guests, and school bags made of woven palm leaves. Some of these objects were used as the problems in MEAs learning with ethnomathematics nuances. The researcher analyzed the data and obtained final mathematical literacy test data that were normally distributed and homogeneous. The experimental class students obtained an average of 78.38, with 28 students scoring above the minimum completeness criterion (KKM). [12] states that, starting from a mathematical point of view, literacy is best understood as the integration of school and cultural contexts through the process of cultural dynamics. This approach allows students to exchange academic knowledge obtained at school with information from their own cultural context. Then [2] explained that mathematics learning based on ethnomathematics is one of the ways that can be expected to make learning more interesting, meaningful, and contextual. IV. CONCLUSION Model Eliciting Activities learning with ethnomathematics nuances is effective when applied in classroom learning.
Success indicators can be seen from, first, classical and individual completeness and, second, the average and proportion of the mathematical literacy abilities of students who receive MEAs learning with ethnomathematics nuances being better than those of students who receive discovery learning. Ethnomathematics can be integrated into mathematics learning in the classroom.
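A minimal sketch of the three tests named in the Methods is given below; the score arrays are hypothetical stand-ins (only the KKM of 69, the 75% classical target, and the class size of 32 come from the text), and scipy is our substitute for whatever software was actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
exp = rng.normal(78.4, 8.0, 32)  # hypothetical experimental-class scores (MEAs)
ctl = rng.normal(71.4, 8.0, 32)  # hypothetical control-class scores (DL)

# Individual completeness: one-sample t test of the mean against KKM = 69.
t1, p1_two_sided = stats.ttest_1samp(exp, popmean=69)
p1 = p1_two_sided / 2 if t1 > 0 else 1 - p1_two_sided / 2  # one-sided p value

# Classical completeness: one-sample proportion (z) test against pi0 = 0.75.
n, k = len(exp), int((exp >= 69).sum())
z = (k / n - 0.75) / np.sqrt(0.75 * 0.25 / n)

# Difference in averages: independent two-sample t test (equal variances assumed,
# consistent with the homogeneity test in the Methods).
t2, p2 = stats.ttest_ind(exp, ctl, equal_var=True)

print(f"t1 = {t1:.2f} (p = {p1:.4f}), z = {z:.2f}, t2 = {t2:.2f} (p = {p2:.4f})")
```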
2020-07-02T10:14:03.437Z
2020-06-23T00:00:00.000
{ "year": 2020, "sha1": "fe7343c279ee3f32075f93e343c5b09821c81127", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/assehr.k.200620.046", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "92e457f8ce8bf3e3a3c8c46b3c887eca7aca5100", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Sociology" ] }
119172308
pes2o/s2orc
v3-fos-license
Optimality for indecomposable entanglement witnesses We examine various notions related to the optimality of entanglement witnesses arising from Choi type positive linear maps. We found examples of optimal entanglement witnesses which are non-decomposable, but which are not 'non-decomposable optimal entanglement witnesses' in the sense of [M. Lewenstein, B. Kraus, J. Cirac, and P. Horodecki, Phys. Rev. A 62, 052310 (2000)]. We suggest using the terms 'PPTES witness' and 'optimal PPTES witness' in the places of 'non-decomposable entanglement witness' and 'non-decomposable optimal entanglement witness' in order to avoid possible confusion. We also found examples of non-extremal optimal entanglement witnesses which are indecomposable. I. INTRODUCTION Quantum entanglement is now considered the main key resource for applications in quantum information and quantum computation theory. One of the major research topics in the theory of entanglement is, of course, how to distinguish entanglement from separable states. For this purpose, positive linear maps are known to be the most complete tool [1] among various criteria. This criterion for separability using positive maps is equivalent to the duality theory [2] between positivity of linear maps and separability of block matrices, through the Jamiołkowski-Choi isomorphism [3,4]. In this sense, we need a positive linear map to detect entanglement. This is formulated as the notion of an entanglement witness [5], which, under the isomorphism, is just a positive linear map which is not completely positive. We refer to [6,7] for systematic approaches to the duality using the Jamiołkowski-Choi isomorphism. An entanglement witness which detects a maximal set of entanglement is said to be optimal, as was introduced in [8]. The notion of optimality may be explained in terms of the facial structure of the convex cone P₁ consisting of all positive linear maps between matrix algebras. In fact, it was shown [9] that a positive map φ is an optimal entanglement witness if and only if the smallest face of P₁ containing φ has no completely positive linear map. See also Ref. [10]. Therefore, the most natural candidates for optimal entanglement witnesses are extremal positive maps which are not completely positive. In spite of its importance, the facial structure of the cone P₁ is very far from being understood, even in the low dimensional cases. When both the domain and the range are the 2 × 2 matrix algebra, all extreme points of the convex set consisting of unital positive maps were found in the sixties [11]. The whole facial structure of this convex set was completely described by the second author [12]. See also Ref. [13]. Another sufficient condition for optimality is the notion of the spanning property, as was introduced in [8]. This is very useful, because the spanning property is much easier to verify than optimality itself. It turns out [14] that a positive map φ has the spanning property if and only if the smallest exposed face of the cone P₁ containing φ has no completely positive map. Recall that a convex subset F of a convex set C is said to be a face if the following condition holds: if a convex combination of two points x, y ∈ C belongs to F, then x and y themselves belong to F. A face F of C is said to be an exposed face if it is the intersection of C and a hyperplane. We will see an example of a face which is not exposed through the discussion. See FIG. 1.
For the decomposable case, several necessary and/or sufficient conditions for optimality are known, and there has been progress in characterizing optimal decomposable entanglement witnesses. See Refs. [9,15,16] for examples. In the case of indecomposable entanglement witnesses, a condition for optimality has been found [17] recently, and examples of optimal entanglement witnesses without the spanning property were given. Nevertheless, we still have few kinds of examples of optimal entanglement witnesses arising from indecomposable maps. We note that the Choi type positive maps are one of the main resources for indecomposable positive maps. The primary purpose of this note is to analyze those maps between 3×3 matrix algebras, and examine the relations between extremeness, the spanning property and optimality. We note that a positive map φ detects entanglement with positive partial transposes if and only if it is indecomposable. An indecomposable positive map φ is said to be a non-decomposable optimal entanglement witness (nd-OEW) in [8] if it detects a maximal set of PPTES. But it is not clear at all that an optimal entanglement witness which is non-decomposable is really an nd-OEW in the sense of [8]. We found that this is not the case. In order to avoid such confusion, we use the following terminology in this note. A positive linear map φ is said to • be co-optimal if the smallest face of P₁ containing φ has no completely copositive map. • be bi-optimal if it is optimal and co-optimal. • have the co-spanning property if the smallest exposed face of P₁ containing φ has no completely copositive map. • have the bi-spanning property if it has both the spanning and co-spanning properties. It is clear that φ is co-optimal (respectively has the co-spanning property) if and only if the composition φ ∘ t with the transpose map t is optimal (respectively has the spanning property). If we use the Jamiołkowski-Choi isomorphism, then a self-adjoint block matrix W is co-optimal (respectively has the co-spanning property) if and only if the partial transpose W^Γ is optimal (respectively has the spanning property). It is also clear that φ is bi-optimal (respectively has the bi-spanning property) if and only if the smallest face (respectively the smallest exposed face) of P₁ containing φ has no decomposable map. Therefore, φ is an nd-OEW in the sense of [8] if and only if it is bi-optimal. We note that if φ is bi-optimal then it is automatically indecomposable. We will present examples of indecomposable optimal positive linear maps which are not bi-optimal. Since an optimal decomposable entanglement witness is completely copositive, it is never co-optimal. Therefore, the notions of co-optimality and co-spanning are useful only for indecomposable entanglement witnesses. For nonnegative real numbers a, b and c, the Choi type map is given by Φ[a, b, c](x) = [[a x₁₁ + b x₂₂ + c x₃₃, −x₁₂, −x₁₃], [−x₂₁, a x₂₂ + b x₃₃ + c x₁₁, −x₂₃], [−x₃₁, −x₃₂, a x₃₃ + b x₁₁ + c x₂₂]] for x = [x_ij] ∈ M₃, where M₃ denotes the C*-algebra of all 3 × 3 matrices over the complex field C. Choi [18] showed that the map Φ[1, 2, 2] is a 2-positive linear map which is not completely positive. This is the first known example distinguishing n-positivity for different n = 2, 3, . . . . The map Φ[1, 0, µ] with µ ≥ 1 is also the first example of an indecomposable positive linear map [19] in the literature, and the map Φ[1, 0, 1] is extremal [20], that is, it generates an extremal ray of the cone P₁.
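Before turning to the facial structures, here is a minimal numerical sketch (our own illustration, not from the paper) of two characterizations used below, namely that Φ[a, b, c] is completely positive if and only if a ≥ 2 and completely copositive if and only if bc ≥ 1, checked via positive semidefiniteness of the Choi matrix.

```python
import numpy as np

def phi(a, b, c, x):
    """Apply the Choi-type map Phi[a,b,c] to a 3x3 matrix x."""
    d = np.diag(x)
    diag = [a * d[0] + b * d[1] + c * d[2],
            a * d[1] + b * d[2] + c * d[0],
            a * d[2] + b * d[0] + c * d[1]]
    return np.diag(diag) - (x - np.diag(d))

def choi(a, b, c, compose_transpose=False):
    """Choi matrix sum_ij |i><j| (x) Phi(e_ij); it is PSD iff Phi (resp. Phi o t) is CP."""
    C = np.zeros((9, 9))
    for i in range(3):
        for j in range(3):
            e = np.zeros((3, 3))
            e[i, j] = 1.0
            C[3 * i:3 * i + 3, 3 * j:3 * j + 3] = phi(a, b, c, e.T if compose_transpose else e)
    return C

def is_psd(M, tol=1e-10):
    return np.linalg.eigvalsh(M).min() >= -tol

print(is_psd(choi(2, 0, 0)))                          # True:  a = 2, completely positive
print(is_psd(choi(1, 2, 2)))                          # False: Choi's 2-positive map is not CP
print(is_psd(choi(0, 1, 1), compose_transpose=True))  # True:  bc = 1, completely copositive
print(is_psd(choi(1, 0, 1), compose_transpose=True))  # False: bc = 0 < 1, the indecomposable Choi map
```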
II. FACIAL STRUCTURES AND OPTIMALITY Before going further, we note that the six properties, optimal, co-optimal, bi-optimal, spanning, co-spanning and bi-spanning, are properties depending on faces: if φ₁ and φ₂ determine the same smallest face containing them, then they are interior points of a common face, and share each property, because the properties are described in terms of faces. Therefore, we can say that a face itself has one of the six properties without confusion, and this means that every interior point of the face satisfies the property. It is also clear that if a face has a property, then every subface also has the same property. Hence, if a point φ does not have a property, then every interior point of the face determined by φ does not have the property. Therefore, we need to clarify the facial structures of the 3-dimensional convex body itself determined by (1). It should be noted that a face of the convex body need not give rise to a real face of the convex cone P₁. Nevertheless, an interior point of a face of the convex body gives rise to an interior point of the face of the cone P₁ determined by the corresponding map. First of all, the convex body has four 2-dimensional faces, f_ab, f_ac, f_bc and f_abc, where f_bc = {(a, b, c) : a = 0, bc ≥ 1}. We note [23] that Φ[a, b, c] is completely positive if and only if a ≥ 2, and it is completely copositive if and only if bc ≥ 1. Therefore, the face f_abc has the completely positive map Φ[2, 0, 0] and the completely copositive map Φ[0, 1, 1], and so f_abc is neither optimal nor co-optimal. It is also easy to examine the optimality in the first three cases. For example, if a > 2 then the map Φ[a, 0, 0] is written as Φ[a, 0, 0] = Φ[2, 0, 0] + (a − 2)D, where D is the diagonal map which sends [x_ij] to the diagonal matrix with the diagonal entries (x₁₁, x₂₂, x₃₃). The map D is both completely positive and completely copositive. This means that the map Φ[a, 0, 0] never satisfies optimality or co-optimality. Therefore, no interior point of the 2-dimensional faces f_ac and f_ab satisfies the above properties. By the same argument, this is also the case for the face f_bc. We note that the convex body also has the following five 1-dimensional faces, which are on the a-axis, the ab-plane or the ac-plane: • e_a = {(a, 0, 0) : a ≥ 2}, • e_b = {(1, b, 0) : b ≥ 1}, • e_c = {(1, 0, c) : c ≥ 1}, • e_ab = {(a, b, 0) : a + b = 2, 1 ≤ a ≤ 2}, • e_ac = {(a, 0, c) : a + c = 2, 1 ≤ a ≤ 2}. Among them, we have already seen that the face e_a is neither optimal nor co-optimal. This is also the case for e_b and e_c, since it is possible to subtract a map which is both completely positive and completely copositive. It is also clear that neither e_ab nor e_ac is optimal. In order to find other 1-dimensional faces, we note that the parametrization (a(t), b(t), c(t)) = 1/(1 − t + t²) · ((1 − t)², t², 1), 0 < t < ∞, satisfies the conditions a + b + c = 2 and bc = (1 − a)², as was considered in [37]. For each fixed positive number t > 0 with t ≠ 1, the line segment given by • e_t = {(1 − s, st, s/t) : t/(t² − t + 1) ≤ s ≤ 1} lies on the surface bc = (1 − a)² for 0 ≤ a < 1, and connects the point (a(t), b(t), c(t)) to the point (0, t, 1/t). This gives us 1-dimensional faces e_t for each t > 0 with t ≠ 1. Note that Φ[0, t, 1/t] is completely copositive for each t > 0, and so it is clear that e_t is not co-optimal. Next, we consider the 0-dimensional face v_(2,0,0). We see that the smallest exposed face F containing v_(1,0,1) already contains v_(2,0,0) in FIG. 1 (see Ref. [14] for a more general approach). We have seen [36] that Φ[1, 0, 1] has the co-spanning property, and so F has no completely copositive map.
This show that v (2,0,0) has the co-spanning property, and so e ab and e ac also have the co-spanning properties. We summarize the result as follows: (Co-)Spanning property (Co-)Optimality Faces Span. Co-span. Bi-span. Opt. Co-opt. Bi-opt. holds. Therefore, we see that interior points of the following faces e b , e c , e ab , e ac , e t , v (1,0,1) , v (1,1,0) , v (a(t),b(t),c(t)) give rise to indecomposable positive maps. We note that every interior point of the face e t gives rise to an example of an indecomposable optimal entanglement witness which is not bi-optimal. So, this is not 'nd-OEW' in the sense of [8]. If we consider the composition by the transpose map then the faces e ab and e ac play the exactly same role. They also provide us examples of non-extremal entanglement witnesses with the spanning property. On the other hand, the Choi maps v (1,0,1) and v (1,1,0) are extremal entanglement witnesses without the spanning property. Therefore, we see that two sufficient conditions, extremeness and spanning property, for the optimality is logically independent. III. CONCLUSIONS In this note, we considered Choi type positive maps between 3 × 3 matrices, and determined their optimality, co-optimality, spanning property and co-spanning property. We have seen that even though a non-decomposable entanglement witness is optimal, it need not to be a 'nondecomposable optimal entanglement witness' in the sense of [8]. Because a positive map detects a PPTES if and only if it is indecomposable, we suggest to use the term PPTES witness in the place of non-decomposable entanglement witness, and use the term optimal PPTES witness in the place of nd-OEW. In other word, we say that a positive map is an optimal PPTES witness when it is bioptimal. This is very natural since a positive map detects a maximal set of PPTES if and only if it is bi-optimal. Optimality is not so easy to determine for a given positive linear map, because we do not know the whole facial structures of the convex cone P 1 consisting of all positive maps. The spanning property is stronger than optimality and relatively easy to check. Another sufficient condition for optimality is extremeness. We also showed that spanning property and extremeness are logically independent.
2012-04-30T11:18:25.000Z
2012-04-30T00:00:00.000
{ "year": 2012, "sha1": "4b1cdc4f00468628326f3fb05a5207c408f82d10", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1204.6596", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "53817c74ada68b48b94935fa8406a7c1e7f4648a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
235556487
pes2o/s2orc
v3-fos-license
Integrated palliative care: triggers for referral to palliative care in ICU patients Palliative care within intensive care units (ICU) benefits decision-making, symptom control, and end-of-life care. It has been shown to reduce the length of ICU stay and the use of non-beneficial and unwanted life-sustaining therapies. However, it is often initiated late or not at all. There is increasing evidence to support screening ICU patients using palliative care referral criteria or "triggers". The aim of the project was to assess the need for palliative care referral during ICU admission using "trigger" tools. Electronic record review of cancer patients who died in or within 30 days of discharge from an oncology ICU, between 2016 and 2018. Patients referred to palliative care before or during ICU admission were identified. Three sets of palliative care referral "triggers" were applied: one that is being tested locally and two internationally derived tools. The proportion of patients who met any of these triggers during their final ICU admission was calculated. Records of 149 patients were reviewed: median age 65 (range 20-83). Most admissions (89%) were unplanned, with the most common diagnoses being haemato-oncology (31%) and gastrointestinal (16%) cancers. Most (73%) were unknown to palliative care pre-ICU admission; 44% were referred between admission and death. The median time from referral to death was 0 days (range 0-19). On ICU admission, 97-99% warranted referral to palliative care using the locally and internationally derived triggers. All "trigger" tools identified a high proportion of patients who may have warranted a palliative care referral either before or during admission to ICU. The routine use of trigger tools could help streamline referral pathways and underpin the development of an effective consultative model of palliative care within the ICU setting to enhance decision-making about appropriate treatment and patient-centred care. Background In recent years, studies have shown that there has been an increasing number of cancer patients benefiting from intensive care support [1]. Initiation of a "trial of intensive care unit (ICU) therapy" in patients with advanced cancer is becoming more common [1]. It is estimated that 18-30% of cancer patients use intensive care services [2,3]. Mortality for cancer patients admitted to ICU is similar to that of general ICU patients, at approximately 27-43% [3-5]. However, a proportion of cancer patients who survive their ICU stay die several months later from their underlying cancer [6]. Early identification and recognition of cancer patients in ICU who are either at risk of a poor outcome and/or have needs that might benefit from specialist palliative care involvement is key to facilitating prompt palliative care referral and review [7]. Early involvement of palliative care for patients with cancer has been proven in studies to improve patient experience, reduce symptom burden, support communication, and promote patient choice [8-12]. Furthermore, early palliative care has been shown to reduce the use of acute care services, including inpatient hospital admissions and emergency department attendances [13-15]. Within the ICU setting, the delivery of palliative care has been shown to benefit decision-making and symptom control, as well as to reduce the length of ICU stay and the use of non-beneficial and unwanted life-sustaining therapies [7,13,16,17].
Prominent organisations such as the World Health Organization, the European Society for Medical Oncology [18], and the American Society of Clinical Oncology [19] recommend that palliative care should be delivered early on, alongside standard oncology care, thereby ensuring that patients with or at risk of unmet palliative care needs are identified proactively [20,21]. However, there are several potential barriers to palliative care referral [16, 22-24]. Inaccurate prognostication, under-recognition of dying, and a historical association between palliative care and end-of-life care can result in palliative care referrals often being initiated late in the course of a critical illness [13,14,25]. The lack of standardised referral criteria can also contribute to inequitable access to palliative care services [22-24, 26]. Identification of patients who may benefit from palliative care review in the ICU, and of the best time to refer them to the speciality, has been the subject of several studies and reviews [27,28]. There is increasing evidence to support screening ICU patients using referral criteria or "triggers" [13, 28-32], as this may help to identify those patients who would benefit from a formal assessment of palliative care needs, and it offers a pragmatic approach to integrating palliative care in this setting. The aims of this study were to: 1) Examine how many patients who died on a cancer-specific ICU, or within a month of ICU discharge, were referred to palliative care before death 2) Identify the proportion of patients who may have been referred to palliative care if a palliative care referral "trigger" tool had been used, either within 6 months before ICU admission or during ICU admission Setting and cohort This study was approved as a service evaluation by the Royal Marsden Committee for Clinical Research. This study was carried out in a 269-bedded specialist adult cancer hospital located across two sites in London. The hospital is a tertiary referral centre for patients with cancer from across the UK and abroad. There is a 16-bedded mixed medical and surgical ICU, which admits approximately 1,400 patients per year and has capacity for both level 2 (single organ support, or extensive post-operative care) and level 3 (two or more organ support, or advanced respiratory support) care. The study included all patients who had died either in ICU or within 30 days of an ICU admission during a 2-year period, between 01 April 2016 and 31 March 2018. There were no exclusion criteria. Study design This was a two-part retrospective study, with the first part relating to palliative care referral before ICU admission, the "pre-ICU admission analysis", and the second relating to palliative care referral during ICU admission, the "analysis of ICU admission". In both parts of the project, trigger tools were retrospectively applied to explore the potential impact of each individual tool on palliative care referrals. A literature search was carried out using the keywords "trigger tools" and "intensive care" to identify palliative care referral trigger tools for inclusion in this study. As there is no evidence to date as to the most effective or useful trigger tool in this setting, a pragmatic approach was adopted to decide on the tools for inclusion in this study (Table 1).
For the first part of the study, the "pre-ICU admission analysis", these included: • "The Royal Marsden (RM)" trigger tool: a palliative care referral "trigger" tool that was developed locally, which had been previously tested against acute oncology admissions [33] • The "Hui et al." trigger tool: a set of palliative care referral criteria for outpatient specialty palliative care which had been devised through a process of international Delphi consensus [34] For the second part of the study, the "analysis of ICU admission", three trigger tools were included: the "Royal Marsden" trigger tool and the two ICU-specific "Zalenski et al." and "Hua et al." trigger tools (Table 1). The trigger tools were retrospectively applied using the clinical data which would have been available at the time, as documented in the medical notes. As defined by the parent studies, patients were identified as being eligible for a palliative care referral if they were positive for any one or more of the palliative care referral "triggers". For the "pre-ICU admission analysis", patients were assessed as to whether they met the palliative care referral criteria at any time in the 6 months prior to the ICU admission. [Table 1: Trigger tools used in the study, listing the palliative care referral "trigger" tools, including the "Hui et al." trigger tool [34] and the "Royal Marsden" trigger tool (locally derived tool) [33].] In the "analysis of ICU admission", patients who would have been eligible for a palliative care referral using a trigger tool after they had been admitted to ICU were identified. Patient records were examined using the electronic patient record (EPR), IntelliVue Clinical Information Portfolio (ICIP), and written notes. Data collection included patient demographics (age, gender, cancer diagnosis and stage), the reason for ICU admission [20], and the date and cause of death. Referral to palliative care, the reason for referral, and the numbers of days between the ICU admission, the earliest palliative care referral, and death were recorded. Data were collected by two investigators. Pseudonymised data were entered into a database and were handled in accordance with Good Clinical Practice guidelines and the General Data Protection Regulation. Data analysis Data were summarised using descriptive statistics. Median and range were used to describe continuous non-parametric clinical data, with counts and percentages used for discrete variables. A patient was considered to meet the requirements for a palliative care referral based on a "trigger" tool if he/she was positive for any one of the criteria within that individual "trigger" tool. Sixty-two percent of patients (n = 92) were referred to palliative care before death. Of these, 25% (n = 37) were referred to palliative care before ICU admission. The median time between the first palliative care referral and death for those referred before ICU admission was 38 days (range 0 to 1145). Reasons for referral to palliative care were categorised into six different subgroups: end-of-life care, symptom control, psychosocial support, team support, advance care planning, and other. Referrals to palliative care before ICU admission were mainly made for help with symptom control (81%, n = 30) or psychosocial support (62%, n = 23), or, in a minority of patients (11%, n = 4), for advance care planning purposes. During ICU admission, 56% (n = 84) of patients were referred to palliative care. Of these, 35% (n = 29) were already known to palliative care before admission to ICU. There were 8 patients who had previously been referred to and were known to palliative care but were not re-referred during their ICU admission.
Results

Sixty-two percent of patients (n = 92) were referred to palliative care before death. Of these, 25% (n = 37) were referred to palliative care before ICU admission. The median time between the first palliative care referral and death for those referred before ICU admission was 38 days (range 0 to 1145). Reasons for referral to palliative care were categorised into six subgroups: end-of-life care, symptom control, psychosocial support, team support, advance care planning, and other. Referrals to palliative care before ICU admission were mainly made for help with symptom control (81%, n = 30) or psychosocial support (62%, n = 23), and in a minority of patients (11%, n = 4) for advance care planning purposes. During ICU admission, 56% (n = 84) of patients were referred to palliative care. Of these, 35% (n = 29) were already known to palliative care before admission to ICU. There were 8 patients who had previously been referred to and were known to palliative care but were not re-referred during their ICU admission.

The main reason for referral to palliative care during ICU admission was symptom control, with 75% (n = 63) of referrals stating this as one of the reasons for referral. The other two most common reasons for referral were psychosocial support (70%, n = 59) and end-of-life care (59%, n = 50). One-third (n = 28) of the referrals to palliative care included advance care planning as a reason for review. The median number of days between ICU admission and referral was 7 (range 2-24). The median number of days between palliative care referral in ICU and death was 0 (range 0-19). Of the 92 patients who were referred to the palliative care team, 14% (n = 13) did not have a face-to-face review, but advice was given over telephone consultations. Nine of these patients (69%) had been referred on the day of death for symptom control and end-of-life care. The remaining four patients had also been referred for help with symptom control, and all but one of them had been referred the day before death. Thirty-eight percent of patients (n = 57) were not referred to palliative care at all before death. In the "pre-ICU admission analysis", 71% (n = 106) of the patients met at least one of the criteria for palliative care referral in the 6 months before ICU admission using the locally derived "Royal Marsden" trigger tool. A smaller proportion, 59% (n = 88), of the patients would have warranted a palliative care referral using the "Hui et al." trigger tool. The data from the "analysis of ICU admission" part of the study showed that a high percentage of patients would have met the criteria for a palliative care referral on admission to ICU. Using the "Royal Marsden" trigger tool, 96% of the patients (n = 143) met at least one of the triggers, whereas 99% (n = 149) and 97% (n = 146) of the patients were positive using the "Zalenski et al." trigger tool and the "Hua et al." trigger tool, respectively. The proportions of patients meeting each of the criteria in the individual trigger tools are presented in Table 2. The use of the ICU-specific "Zalenski et al." and "Hua et al." trigger tools would also have resulted in a palliative care referral being made much earlier before death. All patients who were positive for the "Zalenski et al." and "Hua et al." trigger tools met at least one of the criteria on the day of their ICU admission. The median time between becoming positive for either trigger tool and death was 8 days (range 0-70).

Table 2: Retrospective application of trigger tools to the data, including a full breakdown of the individual triggers.

Discussion

In this retrospective study, most patients were not known to the palliative care team in the lead-up to their ICU admission [13,17]. Just over half of them were referred to palliative care at any stage, either in the months prior to their final ICU admission or during it. When patients were referred, it was generally when they were close to death. The use of palliative care referral "trigger" tools would have identified a high proportion of patients who may have benefitted from palliative care referral, consistent with findings from other studies [7, 29-33]. The results from this retrospective application of trigger tools suggest that there is a large proportion of patients who could have been identified for, and potentially benefitted from, a palliative care referral, both before ICU admission and during it. Admission to ICU is generally only considered for patients with potentially reversible clinical issues.
With advances in treatments and improvements in outcomes, even cancer patients with advanced disease may be considered for a "trial of ICU therapy" [34]. ICU admission is, however, for most cancer patients a period of clinical and prognostic uncertainty. The aim of integrating palliative care at the point of ICU admission is to facilitate parallel planning, considering all outcomes and ensuring that clinical decision-making is informed by patient priorities. Involvement of palliative care at an earlier stage, before patients even get to ICU, may add even greater benefit. In this study, depending on the trigger tool used, between 59% and 71% of patients may have been eligible to be seen by palliative care before their ICU admission. These patients may have benefitted from advance care planning and shared decision-making, which may have influenced the decision for ICU admission [13,17]. There is good-quality evidence from well-designed studies that palliative care should be available to patients throughout their cancer journey and that it is no longer just for patients at the end of their lives. Health care organisations, patients, and staff are challenged to embrace the culture shift that enables palliative care to be provided from as early as diagnosis, alongside active medical treatment of their cancer and intercurrent illnesses [18,19]. Not every dying patient needs specialist palliative care, and likewise palliative care is no longer reserved just for patients in the last hours and days of life. The use of palliative care "trigger tools" to identify patients who may benefit from a comprehensive palliative care needs assessment and input represents a move away from a traditional referral model, based mainly on subjective assessment of prognosis, to one which is more standardised and centred around the individual needs of patients [35]. The "trigger tool" approach can underpin a "consultative model" of palliative care service provision in ICU, whereby there is increased involvement and effectiveness of the palliative care team in ICU [27]. Used routinely, it can also support a triage system whereby generalist palliative care is provided to all relevant patients by the ICU team, with specialist involvement for those likely to have the most complex needs [7,16]. Additionally, the data from this study suggest that a large proportion of cancer patients may warrant palliative care reviews many months before becoming acutely unwell and requiring admission to ICU. A similar retrospective study using the RM trigger tool demonstrated that a high percentage of cancer patients admitted acutely would have been eligible for a palliative care assessment earlier in their disease trajectory [33]. The findings from this study show that the use of trigger tools during ICU admission could have resulted in more than double the number of patients being referred to palliative care. As well as picking up higher numbers of patients to refer to palliative care, all the trigger tools identified patients for a referral at an earlier stage. All the patients that were identified by the three trigger tools during ICU admission were positive from the day of ICU admission. Thus, patients would have been seen earlier, compared with actual practice, where most patients were referred to palliative care on the day of death. This would have allowed more time for patients and their families to benefit from a palliative care review and could have had a positive effect on patient care [13,14,17].
The trigger tools selected for this study were found through extensive literature searches, as described above. The tools selected not only appeared to be the most relevant to the study cohort but were also supported by the most robust evidence [7,16,32,33,36]. The "Hua et al." and "Zalenski et al." trigger tools were studied in general ICU populations and hence have an evidence base in that group. However, as this study was carried out in a specialist adult cancer hospital, the findings are not necessarily generalisable to non-cancer settings. The trigger of "advanced or metastatic cancer" (and some of the other triggers too) may apply differently in this specific cancer population. However, even if the trigger of advanced cancer is disregarded, a significant proportion of patients were positive for a palliative care referral before ICU admission based on other triggers, e.g. physical symptoms. Similarly, in the ICU, a high proportion of patients met the referral criteria based on symptom needs; organ failure; having a marker of advancing illness such as anorexia, hypercalcaemia, or any effusion; or because the oncology team wanted palliative care to be involved. Other triggers appeared less relevant in this study, due to the nature of the study population. For example, no patients were positive for "admitted from a nursing facility". Future research is needed to refine palliative care referral criteria that are specific to a cancer population. The ICU-specific tools (the "Hua et al." tool and the "Zalenski et al." tool) identified a similar proportion of patients for referral as the non-ICU-specific Royal Marsden tool, which had previously been tested in the acute oncology setting. This may suggest that it is the use of a targeted approach to the identification of patients for referral that matters most, rather than differences between the actual items included in the tools. In this study it was not possible to identify the needs of patients who were not referred to the palliative care team. It was also not possible to robustly assess the effectiveness of the involvement of the palliative care team in addressing the needs of patients. Incorporating palliative care needs assessments and outcome measurements in future research in this area would increase the clinical applicability and impact of this research and provide objective evidence of the severity, breadth, and complexity of patients' palliative care needs and the impact of interventions in the ICU [37]. One of the main limitations of this study was that the data were collected retrospectively; therefore, the results are very reliant on the accuracy, completeness, and quality of documentation. It was not feasible to carry out a comparison of all palliative care referral or trigger tools in this study, and there may be other tools that have not been included here. Also, regarding the trigger tools themselves, although most of the "triggers" were objective, there were some subjective "triggers", such as "team perceived need for palliative care". Therefore, the results may include some bias on the part of the data collectors and might have varied if purely objective "triggers" had been used. For example, one of the triggers included in both the "Hua et al." and "Zalenski et al." tools used in the "analysis of ICU admission" concerned having advanced/metastatic cancer.
Although this is clear cut in solid organ tumours, it is more difficult to define in haematological malignancies, so it was left to the authors' discretion on a case-by-case basis. It could also be argued that the findings are limited to the results of a single tertiary centre. However, the authors believe that the findings have wider applicability and add to the growing evidence which supports the use of palliative care referral trigger tools.

Conclusion

This study has demonstrated that the use of specific sets of "trigger" tools may help to highlight patients with cancer who might benefit from a referral to palliative care. The use of a trigger tool would have identified most patients during their ICU admission and many patients in the preceding 6 months. This supports the shift in the perception of palliative care, so that it is not only considered at the end of life but much earlier in patients' disease course. The use of these trigger tools and early palliative care referral may streamline referral pathways and help with decision-making about appropriate treatment and patient-centred care. These findings lend support to the plausibility of using trigger tools to deliver palliative care to critically ill cancer patients in clinical practice. Although the results are from a small sample size in a single tertiary centre, there is a clear need for the validation of such trigger tools in general and cancer-specific populations.
Genome-wide association analysis identifies a meningioma risk locus at 11p15.5

Abstract

Background: Meningiomas are adult brain tumors originating in the meningeal coverings of the brain and spinal cord, with a significant heritable basis. Genome-wide association studies (GWAS) have previously identified only a single risk locus for meningioma, at 10p12.31.

Methods: To identify a susceptibility locus for meningioma, we conducted a meta-analysis of 2 GWAS, imputed using a merged reference panel from the 1000 Genomes Project and UK10K data, with validation in 2 independent sample series totaling 2138 cases and 12,081 controls.

Results: We identified a new susceptibility locus for meningioma at 11p15.5 (rs2686876, odds ratio = 1.44, P = 9.86 × 10−9). A number of genes localize to the region of linkage disequilibrium encompassing rs2686876, including RIC8A, which plays a central role in the development of neural crest-derived structures, such as the meninges.

Conclusions: This finding advances our understanding of the genetic basis of meningioma development and provides additional support for a polygenic model of meningioma.

Meningiomas are adult tumors arising in the membranous layers surrounding the brain and spinal cord and account for around a third of all primary brain tumors. [1-3] The incidence of meningioma is 2-fold higher in females than in males, and the disease is more common in individuals with African ancestry. 1 Although mortality rates are relatively low, meningioma is associated with substantial morbidity. Compared with malignant glial tumors, meningioma has been relatively understudied with regard to etiologic risk factors. Indeed, excluding exposure to ionizing radiation, no environmental factor has consistently been associated with tumor risk. 2,3 Evidence for an inherited predisposition to meningioma is provided by the elevated risk seen in neurofibromatosis 4 and Gorlin syndrome. 5 While the risk of meningioma associated with these disorders is high, they are rare and collectively contribute little to the 3-fold increased risk of the tumor in the relatives of meningioma patients. 6,7 Evidence for common genetic variation contributing to meningioma predisposition has been provided by a genome-wide association study (GWAS), 8,9 which identified a risk locus at chromosome 10p12.31. 10,11 To gain further insight into inherited susceptibility to meningioma, we performed a meta-analysis of a previously published GWAS 10 and a new unpublished GWAS, thereby providing increased study power to identify new risk loci and reduce the likelihood of false positives. 12 Following replication genotyping in 2 additional independent series, we report the identification of a new risk locus for meningioma mapping to chromosome 11p15.5.

Ethics

Collection of patient samples and associated clinicopathological information in this study was completed with written informed consent and relevant ethical review board approval at the respective centers in accordance with the tenets of the Declaration of Helsinki.
Specifically, these centers are: for the German-GWAS, the ethics committees of the Medical Faculty of the University of Bonn and University Hospital Essen; for the USA-GWAS, the institutional review boards at Yale University School of Medicine, Brigham and Women's Hospital, the University of California at San Francisco, The MD Anderson Cancer Center, Duke University School of Medicine, and the Kaiser.

Genome-Wide Association Studies

This meta-analysis was completed based on 2 GWAS datasets (Supplementary Table S1). The diagnosis of meningioma (ICD-10 D32/C70) was established in accordance with World Health Organization (WHO) guidelines. The German-GWAS comprised 834 cases (250 male) and 2103 controls (1047 male). The German-GWAS case-control study has been described previously. 10 Case subjects were patients who underwent surgery for meningioma at the University of Bonn Medical Center between 1996 and 2008. Control subjects were healthy individuals with no past history of malignancy from the Heinz Nixdorf Recall (HNR) study. 13 DNA was extracted from samples using conventional methodologies and quantified using PicoGreen (Invitrogen). Genotyping of cases and controls was conducted using either Infinium HD Human660w-Quad or OmniExpress Beadchips according to the manufacturer's protocols (Illumina). The USA-GWAS comprised 772 cases (217 male) and 7720 controls (2966 male). Case patients eligible for the study included all persons diagnosed between 2006 and 2013 with a histologically confirmed intracranial meningioma among residents of the states of California, Connecticut, Massachusetts, North Carolina, and Texas. Case patients were diagnosed between the ages of 20 and 79 and were identified through the Rapid Case Ascertainment systems and state tumor registries at their respective study sites. Controls were obtained through random-digit dialing performed by an outside consulting firm (Kreider Research and Consulting) (n = 689) or are from The Resource for Genetic Epidemiology Research on Aging (GERA) cohort (n = 7031). 14,15 Controls obtained through random-digit dialing were frequency matched with case patients by 5-year age interval, sex, and state of residence. Patients with a prior history of meningioma and/or a brain lesion of unknown pathology were not eligible for inclusion. The GERA cohort comprises 110 266 adult members of the Kaiser Permanente Medical Care Plan, Northern California Region (KPNC). Participants were enrolled through participation in a mailed study conducted in 2007 of all KPNC adult members who had been members for more than 2 years. Respondents who completed consent forms were mailed saliva collection kits (Oragene).

Importance of the study

Meningiomas are adult tumors arising in the meninges and account for around a third of all primary brain tumors. Evidence for common genetic variation contributing to meningioma predisposition has been provided by a GWAS, which identified a risk locus at chromosome 10p12.31. To gain further insight into the inherited susceptibility of meningioma, we performed a meta-analysis of 2 GWAS and 2 independent validation series comprising 2138 cases and 12 081 controls, and report the identification of a new risk locus for meningioma at 11p15.5. A number of genes localize to this locus, including RIC8A, which plays a central role in the development of neural crest-derived structures, such as the meninges. This is only the second study, and the largest, to robustly associate common genetic variation as a risk factor for meningioma.
We sampled 7031 individuals from 56 848 non-Hispanic white individuals whose data passed quality control for inclusion in the control group, to ensure 1:10 matching between cases and controls in the USA-GWAS, thereby optimizing study power, since there is little benefit from additional controls thereafter. 16 Genotyping of cases and controls of all USA-GWAS subjects was completed using Affymetrix Axiom EUR arrays according to the manufacturer's protocols.

Statistical Analysis

The quality control procedure described by Anderson et al. 17 was applied to each GWAS individually (Supplementary Table S1). To identify samples with discordant sex information, the mean homozygosity rate across X-chromosome markers was computed, and samples were excluded if this rate contradicted the reported sex or was inconclusive (a rate between 0.2 and 0.8). We next excluded individuals if they exhibited an elevated genotype failure rate (>3%) or an outlying heterozygosity rate (±3 standard deviations from the mean). To identify duplicated or related individuals, the degree of shared ancestry between pairs of individuals was computed (using identity by descent, IBD). If a pair of individuals had an IBD score >0.185, then the individual with the lower variant call rate was excluded. Individuals with non-European ancestry were identified by merging data from 3 HapMap version II populations (CEU, JPT/CHB, and YRI) and conducting principal component analysis on the merged individuals. Individuals with a second principal component score less than 0.072 were excluded. Variants were excluded if they had a high missing data rate (>5%), if the genotyping call rates differed between the cases and the controls (P < 10−5 using Fisher's exact test), if they had a minor allele frequency (MAF) <0.01, or if they deviated significantly from Hardy-Weinberg equilibrium (HWE, P < 10−5).
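These sample-level exclusions translate directly into a handful of filters. The following is a minimal sketch, assuming per-sample summary metrics have already been computed with standard GWAS tooling (e.g., PLINK); the file and column names are illustrative assumptions, not the authors' pipeline.

```python
import pandas as pd

# Minimal sketch of the sample QC filters described above. Assumes a
# per-sample metrics table computed beforehand; names are hypothetical.
qc = pd.read_csv("sample_qc_metrics.csv")

# Sex check: drop samples whose X-chromosome homozygosity rate is
# inconclusive (0.2-0.8) or contradicts the reported sex.
inconclusive = qc["x_homozygosity"].between(0.2, 0.8)
discordant = qc["inferred_sex"] != qc["reported_sex"]
qc = qc[~(inconclusive | discordant)]

# Elevated genotype failure rate (>3%).
qc = qc[qc["missing_rate"] <= 0.03]

# Outlying heterozygosity (beyond +/-3 SD from the mean).
mean, sd = qc["het_rate"].mean(), qc["het_rate"].std()
qc = qc[(qc["het_rate"] - mean).abs() <= 3 * sd]

# Relatedness: for any pair with IBD > 0.185, the member with the lower
# call rate would be dropped (the pairwise step is omitted in this sketch).

print(f"{len(qc)} samples pass QC")
```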
Individuals were phased using SHAPEIT version 2.r837 software 18 and a merged reference panel (EGAD00001000776, the European Genome-phenome Archive) containing data from the 1000 Genomes Project 19 (Phase 3) and the UK10K. 20 GWAS data were imputed to more than 10 million single nucleotide polymorphisms (SNPs) using IMPUTE version 2.3.0 21 and the same reference panel. Imputation was conducted separately for each of the studies. In each dataset, the data were pruned to the set of variants common to the cases and controls before imputation. Tests of association between the directly genotyped and imputed SNPs and meningioma were performed using logistic regression under an additive genetic model using SNPTEST version 2.5.2. 22 Poorly imputed SNPs (information measure <0.8), SNPs with a low MAF (<0.005), and SNPs that deviated from HWE (P < 10−5) were excluded. To evaluate the possibility of differential genotyping of cases and controls and the adequacy of the case-control matching, quantile-quantile (Q-Q) plots of the test statistics were generated (Supplementary Figure S1). The computed inflation factor λ is based on the 90% least significant SNPs. 23 In each study, the effects of population stratification were limited by including in the analysis the first 2 and 3 principal components for the German and USA series, respectively. Eigenvectors for each of the GWAS datasets were computed using EIGENSOFT version 4.2. 24 Meta-analyses of the individual GWAS were completed using the β estimates and standard errors from each study and the fixed-effects inverse-variance method implemented in META version 1.7. 25 Cochran's Q-statistic and the I² statistic were used to test for heterogeneity and estimate the proportion of the total variation that is due to heterogeneity. 26 A meta-analysis was only completed for an SNP if it passed the quality thresholds in all considered GWAS. SNPTEST was used to perform conditional association analysis. SNP associations at P < 5 × 10−8 in the meta-analyses are considered genome-wide significant. 27 Despite imposing a stringent significance threshold of P < 5 × 10−8 for declaring a GWAS association significant, it is possible that some such associations might still be false positives. To further assess the robustness of an association, Wakefield has proposed the application of an approximate Bayes factor to calculate the Bayes false discovery probability (BFDP). 28 We estimated the BFDP based on a plausible odds ratio of 1.2 and a prior probability of 0.0001. 29
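The fixed-effects inverse-variance combination, the heterogeneity statistics, and Wakefield's BFDP described above can all be written out in a few lines. The sketch below uses hypothetical per-study estimates, not the study's actual summary statistics, and assumes the common choice of setting the prior standard deviation of the log odds ratio to log(1.2)/1.96.

```python
import math

# Fixed-effects inverse-variance meta-analysis of per-study log odds ratios.
# The betas and standard errors below are hypothetical illustrations.
betas = [0.35, 0.38]   # log ORs from two GWAS (hypothetical)
ses = [0.09, 0.08]     # their standard errors (hypothetical)

weights = [1 / se**2 for se in ses]
beta_meta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
se_meta = math.sqrt(1 / sum(weights))

# Cochran's Q and I-squared for heterogeneity.
q = sum(w * (b - beta_meta)**2 for w, b in zip(weights, betas))
df = len(betas) - 1
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

# Wakefield's approximate Bayes factor and BFDP, with the priors quoted in
# the text: a plausible OR of 1.2 and a prior probability of association of 1e-4.
z = beta_meta / se_meta
v = se_meta**2                       # variance of the estimated log OR
w_prior = (math.log(1.2) / 1.96)**2  # prior variance of the log OR (assumption)
abf = math.sqrt((v + w_prior) / v) * math.exp(-z**2 * w_prior / (2 * (v + w_prior)))
prior_odds_null = (1 - 1e-4) / 1e-4
bfdp = abf * prior_odds_null / (abf * prior_odds_null + 1)

print(f"OR = {math.exp(beta_meta):.2f}, I2 = {i2:.2f}, BFDP = {bfdp:.3g}")
```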
Replication Studies

Ten promising SNP associations from the meta-analysis of the 2 GWAS were taken forward for de novo replication (Supplementary Table S2). Promising associations were prespecified as loci with SNP association P-values <10−5 which also had support from additional correlated SNPs mapping to the same genetic region (ie, r² > 0.5 and P < 10−3). The UK-replication series comprised 439 cases (ICD-10 D32/C70) from the INTERPHONE study 30 and 1865 population-based controls with no past history of any malignancy, ascertained through the National Study of Colorectal Cancer Genetics. 31 The Danish-replication series comprised 115 cases (ICD-O 9530-9537) from the INTERPHONE study and 411 controls with no past history of cancer, ascertained through the Danish Central Population Registry. Replication genotyping of UK and Danish samples was performed using allele-specific PCR KASP chemistry (LGC). Primers are detailed in Supplementary Table S3. Thirty-four samples were excluded from the UK-replication series for having 3 or more failed calls. Call rates for each genotyped SNP were >98% in the remaining UK samples. Six samples were excluded from the Danish-replication series due to the failed call of the genotyped SNP.

Sequencing

To assess the fidelity of imputation of rs7124615, a subset of 126 cases and 56 controls from the German-GWAS series, selected to be enriched for the presumptive T allele, were sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) in conjunction with ABI 3700xl semi-automated sequencers (Applied Biosystems). We did not detect the presence of the T allele in any of the samples. The SNP rs7124615 maps to a highly repetitive region, suggesting that this SNP may be incorrectly annotated to this region. Primer sequences are detailed in Supplementary Table S3.

Heritability Analysis

We used genome-wide complex trait analysis (GCTA) to estimate the heritability ascribed to the genotyped SNPs across all autosomes and each individual autosome. 32 SNPs were excluded based on a high missing rate (>5%), low MAF (<0.01), or evidence of deviation from HWE (P < 0.05). Individuals identified as being closely related were also excluded. Restricted maximum likelihood analysis was run using a genetic relationship matrix for each pair of samples. The lifetime risk of meningioma was used to transform the estimated heritability to the liability scale, as previously advocated when calculating the heritability of common lethal diseases such as cancer. 33 The lifetime risk of brain and nervous system tumors is 0.62%, 34 meningioma accounts for 36% of primary brain tumors, 35 and we therefore estimated the lifetime risk of meningioma to be 0.224%. We followed the methodology of Yang et al. 36 to adjust for incomplete linkage disequilibrium between the genotyped and causal SNPs at a range of MAF thresholds between 0.1 and 0.5. Heritability was estimated for the German and USA series individually, and a meta-analysis of the results was completed under a fixed-effects model. We additionally used the phenotype correlation-genotype correlation (PCGC) regression method to estimate the heritability ascribed to the genotyped SNPs across all autosomes, 37 using the genetic relationship matrix and lifetime risk estimate that were used with GCTA. We adjusted for population structure when estimating heritability using the GCTA and PCGC regression approaches by including as covariates the first 2 and 3 principal components for the German and USA series, respectively. Estimates of the individual variance in risk associated with meningioma risk SNPs were carried out using the method described in Pharoah et al. 38
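The lifetime-risk figure above is simple arithmetic (0.62% × 36% ≈ 0.224%), and the observed-to-liability-scale conversion it feeds is a standard closed form for ascertained case-control data (Lee et al. 2011, the approach GCTA implements). The sketch below reproduces both; the observed-scale heritability value is a placeholder, not the study's estimate.

```python
from scipy.stats import norm

# Lifetime risk (population prevalence K) of meningioma, as derived above:
k = 0.0062 * 0.36              # = 0.002232, i.e., ~0.224%

# Observed-to-liability-scale transformation (Lee et al. 2011):
#   h2_l = h2_o * K^2 (1-K)^2 / (z^2 * P(1-P)),
# where z is the standard normal density at the liability threshold and
# P is the proportion of cases in the sample.
h2_obs = 0.10                  # hypothetical observed-scale estimate
p = 834 / (834 + 2103)         # case proportion (German-GWAS, as an example)
threshold = norm.isf(k)        # liability threshold for prevalence K
z = norm.pdf(threshold)        # normal density at the threshold
h2_liab = h2_obs * k**2 * (1 - k)**2 / (z**2 * p * (1 - p))

print(f"K = {k:.5f}, h2 on the liability scale = {h2_liab:.3f}")
```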
Expression Quantitative Trait Loci Analysis

Publicly available data from 47 tissues from the Genotype-Tissue Expression (GTEx) project 39 v7 release were used to examine the relationship between SNP genotype and gene expression. We set a significance threshold for the expression quantitative trait loci (eQTL) analysis of P < 2.01 × 10−5, corresponding to a Bonferroni correction for 2491 tests (53 genes across 47 tissues).

Summary-Level Mendelian Randomization Analysis

To examine the relationship between meningioma risk loci and gene expression, we performed a summary-level Mendelian randomization (SMR) analysis, as per Zhu et al. 40 Briefly, GWAS summary statistics files were generated from the meta-analysis. Reference files were generated using data from the 1000 Genomes Project (Phase 3) and UK10K. As previously advocated, only probes with at least one eQTL P-value of <5.0 × 10−8 were considered for SMR analysis. We set a threshold for the SMR test of PSMR < 1.01 × 10−4, corresponding to a Bonferroni correction for 496 tests (496 probes with a top eQTL P < 5.0 × 10−8 across 47 tissues). For the HEIDI (heterogeneity in dependent instruments) test, P-values <0.05 were taken to indicate significant heterogeneity.

Data Availability

Genotype data from GERA are available from dbGaP (accession phs000674.v2.p2). The 1000 Genomes Project and UK10K imputation panel data are available from the European Genome-phenome Archive (accession EGAD00001000776). The remaining data are available from the authors upon request.

Association Analysis

We analyzed GWAS SNP data passing quality control for 1606 cases and 9823 controls of European ancestry from 2 studies: a previously reported GWAS of 834 cases and 2103 controls (German-GWAS) 10 and a new GWAS of 772 cases and 7720 controls (USA-GWAS) (Supplementary Tables S1 and S4). To increase genomic resolution, we used data from the 1000 Genomes Project and UK10K to impute >9 million SNPs. Q-Q plots for SNPs with a MAF >1% post imputation did not show evidence of substantive overdispersion (λ between 0.99 and 1.04; Supplementary Figure S1). We computed joint odds ratios and 95% CIs under a fixed-effects model for each SNP and associated per allele principal component corrected P-values for all cases versus controls from the 2 series (Fig. 1, Supplementary Figure 2). The strongest association was provided by SNP rs530000334 (P = 1.41 × 10−11), which maps to the previously identified risk locus at 10p12.31 (Fig. 1). Excluding the poorly imputed SNP rs7124615 at 11p15.5, no other association was genome-wide significant. We sought independent validation of promising associations (ie, P < 10−5) at 10 loci where support was provided by SNPs in linkage disequilibrium (r² > 0.5 and P < 10−3) by genotyping additional case-control series from the UK and Denmark (Supplementary Table S2). In a combined analysis of the GWAS and replication datasets for these select SNPs, the only genome-wide significant association was shown by rs2686876, also at 11p15.5 (P = 9.86 × 10−9; Table 1, Fig. 2, Supplementary Table S2). The BFDP for this association was 1.8%, thereby supporting the robustness of the association. At both 11p15.5 and 10p12.31, a conditional analysis of SNP genotypes provided no evidence for additional independent signals at either risk locus. Most meningiomas (>80%) are WHO grade I tumors, with the remainder grade II (atypical, 15%) and grade III (anaplastic) meningioma; 41 males are more likely than females to have atypical or aggressive lesions. We assessed the relationship between 11p15.5 genotype and WHO grade, sex, and age at diagnosis by case-only analysis. WHO grade was not available for all USA-GWAS, UK-replication, and Danish-replication cases, and therefore the WHO grade case-only analysis was restricted to the German-GWAS cases. Case-only analyses of sex and age at diagnosis were conducted in all series. These analyses provided no evidence for association between rs2686876 and WHO grade, sex, or age at diagnosis, consistent with a generic effect of genotype on meningioma risk (Supplementary Table S5). A number of genes localize to the region of linkage disequilibrium encompassing rs2686876 (Fig. 3). They include RIC8A, a homolog of C. elegans Ric8/synembryn that encodes a highly conserved G protein regulator. Intriguingly, RIC8A plays a central role in the development of neural crest-derived structures including the meninges. 42 To gain insight into the biological basis underlying the 11p15.5 association, we first evaluated each of the risk SNPs as well as the correlated variants (r² > 0.8) using the online resources HaploReg v4, 43 RegulomeDB, 44 and SeattleSeq 45 for evidence of functional effects (Supplementary Table S6). These data revealed active chromatin states overlapping SNPs correlated with rs2686876. We explored whether there were any associations between rs2686876 genotype and the transcript levels of genes within 1 Mb using eQTL data on 47 tissues generated by the GTEx project 39 (Supplementary Table S7). After accounting for multiple testing (53 genes across 47 tissues; P < 2.01 × 10−5), significant eQTLs for ANO9 were observed in brain caudate basal ganglia (P = 8.30 × 10−7) and brain putamen basal ganglia (P = 2.58 × 10−6), for BET1L in esophagus mucosa (P = 9.03 × 10−6), and for PSMD13 in brain anterior cingulate cortex (P = 1.36 × 10−5). ANO9 upregulation has been observed in colorectal cancer 46 and has been associated with poor prognosis in pancreatic cancer. 47 The rs2686876 meningioma risk allele was, however, conversely associated with lower ANO9 expression at the 2 eQTLs. While the risk allele of rs2686876 is associated with higher RIC8A expression at nominal significance levels (P < 0.05) in 15 of the 47 tissues, the associations were not significant after correction for multiple testing.
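The multiple-testing thresholds quoted above are plain Bonferroni corrections, reproducible in a couple of lines:

```python
# Bonferroni thresholds used for the eQTL and SMR analyses in the text.
alpha = 0.05
eqtl_tests = 53 * 47   # 53 genes x 47 GTEx tissues = 2491 tests
smr_tests = 496        # probes with a top eQTL P < 5e-8 across tissues

print(f"eQTL threshold: {alpha / eqtl_tests:.3g}")  # ~2.01e-05
print(f"SMR threshold:  {alpha / smr_tests:.3g}")   # ~1.01e-04
```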
We used SMR analysis to test for concordance between signals from the GWAS and cis-eQTLs for genes within 1 Mb of the sentinel and correlated SNPs (r² > 0.8) at the 11p15.5 locus, and derived bXY statistics, which estimate the effect of gene expression on meningioma risk (Supplementary Table S8). After accounting for multiple testing, the SMR analysis failed to provide overwhelming evidence to implicate a specific gene.

Discussion

We have provided the first evidence implicating variation at 11p15.5 as a determinant of meningioma risk. To our knowledge this is only the second study, and the largest, to robustly associate common genetic variation as a risk factor for meningioma. Although functional studies will be required, dysregulation of RIC8A provides an attractive a priori basis for the 11p15.5 association. RIC8A has an essential role in the development of the mammalian central nervous system, maintaining the integrity of the pial basement membrane and modulating cell division. 42 Intriguingly, conditional Ric8a knockout mice have been reported to exhibit defects in meningeal layer formation. 42 Thus far, variation at only 2 loci has been robustly shown to affect meningioma risk. 10 To estimate the potential heritability of meningioma attributable to all common variation, we applied GCTA 32 and PCGC regression 37 to the GWAS datasets (Supplementary Table S9). Combining data from the 2 GWAS indicates that the heritability associated with common variation is 27.9% (±4.4%). The identification of risk variants at 11p15.5 provides further evidence for common genetic variation influencing meningioma risk and suggests the involvement of specific genes in tumor development. Since variation at 10p12.31 and 11p15.5 accounts for only ~4% of the familial risk of meningioma (Supplementary Table S10), it is likely that further risk variants for meningioma will be identified through additional and larger GWAS.

Supplementary Material

Supplementary material is available at Neuro-Oncology online.

Funding

This work was supported by Cancer Research UK (C1298/A8362, supported by the Bobby Moore Fund), which provided principal funding for the study in the UK. The UK and Danish INTERPHONE studies were supported by the European Union Fifth Framework Program "Quality of Life and Management of Living Resources" (contract number QLK4-CT-1999-01563) and the Union for International Cancer Control (UICC). The UICC received funds for this purpose from the Mobile Manufacturers' Forum and the Global System for Mobile Communications Association. Provision of funds via the UICC was governed by agreements that guaranteed INTERPHONE's complete scientific independence. These agreements are publicly available at http://www.iarc.fr. The UK centers were also supported by the Mobile Telecommunications and Health Research Programme, and the Northern UK center was supported by the Health and Safety Executive, the Department of Health, and the UK Network Operators (O2, Orange, T-Mobile, Vodafone, and "3"). The German-GWAS made use of genotyping data from the population-based HNR study. The HNR study is supported by the Heinz Nixdorf Foundation (Germany). Additionally, the study is funded by the German Ministry of Education and Science and the German Research Council (Projects SI 236/8-1, SI 236/9-1, ER 155/6-1). The genotyping of the HNR subjects was financed by the German Centre for Neurodegenerative Disorders, Bonn.
Restoration of coastal ecosystems as an approach to the integrated mangrove ecosystem management and mitigation and adaptation to climate changes in north coast of East Java

Climate change is fundamental, and its effects are now apparent across the earth. It has become an issue that humans must face today and in the future. One of the impacts of climate change can be found in coastal areas, where tsunamis and tidal floods occur repeatedly. Mangrove forests are one means of countering the sea level rise that contributes to tsunamis, erosion, and tidal flooding. This study aims to determine public awareness of the occurrence of tidal flooding and tsunami and to find an easy and inexpensive way to overcome them. The research integrates the partial least squares (PLS) approach and the coastal vulnerability index (CVI) approach applied to mangrove forests. The results showed that community awareness of, and support for, mangrove forest restoration to overcome disasters caused by climate change must be managed and handled with a co-management approach.

Introduction

Climate change due to global warming has altered rainfall intensity and duration, temperature fluctuations, wind and tropical storm frequency, and other climatic phenomena (Seneviratne et al. 2012; Sofian and Nahib 2010; Trenberth 2011). Climate change has altered nature, and the future risks for humans include prolonged suffering (Otto et al. 2017; McMichael 2012). Therefore, we need immediate, quick, and large-scale actions to reduce emissions, because average global temperatures are predicted to reach or pass the 1.5 °C warming threshold within 20 years (Frölicher et al. 2018; King and Karoly 2017). The impacts of global warming on human life include increasing and expanding drought, the spread of diseases such as malaria, increasing storm frequency, sea level rise, effects on agricultural production, heat waves, forest fires, destruction of marine ecosystems, and animal extinction (Ortiz-Bobea et al. 2021; Alig et al. 2011; Wents 2016). Coastal regions are vulnerable to sea level rise, which potentially endangers them (Mcleod et al. 2010). This condition will bring social, economic, and cultural impacts (Stephens et al. 2018; Yan et al. 2016). Climate change affects manufactured infrastructures and coastal ecosystems in coastal areas and causes catastrophes, such as coastline erosion, coastal flooding, and water pollution. These issues have become a concern in many countries. Coping with the additional pressures of climate change may require a new approach to managing land, water, waste, and coastal ecosystems (Mandal et al. 2021; Toimil et al. 2020). Therefore, many countries create innovations to cope with the impacts of climate change in coastal areas (Hsin-Ning et al. 2017; Kaspersen et al. 2016). The ICCSR (2010) reports that Indonesia's sea level will increase by 10 to 50 cm, with an average increase of 25 to 30 cm, by 2050. Meanwhile, the IPCC (2019) reports that sea level increases by an average of 0.86 cm per year. The leading causes of rising sea levels are thermal expansion of the ocean and the melting of glaciers and ice sheets in polar regions (ICCSR 2010). Oceans absorb around 90% of the excess heat trapped in the atmosphere by greenhouse gases; this warms the seawater and causes it to expand, so seawater volumes increase. Warming will also melt the glaciers and ice sheets in the Arctic; thus, the amount of water in the oceans will increase (Lindsey 2021).
Sea level rise (SLR) escalates and worsens the frequency of extreme sea levels (ESLs), leading to beach flooding. Global mean sea level (GMSL) is a function of global mean surface temperature (GMST). Therefore, temperature stabilization targets have essential implications for the risks of coastal flooding; for example, the 1.5 °C and 2.0 °C of warming above the pre-industrial level mentioned in the Paris Agreement (Rasmussen et al. 2018). To date, few studies have investigated the impacts of climate change on shoreline change, for two reasons. First, shoreline data are often inadequate or cannot be resolved temporally to analyze the dynamics of coastlines. Second, relative sea levels along coastlines are generally known only in areas that have a tide gauge. These two challenges can increasingly be overcome because mutually complementary observations of shoreline change and geodetic measurements have multiplied. Cozannet et al. (2014) make the same points: understanding the dynamics of coastlines requires shoreline data that are frequently insufficient or cannot be resolved temporally, and sea level data along the coast are generally unknown because there are only a few tide gauges; this problem can be solved as observations of shoreline change increase and mutually complement one another. Differing interpretations of the role of sea level rise in recent coastline change highlight the need for specific studies that rely on local observations and models applicable in the local geomorphological context. Zacharioudaki and Reeve (2011) state that the current climate scenario and future projections show statistically significant changes to wave climate conditions. For the future emissions scenario, the most notable change occurs during the late summer, from medium to high fluctuations, and during the late winter, from medium to low fluctuations. Finally, the critical points for coastal management are observing significant shoreline changes in the future wave direction and comparing them with wave height fluctuations. Sofian (2010) explains that the increase in sea surface temperature (SST) in the Indonesian sea varies from −0.01 °C/year to +0.04 °C/year; the highest increasing trend occurs on the north coast of Papua Island and the lowest on the south coast of Java Island. The decrease of SST on the south coast of Java Island does not persist in the long term; it is probably caused by growing upwelling off the southern coast of Java due to the increasing frequency of El Niño (Sofian 2010). Sea level rise changes current patterns, increases erosion, changes shorelines, and reduces wetland areas along the coast. Ultimately, wetland ecosystems in coastal areas may be damaged if sea level rise and sea surface temperature exceed the maximum adaptation capacity of marine biota. The SST is predicted to increase by 0.6 °C to 0.7 °C by 2030 and to reach 1 °C to 1.2 °C by 2050; these numbers are relative to the average SST in 2000. Meanwhile, the SST will rise by 1.6 °C to 1.8 °C by 2080 and reach 2 °C to 2.3 °C by 2100. Compared with paleoclimate SST data from the Western Pacific Ocean, this indicates that the SST in 2050 will be the highest in 150,000 years.
In addition, sea level rises along with increasing SST, owing to thermal expansion and the additional water from melting glaciers in Greenland and Antarctica. The potential increase in sea level follows the rising temperature and the melting ice (Sofian 2010). Climate change affects Indonesian cities and can potentially sink coastal areas because the land surface is declining. Land subsidence often occurs in the coastal lowlands of Indonesia. The Road Map research (2019) revealed that 21 provinces and 132 districts or cities in Indonesia are indicated to experience subsidence, particularly in coastal areas. Therefore, coastal lowlands need subsidence mitigation and adaptation. Dobben et al. (2012) state that vegetation in coastal areas is expected to change due to sea level rise; these changes can be interpreted as a loss of diversity that reduces common species but increases rare species of extreme habitats. However, Dobben et al. (2012) did not discuss the role of mangrove vegetation in preventing sea level rise from advancing toward the mainland. Mangrove vegetation is currently shrinking due to anthropogenic processes. In fact, the density of mangrove vegetation must be improved to protect coastal areas from abrasion. Whidayanti et al. (2021) support this opinion and state that the more extensive and denser the mangrove vegetation in a region, the lower the abrasion rate will be; conversely, if the area and density are low, abrasion will likely become greater. Xiaoxu et al. (2016) argue that community awareness and human vulnerability to the potential health impacts of climate change are active agents: humans can control the effective use of technology and resources, community awareness, and health effects by adopting proactive measures, including a better understanding of climate change patterns and their effects on health. Brown et al. (2020) state that global mangrove forests have experienced fragmentation and that Indonesia is one of the countries with high rates of deforestation due to land conversion. Cinco-Castro and Herrera-Silveira (2020) state that well-conserved mangroves have low vulnerability and are in good health despite their high sensitivity, whereas mangroves affected by human activities are more vulnerable in terms of sensitivity and adaptive capacity. In the research area, the mangrove forest is degraded because it is heavily influenced by human activities. Thus, mangrove restoration is an option to restore a healthier coastal environment. This research on mapping public awareness of disaster was conducted to observe public perception and appraisal of disasters more comprehensively and to analyze shoreline change. In this way, the research could address the impacts of climate change by restoring coastal areas and mangrove forests as a "bodyguard" and as an integrated effort to manage coastal ecosystems, draft mitigation, and adapt to climate change. This study was conducted on the north coast of East Java Province, in an area repeatedly inundated by water due to tidal flooding and tsunami. BMKG (2020) also reported a similar recurring condition and predicted that coastal flooding (rob) would occur on May 27-28, 2020. Sea tides, high waves, and high rainfall can affect the dynamics of coastal regions in Indonesia, such as the south coastal region of East Java, and trigger coastal flooding (rob).
BMKG explains that these conditions can disrupt transportation around harbors and coasts, the activities of salt farmers and inland fisheries, and loading and unloading activities in ports. The south coastal area of Java experiences a more severe impact of tidal waves and flooding: hundreds of buildings, such as houses, gazebos, stalls, beach slopes, and buildings on the coast, have been damaged, and in Lumajang 300 children and women were displaced. This study aimed to determine public awareness of disasters and to map the vulnerability of coastal areas that require immediate, simple, and inexpensive management due to climate change. The results will be used to develop mitigation and adaptation strategies utilizing a mangrove forest restoration approach. Coastal ecosystem restoration is a comprehensive concept and approach to overcoming the degradation of coastal ecosystems and the ecosystems interconnected with them. This approach is the basis for restoring damaged (micro) mangrove forests. Integrated restoration of mangrove ecosystems is a method of restoring mangrove forests using principles of scientific integration that include the PLS model and the CVI model, the latter assessing variables of geology, geomorphology, elevation/altitude, shoreline change, relative sea level rise, mean tidal range, and significant wave height. The combination of the two models produces a mitigation and adaptation model aligned with the Regulation of the Minister of Environment and Forestry of the Republic of Indonesia Number P.33/Menlhk/Secretariat/Kum.1/3/2016 concerning the Guidelines for the Preparation of Climate Change Adaptation. Therefore, this study composed mitigation and adaptation models using a co-management-based cooperation approach in the coastal areas of Lamongan and Gresik Regencies, which have a coastline of 187 km. These areas are prone to tidal flooding and tsunami. CNN Indonesia (2021) and kompas.com (2020) report that floods frequently submerge the coastal areas of Lamongan and Gresik, and flood puddles have become increasingly widespread. Therefore, this study mapped public appraisal and awareness of disasters due to tidal flooding and classified and identified the susceptibility of coastal areas.

Materials and methods

This research was conducted in the coastal areas of Lamongan and Gresik Regencies, as shown in Fig. 1. The Central Bureau of Statistics of Lamongan Regency (2020) states that, astronomically, Lamongan Regency is located between 6°51′54″ and 7°23′6″ south latitude and between 112°4′41″ and 112°33′12″ east longitude. Geographically, Lamongan shares borders with the Java Sea in the north, Gresik Regency in the east, Jombang and Mojokerto Regencies in the south, and Bojonegoro and Tuban Regencies in the west. Lamongan Regency covers 1,812.8 km², or 3.78% of the area of East Java Province. Lamongan Regency has 47 miles of coastline and 902.4 km² of marine area, calculated to 12 miles offshore (Tables 1 and 2). The Central Bureau of Statistics of Gresik Regency (2020) describes Gresik Regency as located between 112° and 113° east longitude and between 7° and 8° south latitude. It shares borders with the Java Sea in the north, Sidoarjo Regency in the south, Lamongan Regency in the west, and the Madura Strait in the east. Almost one-third of Gresik's territory is coastal, consisting of the coastline along Kebomas District and parts of Gresik, Manyar, Bungah, and Ujungpangkah Districts.
This study focused on eliciting the opinions and judgments of the public in the coastal areas of Lamongan and Gresik Regencies, which are exposed to tidal flooding and inundation every year. Public opinion and appraisal were employed as the basis of cooperation among stakeholders. It is also necessary to map vulnerable coastal areas using the coastal vulnerability index (CVI) to facilitate and direct stakeholders to the areas requiring direct handling. This research interviewed respondents in the two regencies, as shown in Table 3. The study mapped the opinions and appraisals of 129 respondents using the PLS method, with a list of questions referring to five goals: hazard or natural disaster assessment, vulnerability assessment, capacity assessment, resource management in a disaster situation, and risk analysis. Table 4 shows the list of questions answered by one respondent. Based on these questions, this research composed the PLS structural model of the Lamongan and Gresik coastal areas, as presented in Fig. 2. The structural model emerged from SEM modeling using SmartPLS 3.0. It corresponds to the inner model, that is, the structural model used to predict causal relationships between latent variables (variables that cannot be measured directly), which is built based on the substance of the theory. The software used was SmartPLS 3. The coastal vulnerability index (CVI) method was applied by assessing variables of geology, geomorphology, elevation/altitude, shoreline change, relative sea level rise, mean tidal range, and significant wave height; these variables strongly affect changes in coastal regions. Determining the CVI parameters was necessary to counter threats of damage to coastal areas and to formulate mitigation strategies and action plans to minimize the impacts of coastal damage (Pendleton et al. 2005; Thieler and Hammar-Klose 1999; Gornitz et al. 1994; Shah et al. 2013). The data used to analyze the CVI comprised the CVI of Lamongan Regency and the CVI of Gresik Regency (Fig. 3; Tables 5 and 6).
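In the formulation used by the references cited above (Thieler and Hammar-Klose 1999; Gornitz et al. 1994), each variable is first ranked on an ordinal vulnerability scale, and the index is the square root of the product of the ranks divided by the number of variables. A minimal sketch, assuming the common 1-5 ranking and six variables; the ranks for the grid cell below are hypothetical.

```python
import math

# Coastal Vulnerability Index in the Gornitz / Thieler & Hammar-Klose form:
#   CVI = sqrt((a*b*c*d*e*f) / n)
# Each variable is ranked on an ordinal scale (commonly 1 = very low to
# 5 = very high vulnerability). Ranks below are hypothetical for one cell.

def cvi(ranks):
    return math.sqrt(math.prod(ranks) / len(ranks))

cell_ranks = {
    "geomorphology": 3,
    "erosion_accretion": 4,
    "mean_wave_height": 2,
    "coastal_slope": 3,
    "tidal_range": 2,
    "sea_level_rise": 4,
}
print(f"CVI = {cvi(list(cell_ranks.values())):.2f}")
# The resulting values are then classified, e.g., into moderate,
# vulnerable, and very vulnerable categories.
```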
Results of the PLS analysis

The PLS software operation revealed that the construct correlation between the assessment of hazards or natural disasters and its indicators is higher than the correlation between the hazard/natural disaster assessment indicators and other indicators. The construct correlation between vulnerability assessment and its indicators is higher than that between the vulnerability assessment indicators and other indicators. The construct correlation between capacity assessment and its indicators is higher than the correlation between the capacity assessment indicators and other indicators. The construct correlation between resource management in a disaster condition and its indicators is higher than that between its indicators and other indicators. Similarly, the construct correlation between risk analysis and its indicators is higher than the correlation between the risk analysis indicators and other indicators. These findings show that the latent constructs predict the indicators in their own blocks better than indicators in other blocks. Based on Table 7, several points can be concluded:
1. The output results show that the AVE value of each construct is greater than 0.5. The constructs of hazard/natural disaster assessment, vulnerability assessment, capacity assessment, risk analysis, and resource management in a disaster situation were good models. Therefore, it was estimated that all constructs in the model met the discriminant validity criteria.
2. Composite reliability is considered significant if its value is above 0.70.

The assessment variables (the questionnaire items listed in Table 4) were as follows.

A. Assessment Variables of Hazards or Natural Disasters
A.1.1 Disasters occurring in these areas are combined effects of natural disasters (e.g., landslides of soil slopes due to heavy rains) and disasters due to human activities (e.g., logging of mangrove trees, reclamation, agricultural planting, and mining).
A.1.2 Conflicts in these areas are due to human activities, such as pond development, mining excavation, and other activities destroying mangrove forests in coastal areas.
A.1.3 Natural disasters in coastal areas, such as flood puddles, flash floods, or flooding, are due to high rainfall.
A.1.4 Flood hazards cause many people to suffer from diarrhea, skin diseases, and other diseases.
A.1.5 The local government institutions dealing with disasters have documented floods.
A.1.6 The officers record the danger of floods and directly observe the field.
A.1.7 The local authorities identify causes of floods by explaining the frequency, seasons, geographical regions of disasters, and cyclical or seasonal weather systems.
A.1.8 A flood with a quick or slow flow will spoil any flooded objects.
A.1.9 Floods in the past were more severe than those today.
A.1.10 Recent floods have much greater physical impacts on infrastructures.
A.1.11 It is necessary to identify trends in the occurrence of floods, so that changes in frequency, season, location, and intensity patterns are identifiable and well-informed decisions on programming can be applied.
A.1.12 Local government necessarily estimates the frequency and probability of rain and floods considering return periods.
A.1.13 Floods in the past were more severe than those today.
A.1.14 Earthquakes will probably increase due to releasing energy or climate change.

B. Vulnerability Assessment Variables
During the disaster, women in a family play an essential role in protecting children and the elderly and maintaining the health and nutrition of the family, including physically disabled family members.
B.1.12 When a disaster occurs, people outside the disaster area assist.
B.1.13 When a disaster occurs, each individual receives different impacts.
B.1.14 When a disaster occurs, the poor usually are affected the most.

C. Capacity Assessment Variables
C.1.1 Disasters do not cause significant damage to life or property because they occur in an area without inhabitants.
C.1.2 Before a disaster occurs, the government informs the community to leave a disaster area.
C.1.3 Before a disaster occurs, the community has taken actions to prevent or reduce the damaging impacts of disasters.
C.1.4 When a disaster occurs, not all people in a disaster area have identical suffering.
C.1.5 People who have known of the emergence of a disaster can immediately save themselves and their property.
C.1.6 The local government has a policy to guide the community during a disaster to reduce the damaging effects of dangers and secure sustainable livelihoods.
C.1.7 The government could handle the previous disasters by counseling the community before a disaster occurs (the local government's reduction strategy).
C.1.8 The local government is experienced in analyzing which resources will be affected by a disaster to reduce the risks.
C.1.9 The local government anticipates a disaster by providing the various needs required by the affected community and determining which institution will be responsible for delivering and controlling food.
C.1.10 The local government has a policy and strategy to reduce disaster risks on the community and increase their ability to cope with disasters.
C.1.11 The local government anticipates a disaster by training and providing counseling to the community, so that the community can adjust to disasters occurring in the future.
C.1.12 The government trains the community by providing information about disaster prevention or mitigation.
C.1.13 The local government gives aid, such as rice, social cash assistance, equipment, employment, etc.
C.1.14 The community can handle or control all types of emerging threats, live normally, have adequate food and clean water, and receive better health services to prevent any disease.
C.1.15 After the disaster, the community was assisted by the police, army, and local government officials to buy materials and equipment to rebuild houses destroyed by the disaster.
C.1.16 Social organizations help communities confront, resist, and deal with possible threats in the future.
C.1.17 Many social organizations or NGOs help the community during the disaster.
C.1.18 Local social institutions that care about disaster provide much physical and non-physical assistance to the community.
C.1.19 These social institutions support people affected by disasters to realize their abilities and have the self-confidence to deal with the crisis more significantly, so that they can have control over an event and the power to change their conditions and become invulnerable to any threat.
C.1.10 The local government has a policy and strategy to reduce disaster risks to the community and to increase their ability to cope with disasters.
C.1.11 The local government anticipates a disaster by training and counseling the community, so that the community can adjust to disasters occurring in the future.
C.1.12 The government trains the community by providing information about disaster prevention or mitigation.
C.1.13 The local government gives aid, such as rice, social cash assistance, equipment, employment, etc.
C.1.14 The community can handle or control all types of emerging threats, live normally, have adequate food and clean water, and receive better health services to prevent disease.
C.1.15 After the disaster, the community was assisted by the police, army, and local government officials in buying materials and equipment to rebuild houses destroyed by the disaster.
C.1.16 Social organizations help communities confront, resist, and deal with possible threats in the future.
C.1.17 Many social organizations or NGOs help the community during the disaster.
C.1.18 Local social institutions that care about disasters provide much physical and non-physical assistance to the community.
C.1.19 These social institutions support people affected by disasters in realizing their abilities and building the self-confidence to deal with the crisis, so that they can gain control over events and the power to change their conditions and become invulnerable to threats.
The R-squared results indicated that the modelled variables explained 51.1% of the variance in the dependent variables, while other factors influenced the remaining 48.9%. The variable of the danger or natural disaster assessment was an independent variable affecting the dependent variables; therefore, it does not have an R-squared value. The outer loadings for the variable of the danger or natural disaster assessment confirmed that the 14 indicators had outer loadings greater than 0.7 with P-values < 0.05. This finding shows that the 14 indicators of this variable met convergent validity and significantly measured the variable of the danger or natural disaster assessment. Indicator A1.13 had an outer loading of 0.907, and indicator A1.2 had an outer loading of 0.803. The outer loadings for the vulnerability assessment variable confirmed that its 14 indicators had outer loadings greater than 0.7 with P-values < 0.05. This finding shows that the 14 indicators of the vulnerability assessment variable met convergent validity and significantly measured the vulnerability assessment variable. Indicator B1.8 had an outer loading of 0.935, and indicator B1.1 had an outer loading of 0.895 (Table 8). Table 8 supports several conclusions about the respondents in Lamongan and Gresik: 1. The assessment of hazards or natural disasters was very high. 2. The vulnerability assessment was very high. 3. The capacity assessment was very high. 4. The resource management in a disaster situation was very high.
Results of the CVI analysis
The results of the CVI analysis were divided into the CVI of Lamongan Regency and the CVI of Gresik Regency. This division aimed to determine the vulnerability of the coastal areas in each regency. A grid had to be created to determine the CVI values of the two regencies; it was created with a cell size of 5 x 5 km, forming 21 cells across the two regencies. This step simplified the analysis of vulnerability levels in the coastal areas of Lamongan and Gresik Regencies.
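A minimal sketch of the per-cell CVI calculation may be useful here. It follows the square-root-of-the-product formulation of Thieler and Hammar-Klose (1999) cited above; the function name and the example scores are ours, for illustration only:

```python
import math

def coastal_vulnerability_index(ranks):
    """CVI of one grid cell from six ranked variables
    (geomorphology, erosion/accretion, average wave height,
    coastal slope, tidal range, sea level rise), each scored
    on an ordinal scale, e.g. 1 (very low) to 5 (very high)."""
    if len(ranks) != 6:
        raise ValueError("expected six ranked variables")
    return math.sqrt(math.prod(ranks) / 6.0)

# One hypothetical 5 x 5 km cell with illustrative scores:
print(round(coastal_vulnerability_index([4, 3, 2, 3, 4, 5]), 2))  # -> 15.49
```

Cells are then binned into the vulnerability categories of Table 9 according to their CVI values.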
Table 9 summarizes the vulnerability categories and the score weights of the CVI variables. Based on this table, the CVI map was compiled and is shown in Fig. 4, with cell divisions G1-G7 for the coastal regions of Lamongan and G8-G21 for the coastal regions of Gresik. Figure 5 shows that the CVI of each cell was measured based on the six criteria in Table 9: geomorphology, erosion/accretion, average wave height, coastal slope, tidal range, and sea level rise. The results are presented in the maps shown in Figs. 6, 7, 8, 9, 10, and 11. Based on the calculated CVI, the figures group the regions into three vulnerability levels: vulnerable, moderate, and very vulnerable. The coastal areas of Lamongan and Gresik overall had a moderate vulnerability level. Meanwhile, the calculation results (Tables 8 and 10) revealed that cell 16 of the Gresik coast had a high vulnerability level. Almost all coastlines of Gresik and Lamongan had a moderate vulnerability level. Although the majority of the areas of Gresik and Lamongan fell in the moderate category, disasters occurred there sporadically.
Adaptation and mitigation strategies
The adaptation and mitigation strategies for the coastal areas of Lamongan and Gresik were prepared by considering local wisdom and referring to the Regulation of the Minister of Environment and Forestry of the Republic of Indonesia Number P.33/Menlhk/Secretariat/Kum.1/3/2016 concerning the Guidelines for the Preparation of Climate Change Adaptation. Article 9, point 2 of this regulation states that the determination of priority actions for climate change adaptation referred to in paragraph (1) must consider the following points: 1. Coverage of regions and/or sectors associated with climate risks, 2. The area of regions and/or sectors affected by climate change, 3. Resources needed, 4. Potential constraints in implementing climate change adaptation, 5. Benefits from implementing climate change adaptation, 6. Period of the benefits of climate change adaptation, 7. Acquiring investment benefits of climate change adaptation, and 8. Institutional capacity to implement climate change adaptation.
Table 10 summarizes several occurrences in the coastal areas of Lamongan and Gresik, pairing each changing situation with the resources needed and the period of the benefits of climate change adaptation (long-term, 25 years; medium-term, 10 years; medium-term, 5 years; short-term, 1 year). 1. The changing situation will threaten human life and the survival of flora and fauna: a) the sea becomes warmer due to declining nutrient levels in the mesopelagic zone, restricting the growth of diatoms rather than phytoplankton, which affects marine biodiversity and is very harmful to coral reefs; b) high rainfall results in high inundation and may carry water directly to the sea without storing it in basins used as clean-water sources; c) sea level rise, tides, and uncertain rainy seasons increase the frequency and intensity of floods and inundation; d) animal habitats change with temperature, humidity, and primary productivity, so several animals migrate to find new and appropriate habitats; and e) bird migration will change because seasons, wind speed, wind direction, nutrient-bearing ocean currents, and fish migration change. 2. Public health is disturbed. 3. Seawater intrusion pollutes the quality and quantity of the water supply for the community.
4. The micro-climate becomes difficult to predict under climate change. 5. The quality and quantity of the water supply for coastal communities decline due to seawater intrusion. 6. Coastal ecosystems and habitats on land and in the sea are endangered due to sea level rise. 7. Flooding and inundation emerge frequently due to high rainfall. 8. Coastal land subsidence occurs. 9. Fish stocks are impaired due to seawater acidification. 10. Ocean currents change due to changes in air pressure and an increase in temperature. Therefore, the public, local governments, and private parties must cooperate to protect the beach naturally. Beach protection by building jetties, groins, breakwaters, and seawalls should be reconsidered, because their development and operation are costly. This study therefore proposed a natural and cheap approach performed jointly by all parties, namely the restoration of mangrove ecosystems as a "bodyguard" protecting the beach from changing conditions due to climate change. Research on climate change mitigation and adaptation shows that mangrove forests are essential coastal ecosystems and play a major role in human life. Mangrove forests maintain biodiversity, serve as a nursery for many marine and coastal species, and support fisheries. Mangrove forests also play an important role in supporting coastal communities against extreme weather events, such as hurricanes, by stabilizing the shoreline and slowing down or reducing soil erosion. Newton et al. (2011) found that mangrove forest restoration could help cope with climate change; mangrove ecosystems are thus closely related to climate change. Moreover, healthy mangroves in coastal areas can increase coastal communities' resilience to climate change and minimize the impacts of natural disasters, such as tsunamis, storms, and waves (an adaptive function). It is recommended that a restoration plan first examine potential pressures, such as blocked tidal flows that prevent secondary succession, and plan to eliminate the stress before attempting restoration (Hamilton and Snedaker 1984; Cintron-Molero 1992). First, the government should address mangrove forests in damaged coastal villages as a key to coastal restoration. Second, the village government then forms a team consisting of society, government, and private sectors. Finally, stakeholders work using the co-management approach (Priyono et al. 2017). Thus, the village government shapes the institutional aspects by considering the village's character and using the co-management approach. The CVI data signify that mangrove forests along the entire Lamongan-Gresik coast should be restored, whether the areas are currently categorized as low or high vulnerability. Mangrove forest restoration is put forward as the solution for protecting coastal and critical areas. Coastal land conservation can readily be done by society in cooperation with the local government and private sectors.
Conclusion
The impacts of climate change in the coastal areas of Lamongan and Gresik can be addressed by adaptation and mitigation efforts using mangrove forest restoration. The PLS analysis concluded several points: a) the assessment of hazards/natural disasters was classified as very high; b) the vulnerability assessment was classified as very high; c) the community's capacity assessment was very high; d) the resource management in a disaster situation was very high; and e) the risk analysis was very high.
This research implies that people living in the coastal areas of Lamongan and Gresik are highly aware of the dangers of disasters due to climate change. The awareness and assessment of communities and the local government are pivotal to coping with future disasters. To date, society still "surrenders" to emerging disasters, and the local government acts to overcome a disaster only after it occurs. The local government's ability to anticipate and face disasters must be improved using the available data; priority areas require particular attention. The CVI results show that the coastal areas in Lamongan and Gresik requiring attention extend over 129 ha, and the highest flooding priority is in cell 16 in Gresik. Due to limited funds and personnel, this research did not investigate the lower-priority categories. Further research should therefore sharpen the utmost-priority and priority categories based on the results of village discussions, which the local government should initiate to prevent disasters in these areas. The proposed approach is applied as a solution to protect coastal areas, critical land, and land conservation by restoring mangrove forest ecosystems. The types, structures, and autecology of the mangroves should follow the original vegetation, adjusted to the coastal land structures and textures.
Physical structure and CO abundance of low-mass protostellar envelopes
We present 1D radiative transfer modelling of the envelopes of a sample of 18 low-mass protostars and pre-stellar cores with the aim of setting up realistic physical models, for use in a chemical description of the sources. The density and temperature profiles of the envelopes are constrained from their radial profiles obtained from SCUBA maps at 450 and 850 micron and from measurements of the source fluxes ranging from 60 micron to 1.3 mm. The densities of the envelopes within ~10000 AU can be described by single power-laws r^{-p} for the class 0 and I sources with p ranging from 1.3 to 1.9, with typical uncertainties of +/- 0.2. Four sources have flatter profiles, either due to asymmetries or to the presence of an outer constant density region. No significant difference is found between class 0 and I sources. The power-law fits fail for the pre-stellar cores, supporting recent results that such cores do not have a central source of heating. The derived physical models are used as input for Monte Carlo modelling of submillimeter C18O and C17O emission. It is found that class I objects typically show CO abundances close to those found in local molecular clouds, but that class 0 sources and pre-stellar cores show lower abundances by almost an order of magnitude, implying that significant depletion occurs in the early phases of star formation. While the 2-1 and 3-2 isotopic lines can be fitted using a constant fractional CO abundance throughout the envelope, the 1-0 lines are significantly underestimated, possibly due to a contribution of ambient molecular cloud material to the observed emission. The difference between the class 0 and I objects may be related to the properties of the CO ices.
Introduction
In the earliest, deeply-embedded stage a low-mass protostar is surrounded by a collapsing envelope and a circumstellar disk through which material is accreted onto the central star, while the envelope is dissipated simultaneously through the action of the powerful jets and outflows driven by the young star. Traditionally, young stellar objects (YSOs) have been classified according to their spectral energy distributions (SEDs) in the class I-III scheme (Lada, 1987; Adams et al., 1987), describing the evolution of YSOs from the young class I sources to the more evolved pre-main sequence class III sources. This classification scheme was further expanded by André et al. (1993) to include sources that mainly radiate at submillimeter wavelengths (i.e., with high ratios of their submillimeter and bolometric luminosities, Lsubmm/Lbol), and it was suggested that these so-called class 0 sources correspond to the youngest deeply embedded protostars. Even earlier in this picture of low-mass star formation, the starless cores of Myers et al. (1983) and Benson & Myers (1989) are good candidates for pre-stellar cores, i.e., dense gas cores that are on the brink of collapse and so lead to the class 0 and I phases. The Shu (1977) model predicts that the outer parts of the envelope follow a ρ ∝ r^-2 density profile similar to the solution for an isothermal sphere (Larson, 1969), while within the collapse radius, which is determined by the sound speed in the envelope material and the time since the onset of the collapse, the density tends to flatten, approaching a ρ ∝ r^-1.5 profile in the innermost parts.
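For reference, the inside-out collapse density structure just described can be written compactly as a broken power law, with $r_{\mathrm{inf}}$ denoting the collapse (infall) radius set by the sound speed and the time since the onset of collapse:

\[
\rho(r) \propto
\begin{cases}
r^{-3/2}, & r \ll r_{\mathrm{inf}},\\
r^{-2}, & r > r_{\mathrm{inf}}.
\end{cases}
\]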
This model has subsequently been refined to include, for example, rotational flattening (Terebey et al., 1984), where such effects are described as a perturbation to the Shu (1977) solution. An open question about the properties of YSOs in the earliest protostellar stage is how the structure of the envelope reflects the initial protostellar collapse and how it will affect the subsequent evolution of the protostar, for example in defining the properties of the circumstellar disk from which planets may later be formed. One of the possible shortcomings of the Shu model is the adopted constant accretion rate. Foster & Chevalier (1993) performed hydrodynamical simulations of the stages before the protostellar collapse and found that the structure of the core at the point where collapse is initiated is highly dependent on the initial conditions; only in the case where a large ratio exists between the radius of an outer envelope with a flat density profile and an inner core with a steep density profile will the core evolve to reproduce the conditions in the Shu model. In an analytical study, Henriksen et al. (1997) suggested that the accretion history of protostars could be divided into two phases for cores with a flat inner density profile: a violent early phase with high accretion rates (corresponding to the class 0 phase) that declines until a phase with mass accretion rates similar to the predictions of the Shu model is reached (class I objects), i.e., a distinction between class 0 and I objects based on age. Whether this is indeed the case has recently been questioned by Jayawardhana et al. (2001), who instead suggest that both class 0 and I objects are protostellar in nature but are associated with environments of different physical properties, with the class 0 objects in denser environments leading to the higher accretion rates observed towards these sources. The chemical composition of the envelope may be an alternative tracer of the evolution. Indeed, for high-mass YSOs, combined infrared and submillimeter data have shown systematic heating trends reflected in the ice spectra, gas/ice ratios and gas-phase abundances (e.g., Gerakines et al., 1999; Boogert et al., 2000; van der Tak et al., 2000a). One of the prime motivations for this work is to extend similar chemical studies to low-mass objects, and extensive (sub-)millimeter line data for a sample of such sources are being collected at various telescopes, which can be complemented by future SIRTF infrared data. In order to address these issues, the physical parameters of the envelope are needed, in particular the density and temperature profiles and the velocity field. The first two can be obtained through modelling of the dust continuum emission observed towards the sources, while observations of molecules like CO and CS can trace the gas component and the velocity field. At the densities found in the inner parts of the envelopes of YSOs it is reasonable to expect coupling between gas and dust, so that, with the canonical dust-to-gas ratio of 1:100, the dust and gas temperatures can be assumed to be similar. Therefore a physical model for the envelopes derived on the basis of the dust emission can be used as input for modelling the abundances of the various molecules. Recently, Shirley et al. (2000) and Motte & André (2001) have undertaken surveys of the continuum emission of low-mass protostars - using respectively SCUBA (at 450 and 850 µm) and the IRAM bolometer at 1.3 mm.
Both groups analyzed the radial intensity profiles (or brightness profiles) of the individual sources, assuming that the envelopes are optically thin, in which case the temperature follows a power-law dependence on radius in the Rayleigh-Jeans limit. Assuming that the underlying density distribution is also a power-law (i.e., of the type ρ ∝ r^-α), one can then derive a relationship between the radial intensity observed in continuum images and the envelope radius, which will also be a power-law with an exponent depending on the power-law indices of the density and temperature distributions. Both groups find that the data sets are consistent with α in the range 1.5-2.5, in agreement with previous results and the model predictions. However, as both groups also note, in cases where the assumption of an optically thin envelope breaks down, the temperature distribution, and hence the derived density distribution, may not be correctly described in this approach. To further explore these properties of the protostellar envelopes we have undertaken full 1D radiative transfer modelling of a sample of protostars and pre-stellar cores (see Sect. 2.1) using the radiative transfer code DUSTY (Ivezić & Elitzur, 1997). Assuming power-law density distributions, we solve for the temperature distribution and constrain the physical parameters of the envelopes by comparing the results of the modelling to SCUBA images of the individual sources and their spectral energy distributions (SEDs) using a rigorous χ2 method. Besides giving a description of the physical properties of low-mass protostellar envelopes, the derived density and temperature profiles are essential as input for detailed chemical modelling of molecules observed towards these objects. Also, a good description of the envelope structure is needed to constrain the properties of the disks in the embedded phase (e.g., Keene & Masson, 1990; Hogerheijde et al., 1998, 1999; Looney et al., 2000). In Sect. 2 our sample of sources is presented and the reduction and calibration briefly discussed (see also Schöier et al., 2002). In Sect. 3 the modelling of the sources is described and the derived envelope parameters are presented. The properties of the individual sources are described in Sect. 3.4. In Sect. 4 the implications of these results are discussed and compared to other work in this field. The results of the continuum modelling will be used in a later paper as physical input for detailed radiative transfer modelling of molecular line emission for the class 0 objects - as has been done for class I objects (Hogerheijde et al., 1998) and high-mass YSOs (van der Tak et al., 2000b), and as presented for the low-mass class 0 object IRAS 16293-2422 (Ceccarelli et al., 2000a,b; Schöier et al., 2002). In Sect. 5 the first results of this radiative transfer analysis for C18O and C17O are presented.
The sample
In addition to the class 0 objects, the sample includes the pre-stellar cores L1689B and L1544, also from Shirley et al. (2000), and two class I objects, L1489 and TMR1, taken from Hogerheijde et al. (1997, 1998). To enlarge the class I sample, L1551-I5, TMC1A and TMC1 were included as well. The physical properties of these three sources were modelled using the same approach as for the remainder of the sources, based on SCUBA archive data. They were, however, not included in the JCMT line survey, so the line modelling (Sect. 5) was mainly based on data presented in the literature, in particular Hogerheijde et al. (1998) and Ladd et al. (1998).
For the class 0 objects we have adopted luminosities and distances from André et al. (2000), for the class I objects the values from Motte & André (2001), and for the pre-stellar cores and CB244 the distances and luminosities from Shirley et al. (2000). There are a few exceptions, however: for the objects related to the Perseus region we assume a distance of 220 pc and scale the luminosities from André et al. (2000) accordingly, while a distance of 325 pc is assumed for L1157, as in Shirley et al. (2000). The sample is summarized in Table 1. The class 0 object IRAS 16293-2422 treated in Schöier et al. (2002) has been included for comparison as well.
Submillimeter continuum data
Archive data obtained with the Submillimetre Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii, were adopted as the basis for the analysis. Using the 64-bolometer array in jiggle mode, it is possible to map a hexagonal region with a size of approximately 2.3′ simultaneously at, e.g., 450 µm and 850 µm. It is also possible to combine jiggle maps with various offsets to cover a larger region. For the initial reduction of the data, the package SURF (Jenness & Lightfoot, 1997) was used, following the description in Sandell (1997). The individual maps were extinction corrected with measurements of the sky opacity τ obtained at the Caltech Submillimeter Observatory (CSO), using the relations from Archibald et al. (2000) to convert the CSO 225 GHz opacity into estimates of the sky opacity at 450 µm and 850 µm. The sky opacities can also be estimated using skydips, and in cases where these were obtained, the two methods agreed well. Most of the sources were observed in the course of more than one program and on multiple days, so wherever possible available data obtained close in time were used, coadding the images to maximize the signal-to-noise and the field covered. In the coadding, it is possible to correct for variations in the pointing by introducing a shift for each image, found by, e.g., fitting gaussians to the central source. We chose, however, not to do this, because only minor corrections were found between the individual maps. Two sources, L1157 and CB244, only had usable data at 850 µm (see also Shirley et al., 2000), so supplementary data for these two sources were obtained in October 2001 at 450 µm (see Sect. 2.3). For each source the flux scale was calibrated using available data for one of the standard calibrators, either a planet or a strong submillimeter source like CRL618. From the calibrated maps the total integrated fluxes were derived, and the 1D brightness profiles were extracted by measuring the flux in annuli around the peak flux. The annuli were chosen with radii of half the beam (4.5′′ for the 450 µm data and 7.5′′ for the 850 µm data), so that a reasonable noise level is obtained while still making the annuli narrow enough to get information about the source structure without oversampling the data. In fact, the spread in the fluxes measured for the points in each annulus due to instrumental and calibration noise was negligible compared to the spread due to (1) the gradient in brightness across each annulus and (2) deviations from circular structure of the sources. One problem in extracting the brightness profiles was presented by cases where nearby companions contributed significantly when complete circular annuli were constructed - the most extreme example being N1333-I4 with two close protostars.
In these cases emission from "secondary" components was blocked out by simply not including data points in the direction of these nearby sources when calculating the mean flux in each annulus.
SCUBA observations of L1157 and CB244
The observations of L1157 and CB244 were obtained on October 9th, 2001. Calibrations were performed by observing Mars and the secondary calibrator CRL2688 immediately before the observations. Skydips were obtained immediately before the series of observations (all obtained within 3 hours), giving values for the sky opacity of τ450 = 1.2 and τ850 = 0.23, which agree well with the sky opacities estimated at the CSO during that night. From gaussian fits to the central source, the conversion factor from the V onto the Jy beam−1 scale (Cλ) was estimated; it is summarized in Table 2 together with the beam size θmb, also estimated from the gaussian fit to the calibration source. For Mars the estimate of the beam size was obtained by deconvolution with the finite extent of the planet, while CRL2688 was assumed to be a point source (Sandell, 1994). The derived parameters for L1157 and CB244 are given in Table 3. Images of the two sources at the two SCUBA wavelengths are presented in Fig. 1; the contours indicate the intensities corresponding to 2σ, 4σ, etc., with σ being the RMS noise given for each source and wavelength in Table 3. As seen from the figure, both sources are quite circular, with only a small degree of extended emission. Comparison with the 850 µm data of Shirley et al. (2000) for the 40′′ aperture shows that the fluxes agree well within the 20% uncertainty assumed for the calibration.
The line observations were obtained using a switch of 180′′ in declination - except for the sources in NGC1333, where position switching towards an emission-free reference position was used. A more detailed description of the JCMT and the heterodyne receivers can be found on the JCMT homepage. Where archive data were available for one line from several different projects, the data belonging to each observing program were reduced individually and the results compared, giving an estimate of the calibration uncertainty of the data of 20%. The integrated line intensities were found by fitting gaussians to the main line. In some cases outflow or secondary components were apparent in the line profiles, leading to two-gaussian fits. For the C17O J = 1−0 and J = 2−1 lines, the hyperfine splitting was apparent, giving rise to two separate lines for the J = 1−0 transition separated by about 5 km s−1, while the J = 2−1 main hyperfine lines are split by less (0.5 km s−1), giving rise in some cases to line asymmetries. In these cases the quoted line intensities are the total intensity including all hyperfine lines. The integrated line data were brought from the antenna temperature scale T*A to the main-beam brightness scale Tmb by dividing by the main-beam brightness efficiency ηmb, taken to be 0.69 for data obtained using the JCMT A-band receivers (210-270 GHz; the J = 2−1 transitions) and 0.59-0.63 for respectively the old B3i (before December 1996) and new B3 receivers (330-370 GHz; the J = 3−2 transitions). For the IRAM 30m observations, beam efficiencies Beff of 0.74 and 0.54 and forward efficiencies Feff of 0.95 and 0.91 were adopted for respectively the C17O J = 1−0 and J = 2−1 lines, which corresponds to main-beam brightness efficiencies (ηmb = Beff/Feff) of respectively 0.78 and 0.59. For the Onsala 20m telescope ηmb = 0.43 was adopted for the C18O and C17O J = 1−0 lines.
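A minimal sketch of this temperature-scale conversion, using the efficiencies quoted above (the function and dictionary names are ours, for illustration only):

```python
def to_main_beam(t_a_star, eta_mb):
    """Convert antenna temperature T_A* to main-beam brightness
    temperature T_mb by dividing by the main-beam efficiency."""
    return t_a_star / eta_mb

# Main-beam efficiencies quoted in the text:
ETA_MB = {
    "JCMT_A":   0.69,         # 210-270 GHz receivers, J = 2-1 lines
    "JCMT_B3":  0.63,         # 330-370 GHz, new B3 (0.59 for the old B3i)
    "IRAM_1-0": 0.74 / 0.95,  # B_eff/F_eff ~ 0.78 for C17O J = 1-0
    "IRAM_2-1": 0.54 / 0.91,  # B_eff/F_eff ~ 0.59 for C17O J = 2-1
    "ONSALA":   0.43,         # C18O and C17O J = 1-0
}

t_mb = to_main_beam(0.5, ETA_MB["JCMT_A"])  # e.g. T_A* = 0.5 K -> ~0.72 K
```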
The relevant beam sizes for the JCMT are 21′′ and 14′′ at 220 and 330 GHz respectively, for the IRAM 30m 22′′ and 11′′ at 112 and 224 GHz respectively, and for the Onsala 20m 33′′. The velocity resolution ranged from 0.1-0.3 km s−1 for the JCMT data and was 0.05 and 0.1 km s−1 for the observations of the C17O J = 1−0 and J = 2−1 transitions, respectively, at the IRAM 30m. The line properties are summarized later, in Sect. 5.
Input
To model the physical properties of the envelopes around these sources, the 1D radiative transfer code DUSTY (Ivezić et al., 1999) was used. The dust grain opacities from Ossenkopf & Henning (1994), corresponding to coagulated dust grains with thin ice mantles at a density of nH2 ∼ 10^6 cm−3, were adopted. These were found by van der Tak et al. (1999) to be the only dust opacities that could reproduce the "standard" dust-to-gas mass ratio of 1:100 by comparison with C17O measurements for warm high-mass YSOs where CO is not depleted. Using a power-law to describe the density leaves five parameters to fit, as summarized in Table 4. Not all five parameters are independent, however: the temperature at the inner boundary, T1, determines the inner radius of the envelope, r1, through the luminosity of the source. If the outer radius of the envelope, r2, is expected to be constant, Y = r2/r1 will depend on the value of r1, i.e., on T1. The results are, however, not expected to depend on r1 if it is chosen small enough, since the beam does not resolve the inner parts anyway. Therefore T1 is simply set to 250 K, a reasonable temperature considering the chemistry observed towards these sources. The temperature of the central star also has to be fixed: a temperature of 5000 K is chosen. This temperature is of course mostly unknown for the embedded sources, but due to the optical thickness of the envelope most of the radiation from the central star is reprocessed by the dust, and thus the temperature of the central star does not play a critical role in, e.g., the resulting SED. Although these parameters may not seem the most straightforward choice, one of the advantages of DUSTY is its scale-free nature, allowing the user to run a large sample of models and then compare a number of YSOs to these models just by scaling with distance and luminosity, as discussed in Ivezić et al. (1999).
Output
DUSTY provides fluxes at various wavelengths and brightness profiles for the sources, which are compared to the SCUBA data and flux measurements. Given the grid of models, the best fit model can then be determined by calculating the χ2 statistics for the SED and the brightness profiles at 450 and 850 µm (χ2SED, χ2450 and χ2850, respectively). In order to fully simulate the observations, the modelled brightness profiles are convolved with the exact beam as obtained from planet observations. Strictly speaking, the outer parts of the brightness profile also depend on the chopping of the telescope. The chopping along one axis by its nature does not obey spherical symmetry, so simulating the chopping and comparing this with one-dimensional modelling would not reflect the observations. Therefore in this analysis only the inner 60′′ of the brightness profiles are considered, which (1) should be less sensitive to the typical 120′′ chop and (2) is typically above the background emission.
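A sketch of the beam treatment just described - turning a model brightness profile into an image, convolving it with the beam measured from planet maps, and re-extracting the azimuthally averaged profile in annuli, as done for the observed maps. The grid sizes, function names and the toy power-law model are ours, for illustration only:

```python
import numpy as np
from scipy.signal import fftconvolve

def beam_convolved_profile(model_radial, beam_image, npix=257, pixscale=1.0):
    """Build a 2D image from a 1D model brightness profile I(r),
    convolve it with a normalized 2D beam map on the same pixel
    scale, and re-extract the azimuthally averaged profile in
    annuli of one pixel width."""
    c = npix // 2
    y, x = np.indices((npix, npix))
    r = np.hypot(x - c, y - c) * pixscale
    image = model_radial(np.maximum(r, pixscale))  # avoid the r = 0 divergence
    conv = fftconvolve(image, beam_image, mode="same")
    bins = np.arange(0.0, c * pixscale, pixscale)
    idx = np.digitize(r.ravel(), bins)
    prof = np.array([conv.ravel()[idx == i].mean() for i in range(1, len(bins))])
    return bins[1:], prof

# Toy power-law intensity profile and a boxcar stand-in for the beam:
radii, prof = beam_convolved_profile(lambda r: r ** -1.0,
                                     beam_image=np.ones((31, 31)) / 31 ** 2)
```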
For the flux measurements a relative uncertainty of 20% was assumed, irrespective of what was given in the original reference, since some authors give only statistical errors and do not include calibration or systematic errors. By assigning a relative uncertainty of 20% to all measurements, each point is weighted equally, but more weight is given to a given part of the SED if several independent measurements exist around a certain wavelength. Contour plots of the derived χ2 values for L483-mm are presented in Fig. 2, while the actual fits to the brightness profiles and the SED for this source are shown in Figs. 3 and 4. In determining the best fit model, each of the calculated χ2 values is considered individually. The total χ2 obtained by adding χ2SED, χ2850 and χ2450 does not make sense in a strictly statistical way, since the observations entering these quantities are not fully independent. Another reason for not combining the values of χ2SED, χ2450 and χ2850 into one total χ2 is that the parameters constrained by the SED and brightness profiles are different. For example, the brightness profiles provide good constraints on α, as seen in a 2D contour (Y, α) plot of, e.g., χ2450 in Fig. 2, while they do not depend critically on the value of τ100. The most characteristic feature of the χ2 values for the SEDs, on the other hand, is the band of possible models in contour plots for (α, τ100), giving an almost one-to-one correspondence to a best fit τ100 for each value of α. These features are easily understood: χ2450 and χ2850 refer to the normalized profiles and should thus not depend directly on the value of τ100. On the other hand, since the peaks of the SEDs are typically found at wavelengths longer than 100 µm, increasing τ100, and thus the flux at this wavelength, requires the best-fit SED to shift towards 100 µm, i.e., with less material in the outer cool parts of the envelope, which can be obtained with a steeper value of α. The value of Y is less well constrained, mainly because of its relation to the temperature at the inner radius (and, through that, the luminosity of the central source). As illustrated in Fig. 2, Y is constrained within a factor 1.5-2.0 at the 2σ level, but the question is how physical the outer boundary of the envelope is: is a sharp outer boundary expected, or rather a soft transition as the density and temperature in the envelope reach those of the surrounding molecular cloud? In the first case, a clear drop in the observed brightness profile should be seen compared with a model with a (sufficiently) large value of Y, e.g., corresponding to an outer temperature of 5-10 K, less than the temperature of a typical molecular cloud. In the second case, however, such a model will be able to trace the brightness profile all the way down to the noise limit. The modelling of the CO lines (see Sect. 5) indicates that significant ambient cloud material is present towards most sources, so the transition from the isolated protostar to the parental cloud is likely to be more complex than described by a single power-law. The features of the values of χ2SED, χ2450 and χ2850 suggest an obvious strategy for selecting the best fit models: first the best fit value of α is selected on the basis of the brightness profiles, and second the corresponding value of τ100 is selected from the χ2SED contour plots.
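A minimal sketch of the χ2 bookkeeping described above, assuming the model fluxes have been precomputed on the (α, Y, τ100) grid (the container and function names are ours, for illustration):

```python
import numpy as np

def chi2_sed(f_obs, f_mod, rel_err=0.2):
    """Chi-squared of a model SED against the observed fluxes,
    assigning the uniform 20% relative uncertainty adopted in
    the text to every flux point."""
    f_obs, f_mod = np.asarray(f_obs, float), np.asarray(f_mod, float)
    return float(np.sum(((f_obs - f_mod) / (rel_err * f_obs)) ** 2))

def best_fit(f_obs, model_grid):
    """`model_grid` maps (alpha, Y, tau100) -> model fluxes at the
    observed wavelengths (hypothetical, precomputed with DUSTY).
    In practice alpha is fixed first from the brightness profiles
    and tau100 is then read off the SED chi-squared, as in the text."""
    return min(model_grid, key=lambda p: chi2_sed(f_obs, model_grid[p]))
```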
For a few sources there is not 100% overlap between the 450 and 850 µm brightness profiles, and it is not clear which brightness profile is better. The beam at 850 µm is significantly larger than that at 450 µm (15′′ vs. 9′′), and even though the beam is taken into account explicitly, the sensitivity of the 850 µm data to variations in the density profile must be lower, as is seen from Fig. 2. On the other hand, the 450 µm data generally suffer from higher noise, so especially the weak emission from the envelope, which provides the better constraints on the outer parts of the envelope and thus the power-law exponent, will be more doubtful at this wavelength. The power-law slopes found from modelling the two brightness profiles agree, however, within the uncertainties (α ∼ ±0.2).
Fig. 2. χ2 contour plots for the modelling of L483-mm. In the four upper panels τ100 is fixed at 0.2, in the four middle panels α is fixed at 0.9, while in the four lower panels Y is fixed at 1400. The solid (dark) contours indicate the confidence limits corresponding to 1σ, 2σ, etc.
Results
In Table 5 the fitted values of the three parameters for each source are presented, and in Table 6 the physical parameters obtained by scaling according to source distance and luminosity are given. As is obvious from the χ2 plots in Fig. 2, these parameters have some associated uncertainties. The value of α is typically determined to within ±0.2, leading to a similar uncertainty in τ100 of ±0.2. The minimum value of Y gives a corresponding minimum value of the outer radius. As discussed above, increasing Y only corresponds to adding more material beyond the outer boundary, so if Y is large enough to encompass the point where the temperature T reaches 10 K, the radius corresponding to this temperature can be used as a characteristic size of the envelope. The region of the envelopes within this radius corresponds to the inner 40-50′′ of the brightness profiles for all the sources. If one were to increase the radius further, the typical 120′′ chop throw for SCUBA should be taken into account when comparing the brightness profiles from the models with the observations. This could lead to flatter density profiles, with α decreased by ∼0.2 (e.g., Motte & André, 2001). Although the models presented here do not extend that far, one should still be aware of the possibility that emission is picked up at the reference position, which would lead to an overestimate of the steepness of the density distribution. The fitted brightness profiles and SEDs for all sources are presented in Figs. 5, 6 and 7. The derived power-law indices are for most sources in agreement with the predictions from the inside-out collapse model of α = 1.5−2.0. A few YSOs (esp. L1527, L483, CB244 and L1448-I2) have density distributions that are flatter, but as argued in Sect. 4.2, this can be explained by the fact that these sources seem to have significant departures from spherical symmetry. Apart from these sources, there is no apparent distinction between the class 0 and class I sources in the sample, as shown in the plot of power-law slope versus bolometric temperature in the upper panel of Fig. 8.
Fig. 8. Power-law slope (upper panel) and envelope mass (lower panel) versus the bolometric temperature of the sources. Class 0 objects, class I objects and pre-stellar cores are marked by different symbols; VLA1623 and IRAS 16293-2422 have been singled out with respectively "•" and "⋆". The mass of the pre-stellar cores in the lower panel is the mass of the Bonnor-Ebert sphere adopted for the line modelling.
There is, however, a significant difference in the masses derived for these types of objects, with the class 0 objects having significantly more massive envelopes (lower panel of Fig. 8). This is to be expected, since the bolometric temperature measures the redness (or coolness) of the SED, i.e., the amount of envelope material. Finally, there is no dependence of either mass or power-law slope on distance, which strengthens the validity of the derived parameters.
Individual sources
For some of the sources the derived results are uncertain for various reasons, e.g., the interpretation of their surrounding environment. These cases, together with other interesting properties of the sources, are briefly discussed below.
Table 6. Results of the DUSTY modelling - derived physical parameters.
N1333-I2, -I4: A major factor of uncertainty in the determination of the parameters for the sources associated with the reflection nebula NGC1333 is the distance of these sources. Values ranging from 220 pc (Černis, 1990) to 350 pc (Herbig & Jones, 1983) have been suggested. The latter determination of the distance assumes that NGC 1333 (and the associated dark cloud L1450) is part of the Perseus OB2 association, which recent estimates place at 318 ± 27 pc (de Zeeuw et al., 1999). The first value is a more direct estimate based on the extinction towards the cloud. N1333-I2 can be modelled using either distance - but the 10 K radius becomes rather large, i.e., 19000 AU, in the case of an assumed distance of 350 pc, while the distance of 220 pc leads to a radius of 11000 AU that is more consistent with those of the other sources. N1333-I4 is one of the best-studied low-mass protostellar systems, both with respect to the molecular content (Blake et al., 1995) and in interferometric continuum studies (Looney et al., 2000). On the largest scales the entire system is seen to be embedded in a single envelope, but going to progressively smaller scales shows that both N1333-I4A and N1333-I4B are multiple in nature (Looney et al., 2000). The small separation between the two sources can cause problems when interpreting the emission from the envelopes of each of the sources. On the other hand, the small-scale binary components of N1333-I4A and N1333-I4B should each be embedded in common envelopes and can at most introduce a departure from spherical symmetry.
L1448-C, -I2: The L1448 cloud reveals a complex of 4 or more class 0 objects, with L1448-C (or L1448-mm) and its powerful outflow together with the binary protostar L1448-N being well studied (e.g., Barsony et al., 1998). Also the recently identified L1448-I2 (O'Linger et al., 1999) shows typical protostellar properties. The dark cloud L1448 itself is a member of the Perseus molecular cloud complex, for which we adopt a distance of 220 pc (see above). Note, however, that Černis (1990) mentions the possibility of a distance gradient across the cloud complex, so that larger distances may be appropriate for the L1448 objects.
VLA 1623: As the first "identified" class 0 object, VLA 1623 is often discussed as a prototype class 0 object. It is, however, not well suited for discussions of the properties of these objects because of its location close to a number of submillimeter cores (e.g., Wilson et al., 1999). This makes it hard to extract and model the properties of this source and might explain why it has been claimed to have a very shallow density profile of ρ ∝ r^-0.5 (André et al., 1993) or a constant density outer envelope (Jayawardhana et al., 2001).
If the emission in the three quadrants towards the other submillimeter cores is blocked out when creating the brightness profiles, it is found that VLA 1623 can be modelled with an almost "standard" density profile with α = 1.4, although with rather large uncertainties.
L1527: L1527 is remarkable for its rather flat envelope profile with α ≈ 0.6. It was one of the sources for which Chandler & Richer (2000) estimated the relative contributions of the disk and envelope to the total flux at 450 and 850 µm and found that the envelope contributes more than 85%, which justifies our use of the images at these wavelengths to constrain the envelope. On the other hand, Hogerheijde et al. (1997) estimate the disk contribution at 1.1 mm in a 19′′ beam to be between 30 and 75% of the continuum emission (≈50% for L1527) for a sample of mainly class I objects, so it is evident that possible disk emission is a factor of uncertainty in the envelope modelling. Disk emission would contribute to the fluxes of the innermost points on the brightness profiles and so lead to a steeper density profile.
L723-mm: The most characteristic feature of L723-mm is the quadrupolar outflow originating in the central source, which has led to the suggestion that the central star is a binary (Girart et al., 1997).
L483-mm: L483-mm is a good example of a central source with an asymmetry that nevertheless provides an excellent fit to the brightness profile with the simple power-law (see discussion in Sect. 4.2). The source seems to be located in a flattened filament showing up clearly in the SCUBA maps (e.g., Shirley et al., 2000) and in integrated NH3 emission (Fuller & Wootten, 2000).
L1157-mm: L1157-mm is not as well known for the protostellar source itself as for its bipolar outflow, where a large enhancement of chemical species like CH3OH, HCN and H2CO is seen (e.g., Bachiller & Pérez Gutiérrez, 1997). As a result, the SED of the protostar itself is rather poorly determined. Our images in Fig. 3 show a source slightly extended in the east-west direction and with a second object showing up south of the source in the 850 µm data. Recently Chini et al. (2001) reported a similar observation of the source and added that the southern feature is also seen in the 1.3 mm data in the direction of the CO outflow from L1157, suggesting an interaction between the outflowing gas and the circumstellar dust.
CB244: CB244 is the only protostar of our sample not included in the table of André et al. (2000). Launhardt et al. (1997) found that this relatively isolated globule indeed has a high relative submillimeter flux, Lsubmm/Lbol ≥ 2%, qualifying it as a class 0 object. It is, however, probably close to the boundary between the class 0 and I stages: Saraceno et al. (1996) found that it falls in the area of the class I objects in an Lbol vs. Fmm diagram.
L1489: Of the two class I sources in our main sample, L1489 has recently drawn attention with the suggestion that the central star is surrounded by a disk-like structure rather than the "usual" envelope for class 0/I objects (Hogerheijde & Sandell, 2000; Hogerheijde, 2001). Hogerheijde & Sandell examined the SCUBA images of this source in comparison with the line emission, with the purpose of testing the different models for the envelope structure - especially the Shu (1977) infall model. They found that L1489 could not be fitted in the inside-out collapse scenario if both the SCUBA images and spectroscopic data are modelled simultaneously.
Instead they suggested that L1489 is an object undergoing a transition from the class I to II stages, exposing a 2000 AU disk whose velocity structure is revealed through high resolution HCO+ interferometer data (Hogerheijde, 2001). That we can actually fit the SCUBA data neither proves nor disproves this result. Hogerheijde & Sandell in fact remark that it is possible to fit the continuum data alone, but that this would correspond to an unrealistically high age for this source. The modelling of L1489 is slightly complicated by a nearby submillimeter condensation - presumably a pre-stellar core - which has to be blocked out, leading to an increase in the uncertainty of the fit to the brightness profile.
TMR1: This is a more standard class I object, showing a bipolar nebulosity in the infrared corresponding to the outflow cavities of the envelope (Hogerheijde et al., 1998).
Power law or not?
The first simplistic assumption in the above modelling is (as in other recent works, e.g., Shirley et al. 2000; Chandler & Richer 2000; Motte & André 2001) that the density distribution can be described by a single power law. This is not in agreement with even the simplest infall model, but given the observed brightness profiles of the protostars it is tempting to simply approximate the density distribution with a single power law. As an example, Shirley et al. (2000) used this approach citing the results of Adams (1991): if the density distribution can be described by a power-law and the beam can be approximated by a gaussian, then the resulting brightness profiles will also be power-laws, so a power-law fit to the outer parts of the brightness profile will directly reflect the density distribution. This approach is, however, sensitive to noise in the data and to the choice of which parts of the brightness profile to consider. On the other hand, our 1D modelling clearly shows that the data do not warrant more complicated fits and that the power-law adequately describes the profiles of the sources. Modelling of the detailed line profiles will require more sophisticated infall models, since signatures of infall exist for around half of the class 0 sources in the sample (André et al., 2000). However, Schöier et al. (2002) show that for the case of IRAS 16293-2422, adopting the infall model of Shu (1977) does not improve the quality of the fit to the continuum data. Chandler & Richer (2000) assumed that the envelopes were optically thin. In this case, the temperature profile can be shown to be a simple power-law as well, and Chandler & Richer subsequently derived analytical models which could be fitted directly to the SCUBA data. For the sources included in both samples, the derived power-law indices agree within the uncertainties. However, as illustrated in Fig. 9, the optically thin assumption for the temperature distribution is not valid in the inner parts of the envelope, especially for the more massive class 0 sources, so actual radiative transfer modelling is needed to establish the temperature profile, crucial for calculations of the molecular excitation and chemical modelling. Disk emission can contribute to the fluxes of the innermost points on the brightness profiles and thus lead to a steeper density profile. This is likely to be more important for the sources with the less massive envelopes, i.e., class I objects. In tests where the fluxes within the innermost 15′′ of the brightness profiles were reduced by 50%, the best fit values of α were reduced by 0.1-0.2.
This is comparable to the uncertainties in the derived value of α, but can introduce a systematic error. It is interesting to note that there is no clear trend in the slope of the density profile with the type of object. In the framework of the inside-out collapse model, one would expect a flattening of the density profiles, approaching 1.5 as the entire envelope undergoes the collapse. This is not seen in the data - actually the average density profile for the class I objects is slightly steeper than for the class 0 objects. On the other hand, an outer envelope with a flat density distribution and a significant fraction of material, as suggested by Jayawardhana et al. (2001), can also not be confirmed by this modelling. As seen from the fits in Figs. 5 and 6, the brightness profiles only suggest departures from the single power-law fits in the outer regions in a few cases, so if such a component is present, it is not traced directly by the SCUBA maps. The slightly flatter density profiles for the class 0 objects could be a manifestation of such an outer component - but the density distributions of the sources in this sample are typically much steeper (i.e., ρ ∝ r^-3/2) than those modelled by Jayawardhana et al. (ρ ∝ r^-1/2).
Fig. 9. The temperature profiles for four selected sources, with temperature profiles calculated in the Rayleigh-Jeans limit under the optically thin assumption overplotted for different dust opacity laws, κν ∝ ν^β. The dashed line corresponds to β = 1 and the dash-dotted line to β = 2.
Geometrical effects
As mentioned above, it was assumed in the modelling that the sources are spherically symmetric. This may not be entirely correct considering the structure of YSOs, which may be rotating and are permeated by magnetic fields leading to polar flattening (Terebey et al., 1984; Galli & Shu, 1993), and which are definitely associated with molecular outflows and jets (Bachiller & Tafalla, 1999; Richer et al., 2000). One problem in this discussion is the exact shape of the error beam of SCUBA. Typically an error lobe pickup of 15% in the 850 µm images and 45% in the 450 µm maps is estimated. This error beam is not completely spherically symmetric, but is also not well-established, so in fact 1D modelling may be the best that can be done using the SCUBA data. Myers et al. (1998) investigated the consequences of departures from spherical symmetry of an envelope when calculating the bolometric temperature of YSOs seen under various inclination angles. With a cavity in the polar region, roughly corresponding to the effect of a bipolar outflow, they found that the bolometric temperature could increase by a factor 1.3-2.5 for a typical opening angle of 25°. A similar line of thought can be applied to our modelling: a departure from spherical symmetry through a thinner polar region will affect the determination of the SED - a source viewed more pole-on will reveal warmer material, which leads to an SED shifted towards shorter wavelengths and accordingly a lower value of τ100. Myers et al. also argued that the effect on a statistical sample would be rather small, e.g., compared to differences in optical depth, but for studies of individual sources like ours this effect might be of importance. The brightness profile will also change in the aspherical case.
In the case of a source viewed edge-on this would result in elliptically shaped SCUBA images, with the 850 µm data showing a more elongated structure, since the lower optical depth material in the polar regions would reveal warmer material with stronger emission at 450 µm, compensating for the lack of material in these images. If we consider the case where the source is viewed entirely pole-on, the image would still appear circular, but the brightness profiles would show a steeper increase towards the center in the 450 µm data (closer to the spherical case) than in the 850 µm data, for the same reasons as mentioned above. The good fits to the brightness profiles, given the often non-circular nature of the SCUBA images, are expected based on the mathematical nature of power-law profiles. Consider as an example a 2D image of a source described by

I(r, θ) ∝ r^-f(θ)    (1)

where f(θ) is a function describing the variation of the slope of the brightness profiles extracted in rays along different directions away from the center position. In the case of a source with a simple density profile ρ ∝ r^-p, Adams (1991) showed that such sources will also give images with power-law brightness distributions, corresponding to the case where f(θ) = const. As a somewhat simplistic case, assume that f(θ) is a step function with a power-law slope p1 in θ ∈ [0, π[ and p2 in θ ∈ [π, 2π[. In this case, determination of the power-law slope will be dominated by the flatter of the two slopes: the azimuthally averaged brightness profile will be

⟨I(r)⟩ ∝ r^-p1 + r^-p2    (2)

and, due to the power-law decline, the term corresponding to the shallower slope will dominate the average, especially at the larger radii. This has two effects: first, the distribution of power-law "rays" must vary very strongly in order not to result in a power-law average, and second, the density profile for asymmetric sources will be flattened compared to more spherical sources. As a test, brightness profiles were extracted over angles covering respectively the flattest and steepest directions of L483 and L1527. Modelling the so-derived brightness profiles gives best-fit density power-law indices of 0.9 and 1.2, compared to the 0.9 derived as the average over the entire image for L483, and 0.6 and 0.8 for L1527 compared to 0.6 from the entire image. Thus, the brightness profile and the derived density profile might indeed be flattened by the asymmetry, which could account for the somewhat flatter profiles found towards some sources. The discrepancies between the profiles along different directions are, however, not much larger than the uncertainties in the derived power-law slope, so these sources could instead have intrinsically flatter density distributions; with the present quality of the data both interpretations are possible.
Pre-stellar cores
The modelling of the pre-stellar cores is more complicated than that of the class 0 and I sources, since it is not clear whether the cores are undergoing gravitational collapse, are centrally condensed and/or are gravitationally bound. In the case of thermally supported, gravitationally bound cores, the solution for the density profile is the Bonnor-Ebert sphere (Ebert, 1955; Bonnor, 1956). Recently Evans et al. (2001) modelled three pre-stellar cores (including L1689B and L1544 in our sample) and found that they could be well fitted by Bonnor-Ebert spheres. Evans et al.
Pre-stellar cores

The modelling of the pre-stellar cores is more complicated than that of the class 0 and I sources, since it is not clear whether the cores are undergoing gravitational collapse, or are centrally condensed and/or gravitationally bound. In the case of thermally supported, gravitationally bound cores, the solution for the density profile is the Bonnor-Ebert sphere (Ebert, 1955; Bonnor, 1956). Recently Evans et al. (2001) modelled three pre-stellar cores (including L1689B and L1544 in our sample) and found that they could be well fitted by Bonnor-Ebert spheres. Evans et al. also found that the denser cores, L1689B and L1544, were those showing spectroscopic signs of contraction, thus suggesting an evolutionary sequence with L1544 as the pre-stellar core closest to the collapse phase. This is supported as well by millimeter observations of this core, which show a dense inner region (Tafalla et al., 1998; Ward-Thompson et al., 1999). Evans et al. also discussed the properties of non-isothermal models and found that the quality of the fit to the data is similar to the isothermal case. Ward-Thompson et al. (2002) examined ISOPHOT 200 µm data for a sample of pre-stellar cores (including L1689B and L1544) and found that none of the cores had a central peak in temperature and that they could all be interpreted as being isothermal or as having a temperature gradient with a cold center as a result of external heating by the interstellar radiation field. Zucconi et al. (2001) derived analytical formulae for the dust temperature distributions in pre-stellar cores, showing that these cores should have temperatures varying from 8 K in the center to around 15 K at the boundary. These equations will be useful for more detailed modelling of the continuum and line data, but for the present purpose the isothermal models are sufficient. Modelling the pre-stellar cores using our method is not possible, as is illustrated by the best fits for the pre-stellar cores shown in Figs. 5-7. Given the observational evidence that these cores do not have a central source of heating, which is implicitly assumed in the DUSTY modelling, it is on the other hand comforting that our method indeed distinguishes between these pre-stellar cores and the class 0/I sources, and that the obtained fits to the brightness profiles of the latter sources are not just the results of, e.g., the convolution of the "real" brightness profiles with the SCUBA beam. For modelling of sources without central heating, Evans et al. note that their modelling does not rule out a power-law envelope density distribution for L1544, as opposed to, e.g., the case for L1689B: this would imply an evolutionary trend of the pre-stellar cores having a Bonnor-Ebert density distribution, which would then evolve towards a power-law density distribution with an increasing slope as the collapse progresses. In the modelling of spectral line data for the pre-stellar cores we adopt an isothermal Bonnor-Ebert sphere with n_c ∼ 10⁶ cm⁻³ for both L1689B and L1544, as this was the best-fitting isothermal model in the work of Evans et al. (2001).

Method

One main goal of our work is to use the derived physical models as input for modelling the chemical abundances of the various molecules in the envelopes. To demonstrate this approach, modelling of the first few molecules, C¹⁸O and C¹⁷O, is presented here. This modelling also serves as a test of the trustworthiness of the physical models: is it possible to reproduce realistic abundances for the modelled molecules? The 1D Monte Carlo code developed by Hogerheijde & van der Tak (2000) is used together with the revised collisional rate coefficients from Flower (2001), and a constant fractional abundance over the entire envelope is assumed as a first approximation. Furthermore, the dust and gas temperatures are assumed identical over the entire envelope, and any systematic velocity field is neglected.
In the outer parts of the envelopes, the coupling between gas and dust may break down (e.g. Ceccarelli et al., 1996; Doty & Neufeld, 1997), leading to differences between gas and dust temperatures of up to a factor of 2. With the given physical model and assumed molecular properties, there are two free parameters which can be adjusted by minimizing χ² to model the line profiles for each molecular transition: the fractional abundance [X/H₂] and the turbulent line width V_D. Since emission from the ambient molecular cloud might contribute to the lower-lying 1-0 transitions, the derived abundances are based only on fits to the 2-1 and 3-2 lines. The observed and modelled line intensities are summarized in Tables 7-10 (the intensities are the total observed intensity summed over the hyperfine-structure lines), whereas the parameters for the best-fit models are given in Table 11. While a turbulent line width of 0.5-1.0 km s⁻¹ is needed for most sources to fit the actual line profile, the modelled line strengths are only weakly dependent on this parameter compared to the fractional abundance. For example, for L723 one derives C¹⁸O abundances between 3.8 × 10⁻⁸ and 4.0 × 10⁻⁸ for V_D = 1.1 km s⁻¹ (FWHM ≃ 1.9 km s⁻¹) and V_D = 0.7 km s⁻¹ (FWHM ≃ 1.2 km s⁻¹), respectively, which should be compared to the [C¹⁸O/H₂] = 3.9 × 10⁻⁸ and V_D = 0.8 km s⁻¹ (FWHM ≃ 1.4 km s⁻¹) quoted in Table 11. The C¹⁸O and C¹⁷O spectra for L723 and N1333-I2 with the best-fit models overplotted are shown in Fig. 10. The revised collisional rate coefficients from Flower (2001) adopted for this modelling typically change the derived abundances by less than 5% compared to results obtained from simulations with previously published molecular data. The relative intensities of the various transitions, and so the quality of the fits, remain the same. As is evident from Tables 7-10, the 2-1 and 3-2 lines can be well modelled using the above approach, while the 1-0 lines are significantly underestimated by the modelling, especially in the larger Onsala 20m beam. This indicates that the molecular cloud material may contribute significantly to the observations or that the assumed outer radius is too small. The importance of the latter effect was tested by increasing the outer radius by up to a factor of 2.5 for a few sources, and it was found that while the 1-0 and 2-1 line intensities could increase in some cases by up to a factor of 2, the 3-2 line intensities varied by less than 10%, illustrating that the 3-2 line mainly traces the warmer (≥ 30 K) envelope material. The importance of the size of the inner radius was tested as well. For L723, fixing the inner radius at 50 AU rather than 8 AU increases the best-fit abundance by ∼5% to [C¹⁸O/H₂] = 4.1 × 10⁻⁸ without changing the quality of the fit significantly (drop in χ² of ∼0.1). This is also found for other sources in the sample, and simply illustrates that increasing the inner radius corresponds to a (small) decrease in the mass of the envelope, so that a higher CO abundance is required to give the same CO intensities. Yet, it is good to keep in mind that the inner radius of the envelope may be different, and even though its value does not change the results for CO, it might for other molecules, e.g., CH₃OH and H₂CO, which trace the inner warm and dense region of the envelope.
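The two-parameter fit can be made concrete with a toy χ² grid search. Everything specific in the sketch below is invented for illustration: the hypothetical function `model_intensity` is a crude stand-in for the full Monte Carlo radiative transfer, and the "observed" intensities and uncertainties are made-up numbers; only the structure of the minimization over ([X/H₂], V_D) using the 2-1 and 3-2 lines mirrors the procedure described above.

```python
import numpy as np

def model_intensity(X, vD):
    """Hypothetical stand-in for the radiative transfer: model intensities of
    the 2-1 and 3-2 lines for abundance X and turbulent width vD (km/s).
    Roughly linear in X and weakly dependent on vD, as found in the text."""
    base = np.array([2.0, 1.0])                       # per-line scaling (arbitrary)
    return base * (X / 4e-8) * (1.0 + 0.1 * (vD - 0.8))

obs = np.array([2.1, 0.95])                           # invented "observed" intensities
sigma = np.array([0.2, 0.1])                          # invented 1-sigma uncertainties

abundances = np.linspace(1e-8, 8e-8, 141)
widths = np.linspace(0.5, 1.0, 51)

# chi^2 on a grid over the two free parameters.
chi2 = np.array([[np.sum(((model_intensity(X, vD) - obs) / sigma) ** 2)
                  for vD in widths] for X in abundances])

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: [X/H2] = {abundances[i]:.2e}, vD = {widths[j]:.2f} km/s, "
      f"chi2 = {chi2[i, j]:.2f}")
```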
CO abundances

The fitted abundances are summarized in Table 11 and plotted against the envelope mass of each individual source in Fig. 11. As can be seen from the values of the reduced χ² for the fit to the data for each isotope, the model reproduces the excitation of the individual species. Even in the "worst case" of N1333-I2, the lines do indeed seem to be well fitted by the model (see Fig. 10). One source, VLA 1623, shows a remarkably high ratio between the C¹⁸O and C¹⁷O abundances of 12.4, and high C¹⁸O and C¹⁷O abundances in comparison to the rest of the class 0 sources. Given the location of VLA 1623 close to a dense ridge of material, and the associated uncertainties in the physical models of this source, this may reflect problems in the model rather than being a real property of the source. For the rest of the sources, the ratio between the C¹⁸O and C¹⁷O abundances is found to be 3.9 ± 1.3, in agreement with the value of 3.65 expected from, e.g., the local interstellar medium (Penzias, 1981; Wilson & Rood, 1994); this is another sign that the model reproduces the physical structure of the envelopes and that no systematic calibration errors are introduced by using data from the various telescopes and receivers. It is evident that the class 0 objects (except VLA 1623) and the pre-stellar cores show a high degree of depletion compared to the expected abundance of [C¹⁸O/H₂] = 1.7 × 10⁻⁷ in nearby dark clouds (Frerking et al., 1982), corresponding to a canonical CO abundance once scaled by the ¹⁶O/¹⁸O isotope ratio (Wilson & Rood, 1994): the average CO abundance for the class I sources is (1.1 ± 0.9) × 10⁻⁴, and for the class 0 sources and pre-stellar cores (2.0 ± 1.3) × 10⁻⁵. The error bars illustrate the source-to-source variations and the uncertainties in classifying borderline class 0/I objects like CB244, L1527 and L1551-I5. Previously, Caselli et al. (1999) derived the C¹⁷O abundance for one of the pre-stellar cores, L1544, and our abundance agrees with their estimate within the uncertainties. It is interesting to see that the class I objects have higher CO abundances, close to the molecular cloud values, indicating that the class 0 objects indeed seem to be more closely related to the pre-stellar cores in this sense. Van der Tak et al. (2000b) likewise found a trend of increasing CO abundance with mass-weighted temperature for a sample of high-mass YSOs and suggested that this trend was due to freeze-out of CO in the cold objects. The derived abundances are uncertain due to several factors. First, the physical model, with its simplicity and flaws as discussed above. Second, the assumed dust opacities will affect the results: the dust opacities may change with varying density and temperature in the envelope, and will depend on the amount of coagulation and formation of ice mantles. These effects tend to increase κ_ν with increasing densities or lower temperatures, which will lower the derived mass and thus increase the abundances necessary to reproduce the same line intensities. Comparing the models of dust opacities in environments of different densities and types of ice mantles as given by Ossenkopf & Henning (1994) indicates, however, that such variations should be less than a factor of two and so cannot explain the differences in the derived CO abundances. The unknown contribution to the lower lines from, e.g., the ambient molecular cloud may affect the interpretation. Even for the 2-1 transitions the cloud material may contribute, leading to higher line intensities than predicted by the modelling. Indeed, as judged from Tables 7-10, there is a trend that while the intensities of the 2-1 lines are underestimated by the modelling, the 3-2 lines are overestimated.
This then means that the derived CO abundances are upper limits, at least for the warmer regions traced by the 3-2 lines. As noted above, the gas temperature could be lower by up to a factor of two in the outer parts of the envelope (Ceccarelli et al., 1996; Doty & Neufeld, 1997), but this again would mainly affect the lines tracing the outer cold regions, i.e. the 1-0 and 2-1 lines, compared to the 3-2 lines. At the same time, the gas temperatures from these detailed models may not be appropriate for our sample. Our sources all have lower luminosities than those modelled, e.g., by Ceccarelli et al. (1996), leading to envelopes that are colder on average, so that the relative effects of the decoupling of the gas and dust are smaller. Another effect is external heating, which as discussed in Sect. 4.3 may lead to an increase in the dust and gas temperatures in the outer parts. A point of concern is also whether steeper density gradients could change the inferred abundances, in light of the discussion about geometrical effects (Sect. 4.2). Steepening of the density distribution would tend to shift material closer to the center and so towards higher temperatures. This would correspondingly change the ratio between the 2-1 and 3-2 lines towards lower values (stronger 3-2 lines), and more so for the C¹⁸O data than the C¹⁷O data, since the former traces the outer, less dense parts of the envelope. Altogether, none of the effects considered change the conclusion that the abundances in the class 0 objects are lower than those found for the class I objects. One important conclusion from the derived abundances is that one should be careful when using CO isotopes to derive the H₂ mass of, e.g., envelopes around young stars assuming a standard abundance. With depletion, the derived H₂ envelope masses will be underestimated. Another often encountered assumption, which may introduce systematic errors, is that the lower levels are thermalised and that the Boltzmann distribution can be used to calculate the excitation and thus the column density of a given molecular species. In Fig. 12, the level populations for C¹⁸O in the outer shells of the models of two sources, L723 and TMR1, are shown together with the ratios of the level populations from the Monte Carlo modelling to those obtained by assuming LTE. For the denser envelope around the typical class 0 object, L723, the LTE assumption gives accurate results within 5-10% for the lower levels, but somewhat more uncertain results for the higher levels. For the less dense envelope around TMR1 (a class I object) it is, however, clear that the levels are subthermally excited and the LTE approximation provides a poor representation of the envelope structure.
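For reference, the LTE level populations invoked above follow directly from the Boltzmann distribution over the rotational ladder. The short sketch below evaluates the fractional populations of C¹⁸O (rotational constant B ≈ 54.891 GHz, a standard literature value) at two temperatures; it provides the LTE baseline against which the Monte Carlo populations are compared, not the non-LTE calculation itself:

```python
import numpy as np

H_PLANCK = 6.62607e-34   # J s
K_BOLTZ = 1.380649e-23   # J / K
B_C18O = 54.891e9        # Hz, C18O rotational constant (literature value)

def lte_populations(T, jmax=20):
    """Fractional LTE populations n_J / n_tot for a rigid rotor at temperature T."""
    J = np.arange(jmax + 1)
    E_J = H_PLANCK * B_C18O * J * (J + 1)           # rotational energies
    w = (2 * J + 1) * np.exp(-E_J / (K_BOLTZ * T))  # degeneracy x Boltzmann factor
    return w / w.sum()

for T in (10.0, 30.0):
    pops = lte_populations(T)
    print(f"T = {T:4.1f} K: n(J=1)/n = {pops[1]:.3f}, n(J=2)/n = {pops[2]:.3f}, "
          f"n(J=3)/n = {pops[3]:.3f}")
# In a subthermally excited envelope the true populations of the higher J
# levels fall below these LTE values, so column densities derived assuming
# the Boltzmann distribution become correspondingly biased.
```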
CO abundance jump or not?

The large difference in CO abundance found between the class 0 sources and pre-stellar cores on the one side and the class I objects on the other side warrants further discussion. The apparently low CO abundances and the possible relation to freeze-out of CO raise the question whether the assumption of a constant fractional abundance is realistic: freeze-out of pure CO ice and its isotopes is expected to occur at roughly 20 K under interstellar conditions (e.g., Sandford & Allamandola, 1993), so one would expect to find a drastic drop in CO abundance in the outer parts of the envelope. Due to the uncertainties in the properties of the exterior regions, however, a change in abundance at 20 K can neither be confirmed nor ruled out.

Table 11. CO abundances using the best models.

Fig. 11. The fitted abundances vs. envelope mass (left) and bolometric temperature (right) of each source, for respectively the C¹⁸O data (top) and C¹⁷O data (bottom). The sources have been split into groups (class 0, class I and pre-stellar cores), with VLA 1623 and IRAS 16293-2422 separated out, using the same symbols as in Fig. 8. The vertical lines in the figures illustrate the abundances in quiescent dark clouds from Frerking et al. (1982).

As Tables 7-10 indicate, both the intensities of the 2-1 and 3-2 lines of C¹⁸O and C¹⁷O can be fitted with a constant fractional abundance for most sources. One can introduce strong depletion through a "jump" in the fractional abundance in the region of the envelope with temperatures lower than 20 K. This naturally leads to lower line intensities of especially the 2-1 and 1-0 lines, but also the 3-2 lines. Since it is mainly the 3-2 line that constrains the abundance in the inner part, it is possible to raise the abundances of the warm regions by up to a factor of 2 if the CO in the outer part is depleted by a factor of 10 or more. The modelled 1-0 and 2-1 line intensities then, however, also become weaker, which one has to compensate for by introducing even more cold (depleted) material outside the 10 K boundary, accounting for 50% or more of the observed 2-1 line emission and almost all of the 1-0 emission. If the dust opacity law varies with radial distance in the envelope, increasing κ_ν at the higher densities, the CO abundances in the warmer regions could be higher by a factor of two. Combining these effects would then lower the derived CO evaporation temperature in the envelope material towards the 20 K evaporation temperature of pure CO ice. A changing dust opacity law would be an interesting result in itself, which could possibly be confirmed or disproved through modelling of other chemical species and/or by comparing the exact line profiles with realistic models of the velocity field. Without such an effect introduced, however, the results presented here favour a somewhat higher evaporation temperature for the CO.
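The "jump" (or drop) abundance profile invoked above is simple to parametrise. The sketch below is illustrative only: the power-law temperature profile, the inner abundance and the depletion factor of 10 are assumed numbers, not fitted values.

```python
import numpy as np

def temperature(r_au, T0=250.0, r0=10.0, q=0.4):
    """Assumed power-law envelope temperature T = T0 (r/r0)^-q (illustrative)."""
    return T0 * (r_au / r0) ** -q

def co_abundance(r_au, X_in=4e-8, depletion=10.0, T_evap=20.0):
    """Step ("drop") abundance profile: undepleted where T > T_evap,
    depleted by the given factor in the colder outer envelope."""
    T = temperature(r_au)
    return np.where(T > T_evap, X_in, X_in / depletion)

r = np.logspace(1, 4, 7)  # 10 AU to 10^4 AU
for ri, Xi in zip(r, co_abundance(r)):
    print(f"r = {ri:8.0f} AU  T = {temperature(ri):6.1f} K  [CO/H2] = {Xi:.1e}")
```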
The differences between the class 0 and I objects and a warmer evaporation temperature may be consistent with new laboratory experiments on the trapping and evaporation of CO on H₂O ice by Collings et al. (2002). Their experiments refer to amorphous H₂O ice accreted layer-by-layer, so that it has a porous ice structure. This situation may be representative of the growth of H₂O ice layers in pre-stellar cores and YSO envelopes. CO is deposited on top of the H₂O ice. When the sample is heated from 10 to ∼30 K (laboratory temperatures), some of the CO evaporates, but another fraction diffuses into the H₂O ice pores. Heating to 30-70 K allows some CO to desorb, but a restructuring of the H₂O ice seals off the pores, and the remaining CO stays trapped until at least 140 K. Under interstellar conditions, the temperatures for these processes may be somewhat lower, but this does indicate that a significant fraction of CO can be trapped in a porous surface and that evaporation may occur more gradually from 30 K to ∼60 K. A similar property was suggested by Ceccarelli et al. (2001), who found that the dust mantles in the envelope around IRAS 16293-2422 had an onion-like structure, with H₂CO being trapped in CO-rich ices in the outermost regions and with the ices becoming increasingly H₂O-rich moving inwards towards higher temperatures. In this scenario, it is not surprising that even the 3-2 lines tracing the warmer material indicate low CO abundances. Observations of even higher-lying CO rotational lines (e.g., 6-5) would be needed to probe the full extent of this evaporation. Also, infrared spectroscopy of CO ices may reveal differences between the class 0 and I objects.

Conclusion

This is the first paper in a survey of the physical and chemical properties of a sample of low-mass protostellar objects. The continuum emission from the envelopes around these objects has been modelled using the 1D radiative transfer code DUSTY, solving for the temperature distribution assuming simple power-law density distributions of the type ρ ∝ r^-α. For the class 0 and I objects in the sample, the brightness profiles from SCUBA 450 and 850 µm data and the SEDs from various literature studies can be successfully modelled using this approach with α in the range from 1.3 to 1.9 within ∼10000 AU, with a typical uncertainty of ±0.2, while the approach fails for the pre-stellar cores. For four sources the profiles are indeed flatter than predicted by, e.g., the models of Shu (1977), but it is argued that this could be due to source asymmetries and/or the presence of extended cloud material. Taking this into account, no significant difference seems to exist between the class 0 and I sources in the sample with respect to the shape of the density distribution, while, as expected, the class 0 objects are surrounded by more massive envelopes. The physical models derived using this method have been applied in Monte Carlo modelling of C¹⁸O and C¹⁷O data, adopting an isothermal Bonnor-Ebert sphere as the physical model for the pre-stellar cores. The 2-1 and 3-2 lines can be modelled for all sources with constant fractional abundances of the isotopes with respect to H₂ and an isotope ratio [¹⁸O/¹⁷O] of 3.9, in agreement with the "standard" value for the local interstellar medium. The 1-0 line intensities, however, are significantly underestimated in the models compared to the observations, indicating that ambient cloud emission contributes significantly or that the outer parts of the envelopes are not well accounted for by the models. The derived abundances increase with decreasing envelope mass, with an average CO abundance of 2.0 × 10⁻⁵ for the class 0 objects and pre-stellar cores, and 1.1 × 10⁻⁴ for the class I objects. The 3-2 lines indicate that the lower CO abundance in class 0 objects also applies to the regions of the envelopes with temperatures higher than ∼20-25 K, the freeze-out temperature of pure CO ice. This feature can be explained if a significant fraction of the solid CO is bound in a (porous) ice mixture from which it does not readily evaporate. The physical models presented here will form the basis for further chemical modelling of these sources.
Void Lensing as a Test of Gravity

We investigate the potential of weak lensing by voids to test for deviations from General Relativity. We calculate the expected lensing signal of a scalar field with derivative couplings, finding that it has the potential to boost the tangential shear both within and outside the void radius. We use voids traced by Luminous Red Galaxies in SDSS to demonstrate the methodology of testing these predictions. We find that the void central density parameter, as inferred from the lensing signal, can shift from its GR value by up to 20% in some galileon gravity models. Since this parameter can be estimated independently using the galaxy tracer profiles of voids, our method provides a consistency check of the gravity theory. Although galileon gravity is now disfavoured as a source of cosmic acceleration by other datasets, the methods we demonstrate here can be used to test for more general fifth force effects with upcoming void lensing data.

I. INTRODUCTION

Gravitational lensing by cosmological voids, underdense regions of the universe typically 10-100 Mpc in size, is emerging as a promising new tool for studies of dark energy and large-scale structure [1]. Since the detection of lensing by stacked voids in SDSS [2-4] and related work on both theory and measurement ([5-9] and references therein), void lensing has been measured in the Dark Energy Survey [10], with considerable improvements to follow in subsequent data releases. The related phenomenon of trough lensing has been developed into a cosmological probe as well [11-13]. In particular, voids have the potential for powerful tests of gravitational 'fifth forces', interactions induced by a new fundamental field mediating long-range forces between matter fields. One setting in which such fifth forces are common is modified theories of gravity. Many extended gravity theories are accompanied by 'screening mechanisms' that strongly suppress their observable effects in high-density regions of the universe, such as the Solar System and the interiors of galaxies [14-16]. Though known screening mechanisms (chameleon, Vainshtein, etc.) differ in their precise details, a feature common to all is that high-density environments are generally screened. In contrast, the low-density nature of voids means that screening mechanisms do not operate inside them, allowing deviations from GR to play at full strength [17-19]. The aim of this paper is to study the ways that a gravitationally-coupled scalar field impacts the lensing profiles of voids, and hence to uncover useful phenomenology for fifth-force constraints with forthcoming galaxy survey data [20-24]. Recent bounds on the gravitational wave propagation speed using a binary neutron star merger [25-28] have strongly constrained modified gravity theories as the mechanism of cosmic acceleration, arguably their original motivation (see also earlier work by [29,30]). However, there remain some classes of theories which do not modify the speed of gravitational waves, and hence evade these constraints. For example:

1. Theories with non-universal couplings between baryons and dark matter;

2. Scalar-tensor theories in which the background value of the scalar does not evolve at low redshifts, |φ̇| ≪ 1;
3. Specialised theories within the generalised families of Horndeski, Beyond Horndeski and DHOST gravity, whose defining Lagrangian functions satisfy certain constraint relations [31-35].

In the lattermost case, the interesting phenomena of the theory are shifted towards astrophysical (sub-cosmological) scales. There is a growing body of work studying the effects of fifth forces on clusters, galaxies and the nonlinear regime of large-scale structure formation [36,37]. This shift towards astrophysical scales generally has the consequence that such theories cannot successfully accelerate the universe without an additional cosmological constant. Fifth forces can arise in many other settings, beyond specific applications to modified gravity and late-time cosmic acceleration. Attempts to address cosmological problems such as the behavior of dark matter on small scales, or baryogenesis, involve new scalar fields with long-range interactions. Likewise, attempts to embed our existing theories in a more fundamental model with extra spatial dimensions can give rise to similar interactions, either through compactification or as brane-bending modes. Furthermore, as shown in [38], during inflation a scalar such as the Higgs field can acquire couplings to the Einstein-Hilbert term at loop order, even if minimally coupled to gravity at tree level. In all such models, the distances/energies/densities at which these deviations from GR are most significant are determined by the mass scales of the underlying fundamental theory. These mass scales could be large enough (and hence the wavelength of the scalar fifth force short enough) for the theory to satisfy cosmological and gravitational wave constraints, yet have non-trivial effects on astrophysical scales. The relevant physics we are interested in, therefore, is not primarily focused on the question of whether modified gravity can explain cosmic acceleration, but rather is part of the grand program of testing gravity on all possible scales. In particular, our question is whether there may be new light scalar fields propagating on sub-Hubble scales. The inherent complexity of astrophysical systems means that existing constraints here are less developed than cosmological ones; hence voids can provide useful new information in this arena. When screening is not effective, the new fields that generically accompany modifications to GR will have non-trivial radial profiles across the void. This modifies the weak lensing profile of the void via two channels:

i) Because the new field(s) act as an effective extra source of stress-energy, the gravitational potential inside the void is no longer purely the one deduced from baryons and dark matter. At the level of the gravitational field equations, this means that the Poisson equation is modified (our notation for metric potentials is introduced in eq. 19):

∇²Ψ = 4πG a² (ρ_M + ρ_eff),  (1)

where ρ_eff is the effective stress-energy contribution of the new field.

ii) In some models, the new field further acts as an effective source of anisotropic stress. As a result, the lensing potential of the void is no longer equal to the gravitational potential experienced by nearby masses, i.e. Ψ_L ≡ (Φ + Ψ)/2 ≠ Ψ.

Clearly these two types of modification are partially degenerate in their resulting effects on the lensing shear profile of the void. To understand their different influences, consider a situation in which the new field φ is a subdominant component of the energy density, i.e. ρ_eff ≪ ρ_M (inside the void, at least).
Then, the modifications from the first effect above would vanish, while modifications from the second effect would still operate. In this paper we will study two gravity models, corresponding to two different terms in the simple galileon Lagrangian. We will see that one of these models acts via both of the above effects, while the other acts only via the first one. A key input to the calculations that follow is a model for the radial density profile of a void, which can be obtained by fitting to either simulations or galaxy catalogs. In this paper we will use two density profiles; one is a simple fit from galaxy catalogs, whilst the other has enhanced flexibility of the type identified in simulations in [39]. Comparison with real data will rely on a good understanding of the void-tracer connection; see, for example, work on Halo Occupation Distribution models in [40-43]. Likewise, a high-accuracy analysis will need to account for the possible impact of deviations from GR on galaxy bias relations; in the present work we neglect these, since they are likely to be subdominant to current bias uncertainties within GR itself [44,45]. The structure of this paper is as follows: in §II we introduce the gravity models under study, their background equations and particular features. In §III we present the calculation of the tangential shear profile of a void, the results of which are studied in §IV. §V summarizes our results, caveats, and directions for future work.

A. Action & Motivations

As a simple framework to study deviations from GR, we restrict ourselves to two terms from the simplest galileon family of gravity theories. Galileon gravity is constructed using a scalar field, φ, which is characterised by its unusual higher-derivative self-couplings [46]. Galileon fields arise in a number of different ways, for example having elegant geometrical origins as the description of brane fluctuations in the DGP model [47,48], and describing the helicity-zero component of ghost-free massive gravity [49,50]. Furthermore, they exhibit a rich structure and a complex phenomenology, including Vainshtein screening [51] near massive objects [46,52], and possess an S-matrix with a number of special properties [53-58]. They also face some theoretical challenges, such as perturbations propagating superluminally around some sources [46,59] and the existence of arguments that they have no local, Lorentz-invariant UV completions [60,61], although there are creative attempts to circumvent some of these [62,63]. Galileons have been of particular interest in recent years as a possible candidate, with a particular choice of parameters, for explaining late-time cosmic acceleration. While the relevant parameter range for that application is now tightly constrained, the models more generally provide a framework for how deviations from GR might exist but remain hidden from local tests of gravity. As we will explain in more detail soon, it is in this spirit that we make use of them in this paper. We consider the simplest example of galileons: a single scalar field, φ(x), which obeys a shift symmetry linear in the coordinates,

φ → φ + c + b_µ x^µ,  (2)

with c, b_µ constants. Any term built out of ∂_µ∂_νφ, and its derivatives, will be strictly invariant under eq.(2). However, there also exist special operators with fewer than two derivatives per φ, which are not strictly invariant, but rather are invariant up to a total derivative. There are three main examples of these terms.
These are the cubic, quartic and quintic models, all of which contain higher derivatives, and which involve three, four and five copies of the field, respectively. In this paper we will work with only the cubic and quartic galileons; these terms are sufficient to demonstrate the two 'channels' for void lensing that we described in the introduction. We stress here that our main interest is the effect of non-GR physics on void lensing observables, and not the detailed specifics of galileon gravity. The general galileon action up to order 4, but not containing terms built out of ∂_µ∂_νφ, takes the standard form

S = ∫ d⁴x √(−g) [ (M_Pl²/2) R − ½ Σ_{i=1..4} c_i L_i − L_M ],  (3)

with

L₁ = M³ φ,  L₂ = ∇_µφ ∇^µφ,  L₃ = (2/M³) □φ ∇_µφ ∇^µφ,
L₄ = (1/M⁶) ∇_µφ ∇^µφ [ 2(□φ)² − 2 ∇_µ∇_νφ ∇^µ∇^νφ − ½ R ∇_µφ ∇^µφ ],

where M is a mass scale. The first galileon term, L₁, can be removed by a field redefinition and hence ignored; the second galileon term is merely the usual kinetic term for a scalar field. More interestingly, the cubic galileon corresponds to using L₂ and L₃ (c₄ = 0), whilst the quartic employs all contributions (c₄ ≠ 0). In fact, only ratios of the c_i appear in the galileon field equations, allowing us to fix one c_i parameter arbitrarily. We will use this freedom to fix c₂ = −1 for both models, so that the field is canonically normalized. The choice of the cutoff scale M is an important factor in determining the scales on which the new physics manifests itself. The other important factor is the coupling of the galileon field to matter. The interplay between these two factors is nontrivial and highly nonlinear, so that new physics can appear in interesting and unexpected places. For this reason, although the choice M³ = M_Pl H₀² allows the galileon field to have non-trivial dynamics on cosmological scales, and therefore to be a candidate to explain cosmic acceleration, there also exist other astrophysical scales at which additional interesting new phenomena can appear. Hereafter we will redefine the scalar field by replacing φ/M_Pl → φ, rendering it dimensionless. We will also split the scalar field into a homogeneous 'background' piece and a perturbation via φ = φ̄ + ϕ. Note that ϕ is not a linear perturbation; it is the deviation of the scalar field from its homogeneous component, i.e. a full nonlinear perturbation. Before proceeding to our calculation, let us comment briefly on our choice of gravity models. As mentioned in the introduction, the recent observations of GW170817 and its electromagnetic counterparts [64] constrain the speed of gravitational waves to differ from c by less than one part in 10¹⁵, which strongly disfavours the quartic (and quintic) galileon models [26-28,65]. Meanwhile, the cubic galileon, when required to drive cosmic acceleration, predicts a negative sign for the Integrated Sachs-Wolfe effect (note this has a positive sign in ΛCDM); this feature is disfavoured by cross-correlation analyses of galaxy surveys and CMB observations [25]. Hence galileons are no longer a leading candidate for a viable extension of GR for explaining cosmic acceleration. Despite this, galileons remain a useful toy model for a universe that expands almost identically to ΛCDM, but has interesting fifth-force phenomenology at the nonlinear level. Their basic features (a single new scalar degree of freedom, second-order equations of motion, derivative couplings and a handful of free parameters) are shared by most of the theories of interest on astrophysical scales. Furthermore, terms similar to the cubic and quartic galileons used here appear in the surviving specialised Horndeski, Beyond Horndeski and DHOST theories; see point 3 of our list in the introduction.
As such, we will exploit galileons here to lay out our void lensing methodology, which can be applied to other gravity models (or indeed as a 'litmus test' for GR) in future.

B. Galileon Background Dynamics

In what follows, we will impose the usual requirement that the galileon field drives cosmic acceleration by setting M³ = M_Pl H₀². Doing this will allow us to determine the void lensing signal of the most popular models in the recent literature. As explained above, future studies will likely focus on models with heavier mass scales. The Friedmann equation of galileon gravity (considering the late-time universe, where only pressureless matter and the galileon field are relevant) is

3H² = 8πG (ρ̄_M + ρ̄_φ),  (4)

where ρ̄_φ is the effective energy density of the homogeneous galileon field, built from dφ̄/dt, H and the c_i; it is supplemented by the equation of motion for the homogeneous component of the scalar field (eq. 5). Overdots denote physical time derivatives throughout.

C. The Tracker Ansatz

The simultaneous solution of eqs. (4) and (5) is greatly simplified by the use of the tracker ansatz. This is an approximation for the evolution of the homogeneous component of the galileon field, and has been shown to hold extremely well for galileon models [66-68]. The tracker ansatz is defined as

H dφ̄/dt = ξ H₀²,  (6)

where ξ is a dimensionless constant. Indeed, it was shown in [69] that any galileon model whose solution for φ̄ does not follow the tracker trajectory by z ≈ 1 is inconsistent with Planck measurements of the CMB temperature power spectrum. Furthermore, the behaviour of the galileon field at z > 1 has negligible impact on the CMB, so it is safe to adopt the tracker ansatz for all redshifts. A further implication of the tracker ansatz is a reduction of the number of free parameters needed to characterize a galileon model. To see this, we substitute eq.(6) into eq.(4). This converts the Friedmann equation into a fourth-order polynomial in H, i.e. a quadratic in E², where E = H/H₀ (eq. 7). Evaluating this expression at z = 0 gives one algebraic relation (eq. 8). Similarly, using the tracker ansatz in eq.(5) and evaluating at z = 0 yields a second relation (eq. 9); the second square bracket in eq.(9) must vanish, since the first cannot do so for a general background expansion rate. Together, eqs. (8) and (9) give two algebraic constraints relating the parameters ξ, c₃ and c₄. After solving these constraints, the cubic galileon (c₄ = 0) has no free parameters remaining (other than the standard cosmological parameters), whilst the quartic model has one free parameter. These constraints are therefore (our expressions are equivalent to those of [25]): Cubic: all parameters are fixed by the value of Ω_M0 (eqs. 10-11). Quartic: here we take the one free parameter to be ξ, the tracker constant of proportionality; c₃ and c₄ are then fixed in terms of ξ and Ω_M0 (eqs. 12-13). We will use these constraints to fix some of our galileon model parameters. For a given value of ξ, eq.(7) becomes a quadratic equation for E². Choosing the solution that gives real H, the Hubble rate is then given by the positive root (eq. 14); in the cubic model, for example, this reduces to

E²(z) = ½ [ Ω_M0 (1+z)³ + √( Ω_M0² (1+z)⁶ + 4(1 − Ω_M0) ) ].

Compared to eqs. (4) and (5), this represents a remarkable simplification of the background evolution for our models.
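As a quick numerical illustration of this simplification, the sketch below evaluates the cubic-model tracker expansion rate (using the closed form quoted above) and compares it with a ΛCDM model at the same matter density; the value Ω_M0 = 0.3 is a round illustrative number, not the paper's fitted value.

```python
import numpy as np

Om0 = 0.3  # illustrative matter density parameter

def E2_cubic_tracker(z):
    """E^2 = (H/H0)^2 on the cubic-galileon tracker:
    E^4 = Om0 (1+z)^3 E^2 + (1 - Om0), positive root of the quadratic in E^2."""
    m = Om0 * (1.0 + z) ** 3
    return 0.5 * (m + np.sqrt(m**2 + 4.0 * (1.0 - Om0)))

def E2_lcdm(z):
    return Om0 * (1.0 + z) ** 3 + (1.0 - Om0)

for z in (0.0, 0.5, 1.0, 3.0):
    print(f"z = {z:3.1f}: E_tracker = {np.sqrt(E2_cubic_tracker(z)):.3f}, "
          f"E_LCDM = {np.sqrt(E2_lcdm(z)):.3f}")
# Both normalise to E = 1 at z = 0; the tracker expansion rate lies below
# LCDM by several percent at intermediate redshift and converges to it at
# high z, where matter dominates.
```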
D. Quasistaticity & Pathologies

Whilst the tracker ansatz is strongly supported by measurements of the cosmological background expansion rate, another commonly-used assumption is surrounded by a higher degree of uncertainty. The quasistatic (QS) approximation is a statement about the relative timescales that characterise cosmological structure formation. It states that cosmological structures evolve on approximately Hubble timescales, and hence that the time derivatives of linear cosmological perturbations are expected to be suppressed compared to their spatial derivatives (which vary on significantly sub-Hubble scales). In addition to the metric potentials, the QS approximation is usually assumed to apply also to the perturbations of any new fields present in a modified gravity theory, i.e. |∂_i ϕ| ≫ |ϕ̇| for the galileon field. This treatment has been shown to hold on scales larger than the sound horizon of the scalar field [70] in several theories [71,72] (though see [73] for a study of structure formation in galileon gravity without the QS approximation). However, these results do not guarantee that the QS approximation holds for all theories. Indeed, the QS assumption has been called into question by the discovery that galileons display pathologies in certain regimes of cosmological structure formation. Specifically, [74,75] found that the solution for the galileon field profile becomes imaginary inside voids of a certain depth and redshift. The region of imaginary solutions does not span the entire void, but occupies a central region whose radius varies with void depth and redshift. We will meet these pathologies in our own results in §IV. Whether these unphysical solutions signal a real breakdown of galileon theories, or are an artifact of the breakdown of the QS approximation, remains to be seen. The full set of galileon field equations, free of the QS approximation, is a complex set of coupled, nonlinear partial differential equations with derivative interactions, and requires a thorough numerical treatment beyond the scope of this paper. However, in what follows we will at least attempt to explain why these pathologies occur, and pursue a careful delineation of the parameter regime (in void depth vs. radius) in which they arise.

A. Density Profiles and Void Sizes

In the next subsection we will calculate the gravitational fields associated with a spherical underdensity. Although any given void is likely to be non-spherical, the averaged density profile of all voids in a given sample (of sufficient size) should be spherical to a very good approximation. In this paper we will make use of two different density profiles that have been explored in the literature. The first of these profiles is a simple cubic fit, whilst the second has greater flexibility; it has additional parameters controlling a compensation ridge around the void, and hence can be used for a wider range of void sizes (smaller voids of around 10-30 Mpc/h tend to be compensated by an external ridge, whilst larger ones with R_V ≳ 50 Mpc/h do not [39]). One may question whether it is valid to use void profiles originally derived from GR simulations, or fitted to survey data assuming GR, for our work here. Given a measured galaxy profile, obtaining the matter profile responsible for lensing may require some modelling in a GR context. Assuming that galaxies are linearly biased, which is likely to be valid at scales ∼ R_V/2 [76], the GR matter density profile is simply a rescaled version of the galaxy profile. The scaling factor is set by the galaxy bias, which is known from the galaxy auto-correlation function. On scales closer to the void center, a GR-based mock catalog is required to model the relation of the matter profile to galaxies.
This is an area of ongoing research [76,77], and it remains to be determined fully whether such relations are significantly affected by deviations from GR. However, prior work on the cubic galileon [78] (as well as on f(R) theories [79,80]) indicates that the density profiles of voids are not significantly altered in these theories. These works found that any changes were at the level of a few percent or less (we describe the generic reasons to expect small effects in the density profile in §IV A). Such effects are small compared to the error bars of the SDSS void lensing data we use in this paper (Fig. 3), but will need to be accounted for with future measurements. For the present, we proceed to use the GR-based density profiles for tests of galileon gravity; as we will see in §IV, the impact of modified gravity on the lensing signal can still be substantial. The simpler profile is a one-parameter cubic fit of the form

δ(R) = δ_v (1 − R³) for R ≤ 1, and δ(R) = 0 otherwise,  (16)

where δ = δρ/ρ̄ is the density fluctuation about the mean matter density ρ̄, and R = r/R_V is the radial coordinate in units of the void radius R_V. The single parameter δ_v then describes the maximum central density contrast of the void. Introduced in [4], this is a simpler form of the cubic profile originally studied in [81,82]. The flexible profile we use is [39]

δ(R) = δ_v (1 − (R/s₁)^α) / (1 + (R/s₂)^β),  (17)

where again δ_v is the maximum density contrast. Variants of this profile have been explored in [10,78], where it was found to provide a good fit to voids in simulations of alternative gravity theories (as well as in ΛCDM). Clearly the disadvantage of this profile is the larger number of parameters to be fitted. An in-depth study of the effects of all five parameters can be found in [78], and we will keep most of them fixed at their fiducial best-fit values (eq. 18). We will, however, study the effect of the ridge height on the tangential shear signal at large radii in §IV B. Generically, one may expect the void size function and central underdensity distribution to differ in alternative theories of gravity (although, as discussed above, the shapes of the profiles seem to be largely unaffected). The additional force component mediated by the scalar field typically acts to evacuate matter from a void more quickly¹, enhancing the number of strongly underdense voids. A full treatment of this issue requires a study of voids in numerical simulations of modified gravity, which is beyond the scope of this paper. In §IV B we will address this issue in part, by comparing the probability distribution functions of the central underdensity in two models.
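For concreteness, the sketch below evaluates both profile shapes. The flexible-profile parameter values used here (s₁ = 0.9, s₂ = 1.2, α = 2, β = 9) are placeholders chosen only to produce a visible compensation ridge; they are not the paper's fiducial values of eq.(18).

```python
import numpy as np

def delta_simple(R, delta_v=-0.5):
    """Simple one-parameter cubic profile: delta_v (1 - R^3) inside the void, 0 outside."""
    R = np.asarray(R, dtype=float)
    return np.where(R <= 1.0, delta_v * (1.0 - R**3), 0.0)

def delta_flexible(R, delta_v=-0.5, s1=0.9, s2=1.2, alpha=2.0, beta=9.0):
    """HSW-style flexible profile with a compensation ridge controlled mainly by s2."""
    R = np.asarray(R, dtype=float)
    return delta_v * (1.0 - (R / s1) ** alpha) / (1.0 + (R / s2) ** beta)

R = np.linspace(0.0, 3.0, 7)
print("R      simple   flexible")
for Ri, ds, df in zip(R, delta_simple(R), delta_flexible(R)):
    print(f"{Ri:4.1f}  {ds:+7.3f}  {df:+7.3f}")
# The flexible profile turns positive (an overdense ridge) just outside R = s1
# before decaying to zero, whereas the simple profile is uncompensated.
```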
B. Semi-linearised Field Equations

We now proceed to calculate the behaviour of the gravitational and galileon fields over voids with the aforementioned density profiles. We keep the galileon field and matter perturbations fully nonlinear, but the perturbations to the gravitational metric remain small. We describe them using the following metric convention (in conformal Newtonian gauge):

ds² = −(1 + 2Ψ) dt² + a²(t) (1 − 2Φ) δ_ij dx^i dx^j.  (19)

In what follows we will rescale the radial coordinate as χ = aH₀r. The full field equations are lengthy and given in [68]; we will not reproduce them here. After making the quasistatic approximation and integrating once over the radial coordinate, the equations simplify to two equations for the metric perturbations, eqs. (20) and (21), where subscript commas indicate derivatives. These are supplemented by the equation of motion for the scalar field, eq. (22). Here, the quantities A_i, B_i and C_i, defined in the appendix, are functions of the background cosmology, and hence depend only on time. Using eqs. (20) and (21) to eliminate derivatives of Φ and Ψ, eq.(22) simplifies to a third-order algebraic polynomial for ϕ_,χ/χ (eq. 23). The functions η_i, defined in the appendix, are linear combinations of the A_i, B_i and C_i. The true polynomial order of eq.(23) depends on which of the galileon terms are present. In the cubic galileon one has η₃₀ = η₁₁ = 0, and eq.(23) is then a quadratic equation in ϕ_,χ/χ. For the quartic galileon, the equation is cubic in ϕ_,χ/χ. Hence, in both the cubic and quartic models, there are multiple branches of solutions. This raises the question of which branch of solutions is the physically realised one. The standard protocol here is to select the branch for which ϕ_,χ/χ → 0 as δM/r³ → 0, on the grounds that there cannot be a self-sustained field configuration for ϕ in the absence of any mass fluctuation. We will adopt this convention in the absence of any pathologies, which are signaled by the onset of complex solutions for ϕ_,χ/χ. If the dynamics drive the system into a pathological region, we will then instead choose the remaining real solution, if one exists (we will see in §IV C that there is no ambiguity in this choice). If all solution branches become imaginary in a particular region of void parameter space and radius, then we deem the theory and/or the approximations used to be pathological (and therefore meaningless) there. We will set such regions to zero values in our plots. Having selected the appropriate solution branch for ϕ_,χ/χ, we use this in eqs. (20) and (21) to calculate the derivatives of the metric potentials. We can also straightforwardly take a second radial derivative of (20) and (21), although we will not show the resulting lengthy expressions here. We then have in hand all the quantities necessary to calculate a void lensing signal.

C. Lensing Integrals

The lensing convergence of a void, κ, is obtained by integrating the sum of the second derivatives of the metric potentials along the line of sight to the void (eq. 24); in the second line of eq.(24) the derivatives are written in our scaled radial coordinate, χ = aH₀r. The critical density of a lensing system, Σ_c, is defined in terms of the angular diameter distance D_A(z) as

Σ_c = c² D_A(z_s) / [ 4πG D_A(z_L) D_A(z_L, z_s) (1 + z_L)² ],  (25)

where z_s and z_L are the redshifts of the source and lens (in this case, the void) respectively, and the (1 + z_L)⁻² factor is due to our use of comoving coordinates. We set Σ_crit⁻¹(z_L, z_s) = 0 for z_s < z_L in our computations. The observable quantity of interest here is the differential surface mass density,

∆Σ(r) = Σ_c [ κ̄(< r) − κ(r) ],  (26)

where κ̄ is the mean convergence as a function of radius from the centre of the void, given by

κ̄(< r) = (2/r²) ∫₀^r κ(r′) r′ dr′.  (27)

The distances to the source galaxies and the void are not needed for the calculation of an individual void lensing profile; that said, the source and void redshift distributions are needed when calculating the stacked void lensing profile for an entire survey. Note that ∆Σ is in fact the projected effective density profile of the lensing void (to see this, note that the integrand of eq.(24) can be replaced using eq.(20)). Equivalently, ∆Σ = Σ_crit × γ_t, where γ_t = κ̄ − κ is the lensing tangential shear, so ∆Σ also carries the modifications to the lensing signal. We will express most of our results in terms of ∆Σ, since it is a convenient variable for comparing theory and measurements.
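To make the chain profile → Σ → ∆Σ concrete in the GR limit (where the lensing source is the matter density alone), the sketch below projects the simple profile of eq.(16) along the line of sight and forms ∆Σ in units of ρ̄R_V. The quadrature grids and δ_v = −0.5 are illustrative choices.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    y = np.asarray(y); x = np.asarray(x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def delta_simple(R, delta_v=-0.5):
    """Simple cubic void profile: delta_v (1 - R^3) inside R_V, zero outside."""
    R = np.asarray(R, dtype=float)
    return np.where(R <= 1.0, delta_v * (1.0 - R**3), 0.0)

def sigma(b, n=2001, zmax=5.0):
    """Projected surface density Sigma(b), in units of rhobar * R_V."""
    z = np.linspace(-zmax, zmax, n)          # line-of-sight coordinate, units of R_V
    return trapz(delta_simple(np.sqrt(b**2 + z**2)), z)

def delta_sigma(b, m=200):
    """DeltaSigma(b) = mean Sigma inside b minus Sigma(b), units of rhobar * R_V."""
    bp = np.linspace(1e-4, b, m)
    mean_inside = 2.0 / b**2 * trapz(np.array([sigma(x) for x in bp]) * bp, bp)
    return mean_inside - sigma(b)

for b in (0.3, 0.7, 1.0, 1.5, 2.0):
    print(f"b = {b:3.1f} R_V: DeltaSigma = {delta_sigma(b):+.4f} (rhobar R_V)")
# The signal is negative inside and around the void radius: the mean interior
# surface density of an underdensity is lower than the local value, which is
# the familiar "radial shear" imprint of a void. For this uncompensated
# profile, DeltaSigma stays negative and decays beyond R_V.
```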
IV. RESULTS

In this section we present the results of our calculations of the void shear profiles, showing the impacts both of deviations from GR and of variations in the void density profile. We also examine the reported onset of pathological behaviour in some parts of the void parameter space. All calculations in this section use best-fit values for the standard cosmological parameters from the Planck results [83]. Fig. 1 shows the expected tangential shear profiles for voids governed by General Relativity and by the cubic and quartic galileon models, for the void density profile of eq.(17). Based on the discussion of §III A, we use the same input void density profile for all three gravity models (upper panel), and study their differing lensing predictions (lower panel).

Fig. 1. Lower panel: corresponding tangential shear profiles in GR, the cubic galileon and the quartic galileon gravity theories. Recall (from §II C) that after applying the tracker ansatz the cubic galileon has no free parameters, whilst the quartic galileon has one; we take this here to be ξ = 1.9. This figure is shown at z = 0.

A. Gravity Models

We plot the quantity ∆Σ/R_V, which, for a fixed void profile at least, is independent of void size, since ∆Σ scales linearly with void radius (see eq. 28; the subtlety here being that, in reality, small and large voids tend to have slightly different profiles, see §IV B). It is clear from Fig. 1 that the cubic galileon model, having no free parameters after imposition of the tracker ansatz, gives rise to significant deviations from the GR tangential shear signal. There is a factor of ∼2 boost in the lensing signal compared to the GR predictions, at all radii. As described in the introduction, modified gravity theories can alter the lensing signal via two channels: i) by contributing to the effective energy density on the RHS of the Poisson equation, and ii) by creating effective anisotropic stress such that Φ ≠ Ψ. Using eq.(21) and the expressions in the appendix, one can see that the cubic galileon does not generate any effective anisotropic stress. Consequently, all the deviations between the GR and cubic galileon curves in Fig. 1 must arise from the effective energy density of the galileon field. Initially it may seem surprising that the galileon field can have such a substantial effect on the lensing profile, whilst its effects on the matter distribution within the void are much smaller (see §III A). The reason underlying this is the relative evolutionary timescales of the matter distribution and the scalar field profile. Since the galileon field is designed to drive cosmic acceleration (at least in the model considered here), it only becomes a significant fraction of the energy budget of the universe at late times, typically z < 1 (see [84] for the evolution of spherical perturbations in a comparable gravity model). The void density profile has largely been determined before these redshifts are reached. A related point concerns the galaxy profile. Since galaxies form in the high-density environments of halos, the relation of galaxies to matter is likely to remain close to that in GR, except possibly for lower-mass halos. We do not consider the details of the galaxy-matter density fields any further. The quartic galileon is an example of a theory which can modify lensing via both of the channels above. Despite this, the quartic curve in Fig. 1 shows only a moderate further enhancement relative to the cubic galileon prediction. This is due to the effect of the constraints in eqs.(10)-(13), which fix the value of c₄ used to be small relative to c₃.
Quantitatively, using ξ = 2.05 in the quartic galileon gives c₃ ≈ 0.08, similar to the cubic value, but c₄ ≈ −1.5 × 10⁻⁵ (and for ξ = 1.9, as shown in Fig. 1, c₃ ≈ 0.06 and c₄ ≈ 5 × 10⁻³). This explains why the variation of the lensing amplitude between the cubic and quartic models is less significant than their shared substantial difference from GR. However, Fig. 2 shows that the tangential shear profile of the quartic galileon is quite sensitive to small variations away from ξ = 1.9, particularly for ξ < 1.9. Note from Figs. 1 and 2 that, despite significant variation around the void radius and at a few radii out (r/R_V ∼ 2-3), the null of the tangential shear remains fixed at r ∼ 1.5 R_V in all cases. The reason for this is as follows: the void density profile determines the radius at which the void is exactly compensated, i.e. δM(< r) → 0. In §III B we selected the physical branch of solutions such that φ_,χ/χ → 0 at the same radius. Since the void density profile used is the same for all gravity models (see the discussion in §III A), the potential derivatives (eqs. 20 and 21) vanish at the same radius for all gravity models. Via eq.(25), this then ensures that the null of ∆Σ is unchanged by variations of the gravity model. This property should hold true for any model of gravity that does not appreciably impact the void density profile.

B. Void Profiles

In §III A we introduced two void density profiles: a simple cubic fit, and a compensated ridge profile. For ease of comparison, most of the figures in this paper employ the latter profile. In Fig. 3 we show the corresponding density and tangential shear profiles for the cubic fit, together with data obtained from voids identified by [4] in the SDSS DR7-Full LRG catalog of [85]. As reported in [4], a void of central depth δ_v ≈ −0.5 provides a good fit to the data in GR. In the lower panel we further show a corresponding set of tangential shear profiles in the cubic galileon model. It is clear that the cubic galileon produces a higher-amplitude lensing signal than GR (this can be seen, for example, by comparing the two curves with δ_v = −0.5), and that this enhancement persists out to distances well beyond the void radius. We will use the void lensing data and covariance (as per the methods of [4]) to obtain the posterior probability of δ_v for the cubic galileon. It is easy to see by eye from Fig. 3 that a cubic galileon model with δ_v ∼ −0.4 provides a good fit to the SDSS data points (compared to δ_v ∼ −0.5 in GR). Hence we expect there to be some shift in the distribution of δ_v for different gravity models. Fig. 4 shows our results. These confirm that in the cubic galileon, the best-fit δ_v shifts towards shallower voids than in GR. This makes sense, since a given value of δ_v produces a larger lensing signal in the cubic galileon than it does in GR (see Fig. 1). Reversing the argument, a given lensing signal maps back onto a shallower void in the cubic galileon than it does in GR. We will not pursue here a more detailed comparison of our theoretical predictions to current observations. This is due to significant uncertainties in obtaining the mass profile of voids from their galaxy tracer profiles in gravity models outside of GR. A careful analysis using mock catalogs would be required to obtain these constraints; we leave this to a future work. In Fig. 5 we return to the flexible density profile and regular GR, and demonstrate the variation in tangential shear induced by the presence of the compensation ridge.
In reality, ridge height is inversely correlated with void radius, and hence correlated with void depth (i.e. small, deep voids have pronounced ridges; see figure 1 of [39]). Here we wish to disentangle the effects of void depth and ridge height, which we can do by varying the parameter s₂ of eq.(17) while keeping δ_v fixed. Although the other parameters in eq.(18) can also affect the height of this ridge, s₂ has the most significant effect (see Fig. 8 of [78]). The resulting variation is shown in Fig. 5. It is evident (even by eye) that the SDSS measurements in Fig. 3 do not support the existence of a compensated ridge, and would rule out large values of s₂. With future higher signal-to-noise measurements, one can bin voids by size, since smaller voids have more prominent ridges. Given that deviations from GR tend to boost lensing at large radii, much like a ridge does, ridge features need to be studied in cosmological simulations and carefully accounted for when using lensing measurements to test gravity. A subtlety here is that the choice of void tracer and void finder can impact the analysis [10,19,86,87].

C. The Pathological Regime

As alluded to in §II D, it is known that galileon gravity suffers from pathological solutions under certain conditions. By 'pathological', we mean that the solutions for the scalar field radial derivative, φ_,χ/χ, become imaginary, and hence the lensing calculation laid out here breaks down. As we have noted, eq.(23) is a polynomial in φ_,χ/χ (the order of which depends on the model) and hence will generally possess multiple solutions. For the cubic galileon, eq.(23) is a quadratic polynomial in φ_,χ/χ with entirely real coefficients, so any complex solutions must form a conjugate pair. Hence both solutions become pathological simultaneously. This is in contrast to the quartic galileon, for which eq.(23) is a cubic polynomial in φ_,χ/χ. Though the quartic galileon also forms pairs of complex solutions in certain void regimes, we have verified that the physical value of φ_,χ/χ (the one which tends to zero as δM/r³ → 0) corresponds to the one remaining real solution of the cubic polynomial. Hence, the quartic galileon remains pathology-free inside voids⁴. Fig. 6 compares a set of tangential shear profiles for several void depths in the cubic and quartic galileon models, for the density profile of eq.(17). One can see the onset of the singularity at r/R_V ∼ 0.6 in the cubic galileon void with δ_v = −0.7, whilst the corresponding quartic galileon profile remains pathology-free. We set the shear profile to zero within the pathological regime. This shift in boundary conditions substantially affects the shear profile outside the pathological regime. Fig. 7 shows the onset of pathological behaviour for the cubic galileon, in a parameter space of void redshift and central density. For δ_v ≤ −0.7, the regime of complex solutions first begins at small radii inside the void, and expands outwards as z → 0. For deeper voids, the onset of the pathologies occurs at higher redshifts. The colourscale of Fig. 7 indicates the smallest radii for which real solutions for φ_,χ/χ exist; we denote this in units of the void radius as R_sing = r_sing/R_V. This plot is constructed using the flexible void density profile, eq.(17), with the default parameter values of eq.(18). However, we find that the onset of the pathological regime has only a very weak dependence on the void density profile, at least for the two profiles used in this paper.
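The branch-selection protocol and the onset of complex solutions can be mimicked with a toy version of eq.(23). The polynomial coefficients below are invented stand-ins (the real η_i come from the paper's appendix); the sketch only demonstrates the protocol itself: solve the polynomial at each source strength, pick the branch continuously connected to φ_,χ/χ = 0 at zero source, and flag the point where the chosen branch turns complex.

```python
import numpy as np

def branches(source, eta20=1.0, eta10=1.0, eta01=1.0):
    """Toy quadratic analogue of eq.(23): eta20 x^2 + eta10 x + eta01*source = 0,
    where x stands for phi_,chi/chi and `source` for deltaM/r^3. Coefficients invented."""
    return np.roots([eta20, eta10, eta01 * source])

def physical_branch(source):
    """Select the root continuously connected to x = 0 as the source vanishes;
    return NaN where both roots have become a complex-conjugate pair."""
    r = branches(source)
    if np.all(np.abs(r.imag) > 1e-12):        # conjugate pair: pathological regime
        return np.nan
    real = r[np.abs(r.imag) < 1e-12].real
    return real[np.argmin(np.abs(real))]      # root closest to zero decays with the source

for s in (0.0, 0.1, 0.24, 0.26, 0.5):         # discriminant 1 - 4s turns negative at s = 0.25
    x = physical_branch(s)
    print(f"source = {s:4.2f}: phi_,chi/chi -> "
          f"{'complex (pathological)' if np.isnan(x) else f'{x:.3f}'}")
```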
The significance of this pathological regime is a particularly pressing question. One possibility is that it signals a breakdown of one of the assumptions used in the calculation of §III. Alternatively, it may be a true sickness of the cubic galileon theory itself. As we explain below, it seems likely that this behaviour is related to a breakdown of the quasistatic approximation (§II D) at late times and in deep voids.

It was shown in [25] that, using the cubic galileon and the same tracker ansatz as we have used here, the gravitational potentials Φ and Ψ grow at late times in the universe on cosmological scales. This is in sharp contrast to the usual behaviour of ΛCDM, in which gravitational potential wells decay after the matter era. This effect is sufficiently strong to put the cubic galileon into ∼ 7.8σ tension with measurements of the galaxy-ISW cross-correlation from Planck and the WISE survey. Fig. 1 of [25] shows that the lensing potential begins to diverge away from its ΛCDM evolution below z ≈ 0.5, which approximately coincides with the onset of pathologies for the deepest voids, as shown in our Fig. 7.

Such rapid evolution of the gravitational potentials may violate the quasistatic assumption that their time derivatives are negligible compared to their spatial derivatives. Indeed, one can see from the full, unapproximated set of equations presented in [74] that, at late times, the temporal and spatial derivatives of Φ enter at the same order. It is easy to believe that non-negligible values of the time derivatives Φ̇ and Ψ̇ could then completely alter the character and solution of the galileon field equations, perhaps removing the complex solutions altogether. A concrete proof of this hypothesis requires a dedicated numerical investigation using the full equations of [74]. Such work is beyond the scope of the present paper. However, our work here usefully delineates the approximate boundaries in void depth and redshift up to which the quasistatic calculations can be used.

V. CONCLUSIONS

Cosmological tests of gravity are evolving rapidly. When new tools or observations arise, they allow us to probe current models in hitherto-unexplored regimes, sometimes uncovering stark predictions and invalidating the models in question. This information then sculpts the next suite of ideas regarding extensions of GR and mechanisms for cosmic acceleration. Recently, gravitational waves have provided a prime example of this. The gravitational lensing of voids is an equally novel field that has similar potential for probing the behaviour of gravity in low-density environments. This regime is notably orthogonal to all established high-precision tests of gravity, since no comparable low-density environment exists inside the Galaxy. Furthermore, there is reason to believe that any gravitational fifth forces must undergo suppression in high-density environments, leaving the door open for unscreened phenomena to manifest themselves in voids.

In this paper we have studied the lensing signatures of a family of gravity models, galileons, that invoke a single scalar field with derivative interactions. The models chosen are not intended to represent realistic models of late-time cosmic acceleration; indeed, whilst this work was in preparation, the LIGO-VIRGO Consortium and collaborating experiments announced results which eliminated the quartic galileon as a viable extension of General Relativity. However, they provide relatively simple and clean examples of modifications to GR that contain higher-derivative interactions.
Our work should provide a useful study of the typical phenomenology that can be produced by a non-trivial scalar field propagating on scales of tens of megaparsecs. Let us summarise here some of our observations on how deviations from GR can affect void shear profiles:

i) The effective stress-energy contribution of a scalar field to the Poisson equation can produce significant deviations from GR void lensing profiles, even if void shapes in modified gravity are very little changed from those of GR voids [78-80]. Variations amongst the gravity models studied here impacted the amplitude of the shear profile, but did not shift its zero-crossing; the lensing signal beyond the void radius was also typically boosted. Our findings are consistent with those of [68].

ii) As shown in Fig. 4, the inferred distribution of the central underdensity, δ_v, is changed by approximately 20% in cubic galileon gravity compared to GR. If one assumes that the galaxy-mass connection is unaltered, then the galaxy tracer profile of voids can be used to estimate δ_v independently, thus providing a consistency test of the theory. In future work, reliable mock catalogs and dedicated, model-specific simulations of alternative gravity theories will be needed to obtain high-confidence constraints.

iii) The widely-used quasistatic and tracker approximations have significant impacts on void lensing. The tracker approximation implies theoretical constraints (§II C) which can affect the lensing amplitude in a non-intuitive way, by fixing some of the gravity model parameters. The quasistatic approximation is the likely cause of the mathematical singularities occurring in the deepest voids at low redshift. One should examine the behaviour of any gravity theory carefully in this regime (low z and large |δ_v|) before applying a quasistatic treatment.

Of course, it is possible that gravitational models other than those studied here could produce different phenomenology. On large cosmological scales, where linear perturbation theory is valid, one may build unified frameworks that encompass many different gravity models, and use these to perform generalised calculations [88-93]. At present there is no equivalent treatment applicable to fully non-linear scales (although for a recent idea see [94,95]); the best one can do is to investigate an array of models and attempt to extract general common behaviours.

From an observational viewpoint, the overall enhancement of void lensing, plus the boost in the shear profile outside the void radius for compensated voids, provides a potentially accessible handle on these theories of gravity. The errors on the tangential shear measured by the full Dark Energy Survey (DES) [96,97] will be at least a factor of two smaller than those published in the DES Year 1 analysis, which indicates encouraging prospects for constraining the phenomenology seen here. Likewise, the abundance of voids itself provides a route to constrain the dark energy parameters {w_0, w_a}; forecasts for the Euclid and WFIRST [98] satellites are given in [99]. Void lensing with the Euclid and LSST surveys will provide major advances in the statistical errors and redshift coverage of the void sample. With voids that extend to z ∼ 1, one may also pursue possible evolution in the lensing enhancement due to modified gravity theories. We hope to forecast the potential of voids in these surveys to constrain deviations from GR in a future work.
2018-03-25T22:14:15.000Z
2018-03-20T00:00:00.000
{ "year": 2018, "sha1": "5d5e4c66326a445fb918307b1d767eb0c229a1ae", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.98.023511", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "96f97123128a87fd3a4d19234d1c712215c5bef9", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
255292807
pes2o/s2orc
v3-fos-license
A Novel Chemiluminescent Method for Efficient Evaluation of Heterogeneous Fenton Catalysts Using Cigarette Tar

The evaluation of the catalytic capacity of catalysts is indispensable research, as catalytic capacity is a crucial factor dictating the efficiency of heterogeneous Fenton catalysis. Herein, we obtained cigarette tar-methanol extracts (CTME) by applying methanol to cigarette tar and found that CTME could undergo CL reactions with Fe2+/H2O2 systems in acidic, neutral, and alkaline media. The CL spectrum experiment indicated that the emission wavelengths of the CTME CL reaction with Fe2+/H2O2 systems were about 490 nm, 535 nm, and 590 nm. Quenching experiments confirmed that hydroxyl radicals (•OH) were responsible for the CL reaction of CTME. The CL property of CTME was then applied to rapidly determine, in-situ, the amounts of •OH in the tetrachloro-1,4-benzoquinone (TCBQ)/H2O2 system in acidic, neutral, and alkaline media, and the CL intensities correlated linearly (R² = 0.99) with TCBQ concentrations. To demonstrate the utility of the CTME CL method, the catalytic capacities of different types and concentrations of catalysts in heterogeneous Fenton catalysis were examined. It was found that the order of CL intensities was consistent with the order of degradation efficiencies of Rhodamine B, indicating that this method can distinguish the catalytic capacity of catalysts. The CTME CL method could provide a convenient tool for the efficient evaluation of the catalytic capacity of catalysts in heterogeneous Fenton catalysis.

Introduction

Heterogeneous Fenton catalysis has become a major research focus in the area of wastewater treatment due to advantages over other advanced oxidation processes (AOPs), such as recyclability, a wide pH response range, easy solid-liquid separation, and no production of iron sludge [1-6]. Catalytic capacity is the critical factor dictating the efficiency of heterogeneous Fenton catalysis in the degradation of pollutants. Therefore, intensive attention has been paid to the synthesis of a wide variety of new catalysts to improve the catalytic capacity [7-13]. For example, Hu et al. broke through the traditional Fenton theory to synthesize a new type of catalyst with a dual-reaction center [14,15]. In practice, researchers usually synthesize a series of materials under different conditions to obtain, distinguish, and select the catalyst with the best catalytic capacity, which is important yet tedious work. The hydroxyl radical (•OH) plays a crucial role in the degradation of pollutants in heterogeneous Fenton catalysis, so the amount of •OH can serve as an indicator of the catalytic capacity of a catalyst. Therefore, it is feasible to evaluate the catalytic capacity of catalysts by rapid and in-situ detection of •OH in heterogeneous Fenton catalysis.

The current detection methods for •OH mainly include electron spin resonance (ESR), ultraviolet-visible (UV-vis) absorbance, and fluorescence [16]. These methods usually need a capture probe to react with free radicals to form a detectable product, followed by solid-liquid separation and measurement, and thus cannot carry out the rapid and in-situ detection of •OH in heterogeneous Fenton catalysis, which is inefficient for the evaluation of the catalytic capacity of catalysts. Chemiluminescence (CL) is an optical phenomenon in which excited-state species generated through chemical reactions release energy (>45 kcal·mol⁻¹) in the form of photons.
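As a quick back-of-envelope check (ours, not from the paper), the >45 kcal·mol⁻¹ threshold corresponds to photons in the visible range, consistent with the CTME emission peaks reported above:

```python
# Wavelength of a photon carrying exactly 45 kcal/mol of energy.
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # SI constants
E_per_photon = 45 * 4184 / N_A             # 45 kcal/mol -> J per photon
lam_nm = h * c / E_per_photon * 1e9
print(f"{lam_nm:.0f} nm")                  # ~635 nm; the CTME peaks (490, 535,
                                           # 590 nm) all carry more energy than this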
CL is a well-suited method for the rapid detection of free radicals due to its fast detection speed and high sensitivity [17-22]. We have previously built a continuous-flow CL method for rapid and dynamic monitoring of superoxide radicals in TiO2 photocatalysis by using the luminol CL system [23]. However, there are currently no suitable CL methods for rapid and in-situ detection of •OH in heterogeneous Fenton catalysis, largely because the pH value of the heterogeneous Fenton system is incompatible with current CL reactions. Therefore, constructing a novel CL method suited to the heterogeneous Fenton system, on the basis of a new principle of CL reaction, would significantly contribute to efficiently evaluating the catalytic capacity of catalysts by rapid and in-situ detection of •OH.

Cigarette tar is the condensate product of the incomplete combustion of tobacco under high-temperature and anoxic conditions. It contains an abundance of various compounds, and though a fraction derives from the original composition of tobacco, most of the components are products generated during cigarette combustion. To date, research has mainly focused on the hazardous components of cigarette tar and their toxicological implications [24,25]. In our previous study, we reported the CL property of tobacco extract [26]. In cigarette tar, there are probably some chemiluminophores directly transferred from tobacco. More important, however, is that an abundance of fused polycyclic compounds is produced in cigarette tar during combustion, which might favor the chemical transformation of chemiluminophores with aromaticity comparable to current CL probes. This might eventually improve the luminous efficiency of cigarette tar in comparison with tobacco.

Based on the above analysis, our objective in this study was to develop a new method for efficiently evaluating the catalytic capacity of catalysts through rapid and in-situ detection of •OH. Therefore, we first explored the CL properties of cigarette tar-methanol extracts (CTME) and then examined the feasibility of the CTME CL method for the rapid and in-situ detection of •OH. Finally, the CTME CL method was demonstrated to be able to efficiently evaluate the catalytic capacity of catalysts in heterogeneous Fenton catalysis.

Preparation of Cigarette-Tar-Methanol Extracts (CTME)

The cigarettes were made as follows: ripe fresh tobacco leaves were cured by a three-stage curing procedure (yellowing, color fixing, and vein drying) detailed in a previous report [26]. After curing, the leaves were cleaned through dust removal using a brush and then left to regain moisture for 10 h at room temperature. The tobacco leaves were then cut into shreds after removing the veins and made into cigarettes using a cigarette rolling machine. Each cigarette was about 70 mm in length, 27.5 mm in circumference, and 1.1 g in weight. In order to obtain the cigarette tar, 20 cigarettes were smoked on a smoking machine (Borgwaldt, Germany), and a Cambridge filter was used to trap the particulate matter of the mainstream smoke. The filter was then cut into strips, added to 40 mL of methanol, sonicated for 20 min, and filtered through a 0.45 µm Millipore membrane to obtain the CTME (filtrate) for further experimentation. The final mass concentration of CTME used throughout the experiments was about 4.0 mg/mL.
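A rough consistency check of these numbers (under our assumption, not stated in the paper, that essentially all of the trapped tar dissolved into the 40 mL of methanol):

```python
# Total tar mass implied by the stated CTME concentration, and the
# per-cigarette yield (assumes complete dissolution of the trapped tar).
conc_mg_per_ml, volume_ml, n_cigarettes = 4.0, 40, 20
total_tar_mg = conc_mg_per_ml * volume_ml                       # 160 mg in total
print(total_tar_mg / n_cigarettes, "mg of tar per cigarette")   # ~8 mg
```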
Synthesis of Catalysts

Mesoporous MnFe2O4 and CoFe2O4 nanospheres were prepared by a modified hydrothermal method reported previously [27]. Typically, 1.35 g of FeCl3·6H2O and the corresponding transition metal salt (1.61 g MnCl2·2H2O or 2.37 g CoCl2·6H2O) at a molar ratio of 2:1 were dissolved in ethylene glycol (40 mL) containing 3.6 g of sodium acetate. The mixture was then covered and stirred vigorously on a magnetic stirrer for 30 min, and once a clear yellow solution was obtained, the solution was transferred to a Teflon-lined stainless-steel autoclave. The autoclave was then heated slowly to 200 °C and maintained at that temperature for 8 h. The products were separated by applying an external magnetic field after the solution had cooled down to room temperature. The precipitate was washed several times with ethanol and dried under vacuum at 60 °C for 12 h. The FeOCl nanosheet was synthesized by heating FeCl3·6H2O at a rate of 10 °C·min⁻¹ to 220 °C and annealing for 2 h, as previously reported [28].

CL Measurements

CL kinetic curves were recorded in batch experiments, which were conducted in a static system consisting of a glass cuvette and a BPCL Ultra-Weak Luminescence Analyzer (Institute of Biophysics, Chinese Academy of Sciences, Beijing, China). Briefly, for each CL reaction, 100 µL of CTME and the involved reagents were added into a glass cuvette. Then 100 µL of the co-reaction reagent was injected with a microsyringe through the upper injection pore to trigger the CL reaction. For the measurement of the CL spectrum, a series of high-energy optical filters (440, 460, 475, 490, 505, 535, 555, 575, 590, and 605 nm) was used to screen the CL intensities of the CTME CL systems.

Degradation of Rhodamine B

The degradation of Rhodamine B by different types and concentrations of catalysts in heterogeneous Fenton catalysis was conducted as follows. A total of 10 µL of Rhodamine B (5.0 mg/mL) was added into 5.0 mL of one of the three catalysts (1.0 mg/mL) or into different concentrations of the FeOCl nanosheet (0.05, 0.08, 0.1, 0.2, and 0.5 mg/mL), followed by the addition of 500 µL of H2O2 (1.0 mol/L). After 10 min, 500 µL of ascorbic acid (0.5 mol/L) was added to the mixture to stop the reaction. The mixture was then filtered through a 0.45 µm membrane, and the filtrate containing the residual Rhodamine B was measured on a UV-vis spectrophotometer.

CL Property of CTME

We have previously studied the CL behavior of tobacco-methanol extract (TME) with the Fe2+/H2O2 system [26]. Herein, the CL characteristics of CTME with the Fe2+/H2O2 system were also investigated. As shown in Figure 1a, CL emissions of CTME with the Fe2+/H2O2 system were generated at different pH levels ranging from 0 to 14. The results indicated that CTME could undergo CL reactions with the Fe2+/H2O2 system in acidic, neutral, and alkaline media, as with TME [26]. CTME exhibited slow CL reactions, with an almost plateau-like, long-lasting weak emission at pH ≤ 2, while TME showed fast CL reactions whose intensity reached a maximum at pH = 1 [26]. As pH increased, however, the CL intensity of CTME increased until pH 4 and remained stable from pH 4 to 10. From pH 11 to 14, the CL intensity of CTME escalated and then declined drastically. The maximum CL intensity for CTME occurred at pH = 12 and was about three times higher than that at pH 4 through 10. In contrast, the CL intensity for TME began to decrease at pH > 1 and increased to a maximum again at pH = 9 [26].
Thereafter, the CL intensity declined [26]. These results show the different CL characteristics of CTME and TME, indicating that the chemiluminophores within CTME and TME probably differ in quantity and type. In addition, the luminescent efficiencies of CTME and TME were also examined (Figure S1). The CL intensity of CTME was about two to three times greater than that of TME at the same mass concentration. This further implies that the combustion process generating cigarette tar from tobacco probably changed the chemiluminophores in both quantity and type, which led to the higher luminescent efficiency of CTME relative to TME.

To further examine the CL behavior of CTME with Fe2+/H2O2, CL spectra of CTME-Fe2+/H2O2 were recorded in acidic, neutral, and alkaline media, respectively (Figure 1b-d). In an acidic medium (0.1 mM H2SO4 solution), there were two peaks, one centered at 490 nm and the other at 575 nm (Figure 1b). In H2O as the neutral medium, two maximum peaks appeared at about 490 nm and 590 nm (Figure 1c). The shift of the wavelength from 575 nm in the acidic medium to 590 nm in the neutral medium could probably be attributed to the change in pH value. In the 0.01 mol/L NaOH solution representative of the alkaline medium, an additional peak emerged at 535 nm alongside the two peaks at 490 nm and 590 nm (Figure 1d). Furthermore, the CL intensity of the peak at 490 nm escalated as pH increased, while the CL intensity at 590 nm in the neutral and alkaline media was almost identical but larger than that at 575 nm in the acidic medium. The CL intensities at 590 nm (or 575 nm in the acidic medium) were larger than those at 490 nm regardless of the pH value. The CL intensity at 535 nm in the alkaline medium was higher than those at both 490 nm and 590 nm, which is most likely the reason that the maximum CL intensity occurred in the 0.01 mol/L NaOH solution.
Overall, approximately three kinds of potential emitting species in CTME participated in the CL reactions with Fe2+/H2O2, and the species emitting at 490 nm and 590 nm could undergo CL reactions regardless of the pH value. This is intriguing, given that conventional CL reactions are usually restricted by pH value. Meanwhile, the emitting species corresponding to the emission wavelength of 535 nm tended to undergo the CL reaction in alkaline media (e.g., pH = 12), but not in acidic or neutral solutions.

•OH Detection

In our previous study, the hydroxyl radical (•OH) was confirmed to be responsible for the CL reaction of TME. In the present study, a universal •OH scavenger, thiourea, was added to the CTME-Fe2+/H2O2 system to investigate the role of •OH in the CTME CL reaction (Figure S2). The CL signals were completely inhibited by adding thiourea in acidic, neutral, and alkaline media, meaning that •OH played a crucial role in the CTME CL reaction. To verify the feasibility of the CTME CL method for determining •OH, a typical •OH-generating system, tetrachloro-1,4-benzoquinone (TCBQ)/H2O2, was adopted for the CTME CL reaction [29]. The CL phenomenon of CTME was first investigated by mixing with the TCBQ/H2O2 system in acidic, neutral, and alkaline media. CL emissions were produced by all CTME-TCBQ/H2O2 systems (Figure S3), indicating that •OH-triggered CTME CL reactions in this system occurred in acidic, neutral, and alkaline solutions. The relationship between the CL intensity of CTME and the amount of •OH was then examined. Different •OH amounts were generated indirectly by changing the TCBQ concentration, owing to the short lifetime of •OH. As shown in Figure 2, the CL intensity of CTME exhibited a linear increase with TCBQ concentration (R² = 0.99) in acidic, neutral, and alkaline media, confirming that the CL intensity of CTME was •OH concentration-dependent in TCBQ/H2O2 systems. These results also confirmed that the CTME CL method could achieve the rapid and in-situ detection of •OH in a semi-quantitative way.
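A sketch of the calibration behind Figure 2: a linear fit of CL intensity against TCBQ concentration and its R². The data points below are illustrative placeholders, not the published measurements.

```python
# Least-squares calibration line and coefficient of determination.
import numpy as np

tcbq = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])          # TCBQ conc. (hypothetical units)
cl = np.array([120., 980., 1900., 2750., 3700., 4600.])  # CL intensity (hypothetical counts)

slope, intercept = np.polyfit(tcbq, cl, 1)
pred = slope * tcbq + intercept
r2 = 1.0 - np.sum((cl - pred) ** 2) / np.sum((cl - cl.mean()) ** 2)
print(f"slope = {slope:.0f} counts per unit TCBQ, R^2 = {r2:.3f}")
```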
Evaluation of the Catalytic Capacity of Catalysts

The heterogeneous Fenton catalytic reaction has been a research hotspot in water treatment technology due to its advantages over other AOPs [6]. Researchers are now keen on synthesizing various catalysts for heterogeneous Fenton catalysis, and thus the ability to distinguish catalytic capacity is indispensable. The catalytic capacity of these synthesized catalysts is highly dependent on •OH production. Herein, we attempted to evaluate the catalytic capacity of three different catalysts (FeOCl, CoFe2O4, and MnFe2O4) under the same experimental conditions, and then of the same catalyst (FeOCl) at different concentrations, by determining the amount of •OH rapidly and in-situ with the CTME CL method. Of the three catalysts shown in Figure 3a, the highest CL intensity was derived from FeOCl, which was much more intense than those of CoFe2O4 and MnFe2O4, while the CL intensity of MnFe2O4 was only slightly larger than that of CoFe2O4. For FeOCl alone (Figure 3b), the CL intensity of CTME increased as the concentration of FeOCl increased, and there was a good correlation between them (R² = 0.98). In addition, the degradation efficiencies of Rhodamine B with different types and concentrations of catalysts were measured under the same conditions (Figure S4). The order of Rhodamine B degradation efficiency for the three catalysts was FeOCl > MnFe2O4 > CoFe2O4 (Figure S4a), in accordance with the CL intensities in Figure 3a. Figure S4b shows that the degradation efficiency of Rhodamine B increased with the FeOCl concentration, which was also consistent with the CL intensity in Figure 3b. The combined results in Figure 3 and Figure S4 strongly confirm that the CTME CL method can efficiently evaluate the catalytic capacity of catalysts in heterogeneous Fenton catalysis.
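For reference, degradation efficiencies such as those in Figure S4 are typically obtained from the UV-vis readings described above as η = (A0 − At)/A0, read at the dye's absorbance maximum (~554 nm for Rhodamine B). The absorbance values below are hypothetical, chosen only to reproduce the reported ordering.

```python
# Degradation efficiency from initial and final absorbance (hypothetical data).
def degradation_efficiency(a0: float, at: float) -> float:
    """Fraction of dye removed, from absorbance before and after reaction."""
    return (a0 - at) / a0

A0 = 1.25  # absorbance before reaction (hypothetical)
for catalyst, At in {"FeOCl": 0.10, "MnFe2O4": 0.55, "CoFe2O4": 0.70}.items():
    print(f"{catalyst}: {100 * degradation_efficiency(A0, At):.0f}% removed")
```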
Conclusions

In this work, the CL property of CTME toward •OH was examined at different pH values, and the method was shown to achieve the rapid and in-situ detection of •OH in a semi-quantitative way in acidic, neutral, and alkaline media. The CTME CL method was then successfully used to evaluate the catalytic capacity of catalysts in heterogeneous Fenton catalysis. Given that numerous catalysts have been synthesized for heterogeneous Fenton catalysis, the CTME CL method provides a convenient tool for the efficient evaluation of their catalytic capacity. In addition, the chemiluminophores within CTME are intriguing in themselves and worthy of further research.
2022-12-31T16:13:17.339Z
2022-12-29T00:00:00.000
{ "year": 2022, "sha1": "ffe05f7df17773fc8be092a8e8edff81804c7890", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2305-6304/11/1/30/pdf?version=1672296028", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6d3329e1fdd6584436ae8bf18d0e0e6fed2fc32c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
55978065
pes2o/s2orc
v3-fos-license
The Jeans theorem and the"Tolman-Oppenheimer-Volkov equation"in an exact wave solution of R^2 gravity Corda, Mosquera Cuesta and Lorduy Gomez have shown that spherically symmetric stationary states can be used as a model for galaxies in the framework of the linearized R^2 gravity. Those states could represent a partial solution to the Dark Matter Problem. Here we discuss an improvement of this work. In fact, as the star density is a functional of the invariants of the associated Vlasov equation, we show that any of these invariants is in its turn a functional of the local energy and the angular momentum. As a consequence, the star density depends only on these two integrals of the Vlasov system. This result is known as the"Jeans theorem". In addition, we find an analogous of the historical Tolman- Oppenheimer-Volkov equation for the system considered in this paper. For the sake of completeness, in the final Section of the paper we consider two additional models which argue that Dark Matter could not be an essential element. Introduction The Dark Matter problem started in the 30's of last century [1].When one observes the Doppler shift of stars which move near the plane of our Galaxy and calculates the velocities, one finds a large amount of matter inside the Galaxy which prevents the stars to escape out.That (supposed and unknown) matter generates a gravitational force very large, that the luminous mass in the Galaxy cannot explain.In order to achieve the very large discrepancy, the sum of all the luminous components of the Galaxy should be two or three times more massive.On the other hand, one can calculate the tangential velocity of stars in orbits around the Galactic center like a function of distance from the center.The result is that stars which are far away from the Galactic center move with the same velocity independent on their distance out from the center. These strange issues generate a portion of the Dark Matter problem.In fact, either luminous matter is not able to correctly describe the radial profile of our Galaxy or the Newtonian Theory of gravitation cannot describe dynamics far from the Galactic center. Other issues of the problem arise from the dynamical description of various self-gravitating astrophysical systems.Examples are stellar clusters, external galaxies, clusters and groups of galaxies.In those cases, the problem is similar, as there is more matter arising from dynamical analyses with respect to the total luminous matter. Zwicky [2] found that in the Coma cluster the luminous mass is too little to generate the gravitational force which is needed to hold the cluster together [2]. 
The most widespread way of attempting to solve the problem is to assume that Newtonian gravity holds at all scales and that there exist non-luminous components which contribute to the missing mass. A number of names are used for such non-luminous components. MAssive Compact Halo Objects (MACHOs) are supposed to be bodies composed of normal baryonic matter which do not emit (or emit little) radiation and drift through interstellar space unassociated with planetary systems [3]. They could be black holes and/or neutron stars populating the outer reaches of galaxies. Weakly Interacting Massive Particles (WIMPs) are hypothetical particles which do not interact with standard matter (baryons, protons and neutrons) [4]. Hence, they should be particles outside the Standard Model of particle physics, but they have not yet been directly detected. Dark Matter is usually divided into three different flavours: Hot Dark Matter (HDM) [5], Warm Dark Matter (WDM) [6] and Cold Dark Matter (CDM) [7]. HDM should be composed of ultrarelativistic particles like neutrinos. CDM should consist of MACHOs, WIMPs and axions, which are very light particles with a particular self-interaction behaviour [8]. If we consider the standard model of cosmology, the most recent results from the Planck mission [9] show that the total mass-energy of the known universe consists of 4.9% ordinary matter, 26.8% Dark Matter and 68.3% Dark Energy.

2 R^2 theory of gravity and the Vlasov system

In the framework of f(R) theories of gravity, the R^2 theory is the simplest among the class of viable models with R^m terms. Those models support the acceleration of the universe in terms of a cosmological constant or quintessence, as well as an early-time inflation [11,15,20]. Moreover, they should pass the Solar System tests, as they have an acceptable Newtonian limit, no instabilities and no Brans-Dicke problem (decoupling of the scalar) in the scalar-tensor version. We recall that the R^2 theory was historically proposed in [29] with the aim of obtaining cosmological inflation.

The R^2 theory arises from the action of Eq. (1) [30], where b represents the coupling constant of the R^2 term. In general, when the coupling constant of the R^2 term in the gravitational action (1) is much smaller than the linear term R, the deviation from standard General Relativity is very weak and the theory can pass the Solar System tests [30]. In fact, as the effective scalar field arising from curvature is highly energetic, the coupling constant of the R^2 non-linear term tends to zero [30]. In that case, the Ricci scalar, which represents an extra dynamical quantity in the metric formalism, should have a range longer than the size of the Solar System. This is correct when the effective length of the scalar field l is much shorter than 0.2 mm [31]. Hence, this effective scalar field remains hidden from Solar System and terrestrial experiments. By analysing the deflection of light by the Sun in the R^2 theory through a calculation of the Feynman amplitudes for photon scattering, one sees that, to linearized order, the result is the same as in standard General Relativity [22]. By assuming that the dynamics of the matter (the stars making up the galaxy) can be described by the Vlasov system, a model of a stationary, spherically symmetric galaxy can be obtained. This issue was described in detail in [30] in the framework of the R^2 theory. For the sake of completeness, in this Section we briefly review it.
In this paper Greek indices run from 0 to 3. By varying the action of Eq. (1) with respect to g_µν, one gets the field equations (2) [30]. By taking the trace of Eq. (2), the associated Klein-Gordon equation (3) for the Ricci curvature scalar is obtained [30], where □ is the d'Alembertian operator and the energy term E has been introduced in Eq. (4) for dimensional reasons [30]. Hence, b is positive [30]. T^(m)_µν in Eq. (2) is the standard stress-energy tensor of the matter, and General Relativity is easily re-obtained for b = 0 in Eq. (2).

As we study interactions between stars at galactic scales, we consider the linearized theory in vacuum (T^(m)_µν = 0), which gives a better approximation than Newtonian theory [30]. Calling R̃ the linearized quantity which corresponds to R and considering the plane wave [30]

bR̃ = a(p) exp(iq_β x^β) + c.c., (5)

with the dispersion relation given by the second of Eq. (6) [30], one can choose a gauge for a gravitational wave propagating in the +z direction in which a first-order solution of Eqs. (2) with T^(m)_µν = 0 is given by the conformally flat line element (7) [30]. We recall that the dispersion law for the modes of bR̃, i.e. the second of Eq. (6), is that of a wave-packet [30]. The group velocity of a wave-packet is given by Eq. (8). From the second of Eq. (6) and Eq. (8) one gets Eq. (9) [30], which can be rewritten as Eq. (10) [30].

If one assumes that the dynamics of the stars making up the galaxy is described by the Vlasov system, the gravitational forces between the stars will be mediated by the metric (7). Thus, the key assumption is that, in a cosmological framework, the wave-packet of bR̃ centred in p, which is given by the (linearized) spacetime curvature, governs the motion of the stars [30]. In this way the "curvature" energy E is identified with the Dark Matter content of a galaxy of typical mass-energy E ≃ 10^45 g, in ordinary c.g.s. units [30]. As E ≃ 10^45 g, from Eq. (4) one gets b ≃ 10^−34 cm^4 in natural units [30]. Hence, the coupling constant of the R^2 term in the action (1) is much smaller than the linear term R, and the deviation from standard General Relativity is very weak. This implies that the theory can pass the Solar System tests, as the effective length of the scalar field is l ≪ 0.2 mm [30].

We can use a conformal transformation [30,32] to rescale the line element (7) as in Eq. (11), where we set Φ ≡ bR̃ (Eq. (12)) [30]; the linearized theory then gives Eq. (13). Hence, it is the Ricci scalar, i.e. the scalaron [29,30], that is the scalar field which translates the analysis into the conformal frame, the Einstein frame [15,30].

In general, the Vlasov-Poisson system is introduced through the system of equations [30,32-34]

∂_t f + v · ∇_x f − ∇_x U · ∇_v f = 0,   ΔU = 4πρ,   ρ(t, x) = ∫ f(t, x, v) dv,   (16)

in units with G = 1. In Eqs. (16), t is the time and x and v are the position and the velocity of the stars, respectively. U = U(t, x) is the average Newtonian potential generated by the stars. The system (16) represents the non-relativistic kinetic model for an ensemble of particles (stars in the galaxy) with no collisions. The stars interact only through the gravitational forces which they generate collectively, are considered as point-like particles, and relativistic effects are neglected [30,32-34]. The function f(t, x, v) in the Vlasov-Poisson system (16) is non-negative and gives the density on phase space of the stars within the galaxy [30].
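As a minimal numerical illustration of the Poisson half of the system (16) (the density profile below is a toy choice of ours, not one from the paper), one can recover the mean-field force from a given spherical star density:

```python
# Spherically symmetric Poisson equation: dU/dr = M(<r)/r^2 in units with
# G = 1 and the 4*pi normalization of eq. (16).
import numpy as np

r = np.linspace(1e-3, 10.0, 2000)
rho = (1.0 + r ** 2) ** (-2.5)                       # toy, Plummer-like density

dr = r[1] - r[0]
M_enc = 4 * np.pi * np.cumsum(rho * r ** 2) * dr     # enclosed mass M(<r)
dU_dr = M_enc / r ** 2                               # radial gradient of U
print(f"mean-field force peaks at r = {r[np.argmax(dU_dr)]:.2f}")
```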
In this approach, the first-order solutions of the Klein-Gordon equation (3) for the Ricci curvature scalar are considered as galactic high-energy scalarons, which are expressed in terms of wave-packets having stationary solutions within the Vlasov system [30]. The energy of the wave-packet is interpreted as the Dark Matter component which guarantees the galaxy's equilibrium [30]. This approximation is not as precise as one would expect [30], but here we consider it as the starting point of our analysis.

The analysis in [30] permits one to rewrite the Vlasov-Poisson system in spherical coordinates as Eqs. (17), (18) and (19). In Eqs. (17), (18) and (19), p denotes the vector p = (p_1, p_2, p_3) with p^2 = |p|^2, and x denotes the position vector. As one is interested in stationary states, one calls λ the wavelength of the "galactic" gravitational wave (7), i.e. the characteristic length of the gravitational perturbation [30]. One further assumes that λ ≫ d, d being the galactic scale of order d ∼ 10^5 light-years [35,36]. In other words, the gravitational perturbation can be considered "frozen-in" with respect to the galactic scale [30]. In that way, one can write down the system of equations (20), (21) and (22) defining the stationary solutions of Eqs. (17), (18) and (19).

Therefore, the idea in [30] is that the spin-zero degree of freedom arising from the R^2 term in the gravitational Lagrangian, i.e. the scalaron, is a potential candidate for the dark matter. In this approach, the dominant contribution to the curvature within a galaxy comes from the scalaron field equation (3). That equation has a proper baryon source term. This enables the baryons themselves to evolve obeying a collisionless Boltzmann equation. The baryons can then propagate on the curvature generated by the scalaron.

3 The Jeans theorem and the "Tolman-Oppenheimer-Volkov equation"

The following results will be obtained by adapting the ideas introduced in [32-34,37]. Let us start by recalling some important definitions in the conformal frame, and let us consider the stationary solutions of our stellar dynamics model, i.e. Eqs. (20), (21) and (22). The particle density is a functional of the invariants of the Vlasov equation (14). The Jeans theorem states that any of these invariants must be a functional of the local energy and the angular momentum. In that way, the particle density depends only on these two integrals of the system under consideration. In order to prove these statements, one introduces the new coordinates (r, Y, Z). We note that, f being spherically symmetric, one can write it as a function of r, Y, Z, i.e. f(x, p) → f(r, Y, Z). Thus, Eq. (22) in the new coordinates takes the same form as eq. (2.11) in [37], with m(r) = 1 + bR̃ (d/dr)(bR̃ r^2). Thus, by applying the result in [37], one obtains that f must have the form f = f(E, L^2). Returning to the system of Eqs. (20), (21) and (22), one immediately obtains the explicit expressions of E, the local energy of the particles, and of L^2, the modulus squared of the local angular momentum. Using Eq. (29) and a transformation of variables, the system (23) takes a reduced form, and a direct computation then permits one to obtain the analogue of the historical Tolman-Oppenheimer-Volkov equation [38,39] for the system under analysis in this paper.
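For orientation, the two invariants take the standard forms used in the Vlasov-Poisson literature (a sketch in flat-space conventions; in the present model the conformal factor and the m(r) factor above modify the effective potential, so the expressions below are indicative only):

```latex
E(x,p) = \tfrac{1}{2}\,\lvert p\rvert^{2} + U_{\mathrm{eff}}(r), \qquad
L^{2}(x,p) = \lvert x \times p\rvert^{2} = r^{2}\lvert p\rvert^{2} - (x\cdot p)^{2},
\qquad f(x,p) = \varphi\!\left(E,\,L^{2}\right).
```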
For the sake of completeness, we stress that the Jeans instability (gravitational stability) has been analysed in the context of modified theories of gravity for rotating/non-rotating configurations in [40-42]. Related questions on the Tolman-Oppenheimer-Volkov equation and the Jeans instability (but not for plane waves) were also studied in [43-45]. The Tolman-Oppenheimer-Volkov equation has also been analysed in gravity's rainbow in the presence of a cosmological constant in [46], and in the framework of dilaton gravity in [47].

4 Two additional models

For the sake of completeness, we play the devil's advocate and consider two models which argue that Dark Matter is not an essential element, even though popular models postulate that it comprises roughly a fourth of the universe. Our starting point is the relation (34) [48,49], where G_0 is the present value of G, t_0 is the present age of the universe and t is the time elapsed from the present epoch. Similarly, one can deduce the relation (35) [49]. In this scheme the gravitational constant G varies slowly with time. This is suggested by Sidharth's 1997 cosmology [48], which correctly predicted a Dark Energy driven accelerating universe at a time when the accepted paradigm was the standard Big Bang cosmology, in which the universe would decelerate under the influence of dark matter.

We reiterate the problem of galactic rotation curves [49,50]. On the basis of straightforward dynamics, we would expect the rotational velocities at the edges of galaxies to fall off according to Eq. (36). However, it is found that the velocities tend to a constant value, Eq. (37). As is well known, this has led to the postulation of as yet undetected additional matter, the so-called Dark Matter. We observe that from Eq. (35) one can easily deduce Eq. (38) for the acceleration a [51], as we are considering infinitesimal intervals t and nearly circular orbits. Equation (38) shows that there is an anomalous inward acceleration, as if there were an extra attractive force or an additional central mass [52]. Thus one obtains Eq. (39), and from Eq. (39), Eq. (40) follows. Eq. (40) shows that at distances within the edge of a typical galaxy, that is r < 10^23 cm, Eq. (36) holds, but as we reach the edge and beyond, that is for r ≥ 10^24 cm, we have v ∼ 10^7 cm/sec, in agreement with Eq. (37). Then, the time variation of G explains the observations without invoking dark matter. It may also be mentioned that other effects, like the Pioneer anomaly and the shortening of the period of binary pulsars, can be deduced [53], while new effects are also predicted.
Milgrom [54] approached the problem by modifying Newtonian dynamics at large distances. This approach is purely phenomenological. The idea was that perhaps standard Newtonian dynamics works at the scale of the solar system, but at galactic scales, involving much larger distances, the situation might be different. However, a simple modification of the distance dependence in the gravitation law, as pointed out by Milgrom, would not do, even if it produced the asymptotically flat rotation curves of galaxies. Such a law would predict the wrong form of the mass-velocity relation. So Milgrom suggested the following modification to Newtonian dynamics: a test particle at a distance r from a large mass M is subject to the acceleration a given by

a = (G M a_0)^{1/2} / r, (41)

where a_0 is an acceleration such that standard Newtonian dynamics is a good approximation only for accelerations much larger than a_0. The above equation, however, would hold only when a is much less than a_0. Both statements can be combined in the heuristic relation

µ(a/a_0) a = G M / r^2. (42)

In Eq. (42), µ(x) ≈ 1 when x ≫ 1, and µ(x) ≈ x when x ≪ 1. It is worthwhile to note that (41) and (42) are not deduced from any theory, but rather are an ad hoc fit to explain observations. Interestingly, it must be mentioned that most of the implications of MOdified Newtonian Dynamics (MOND) do not depend strongly on the exact form of µ. It can then be shown that the problem of galactic velocities is solved [55-59].

It is interesting to note that there is a relationship between the varying-G approach, which has a theoretical basis, and the purely phenomenological MOND approach. Let us write Eq. (43); hence Eq. (44) follows. At this stage we can see a similarity with MOND: if β ≪ 1 we recover the usual Newtonian dynamics, and if β > 1 we get back to the varying-G case, exactly as with MOND.
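A sketch of Milgrom's prescription (42) in practice: solving µ(a/a_0) a = GM/r² for a with the common "simple" interpolating function µ(x) = x/(1+x) (one conventional choice; the text above leaves µ generic), then forming the circular speed v = sqrt(a r). The galactic mass is an illustrative round number.

```python
# MOND circular speeds: Newtonian at small r, asymptotically flat at large r.
import numpy as np

G, a0 = 6.674e-11, 1.2e-10   # SI; a0 ~ 1.2e-10 m/s^2 is Milgrom's value
M = 2e41                     # kg, illustrative galactic mass
kpc = 3.086e19

for r_kpc in (5, 20, 80):
    r = r_kpc * kpc
    gN = G * M / r ** 2                          # Newtonian acceleration
    # mu(x) = x/(1+x)  =>  a^2/(a0 + a) = gN  =>  quadratic in a:
    a = 0.5 * (gN + np.sqrt(gN ** 2 + 4.0 * gN * a0))
    print(f"r = {r_kpc:2d} kpc: v = {np.sqrt(a * r) / 1e3:3.0f} km/s")
```

In the deep-MOND regime (gN ≪ a_0) this reduces to a ≈ sqrt(gN a_0), so v⁴ ≈ GMa_0 becomes independent of r, reproducing the flat rotation curves.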
Concluding remarks

The results in [30] have shown that spherically symmetric stationary states can be used as a model for galaxies in the framework of the linearized R^2 gravity. Those states could, in principle, be a partial solution to the Dark Matter Problem. In this paper, an improvement of this work has been discussed. As the star density is a functional of the invariants of the associated Vlasov equation, it has been shown that any of these invariants is in turn a functional of the local energy and the angular momentum. Then, the star density depends only on these two integrals of the Vlasov system. This result represents the so-called "Jeans theorem". In addition, an analogue of the historical Tolman-Oppenheimer-Volkov equation [38,39] for the system considered in this paper has been discussed. We pursued this extension of the previous work in [30] because, on the one hand, the Jeans theorem is important in galaxy dynamics and in the framework of molecular clouds [60]. On the other hand, the historical Tolman-Oppenheimer-Volkov equation constrains the structure of a spherically symmetric body of isotropic material which is in static gravitational equilibrium, as modelled by metric theories of gravity, starting from the general theory of relativity [38,39]. Thus, a viable extended theory of gravity, like the R^2 gravity, must show consistency with these two important issues.

For the sake of completeness, in Section 4 of this paper two additional models which argue that Dark Matter may not be an essential element have been discussed. Indeed, Dark Matter is considered a mysterious and controversial issue. There is a minority of researchers who think that the dynamics of galaxies might not be determined by massive, invisible dark matter halos; see [10,11] and [51-59]. We think that, at the present time, there is no final answer to the Dark Matter issue. In other words, it is undoubtedly true that the Universe exhibits a plethora of mysterious phenomena for which many unanswered questions still exist. Dark Matter is an important part of this intriguing puzzle. Thus, when one works in classical, modern and developing astrophysical and cosmological theories, it is imperative to repeatedly question their capabilities, identify possible shortcomings, and propose corrections and alternative theories for experimental submission. In the procedures and practice of scientific professionals, no such clues, evidence or data may be overlooked.

Finally, we take the chance to stress that important future contributions to a better understanding of the Dark Matter issue could arise from the nascent gravitational-wave astronomy [61]. The first direct detection of gravitational waves by the LIGO Collaboration, the so-called event GW150914 [61], represented a cornerstone for science and for astrophysics in particular. We hope that gravitational-wave astronomy will become an important branch of observational astronomy which will aim to use gravitational waves to collect observational data not only about astrophysical objects such as neutron stars and black holes, but also about the mysterious issues of Dark Matter and Dark Energy. In order to achieve this prestigious goal, a network including interferometers with different orientations is required, and we hope that future advancements in ground-based and space-based projects will reach a sufficiently high sensitivity [61-63].

For the benefit of the reader, we also signal two important works on self-gravitating systems [64] and on the Jeans mass for anisotropic matter [65].

Conflict of Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.
2016-10-13T16:58:15.000Z
2015-10-23T00:00:00.000
{ "year": 2015, "sha1": "a24e1194413a310085027b13d1d596a2303a9620", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ahep/2016/2601741.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a24e1194413a310085027b13d1d596a2303a9620", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
225598774
pes2o/s2orc
v3-fos-license
Assessing the Performance of Sustainable Development Goals of EU Countries: Hard and Soft Data Integration

The European Union (EU) energy policy for sustainable development has been the topic of continuous debate, research, and analysis, which frequently focused on objectives and the evaluation of quantitative and qualitative performance. Different approaches can be used for the assessment of sustainable development goals. The authors of the article conducted a literature review of relevant research papers dated 2016-2020. The most common are quantitative methods based on hard data. Some qualitative studies based on soft data are also available, but rare. This article proposes hybrid Rough Set Data Envelopment Analysis (DEA) and Rough Set Network DEA models that integrate both approaches. The models also allow the inclusion of uncertainty in the underlying data. The article uses hard data of the International Energy Agency (IEA) and the results of the EU survey regarding the influence of the socio-economic environment on CO2 emissions in EU countries. The authors demonstrate that a multifaceted and objective assessment is possible by merging concepts from set theory and operational research.

Introduction

Energy is the main driver of economic growth; however, it is also the leading cause of CO2 emissions [1]. A shift away from high-carbon energy sources resulted from depleting fossil fuels and the growing awareness of the negative impact made by economic development on the environment. There is a global consensus that energy needs to become eco-friendly [2]. Based on the United Nations Agenda 2030 [3], sustainable development is impossible without "access to affordable, reliable, sustainable, and modern energy for all". Many economies have set ambitious goals for their energy sectors, emphasizing safety, sustainability, environmental friendliness, resource efficiency, and low-carbon dependency [4]. The European Union (EU) is especially active in introducing a relevant regulatory framework and policies [5,6]. Policy instruments for smarter and more sustainable electricity in the EU include a wide range of directives, strategies, and roadmaps with targets for a low-carbon economy, energy efficiency, and renewable resources [4]. At the micro level, European enterprises are also encouraged to innovate responsibly [7,8] and to assess newly developed technologies [9], considering the intended and unintended environmental and social impact.

Debates, research efforts, and analyses have been focusing on ways to measure and monitor the progress toward sustainability. Indicators commonly used for those purposes show the indivisibility of the relationship between energy and development. Instead of absolute values, the impact made by economies on the natural environment is measured by the share of renewable energy in total final energy consumption, energy intensity per unit of GDP [3], and CO2 per unit of value added [10]. However, other issues should also be considered. For example, energy sustainability is impossible without sufficient investments in energy efficiency, expressed as a percentage of GDP or as direct investments in infrastructure and technology [3]. It is also important to compare the progress of countries considering many aspects of sustainable energy and development, as well as to find ways to aggregate the final assessment [11-13]. The review of the literature on the performance of sustainable development goals revealed many available approaches.
The methods used range from a comparison of simple indicators, via multivariate statistical analysis, to adaptations of multicriteria decision-making methods. In particular, the latter (the adaptation of multicriteria decision-making methods) is a rapidly growing research area [14]. However, most efforts are limited to the analysis of hard quantitative data that describe the progress of a country, leaving out the context-dependency defined by soft qualitative data. Papers that use qualitative data either present it as an evaluation of policies and initiatives transformed into simple dummy variables [15,16], or consider it indirectly as various ranking scores [11]. In some instances, the data is presented as expert opinions [17] or importance weights determined by experts [18]. Qualitative data often contains more information. However, it is also ambiguous. A dismissal or oversimplification of qualitative information means the rejection of a vast amount of knowledge. The success of a policy to reduce CO2 emissions depends not only on economic prosperity measured by GDP per capita, but also on priorities set, directly and indirectly, by the public. The significance of both factors is evident in the midst of unprecedented challenges faced by societies, such as the expected recession and the global changes resulting from the COVID-19 pandemic.

The suggestion to integrate soft and hard data on the socio-economic environment makes this article original. The proposal is to use different types of data, i.e., social authorization, the readiness for change, the standard of living, and efforts made toward the decoupling of economic growth and carbon emissions. The article offers hybrid models, i.e., Rough Set Data Envelopment Analysis (DEA) and Rough Set Network DEA, to deal with uncertainty in the underlying data used for assessing the performance of sustainable development goals in EU countries.

The article has the following structure. First, it provides a solid review of papers published in 2016-2020 on monitoring and assessing the efforts of EU countries toward energy sustainability. In this part, it also indicates the methods and goals used as the background for the proposed approach. Next, Rough Set theory, DEA, and network DEA models are used to deal with uncertain data in the case of sustainable development assessment. Then, a case study is discussed, and the results of the analysis are presented. The article finishes with conclusions.

Background Literature

Many research projects originated from the importance of sustainable development and free public access to EU data aggregated at a national level. Table 1 presents the results of the literature review conducted in the Scopus database. The analysis targeted relevant articles that focused on the EU and were published in 2016-2020. The summary below indicates the methods and aims of each research paper.
[32]. Method: multidimensional scaling data reduction method and CA. Data: energy indicators, e.g., energy imports, energy use, energy production, capacity, the share of RES, GHG. Aim: to measure the differences between the countries in the Eastern European region in terms of RE and economic development.
Lindberg and Markard (2019) [6]. Method: transition pathway (semi-coherent pattern of major changes) analysis. Data: list of key EU electricity policies and their key industry actors. Aim: to assess the EU electricity policy mix supporting different transition pathways.
Lyeonov, Pimonenko, Bilan, Štreimikienė, and Mentel (2019) [33]. Method: modified OLS. Data: GDP per capita and GHG emissions, RE consumption, green investment. Aim: to analyze the linkages between GDP per capita, GHG and RE in the total final energy consumption, and green investments in the EU.
Malinauskaite, Jouhara, Ahmad, Milani, Montorsi, and Venturelli (2019) [5]. Method: descriptive statistics analysis. Data: energy consumption trends, sources and sectors, energy savings. Aim: to review EU strategies and policies on energy efficiency; to present national case studies for Italy and the UK.
Mikalauskiene, Štreimikis, Mikalauskas, Stankūnienė, and Dapkus (2019) [34]. Method: descriptive statistics analysis. Data: GHG emissions and removals by sector, a set of indicators for the assessment of energy intensity, the structure of consumption, dependency, shares of RES in sectors. Aim: to assess GHG emission trends and climate change mitigation policies in the fuel combustion sector of Lithuania and Bulgaria.
Neofytou, Karakosta, and Gómez (2019) [18]. Method: Promethee II. Data: 12 indicators from 4 dimensions: environmental impacts, e.g., GHG reduction, energy saving; social impact, e.g., employment; economic impacts, e.g., GDP; energy system impacts, e.g., import, intensity. Aim: to assess alternative climate and energy policy scenarios and their socioeconomic, environmental, and energy impacts.
Pach-Gurgul and Ulbrych (2019) [35]. Method: Hellwig's multidimensional comparative analysis. Data: energy consumption, the share of RE in energy consumption. Aim: to empirically verify progress made implementing the provisions of the EU Energy Package by the V4 countries.
Siksnelyte and Zavadskas (2019) [4]. Method: MCDM, TOPSIS. Data: indicators for monitoring the progress (electricity interconnection, market concentration, electricity prices, retail electricity market share of RES in final electricity consumption); indicators for the assessment of sustainability: economic (e.g., prices), environmental (e.g., share of RES, distribution losses), security (import). Aim: to monitor the progress of the electricity sector toward EU objectives; to assess the sector's sustainability.
Siksnelyte, Zavadskas, Bausys, and Streimikiene (2019) [36]. Method: MCDM MULTIMOORA (Multi-Objective Optimization by Ratio Analysis) technique. Data: indicators for monitoring the progress of energy import dependency and energy security (e.g., import, supplier concentration); indicators for monitoring the progress of decarbonization (e.g., energy consumption, GHG emission); national energy targets and their implementation; a set of EISD indicators for the comparative assessment of sustainability: social (e.g., affordability of electricity), economic (e.g., energy use and productivity), and environmental (e.g., GHG emissions). Aim: to present the EU energy policy context; to analyze trends in energy development in eight Baltic Sea Region countries.
Arbolino, Boffardi, Simone, and Ioppolo (2020) [13]. Method: efficiency index based on normalization, weighting, and aggregation, and PCA. Data: indicators from the dimensions: sectoral trends (e.g., GDP, energy intensity per capita, RE production per capita, energy consumption); interaction with the environment (e.g., CO2 emissions); economic and policy aspects (e.g., tax, R&D expenditure). Aim: to propose an approach for achieving increased energy efficiency; to present a test on a sample of 20 Italian provinces.
Based on the review, the interest in the problem of objective assessment is high and constant. Most papers focused on the results of multidirectional efforts toward sustainable development and the reduction of its negative impact on the environment. Some papers investigated the speed of changes taking place in this area, e.g., the transition to renewable energy sources. The regression models could be divided into simple equations [28,33,38] and extended models with multiple variables [15,27]. Some authors created rankings based on several indicators [12] or on sets of indicators that described different dimensions [4,11,13,36,37]. Policy and strategy studies were mostly a yes/no analysis of documents [16] or of the dependencies between them [6]. The portfolio of methods is extensive and still open for suggestions. The main advantage of the DEA method is the comparative analysis of multiple variables. The method has been applied empirically in many areas [39]. DEA-based assessment of sustainable development goals allows for the relativization of performance based on the underlying relationship between the weighted sum of results and the weighted sum of costs [40]. The issue of weighting is solved using objective linear programming. Table 2 presents examples of DEA applications for the assessment of sustainability in EU countries. In this case, the inputs and outputs are given as a reference to previous studies and practice. Hence, DEA-based approaches to the assessment of sustainability in different countries were used for various purposes. In some instances, the focus was placed on efficiency in transforming hard data on labor, capital, and energy into GDP. Other papers considered the associated production of pollutants (mainly greenhouse gas emissions) [45,48,49] and renewables [45-47]. Changing trends of RE amounts [46,48,49] were another popular area of assessment. The cited papers assumed that the underlying data were solid and reliable. Most of them analyzed absolute energy sustainability indicators from statistical databases. The few cases that used qualitative data treated it in the same way: the transformation into numerical variables was a conversion to zero-one values [15,16]. Also, it was used directly as expert opinions [17,18]. Conversion of qualitative data into quantitative data without considering the subjectivity and the resulting uncertainty was associated with the loss of information, shallow conclusions, and misinterpretation. There are many approaches to the incorporation of uncertainty into DEA, e.g., the chance-constrained DEA model [74]. In the context of sustainability assessment, uncertainty in data is mainly modeled using concepts derived from Fuzzy Sets [75,76]. For example, DEA and fuzzy best and worst methods were combined and used to prioritize renewable energy sources [77]. Fuzzy numbers were used to describe the ambiguity of a qualitative index used to assess an RE technical plan [78]. This article proposes a hybrid model that integrates the Rough Set theory and DEA to model the vagueness of data. Rough Sets, although based on somewhat different original assumptions, are treated by some authors as an approach derived from Fuzzy Sets [79,80].
The merging of concepts from the set theory and operational research addresses the need for multifaceted and objective assessment, as it proved to be effective in the case of technology prioritization [81]. Fuzzy sets in a non-empty space are based on the membership function, and using the membership function raises the basic problem of choosing the method of its construction. The Rough Set theory, which does not require special assumptions as to data and probability distribution and is characterized by mathematical simplicity, has found many applications. However, the combination of Rough Sets with the DEA method is relatively rarely used. Methods Data Envelopment Analysis (DEA) is a linear programming technique that allows evaluating the relative efficiency of Decision-Making Units (DMUs). The method was initiated by Charnes, Cooper, and Rhodes (1978) [53] based on a paper by Farrell (1957) [82] and his concept of the best-practice frontier. The DEA method uses the idea of technical efficiency. It evaluates the performance of the units, considering the relationship of outputs and inputs in connection to the value of this relationship in other DMUs covered by the study (Figure 1). In the output-oriented BCC formulation, the efficiency of the evaluated unit DMU_o is obtained from the linear program
max φ_o subject to: Σ_j λ_j x_ij ≤ x_io for i = 1, …, m; Σ_j λ_j y_rj ≥ φ_o y_ro for r = 1, …, s; Σ_j λ_j = 1; λ_j ≥ 0, j = 1, …, n, (1)
where x_ij and y_rj denote the inputs and outputs of DMU_j. The score θ_o = 1/φ_o*, specified in Equation (1), ranges from 0 for the worst performing units to 100% for the best ones. The symbol λ_j represents the weight of DMU_j. In the case of sustainability assessment and comparisons, data (x_ij, y_rj) are often presented and interpreted in the form of ratios rather than in absolute numbers (e.g., GDP per capita, or CO2 per GDP). On the other hand, the subjects of decision are the numerators and denominators of these fractions. The basic standard CCR model (Charnes, Cooper, and Rhodes [20] DEA model) is technically incorrect when data is used in the form of ratios [83-86]. Assuming I_R ⊆ I and O_R ⊆ O represent, respectively, ratio inputs and ratio outputs, I∖I_R and O∖O_R represent absolute (non-relative) inputs and outputs, and, furthermore, x_ij = n_ij/d_ij for each i ∈ I_R and y_rj = u_rj/v_rj for each r ∈ O_R, the BCC model, Equation (1), can be deployed [83], or the nonlinear solution [84,85], Equation (2), in which the convex combination of ratios is replaced by the ratio of convex combinations of numerators and denominators:
(Σ_j λ_j n_ij)/(Σ_j λ_j d_ij) ≤ x_io for i ∈ I_R, (Σ_j λ_j u_rj)/(Σ_j λ_j v_rj) ≥ φ_o y_ro for r ∈ O_R, (2)
with the remaining symbols used as in Equation (1). This paper proposes to integrate DEA and Rough Set methods to deal with data uncertainty [87]. The Rough Set theory, introduced by Pawlak (1982) [88], is a mathematical approach to vagueness and uncertainty. The uncertainty modelling with the concept of rough variables of inputs and outputs [89], x̃_ij = ([a_ij, b_ij], [c_ij, d_ij]) and ỹ_rj = ([e_rj, f_rj], [g_rj, h_rj]), where i = 1, …, m, r = 1, …, s, and j = 1, …, n, is presented in Figure 2. The ranges [a_ij, b_ij] and [e_rj, f_rj] represent lower approximations, and [c_ij, d_ij] and [g_rj, h_rj] upper approximations (boundaries), of the unknown real values of input x_ij and output y_rj, respectively. Using the concept of trust in rough variables and assuming the level α, such that 0.5 ≤ α ≤ 1, α-optimistic and α-pessimistic values can be calculated for each rough variable and each DMU: x_ij^sup(α), x_ij^inf(α), y_rj^sup(α), y_rj^inf(α) [89]. The following formulas, Equations (3)-(6), can be used: the α-optimistic value of a rough variable ξ is ξ^sup(α) = sup{t : Tr{ξ ≥ t} ≥ α}, and the α-pessimistic value is ξ^inf(α) = inf{t : Tr{ξ ≤ t} ≥ α}, both computed in closed form from the trust measure Tr of the rough variable. The Rough Set DEA model results in a range of efficiency indicators for the assumed level of α, [θ_o^min(α), θ_o^max(α)] [81,89], Equations (7) and (8): the upper bound θ_o^max(α) is obtained by solving model (1) with the α-reduced data most favorable to the evaluated DMU_o and least favorable to the remaining DMUs, and the lower bound θ_o^min(α) is obtained with the opposite substitution. Achievement of sustainability goals is the effect of many consecutive stages (subprocesses). Network DEA models are multistage. They allow examining the efficiency of DMUs that have internal network structures and provide measures for the components that make up the DMUs [90].
Presented below is a general two-stage process, where the first stage uses inputs x_ij, i = 1, …, m, to produce intermediate outputs z̃_pj, p = 1, …, q, and then these are used as inputs of the second stage to produce outputs y_rj, r = 1, …, s, where z̃ represents unknown decision variables. Assuming w_1 and w_2 are weights that are specified by users and reflect the user preference [91], the overall efficiency is obtained as the weighted sum of the two stage efficiencies, θ_o = w_1 θ_o^(1) + w_2 θ_o^(2) with w_1 + w_2 = 1 (Equation (9)). In the case of a rough variable given simply as a range [a_j, b_j], where j = 1, …, n, the formulas in Equations (3)-(6) reduce to ξ^sup(α) = α a_j + (1 − α) b_j and ξ^inf(α) = (1 − α) a_j + α b_j; for α = 0.5 both reduce to the mean, (a_j + b_j)/2, and for α = 1.0 the pair spans the original range [a_j, b_j]. The presented models and their combinations can be used to assess the level of development of EU countries from the perspective of sustainability. The main advantages of Rough DEA include the simple interpretation of both the level of trust-the higher the trust level, the wider the efficiency range-and the relations: to obtain the maximum score, the best characteristics of the evaluated DMU are compared with the worst of the others, and vice versa while forming the lower bounds. Discussion of Data and Models In the literature, the conventional data for productivity analysis by DEA are labor and capital as inputs and economic growth measured by GDP as an output. In the case of sustainable development assessment, the volume of energy is considered as the main input of GDP. It is used in the form of, e.g., the share of RE, resource depletion, and GHG emissions (Table 2). Data on sustainable development are processed by many institutions, e.g., the World Bank, the European Environment Agency (EEA), and Eurostat. Differences in collection and aggregation methodologies often imply a certain inconsistency in the published data. In this study, hard data-GDP (in billion US dollars using 2010 exchange rates), population (in millions), and CO2 emissions (in millions of tons of CO2) in 2017-were taken from IEA (2019), CO2 Emissions from Fuel Combustion [72]. The soft data is based on the survey Europeans' Attitudes on Energy Policy [73], which was conducted in 28 EU member states. The survey revealed a high consensus and positive attitude toward the current energy policy. For example, about 60% of respondents totally agreed, and 30% tended to agree, with the statements "it should be the EU's responsibility to encourage more investment in renewable energy" and "it should be the EU's responsibility to encourage more investment in energy research and innovation". The percentages of agreement were correlated between countries; the Pearson's correlation coefficient for the statements was 0.9. The most frequent answer regarding the priorities for the next five years was investments in clean energy technology and their development. Respondents had less enthusiasm about the reduction of the impact made by energy on climate change and the reduction of the overall energy consumption in the EU. The agreement with the statement that "it should be the EU's responsibility to encourage more investment in renewable energy" was chosen to reflect the social authorization and readiness for change. The following relationship was assumed: the socio-economic environment (the social agreement to investments and robust data on a country's standard of living measured by GDP per capita) influences CO2 emissions. A simple DEA model was used for total agreements only (Figure 3a), and Rough DEA for total agreements and tendencies to agree, thus creating a rough variable (Figure 3b).
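To make the computation behind Equation (1) concrete, the following is a minimal sketch (not the authors' code) of the output-oriented BCC envelopment model solved with scipy.optimize.linprog. The function name bcc_output_efficiency and the toy data are illustrative assumptions, not the article's dataset.

```python
# Minimal sketch of the output-oriented BCC DEA model (Equation (1)).
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, o):
    """Efficiency score of DMU `o` under the output-oriented BCC model.

    X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs.
    Returns 1/phi* in (0, 1], where phi* is the optimal output expansion.
    """
    m, n = X.shape
    s, _ = Y.shape
    # Decision vector: [phi, lambda_1, ..., lambda_n]; maximize phi.
    c = np.zeros(n + 1)
    c[0] = -1.0
    # Input constraints: sum_j lambda_j * x_ij <= x_io
    A_in = np.hstack([np.zeros((m, 1)), X])
    b_in = X[:, o]
    # Output constraints: phi * y_ro - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([Y[:, [o]], -Y])
    b_out = np.zeros(s)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    # BCC convexity constraint: sum_j lambda_j = 1
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n))])
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    assert res.success
    return 1.0 / res.x[0]

# Toy example: 2 inputs (e.g., agreement share, GDP per capita),
# 1 output (e.g., GDP per CO2), 4 DMUs.
X = np.array([[60.0, 45.0, 70.0, 55.0],
              [40.0, 25.0, 80.0, 30.0]])
Y = np.array([[5.0, 3.0, 9.0, 4.0]])
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {bcc_output_efficiency(X, Y, o):.3f}")
```

Note that the convexity constraint is what makes this the BCC rather than the CCR model; per the discussion of ratio data above, this is also what allows indicators such as GDP per capita to be used directly.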
The network Rough DEA model was also used to assess the efficiency of investments in low-carbon development. The authors analyzed the transformation of agreement with investments and the standard of living into the value of investments in energy (the first stage) and the value of investments in low-carbon development (the second stage) (Figure 3c). Referring to the equations of the models presented in the Methods section, in the DEA and Rough DEA models, x_1 was taken as the attitude to policy, x_2 as GDP per capita, and y as GDP per CO2 emission. In the network Rough DEA model, x_1 and x_2 represented the attitude to policy and GDP per capita, z the total public energy research, development, and deployment (RD&D) budget, and y GDP per CO2 emission. Equal weights w_1 and w_2 were assumed, with α = 0.6 and α = 0.8. The data on the total public budget (in millions of US dollars) allocated for research, development, and deployment (RD&D) of energy technologies in 2017 was taken from IEA [10]. Not all data was available. For this reason, the network Rough DEA analysis was performed on a smaller data set. Due to the limitation of the DEA model assuming a positive relationship between inputs and outputs, the degree of low-carbon economies was measured by the inverse of the classical CO2 value per GDP. The summaries of the data covered by the analysis are presented in Tables 3-5. Apart from an obvious linear relationship between CO2, population, and GDP, a small but statistically significant relationship was found between GDP per CO2 and the total agreement to encourage more investment in energy research and innovation. This relationship justifies the consideration of the opinion in the evaluation of a low-carbon economy. Also, there is a noticeable correlation between GDP per capita and RD&D per capita, which indicates that higher spending in wealthier states could achieve lower emissions per GDP. The charts in Figures 4 and 5 visualize the relationships between the considered variables of EU states. In both figures, SE positively stands out: high social awareness and an above-average standard of living are reflected in a low-carbon economy. However, LU demonstrates that GDP per capita does not always determine the level of GDP per CO2. FR shows that high GDP per CO2 can be reached with a moderate social agreement. MT demonstrates that the same can be done with a below-average GDP per capita. ES, CY, and SI are examples indicating that eco-awareness can precede the results of actions aimed at low CO2 emissions. At the other end are the countries of Central and Eastern Europe (PL, RO, CZ, SK) with a low agreement to the need for investment, low GDP per capita, and high current CO2 emissions per GDP. Figure 5 depicts the general relationship, i.e., the higher the GDP per capita, the higher the RD&D expenditure. However, it also indicates that the current RD&D budget does not always directly relate to achievements (in the case of FI). Data Analysis The results of the applied DEA models are presented in Tables 6 and 7 and Figures 6 and 7. BCC-O corresponds to model (1), Ratio DEA to (2), Rough BCC DEA to (7,8), and Network Rough BCC DEA to (9) with the rough concept (7,8). DEA model scores based only on the hard data GDP per capita and GDP per CO2, without a qualitative variable, were also computed (Ratio DEA, hard data only). Including additional variables in DEA increases ratings or leaves them unchanged. Assessment with both quantitative and qualitative variables is less strict.
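As an illustration of how the rough variable enters the calculation, the sketch below reduces the interval formed by the "totally agree" and "tend to agree" shares at the trust levels α = 0.6 and α = 0.8 used in the article, following the interval reduction of Equations (3)-(6) stated in the Methods section. The helper name alpha_bounds and the numeric shares are illustrative assumptions, not the Eurobarometer country figures.

```python
# Minimal sketch: interval reduction of a rough variable at trust level alpha.
def alpha_bounds(a, b, alpha):
    """Reduce the interval [a, b] at trust level alpha in [0.5, 1].

    Returns (low, high): both equal the mean for alpha = 0.5,
    and (a, b) for alpha = 1.0, widening linearly in between.
    """
    low = alpha * a + (1 - alpha) * b
    high = (1 - alpha) * a + alpha * b
    return min(low, high), max(low, high)

# Rough "attitude" input: lower end = share who totally agree,
# upper end = share who totally agree or tend to agree.
totally_agree, tend_to_agree = 0.60, 0.30  # illustrative shares
a, b = totally_agree, totally_agree + tend_to_agree

for alpha in (0.6, 0.8):  # the trust levels used in the article
    print(f"alpha = {alpha}: reduced interval = {alpha_bounds(a, b, alpha)}")

# The efficiency range of Equations (7)-(8) then follows by solving the crisp
# BCC model (see the earlier sketch) twice: once with the favorable interval
# ends for the evaluated DMU against the unfavorable ends for all other DMUs
# (upper bound), and once the other way around (lower bound).
```

This also makes the observed behavior of the model easy to see: at α = 0.5 the rough input collapses to a single value, so the efficiency range collapses too, while raising α toward 1.0 widens both the data intervals and the resulting score range.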
It is worth noting that the assessment based only on quantitative data is close to the pessimistic estimation from the Rough BCC DEA model integrating soft and hard data for α = 0.8. Differences between the nonlinear and traditional BCC approaches are insubstantial. The highest scores by models (1) and (2) were reached by LV, with an average agreement but low GDP per capita. The incorporation of inconsistency in opinions broadens the range of scores. The 100% effective countries may be those that achieved nearly 100% results by models (1) or (2), i.e., (i) IE or DK, but also those having average results, namely (ii) BE and FI, or even low results, i.e., (iii) CZ or PL. It is significant that in the case of groups (i) and (ii), the dispersion between optimistic and pessimistic results is large, around 50%. The two-stage network evaluation shows a relatively high efficiency of the first stage-the transformation of GDP per capita and public awareness into the RD&D budget. Results of the second stage indicate the need for much improvement in expenditure efficiency. However, these conclusions should be made with caution, as financial expenses often have an effect in the long term. The provided example of assessing the performance of sustainable development of EU countries indicates that, depending on the assumed interpretation of qualitative data, varying scores can sometimes be obtained. Four groups of EU countries can be distinguished based on the Rough BCC DEA model. The final assessment may consist of an average score with a range of possible results. The width of the assessment range represents its sensitivity to changes in data interpretation. Discussion The contribution of the article is two-fold. First, it presented a new hybrid model based on a combination of Rough Sets and DEA. The model is intended for the integration of hard and soft data in the object ranking task. It enables the inclusion of uncertainty in the underlying data. Second, the article demonstrated the use of the Rough DEA model by assessing EU countries in terms of their progress toward sustainable development objectives. It allows the assessment of physical data and socio-economic data and permits a more multi-faceted and objective evaluation. The paper is also significant because both quantitative and qualitative data were used to appraise the performance of countries in the field of sustainable development. Sustainable development goals address many global challenges, including those related to poverty and social inequalities [3], the exploitation of natural resources, the growing global population and energy needs [92], and the aging society in the EU [93]. The article refers to the monitoring of clean energy goals, considering investments in technology development and modernization at a given level of economic growth and social support. The article is particularly relevant under the present circumstances. The period chosen for the analysis precedes the anticipated global economic crisis, which will hit many countries that were most affected by COVID-19. In their efforts to reduce CO2 emissions, government-level decision-makers will have to focus more on the economic opportunities of individual countries and public opinion. The assessment of the EU's progress toward sustainable development goals was based on hard and soft data integration.
It can be treated as a preliminary stage of quantitative considerations regarding the need to increase ecological awareness, shaping the sense of responsibility and readiness to bear the costs of eco-development. The approach used broadens the perspective and provides more reliable sustainability rankings. The presented studies did not fully explore this broad research topic. In future research, it is worth designing a dedicated survey to measure the degree of social readiness to incur the expenses of the transformation into a low-carbon economy. However, the most important extension of the presented models is the inclusion of a time delay between expenditures, social readiness, and quantitative indicators of sustainability. This task should comprise an adequate aggregation of data according to the schedule of research and investment processes.
Management of Cutaneous Squamous Cell Carcinoma of the Scalp: The Role of Imaging and Therapeutic Approaches Simple Summary Cutaneous squamous cell carcinoma is the second most common subtype of skin cancer. The scalp is one of the most frequently affected locations and is associated with a higher risk of complications, compared to other locations. In addition, it has a characteristic thickness and anatomical structure that may influence both the growth pattern and the treatment of primary cutaneous squamous cell carcinoma; while clinical peripheral margins may be easily achieved during surgery, vertical excision of the tumor is limited by the skull. Despite having a unique anatomy, current guidelines do not offer specific recommendations for cutaneous squamous cell carcinoma of the scalp, which may lead to inconsistent decision-making in multidisciplinary committees when discussing tumors with some risk factors or with close histological margins. Thus, more data are needed to improve its management and assist multidisciplinary teams in shared decisions. Abstract Cutaneous squamous cell carcinoma (cSCC) is the second most common subtype of skin cancer. The scalp is one of the most frequently affected locations and is associated with a higher rate of complications, compared to other locations. In addition, it has a characteristic thickness and anatomical structure that may influence both the growth pattern and the treatment of primary cSCC; while clinical peripheral margins may be easily achieved during surgery, vertical excision of the tumor is limited by the skull. Despite having a unique anatomy, current guidelines do not contemplate specific recommendations for scalp cSCC, which leads to inconsistent decision-making in multidisciplinary committees when discussing tumors with high-risk factors or with close margins. This article provides specific recommendations for the management of patients with scalp cSCC, based on current evidence, as well as highlighting those aspects in which evidence is lacking, pointing out possible future lines of research. Topics addressed include epidemiology, clinical presentation and diagnosis, imaging techniques, surgical and radiation treatments, systemic therapy for advanced cases, and follow-up. The primary focus of this review is on the management of primary cSCC of the scalp with localized disease, although, where relevant, some points about recurrent cSCCs or advanced disease cases are also discussed. Staging System in Cutaneous Squamous Cell Carcinoma Currently, two main staging systems are used to predict the risk of cSCC-related events (recurrence, nodal metastasis, and/or death): the American Joint Committee on Cancer's 8th edition staging system (AJCC8) and the Brigham and Women's Hospital T classification system (BWH) [2,7,9,24-27]. Although the BWH system does not address nodal and metastasis classifications or advanced stage groups, it seems to be more accurate than the AJCC staging system in the classification of localized cSCC [9,10,24,25,27,28].
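As context for the risk-group discussion that follows, the BWH T classification cited above can be summarized as a simple counting rule over four risk factors (tumor diameter ≥2 cm, poor differentiation, perineural invasion ≥0.1 mm, and invasion beyond subcutaneous fat), with bone invasion upgrading directly to T3. The sketch below is a purely illustrative encoding of that rule, not clinical software; the function name is hypothetical.

```python
# Illustrative encoding of the BWH T classification counting rule.
def bwh_t_stage(diameter_cm, poorly_differentiated, pni_ge_0_1mm,
                beyond_subcutaneous_fat, bone_invasion=False):
    if bone_invasion:
        return "T3"            # bone invasion upgrades directly to T3
    n = sum([diameter_cm >= 2.0, poorly_differentiated,
             pni_ge_0_1mm, beyond_subcutaneous_fat])
    if n == 0:
        return "T1"
    if n == 1:
        return "T2a"           # low-risk T2
    if n <= 3:
        return "T2b"           # high-risk
    return "T3"                # high-risk

# Example: a 2.5 cm, well-differentiated tumor with PNI >= 0.1 mm -> "T2b"
print(bwh_t_stage(2.5, False, True, False))
```

T2b and T3 outputs correspond to the high-risk tumors discussed in the following paragraphs.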
cSCC is considered high-risk when staged as T3/T4 with the AJCC8 staging system. However, a limitation of this staging system is that the T4 group is less frequently used, as few tumors meet the inclusion criteria. Thus, most events occur in the T3 group, but a substantial proportion of these behave well, leading to a heterogeneous group unable to detect those T3 tumors associated with poorer outcomes. On the contrary, the BWH staging system stratifies T2 tumors into a low-risk T2a stage and a high-risk T2b stage, providing superior prognostication for patients with localized cSCC [9,24-27]. Karia et al. demonstrated that BWH T2b/T3 tumors account for 70% of nodal metastases and 85% of disease-specific deaths [28]. Therefore, those cSCCs classified as T2b/T3 by BWH T are considered high-risk cSCCs [2,9,24,26-28]. Finally, the Salamanca group proposed an alternative system to classify the AJCC8 T3 stage more accurately and identify subgroups of higher-risk patients (T3b and T3c) [29]. Despite comprising less than 5% of the body surface area, the scalp accounts for 3-20% of all cSCCs [4,5,11-13,31]. cSCC of the scalp develops more frequently in men and elderly people [5,12]. This could be explained by the chronic sun exposure of the scalp and the protective role of the hair, which is scarcer in men due to common male baldness and cultural preferences (i.e., shorter hair for men) [5,12,32]. Some authors have supported the inclusion of the scalp as a high-risk location, as it is associated with worse prognosis [31,33,34], including a risk of local recurrence of 6-10% [32,35,36] and a 7-9% risk of lymph node metastasis [32,33]. Scalp Anatomy The scalp exhibits characteristic anatomical and pathological features that make it unique. It extends from the superior occipital line to approximately 2 cm below the frontal hairline, and has a stratified structure composed of five basic layers-skin (composed of epidermis and dermis), subcutaneous tissue (or dense connective tissue), epicranial aponeurosis or galea aponeurotica, loose areolar tissue (or loose connective tissue), and periosteum (or pericranium)-overlying the skull (Figure 1). In addition, it has closely arranged adnexa (sebaceous glands, hair follicles, and eccrine and apocrine glands) surrounded by a dense network of blood vessels, lymphatics, and nerves that course through the subcutaneous layer. The galea aponeurotica is a firmer layer of the scalp and is continuous with the frontalis muscle anteriorly, the occipitalis muscle posteriorly, and the temporoparietal fascia laterally. The skull beneath the scalp is composed of separate cranial bones-frontal, temporal, parietal, and occipital bones-which are used to virtually divide the scalp into sections [31,34,37].
This unique anatomical structure may influence both the behavior and the treatment of the primary tumors of the scalp. Its different layers, especially the firmer ones-galea and periosteum-may condition the growth pattern of the tumors due to the innate resistance to infiltration they are believed to have, favoring the lateral dissemination of neoplastic cells but preventing vertical growth [31,37]. On the other hand, while clinical peripheral margins may be easily achieved during the surgical procedure, vertical excision is limited by the skull [31,35]. Another peculiarity of the scalp is its lymphatic drainage. The frontal part of the scalp drains into the parotid, submandibular, and deep cervical lymph nodes, while the posterior portion drains into the posterior auricular and occipital lymph nodes [37]. The parotid gland is the most common drainage site for tumors in the anterior scalp [38], and the involved nodes are usually located in the superficial lobe. However, some may be located beyond the facial nerve level, while others are located above the parotid fascia. Despite the peculiarities of this location, current guidelines do not recommend a specific management for cSCCs of the scalp, leading to inconsistent decision-making in some cases.
Clinical Presentation and Diagnosis Invasive cSCCs of the scalp may have different clinical presentations depending on tumor size, differentiation, and skin type, but usually appear as a rapidly growing pink-reddish hyperkeratotic plaque or tumor, with or without a central horn plug, that may ulcerate or associate crusts, or as a chronic, non-healing ulcer. It commonly arises on chronic sun-damaged skin, typically in hairless areas of the scalp of males, associated with the presence of actinic keratoses (over an area of "field cancerization", a marker of risk, although the rate of transformation itself of individual solar keratoses is low). On dermoscopy, in situ cSCCs are characterized by yellowish/white opaque scales and clusters of small dotted and glomerular vessels. When progressing to invasive cSCC, looped/hairpin and/or polymorphous vessels over an erythematous/whitish background may typically be identified, although some glomerular/dotted vessels can still be seen. In addition, scale/keratin and white circles are typically seen in well-differentiated cSCCs, whereas ulceration and blood spots are more common in poorly differentiated tumors (Figures 2 and 3) [2,8,9,31,39,40]. Histological confirmation is the gold standard for the diagnosis, showing an atypical epithelial cell proliferation that extends beyond the epidermis into the underlying dermis and, in some cases, may invade subjacent structures as well (i.e., subcutaneous fat, fascia, muscle, etc.)
(Figures 4 and 5) [7,9,39]. It can be assessed either by an incisional biopsy (i.e., incision or punch), a shave biopsy, or directly by performing an excisional biopsy of the entire lesion, depending on the characteristics of the lesion and the judgment of the physician [10]. Due to the characteristics of the scalp, those techniques that allow obtaining a full-thickness specimen, such as incisional or excisional biopsies rather than shave biopsies, may be preferred to better assess the tumor's real depth and infiltration [39]. When facing a cSCC of the scalp, a thorough physical examination is mandatory for the early detection of clinical risk factors or complications (such as tumor diameter >2 cm, infiltration or adherence of the tumor to underlying structures, neurologic symptoms, satellitosis, etc.), with an emphasis on the regional lymph node basins, parotid and cervical, to rule out node metastasis [7,10,39,41]. The presence of a clinically palpable regional lymph node, as well as an abnormal lymph node detected by imaging during the diagnostic process, should lead to a fine-needle aspiration or core biopsy of the suspicious node and to additional studies for clinical staging and preoperative evaluation [7] (see Section 4. Management of Regional Node Disease and Section 6.1. The Role of Imaging in Diagnosis and Staging). Clinical and Pathological Risk Factors in Scalp cSCCs There are certain clinical characteristics and histological features of a tumor that may increase the risk of developing complications and poor prognosis, such as a diameter >2 cm, the presence of perineural invasion (PNI) of nerves >0.1 mm, a poorly differentiated histological grade, or lymphovascular invasion [2,7,9,21,24,39,42,43]. Immunosuppression is also an important risk factor and may include human immunodeficiency virus (HIV) infection, solid organ transplant, hematopoietic stem cell transplant, or chronic lymphocytic leukemia. Several studies have shown worse outcomes for cSCCs in immunosuppressed patients, with a higher risk of locoregional recurrences, metastatic cSCC, and cSCC-related death [9,12,44-47].
All risk factors described in the literature are summarized in Table 1. These features allow stratifying cSCCs into low-risk and high-risk tumors, and identifying those cSCCs with a more aggressive behavior and a higher risk of recurrence and metastasis that may benefit from a closer follow-up and specific management. Surgical Treatment Surgery remains the treatment of choice for cSCCs, and mainly includes wide local excision with postoperative margin assessment and Mohs micrographic surgery (MMS) [3,7,10,39,51,52]. Generally, low-risk primary cSCCs are treated with conventional surgical excision, whereas high-risk cSCCs would be candidates for MMS [7,10,39,51], though this technique is not evenly available. The different surgical treatment modalities are described below and also summarized in Table 2. Surgery of the Scalp: Some Initial Considerations When beginning any scalp procedure, it is important to properly prepare the surgical field. Shaving the hair of the affected area up to at least 1 cm from the suture margin-if applicable-and using a towel wrap, or cutting or pinning down perilesional hair with clips or tape, may prevent the introduction of foreign bodies into the wound [34,60]. It must be considered that the scalp is supplied by a rich network of anastomosing arteries within the subcutaneous layer, which are fixed to fibrous septa and often bleed profusely during surgery (Figure 1). In this regard, allowing at least 10 min between the injection of lidocaine with epinephrine and the first incision may provide better visualization and hemostasis during the surgery, facilitating the procedure and the identification of tissue planes and vulnerable structures [34]. In addition, the use of tumescent local anesthesia may also facilitate the procedure, expanding the tissue and allowing easier plane dissection with a lower risk of bleeding [34,61]. Later, during the intervention, affected blood vessels may be manually compressed, located, and either ligated with suture or sealed with electrocautery. The anesthetic infiltration should be subcutaneous or intradermal, since deeper injections below the galea do not anesthetize the scalp [60]. It should be noted that collaboration may be needed when surgically approaching large scalp tumors. Particularly when tumors invade bone, or in sentinel lymph node biopsy, cooperation between dermatologists and plastic surgeons, neurosurgeons, and/or head and neck surgeons may be essential [34]. Curettage and Electrodesiccation Curettage and electrodesiccation consists of scraping away tissue with a curette down to a firm layer of normal dermis and denaturing the area by electrodesiccation, with up to three cycles, in three different directions, performed in a session. It is a fast and cost-effective technique used in daily practice for the treatment of low-risk cSCCs, but no randomized controlled trials or prospective studies have compared this technique with other treatment modalities. Small studies have described good responses in selected lesions that are superficial, with a diameter smaller than 2 cm, and in low-risk locations [7,10,39,51]. However, the potential follicular extension of the tumor in areas that harbor terminal hair may be associated with poorer results when using therapeutic modalities that do not assess histological margins [7,10].
Excision with Postoperative Margin Assessment The most common therapeutic option for cSCCs is conventional surgery with wide local excision (including a margin of clinically normal-appearing skin around the tumor and the surrounding erythema), followed by postoperative histological evaluation of margins with the "bread-loaf" histopathologic sectioning technique. Although no randomized controlled trials comparing different excision margins for cSCCs have been performed, the current evidence is based on retrospective studies and some systematic reviews, with generally good prognosis [7,10,39,51]. From the current literature, an overall local recurrence risk of 3-16% (with most studies reporting risks ≤6%) and a regional metastasis risk of 1-4% after conventional excision can be deduced [3,11,15,17,18,20,21,23,32,62]. This nodal metastasis risk increases to 5-14% in head and neck cSCCs [2,11,15], and to 7-9% on the scalp [32,33]. Achieving clear surgical margins is the most important treatment consideration for patients with cSCCs amenable to surgery [51]. Some works, mainly retrospective and based on cSCCs removed with MMS, analyzed the subclinical extension of the tumor and the number of stages needed, in order to estimate the width of clinical margin required to achieve histologically clear margins in a standard surgery of cSCC [53,63,64]. Based on these findings, reference guidelines for cSCC management present similar recommendations about the clinical peripheral margins to be taken during the conventional excision of primary cSCCs, to ensure complete removal in ≥95% of cases: between 4 and 10 mm, depending on the risk factors (especially tumor diameter and location) [7,13,39,51,65]. The scalp has been proposed as a high-risk location [7,33,35]. As mentioned before, excision with comprehensive intraoperative margin control is the preferred surgical technique for high-risk cSCCs [7,10]. However, this technique is not available in many treating centers. Thus, many cSCCs of the scalp are commonly removed by standard surgery. NCCN guidelines state that cSCCs in high-risk locations (including the scalp) with a diameter <1 cm, 1-1.9 cm, and ≥2 cm should be removed with clinical margins of at least 4 mm, 6 mm, and 9 mm, respectively [7]. However, larger excision margins should be considered when other risk factors are present (i.e., poor differentiation, PNI, or invasion of subcutaneous tissue). Nonetheless, the European consensus group suggests at least a minimum 5 mm clinical safety margin for any cSCC [51], and the British Association of Dermatologists (BAD) guidelines recommend ≥6 mm for high-risk and ≥10 mm for very-high-risk cSCCs [39]. Thus, in our opinion, any cSCC of the scalp should be removed with a minimum peripheral margin of 5-6 mm, extending to ≥10 mm for those cases with other high-risk factors. Another important issue when discussing the standard excision of cSCC of the scalp is the deep surgical plane that should be reached during the procedure. There is no consensus in current cSCC guidelines, and different planes have been proposed without enough evidence to support them: the hypodermis (assuming that the deeper layers are not macroscopically affected), the inclusion of a thin layer of subcutaneous fat, or reaching "the next clean surgical plane" [7,10,39,51,52,63,65].
The BAD, European, and Scottish guidelines recommend including the galea aponeurotica in the excision, due to its firm consistency and probable innate resistance to infiltration [39,51,52]. Moreover, deeper surgical planes beyond the galea are associated with lower rates of close/affected margins [35]. Another consideration to keep in mind during scalp cSCC surgery is that the majority of the scalp mobility relies on the loose areolar tissue layer, while most of the nerves and vasculature lie above it. Thus, this layer can be easily dissected and would also be a relatively safe plane during dissection [37]. Considering all the above, it seems wise to recommend excision below the galea aponeurotica, both to reduce margin positivity and the risk of complications. Finally, taking a deeper margin should be considered if there is clinical concern of an incomplete resection during the surgery [39]. When there is suspicion or evidence of tumor invasion of bone-clinically seen as subtle pitting of the bone or suggested by imaging studies (see Section 6. Imaging Approach)-the outer table of the skull should also be removed [34,39]. Histological Margins Little is known regarding histological margins after conventional excision in cSCC [36,66,67]. Although recommendations for incomplete excisions seem to be clearer, there are no guides for the management of those cSCCs completely removed but with close histological margins. This is especially relevant on the scalp, where deep removal of the primary tumor is limited by the skull. In fact, three retrospective studies which analyzed cSCCs removed by standard surgery found that 4-11.9% of cases were not completely excised. One of the most common locations for these cases was the scalp (16-38%), mostly related to the deep margin [11,68-70]. Globally, two main scenarios, requiring different approaches, can be faced when evaluating histological margins after the excision of a cSCC by conventional surgery, regardless of the location. 1. Peripheral or deep positive margins. Local and regional recurrences, as well as pathological positivity after re-excisions, are higher in this group of patients [36,68,71]. Thus, most guidelines recommend, when possible, re-excision as the treatment option of choice, commonly yielding clean margins [16,35,51]. If available, MMS should be the treatment of choice, rather than re-excision with postoperative margin assessment, to ensure free histological margins and avoid complications, especially in tumors with high-risk factors [34,39]. When surgery is not feasible, other treatments, such as radiotherapy (RT), might be considered [34,39]. 2. Free but close histological margins (by consensus, those margins between 0.1 and 0.9 mm, according to the Royal College of Pathologists and the BAD [39,72]). While there is scarce evidence in the literature regarding the conduct in this scenario [7,10,39,51,52,65], the British and the Scottish guidelines recommend discussing those cSCCs with a histological margin <1 mm in a multidisciplinary tumor board to assess the need for further adjuvant treatments [39,52,72]. Thus, they consider observation in those pT1 cSCCs with <1 mm histological margins in immunocompetent patients [39]. Regarding the scalp, only one retrospective study, by Jenkins et al., compared the differences in local and regional recurrence rates of cSCCs with a clear but close deep margin (0.1-1.9 mm) to cSCCs with a thicker deep margin (2-6 mm and >6 mm). They observed a greater number of local recurrences in the first group (8% vs.
an overall rate of 3%) [32]. Although the current evidence is scarce, careful consideration should be given to those cSCCs of the scalp with clear but close peripheral or deep margins, and re-excision or further treatments should be considered if other high-risk features are present [39]. Nevertheless, further studies assessing the role of close histological margins in relapse remain indispensable. Mohs Micrographic Surgery or Excision with Complete Circumferential Peripheral and Deep Margin Assessment MMS is a technique that has proven to be effective for the removal of skin tumors located in compromised areas in which saving tissue is essential and/or when negative margins must be ensured [51,73]. It is also a technique of choice for tumor subtypes that are associated with an increased risk of recurrence. In this sense, cSCCs on the scalp fulfill these criteria [51]. The technique usually involves the study of frozen tissue sections. The histopathological study is carried out in tangential sections that allow the assessment of 100% of the tumor margins, compared to the conventional vertical "bread-loaf" sections in which large areas of "blind" margins may remain unexplored under the microscope [73]. MMS for cSCCs has some peculiarities. These are tumors that may be quite large and difficult to process in frozen sections. On the other hand, tumor depth is a relevant prognostic feature in cSCCs that may be difficult to register due to technical singularities of the MMS (a debulking specimen is separated from the true margins) [7]. Furthermore, undifferentiated and small-nest infiltrative tumors may be difficult to detect in frozen sections. In this sense, some authors prefer a modified (slow, 3D histology) MMS technique with paraffin sections that also allows routine immunohistochemical staining [51]. MMS has been demonstrated to be an effective technique for the treatment of high-risk cSCCs in large cohort studies, both retrospective and prospective [51,53,74-79], and is recommended by the European and American guidelines in these subsets of patients [7,51,80]. In a large multicenter prospective case series with 1263 patients treated with MMS, in which almost all tumors were located in the head and neck area (96.5%), a risk of recurrence of 3.9% was observed after a 5-year follow-up period. The risk of recurrence was lower in patients with primary cSCC (2.6%) than in those with recurrent tumors (5.9%) [53]. Another prospective cohort study by Tschetter et al., including 745 tumors, showed 5-year local recurrence-free survival, nodal metastasis-free survival, and disease-specific survival of 99.3%, 99.2%, and 99.4%, respectively [76]. Finally, a recent Spanish prospective study including 371 cSCCs reported recurrence rates of 4.5 per 100 person-years [79]. Although recurrence following MMS is low, the risk can be increased by several factors, including unfinished surgery, the number of stages needed, immunosuppression [79], invasion beyond the subcutaneous fat, poor histological differentiation [74], and PNI [81]. Although there are no randomized prospective clinical trials comparing MMS with conventional surgery, several retrospective studies have shown significantly lower recurrence rates for MMS [51,54,55]. Interestingly, in the largest comparative retrospective study, by Van Lee et al., which included 672 head and neck cSCCs (approximately 20% on the scalp), the overall recurrence rate was 8% after standard excision vs.
3% for MMS [54]. Recent studies have also demonstrated that MMS is more cost-effective than wide local excision and would be particularly indicated for high-risk cSCCs [56]. Studies reporting MMS for scalp cSCCs have reported cure rates and recurrence rates equivalent to those reported in other areas [57,58]. The first stage of MMS of scalp cSCCs should include the subcutaneous tissue and run into the subgaleal plane [59]. Reconstructive Approaches on the Scalp The preferred options on the scalp, as a high-risk location, are those closures that do not rotate tissue around and/or alter the anatomy of the surgical bed, where "residual cells" of the tumor could remain. Primary closure (linear repair), skin grafting (split- or full-thickness), the use of dermal matrices, and secondary intention healing with granulation are appropriate reconstructive approaches, especially if MMS is not available [10,34]. Careful consideration might be given to skin grafting when the periosteum is removed and the bone is exposed, or if previous RT has been performed over the area [34,37,82]. Nonetheless, reconstruction of medium-sized (2-5 cm width) and larger defects on the scalp after cSCC extirpation can sometimes be challenging due to the tightness of the surrounding soft tissues and a lack of soft tissue reservoir [34]. In those cases with a large defect, or in which the periosteum or the outer table of the skull has been removed, tissue rearrangement with flaps might be required. These large closures should be delayed until negative histologic margins are confirmed [7,10,39,53]. Particularly on the scalp, subcutaneous stitches are not usually used, in order to avoid damage to the hair follicles and prevent local alopecia. On the contrary, the use of staples might be of interest, as they are faster to apply, more hygienic, and also allow daily washing, essential to reducing local inflammation and infection [60]. Management of Locoregional Disease 4.1. Management of Patients with Satellitosis or In-Transit Metastases Satellitosis or in-transit metastases (S-ITM) are nonepidermal lesions originating between the primary tumor and the first tumor-draining lymph nodes, considering as satellitosis those that occur within 2 cm of the primary tumor. They are thought to occur as a consequence of intralymphatic or possibly angiotropic tumor spread [83], and are more common in immunosuppressed patients [83,84]. S-ITM have proved to be an independent poor-outcome risk factor in cSCCs [83,85-87], with outcomes comparable to node positivity in terms of recurrence and disease-specific survival [86]. A recent study by Marti-Marti et al. demonstrated that the size (≥20 mm) and the number of lesions (>5) of S-ITM are two independent prognostic factors for relapse, and the number of lesions for specific death [83]. The head and neck region is the most common location of S-ITM [83,84], occurring most commonly on the scalp (Figure 6) [84].
Although uncommon, S-ITM represent an authentic management challenge, as they are not included in current cSCC staging systems and guidelines [7,39,51]. According to the literature, S-ITM are usually excised by surgery with or without adjuvant RT, and less frequently treated with RT as monotherapy or with systemic therapy [83,84]. Surgery followed by adjuvant RT seems to obtain better outcomes [83], although studies comparing treatment approaches are lacking. If resection is not possible, systemic therapy and/or RT, if feasible, could be considered. Management of Patients with Clinically Detected Lymph Nodes Lymph node dissection is the standard of care for patients with regional lymph node metastases detected on physical examination or following imaging tests. The extent of lymph node dissection is a controversial issue. There is a growing trend to offer more conservative surgeries that provide less morbidity and better functional results without leading to worse survival [88,89]. In case of parotid lymph node involvement, superficial parotidectomy and ipsilateral cervical dissection are usually recommended, since studies have proved that involvement of the parotid gland correlates with a higher incidence of occult metastases in the neck lymph nodes [90]. However, these decisions must be made in the context of a multidisciplinary tumor board, considering tumor aggressiveness, the patient's status, and surgical conditions. Adjuvant RT following lymph node dissection should be recommended if extracapsular extension is observed, more than two nodes are affected, or one node is larger than 3 cm (AJCC 8th N2 or N3) [91,92]. In patients with incompletely excised lymph node metastases (LNM) or those who are inoperable, RT and/or systemic treatment should be considered. However, the management of macroscopic lymph node disease could soon be redefined if the use of neoadjuvant treatment is expanded [93,94]. Management of Patients without Clinically Detected Lymph Nodes In patients without evidence of lymph node dissemination, there is no evidence of the usefulness of elective lymphadenectomy, so this approach is not recommended.
Sentinel lymph node biopsy (SLNB) is used in various skin cancers for the early detection of LNM before they can be clinically detected. In cSCC, occult nodal metastases occur in 8-12.3% of cases, according to the systematic review by Tejera [95] and the meta-analysis by Schmitt [96]. Both groups proved that the risk of sentinel node involvement increases with the tumor stage [97]. Therefore, risk factors for SLNB positivity include tumor diameter, tumor thickness, lymphovascular invasion, PNI, or the simultaneous presence of multiple risk factors. According to Tejera and Schmitt, the false negative rate of SLNB in cSCC is 3.9-4.6% [95,96].

The role of SLNB in high-risk cSCC is currently under debate. Evidence of its usefulness comes from several studies and systematic reviews, but no clinical trials have assessed its value. cSCC is a cancer prone to orderly dissemination, and most patients with a poor prognosis develop LNM. Thus, identifying those patients with microscopic nodal disease may impact its management.

Few studies have analyzed the association between SLNB status and survival, although it seems that survival would be worse in patients with a positive SLNB [98,99]. It is not clear whether SLNB followed by completion lymph node dissection improves survival [98]. Studies that have evaluated the ideal time to perform SLNB have not shown differences in the rate of SLNB detection between performing SLNB at the same time as tumor excision or delaying it [100].

Nowadays, clinical practice guidelines do not recommend SLNB for routine use in cSCC. However, its adoption is encouraged in the setting of clinical trials in high-risk patients.

Radiation Therapy

Radiotherapy (RT) is a useful therapeutic tool to treat scalp cSCC. Several modalities/devices are used, such as external RT and brachytherapy. The relatively flat surface of the scalp makes external beam RT suitable for large lesions in this area. RT can be used as primary (radical) therapy, as adjuvant therapy, or for palliation. Although no prospective randomized controlled trials comparing therapeutic modalities have been performed, surgery is usually recommended to treat most cSCCs. Radical RT is usually restricted to patients who reject surgery, those with severe co-morbidities or frailty, or patients with unresectable tumors. The response rates of RT are high, especially for small, superficial lesions in immunocompetent patients. Moreover, the functional and cosmetic results of RT are usually excellent. However, the 5-year local recurrence rates are above 6% and 10% for primary and recurrent tumors, respectively [15,101]. Poorer local control and higher recurrence rates are observed in higher T-stages, with 5-year local control rates of 50-60% for T4 cSCC of the head and neck [15,102]. When bone is involved, local control drops to 40% [103]. However, RT is a convenient option in elderly frail patients with bone invasion in whom a complex neurosurgery would be necessary for radical control of the disease. In this sense, dramatic responses of giant cSCC of the scalp with extensive bone destruction have been reported [104].

Re-excision should be encouraged in patients with cSCC with positive residual microscopic margins (R1). If further surgery is contraindicated, RT is associated with lower recurrence rates than a wait-and-see approach [105].
Adjuvant RT after complete surgical resection of scalp cSCC (R0) can be useful in some patients. Similar to other head and neck cSCCs, RT is recommended after elective node dissection in patients with metastatic regional neck dissemination, especially in those with multiple nodal metastases, nodes larger than 3 cm, or nodes smaller than 3 cm but showing extracapsular extension (N2). Adjuvant RT after parotidectomy is also indicated in patients with metastatic intraparotid lymph nodes. Prophylactic cervical lymph node dissection or cervical RT is recommended in patients with intraparotid metastases, and both treatments have shown similar outcomes [106].

Adjuvant RT to the tumor bed can also be considered in patients with large-caliber or named nerve involvement, and in those with extensive microscopic invasion or invasion of nerves larger than 0.1 mm [107,108].

The utility of adjuvant RT in patients with negative surgical margins but poor prognostic features is controversial [109,110]. A recent study by Ruiz et al. has shown that RT halves the risk of local and locoregional recurrences in high T-stage cSCCs (BWH T2b or T3) [111]. The American Society for Radiation Oncology and the Head and Neck Cancer International Group (HNCIG) have recently published guidelines for definitive and postoperative RT for cSCC [112,113].

Some final special considerations have to be made for RT of scalp cSCC. RT induces alopecia in hair-bearing individuals. Moreover, RT may infrequently induce bone exposure in the scalp in elderly patients with skin atrophy.

Immunotherapy with Checkpoint Inhibitors

Treatment of Advanced cSCC

cSCCs harbor a high mutational burden due to UV exposure, which makes them good targets for immunotherapy [114]. Recently published phase II trials support the efficacy and safety of immunotherapy (cemiplimab and pembrolizumab) in patients with locally advanced and metastatic cSCC [51,115-117].

Cemiplimab, an anti-PD-1 antibody, was approved by the Food and Drug Administration (FDA) in 2018 and by the European Medicines Agency (EMA) in 2019 for metastatic and locally advanced cSCCs that are not amenable to curative surgery or radiation therapy [118]. Pembrolizumab, another anti-PD-1 antibody, has also been approved for metastatic and locally advanced cSCC by the FDA in 2020, but not by the EMA.

Neoadjuvant Therapy

A recent phase 2 multicenter study of 79 patients with resectable stage II-IV cSCC explored the utility of neoadjuvant cemiplimab (350 mg every 3 weeks, for up to four doses) before curative-intent surgery. In 40 patients (51%), a complete pathologic response was observed [93]. However, further studies with larger samples are needed. NCCN guidelines currently consider neoadjuvant therapy after multidisciplinary discussion for selected cases [7].

EGFR Inhibitors

EGFR inhibitors block the intracellular MAPK pathway. Of all available targeted EGFR inhibitors, the monoclonal antibodies that target the extracellular domain have mainly been used for advanced cSCC (cetuximab and, less frequently, panitumumab) [119-121]. Nevertheless, no clear clinical benefit has been demonstrated, and they are not currently approved by either the FDA or the EMA.

However, sustained remissions are rare, and traditional chemotherapy is poorly tolerated by the frail elderly patients who comprise the majority of those with advanced cSCC [122].

Imaging Approach
6.1. The Role of Imaging in Diagnosis and Staging

There is currently limited evidence regarding the need to perform imaging tests in cSCC, both in terms of indication and the most appropriate technique, and, to the best of our knowledge, there are no studies specifically evaluating the performance of imaging in the scalp.

The risk of LNM in cSCC is relatively low [21], and indiscriminate imaging could lead to a considerable number of false positives and unnecessary additional procedures [123,124]. However, it is well established that tumors with higher T scores in staging systems have a higher risk of LNM [28], and there are studies suggesting that early detection of LNM when fewer lymph nodes are affected [125], or when nodes are smaller and there is no extracapsular invasion [126], may lead to a better prognosis. Some studies have shown a trend towards larger lymph nodes in patients with clinically detected LNM compared to those routinely screened with ultrasound [127].

Both the latest European and NCCN clinical practice guidelines agree that a complete anamnesis and detailed physical examination should be performed in all patients at the time of diagnosis, which may be sufficient in patients with low-risk cSCC [7,9].

There is evidence, nevertheless, that patients with high-risk cSCC and a normal physical examination may also benefit from imaging to detect subclinical metastatic disease. In three retrospective cohort studies at Brigham and Women's Hospital using mainly computed tomography (CT) scans in patients with high-risk cSCC (≥T2b-BWH), imaging was associated with a change in therapeutic approach in up to one-third of patients [128-130], and with a decrease in the number of poor outcomes [128]. Furthermore, another retrospective cohort study demonstrated a higher sensitivity of ultrasound than physical examination alone for the detection of lymph node metastases, although at the expense of a higher false positive rate [124].

In the light of this evidence, European guidelines recommend imaging tests at diagnosis in patients without palpable lymphadenopathy on examination who present with high-risk cSCC, defined by a BWH T score equal to or greater than T2b or the presence of any of the risk factors proposed by the EADO [9].

Regarding the technique for nodal staging, guidelines preferably recommend ultrasound or contrast-enhanced CT [9]. There are no specific recommendations for high-risk cSCC of the scalp, but in our view contrast-enhanced CT is probably a more efficient technique than ultrasound, as it allows assessment not only of lymph node involvement but also of the depth of the tumor, which in some cases may be greater than clinically expected, as observed in studies in which this technique was performed perioperatively [130]. In tumors with features that raise suspicion of bony invasion of the calvarium (a firmly adherent tumor or pain on palpation of the bony margin), it is indispensable. Magnetic resonance imaging (MRI), although generally less available, is superior to CT in assessing deep invasion beyond the outer table of the skull, PNI, and parotid or central nervous system involvement [131-133].
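As a compact summary of the indication logic just described, the following sketch encodes one possible reading of it (labels, thresholds, and function name are our own simplification of the guideline text, not an implementation of any guideline):

```python
def baseline_nodal_imaging(bwh_stage: str, eado_risk_factor: bool,
                           suspect_bone_invasion: bool) -> str:
    """One possible reading of the baseline imaging indications summarized above."""
    if not (bwh_stage in ("T2b", "T3") or eado_risk_factor):
        return "physical examination only"            # low-risk cSCC
    if suspect_bone_invasion:
        return "contrast-enhanced CT (consider MRI)"  # calvarium must be assessed
    return "contrast-enhanced CT or ultrasound"       # guideline-preferred options

print(baseline_nodal_imaging("T2b", False, False))
```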
However, a substantial portion of the population affected by high-risk cSCC of the scalp comprises elderly individuals, frequently presenting with concomitant renal failure, or kidney-transplant recipients. These conditions often constrain the use of contrast-enhanced CT, thereby markedly diminishing the sensitivity of the imaging technique. In such instances, an alternative approach to evaluate lymph node involvement, such as ultrasound, may prove more advantageous.

Finally, in patients with lymph node involvement, imaging tests such as body CT or positron emission tomography should also be performed to rule out distant metastases [9].

Follow-Up

The use of imaging tests for the follow-up of patients with cSCC is recommended in three scenarios: high-risk cSCC, locally advanced disease, and metastatic disease. In contrast, for patients with low-risk cSCC, the guidelines propose annual clinical follow-up, at least for the first 2 years, as this is the period with the highest risk of local and distant recurrence [7,9,134].

In high-risk tumors, European clinical practice guidelines propose preferably performing lymph node ultrasound, which should include the cervical and parotid lymph node territories, every 3-6 months for the first two years [9]. From our point of view, ultrasound would be a good option, due to its excellent cost-effectiveness and the absence of exposure to ionizing radiation, for the follow-up of those cases of high-risk cSCC of the scalp in which it is not necessary to assess the status of the calvarium, i.e., patients with completely excised tumors and no signs of local recurrence. As we have emphasized previously, the non-utilization of contrast in a population frequently burdened with associated renal insufficiency is another advantage that supports the use of ultrasound instead of contrast CT for monitoring.

Despite these recommendations, it is essential to consider that some studies have demonstrated that with a screening ultrasound investigation at baseline only a few LNM are detected, while the majority of metastases are identified through clinical examination, typically self-examination, during follow-up [127]. This limitation reduces the utility of the technique and raises questions about the appropriate timing of routine ultrasound. Therefore, prospective studies assessing the true benefit of routine radiological examinations compared to thorough self-examination or clinical follow-up remain indispensable.

In patients with locally advanced and metastatic disease, a frequency of 3 to 6 months is also proposed, and the choice of imaging technique is left to the discretion of the clinician [9]. In this type of patient, either because of the need to assess the status of the calvarium or of other organs at the same time, we consider it more convenient to perform a CT.

Proposed Algorithm for the Initial Management of Primary cSCC of the Scalp

Based on current evidence, an algorithm for the initial management of patients with primary cSCC of the scalp has been proposed (Figure 7). A comprehensive treatment approach to scalp cSCC should include both correct surgical excision and appropriate closure (Figure 8).

Proposed Algorithm for the Management of Histological Margins and Other Histological Features

Based on current evidence, an algorithm for the management of scalp cSCC according to histological features, and especially to histological margins, has been proposed (Figure 9).
Proposed Algorithm for the Follow-Up of cSCC of the Scalp

Based on current evidence, an algorithm for the proper follow-up of patients with cSCC of the scalp has been proposed (Figure 10).

Conclusions and Future Directions

With increasingly longer life expectancies, the health burden associated with cSCC is likely to rise still further. Although the understanding of cSCC has grown in recent years, much research remains to be conducted. The scalp has a characteristic thickness and anatomical structure that may influence both the behavior and the treatment of cSCC, making specific management mandatory. Current guidelines do not contemplate specific recommendations for cSCC of the scalp, and more data are needed to improve its management and elucidate other risk factors that might better predict prognosis and assist in shared decisions of multidisciplinary teams.

Future research directions in cSCC of the scalp should include evaluating the role of histological margins in recurrences, comparing different approaches (surgery, RT, immunotherapy, others) for involved histological margins, studying other clinicopathological or molecular factors that might predict poor outcomes, evaluating the role of sentinel lymph node biopsy in the staging of very high-risk cSCC, assessing the true benefit of routine radiological examinations and the best imaging technique, and comparing adjuvant RT approaches after surgical excision of high-risk cSCC.

Figure 1. Graphical representation of the anatomical structure of the scalp, with its five layers: epidermis + dermis, subcutaneous tissue, galea aponeurotica, loose areolar tissue, and periosteum. Blood vessels, lymphatics, and nerves run through the subcutaneous layer (small color circles), adjacent to fibrous tracts.
Figure 2. Clinical appearance of different cutaneous squamous cell carcinomas of the scalp. (a) Well-differentiated scalp cSCC. A rounded pink and hyperkeratotic tumor, with well-defined borders, in the parietal region of the scalp. (b) Moderately differentiated scalp cSCC. Hyperkeratotic erythematous plaque in the right parietal region, with poorly defined borders, and small areas of ulceration. Notice the actinic damage surrounding the lesion. (c) Poorly differentiated scalp cSCC. Erythematous and fleshy tumor, with a diameter greater than 2 cm, in the frontal region of an elderly patient.

Figure 3. Dermoscopy of different cutaneous squamous cell carcinomas. (a) Keratotic tumor, with yellowish-whitish keratosis in the center, with some hemorrhagic areas, and a pink peripheral rim with hairpin and looped vessels. (b,c) Hyperkeratotic lesions with poorly defined edges, with an erythematous background with yellowish scales and keratosis, and small erosions. A few dotted vessels can be seen in the center of image (b).

Figure 4. Histological image of a cutaneous squamous cell carcinoma of the scalp, showing an ulcerated proliferation of atypical squamous cells that infiltrates the dermis, subcutaneous tissue, and galea aponeurotica (H&E, 0.6×).

Figure 5. Perineural invasion of a 0.05 mm nerve in a cutaneous squamous cell carcinoma of the scalp (H&E, 63×).
Figure 6. (a,b) Clinical images of two cSCC satellitosis in the scalp, in an immunosuppressed patient, appearing as erythematous nodules next to the scar of a previously excised high-risk cSCC.

Figure 8. Example of the surgical approach performed in a cutaneous squamous cell carcinoma, <2 cm in diameter, in the left parietal region of an immunocompetent patient, using conventional surgery. (a) Peripheral clinical margins of 5 mm were taken. (b) As the histological margins were evaluated postoperatively, closure with a partial skin graft was chosen.

Figure 9. Proposed treatment algorithm for the management of histological margins and other histological features. RT: radiotherapy; cSCC: cutaneous squamous cell carcinoma; W&S: wait-and-see/close surveillance.

Figure 10. Proposed algorithm for the follow-up of cSCC of the scalp. cSCC: cutaneous squamous cell carcinoma; RLN: regional lymph node; CT: computed tomography. * However, the choice of imaging technique is left to the discretion of the clinician.

Table 1. Clinical and histological risk factors in scalp cSCCs.

Table 2. Surgical treatment in scalp cSCC.
2024-02-08T16:11:06.010Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "08b60ffcc9e312c74bd398dd93b22f372b69beb9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/16/3/664/pdf?version=1707033449", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "248a917fa49c828294d026420970075c4fd0b4c5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258841847
pes2o/s2orc
v3-fos-license
Layer-by-layer disentangling two-dimensional topological quantum codes

While local unitary transformations are used for identifying quantum states which are in the same topological class, non-local unitary transformations are also important for studying transitions between different topological classes. In particular, it is an important task to find suitable non-local transformations that systematically sweep different topological classes. Here, regarding the role of dimension in topological classes, we introduce partially local unitary transformations, namely Greenberger-Horne-Zeilinger (GHZ) disentanglers, which reduce the dimension of an initial topological model by a layer-by-layer disentangling mechanism. We apply such disentanglers to two-dimensional (2D) topological quantum codes and show that they are converted to many copies of Kitaev's ladders. It implies that the GHZ disentangler causes a transition from an intrinsic topological phase to a symmetry-protected topological phase. Then, we show that while Kitaev's ladders are building blocks of both the color code and the toric code, there are different patterns of entangling ladders in the 2D color code and toric code. It shows that different topological features of these topological codes are reflected in different patterns of entangling ladders. In this regard, we propose that the layer-by-layer disentangling mechanism can be used as a systematic method for the classification of topological orders, based on finding different patterns of the long-range entanglement in topological lattice models.

I. INTRODUCTION

Studying equivalence classes under local unitary transformations [1-4] is an important approach to the classification of quantum phases of matter, which is one of the most important problems in condensed matter physics [5,6]. Applying such transformations as local disentanglers to lattice models is an important step of entanglement renormalization, which is an important tool for studying critical quantum phases as well as topological quantum phases [7-10]. In particular, because of the non-local order in topological quantum systems [11-17], quantum phases in different topological classes cannot be transformed into each other by local operations. It implies that different equivalence classes under local unitary transformations correspond to different topological classes [18].

Local unitary transformations are also important in understanding the role of dimension in the classification of topological phases. For example, one-dimensional quantum states are topologically trivial because a local unitary transformation converts them to product states, like scissors cutting a string. Therefore, there is no intrinsic topological order in one-dimensional lattice models [42]. However, it is known that some 1D quantum phases have a non-intrinsic topological order and are named symmetry-protected topological phases. A simple example of such models is the toric code state on a ladder, which shows a topological phase protected by a $Z_2 \times Z_2$ symmetry [43], in the sense that it is not transformed to a product state by local unitary transformations which respect the $Z_2 \times Z_2$ symmetry. On the other hand, topological order is characterized by long-range entanglement in topological quantum states [44].
Since local unitary transformations cannot remove the long-range entanglement in a topological state, if we consider the space of all quantum states belonging to different topological classes, local unitary transformations correspond to moving along paths towards fixed points within each topological class [18]. However, in order to move between different topological classes, we need non-local unitary transformations to change the pattern of the long-range entanglement. Therefore, it is an important task to find a systematic way of applying non-local transformations to sweep all topological classes.

Here we propose partially local unitary transformations, which are local along one particular dimension of the lattice and non-local along the other dimensions. We show that this leads to a layer-by-layer disentangling mechanism that induces transitions between different topological classes by reducing the dimension of the initial quantum state. We explicitly introduce such a partially local transformation, which we call a GHZ disentangler, for the color code on a hexagonal lattice as well as for the toric code on a square lattice. By applying such disentanglers to the above 2D topological codes, we convert them to many copies of Kitaev codes on ladders, which have a symmetry-protected topological phase. It therefore implies a transition from an intrinsic topological phase to a symmetry-protected topological phase. We also use our results to compare the entanglement structure of the color code with that of the toric code. In particular, we show that the difference between these important topological quantum codes is reflected in different patterns of the entanglement between Kitaev's ladders. In this regard, we propose that the layer-by-layer disentangling mechanism is an important tool for finding the pattern of the long-range entanglement in different topological states, which is important for the classification of topological orders.

The structure of the paper is as follows: In Sec. (II), we give an introduction to the toric code, Kitaev's ladder, and the color code. In Sec. (III), we introduce a partially local unitary transformation for the color code state on a hexagonal lattice. We show that such a transformation plays the role of a disentangler which converts the color code state to many copies of Kitaev's ladders. In Sec. (IV), we examine our approach for the toric code state and show that it is also converted to many copies of Kitaev's ladders by partially local transformations. Finally, we compare the pattern of long-range entanglement in the toric code and the color code by considering the different patterns of entangling ladders in these topological codes.

II. TOPOLOGICAL QUANTUM CODES

Toric code (TC) is one of the pioneering quantum codes [19,23], which can be defined on any arbitrarily oriented lattice with qubits on the edges, see Fig. (1). The Hamiltonian corresponding to this code is defined in terms of vertex and plaquette operators $A_v$ and $B_p$,

$$H_{TC} = -J \sum_v A_v - J \sum_p B_p, \qquad (1)$$

where J is the coupling energy. $B_p$ and $A_v$ are defined as follows:

$$B_p = \prod_{i \in \partial p} Z_i, \qquad A_v = \prod_{i \in v} X_i, \qquad (2)$$

where X and Z are Pauli operators, $i \in v$ refers to qubits that live on edges incoming to the vertex v, and $i \in \partial p$ refers to qubits that live on edges surrounding the plaquette p. In Fig. (1), we show these operators for three different lattices: a square lattice, a triangular lattice, and a triangular ladder. Since the plaquette and vertex operators commute with each other, the ground state of the toric code is simply obtained as follows:

$$|GS\rangle = \prod_v (1 + A_v)\, |0\rangle^{\otimes n}, \qquad (3)$$

where n is the number of qubits and we ignore the normalization factor.
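As a quick consistency check of the stabilizer algebra above, the following minimal sketch verifies on a small torus that every X-type vertex operator shares an even number of edges with every Z-type plaquette operator, which is exactly the condition for all terms of Eq. (1) to commute. The edge-indexing conventions are our own and are not taken from the original construction:

```python
L = 3
def h(i, j): return (i % L) * L + (j % L)          # horizontal edge index
def v(i, j): return L * L + (i % L) * L + (j % L)  # vertical edge index

def vertex(i, j):     # edges meeting vertex (i, j): support of the X-type A_v
    return {h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}

def plaquette(i, j):  # edges around plaquette (i, j): support of the Z-type B_p
    return {h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)}

# A_v and B_p commute iff their supports overlap on an even number of edges
assert all(len(vertex(a, b) & plaquette(c, d)) % 2 == 0
           for a in range(L) for b in range(L)
           for c in range(L) for d in range(L))
print("all A_v commute with all B_p on the", L, "x", L, "torus")
```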
On the other hand, since each vertex operator corresponds to a loop on the dual lattice, as shown in Fig. (1), each product of vertex operators can be represented by a configuration of loops. In this regard, the ground state of the toric code is a superposition of all loop configurations of spin-down states |1⟩ on a background of spin-up states |0⟩, which is called a loop condensed state. Such a state has a topological order which leads to a degeneracy of the ground state when we consider a periodic boundary condition. In particular, there are two topological operators in the form of products of X operators along non-contractible loops around the torus. Applying such operators to |GS⟩ generates three more ground states of the toric code. The different topological descriptions of the above ground states are the reason for the robust degeneracy of the toric code, which is important for its application as a quantum memory.

The robustness of topological order in the toric code is also understood in terms of local unitary transformations. In particular, topological order is robust against arbitrary local unitary transformations in the sense that the ground state cannot be converted to a product state by applying arbitrary local unitaries. On the other hand, as shown in Fig. (1), we can consider the Kitaev code on a quasi-one-dimensional lattice such as a ladder, which is named Kitaev's ladder. It has been shown that Kitaev's ladder does not have an intrinsic topological order but is a symmetry-protected topological phase [43]. In particular, while the ground state is converted to a product state under generic local unitary transformations, it is protected under local unitaries that respect a particular symmetry, i.e., a $Z_2 \times Z_2$ symmetry.

Besides the toric code, the color code (CC) is another topological quantum code, in which qubits live on the vertices of a three-colorable lattice. Adding the extra element of color to this model leads to the emergence of some features which are different from the toric code [21,22]. Here we consider a two-dimensional hexagonal lattice that is colored by three colors: red, blue, and green. As shown in Fig. (2-a), the hexagonal lattice is a three-colorable lattice in the sense that no two neighboring plaquettes have the same color. Moreover, the edges are also three-colorable, where we assign a color to each edge that connects two plaquettes of the same color. The Hamiltonian corresponding to this code is written as:

$$H_{CC} = -J \sum_p B_p^X - J \sum_p B_p^Z, \qquad (4)$$

where $B_p^X$ and $B_p^Z$ are commuting plaquette operators which are defined as follows:

$$B_p^X = \prod_{i \in p} X_i, \qquad B_p^Z = \prod_{i \in p} Z_i, \qquad (5)$$

where $i \in p$ refers to all qubits belonging to the plaquette p.

FIG. 2: a) Color code on a two-dimensional hexagonal lattice. Here the qubits live on the vertices of each plaquette, and the edges connect plaquettes of the same color; for example, a green edge connects green plaquettes. b) Red and blue plaquette operators are described by triangles of a green triangular lattice. c) Blue and green plaquette operators are described by triangles of a red triangular lattice. d) A product of plaquette operators in the color code is represented by a loop structure constructed by two different colors.

Similar to the toric code, the ground state of the color code can be written in terms of X-type operators as follows, up to a normalization factor:

$$|CC\rangle = \prod_p (1 + B_p^X)\, |0\rangle^{\otimes m}, \qquad (6)$$

where m refers to the number of vertices in the hexagonal lattice. As shown in Fig. (2-b,c), we can draw a triangular lattice whose edges cross the edges of the hexagonal lattice that have the same color.
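As a sanity check of Eq. (6), the sketch below builds $|CC\rangle = \prod_p (1 + B_p^X)|0\rangle^{\otimes m}$ for a small seven-qubit triangular color code patch and verifies that it is a +1 eigenstate of every plaquette operator. The three four-qubit plaquette supports are chosen for illustration (any choice with pairwise even overlaps works) and do not correspond to the hexagonal lattice of Fig. (2):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def pauli(op, support, n):
    # tensor product with `op` on qubits in `support` and identity elsewhere
    return reduce(np.kron, [op if q in support else I2 for q in range(n)])

n = 7
faces = [{0, 1, 2, 3}, {1, 2, 4, 5}, {2, 3, 5, 6}]  # pairwise even overlaps
BX = [pauli(X, f, n) for f in faces]
BZ = [pauli(Z, f, n) for f in faces]

state = np.zeros(2 ** n); state[0] = 1.0            # |00...0>
for b in BX:
    state = state + b @ state                       # apply (1 + B_p^X)
state /= np.linalg.norm(state)

assert all(np.allclose(b @ state, state) for b in BX + BZ)
print("|CC> is a +1 eigenstate of every B_p^X and B_p^Z")
```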
In this regard, since each triangle of such a lattice corresponds to a hexagonal plaquette of the initial lattice, the corresponding $B_p^X$ operator can be represented by a triangular loop. It implies that there should be a loop representation for the color code state similar to the toric code state. However, it is impossible to represent all $B_p^X$ operators by loops of the same color. For example, while red and blue plaquettes correspond to green triangles, Fig. (2-b), for representing green plaquettes we need blue or red triangles, Fig. (2-c). In this regard, $\prod_p (1 + B_p^X)$ in Eq. (6) does not lead to a simple loop condensed state. In particular, there are loop structures constructed by different colors, similar to what we show in Fig. (2-d).

The existence of loop structures of different colors also plays an important role in the degeneracy of the ground state of the color code model. In particular, we have six non-contractible loops of three different colors in two different directions. Since only two of the three colors of the above non-contractible loops are independent, the color code has a 16-fold degeneracy due to four non-contractible loops of two different colors. Furthermore, it has been shown that the color in the color code also leads to more computational power compared to the toric code, where one is able to apply all Clifford gates on qubits encoded in the ground state of the color code [21]. In spite of such differences between the toric code and the color code, it has been shown that there is a local unitary transformation which converts a 2D color code to two copies of the toric code.

Here, we would like to emphasize the different topological classes of the above 2D topological codes and the quasi-1D Kitaev's ladder. It is a reflection of the role of dimension in the classification of topological phases. In particular, the toric code and the color code can even be defined on higher-dimensional lattices, where different topological properties emerge [45-48]. For example, while excitations in the 2D toric code and color code are string-type, in higher dimensions excitations correspond to membranes [45,47]. This important topological property is the reason that higher-dimensional versions of these codes can be self-correcting [33]. Regarding the different topological properties of topological codes in different dimensions, it is clear that there is no local unitary transformation that converts between topological codes in different dimensions. Here, we propose a systematic way to induce transitions between different topological classes corresponding to different dimensions. To this end, for a D-dimensional topological code one can consider a partially local transformation which is applied between two (D−1)-dimensional layers, in the sense that while it is applied non-locally to the (D−1)-dimensional layers, it is local in the direction orthogonal to the above layers; see Fig. (3) for a schematic of such a transformation. In the next section, we introduce such a transformation explicitly for the 2D color code and show that it converts the color code to many copies of Kitaev's ladders, in the sense that it reduces the dimension of the initial topological code by a layer-by-layer disentangling mechanism.

III. PARTIALLY LOCAL TRANSFORMATIONS ON THE COLOR CODE

In this section, we examine a layer-by-layer disentangling operation for the 2D topological color code and show that it is converted to many copies of Kitaev's ladders by a partially local unitary transformation. To this end, consider the color code on the honeycomb lattice.
As shown in Fig. (4), we consider non-contractible loops along one direction on the lattice, where each loop passes through N qubits. There are N such non-contractible loops on the lattice, which cover all qubits of the color code. Then, corresponding to each loop, we introduce an N-qubit GHZ basis. To this end, note that the GHZ state on N qubits, of the form $\frac{1}{\sqrt{2}}(|00...0\rangle + |11...1\rangle)$, is a stabilizer state stabilized by a group of Pauli operators constructed from the following N generators:

$$Z_1 Z_2, \quad Z_2 Z_3, \quad \dots, \quad Z_{N-1} Z_N, \quad \Omega^x, \qquad (7)$$

where $\Omega^x$ refers to the product of all X operators on the qubits belonging to a non-contractible loop, and we denote the above generators by $g_1, g_2, ..., g_N$, respectively. Moreover, using the above set of stabilizers, we are also able to construct the other $2^N - 1$ GHZ states to obtain a complete N-qubit GHZ basis. For example, the state $\frac{1}{\sqrt{2}}(|00...0\rangle - |11...1\rangle)$ is stabilized by $g_1, ..., g_{N-1}$, but the effect of $g_N$ on such a state leads to the eigenvalue −1. In the same way, all N-qubit GHZ states are defined as eigenstates of $g_1, ..., g_N$ with different eigenvalues. In this regard, we write all $2^N$ GHZ states in the form of:

$$|m_1, m_2, \dots, m_N\rangle \propto \prod_{i=1}^{N} \left(1 + (-1)^{m_i} g_i\right) |++...+\rangle, \qquad (8)$$

where $m_i = 0, 1$, and $m_1, m_2, ..., m_N$ are called GHZ qubits, living on the edges belonging to each non-contractible loop; see Fig. (4), where we denote GHZ qubits by circles colored by the same color as the corresponding edge. Notice that corresponding to each non-contractible loop there is a GHZ basis and, therefore, the whole Hilbert space of the $N^2$ qubits on the lattice is spanned by a product of N copies of the above N-qubit GHZ basis. It is clear that there is a unitary transformation that changes the computational basis to the above GHZ basis. Since such an operator is local in the vertical direction and non-local in the horizontal direction, we call it a partially local unitary transformation.
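To make Eqs. (7) and (8) concrete, the following minimal numerical sketch (our own check, with N = 3 and dense matrices) verifies that the $2^N$ operators $\prod_i \left(1 + (-1)^{m_i} g_i\right)/2$ are rank-one orthogonal projectors resolving the identity, and that the $m = (0, \dots, 0)$ sector is exactly the GHZ state:

```python
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

N = 3
def zz(i):  # g_i = Z_i Z_{i+1}
    return reduce(np.kron, [Z if q in (i, i + 1) else I2 for q in range(N)])

g = [zz(i) for i in range(N - 1)] + [reduce(np.kron, [X] * N)]  # g_N = Omega^x

projectors = []
for m in product((0, 1), repeat=N):
    P = np.eye(2 ** N)
    for mi, gi in zip(m, g):
        P = (P + (-1) ** mi * (gi @ P)) / 2       # (1 + (-1)^{m_i} g_i)/2
    projectors.append(P)

assert np.allclose(sum(projectors), np.eye(2 ** N))           # resolution of identity
assert all(np.isclose(np.trace(P), 1.0) for P in projectors)  # each sector is 1D

ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)     # (|00..0> + |11..1>)/sqrt(2)
assert np.allclose(projectors[0] @ ghz, ghz)                  # the m = (0,...,0) sector
print("2^N one-dimensional GHZ sectors verified for N =", N)
```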
Now, we are going to find the effect of such a transformation on the color code state. Since there is a one-to-one correspondence between a stabilizer state and the group of its stabilizers, we consider the effect of the above transformation on the stabilizers of the color code state. Then, we can use the new group of stabilizers to characterize the final quantum state after the transformation. To this end, we divide all stabilizers into three sets corresponding to the three colors of GHZ qubits. In particular, corresponding to the green GHZ qubits, we consider all $B_p^X$ operators corresponding to green plaquettes, in addition to the $B_p^Z$ operators corresponding to red and blue plaquettes which have three green GHZ qubits on their edges, see Fig. (5-a). In the same way, corresponding to the red (blue) GHZ qubits, we consider another stabilizer set including the $B_p^X$ operators corresponding to red (blue) plaquettes, in addition to the $B_p^Z$ operators corresponding to blue and green (red and green) plaquettes which have three red (blue) GHZ qubits on their edges, see Fig. (6-a).

We start with the transformation of the first set of stabilizers, corresponding to the green color. In particular, consider a $B_p^X$ stabilizer corresponding to a green plaquette. As shown in Fig. (5-a), there are eight GHZ qubits near a green plaquette, including three red GHZ qubits, one blue GHZ qubit, and four green GHZ qubits. To consider the effect of the $B_p^X$ operator on these eight GHZ qubits, note that each GHZ qubit appears in the GHZ basis through a factor $\left(1 + (-1)^{m_i} Z_i Z_{i+1}\right)$. If $B_p^X$ commutes with $Z_i Z_{i+1}$, we have

$$B_p^X \left(1 + (-1)^{m_i} Z_i Z_{i+1}\right) = \left(1 + (-1)^{m_i} Z_i Z_{i+1}\right) B_p^X ,$$

and therefore the effect of $B_p^X$ on the GHZ qubit $m_i$ is equivalent to an identity operator. On the other hand, if $B_p^X$ anticommutes with $Z_i Z_{i+1}$, we will have

$$B_p^X \left(1 + (-1)^{m_i} Z_i Z_{i+1}\right) = \left(1 + (-1)^{m_i + 1} Z_i Z_{i+1}\right) B_p^X ,$$

and therefore the effect of $B_p^X$ on the GHZ qubit $m_i$ is equivalent to a logical X operator which shifts $m_i$ to $m_i + 1$. In this regard, we consider the commutation relations of the green $B_p^X$ operator with the eight $Z_i Z_{i+1}$ operators corresponding to the eight GHZ qubits near the green plaquette p. As seen in Fig. (5-a), since $B_p^X$ has two qubits in common with the blue and red edges, it commutes with the corresponding $Z_i Z_{i+1}$. However, it has one qubit in common with each of the four green edges and, therefore, it anticommutes with the corresponding $Z_i Z_{i+1}$. In this regard, the effect of the $B_p^X$ operator is equivalent to a product of four logical X operators on four green GHZ qubits.

Now, we consider the red and blue plaquettes which have three green GHZ qubits on their edges, and study the transformation of the corresponding $B_p^Z$ operators. In particular, note that such a $B_p^Z$ operator has two qubits in common with each green edge and, therefore, is equal to a product of $Z_i Z_{i+1}$ operators on three green edges. To consider the effect of this operator on the GHZ basis, we notice that

$$Z_i Z_{i+1} \left(1 + (-1)^{m_i} Z_i Z_{i+1}\right) = (-1)^{m_i} \left(1 + (-1)^{m_i} Z_i Z_{i+1}\right).$$

Therefore, each $Z_i Z_{i+1}$ applied to a green edge is equivalent to a logical Z operator on the corresponding green GHZ qubit. Consequently, the above $B_p^Z$ operators are transformed into a product of three logical Z operators on three green GHZ qubits around the plaquette p, as shown in Fig. (5-a). Interestingly, as shown in Fig. (5-b), the resultant logical X-type and Z-type stabilizers are the same as the vertex and plaquette operators of a Kitaev code defined on a green triangular ladder, where the green GHZ qubits live on the edges of the ladder. By applying the above transformation to similar stabilizers in the other rows of the lattice, we find the other green ladders. Importantly, the above ladders are completely separated in the sense that they share no green GHZ qubits.

The transformation of the other sets of stabilizers, corresponding to the blue and red colors, is done in the same way. As shown in Fig. (6-a), consider a $B_p^X$ stabilizer corresponding to a red (blue) plaquette. Such an operator anticommutes with the $Z_i Z_{i+1}$ operators of four red (blue) GHZ qubits and, therefore, it is equal to a product of four logical X operators on red (blue) GHZ qubits. The $B_p^Z$ operators corresponding to green and blue (green and red) plaquettes are also equal to products of three logical Z operators on three red (blue) GHZ qubits, as shown in Fig. (6-a). Such stabilizers are also represented by red (blue) ladders, and the resultant stabilizers are the stabilizers of Kitaev codes defined on red (blue) ladders, see Fig. (6-b). In this regard, while all qubits are entangled in the color code state, the logical qubits in the GHZ basis are disentangled, and the resultant state is a tensor product of Kitaev states on ladders of three different colors. In other words, Kitaev's ladders are the building blocks of the color code, and the partially local unitary transformation plays the role of a layer-by-layer disentangler which separates the different layers of the color code state. Regarding the symmetry-protected topological phase of Kitaev's ladder, our result implies a transition from an intrinsic topological phase to a symmetry-protected topological phase. On the other hand, notice that the non-local nature of the disentanglers has led to a change in the pattern of the long-range entanglement of the initial state. Therefore, the pattern of entangling ladders in the color code is in fact a simple picture of the pattern of the long-range entanglement in this topological quantum code.
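The edge-level identities used repeatedly above can be checked directly on a single pair of qubits. In this minimal sketch (our own conventions), an X acting on one qubit of a GHZ edge implements the logical $\bar{X}$ that shifts $m_i \to m_i + 1$, an X acting on both qubits leaves $m_i$ untouched, and $Z_i Z_{i+1}$ acts as the logical $\bar{Z}$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

ZZ = np.kron(Z, Z)   # Z_i Z_{i+1} on one GHZ edge
X1 = np.kron(X, I2)  # X touching the edge on a single qubit
XX = np.kron(X, X)   # X touching the edge on both qubits

for m in (0, 1):
    P      = np.eye(4) + (-1) ** m * ZZ        # (1 + (-1)^{m_i} Z_i Z_{i+1})
    P_flip = np.eye(4) + (-1) ** (m + 1) * ZZ  # m_i -> m_i + 1
    assert np.allclose(X1 @ P, P_flip @ X1)    # anticommuting X: logical X-bar
    assert np.allclose(XX @ P, P @ XX)         # commuting X: identity on m_i
    assert np.allclose(ZZ @ P, (-1) ** m * P)  # Z_i Z_{i+1}: logical Z-bar
print("edge-level conjugation rules verified")
```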
IV. DISENTANGLING TORIC CODE TO KITAEV'S LADDERS

Our layer-by-layer disentangling method can be applied to other topological models. It is particularly important for comparing the patterns of the long-range entanglement of topological states in different topological classes. In this section, we show that there is another pattern of partially local transformations which converts the toric code model to many copies of Kitaev's ladders. To this end, as shown in Fig. (7), we study the toric code on a square lattice and consider diagonal lines on the square lattice which cross qubits along non-contractible loops. Corresponding to half of these lines, we define GHZ bases in the sense that two neighboring qubits i and i+1 along a line are mapped to a GHZ qubit $m_i$ living between them, i.e., a factor $\left(1 + (-1)^{m_i} Z_i Z_{i+1}\right)$. This is a partially local transformation because it is local in the direction orthogonal to the diagonal lines.

Next, we consider the effect of such a partially local transformation on the stabilizers of the toric code. As shown in Fig. (8-a), a vertex operator $A_v = X_1 X_2 X_3 X_4$ anticommutes with the $Z_i Z_{i+1}$ operators of the two GHZ qubits associated with qubits 1 and 2. Therefore, $X_1$ and $X_2$ are converted to two logical operators $\bar{X}_1$ and $\bar{X}_2$, while the two original operators $X_3$ and $X_4$ remain unchanged. Consequently, the original operator is converted to a four-local operator including two logical X operators and two initial X operators, which is the same as the X-type stabilizer of a Kitaev code on a triangular ladder. On the other hand, for a plaquette operator $B_p^Z = Z_1 Z_2 Z_3 Z_4$, shown in Fig. (8-b), while $Z_3$ and $Z_4$ remain unchanged, the effect of $Z_1 Z_2$ on the GHZ basis is equal to a logical operator $\bar{Z}_1$, because

$$Z_1 Z_2 \left(1 + (-1)^{m_1} Z_1 Z_2\right) = (-1)^{m_1} \left(1 + (-1)^{m_1} Z_1 Z_2\right).$$

Therefore, the initial plaquette operator is converted to a three-local Z-type stabilizer including one logical operator $\bar{Z}_1$ and two original operators $Z_3$ and $Z_4$, which is the same as the Z-type stabilizer of the Kitaev code on a triangular ladder.

Next, we apply the above transformation to all plaquette and vertex operators of the toric code. As shown in Fig. (9-a), we divide all vertices of the square lattice into two different sets, denoted by blue and green colors. Then we also color all GHZ qubits with blue and green colors, in the sense that a GHZ qubit living in a plaquette is colored blue (green) if most of the vertices of that plaquette are blue (green). As shown in Fig. (9-b), by such a division of vertex operators, the blue and green vertex operators are converted to X-type operators living on blue and green GHZ qubits, respectively. In other words, the blue and green GHZ qubits are decoupled by the transformation. In the same way, the plaquette operators are also converted to two decoupled sets of Z-type operators on blue and green GHZ qubits. Finally, applying the above transformation to all qubits generates blue and green Kitaev's ladders which are decoupled, see Fig. (9-c). Consequently, similar to the color code, Kitaev's ladders are building blocks of the toric code, and the partially local transformation plays the role of a layer-by-layer disentangler which separates diagonal ladders from the toric code.
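The GHZ-basis change underlying this construction can also be written as an explicit unitary circuit. For its smallest building block, a single pair of qubits (the N = 2, Bell-pair case), the sketch below builds $U = \mathrm{CNOT} \cdot (H \otimes 1)$ in our own conventions and checks how the operators entering $A_v$ and $B_p^Z$ transform under it:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

U = CNOT @ np.kron(H, I2)  # maps |a, b> to the Bell (N = 2 GHZ) basis
conj = lambda O: U.conj().T @ O @ U

assert np.allclose(conj(np.kron(Z, Z)), np.kron(I2, Z))  # Z1 Z2 -> logical Z on m
assert np.allclose(conj(np.kron(X, X)), np.kron(Z, I2))  # paired X: trivial on m
assert np.allclose(conj(np.kron(X, I2)), np.kron(Z, X))  # single X: logical X on m
print("Bell-pair building block of the partially local transformation verified")
```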
On the other hand, it is important to compare the pattern of entanglement between ladders for the toric code with that for the color code. Regarding Fig. (9-c), the toric code is constructed by entangling Kitaev's ladders which are inserted near each other in a side-by-side pattern. However, as shown in Fig. (6-b), the color code has a different structure in the sense that it is constructed by Kitaev's ladders which overlap with each other, with entanglers applied to three successive Kitaev's ladders. In particular, we represent the above three Kitaev's ladders with three different colors, corresponding to the three colors in the color code. In this regard, the above structure is a reflection of the role of color in the difference between the features of the color code and the toric code. This result shows that, by using the layer-by-layer disentangling mechanism, we are in fact able to find the pattern of long-range entanglement in the above topological quantum states. In other words, the different patterns of entanglement between layers for the color code and the toric code correspond to different patterns of the long-range entanglement.

V. CONCLUSION

We proposed a layer-by-layer disentangling mechanism as a systematic way of reducing dimension in topological lattice models. Since such non-local operations can change the pattern of long-range entanglement in topological states, it was expected that the above mechanism leads to transitions between different topological classes. We showed that there is such a transition from 2D topological codes with intrinsic topological order to Kitaev's ladders with symmetry-protected topological order. Furthermore, we showed that there are different patterns of entanglement between ladders for the color code and the toric code. Therefore, we concluded that different patterns of entanglement between layers are related to different topological features of the above quantum codes. In other words, different patterns of entanglement between layers correspond to different patterns of the long-range entanglement, which are fingerprints of different topological classes. As a concluding remark, we propose that the above idea can be used for the classification of topological orders in lattice models of different dimensions. For example, we expect that different topological classes of D-dimensional models are distinguished by different patterns of entanglement between layers when we reduce the dimension of the model step by step to convert the initial model to many copies of one-dimensional models. In this regard, we would be able to classify different topological orders corresponding to different patterns of entanglement between layers.
2023-05-24T01:16:32.832Z
2023-05-23T00:00:00.000
{ "year": 2023, "sha1": "7ba446845085152f05a9cab006816b78e6eb2bcb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7ba446845085152f05a9cab006816b78e6eb2bcb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }