252343322 | pes2o/s2orc | v3-fos-license |
Maize Apoplastic Fluid Bacteria Alter Feeding Characteristics of Herbivore (Spodoptera frugiperda) in Maize
Maize is an important cereal crop that is severely affected by Spodoptera frugiperda. This study aimed to identify endophytic bacteria from maize root and leaf apoplastic fluid with bioprotective traits against S. frugiperda and plant growth promoting properties. Among 15 bacterial endophytic isolates, two strains, RAF5 and LAF5, were selected and identified as Alcaligenes sp. MZ895490 and Bacillus amyloliquefaciens MZ895491, respectively. The bioprotective potential of B. amyloliquefaciens was evaluated through bioassays. In a no-choice bioassay, second instar larvae of S. frugiperda fed on B. amyloliquefaciens-treated leaves (B+) recorded comparatively lower growth (1.10 ± 0.19 mg mg−1 day−1) and consumption (7.16 ± 3.48 mg mg−1 day−1) rates. The same trend was observed in the larval dip and choice bioassays. In the detached leaf experiment, leaf feeding deterrence of S. frugiperda was greater following inoculation with B. amyloliquefaciens than with Alcaligenes sp. The phenolics content of B. amyloliquefaciens-inoculated plants was also greater (3.06 ± 0.09 mg gallic acid g−1). However, plant biomass production was higher in the Alcaligenes sp.-inoculated treatment. The study thus demonstrates the potential utility of Alcaligenes sp. and B. amyloliquefaciens for improving growth and biotic (S. frugiperda) stress tolerance in maize.
Introduction
Maize (Zea mays), the third most important cereal crop in India after rice and wheat, is planted on about 80 lakh ha [1]. Spodoptera frugiperda is one of the most severe pests of maize and causes yield losses of 8.3 to 20.6 million tons per annum. It is a devastating insect native to the tropical and subtropical regions of the Americas and feeds on more than 80 plant species, including maize, rice, sorghum, millet, sugarcane, vegetable crops, and cotton. In the recent past, maize has been reported to be heavily affected by S. frugiperda [2]. In the current scenario of climate change and global warming, it is important to identify bioprotective agents for the control of this herbivore.
Plant microbiome studies have revealed that plants are closely associated with numerous beneficial microorganisms. These microbes have attracted immense attention because of their role in improving plant growth and health [3]. Plant endophytes, microbes inhabiting plant tissues, have been reported to enhance plant growth by improving nutrient availability and providing tolerance against biotic and abiotic stresses [4]. The mechanisms of improvement in plant health include triggering the immune system via induced systemic resistance and the production of hydrogen cyanide [5], siderophores [6], ammonia [7], and hydrolytic enzymes such as proteases, chitinases, lipases, and pectinases [8]. Endophyte-mediated priming and/or elicitation of defence against various phytopathogenic fungi [9], herbivores [10], nematodes [11], and viruses [12] has been reported in earlier studies. A plant's apoplast is the space outside the plasma membrane that contains freely diffusing metabolites and proteins. These solutes play an important role in plant physiology in responses to biotic and abiotic stresses. The apoplast is considered one of the main reservoirs of bacterial endophytes and is also an important site of interaction between beneficial endophytic microbes and external elicitors such as pathogens [13]. Apoplastic fluid endophytes improve plant growth through enhanced nutrient availability (nitrogen, phosphate, potassium, iron, and zinc) and phytohormone production (indole acetic acid and gibberellic acid) [10,14]. Earlier studies reported that apoplastic microbes can enhance plant growth, mitigate drought [15], and trigger immunity against phytopathogens [16].
Although reports are available on endophytic bacteria-induced resistance against diseases and pests [10], the effects of apoplastic endophytic bacteria-mediated defence priming against herbivores are poorly understood. In this context, the study was carried out with the objectives of isolating and screening potential plant growth promoting bacterial endophytes from the apoplastic fluid of maize leaves and roots and of evaluating the effect of inoculation with selected endophytic bacteria on the growth of maize and of S. frugiperda during herbivore attack.
Isolation and Characterization of Apoplastic Fluid Endophytes from Maize (COH6)
The apoplastic fluid was recovered from excised root and leaf samples of 45-day-old maize COH6 grown in the millet farm of Tamil Nadu Agricultural University (TNAU), Coimbatore, India, according to the procedure of Maksimovica et al. [17]. In brief, samples were washed with tap water, cut into 5 cm pieces, and soaked in 0.05% (v/v) Triton X-100 for 4 min. The samples were then immersed in 5% sodium hypochlorite for 10 min, followed by 2.5% (w/v) sodium thiosulphate for 10 min, and then washed several times with sterile distilled water. After that, the leaf segments were dipped in 70% ethanol for 10 min, followed by ten washes in sterile distilled water. The leaves were blot dried on sterile filter paper and imprinted on tryptic soy agar medium to check for contamination. For apoplastic fluid extraction, the samples were cut into 2 cm segments and placed in a sterile disposable syringe containing infiltration solution (100 mM KCl), and pressure was applied until the tissue turned a dark colour. The samples were then placed inside a 1 mL micropipette tip, which was placed in a falcon tube and centrifuged at 6000× g for 10 min. The white apoplastic fluid collected was stored at 4 °C until further use. The collected apoplastic fluid (10 µL) was spread on nutrient agar (NA), lysogeny agar (LA), tryptic soy agar (TSA), and Reasoner's 2A agar (R2A) at different concentrations (100%, 50%, and 25%) and incubated for 48 h at 37 °C. Morphologically distinct colonies from each medium were selected.
Estimation of Mineral Solubilisation Efficiency
For estimating the nutrient solubilising index, the bacterial cultures were spotted on Pikovskaya, Aleksandrov, and Bunt and Rovira media for phosphate, potassium, and zinc solubilisation, respectively. After incubation, the halo zone formed around the colonies was measured, and the mineral solubilising index was calculated according to Chakdar et al. [18].
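As a small illustrative sketch: plate-based solubilisation indices of this kind are commonly computed as (colony diameter + halo zone diameter) / colony diameter. The exact formula of Chakdar et al. [18] is not reproduced in the text, so that expression and the plate diameters below are assumptions for illustration only (the isolate names RAF5 and LAF5 are from the study).

```python
def solubilisation_index(colony_diameter_mm: float, halo_diameter_mm: float) -> float:
    """Commonly used plate-assay index: (colony + halo zone) / colony diameter.
    Assumed form; the study cites Chakdar et al. [18] for the exact calculation."""
    return (colony_diameter_mm + halo_diameter_mm) / colony_diameter_mm

# Hypothetical plate readings (mm), for illustration only
plates = {
    ("RAF5", "phosphate"): (6.0, 9.0),
    ("LAF5", "zinc"):      (5.5, 4.0),
}
for (isolate, mineral), (colony, halo) in plates.items():
    print(isolate, mineral, round(solubilisation_index(colony, halo), 2))
```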
Indole Acetic Acid (IAA)
IAA production by the apoplastic fluid endophytes was determined using Salkowski reagent following the procedure of Patel and Saraf [19]. Isolates were inoculated into LB broth containing 0.2% L-tryptophan (pH 7.0) and incubated at 28 °C for 7 days. Post incubation, cultures were centrifuged at 12,000× g for 15 min. One mL of the supernatant was mixed with 2 mL of Salkowski reagent, and the intensity of the pink colour developed was read at 530 nm. For gibberellic acid (GA) quantification, selected endophytes were first grown in 10 mL nutrient broth for seven days and then centrifuged at 12,000× g for 10 min. To the supernatant, 2 mL of zinc acetate was added and incubated for 2 min. Then 2 mL of potassium ferrocyanide was added, and the mixture was centrifuged at 10,000× g for 10 min. To 5 mL of the supernatant, 5 mL of 30% hydrochloric acid was added and incubated at 30 °C for 75 min [20]. After incubation, the absorbance was read at 254 nm, and various concentrations of GA were used to prepare the standard. For siderophore production, chrome azurol S (CAS) agar medium was used for the qualitative assay. The endophytes were streaked on CAS medium, and a change of the colour of the medium from green to yellow was considered positive for siderophore production. For quantitative assessment, 48 h old cultures grown in succinic acid broth were centrifuged at 10,000× g for 10 min. One mL of supernatant was mixed with 1 mL of CAS reagent, and the absorbance was read at 630 nm in a spectrophotometer (SpectraMax i3X, San Jose, CA, USA) [12].
Ammonia Production
The endophytes were inoculated into 10 mL peptone water broth and incubated at 30 °C for 72 h. After incubation, the broth was centrifuged at 10,000× g for 10 min. Subsequently, 1 mL of supernatant was mixed with 0.5 mL of Nessler's reagent, and the colour developed was measured at 450 nm in a spectrophotometer. The quantity of ammonia produced was calculated using known concentrations of ammonium sulphate as the standard [21].
Hydrogen Cyanide (HCN) Production
HCN production was determined with the methodology of Devi and Thakur [22], using sodium picrate as an indicator. First, the selected endophytes were grown in a conical flask containing nutrient broth supplemented with glycine (4.4 g L−1). Then, a Whatman No. 1 filter paper strip (1 × 5 cm) dipped in sodium picrate solution was hung in the conical flask and incubated at room temperature for 72 h. During incubation, the colour of the sodium picrate on the filter paper changed from yellow to reddish brown, the intensity of which is proportional to the concentration of HCN produced by the culture. The compound on the filter paper was eluted using 10 mL distilled water, and the absorbance was recorded at 625 nm.
Lipase Activity
Isolated endophytes were streaked on tributyrin agar medium and incubated at 35 °C for 48 h. A clear zone around the colony indicated positive lipase activity [23].
Protease Activity
The bacterial endophytes were streaked on skimmed milk agar medium and incubated at 30 °C for 48 h. Endophytes positive for protease production were detected by the formation of a clear zone around the colony. To quantify protease production, 24 h grown cultures were centrifuged at 10,000× g for 6 min and the supernatant (crude enzyme) was collected. To 200 µL of the supernatant, 500 µL of casein (1% w/v in 50 mM phosphate buffer, pH 7) was added and incubated in a water bath at 40 °C for 20 min. Then, 1 mL of 10% (w/v) trichloroacetic acid was added and incubated at 35 °C for 15 min. The mixture was centrifuged at 10,000× g for 5 min and the supernatant was collected. To the supernatant, 2.5 mL of 0.4 M Na2CO3 and 1 mL of Folin-Ciocalteu's phenol reagent were added and incubated at room temperature for 30 min in the dark. The optical density of the solution was read at 660 nm [24].
Pectinase
The endophytes were streaked on pectinase agar medium and incubated for 48 h at 30 °C. The culture plates were then flooded with 50 mM potassium iodide solution, and the appearance of a clear zone around the colony indicated a positive result for pectinase production. For quantitative analysis, positive cultures grown in nutrient broth for 48 h were centrifuged at 10,000× g for 5 min and the supernatant served as the source of crude enzyme. To 100 µL of supernatant, 900 µL of substrate (citrus pectin, 0.5% w/v in 0.1 M phosphate buffer, pH 7.5) was added and incubated at 50 °C for 10 min in a water bath. After incubation, 2 mL of dinitrosalicylic acid (DNS) reagent was added and the mixture was placed in a boiling water bath for 10 min. After cooling, the optical density (OD) of the solution was measured at 540 nm [25].
Chitinase
Isolated endophytes were streaked on colloidal chitin agar medium and incubated for 5 days at 35 °C; clear zone formation around the colony indicated a positive result for chitinase activity [26]. Positive cultures grown in nutrient broth at room temperature for 48 h were centrifuged at 10,000× g for 15 min at 4 °C and the supernatant (crude enzyme) was collected. Then, 150 µL of crude extract was added to 150 µL of 0.1 M phosphate buffer (pH 7) and 300 µL of 0.1% colloidal chitin and incubated at 55 °C for 10 min. The mixture was centrifuged at 10,000× g for 5 min and the supernatant was collected. Finally, 200 µL of supernatant was mixed with 0.5 mL of distilled water and 1 mL of Schales' reagent; the mixture was boiled for 10 min and the absorbance was recorded at 420 nm.
Molecular Identification of Potent Apoplastic Fluid Bacterial Endophytes
Among the 15 endophytes, two were selected based on their growth promotion characteristics and bioprotective properties. The selected endophytes, grown overnight in nutrient broth, were used for extraction of genomic DNA following the CTAB (cetyltrimethylammonium bromide) method [27]. The 16S rRNA gene was amplified using the 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-GGTACCTTGTTACGACTT-3′) primers in a PCR (polymerase chain reaction) thermocycler (Bio-Rad T100). The PCR product was sequenced (J.K. Scientific, Coimbatore, India) and submitted to the National Center for Biotechnology Information (NCBI). Spodoptera frugiperda egg masses procured from the National Bureau of Agricultural Insect Resources, Bangalore (Accession number: NOC-03) were reared in plastic containers on maize leaves until the larvae reached the second instar [28]. Maize (COH6) seeds were obtained from the Department of Millets, Tamil Nadu Agricultural University, Coimbatore. The apoplastic endophyte B. amyloliquefaciens was grown in nutrient broth at 35 °C for 24 h and then centrifuged at 10,000× g for 10 min, and the cell pellet was collected. The cell concentration was adjusted to 10^8 cfu mL−1 with sterile distilled water.
No-Choice and Choice Bioassay
The bioprotective potential of B. amyloliquefaciens against S. frugiperda was determined through bioassays. For the bioassays, 5 cm maize (COH6) leaf segments (from 35-day-old plants from the millet field, TNAU) were soaked in sterile water containing 0.001% Tween 80 for 10 min and washed five times with sterile distilled water. The leaves were then surface sterilized with 1.6% sodium hypochlorite for 3-4 min and washed five times with sterile distilled water. In the no-choice bioassay, the surface sterilized, pre-weighed leaves were dipped in 20 mL of B. amyloliquefaciens suspension (10^8 cfu mL−1) (B+) for 15 min, while control leaves (B−) were soaked in sterile distilled water. Excess inoculum was drained off and each treated leaf was placed on a sterile petri plate containing wet filter paper. A pre-weighed second instar larva was then released into the petri plate and allowed to feed for 24 h. In the choice bioassay, two leaves, one B. amyloliquefaciens-inoculated (BA+) and one un-inoculated (BA−), were placed in a petri plate (140 mm). One pre-weighed second instar larva was released, allowed to feed on both leaves, and observed for preference [29]. Each bioassay was carried out with 25 replicates. The feeding indices of the larvae were calculated following the methodology of Waldbauer [30].
Relative growth rate (RGR) = weight gain / initial weight of caterpillar (1)
Relative consumptive rate (RCR) = food consumption / initial weight of caterpillar (2)
where B is the larval weight gain, I is the food ingested, C is the weight of the control leaf, and T is the weight of the treated leaf.
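As a minimal sketch of how these feeding indices can be computed from 24 h bioassay measurements, the snippet below implements RGR and RCR as written above, normalised by the feeding duration so the units come out as mg mg−1 day−1 as reported. The feeding deterrence calculation from C and T is only an assumed form, since the paper defines the symbols B, I, C, and T without reproducing that equation, and all numeric values are hypothetical.

```python
def relative_growth_rate(weight_gain_mg, initial_larval_weight_mg, days=1.0):
    """RGR = larval weight gain / (initial larval weight x feeding duration),
    giving mg mg^-1 day^-1 for a 24 h (1 day) assay."""
    return weight_gain_mg / (initial_larval_weight_mg * days)

def relative_consumption_rate(food_consumed_mg, initial_larval_weight_mg, days=1.0):
    """RCR = food consumed / (initial larval weight x feeding duration)."""
    return food_consumed_mg / (initial_larval_weight_mg * days)

def feeding_deterrence_index(control_consumed_mg, treated_consumed_mg):
    """Assumed form using C (control leaf consumed) and T (treated leaf consumed):
    FDI = (C - T) / (C + T) * 100.  Not stated explicitly in the paper."""
    c, t = control_consumed_mg, treated_consumed_mg
    return (c - t) / (c + t) * 100.0

# Hypothetical 24 h observations for a single larva (illustrative only)
print(relative_growth_rate(weight_gain_mg=5.5, initial_larval_weight_mg=5.0))
print(relative_consumption_rate(food_consumed_mg=36.0, initial_larval_weight_mg=5.0))
print(feeding_deterrence_index(control_consumed_mg=40.0, treated_consumed_mg=25.0))
```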
Larval Dip Bioassay
In this assay, second instar larvae (n = 25) were soaked in 20 mL of B. amyloliquefaciens culture (10^8 cfu mL−1) for 5 s. For the control, the same number of larvae were dipped in sterile distilled water [31]. A surface sterilized maize (COH6) leaf segment (from a 35-day-old plant) was placed in a petri plate (90 mm) containing sterile wet filter paper. A treated or control second instar larva was then introduced and allowed to feed for 24 h. The feeding indices of the larvae were calculated [30].
Detached Leaf Bioassay
A pot culture experiment with a completely randomized design was conducted in the Department of Agricultural Microbiology, TNAU, Coimbatore. Treatments comprised T1, control without Alcaligenes sp. or B. amyloliquefaciens but with S. frugiperda (C*SF); T2, Alcaligenes sp. + S. frugiperda (AS*SF); T3, B. amyloliquefaciens + S. frugiperda (BA*SF); and T4, Alcaligenes sp. + B. amyloliquefaciens + S. frugiperda (AS*BA*SF). Surface sterilized (10% sodium hypochlorite for 10 min) maize (COH6) seeds were soaked overnight in 15 mL of Alcaligenes sp. culture (10^8 cfu mL−1), the isolate obtained from root apoplastic fluid. The maize seeds were then sown in mud pots (22 × 20 × 20 cm) containing sterile soil and sand (2:1). Five days after seed germination, 3 mL of a 24 h culture of B. amyloliquefaciens (10^8 cfu mL−1) was sprayed on the foliage of each plant using a low volume sprayer. Hoagland nutrient solution (100 mL pot−1) was applied once every five days after germination, and the pots were irrigated once every two days with 100 mL of tap water per pot. After 35 days of germination, leaf samples (n = 10) were collected from each treatment and cut into 5 cm segments. For the bioassay, a single leaf segment was placed in a petri plate containing wet filter paper and one pre-weighed second instar S. frugiperda (SF) larva was introduced for 24 h of feeding [32]. The feeding indices of the larvae were calculated [30]. A parallel whole-plant experiment with the same treatments, including T3 (BA*SF) and T4 (AS*BA*SF), was conducted with five replications as mentioned in Section 2.6; after 35 days of germination, two second instar larvae were introduced per plant for 24 h of feeding.
Phenolics Content
After the herbivory treatment, leaf samples were drawn, and 500 mg of sample was ground with 1 mL of 80% methanol, centrifuged at 10,000× g at 4 °C for 10 min, and the supernatant was collected. The phenolics content in the supernatant was estimated as per the protocol of Selvaraj et al. [32]. Accordingly, 0.2 mL of supernatant was mixed with 1 mL of 1 N Folin-Ciocalteu reagent and 1 mL of distilled water and incubated for 3 min at 30 °C. After incubation, 1 mL of 20% sodium carbonate was added, and the mixture was incubated in a water bath (100 °C) for one minute and then cooled. After cooling, the absorbance was measured at 725 nm.
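Absorbance readings at 725 nm are usually converted to gallic acid equivalents (GAE) through a standard curve; the sketch below shows one way to do this under the volumes given above. The calibration points, the linear-curve assumption, and the dilution handling are all illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical gallic acid calibration: mg gallic acid per assay tube vs. A725
std_mg  = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
std_abs = np.array([0.00, 0.11, 0.23, 0.34, 0.44, 0.56])
slope, intercept = np.polyfit(std_mg, std_abs, 1)   # simple linear standard curve

def phenolics_mg_gae_per_g(a725, extract_ml=1.0, aliquot_ml=0.2, sample_g=0.5):
    """mg gallic acid equivalents per g sample, assuming 500 mg tissue extracted in
    1 mL 80% methanol and a 0.2 mL aliquot assayed (as in the protocol above)."""
    mg_in_aliquot = (a725 - intercept) / slope       # mg GAE in the assayed aliquot
    mg_in_extract = mg_in_aliquot * (extract_ml / aliquot_ml)
    return mg_in_extract / sample_g

print(round(phenolics_mg_gae_per_g(a725=0.34), 2))   # hypothetical reading
```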
Plant Biomass Production
After 24 h of herbivore treatment, the dry weights of shoot and root were recorded and expressed as g plant−1.
Statistical Analysis
Statistical analysis was carried out with XLSTAT (version 2019.2.1) and SPSS (version 16.0). Values are represented as mean ± standard error of experimental data with a minimum of three replications. Duncan's multiple range test (DMRT) was performed at p ≤ 0.05 for all endophyte characterization and detached leaf bioassay experiments. Principal component analysis (PCA) was performed to identify the best bioprotective apoplastic endophytic bacteria. An independent-samples t-test was performed for the no-choice and larval dip bioassay data. Box-and-whisker plots were constructed for the phenolics data.
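As a sketch of the two-sample comparison applied to the no-choice and larval dip data, an independent-samples t-test in Python might look like the following; the replicate values are placeholders rather than data from the study, and Welch's correction is a choice of this sketch, not necessarily the option used by the authors.

```python
import numpy as np
from scipy import stats

# Placeholder RGR values (mg mg^-1 day^-1) for treated (B+) and control (B-) larvae
rgr_b_plus  = np.array([1.0, 1.3, 0.9, 1.2, 1.1])
rgr_b_minus = np.array([2.3, 2.6, 2.1, 2.4, 2.5])

t_stat, p_value = stats.ttest_ind(rgr_b_plus, rgr_b_minus, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p <= 0.05 would indicate a treatment effect
```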
Isolation of Endophytes from Leaf and Root Apoplastic Fluid of Maize (COH6)
In total, 15 bacterial endophytes were isolated from maize leaf and root apoplastic fluid. Of these, nine were isolated from the leaf apoplast (LAF, leaf apoplastic fluid) and six from the root apoplast (RAF, root apoplastic fluid) using different media at various concentrations (Supplementary Table S1). The morphological characteristics of these isolates are presented in Supplementary Table S2. These 15 endophytes were characterized for their plant growth promoting activities and bioprotective properties.
Molecular Identification of Potent Bacterial Endophytes
Based on the plant growth promotion activities and bioprotective properties, RAF5 (Table 1) and LAF5 (Table 2, Figure 1) were selected and sequenced. The nucleotide sequences were analyzed using BioEdit software, resulting in contigs of 1445 bp (RAF5) and 710 bp (LAF5). NCBI BLAST analysis of RAF5 (1445 bp) showed the highest homology with Alcaligenes sp., with 100% sequence similarity and an e-value of 0. LAF5 (710 bp) showed the highest similarity with B. amyloliquefaciens, with 100% sequence similarity and an e-value of 0. The nucleotide sequences of these endophytes were submitted to NCBI, and the accession numbers obtained were MZ895490 for RAF5 and MZ895491 for LAF5 (Figure 2).
Choice Bioassay
The results of the choice bioassay showed that the percentage feeding preference of the larvae was 35% for BA− and 65% for BA+. However, 20% of the larvae later shifted their preference from BA+ to BA−. The mortality rate due to feeding on BA+ leaves was 15%. Larval weight gain was greater when fed on BA− (1.83 ± 0.23 mg g−1 day−1) compared to BA+ (0.90 ± 0.41 mg g−1 day−1).
Biomass Content of Maize
Endophyte-inoculated maize recorded higher biomass than the uninoculated control under S. frugiperda attack (Figure 4). Among the treatments, AS*SF plants registered significantly (p = 0.001, df = 32) higher shoot biomass than all others except BA*SF (p = 0.12, t = 1.84). Similarly, root biomass was significantly higher in AS*SF-inoculated plants (p = 0.001, df = 32). Root biomass was lower in AS*BA*SF than in AS*SF and BA*SF; however, it was significantly higher than in the control (p = 0.003).
Discussion
The apoplast is the region exterior to the protoplasm where many interactions occur, and it is the main reservoir for bacterial endophytes. It is the space in which microorganisms interact with each other and with the host. The present study therefore aimed to identify potential bacterial endophytes from apoplastic fluid to promote maize growth and protect the host from one of its most dangerous herbivores, the fall armyworm (S. frugiperda). Plant growth promoting bacterial (PGPB) endophytes improve crop growth by enhancing nutrient availability (nitrogen, phosphorus, potassium, sulphur, zinc, and iron), providing plant growth hormones (IAA, GA, cytokinins) [33], protecting plants from pests and diseases [7], and imparting tolerance against biotic and abiotic stressors [11,15]. In the current study, 15 endophytes were obtained from maize COH6 leaf and root apoplastic fluid and characterized for plant growth promoting and bioprotective properties. Among the 15 isolates, two endophytes were found to be superior to the others and were identified phylogenetically as Alcaligenes sp. and B. amyloliquefaciens. Of these two isolates, Alcaligenes sp. (RAF5) showed significant potential for all the plant growth promoting properties (Table 1, Figure 4). Similarly, Alcaligenes faecalis-inoculated rice recorded higher germination percentage, biomass content, and root and shoot length than the control [34]. A large body of research has cited the importance of Alcaligenes sp. in plant growth promotion and stress alleviation [35].
To improve the resistance and/or tolerance of maize against S. frugiperda, the biochemical and physiological traits (siderophore, HCN, and ammonia production) of the maize apoplastic fluid endophytes were tested; these compounds have also been reported to promote plant growth. Production of siderophores (ligands that solubilize iron) enhances the defence signalling of plants against various pathogens and pests, and excess iron intake leads to iron toxicosis and insect mortality [36]. Hydrogen cyanide (HCN) is a bacterial secondary metabolite with potential effects against external pathogens and elicitors; HCN disrupts the electron transport chain, which causes cell death. HCN-producing Chromobacterium sp. impaired the growth of Anopheles gambiae mosquito larvae and increased their mortality [37]. Ammonia is a volatile compound that contributes to plant growth and disease suppression. Pal et al. [38] reported that ammonia-producing Enterobacter cloacae controlled Pythium ultimum (damping off) in cotton; in the current study as well, most of the bacterial isolates produced siderophores, HCN, and ammonia.
Bioprotective properties also depend on the capability of endophytes to produce hydrolytic enzymes such as lipases, proteases, pectinases, and chitinases. Lipases hydrolyse waxes, lipoproteins, and fat in insects; proteases affect the insect cuticle, midgut, and hemocoel; chitinases disrupt the chitinous cuticle; and pectinases also play a role in pest control by affecting the insect gut [39]. Based on these characteristics, LAF5 (B. amyloliquefaciens) was chosen as the best endophyte among the apoplastic isolates (Table 2, Figure 1).
Many references report the importance of Bacillus spp. in plant growth promotion and biocontrol, including a B. amyloliquefaciens isolate from canola that possessed multiple biocontrol traits such as siderophore, HCN, and ammonia production and was shown to be antagonistic to pathogens [5]. Similarly, endophytic Bacillus sp. was found to produce substantial amounts of proteases and lipases [39], and chitinase produced by Bacillus spp. was found to induce defence genes and enzymes during biotic stress in tomato [40]. The current investigation likewise showed protease, lipase, and chitinase production by B. amyloliquefaciens (Table 2). Numerous references support the bioprotective property of B. amyloliquefaciens against phytopathogens [41]. B. amyloliquefaciens strain C6c, capable of secreting hydrolytic enzymes, was shown to be antagonistic to pathogens that colonize the leaves, seeds, and stems of English ivy [42]. Similarly, a lower degree of anthracnose infection was noticed in tobacco plants inoculated with B. amyloliquefaciens compared to the control [43]. Although plenty of reports indicate the effect of Bacillus sp. against plant pathogens, reports on Bacillus sp.-mediated protection of plants from herbivorous insect attack are scant. Reports of a high mortality rate of the spotted stem borer (Chilo partellus) fed on endophytic Bacillus sp.-inoculated maize [44] indicate the possibility of utilizing bacterial endophytes to protect crops from herbivorous insect attack. Inoculation and colonization of broad bean with B. amyloliquefaciens significantly reduced the feeding capacity of aphids through defence priming [43]. Hence, in the current study, we investigated the importance of Bacillus amyloliquefaciens in protecting maize from S. frugiperda attack.
In the no-choice bioassay, larvae fed on B. amyloliquefaciens-dipped leaves showed lower growth compared to the control (Table 3). In the larval dip method as well, the growth and consumption rates of B. amyloliquefaciens-treated larvae were reduced significantly (Table 4). Similarly, Kaushik et al. [45] reported that S. frugiperda showed a lower growth rate when fed on fungal endophyte-inoculated grass than on the un-inoculated control. Clement et al. [46] reported that Rhopalosiphum padi aphids chose endophyte-free grass over endophyte-infected grass in a choice test. Crawford et al. [47] also observed, in both field and pot trials, that herbivores significantly preferred endophyte-free grasses. On the contrary, in our study, most of the larvae (65%) initially preferred B. amyloliquefaciens-inoculated leaves (BA+). However, nearly 20% of the larvae later changed their choice from inoculated (BA+) to un-inoculated leaves (BA−), and 15% of the larvae that fed on treated leaves died.
The detached leaf bioassay indicated that the highest relative growth and consumption rates of the larvae occurred on control plants rather than on treated plants. However, the conversion efficiency and digestibility of the feeding material were recorded at higher percentages in the endophyte-applied treatments than in the control. These observations are similar to the results of Selvaraj et al. [32] for Spodoptera litura fed on black gram inoculated with AMF and Rhizobium. Moreover, the feeding deterrent index was higher in the endophyte-inoculated treatments than in the control (Table 5).
This reduced feeding behaviour is most likely correlated with the enhanced phenolics content of bacteria-inoculated plants. Phenolics are reported to be toxic and thus to have allelopathic effects on herbivorous insects, and studies indicate that accumulation of phenolic compounds in plants affects the growth and feeding capacity of larvae [48]. In the current study, the total phenolics content of maize leaves was higher in endophyte-colonized plants than in control plants (Figure 3). The results of the current study thus confirm the earlier observations of Oukala et al. [4] that tomato inoculated with endophytic Bacillus pumilus accumulates a greater quantity of toxic substances such as phenolics and β-1,3-glucanases against Fusarium. Similarly, Commare et al. [49] reported that accumulation of phenolics in tomato reduces the growth of Helicoverpa armigera.
To the best of the authors' knowledge, this is the first report of maize apoplastic fluid harbouring numerous plant growth promoting endophytes with bioprotective properties against the fall armyworm. This study resulted in the isolation of 15 endophytes from the root and leaf apoplastic fluid of maize. Among these 15 endophytes, two, namely B. amyloliquefaciens and Alcaligenes sp., were chosen as superior strains. While B. amyloliquefaciens inoculation significantly reduced the feeding characteristics of S. frugiperda on maize, Alcaligenes sp. inoculation improved the growth of maize. The various bioassays conducted with B. amyloliquefaciens-inoculated maize plants indicated the potential of the bacterial inoculation to reduce the leaf feeding of S. frugiperda, probably by enhancing feeding-inhibitory or toxic substances such as phenolics in the plant system. This suggests that B. amyloliquefaciens might be a potential bioprotective agent against S. frugiperda. However, the effect of the bacterial inoculation on the feeding behaviour of insects has to be studied under field conditions. Further studies are also essential to identify the biochemical and physiological alterations in maize inoculated with B. amyloliquefaciens that are responsible for the reduction in the feeding characteristics of S. frugiperda.
| 2022-09-18T15:12:57.390Z | 2022-09-01T00:00:00.000 |
{
"year": 2022,
"sha1": "1c3e45174a1c265440b1c2c0ed277edca6273210",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/9/1850/pdf?version=1663296601",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac3f700a2b45eed898aeb909376d9974376d1bdd",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
| 237383837 | pes2o/s2orc | v3-fos-license |
Performance measurement system using performance prism approach in batik company: a case study
Received: March 4, 2021. Revised: June 28, 2021. Accepted: June 30, 2021. XYZ company is a small-medium enterprise (SME) engaged in the batik industry with a main focus on making Muslim clothes made from printed batik and batik. The company cannot evaluate the cause of its decrease in turnover because no performance measurement has been applied. Therefore, company performance measurement was carried out using the performance prism approach, because a performance appraisal that accounts for stakeholder contributions is needed. The stakeholders of this company are consumers, employees, the community, capital owners, and suppliers. To support the performance prism framework, the AHP method was used to determine the weights and hierarchical structure, and a scoring system was then carried out with the help of OMAX to determine the company's actual scores. From the company's stakeholders, including owners, consumers, employees, suppliers, and the surrounding community, 34 KPIs were obtained. Implementation of the performance measurement system with OMAX scoring yielded company performance values for the satisfaction aspect (6.489), the contribution aspect (6.582), and the capability aspect (5.646). Recommendations for improvement are also given.
INTRODUCTION
One industry that has developed rapidly in Indonesia is batik. The batik industry in Indonesia has experienced rapid development since the recognition of batik as a world cultural heritage by UNESCO on October 2, 2009. Batik carries important meaning for building the spirit of nationalism and preserving Indonesian culture [1]. Micro and small businesses dominate the batik business, and the batik industry is spread across almost all regions of Indonesia. However, the increase in the number of batik businesses has not been followed by an increase in turnover. Especially in the current Coronavirus Disease (COVID-19) pandemic, many batik businesses are experiencing a decline in turnover, including XYZ company, one of the batik companies in Indonesia. XYZ company is a small-medium enterprise (SME) engaged in the batik industry with a main focus on making Muslim clothes made from printed batik and batik. The company cannot evaluate the cause of the turnover decrease because no performance measurement has been applied. Performance measurement with such a system causes the company's orientation to focus only on short-term profits and neglect long-term survival.
There are several methods of measuring performance, including the performance pyramid system, performance measurement systems for service industries, the balanced scorecard, the performance prism, organizational performance measurement, and others [2]. One of the main weaknesses of the balanced scorecard approach is its incompleteness in assessing stakeholder contributions [3]. The Performance Prism measurement method has an advantage over the BSC (Balanced Scorecard) in that it identifies various interested stakeholders, including investors, suppliers, customers, labour, regulators, and the community, whereas the BSC considers only shareholders and customers. Compared to the IPMS (Integrated Performance Measurement System), which identifies KPIs directly regardless of whether they come from strategy, process, or capability, the Performance Prism has the advantage that Key Performance Indicators (KPIs) are identified as strategy KPIs, process KPIs, and capability KPIs [4].
In this study, performance is measured using the performance prism approach. The performance prism is a performance measurement system that refines earlier performance measurement systems. Its framework is categorized into two aspects: business performance review and performance measurement review [5]. Companies need strategies to deal with environmental conditions and consider the resources they have, for example by using SWOT and fuzzy methods [6]. This model is based not only on strategy but also on the satisfaction and contribution of stakeholders, processes, and company capabilities [7], [8], as shown in Fig. 1.
In contrast to the Balanced Scorecard, which is guided by performance measures strictly derived from strategy [9], [10], the Performance Prism considers the needs and desires of stakeholders first. Moreover, the Balanced Scorecard is more focused on financial results and has not been able to determine an appropriate compensation system as a follow-up to performance evaluation results [11]. Therefore, a company performance appraisal that accounts for stakeholder contributions is needed, using the performance prism. Stakeholders play an important role in improving SME performance through their functions as trainers, analysts, coordinators, specialists, and financial providers [12]. The performance prism also provides comprehensive performance measurement by translating stakeholder satisfaction and contributions into organizational goals, strategies, business processes, and capabilities, for example in the batik industry [13]. The Performance Prism method has also been implemented in various micro and small business fields, including sports, pharmacy, department stores, and apparel [14].
A problem-solving method is used to support decision-making with the performance prism. Multi-criteria analysis is more popular than multi-objective optimization in environmentally conscious manufacturing. Currently, the techniques most often used are AHP, ANP, and TOPSIS, while other techniques such as MACBETH, DEMATEL, ELECTRE, and PROMETHEE are rarely used [15]. This research uses the Analytical Hierarchy Process (AHP) and the Objective Matrix (OMAX) scoring system. AHP is used only for weighting the criteria; its advantage is that the consistency of the judgments can be checked when determining the criteria weights. This study does not use a multi-criteria ranking index based on particular measures, such as the VIKOR method [16]. Many industries use the AHP method in combination with other approaches, such as automotive companies [17] and the food industry [18]. The decision support model decomposes complex multi-system or multi-criteria problems into a hierarchy. A hierarchy is defined as a representation of a complex problem in a multi-level structure, where the first level is the goal, followed by the levels of systems, criteria, subsystems, and so on, down to the last level of alternatives [19]. Various factors influence performance, depending on the type and profile of the organization and the aim of the study conducted [20]. Later, the OMAX method and a traffic light system allow the company to set long-term targets for the goals it wants to address.
This study aims to determine the satisfaction and contribution of each stakeholder, derive performance indicators from the company's strategy, process, and capability criteria, design a performance measurement model, and measure company performance based on the resulting design. This performance measurement can help XYZ company carry out measurement and improvement so that it can develop into a better company and compete with similar industries.
RESEARCH METHODS
The object of this research is a batik company with a make-to-order production system. Based on the source of acquisition, the data needed are primary and secondary data. Primary data include the list of stakeholders, data on the causes of satisfaction of each stakeholder, the strategies, processes, and capabilities needed, and data on the contributions of each stakeholder. Secondary data were obtained through specific references or literature on company data by conducting library research. Field studies were conducted through observation, interviews, and questionnaires. The stakeholders of this company are consumers, employees, the community, capital owners, and suppliers. Data were collected using a random sampling technique. The numbers of consumer, employee, and community respondents were 50, 65, and 70, respectively, which met the data adequacy test. There were two supplier respondents and one capital owner respondent.
The steps to use the performance prism are as follows (Fig. 2):
1. Identify stakeholder satisfaction and contributions. Satisfaction and contributions of the stakeholders are identified by distributing questionnaires, from which the satisfaction and contribution data are obtained. The customer satisfaction and contribution questionnaire uses the eight dimensions of quality as a reference: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality [21]. Ten random consumers were interviewed to see which factors affect their satisfaction; the interview results, adjusted to the reference journal, were then used to build the consumer satisfaction and contribution questionnaire. Employee satisfaction uses the job satisfaction reference of Smith et al. [22], whose five dimensions are the work itself, supervision, pay, promotion, and co-workers. For the owner, supplier, and community stakeholders, the satisfaction and contribution criteria were obtained through brainstorming and direct interviews with the parties concerned.
2. Transform to the strategy and process perspectives. The satisfaction and contribution data obtained previously are transformed into strategies suitable for application in the company. The company's strategy is determined through brainstorming.
3. Determine the capability perspective and build the Key Performance Indicator (KPI) framework. The company's capability perspective is obtained from data that the company has collected and measured before, i.e., business or KPI measurements previously carried out. From the data obtained, processed under each performance prism perspective, the overall company KPI framework is obtained.
4. Weight the KPIs with AHP and score with OMAX. KPI weighting uses AHP, followed by the OMAX method to measure company performance based on the design results. AHP was developed by Saaty [23]; it is a technique that helps solve complex problems by decomposing them into a hierarchy of levels. Each criterion is given a weight, and the alternatives are assessed through pairwise comparisons.
Then, priority is calculated using the eigenvector method. The steps for the AHP method are as follows: 1) Determine the types of criteria used.
2) Arrange these criteria in the form of a pairwise comparison matrix.
In the pairwise matrix A = (aij), n is the number of criteria, wi is the weight of criterion i, and aij = wi/wj is the ratio of the weights of criteria i and j. In filling out the pairwise comparison matrix, decision-makers are assisted by the scale shown in Table 1 (Saaty's 1-9 scale), which describes the relative importance of one element over another with respect to a criterion, ranging from one element being slightly more important than the other at the low end up to one element being absolutely more important than the other at 9, with 2, 4, 6, and 8 as intermediate values between two consecutive assessments. 3) Normalize each column by dividing each value in the i-th column and j-th row by the largest value in the i-th column. The Consistency Measure (CM) is then obtained by multiplying the matrix by the priority weight of each row.
7) Calculating the Consistency Index (CI)
The consistency calculation measures the deviation from consistency; this deviation is called the Consistency Index (CI) and is computed as CI = (λmax − n) / (n − 1), where λmax is the maximum eigenvalue and n is the matrix size. The CI is compared with a Random Index (RI), i.e., the consistency index of a random matrix filled with the 1-9 rating scale and its reciprocals. Based on Saaty's calculation using 500 samples, with numerical judgments taken at random from the scale 1/9, 1/8, ..., 1, 2, ..., 9, a consistent average is obtained for matrices of different sizes (Table 2). The Consistency Ratio (CR) of a matrix is defined as CR = CI / RI. In the AHP model, the comparison matrix can be accepted if the consistency ratio (CR) ≤ 0.1.
8) Geometric average calculation
This step is carried out to find the average of the pairwise comparison values given by n decision-makers using the geometric mean, ai = (Z1 × Z2 × ... × Zn)^(1/n), where ai is the geometric mean of the pairwise comparison values for criterion i, Zi is the comparison value given by participant i, and n is the number of participants (i = 1, 2, 3, ..., n). (Fig. 2 summarizes the research flow: problem formulation; identification of satisfaction and contribution data and their transformation to the strategy and process perspectives; determination of the capability perspective and identification of KPIs; KPI weighting using the AHP method and scoring using the OMAX method; analysis and conclusion.)
9) Determination of the final weight. In this step, the criterion or sub-criterion with the largest normalized value is identified as the most important.
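As a compact illustration of the AHP calculations described above (column-normalised priority weights, λmax, CI, and CR with Saaty's RI values, plus the geometric mean for aggregating several decision-makers), the sketch below uses a hypothetical 5 × 5 stakeholder comparison matrix; the matrix entries are illustrative, not the company's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison of the five stakeholders
# (consumer, employee, community, owner, supplier) on Saaty's 1-9 scale.
A = np.array([
    [1,   3,   5,   1,   3],
    [1/3, 1,   3,   1/3, 1],
    [1/5, 1/3, 1,   1/5, 1/3],
    [1,   3,   5,   1,   3],
    [1/3, 1,   3,   1/3, 1],
], dtype=float)

n = A.shape[0]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

# Priority weights: normalise each column by its sum, then average across rows.
weights = (A / A.sum(axis=0)).mean(axis=1)

# Consistency: lambda_max from A.w, then CI = (lambda_max - n)/(n - 1), CR = CI/RI.
lambda_max = float(np.mean((A @ weights) / weights))
CI = (lambda_max - n) / (n - 1)
CR = CI / RI[n]

print("weights:", np.round(weights, 3))
print(f"lambda_max = {lambda_max:.3f}, CI = {CI:.3f}, CR = {CR:.3f} (accept if <= 0.1)")

# Aggregating judgments from several decision-makers: element-wise geometric mean.
def geometric_mean(matrices):
    stacked = np.stack(matrices)
    return np.exp(np.log(stacked).mean(axis=0))
```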
RESULTS AND DISCUSSION
Stakeholder satisfaction and contributions were identified using questionnaires, which met the validity and reliability tests. The company used these data to determine the strategies implemented to meet stakeholder satisfaction and contribution; the formation of the strategies was discussed with the company. For example, from the customer satisfaction items "batik products that are not defective" and "batik products that are durable", the strategy "the company produces good-quality batik products" was formed. After describing each of these satisfaction and contribution items in this way, the resulting strategy and process KPIs can be seen in Table 3.
KPI data for the company capability aspect were obtained from previous studies. The capability aspect includes ten KPIs: customer growth (KPI 25), customer retention rate (KPI 26), work accident level (KPI 27), revenue per employee (KPI 28), employee turnover rate (KPI 29), community satisfaction (KPI 30), level of waste treatment (KPI 31), return on investment (KPI 32), net profit margin (KPI 33), and sales growth (KPI 34). The resulting hierarchy of the five stakeholders and their KPIs can be seen in Fig. 3.
In this calculation, the weighting of each criterion is based on its level. The levels in the performance prism comprise level one, which compares the criteria between stakeholders, and level two, which compares the KPIs of each stakeholder. The weighting was carried out by the company. The Consistency Ratio (CR) value for the comparison between criteria (consumer, employee, community, owner, and supplier) is 0.062; because this value is less than 0.1, the weighting between criteria shows consistent, valid results. Likewise, the CR values for the sub-criteria show consistent results because they are less than 0.1. In the next step, the performance measurement model is integrated with the OMAX (objective matrix) scoring model, whose function is to equalize the scale of each indicator.
In this way, the achievement of each parameter as well as the overall company performance can be known. OMAX is a method of evaluating company performance developed by Riggs [24], in which the assessment is carried out on criteria associated with the company. The concept of the assessment is to combine several working group performance criteria in a matrix. Each performance criterion has a goal in the form of a specific path for improvement and has a weight according to its importance to the organisation's goals. In the OMAX measurement on the Performance Prism, each level (from 0 to 10) is graded using the values obtained from the process aspect, based on the Likert-scale assessments of the satisfaction and contribution questionnaires of each stakeholder. The highest number is entered at level 10, and the lowest number, 1, is entered at the lowest level, level 0.
Values for levels 1-9 are obtained by interpolation. The level 0-10 criteria use a traffic light system: levels 0-2 are red, levels 3-7 are yellow, and levels 8-10 are green. The score table is then filled in based on the average values of the stakeholder satisfaction and contribution questionnaires. Next, the level corresponding to the actual condition is determined and the performance value is calculated. Table 4, Table 5, and Table 6 recapitulate the stakeholder satisfaction aspect, the stakeholder contribution aspect, and the company capability criteria, respectively.
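A minimal sketch of the level assignment and traffic-light classification just described: an actual KPI value is interpolated between its level-0 and level-10 anchors, weighted, and coloured red (0-2), yellow (3-7), or green (8-10). The KPI names, anchors, and weights below are hypothetical, and the linear interpolation is an assumption of this sketch; OMAX tables are often built stepwise per level.

```python
def omax_level(actual, level0, level10):
    """Interpolate an actual KPI value onto the 0-10 OMAX scale (linear assumption)."""
    level = 10 * (actual - level0) / (level10 - level0)
    return max(0.0, min(10.0, level))

def traffic_light(level):
    if level <= 2:
        return "red"
    elif level <= 7:
        return "yellow"
    return "green"

# Hypothetical KPIs: (actual value, worst anchor = level 0, target anchor = level 10, weight)
kpis = {
    "KPI 25 customer growth":   (4.2, 1.0, 10.0, 0.30),
    "KPI 29 employee turnover": (1.5, 1.0, 10.0, 0.40),   # higher score = lower turnover
    "KPI 31 waste treatment":   (6.0, 1.0, 10.0, 0.30),
}

total = 0.0
for name, (actual, lo, hi, w) in kpis.items():
    lvl = omax_level(actual, lo, hi)
    total += w * lvl
    print(f"{name}: level {lvl:.2f} ({traffic_light(lvl)})")
print(f"weighted performance value: {total:.3f}")
```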
Regarding the stakeholder satisfaction aspect, the lowest satisfaction value was for employee satisfaction, at 4.987 (Table 4). This level of employee satisfaction is important because it is key to the company's business operations. The low score on the employee satisfaction aspect lies in KPIs 6, 9, and 11, which assess the suitability of employees' job positions to their expertise, the percentage of employee absenteeism, and the improvement of work facilities. In general, HRM (Human Resource Management) practices such as training and development, rewards, job analysis, social support, recruitment and selection, employee relations, and empowerment are said to have a significant relationship with employee performance, and employee job satisfaction significantly affects employee performance [25]. However, HRM practice may have no significant effect on employee satisfaction because the effect depends on employee perceptions [26]. Therefore, a manager's role is needed to strengthen employees' positive HRM perceptions in order to increase job satisfaction.
The lowest contribution aspect from the stakeholders is the contribution from consumers, with a value of 4.98 (Table 5). The desired contribution from customers mainly concerns product quality tolerance and delivery delay tolerance (KPI 2 and KPI 3). One way to increase consumer contribution is to create a social media network to increase engagement of SME customers through Social CRM (Customer Relationship Management) [27]. The company also needs to apply the Indonesian National Standard (SNI) for batik using the ISO method to add value and provide economic benefits [28].
The lowest stakeholder satisfaction values are found in the employee KPIs. The strategy to increase employee satisfaction is to provide job positions that match employees' abilities and to improve income, bonuses, employee facilities, and a joint discussion forum between management and employees. Priority improvements are made to the red KPI (KPI 29) and the yellow KPIs (KPI 25, KPI 26, and KPI 31). KPI 29 gives the only red value among the capability criteria; therefore, this KPI needs immediate improvement. This is in line with research by Nurcahyo et al. [29], which found that the initial stage of priority strategies for the development of the batik industry is the development of human resources and technology. Employees are human resources who must receive full attention.
How can the high employee turnover rate be addressed? Employee turnover starts from the intention to leave, which is influenced by leadership behaviour, mediated by organizational commitment and job satisfaction [30]. Therefore, XYZ company needs to adopt leadership behaviour that is people-oriented rather than task-oriented. Employee commitment to the organization can also be grown by creating a comfortable working environment. Other KPIs that need improvement are KPI 25, KPI 26, and KPI 31. For consumer stakeholders, the recommendation for KPI 25 (customer growth) is to add new products that can improve marketing, and for KPI 26 (customer retention rate) it is to conduct a customer loyalty program by offering discounts or cashback to prospective buyers. For community stakeholders, KPI 31 (level of waste treatment) calls for creating a batik waste management system that does not pollute the environment.
CONCLUSION
The influential stakeholders in XYZ company are consumers, employees, the community, the owner, and suppliers. Performance measurement in XYZ company using these five stakeholders produced 34 KPIs: seven KPIs from consumer stakeholders, nine from employees, seven from the community, seven from the owner, and four from supplier stakeholders. The overall performance value for XYZ company is 6.489 for the satisfaction aspect, 6.582 for the contribution aspect, and 5.646 for the company capability aspect. These values show that XYZ company has achieved a fairly good performance according to the OMAX scoring criteria: a total measurement between 0 and 3 indicates deficient performance, between 3.01 and 8 fairly good performance, and above 8 up to 10 good performance.
To improve the KPIs whose measured values are still in the yellow and red zones, the company should follow the recommendations given. KPI 29 (employee turnover rate), the only red value among the capability criteria, must receive full attention. The company must create people-oriented leadership behaviour to foster employee satisfaction and organizational commitment to the company.
Future research can implement the recommendations given and make financial calculations. It is hoped that the company can continue to develop strategic goals adjusted to its procedures and policies so that the company's goals can be maximally achieved. Another avenue for research is the application of the performance prism method in other SMEs.
| 2021-09-01T15:04:07.721Z | 2021-01-01T00:00:00.000 |
{
"year": 2021,
"sha1": "ffdc4b92cba323efa30e2bd0561cf96ad70d0814",
"oa_license": "CCBYNCSA",
"oa_url": "https://e-jurnal.lppmunsera.org/index.php/JSMI/article/download/3099/1770",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ebfef450654dca006a8e93f1837651d7aa65d38",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
12405363
|
pes2o/s2orc
|
v3-fos-license
|
Bidirectional ventricular tachycardia in ischemic cardiomyopathy during ablation
Introduction
Ventricular tachycardia (VT) with alternating morphology is usually associated with conditions such as digoxin toxicity and catecholaminergic polymorphic ventricular tachycardia (CPVT), a cardiac channelopathy. We describe an unusual case of VT with alternating exits during VT ablation in a patient who is clinically known to have recurrent monomorphic VT.
Case report
A 78-year-old man presented with recurrent appropriate shocks from his implantable cardioverter-defibrillator (ICD) for monomorphic VT. He was known to have ischemic cardiomyopathy and monomorphic VT, for which he had received an ICD 12 years ago. Since then, he had had infrequent episodes of VT while taking sotalol that were terminated with antitachycardia pacing. He presented with VT storm and appropriate ICD therapies refractory to intravenous amiodarone and was offered VT ablation. On arrival to the electrophysiology (EP) laboratory, he was in clinically and hemodynamically stable VT (Figure 1, VT1).
The EP procedure was performed using intracardiac echocardiography (SoundStar, Biosense Webster, Diamond Bar, CA) and CartoSound (Biosense Webster), creating an anatomic map of the left ventricle (LV). A bidirectional DF curve SmartTouch surround flow 3.5-mm irrigated ablation catheter (Biosense Webster) was used for mapping and ablation within the LV. A quadripolar diagnostic EP catheter (Response electrophysiology catheter, St. Jude Medical, St. Paul, MN) was placed in the right ventricular apex. Figure 1 illustrates entrainment with near-concealed fusion (with slight electrocardiographic [ECG] morphological alteration) when pacing from the LV anterior-mid septum. The difference between the postpacing interval and the VT tachycardia cycle length (TCL) was 10 ms. The time from stimulus to surface QRS was 70 ms, equivalent to the time from the local electrogram (EGM) to surface QRS. The stimulus-to-QRS interval was short, suggesting that the catheter was near the exit site of the VT or had dual capture of both the isthmus and outer loop tissue. 1 The first radiofrequency (RF #1) application of 30 W was delivered at the site of entrainment (Figure 1). The clinical VT (VT1) terminated after 13 seconds of RF application. We continued for a total of 90 seconds of RF ablation at this site. Shortly after the termination of RF, there was a spontaneous occurrence of a second VT (VT2) of alternating morphology (Figure 2, VT2). We opted not to perform additional mapping of VT2 immediately at the time, but rather to perform consolidation lesions at the site of successful ablation of VT1. Another RF application (RF #2) was applied adjacent to the first RF lesion, which resulted in termination of VT2 within 3 seconds of RF ablation.
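The entrainment measurements above can be summarized as a simple rule of thumb. The sketch below is illustrative only: the cutoffs (post-pacing interval minus TCL within about 30 ms for a site within the circuit, stimulus-to-QRS below about 30% of the TCL for an exit region, and matching stimulus-to-QRS and EGM-to-QRS times) are commonly quoted entrainment-mapping conventions and are used here as assumptions, not values taken from this report.

```python
def classify_pacing_site(ppi_ms, tcl_ms, s_qrs_ms, egm_qrs_ms,
                         in_circuit_cutoff_ms=30, exit_fraction=0.30):
    """Rough interpretation of entrainment mapping measurements.

    ppi_ms: post-pacing interval; tcl_ms: tachycardia cycle length;
    s_qrs_ms: stimulus-to-QRS time; egm_qrs_ms: local electrogram-to-QRS time.
    The cutoffs are conventional approximations, not patient-specific values.
    """
    in_circuit = (ppi_ms - tcl_ms) <= in_circuit_cutoff_ms
    matches_egm = abs(s_qrs_ms - egm_qrs_ms) <= 20   # stimulus-QRS roughly equals EGM-QRS
    near_exit = (s_qrs_ms / tcl_ms) < exit_fraction
    if in_circuit and matches_egm and near_exit:
        return "likely within the circuit, near the exit"
    if in_circuit:
        return "likely within the circuit"
    return "likely remote from the circuit (bystander or outer tissue)"

# The report gives PPI - TCL = 10 ms and S-QRS = EGM-QRS = 70 ms; the absolute
# TCL below is a hypothetical value chosen only to make the example runnable.
print(classify_pacing_site(ppi_ms=460, tcl_ms=450, s_qrs_ms=70, egm_qrs_ms=70))
```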
We then proceeded to perform electroanatomic voltage mapping and ablation of the scar substrate in the LV. No further VTs were inducible on electrophysiologic testing after completion of substrate ablation. The subject had completed 6 months of follow-up without any recurrence of ventricular arrhythmias.
Discussion
Bidirectional VT (BVT) is a form of VT with a beat-to-beat alternation in the QRS axis on the surface ECG. It is uncommon and usually associated with certain specific conditions, such as CPVT, 2 digoxin toxicity, and long QT syndrome type 7 (Andersen-Tawil syndrome). 3 It has also rarely been reported in subjects with sarcoidosis 4 and acute coronary syndrome (ACS). 5 To our knowledge, it has never been described in subjects with ischemic cardiomyopathy in the absence of ACS.
In the context of channelopathies, the mechanism has been associated with mutations in the ryanodine receptor 2 of the sarcoplasmic reticulum, resulting in the voltage-dependent L-type Ca2+ channel remaining open for a longer period of time, extending into diastole of cardiac muscle cells. This causes delayed afterdepolarization in phase IV of the action potential, triggering polymorphic VT or BVT. Digoxin is postulated to cause polymorphic VT or BVT in a similar fashion by facilitating the opened state of ryanodine receptor 2. In ischemia, the mechanism is postulated to be a complex combination of neurohumoral and ionic imbalance coupled with an increase in electrical resistance between cardiac myocytes. 6 In our subject, BVT occurred shortly after ablation was performed at the exit site of a reentrant circuit of a right bundle branch block monomorphic VT. The exit site of the right bundle branch block morphology VT was localized to the anterior-mid aspect of the interventricular septum. The resultant BVT demonstrated left bundle branch block morphology with alternating superior and inferior axis. BVT exhibited a shorter but regular alternating cycle length (CL) associated with alternating surface morphology on the ECG and near-field EGM on the distal bipole of the ablation catheter (Figure 2; 310 and 370 ms, respectively, but a stable and consistent combined CL of 680 ms). One potential explanation is that BVT uses 2 exit sites in the LV (superior and inferior axis) that are closely related to the site of RF #1, with the superior-axis exit site having a faster conduction time or shorter circuit, resulting in a shorter TCL (310 ms) compared with the inferior-axis exit site (370 ms). However, both exit sites likely had a long effective refractory period; hence, the wavefront would block every other beat, leading to an exit at the alternative site. VT2 of "bidirectional" morphology must share a common protected isthmus, as there is a consistent combined CL of 680 ms and VT was noninducible post-ablation.
Although VT1 has a longer TCL than does VT2, we postulate that VT1 suppressed the wavefront exit of VT2 via concealed retrograde penetration and collision within the circuit or exit. As the VT1 wavefront exits, it collides with the exit sites of VT2, rendering these exit sites refractory (as illustrated in Figure 3). Hence, the VT1 exit site was preferentially Figure 1 Clinical ventricular tachycardia is shown with pacing from the mapping catheter during ventricular tachycardia demonstrating entrainment with nearconcealed fusion and measurements demonstrating that the distal bipole of the ablation catheter was near the exit site of a protected isthmus. ECG 5 electrocardiogram; EGM 5 electrogram; PPI 5 postpacing interval; TCL 5 tachycardia cycle length.
KEY TEACHING POINTS
Entrainment is an invaluable tool to localize the isthmus of scar-mediated reentry ventricular tachycardia (VT), which provides a significantly higher rate of success in termination of VT during ablation.
Scar-mediated VT may present with VT of various morphologies on the surface electrocardiogram due to the presence of multiple exit sites.
When a patient presents with de novo VT with alternating morphology on the surface electrocardiogram, scar-mediated reentry VT should be considered in the differential diagnosis rather than assuming conditions more commonly associated with bidirectional VT, such as digoxin toxicity and catecholaminergic polymorphic ventricular tachycardia.
Hence, the VT1 exit site was preferentially manifesting despite a longer TCL. However, with RF #1 applied in a protected area, the VT1 exit site was blocked and the VT circuit was left with the exit sites for VT2 with no colliding wavefront previously arising from VT1, leading to the manifestation of VT2. The local EGM of ablation 1-2 in Figure 2 showed consistent alternating morphologies with each alternating ECG surface morphology during VT2, suggesting alternating wavefronts colliding at the ablation distal bipole, which would be in keeping with multiple exit sites.
As illustrated in Figure 3, RF #2 was applied to the common isthmus of VT2, just adjacent to the isthmus of VT1 where RF #1 was applied. This resulted in the block and termination of VT2.
The patient had never exhibited BVT until the first RF application was performed. A similar theory of multiple exit sites as the cause of BVT had been previously offered, but in the context of sarcoidosis and angina. 4,7 In the presence of scar-mediated reentry VT, it is common to find VT of multiple morphologies because of multiple exit sites. Ablation within areas of scar with an abnormal local EGM during pace mapping with multiple exit sites had been associated with greater success. 8 This could explain the success with a single RF application at this site for VT2.
Conclusion
We present an unusual case of BVT in the midst of ablating ischemic VT. To our knowledge, this is the first reported case of this phenomenon in the absence of ACS. It is important to recognize that when a patient presents with de novo bidirectional VT, we should consider scar-mediated reentry VT and not assume conditions more commonly associated with BVT, such as digoxin toxicity and CPVT.
Figure 2. Second ventricular tachycardia (VT2) that occurred after the first radiofrequency application was delivered. VT2 exhibited a shorter but regular alternating cycle length associated with alternating morphology on the surface electrocardiogram and near-field electrogram.
Figure 3. Proposed circuit: we postulate that the clinical ventricular tachycardia (VT1) suppressed the wavefront exit of the second VT (VT2) via concealed retrograde penetration. By the time VT1 exits, its wavefront (represented by solid arrows) collides with and blocks the exit sites for VT2, leading to preferential manifestation of VT1 despite its longer tachycardia cycle length. When the first radiofrequency application (RF #1) at the VT1 exit site resulted in termination, VT2 was able to exit from the superior and inferior exit sites without collision with the antidromic wavefront of VT1. This allowed the manifestation of "bidirectional" VT2 (represented by dashed arrows). Another RF application (RF #2) was applied at the site adjacent to RF #1, the common isthmus of both superior and inferior exit sites for VT2, resulting in the termination of VT2. TCL = tachycardia cycle length.
Do mechanisms matter? Comparing cancer treatment strategies across mathematical models and outcome objectives
When eradication is impossible, cancer treatment aims to delay the emergence of resistance while minimizing cancer burden and treatment. Adaptive therapies may achieve these aims, with success based on three assumptions: resistance is costly, sensitive cells compete with resistant cells, and therapy reduces the population of sensitive cells. We use a range of mathematical models and treatment strategies to investigate the tradeoff between controlling cell populations and delaying the emergence of resistance. These models extend game theoretic and competition models with four additional components: 1) an Allee effect where cell populations grow more slowly at low population sizes, 2) healthy cells that compete with cancer cells, 3) immune cells that suppress cancer cells, and 4) resource competition for a growth factor like androgen. In comparing maximum tolerable dose, intermittent treatment, and adaptive therapy strategies, no therapeutic choice robustly breaks the three-way tradeoff among the three therapeutic aims. Almost all models show a tight tradeoff between time to emergence of resistant cells and cancer cell burden, with intermittent and adaptive therapies following identical curves. For most models, some adaptive therapies delay overall tumor growth more than intermittent therapies, but at the cost of higher cell populations. The Allee effect breaks these relationships, with some adaptive therapies performing poorly due to their failure to treat sufficiently to drive populations below the threshold. When eradication is impossible, no treatment can simultaneously delay emergence of resistance, limit total cancer cell numbers, and minimize treatment. Simple mathematical models can play a role in designing the next generation of therapies that balance these competing objectives.
Introduction
Treatment that delays or prevents the emergence of resistance can control cancers, potentially indefinitely, and provides a suitable strategy when eradication is impossible [1]. As with bacterial resistance to antibiotics or herbivore resistance to pesticides, high levels of treatment can lead to emergence of resistant strains that had been controlled by a combination of costs of resistance and competition with susceptible strains [2]. Adaptive therapies, where cessation of treatment precedes loss of efficacy, have been proposed as a way to delay emergence of resistance, an approach supported by mathematical modeling, laboratory experiments, and some preliminary results of clinical trials [3][4][5]. Modeling has played a key role in evaluating therapeutic timing, often providing evidence that reduced doses with treatment holidays can provide longer-term control [6].
The effectiveness of adaptive therapy depends on three assumptions: resistance is costly, resistant cells can be suppressed by competition with sensitive cells, and therapy reduces the population of sensitive cells. Under these assumptions, mathematical models have provided some support for adaptive therapy [5,7]. Our goal here is to investigate a wider range of mathematical models to better understand the role of these assumptions and of particular modeling choices in shaping the tradeoff between controlling cell populations and delaying the emergence of resistance. However, even for a relatively simple and well-understood cancer like prostate cancer, the mechanisms delaying resistance are not fully known. In particular, neither the role of costs nor that of competition with susceptible cell lineages has been clearly established.
For our range of models, we compare three strategies:
1. Maximum Tolerable Dose (MTD): a constant high dose determined by side effects.
2. Intermittent: a periodic scheduled dose with treatment holidays [8]. We here follow the more common usage, rather than metronomic therapy, as recently reviewed [9].
3. Adaptive: initiation and termination of treatment based on the status of individual patient biomarkers, often with much earlier cessation of treatment than in intermittent strategies.
We begin by examining the original model by Zhang et al (2017) in more depth, and looking at a wider range of therapy strategies. Our central result is that all therapies follow the same tradeoff between total cancer cell burden and time to emergence of resistance.
To investigate a wider range of models, we first place the original model into a broader framework of game theoretic and competition models. We extend these in four ways that could alter responses to therapy.
• Allee effect: Control of resistance solely by competition with susceptible cell populations leads logically to the tradeoff between cancer cell burden and time to emergence of resistance. Inclusion of an Allee effect, where cell populations grow more slowly at low population sizes [10], could break that tradeoff.
• Healthy cells: Cancer cells compete both with each other and with unmutated cells [11] that respond differently to therapy and could alter the response to different therapeutic regimes.
• Immune response: Apparent competition, whereby species interact not through competition for space or resources, but as mediated by predators [12] or an immune response, could alter responses to therapy by introducing a delay and through their own responses to treatment [13,14].
• Androgen dynamics: Resource competition for a growth factor like androgen, whose supply is reduced by treatment, could likewise alter responses to therapy.
After summarizing the original model from [5], we present the alternative models, provide parameter values that scale dynamics to be comparable to the original, derive analytical results on the simplest of these to illustrate tradeoffs, and test the three treatment strategies. We hypothesize that intermittent and adaptive therapy will produce similar results in all cases, with the exception of the Allee effect, where the rapid cessation of treatment could allow cancers to escape.
The basic Zhang model (Zh)
We begin with the model published by Zhang et al (2017). This Lotka-Volterra model has three competing cell types, which we reletter for consistency with our later models: S represents the population of androgen-dependent cells that are sensitive to treatment, P the population of androgen-producing cells, and R the population of androgen-independent cells that are resistant to treatment (x_1, x_2 and x_3, respectively, in the original).
Each cell type has an intrinsic per-cell growth rate r and carrying capacity K. The carrying capacity for S cells is assumed to be proportional to P. The competition coefficients a represent pairwise competitive effects between cells. Adaptive therapy is based on prostate-specific antigen (PSA) dynamics, in which 0.5 represents the rate of PSA decay. This is much faster than the other time scales in the model, making PSA almost exactly proportional to total cell numbers.
This model simulates therapy by reducing the carrying capacities of S and P. More specifically, K_P is reduced by a factor of 100, and the carrying capacity of S is changed from 1.5P to 0.5P. Treatment thus results in an extremely rapid drop in these two populations.
Treatment has no effect on R cells, which are thus released from the competitive suppression created by their smaller carrying capacity and grow quickly until therapy is stopped.
The full set of parameter values for our main simulations is given in Table 1, based on representative patient #1 [5].
This model includes the complexity of testosterone-producing P cells in addition to competition of sensitive and resistant cells. To test whether P cells are essential to the result, we build a simpler version of the model (Zs) with two cell types, and implement therapy by directly reducing the carrying capacity of the sensitive cells. We use the same parameters as for the full model, except that K_S = 1.5 × 10^4 in the absence of therapy and 50 with therapy.
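As a minimal numerical sketch of the simplified model Zs, the code below assumes a standard two-type Lotka-Volterra competition form (the display equations are not reproduced here) and implements therapy as the switch of K_S between 1.5 × 10^4 and 50 described above; the growth rates and competition coefficients are illustrative placeholders rather than the values of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def zs_rhs(t, y, r_S, r_R, K_R, a_SR, a_RS, treated):
    """Two-type Lotka-Volterra competition; treatment lowers the S carrying capacity."""
    S, R = y
    K_S = 50.0 if treated(t) else 1.5e4   # carrying-capacity switch described in the text
    dS = r_S * S * (1.0 - (S + a_SR * R) / K_S)
    dR = r_R * R * (1.0 - (R + a_RS * S) / K_R)
    return [dS, dR]

# Illustrative growth rates, resistant-cell carrying capacity, and competition
# coefficients (placeholders, not the fitted patient values).
args = (0.03, 0.02, 1.0e4, 1.0, 1.0)
mtd = lambda t: True            # maximum tolerable dose: always treated
no_therapy = lambda t: False    # never treated

sol = solve_ivp(zs_rhs, (0.0, 2000.0), [1000.0, 1e-10],
                args=args + (mtd,), max_step=1.0)
print("final S, R under MTD:", sol.y[:, -1])
```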
General model framework
To capture the key assumptions of this model and examine the conditions that lead to success of adaptive therapy, we consider the following general framework of interaction between sensitive cells S and resistant cells R, and an additional variable or variables X (equation 2.4). The additional dimension X represents androgen-producing cells in the Zhang model, but could also be healthy cells, androgen, another resource or growth factor, or an immune response. The variable u represents treatment, which will be a function of time for intermittent and adaptive therapy. The functions r_R and r_S describe growth as functions of population size to model competition, of X to capture the tumor microenvironment via use of resources or immune attack, and of u to represent the effects of treatment. The death terms δ_S and δ_R are included to separate births and deaths, and as a place for treatment effects u, here restricted to sensitive cells.
As outputs, we solve for the time of two types of treatment failure:
• The time T_C when the total cancer cell population exceeds some threshold C_crit,
• The time T_R when the population of resistant cells exceeds some threshold R_crit.
As the costs, we also track two outputs:
• Mean cancer cell numbers,
• The fraction of time being treated.
We use mean cancer cell numbers, which effectively assumes that costs are linear in the number of cells, both for simplicity and because risks of mutation and metastasis will be proportional to the number of cells. We do not attempt optimal control analysis [21], and present as results the tradeoffs between the times, treatment, and cancer burden, looking for conditions where maximizing time to emergence reduces both treatment and cancer burden.
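As a sketch of how these four quantities can be extracted from a simulated trajectory, the code below assumes the trajectory is available as arrays of times, total cancer cell numbers, resistant cell numbers, and an on/off treatment indicator; the array names and the trapezoidal time-averaging are our choices, not the paper's code.

```python
import numpy as np

def outcome_metrics(t, total_cells, resistant_cells, treated, C_crit, R_crit):
    """Times of treatment failure plus the two cost measures described in the text."""
    t = np.asarray(t, dtype=float)

    def first_crossing(values, threshold):
        values = np.asarray(values, dtype=float)
        idx = np.argmax(values > threshold)           # first index above threshold
        return t[idx] if values[idx] > threshold else np.inf

    T_C = first_crossing(total_cells, C_crit)         # total-burden escape time
    T_R = first_crossing(resistant_cells, R_crit)     # resistant-cell emergence time
    span = t[-1] - t[0]
    mean_burden = np.trapz(np.asarray(total_cells, dtype=float), t) / span
    fraction_treated = np.trapz(np.asarray(treated, dtype=float), t) / span
    return {"T_C": T_C, "T_R": T_R,
            "mean_burden": mean_burden, "fraction_treated": fraction_treated}
```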
Game theoretic model (GT)-
The simplest version of this framework uses game theoretic models that focus on how strategy frequencies depend on frequency-dependent fitness. A basic model with density dependence is given by equation 2.5, where C = S + R. We place treatment costs in the growth rate of S cells, and give both cell types the same carrying capacity and symmetric competitive effects.

Lotka-Volterra model (LV)-
Equation 2.5 is the special case of a Lotka-Volterra model with equal competition coefficients. We here generalize to a model similar to equation 2.1, but with a more realistic approach to the effects of therapy and consequences of resistance. Treatment increases the death rate δ_S. We scale the competition coefficients describing the effect of each type on itself to 1, but can have asymmetric competitive effects and different carrying capacities for the two cell types.
Allee Effect (AL)-
The next model complements this framework with an Allee effect, whereby cancer cells grow more quickly when the population is above some threshold. We use the form given in equation 2.7. The parameter b scales the strength of the effect, with b = 1 reducing to equation 2.6. Values of 0 < b < 1 create a weak Allee effect, where populations grow more slowly when rare, and b < 0 generates a strong Allee effect, where populations decline when rare. The parameter k_a scales the critical population size below which the Allee effect is strongest.
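The exact expression of equation 2.7 is not reproduced above; the sketch below uses one common parameterization that matches the stated properties (b = 1 recovers logistic growth, 0 < b < 1 gives a weak Allee effect, b < 0 a strong one, and k_a sets the population scale of the effect). Treat the functional form and the numbers as illustrative assumptions, not the paper's equation.

```python
def allee_growth_rate(C, r=0.03, K=1.0e4, b=1.0, k_a=100.0):
    """Per-capita growth rate with a tunable Allee effect.

    b = 1 reduces to plain logistic growth; 0 < b < 1 slows growth when the
    population C is small (weak Allee effect); b < 0 makes small populations
    decline (strong Allee effect). k_a sets the population size below which
    the effect is strongest.
    """
    return r * (1.0 - C / K) * (C + b * k_a) / (C + k_a)

# Small populations decline with a strong Allee effect (b < 0) but not with b = 1.
for C in (1.0, 10.0, 100.0, 1000.0):
    print(C, allee_growth_rate(C, b=-0.5), allee_growth_rate(C, b=1.0))
```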
Healthy Cells (HC)-As our first example of an additional dimension X, we consider interactions with healthy cells, denoted by H.
In this simple model, healthy cells are distinguished by their reduced growth and lack of sensitivity to treatment.
Lottery model with cancer growth (LM)-Our other models include carrying capacities, which is unrealistic for cancers. To address this, we create an extended lottery model of competition for sites with healthy cells [22]. Assume that healthy tissue has K sites, with H occupied by healthy cells and the rest E empty. The healthy cells turn over at rate δ_H, and replicate at rate r_H, but only into empty locations. This model maintains a constant number of sites E + H = K, and we think of the equilibrium healthy cell population as corresponding to a physiological optimum.
Cancer cells can differ from healthy cells in several ways: they may replicate more quickly, reproduce into sites occupied by healthy cells, and reproduce into sites occupied by other cancer cells and so increase the total cell population. With two cancer cell types S and R, we have a total cell population of N = H + S + R, and we assume that E = K − N if N < K and E = 0 otherwise. The cells follow equation 2.11. The competition coefficients a describe the probability that a cancer cell reproduces into an occupied site. If that site is occupied by a cancer cell, we assume that the total cell population will increase, and the parameter η represents the probability that a cancer cell kills a healthy cell that it overgrows.
Immune response (IC)-A simple model of the immune system obeys equation 2.12. We assume that immune cells are induced to replicate by the presence of cancer cells, with growth limited by a carrying capacity. Immune cells directly kill cancer cells at rates η_S and η_R. The replication and death rates of immune cells could be altered by treatment or by the effect of treatment in priming the immune response, although we do not address these factors here.
Models with androgen dynamics
We consider a range of models that include androgen-dependent growth by sensitive cells, and study androgen deprivation therapies that reduce the supply of this resource.
Resource competition (RC)-
We adapt the basic resource competition model from population biology [23] by treating androgen as a resource. Upon activation, the androgen receptor is translocated to the nucleus, and we take the subsequent chemical transformations that occur within the cell to mean that any androgen that is used is destroyed in the process [24,25]. The model tracks the two cell types and the androgen level A, where we model competition as in the basic game theory model (GT). Growth rates depend on androgen levels, and androgen is supplied externally at rate σ_A, which is reduced by treatment, and used by other cells at rate δ_A. For simplicity, we assume non-saturating per-cell absorption rates, but saturating growth. The parameters ϵ_S and ϵ_R describe the growth efficiency of the two types, and k_S and k_R the half-saturation constants of growth as a function of androgen uptake. Androgen deprivation treatment reduces the supply rate of androgen, and has a larger effect on susceptible cells if k_S > k_R or c_S < c_R.
Androgen Dynamics (A3)-These more detailed models include androgen-dependent S, androgen-producing P, and androgen-independent R cells, along with explicit dynamics of androgen production and use. The dynamics follow equation 2.14, where the division rates of S and P cells depend on their intracellular androgen concentrations A_S and A_P, respectively.
The androgen concentrations derive from an accounting of androgen production and diffusion (equation 2.15), where A_E is the external androgen concentration, η is diffusion into and out of cells, μ is androgen use by cells, ρ is production by P, and σ is residual production outside the prostate. The equilibrium of the androgen system is given by equation 2.16. We assume that the dynamics of equation 2.15 are sufficiently fast to use these equilibrium values in equation 2.14.
Androgen model without androgen-producing cells (A2)-For comparison with the two-dimensional models, we simplify the system to exclude testosterone-producing cells by setting P = 0 in equation 2.14.
Parameter values
We choose parameter values to hit three targets:
1. Resistant cells do not invade without therapy.
2. Sensitive cells grow to about 10,000 without therapy.
3. Resistant cells invade with MTD therapy.
We set initial conditions to S_0 = 1000 and R_0 = 1.0 × 10^−10. The threshold for R cell invasion is set to R_crit = 1.0 × 10^−4 and for total cell numbers to C_crit = 0.5 M_max, where M_max is the maximum PSA level (equal to the total cancer cell population) that occurs in 10,000 days in the absence of treatment. We assume that treatment increases the death rate of sensitive cells by a factor of 50 for the models without resources, to match the strong effect of treatment in the original model [5], except for the Allee effect model, where we use a factor of 10 to avoid driving the population below the threshold too quickly. For the models with explicit resources (RC, A2 and A3), we reduce resource availability by 90% as an upper bound of observed effects [26,27]. For treatment, we run a range of intermittent therapies with periods t_P ranging from 100 to 1000 days and treatment durations t_D running from 10 to 400 days, constrained to t_D < t_P. Treatment begins at time t_P − t_D. To implement adaptive therapy, we compare the levels of PSA to two critical levels. Therapy turns on when the total cell population increases above a fraction M_hi of M_max, ranging from 0.2 to 0.9, and turns off when the total cell population decreases below a fraction M_lo of M_max, ranging from 0.1 to 0.8.
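The schedules compared above form a simple grid, sketched below. The ranges come from the text, but the step sizes and the constraint M_lo < M_hi are our assumptions.

```python
import numpy as np
from itertools import product

# Intermittent schedules: period t_P and duration t_D in days, with t_D < t_P.
periods = np.arange(100, 1001, 100)
durations = np.arange(10, 401, 30)
intermittent = [(t_P, t_D) for t_P, t_D in product(periods, durations) if t_D < t_P]

# Adaptive schedules: switch-on and switch-off fractions of M_max, with M_lo < M_hi.
M_hi_values = np.arange(0.2, 0.91, 0.1)
M_lo_values = np.arange(0.1, 0.81, 0.1)
adaptive = [(m_hi, m_lo) for m_hi, m_lo in product(M_hi_values, M_lo_values) if m_lo < m_hi]

print(len(intermittent), "intermittent schedules;", len(adaptive), "adaptive schedules")
```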
Derivation of the tradeoff curve
Consider the basic model with two competing types, where u represents the level of drug. If R is rare, then it will have a negligible effect on S, which will follow its own dynamics with solution S_t. R will then obey a linear equation whose solution depends on S̄, the mean of S from time 0 to t. We solve for T_R with R(T_R) = R_crit (equation 2.18). Because the models differ in their details, we include this relationship by fitting the parameters a and b to the predicted linear relationship of S̄ with 1/T_R,

S̄ = a + b / T_R.    (2.19)
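Since equation 2.19 is linear in 1/T_R, the parameters a and b can be obtained by ordinary least squares. A minimal sketch with synthetic inputs (the numbers are illustrative only):

```python
import numpy as np

def fit_tradeoff(mean_S, T_R):
    """Fit mean sensitive-cell burden as a linear function of 1/T_R (equation 2.19)."""
    x = 1.0 / np.asarray(T_R, dtype=float)
    b, a = np.polyfit(x, np.asarray(mean_S, dtype=float), 1)   # slope, intercept
    return a, b

# Synthetic example: points generated from a known line plus noise.
rng = np.random.default_rng(0)
T_R = np.linspace(200, 2000, 20)
mean_S = 500 + 3.0e5 / T_R + rng.normal(0, 20, T_R.size)
print(fit_tradeoff(mean_S, T_R))   # should recover roughly (500, 3.0e5)
```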
Computational methods
We solve models with the package deSolve in R [28] as written, with one exception. In equation 2.14, there are two regulation terms, based on androgen and carrying capacity, and the models behave pathologically when both are negative. In this case, we use only the androgen-based growth term. To simulate adaptive therapy, we include an auxiliary equation for a variable U, where M represents the level of the marker, like PSA, which is set equal to the total number of cancer cells. Therapy is on when U > ϵ and off when U < ϵ. U increases when the marker M is above the threshold to turn on (the fraction M_hi of the maximum value M_max) and decreases otherwise. We adjusted the parameter values to r_U = 20.0, ϵ = 1.0 × 10^−8, and U_crit = 10ϵ in order to have therapy remain on until the value of U decreases below a fraction M_lo of M_max.
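Outside an ODE solver, the same adaptive rule is simply hysteresis on the marker M: switch on when M rises above M_hi · M_max and stay on until M falls below M_lo · M_max. A minimal sketch (the discretized form and the default fractions are ours; the paper implements this with the auxiliary variable U described above):

```python
def adaptive_therapy_state(M_series, M_max, M_hi=0.7, M_lo=0.3, initially_on=False):
    """Return the on/off treatment state for each marker value in M_series.

    Therapy switches on when M exceeds M_hi * M_max and stays on until M drops
    below M_lo * M_max (simple hysteresis, mirroring the adaptive rule above).
    """
    on = initially_on
    states = []
    for M in M_series:
        if not on and M > M_hi * M_max:
            on = True
        elif on and M < M_lo * M_max:
            on = False
        states.append(on)
    return states

# Example: a marker that rises, is suppressed by treatment, and rises again.
marker = [1, 3, 6, 8, 7, 5, 2, 1, 4, 8]
print(adaptive_therapy_state(marker, M_max=10))
```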
Results
We simulate adaptive and intermittent therapy, including as special cases MTD and No Therapy, and record T_R, the time when R cells emerge (R(t) > R_crit), and T_C, the time when the total cancer cell population exceeds its threshold C_crit. At each of these times, we record the average cell population until that time, and the fraction of time under treatment.
Intermittent therapy and adaptive therapy each have two hyperparameters that control the timing of treatment. For intermittent therapy, they are treatment period and duration, while for adaptive therapy they are M_hi and M_lo, the fractions of the maximum PSA where treatment turns on and off, respectively. By varying these hyperparameters, we obtain the tradeoff curves relating time to escape, cell burden, and treatment burden.
To illustrate the dynamics, we show solutions of the equations for three of the models with representative parameters (Figure 1). In both the full and simplified Zhang models (Zh and Zs), MTD leads to the most rapid decline of sensitive cells and escape of resistant cells (blue curves), No Therapy leads to rapid increase of sensitive cells with no escape of resistant cells (red curves), and whether intermittent or adaptive therapy gives the greatest delay in emergence of resistance depends on the parameter choices. For the full range of hyperparameters, both intermittent and adaptive therapy generate repeated oscillations of sensitive cells that are eventually invaded and replaced by resistant cells. However, with the Allee effect, the results are nearly the opposite. MTD drives the cancer cell population below the threshold and eliminates both sensitive and resistant cells. Resistance emerges most quickly with this choice of adaptive therapy, and intermittent therapy maintains both cell populations in a long-term oscillation.
To summarize across the hyperparameters, we illustrate the relationships of the time to escape of resistant and total cancer cells with the mean cancer cell burden and mean treatment burden until that time, beginning with the Zh model (Figure 2). Due to the simple linear dynamics of resistant cells when rare, intermittent and adaptive therapy follow the same tradeoff curves in relation to the time of escape of resistant cells (equation 2.19). MTD gives the endpoint of this curve, with the most rapid escape time, minimum cell burden, and maximum treatment.
The time to escape of the total cell population follows a more complex relationship, with some intermittent therapies acting much like No Therapy, with rapid cell population escape and a relatively high tumor burden. Adaptive therapy deviates only slightly from intermittent therapies, with a slight benefit of delaying escape at the cost of higher cell populations.
The two-dimensional extensions follow similar dynamics, with the exception of the Allee effect model AL (Figure 3). In the Zs, GT and LV models, the mean cancer cell burden follows the tradeoff curve (equation 2.19) for both intermittent and adaptive therapy.
Overall, the two therapies behave similarly, although adaptive therapy can delay resistance at the cost of a slightly higher tumor burden. With the Allee effect, the results are quite different. MTD drives cells below the threshold, and prevents both resistant cells and total cells from reaching their thresholds. Adaptive therapy can behave quite poorly, leading to escape times nearly as short as those with No Therapy and with a high total cell burden.
In Figure 4, we illustrate the results of the HC, LM, and IC models, for which total cells include an additional cell type: healthy (HC, LM) or immune (IC). Despite their greater complexity, the results from these models also closely follow the predicted tradeoff (equation 2.19), although with a greater deviation for the lottery model (LM), which lacks a carrying capacity. As before, adaptive therapy largely overlaps with intermittent therapy, deviating in producing longer times to total cell escape at the expense of greater tumor cell burden, which is quite large for the LM and IC models.
The models with androgen dynamics, RC, A2, and A3, follow broadly similar patterns (Figure 5). The relationship of mean cancer burden with time to escape of resistant cells persists robustly. As before, adaptive therapy can deviate from intermittent therapy when considering time to escape of the total cancer cell number, with a more complex structure: two distinct branches are evident with intermittent therapies that use a low level of treatment, but not for any value of adaptive therapy.
With the exception of the success of MTD with a strong Allee effect, no universal therapy can achieve all three objectives of lowering mean treatment, delaying time to emergence of resistant cells, and reducing total tumor burden. All strategies follow the relationship of mean cancer burden with time to emergence of resistant cells, and the deviations between intermittent and adaptive therapies are relatively minor for time to escape of total cancer cells. The inclusion of androgen dynamics has the largest effect on whether intermittent, adaptive or MTD therapy has a higher tumor cell burden with an earlier time to emergence of resistant cells.
Discussion
Adaptive therapy is based on three key assumptions: resistance is costly, resistant cells can be suppressed by competition with sensitive cells, and therapy is effective in reducing the population of sensitive cells. Without additional interactions, the only factor impeding resistant cells is competition with sensitive cells, leading to a tradeoff between reducing overall cancer burden and delaying emergence of resistance. We use a suite of simple models to find whether any of three therapy strategies can break this tradeoff: 1) constant therapy with maximum tolerable dose (MTD), 2) intermittent therapy on a fixed schedule, and 3) adaptive therapy on a patient-specific schedule. We seek to test whether adaptive therapy is a feasible way to reduce overtreatment.
We have three key results. First, with the exception of models that include a strong Allee effect, all models closely follow a tradeoff curve between cancer cell burden and time to emergence of resistant cells, due to their suppression by sensitive cells. The Allee effect, meaning that cancer cell populations decline if they fall below some threshold, adds an additional type of control of resistant cells, and breaks the relationship. Second, again with the exception of models with an Allee effect, the tradeoff among time to total cancer cell number escape, average cancer cell burden, and total treatment is similar but not identical over a range of intermittent and adaptive therapies. In most cases, some adaptive therapies do delay tumor growth, but at the cost of higher cell populations. With the Allee effect, some adaptive therapies perform quite poorly because the threshold for stopping therapy (a fraction M_lo of M_max) is above the Allee threshold. Third, and most importantly, no therapeutic choice robustly breaks the three-way tradeoff among delaying emergence of resistance, delaying cancer growth, and minimizing treatment.
These simple models leave out many factors known to shape response to therapy. Most clearly, these models use ordinary differential equations that neglect spatial interactions known to be critical in shaping cell interactions and response to therapy [29][30][31]. The stochasticity neglected by differential equations would have the strongest effects when cell numbers are low, such as during the initial invasion of resistant cells or when populations are close to the threshold created by the Allee effect.
In addition, if resistance is induced by therapy rather than arising from mutations, resistance may be much more difficult to suppress [32]. Reversibility of these responses can create complex responses to therapeutic timing [33]. Much cancer resistance derives from phenotypic plasticity [34] that leads to more rapid emergence of resistance than the dynamics used here, and would require a different set of objective functions to evaluate.
Heterogeneity of cells with more than two states [35] can alter responses [30], and these states can be induced by a variety of intracellular changes including ABC transporter upregulation [36] and the evolution of mutation and genetic instability [37]. Androgen dynamic models begin to link intracellular with tissue-level models, but do not include specifics of pharmacokinetics and pharmacodynamics that generate differences among cells [36].
Our model of the immune system is highly simplified, and more realistic interactions could include the positive effects of the immune system on tumors [38], and additional thresholds, such as a lack of immune response to tumors below a particular size or an inability of the immune system to suppress tumors larger than some upper bound for control [39].
Use of multiple therapies, such as cytotoxic and cytostatic [40], can help create a double bind, such as between chemotherapeutic agents and those that attack glycolysis in hypoxic tumors [41], or force cancer into cycles of futile evolution [42]. The original paper [5] has been extended to include cells that are testosterone-independent and resistant to the chemotherapy docetaxel [43]. Oncolytic viruses interact with cells and the immune system with feedbacks that have only begun to be modeled [44]. Anti-angiogenic drugs have complex interactions with the dynamics of vasculature [14]. A modeling approach based on non-small cell lung cancer [45], which includes glycolytic cells and vascular overproducers, shows that adaptive therapy targeting glycolytic cells has potential to be effective [46].
We here examine only a few preset strategies, and optimization approaches could greatly refine these. The original model [5] itself has been studied this way [21]. Alternative models of prostate cancer [33] have been studied to retrospectively compute optimal treatments for 150 patients [47]. Using an alternative competitive framework with Gompertz growth, optimization of the dose and timing of treatment can maintain tumors below a tolerable size [48].
Optimization has been applied in many more complicated models, such as those including vasculature, the immune system, anti-angiogenic drugs, and immunotherapy [49], and in models that include a continuum of internal resistance states and a population of healthy cells [40]. Building on this framework, a simplified two-state model identified strategies that give the full dose, then a smaller dose, and then a zero dose [50]. In a comparison of different treatment goals, a model of sensitive, damaged, and resistant cells proposed treating early to minimize integrated cell numbers, but early and late to minimize total cancer cell numbers at a fixed terminal time [51]. In a detailed model of colon cancer, [52] consider various dosing schedules to address the case where cancer cells can be in a quiescent state that is released by treatment of normal cells.
Any optimization requires choosing an objective function in a single ordered currency.
Combining the measures used here (cancer cell burden, emergence of resistance, and costs of therapy) into a single currency would require sufficient clinical information to combine these into survivorship or quality-of-life adjusted years [53].
Applying even the simpler adaptive therapies here requires fitting to data on individual patients. A comparison of a simple model [33], a more complex model with basic androgen dynamics [20], and a detailed model of androgen dynamics [19] found that all fit data reasonably well, although with some exceptions [54]. Whether PSA dynamics alone are sufficient to resolve differences across patients, in particular patients who have different types of emerging resistance, seems unlikely, and methods almost certainly will need to be complemented with sequencing data [55]. If the models can be fit to the dynamics, adaptive therapies may be more robust to patient variability than prescribed timing of intermittent therapy.
Our evaluation shows that the different objectives of delaying emergence of resistance, limiting total cancer cell numbers, and minimizing treatment are likely to be related, and that no treatment can achieve all three. To address this, the choice of treatment strategy must be based on an objective function that weights different outcomes and patient goals, some of which, such as side effects, are typically not included explicitly in mechanistic models [36].
Future therapies will need more sophisticated approaches that take into account multiple drug effects, differences among patients including increased clearance under low doses [6], and thinking that anticipates cancer responses [56,57]. However, given the limited and noisy data we have on patients, we argue that the simple models and principles presented here will remain useful as we move toward the next generation of cancer therapy.
• About 90% of androgen comes from P cells, and is used internally and fairly quickly.
• With treatment that reduces σ and ρ by 90%, populations decline quickly.
The key idea is that the growth functions take the form r_S(A_S) = −r_0 + s_S A_S and r_P(A_P) = −r_0 + s_P A_P, and must satisfy the conditions that r_S(A_S*) = r_S and r_P(A_P*) = r_P, taken as the values from model Zh. To find r_0, we choose a value A_S,crit < A_S*, which is the point of zero growth, and then solve for s_S. We can then find A_P,crit by assuming that S and P cells have the same value of r_0. To coexist, we must have one type regulated by the carrying capacity and the other by androgen. Because K_P < K_S, P must be regulated by carrying capacity, so S + P = K_P. Then S will be regulated by androgen, meaning that A_S = A_S,crit at equilibrium. We have that A_S ≈ (σ + ρP) η / ((η + μ) μ K_P) = A_S,crit, by setting δ_A = 0 and simplifying using S + P = K_P. This will have a positive root for P* as long as A_S,crit is sufficiently large. Otherwise, P will be excluded from the system because external production alone can maintain S at a high enough level to exclude it. We suppose η = 1.0 and μ = 9.0, meaning that most androgen is used internally and fairly quickly. Results are not sensitive to δ_A and we pick δ_A = 6000.0 for convenience. The value of ρ can be scaled out and we set ρ = 10.0 for convenience. For 90% of androgen to come from P cells, σ = 0.1 · 0.1ρP. To find equilibrium androgen values, we need equilibrium values of S and P. In the Zhang model with a_SP = a_PS = 1, we have P* = 2K_P/3 = 20000/3 and S* = K_P/3 = 10000/3. We pick σ = 10000 accordingly.
Figure 3. Results for the four models with sensitive and resistant cells only. Notation as in Figure 2.
Figure 4. Results for the three models with sensitive and resistant cells plus an additional dimension. Notation as in Figure 2.
Figure 5. Results for the three models with androgen dynamics. Notation as in Figure 2.
In the androgen models, the external androgen concentration changes at rate ηP(A_P − A_E) + ηS(A_S − A_E) − δ_A A_E.
Figure 2. Summary of results with the Zhang model. The left column shows mean cancer cell burden and mean treatment burden as a function of the time of emergence T_R of resistant cells above the critical value R_crit. The right column shows the same outputs as a function of the time T_C of escape of cancer cells above the critical value C_crit. The blue dot indicates results with MTD and the red triangle results with No Therapy. The shades of green show intermittent therapy, with lighter shades indicating a higher fraction of time under treatment (the treatment duration divided by the treatment period). The pink diamonds illustrate adaptive therapy, with darker shades indicating a lower value of M_lo, the threshold value for initiating therapy. The black line in the upper left panel is the curve in equation 2.19.
Characterization and Prediction of Protein Flexibility Based on Structural Alphabets
Motivation. To assist efforts in determining and exploring the functional properties of proteins, it is desirable to characterize and predict protein flexibilities. Results. In this study, the conformational entropy is used as an indicator of the protein flexibility. We first explore whether the conformational change can capture the protein flexibility. The well-defined decoy structures are converted into one-dimensional series of letters from a structural alphabet. Four different structure alphabets, including the secondary structure in 3-class and 8-class, the PB structure alphabet (16-letter), and the DW structure alphabet (28-letter), are investigated. The conformational entropy is then calculated from the structure alphabet letters. Some of the proteins show high correlation between the conformation entropy and the protein flexibility. We then predict the protein flexibility from basic amino acid sequence. The local structures are predicted by the dual-layer model and the conformational entropy of the predicted class distribution is then calculated. The results show that the conformational entropy is a good indicator of the protein flexibility, but false positives remain a problem. The DW structure alphabet performs the best, which means that more subtle local structures can be captured by large number of structure alphabet letters. Overall this study provides a simple and efficient method for the characterization and prediction of the protein flexibility.
Introduction
Proteins are dynamic molecules that are in constant motion. Their conformations depend on environmental factors like temperature, pH, and interactions [1]. Some regions are more susceptible to change than others. Such motions play a critical role in many biological processes, such as protein-ligand binding [2], virtual screening [3], antigen-antibody interactions [4], protein-DNA binding [5], structure-based drug discovery [6], and fold recognition [7,8].
Many studies try to predict protein flexibilities using either sequence or structure information of proteins [9]. Sonavane et al. [10] analyzed the local sequence features and the distribution of B-factors in different regions of protein three-dimensional structures. Yuan et al. [11] adopted a support vector regression (SVR) approach with multiple sequence alignments as input to predict the B-factor distribution of a protein from its sequence. Schlessinger and Rost [12] found that flexible residues differ from regular and rigid residues in local features such as secondary structure, solvent accessibility, and amino acid preferences. They combined these local features and global evolutionary information for protein flexibility prediction. Several sequence-based B-factor prediction methods were compared by Radivojac et al. [13]. Different models have been proposed to predict the B-factor distribution based on protein atomic coordinates. Normal mode analysis can identify the most mobile parts of the protein, as well as their directions, by focusing on the few Cα atoms that move the most [14,15]. The translation-libration-screw model [16] simplified the protein as a rigid body with movement along translation, libration, and screw axes. The Gaussian network model (GNM) [17] represented a protein as an elastic network of Cα atoms that fluctuate around their mean positions. Recently, Yang et al. [18] predicted the B-factor by combining local structure assembly variations with sequence-based and structure-based profiling. There are also many other methods for protein flexibility prediction [19][20][21].
All the above methods use the B- or temperature factors produced by X-ray crystallography to elucidate the flexibilities of proteins. The B-factor reflects the degree of thermal motion and static disorder of an atom in a protein crystal structure [22]. However, there is noise in experimentally determined B-factors. Many factors can affect the value of the B-factor, such as the overall resolution of the structure, crystal contacts, and, importantly, the particular refinement procedures [23]. B-values from different structures can therefore not be reasonably compared [12]. Some researchers consider that the upper limit of accuracy for the prediction of B-factors is no more than 80% [11].
Protein structures are not static and rigid. The polypeptide backbones and especially the side chains are constantly moving due to thermal motion and the kinetic energy of the atoms (Brownian motion) [24]. A recent study [1] used the continuum prediction of secondary structures to identify regions undergoing conformational change. Other researchers have pointed out that continuous secondary structure assignment can capture protein flexibility [25]. Furthermore, the MolMovDB database [26] consists of structures that have been experimentally determined to exhibit conformational flexibility enabling a variety of protein motions. The Morph Server [27] in particular has been used by many scientists to analyze pairs of conformations and produce realistic animations.
The present work aims to explore whether the conformations predicted from protein sequences can characterize their flexibilities. To achieve this goal, a simplified description of protein structure has to be provided first. The protein secondary structure offers only a summary of general backbone conformation and of local interactions through hydrogen bonding. The DSSP program [28] provides 8-class secondary structures. However, most secondary structure prediction methods only predict 3-class states, with nearly 80% accuracy [29,30]. Secondary structures are a very crude description of protein backbone structures. Recently, many studies have tried to describe protein structures in a more refined manner. Toward this goal, many fragment libraries or structural alphabets (SA) have been presented, either in Cartesian coordinate space or in torsion angle space [31][32][33]. Camproux et al. first derived a 12-letter alphabet of fragments using a hidden Markov model [34] and then extended it to 27 letters using the Bayesian information criterion [35]. De Brevern et al. [36] proposed a 16-letter alphabet generated by a self-organizing map based on a dihedral angle similarity measure. The prediction accuracy of local three-dimensional structure has been steadily increased by taking sequence information and secondary structure information into consideration [37]. A comprehensive evaluation of these and other structural alphabets was performed by Karchin et al. [38].
In this study, we first explore whether conformational variants can capture protein flexibility. The multiple conformations of proteins are taken from the Baker decoy sets [39]. Each three-dimensional conformation is represented by a one-dimensional series of letters from a structural alphabet. Four different structure alphabets, including the secondary structure in 3-class and 8-class, the PB structure alphabet [37], and the DW structure alphabet [40], are investigated here. The conformational entropy is used to quantitatively indicate flexibility. The results show that the conformational entropy has a high correlation with the B-factor. We then predict protein flexibility from the basic amino acid sequence. The structure alphabet letters of proteins are predicted using only sequence information, and the entropy of the predicted class distribution is used as an indicator of protein flexibility. The experiment is performed on a subset of the MolMovDB database [26]. The results indicate that the conformational entropy is a good indicator of protein flexibility.
Materials and Method
2.1. Dataset. Three datasets are used in this study for different experimental validation.
The first dataset is taken from the work of Bodén and Bailey [1], which is used for the prediction of protein flexibility. This dataset contains 171 nonredundant protein sequences, in which no pair of sequences has more than 20% sequence identity. All the proteins exhibit conformational flexibility according to the comprehensive database of macromolecular movements (MolMovDB) [26]. Each sequence in this dataset has been annotated with a list of residue positions that have more than one local structure according to the structure alphabets.
The second dataset is used to train the support vector machine that is used for the local structure prediction of proteins. This dataset is a subset of the PDB database [41] obtained from the PISCES [42] web server. There is less than 25% sequence identity between any two proteins, and every protein has a resolution better than 2.5 Å. Structures with missing atoms and chain breaks are excluded. Proteins homologous to proteins from the first dataset are also excluded. The resulting dataset contains 928 protein chains.
The third dataset is used to test whether changes of local structures can characterize protein flexibility. To achieve this goal, a variety of conformations for each protein must be provided. We use the Baker decoy sets [39], previously used for the evaluation of knowledge-based mean force potentials. This dataset consists of 41 single-domain proteins with varying degrees of secondary structure and lengths from 25 to 87 residues. Each protein is accompanied by about 1400 decoy structures generated by the ab initio protein structure prediction method Rosetta [43].
Training and Test of Local Structures.
Many methods have been presented for the prediction of protein local structures. The dual-layer model, developed in our previous studies [44], has been adopted here. The method is based on the observation that neighboring local structures are strongly correlated. A dual-layer model is then designed for protein local structure prediction. The position-specific scoring matrix (PSSM), generated by PSI-BLAST [45], is input to the first-layer classifier, whose output is further enhanced by a second-layer classifier. At each layer, a variety of classifiers can be used, such as support vector machines (SVM) [33], neural networks (NN) [46], and Hidden Markov Models (HMM). In this study, the SVM is selected as the classifier, since its performance is better than those of the other classifiers. Experimental results show that the dual-layer model provides an efficient method for protein local structure prediction.
Characterization of Protein Flexibilities by Conformational Changes.
The conformations of proteins are represented by the local structures in the form of a structural alphabet. All the local structure types can be referred to as a structure alphabet. Four different structure alphabets, including the secondary structure in 3-class and 8-class, the PB structure alphabet [37], and the DW structure alphabet [40], are investigated here. The three-dimensional protein structures can be represented by one-dimensional structure alphabet sequences according to a specific structure alphabet. Given a protein and its variable conformations, we can convert them into several structure alphabet sequences. The changes of local structures can then be used to characterize the protein flexibility. For example, consider a protein sequence $a_1, a_2, \ldots, a_L$. Its three-dimensional structures and conformations are labeled as structure alphabet sequences; we then obtain a structure alphabet matrix $P = (p_{ij})_{L \times N}$, where $p_{ij}$ is the probability (observed frequency over the conformations) of the $j$-th structure alphabet letter at amino acid position $i$, $L$ is the length of the protein sequence, and $N$ is the total number of letters in the structure alphabet. The conformational entropy is then used as an indicator of the protein flexibility:
$$S(i) = -\sum_{j=1}^{N} p_{ij}\,\ln p_{ij},$$
where $S(i)$ is the conformational entropy of the protein at sequence position $i$. The correlation between the conformational entropies and the B-factors is calculated as follows:
$$r = \frac{\sum_{i=1}^{L}\bigl(S(i)-\mathrm{Ave}(S)\bigr)\bigl(B(i)-\mathrm{Ave}(B)\bigr)}{\sqrt{\sum_{i=1}^{L}\bigl(S(i)-\mathrm{Ave}(S)\bigr)^{2}}\,\sqrt{\sum_{i=1}^{L}\bigl(B(i)-\mathrm{Ave}(B)\bigr)^{2}}},$$
where $B(i)$ is the B-factor of the protein at sequence position $i$, and $\mathrm{Ave}(S)$ and $\mathrm{Ave}(B)$ are the average of the conformational entropy and the average of the B-factor of the protein.
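As an illustration, the per-position entropy and its correlation with the B-factor can be computed as follows; the input format (an array of structure-alphabet letter indices per conformation) and the function names are assumptions made for this sketch.

```python
# Conformational entropy per residue and its Pearson correlation with B-factors.
import numpy as np

def conformational_entropy(letters, n_letters):
    """letters: (n_conformations, L) array of structure-alphabet letter indices."""
    n_conf, length = letters.shape
    entropy = np.zeros(length)
    for i in range(length):
        p = np.bincount(letters[:, i], minlength=n_letters) / n_conf
        p = p[p > 0]                      # ignore letters that never occur at this position
        entropy[i] = -np.sum(p * np.log(p))
    return entropy

def entropy_bfactor_correlation(letters, bfactors, n_letters):
    """Pearson correlation between per-residue entropy and B-factor."""
    s = conformational_entropy(letters, n_letters)
    return np.corrcoef(s, bfactors)[0, 1]
```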
Prediction of Protein Flexibilities by Local Structure Entropies.
Let the predicted local structure distribution for a given residue be $p = (p_1, \ldots, p_K)$, where $p_k$ is the probability that the residue is in the $k$-th local structure class, and $K$ is the number of local structure classes: 3 for the 3-class secondary structure alphabet, 8 for the 8-class secondary structure alphabet, 16 for the PB structure alphabet, and 28 for the DW structure alphabet. The conformational entropy of a residue is then defined as
$$S = -\sum_{k=1}^{K} p_k\,\ln p_k.$$
High entropy indicates relative disorder; low entropy indicates relative order.
Performance Metrics.
The following measures are used to evaluate the prediction of protein flexibilities: sensitivity, specificity, precision, and the Receiver Operator Characteristic (ROC) curve, which are defined as follows:
$$\mathrm{Sensitivity} = \frac{TP}{TP+FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN+FP}, \qquad \mathrm{Precision} = \frac{TP}{TP+FP},$$
where TP is the number of true positives (flexible residues correctly classified as flexible residues), FP is the number of false positives (rigid residues incorrectly classified as flexible residues), TN is the number of true negatives (rigid residues correctly classified as rigid residues), and FN is the number of false negatives (flexible residues incorrectly classified as rigid residues). The ROC curve is plotted with true positives as a function of false positives for varying classification thresholds. The ROC score is the normalized area under the ROC curve. A score of 1 indicates the perfect separation of positive samples from negative samples, whereas a score of 0 denotes that none of the examples selected by the algorithm is positive.
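A small sketch of these metrics, using the per-residue entropy as the decision score, is given below; the helper name and the use of scikit-learn for the ROC score are assumptions.

```python
# Sensitivity, specificity, precision, and ROC score for flexibility prediction.
import numpy as np
from sklearn.metrics import roc_auc_score

def flexibility_metrics(y_true, entropy, threshold):
    """y_true: 1 for flexible residues, 0 for rigid; entropy: per-residue scores."""
    y_pred = (entropy > threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "roc_score": roc_auc_score(y_true, entropy),  # area under the ROC curve
    }
```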
Local Structure Prediction.
Four different structure alphabets are used in this study: the secondary structure in 3-class and 8-class, the PB structure alphabet [37], and the DW structure alphabet [40]. All of them are descriptions of the local structures of proteins.
The 3-class secondary structure provides a three-state description of backbone structures: helices, strands, and coils. The 8-class secondary structure provides a more detailed description [28]. However, this description of protein structures is still very crude [47].
Two other structure alphabets are investigated in this study: the DW structure alphabet and the PB structure alphabet. They are represented in Cartesian coordinate space and in torsion angle space, respectively. The PB alphabet [37] is composed of 16 prototypes, each of which is 5 residues in length and represented by 8 dihedral angles. This structure alphabet remains valid even as the size of the databank becomes large [48]. The DW structure alphabet was developed in our previous study [40] and is represented in Cartesian coordinate space. This structure alphabet contains 28 prototypes with lengths of 7 residues.
The dual-layer model is used to predict the local structures of proteins [44]. The experiment is performed on the second dataset. The Q-score, that is, the proportion of structure alphabet prototypes correctly predicted, is used to assess the prediction results. This score is equivalent to the Q3 value for secondary structure prediction. After 5-fold cross-validation, the results are shown in Table 1. The accuracy of secondary structure prediction is comparable with that of the current state-of-the-art method [29], while the performances on the other two structure alphabets are significantly better. The single-layer model uses the position-specific score matrix (PSSM) as input and outputs the probabilities of the structure alphabet letters. The dual-layer model adds an additional classifier, which uses the output of the single-layer model as input and outputs the final prediction. For both models, the support vector machine is used as the classifier.
Results for the Characterization of Protein Flexibilities.
Since proteins are dynamic molecules, we can investigate whether the conformational changes can capture protein flexibilities. The protein structures are represented by structure alphabet sequences. The conformational entropy is used as an indicator of protein flexibility. The experiment is performed on the third dataset.
The initial results demonstrate that some of the proteins show high correlations between the conformational entropies and the B-factors, while other proteins show low and even negative correlations. After detailed analysis, we find that the correlations are influenced by the distribution of the decoy structures: a uniform distribution often leads to a high correlation. The decoy structures are therefore first classified by their Root-Mean-Squared Deviation (RMSD) from the native structures. We then select the decoy structures so that they are approximately uniformly distributed across the different classes. Some of the proteins and their correlations are listed in Table 2, together with the number of decoy structures. As the number of letters increases, the correlations also increase.
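The selection step can be sketched as a stratified sampling over RMSD classes; the number of bins and the number of decoys kept per bin below are illustrative choices, not the values used in the paper.

```python
# Select decoys approximately uniformly across RMSD classes.
import numpy as np

def select_uniform_decoys(rmsd_to_native, n_bins=10, per_bin=30, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(rmsd_to_native.min(), rmsd_to_native.max(), n_bins + 1)
    bins = np.digitize(rmsd_to_native, edges[1:-1])   # class index 0..n_bins-1
    selected = []
    for b in range(n_bins):
        members = np.where(bins == b)[0]
        if members.size:
            take = min(per_bin, members.size)
            selected.extend(rng.choice(members, size=take, replace=False))
    return np.array(selected)   # indices of the retained decoy structures
```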
According to thermodynamics, the native structure is the one that has the lowest energy. Since proteins are dynamic molecules in living organisms, their structures fluctuate around the native state. The decoy sets used here are generated by the well-known Rosetta algorithm [43]. These sets contain many decoy structures whose energies are close to that of the native structure. The conformational entropies are then derived from the decoy sets. Some of the conformational entropies show high correlation with the protein flexibilities. However, the decoy sets are not true conformational ensembles; there are still some proteins that show low correlations between the entropies and the B-factors (data not shown). This experiment only tries to investigate whether conformational changes can capture protein flexibilities. If true conformational ensembles could be obtained, a definite answer could be given; obtaining such ensembles, however, is costly and labor-intensive.
Results for the Prediction of Protein Flexibilities.
To obtain reference labels, the multiple conformations (animations) of each protein in the conformation variability dataset are converted into structure alphabet letter sequences by the specific structure alphabet. If a residue changes its structure alphabet letter among the animations, it is labeled as a flexible residue; otherwise, it is labeled as a rigid residue. During the prediction process, the protein local structures are first predicted from the amino acid sequence by the dual-layer model, and then the entropy function is applied to the predicted class distribution for each residue. Residues with entropy larger than a given threshold are predicted to be flexible residues; otherwise, they are predicted to be rigid residues. Following the work of Bodén and Bailey [1], we use the mean entropy of all residues in our conformation variability dataset as the threshold.
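The final step can be sketched as follows; the input is the per-residue class distribution from the dual-layer model, and in the paper the threshold is the mean entropy over all residues of the dataset rather than of a single protein.

```python
# Label residues as flexible/rigid from predicted class distributions.
import numpy as np

def predict_flexible_residues(pred_probs, threshold=None):
    """pred_probs: (L, K) predicted structure-alphabet probabilities per residue."""
    p = np.clip(pred_probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p), axis=1)   # per-residue conformational entropy
    if threshold is None:
        threshold = entropy.mean()             # dataset-wide mean is used in the paper
    return entropy > threshold                 # True = predicted flexible residue
```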
The results for the four structure alphabets are shown in Table 3. The corresponding Receiver Operator Characteristic (ROC) curves are given in Figure 1. The different structure alphabets yield different numbers of positive (flexible) and negative (rigid) samples. As the number of letters in the structure alphabet increases, the number of positive samples increases and the prediction performance also improves, which means that more subtle local structures can be captured by a larger number of structure alphabet letters. In particular, the precision and ROC scores increase steadily. Overall, the DW structure alphabet achieves the best performance.
The results obtained here are similar to those of Bodén and Bailey [1]. The precisions of this study are higher than those of Bodén and Bailey (0.05 for Sec3 and 0.12 for Sec8), but the ROC scores are a little lower than those of Bodén and Bailey (0.61 for Sec3 and 0.64 for Sec8). The main differences between this study and that of Bodén and Bailey lie in two aspects. The first is that two additional structure alphabets (the PB and DW structure alphabets) are investigated here. The second is that a decoy set is used to explore whether conformational changes can capture protein flexibility.
Conclusion
In this study we provide a simple and efficient method for the characterization and prediction of protein flexibility. We first validate that conformational changes can capture protein flexibility and then predict protein flexibility from primary sequences. The results show that conformational entropy is a good indicator of protein flexibility. Four structure alphabets with different numbers of letters are investigated. Future work will aim at exploring other structure alphabets that can provide a more detailed description of protein backbone structures and even side-chain structures.
Nomogram for predicting rebleeding after initial endoscopic epinephrine injection monotherapy hemostasis in patients with peptic ulcer bleeding: a retrospective cohort study
Background Although current guidelines recommend endoscopic combination therapy, endoscopic epinephrine injection (EI) monotherapy is still a simple, common and effective modality for treating peptic ulcer bleeding (PUB). However, the rebleeding risk after EI monotherapy remains high, and identifying patients likely to rebleed after EI monotherapy is difficult, although it is highly important in clinical practice. This study aimed to identify risk factors and to construct a predictive nomogram for rebleeding after EI monotherapy. Methods We consecutively and retrospectively analyzed 360 PUB patients who underwent EI monotherapy between March 2014 and July 2021 in our center. We then identified independent risk factors associated with rebleeding after initial endoscopic EI monotherapy by multivariate logistic regression. A predictive nomogram was developed and validated based on these predictors. Results Among all PUB patients enrolled, 51 (14.2%) had recurrent hemorrhage within 30 days after endoscopic EI monotherapy. After multivariate logistic regression, shock [odds ratio (OR) = 12.691, 95% confidence interval (CI) 5.129–31.399, p < 0.001], Rockall score (OR = 1.877, 95% CI 1.250–2.820, p = 0.002), tachycardia (heart rate > 100 beats/min) (OR = 2.610, 95% CI 1.098–6.203, p = 0.030), prolonged prothrombin time (PT > 13 s) (OR = 2.387, 95% CI 1.019–5.588, p = 0.045) and gastric ulcer (OR = 2.258, 95% CI 1.003–5.084, p = 0.049) were associated with an increased risk of rebleeding after initial EI monotherapy. A nomogram incorporating these independent high-risk factors showed good discrimination, with an area under the receiver operating characteristic curve (AUROC) of 0.876 (95% CI 0.817–0.934) (p < 0.001). Conclusions We developed a predictive nomogram of rebleeding after EI monotherapy with excellent prediction accuracy. This predictive nomogram can be conveniently used to identify low-risk rebleeding patients after EI monotherapy, supporting decision-making in the clinical setting.
Introduction
Peptic ulcer disease is defined as a break of the mucosal barrier that exposes the submucosa due to the damaging effects of acid and pepsin present in the gastroduodenal lumen [1,2]. Peptic ulcer bleeding (PUB) is one of the most common and severe complications of peptic ulcer disease and accounts for the leading etiology of acute upper gastrointestinal bleeding (UGIB) [3,4]. With the rapid improvement of endoscopic therapy and medication in recent years, many PUB patients can be treated without surgery. However, up to 10%-15% of PUB patients have recurrent bleeding after initial endoscopic hemostasis within 30 days, which greatly influences the prognosis of PUB [1][2][3][4]. Consequently, proper endoscopic hemostasis therapy should be taken into consideration depending on the characteristics of the patients.
Many authoritative guidelines, including the 2020 American Gastroenterological Association (AGA), 2021 American College of Gastroenterology (ACG) and 2021 European Society of Gastrointestinal Endoscopy (ESGE), recommend combination therapy using epinephrine injection plus a second hemostasis modality (contact thermal or mechanical therapy) for patients with actively bleeding ulcers (FIa, FIb) [5][6][7]. However, epinephrine injection (EI) monotherapy is widely used in real-world medical emergencies due to its low technical requirements, easy operation, low costs and quick hemostasis in primary hemostasis, especially in low-risk rebleeding PUB patients [5][6][7][8][9][10][11]. In addition, EI monotherapy in high-risk rebleeding patients in nontertiary hospitals can achieve temporary hemostasis, which will provide more referral time and safer referral opportunities. Therefore, it is vital to identify low-risk rebleeding patients suitable for EI monotherapy treatment, especially in nontertiary hospitals with no endoscopic combination therapy conditions and techniques. Early identification of high-risk patients may facilitate the choice of physician treatment modalities and the need for referral to tertiary hospitals. Therefore, the objective of this study was to identify risk factors related to rebleeding after initial EI monotherapy hemostasis for 30 days. Then, we developed and validated a nomogram that predicts the risk of rebleeding after EI monotherapy. Thus, additional combination therapy with EI is warranted in high-risk rebleeding patients to guarantee the safety and efficacy of endoscopic hemostasis.
Patients and data collection
This was a single-center, retrospective study. We consecutively enrolled peptic ulcer bleeding (PUB) patients who initially underwent epinephrine injection (EI) monotherapy for hemostasis at the First Affiliated Hospital of Nanchang University between March 2014 and July 2021. Data were collected from the endoscopy database and electronic medical record system of the First Affiliated Hospital of Nanchang University. Patients meeting the following criteria were excluded: (1) epinephrine injection combined with sclerosant or histoacryl injection therapies; (2) other hemostasis therapies, including mechanical (such as hemoclipping) or thermal (such as argon plasma coagulation (APC)) therapies; (3) patients diagnosed with other possible causes of bleeding, such as esophageal and gastric varices, Dieulafoy lesions, malignant lesions, hemorrhagic erosive gastritis, esophageal foreign-body injury, or Mallory-Weiss syndrome, etc.; (4) patients with Forrest Ia peptic ulcers in whom initial technical success of hemostasis could not be achieved with EI monotherapy, and patients with Forrest IIc and III peptic ulcers, for which endoscopic intervention for hemostasis was not necessary; and (5) patients with incomplete demographic data. Finally, a total of 360 patients were enrolled.
Patients' basic characteristics were recorded at hospital admission. Data collection included demographic information (sex, age, smoking and drinking history, medication history, upper gastrointestinal bleeding history), comorbidity (strokes, coronary artery disease, chronic renal disease, liver cirrhosis, hypertension, diabetes), physical examinations (blood pressure, heart rate), laboratory findings, endoscopic findings (ulcer size, ulcer location and stigmata of hemorrhage), Glasgow Blatchford score, Rockall score, AIMS65 score and clinical outcomes. The study was approved by the Human Ethics Committee of The First Affiliated Hospital of Nanchang University. All patients provided written informed consent for the endoscopic procedure.
Endoscopic evaluation and medication
In our center, all emergency endoscopic treatments were performed within 24 h by experienced deputy directors or chief physicians. The endoscopists were familiar with the indications, efficacy and limitations of currently available tools and techniques for endoscopic hemostasis and were comfortable applying endoscopic epinephrine injection therapies; all of them had more than 5 years of endoscopic experience [5,11]. In this study, we only enrolled patients who underwent endoscopic EI monotherapy between March 2014 and July 2021. Diluted epinephrine (1:10,000 dilution, equivalent to 100 mcg/mL) was injected at or near the bleeding site. All enrolled patients received the same volume of epinephrine injection and achieved technical hemostasis during the initial endoscopy. The bleeding status under endoscopy was classified based on the modified Forrest classification [12]. The endoscopic findings of standard epinephrine monotherapy in the treatment of PUB patients according to the Forrest classification are displayed in Fig. 1. The most severe ulcer was used to classify patients with more than one ulcer. After endoscopy, all patients immediately received high-dose intravenous proton pump inhibitors (PPIs) (an 80-mg bolus injection followed by a continuous infusion of 8 mg per hour for 72 h). PPIs included omeprazole, esomeprazole and pantoprazole. Then, 40 mg of PPI was taken orally once daily for 30 days after the short-term (72 h) high-dose intravenous PPI therapy in the hospital. All patients were followed up for at least 30 days.
Definition and outcome assessment
Rebleeding was defined as recurrent hematemesis, melena, anemia or hemodynamic instability with a decrease in hemoglobin of at least 2 g/dL after successful initial endoscopic treatment within 30 days, with fresh blood seen in the stomach or duodenum during second-look endoscopy [11,13]. Patients who underwent a second endoscopic therapy for hemostasis within 30 days were also regarded as having rebled. Shock was defined as a shock index (pulse rate/systolic blood pressure) > 1.0 or systolic blood pressure < 90 mmHg. Technical success meant initial success of hemostasis during endoscopy. Clinical success meant hemostasis during endoscopy and no recurrent bleeding during the 30-day follow-up.
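As a small illustration, the shock definition above can be encoded as a simple check; the function and variable names are assumptions.

```python
def shock_present(pulse_rate: float, systolic_bp: float) -> bool:
    """Shock: shock index (pulse rate / systolic BP) > 1.0 or systolic BP < 90 mmHg."""
    return (pulse_rate / systolic_bp) > 1.0 or systolic_bp < 90
```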
The primary outcome of this study was to identify the main risk factors associated with rebleeding after initial endoscopy treatment, and we constructed and internally validated a novel predictive nomogram for rebleeding after initial therapy. Additionally, we calculated the rebleeding rate, time to rebleeding, need for surgery, the requirement for repeated endoscopic hemostasis and mortality.
Statistical analysis
For normally distributed data, continuous variables are presented as the mean ± standard deviation (SD) and were analyzed using Student's t test. For non-normally distributed data, continuous variables are presented as the median and interquartile range, and the Mann-Whitney U test was performed. Categorical variables are expressed as proportions, and the χ2 test or Fisher's exact test was used as appropriate.
Variables associated with rebleeding (p < 0.05) were incorporated into the multivariate logistic regression analysis (backward stepwise) to identify the independent risk factors. All results are presented as odds ratios (ORs) and 95% confidence intervals (95% CIs). p < 0.05 was considered statistically significant. Then, a predictive nomogram was constructed based on the outcome of the final multivariate logistic regression analysis (p < 0.05). Receiver operating characteristic (ROC) curves were plotted to assess the predictive ability of the nomogram.
All statistical analyses were performed with IBM SPSS software version 24.0 for Windows (SPSS Inc., Chicago, IL, USA) and R statistical software 4.1.0 (www.r-proje ct. org). A two-tailed p value < 0.05 was considered statistically significant.
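For illustration, the multivariate modeling and discrimination assessment could be sketched in Python as below; since the analyses were performed in SPSS and R, the variable names and libraries here are assumptions based on the predictors reported in the text.

```python
# Sketch: multivariate logistic regression and AUROC for rebleeding prediction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

PREDICTORS = ["shock", "rockall_score", "tachycardia", "prolonged_pt", "gastric_ulcer"]

def fit_rebleeding_model(df: pd.DataFrame):
    """df holds one row per patient, with a 0/1 'rebleeding' outcome column."""
    X = sm.add_constant(df[PREDICTORS])
    model = sm.Logit(df["rebleeding"], X).fit(disp=False)
    odds_ratios = np.exp(model.params)                     # ORs; CIs via np.exp(model.conf_int())
    auc = roc_auc_score(df["rebleeding"], model.predict(X))
    return model, odds_ratios, auc
```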
Baseline characteristics of enrolled patients
A total of 360 patients with acute peptic ulcer bleeding who met the inclusion criteria were finally enrolled in our study. They all underwent endoscopic epinephrine injection (EI) monotherapy for hemostasis and received at least a 30-day follow-up. After initial treatment, rebleeding occurred in 51 (14.2%) patients. Among the rebleeding group, the median time to rebleeding was 2 days; 20 (39.2%) patients underwent surgery, 13 (25.5%) patients died due to rebleeding, and 18 (35.3%) patients received repeat endoscopic therapy (Fig. 2, Table 4).
Multivariate analysis of risk factors for rebleeding after initial treatment
Risk factors significantly influencing rebleeding (p < 0.05) after an initially successful technical treatment were analyzed by multivariate logistic regression with backward stepwise selection.
Predictive nomogram and performance of the models
A predictive nomogram of rebleeding after initial EI monotherapy was constructed based on the results of the multivariate logistic regression analysis (Fig. 3). The model assigned a weighted point value to each independent risk factor on the scale. A higher total score across all risk factors was associated with a higher risk of rebleeding. The predictive ability of the nomogram was analyzed by receiver operating characteristic (ROC) curve analysis (Fig. 4) and internally validated via bootstrap resampling of the construction data set (with 1000 bootstrap samples per model) to obtain the optimism-corrected discrimination via the C-index for rebleeding. The calibration curve (Fig. 5) of the nomogram showed a good fit between prediction and observation in the primary cohort. The area under the ROC curve (AUC) was 0.876 (95% CI 0.817-0.934) (p < 0.001), with a sensitivity of 82.40% and a specificity of 77.30% (Table 6), indicating that the summed risk score had high accuracy for the prediction of rebleeding risk after the initial treatment.
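A sketch of the bootstrap internal validation (optimism-corrected C-index) is given below; again the data layout and names are assumptions, with 1000 resamples as described above.

```python
# Optimism-corrected C-index via bootstrap resampling (sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def corrected_c_index(df, predictors, outcome="rebleeding", n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    X_full = sm.add_constant(df[predictors])
    full_model = sm.Logit(df[outcome], X_full).fit(disp=False)
    apparent = roc_auc_score(df[outcome], full_model.predict(X_full))
    optimism = []
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        Xb = sm.add_constant(boot[predictors])
        m = sm.Logit(boot[outcome], Xb).fit(disp=False)
        auc_boot = roc_auc_score(boot[outcome], m.predict(Xb))
        auc_test = roc_auc_score(df[outcome], m.predict(X_full))
        optimism.append(auc_boot - auc_test)
    return apparent - np.mean(optimism)   # optimism-corrected discrimination
```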
Discussion
Rebleeding often occurs within 30 days of EI administration with a high incidence after initial endoscopic hemostasis, which is closely associated with severe complications and high mortality. Meanwhile, a high rebleeding rate can negatively impact the patient's quality of life and result in a significant financial burden [4,11,13]. Therefore, the most important objective of endoscopic treatment is to achieve persistent hemostasis and minimize the rebleeding risk after endoscopic treatment.
Concerning PUB with high-risk stigmata (active bleeding or visible vessels), a second hemostasis modality (such as thermal, mechanical or sclerosant injection) in combination with EI can significantly reduce the morbidity and mortality of rebleeding, which is strongly recommended in most guidelines and clinical trials [5][6][7]14]. Recent meta-analyses show EI monotherapy is inferior to EI combination therapy (EI plus thermal or mechanical therapy) in terms of rebleeding and the need for emergency surgery postendoscopy [15,16]. However, EI monotherapy tends to be widely used among patients because of its accessibility and low expenditure, especially in nontertiary hospitals. EI monotherapy also has high efficacy in hemostasis and is easy to operate during emergencies [5][6][7][8][9][10]. Therefore, identifying risk factors for rebleeding after EI monotherapy can help guide our management of PUB patients during endoscopy.
In this study, we analyzed the risk factors associated with rebleeding after EI monotherapy based on a detailed database of PUB patients in our hospital. After multivariate logistic regression analysis, we found that shock, tachycardia (heart rate > 100 beats/min), gastric ulcer bleeding, a higher Rockall score and a prolonged prothrombin time (PT > 13 s) were independently associated with a high rebleeding rate after EI monotherapy. Additionally, we developed an effective predictive model and nomogram for rebleeding after EI monotherapy based on these risk factors, with sound discrimination. We then proposed an algorithm for the management of PUB patients (Fig. 6). When we encounter patients with acute gastrointestinal bleeding, we first acquire the basic clinical characteristics and laboratory findings before endoscopy, including age, blood pressure, heart rate, hemoglobin, albumin, prothrombin time (PT), underlying diseases, etc. After proper triage, medical management and stabilization, all patients undergo emergency endoscopy within 24 h. If endoscopy reveals peptic ulcer bleeding (PUB), shock, heart rate, gastric ulcer bleeding, Rockall score and PT are assessed and evaluated with the predictive nomogram. If the patient is considered a low-risk rebleeding patient, EI monotherapy can initially be employed for hemostasis. Conversely, if the patient is classified into the high-risk rebleeding population, combination therapy rather than EI monotherapy is recommended for hemostasis. Furthermore, if hemorrhage recurs within 30 days after the initial combination therapy, interventional radiology, such as transcatheter angiographic embolization (TAE), should be considered following combination therapy. Finally, surgery is indicated after failed TAE [5-7, 11, 13].
In this study, we found that patients with shock, a prolonged prothrombin time and tachycardia were more likely to have recurrent hemorrhage, which is consistent with previous similar studies [17][18][19]. Thomopolous et al. [18] proposed that shock, use of NSAIDs and a history of ulcer bleeding were independent risk factors for rebleeding after EI. Park et al. [19] built a scoring system for predicting the probability that a patient with acute upper gastrointestinal bleeding would require an operation, indicating that an elevated heart rate, a high BMI and a peptic ulcer on the lesser gastric curvature were closely related to a high rebleeding rate after EI therapy. We think that shock and tachycardia on admission may result from persistent blood loss, which causes the loss of platelets and coagulation factors. In addition, massive rehydration to correct hemodynamic instability leads to hemodilution and further aggravates coagulation dysfunction with a prolonged prothrombin time, which poses a great threat to persistent hemostasis after the initial technical success of EI monotherapy. Unlike other clinical trials and guidelines [20,21], our present study showed no significant differences in rebleeding with the use of NSAID (3.9% vs. 2.9%; p = 0.698) or antiplatelet (3.9% vs. 4.5%; p = 0.845) drugs; we think this may be due to the small sample size of the chosen groups. The use of the Glasgow Blatchford score and Rockall score in the prediction of rebleeding in PUB patients after endoscopic EI remains controversial across studies [22,23]. Budimir et al. [22] reported that the Glasgow Blatchford score better predicted overall rebleeding risk than the Rockall score. However, Liu et al. [23] concluded that the Rockall and Glasgow Blatchford scores are equivalent in predicting rebleeding of acute upper gastrointestinal bleeding. In our study, the Rockall score was an independent predictor of rebleeding, but the Glasgow Blatchford score still showed high predictive value. Over half (7/13, 53.8%) of the patients who died in our rebleeding group had serious underlying diseases such as hepatic and renal failure, which we think was the main factor leading to the high mortality in our study. Sometimes, factors such as incomplete endoscopic hemostasis at the initial EI monotherapy, delayed transfer to additional surgery or interventional radiology, and missed timing for transfusion could also accelerate rebleeding and thus lead to death. The limitations of this study include the following aspects. Firstly, this is a single-center retrospective study, which could introduce selection bias due to its retrospective nature. Further multicenter and prospective clinical trials with large samples are warranted to validate the findings in the future. Secondly, EI therapy was performed by endoscopists of various levels, which might result in subtle differences in prognosis. Finally, due to the small sample size of the rebleeding group, we did not initially have a validation data set. However, we internally validated the accuracy of our model with ROC curve analysis. The area under the ROC curve (AUC) was 0.876 (95% CI 0.817-0.934) (p < 0.001), with a sensitivity of 82.40% and a specificity of 77.30%, which had high accuracy and indeed helped us identify high-risk rebleeding patients in our clinical practice. Nonetheless, to our knowledge, the present study is the first to construct a nomogram for predicting rebleeding after EI monotherapy in PUB patients.
This predictive model could help provide clinicians with an intuitive and quantitative tool for predicting rebleeding risk after EI monotherapy, which may be practical for clinicians to choose suitable modalities for PUB hemostasis.
However, the risk assessment of the patients should not influence the choice of endoscopic treatment modality for PUB. After hemostasis by EI monotherapy, EI combination therapy should be used regardless of the patient's rebleeding risk and physical condition. The modality of definitive hemostasis, such as thermal, mechanical or sclerosant injection, should be chosen according to the availability of the method, familiarity with the method and the skill of the operator, whenever possible.
Conclusions
In conclusion, we developed a predictive nomogram of rebleeding after EI monotherapy with excellent prediction accuracy and discriminatory ability. This predictive nomogram can be conveniently used to identify low-risk rebleeding patients after EI monotherapy, allowing for decision-making in a clinical setting. Nevertheless, EI monotherapy should be avoided in high-risk rebleeding patients, particularly in patients with shock, tachycardia (heart rate > 100 beats/min), gastric ulcer bleeding, a higher Rockall score or a prolonged prothrombin time (PT > 13 s). EI combined with thermal, sclerosant injection, or mechanical (such as clips) hemostasis should be the initial treatment for these high-risk patients.
Indigenous Food Yam Cultivation and Livelihood Practices in Cross River State, Nigeria
Yam production, processing, distribution, and marketing processes are underpinned by socio-cultural beliefs shaped by ritual practices and indigenous wisdom. We used semi-structured interviews, public meetings, keen observation, local informants, and a review of secondary materials to assess local indigenous understanding of the interconnected perspectives of yam farming processes, socio-cultural perspectives, and livelihood practices in communities in southern Nigeria. Our findings revealed that over 90% of farmers depend on experience in adjusting to seasonal challenges, storage practices, and fertility enhancement. Cultural beliefs and spiritual practices pervade farmers' social attitudes to improving farming operations. Almost 70% of yam producers are aged 60 years and above and depend on crude tools and traditional methods of land management and production, while modern and innovative farming methods and practices remain limited. Farmers respond to the poor public extension service support system through informal networking and local associational relationships with diverse schemes to support and encourage members. Government and organizations should take advantage of these informal structures to empower farmers through micro-credit, education, information, training, supervision, and mechanization. Different groups of actors organized into formal social structures like cooperatives can take advantage of bulk buying, selling, transportation, access to funding, information, education, and training from public and non-governmental institutions. The study findings demonstrate that the socio-economic structure of the Obudu community has developed extensively on account of decades of yam production and processing, supporting chains of a livelihood network, entrepreneurship, and relationships of mutual cooperation and co-existence.
Introduction
The yam (Dioscorea rotundata) is a clonally propagated economic crop, cultivated for its underground edible tubers. Yams have been cultivated since 11,000 BC in West Africa [1], and more than 94% of the world's yams are produced in the "West African yam belt", which includes countries like Nigeria, Ghana, Côte d'Ivoire, Benin, and Togo [2]. Nigeria ranks as the leading producer of yams (Dioscorea spp.) in the world, accounting for 66% (about 50.1 million tons) of annual global production [3]. The yam is an important source of dietary calories and contributed on average more than 200 kilocalories per person per day to over 300 million people between 2006 and 2010 [4,5]. The yam is a highly revered cultural crop, and key festivities like marriage, chieftaincy ceremonies, conflict resolutions, peace accords, and sacrifices to the gods are tied to it [6]. As a testament to the economic impact of the yam value chain, 31.8% of the Nigerian population depends on yams for food and income security [7].
Yam farming has led to the emergence of a range of livelihood options in local farming communities. Farmers participate in yam production for three main reasons: household food supply, income generation through marketing ware yams, and production and marketing of planting material (seed yam). Yams exhibit diverse agro-ecological adaptation, diverse maturity periods, and in-ground storage capability, thus permitting flexible harvesting times, which aids sustained food availability.
Yam production has led to the thriving of social and economic practices and has enriched the cultural and political identities of farming communities. According to Appadurai [8], historical research shows that food production, consumption and distribution, cuisines, and gastronomy are symbolic representations of the nation and identity of a people. There is a consensus in the literature that food types, modes of preparation, and consumption habits are strong markers of regional, social, and cultural identities [9][10][11]. Our total selves ('who we are and where we are from') most likely bear much on the popularity of our individual food habits. Our dietary footprints and food habits are, to a large extent, ethnically, regionally, and culturally interconnected. Kittler et al. [11] noted this when they stated that eating is a daily reaffirmation of (one's) cultural identity.
The interaction of yam and man is speculated to have developed in the humid tropics from West Africa at one end to New Guinea at the other end. Yams have been cultivated since 11,000 BC in West Africa [1]. Scientific evidence supports the fact that the center of domestication of the popular white yam is in the River Niger belt of Nigeria and parts of the Republic of Benin [12]. This region is speculated to have the most advanced yam culture in the world [13]. Coursey [14] argued that the survival imperatives of emerging human cultures enhanced the appreciation of certain food species in a given environment. The value of yam under such circumstances was not to be underestimated. Additionally, plants were probably more central to the survival needs of the inhabitants of the humid tropics than animal foods, compared with the inhabitants of other climes outside the tropical regions. This point was reinforced by Lee and de Vore [15] stating that 'tropical hunter/gatherer societies that have survived until modern times derive three-quarters or more of their total diet from plant sources' [14].
Not much is known about indigenous practices and their sustainability impact, or about how the organization of yam farming has shaped local socio-cultural, economic, and livelihood practices. This paper draws on local indigenous understanding of the interconnected perspectives of yam farming processes, socio-cultural perspectives, and livelihood practices in the Obudu community, northern Cross River State, southern Nigeria. Obudu represents a typical yam production community, and its prowess in yam production is celebrated within the yam production agro-ecologies of Nigeria. We organize our discussion in segments touching on the background of the study communities, the data collection and analysis approach, and the presentation and discussion of findings.
Research Methodology
The fieldwork process used interviews, public meetings, local informants, a review of secondary and grey literature, and keen observation [6]. The study adopted a multistage sampling procedure. In the first stage, we targeted stakeholders (comprising 90 farmers and 60 middlemen and transporters) purposively chosen and interviewed on farms and at yam loading and offloading points in Obudu.
Our interviews were segmented into two demographic categories (60 years and above and below 60 years) to understand generational differences and associated perceptions related to the topic of study [6]. We successfully conducted a total of 150 in-depth and semi-structured interviews. A total of 33% of the respondents were in the age group of 60 years and above while 77% were below 60 years of age. A total of 56% of the respondents were males and 44% were female participants. The above variance in the gender proportion
Study Communities
Obudu is located at latitude 6°40′ and longitude 9°10′. It has a total land size of 459.458 km² [16] and is bordered to the south by Boki, to the east by Obanliku, to the west by Ogoja and Bekwarra local government areas in Cross River State, and to the north by Vandeika local government area in Benue State. Obudu is home to the clans of Bette, Obanliku, Bendi, Utugwang, Ukpe-Alege, Utanga-Becheve, Bekwarra, and Mbube, who all lived as autonomous communities sharing kinship, being the sons of Agba [17].
Over 90% of the population are Christians, who co-exist with other religious groups including the Muslims and the traditional religious worshippers. There is, however, a tendency of syncretism among the various religious groups which manifests through ancestral worship, sacrifices, libation, and beliefs in witchcrafts, myths, taboos, charms, etc. Eko and Ekpenyong [18] explained that the prevalence of these beliefs has shaped the course of many practices, ceremonies, and festivals, for instance, the designation of special days during which no economic or related activities may be conducted. Saturdays are popularly reserved for major traditions and festivals during which all forms of farming activities are prohibited. Obudu is dominantly a rural settlement. Consequently, the larger proportion of the population (over 90%) is estimated to live in the rural areas and is involved in subsistent and semi-subsistent activities dominated by farming, trades, skill crafts, and commerce, among others.
Conceptual and Theoretical Framework
The socio-ecology theory is a conceptual model developed by Urie Bronfenbrenner in the 1970s and later formalized as a theory in the 1980s. It places its focus on the individual, their affiliations, and interrelationships with their environment via people, institutions, organizations, and policy. Moraine et al. [19], utilizing this framework, analyzed, designed, and performed integrated assessment of crop-livestock systems at the territory level. They defined the framework as a socio-ecological system called the territorial crop-livestock system (TCLS) and developed a generic typology of crop-livestock systems through interacting with stakeholders in participatory design approaches. They concluded that this framework shows great potential to support the development of sustainable farming systems at the territory level.
This work utilizes socio-ecology theory in simplifying the complex web of individuals' affiliations and interrelationships in food yam cultivation. Structured by the focus on the individual level, this work shows how the individual's skills, cultural knowledge, native wisdom, and information about yam farming and ecological interactions across the environment are important in influencing the right attitudes and decision making in assessing sustainability and productivity.
The actor in yam cultivation utilizes all the necessary physical, economic, cultural, and social capital, be it land, labor, farming inputs, cultural wisdom, family, friends, and social groups, to mention but a few, available to achieve their aim by interacting with the diverse institutional organization to enhance profitability. These institutions also exist in the individual's community and usually pull resources and ideas together to better productivity and sustainability outcomes. Figure 1 shows Cross River State in the southern geopolitical zone of Nigeria. Obudu is located in the northern Cross River State and geographically characterized as derived savanna agro-ecology, which is suitable for yam cultivation.
Indigenous Knowledge and Yam Cultivation
Yam cultivation involves a series of interconnected processes shaped by seasons, local knowledge, and religious rituals. Every year, yam farming proceeds through land preparation, planting, weeding, staking, harvesting, storage, and marketing, with huge financial and labor implications (Table 1).
Table 1. Activities carried out in yam production at different periods of the cropping seasons in Obudu, Nigeria (columns: months; processes and labor demands).
The cutting process involves the removal of vegetative life such as grasses, trees, and any other obstacles that may limit yam cultivation on the farm. This process is usually carried out using knives and cutlasses and executed at any time of the day, although mornings or late evenings are preferred by farmers. After the cutting process, the farm waste is allowed to dry for a day or two. This culminates in the clearing process, which refers to the gathering of all farm waste into heaps of diverse shapes. The burning stage is where the gathered farm waste is set on fire. The clearing stage continues, with the remaining waste moved to the farm boundary, thus creating space for tilling/mound making. Tilling/mound making is carried out with special hoes and sometimes shovels. It is believed that the deeper the till depth and the larger the mound, the better the yam yield.
Planting is carried out by inserting the seed yam into the mound. This is followed by staking when the yam stem has sprouted. Trailing and staking are carried out for better exposure of the foliage to sunlight. This can have an indirect effect on underground tuber growth resulting in a better yield. Weeding is carried out 2-3 weeks after planting. The uprooted weeds are kept strategically to dry and then applied upon the yam mound as mulching. This organic form of soil enhancement is extensively utilized, as it is believed to be a major boost for yam growth. Vine trailing occurs when the yam stem grows longer with different branches. This process involves guiding the stem in a way to enhance faster growth of the root.
Yam tuber milking is a pre-harvest activity often carried out between five and seven months after planting. The time for yam tuber milking is usually determined by the farmer as influenced by two main factors, these being his/her perception of yield output and household food need. In milking, the immature tuber is harvested carefully without destroying the roots and the corm, after which the tuber continues to grow from the corm. This process enables farmers to assess yams before the main harvest. Harvesting marks the end of yam cultivation as the farmers reap their rewards. During our survey, a local chief in his late 70s explained that 'yam planting is possible and successful during the dry season (October-December) due to the cultural wisdom and norms of making till/mound under huge/large trees and applying intense mulching'. Cultural practices, such as mulching in the dry season, conserve the soil moisture and improve the nutrient status. It also reduces soil caking caused by high absorption of heat/solar radiation, thus preventing tuber rot. It also provides loose seedbeds for ease of tuber penetration, allows for the collection of fertile topsoil, controls water in areas with a high water table, and makes harvesting easy.
The chief further stated that the wisdom behind early planting is to take advantage of tree shade and protect the seedlings and sprouts from the heat of the dry season sun. As the rain begins in March and April, yam stems would have grown very appreciably, while the branches of shaded trees will serve the purpose of staking and trailing. The period spanning July and September is reserved for harvesting. Furthermore, it is important to mention that a key aspect of yam production lies in the transfer from farms to preservation and distribution. Traditional "milking" of the yam provides seed yams for the next planting season and thus reduces the burden of buying seed yams from the market. The respondents highlighted four main traditional storage methods, which include straws and sticks, the use of mud, the raffia bag, and compact storage ( Table 2). The straw and sticks technique for yam storage utilized these components as preservatives during storage. We found that farmers make a careful choice of some categories of leaves and sticks in the storage process to further protect the harvested yams from pest attacks as well as to enhance their longevity: " . . . yams may be stored in any safe place but without the use of certain types of leaves/straws and sticks they may not last to the next planting season . . . ", noted a farmer in his late 70s.
The storage techniques used vary in relation to context and equally determine risk exposure levels of yams. For transporters, the compact method is the most suitable. Compact storage refers to the technique where yams are packed in an organized form in a vehicle, preventing spaces in between them as much as possible. A total of 53% of the transporters believed that the compact storage method, a relatively modern method of storage, reduces the risk of breakage upon encountering bad patches of road during transport, while 13% explained that straws and sticks help prevent disease, pests, and parasites from damaging the product. A total of 43% of farmers utilized the straws and sticks methods for similar reasons, while 27% of farmers utilized the traditional mud method. A total of 47% of collectors used raffia bags as the major material of storage and distribution. It was also observed that most of the respondents above 60 years of age adopted mud, straw, and stick techniques, which were the more traditional form of storage. This technique was mostly adopted by the farmers and 25% of the collectors.
Farmers' Seed System Management
Seed yams are culturally perceived as the source of all life, wealth, and health. This belief holds that seed yam production is sacred and highly regarded in such a way that it is used as one of the marriage rites to test/screen for an optimal bride. An elderly chief in his late 70s recounts that after a man expresses his desire to marry his bride, the male's family would give the bride a yam to process. The bride is screened from the first minute her blade touches the yam. If she starts to peel from the seed head down, then she has failed the marriage test. She is also tested on how she handles the yam. If she hurts the seedhead or consumes the head irrespective of the economic situation of the family, then she is definitely not a good bride and will not be able to manage the home. The right approved and good bride would be one who not only separates the seedhead but keeps it in an appropriate space that would encourage sprouting. The woman that keeps the head is a woman that will preserve the family's future. This is because the woman is supposed to consider the next planting season by preserving the head for cultivation. This is the optimum bride desired by all families.
Another yam farmer in his early 60s explained that to be a successful yam farmer, one has to understand the spirituality and fertility difference between seeds. He revealed every seed yam is different in terms of fertility and reproductive capacity; as such, they have the ability to communicate to a listening farmer. This indigenous knowledge is mostly transferred from father to son and gained through practical experiences of yam cultivation on each particular traditional farm plot. He revealed that in his traditional farmland plot, not all spaces have the same fertility outcome, as some are high, moderate, and low. This trend is observed in the seed yam due to variations in sprout time. He quipped that there is also a spiritual dimension to these, as he mutters his prayers while applying the seed yam with a slow stem sprout to the portion of farmland that he perceives to have a high fertility rate. The seed yam with fast sprout potential is thus planted at farm spaces perceived to be of moderate fertility, while a seed yam exhibiting fast sprout potential is planted in spaces of low fertility in the plot. He explained that he has never been disappointed, as his yield is appropriate to his expectations.
All farmers had diverse indigenous techniques for identifying what makes a good seed yam. The dominant indigenous techniques for identifying a good seed yam are the seed yam's appearance, the speed of germination (which translates to vigor potential), the seed yam's size, and the scratch technique. Respondents could only perceive, and found it difficult to explain, how to choose a good seed yam based on its appearance. However, a male respondent in his late 40s came close in his explanation, saying that he would choose a seed yam whose skin had limited wrinkles and no spots over ones with many wrinkles and black spots. Respondents readily acknowledged that a seed yam exhibiting a vigorous dormancy break tends to be more vegetative in growth and thus results in a higher yield. They also explained their preference for larger seed yams, as it is perceived that the larger the size, the more fertile the seed yam. The scratch technique is used to determine whether there is still life in the seed yam. This technique involves scratching the seed yam's brown skin surface lightly. If a light greenish color is found, then the seed yam is alive. The deeper the light green color, the higher the fertility of the seed yam. These indigenous techniques were used to determine the degree of fertility of the seed yam.
All farmers acknowledged using at least one or more of the above indigenous techniques in choosing a good seed yam, especially in supplementary purchases to meet their cultivation demand. A total of 62% of yam farmers preferred seed yams exhibiting vigorous sprout potential, while 48% revealed their main preference to be seed yam size. Different farming households produced seed yams for different purposes. Based on the respondents' reports, farmers engage in seed yam production with the following intents: (a) private use, (b) private use in combination with sales to make income, and (c) exclusively for sales to make income (Table 3). It was observed that the one percent respondent had a land conflict issue and had to resort to selling his seed yams. Farmers who produced seed yams for private use and sale often had large farmlands and/or several farmlands in diverse locations, thus enabling them to hold a large stock of seed yams for sale or barter for other farming and household commodities. Farmers who produce seed yams for private use often run short of their farm requirements and need to purchase more seedlings from the market to meet their demand. The seed yam price often varies from NGN 150 to NGN 500 (USD 0.32-1.1, at a conversion rate of NGN 460 to USD 1), depending on the perceived fertility quality of the yam as judged by seed appearance, the germination potential of the seed yam shoot, the seed yam size, and the scratch result. Seed yam cost also varies by season, as it decreases during harvest and increases during the cropping season.
Labor Practices and Gender Role Differences
Yam farming depends mainly on manual labor from family and exchange/hire sources. All the farmers acknowledged depending on family labor composed of men, women, and children for cultivation. Alternative labor sources come from informal or organized labor exchange groups. There could be a need for hire, especially at the peak of farming season. Twenty-three percent of the respondents claimed that they complement family labor with exchange and hired sources to cope with the pressure of peak season farming. Fourteen percent said they depend on their family members and in rare cases complement with external source drawn from the local informal associational labor network. Eight percent said they mostly depend on family with hired labor sources (Table 4). Labor exchange group hiring is social. It involves mutual labor support and assistance from families, groups, and cooperatives on a 'turn by turn' basis. An extremely small number of farmers (1%) with relatively bigger and larger farm size draw on mechanical sources ( Table 5). The farm sizes range between 0.3 to 2.6 hectares. The plots are irregular in shape and are demarcated by trees planted to signify boundaries between plot sizes. Although the study areas are generally perceived to be yam farmers, we found that only 20% of farmers were relatively full-time in the yam production business. They complement their income through extra paid labor services. Hired labor services were known to attract a different wage structure, from NGN1000 (USD2.17) and above depending on the size of the plot or amount of labor needed. There is usually more demand for three specific labor tasks, including cutting/clearing, mound making, and weeding. A woman in her late 40s explained that family labor is always available for cutting/clearing, mound making, and weeding tasks, but can also be hired out when relatively larger and bigger sizes of farmlands are involved. While cutting/clearing and mound making demand more strength, weeding requires more skills in execution. This perception justifies price differences; a higher pay of NGN 2000 (USD4.35) per plot mostly applies to cutting/clearing and weeding, which also depends on the farmland sizes. The mounding task could range between NGN10 (USD0.02) and NGN50 (USD0.10) per mound. A total of 92% of respondents preferred using women and girls for weeding due to the perceptions of their ability for greater weeding efficiency, as opposed to 70% respondents who preferred to engage the men for cutting/clearing tasks. Thirty-one percent of farmers claimed they depend much on communal rotational labor exchange practices, which can accomplish large-scale tasks in a large expanse of land with minimal cost and time.
Gender role differences also formed part of the labor practices and are embedded in the broader socio-cultural norms. Men and women perform different labor tasks from cultivation to harvesting. Such tasks are segmented into cutting/clearing, tilling/mound making, planting, mulching and staking, weeding, harvesting, and other minor tasks, including conveying yam tubers from farmlands to homes and regular farm visits, among others (Table 5).
A woman in her late 30s explained that the weeding task is culturally assigned to women. She said, 'women are naturally endowed to bend and weed for hours with ease and if a man tries to execute similar task, he will encounter health challenges if not immediately then later in his life'. Another female respondent in her late 60s explained that the old cultural values assign men the role of bush clearing and tilling/mound making. However, she noted that such role differences are gradually losing steam as a few men and women occasionally cross role boundaries to offset and make up for labor shortage as well as the opportunity to earn extra income through hire services. As can be seen in Table 4, 8% of men claimed they do engage in the weeding task, while 30% of women claimed they also ventured into land preparation tasks (cutting/tilling). Generally, women seem to perform the greater share of labor tasks in the cultivation process, and they seem to be comfortable in the fulfillment of cultural norms and societal expectations. Take for instance the arduous task of carrying yam tubers on the head from farms back home. A female respondent in her early 30s noted that the pride of carrying yam tubers on the head back home signifies healthy living and productivity, attracting goodwill in its wake.
Social Perception, Cultural Practices, and Indigenous Productivity-Enhancing Options
The processes of yam cultivation, marketing, and consumption are nested in social norms, cultural beliefs, and taboos with implications for productivity. Landrace preferences, ritual practices, and gender role assignment shape productivity outcomes in different forms (Table 6).

Table 6. Socio-cultural perspectives on yam distribution practices (yam-related cultural practices and routines in distribution, with remarks).

- Worship: varies by ethnic group. Profitability is believed to be determined by spiritual forces, which have to be consulted before market transactions are carried out.
- Yam transportation is highly gendered: culturally, women do not own transport vehicles nor go into the long-distance transportation business. Women are, however, actively involved in farming, collecting, and livelihoods tied to distribution, except the long-distance transportation business. The yam transportation business is exclusively the preserve of men, with poor or no participation of women due to the exigencies and rigorousness of the business.
- Excessive fixation on indigenous and native landraces: indigenous and native landraces are highly in demand, which is reflected in traditional landraces attracting higher prices. Preferences for these landraces by distributors reflect the perceptions of buyers, who see these landraces as a prestige food signified by their highly desired texture, taste, and appearance.
- The cultural practice of wholeness in yam distribution: yam is perceived as a whole, and any dent on it is perceived as a handicap; dented tubers are thus disregarded by the majority of the (well-to-do) public.

Farming practices are steeped in cultural beliefs which rarely embrace innovation. A man in his 70s, for instance, argued that only a bad person in the society would find that his yams do not thrive under the known dry-season traditional method of cultivation. The context of a bad person refers to one who is not sociable or helpful and does not contribute to the numerous community development initiatives.
Over 90% of the respondents rarely subscribe to modern farming methods of mechanical and chemical farming. Most farmers prefer traditional means of soil fertility maintenance involving traditional manuring and religious practices. Most soil fertility enhancement practices are through mulching and animal manuring. All farmers interviewed indicated their desire for chemical fertilizer use. However, only 1% actually claimed they use chemical fertilizer, and the reason for low acceptance has to do with the poor market value of final products. A male respondent in his mid-40s claimed that consumers prefer yam tubers that are not cultivated with chemical fertilizers. He noted that they detect that through yield output and taste differences; ' . . . It is difficult to make much profit when you farm with chemical fertilizers on account of low sales and poor pricing . . . ' argued the respondent. Respondents were discreet in revealing the physical appearance of yams cultivated with chemical fertilizer, although one explained that a yam cultivated with fertilizer will have excessive 'hairs' on its tuber, among other slight indicators such as color and texture. The yield capacity of yam is dominantly shrouded in religious beliefs, which necessitate annual and regular ritual practices and sacrifices. This limits opportunities for large-scale production.
Gender role differences that assign fixed boundaries to men and women limit the scope of opportunities for both in the yam production and marketing value chains. Women are not allowed ownership of yam barns and are traditionally excluded from fully participating in the transport employment chain. Beyond the cultural ties, the yam transport business requires a huge capital outlay, which most women might not be able to meet.
Livelihood Assets of Yam farmers
The yam farmer's assets include physical, financial, human, social, and natural capital. These include farming inputs, energy, water and sanitation, shelter, and means of transport. Rudimentary farming inputs were utilized by small holder farmers in the study area.
Yam farmers' natural capital is the land and environment. Land ownership is mostly by ancestral transfer and inheritance from parents. Culturally a system of land rotation is practiced, although population growth has intensified pressure on land, causing most farmers to change this cultural system, as land has become fragmented with the resulting continuous cropping. Small holder farmers are financially limited to farm more than a single plot. Traditional land use transference of ownership rights to children and grandchildren creates land fragmentation and more pressure on plot cultivation.
Soil fertility augmentation is mostly carried out by the application of organic manure. Observation revealed four classes of housing architecture in the study area. Classed by materials used in building, they include the cemented block with corrugated zinc roof, the baked mud brick with corrugated zinc roof, the baked brick with thatch, and mud with thatch houses. These farmhouses also reveal the level of economic advancement of farmers. Among Obudu settlements, about 60% of houses were made of cemented block and corrugated zinc roofs, and 30% were baked mud brick houses with corrugated zinc roofs, reflecting a modest level of socio-economic attainment, while others dotted the landscape, revealing low levels of economic attainment. The number of households that reside in these buildings could not be clearly identified, as a female respondent in her late 40s revealed that "family houses are always opened to any one even after they leave to their own homes. During the farming seasons my sons with their families come back home to dwell and farm on the family land".
A female respondent in her late 30s narrated the cooking experience of rural yam farming households. She explained that kerosene cooking fuel was generally utilized by most households. However, due to a 100% increase in price, yam farmers have resorted to using wood for cooking, resulting in the gradual depletion of forest cover and impacts on farming, and escalating the vulnerability of subsistence yam farmers in southern Nigeria. Another female respondent in her early 60s revealed how the Abacha stove (a specially fabricated stove that maximizes the heat produced) helped the farming communities. She explained "that has been my only benefit from any government intervention program in 1997".
A yam farmer in his late 30s revealed that most farming households cannot afford the outrageous electricity bills, which are as high as NGN3000-5000 (USD6.52-10.86) a month; such households prefer to stay off-grid: "after all the electric power is rarely available and farmers barely have electrical appliances". He further added that due to the high cost of kerosene now, households prefer to use small rechargeable lamps, which they charge in outlets that retail electricity.
Furthermore, the increase in energy prices raised the transportation costs of organic manure, farm inputs, and other needs, thus increasing the cost of yam production. This also impacts the yam price as farmers try to break even. The energy price increase also affects the vulnerability of collectors, middlemen, transporters, wholesalers, retailers, processors, and consumers across the yam value chain.
Yam farming households in the study area were observed to maintain highly sanitary conditions. A respondent explained that community sanitation was regularly observed in the community as supervised by Osu (the regal chief) and his enforcers. He explained that monitoring is not that necessary, as cleanliness is part of their culture due to the shame attached to an unclean environment. Toilet facilities are mostly found in households, as the only public toilet was located at the commercial vehicle station. The public toilet charged NGN50 (USD0.11) for urinating and NGN100 (USD0.22) for defecating. The types of private toilet facilities range from the pit toilet to the modern toilet system, depending on the farming household's income level. A yam farmer in his late 60s explained that it was practical during farming to add human waste to the other animal waste to ensure appropriate organic manure. He revealed that this is done in a very responsible way.
Yam farmers mostly utilized well water as their major water source. An elderly woman explained that before borehole drilling technology was utilized, "everyone cooks, eats and irrigate farmlands with well water and if household or farmland is close to the stream with stream water as well". She explained that farmers now perceive borehole water to be the cleanest and spend as much as NGN20 (USD0.04) to purchase it in 15-25-liter buckets. She revealed that there were also highly vulnerable people who purchased water packed in polythene bags or sachets as 'pure water'. A bag of water contains between 15 and 20 smaller sachets of 50-70 cl of water, depending on the producing company. Odum revealed that when these are shared in ceremonies like the new yam festival, households will be perceived as having a bumper harvest and presumed to be well-to-do. She revealed, however, that most farmers are modest and generally rely on borehole water for drinking purposes, while well water is used for other purposes including farm irrigation.
Yam farmers basically utilize the public transportation system, which has, in the order of highest frequency, motorcycles, tricycles, and cars, to mention but a few. Yam farmers possess and more frequently utilize wheelbarrows, bicycles, motorcycles, and tricycles in moving yam from farm to the sale destination. Routes in the interior areas are untarred, causing a certain level of difficulty in yam transportation, as the undulating terrain more often than not causes 'wound' (accidental peel) to yams, reducing their market value. Most of these accidentally peeled yams are sent back home for consumption, reducing the selling stock of farmers. Respondents claimed that their earnings cannot meet their needs, as some have merged farming with other jobs. Yam farmers with one traditional plot of farmland revealed that they do not get good prices for their farm products, so such farmers prefer just to consume the yams. This group, after the sale of excess yams, utilizes the money to finance their other household needs. They rarely save and have to rely on other sources of income to survive.
Farmers cultivating more than a hectare of land often belong to groups like cooperatives. 'Ubiam', 'Osusu', and 'Beyietin' local cooperative groups in Obudu have yam farmers who are involved in financial activities, such as registration for membership, savings, loan collection, and contributions (a system where farmers in a group of five, ten, or more contribute a certain amount to give to a member to use for any purpose), to mention but a few. However, for this group, up-scaling is usually a challenge and needs to be studied in detail. An anonymous farmer specified that funding from commercial banks is "too detailed, cumbersome with very high interest rates", prompting most farmers to default in payment due to the challenges of farming in southern Nigeria. This raises more caution from the commercial banks in administering loans to farmers. Other farmers ignore the risk of accessing commercial loans altogether. Some were not even aware that the Nigerian Government had such an intervention in place to assist rural farmers. Further information on services offered by Nigerian Agricultural Insurance Company (NAIC) was unknown to yam farmers. Most farmers examined revealed that access to credit will result in more labor hire, thereby increasing production.
The human capital of yam farmers in southern Nigeria is composed of manual labor and skilled labor. The majority of the farmers had at least a primary school education and could speak at least two languages. Smallholding farmers often utilize family labor alone. They carry out other livelihood activities such as petty trading, shoe repairs, wood repairs, motorcycle repairs and riding, retailing food, blacksmithing, and bricklaying, to mention but a few, to augment their income. Manual labor hire is social capital in nature, as families, groups, and cooperatives take turns farming for each other. Yam farmers in southern Nigeria draw on social capital by communicating in groups by word of mouth when labor is needed. A total of 70% of respondents had mobile phones, with all having knowledge of how to operate a mobile phone. Smartphones were not used by the laborers who responded to this study. All yam farmers also acknowledged ownership of a radio, granting them access to information.
Livelihood Opportunities, Socio-Economic Network, and Social Support Services
The processes of producing, distributing, and marketing yams involve chains of interconnected activities and services, including various farming implements, inputs, structures, and associated infrastructures. Rudimentary farming implements including hoes, spades, shovels, and knives are still commonly used, providing opportunities for direct labor engagement and indirectly supporting local producers and marketers. A flourishing business community of local tool fabricators and retailers has sprung up to support year-round yam farming (Table 7). Most of the equipment shops also sold all kinds of general goods, including different types of yam seedlings and farm chemicals. Yam production, processing, distribution, and marketing provide livelihood support to many economic actors and equally contribute to the flourishing of social networks that further strengthen farming practices and provide social support to farmers and other agents (Figure 2).
Ownership of these farm tool shops was dominated by males, though sales were administered by a female member of the family. The socio-cultural construct aligned the female gender as being better sales/administrative representatives. These roles were closely supervised by male farm tool shop owners in the study. Female respondents also dominated the seed yam sales. They appealed to their customers and met their needs differently.
Many economic actors were identified in the yam production and intermediary chains, as follows.
Box 1 above explains the job description of the different actors involved in the yam value chain, as listed in Figure 2. It was observed that there exists a dynamic flow of yam value through farmers, collectors, middlemen, transporters, wholesalers, retailers, processors, and consumers, as detailed in Box 1. Farmers cultivating more than a hectare of land mostly operate through social networks and cooperatives to encourage savings, lending, and labor exchanges, in addition to providing welfare and insurance services to their members. Socio-economic groups and networks are collectively funded through membership levies, donations, profit from investments, and other forms of support. Available groups include 'ubiam', 'osusu', and 'beyietin'. Specifically, associational networks serve to provide basic education and share experiences on farming and storage methods, marketing opportunities, and other services, as needs arise. Their services to members are more critical considering funding limitations and the difficulties/complexities of accessing financial products and services from commercial banks. A female farmer in her early 50s noted: ' . . . these associations are very important to use, and it will be difficult to achieve success here without them . . . government is not helping and if they help, it does not get to the real farmers . . . commercial banks are not useful . . . their loan services come with high interest rate and conditions that are too difficult to fulfil . . . in fact they do not have human face . . . ' She added that the farmers find solidarity and a sense of belonging through the diverse informal network and associations: ' . . . members provide support in festivities and come to our rescue during challenges . . . '. Box 1. Economic actors in the yam production and intermediary chains.
Farmers:
The main actors in yam production are households engaging in farming. This is the starting point of the yam value chain. Small holder farmers residing in hamlets across linear, nucleated, and dispersed settlements make up about 80% of yam cultivators in the study area. They are mainly equipped with the indigenous knowledge systems for cultivation of yams. They are socially networked to share labor, seed yams, and other farm input. Most livelihood activities include the cultivation of other crops such as groundnut, afang, cocoa, and cassava, to mention but a few, in mixed cropping systems. Traditional seed yam is cultivated for mostly family consumption and harvested yearly. Upon harvest, enough stocks are selected to feed the family for the year. The excess is transported using wheelbarrows, bicycles, motorcycles, and tricycles to the village markets for exchange with the collectors. Farmers are exposed to exploitation by collectors, as they seek to sell in order to purchase other needs, thus selling cheaply.
Collectors: These actors are mostly relatives of farmers who live in the rural hamlets. They are mostly business speculators. They consist of young village entrepreneurs, former businessmen, and retired civil servants who retire to the rural areas, to mention but a few. Only 24% of collectors are full-time in this business. Other collectors are involved in diverse trading and service activities. Forty-three percent of these respondents were female. During harvest, they purchase directly from the farmers and stock their local barns. They buy from farming households and village markets such as Utugwang. They stock their purchases from these markets and rarely sell unless an urgent need for money arises. However, these collectors are not limited to the purchase and storage of yam alone but are connections between the middleman and the farmer. They possess limited capital and cannot purchase much yam nor store it over a long period of time. They transport their yams to bigger markets in the local government headquarters. Their modes of transportation are mostly pickups and wagons (automobiles).
Middlemen: These are bigger players in the value chain, as they possess more capital than the collectors and are fewer in number. They understand the demand and supply dynamics of the market and influence pricing greatly. They have large store houses and make purchases from bigger markets like Obudu, Ogoja, Ikom, Yakurr, Yala, etc. Their market reach is not limited to a few yam wholesalers, as they sell directly to yam retailers. They belong to strong trade unions that cooperate to share information and cost of transportation. Together they utilize lorries to transport their goods to the city market. However, as individuals, they possess pickups reaching long distances and 'dyna'.
Transporters: Transporters engage in yam and non-yam transport tasks with a mix of wheelbarrows, bicycles, motorcycles, tricycles, wagons, trucks, and lorries, depending on the location. They provide the linkage across different locations and service points. They begin from the rural hamlets to the city consumer. A respondent in his late 30s explained: "Yam is mostly hauled by Lorries from far distance farms and all kinds of wagons from local farms . . . ." At the peak of harvest, there is an increase in mobility, as traders flock into local farms to collect yam tubers for distribution to major cities of Calabar, Uyo, and Obudu.
Wholesalers: They have an open profit strategy of making money through trade. They are often the leaders of yam market unions and own warehouses and lorries. They are fewer in number than the middle men. Wholesalers coordinate the yam market as they create opportunities for the middlemen they lead. They have traded for a long time and are educated in the rudiments of accessing credit. They are vulnerable to road hazards and the storage problems which they sometimes encounter. They understand the market taxation system and often exempt themselves from the process.
Retailers:
They play an important part in the distribution and marketing processes. They are often more in number when compared to the wholesalers. They often have social credit agreements with wholesalers to support their business and wade through market challenges.
Processors: The yam processing value chain has a wide range of actors. The first group focuses on the conversion and preservation of yams. This group may be classified into local processors and organized processors. Local processors seek to preserve yams through frying, drying, grinding, and packaging yam and yam flour for preservation using locally fabricated materials. They often operate informally and appeal to local consumers. The organized processors are food firms who are guided by the food agency (such as NAFDAC) standards for processing yam into diverse forms. These firms often target urban and international markets for sales of their processed products. Their products are not limited to yam powder and chips. Other processors include retail outlets which sell yam products after value is added by converting it to different edible forms. These retail outlets are mostly small- and medium-scale enterprises dotting the yam ecosystem. They sell all kinds of foods appealing to the culture of the location. They are located across junctions, along major and minor road routes, and in urban and rural areas, to mention but a few. Yam is often eaten as pounded dough with different kinds of soup such as afia efere ebot (white goat soup), melon, and afang, to mention but a few. The process of eating pounded yam is by swallowing after lubrication with soup. Processing yams involves peeling, washing, boiling, and pounding them into a dough. This method is labor-intensive, especially during pounding. A less labor-intensive technique of using yam flour exists, but yam pounding ranks as the most preferred culturally by consumers based on its better mold texture, which is the standard for measuring good yam.
Consumers: These are the end users of yams. They bear the total cost of the yam value chain. Consumers are classed into two types. First is the household consumers who shop for yam tubers from the market and convert them into different edible forms at home. Other consumers include those who patronize the different food processors mostly to satisfy their hunger and other recreational purposes.
Discussion of Findings
Yam farming has underpinned local livelihood, fostering indigenous practices and socio-cultural inter-relationship for many decades. Over 90% of families and individuals depend on yam as their main staple in porridge and pounded, boiled, chip, flake, powder, and many other forms. Yam tubers are distributed and sold within and across communities and cover regional, sub-regional, and international levels, contributing to the food and nutrition mix of the people, in addition to complementing the foreign exchange earnings of Nigeria. The findings of the present study have demonstrated that the socio-economic structure of the Obudu community has developed extensively on account of decades of yam production and processing, supporting networks of chains of livelihood, entrepreneurship, and relationships of mutual cooperation and co-existence.
The production, processing, distribution, and marketing processes for yam depend on indigenous practices and efforts and are underpinned by socio-cultural beliefs. While the cultivation and production processes are largely shaped by religious beliefs and social perception anchored on ritual practices and indigenous wisdom, labor practices draw significantly on communal solidarity and reciprocity through joint and communal efforts and social networks. In many respects, these have implications on sustainability, acceptance of innovation, and adaptation to the dynamics of environmental circumstances. Over 90% of farmers depend on long experiences in adjusting to seasonal challenges, storage practices, and fertility enhancement. Where and when to plant are decided on indigenous trial and error as well as religious beliefs and practices. In the circumstance of largescale and significant atmospheric events such as climate change, indigenous wisdom and ritual practices are less likely to help. Although most of the farmers are aware of current climate challenges that lead to delayed/diminished or excessive rain, the solution depends on spiritual religious practices. Yam farming depends on the natural cycle of rainy and dry seasons, whose variability or changes could engender substantial risk. Uncertainties about handling seasonal fluctuations probably discourage young people from participation in yam farming given the near absence of a public support system to mitigate possible challenges.
Farm-level information collected showed that the head of the household, usually a man and his wife, were asked who owned each crop in the field, a man, a woman, or the household members combined. The couple was also asked who performed each farm task in the field, namely land clearing, seedbed preparation, sowing of each crop, weeding, harvesting, and transporting of each crop from the field. Analyses of our dataset show that yam field ownership includes both men and women, i.e., both the male and the female genders grow yam in their own rights and make production, marketing, and utilization decisions. This contradicts an age-old speculation of yam as a man's crop in Nigeria. In several Nigerian cultures, wealth was controlled by the man who served as head of the household. It is noteworthy to mention that yam was the ultimate wealth and regarded as king of all crops where agriculture was the main business. Ohadike reported that the yam production requirement for masculine labor was a contributing factor to the expansion of cassava production in the Lower Niger in the twentieth century [20]. In the Lower Niger (the Niger basin from just above the Niger Delta on the coast to Lokoja), a series of three tragedies-a war of resistance against the imposition of British rule (1899 to 1914), the First World War (1914 to 1918), and the influenza epidemic (1918)-made sustenance of food security through yam production difficult [20]. In addition, it is noteworthy that yam production was adversely affected by the withdrawal of men from the villages to fight in the wars. This led to a scenario where people of the Lower Niger embraced cassava, which was hitherto unacceptable as inferior to yam but less labor-intensive to produce [20].
Our data analysis in Table 4 revealed that a high number of women provided the bulk of the labor for each task, which increased from a low level during land clearing to a higher level at weeding, harvesting, transporting, and marketing. By contrast, the number of the fields in which men provided the bulk of the labor was highest during land clearing, mound making, and planting. These findings show that both men and women are heavily engaged in different yam production and postharvest tasks. Contrary to a study in Plateau State, Nigeria, Stone et al.'s [21] findings revealed the male: female labor ratio, where men do 50% of the weeding and transplanting labor and 52% of harvesting, storage, and processing, while 42% of the heavy ridging and mounding are done by women. In terms of total work hours in all agricultural activities, women's contribution is 53%, and their per capita labor input is 46%. Stone et al.'s [21] labor distribution shows that more women engage in the bulk of labor activities, while both genders exhibit similar family and social labor structures. Kleih et al. [22] in their study explained that though the yam is primarily considered a man's crop, women participate in some agricultural activities such as weeding and transporting of the yam tubers, while planting of the tubers is traditionally carried out by men.
The traditional preference of soil fertility maintenance using organic and animal manuring is extremely beneficial to not only yam cultivation but also general ecological wholesomeness and sustainability. Neina [23] revealed that yam production is traditionally non-sedentary because of its high nutrient demand. Identifying soil fertility as the biggest driver, Neina's [23] data show that yam yields decline with time under mineral fertilizer application; on the contrary, yields increase chronologically under organic fertilizer application due to the additive effects of the latter on soil properties. These findings show that the native wisdom exhibited in our study enhances better yam production output in the long term.
Seed yam quality improvement has been extensively studied through scientific technological innovations and practices; however, these products are often not accessible to rural Nigerian yam farmers. Innovations emerging from research and developmental centers are often stalled at the grassroots level, as rural yam farmers find it difficult to assimilate the science. Other reasons could be the fact that farmers prefer and hold onto their cultural values and processes. This is in line with the study by Bergh et al. [24], who confirm the slow adoption of yam minisett technology that was introduced to yam farmers starting in 1970. Seed yam production is culturally managed by most Obudu farmers as revealed above. Culturally perceived as the origin of life, seed yams are held sacred, with indigenous practices across farming seasons grading the good and bad seed yams. It is perceived that this continuous domestication process of yam enables this region to evolve highly desired and demanded yam outputs with widely acknowledged great taste, texture, and size, generally called 'Atam yam'. It is also worth noting that this yam domestication process also impacts on people's livelihoods via their cultures, as examined in the study, influencing their choices, attitudes, and spirituality. These indigenous practices are mostly transferred from father to son, with practical experiences gained in the art of yam cultivation and production.
Conclusions
The work shows the indigenous food yam cultivation and livelihood practices in the Obudu local government area of Cross River State, Nigeria. It utilized socio-ecological theory to explain yam cultivation through its central actor, the farmer, who interacts with the socio-cultural, economic, and environmental factors, such as family, community, and the direct and indirect services and structures, that surround them.
The farmers, through yam cultivation in these locations, interact with the ecology by employing native wisdom in processes such as using farm waste for organic manuring and indigenous technologies that simplify tilling/mound making. Other processes include indigenous staking to allow for maximum sunlight interception for plant photosynthesis, resulting in increased tuber yield, and vine trailing, which enhances vegetative growth and thus efficient tuber bulking.
The trends in the level of contribution of labor by gender to yam production and postharvest activities are such that as the activities move from the field towards the home, women's contributions increase and men's contributions decline. This trend may be attributed to the fact that women do more work at home than men. Such homemaking activities include meal preparation and child care. Although yam is labeled a man's crop, men and women are involved in yam production; thus, each gender is permitted to make production and post production decisions. Both men and women engage in complementary yam production and post production tasks. In yam technology development and transfer, these roles of men and women are more pertinent issues for consideration than the labeling of yam as a man's crop.
The socio cultural construct that the female gender is more effective in resource management enables their strong presence in the tool and seed yam retail livelihood. It is perceived that their presence facilitates increased frequency in sales with accuracy and accountability in returns. The influence of cultural beliefs and spiritual practices pervades farmers' social attitudes toward improving farming operations. Crude implements (hoe, machetes, etc.) and indigenous methods of fertility enhancement (local manures, rotational farming and shifting cultivation, etc.) dominate farming operations and practices. These practices/experiences present less scope for engaging in modern and innovative farming methods and practices. It is also less likely to open up opportunities for large-scale investment and participation of the younger demographic groups. In another perspective, we found that some aspects of indigenous farming practices carry some sustainability implications. The cultural norm of living in harmony with nature probably sustains some conservative attitudes towards farming operations, as most farmers are less willing to compromise on their age-old tradition of a reciprocal relationship with nature. Shifting cultivation, rotational farming, the use of organic manure, and other indigenous sustainability practices may not produce a bigger effect of encouraging productivity and innovation. They have, however, been instrumental in sustaining the social economy and ecological health for the community.
Farmers engage in informal networking and local associations with diverse schemes to support and encourage members. Government and organizations could take advantage of these informal structures to reach out to farmers through micro-credits, education, information, training, supervision, and mechanization, among many other forms of support. There is a huge potential of registering the different groups of actors into formal social structures such as cooperatives in order to take advantage of bulk buying, selling, and transportation. This will strengthen these associations, creating the platform for accessing funds, information, education, and training from public and non-governmental institutions. There has to be special support administered to them through regular observations of their recordkeeping, which will reflect their present status, dynamics, and turnover.
Our study has demonstrated the relationship between the yam, the social structure, and the environment [25][26][27], focusing on the interconnectedness of places and food on the one hand and social practices and food production on the other. These relationships are driven and governed by indigenous knowledge systems and associated cultural norms, and hardly cohere with the need for innovation and modern practices. These are probably justified given the near absence of a public support system in the areas of education, information and communication, and improved farming methods. Our study enriches the wealth of literature on the subject of discourse while revealing man as the central driver of yam production through his interaction with his environment via a socio-ecological system. The findings will be utilized as a basis and foundation for further investigations on the evolving indigenous practices and sustainability impact as well as how the organization of yam cultivation will shape the local socio-cultural, economic, and other livelihood practices in the future. It will also serve as a planning instrument for effective utilization of cultures to enhance productivity.
A New Strategy for Extracting ENSO Related Signals in the Troposphere and Lower Stratosphere from GNSS RO Specific Humidity Observations
El Niño-Southern Oscillation related signals (ENSORS) in the troposphere and lower stratosphere (TLS) are the prominent source of inter-annual variability in the weather and climate system of the Earth, and are especially important for monitoring El Niño-Southern Oscillation (ENSO). In order to reduce the influence of quasi-biennial oscillations and other unknown signals compared with the traditional empirical orthogonal functions (EOF) method, a new processing strategy involving fusion of a low-pass filter with an optimal filtering frequency (hereafter called the optimal low-pass filter) and EOF is proposed in this paper for the extraction of ENSORS in the TLS. Using this strategy, ENSORS in the TLS over different areas were extracted effectively from the specific humidity profiles provided by the Global Navigation Satellite System (GNSS) radio occultation (RO) of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) mission from June 2006 to June 2014. The spatial and temporal responses of the extracted ENSORS to ENSO at different altitudes in the TLS were analyzed. The results show that the most suitable areas for extracting ENSORS are over the areas of G25 (−25◦S–25◦N, 180◦W–180◦E) −G65(−65◦S–65◦N, 180◦W–180◦E) in the upper troposphere (250–200 hpa) which show a lag time of 3 months relative to the Oceanic Niño index (ONI). In the troposphere, ENSO manifests as a major inter-annual variation. The ENSORS extracted from the N3.4 (−5◦S to 5◦N, 120◦W to 170◦W) area are responsible for 83.59% of the variability of the total specific humidity anomaly (TSHA) at an altitude of 250 hpa. Over all other defined areas which contain the N3.4 areas, ENSORS also explain the major variability in TSHA. In the lower stratosphere, the extracted ENSORS present an unstable pattern at different altitudes because of the weak ENSO effect. Moreover, the spatial and temporal responses of ENSORS and ONI to ENSO across the globe are in good agreement. Over the areas with strong correlation between ENSORS and ONI, the larger the correlation coefficient is, the shorter the lag time between them. Furthermore, the ENSORS from zonal-mean specific humidity monthly anomalies at different altitudes can clearly present the vertical structure of ENSO in the troposphere. This study provides a new approach for monitoring ENSO events.
Introduction
El Niño-Southern Oscillation (ENSO), which originates in the tropical Pacific, is a natural, coupled ocean-atmospheric cycle with an approximate timescale of 2 to 7 years [1][2][3]. ENSO is a complicated system which has a profound effect, not only over the tropics, but also across the globe through teleconnections [3][4][5][6][7][8][9][10], and many aspects of its evolution are still not well understood [1]. Currently, the phases and strengths of ENSO events are quantified by indices corresponding to time, such as the equatorial tropical Pacific sea surface temperatures (SSTs) (e.g., the Niño3.4 index), and differences between the anomalies of a meteorological variable observed at two different stations (e.g., the Southern Oscillation Index (SOI)) [1,11]. Over the past two or three decades, the frequent occurrence of the ENSO phenomenon has attracted an upsurge in interest regarding the description and explanation of inter-annual fluctuations in weather and climate.
Investigating the El Niño-Southern Oscillation related signals (ENSORS) in the troposphere and lower stratosphere (TLS) and their response to ENSO are the subjects of many climate change detection studies [12,13]. Several researchers have studied ENSORS and analyzed their responses to ENSO events in the TLS. In the troposphere, a warming effect was observed during the warm phase of the ENSO cycle, and the maximum response to ENSO events occurred with a lag of one or two seasons [14][15][16][17][18][19]. In the lower stratosphere, a significant cooling signal was associated with El Niño events at low latitudes [20,21]. In contrast, a warming signal was found over the Arctic stratosphere [21,22]. The transition between warming and cooling occurred across the tropopause during different ENSO events [13,16]. Most of the above-mentioned conclusions are associated with temperature signals obtained during ENSO events in the TLS. However, the response of the TLS to different ENSO events on the basis of water vapor datasets has not been analyzed thoroughly.
Global Navigation Satellite System (GNSS) radio occultation (RO) datasets have the advantages of long-term stability, self-calibration, high accuracy, good vertical resolution, and all-weather global coverage. In recent years, GNSS RO measurements have proven useful for ENSO studies [13,17,23,24]. Lacker et al. [17] described ENSORS in the upper TLS region over the tropics using RO observations. Scherllin-Pirscher et al. [13] investigated the vertical and spatial structures of ENSORS, mainly over the tropical areas, by referring to RO temperature profiles and the total column water vapor variable derived from RO water vapor pressure. Teng et al. [23] characterized global precipitable water in ENSO events from 2007 to 2011 using datasets from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) mission. Sun et al. [24] illustrated the equatorial ENSORS at the tropopause observed by COSMIC from July 2006 to January 2012, and the COSMIC tropopause parameters were found to be useful for monitoring ENSO events. Although these studies described the vertical and spatial structures of atmospheric ENSORS over local regions, the phases and strengths of ENSO events were not fully determined. The temporal and spatial responses of the ENSORS in the TLS to ENSO were not clearly discussed. In addition, most of the studies were confined to tropical regions, and the impact of ENSO on a global scale needs further study.
ENSO is caused by complex interactions between the oceans and atmosphere, primarily through the transfer of heat and water vapor, and its evolution is highly related to variations in SST, wind, water vapor, and precipitation [25][26][27][28]. In the study of Lau et al. [26], approximately 80% of water vapor sensitivity to SST (i.e., deduced from ENSO anomalies) was caused by the transport of water vapor in large-scale circulation. Specific humidity is also a good indicator of ENSO because its value remains constant, even when large fluctuations exist in the temperature field, as long as moisture is not added to or reduced from a given mass (i.e., no phase change occurs) [29].
Identifying ENSORS in the TLS from specific humidity variability effectively is crucial, considering that the specific humidity variability in the TLS is not only affected by ENSO, but also by month-to-month variations, annual variations, quasi-biennial oscillation (QBO), as well as other unknown (unmodelled) factors (i.e., the fact that different underlying surfaces have different impacts on the lower troposphere and other factors beyond our current knowledge). Most of the previous studies have only used empirical orthogonal function (EOF) analysis to extract the ENSORS from the monthly anomalies of atmospheric parameters in the TLS [30,31]. However, the ENSORS extracted only by EOF may be artificially mixed with some residual noise or unknown (unmodelled) signals uncorrelated with ENSO [32], which could lead to uncertainties in the analyses [32,33]. In this work, a new strategy is brought forward, which is more stable in terms of specific humidity variability for the extraction of ENSORS in the TLS, compared with only using the EOF analysis. Under this strategy, considering that the annual and month-to-month variations have been eliminated from the monthly specific humidity anomalies, and in order to further eliminate QBO, other unrecognized signals, and the residual noises of annual and month-to-month variations, the monthly anomalies of specific humidity are filtered by a low-pass filter with the optimal filtering cut-off frequency, before being processed by the EOF analysis. Using this strategy, ENSORS in the TLS are extracted from the GNSS RO specific humidity variability effectively, and the responses of the ENSORS to ENSO events in the TLS over different areas are explained quantitatively.
The main contributions and novelties in this study are as follows: (1) these ENSORS in the TLS are the first to be extracted from GNSS RO specific humidity profiles with this new strategy; (2) the spatial and temporal responses of ENSORS at different altitudes in the TLS are fully analyzed, and the most suitable areas and altitudes for extracting the ENSORS are found; (3) the vertical structure of ENSO in the troposphere is reconstructed with the GNSS RO specific humidity profiles.
Details of the data sets used, ENSO indicators and the extracting procedures are given in Section 2. The results and discussion are outlined in Section 3, and the conclusions are presented in Section 4.
COSMIC GNSS RO Data
The data used in this study are specific humidity profiles retrieved from COSMIC GNSS RO for the period, June 2006-June 2014. The profiles are provided by the COSMIC Data Analysis and Archive Center (http://cosmicio.cosmic.ucar.edu/cdaac/index.html) of the University Corporation for Atmospheric Research. The COSMIC mission, which comprises a constellation of six micro low-earth-orbit (LEO) satellites at an altitude of 800 kilometers, was successfully launched on 15 April 2006 (see Rocken et al. [34] for details about the mission). The basic observations from GNSS RO measurements are comprised of GNSS radio signal phases between the LEOs and the occulting GNSS satellite. Then, bending angles can be obtained from the signal phases. Finally, on the basis of the spherical symmetry assumption, refractivity N can be derived from the bending angles [35,36]. In the neutral atmosphere, the atmospheric temperature (T in Kelvins), atmospheric pressure (P in hpa), and water vapor pressure (P_w in hpa) are related to N by Equation (1) [37].
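Equation (1) did not survive extraction. The relation typically used in GNSS RO retrievals is the standard Smith-Weintraub refractivity formula, and presumably the paper uses this form (the exact constants cited in [37] may differ slightly):

\[
N = 77.6\,\frac{P}{T} + 3.73\times 10^{5}\,\frac{P_w}{T^{2}}
\qquad (1)
\]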
The recovery of P_w or T from N measurements using the one-dimensional variational retrieval (1D-Var) method [38,39] requires temperature or water vapor data derived from a background model (e.g., the ERA-Interim reanalysis). Specific humidity (q) can then be derived from P_w, as expressed by Equation (2) [40].
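Equation (2) is also missing from the extracted text; given the definition of ε that follows, the standard relation between specific humidity and water vapor pressure is presumably:

\[
q = \frac{\varepsilon\,P_w}{P - (1 - \varepsilon)\,P_w}
\qquad (2)
\]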
where ε = R_d/R_v (≈ 0.622), and R_d and R_v are the dry and moist air gas constants, respectively. The COSMIC mission provided up to 2000 daily profiles with almost uniform global coverage and a horizontal resolution of around 300 km [23]. More than 90% of the profiles can penetrate the lowest 2 km of the troposphere by employing the "open-loop tracking" technique [23,41]. According to independent evaluations, the COSMIC GNSS RO specific humidity profiles in the TLS are sufficiently accurate for climate studies [41-43].
ENSO Indicators
To best define an ENSO event, the year of occurrence, strength, duration, and timing are considered [1]. Several different indices have been proposed for describing ENSO variability, such as the Niño1+2, Niño3, Niño4, and Niño3.4 (N3.4) indices. These indices were discussed in detail by Hanley [1]. The predictability of the N3.4 index is the highest among the indices. The N3.4 index was proven to be a good descriptor of ENSO [44], and several studies used the N3.4 index to represent ENSO events [45][46][47].
The Oceanic Niño index (ONI), which is defined by the three-month running mean SST anomaly in the N3.4 region (−5° S to 5° N, 120° W to 170° W), is available online from http://ggweather.com/enso/oni.htm. The National Oceanic and Atmospheric Administration (NOAA) uses ONI as the de facto standard for identifying El Niño (warm) and La Niña (cool) events in the tropical Pacific. The ONI defines an El Niño event as five consecutive overlapping three-month periods at or exceeding a +0.5 °C anomaly, and a La Niña event as five such periods at or below a −0.5 °C anomaly in the N3.4 region. The events can be further categorized as weak (0.5-0.9), moderate (1.0-1.4), strong (1.5-1.9), and very strong (≥2.0) SST anomalies, in which ONI equals or exceeds the threshold for at least three consecutive overlapping three-month periods. In the present study, the ONI is used to represent ENSO signals.
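To make the ONI-based event definition concrete, the following minimal Python sketch classifies El Niño and La Niña episodes from a monthly ONI series using the rule described above (five consecutive overlapping three-month periods at or beyond ±0.5 °C). The function name and input format are illustrative and not taken from the paper.

```python
import numpy as np

def classify_enso_events(oni, threshold=0.5, min_run=5):
    """Label El Nino / La Nina episodes from a 1-D array of monthly ONI values
    (each value is already a three-month running-mean SST anomaly)."""
    oni = np.asarray(oni, dtype=float)
    labels = np.array(["neutral"] * len(oni), dtype=object)

    def mark_runs(mask, label):
        # Find runs of consecutive True values and keep only runs >= min_run.
        run_start = None
        for i, flag in enumerate(np.append(mask, False)):
            if flag and run_start is None:
                run_start = i
            elif not flag and run_start is not None:
                if i - run_start >= min_run:
                    labels[run_start:i] = label
                run_start = None

    mark_runs(oni >= threshold, "El Nino")
    mark_runs(oni <= -threshold, "La Nina")
    return labels

# Example: a toy ONI segment spanning one warm event
oni_demo = [0.1, 0.6, 0.9, 1.2, 1.4, 1.1, 0.7, 0.3, -0.2]
print(classify_enso_events(oni_demo))
```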
Methodology
COSMIC GNSS RO specific humidity observations were employed to quantitatively understand the response to ENSO in the TLS for the period, June 2006-June 2014. The process of extracting the ENSORS in the TLS (see Figure 1) was as follows:

(1) The original daily average data were interpolated into 5° × 5° longitude-latitude grid points across the globe at different altitudes in the TLS using nearest neighbor interpolation with the inverse distance weighted method. Applying the method results in an error at the level of the Earth's oblateness, but the error can be ignored. Owing to the relatively sparse distribution of COSMIC GNSS RO profiles over the polar region, the accuracy of the grid points in this region is lower than that of the other regions. Considering that the horizontal resolution of COSMIC observations is about 3° × 3° (~300 km), and to avoid the impact of tangent point horizontal drift of the COSMIC occultation point (the drift from altitude 1 km to 10 km is about 102 km, and the drift from 1 km to 20 km is about 136 km [41]), a grid resolution of 5° × 5°, which is slightly lower than the actual resolution of the occultation data, was adopted. The gridded specific humidity data are presented for 250 standard pressure levels (from 1000-100 hpa in 5 hpa steps, and 100-30 hpa in 1 hpa steps) by linear interpolation from 1000 to 30 hpa (~0-25 km) in the vertical direction.

(2) For each isobaric surface, the specific humidity monthly anomalies at each grid point were extracted by eliminating the annual and month-to-month variations in the monthly specific humidity time series at each point. The annual mean anomalies were obtained by subtracting the mean annual cycle (~372-day cycle) identified via fast Fourier transform at each grid point for the period, June 2006-June 2014. Then, the time series of monthly mean anomalies were obtained by taking the monthly average of the annual mean anomalies. The time series were then smoothed with a 1-2-1 binomial filter to reduce month-to-month variations [13,22].

(3) For each isobaric surface, the monthly mean anomalies at each grid point were filtered by the low-pass filter with different filtering cut-off frequencies. Considering that the data length is 97 months for the period of June 2006-June 2014, the minimum cut-off frequency was set as 1/48.5 (assuming the maximum cycle of ENSORS is 48.5 months), the maximum cut-off frequency was set as 1, and so the cut-off frequency could be set to 1, 1/1.5, 1/2.0, ..., 1/48.5 (unit: 1/month).
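Before turning to the filter design, the anomaly extraction in Step (2) could look roughly like the following Python sketch. The gridding in Step (1) is assumed to have been done already, and the annual cycle is estimated here simply as the mean of each calendar month, a simplification of the FFT-based approach described in the paper; all function names are illustrative.

```python
import numpy as np

def remove_annual_cycle(monthly_series, period_months=12):
    """Subtract an estimate of the mean annual cycle from a monthly time series."""
    x = np.asarray(monthly_series, dtype=float)
    cycle = np.array([np.nanmean(x[m::period_months]) for m in range(period_months)])
    reps = int(np.ceil(len(x) / period_months))
    return x - np.tile(cycle, reps)[: len(x)]

def binomial_121_filter(x):
    """Apply a 1-2-1 binomial filter to reduce month-to-month variations."""
    x = np.asarray(x, dtype=float)
    smoothed = x.copy()
    smoothed[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return smoothed

# Example: 97 months of synthetic specific humidity at one grid point and level
rng = np.random.default_rng(0)
t = np.arange(97)
q = 3.0 + 0.5 * np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(97)
anomaly = binomial_121_filter(remove_annual_cycle(q))
```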
To determine the optimal cut-off frequency, we required a numerical low-pass filter to eliminate the high-frequency signals. The frequency response, H_1(f), of an ideal low-pass filter can be described by Formula (3).
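Formulas (3) and (4) appear to have been lost in extraction. For an ideal low-pass filter with cut-off frequency f_1, the standard frequency response and the corresponding impulse response (obtained by the inverse Fourier transform) would presumably be:

\[
H_1(f) =
\begin{cases}
1, & |f| \le f_1 \\
0, & |f| > f_1
\end{cases}
\qquad (3)
\]

\[
h_1(t) = \int_{-f_1}^{f_1} e^{\,i 2\pi f t}\,\mathrm{d}f
       = \frac{\sin(2\pi f_1 t)}{\pi t}
       = 2 f_1\,\operatorname{sinc}(2 f_1 t)
\qquad (4)
\]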
where f_1 is the cut-off frequency, and f is the frequency of the signals of the monthly mean anomalies. Then, the corresponding impulse response of an ideal low-pass filter, h_1(t), obtained using the inverse Fourier transform method, can be represented by Formula (4), from which the time series h_1(t) at different cut-off frequencies can be obtained.

(4) For each isobaric surface, the filtered time series of monthly mean anomalies (h_1(t)) obtained from Step (3) were analyzed by EOF for all filtering cut-off frequencies. For each filtering cut-off frequency, the cross-correlation was carried out between the time series of EOF components and ONI. The optimal cut-off frequency of the low-pass filter was determined when the maximum absolute value of the correlation coefficient was obtained. The time series of EOF components which have the maximum absolute values of the correlation coefficient with the ONI were regarded as ENSORS.
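A minimal Python sketch of the selection loop in Steps (3)-(4) is given below: for each candidate cut-off frequency the gridded anomalies are low-pass filtered, EOF (PCA) time series are computed, and the cut-off and EOF mode whose time series is most strongly lag-correlated with the ONI are retained. The filter here is a Butterworth low-pass from SciPy used as a stand-in for the ideal filter described above, and all function names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eof_time_series(anom, n_modes=5):
    """EOF (PCA) of an (n_months, n_gridpoints) anomaly matrix via SVD."""
    x = anom - anom.mean(axis=0)
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_modes] * s[:n_modes]        # principal-component time series

def max_lag_corr(pc, oni, max_lag=24):
    """Maximum |Pearson correlation| between a PC and ONI over lead/lag times.
    A positive lag means the PC lags the ONI by that many months."""
    best = (0.0, 0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = pc[lag:], oni[: len(oni) - lag]
        else:
            a, b = pc[: len(pc) + lag], oni[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best[0]):
            best = (r, lag)
    return best

def select_optimal_cutoff(anom, oni, cutoffs):
    """Scan candidate cut-off frequencies (cycles per month) and return the
    cut-off, EOF mode, and lag giving the strongest correlation with ONI.
    For monthly data, cut-offs must stay below the Nyquist frequency of
    0.5 cycles per month, e.g. cutoffs = 1.0 / np.arange(2.5, 49.0, 0.5)."""
    best = None
    for fc in cutoffs:
        b, a = butter(4, fc, btype="low", fs=1.0)   # fs = 1 sample per month
        filtered = filtfilt(b, a, anom, axis=0)
        pcs = eof_time_series(filtered)
        for mode in range(pcs.shape[1]):
            r, lag = max_lag_corr(pcs[:, mode], oni)
            if best is None or abs(r) > abs(best["r"]):
                best = {"cutoff": fc, "mode": mode + 1, "r": r, "lag": lag}
    return best
```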
The process of extracting the ENSORS in the TLS is shown in Figure 1.
In order to evaluate the effect of the optimal low-pass filter method, the absolute values of correlation coefficients between the ENSORS derived from the following two schemes and ONI were compared.
Scheme 1: The monthly mean anomalies of specific humidity from which the annual and month-to-month variations were subtracted were directly processed with EOF, which means that Step (3) in the process of extracting ENSORS is ignored in this scheme.
Scheme 2: The monthly mean anomalies of specific humidity from which the annual and month-to-month variations were subtracted were processed with the optimal low-pass filter and EOF. This is the complete process of the proposed method described above (Figure 1).
The compared results are discussed in Section 3.1. The results verify that Scheme 2 is more suitable for extracting the ENSORS from the TLS (see next section). To determine the most suitable areas from which to extract ENSORS, the globe was divided into 27 regions. The detailed definitions for the 27 regions are listed in Table 1.
In this study, we used the Pearson cross-correlation coefficient to describe the response strength to ENSO events in the TLS. The correlation coefficients in this work are all statistically significant according to Student's t-tests at the 95% confidence level [48]. Figure 2 gives an example which clarifies the function of each step in the data processing procedure for extracting ENSORS. In this work, all grid data were processed with the proposed strategy to extract ENSORS; hence, in this figure, we use the global averages of specific humidity at an altitude of 245 hpa as the original data. In addition, we have provided the step-by-step results, i.e., the monthly anomalies from which the mean annual cycle has been eliminated, from which month-to-month variations have been further reduced by applying the 1-2-1 binomial filter, and from which other unknown factors and the residual signals from Step (2) in Figure 1 have been further reduced by applying the low-pass filter with the optimal cut-off frequency. The ENSORS were extracted either only by EOF (Scheme 1) or by the low-pass filter and EOF (Scheme 2). The correlation coefficients in Figure 2e,f are statistically significant according to Student's t-tests at the 95% confidence level. The comparison between Figure 2e and Figure 2f shows that, compared with Scheme 1, Scheme 2 improves the correlation coefficient between the extracted ENSORS and the ONI, although the correlation coefficient between the ENSORS extracted with Scheme 1 and the ONI can also be as high as 0.927, lagging the ONI by 2 months.
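The 95% significance screening mentioned above could be implemented, for example, as in the short sketch below, which uses the standard t statistic for a Pearson correlation; the effective sample size is taken here as the raw number of months, which ignores autocorrelation, and the function name is illustrative.

```python
from scipy import stats

def corr_is_significant(r, n, alpha=0.05):
    """Two-sided Student's t-test for a Pearson correlation coefficient r
    computed from n paired samples."""
    t = r * ((n - 2) / (1.0 - r ** 2)) ** 0.5
    p = 2.0 * stats.t.sf(abs(t), df=n - 2)
    return p < alpha

print(corr_is_significant(0.927, 97))   # True: highly significant for 97 months
```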
Table 1. Regions defined in this study (columns: Abbreviation, Region, Latitude, Longitude).
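As an illustration of the pre-processing described above (removal of the mean annual cycle, 1-2-1 binomial smoothing, and the significance test), the following minimal Python sketch could be used; the handling of the series end points and the exact test call are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def monthly_anomalies(x, months):
    """Remove the mean annual cycle: subtract the long-term mean of each
    calendar month. x and months (values 1..12) are 1-D arrays of equal length."""
    x = np.asarray(x, dtype=float)
    anom = x.copy()
    for m in range(1, 13):
        sel = months == m
        anom[sel] -= x[sel].mean()
    return anom

def binomial_121(x):
    """1-2-1 binomial filter that damps month-to-month variations
    (the end points are left unchanged in this sketch)."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    return y

def pearson_with_significance(a, b, alpha=0.05):
    """Pearson correlation and a two-sided significance test at the 95% level."""
    r, p = stats.pearsonr(a, b)
    return r, p, p < alpha
```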
Results and Discussion
We extracted ENSORS from COSMIC GNSS RO specific humidity profiles using the two methods outlined in Section 2.3. Then, we analyzed the response of the ENSORS to ENSO over different areas and identified the suitable areas and altitudes to extract ENSORS in the TLS. Finally, the spatial and vertical structures of ENSO were investigated with the proposed method in the TLS.
Extracting ENSORS in the TLS
The ENSORS over the different areas (Table 1) were extracted in the TLS both by the proposed method (the optimal low-pass filter + EOF, Scheme 2) and by EOF alone (Scheme 1). The maximum lead/lag time was set to 24 months. Given that the ONI is used to represent ENSO signals, we analyzed the ENSORS extracted by the two methods from the areas that contain the N3.4 area and compared them with the ONI in the TLS.
In this work, we define the intensity of the response to ENSO as the absolute value of the correlation coefficient between the extracted ENSORS and the ONI. Figure 3 depicts this absolute value at each isobaric pressure level in the TLS. The absolute values of the correlation coefficients based on Scheme 2 are significantly higher than those based on Scheme 1 for all statistical areas in the TLS, especially in the lower stratosphere and lower troposphere. Therefore, we chose Scheme 2 to analyze the response to ENSO in the TLS. In addition, the ENSORS derived from the G25-G65 areas with the proposed method (Scheme 2) present similar patterns in the TLS. The stability of the ENSORS extraction and the strongest response to ENSO at different altitudes in the TLS are analyzed in Section 3.2.
Response to ENSO in the TLS
In the TLS, the time series of monthly mean anomalies of specific humidity at different altitudes may be affected by factors other than ENSO, such as the QBO in the upper troposphere and lower stratosphere, the effect of different underlying surfaces over different regions, and other unknown factors. Although these mixed factors cannot be completely identified and separated, their differences at different altitudes are reflected by the EOF modes and by the filtering frequencies of the optimal low-pass filter. At different altitudes, the EOF time modes that have the maximum absolute correlation with the ONI are not necessarily the same. For a given height range, if the chosen EOF mode remains almost unchanged with altitude, we consider that the ENSORS can be extracted stably over this height range. Consequently, the curves of the absolute correlation coefficients between the extracted ENSORS and the ONI are smooth, and the lead/lag times and filtering frequencies also remain almost unchanged over this height range. Thus, we selected two variables, the EOF mode and the filtering frequency of the optimal low-pass filter, to analyze the stability of the ENSORS extraction in the TLS (Figure 4a,b).
As shown in Figure 4, the ENSORS are stable in the middle and upper troposphere (~560 to ~150 hPa) but not very stable in the lower troposphere and lower stratosphere. Because it lies next to the underlying surface, the lower troposphere experiences intense convection, and water vapor there is affected by factors other than ENSO, as shown by the different EOF modes at different altitudes in Figure 4a and the different filtering frequencies in Figure 4b. Below 560 hPa the selected EOF mode is neither the first nor the second one, and the EOF modes change dramatically over several areas, especially near the surface (Figure 4a). The filtering frequencies also vary greatly at different altitudes below 560 hPa (Figure 4b): they are larger than 0.4 (periods shorter than 2.5 months) and increase to as much as 0.6 (1.6 months) near the Earth's surface. These results indicate that ENSO is not the only factor influencing the monthly anomalies of specific humidity in the lower troposphere. In the lower stratosphere, the QBO dominates the inter-annual variations and the ENSO effect is weak [18,21,22]. Thus, the maximum absolute values of the correlation coefficients in the lower stratosphere (>0.7) are smaller than those in the troposphere (>0.8). Moreover, the selected EOF modes are high and the filtering frequencies are unstable at different altitudes in the lower stratosphere. With regard to the temporal response to ENSO in the lower stratosphere, the extracted ENSORS do not present similar patterns at different altitudes: at some altitudes the ENSORS lag the ONI by several months, while at others they lead it, in some cases by more than 24 months. As shown in Figure 4d, the lag of the ENSORS relative to the ONI in the troposphere is 0-6 months (≤2 seasons), which is consistent with previous results based on temperature profiles [13,16,19]. Figure 4c shows that, for a given altitude in the TLS, the absolute values of the correlation coefficients between the extracted ENSORS and the ONI do not vary much over the 19 areas defined here, especially in the middle and upper troposphere (~560 to ~150 hPa). The maximum values are observed over the areas G25-G65. These results verify that, although ENSO originates from the tropical Pacific, the area with the strongest response to ENSO extends to G25-G65. The peak values are attained in the upper troposphere (250-200 hPa) with a lag time of 3-6 months (≤2 seasons), as shown in Figure 4c,d; the maximum values exceed 0.94 at some altitudes in the upper troposphere. As shown in Table 2, the absolute values over different areas and altitudes in the TLS reach their maximum when the extracted ENSORS lag the ONI by 3-6 months. The corresponding EOF modes over the different areas are the first modes (N3.4, G5, G10, and G15) or the second modes (G20-G90). Over the N3.4 area, the first EOF mode accounts for 82.59% of the monthly anomaly variance of the specific humidity. With the expansion of the region, the percentage of variance explained by the corresponding EOF modes decreases gradually, except for the G90 area (10.03%) (Table 2). These results further confirm that ENSO is the most important source of inter-annual variation over low-latitude areas in the troposphere. ENSO can also explain the major variations in the total specific humidity anomaly (TSHA) over other areas which contain the N3.4
area. Besides, the maximum absolute values are all found in the upper troposphere. As noted in Section 3.1, the areas G25-G65 present similar patterns in the TLS. As shown in Table 2, the EOF mode, the filtering frequency, and the lag time all tend to be stable when the absolute values of the correlation coefficients between the extracted ENSORS and the ONI reach their maximum. Therefore, to further analyze the results of Figure 4, the ENSORS extracted from the areas G25-G65 are depicted in Figure 5. The extracted signals, which are very stable over the altitudes of 940-150 hPa over the G25 area, are similar to those over the G30-G65 areas (Figure 5). These results indicate that, across all of the G25-G65 areas, the monthly anomalies of specific humidity at altitudes of 940-150 hPa are affected by similar factors. The strongest responses to ENSO were found at altitudes of 250-200 hPa with a lag of 3 months (Table 2). In order to study the influence of ENSO over other areas (i.e., excluding N3.4), we extracted ENSORS from the areas N30, NHM, ARC, N90, S30, SHM, ANT, and S90 (Figure 6). The ENSORS of these eight areas differ from those of the N3.4 area at different altitudes. The maximum absolute values of the correlation coefficients between the ENSORS derived from these areas and the ONI can still exceed 0.93 (Table 3), which indicates the global effect of ENSO; ENSORS can thus be extracted from the specific humidity in the TLS over any geographical area. However, as shown in Figure 6, the response time, the corresponding EOF modes, and the filtering frequency are very unstable in the TLS, especially over the NHM, ARC, SHM, and ANT areas. Thus, these eight areas, excluding the N3.4 area, are not the most suitable areas for extracting ENSORS in the TLS. From Table 3, the variances explained by the selected EOF modes are greater than 21% over the N30 and SHM areas and become very small over the NHM, ARC, and ANT areas, indicating that ENSO has a significant effect over the N30 and SHM areas. After extracting the ENSORS in the TLS, their time series were compared with the ONI. The time series of ENSORS at each isobaric surface were standardized as x' = (x − x̄)/σ_x, where x' is the standardized time series, x is the ENSORS time series, x̄ is its average, and σ_x is its standard deviation, so that both indices are on the same scale. Figure 7 depicts the extracted ENSORS after moving them forward by the corresponding number of months (3-6 months, as listed in Table 2) over different areas during the period June 2006-June 2014. The extracted ENSORS and the ONI show similar inter-annual variability after this shift (Figure 7). The absolute values of the correlation coefficients between the ENSORS extracted from the G20-G65 areas and the ONI reach their maximum (~0.94) after a delay of three months, whereas the smallest maximum (~0.908) occurs after a delay of five months over the N3.4 area. These results verify that, in the TLS, the strongest response area is not the N3.4 area, because the ENSO signal has propagated into the atmosphere of the middle and high latitudes by means of planetary Rossby waves and the meridional circulation (Table 2) [22].
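The standardization and forward shift used to compare the ENSORS with the ONI (Figure 7) can be written compactly as below; this is only an illustrative sketch, and the shift of `lag` months (taken from Table 2) is applied here purely for plotting purposes.

```python
import numpy as np

def standardize(x):
    """Scale a time series to zero mean and unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def shift_forward(x, lag):
    """Move a series forward by `lag` months (lag >= 0), so that a signal
    lagging the ONI by `lag` months lines up with it; trailing values are NaN."""
    x = np.asarray(x, dtype=float)
    y = np.full_like(x, np.nan)
    if lag > 0:
        y[:-lag] = x[lag:]
    else:
        y[:] = x
    return y

# Example usage: compare ENSORS lagging the ONI by 3 months with the ONI itself
# aligned_ensors = standardize(shift_forward(ensors, 3))
```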
Building on the above analysis, the G25-G65 areas are the most suitable for extracting ENSORS from the upper troposphere (250-200 hPa), with a lag time of 3 months, using Scheme 2. The ENSORS extracted from these altitudes over these areas and the ONI are highly correlated; the correlation coefficient can exceed 0.94, which is far greater than the results given by Sun et al. [24], who analyzed the correlation between the ONI and time series of tropopause parameters (e.g., the height, pressure, temperature, and potential temperature of the tropopause) derived from COSMIC over the N3.4 area. In their work, the QBO and other unknown factors were not eliminated from the time series of tropopause parameters. Beyond the tropopause, the results of this work indicate that the time series of ENSORS extracted from GNSS RO specific humidity with the proposed method provide an opportunity to describe the phases and strengths of ENSO events in the upper troposphere.
Response to ENSO at an Altitude of 245 hPa across the Globe
Only by finding the altitudes with the strongest ENSO responses in the TLS can the spatial and temporal responses to ENSO in the atmosphere be adequately described. In Section 3.2, we established that the strongest response to ENSO across the globe is situated at the 245 hPa level, lagging the ONI by 5 months (Table 2). At this level, the maximum absolute value of the correlation coefficient can reach 0.936, and the ENSORS extracted from the globe are in the opposite phase to the ONI. After moving forward by 5 months, the extracted global ENSORS show inter-annual variability similar to that of the ONI (Figure 7). To analyze the spatial and temporal responses to ENSO across the globe, the optimal low-pass filtering and EOF analysis methods were employed to extract ENSORS at an altitude of 245 hPa. Figure 8 depicts the maximum correlation coefficients between ENSORS and the ONI across the globe and the corresponding lag/lead times at the 245 hPa level in the troposphere. As shown in Figure 8a,b, the spatial and temporal responses of the ONI and the extracted ENSORS across the globe are in good agreement. The strongest correlations with ENSO are mainly concentrated over the G30 area. Positive responses are observed over East Africa, the equatorial Indian Ocean, and the central and eastern tropical Pacific; in contrast, negative responses are found over the western Pacific, the coastal countries of South America, and the equatorial Atlantic Ocean. Negative correlations are also found at latitudes of 20-40° in both hemispheres. The spatial response patterns presented in Figure 8a,c are similar to corresponding results from the literature [44] describing the correlations between SST and the ONI. The corresponding temporal responses of the ONI and ENSORS are presented in Figure 8b,d. Over the strongly correlated areas, delayed responses to the ONI are observed (Figure 8b), and the patterns indicate that the larger the correlation coefficient, the shorter the lag time. Over the mid-latitudes (20-60°) of the Southern Hemisphere, north of Australia, the western waters of Colombia, Venezuela, Ecuador, and Peru, the northeastern waters of Brazil, and other regions, the temporal responses lead the ONI by approximately 1-2 years.
Specific Humidity Response to ENSO in the Vertical Direction
Previous studies [13,16,18] proposed using the zonal-mean temperature components in the TLS for the analysis of different ENSO signals. This study adopts the same approach to analyze the vertical structure derived from the GNSS RO specific humidity and then investigates the specific humidity response to ENSO in the vertical direction. This section describes the ENSORS extracted from zonal-mean specific humidity components in the TLS by the proposed method.
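As a simple illustration (not the authors' code), the zonal-mean monthly anomalies used in this section can be formed by averaging the gridded anomalies over longitude before applying the same filter-plus-EOF extraction; the assumed array layout is (time, latitude, longitude).

```python
import numpy as np

def zonal_mean_anomalies(anoms):
    """anoms: monthly anomalies with shape (n_time, n_lat, n_lon).
    Returns the zonal mean with shape (n_time, n_lat), ignoring missing values."""
    return np.nanmean(anoms, axis=2)
```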
Figure 9 depicts the vertical structure and temporal response to ENSO in the TLS. The correlation coefficient between the ENSORS extracted from the zonal-mean specific humidity monthly anomalies at different altitudes and the ONI is 0.829 at a delay of four months relative to the ONI. The zonal-mean correlation coefficient cross-sections (Figure 9a) present strong positive correlations over the tropical areas in the troposphere with a lag time of 4-8 months (Figure 9b). In the upper troposphere (400-200 hPa), the areas showing strong positive correlations with ENSO extend from the tropics to latitudes of 45°S-30°N. At the same time, negative correlations are found centred at 30°S and 15°N in the middle and lower troposphere, although these correlations are not statistically significant; the corresponding EOF spatial mode presents a similar pattern (Figure 9c). Negative correlations are also evident over 45°S-60°S in the middle troposphere and over 70°N-75°N in the lower troposphere. Over the high latitudes (70°S-90°S and 75°N-90°N), positive correlations are found in the middle troposphere, and over the Northern high latitudes they penetrate downward to the lower troposphere, whereas over the Southern high latitudes negative correlation coefficients are found in the lower troposphere. A similar pattern is observed in the corresponding EOF spatial mode (Figure 9c), specifically over the G30 and high-latitude areas. Figure 9b shows the corresponding temporal responses to ENSO derived from the zonal-mean specific humidity monthly anomalies. Over the tropics, the temporal responses present a pattern in which a larger correlation coefficient between the zonal-mean specific humidity monthly anomaly and the ONI is accompanied by a shorter lag relative to the ONI. These findings are similar to the results for the temporal responses to ENSO at an altitude of 245 hPa across the globe (Figure 8b,d). The corresponding lag time responses relative to the ONI are approximately 0-2 months over the Southern high latitudes (70°S-90°S) in the middle troposphere, over the Northern high latitudes (75°N-90°N) below 200 hPa in the troposphere, and over 45°S-60°S and 45°N-60°N in the troposphere. Furthermore, the vertical responses to ENSO are very slow (>12 months) in the lower troposphere over the high latitudes of the Southern Hemisphere, around 15°N, and over 60°N-75°N in the troposphere (Figure 9d).
Conclusions
In the current study, the ENSORS of specific humidity profiles in the TLS observed by COSMIC were extracted by successively applying an optimal low-pass filter and EOF analysis. Different altitudes and regions were considered during the period June 2006-June 2014. Given that monthly mean anomalies are affected by several factors besides ENSO, the optimal low-pass filtering method was proposed to eliminate the QBO and other unknown (unmodelled) factors at the different altitudes in the TLS. EOF analysis was then conducted to extract the ENSORS. Finally, the spatial and temporal responses to ENSO in the TLS were investigated with the proposed method.
Our findings can be summarized as follows: (1) The absolute values of the correlation coefficients between ENSORS and ONI based on Scheme 2 (optimal low-pass filtering + EOF) were significantly higher than those of Scheme 1 over the statistical areas in the TLS, especially in the lower stratosphere and lower troposphere. The proposed method for extracting the ENSORS in the TLS was more stable and effective than using the EOF analysis alone (Scheme 1). The proposed method can effectively eliminate the influence of the QBO in the lower stratosphere and of other unknown (unmodelled) factors in the TLS.
(2) The ENSORS can be extracted stably in the middle and upper troposphere (~560-150 hPa) over the different areas containing the N3.4 area (5°S-5°N, 120°W-170°W). The strongest responses were observed in the upper troposphere (250-200 hPa) with a lag time of 3-6 months (≤2 seasons). The maximum absolute value of the correlation coefficient between the extracted ENSORS and the ONI can exceed 0.94 over the G25-G65 areas, and the ENSORS extracted over all of these areas present a similar, stable pattern over altitudes of 940-150 hPa. The temporal response to ENSO in the troposphere can lag the ONI by 0-6 months (≤2 seasons). The most suitable areas and altitudes for extracting ENSORS are the G25-G65 areas in the upper troposphere (250-200 hPa), with a lag time of 3 months relative to the ONI. The good agreement with the ONI indicates that the ENSORS extracted by the proposed strategy are useful for monitoring ENSO events.
(3) In the tropical troposphere, the ENSORS are the main source of inter-annual variation. The first EOF mode of the N3.4 area corresponds to 83.59% of the total specific humidity anomaly variance at an altitude of 250 hPa. With the expansion of the region, the variance explained by the corresponding EOF modes at the strongest response altitudes decreases gradually over the different areas. Across the globe, the second EOF mode, which is correlated with the ONI, can explain 10.03% of the total specific humidity anomaly variance at the strongest response altitude (245 hPa). Patterns for the temporal and spatial responses to ENSO in the upper troposphere were established in the current study: the larger the correlation coefficient, the shorter the lag time.
(4) In the lower stratosphere, the ENSORS proved not to be the most prominent source of inter-annual variation. The maximum absolute values of the correlation coefficients between the ENSORS and the ONI in the lower stratosphere (>0.7) were smaller than those in the troposphere (>0.8), and the extracted ENSORS present an unstable pattern at different altitudes.
(5) A clear vertical structure of ENSO in the troposphere was constructed with the ENSORS extracted from the zonal-mean specific humidity monthly anomalies at different altitudes. Strong positive effects were identified over the tropics in the lower and middle troposphere, extending from the tropics to 45°S-30°N in the upper troposphere with a lag time of 4-8 months relative to the ONI. At the same time, negative correlations were found centred at 30°S and 15°N in the middle and lower troposphere (although these were not statistically significant). Negative correlations were also evident over 45°S-60°S in the middle troposphere and over 70°N-75°N in the lower troposphere.
Above all, the ENSORS can be extracted most effectively and stably over the G25-G65 areas in the upper troposphere (250-200 hPa), at a lag time of 3 months, with the proposed method of optimal low-pass filtering and EOF analysis. The spatial and temporal responses to ENSO over the different areas have been depicted in detail. With FORMOSAT-7/COSMIC-2, which will provide a longer GNSS RO record, the time series of ENSORS will continue to be a good index for monitoring and diagnosing ENSO events.
Figure 2. The processing steps for extracting ENSORS at an altitude of 245 hPa: (a) the original specific humidity data; (b) the monthly anomalies after subtraction of the mean annual cycle; (c) the monthly anomalies from (b) after further removal of month-to-month variations with a 1-2-1 binomial filter; (d) the monthly anomalies from (c) after further processing by the low-pass filter with the optimal cut-off frequency; (e) the ENSORS extracted from (c) by the EOF only (Scheme 1), where the blue curve is the unshifted ENSORS and the red curve is the ENSORS moved forward by 2 months; (f) the ENSORS extracted from (c) by the low-pass filter with the optimal cut-off frequency and EOF (Scheme 2), where the blue curve is the unshifted ENSORS and the red curve is the ENSORS moved forward by 5 months.
Figure 3. Absolute values of the correlation coefficients between ENSORS and ONI over different areas in the TLS. In each subfigure, the red dotted line represents the result when only the EOF was used (Scheme 1) to extract the ENSORS, while the blue solid line represents the result when the proposed method was used (the optimal low-pass filter + EOF, Scheme 2).
Figure 4. The maximum response to ENSO derived from COSMIC GNSS RO specific humidity at different altitudes in the TLS over the different areas which contain the N3.4 area. (a) The corresponding EOF mode when the absolute value of the correlation coefficient reaches the maximum value; (b) the filtering frequency when the absolute value of the correlation coefficient reaches the maximum value; (c) the correlation coefficient between ENSORS extracted from COSMIC GNSS RO specific humidity observations and ONI; (d) the lead/lag time when the absolute value of the correlation coefficient reaches the maximum value. The two dotted black lines in (d) are the lag times of 0 and 6 months, respectively. Positive values in (d) denote lag times, while negative values denote lead times.
Figure 5. The maximum response to ENSO derived from COSMIC GNSS RO specific humidity at different altitudes in the TLS over the areas G25-G65. (a) The corresponding EOF mode when the absolute value of the correlation coefficient reaches the maximum value; (b) the filtering frequency when the absolute value of the correlation coefficient reaches the maximum value; (c) the absolute value of the correlation coefficient between ENSORS extracted from COSMIC specific humidity observations and ONI; (d) the lead/lag time when the absolute value of the correlation coefficient reaches the maximum value. The two dotted black lines in (d) are the lag times of 0 and 6 months, respectively. Positive values in (d) denote lag times, while negative values denote lead times.
Figure 6. The maximum response to ENSO derived from COSMIC GNSS RO specific humidity at different altitudes in the TLS over different areas (top: N30, NHM, ARC, and N90; bottom: S30, SHM, ANT, and S90). (a,e) The corresponding EOF modes when the absolute values of the correlation coefficients reach the maximum; (b,f) the corresponding filtering frequencies when the absolute values of the correlation coefficients reach the maximum; (c,g) the absolute values of the correlation coefficients between ENSORS extracted from COSMIC GNSS RO specific humidity observations and ONI; (d,h) the corresponding lag/lead time when the absolute values of the correlation coefficients reach the maximum. The two dotted black lines in (d,h) are the lag times of 0 and 6 months, respectively. Positive values in (d,h) denote lag times, while negative values denote lead times.
Figure 7. Normalized specific humidity index and Oceanic Niño index (ONI) time series (red) on the pressure level with the maximum absolute value of the correlation between specific humidity and ONI over different areas. ENSORS and ONI are unified as a scale-consistent index by standardized processing.
Figure 8. Spatial and temporal responses to ENSO at an altitude of 245 hPa in the troposphere across the globe. (a) Correlation coefficients between specific humidity anomalies and ONI; (b) the temporal responses at 245 hPa between specific humidity anomalies and ONI (positive values denote lag times and negative values denote lead times); (c) the corresponding EOF spatial mode of the extracted ENSORS when the absolute value of the correlation coefficient reaches the maximum value; (d) the absolute value of the corresponding time response when the absolute value of the correlation coefficient reaches the maximum value. Areas marked with "+" denote significant correlation at the 95% level.
Figure 9. Zonal-mean response to ENSO in the TLS. (a) The meridional cross-section of the correlation coefficient between monthly anomalies of zonal-mean specific humidity and ONI in the TLS; (b) the corresponding time response of ENSO when the absolute value of the correlation coefficient reaches the maximum value; (c) the corresponding EOF spatial mode of the extracted ENSORS when the absolute value of the correlation coefficient reaches the maximum value; (d) the absolute value of the corresponding time response when the absolute value of the correlation coefficient reaches the maximum value. Positive values in (b) denote lag times; negative values denote lead times. The maximum correlation was evaluated by Student's t-test at the 95% confidence level; "+" denotes significant correlation.
Table 2. Extracted ENSORS from different areas containing the N3.4 area in the TLS.
Table 3. Extracted ENSORS from different areas excluding the N3.4 area in the TLS.
MATHEMATICAL MODELLING OF THE HIGHWAY INFLUENCE TO AIR POLLUTION
Introduction
The negative effect of traffic on air pollution is generally well known. Traffic in cities contributes more than 50 percent of air pollution, and in the central parts of cities more than 70 percent [5]. The solution of the increasing traffic density problem demands the construction of new roads, mainly highways. It is therefore necessary to answer the question of whether highways also solve the problem of the negative influence of traffic on air pollution. Building highways does not bring down the number of cars that pass through a given place; it rather raises this number, because highways attract drivers who would otherwise use another route. Highways increase the average speed of the traffic stream. Therefore, fuel consumption increases, resulting in increased emission of the main harmful substances produced by traffic: NOx (the sum of nitrogen oxides), CO (carbon monoxide), and VOC (volatile organic compounds).
When designing highways it is necessary to ensure that their negative influence affects the smallest possible number of people. It is not always possible to construct highways away from the residential parts of cities, so it is important to route the highway in such a way that the number of permanent inhabitants living in the zone where the NOx and CO concentrations exceed the short-term imission standards is minimized. The emission of carbon monoxide by highway traffic is approximately four times higher than the emission of nitrogen oxides, but its imission standard is 50 times higher, i.e., CO is 50 times less toxic than NOx. It is therefore understandable that the negative influence of the highway should be judged according to the NOx concentration, and the width of the protective zone around the highway should be determined by the isoline of the 200 µg·m⁻³ NOx concentration. The calculation of the distribution of the NOx, CO and VOC concentrations is described below.
One of the purposes of this paper is to present a methodology for calculating air pollution from car traffic and its modification for calculating the concentration distribution of the main fuel combustion products around highways.
Methodology of air pollution calculation from car traffic
In a first approximation, a street may be taken as a line source of pollutants. The dispersion of pollutants from a line source is described by the stationary, two-dimensional equation of turbulent diffusion
U ∂C/∂x = ∂/∂x (K_x ∂C/∂x) + ∂/∂z (K_z ∂C/∂z),   (1)
where C is the pollutant concentration in mg·m⁻³, K_x and K_z are the components of the diffusion coefficient in the corresponding directions in m²·s⁻¹, and U is the wind speed in m·s⁻¹. A street may be imagined as a canyon enclosed on one or both sides by buildings; for this type of street, limiting conditions (1a) are imposed in which h is the height of the built-up area in m, H is the altitude of the mixed layer in m, S is the width of the street canyon in m, Q is the specific emission of the road in mg·m⁻²·s⁻¹, and the coefficient of the pollutant passage through the walls of the built-up area is expressed in m·s⁻¹.
The first limiting condition simulates the reflection and passage of pollutants through the walls of the built-up area. The passage coefficient is expressed by a relation in which Δx is the width of the footway on both sides of the road, with PR = 1 for a continuous built-up area and PR = 0 for a road outside the built-up area.
The second and third limiting conditions express the pollutant production over the road as well as the perfect reflection of the pollutant at the ground surface and at the upper level of the mixed layer.
The limiting problem (1), (1a) was solved numerically by the method of finite differences. The scheme is implicit and therefore unconditionally stable. Instead of functions of continuous arguments, functions of discrete arguments are considered, and their values are given at the grid points. The calculation domain is constructed so that the whole canyon is divided in the horizontal direction into three columns of boxes. The calculated pollutant concentration at a grid point represents the mean pollutant concentration in the box in whose centre the grid point lies.
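The implicit box-grid scheme used in the paper is not reproduced here. Purely as a self-contained illustration of solving the stationary advection-diffusion problem (1) numerically, the sketch below relaxes the equation to a steady state by explicit pseudo-time stepping on a regular grid; the reflecting boundary treatment, the placement of the line source in the lowest cell, and the stability bound are assumptions of this sketch, not the paper's method.

```python
import numpy as np

def steady_line_source_concentration(U, Kx, Kz, Q, dx, dz, nx, nz, n_iter=20000):
    """Relax U*dC/dx = d/dx(Kx*dC/dx) + d/dz(Kz*dC/dz) + source to steady state.

    Illustrative only: the line source of strength Q (mg.m-2.s-1) is spread over
    the lowest grid cell at mid-domain; zero-gradient (reflecting) conditions are
    used at all boundaries, and pollutant leaves the domain by advection downwind.
    """
    C = np.zeros((nz, nx))
    dt = 0.25 / (Kx / dx**2 + Kz / dz**2 + U / dx)   # conservative stability bound
    src = np.zeros_like(C)
    src[0, nx // 2] = Q / dz                         # volumetric source, mg.m-3.s-1
    for _ in range(n_iter):
        Cp = np.pad(C, 1, mode="edge")               # zero-gradient at all boundaries
        adv = -U * (Cp[1:-1, 1:-1] - Cp[1:-1, :-2]) / dx          # upwind advection
        dif_x = Kx * (Cp[1:-1, 2:] - 2.0 * C + Cp[1:-1, :-2]) / dx**2
        dif_z = Kz * (Cp[2:, 1:-1] - 2.0 * C + Cp[:-2, 1:-1]) / dz**2
        C = C + dt * (adv + dif_x + dif_z + src)
    return C
```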
The mean pollutant concentration in a given box depends on the dimensions of the box. The width of the highway cannot always be derived from the number of lanes; for that reason, the width of the calculated highway section is entered interactively via the screen.
The distance of a chosen isoline of pollutant concentration from the highway depends on the wind direction and is maximal when the wind direction is perpendicular to the axis of the highway. For this reason, the program calculates the distance from the highway at which the NOx concentration reaches the value of 200 µg·m⁻³ (the short-term imission standard for NOx). The total amount of the pollutants NOx, CO and VOC emitted by the existing road traffic during a given time interval, usually one year, is also calculated.
The specific emission of the road, Q, is calculated from the numbers of passenger cars and duty vehicles (POS and PNAK) that pass along the highway during the averaging time of the pollutant concentration T (0.5 h, 24 h). The specific emissions EOS and ENAK depend on the technical level of the cars; at present, the specific emissions given in Table 2 are used in the calculations.
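The exact relation for Q did not survive the text extraction; the following sketch is therefore only a hypothetical illustration of the kind of calculation involved, in which the mass emitted by the counted vehicles is spread over the road area and the averaging time. The per-vehicle emission factors and geometry arguments are placeholder names, not the paper's definitions.

```python
def specific_emission(pos, pnak, eos, enak, T_s, road_width_m, section_length_m):
    """Illustrative specific emission Q of a road section in mg.m-2.s-1.

    pos, pnak ........ numbers of passenger cars and duty vehicles counted in time T_s
    eos, enak ........ assumed per-vehicle emissions in mg per metre driven
    T_s .............. averaging time in seconds (e.g. 0.5 h = 1800 s)
    """
    emitted_mg = (pos * eos + pnak * enak) * section_length_m
    return emitted_mg / (road_width_m * section_length_m * T_s)
```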
Transport
The transport sector is one of the main energy and environmental problems, because it accounts for a large share of the consumption of fossil energy sources and is responsible for a substantial effect on the environment. This conclusion follows from the report "Transport in a quickly changing Europe", prepared by the "Transport 2000 plus" group of the European Community [1].
The future evolution of traffic is inseparably connected with questions of environmental values, the manner of life (modus vivendi), and the economy. It must be recognized that traffic influences the environment both positively and negatively:
- The positive effect of traffic is that, by the effective transportation of persons and goods, it provides for the needs of society and the performance of some services, and it also contributes markedly to the growth of tourism.
- The negative effect of traffic consists of the long-term consumption of non-renewable natural resources and of short-term impacts on the surroundings and on humans.
The most significant effects of car traffic on the environment, with an impact on the population's health, are noise and imissions. They therefore occupy a substantial place in the Environmental Impact Assessment (EIA) methodology.
Air pollution, one of the immediate impacts of traffic on its surroundings, is caused mainly by moving cars and the operation of their engines, but also by the whirling up of sedimented dust particles on the road and in its surroundings and by the abrasion of individual car parts, for example brake linings and tires. The imission study should therefore be part of the road design documentation already at the stage of variant assessment and route selection. In order to fulfil its purpose, it ought to include the modelling of gaseous emission production from car traffic in such detail that the different variants of the route can be compared.
Imission study
An imission study should assess the imission load arising from:
- the actual traffic on the existing road network in the area of interest,
- the forecast traffic on the existing road network in the area of interest, assuming that the redesigned road will not be realised (the zero variant),
- the forecast traffic on the redesigned road in the area of interest,
- the residual traffic on the existing road network in the area of interest, assuming that the redesigned road will be realised,
and it should further contain:
- a proposal of measures for air pollution reduction,
- a proposal for air monitoring.
An imission study drafted in this way can serve as the basis for the environmental impact assessment process according to Law NR SR No. 127/94 Z. z. [2].
With the described computer program, which takes into account all the determining factors influencing the production of gaseous emissions, it is possible to simulate the air pollution of the road surroundings.
Input information for the modelling:
- emission factors for the actual and future vehicle stock,
- traffic volume and its composition by vehicle type,
- longitudinal gradient of the road,
- urban or, where relevant, suburban character of the traffic,
- vehicle speed,
- atmospheric conditions.
The following pollutants are assessed:
- CO, carbon monoxide,
- NOx, nitrogen oxides,
- VOC, volatile organic compounds.
Outputs:
- diffusion in the open atmosphere: CO, NOx and VOC concentrations; total yearly production; maximum concentration in the atmosphere at peak traffic,
- the distance from the road axis of the boundary at which the 200 µg·m⁻³ NOx limit is exceeded.
Principles of air pollution evaluation
Nitrogen oxides (NOx) belong to the deleterious substances and represent one of the most important components of exhaust gas for the contemporary fuel composition. Because they reach the highest harmful pollutant concentrations, they can be identified by monitoring or by calculation, and they have the strictest limits. Hence, they are used as an indicator of air pollution by exhaust gases.
In imission studies, air pollution is assessed according to the total quantity of emissions produced by car traffic in t·year⁻¹; in addition, the maximum NOx concentration in the breathing zone (1.5 m above the pavement surface) is determined from the half-hour peak traffic value, and the number of inhabitants exposed to air pollution above the allowable imission standard is established.
Tab. 2. Specific emissions of cars in Slovakia.
The total amount of deleterious substances produced is influenced mainly by the intensity and structure of the traffic flow (the share of heavy vehicles), the length of the route, and the longitudinal gradient and intensity of the road. Thus, the longer the road, the more deleterious substances are released into the air.
The maximum concentration of deleterious substances in the air depends on the intensity and structure of the traffic flow and on the longitudinal gradient of the road.
Modelling of air pollution from planned highway construction
- Figure 2 presents the growth of traffic between the years 2005 and 2035, showing that the construction of highways can at the same time lead to a considerable decline of the remaining traffic.
- Table 3 presents the total emissions released into the air by the traffic on all streets for the zero variant and compares them with the combination of highway D1 and the residual traffic on the existing roads.
The difference is noticeable from the year 2015 onwards, where variants V1a and V7 appear the most favourable.
From Tables 4 and 5 it is evident that traffic is predicted to double, yet with the construction of highways the total production of emissions in the year 2035 will rise by only about 5 percent. This is caused by the fact that the model solution is based on the enforcement of the Slovak edict 248/91, which is the continuation of the EHK statutes; from this fact follows the assumption of vehicle stock renewal and a reduction of automobile exhaust emissions after the year 2010 to approximately 40 percent for NOx, 50 percent for CO and VOC, and 75 percent for solid particles in comparison with the year 1995.
- Figure 4 presents the expected contours of the short-term NOx concentration for the peak half-hour traffic, amounting to 6 percent of the total traffic intensity.
The change of the driving mode itself, from the discontinuous city mode to the smooth highway mode, induces a decline of the NOx concentration. Variant V1a predicts the minimum values and variant V2 the maximum values.
- Maximum short-term concentrations accumulate in places where the highway route approaches the route of the existing road to within 220 m, because the zones in which the hygienic limit for the short-term NOx concentration in the air (200 µg/m³) is exceeded overlap. This is indicated in Figures 4 and 5.
Comparing the situation for the year 2035, the lowest accumulated concentration values are predicted for variant V1a; on road I/61 they would be lower than in the zero variant.
From the results documented in this way by the model of emission production from traffic, both with regard to the total emission production and with regard to the accumulated values of the short-term NOx concentration (the sum of D1 and the residual traffic on the existing roads), variant V1 appears the more favourable.
Conclusions
Impacts of traffic emissions in given areas are very momentous and the ratio of vehicles and their generators is unevenly dispersed. The solution of this problem can be effective only when its character will be worldwide. International conventions, extensive and systematic co-operation and realisation of agreements give the Už teraz sú jasné predstavy riešenia zaťaženia životného prostredia osobnými automobilmi. Postupné celosvetové zavedenie automobilov so zdvihovým objemom motora 3000 cm 3 (počíta sa s obnovou vozidlového parku v rokoch 2000 až 2005) sa prejaví nielen v relatívnej úspore pohonných látok, ale aj v relatívnom poklese produkcie emisií.
Concepts for reducing the environmental load caused by passenger cars are already evident. The gradual worldwide introduction of the so-called three-litre automobile (assuming renewal of the vehicle stock between the years 2000 and 2005) will be reflected not only in relative fuel savings but also in a relative decline of emission production.
The basic factors influencing the amount and structure of the produced emissions are the composition of the fuel, the type and operating conditions of the engine, and the driving style.
The information presented in the study shows that the location of the route, and especially a sensitive vertical alignment, directly influences the amount of emissions produced. With increasing driving speed, the emission production and concentration rise, but an important difference appears when the emissions of discontinuous driving in the city are compared with those of continuous driving outside the city. The total emission production is therefore also influenced by the density of at-grade intersections, and the dispersion of the pollution is affected by the height and density of the built-up area.
Comparing Efficiencies of Local Anesthetic Injection and Photobiomodulation in the Treatment of Fibromyalgia Syndrome
Objective: This study aimed to compare the efficacy of local anesthetic injection and photobiomodulation in the treatment of patients diagnosed with fibromyalgia. Patients and Methods: Forty female patients aged 20 to 60 years who were diagnosed with fibromyalgia syndrome according to the American College of Rheumatology diagnostic criteria were included in the study. The patients were randomized into two study arms: one group received prilocaine injections into the tender points of the shoulder girdle, while the other group received low-dose laser application with the photobiomodulation technique. In addition, patients in both study arms followed an education program including appropriate postural and stretching exercises. The number of tender points, morning stiffness, sleep quality, muscle spasms, and limitation were evaluated with a Likert scale; pain levels with a visual analog scale; the patients' general status with the Fibromyalgia Impact Questionnaire; and the patients' psychosocial status with the Beck Depression Scale. Results: The treatment methods applied in both study arms provided statistically significant improvement, but no statistically significant difference was found between the two treatment modalities. The reduction in the number of tender points, the improvement in muscle spasms, and the decrease in the level of limitation were greater with local anesthetic injection, whereas morning stiffness, sleep quality, and Beck Depression Scale scores improved more in the photobiomodulation arm. Conclusion: According to our findings, both methods can be used effectively in the treatment of fibromyalgia syndrome. However, selecting the most appropriate treatment method in view of the patients' clinical characteristics will increase the success of treatment.
INTRODUCTION
Fibromyalgia is the second most common rheumatologic disease following osteoarthritis [1]. This disease can also be defined as a chronic pain disorder without an exactly known etiology and physiopathology [2,3]. Its prevalence may vary from 2% to 8% depending on the diagnostic criteria used [4]. It is characterized by widespread musculoskeletal pain and fatigue, which are frequently accompanied by cognitive and mood disorders [5,6]. This disease may present in both sexes and in all age groups, but it mostly affects females between 40 and 60 years of age [6].
The current treatment of fibromyalgia syndrome includes an integrated approach of pharmacological and non-pharmacological methods and the active participation of patients in the treatment course. Current guidelines emphasize the importance of including specialists experienced in patient education, exercise therapists, and cognitive behavioral therapists in the team treating fibromyalgia [1]. Exercise therapies, physical treatments, and psychotherapies increase both the patients' functional capacity and their quality of life [7,8].
One of the methods for the treatment of fibromyalgia is local anesthetic injection. This method provides both local pain control and stimulation of blood flow to ischemic tissues, and prilocaine injections may provide symptom-free periods of 2 weeks to 3 months [9]. Another non-pharmacological treatment modality of fibromyalgia is laser application, which has been reported to decrease the number of tender points and improve the physical examination findings [10]. Laser treatment is a phototherapy method based on the principle that monochromatic rays have a biomodulatory effect in biological tissues. In this application, low energy doses are applied to the tissues to stimulate cellular processes and to accelerate biochemical reactions.
Currently, infrared, gallium-arsenide (Ga-As) and helium-neon lasers are among the devices that can be used for this purpose. With these methods, low-power energy is applied to prevent thermal changes while stimulating neuronal activity [11,12]. While these applications are generally referred to as "low-dose laser therapy" in the literature, this terminology was revised to "photobiomodulation" in 2014 at the nomenclature consensus meeting of the North American Phototherapy Association and the World Laser Therapy Association [13].
In the light of the available information in the literature, we aimed to compare the efficacy of local anesthetic injection and photobiomodulation techniques for decreasing the number of tender points, pain, and other symptoms in patients with fibromyalgia.
Fibromyalgia Impact Questionnaire
Turkish validity and reliability of this questionnaire was shown by Ediz et al. [14], and the questionnaire evaluates the "Function", "General Impact" and "Symptom" status of the patients on a scale of 0-10 (0: best, 10: worst). The total score of the questionnaire is calculated by summing 1/3 of the "Function" total score, the "General Impact" total score, and 1/2 of the "Symptom" total score.
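For clarity, the weighting described above can be written as a small helper function; the argument names are illustrative and mirror only the sentence above, not the questionnaire's official scoring sheet.

```python
def fiq_total(function_total, general_impact_total, symptom_total):
    """Fibromyalgia Impact Questionnaire total score as described above:
    one third of the 'Function' total, plus the 'General Impact' total,
    plus one half of the 'Symptom' total."""
    return function_total / 3.0 + general_impact_total + symptom_total / 2.0
```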
Beck Depression Scale
The
Statistical Analysis
Descriptive statistics of the study are presented as mean and standard deviation for numerical data, and as frequency and percentage for categorical data. Comparisons of numerical data between the independent groups of the study were performed with the Mann-Whitney U test, and comparisons before and after treatment within the dependent groups with the Wilcoxon test. In all analyses, a 5% Type-I error was accepted for statistical significance. G*Power for Mac software was used for sample size calculations, and SPSS 21 software (IBM Inc., Armonk, NY, USA) was used for the statistical analysis.
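A minimal sketch of the comparisons described above, using SciPy; the data arrays are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

alpha = 0.05  # 5% Type-I error accepted for significance

# Placeholder data: one clinical score per patient (illustrative only)
lai_scores = np.array([4, 5, 3, 6, 4, 5, 3, 4])   # LAI group, post-treatment
pbm_scores = np.array([5, 4, 4, 5, 6, 5, 4, 5])   # PBM group, post-treatment
pre_scores = np.array([7, 8, 6, 9, 7, 8, 7, 6])   # one group, before treatment
post_scores = lai_scores                           # same group, after treatment

# Independent groups: Mann-Whitney U test
u_stat, p_between = mannwhitneyu(lai_scores, pbm_scores, alternative="two-sided")

# Dependent (paired) samples: Wilcoxon signed-rank test
w_stat, p_within = wilcoxon(pre_scores, post_scores)

print(f"between-group p = {p_between:.3f}, within-group p = {p_within:.3f}")
```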
RESULTS
All patients in the LAI and PBM groups completed the treatment and were included in the analyses. The mean ages (SD) of the women in the LAI and PBM groups were 39.4 (13.9) and 39.6 (12.4) years, respectively (p=0.96). Regarding marital status, more patients were married in the LAI group (85% vs. 70%), but the difference was not significant (p=0.46). Likewise, the occupation (p=1.00) and education status (p=0.95) were also similar between the two groups. The general demographics of the patients are summarized in Table 1.
The comparisons of clinical parameters between the study groups and between the pre- and post-treatment periods are presented in Table 2. The improvements in each clinical assessment parameter after treatment were found to be statistically significant in both groups.
A recent literature search revealed only a limited number of studies that directly compared low-dose laser therapy and local anesthetic injection, as in our study. In one of those studies, Tuncay et al. [19] found that both methods decreased the pain and disease symptoms; however, the authors also reported that low-dose laser therapy was more effective in the early treatment period. In our study, we found that clinical parameters that are more associated. There are also several other previous studies that examined these two methods individually. One of these studies was conducted by Hong et al. [21], who evaluated the efficacy of 0.5% xylocaine injection into the tender points of the trapezius muscle in patients with myofascial pain syndrome with or without fibromyalgia syndrome, and found that patients with fibromyalgia benefited more from this treatment than those without. In another study, Altindag and Gur [22] [33]. Indeed, in a recent systematic review and meta-analysis evaluating the efficacy of low-dose laser treatment of fibromyalgia, this method was reported to be highly effective and to provide significant clinical benefit [34], as we found in our study.
LIMITATIONS
The small sample size, the lack of a long follow-up period, and the absence of a placebo-control group are the main limitations of the present study.
CONCLUSION
In conclusion, our results revealed that local anesthetic injection and photobiomodulation / low dose laser therapy are both effective methods in the treatment of fibromyalgia and there is no statistically significant difference between their efficacies. However, when the improvement rates are considered, local anesthetic injection method was found to be more effective on tender points, muscle spasm and limitation, and low dose laser therapy was more effective on morning stiffness, sleep quality, pain and depression.
Therefore, both methods can be used effectively in the treatment of fibromyalgia, but considering the clinical features of the patients will be the most appropriate approach in determining the method to be applied in the treatment.
The author declares no conflict of interest.
Comparison of free-surface and conservative Allen-Cahn phase-field lattice Boltzmann method
This study compares the free-surface lattice Boltzmann method (FSLBM) with the conservative Allen-Cahn phase-field lattice Boltzmann method (PFLBM) in their ability to model two-phase flows in which the behavior of the system is dominated by the heavy phase. Both models are introduced and their individual properties, strengths and weaknesses are thoroughly discussed. Six numerical benchmark cases were simulated with both models, including (i) a standing gravity and (ii) capillary wave, (iii) an unconfined rising gas bubble in liquid, (iv) a Taylor bubble in a cylindrical tube, and (v) the vertical and (vi) oblique impact of a drop into a pool of liquid. Comparing the simulation results with either analytical models or experimental data from the literature, four major observations were made. Firstly, the PFLBM selected was able to simulate flows purely governed by surface tension with reasonable accuracy. Secondly, the FSLBM, a sharp interface model, generally requires a lower resolution than the PFLBM, a diffuse interface model. However, in the limit case of a standing wave, this was not observed. Thirdly, in simulations of a bubble moving in a liquid, the FSLBM accurately predicted the bubble's shape and rise velocity with low computational resolution. Finally, the PFLBM's accuracy is found to be sensitive to the choice of the model's mobility parameter and interface width.
Introduction
Multiphase flows are important in a wide range of natural and industrial applications. For instance, they can manifest as undesired foam in the food industry [1,2], in the transport of hydrocarbons from subsurface environments [3], or in the transfer of micro-particles into the environment during rainfall [4]. The laboratory experiments associated with studying the fundamental dynamics of multiphase flows can be expensive and time-consuming, while only supplying limited insight into the governing fluid mechanics. With advances of computational infrastructure, it has now become common to supplement physical with numerical experiments through the use of computational fluid dynamics (CFD). This tends to provide a cheaper, time-efficient solution to flow problems and allows direct insights to the flow field, as every arbitrary location inside the fluid can be monitored.
This work aims to present and analyze a numerical method that can be used to supplement experiments by numerical simulations of immiscible two-phase flows in which the flow dynamics of the lighter phase are assumed to have a negligible influence on the heavier phase and on the overall dynamics of the system. In such cases, the flow in the lighter phase is commonly neglected, reducing the two-phase flow to a flow with a free boundary, more commonly referred to as a free-surface flow [5]. It has been previously shown that such a simplification is valid in simulations of, e.g., single gas bubbles rising in a liquid [6,7], and has been applied to simulate foaming [8].
While the flow in the lighter phase is assumed to be negligible, the simulation of the flow in the heavier phase often requires a highly resolved computational grid to capture all the relevant flow structures. Therefore, the numerical methods presented here are designed and targeted for massively parallel computing environments. For efficient numerical fluid simulations on such hardware, the lattice Boltzmann method (LBM) has been established as a modern alternative to classical approaches for CFD that are based on the discretization of the Navier-Stokes equations. As all operations require only information of a local neighborhood, the LBM is inherently suitable for parallel computing and has been extended with models for simulating a variety of different physics including multiphase flows [9,10,11], particulate flows [12,13], thermal effects [14] and others.
There are several multiphase LBM models available in the literature that can be distinguished by the representation of the interface between the phases. Models having a sharp interface representation include the free-surface lattice Boltzmann method (FSLBM) [8], the level-set method [15], the front-tracking approach [16], and the color gradient model [17]. In contrast, the interface is represented in a diffuse manner in the pseudopotential model [10], the free-energy model [18], and phase-field models. The latter are either based on solving the Cahn-Hilliard [19,20] or Allen-Cahn [21] equation to model the interfacial dynamics. In this article, the comparative study is restricted to the FSLBM and the conservative Allen-Cahn phase-field LBM (PFLBM) [21,22]. Both of these models have well-optimized parallel implementations, and have been shown to be capable of simulating systems with high density and viscosity contrasts corresponding to liquid-gas systems.
The FSLBM extends the LBM with a volume-of-fluid approach [23] where the sharp interface between the two phases is captured by an indicator field [8]. The fluid dynamics of the lighter phase are entirely neglected, and only the effect of pressure forces at the interface is modeled. It is implicitly assumed that the density and viscosity ratio between the two fluid phases is infinite. The sharp interface formulation and avoiding computations in the lighter phase lead to high computational efficiency with low memory requirements. Although the algorithm's implementation is challenging, it is also well suited for parallel hardware like graphics processing units (GPUs) [4,24].
The conservative Allen-Cahn equation [11,25] is the basis of the conservative Allen-Cahn phase-field LBM [21,22], a model designed to simulate two-phase flow problems with high density and viscosity contrasts. The algorithm is simpler than that of the FSLBM, where different equations must be solved depending on the type of cell. In contrast, the PFLBM can be purely formulated via the standard lattice Boltzmann equation with additional force terms [21]. As in the single-phase LBM, all operations are restricted to a local cell-neighborhood, making the PFLBM well-suited for parallel computing. While prior phase-field models were not capable of simulating multiphase flows with large density and viscosity ratios [26,27,28], the PFLBM has been successfully used in simulations with density ratios of up to 10^3 and viscosity ratios of up to 10^2 [14,29,30,31,32]. This is equivalent to an air-water system and makes the model a possible alternative for free-surface flows where the dynamics are governed by the heavier phase. The Allen-Cahn phase-field equation tracks the dynamics of the interface. The diffusivity of the interface suggests that a PFLBM simulation must have a higher resolution than an FSLBM simulation. On the other hand, due to its algorithmic simplicity, it is easier to optimize the implementation for different architectures, including accelerator hardware like GPUs [33].
In this work, the models are compared with respect to their algorithmic properties and ability to simulate two-phase flows in which the lighter phase has negligible impact on the flow dynamics. First, the numerical foundations of the LBM, FSLBM and PFLBM are introduced in more detail. Then, the models are compared with respect to methodology and numerical implementation. Based on six numerical experiments, the accuracy and the required computational grid resolution of the models are compared. For all numerical experiments, each model's simulation results were cross-validated with independent implementations from other code bases, as listed below. The choice of these tests is discussed, as the test cases must be reasonably applicable to both models. The initial test case features a standing surface wave governed by gravitational forces and is referred to as a gravity wave. Surface tension effects are not modeled in this test case. In the second benchmark, the same standing wave setup is used; however, the flow is governed by surface tension rather than gravitational forces. In contrast to the gravity wave test case, the capillary wave allows one to exclusively evaluate the models' capability to capture the effects of surface tension. For both the gravity and capillary wave, there exist analytical models that can be used to validate the simulation accuracy. In the third and fourth test case, an unconfined single rising gas bubble in liquid, and a confined Taylor bubble in a cylindrical tube are simulated and compared with experimental data from the literature. Finally, the fifth and sixth benchmark case feature dynamic coalescence, i.e., the formation of a splash crown caused by the impact of a droplet into liquid. The results are qualitatively compared with experimental data from the literature. Both models are regularly applied to simulate capillary waves [7,8,34], rising bubbles [6,7,34,35,36,37], and drop impacts [4,34,36,37,38]; however, a direct comparison between them is missing from the literature. Overall, it is concluded that the PFLBM is more accurate in simulating flows governed purely by surface tension forces compared to the FSLBM used in this article. However, in flow problems governed by surface tension and gravitational acceleration, the FSLBM required less computational resolution than the PFLBM while being more accurate in the tests performed here. Additionally, the PFLBM was sensitive to the choice of the model's mobility parameter and interface width, affecting accuracy and numerical stability.
In this work, general properties related to computational performance such as the grid's resolution and memory requirements are discussed, while quantitative data are not presented. Data such as these would only represent the state of the implementations used here and would not allow a general conclusion to be made.
The FSLBM simulations were performed using the open-source C++ framework waLBerla [39] and cross-validated with FluidX3D [4,40]. The PFLBM simulations were also performed using waLBerla, together with the code generation framework lbmpy [41]. These simulations were cross-validated using TCLB [42]. The implementations used in this work and all simulation setups are freely available online, as described in Appendix A.5.
Numerical methods
The first part of this section briefly introduces the LBM, before presenting the numerical foundations of the FSLBM and the PFLBM. The section is concluded by comparing both models focusing on their computational properties.
The lattice Boltzmann method
The classical approach to CFD is to simulate the evolution of a flow problem via the discretization of the Navier-Stokes equations. Contrary to this, the LBM is based on the lattice Boltzmann equation (LBE) and has gained popularity in the last few decades. The LBE is given by

f_i(x + c_i ∆t, t + ∆t) = f_i(x, t) + Ω_i(x, t) + F_i(x, t),    (1)

with f_i(x, t) ∈ ℝ being a discrete particle distribution function (PDF) that describes the probability that there exists a virtual fluid particle at position x ∈ ℝ^d and time t ∈ ℝ^+ traveling with discrete lattice velocity c_i [43]. The domain is discretized using a uniformly spaced Cartesian grid with spacing ∆x ∈ ℝ^+, where the macroscopic fluid velocity in each cell is discretized using a DdQq velocity set such that i ∈ {0, 1, . . . , q − 1}. Here, d ∈ ℕ refers to the number of dimensions in space and q ∈ ℕ refers to the number of discrete lattice velocities. In each velocity set, the so-called lattice speed of sound, c_s = 1/√3 ∆x/∆t, defines the relation between density, ρ(x, t) ∈ ℝ^+, and pressure, p(x, t) = c_s^2 ρ(x, t), with ∆t ∈ ℝ^+ denoting the temporal resolution. External forces are included in the LBM by F_i(x, t) ∈ ℝ.
The collision operator, Ω_i(x, t) ∈ ℝ, models particle collisions and redistributes PDFs. In this study, collision operators are based on the multiple relaxation time (MRT) scheme [44] that can be written as

Ω(x, t) = −∆t M⁻¹ Ŝ M [f(x, t) − f^eq(ρ, u)],    (2)

where M ∈ ℝ^{q×q} denotes a q × q matrix, constructed from a set of q moments, that transforms the PDFs to the moment space [44]. In the moment space, the collision is resolved by subtracting the PDFs' equilibria, f^eq(ρ, u) ∈ ℝ^q, from the PDFs and applying the diagonal relaxation matrix Ŝ ∈ ℝ^{q×q}. It contains the relaxation rates, ω_i < 2/∆t, the inverses of which are referred to as the relaxation times, τ_i = 1/ω_i. For the MRT employed here, the relaxation time corresponding to second-order moments, τ, is directly related to the kinematic viscosity of the fluid through

ν = c_s^2 (τ − ∆t/2).    (3)

The equilibrium PDF is given as

f_i^eq(ρ, u) = w_i [ρ + ρ_0 (c_i · u / c_s^2 + (c_i · u)^2 / (2 c_s^4) − (u · u) / (2 c_s^2))],    (4)

and can be derived from the continuous Maxwell-Boltzmann distribution [45] using the macroscopic velocity, u ≡ u(x, t), and lattice weight, w_i ∈ ℝ. When setting the LBM reference density ρ_0 = 1 in Equation (4), the incompressible LBM formulation is obtained, whereas ρ_0 = ρ reveals the LBM in compressible form [46]. If the collision operator's moment set is constructed with the so-called raw moments and all moments are relaxed with the same relaxation rate, ω = 1/τ, the commonly used Bhatnagar-Gross-Krook (BGK), also referred to as single relaxation time (SRT), collision operator is obtained [43],

Ω_i(x, t) = −∆t ω [f_i(x, t) − f_i^eq(ρ, u)].    (5)

A major contributor to the LBM's popularity is its formulation as an explicit time-stepping scheme and the fact that all non-linear operations (collision) are local to a computational cell, while the advection (streaming) is linear [47]. This means that Equation (1) can be separated into the subsequent steps of collision and streaming denoted by

f_i^⋆(x, t) = f_i(x, t) + Ω_i(x, t) + F_i(x, t),    (6)
f_i(x + c_i ∆t, t + ∆t) = f_i^⋆(x, t),    (7)

where f_i^⋆(x, t) indicates the post-collision status of the PDFs. This illustrates that the resulting scheme can be parallelized well and is therefore excellently suited for massively parallel, large-scale simulations [33]. In practice, usually both steps are combined, as shown in Equation (1), to achieve the best parallel performance [48].
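To make the collide-and-stream structure of Equations (6) and (7) concrete, the following minimal Python sketch implements a single-phase D2Q9 BGK step with NumPy. It is an illustration only, not the waLBerla, lbmpy, FluidX3D, or TCLB implementation used in this study; the lattice weights and velocities are the standard D2Q9 values, periodic boundaries are assumed, and external forces are omitted.

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
CS2 = 1.0 / 3.0  # lattice speed of sound squared (dx = dt = 1)

def equilibrium(rho, u):
    """Second-order equilibrium PDFs (compressible form, Eq. (4) with rho_0 = rho)."""
    cu = np.einsum('qd,xyd->qxy', C, u) / CS2
    usq = np.einsum('xyd,xyd->xy', u, u) / (2 * CS2)
    return W[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

def collide_and_stream(f, omega):
    """One BGK time step: local collision (Eq. (6)) followed by linear streaming (Eq. (7))."""
    rho = f.sum(axis=0)                                    # zeroth-order moment
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]    # first-order moment
    f_post = f - omega * (f - equilibrium(rho, u))         # collision
    for q in range(9):                                     # streaming with periodic wrap-around
        f[q] = np.roll(f_post[q], shift=C[q], axis=(0, 1))
    return f

# Usage: start from equilibrium at rest and advance a few steps
nx, ny = 64, 64
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(10):
    f = collide_and_stream(f, omega=1.8)
```

The separation into a purely local collision and a purely linear streaming step is precisely what makes the scheme attractive for parallel hardware; in optimized implementations both steps are fused into a single sweep over the grid.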
Free-surface lattice Boltzmann method
The free-surface lattice Boltzmann method (FSLBM) used in this work is based on the approach from Körner et al. [8]. It allows the simulation of a moving interface between two immiscible fluids and assumes that the heavier phase completely governs the flow dynamics of the system. As the flow dynamics of the lighter phase are ignored, the problem reduces to a single-phase flow with a free boundary. This assumption applies to two-phase systems with substantial density and viscosity ratios between the phases. In the following, the heavier and lighter phases will be called liquid and gas phases, respectively.
The boundary between the two phases is treated in a volume-of-fluid-like approach [23]. As such, the fill level, ϕ(x, t), in a cell is defined as the ratio of its liquid volume to its total volume, and acts as an indicator to describe the affiliation to a phase. Using this definition, all cells belonging to the fluid domain are categorized as either liquid (ϕ(x, t) = 1), gas (ϕ(x, t) = 0) or interface (ϕ(x, t) ∈ (0, 1)) cells. The latter type forms a sharp interface, i.e., a closed layer of single interface cells that separates liquid from gas cells. In terms of the LBM implementation, liquid and interface cells are treated as normal cells that contain PDFs and participate in the collision and streaming described in Section 2.1. As opposed to this, gas cells neither contain PDFs nor participate in the LBM update.
The fill level, ϕ(x, t), fluid density, ρ(x, t), and volume, ∆x^3, of a cell are used to define its liquid mass as

m(x, t) = ϕ(x, t) ρ(x, t) ∆x^3.    (8)

The mass flux, ∆m_i(x, t), is tracked for interface cells and computed from the LBM streaming step as

∆m_i(x, t) = [f_ī(x + c_i ∆t, t) − f_i(x, t)],  if the cell at x + c_i ∆t is a liquid cell,
∆m_i(x, t) = [f_ī(x + c_i ∆t, t) − f_i(x, t)] (ϕ(x + c_i ∆t, t) + ϕ(x, t)) / 2,  if it is an interface cell,    (9)

where ī denotes the lattice direction opposite to i. An interface cell is converted to gas or liquid when it gets emptied, ϕ(x, t) < 0 − ε_ϕ, or filled, ϕ(x, t) > 1 + ε_ϕ, with respect to the heuristically chosen threshold, ε_ϕ = 10^−2, that is defined to prevent oscillatory conversions [49]. It is important to note that liquid or gas cells can not be converted directly into one another. Instead, when converting an interface cell, surrounding liquid and gas cells are converted to interface cells to maintain a closed interface layer. In the case of conflicting conversions, the separation of liquid and gas cells is prioritized.
In the course of the simulation, unnecessary interface cells may appear that either have no liquid or no gas neighbor. In that case, the mass flux from Equation (9) is modified as suggested in Reference [38] to either force these cells to fill or empty.
When converting an interface cell with fill level, ϕ_conv(x, t), to liquid or gas, the fill level of the converted cell is set to ϕ(x, t) = 1 or ϕ(x, t) = 0, respectively. This leads to an excess mass of

m^ex(x, t) = ϕ_conv(x, t) ρ(x, t) ∆x^3,  if x is converted to gas,
m^ex(x, t) = (ϕ_conv(x, t) − 1) ρ(x, t) ∆x^3,  if x is converted to liquid,    (10)

that must be distributed to neighboring cells. In this work, excess mass is distributed evenly among surrounding interface cells, or evenly among surrounding interface and liquid cells in the implementation in FluidX3D. A cell conversion from liquid to interface and vice-versa does not modify the PDFs of the cell. In contrast, the PDFs in cells converted from gas to interface are not yet available. They are initialized using Equation (4) with ρ and u averaged from all surrounding liquid and non-newly converted interface cells.
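The following sketch illustrates, in simplified Python, how the fill-level bookkeeping of Equations (8) to (10) might look for a single interface cell: accumulate the streamed mass fluxes, update the fill level, and trigger a conversion with the threshold ε_ϕ = 10⁻². It is a schematic of the bookkeeping only, not the waLBerla or FluidX3D implementation; the neighbor handling and the excess-mass redistribution target are simplified assumptions.

```python
EPS_PHI = 1e-2  # conversion threshold from the text

def update_interface_cell(mass, rho, dm_sum, dx=1.0):
    """Update one interface cell's mass and fill level and decide on a conversion.

    mass   : current liquid mass m(x, t) of the cell
    rho    : current fluid density of the cell
    dm_sum : sum of the mass fluxes dm_i over all lattice directions (Eq. (9))
    Returns the new mass, the new fill level, and an optional conversion event.
    """
    mass += dm_sum
    fill = mass / (rho * dx**3)                 # Eq. (8) rearranged for the fill level

    if fill < 0.0 - EPS_PHI:                    # cell emptied -> convert to gas
        excess = fill * rho * dx**3             # Eq. (10), gas branch (negative excess)
        return 0.0, 0.0, ('to_gas', excess)
    if fill > 1.0 + EPS_PHI:                    # cell filled -> convert to liquid
        excess = (fill - 1.0) * rho * dx**3     # Eq. (10), liquid branch
        return rho * dx**3, 1.0, ('to_liquid', excess)
    return mass, fill, None                     # cell remains an interface cell

def redistribute_excess(excess, neighbor_masses):
    """Distribute excess mass evenly among the given neighboring interface cells."""
    share = excess / len(neighbor_masses)
    return [m + share for m in neighbor_masses]
```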
The LBM collision as in Equation (6) is applied to all liquid and interface cells, with Equation (4) being used in compressible form. Unlike what is suggested in Reference [8], the gravitational force is not weighted with the fill level of an interface cell in the implementation used here.
During the LBM streaming step, according to Equation (7), PDFs streaming from gas cells to interface cells do not exist and must be reconstructed. This is accomplished using an anti-bounce-back pressure boundary condition at the interface,

f_ī(x, t + ∆t) = f_i^eq(ρ_G, u) + f_ī^eq(ρ_G, u) − f_i(x, t),    (11)

where u ≡ u(x, t) is the velocity of the interface cell and ρ_G ≡ ρ_G(x, t) = p_G(x, t)/c_s^2 is the gas density computed from the pressure of the gas phase, p_G(x, t). In the original model [8], it was suggested to reconstruct PDFs based on their orientation with respect to the interface normal. However, this approach overwrites existing information and was observed to lead to anisotropic artifacts [50,51]. Here, as suggested in Reference [50], only missing PDFs are reconstructed, and no information is dropped.
The gas pressure,

p_G(x, t) = p_V(t) + p_L(x, t),    (12)

consists of the volume pressure, p_V(t) ∈ ℝ^+, and the Laplace pressure, p_L(x, t) ∈ ℝ^+. The volume pressure can either be atmospheric, in which case p_V(t) = constant, or result from the change of the volume, V(t) ∈ ℝ^+, of an enclosed gas volume, i.e., bubble, with

p_V(t) = p_V(0) V(0) / V(t).    (13)

The Laplace pressure,

p_L(x, t) = 2 σ κ(x, t),    (14)

incorporates the surface tension, σ ∈ ℝ^+, and the interface curvature, κ(x, t) ∈ ℝ. There exist different approaches for computing the interface curvature that are based on the finite difference method (FDM) or a local triangulation of the interface [40,50]. The simulation results shown here are obtained using the FDM as described in Reference [50]. The interface normal, as required by the FDM curvature model, is modified near solid obstacle cells according to Reference [52]. Other curvature computation models have been tested and will be discussed in Section 3.1.2.

In applications where bubbles must be properly simulated, an additional bubble model extension is required for the FSLBM. Since gas volumes can coalesce and divide, this algorithm must keep track of the volume pressure of individual bubbles and handle coalescence and segmentation accordingly. Such algorithms are referred to as bubble models and are algorithmically challenging when applied in parallel computing environments. Here, the bubble model from Reference [49] is used to simulate bubble coalescence correctly and in parallel environments. It is based on the combination of the interface normal and the seed-fill algorithm [53]. In contrast, in FluidX3D, the bubble model is based on the Hoshen-Kopelman algorithm [54].
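As a small illustration of Equations (12) to (14), the sketch below evaluates the gas pressure acting on an interface cell from the bubble's volume change and the local curvature. The isothermal compression law in Equation (13) and the factor of two in the Laplace pressure follow the reconstruction above and should be read as assumptions rather than as the exact expressions of the reference implementations.

```python
def volume_pressure(p_v0, v0, v_now):
    """Isothermal ideal-gas compression of an enclosed bubble, Eq. (13)."""
    return p_v0 * v0 / v_now

def laplace_pressure(sigma, kappa):
    """Laplace pressure from surface tension and local interface curvature, Eq. (14)."""
    return 2.0 * sigma * kappa

def gas_pressure(p_v0, v0, v_now, sigma, kappa):
    """Total gas pressure acting at an interface cell, Eq. (12)."""
    return volume_pressure(p_v0, v0, v_now) + laplace_pressure(sigma, kappa)

# Example in lattice units: a bubble compressed to 90 % of its initial volume,
# with surface tension 0.01 and local curvature 0.05 (illustrative values)
p_gas = gas_pressure(p_v0=1.0/3.0, v0=1000.0, v_now=900.0, sigma=0.01, kappa=0.05)
rho_gas = p_gas / (1.0/3.0)   # gas density entering the anti-bounce-back rule, Eq. (11)
```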
Conservative Allen-Cahn model
The conservative Allen-Cahn model is described in several other publications [22,32,36]. Here, the governing equations and their discretization with the LBM are only briefly introduced.
Governing equations
The phase-field model studied in this work is built on the following macroscopic equations,

∇ · u = 0,    (15)

ρ (∂u/∂t + (u · ∇) u) = −∇p + ∇ · [µ (∇u + (∇u)^T)] + F_s + F_b,    (16)

∂φ/∂t + ∇ · (φ u) = ∇ · [M (∇φ − (4 φ (1 − φ) / ξ) n)],    (17)

the first of which represents the continuity equation. Equation (16) is the momentum equation with the hydrodynamic pressure, p ≡ p(x, t), and Equation (17) is the Allen-Cahn equation used for the tracking of the interface. Here, the mobility is denoted by M ∈ ℝ^+, the interface width by ξ ∈ ℕ^+, n ≡ n(x, t) = ∇φ/|∇φ| is the unit vector normal to the liquid-gas interface, and µ ∈ ℝ^+ is the fluid's dynamic viscosity. The principle behind phase-field models is to allocate an additional scalar field for the phase indicator parameter, φ ≡ φ(x, t) ∈ [0, 1]. This phase indicator represents the fluid with higher density by φ_H = 1 and the lower density fluid by φ_L = 0. The bounds of φ_H and φ_L can be seen to vary in the literature and are generally a point of contention. Nonetheless, the authors specify the bounds as (0, 1) to minimize issues that may otherwise arise in the light-phase fluid.
The forces acting on the fluid include the body force associated with gravity, and the surface tension forces resulting from the liquid-gas interface. These are given as

F_b = ρ g,    (18)

F_s = µ_φ ∇φ,    (19)

respectively, with gravitational acceleration, g ∈ ℝ^d, and chemical potential, µ_φ ∈ ℝ.
Lattice Boltzmann equations
Discretizing the conservative Allen-Cahn equation with the LBM yields

h_i(x + c_i ∆t, t + ∆t) = h_i(x, t) + Ω_i^h(x, t) + F_i^φ(x, t),    (20)

where the collision operator of the phase-field LBE is given by Ω_i^h(x, t) ∈ ℝ, the phase-field PDFs by h_i(x, t) ∈ ℝ, and the phase-field relaxation time by

τ_φ = M / c_s^2 + ∆t / 2.    (21)

Thus, the mobility of the interface defines the behavior of the interface-tracking LBM step. The density as used in the PDF equilibrium in Equation (4) is computed via interpolation from the phase indicator φ(x, t) as suggested in Reference [21],

ρ(x, t) = ρ_L + φ(x, t) (ρ_H − ρ_L).    (22)

Using this formulation of the LBM step, the zeroth-order moment,

φ(x, t) = Σ_i h_i(x, t),    (23)

computes φ(x, t). The conservative Allen-Cahn equation is recovered by applying the forcing term, F_i^φ(x, t), in the collision space according to Guo's forcing scheme [29,55]. The LBE for the hydrodynamics is given by

g_i(x + c_i ∆t, t + ∆t) = g_i(x, t) + Ω_i^g(x, t) + F_i(x, t),

with collision operator, Ω_i^g(x, t) ∈ ℝ, for the hydrodynamic PDFs, g_i(x, t) ∈ ℝ, and normalized pressure, p* ≡ p*(x, t) = p(x, t)/(ρ(x, t) c_s^2). Note here that the LBE is formulated such that the zeroth-order moment recovers the normalized pressure,

p*(x, t) = Σ_i g_i(x, t).

Additionally, it is important to notice that for g_i^eq(p*, u) ∈ ℝ, the incompressible formulation of the equilibrium PDFs is used.
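A small sketch of how the cell-local phase-field quantities above might be computed is given below: the phase-field relaxation time from the mobility (Equation (21)), the interpolated density (Equation (22)), and the phase indicator as the zeroth-order moment of the phase-field PDFs (Equation (23)). This is an illustration in Python, not the lbmpy-generated kernels used in the study.

```python
import numpy as np

CS2 = 1.0 / 3.0  # lattice speed of sound squared
DT = 1.0         # lattice time step

def phase_field_relaxation_time(mobility):
    """Relaxation time of the interface-tracking LBM step, Eq. (21)."""
    return mobility / CS2 + DT / 2.0

def interpolated_density(phi, rho_l, rho_h):
    """Density interpolated linearly between the light and heavy phase, Eq. (22)."""
    return rho_l + phi * (rho_h - rho_l)

def phase_indicator(h):
    """Phase indicator as the zeroth-order moment of the phase-field PDFs h_i, Eq. (23)."""
    return np.sum(h, axis=0)

# Example with the mobility used later in the standing-wave setups
tau_phi = phase_field_relaxation_time(mobility=0.02)            # -> 0.56
rho_mid = interpolated_density(phi=0.5, rho_l=0.001, rho_h=1.0)  # density at mid-interface
```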
The forcing term to recover the Navier-Stokes equation is based on the total force

F(x, t) = F_p(x, t) + F_µ(x, t) + F_s(x, t) + F_b(x, t),

which consists of terms to recover the correct pressure gradient term, F_p ≡ F_p(x, t) ∈ ℝ^d, the viscous forces, F_µ ≡ F_µ(x, t) ∈ ℝ^d, the surface forces, F_s ≡ F_s(x, t) ∈ ℝ^d, and the body forces, F_b ≡ F_b(x, t) ∈ ℝ^d. The force vector is directly applied in the collision space according to Guo's forcing scheme [29,55]. The pressure and viscous forces are given as

F_p(x, t) = −p*(x, t) c_s^2 (ρ_H − ρ_L) ∇φ,

F_µ(x, t) = ν(x, t) [∇u + (∇u)^T] · (ρ_H − ρ_L) ∇φ,

where ρ_H ≡ ρ_H(x, t) and ρ_L ≡ ρ_L(x, t) denote the density in the heavy and light phase, respectively [36]. The kinematic viscosity, ν ≡ ν(x, t), is computed with Equation (3) using the linearly interpolated relaxation time

τ(x, t) = τ_L + φ(x, t) (τ_H − τ_L),

where τ_H ≡ τ_H(x, t) is the relaxation time of the heavy phase and τ_L ≡ τ_L(x, t) is the relaxation time of the light phase. It is noted here that the deviatoric stress tensor can be obtained from moments of the non-equilibrium distribution to avoid the need for finite difference approximations in the velocity field.
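The viscosity handling described above can be illustrated with the following short sketch, which interpolates the relaxation time between the phases and converts it to a kinematic viscosity via Equation (3). The linear interpolation form is the one reconstructed above and should be treated as an assumption.

```python
CS2 = 1.0 / 3.0
DT = 1.0

def interpolated_relaxation_time(phi, tau_l, tau_h):
    """Relaxation time interpolated linearly between the light and heavy phase."""
    return tau_l + phi * (tau_h - tau_l)

def kinematic_viscosity(tau):
    """Kinematic viscosity from the relaxation time, Eq. (3)."""
    return CS2 * (tau - DT / 2.0)

# Example: equal relaxation rates omega = 1.8 in both phases (viscosity ratio of 1)
tau_h = tau_l = 1.0 / 1.8
nu_mid = kinematic_viscosity(interpolated_relaxation_time(0.5, tau_l, tau_h))
```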
Comparison of methodology and numerical implementation
In this section, the FSLBM and PFLBM are compared in terms of various aspects ranging from methodology to implementation. This is done to illustrate the various assumptions made in each model, and provide an understanding for the quantitative comparisons made in the later sections.
Treatment of the low density phase
One of the major differences between the FSLBM and the PFLBM presented is the treatment of the lighter fluid phase. Contrary to the PFLBM, the flow dynamics of the lighter phase are ignored in the FSLBM. Although this allows the PFLBM to be applicable to a broader range of applications, this work focuses on flows where the lighter phase is believed to have negligible influence on the system. In this case, the computations in the second phase are assumed to be unnecessary when using the PFLBM. Further to this, the lighter phase has a lower viscosity than the heavier phase. Consequently, the flow is more likely to become turbulent, and ∆x and ∆t must be chosen to avoid instabilities in the lighter phase. Concerning the heavier phase, ∆x and ∆t tend to be much smaller than necessary for stability. This impacts the efficiency of the simulation and is one of the driving motivations of the FSLBM. While not considered here, these drawbacks could be moderately compensated by using adaptive refinement of the computational grid [37].
Representation of the interface
Another significant difference between the two models is the representation of the interface between the phases. In the FSLBM, the interface layer has a width of one cell and is therefore referred to as a sharp interface. The fill level in the cell captures the interfacial movement. On the other hand, the PFLBM represents the interface layer in a diffuse manner with a width of typically around five lattice cells [22]. The Allen-Cahn equation describes the advection of the interface. In general, it is preferable that the interface width is more than an order of magnitude smaller than the smallest characteristic length scale of the system [43].
Conservation of mass
Both models in their originally proposed states are mass conserving [8,11]. However, it was observed that single interface cells can become trapped in liquid or gas in the FSLBM [38]. It is argued that these cells do not perturb the fluid simulation but remain visible as artifacts. To resolve them, it is suggested to forcefully convert these cells to the cell type of their surroundings, leading to a loss in mass. Following this approach, the current FSLBM implementation does not fully conserve mass.
Numerical implementation
This section focuses on implementation-related aspects of the FSLBM and PFLBM, such as their applicability to code generation, parallel computing, and memory requirements.
Code generation. With metaprogramming techniques, it is possible to describe the complete PFLBM model in an abstract symbolic form embedded in a high-level programming language, e.g., Python [33]. Highly optimized code in a performance-oriented programming language, e.g., C or CUDA, is generated automatically. Furthermore, performance optimizations, including spatial blocking, common subexpression elimination, and single instruction, multiple data (SIMD) vectorization, are applied by the code generator. This provides portability to different computing architectures, such as accelerator hardware like GPUs, and reduces code complexity while increasing the maintainability of the code base. The PFLBM consists of essentially only two consecutive LBM steps, making it perfectly applicable for code generation. Here, the entire model, including boundary conditions, forces, and inter-process communication, is implemented using the code generation framework lbmpy [41].
In contrast, the FSLBM is not expected to be as well suited to code generation directly. While a compute kernel for the LBM step can be generated, many other model components are not inherently suitable to code generation. In various parts of the model, the type and direction of a neighboring cell define the operation. For instance, in the mass exchange algorithm, Equation (9), different lattice directions have to be treated according to the type of the neighboring cell in this direction. The abstract form of the code might then be similar to the direct implementation in a performance-oriented programming language. Therefore, future work remains to evaluate the applicability of the FSLBM to code generation techniques.
Parallelization. The highly resolved computational grid that is commonly required often cannot be handled on a single processor or compute node of a cluster for practically relevant simulations. The PFLBM scales almost perfectly [33] on parallel computing environments and inherently tracks coalescence and segmentation of gas volumes through the Allen-Cahn equation.
Without modeling bubble coalescence and segmentation, parallelization of the FSLBM is straightforward on any parallel hardware, scaling just as well as the single-phase LBM. It must be remarked that this is sufficient for a wide variety of applications such as the standing wave and drop impact test cases presented in Section 3. However, when tracking individual gas volumes, a bubble model is required that monitors information such as the bubble's identifiers, the gas pressure, and the identifier of the process on which parts of the bubble reside. The parallel implementation of a bubble model is challenging and the models presented in Reference [49] rely on either global all-to-all communication or global sequential communication in each LBM time step. As an extension to that, Reference [56] presented more complicated bubble models where regional all-to-all or sequential communication is sufficient. However, Reference [56] has shown that neither of the mentioned bubble models scale ideally on a parallel computing environment relying on inter-process communication. In the implementation used in this study, the model from Reference [49] with global all-to-all communication is used.
Memory requirements. The FSLBM requires similar memory allocation to a single-phase LBM implementation, making it well suited for systems with limited memory like GPUs. In contrast, the PFLBM requires a second LBM step with separate PDFs for the phase field, approximately doubling the amount of memory required to be stored at each lattice cell, making it less attractive for hardware with constrained memory. This is particularly relevant in setups where high surface detail (e.g., a large number of small droplets) needs to be resolved: with the FSLBM, droplets can have a minimum diameter of three cells, whereas with the PFLBM, the minimum droplet diameter increases to at least 10 cells. To match the resolved surface details, the required increase in lattice resolution for the PFLBM, combined with the higher memory requirements per cell, leads to an approximately 2 · (10/3)^3 ≈ 74-fold increase in required memory, making the FSLBM clearly the better choice in such use-cases.
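The rough memory estimate above can be reproduced with the following back-of-the-envelope sketch. It assumes, as stated in the text, that the PFLBM stores roughly twice the data per cell (two PDF sets) and needs about 10 cells across a droplet where the FSLBM needs 3; all other per-cell fields are neglected, so the result is indicative only.

```python
def memory_increase_factor(cells_per_droplet_pf=10, cells_per_droplet_fs=3,
                           per_cell_factor_pf=2.0):
    """Estimate how much more memory the PFLBM needs than the FSLBM to resolve
    the same droplet: linear resolution ratio cubed times the per-cell factor."""
    resolution_ratio = cells_per_droplet_pf / cells_per_droplet_fs
    return per_cell_factor_pf * resolution_ratio**3

print(round(memory_increase_factor()))  # ~74, matching the estimate in the text
```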
Numerical experiments
This section compares the FSLBM and the PFLBM using numerical experiments. Choosing the proper test case for comparing two distinct models in terms of accuracy and computational performance is a challenging task. One must select test cases to which both models are applicable, and keep in mind that each model may be subjected to different forms of errors. Additionally, it is crucial to select a benchmark where the correct solution is known a priori, either from experimental data or analytical models to give a point of reference for the modeled results.
While many references provide experimental data for different kinds of multiphase flows, comparing two models based on only experimental measurements can be misleading. Every experiment is subject to uncertainties that can not be considered in numerical simulations, and if both models disagree with experimental observations in contradicting form, no meaningful conclusion can be drawn. Therefore, it is always preferable to base the initial comparison on test cases, for which the exact solution is known from analytical calculations.
Numerical tools used for fluid simulations generally consist of various, coupled models responsible for certain physical aspects. For instance, each of the models discussed here has multiple approaches for including wetting effects [50,52,57,58]. In an initial comparison, a suitable test case should only include the minimally required components of the models to avoid drawing incorrect conclusions caused by a single component in one of the models.
Here, six numerical experiments were used to compare the FSLBM and PFLBM. Citations have been provided to literature in which each of these models has been applied to the chosen test cases, arguably showing that they are both applicable modeling procedures for the cases. Two of these tests simulated a standing wave with analytical models available in the literature. The two cases differed by only the driving force in the flow. While a gravity wave oscillates due to a body force, a capillary wave does so due to the forces resulting from surface tension. In each of the test cases, the respective other force was neglected. The third and fourth test cases featured unconfined and confined buoyancy driven flows. That is, simulations of a gas bubble rising in a large pool of liquid and a Taylor bubble traveling through a cylindrical pipe, both of which were compared with experimental data from the literature. In the final test cases, dynamic coalescence was investigated by simulating the impact of a vertical and oblique drop into a pool of liquid. The results were qualitatively compared to photographs of the laboratory experiments from the literature.
In all simulations with the FSLBM, the SRT collision model from Equation (5) was used. To improve numerical stability in the PFLBM, a weighted orthogonal MRT collision model according to Equation (2) was employed, and the individual moments in both LBM steps were relaxed according to Reference [34]. It is important to note here that also the second-order moments for the interface tracking LBM step were relaxed with τ φ . Within all test cases, the specified relaxation rates were constant across the various resolutions leading to what is also known as diffusive scaling in the LBM. Setting the second-order moments directly to the equilibrium led to nonphysical results. Both models used the D2Q9 velocity set for the standing wave simulations. For all other simulations, a D3Q19 velocity set was employed by the FSLBM while the PFLBM was set up with two D3Q27 lattices for the two LBM steps. It is common to introduce the Cahn number, Cn = ξ/L, to describe the PFLBM's interface width ξ. It is highlighted in this work that for convergence assessments, the value of ξ remained constant rather than Cn, as solutions are desired to tend towards a sharp interface result. In the FSLBM, body forces were modeled according to Guo et al. [55]. The forcing terms applied to the LBM steps in the PFLBM model were according to Reference [21]. In the simulations of both models, no-slip boundary walls were realized through the bounce-back boundary condition [43]. In agreement with the usual choice in the LBM literature, the reference density was chosen to be ρ 0 = ρ H = 1 in all simulations.
In the FSLBM, the fill level was initialized with a Monte Carlo-like sampling method. A two-dimensional grid consisting of equally spaced, 101 × 101, sample points was created in each cell. The ratio of samples within the specified initial profile to the total number of samples per cell gave the initial fill level. In the PFLBM, the diffuse interface was initialized with

φ(x) = 0.5 [1 + tanh(2 (x − x_0) / ξ)]

in the direction normal to an interface located at x_0. The surface meshes visualized for the bubbles and drop impacts were obtained using a marching cubes algorithm with iso-value ϕ = 0.5 and φ = 0.5 for the FSLBM and PFLBM, respectively. If not explicitly specified otherwise, all quantities except non-dimensional numbers are denoted in the lattice Boltzmann unit system. All simulations shown in this article were performed with double-precision floating-point arithmetic.
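The two initialization procedures described above can be sketched as follows: a sampling-based fill-level initialization for the FSLBM and a tanh profile for the PFLBM's diffuse interface. The 101 × 101 sample grid matches the text; the tanh profile uses the form reconstructed above and is therefore an assumption in its exact prefactor.

```python
import numpy as np

def fill_level_from_samples(is_inside, cell_x, cell_y, n=101):
    """FSLBM fill level of one cell: fraction of n x n equally spaced sample
    points that lie inside the prescribed initial liquid profile."""
    xs = np.linspace(cell_x, cell_x + 1.0, n)
    ys = np.linspace(cell_y, cell_y + 1.0, n)
    X, Y = np.meshgrid(xs, ys)
    return np.count_nonzero(is_inside(X, Y)) / (n * n)

def diffuse_interface_profile(s, x0, xi):
    """PFLBM phase indicator along the interface normal: tanh profile of width xi
    centered at x0 (assumed form, see text)."""
    return 0.5 * (1.0 + np.tanh(2.0 * (s - x0) / xi))

# Example: a flat liquid surface at height d = 25 in a column of 50 cells
d = 25.0
phi_fs = [fill_level_from_samples(lambda X, Y: Y <= d, 0, y) for y in range(50)]
y = np.arange(50) + 0.5
phi_pf = diffuse_interface_profile(d - y, x0=0.0, xi=5.0)  # phi -> 1 in the liquid below y = d
```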
Standing waves
In this section, both models' simulation results for a standing gravity and capillary wave are presented and compared with their analytical solutions.
Gravity wave
A gravity wave is a standing wave that oscillates at the phase boundary between two immiscible fluids. Its fluid dynamics are entirely governed by gravitational forces, with surface tension forces being negligible in comparison.
Simulation setup. A gravity wave with wavelength, L, was simulated in a quadratic domain of size L × L × 1 (x-, y-, z-direction). As illustrated in Figure 1, a free boundary was initialized with the profile, y(x) = d + a_0 cos(kx), with liquid depth, d = 0.5L, initial amplitude, a_0 = 0.01L, and wavenumber, k = 2π/L. There were no-slip boundary conditions at both walls in the y-direction and periodic boundary conditions at all other domain walls. Due to the gravitational acceleration, g, the initial profile evolved into a standing wave oscillating around the liquid depth, d, and damped by viscous forces. The Reynolds number, Re (Equation (32)), is defined in terms of the angular frequency of the wave, ω_0 (Equation (33)). In both models, the heavier phase was initialized with hydrostatic pressure according to g such that the LBM pressure at y(x) = d equaled the constant atmospheric volume pressure p_V(t) = p_0 = ρ_0 c_s^2 = 1/3. The surface elevation, a*(x, t) = a(x, t)/a_0, and the time, t* = t ω_0, are non-dimensionalized to ease comparison. The simulations were run until t* = 80 and the surface elevation, i.e., amplitude a(x, t), was monitored at x = 0 every t* = 0.1. It was computed as the sum of all cells' fill levels in the y-direction at x = 0 in the FSLBM. In the PFLBM, the surface elevation was evaluated by interpolating the position at which the phase-field value is φ = 0.5.
The simulations were carried out with Re = 10 and L ∈ {50, 100, 200, 400, 800} for the FSLBM and L ∈ {50, 100, 200, 400} for the PFLBM. The FSLBM's gas phase was considered to be the atmosphere, having a constant atmospheric volume pressure of p_V(t) = p_0 defined by the LBM reference density ρ_0 = 1. In the PFLBM, the density ratio, ρ_H/ρ_L = 1000, and kinematic viscosity ratio, ν_H/ν_L = 1, mimic a liquid-gas system and were chosen to conform with the analytical solution of the capillary wave in Section 3.1.2. The relaxation rate was set to ω = 1.8 and ω_H = 1.99, for the FSLBM and for the heavy phase in the PFLBM, respectively. The mobility, M = 0.02, and interface width, ξ = 5, were chosen in the PFLBM conforming to usual choices in the literature [29].
Analytical model. The analytical model for the gravity wave is derived by linearization of the continuity and Euler equations with a free-surface boundary condition [59]. The surface elevation, i.e., the amplitude of the standing wave, is obtained as

a(t) = a_D(t) cos(ω_0 t),

under the assumption of an inviscid fluid resulting in zero damping with a_D(t) = a_0. Viscous damping is considered by

a_D(t) = a_0 exp(−2 ν k^2 t),

as provided in Reference [60]. The model is valid for k|a_0| ≪ 1 and k|a_0| ≪ kd [59], which is applicable in this study with k|a_0| = 0.02π ≪ 1 < kd = π.
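A short sketch for evaluating this analytical reference solution is given below. The finite-depth dispersion relation, ω_0 = √(g k tanh(kd)), the damping rate 2νk², and the chosen gravitational acceleration are assumptions made for illustration; the sketch merely shows how such a reference curve could be tabulated for comparison with the monitored surface elevation.

```python
import numpy as np

def gravity_wave_reference(L, d, a0, g, nu, t):
    """Damped standing gravity-wave amplitude a(t) (assumed form, see text)."""
    k = 2.0 * np.pi / L
    omega0 = np.sqrt(g * k * np.tanh(k * d))   # finite-depth dispersion relation (assumed)
    return a0 * np.exp(-2.0 * nu * k**2 * t) * np.cos(omega0 * t)

# Example in lattice units: L = 200, d = 0.5 L, a0 = 0.01 L
L, d, a0 = 200.0, 100.0, 2.0
nu = (1.0 / 3.0) * (1.0 / 1.8 - 0.5)           # viscosity for relaxation rate omega = 1.8
g = 1e-6                                       # illustrative gravitational acceleration
k = 2.0 * np.pi / L
omega0 = np.sqrt(g * k * np.tanh(k * d))
t_star = np.linspace(0.0, 80.0, 801)           # non-dimensional time t* = t * omega0
a_ref = gravity_wave_reference(L, d, a0, g, nu, t_star / omega0) / a0
```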
Results and discussion. Figure 2 shows the amplitude, a*(0, t*), over time, t*, for different wavelengths, L, simulated with the FSLBM. As is immediately evident, the FSLBM could not reasonably simulate the gravity wave setup chosen here with small resolutions. This is caused by the requirement of a small initial amplitude, a_0 = 0.01L, to be consistent with the analytical solution. The surface of the wave moves only in a range of a few LBM cells, or even purely within one cell. This could not be simulated with sufficient accuracy with the FSLBM due to its sharp interface representation on a fixed Cartesian grid. On the other hand, with higher resolution, the amplitude spans more cells and the FSLBM converged well with reasonable accuracy to the analytical model. In particular, a resolution of L = 50 did not allow a meaningful simulation; however, L ∈ {100, 200, 400, 800} allowed 2, 3, 4, and 5 periods to be simulated, respectively. Due to the diffuse interface of the PFLBM, the model was capable of simulating even very small amplitudes, as shown in Figure 3. The simulations converged well and, from L = 100 on, the phase of the wave was predicted accurately. However, the model clearly underestimated the wave's damping for the parameters used in this study.
In Figure 4, the FSLBM and PFLBM are compared directly. The resolution of the FSLBM was chosen such that a sufficient number of periods were simulated to allow a meaningful comparison. The computational grid had to be resolved very finely for the amplitude to span multiple cells (note that far fewer periods were resolved by the FSLBM for the same resolution). However, in this test case, there was only a single fluid and a single gas domain divided by one interface with little curvature. Therefore, the width of the PFLBM's diffuse interface was less significant here and did not enforce a highly resolved computational grid. While this is representative for a variety of applications, it is not for many others in which the minimal diffuse interface width imposes a higher computational resolution. It must also be noted that the size of the amplitude was chosen for consistency with the analytical model. This highlights a limit case for the FSLBM, where difficulties arise due to only small surface movement. In this particular case, where the surface oscillates back and forth around the same lattice cells, the amplitude of the oscillation can only be sufficiently resolved when cell conversions are triggered, i.e., when the surface movement extends beyond a single layer of cells. In other setups where there is persistent directional movement of the surface, this is not a problem and the surface position is resolved well anywhere between lattice cells.
Capillary wave
In contrast to the gravity wave, the fluid dynamics of the capillary wave are purely dominated by surface tension forces, while gravitational forces are neglected.
Simulation setup. The simulation setup was equivalent to the one of the gravity wave in Section 3.1.1. As for the gravity wave, a standing capillary wave evolves, oscillating around a liquid depth, d, because of surface tension forces. The decay of the wave is again caused by energy dissipation due to viscous friction. While the definition of Re in Equation (32) is also used for the capillary wave, the angular frequency of the wave is given by

ω_0 = √(σ k^3 / (ρ_H + ρ_L)).

Here, it can be seen that it is now defined with the surface tension, σ, and the densities of the heavy, ρ_H, and light phase, ρ_L. Except for hydrostatic pressure, which is not present due to the absence of gravity, the simulation parameters and evaluation procedure were identical to those presented in Section 3.1.1. All simulations were performed with Re = 10 and L ∈ {50, 100, 200, 400, 800} for the FSLBM and L ∈ {50, 100, 200, 400} for the PFLBM. In the latter, the density ratio, ρ_H/ρ_L = 1000, and kinematic viscosity ratio, ν_H/ν_L = 1, mimic a liquid-gas system and conform with the capillary wave's analytical model. The relaxation rate was set to ω = 1.8 and ω_H = 1.99, for the FSLBM and for the heavy phase in the PFLBM, respectively. As in Section 3.1.1, the mobility, M = 0.02, and interface width, ξ = 5, were used in the PFLBM.
Analytical model. Prosperetti [61] presented an analytical model for small-amplitude capillary waves in viscous fluids. The model assumes that there is either a single fluid with a free surface (ρ_L = 0, µ_L = 0) or two fluids with equal kinematic viscosity such that ν_H/ν_L = 1. It is derived from the linearized Navier-Stokes equations and is therefore only valid in the limit of infinitesimally small wave amplitudes. Assuming no gravitational forces and no initial velocity, the capillary wave amplitude, a(t), with respect to time, t, is given in closed form in Reference [61] in terms of the complementary error function, erfc(x) = 1 − erf(x), the four roots, z_i, of an associated quartic polynomial, and coefficients, Z_i, computed by circular permutation of the index i in z_i. The solution further depends on the dimensionless parameter

β = ρ_L ρ_H / (ρ_L + ρ_H)^2.

The analytical model is only applicable for small amplitudes, such that a correction factor was proposed extending the validity of the model to amplitudes of up to a_0 ≈ 0.1L [62]. For a_0 = 0.01L as chosen here, this correction factor is only 1.0023 and it can be assumed that the original analytical model is valid to be used as a reference in this study.
Results and discussion. As illustrated in Figure 5, the simulations of the FSLBM did not converge with increasing resolution of the computational grid. The only difference between the gravity wave test case and the capillary wave test case is the driving force, which is a body force in the former and the surface tension in the latter. In the gravity wave test case, the FSLBM simulation converged and the results agreed well with the analytical model. This suggests the potential existence of errors in the surface tension model used within the FSLBM. There, surface tension forces are incorporated by the Laplace pressure, p L , from Equation (14), with the interface curvature κ being the only non-constant parameter in the equation. Therefore, it is apparent that the diverging behavior must be caused by a diverging interface curvature computation.
As described in Section 2.2, the simulations and results shown here are based on a curvature computation using the finite difference method (FDM) [50]. A similar result was obtained when computing the interface curvature using a local triangulation model and the algorithm from Taubin [63] in waLBerla, as suggested by Reference [49]. This is in agreement with Reference [50], where both approaches were found to diverge with increasing resolution when computing the curvature of a resting spherical gas bubble. On the other hand, a curvature model based on local triangulation and a least squares fit optimization (LSQR) was found to be second-order convergent in the same test case [50]. However, using a similar LSQR approach [40] in FluidX3D, also no convergent behavior could be obtained in the capillary wave test case at the largest resolutions. It is vital to remark that the absolute value of the curvature decreases when increasing the resolution. With the parametrization chosen here, also the absolute numerical value of the surface tension decreases with increasing L. Therefore, although the LSQR curvature model converges, the model's constant error in curvature has increasingly more influence at higher resolution.
The capillary wave has been previously simulated and compared to a different analytical model [64], where gravitational forces are also considered [7]. There, simulations were performed with the curvature computation using the algorithm of Taubin but the authors did not present a convergence study. Using the same capillary wave setup and resolution as in Reference [7], a moderate agreement with the analytical model was observed with the FSLBM implementations used in this work. However, in a convergence study, again the FSLBM did not converge to the analytical model regardless of the curvature computation model used. In the work of Körner et al. [8], the FSLBM has also been used to simulate a capillary wave and found to agree well with the analytical solution. The authors did not present a convergence study but verbally argued that the error decreases linearly with increasing resolution. The curvature model used there is based on the two-dimensional template-sphere method [65] that uses a neighborhood of 25 cells to compute the curvature. However, the implementations presented here are explicitly targeted at parallel computing environments, in which such a calculation is not feasible. To maintain reasonable parallel efficiency, only information from nearest neighbor cells is desired for curvature computation.
In contrast, as illustrated in Figure 6, the PFLBM converged well towards the analytical solution but slightly underestimated the analytical model's damping with the parameters from this study. A comparable capillary wave test case has been simulated with the PFLBM in Reference [34]. However, compared to the parameters chosen here, the initial amplitude and Reynolds numbers were significantly smaller in Reference [34], leading to a more accurate prediction of the damping. It can be concluded that special attention must be paid when simulating surface tension dominated flows with very low curvature with the FSLBM. While the capillary wave resembles an extreme case with small amplitudes leading to infinitesimal values of absolute curvature, other test cases with major surface tension influence have been simulated with good accuracy with the FSLBM [4,40]. On the other hand, the PFLBM accurately simulates this test case and as in Section 3.1.1, it has to be pointed out explicitly that the PFLBM is capable of also simulating very small amplitudes.
Buoyancy driven flows
This section presents numerical simulations of buoyancy driven flows. The first test case is an unconfined flow, where a single gas bubble rises in liquid. In the second test case, a large gas bubble rises in liquid contained in a cylindrical tube. This large gas bubble in the confined, buoyancy driven flow is commonly referred to as Taylor bubble.
Rising bubble
The more practically oriented third test case is an unconfined buoyancy driven flow, i.e., the rise of a single gas bubble in a liquid column. In order to correctly simulate the bubble shape and rise velocity, the balance between buoyancy, viscous, and surface tension forces must be correct. As there are no analytical models available predicting a rising bubble's shape and velocity, the comparison is drawn using experimental data from Bhaga and Weber [66].
Simulation setup. As shown in Figure 7, a gas bubble was initialized as a sphere of diameter, D, centered at (4D, 4D, 1D) in a computational domain of size 8D × 8D × 20D (x-, y-, z-direction). Gravity was applied in the negative z-direction causing the bubble to rise due to buoyancy. The top and bottom walls (in z-direction) were realized as no-slip boundaries, while the side walls of the domain were periodic. The size of the domain was tested, and determined to be sufficiently large so as not to influence the results of the simulations. Hydrostatic pressure was initialized such that the reference density, ρ 0 = 1, was positioned at 10D in the z-direction.
The rise of a single gas bubble in liquid is characterized by the Morton number,

Mo = g µ^4 / (ρ σ^3),

which describes the ratio of viscous to surface tension forces, and the Bond number,

Bo = ρ g D^2 / σ,    (42)

which describes the ratio of gravitational forces, i.e., buoyancy, to surface tension forces. It is commonly also referred to as the Eötvös number (Eo). The definitions of these dimensionless numbers are taken from Reference [66], and the density, ρ, and dynamic viscosity, µ, refer to the heavier fluid. The bubble shape and the position of its center of mass were monitored at every interval of the non-dimensional reference time, t* = t √(g/D). From the bubble position in the z-direction at time, t* = 5, and t* = 10, the rise velocity, U, and the Reynolds number,

Re = ρ U D / µ,    (44)

were evaluated. The simulations were stopped at t* = 10. The bubble shape and the Reynolds number were then compared with experimental observations from Reference [66].
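The following sketch shows how simulation parameters could be checked against these dimensionless groups and how the rise velocity and Reynolds number are extracted from two monitored bubble positions. The definitions of Mo, Bo, and the reference time follow the reconstruction above and are assumptions in their exact form; the numerical values are purely illustrative and do not correspond to the cases of Table 1.

```python
import math

def morton(g, mu, rho, sigma):
    """Morton number (assumed form): ratio of viscous to surface tension forces."""
    return g * mu**4 / (rho * sigma**3)

def bond(rho, g, D, sigma):
    """Bond (Eotvos) number: buoyancy relative to surface tension forces."""
    return rho * g * D**2 / sigma

def rise_reynolds(z1, z2, t1_star, t2_star, D, g, nu):
    """Reynolds number from two center-of-mass positions monitored at t1* and t2*."""
    t_ref = math.sqrt(D / g)                       # reference time scale (assumed)
    U = (z2 - z1) / ((t2_star - t1_star) * t_ref)  # rise velocity
    return U * D / nu

# Illustrative example in lattice units for a resolution of D = 32
D, g, sigma, nu, rho = 32.0, 1e-5, 0.005, 0.05, 1.0
print(morton(g, rho * nu, rho, sigma), bond(rho, g, D, sigma))
print(rise_reynolds(z1=160.0, z2=288.0, t1_star=5.0, t2_star=10.0, D=D, g=g, nu=nu))
```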
The simulations were carried out with D ∈ {8, 16, 32, 64} for both models. Additionally, as in Reference [29], a fixed mobility, M = 0.02, and interface width, ξ = 5, were used for the PFLBM. Furthermore, to close the system parameters, the density of the liquid phase was specified as ρ_H = 1, and the density ratio, ρ_H/ρ_L = 1000, and dynamic viscosity ratio, µ_H/µ_L = 100, were chosen to mimic an air-water system. For the FSLBM, the initial pressure of the bubble was set to the reference pressure, p_0 = ρ_0 c_s^2 = 1/3, with reference density, ρ_0 = 1. The dimensionless numbers that define the four cases tested, and the employed LBM relaxation rates, are listed in Table 1. Hydrostatic pressure was initialized in the domain such that the pressure is equivalent to the LBM reference density, ρ_0 = 1, in the center of the domain in the z-direction.
Results and discussion. The simulated bubble shapes at t* = 10 are presented in Figure 8 and in the Appendix in Figures 18 to 20. It can be seen that both models converged to the Reynolds numbers reported in the experiments in Reference [66]. The FSLBM simulated the rising bubble with reasonable accuracy for computational resolutions of D ≥ 16. In contrast, for the PFLBM, it was not possible to obtain results for resolutions of D < 32. Furthermore, for case 2, neither the bubble shape nor the Reynolds number was predicted reasonably well, regardless of the resolution, as illustrated in Figure 8. Moreover, when increasing the simulation run time, non-physical bubble shapes and collapse were also observed with the PFLBM. Figure 9a shows this behavior for case 2 with D = 32, in which the skirted bubble film ruptures at t* > 10. With increased computational resolution, this effect occurred at later t*.
It was shown in the literature that phase-field models are sensitive to the choice of the mobility parameter, M [67]. However, in general there appears to be no robust solution for how this parameter should be specified for arbitrary cases. In a parameter study performed here, as depicted in Figure 9, it was observed that larger values of M appear to amplify such non-physical effects. On the other hand, with M < 0.02, instabilities were observed, as the relaxation time, τ_φ, in Equation (21) decreases and approaches its lower stability limits. These instabilities occurred even when using the weighted MRT scheme, which is generally known for good stability properties [32]. A rigorous study of this behavior is outside the scope of this work, but is proposed for future investigation. The test cases used in this study have also been simulated in two dimensions by Kumar et al. [34], using the PFLBM. To check the implementations' validity, these two-dimensional simulations were also performed here, agreeing with Reference [34] and without becoming unstable or leading to implausible bubble shapes.
While the reason for these instabilities is not yet clear, it must be pointed out that the expected bubble shapes consist of only a thin film of gas. In the literature, similar circular destabilization of films with the PFLBM could be observed in other test cases, however, often of a thin liquid rather than gas film [37].
Taylor bubble
The fourth test case is a buoyancy driven confined flow, a large gas bubble rising through stagnant liquid in a cylindrical tube. During the bubble's rise, it takes an elongated shape with a rounded leading edge. Its length is several times the tube's diameter and it is commonly referred to as Taylor bubble [68,69].
Setup. The simulation setup chosen here is similar to the one in Reference [22], conforming to the experiments in Reference [70]. As illustrated in Figure 10, in a computational domain of size 1D × 1D × 10D (x-, y-, z-direction), the domain walls formed a cylindrical tube of diameter, D, pointing in the z-direction. A gas bubble was initialized as a cylinder with diameter, 0.75D, and length, 3D, oriented concentrically to the boundary tube. The gas bubble's bottom was located at D in the positive z-direction. The rest of the domain was filled with a stagnant liquid. According to the gravitational acceleration, g, the liquid was initialized with hydrostatic pressure such that the reference pressure, p_0 = ρ_0 c_s^2 = 1/3, was set at 5D in the z-direction. As in Section 3.2.1, the Morton number, Mo, Bond number, Bo, and the reference time, t*, characterize the system. Here, the tube diameter, D, was used as the characteristic length [70] in these non-dimensional numbers.
The experiments in Reference [70] were conducted with Bo = 100, Mo = 0.015, and olive oil. Following Reference [22], it is assumed that the density and viscosity of the air injected into the oil was ρ_SI = 1.225 kg/m^3 and µ_SI = 1.983 · 10^−5 kg/(m·s), respectively. Therefore, the density ratio, ρ_H/ρ_L = 744, and the dynamic viscosity ratio, µ_H/µ_L = 4236, were used. As in Section 3.2.1, for the FSLBM, the initial pressure of the bubble was set to the reference pressure, p_0 = ρ_0 c_s^2 = 1/3. The simulations were performed with computational resolutions according to the tube diameter, D ∈ {16, 32, 64, 128}. However, in the PFLBM, simulating a tube diameter of D ≤ 32 was not possible, as the diffuse interface led to non-physical wall interactions with the interfacial region. Based on the investigations from Section 3.2.1, the mobility parameter was set to the lowest value at which the simulations at any tested resolution were stable, namely M = 0.08. The interface width was chosen as ξ = 3. For all simulations, the relaxation rate was set to ω = 1.8 in the FSLBM, and ω_H = 1.76 in the heavier phase of the PFLBM's hydrodynamic LBM step.
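As a plausibility check of these parameters, the sketch below derives the physical scales implied by Bo = 100 and Mo = 0.015 together with the assumed air properties and the stated ratios. The Bhaga-Weber-style definitions of Bo and Mo with the density difference between the phases are assumptions (cf. Section 3.2.1), so the derived surface tension and tube diameter are indicative values only.

```python
# Back out the physical scales implied by Bo = 100, Mo = 0.015 and the given ratios
rho_gas, mu_gas = 1.225, 1.983e-5        # assumed air properties from the text (SI units)
rho_liq = 744 * rho_gas                  # olive oil density from the density ratio
mu_liq = 4236 * mu_gas                   # olive oil dynamic viscosity from the viscosity ratio
g = 9.81
Bo, Mo = 100.0, 0.015
d_rho = rho_liq - rho_gas

# Mo = g * mu^4 * d_rho / (rho^2 * sigma^3)  ->  solve for the surface tension sigma
sigma = (g * mu_liq**4 * d_rho / (rho_liq**2 * Mo)) ** (1.0 / 3.0)
# Bo = d_rho * g * D^2 / sigma  ->  solve for the tube diameter D
D = (Bo * sigma / (d_rho * g)) ** 0.5

print(rho_liq, mu_liq)   # ~911 kg/m^3, ~0.084 kg/(m s)
print(sigma, D)          # ~0.033 N/m, ~0.019 m -> plausible for olive oil in a ~2 cm tube
```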
Results and discussion. Figure 11 compares the simulated Taylor bubble's shape at different computational resolutions at time t* = 15 with the experimental measurement [70]. To ease comparison, the axial location, z* = z/D, and radial location, r* = r/(0.5D), are non-dimensionalized. Additionally, an axial shift is employed so as to set z* = 0 at r* = 0 for the bubble's front and tail individually. Both models converged well, but showed minor deviations from the experimental data of Reference [70]. The shape of the front of the bubble was predicted with reasonable accuracy at all computational resolutions tested. However, at the tail of the bubble, a resolution of D ≥ 64 was required for the FSLBM to capture the interface contour moderately well. As also observed in Section 3.2.1, satellite bubbles separated from the main bubble in the case of the FSLBM, as shown in the Appendix in Figure 25. In contrast to the observations for the rising bubble test, this effect vanished with increasing computational resolution.
In Table 2, the simulated Reynolds number, Re, as defined in Equation (44), is shown. The tube diameter, D, and the Taylor bubble's rise velocity, U, are used as characteristic quantities to determine Re. The rise velocity, U, was computed from the bubble's center-of-mass location in the z-direction at time, t* = 10, and t* = 15. In comparison to the PFLBM, which agreed well with the experimental measurement [70], the FSLBM showed larger deviations. This was even more pronounced at lower computational resolutions, where it could capture the bubble's axial movement only moderately well. The bubble, with the evaluation locations specified in Figure 12, is presented in the Appendix in Figure 26. Both models converged and agreed well with the experimental data [70]. On the other hand, at a radial line situated at 0.111D in front of the bubble, the non-dimensionalized radial fluid velocity, U*_r = U_r/U (see Figure 13), and axial velocity (see Appendix, Figure 27) showed larger deviations when using the PFLBM, favoring the FSLBM at higher computational resolution. A similar observation could be made at a radial line at 0.504D behind the bubble's front, as visualized in Figure 14. Figure 28 illustrates that, at a radial line located 2D behind the front of the bubble, the axial velocity predicted by both models agreed reasonably well with the experimental data. The test case is radially symmetric such that the evaluation can be performed at an arbitrary cross-section.
Dynamic coalescence -crown splash
Understanding the dynamics of splashing during liquid drop impacts has many implications, including aerosol production [71], erosion processes [72], and microplastic transfer in the environment [4]. In this study, two drop impact test cases are simulated for which photographs of the laboratory experiments are available in the literature [73]. Both test cases have already been simulated using the FSLBM with the LSQR curvature computation model [4]. Due to the absence of quantitative experimental data, the comparisons with reference data can only be made qualitatively.
Vertical drop impact
In the fifth test case, a vertical drop impacting on a thin film of liquid was simulated and compared with experimental data [73].
Setup. As shown in Figure 15, a thin liquid film of height H = 0.5D was initialized in a computational domain of size L_x × L_y × L_z (x-, y-, z-direction) with L_x = L_y = 10D and L_z = 5D. At the pool's surface, a spherical droplet with diameter D was initialized with an impact velocity, U, in the negative z-direction with α = 0°, leading to a vertical impact. The domain's side walls in the x- and y-directions were periodic, whereas no-slip boundary conditions were used at the domain's top and bottom walls. Conforming with the gravitational acceleration, g, the hydrostatic pressure was initialized such that the reference density was ρ_0 = 1 at the surface of the pool. The drop impact is described by the Weber number, We = ρ U^2 D/σ, which relates inertial to surface tension forces, and by the Ohnesorge number, Oh = µ/√(ρ σ D), which relates viscous forces to inertial and surface tension forces. The drop diameter, D, the Bond number, Bo (see Equation (42)), and the reference time, t* = t U/D, close the definition of the system. As found in Reference [4], the simulation results must be offset by t* = 0.16 to synchronize the first photograph of the laboratory experiment with the simulation setup chosen in this study.
In the experiments of Reference [73], a 70 % glycerol-water mixture at 23 °C was used with ρ_SI = 1200 kg/m^3 and µ_SI = 0.022 kg/(m·s). The experiment obeyed the non-dimensional numbers We = 2010 and Oh = 0.0384. Assuming g_SI = 9.81 m/s^2, the system is closed by Bo = 3.18. As in Section 3.3.2, the density ratio is set to ρ̄ = 1000 and the dynamic viscosity ratio is set to μ̄ = 100.
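A rough sketch of how these targets translate into lattice units is given below; it assumes the standard definitions We = ρ U² D/σ, Oh = µ/√(ρ σ D), and Bo = ρ g D²/σ, and the helper name is illustrative rather than taken from the simulation code.

```python
# Sketch: given a chosen resolution D and relaxation rate omega, derive lattice-unit
# surface tension, impact velocity, and gravity from the target We, Oh, and Bo.
def drop_impact_lattice_units(D, omega=1.989, We=2010.0, Oh=0.0384, Bo=3.18, rho=1.0):
    nu = (1.0 / omega - 0.5) / 3.0
    mu = rho * nu                               # heavy-phase dynamic viscosity in lattice units
    sigma = mu**2 / (rho * D * Oh**2)           # from the Ohnesorge number
    U = (We * sigma / (rho * D)) ** 0.5         # impact velocity from the Weber number
    g = Bo * sigma / (rho * D**2)               # gravity from the Bond number
    return sigma, U, g

# Example: D = 40 cells yields sigma ~ 1.4e-5, U ~ 0.027, and g ~ 3e-8 in lattice units.
```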
The simulations were performed with computational resolutions according to D ∈ {20, 40, 80}. The FSLBM's relaxation rate was chosen as ω = 1.989, and the PFLBM's hydrodynamic relaxation rate in the heavy phase was set to ω_H = 1.988. In agreement with the findings from Section 3.2, lower values for the PFLBM's interface width, ξ, and mobility, M, tended to give more physically realistic results, as shown in the Appendix in Figure 29. Therefore, ξ = 4 and M = 0.03 were chosen, as these are the lowest values that allowed stable simulations at all tested computational resolutions.
Results and discussion. In Figure 16, the crown formation at time t* = 12 is shown for both models at various computational resolutions. In the Appendix, Figures 30 and 31 compare the simulated and experimental drop impact dynamically, i.e., with respect to time. While no scale bars for the photographs of the laboratory experiments are available, it can be noted that all simulations converged well with increasing resolution, and the dimensions of the simulated splash crowns agreed with each other. The measured simulated cavity depths and splash crowns' inner diameters are presented in the Appendix in Tables 3 and 4. The FSLBM captured the droplets ejected from the crown qualitatively well, even at low computational resolution. Similar results have also been obtained with the FSLBM and LSQR curvature computation [4]. In contrast, the PFLBM with the parameters chosen here could not sufficiently predict these droplets. It must be emphasized that the PFLBM is sensitive to the choice of the interface width and mobility parameter (see Appendix, Figure 29). That is, for consistency, these values were chosen so as to be stable at the lowest computational resolution, D = 20, and were kept constant for higher resolutions. A rigorous study of the individual lower limits of these parameters at each resolution might improve the quality of the results.
Oblique drop impact
In the final test case, an oblique drop impact is simulated as in the experiments of References [74,75].
Setup. The setup is similar to Section 3.3.1 and presented in Figure 15. However, the computational domain is cubical with L_x = L_y = L_z = 10D, and the liquid pool is of height H = 5D. The droplet's impact velocity, U, is oriented at an angle α = 28.5° from the negative z-direction. The experimental investigations were performed with We = 416.5, D_SI = 1.15 · 10^-4 m, and liquid water with ρ_SI = 1000 kg/m^3 and σ_SI = 0.072 kg/s^2. Assuming µ_SI = 10^-3 kg/(m·s) for water at 20 °C and g_SI = 9.81 m/s^2, the setup is defined by Oh = 0.011 and Bo = 0.0018. The density ratio, ρ̄ = 1000, and the dynamic viscosity ratio, μ̄ = 100, are chosen so as to mimic an air-water system [75]. The computational resolution, relaxation rates, and hydrostatic pressure were set as for the vertical drop impact in Section 3.3.1. Here, the lowest interface width and mobility that allowed stable simulations in the PFLBM at all tested resolutions were ξ = 4 and M = 0.09, respectively.
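For reference, the quoted Ohnesorge and Bond numbers can be recovered from the stated SI properties, assuming the standard definitions Oh = µ/√(ρ σ D) and Bo = ρ g D²/σ:

```python
import math

# Consistency check of the quoted dimensionless numbers from the stated SI properties.
rho, sigma, mu = 1000.0, 0.072, 1e-3        # water: kg/m^3, kg/s^2, kg/(m*s)
D, g = 1.15e-4, 9.81                        # drop diameter in m, gravity in m/s^2

Oh = mu / math.sqrt(rho * sigma * D)        # ~0.011, as quoted above
Bo = rho * g * D**2 / sigma                 # ~0.0018, as quoted above
print(f"Oh = {Oh:.4f}, Bo = {Bo:.5f}")
```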
Results and discussion. In Figure 17, the crown formation at time t* = 18 is shown for both models at various computational resolutions and compared with photographs of the laboratory experiments [75]. Additionally, Figures 32 and 33 in the Appendix show the drop impact as simulated by the FSLBM and PFLBM over time. The FSLBM and PFLBM converged well, and the dimensions of the simulated splash crowns agreed well with each other. As for the vertical impact, scale bars for the photographs of the laboratory experiments are missing, and no quantitative comparison with reference data could be drawn. Nevertheless, the measured simulated cavity depths and splash crowns' inner diameters are presented in the Appendix in Tables 5 and 6. In contrast to the vertical impact, fewer droplets were ejected from the crown, and the PFLBM agreed qualitatively well at high computational resolution. The FSLBM captured the shape of the drop cavity and splash crown qualitatively well, even with low computational resolution. Similar results have been obtained with the FSLBM and LSQR curvature computation [4].
Conclusion
This study has compared two different LBM approaches for simulating flows in which the dynamics of the lighter phase are assumed negligible. After an introduction of the numerical foundation of the FSLBM and PFLBM, both models were applied to a series of benchmark cases and their performance was discussed in terms of their numerical properties and implementation aspects.
The FSLBM ignores fluid flow in the secondary phase and requires much less memory, making it efficient and more applicable to limited-memory hardware. On the other hand, the PFLBM simulates flow in both phases, but it is well suited for massively parallel computing and can be implemented more easily and flexibly using code generation technology. A very distinct difference between the two models is their interface representation, which is sharp in the FSLBM and diffuse in the PFLBM. In total, six numerical experiments were shown in which the models' accuracy was compared at different resolutions of the computational grid.
While the standing gravity wave was simulated more accurately by the FSLBM, a much higher resolution was required than in the PFLBM to capture the motion of the interface at low amplitude. However, it has to be remarked that this test setup represents a limit case of the FSLBM: in consistency with the analytical solution, the interface motion is limited to only a few LBM cells, even for highly resolved grids.
In the capillary wave test case, the FSLBM diverged with increasing resolution due to deficiencies in all tested approaches for the computation of infinitesimal interface curvature. In contrast, the PFLBM could simulate the capillary wave with reasonable accuracy.
The third and fourth test cases featured buoyancy-driven flows. That is, an unconfined single gas bubble rising in liquid, for four different characteristic parameter sets, and a confined Taylor bubble traversing a cylindrical tube were simulated. The FSLBM was able to capture the bubble shape and Reynolds number with reasonable accuracy, even with moderate resolution. On the other hand, with the parameters used in this study, the PFLBM required higher resolutions for the simulations to be stable. While it predicted the bubble shape with reasonable accuracy in the initial phase of the single gas bubble rise, the bubbles tended to evolve into non-physical shapes, eventually leading to a collapse of the simulation. This observation was made even with the highest computational resolution used in this study. A sensitivity of the phase-field to the chosen mobility was observed. However, the mobility can only be chosen in a certain range to obtain stable simulations, and in this range no generally suitable value could be found.
In the fifth and sixth test cases, the models' ability to capture dynamic coalescence was validated. To do this, a vertical and an oblique drop impact into a pool of liquid were simulated. The FSLBM predicted the shape of the splash crown reasonably well, even with low computational resolution. With sufficiently high computational resolution, the PFLBM was also able to simulate the oblique drop impact with satisfying accuracy. However, for the vertical drop impact, only the FSLBM was able to capture the droplets ejected from the crown formation sufficiently well. As for the rising bubble, the PFLBM was observed to be sensitive to the choice of the mobility and interface width, but no generally applicable choice could be identified.
The investigation of the optimal choice of mobility and interface width in the PFLBM remains future work. Extending the implementations of both models with adaptive refinement of the computational grid is expected to significantly alleviate the issues observed and to improve the efficiency of the implementations. Additionally, the applicability of the FSLBM to code generation should be explored, leading to a flexible and portable code base.
A. Appendix
The appendix presents additional simulation results that were not shown in the main part of the manuscript for reasons of brevity.
A.1. Rising bubble
Figures 18 to 20 extend Section 3.2.1 with simulation results for the rising bubble test cases 1, 3, and 4 from Table 1.
Additionally, simulation results for the FSLBM with the LSQR curvature computation model, as described in Section 3.1.2, are presented in Figures 21 to 24. The simulations were conducted with parameters as in Table 1 using the software FluidX3D [4,40]. The results agree reasonably well with those of the FSLBM with the FDM curvature computation model as presented in Figures 8 and 18 to 20, and therefore also with the experimental results. However, as shown in Figure 22 for the case with Bo = 115 and Mo = 4.63 · 10^-3, the bubble broke apart into several smaller bubbles at a resolution of D = 64, in contrast to the experiment. A further case with Bo = 339 and Mo = 43.1 is shown at different computational resolutions according to the initial bubble diameter, D; the photographs of the laboratory experiments in these figures were reprinted from Reference [66] with the permission of Cambridge University Press.

A.2. Taylor bubble

Figures 25 to 28 present additional results for the Taylor bubble test case. The velocity profiles are evaluated along the lines at the locations specified in Figure 12, including the tube's center line and radial lines within the tube of diameter D, and are compared with the experimental data [70] in terms of the non-dimensionalized locations z* = z/D and r* = r/(0.5D) at time t* = 15.
A.3. Vertical drop impact
Extending Section 3.3.1, Figure 29 illustrates the PFLBM's sensitivity to the mobility parameter, M, and the interface width, ξ, in the vertical drop impact test case. Figures 30 and 31 compare the drop impact, as simulated by the FSLBM and PFLBM, with the photographs of the experiment over time; the simulation results are true to scale, but no scale bar is available for the photographs, so the splash crown's dimensions can only be compared between the simulations (the photographs were reprinted from Reference [73] with the permission of AIP Publishing). Tables 3 and 4 list the simulated non-dimensionalized cavity depth, h*_ca(t*) = h_ca(t*)/D, and splash crown diameter, d*_cr(t*) = d_cr(t*)/D, of the vertical drop impact. The cavity depth, h_ca(t*), is the maximum distance of the cavity bottom to the initial position of the liquid surface at time t* = 0, and the splash crown diameter, d_cr(t*), is the crown's inner diameter at the position of the initial liquid surface at time t* = 0, both measured in the center cross-section with normal in the x-direction. The results are presented for different dimensionless times, t*, and computational resolutions as defined by the initial drop diameter, D.

A.4. Oblique drop impact

Analogously, Figures 32 and 33 show the oblique drop impact as simulated by the FSLBM and PFLBM over time, compared with the photographs of the experiment (reprinted from Reference [75] with the permission of the original authors). Tables 5 and 6 list the corresponding simulated non-dimensionalized cavity depth, h*_ca(t*), and splash crown diameter, d*_cr(t*), of the oblique drop impact, measured as for the vertical impact.
A.5. Supplementary material: open source simulation setups
The following supplementary material is available as part of the online article:
• An archive of the C++ source code of the FSLBM and PFLBM as part of the software framework waLBerla [39], version https://i10git.cs.fau.de/walberla/walberla/-/tree/01a28162ae1aacf7b96152c9f8 The simulation setups are located in the directories apps/showcases/FreeSurface and apps/showcases/PhaseFieldAllenCahn/CPU.
• An archive of the Python source code used for the PFLBM gravity and capillary wave test cases. These test cases are provided as Jupyter Notebooks as part of the code generation framework lbmpy [41], version https://pypi.org/project/lbmpy/1.0.1/. The notebooks are located in the directory lbmpy_tests/full_scenarios/phasefield_allen_cahn.
Effective workspace design: imperative in resolving problem of increasing fluidity of knowledge-based academic activities in universities
Academic workspace design is globally witnessing a paradigm shift in the contemporary time. This is a result of the increasing fluidity of academic work in recent times and the requests emanating from industry for the commercialisation of research findings. This has prompted global attempts to standardize workspace designs, particularly for knowledge-based work that contains many different activities. The benefits outlined in these new designs underline activity-based work environments with defined space utilisation and effectiveness. This paper argues that the effectiveness of workers in universities depends on the adequacy and effectiveness of the workspace provided. Against this background, it compares the National University Commission (NUC) workspace standard in Nigeria with international standards, with a view to improving the existing designs of Nigerian academic workspace. Data gathered from the literature were used as the international benchmark for comparison with the National University Commission (NUC) benchmark. These data were analysed using tables. Findings showed that the design of academic workspace in Nigerian universities is both environmentally and technologically below contemporary international requirements. This paper concludes that the situation of academic workspace design in Nigerian universities still leaves a gap for improvement to meet the international benchmark. It recommends that the Federal Government of Nigeria (FGN) should encourage the adoption of minimum standards for academic workspace facilities that compare favourably with international standards, for more effective international research collaboration and job performance in universities. It also recommends that managers, university authorities, academia, facilities managers, real estate managers, planners, designers, developers, and investors in academic facilities synergise to develop user-friendly workspaces that can flow with the fluidity of knowledge work in academia.
Introduction
Space management, especially in academia, is currently attracting attention in debates on contemporary issues [6] and will remain relevant for a long period of time. This is because the design of workspace is strongly impacted by changing technology and knowledge-work requirements. Fundamentally, the purpose of developing many academic facilities is to provide effective workspace to improve user performance and ultimately increase productivity. This often translates into institutional competency, efficiency, and an increased value rating among higher education institutions in the world. Teamwork and interaction are presently found to encourage and promote innovation and creativity in knowledge work arising from inter-disciplinary needs and the knowledge exchange currently common between industry and academia [23] [28] [15]. For this reason and many more, importance is given to collaboration and flexibility in workspace design. Similarly, an effective functional workspace undoubtedly requires the attributes of satisfaction, comfort, safety, security, wellbeing at work, and current technology. These attributes are captured by the physical environmental factors that impact effectiveness and functionality in academic building workspace [9]. As a measure of the efficiency of academic workspaces, the measurement of space utilisation becomes inevitable.
Statement of research problem
The expectation placed on academics for job effectiveness is enormous and increases every day as new innovations arise from new discoveries. New trends, methods, technologies, styles, ideas, applications, and enquiries are required to be known, addressed, and understood by academics to impart knowledge. This perhaps provides the reason underlining the training and re-training of academics for effectiveness at work. The effectiveness of academics is therefore appraised by the measure of performance achieved, and the latter by the effectiveness of the workspace allocated. Many academic activities are 'session-time' based and should be accomplished within a stipulated time frame. How effectively academic staff meet timelines therefore depends on the quality of the workplace environment provided for them to operate in. In other words, the effectiveness of workspace as an enabling environment for the execution of academic assignments also becomes inevitable in the consideration of staff effectiveness and performance.
Studies have shown that academic workspaces are changing in recent times due to the fluidity of academic activities and the diverse modes of carrying them out. The rate at which these changes occur depends on the rate at which technology in a particular area of discipline is changing. This has a very strong impact on learning environments with respect to space design for different combinations of activity modes, as well as ownership and control of space. This impact is well illustrated by Fisher's [13] learning environment matrix. The matrix explains the range and limit of control that can be exercised on spaces, between low and high self-directive spaces as against collaborative spaces, for different activities.
For the purpose of achieving universal collaborative space benchmarks in research and programmes, international benchmarks have variously been set up by frontline universities across the globe over time [29] [30] [31] [18] [32]. For the similar purpose of setting a minimum standard of academic work quality, the National University Commission [19] in Nigeria stipulated guidelines that every programme must meet for approval to be set up. Yet, allegations are rife that some universities are rated higher than others among universities in Nigeria. A forum tagged 'CDWRN' (Campaign for Democratic and Workers' Rights in Nigeria) agitated for the arrest of the decay in Nigerian education. It has also been reported in the media that Nigerians in academia overseas, whether as students, tutors, or researchers, perform better outside Nigerian universities as a result of the better facilities provided for teaching and research. The NUC disqualified eleven Nigerian universities in 2016 for being substandard [26].
This study therefore employed a literature survey of the physical environmental conditions of workspaces in Nigerian universities, compared the findings with those of international universities in other countries of the world, and came up with recommendations.
Review of literature
Workspace is associated with the workplace environment, and the latter is conceptually understood as the area where work is carried out. The workplace environment therefore implies the physical environmental conditions in the various facets of the social, psychological, and technical consideration of the total comfort of workers. This term is extensive and includes elements of organizational features, environmental conditions, ergonomic consideration of furniture, quality of work life, and well-being at work [10] [25] [12] [33] [36]. Due to the variability in the meaning of workspace over time, it cannot be constrained to the physical area (open or enclosed) where work is carried out. Workspace has been changing rapidly in meaning and interpretation as there are changes in office planning needs, economic pressures, and innovations within information and communication technologies [27] [35]. Workspace, within the context of this paper, refers to the physical space provided for work to be carried out, together with the associated facilities that make an effective enabling environment for work. This survey considers the physical environmental factors as very important to workers' effectiveness at work [8]. Some of the common factors considered include natural lighting quality, room temperature, location of space, circulation layout within the room space, noise level, floor surface finishes, interior beauty, ventilation, room humidity, air quality, odour, air freshness, and electric lighting comfort [2]. Other factors are cleanliness, overall comfort, physical security, work interaction, and crowding [16]; glare, auditory distraction, drafts, and furniture configuration [7]; and organisational workplace design, culture, and policy [33] [36]. The epistemology of office furniture indicates a strong relationship between workspace and effectiveness at work [8] [18]. Fatigue, stress, and physical and mental impairment that lead to slow reactions, failure to respond to stimuli, incorrect mental actions, inability to concentrate, forgetfulness, decreased vigilance, and an increased tendency to risk-taking are associated with inappropriate furniture in workplaces [11]. Altogether, this impacts the effectiveness of workspace in accommodating effective academic work.
According to Malcom and Zukas [17], academic work is conventionally divided into three elements: research, teaching, and administration. The authors found that discipline, research, pedagogy, and academic identity are inextricably fused together and are very significant to academic life. Furthermore, a temporal element is also found in academic experience and discipline. This is because specialisms in a discipline sometimes diverge and strengthen, or converge and weaken; hence the workspace required by a discipline will change over a period of time, as will the user of the space. A further reason why academic workspace allocation presents a peculiar challenge is the spatial aspect of space-time in relation to academic workload. Academic workload is significantly bounded by space and time, divided between the classroom, library, laboratory, site, seminar, conference, the office, and home. The technological impact on the 'going to work' and 'working' of academics demonstrates the ease of connecting departments, offices, and homes together around the globe by 'flows of representations through the disciplinary web' [17].
A summary of some of the studies carried out on the physical workplace environmental conditions of academic facilities in Nigerian universities indicates that many issues relating to staff comfort, ergonomics, well-being, and safety at work still need to be put in place for optimum effectiveness and productivity.
Okolie and Ogunoh's [22] study of academic building infrastructure performance in south-east Nigerian universities found that the buildings did not meet the specific physical environmental criteria used for the assessment and therefore failed to serve the institutions' goals effectively. According to Adedeji and Fadamiro [1], all areas of the workplace produce a significant effect on workers' performance. In their case study, the authors showed that there was no satisfactory result from the physical environmental factors assessed. This perhaps implies that the workspace is very ineffective. Similarly, the study carried out by Aderonmu et al. [3] involved four universities. None of the universities was found to meet the assessed level of effectiveness in all the environmental workspace effectiveness parameters. Some universities recorded good ratings in lighting but very poor ratings in ventilation, safety, or security, and vice versa. The study of Animashaun and Odeku [8] concludes that suitable environmental conditions are required for the effective working of employees but are not found in all Nigerian universities' workplace environments. Ajala [4] looked at the closed office plan, clean and decorative offices, lighting, noise in the office, room temperature, ventilation, and open office design, and recommended that authorities in organizations should strive to create a conducive workplace environment to be able to attract, keep, and motivate the workforce. The study carried out by Ajayi et al. [5] focused on the suitability of the academic staff work environment in universities within Southwest Nigeria. The study was carried out to test the claim that academic staff lack a conducive work environment. The factors of assessment used in the study include the availability of physical facilities, provision of information services, staff motivation, the relationship between authority and staff, involvement in decision making, and staff development. The authors recommended that the management of the universities should give more attention to staff work environments so that staff can improve their job performance. Amusa, Iyoro, and Olabisi's [7] study investigated the work environments and job performance of librarians working within six Federal universities and six State universities in Southwest Nigeria. The work environment indicators employed for the assessment include the availability of required physical facilities, open communication, motivation, participatory management, staff development, and personnel emolument. Factors used for the assessment of job performance indicators include professional practice, contribution to the development of the library, ability to work with co-workers, punctuality at work, ability to respond promptly to requests from clients, communication skills, and meeting the minimum requirements for promotion (i.e. research/publication). According to the authors, the work environments of librarians are fairly favourable, while personnel emolument was considered very unfavourable. The study concludes and recommends improvement of work environments to make them favourable so that the job performance of librarians would also be improved. Adedeji and Fadamiro's [1] study explored the post-occupancy evaluation (POE) of an academic building with a view to assessing users' satisfaction with the office spaces. The variables of assessment used for the study were grouped into three categories.
The building aspects consisted of 17 variables, which are vehicular access, pedestrian access, physically disabled access, exit routes, fire safety, security, exterior beauty, interior beauty, stairways location, and interior signage. Others are external appearance, parking, cleanliness, speed and efficiency of maintenance service, water quality, waste removal, and landscaping. The second category is the work environment, focusing on the layout and the furniture; 27 variables were used in the study. These include the distance between other areas, the distance between the worker and the immediate supervisor, workplace size and arrangement, space available for material storage, visual privacy at the workstation, telephone privacy at the workstation, height of partition at the workstation, furniture comfort, type of chair, chair possibility of adjustment, ease of adjustment, location of meeting rooms, and space for formal meetings/space for informal meetings. Others are space for file storage, personal storage area, location of the printing area, hallway characteristics and location, stairway characteristics and location, access and circulation for the physically disabled, distance between the worker and equipment making noise, speed and efficiency of technical maintenance, cleanliness of the floor, fire safety, security against theft, and distance between the worker and the work-mates. The third category is environmental comfort, which was assessed with nine variables consisting of temperature, humidity, air quality, ventilation, odour, natural lighting quality, air freshness, air movement, and electric lighting comfort. In summary, the study discovered that certain elements of the building design that are very crucial to the effective performance and productivity of users were sacrificed for aesthetics. A satisfactory level of indoor environmental comfort was not achieved. So also, the expected satisfactory level of indoor air quality seemed to have been sacrificed for the high-value landscaping achieved. The study therefore recommends that a suitable balance between the form, functions, and aesthetic performance of academic buildings be given prime consideration at the design stage. The work of Ogedengbe [21] further examined the comfort standard of Nigerian university libraries compared with internationally accepted standards. The study used the air temperature, relative humidity, sound levels, and lighting intensities of different locations as variables of measurement, and carried out anthropometric measurements of five hundred and twenty-six library users to determine the ergonomic support parameters of the furniture provided in the library. Findings from the study showed that many of the factors considered fall below the internationally set standards. The authors therefore recommend the conduct of ergonomic studies and their application to the design of structures and facilities so that users are not affected by musculoskeletal disorders in the use of libraries.
O'Neill and Wymer [23] illustrated the fluidity and dynamism currently experienced in the breadth and location of both the contemporary and future nature of work. It can be observed that there is a diversity of space solutions to support the flow of work both within and between locations. Academic activity is caught succinctly within this web and is a very lucid example of rapidly evolving modes of work. Gensler [14] presented a model of work modes consisting of collaboration, learning, focusing, and socialising. The model (Figure 1, adapted from Gensler [14]) exhibits the interrelationship of academic activity modes, which are not created equal. Consequent upon this fact, Gensler attempted to derive a Workplace Performance Index (WPI) for various work patterns and work environments. Gensler [14] discovered that it was not easy to assign a WPI because the workplace, in its past and current designs of open offices, cubicles, cellular or closed private offices, linear workspaces, and bench workplaces, is not adept at supporting the fragile balance existing between the various modes (Figure 1). Therefore, the desire for the effective allocation of space for knowledge-work activities in contemporary academic building design is to achieve a place that balances spaces for knowledge workers to engage in extended periods of uninterrupted focus work with the ability to seamlessly engage in informal, formal, and virtual collaboration. NUC [20] provides the space requirements (Table 1), among others, to run programmes in Estate Management in Nigeria. A footnote to one of the international standards [29] notes that emeritus faculty office space is determined on its merit between the department head, campus planning and design, and the administrative space committee. For comparison, the University of New South Wales standards [31] allocate open-plan workspaces of about 6 m² per person to research associates, Level A academics, visiting and emeritus academics (dedicated open plan), professional and technical staff at Levels 1-9, and research assistants; about 10-12 m² (typically open-plan, or an office where a demonstrated need exists) to professional and technical staff at Level 10 and above; and about 3 m² per person to postgraduate research students. Assumptions: (i) Circulation space is excluded.
(ii) Allocations are based on full time equivalent positions or students.
(iii) Ancillary, support or storage: defined case by case, should be always minimal, centralized and shared.
Source: Adapted from University of New South Wales [31]. (i) Laboratory space is calculated by adding the allocation for each person using the space; e.g. an academic with four PhD students would be allocated 32 m² of laboratory space (16 m² + 4 × 4 m²), as illustrated in the sketch after these notes. This is in addition to the office and open-plan allocations.
(ii) The standards are for high-level space planning. They apply only to staff or students who actively need research laboratory space.
Source: Adapted from University of New South Wales [31]
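A trivial sketch of the laboratory-space rule in note (i) is given below; the per-person figures are those quoted in the worked example (16 m² per academic and 4 m² per PhD student), and the function name is illustrative.

```python
# Laboratory space per note (i): sum the allocation of every person actively using the space.
# Office and open-plan allocations come on top of this total.
def laboratory_space(n_academics, n_phd_students, per_academic=16.0, per_phd=4.0):
    return n_academics * per_academic + n_phd_students * per_phd

print(laboratory_space(1, 4))   # 32.0 m^2, matching the worked example in the note
```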
Analysis of findings from the various field surveys and discussion
From the results of the indigenous studies carried out on the workplace environment of higher institutions in Nigeria, it is evident that no single study reported comprehensive and excellent environmental and technological adequacy across the factors and variables of quality assessment. The reports have always rated some parameters fairly well and others poorly. Even the international benchmarks for academic workspaces vary from one country to another (Table 1 to Table 6). This is to say that there is yet to be a universally accepted benchmark for academic workspaces for reference.
Conclusion and recommendations
The impact of technology on knowledge-based work has created a distinct diversity of academic activities, operationally grouped into basically four main modes (Figure 1). Academic work is caught up within this knowledge-based work and therefore falls within collaborative, focus, learning, and social work activities. These academic activities are evolving daily and, with new innovations in technology, are naturally becoming more fluid. This study therefore recommends that the concept of integrated work be applied to the design of academic workplaces to make them dynamic in meeting the needs of the different work modes. This is in tandem with the submission of O'Neill and Wymer [23] that successful organisations should create a diversity of space solutions that support the flow of work within and between locations.
Entropy production in high-energy heavy-ion collisions and the correlation of shear viscosity and thermalization time
We study entropy production in the early stage of high-energy heavy-ion collisions due to shear viscosity. We employ the second-order theory of Israel-Stewart with two different stress relaxation times, as appropriate for strong coupling or for a Boltzmann gas, respectively, and compare the hydrodynamic evolution. Based on present knowledge of initial particle production, we argue that entropy production is tightly constrained. We derive new limits on the shear viscosity to entropy density ratio $\eta/s$, independent from elliptic flow effects, and determine the corresponding Reynolds number. Furthermore, we show that, for a given entropy production bound, the initial time $\tau_0$ for hydrodynamics is correlated with the viscosity. The conjectured lower bound for $\eta/s$ provides a lower limit for $\tau_0$.
Experiments with colliding beams of gold ions at the Relativistic Heavy-Ion Collider (RHIC) have confirmed that dense QCD matter exhibits hydrodynamic flow effects [1]. Their magnitude matches approximately predictions based on ideal Euler (inviscid) hydrodynamics 1 [2]. More precisely, the transverse momentum and centrality dependence of the azimuthally asymmetric flow, v 2 , requires a shear viscosity to entropy density ratio as low as η/s ≤ 0.2 [4,5,6,7]; this is much lower than perturbative extrapolations to temperatures T ≃ 200 MeV [8]. However, it is comparable to recent results for SU(3) pure gauge theory from the lattice [9], and to the conjectured lower bound for strongly coupled systems η/s ≥ 1/(4π) [10]. Similar constraints on η/s have been derived from transverse momentum correlations [11] and from energy loss and flow of heavy quarks at RHIC [12].
The purpose of this paper is to obtain an independent upper bound on η/s by analyzing entropy production in the early stages of the hydrodynamic evolution (the plasma phase), where the expansion rate and hence the entropy production rate is largest. Entropy production in heavy-ion collisions due to viscous effects has been studied before [13,14]. The new idea pursued here is that recent progress in our understanding of gluon production in the initial state constrains the amount of additional entropy produced via "final-state" interactions, and hence the viscosity and the thermalization time. [Footnote 1: Numerical solutions of Euler hydrodynamics on finite grids always involve some amount of numerical viscosity for stability. Reliable algorithms such as flux-corrected transport keep this numerical viscosity and the associated entropy production at a minimum [3].]
The second-order formalism for viscous hydrodynamics of Israel and Stewart [15], and its application to one-dimensional boost-invariant Bjorken expansion [16], are briefly reviewed in section II. The initial condition for hydrodynamics, in particular the initial parton or entropy density in the central rapidity region, plays a crucial role. If it is close to the measured final-state multiplicity, this provides a stringent bound on viscous effects. The initial parton multiplicity in heavy-ion collisions can, of course, not be measured directly. Our analysis therefore necessarily relies on a calculation of the initial conditions (presented in section III). Specifically, we employ here a k_⊥-factorized form of the "Color Glass Condensate" (CGC) approach which includes perturbative gluon saturation at small light-cone momentum fractions x [17]. However, different approaches to initial particle production, such as the HIJING model, which relies on collinear factorization supplemented with an additional model for the soft regime, also predict multiplicities close to experiment [18]. The same is true when the heavy-ion collision is modeled as a collision of two classical Yang-Mills fields [19]. It is important to test these models for small systems, such as peripheral A + A (or even p + p) collisions, in order to constrain the entropy increase via final-state effects (thermalization and viscosity).
Section IV contains our main results. We show how the entropy production bound correlates η/s to the initial time for hydrodynamic evolution, τ 0 . The entropy production rate grows with the expansion rate (i.e., how rapidly flow lines diverge from each other), and the total amount of produced entropy is therefore rather sensitive to the early stages of the expansion. The bound on the viscosity depends also on the initial condition for the stress, which in the second-order theory is an independent variable and is not fixed by the viscosity and the shear (unless the stress relaxation time is extremely short, as predicted recently from the AdS/CFT correspondence at strong coupling [20]).
In a recent paper, Lublinsky and Shuryak point out that if the initial time τ_0 is assumed to be very small, a resummation of the viscous corrections to all orders in gradients of the velocity field is required [21]. Here, we explore only the regime where τ_0 is several times larger than the sound attenuation length Γ_s, and so the standard approach to viscous hydrodynamics should apply. Quantitatively, we find that an entropy bound of ≃ 10% restricts η/s to be at most a few times the lower bound (η/s = 1/(4π)) conjectured from the AdS/CFT correspondence [10]. On the other hand, somewhat surprisingly, we find that even η/s = 1/(4π) is large enough to give noticeable entropy production for thermalization times τ_0 well below 1 fm/c (but still larger than Γ_s). Present constraints from initial and final multiplicities are not easily reconciled with such extremely short initial times [22].
We restrict ourselves here to 1+1D Bjorken expansion. Given its (numerical) simplicity and the fact that the entropy production rate is largest at early times, this should provide a reasonable starting point. Estimates for the initial time τ 0 , for the parton density and the stress at τ 0 , and for the viscosity to entropy density ratio η/s are in fact most welcome for large-scale numerical studies of relativistic (second-order) dissipative fluid dynamics. Without guidance on the initial conditions, the hydrodynamic theory can at best provide qualitative results for heavy-ion collisions.
We neglect any other possible source of entropy but shear viscosity at early times 2 . Even within this simplified setting, there could be additional entropy production due to a viscous "hadronic corona" surrounding the fireball [23], which we do not account for. Clearly, any additional contribution would further tighten the (upper) bound on η/s and the (lower) bound on τ 0 . We also assume that η/s is constant. This does not hold over a very broad range of temperature [8] but should be a reasonable first approximation for T ≃ 200-400 MeV.
We employ natural units throughout the paper: ℏ = c = k_B = 1.
A. Second-order formalism
In this section we briefly review some general expressions for viscous hydrodynamics which will be useful in the following. More extensive discussions are given in refs. [14,24,25,26,27,28,29,30], for example.
A single-component fluid is generally characterized by a conserved current (possibly more), N^µ, the energy-momentum tensor T^µν, and the entropy current S^µ. The conserved quantities satisfy the continuity equations ∂_µ N^µ = 0 and ∂_µ T^µν = 0 (1). In addition, the divergence of the entropy current has to be positive by the second law of thermodynamics, ∂_µ S^µ ≥ 0 (2). For a perfect fluid, a well-defined initial-value problem requires the knowledge of T^µν and of N^µ on a space-like surface in 3+1D Minkowski space-time. This is equivalent to specifying the initial flow field u^µ, the proper charge density n ≡ u_µ N^µ, and the proper energy density e ≡ u_µ u_ν T^µν; the pressure is determined via an algebraic relation to e and n, the equation of state (EoS).
In dissipative fluids, irreversible viscous and heat conduction processes occur. These quantities can be expressed explicitly if the charge and entropy currents and the energy-momentum tensor are decomposed (projected) into their components parallel and perpendicular to the flow of matter [31]; the latter describe the dissipative currents. The transverse projector is given by ∆ µν = g µν − u µ u ν , with g µν = diag(1, −1, −1, −1) the metric of flat space-time. In the following, we focus on locally charge-neutral systems where all conserved currents vanish identically.
The energy-momentum tensor can be decomposed in the following way: T^µν = e u^µ u^ν − (p + Π) ∆^µν + W^µ u^ν + W^ν u^µ + π^µν (3). Here, W^µ = q^µ + h V^µ = u_ν T^να ∆^µ_α is the energy flow, with h = (e + p)/n the enthalpy per particle, and q^µ is the heat flow; we shall define the local restframe via W^µ = 0 (the "Landau frame"). Furthermore, Π denotes the bulk pressure such that p + Π = −(1/3) ∆_µν T^µν, while the symmetric, traceless, and transverse part of the energy-momentum tensor defines the stress tensor π^µν (4). The entropy current is decomposed as S^µ = s u^µ + Φ^µ (5). In the standard first-order theory due to Eckart [31] and Landau and Lifshitz [32], only linear corrections are taken into account, i.e., Φ^µ = q^µ/T. On the other hand, the second-order theory of relativistic dissipative fluid dynamics includes terms up to second order in the irreversible flows and in the stress tensor in Φ^µ [15], where the coefficients β_0, β_1, β_2 and α_0, α_1 represent thermodynamic integrals which (near equilibrium) are related to the relaxation times of the dissipative corrections. Furthermore, from (2), one can find linear relationships between the thermodynamic forces and fluxes, leading to the transport equations describing the evolution of the dissipative flows [15].
In what follows, we will focus on shear effects and neglect heat flow and bulk viscosity; hence, (3) simplifies to T^µν = e u^µ u^ν − p ∆^µν + π^µν. The stress tensor satisfies a relaxation equation (7), where η denotes the shear viscosity and the shear tensor σ^µν is a purely "geometrical" quantity determined by the flow field via (8), with ∇^µ = ∆^µν ∂_ν. The relaxation time τ_π determines how rapidly the stress tensor π^µν relaxes to the shear tensor σ^µν; in particular, in the limit τ_π → 0, π^µν and σ^µν satisfy the same algebraic relation as in the first-order theory, π^µν = 2η σ^µν (9). The limit τ_π → 0 is formal, however, since the deviation of the stress π^µν from 2η σ^µν at any given time, as obtained by solving eq. (7), depends also on its initial value. If (9) is approximately valid at the initial time, then the first-order theory may provide a reasonable approximation for the entire evolution (see below). By analyzing the correlation functions of the stress that lead to the definitions (7) and (9), respectively, of the shear viscosity, Koide argues that in the second-order theory of Israel and Stewart η may represent a different quantity than in the first-order approach [33]. Nevertheless, here we assume that the conjectured lower bound for η/s applies even to the causal (second-order) approach.
B. Dissipative Bjorken scaling fluid dynamics
In this section we recall the 1+1D Bjorken scaling solution [16] in 3+1D space-time, including stress [14]. By assumption, the fluid in the central region of a heavy-ion collision expands along the longitudinal z-direction only, with a flow velocity v equal to z/t. This is appropriate for times less than the transverse diameter R of the collision zone divided by the speed of sound c_s = √(∂p/∂e) (possibly longer for very viscous fluids). After that, transverse expansion is fully developed, and we expect that entropy production due to shear decreases. In fact, it is straightforward to check that for three-dimensional scaling flow u_µ ∂_ν σ^µν = 0; hence, within the first-order theory at least, the shear viscosity does not enter the evolution equation of the energy density anymore.
Formulations of the Israel-Stewart second-order theory for Bjorken plus transverse expansion have been published [24,25,26,27,28,29] but require large-scale numerical computations. A relatively straightforward 1+1D analysis is warranted as a first step to provide an estimate for entropy production.
It is convenient to transform from (t, z) to new (τ, η̃) coordinates, where τ = √(t² − z²) denotes the proper time and η̃ = (1/2) ln((t + z)/(t − z)) is the space-time rapidity; for the Bjorken model, it is equal to the rapidity of the flow, η_fl ≡ (1/2) ln((1 + v)/(1 − v)). In other words, the four-velocity of the fluid is u^µ = (cosh η̃, 0, 0, sinh η̃) = (t/τ, 0, 0, z/τ). The longitudinal projection of the continuity equation for the stress-energy tensor then yields the evolution equation (10) for the energy density, de/dτ = −[e + p − (Φ − Π)]/τ. Here, e is the energy density of the fluid in the local restframe, while p denotes the pressure. These quantities are related through the equation of state (EoS). We focus here on entropy production during the early stages of the evolution, where the temperature is larger than the QCD cross-over temperature T_c ≃ 170 MeV, and so assume a simple ideal-gas EoS, p = e/3.
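As a small illustration of these coordinates, the sketch below converts (t, z) to (τ, η̃) and evaluates the boost-invariant four-velocity, assuming the standard Bjorken form quoted above; the function names are illustrative.

```python
import math

# Milne coordinates: tau = sqrt(t^2 - z^2), eta = 0.5*ln((t+z)/(t-z)); valid for |z| < t.
def milne_coordinates(t, z):
    tau = math.sqrt(t * t - z * z)
    eta = 0.5 * math.log((t + z) / (t - z))
    return tau, eta

# Bjorken flow: v = z/t, so the flow rapidity equals the space-time rapidity and
# u^mu = (cosh(eta), 0, 0, sinh(eta)) = (t/tau, 0, 0, z/tau).
def bjorken_four_velocity(t, z):
    _, eta = milne_coordinates(t, z)
    return (math.cosh(eta), 0.0, 0.0, math.sinh(eta))
```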
In what follows, we will neglect the bulk pressure Π, which would otherwise tend to increase entropy production further. Well above T_c, this contribution is expected to be much smaller than that due to shear [34]. In the transition region, the bulk viscosity could be significant [35]. Also, for the 1+1D expansion considered here, the stress Φ ≡ π^00 − π^zz acts in the same way as the bulk pressure Π: only the combination Φ − Π appears in (10).
One can also define a Reynolds number via the ratio of non-dissipative to dissipative quantities [36], R = (e + p)/Φ. Neglecting the bulk pressure, Eq. (10) can then be written as de/dτ = −(e + p)(1 − 1/R)/τ (14). For stability, the effective enthalpy (e + p)(1 − 1/R) should be positive, i.e. R > 1. The energy density then decreases monotonically with time.
The equations of second-order dissipative fluid dynamics, (10) or (14) and (11), together with (12) or (13) and β_2 = τ_π/(2η), form a closed set of equations for a fluid with vanishing currents, if augmented by an EoS. Furthermore, the initial energy density e_0 ≡ e(τ_0) and the initial shear Φ_0 ≡ Φ(τ_0) have to be given. In the second-order theory, one has to specify the initial condition for the viscous stress Φ_0 independently from the initial energy or particle density. We are presently unable to compute Φ_0. Below, we shall therefore present results for various values of Φ_0.
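To make the structure of this closed system concrete, the sketch below integrates a simplified version of it. It does not reproduce the paper's equations (10)-(11) exactly: the stress is assumed to relax towards its first-order value 4η/(3τ) on a time scale τ_π, the Boltzmann-like estimate τ_π = r·6η/(sT) is used, and a pure-gluon ideal-gas coefficient e = aT⁴ with a ≈ 5.26 is assumed; all names are illustrative.

```python
import numpy as np

def evolve_bjorken(e0, phi0, tau0, tau_max,
                   eta_over_s=1.0 / (4.0 * np.pi), r=1.0, a=5.26, dtau=1e-3):
    """Simplified 1+1D Bjorken evolution with shear stress (sketch only).

    All quantities are in powers of GeV (convert fm to 1/GeV via 1 fm ~ 5.07/GeV).
    """
    e, phi, tau = e0, phi0, tau0
    while tau < tau_max:
        p = e / 3.0
        T = (e / a) ** 0.25
        s = (e + p) / T                     # ideal-gas entropy density
        eta = eta_over_s * s
        tau_pi = r * 6.0 * eta / (s * T)    # assumed Boltzmann-like relaxation time, scaled by r
        dedtau = -(e + p - phi) / tau       # eq. (10) with the bulk pressure neglected
        dphidtau = -(phi - 4.0 * eta / (3.0 * tau)) / tau_pi   # simplified relaxation ansatz
        e += dedtau * dtau
        phi += dphidtau * dtau
        tau += dtau
    s_final = (e + e / 3.0) / (e / a) ** 0.25
    # tau*s approximates the entropy per unit rapidity and transverse area,
    # up to the small (Phi/e)^2 correction contained in eq. (20).
    return e, phi, tau * s_final

# Example: T0 ~ 0.35 GeV at tau0 = 1 fm/c ~ 5.07/GeV, evolved to ~6 fm/c.
e0 = 5.26 * 0.35**4
print(evolve_bjorken(e0, phi0=0.0, tau0=5.07, tau_max=30.4))
```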
Alternatively, a physically motivated initial value Φ_0* for the stress can be obtained from the condition that dR/dτ = 0 at τ = τ_0. This is the "tipping point" between a system that is already approaching perfect fluidity and one that is unable to compete with the expansion and is in fact departing from it. For an EoS with constant speed of sound, say p = e/3, the condition that Ṙ = 0 is equivalent to ė/e = Φ̇/Φ. Eqs. (10) and (11) then yield the expressions (16) and (17) for Φ_0*. The second line, (17), applies in the limit of short relaxation time; since τ_π is proportional to r, this is always satisfied in the limit r → 0. For typical initial conditions relevant for heavy-ion collisions, it is a reasonable approximation even in the Boltzmann limit (r = 1). In (17) we have indicated that (16) is in fact nothing but the stress in the first-order theory (divided by the initial energy density).
While it is clear that Φ relaxes to Φ_1st−O over time scales on the order of τ_π, eq. (17) is actually a statement about the initial value of Φ: in a fluid with reasonably short relaxation time and stationary initial Reynolds number, Ṙ(τ_0) = 0, even the initial value of the stress is given by the first-order approach. The condition R(τ_0) > 1 for the applicability of hydrodynamics, together with eq. (16), then provides the lower bound τ_0 > Γ_s(τ_0) ∼ (η/s)/T_0 (18) on the initial time, where Γ_s denotes the sound attenuation length; within the first-order approach, R = τ/Γ_s. The factor of η/s on the right-hand side illustrates the extended range of applicability of hydrodynamics as compared to a Boltzmann equation: a classical Boltzmann description requires that the thermal de-Broglie wave length, ∼ 1/T, is smaller than the (longitudinal) size of the system, τ. For very small viscosity, though, hydrodynamics is applicable (since R ≫ 1) even when Γ_s ≪ τ ≲ 1/T. In this point we differ somewhat from Lublinsky and Shuryak [21], who argue that the theory needs to be resummed to all orders in the gradients of the velocity field already when τ ∼ 1/T. From our argument above, this should be necessary only when τ_0 ∼ Γ_s(τ_0), which is much smaller than 1/T_0 if η/s ≪ 1. The purpose of this paper is to motivate, however, that a much stronger constraint than τ_0 > Γ_s(τ_0) may anyhow result from a bound on entropy production (which follows from the centrality dependence of the multiplicity), cf. section IV.
In the Bjorken model, the entropy per unit rapidity at time τ is given by dS/dy = A_⊥ τ s̄ (19), where A_⊥ is the transverse area while s̄ ≡ S^µ u_µ denotes the longitudinal projection of the entropy current. Neglecting heat flow (q^µ = 0) and bulk pressure (Π = 0), one obtains eq. (20) from (5); s̄ can then be determined, for any τ ≥ τ_0, from the solution of eqs. (10, 11). Note that the second term in (20) is of order (Φ/e)². For nearly perfect fluids with η/s ≪ 1 and Ṙ(τ_0) = 0 it is rather small.
III. THE CGC INITIAL CONDITION
Before we can present solutions of the hydrodynamic equations, we need to determine suitable initial conditions. To date, the most successful description of the centrality dependence of the multiplicity is provided by the Kharzeev-Levin-Nardi (KLN) k_⊥-factorization approach [17]. The KLN ansatz for the unintegrated gluon distribution functions (uGDF) of the colliding nuclei incorporates perturbative gluon saturation at high energies and determines the p_⊥-integrated multiplicity from weak-coupling QCD without additional models for soft particle production.
Specifically, the number of gluons released from the wavefunctions of the colliding nuclei is given by the k_⊥-factorization formula (21), where N_c = 3 is the number of colors, and p_⊥, y are the transverse momentum and the rapidity of the produced gluons, respectively. x_{1,2} = p_⊥ exp(±y)/√s_NN denote the light-cone momentum fractions of the colliding gluon ladders, and √s_NN = 200 GeV is the collision energy. The normalization factor N can be fixed from peripheral collisions, where final-state interactions should be suppressed. (Ideally, the normalization could be fixed from p + p collisions; however, this is possible only at sufficiently high energies, when the proton saturation scale is at least a few times Λ_QCD.) N also absorbs NLO corrections; when we compare to measured multiplicities of charged hadrons, it includes as well a factor for the average charged hadron multiplicity per gluon, and a Jacobian for the conversion from rapidity to pseudo-rapidity.
The uGDFs are written in the saturation form of eq. (22) [17,37,38]; P(r_⊥) denotes the probability of finding at least one nucleon at r_⊥ [37,38]. This factor arises because configurations without a nucleon at r_⊥ do not contribute to particle production. Note that the perturbative ∼ 1/k²_⊥ growth of the gluon density towards small transverse momentum saturates at k_⊥ = Q_s. Therefore, the p_⊥-integrated gluon multiplicity obtained from (21) is finite.
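To make the structure of the p_⊥-integrated multiplicity concrete, here is a deliberately schematic sketch: the uGDF is replaced by a toy form that is flat below Q_s and falls as Q_s²/k² above it, the k_⊥ convolution of eq. (21) is collapsed to a collinear product (so, unlike the full formula, an explicit infrared cutoff pt_min is needed), and the normalization N, the running coupling and the r_⊥ dependence are all omitted. It only illustrates how saturation tames the small-p_⊥ region; it does not reproduce eqs. (21)–(23).

# Sketch: schematic k_T-factorization estimate of dN/dy at a fixed transverse point.
# phi() is a toy saturated uGDF, NOT the KLN parameterization; the collinear
# shortcut below requires an infrared cutoff pt_min, whereas the full k_T
# convolution of eq. (21) is infrared finite by itself.
import math

def phi(k2, Qs2):
    return 1.0 if k2 <= Qs2 else Qs2 / k2

def dNdy(Qs2_A, Qs2_B, sqrts=200.0, y=0.0, pt_min=0.2, pt_max=10.0, n=500):
    alpha_s, Nc = 0.3, 3.0
    pref = 4.0 * Nc / (Nc * Nc - 1.0)
    total, dpt = 0.0, (pt_max - pt_min) / n
    for i in range(n):
        pt = pt_min + (i + 0.5) * dpt
        x1 = pt * math.exp(+y) / sqrts            # light-cone momentum fractions
        x2 = pt * math.exp(-y) / sqrts
        if x1 > 1.0 or x2 > 1.0:
            continue
        total += (2.0 * math.pi * pt * dpt / pt**2) * alpha_s \
                 * phi(pt * pt, Qs2_A) * phi(pt * pt, Qs2_B)
    return pref * total

print(dNdy(Qs2_A=2.0, Qs2_B=2.0))   # arbitrary units: the overall norm N is omitted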
We should emphasize that the ansatz (22) is too simple for an accurate description of high-p_⊥ particle production. For example, it does not incorporate the so-called "extended geometric scaling" regime above Q_s, which plays an important role in our understanding of the evolution of high-p_⊥ spectra from mid- to forward rapidity in d+Au collisions [39]. However, high-p_⊥ particles contribute little to the total multiplicity, and more sophisticated models for the uGDF do not change the centrality dependence of dN/dy significantly [37]. Q_s(x, r_⊥) denotes the saturation momentum at a given momentum fraction x and transverse coordinate r_⊥. It is parameterized as in eq. (23) [37,38]. The ∼ 1/x^λ growth at small x is expected from BFKL evolution and has been verified both in deep inelastic scattering at HERA [40] and in high-p_⊥ particle production from d+Au collisions at RHIC [39]; the growth speed is approximately λ ≃ 0.28. Note that the saturation momentum, as defined in (23), is "universal" in that it doesn't depend on the thickness of the collision partner at r_⊥ [41]. The centrality dependence of Q_s is determined by the thickness function T(r_⊥), which is simply the density distribution of a nucleus, integrated over the longitudinal coordinate z. Note that the standard Woods-Saxon density distribution is averaged over all nucleon configurations, including those without any nucleon at r_⊥. For this reason, a factor of 1/P(r_⊥) arises in Q²_s [37,38]. It prevents Q_s from dropping to arbitrarily small values at the surface of a nucleus (since at least one nucleon must be present at r_⊥ or else no gluon is produced at that point). The fact that Q_s is bounded from below prevents infrared sensitive contributions from the surface of the nucleus and also makes the uGDF (22) less dependent on "freezing" of the one-loop running coupling. [Fig. 1 caption: Centrality dependence of the charged particle multiplicity at midrapidity from the k_⊥-factorization approach with perturbative gluon saturation at small x, for Cu+Cu and Au+Au collisions at full RHIC energy, √s_NN = 200 GeV. PHOBOS data from ref. [42]; the errors are systematic, not statistical.] Fig. 1 shows the centrality dependence of the multiplicity, as obtained from eq. (21) via an integration over the transverse plane. It was noted in [37] that the multiplicity in the most central collisions is significantly closer to the data than the original KLN prediction [17] if the integration over r_⊥ is performed explicitly, rather than employing a mean-field-like approximation. An even better description of the data can be obtained when event-by-event fluctuations of the positions of the nucleons are taken into account [38]; they lead to a slightly steeper centrality dependence of the multiplicity per participant for very peripheral collisions or small nuclei. However, we focus here on central Au+Au collisions and hence we neglect this effect.
It is clear from the figure that the above CGC k_⊥-factorization approach does not leave a lot of room for an additional centrality-dependent contribution to the particle multiplicity. (Centrality-independent gluon multiplication processes have been absorbed into N.) In fact, within the bottom-up thermalization scenario one does expect, parametrically, that gluon splittings increase the multiplicity by a factor ∼ 1/α^{2/5} [43] before the system thermalizes at τ_0 and the hydrodynamic evolution begins. If the scale for running of the coupling is set by Q_s, this would lead to an increase of the multiplicity for the most central Au+Au collisions by roughly 20%. However, such a contribution does not seem to be visible in the RHIC data, perhaps because the bottom-up scenario, which considered asymptotic energies, does not apply quantitatively at RHIC energy. It is also conceivable that the model (22,23) somewhat overpredicts the growth of the particle multiplicity per participant with centrality.
It is noteworthy that from the most peripheral Cu+Cu to the most central Au+Au bin, (dN/dη)/N_part grows by only ≃ 50% while N^{1/3}_part increases by a factor of 2.6. Clearly, any particle production model that includes a substantial contribution from perturbative QCD processes will cover most of the growth. This implies that rather little entropy production appears to occur after the initial radiation field decoheres. If so, this allows us to correlate the thermalization time τ_0 and the viscosity to entropy density ratio η/s. We shall assume that about 10% entropy production may be allowed for central Au+Au collisions.
The density of gluons at τ_s = 1/Q_s is given by dN/d²r_⊥dy from eq. (21), divided by τ_s. For a central collision of Au nuclei at full RHIC energy, the average Q_s ≃ 1.4 GeV at midrapidity; hence τ_s ≃ 0.14 fm/c. The parton density at this time is approximately ≃ 40 fm⁻³. If their number is effectively conserved until thermalization at τ_0, then n(τ_0) ≃ n(τ_s) τ_s/τ_0. The initial energy density e(τ_0) can now be obtained from this density via standard thermodynamic relations; we assume that the energy density corresponds to 16 gluons and 3 massless quark flavors in chemical equilibrium.
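One possible reading of this conversion is sketched below: the liberated gluon number density fixes T through the equilibrium gluon density, after which quarks are assumed chemically equilibrated at that T. The exact matching procedure is not spelled out in the text above, so the resulting numbers are illustrative only.

# Sketch: ideal-gas conversion of the parton density into T and e.
# Assumes the liberated gluon number matches the equilibrium gluon density and
# that quarks (3 massless flavors) are chemically equilibrated at the same T.
import math

HBARC, ZETA3 = 0.1973, 1.2020569        # GeV fm
G_GLUE = 16.0                            # gluon dof
G_EFF  = 16.0 + 7.0 / 8.0 * 36.0         # gluons + quarks/antiquarks (2x3x3x2)

def T_from_gluon_density(n):             # n in fm^-3 -> T in GeV
    coeff = (ZETA3 / math.pi**2) * G_GLUE / HBARC**3
    return (n / coeff) ** (1.0 / 3.0)

def energy_density(T):                   # GeV/fm^3
    return (math.pi**2 / 30.0) * G_EFF * T**4 / HBARC**3

n_s, tau_s = 40.0, 0.14                  # parton density (fm^-3) at tau_s
for tau0 in (0.3, 0.6, 1.0):             # fm/c
    n0 = n_s * tau_s / tau0              # n*tau conserved during free streaming
    T0 = T_from_gluon_density(n0)
    print(tau0, round(T0, 3), round(energy_density(T0), 1))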
A. Evolution of the entropy and of the Reynolds number
We begin by illustrating entropy production due to dissipative effects. Given an initial time τ_0 for the hydrodynamic evolution, we determine ∆S = dS_fin/dy − dS_ini/dy for τ > τ_0. This quantity increases rather rapidly at first, since the expansion rate H ≡ ∂_µu^µ = 1/τ is largest at small τ. We choose τ_fin = 5 fm/c to be on the order of the radius of the collision zone and fix the final value of ∆S/S_ini to equal 10%. Having fixed the initial and final entropy as well as the initial time then determines η/s. The result of the calculation is shown in Fig. 3. As expected, if the hydrodynamic expansion starts later (larger τ_0) then less entropy is produced for a given value of η/s; conversely, for a fixed entropy increase, larger values of η/s are possible. This is due to two reasons: both the total time interval for one-dimensional hydrodynamic expansion and the entropy production rate decrease.
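Operationally, fixing ∆S/S_ini amounts to a one-dimensional root find in η/s. A minimal bisection sketch, reusing the evolve() helper from the earlier sketch (and therefore inheriting its simplifying assumptions):

# Sketch: find the eta/s that gives Delta S / S_ini = 10% between tau0 and 5 fm/c,
# reusing the (hypothetical) evolve() helper defined in the earlier sketch.
def max_eta_over_s(e0, tau0, target=0.10, tau_fin=5.0):
    lo, hi = 0.0, 1.0
    for _ in range(40):                             # bisection on the entropy increase
        mid = 0.5 * (lo + hi)
        ratio = evolve(e0, tau0, tau_fin, mid)[2]   # S_fin / S_ini
        if ratio - 1.0 > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(max_eta_over_s(e0=15.0, tau0=1.0))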
In fact, the figure shows that for very small initial time the ∆S/S_ini = 10% bound cannot be satisfied with η/s ≥ 1/(4π) ≃ 0.08. In Fig. 4 we show the behavior of the inverse Reynolds number for different initial values of the stress. Again, for each curve η/s is fixed such that ∆S/S_ini = 10% at τ_fin = 5 fm/c. As already indicated above, if Φ_0 < Φ*_0 defined in eq. (15), the fluid cannot compete with the expansion and departs from equilibrium. On the other hand, if Φ_0 > Φ*_0, there is already a rapid approach towards the perfect-fluid limit at τ_0. In either case, the interpretation of τ_0 as the earliest possible starting time for hydrodynamic evolution does not appear sensible. The initial condition corresponding to Ṙ(τ_0) = 0 in turn corresponds to the situation where the fluid has just reached the ability to approach equilibrium. It is clear from the figure that the evolution is close to that predicted by the first-order theory. Fig. 5 shows the Reynolds number for our ansatz (13) for the relaxation time at strong coupling, which essentially follows the behavior given by the first-order theory: after a time ∼ τ_π has elapsed, the behavior of R is nearly independent of the initial value of Φ. The initial condition Φ_0 = Φ*_0 again leads to the most natural behavior of R without a very rapid initial evolution.
B. η/s versus τ0
From the previous results it is evident that fixing the amount of produced entropy, ∆S/S_ini, correlates η/s with τ_0. In this section we show the upper limit of η/s as a function of τ_0. We begin with the Boltzmann gas with fixed Φ_0 (independent of τ_0) in Fig. 6. One observes that the maximal viscosity depends rather strongly on the initial value of the stress. For any given Φ_0, (η/s)_max first grows approximately linearly with τ_0. For large initial time, however, the expansion and entropy production rates drop so much that the bound on viscosity eventually disappears. Furthermore, it is interesting to observe that the conjectured lower bound η/s = 1/(4π) excludes too rapid thermalization: even if the fluid is initially perfectly equilibrated (Φ_0 = 0), a thermalization time well below ∼ 1 fm/c is possible only if either η/s < 1/(4π) or ∆S/S_ini > 10%. With 10% corrections to perfect fluidity at τ_0, shown by the long-dashed line in Fig. 6, the minimal τ_0 compatible with both η/s ≥ 1/(4π) and ∆S/S_ini = 10% is about 1.2 fm/c. If η/s ≃ 0.1 − 0.2, as deduced from the centrality dependence of elliptic flow at RHIC [6], then τ_0 ≃ 1.5 fm/c. In Fig. 7 we perform a similar analysis for our ansatz (13) for the strong-coupling case. Due to the much smaller relaxation time of the viscous stress, we observe that the viscosity bound is now rather insensitive to the magnitude of the initial correction to equilibrium. We obtain a corresponding lower bound on the thermalization time (cf. Fig. 7). In Fig. 8 we return to the Boltzmann gas with initial value for the stress as given in eq. (15), corresponding to Ṙ(τ_0) = 0. Comparing to Fig. 6, we observe that the viscosity bound is affected mostly for large τ_0: with this initial condition, a high viscosity η/s ∼ 1 is excluded even if the initial time is as large as 2 fm/c. The reason why the upper bound on the viscosity does not disappear at large τ_0 for this initial stress is that Φ*_0/e_0 grows with η/s, cf. eq. (15). A lot of entropy would then be produced, even for large τ_0. We performed similar calculations for the strong-coupling limit as shown in Fig. 9. The curves are rather close to those for a Boltzmann gas from Fig. 8, which is expected. With this initial condition, i.e. Φ_0 = Φ*_0, the hydrodynamic evolution is close to the first-order theory for both cases. Entropy production is sensitive only to τ_0 and η/s but is nearly independent of the stress relaxation time τ_π. Fig. 10 shows the ratio of the final to the initial entropy as a function of τ_0 for two different values of η/s, and the two different relaxation times discussed above in eqs. (12,13). Here, S_fin has been fixed to the value appropriate for central Au+Au collisions while S_ini is varied accordingly. For example, for τ_0 = 0.6 fm/c and η/s = 1/(2π), almost 30% entropy production occurs. This would account for the entire growth of (dN/dη)/N_part from N_part ≃ 60 to N_part ≃ 360 observed in Fig. 1. That is, for these parameters the initial parton multiplicity per participant would have to be completely independent of centrality. For the same viscosity, τ_0 = 0.3 fm/c would imply that nearly half of the final-state entropy was produced during the hydrodynamic stage, i.e. that the initial multiplicity per participant should actually decrease with centrality. Such a scenario appears unlikely to us. Note that even for τ_0 = 0.3 fm/c and η/s = 1/(4π), with T = 400 MeV one finds that Γ_s(τ_0)/τ_0 ≃ 0.17 is quite small.
Romatschke obtained similar numbers for the initial to final entropy ratio, albeit only for τ 0 = 1 fm/c, in a computation that included cylindrically symmetric transverse expansion [28].
V. SUMMARY, DISCUSSION AND OUTLOOK
In this paper, we have analyzed entropy production due to non-zero shear viscosity in central Au+Au collisions at RHIC. We point out that a good knowledge of the initial conditions, and of the final state, of course, can provide useful constraints for hydrodynamics of high-energy collisions, specifically on transport coefficients, on the equation of state (not discussed here, cf. [6]), on the initial/thermalization time and so on.
Our main results are as follows. Assuming that hydrodynamics applies at τ > Γ_s, then, due to the rather restrictive bound on entropy production, it follows that the shear viscosity to entropy density ratio of the QCD matter produced at central rapidity should be small, at most a few times the lower bound η/s = 1/(4π) conjectured from the AdS/CFT correspondence at infinite coupling. This represents a consistency check with similar numbers (η/s ≲ 0.2) extracted from azimuthally asymmetric elliptic flow [4,5,6,7]. We have neglected several other possible sources of entropy production, such as bulk viscosity near the transition region [35] or hadronic corona effects [23]; such additional contributions might tighten the constraints even further.
Furthermore, the entropy production bound correlates the maximal allowed viscosity with the initial time τ_0 for hydrodynamic evolution. This is due to the fact that the expansion rate is equal to the inverse of the expansion time, which makes entropy production from viscous effects rather sensitive to the value of τ_0. We have found that for ∆S/S_ini ≃ 10% the initial time for hydrodynamics should be around 1 fm/c, possibly a little larger. Significantly smaller thermalization times would require either η/s < 0.15 − 0.2 (or even smaller than 1/(4π)), or, alternatively, a particle production mechanism that yields significantly lower initial multiplicities than the KLN-CGC approach. Given the very good description of the centrality dependence of the multiplicity, however, to us it appears reasonable to assume that this approach provides an adequate initial condition in that the initial parton multiplicity per participant increases with centrality.
A significant problem with viscous hydrodynamics, in particular with the second-order approach of Israel-Stewart, is the fact that the number of initial parameters increases. Even within the simplest framework followed here (1+1D Bjorken expansion combined with neglect of conserved currents, of bulk viscosity, and of heat flow), a unique solution requires us to specify, in addition to the ideal-fluid parameters, the shear viscosity and the initial value for the stress. The latter, in particular, is not a general property of near-equilibrium QCD but depends on the parton liberation and thermalization process. We have, however, introduced a physically motivated initial condition for the stress: if τ_0 is defined as the earliest possible initial time for hydrodynamics, it is plausible that the initial Reynolds number should be stationary, Ṙ(τ_0) = 0. Otherwise the fluid either still departs from equilibrium (Ṙ(τ_0) < 0) or is already approaching it (Ṙ(τ_0) > 0).
For small relaxation times of the stress, the condition that Ṙ(τ_0) = 0 implies that its initial value is already close to that given by the first-order theory of Eckart, Landau and Lifshitz (the relativistic generalization of Navier-Stokes hydrodynamics). We therefore expect that in general the two approaches will provide rather similar results for heavy-ion collisions. One should keep in mind, however, that in the second-order theory the entropy current includes a term quadratic in the stress, which is of course absent from the first-order theory, and which reduces entropy production slightly.
Perhaps most importantly, with Ṙ(τ_0) = 0, the hydrodynamic evolution is largely independent of the stress relaxation time τ_π, and therefore similar for both a Boltzmann gas at weak coupling (with low viscosity, however) and a strongly coupled plasma. The latter relaxes very rapidly to the first-order theory, regardless of the initial condition. The former, on the other hand, is forced by the initial condition to start close to relativistic Navier-Stokes, and the relaxation time is still sufficiently small to prevent a significant departure from the first-order theory.
The initial condition Ṙ(τ_0) = 0 also guarantees that R(τ) ≫ 1 for all τ ≥ τ_0, as long as the initial time is not extremely short (τ_0 T_0 ≫ η/s). The effective enthalpy (1 − 1/R)(e + p) is therefore always positive. On the other hand, our numerical results indicate that the Reynolds number does not exceed ∼ 100 during the QGP phase. This is well below the regime where Navier-Stokes turbulence occurs in incompressible, non-relativistic fluids (R ≳ 1000). Indeed, turbulence during the hydrodynamic stage would probably cause large fluctuations of the elliptic flow v_2 [44], which are not seen [45].
A quantitative interpretation of hydrodynamic flow effects in heavy-ion collisions at RHIC and LHC will of course require 2+1D and 3+1D solutions [7,24,25,26,27,28,29]. The results obtained here should prove useful for constraining the initial conditions (in particular τ 0 and Φ 0 ) for such large-scale numerical efforts. In particular, as we pointed out here, the entropy production bound correlates τ 0 with η/s. In turn, we expect that elliptic flow will provide an anti-correlation since later times and larger shear viscosity should both reduce its magnitude. The intersection of those curves could then provide an estimate of the initial time for hydrodynamics at RHIC.
|
2007-08-10T13:33:52.000Z
|
2007-06-14T00:00:00.000
|
{
"year": 2007,
"sha1": "a4d46a9cc81ec882fd8607d012c15b4925f450ac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0706.2203",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a4d46a9cc81ec882fd8607d012c15b4925f450ac",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
38306507
|
pes2o/s2orc
|
v3-fos-license
|
An audit of suprapubic catheter insertion performed by a urological nurse specialist
Aims: To introduce the concept that a urological Nurse Specialist can perform Suprapubic Catheter (SPC) insertions independently without significant complications, if systematic training is given. Settings and Design: Retrospective study. Materials and Methods: A retrospective audit of Suprapubic Catheter insertions performed by a Urological Nurse Specialist was conducted between April 2009 and April 2011. Results: Of the total 53 patients, in 49 (92.45%) the procedure was successful. Out of the remaining four, two (3.77%) were done by a urologist. One patient's (1.89%) SPC did not drain after placement and ultrasonography reported that the Foley balloon was lying within the abdominal wall. The other patient's SPC drained well for a month and failed to drain after the first scheduled change in a month. Since the ultrasonography showed the Foley balloon to be anterior to the distended bladder, an exploration was performed and this revealed that the SPC tract had gone through a fold of peritoneum before reaching the bladder. None had bowel injury. Conclusions: If systematic training is given, a urological Nurse Specialist can perform SPC insertions independently without significant complications.
INTRODUCTION
The concept of a urological Nurse Specialist (NS) is yet to take wings in India. The National Health Service (NHS) of the UK allows nurses to specialize under the auspices of the 'Scope of professional practice'. [1] One area where a well-trained nurse could shoulder responsibility is in performing suprapubic catheter (SPC) insertions. Dependence on a urology resident doctor for SPC insertion is not practical. Over a period of time the NS may actually become as proficient as a trainee urology resident in performing SPC insertions.
At our Hospital, all SPC insertions after April 2009 have been done by a trained nurse. This paper presents an audit of this practice.
MATERIALS AND METHODS
First, a strict written protocol for suprapubic catheter insertion was introduced thereby standardizing the procedure. The protocol stipulated that all patients could be taken up for SPC insertion only after a recent ultrasonography confirmed the absence of a bladder tumor. Obesity and previous pelvic surgery precluded the NS from performing the procedure. All patients were covered with a single dose of antibiotic. A well-distended bladder was a prerequisite for performing the procedure. The site of trocar puncture was marked approximately four finger breadths above the upper border of the symphysis pubis. Local anesthetic was instilled up to the rectus abdominis. Urine was aspirated with a 22G needle. During aspiration, depth and direction were measured and noted. After confirming the free aspiration of urine, trocar cystostomy was performed. An 18F Foley with its tip cut, to facilitate later guide wire exchange if required, was placed.
One of our nurses was selected for this procedure based on aptitude and proficiency. The parameters assessed were the ability to palpate a distended urinary bladder, the ability to predict difficulties, operative steps, asepsis and adherence to the protocol. Training was imparted by the team of consultant urologists. The formal training included theory classes about anatomy of the urinary system and a log book in which the target number of procedures to be observed, assisted and performed under supervision were documented. Once the target numbers were reached and the NS was performing the procedure to the satisfaction of the senior consultant, independent SPC insertions were permitted.
The results of SPC insertions performed between April 2009 and April 2011 were audited retrospectively. All these were done as daycare procedures.
RESULTS
As stipulated in the log book the NS observed and assisted in 12 cases each. Following this he performed 12 cases assisted by a consultant. The NS was then allowed to place suprapubic catheters independently. A urologist, however, was always present for the procedure. During the audit period the NS independently performed 53 suprapubic catheter insertions. The patients' ages ranged from 17 to 95 years. There were 50 males and three females. The various indications are listed in Table 1. All these were done only after an ultrasound study of the bladder. The patients' mean weight was 62.28 ± 14.24 kg. Of the 53 procedures, 49 (92.45%) were successful. In two of these patients the NS was unable to reach the bladder after two attempts. A urology consultant was then asked to take over. In one, blind trocar insertion was successful. The second patient was shifted to the Operation Theatre and SPC was done under fluoroscopic guidance after contrast instillation. The third patient had severe vulvovaginal and lower abdominal wall edema. The SPC did not drain after placement. Ultrasonography reported the Foley balloon to be lying in the abdominal wall. It was repositioned under sonographic guidance. The fourth patient had undergone a previous appendectomy via a standard grid-iron incision. As this did not strictly fall under the category of pelvic surgery the NS performed the procedure. The SPC drained well for a month. At the first scheduled change of SPC the new catheter failed to drain urine. Ultrasonography showed a Foley bulb lying anterior to the distended bladder. Exploration revealed that the SPC tract had gone through a fold of peritoneum before reaching the bladder.
DISCUSSION
The urological nurse specialist is an underutilized resource in India. While many technicians are performing urological procedures under the supervision of their urologists, this remains a largely unregulated area. It is only now that universities have started courses for urology technicians. In our centre, the NS who doubles up as the continence nurse takes a considerable load off the urologists. While the NHS has institutionalized the concept of NS, it is yet to gain a wide foothold in India. This paper audits our attempt at training and utilizing the resources of a urological NS for inserting suprapubic catheters.
Over ten years ago, Gujral et al. [1] reported the successful practice of trained nurse specialists performing suprapubic catheterizations in hospitals as well as in community settings. As is the practice at our centre, they emphasized the use of an unequivocal protocol and rigid selection criteria. In their study 164 suprapubic catheterizations were performed by nurse specialists. Eight were selected for placement by urologists under general anesthesia and cystoscopic control. In our audit a urologist intervened in two cases. In one, a blind trocar SPC insertion was successful, whereas the second patient required transfer to the operating room and placement under fluoroscopic control.
The true incidence of SPC insertion-related complications is difficult to estimate. The most dreaded, of course, is bowel perforation. In 2009 the National Patient Safety Agency reported a national survey conducted by the British Association of Urological Surgeons (BAUS). [2] Thirty-two percent of the urologists could recall a total of 65 bowel perforations over the previous ten years. Fourteen percent recalled deaths associated with the procedure. This survey estimated the risks of bowel perforation and death resulting from the procedure to be 0.15% and 0.05%, respectively. [2] In our review we did not have any bowel perforations.
Peritoneal perforations may occur without bowel injury. [3] The patient who had a peritoneal transgression had undergone an appendectomy in the past. This has prompted us to include all incisions below the umbilicus in the exclusion list for trocar SPC done by the nurse specialist.
In a study comparing 52 suprapubic and 50 urethral catheterizations in males for urinary retention, Abrams reported exclusion of four patients from the SPC group due to failure of the procedure. [4] After analyzing 219 cases performed by consultants, middle grades and senior house officers, Ahluwalia reported malposition/expulsion in six of 219 cases and bowel injury in five. [5] Success rates of the procedure done by our NS compare well with these rates.
Standard surgical practice is to place suprapubic catheters two finger breadths above the symphysis pubis. The recent BAUS recommendation reinforces this. [6] Our protocol stipulates a point four finger breadths above the symphysis pubis. Higher insertion facilitates access to the prostatic urethra during rigid cystoscopy. This is the reason that we continue to use a higher point for insertion. It is borne out by our data that this has not resulted in any increase in the complication rate.
CONCLUSION
A strict protocol and rigid selection criteria have resulted in a successful program of suprapubic catheterizations being performed by the urology nurse specialist. Complication rates compare well with procedures done by surgeons. With our training protocol the NHS model in the UK can be safely replicated in India. However, there are no existing guidelines in our country. This can be eventually conceptualized so that the urological NS can perform these procedures at the residences of those patients who are too morbid or too old to be transported to the hospital. This will be a boon for them.
|
2018-04-03T04:23:26.820Z
|
2013-01-01T00:00:00.000
|
{
"year": 2013,
"sha1": "073de3ccc6d2f06d87b204f24b006d23e15fa2e0",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0970-1591.109977",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a738bb6db45191644e657728c851a231f57c0896",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
239024673
|
pes2o/s2orc
|
v3-fos-license
|
The Ultramassive White Dwarfs of the Alpha Persei Cluster
We searched through the entire Gaia EDR3 candidate white dwarf catalogue for stars with proper motions and positions that are consistent with the stars having escaped from the Alpha Persei cluster within the past 81 Myr, the age of the cluster. In this search we found five candidate white dwarf escapees from Alpha Persei and obtained spectra for all five. We confirm that three are massive white dwarfs sufficiently young to have originated in the cluster. All three are more massive than any white dwarf previously associated with a cluster using Gaia astrometry, and possess some of the most massive progenitors. In particular, the white dwarf Gaia EDR3 439597809786357248, which lies within 25 pc of the cluster centre, has a mass of about 1.20 solar masses and evolved from an 8.5 solar-mass star, pushing the upper limit for white dwarf formation from a single massive star, while still leaving a substantial gap between the resulting white dwarf mass and the Chandrasekhar mass.
INTRODUCTION
The maximum mass of a stable white dwarf (WD) has a widely accepted value of about 1.38 M (Nomoto 1987); the maximum mass of a WD precursor star, however, is much more contentious. Theory suggests this value should be around 8 M (Weidemann & Koester 1983), but for this limit to hold, the observed type II supernovae (SNe) rate should be much higher (Horiuchi et al. 2011). This dearth of observed type II SNe may point to a higher maximum mass, with some initial mass functions suggesting a maximum progenitor mass closer to 12 M (e.g. Kroupa & Weidner 2003). Better constraining this limit is important as it has a profound impact on a number of astrophysical quantities, including, but not limited to, the formation rates of compact objects and the metal enrichment rates of galaxies.
To probe this limit we have been searching for massive WDs which are members of young open star clusters. Identifying massive WDs in young clusters is advantageous as it allows us to use the WD cooling age to estimate the progenitor mass as the main-sequence turnoff mass of the cluster at the time the WD was born. The breadth of modern stellar surveys greatly expands our ability to search for these objects; in particular, the precise parallaxes and proper motions measured by the Gaia survey (Gaia Collaboration et al. 2016) allow us to select high-confidence cluster members using only astrometry and photometry. Recently, a wide search for massive WDs in young clusters (Richer et al. 2021) identified new young and high-mass WDs as cluster members, but failed to identify any cluster member WDs with masses in excess of 1.1 M or with progenitors over 6.2 M , leaving a gap in the high-mass region of the WD initial-final mass relation (IFMR).
As the most massive WDs are the first to be born in a cluster, and since escape velocities in open clusters are quite small, the missing massive WDs might have escaped their host clusters. Open clusters are in fact known to be deficient in WDs (Fellhauer et al. 2003), and this deficit is thought to occur from WDs receiving a natal velocity kick of a few km/s at birth (see Fregeau et al. 2009; Heyl 2007, and references therein). In order to increase the number of potential massive WD cluster members, we expanded our search to include WDs that may have escaped from their host clusters. In previous work, we developed a technique that uses five-dimensional phase space information from Gaia EDR3 (Gaia Collaboration et al. 2020) to trace stars back to their potential birth clusters (Heyl et al. 2021a). This technique was applied to a sample volume around each of the five nearest open clusters whose ages are less than 200 Myr (Heyl et al. 2021b). In the current paper, we expand the analysis for one of these young nearby clusters, Alpha Persei, and search the entire Gentile Fusillo Gaia EDR3 WD catalogue (Gentile Fusillo et al. 2021) for candidate escapees.
We describe the methodology used to identify escaped WDs in § 2, examine candidate massive WDs in § 3, discuss their implication to the WD IFMR in § 4, and summarize our findings in § 5. In this search, we have identified five candidate massive WD escapees from the Alpha Persei cluster, two of which were also found in the aforementioned search in Heyl et al. (2021b). We obtained follow-up spectroscopy for each of these five objects, and were able to confirm three of the WDs as high-confidence escapee massive WDs. These three are the most massive cluster WDs identified thus far. We estimate the most massive of these WDs to have a precursor whose mass is beyond the theoretical limit of 8 M . Given that the WD has a mass of about 1.2 M , notably below the Chandrasekhar limit of approximately 1.38 M , this finding hints at the idea that the upper limit on the mass of a star that can end its life as a WD is well above 8 M or that single-star evolution does not produce WDs with masses all the way up to the Chandrasekhar limit.
To look for potential escapees from the Alpha Persei cluster, we determine the distance from the cluster of each object within the entire Gentile Fusillo EDR3 WD catalogue (Gentile Fusillo et al. 2021) as a function of time, d(t), assuming no relative acceleration and an arbitrary radial displacement δr, where r is the displacement of the star from the Sun and v_2D is the velocity of the star in the plane of the sky. We then determine the time when the star and the cluster were or will be closest together as t_min = −[∆r · ∆v − (∆r · r̂)(∆v · r̂)] / [|∆v|² − (∆v · r̂)²], where ∆r = r − r_cluster and ∆v = v_2D − v_cluster.
This also yields estimates of the radial displacement and of the radial velocity of the star, where a caret denotes a reconstructed quantity. To be deemed a candidate escapee, we require the distance of closest approach to be d_min < 15 pc and the time of closest approach to fall within the lifetime of the cluster (Basri & Martín 1999; Heyl et al. 2021b), −81 Myr < t_min < 0 Myr; we also impose |∆v_3D| < 5 km/s, determined by looking at the cumulative distribution of reconstructed relative 3D velocities of sample stars that met the escapee criteria for d_min and t_min (Heyl et al. 2021b). Furthermore, to identify the potential WD escapees we further restrict the sample to WDs whose age is estimated to be less than 250 Myr and whose mass is greater than 0.85 M from their Gaia EDR3 photometry, thus still allowing for the possibility that interstellar reddening and absorption could make the objects appear older and less massive than their true values.
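A minimal numerical version of this traceback is sketched below. It assumes straight-line relative motion, treats the radial velocity as unknown by minimizing only the separation transverse to the line of sight, and uses placeholder positions (pc) and tangential velocities (pc/Myr) rather than actual Gaia measurements; the closed-form t_min above would give the same minimum.

# Sketch: trace a star back relative to a cluster, assuming straight-line motion
# and an unknown radial velocity (only the component of the separation transverse
# to the line of sight is minimized).  All input values are placeholders.
import numpy as np

def closest_approach(r_star, v_star, r_cluster, v_cluster, t_max=-81.0, n=2000):
    r_s = np.asarray(r_star, dtype=float)
    dr = r_s - np.asarray(r_cluster, dtype=float)
    dv = np.asarray(v_star, dtype=float) - np.asarray(v_cluster, dtype=float)
    rhat = r_s / np.linalg.norm(r_s)                  # line of sight to the star
    best_t, best_d = 0.0, np.inf
    for t in np.linspace(t_max, 0.0, n):              # Myr (negative = past)
        sep = dr + dv * t
        sep_perp = sep - np.dot(sep, rhat) * rhat     # drop the radial component
        d = np.linalg.norm(sep_perp)
        if d < best_d:
            best_t, best_d = t, d
    return best_t, best_d

t_min, d_min = closest_approach([170.0, 30.0, -20.0], [0.10, -0.02, 0.01],
                                [172.0, 28.0, -18.0], [0.08, -0.01, 0.01])
print(t_min, d_min)   # candidate escapee if d_min < 15 pc and -81 < t_min < 0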
CANDIDATE WHITE DWARF ESCAPEES
This search yielded five new candidates from the Alpha Persei cluster as shown in Fig. 1. Because of their current proximity to the cluster (within 25 pc) and small relative proper motions, Lodieu et al. (2019) have identified two of these white dwarfs (WD1 and WD2) as candidate members of the cluster. These two objects were also identified in a search of the entire Gaia EDR3 database as candidate escapees of the cluster (Heyl et al. 2021b). The three others are now more than 100 pc away from the centre of the cluster. The astrometric, spectroscopic, and derived quantities for all five objects are presented in Tables 1 and 2.
Spectroscopic Analysis
We obtained optical spectroscopy for WD1 (Gaia EDR3 439597809786357248) and WD2 (Gaia EDR3 24400369345718860) with the 8.1m Gemini-North telescope using the Gemini Multi-Object Spectrograph (GMOS; Hook et al. 2004;Gimeno et al. 2016) in long-slit mode, using the B600 grating with no filter centered at 520 nm. Employing a 1.00 arcsecond focal plane mask, we binned 2x2 in both the spectral and spatial directions, providing a pixel scale of 0.161 arcsec/pixel, for a post binning resolution of ≈ 1 angstrom. Total exposure time was 1 hr 3 mins for WD1, and 1 hr 19 mins for WD2. For WD5 (Gaia EDR3 1983126553936914816), we obtained spectra with the 10m Keck I Telescope (HI, USA) and the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995;McCarthy et al. 1998), using the R600 grism for the blue arm (R = 1100) and the R600 grating for the red arm (R = 1400) with 2x2 binning, for a total exposure time of 600 s. Spectra for WD3 (Gaia EDR3 1924074262608187648) and WD4 (Gaia EDR3 1990559596140812544) were obtained with the Double-Beam Spectrograph (DBSP; Oke & Gunn 1982) on the 200 inch Hale telescope at Palomar observatory, with the R600 grating on the blue arm and the R316 grating on the red arm, for a total exposure time of 20 minutes each.
The spectra, showing a blue continuum and broad hydrogen Balmer absorption lines, confirm that the five stars are WDs with hydrogen-dominated atmospheres (DA). The lack of notable Zeeman splitting of the spectral lines excludes the possibility of a strong magnetic field in any candidate. We analyse the spectroscopic data to obtain estimates for the surface gravities and temperatures of the five WDs. We employ atmospheric models developed by Gianninas et al. (2010) and by Tremblay et al. (2011). In both sets of models, the hydrogen atmosphere is computed without the assumption of local thermodynamic equilibrium; the main difference is that in the former, the composition of the atmosphere includes carbon, nitrogen, and oxygen at solar abundance ratios, while the latter are made of pure hydrogen. The addition of metals in the atmosphere is important for very hot WDs, where metal levitation in the intense radiation field can change the shape of the Balmer lines (Gianninas et al. 2010). Our fitting method to the Balmer lines is similar to the routine outlined in Liebert et al. (2005): we fit the spectrum with a grid of spectroscopic models combined with a polynomial in λ (up to λ 9 ) to account for calibration errors in the continuum; we then normalize the spectrum using this smooth function picking normal points at a fixed distance in wavelength to the lines and finally use our grid of model spectra to fit the Balmer lines and extract the values of the effective temperature (T eff ) and logarithm of the surface gravity (log g). The nonlinear least-squares minimization method of Levenberg-Marquardt is used in all our fits.
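A schematic version of such a fit is sketched below. The grid interpolator model_spectrum(Teff, log g, wl) is hypothetical (the real fit interpolates the Gianninas et al. or Tremblay et al. model grids), the continuum polynomial and line-window selection are omitted, and a toy Gaussian absorption model is used only so the example runs end to end.

# Sketch: Levenberg-Marquardt fit of Teff and log g to normalized line profiles.
# model_spectrum(teff, logg, wl) stands in for a real model-grid interpolator.
import numpy as np
from scipy.optimize import least_squares

def fit_balmer(wl, flux, model_spectrum, teff0=40000.0, logg0=8.5):
    def residuals(params):
        teff, logg = params
        return model_spectrum(teff, logg, wl) - flux
    sol = least_squares(residuals, x0=[teff0, logg0], method="lm")
    return sol.x                      # best-fit (Teff, log g)

def toy_model(teff, logg, wl):        # toy stand-in, for demonstration only
    depth = 0.5 * (logg / 8.0)
    width = 20.0 * (teff / 40000.0)
    return 1.0 - depth * np.exp(-0.5 * ((wl - 4861.0) / width) ** 2)

wl = np.linspace(4700.0, 5000.0, 300)
obs = toy_model(42000.0, 9.0, wl) + np.random.normal(0.0, 0.01, wl.size)
print(fit_balmer(wl, obs, toy_model))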
We initially fit each spectrum using the pure hydrogen atmosphere models of Tremblay et al. (2011). As WDs 1, 2 and 5 all appear to be very hot WDs above 40,000 K, we additionally fit using the Gianninas et al. (2010) models that include the influence of metals in the atmosphere. These models were developed because the presence of metals in the atmosphere, although not abundant enough to appear as additional metal lines, modifies the shape of the Balmer lines, preventing simultaneous fitting of these lines. We find that WD1 does not display the Balmer line problem and the fit is not improved using metal-influenced models, while for WDs 2 and 5 the metal-influenced models notably improve the quality of the simultaneous fit to the Balmer lines. The small differences in the model fitting quality for these WDs are not surprising as the effective temperature of WD1 is approximately 42,000 K, where the effects of metal levitation are supposedly very weak (Gianninas et al. 2010), while WDs 2 and 5 are at somewhat higher temperatures, closer to T eff = 47,000 K. The best fits are shown in Fig. 2 using Tremblay et al. (2011) pure hydrogen atmosphere models for WDs 1, 3 and 4, and Gianninas et al. (2010) metal-influenced models for WDs 2 and 5. The resulting values of T eff and log g are listed in Table 2. For each WD, we determine the mass and cooling age from two sets of high-mass WD cooling models: the Camisassa et al. (2019) models, with an oxygen-neon (ONe) core composition and hydrogen-dominated atmosphere 1 as well as the Bédard et al. (2020) thick hydrogen atmosphere models with a carbon-oxygen (CO) core 2 . WDs should have a mass of at least 1.05 M to contain ONe cores (Siess 2007). In Table 2, we list the masses and cooling ages of the five WDs, and, for WDs that have a mass above 1.05 M , we include the results of both the ONe and the CO fitting. Though we cannot strictly rule out the possibility of CO cores, theory suggests these three ultramassive WDs are all likely to have ONe cores. The CO core fit for WD1 implies a very massive precursor that would be a significant outlier in the IFMR, providing evidence that ONe is the preferred core composition for ultramassive WDs. Going forward, we will consider only the ONe results for these stars.
WDs 3 and 4 have masses close to 1 M and cooling ages that are much longer than the age of the cluster. For this reason, they cannot be former members of Alpha Persei, and we remove them from our sample. WD1 escaped from the Alpha Persei cluster about 5 Myrs ago with a 3D escape velocity of 4.08 km/s. The ONe model fit suggests a very massive WD with a mass of 1.20 ± 0.01 M and a cooling age of 45 ± 4 Myrs. Combined with the cluster age of 81 ± 6 Myrs (Basri & Martín 1999; Heyl et al. 2021b), this yields a precursor main sequence lifetime of 35 ± 7 Myrs, corresponding to an 8.5 ± 0.9 M progenitor according to the Padova isochrone models (Bressan et al. 2012; Tang et al. 2014; Chen et al. 2014, 2015; Marigo et al. 2017; Pastorelli et al. 2019, 2020). WD2 is currently about 20 pc away from the cluster, having escaped approximately 12 Myrs ago with a small escape velocity of 1.61 km/s. ONe models suggest a mass of 1.17 ± 0.01 M and a cooling age of 14 ± 4 Myrs, giving a progenitor mass of 6.3 ± 0.3 M from the Padova models. WD5 was possibly still a main sequence star when it escaped the cluster approximately 30 Myrs ago; a higher escape velocity of 4.43 km/s pushed the WD out to 136 pc away from the cluster. WD5 has a mass of 1.12 ± 0.02 M with a cooling age of 3 ± 1 Myrs, corresponding to a 5.9 ± 0.2 M progenitor. Because WD1 and WD2 were found in the well-defined sample of Heyl et al. (2021b), we can estimate the number of young massive WDs that we would expect to appear in this sample by chance. Heyl et al. (2021b) looked for stars in a volume of 6.4 × 10⁶ pc³ surrounding the cluster. Of the 463,917 stars in this volume, only 698 have kinematics consistent with having left the cluster within the last 100 Myr, and of these 698 stars about 300 may be interlopers (Heyl et al. 2021b). On the other hand, Fleury et al. (2021) determined that there are 100 WDs with masses greater than 0.95 M and ages less than 250 Myr within 200 pc of the Sun. Combining these results we find that 0.012 young massive WDs would lie within the phase-space volume probed in the survey by chance. WD1 is both significantly younger and more massive than our thresholds, so the a posteriori chance of such a massive young WD being an interloper is a factor of fifteen smaller (Fleury et al. 2021), less than 10⁻³. An alternative calculation that ignores the small relative proper motions between WD1 and WD2 and the cluster, but focuses on the small distance between them, is the number of massive young WDs that one would expect in an average sphere of 25 pc; the result is 0.04 WDs younger than 50 Myr and more massive than 0.95 M (Fleury et al. 2021).
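The progenitor masses quoted above follow from subtracting the cooling age from the cluster age and inverting a main-sequence lifetime–initial mass relation, with uncertainties added in quadrature. The sketch below uses a made-up placeholder table in place of the Padova isochrones, so its interpolated masses are illustrative only.

# Sketch: progenitor mass from (cluster age - cooling age), uncertainties in
# quadrature.  The lifetime -> initial-mass table is a placeholder, NOT the
# actual Padova isochrone relation.
import numpy as np

ms_lifetime_myr = np.array([200.0, 100.0, 60.0, 40.0, 25.0])   # hypothetical
initial_mass    = np.array([4.0,   5.5,   6.5,  8.0,  10.0])   # Msun

def progenitor_mass(cluster_age, cluster_err, cool_age, cool_err):
    t_ms  = cluster_age - cool_age
    t_err = np.hypot(cluster_err, cool_err)
    xp, fp = ms_lifetime_myr[::-1], initial_mass[::-1]          # increasing xp
    m     = np.interp(t_ms, xp, fp)
    m_lo  = np.interp(t_ms + t_err, xp, fp)
    m_hi  = np.interp(t_ms - t_err, xp, fp)
    return m, 0.5 * (m_hi - m_lo)

print(progenitor_mass(81.0, 6.0, 45.0, 4.0))   # WD1-like input numbers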
The Possibility of Interlopers
The remaining WDs, WD3 to WD5, were discovered in a broader search over the Gentile Fusillo et al. (2021) catalogue, so we cannot assess the relative probabilities that WD5 comes from the cluster versus the field in the same manner as WD1 and WD2; however, in this case we can obtain an estimate by looking at all of the WDs that meet the kinematic criteria to have escaped from Alpha Persei (65 WDs). The vast majority of these are too old to have originated from the cluster, and they are therefore clear interlopers. Fleury et al. (2021) have found that only one out of one thousand WDs within 200 pc is more massive than 0.95 M and younger than 250 Myr, yielding an expectation of finding only 0.06 WDs with these properties as chance interlopers within the sample. Given this estimate, it is somewhat surprising that we did find two massive and relatively young interlopers in the sample, WD3 and WD4. Their presence allows us to investigate the possibility that the phase-space volume corresponding to escapees from Alpha Persei contains a relative overabundance of massive stars and therefore massive WDs. Heyl et al. (2021b) have argued that the Alpha Persei cluster lies on an orbit with a small inclination with respect to the Galactic plane. If we assume that among these massive WDs (from 0.95 to 1.25 M ) the usual relative distribution in mass and age occurs (Fleury et al. 2021), we can estimate the probability that WD5 is also an interloper. The mean age of WD3 and WD4 is 167 Myr (fourteen times older than WD5) and they come from a population at least twice as large as that of WD5: WDs more massive than 0.95 M , compared to those more massive than 1.10 M . This yields a probability for WD5 to be an interloper of 0.07, larger than the results for WD1 and WD2, but still very small.
THE INITIAL-FINAL MASS RELATION
Fig. 3 shows the updated IFMR from Richer et al. (2021) including these new escaped cluster member WDs as well as those identified in Heyl et al. (2021a). Each of the three newly identified WDs from this work is more massive than any cluster WDs previously identified. For each of these new WDs we display results of ONe core fits. WD1 has a precursor mass of 8.5 ± 0.9 M , placing it near the theoretical limit of about 8 M (e.g. Woosley et al. 2002). Given that the WD's mass is still well below the Chandrasekhar mass, this supports an increased main-sequence upper mass limit for WD production, more consistent with expectations from observed SN II rates, or hints at the fact that single-star evolution does not produce WDs with masses all the way up to the Chandrasekhar limit.
While field stars do not provide the initial mass of a star (cluster membership is required for that), it is nevertheless instructive to inquire as to the maximum mass of WDs seen outside of clusters. In an analysis of the 100 pc sample from the Montreal White Dwarf Database, Kilic et al. (2021) identified 25 WDs with masses above 1.3 M if all possess H-atmospheres and CO cores. If the WDs instead have ONe cores, which we expect is likely the case for the majority of WDs in this mass range, just 2 of them have masses above 1.3 M . However, 23 of the 25 would have masses above 1.25 M , well above the most massive WD in the current IFMR. Note that, at a minimum, a third of these are merger remnants as revealed by high magnetic fields and rapid rotation. Nevertheless, their findings suggest that WDs are formed much closer to the Chandrasekhar limit than those that have been thus far identified in clusters, which, when coupled with our results, provides additional support for an increased upper limit on WD production.
CONCLUSIONS
We employ a technique that we developed in Heyl et al. (2021a) for identifying WDs that may have escaped from open star clusters. In Heyl et al. (2021a) as well as the follow-up paper Heyl et al. (2021b), this technique was used for a sample volume around five nearby young clusters. Here, we instead search the entire Gentile Fusillo Gaia EDR3 WD catalogue for one specific cluster, Alpha Persei. By tracing the historical position of these catalogue WDs, we identified five candidates whose motion suggested they may have escaped from the cluster. Each of these was followed up with spectroscopy from Gemini-North GMOS, Keck LRIS, or Palomar DBSP. The surface gravity and temperature of each WD were determined from the best-fit NLTE hydrogen atmosphere models. From these results the mass and cooling age were determined using CO core cooling models as well as ONe models for those above 1.05 M . Of the five WDs, three are consistent with being escaped former cluster members, while the other two have cooling ages which are larger than the age of the cluster, thus eliminating the possibility of membership. The progenitor mass for each of the three escaped members was determined using Padova isochrone models (Bressan et al. 2012; Tang et al. 2014; Chen et al. 2014, 2015; Marigo et al. 2017; Pastorelli et al. 2019, 2020).
Though the results of this work provide valuable insight into the WD IFMR, we have not yet identified any cluster member WDs near the Chandrasekhar limit. Measurement uncertainty likely limits the technique we have used here to a handful of nearby clusters. That said, given that we have identified a significant number of escaped WDs, we are led to believe that many of these objects have also escaped from other clusters and are merely waiting to be identified. We will continue to work to develop techniques to identify these escaped WDs in the future to better constrain the upper mass limit of WD progenitor stars.
|
2021-10-20T01:15:58.931Z
|
2021-10-19T00:00:00.000
|
{
"year": 2021,
"sha1": "2bedce7c8e5207d6b534a6e5e3e75766b7055be1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3847/2041-8213/ac50a5",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "2bedce7c8e5207d6b534a6e5e3e75766b7055be1",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
}
|
117772784
|
pes2o/s2orc
|
v3-fos-license
|
Scalar Mesons in Radiative Decays and pi-pi Scattering
In this write-up, I summarize the analyses on the low-lying scalar mesons I have done recently with my collaborators. I first briefly review the previous analyses on the hadronic processes related to the scalar mesons, which shows that the scalar nonet takes dominantly the $qq\bar{q}\bar{q}$ structure. Next, I summarize our analysis on the radiative decays involving the scalar mesons, which indicates that it is difficult to distinguish $qq\bar{q}\bar{q}$ picture and $q\bar{q}$ picture just from radiative decays. Finally, I summarize our recent analysis on the $\pi$-$\pi$ scattering in the large $N_c$ QCD, which indicates that the $\sigma$ meson is likely the $qq\bar{q}\bar{q}$ state.
I. Introduction
According to recent theoretical and experimental analyses, there is a possibility that nine light scalar mesons exist below 1 GeV, and they form a scalar nonet [1]. In addition to the well-established f 0 (980) and a 0 (980), evidence of both experimental and theoretical nature for a very broad σ (≃ 560) and a very broad κ (≃ 900) has been presented.
As is stressed in Ref. [2], the masses of the above low-lying scalar mesons do not obey the "ideal mixing" pattern which nicely explains the masses of mesons made from a quark and an anti-quark such as vector mesons [3]. As is shown in Ref. [2], the "ideal mixing" pattern qualitatively explains the mass hierarchy of the scalar nonet when the members of the nonet have a qqqq quark structure proposed in Ref. [4]. In this 4-quark picture, two quarks are combined to make a diquark which together with an anti-diquark forms a scalar meson. The resultant scalar mesons have the same quantum numbers as the ordinary scalar mesons made from the quark and antiquark (2-quark picture). It is difficult to clarify the quark structure of the low-lying scalar mesons just from their quantum numbers. The patterns of the interactions of the scalar mesons with other mesons made from qq, on the other hand, depend on the quark structure of the scalar mesons. I expect that the analysis of the interactions of the scalar mesons will shed some light on the quark structure of the scalar nonet. Actually, in Refs. [2,5], several hadronic processes related to the scalar mesons are studied. They concluded that the scalar nonet takes dominantly the qqqq structure.
Recently, to obtain more information on the structure of the low-lying scalar mesons, we studied the radiative decays involving scalar mesons [6] and the π-π scattering in the large N c QCD [7]. In this write-up I will summarize these analyses, especially focusing on the quark structure of the low-lying scalar nonet, and show how these processes give a clue for understanding the structure of the scalar mesons. (* Talk given at the YITP workshop on "Multi-quark Hadrons: four, five and more?", February 17-19, 2004, Yukawa Institute, Kyoto, Japan.)
This write-up is organized as follows: In section II, following Refs. [2,5], I will briefly review the analyses on the hadronic processes related to the scalar mesons. Next, in section III, I will briefly summarize the analysis on the radiative decays involving the scalar mesons based on the 4-quark picture [6]. I also present a new result on the analysis on the decay processes based on the 2-quark picture [8]. In section IV, I will summarize our analysis on the π-π scattering in the large N c QCD [7]. Finally, in section V, I will give a brief summary.
II. Effective Lagrangian for Scalar Mesons
In this section I briefly review previous analyses [2,5] on the masses of scalar mesons and hadronic processes related to the scalar mesons.
In Ref. [2], the scalar meson nonet is embedded into the 3 × 3 matrix field N of eq. (2.1), where N T and N S represent the "ideally mixed" fields. The physical σ(560) and f 0 (980) fields are expressed as linear combinations of these N T and N S , eq. (2.2), where θ S is the scalar mixing angle. The scalar mixing angle θ S parameterizes the quark content of the scalar nonet field: when θ S = ±90°, the σ and f 0 fields are embedded into the nonet field in the way that is natural for a scalar nonet based on the qq picture, whereas for θ S = 0° or 180° the scalar nonet field N takes the form that is natural for the qqqq picture. Thus, the present treatment of the nonet field with the scalar mixing angle can express both pictures for the quark content. By using the scalar nonet field introduced above, the effective Lagrangian for the scalar meson masses is expressed as in Ref. [2], where a, b, c and d are real constants, and M is the "spurion matrix" expressing the explicit chiral symmetry breaking due to the current quark masses. This M is defined by M = diag(1, 1, x), where x is the ratio of strange to non-strange quark masses, with isospin invariance assumed. Note that the scalar mixing angle is expressed by a combination of the parameters a, b, c and d.
Here I use the following values of the masses of the scalar nonet as inputs: the σ(560) and f 0 (980) masses determined from the π-π scattering [11], and M κ ≃ 900 MeV (2.10) determined from the π-K scattering [12]. The above choice yields two possible solutions for the scalar mixing angle [2], θ S ∼ −20° (2.11) and θ S ∼ −90° (2.12). The solution in Eq. (2.11) corresponds to the case where the scalar nonet is dominantly made from qqqq, while the solution in Eq. (2.12) corresponds to the case where it is made from qq. For determining the scalar mixing angle, the authors of Ref. [2] considered the tri-linear scalar-pseudoscalar-pseudoscalar interaction. There the pseudoscalar mesons are embedded into the nonet field P of eq. (2.13), where η T and η S denote the ideally mixed fields. Based on the two-mixing-angle scheme introduced in Ref. [13], the physical η and η′ fields are expressed as linear combinations of η T and η S . By using the scalar meson nonet field N defined in Eq. (2.1) together with the above pseudoscalar nonet field P, the general SU(3) flavor invariant scalar-pseudoscalar-pseudoscalar interaction −L NPP is written as in eq. (2.14) of Ref. [2], where A, B, C and D are four real constants, and a, b, c = 1, 2, 3 denote flavor indices. The derivatives of the pseudoscalars were introduced in order that Eq. (2.14) properly follows from a chiral invariant Lagrangian in which the field P transforms non-linearly under chiral transformation. In Refs. [2,5], the four parameters A, B, C, D and the scalar mixing angle are determined by fitting them to the experimental data of the π-K scattering and the η′ → ηππ decay together with the π-π scattering. The resultant best-fitted values for A, B, C and D are given in Refs. [2,5]. The best-fitted value of the scalar mixing angle is θ S ≃ −20°, which implies that the scalar meson takes dominantly the qqqq structure. It should be noticed that the coupling constant of the f 0 -π-π interaction determined from the π-π scattering [11] plays an important role in constraining the value of the mixing angle.
III. Radiative Decays Involving Scalar Mesons
In the previous section, I briefly reviewed the analyses done in Refs. [2,5], which show that the experimental data of the hadronic decay processes involving scalar mesons give θ S ≃ −20°, i.e., the scalar meson is dominantly made from qqqq. In this section, I show our analysis of the radiative decays involving the scalar mesons done in Ref. [6].
In Ref. [6], the trilinear scalar-vector-vector terms were included into the effective Lagrangian, eq. (3.1), where N is the scalar nonet field defined in Eq. (2.1). F µν (ρ) is the field strength of the vector meson fields, eq. (3.2), where g ≃ 4.04 [14] is the coupling constant. (A term ∼ tr(F F N) is linearly dependent on the four shown.) In Ref. [6], vector meson dominance is assumed to be satisfied in the radiative decays involving the scalar mesons. Then, the above Lagrangian (3.1) determines all the relevant interactions. Actually, the β D term does not contribute, so there are only three relevant parameters, β A , β B and β C . Equation (3.1) is analogous to the P V V interaction 1 which was originally introduced as a πρω coupling a long time ago [16]. One can now compute the amplitudes for S → γγ and V → Sγ according to the diagrams of Fig. 1. The decay matrix element for S → γγ is written as in eq. (3.3), where X S takes on the specific forms given in eq. (3.4). In these expressions α = e²/(4π), s = sin θ S and c = cos θ S , where the scalar mixing angle θ S is defined in Eq. (2.2). Furthermore, ideal mixing for the vector mesons was assumed for simplicity. Similarly to the one for S → γγ, the decay matrix element for V → Sγ is given in eq. (3.5) and carries an overall factor e/g.
5)
1 It was shown [15] that the complete vector meson dominance (VMD) is violated in the ω → π 0 π + π − decay which is expressed by P V V interactions. However, since the VMD is satisfied in other processes such π 0 → γγ * as well as in the electromagnetic form factor of pion, it is assumed to be held in the processes related to SV V interactions in Ref. [6].
is the photon momentum in the V rest frame. For the energetically allowed V → Sγ processes we have In addition, the same model predicts amplitudes for the energetically allowed S → V γ processes: Let me show the results obtained in Ref. [6] together with new results from a recent analysis [8]. I should stress again that all the different decay amplitudes are described by only three parameters β A , β B and β C .
In Ref. [6], the value of β_A was determined from the a_0 → γγ process. Substituting Γ_exp(a_0 → γγ) = (0.28 ± 0.09) keV determines β_A (Eq. (3.9)), where the positive sign was assumed. By using this value, the value of β_C is determined from Γ_exp(φ → a_0γ) = (0.47 ± 0.07) keV (obtained by assuming that φ → ηπ⁰γ is dominated by φ → a_0γ) and Eq. (3.6) as β_C = (7.7 ± 0.5, −4.8 ± 0.5) GeV⁻¹. (3.10) It should be stressed that the values of β_A and β_C obtained above are independent of the mixing angle θ_S, and that |β_A| is almost an order of magnitude smaller than |β_C|. As one can see from Eq. (3.8), the amplitude D^ω_a0 is given by β_C while D^ρ0_a0 is given by β_A alone. The large hierarchy between β_C and β_A therefore implies a large hierarchy between Γ(a_0 → ωγ) and Γ(a_0 → ργ). Indeed, using the values of β_A and β_C given in Eqs. (3.9) and (3.10), they are estimated as Γ(a_0 → ωγ) = (641 ± 87, 251 ± 54) keV and Γ(a_0 → ργ) = 3.0 ± 1.0 keV. (3.11) This large hierarchy between Γ(a_0 → ωγ) and Γ(a_0 → ργ) is caused by the order-of-magnitude difference between |β_C| and |β_A|. I next show how to determine the value of β_B from the f_0 → γγ process. X_f0 in Eq. (3.4) depends on β_B as well as on β_A and the scalar mixing angle θ_S. Here the scalar mixing angle is taken as θ_S ≃ −20°, which is characteristic of qqqq-type scalars [2]. Using this together with the value of β_A in Eq. (3.9), β_B is determined from Γ(f_0 → γγ); the result implies that |β_B| is of the order of |β_A|, and almost an order of magnitude smaller than |β_C|. Equation (3.8) shows that D^ω_f0 includes β_C while D^ρ_f0 does not. Thus, there is a large hierarchy between the decay widths of f_0 → ωγ and f_0 → ργ; the typical predictions are given in Eq. (3.14). This large hierarchy between Γ(f_0 → ωγ) and Γ(f_0 → ργ) is caused by the fact that |β_C| is an order of magnitude larger than |β_A| and |β_B|. I summarize the fitted values of β_A, β_B and β_C, together with several predicted values of the decay widths of V → S + γ and S → V + γ, in Table I. Let me next make an analysis when the scalar mixing angle is taken as θ_S ≃ −90°. As I stressed above, the values of β_A and β_C are independent of the scalar mixing angle θ_S. The value of β_B determined from Γ(f_0 → γγ) changes, and the resulting predictions for Γ(f_0 → ωγ) and Γ(f_0 → ργ) are very close to the ones in Eq. (3.14). This can be understood as follows. From the expression of D^ω_f0 in Eq. (3.8), one can see that it is dominated by the term including β_C, which is proportional to (cos θ_S − √2 sin θ_S). The approximate relation cos(−20°) − √2 sin(−20°) ≃ 1.42 ≈ √2 = cos(−90°) − √2 sin(−90°) then implies that the value of D^ω_f0 for θ_S = −90° is close to that for θ_S = −20°, and thus Γ(f_0 → ωγ) for θ_S = −90° is close to that for θ_S = −20°. As for Γ(f_0 → ργ), I should note that X_f0 in Eq. (3.4) and D^ρ0_f0 in Eq. (3.8) satisfy a simple relation; since the experimental value of Γ(f_0 → γγ), i.e., X_f0, is used as an input, this relation implies that the predicted value of Γ(f_0 → ργ) for θ_S = −90° is roughly equal to that for θ_S = −20°. Similarly, the predicted values of the other radiative decay widths for θ_S ≃ −90° are also very close to those for θ_S ≃ −20°, as I list in Table II.
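The near-degeneracy of the two mixing-angle scenarios can be checked with a few lines of code. The sketch below (not part of Ref. [6]; the overall normalization of the amplitude is deliberately omitted, since only the ratio matters) evaluates the angular factor cos θ_S − √2 sin θ_S that multiplies β_C in D^ω_f0 for θ_S = −20° and θ_S = −90°.

```python
import math

def omega_f0_factor(theta_deg):
    """Angular factor multiplying beta_C in the omega-f0 amplitude D^omega_f0."""
    th = math.radians(theta_deg)
    return math.cos(th) - math.sqrt(2.0) * math.sin(th)

for theta in (-20.0, -90.0):
    print(f"theta_S = {theta:6.1f} deg : cos(theta) - sqrt(2)*sin(theta) = "
          f"{omega_f0_factor(theta):.3f}")

# Output: 1.423 for -20 deg and 1.414 for -90 deg, so the beta_C-dominated
# amplitude, and hence Gamma(f0 -> omega gamma), is nearly the same in the
# qqqq (-20 deg) and qq (-90 deg) scenarios.
```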
The result here indicates that it is difficult to distinguish the two pictures from radiative decays alone. Of course, other radiative decays should be studied to obtain more information on the structure of the scalar mesons. Furthermore, inclusion of loop corrections may be important [17]. However, there are still large uncertainties in the experimental data which make the analysis harder. So, instead of analyses that can be compared directly with experiment, theoretical considerations can give a clue to the structure of the scalar mesons.
TABLE II: Fitted parameters (in GeV⁻¹) and predicted radiative widths (in keV) for the two scalar mixing angles.
                θ_S ≃ −20°        θ_S ≃ −90°
β_A             0.72 ± 0.12       0.72 ± 0.12
β_B             −0.12 ± 0.13      1.1 ± 0.1
β_C             7.7 ± 0.52        7.7 ± 0.52
Γ(σ → γγ)       0.023 ± 0.024     0.37 ± 0.10
Γ(a_0 → ργ)     3.0 ± 1.0         3.0 ± 1.0

IV. π-π Scattering in Large Nc QCD

In this section, I briefly review our recent analysis [7] of the π-π scattering in QCD with a large number of colors N_c. First, let me briefly review the analyses done in Refs. [11,18], which stressed that the scalar meson σ is needed to satisfy unitarity in the isospin I = 0, S-wave π-π scattering amplitude in real-life QCD with N_c = 3. The first contribution included in the π-π scattering amplitude is the one from the pion self-interaction given by current algebra or, equivalently, by the leading-order chiral Lagrangian, Eq. (4.1), where F_π = 92.42 MeV is the pion decay constant. Its contribution to the real part of the I = 0, S-wave π-π scattering amplitude is shown by the dashed line in Fig. 2. Since an amplitude greater than 0.5 implies that unitarity is violated, this contribution breaks unitarity in the energy region around 500 MeV. The solid line in Fig. 2 shows the curve when the ρ-exchange contribution, Eq. (4.2), is included in addition.
Note that the appearance of the first term is required by chiral symmetry. From Fig. 2, we can easily see that a large cancellation occurs between the contribution from the pion self-interaction and that from the ρ-meson exchange. However, unitarity is still violated around 550 MeV.
To recover unitarity, we need a negative contribution to the real part above the point where the solid line in Fig. 2 violates the unitarity bound, while below that point a positive contribution is preferred by the experimental data. This property matches the real part of a resonance contribution: a resonance contributes positively in the energy region below its mass and negatively in the energy region above its mass. In other words, unitarity requires the existence of a resonance in this energy region. We therefore included a low-mass broad scalar state, σ. The contribution of the σ to the real part of the amplitude is given in Eq. (4.3), where G′ is a parameter corresponding to the width and γ_σππ is the σ-π-π coupling constant. This γ_σππ is related to the parameters A and B of the scalar-pseudoscalar-pseudoscalar interaction Lagrangian given in Eq. (2.14) [2]. In Ref. [11], a best overall fit was obtained with the parameter choices given in Eq. (4.5). The result for the real part R^0_0 obtained by including the σ contribution along with the π and ρ contributions is shown in Fig. 3. It is seen that the unitarity bound is satisfied and there is reasonable agreement with the experimental points up to about 800 MeV.
The above analysis of the π-π scattering in real-life QCD teaches an important lesson: the mass of the σ meson is determined by the point where the amplitude constructed from the π + ρ contributions violates unitarity. Now, let me show the results of Ref. [7], where the π-π scattering in large-N_c QCD was analyzed.

FIG. 3: R^0_0 with the (current algebra) + ρ + σ contribution included [11].
The first contribution to be included in the amplitude is the current algebra contribution given in Eq. (4.1). Note that the pion decay constant F_π depends on N_c at leading order as F_π(N_c)/F_π(N_c = 3) = √(N_c/3), while the pion mass m_π is independent of N_c at leading order. In Fig. 4, I show the plot [7] of the current algebra contribution to the real part of the I = 0 S-wave amplitude, R^0_0, for increasing values of N_c. We observe that unitarity is violated at s = s*_ca, which increases linearly with N_c. Next, I show that this result is strongly modified by the presence of the well-established qq companion of the pion, the ρ meson. The amplitude is obtained by adding to the current algebra contribution the ρ-meson contribution given in Eq. (4.2). In Fig. 5, I show the plot of R^0_0 due to current algebra plus the ρ contribution for increasing values of N_c. Here the scaling property of the ρ-π-π coupling is taken as g_ρππ(N_c)/g_ρππ(N_c = 3) = √(3/N_c), with m_ρ kept fixed. This figure shows that unitarity (i.e., |R^0_0| ≤ 1/2) is satisfied for N_c ≥ 6 up to well beyond the 1 GeV region. However, unitarity is still a problem for 3, 4 and 5 colors. As I showed in Fig. 3, the violation of unitarity in real-life QCD is cured by the existence of the σ pole. The σ pole structure is such that the real part of its amplitude is positive for s < M²_σ and negative for s > M²_σ. Identifying the squared sigma mass roughly with s*, at which R^0_0 without the σ contribution violates unitarity, will then give a negative contribution where the real part of the amplitude exceeds +0.5. In the case where only the current algebra term is included we get M²_σ ≈ s*_ca. This shows that the squared mass of the σ meson needed to restore unitarity for N_c = 3, 4, 5 increases roughly linearly with N_c. This estimate gets modified a bit when we include the vector meson (see Fig. 6), yielding M²_σ ≈ s*_ca+ρ, where s*_ca+ρ is to be obtained from Fig. 5. This clearly shows that the mass of the σ becomes larger for larger N_c, and that for N_c ≥ 6 the σ is not needed in the energy region below 2 GeV. From this we concluded [7] that the σ meson is unlikely to be a two-quark state and is likely a four-quark state. This is similar to the conclusion obtained in Ref. [19].
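A minimal numerical sketch of the current-algebra part of this argument is given below. It assumes the standard leading-order result for the I = 0 S-wave projection, R^0_0(s) ≈ (2s − m_π²)/(32π F_π²), together with the leading large-N_c scaling F_π ∝ √N_c; the precise conventions of Ref. [7] may differ, so the numbers are only indicative.

```python
import math

M_PI = 0.1396    # GeV, pion mass (roughly Nc-independent at leading order)
F_PI3 = 0.09242  # GeV, pion decay constant at Nc = 3

def sqrt_s_star_ca(nc):
    """Energy (GeV) at which the current-algebra I=0 S-wave amplitude
    R00(s) = (2*s - m_pi^2) / (32*pi*F_pi^2) reaches the unitarity bound 1/2,
    with F_pi^2 scaling linearly in Nc."""
    f_pi_sq = F_PI3**2 * nc / 3.0
    s_star = (16.0 * math.pi * f_pi_sq + M_PI**2) / 2.0
    return math.sqrt(s_star)

for nc in range(3, 10):
    print(f"Nc = {nc}: current-algebra amplitude hits 1/2 near sqrt(s) ~ "
          f"{1000.0 * sqrt_s_star_ca(nc):.0f} MeV")

# For Nc = 3 this gives roughly 470-500 MeV, and s* grows linearly with Nc,
# which is the behaviour described in the text.
```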
V. Summary
In this write-up, focusing on the structure of the low-lying scalar nonet, I summarized the analyses of two recent works [6,7].
In section II, following Refs. [2,5], I first briefly reviewed what the hadronic processes involving the scalar nonet tell us about the quark structure of the low-lying scalar mesons. The analysis of the pattern of the hadronic processes implies that the scalar nonet dominantly takes the qqqq structure (or the diquark-antidiquark structure). Next, in section III, I summarized the work in Ref. [6], in which we analyzed the radiative decays involving the scalar nonet based on the qqqq picture. I also presented a new result [8] on the radiative decays based on the qq picture. Our result indicates that it is difficult to distinguish the two pictures from radiative decays alone. Finally, in section IV, I summarized the work in Ref. [7], in which we studied the π-π scattering in large-N_c QCD. Our analysis shows that the mass of the σ meson becomes larger for larger N_c, and that for N_c ≥ 6 the π-π scattering amplitude satisfies unitarity without the σ meson. From this we concluded that the σ meson is unlikely to be a qq state and is likely a qqqq state.
Coupled Oxygen-Enriched Combustion in Cement Industry CO 2 Capture System: Process Modeling and Exergy Analysis
The cement industry is regarded as one of the primary producers of world carbon emissions; hence, lowering its carbon emissions is vital for fostering the development of a low-carbon economy. Carbon capture, utilization, and storage (CCUS) technologies play significant roles in sectors dominated by fossil energy. This study aimed to address issues such as high exhaust gas volume, low CO2 concentration, high pollutant content, and difficulty in carbon capture during cement production by combining traditional cement production processes with cryogenic air separation technology and CO2 purification and compression technology. Aspen Plus® was used to create the production model in its entirety, and a sensitivity analysis was conducted on pertinent production parameters. The findings demonstrate that linking the oxygen-enriched combustion process with the cement manufacturing process may decrease the exhaust gas flow by 54.62%, raise the CO2 mass fraction to 94.83%, cut coal usage by 30%, and considerably enhance energy utilization efficiency. An exergy analysis showed that the exergy efficiency of the complete kiln system increased by 17.56% compared to typical manufacturing procedures. However, the cryogenic air separation system had a relatively low exergy efficiency among the subsidiary subsystems, while the clinker cooling system and flue gas circulation system suffered significant exergy losses. The rotary kiln system, which is the main source of the exergy losses, also had a low exergy efficiency in the traditional production process.
Introduction
In recent years, climate issues have garnered significant attention, and the substantial amount of CO2 emissions is a crucial factor contributing to global warming. Based on statistical data, the cement sector accounts for around 7% of the world's total carbon emissions [1]. To solve this problem, CCUS technology is regarded as a viable method targeted at minimizing CO2 emissions and achieving a rational usage of CO2 resources [2,3]. The pre-combustion capture technique [4], post-combustion capture technology [5], and oxygen-enriched combustion technology [6,7] are the three primary CO2 capture methods available today.
Pre-combustion capture technology is a process that transfers the chemical energy from carbon components to carbon-based fuels (like coal) and then separates the carbon from other energy-carrying compounds [8]. Systems for integrated coal gasification combined-cycle power production have made extensive use of this technique [9]. Pre-combustion capture technology, however, is not relevant to the cement industry, since the primary source of CO2 generation in this sector is the breakdown of carbonates. On the other hand, post-combustion capture technology involves installing CO2 separation devices on the flue gas channel to collect CO2 in the flue gas after use in fossil fuel combustion equipment [10]. However, obstacles exist in the deployment of post-combustion capture technology in cement production owing to huge exhaust gas volumes, low pressure, low carbon dioxide concentrations, and high pollutant content. Currently, the most established post-combustion capture systems rely on physical absorption and chemical absorption, both of which face high capture costs [11]. Calcium looping is a new CO2 capture technology that uses solid adsorbents: calcium oxide reacts with CO2 to produce calcium carbonate, and CO2 is then released again by calcining the carbonate, achieving the purpose of capturing CO2. Studies have shown that calcium looping technology has a high CO2 capture efficiency and better economic indicators [12]. However, calcium looping technology is currently at an early research phase, especially because of the difficulty of sustaining the activity of solid sorbents during long-term cycling. Therefore, there are presently no commercial-scale demonstration instances in the cement sector [13].
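As a rough illustration of the carbonation step described above, the sketch below computes the CO2 uptake per kilogram of CaO sorbent from the stoichiometry of CaO + CO2 → CaCO3. The 30% cycled-sorbent conversion used here is a placeholder illustrating sorbent deactivation, not a value taken from Ref. [12].

```python
# Molar masses in g/mol
M_CAO, M_CO2 = 56.08, 44.01

def co2_uptake_per_kg_cao(conversion=1.0):
    """kg of CO2 bound per kg of CaO in the carbonation step CaO + CO2 -> CaCO3.
    'conversion' is the fraction of CaO that actually carbonates; real sorbents
    lose activity over repeated capture/calcination cycles."""
    return conversion * M_CO2 / M_CAO

print(f"Stoichiometric limit      : {co2_uptake_per_kg_cao(1.0):.3f} kg CO2 / kg CaO")
print(f"Deactivated sorbent (30 %): {co2_uptake_per_kg_cao(0.3):.3f} kg CO2 / kg CaO")
```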
Oxygen-enriched combustion technology is a novel and highly effective energy-saving approach that considerably improves the combustion rate of fuels, promotes full combustion, boosts combustion temperature, decreases smoke production, and enhances energy usage efficiency. This technology has already been widely adopted in industries such as metallurgy and glass production [14][15][16]. The high cost of CO2 collection may be efficiently addressed in the cement industry by using oxygen-enriched combustion technology. Under oxygen-enriched combustion circumstances, it is feasible to raise the temperature at which coal powder burns while decreasing the emissions of various pollutants, such as sulfur compounds (SOx) and nitrogen oxides (NOx). Furthermore, this may lower the energy consumption of the product and the amount of coal used under the same operating circumstances [17].
With the progress of computer technology and numerical simulation methods, theoretical research on oxygen-enriched fuel combustion in the cement industry may now be undertaken through simulations. This has significant value in guiding actual production while reducing research costs. Research suggests that raising the oxygen content in the burner considerably boosts the combustion rate and temperature of coal powder, consequently boosting the heat transfer process in the kiln [18]. Wang et al. found that increasing oxygen content also significantly increases the highest temperature in the kiln, which is beneficial to the clinker calcination process [19]. Marin et al. believed that introducing high-purity oxygen into the main burner of the kiln improves the combustion characteristics, efficiency, and production capacity of the kiln. Additionally, studies have shown that the temperature of the clinker and the kiln's refractory elements are not significantly affected by the burning of pure oxygen fuel [20]. According to Granados et al., boosting flue gas recirculation during the burning of oxygen-enriched fuel increases the clinker output, kiln heat transfer efficiency, and combustion efficiency [21]. Ditaranto and Bakken optimized the oxygen-enriched combustion process of the rotary kiln through CFD simulation, achieving stable combustion, temperature control, heat transfer, and complete combustion [22]. Magli et al. explored the optimum design and cost analysis of carbon dioxide purification and compression devices during oxygen-enriched fuel combustion. They discovered the ideal separation temperature and pressure, stressing that air leakage greatly affects clinker prices and carbon dioxide recovery rates [23]. Relevant research suggests that, compared to other carbon dioxide capture methods, oxygen-enriched combustion technology has the greatest carbon capture rate and lowest capture costs when employed in the cement sector, concurrently decreasing cement production costs [24]. Presently, research on the oxygen-enriched combustion process in cement mostly relies on numerical simulation calculations, lacking the modeling of an entire system. By integrating the oxygen-enriched combustion process with cement production, this study reduces the amount of exhaust gases released and increases the proportion of carbon dioxide in the fumes. This curbs carbon emissions and improves energy utilization efficiency. The energy efficiency of the entire kiln system was analyzed and compared to those of traditional production processes to demonstrate the superiority of the proposed solution. Additionally, the exergy efficiencies of the entire system and its subsystems were explored, as well as potential improvements within these sub-components for overall energy utilization.
Cement Production System with Coupled Oxygen-Enriched Combustion Process
The production process of cement typically involves three steps: grinding, homogenization, and burning. Firstly, raw materials are proportioned, dried, homogenized, and finely ground to create the appropriate particle size. The compositions and flow rates of raw materials are shown in Table 1. Subsequently, the raw materials enter the suspension preheater, combining with the gas created by the rotary kiln, preheating the materials before entering the pre-decomposition furnace. Heat exchange between the solid and gas phases occurs in a countercurrent way. In a four-stage suspension preheater, the outlet gas temperature is usually between 300 °C and 380 °C. The preheated raw materials then enter the pre-decomposition furnace, where most of the CaCO3 and MgCO3 are decomposed. Once inside the rotary kiln, the broken-down raw materials exchange heat with hot flue gas in a countercurrent manner. The rotary kiln is set at a certain inclination angle, and during the rotation process, the materials gradually flow to the kiln head under the influence of gravity. Under high temperatures, the components in the raw materials undergo chemical reactions and form clinker. High-temperature flue gas from the kiln head enters the rotary kiln while the coal burns within the burner. Usually, the temperature and main air flow rate are adjusted to regulate the rotary kiln's temperature. The high-temperature clinker is cooled by cooling air. The fourth-stage suspension preheater's high-temperature flue gas is treated to remove dust, cool down, and dehydrate. After going through the gas distributor, some of the flue gas that was combined with high-purity oxygen is utilized to cool the clinker, and the remaining portion is used as coal injection air that enters the burner. As a complicated physical and chemical process, cement manufacturing involves the breakdown of source materials, material movement, sintering processes, and gas-solid heat transfer. Therefore, we make the following assumptions before establishing the model:
• The whole process is in a stable condition, and the composition of the feedstock does not vary.
• All chemical reactions within the system are in thermodynamic equilibrium.
• The pressure decrease and partial heat loss of the process are ignored.
• All materials in the reactor are in the same state and leave the reactor at the same temperature.
• In total, 70% of the CaCO3 and MgCO3 are decomposed in the decomposition furnace.
• All chemical reactions take place only in the reactor.
The physical property method used in this module is the ideal method. Coal composition is too complex to be defined using conventional methods, so the HCOALGEN and DCOALIGT models were used, and the Aspen Plus process for the entire production process was established as shown in Figure 1. The process is separated into five subsystems: a four-stage suspension preheating system, a rotary kiln system, a pre-decomposition furnace system, a coal powder combustion system, and a flue gas circulation system. The four-stage suspension preheating system, based on Aspen Plus, is mainly used for gas-solid heat transfer and separation. The heater module and the mixer module simulate gas-solid heat transfer, while the SSplit module simulates the gas-solid separation process. The rotary kiln system comprises the RGibbs module, heater module, SSplit module, and RStoic module, used to model raw material breakdown and clinker sintering processes [25]; the reaction processes mainly considered here are the decomposition of the carbonates and the formation of the clinker minerals. To replicate the combustion process of pulverized coal, the RGibbs module, RYield module, and SSplit module make up the pulverized coal combustion system. The RYield module decomposes the non-conventional component pulverized coal into the conventional components C, H, O, N, S, and ash, and the yield of each component is controlled through formulas written in Fortran. The unconventional component pulverized coal was defined through proximate (industrial), ultimate (elemental), and sulfur analyses; a composition comparison is shown in Table 2. The combustion products and their composition are determined through the Gibbs free energy minimization principle; the solids and flue gases produced after combustion are separated through the separation module, and the ash produced is decomposed into conventional components, serving as raw material for cement production, through the RYield module. Table 3 shows the specific configuration parameters of the modules. The Gibbs function is represented in Equations (2) and (3) [26,27], where S is the system's single phase, K is the total number of phases in the system, m is the number of moles, and G is the system's Gibbs free energy.
Among the module settings listed in Table 3, Cooler1 represents the exhaust gas cooling process and Cooler3 defines the heat loss of the grate cooler.
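As a small illustration of what the Fortran calculator attached to the RYield block does, the sketch below converts a coal ultimate analysis into the component mass yields that the block would receive. The composition used here is a placeholder, not the Table 2 coal.

```python
def ryield_mass_yields(ultimate_dry, moisture=0.0):
    """Turn a dry-basis ultimate analysis (mass fractions of C, H, O, N, S, ash)
    into mass yields of conventional components for an RYield-type block.
    The analysis values used below are illustrative placeholders."""
    dry_fraction = 1.0 - moisture
    yields = {element: fraction * dry_fraction
              for element, fraction in ultimate_dry.items()}
    yields["H2O"] = moisture
    assert abs(sum(yields.values()) - 1.0) < 1e-9, "yields must sum to one"
    return yields

coal_analysis = {"C": 0.62, "H": 0.04, "O": 0.08, "N": 0.01, "S": 0.005, "ASH": 0.245}
for component, y in ryield_mass_yields(coal_analysis, moisture=0.02).items():
    print(f"{component:>4}: {y:.4f}")
```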
Air Separation System
This research leveraged a Cryogenic Air Separation Unit (CASU) to develop an air separation system. Compared with pressure swing adsorption and membrane separation methods, a CASU is more mature in industry and can produce higher quantities and mass fractions of oxygen [28,29]. The air separation system was established using Aspen Plus and includes an air compressor, a distillation tower, and heat exchangers. The entire process was simulated using the Peng-Robinson property method [30]. The system adopts a full low-pressure external compression process. Prior to entering the primary heat exchanger, the air is first compressed and then condensed many times. A portion of the air is cooled to −152 °C and enters the upper tower, while another portion is cooled to −174 °C and enters the lower tower. In the lower tower, the separated oxygen-rich liquefied air and nitrogen undergo heat exchange and enter the upper tower. At the bottom of the tower, oxygen products with a purity of 99.4% are separated, while high-purity nitrogen products are also separated. The mixture of oxygen and argon separated from the upper part of the tower enters the crude argon tower. The top of the tower releases crude argon gas, while the bottom of the tower recovers, further separates, and purifies high-purity oxygen. Finally, the crude argon gas enters the refined argon tower to separate high-purity argon [31,32].
CO 2 Purification Unit
The high carbon dioxide purity in the flue gas is a result of the process's use of oxygen-enriched combustion technology, which removes the need for further carbon dioxide absorption procedures; the flue gas can be purified and compressed directly [33]. Non-condensable gases (O2, NO, NO2, etc.) and water vapor are the primary contaminants found in the flue gas [34]. The flue gas is compressed to around 3 MPa through several stages in order to separate part of the water, and the remaining water vapor is eliminated using molecular sieves. Main heat exchanger 1's condensers and flash tanks are positioned in between the compressors. Then, MEX3 cools the compressed flue gas to −5 °C before it reaches the first-stage flash tank for flashing. The gas phase enters main heat exchanger 2 (MEX4) and further cools to −23 °C before entering the second-stage flash tank. The cooling capacity required for the entire system can be generated through the pressure-reduction throttling of the liquid product [35,36]. Since non-condensable gases (SOx and NOx) dissolve in water during the high-pressure flue gas condensation process, the entire system does not require additional desulfurization and denitrification devices [9]. After purification and compression, the CO2 mass fraction in the flue gas reaches 96%, satisfying real storage and transit demands [37]. The compressed flue gas is pressurized to 7 MPa and then cooled to 20 °C to become liquid, which can be transported through pipelines. The system as a whole uses 4.54 MW of energy, with an adiabatic compression efficiency of 0.85. The specific energy consumption for CO2 compression is 0.568 MJ/kg. The CO2 flow rate at the inlet is 29,061.7 kg/h, and the CO2 flow rate obtained after flue gas compression and purification is 28,185.6 kg/h, with a CO2 recovery rate of 96.98%. As shown in Figure 2, based on the three systems above, a low-carbon cement production process coupled with the CASU and CO2 purification unit was constructed.
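The quoted compression figures can be cross-checked directly from the flow rates and power given above. The sketch below recomputes the recovery rate and the specific compression energy; whether the 0.568 MJ/kg figure is defined on the feed or the product basis is an assumption here, so both are shown.

```python
compressor_power_mw = 4.54      # total compression duty of the CO2 purification unit
co2_in_kg_h = 29_061.7          # CO2 entering the unit
co2_out_kg_h = 28_185.6         # liquid CO2 product after purification and compression

recovery = co2_out_kg_h / co2_in_kg_h
energy_per_kg_feed = compressor_power_mw * 3600.0 / co2_in_kg_h      # MJ per kg CO2 fed
energy_per_kg_product = compressor_power_mw * 3600.0 / co2_out_kg_h  # MJ per kg CO2 captured

print(f"CO2 recovery rate              : {100.0 * recovery:.2f} %")        # ~96.98 %
print(f"Specific energy, feed basis    : {energy_per_kg_feed:.3f} MJ/kg")     # ~0.562
print(f"Specific energy, product basis : {energy_per_kg_product:.3f} MJ/kg")  # ~0.580
```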
Model Verification
Every step of the manufacturing process was modeled using Aspen Plus, and the correctness and efficacy of the developed model were then verified by comparing the simulation results with real production data. The first validation concerned the chemical composition of the clinker product. A chemical analysis of the clinker from the real manufacturing process provided the reference data in Table 4. The primary cause of the disparity between the simulation results and the real production results is the omission of various minor chemical components, such as Na2SO4, CaSO4, K2SO4, TiO2, etc. From Table 4, it can be seen that there is a significant deviation in tricalcium aluminate (C3A), which is due to the lack of physical property data for tetracalcium aluminoferrite (C4AF) in the Aspen database; its proportion in the clinker composition is relatively small. Therefore, it is assumed that all tricalcium aluminate (C3A) is created throughout the reaction process. Additionally, the model assumes that dicalcium silicate (C2S) combines fully with free lime (CaO) to form tricalcium silicate (C3S) [25]. Table 5 shows the contrast between actual plant operation data and Aspen Plus simulation results. The majority of parameters closely resemble the actual operation data, yet, owing to the model's idealized nature, small disparities emerge. It is evident that the oxygen content of the exhaust gas deviates significantly. The root cause is that the production process is not a perfectly closed system, resulting in undetected air leaks, which have not yet been incorporated into the current modeling framework. Furthermore, the pulverized coal flow rate deviates significantly from the actual production process because of the intricate nature of the pulverized coal combustion process: the unit operation model cannot adequately represent the many physical and chemical changes that take place, nor the suboptimal combustion efficiency and other losses that occur in the real manufacturing process [38,39].
Study of System Operating Parameters
A sensitivity analysis of the main operating parameters can provide important information about the system's production process; studying these parameters can therefore improve the performance of the entire production process. This section primarily examines the effects of varying coal and oxygen flow rates on the performance and pollutant emissions of the kiln's oxygen-fuel coupled combustion process.
Effect of O 2 /CO 2 Atmosphere on the System
Although higher temperatures can be achieved with lower oxygen and coal flows in an oxygen-rich combustion state, the amount of flue gas generated will also decrease. Convective and radiative heat transfer are the mechanisms by which heat is transferred from the flue gas to the raw material in the rotary kiln [40,41]. Insufficient flue gas carries less enthalpy, which has a detrimental effect on clinker production. Coal combustion must therefore be performed in an O2/CO2 environment in order to guarantee that the temperature distribution in the kiln stays consistent with the heat transfer characteristics and standard operating parameters [42]. Holding the oxygen combustion ratio constant, the flue gas temperature was simulated under different O2/CO2 atmospheres, with the results shown in Figure 3. Because CO2 has a larger heat capacity than N2, the flue gas temperature in an O2/N2 atmosphere is somewhat higher than that in an O2/CO2 atmosphere [43].
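The sign of this heat-capacity effect can be illustrated with a back-of-the-envelope adiabatic estimate. The sketch below compares the temperature rise of a CO2-dominated and an N2-dominated flue gas for the same heat release and flue gas mass; the heat capacities, heat release, and flue gas mass are rough illustrative values, not quantities taken from the simulation.

```python
# Rough mean heat capacities over the flame temperature range, kJ/(kg*K) (illustrative).
CP = {"CO2": 1.25, "N2": 1.16}

def adiabatic_temperature_rise(heat_release_kj, flue_gas_kg, diluent):
    """Temperature rise of a flue gas dominated by the given diluent."""
    return heat_release_kj / (flue_gas_kg * CP[diluent])

heat_release = 29_400.0   # kJ per kg coal (LHV-scale figure, illustrative)
flue_gas_mass = 12.0      # kg flue gas per kg coal, kept equal for both atmospheres

for diluent in ("N2", "CO2"):
    dT = adiabatic_temperature_rise(heat_release, flue_gas_mass, diluent)
    print(f"O2/{diluent} atmosphere: temperature rise ~ {dT:.0f} K")
# The N2-diluted case comes out hotter, matching the trend described above.
```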
Effect of Different Oxygen and Coal Flow Rates on Combustion Systems
High-concentration oxygen with a mass fraction of 99.4% was chosen as the combustion-supporting gas to study the effects of different oxygen and coal flow conditions on the combustion system. The findings are displayed in Figure 4. From Figure 4a, it can be noted that when m_O2 = 10,000 kg/h and m_coal = 4000 kg/h, the temperature in the kiln reaches 1910 °C, which is acceptable for production. Compared to the coal flow rate utilized in the reference case (m_coal = 5000 kg/h), roughly 20% of coal usage may be avoided. The clinker output on this manufacturing line is 35,000 kg/h, with a specific energy consumption of 4200 kJ/kg clinker. When the coal input rate is 4000 kg/h, the specific energy consumption in the rotary kiln is 3360 kJ/kg clinker, a decrease of around 840 kJ/kg clinker. It should be noted that this can be achieved only under ideal conditions, without considering heat loss, system air leakage, or incomplete combustion in the production process, but overall the oxygen-rich state can still achieve significant coal-saving effects. Figure 4b-d show the variation trends of the main gas product contents with oxygen and coal flow rates. When m_O2 = 7052.632 kg/h and m_coal = 3500 kg/h, the maximum CO2 content is 89.94% (wt.%), indicating the optimal stoichiometric ratio between coal powder and oxygen, at which combustion is most complete. The CO decreases with an increase in the oxygen-to-coal ratio. When m_O2 = 10,000 kg/h and m_coal = 3500 kg/h, the mass fraction is 0.75%, indicating that the excess O2 increases the generation of CO. The trend of NO is opposite to that of CO: with an increase in the oxygen-to-coal ratio, its content dramatically increases. When the oxygen-to-coal ratio is about 2.5, its concentration is 4.63 × 10⁻⁴ (wt.%). As indicated in Figure 4a, the gas temperature under this working state is quite high, which favors the generation of thermal NOx [44].
The fuel specific heat energy consumption was calculated as q = LHV · m_coal / m_f, where q (kJ/kg clinker) is the specific heat energy consumption, LHV (kJ/kg) is the lower heating value of the fuel, m_coal (kg/h) is the fuel flow rate, and m_f (kg/h) is the clinker flow rate.
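A quick check of this definition against the figures quoted above is given below; the coal LHV is back-calculated from the 4200 kJ/kg baseline rather than taken from Table 2, so it is an inferred value.

```python
def specific_heat_consumption(lhv_kj_kg, m_coal_kg_h, m_clinker_kg_h):
    """q = LHV * m_coal / m_f, in kJ per kg of clinker."""
    return lhv_kj_kg * m_coal_kg_h / m_clinker_kg_h

M_CLINKER = 35_000.0                # kg/h clinker output
LHV = 4200.0 * M_CLINKER / 5000.0   # kJ/kg, inferred from the baseline case

print(f"Implied coal LHV               : {LHV:.0f} kJ/kg")
print(f"q at 5000 kg/h coal (baseline) : "
      f"{specific_heat_consumption(LHV, 5000.0, M_CLINKER):.0f} kJ/kg clinker")
print(f"q at 4000 kg/h coal (O2-rich)  : "
      f"{specific_heat_consumption(LHV, 4000.0, M_CLINKER):.0f} kJ/kg clinker")
```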
Effect of Pulverized Coal Flow on Tail Gas Emissions
Under oxygen-rich combustion circumstances, the impact of the coal powder flow rate on pollutant emissions in the exhaust gas was evaluated. The simulation results are shown in Figure 5. With a rise in the coal powder flow rate, the flow of CO2 increases while the flow of O2 drops. Therefore, at m_O2 = 8500 kg/h and from m_coal = 3000 kg/h to m_coal = 3000 kg/h, the coal powder is in a good combustion state. The rise in CO flow rate may be attributed to the increase in the coal flow rate, which elevates the temperature of the flue gas and causes the dissociation of a tiny quantity of CO2. Both NOx and SO3, the main pollutants, show decreasing trends, but the flow rate of SO3 increases significantly due to the high temperature promoting the generation of SO2 during combustion [45].
Effect of Oxygen Flow Rate on Tail Gas Emissions
The sensitivity analysis results for the oxygen flow rate are shown in Figure 6. When m_O2 = 7000 kg/h, the flow rates of O2, SO3, NO, and NO2 in the exhaust gas are close to 0 kg/h, and then they start to increase linearly. This shows that, before this point, the incomplete burning of coal powder and the lower combustion zone temperature prevented the creation of thermal NOx, so the predominant type of NOx formed is fuel NOx. When m_O2 = 6000 kg/h, the flow rate of CO reaches its maximum at 1852 kg/h, and then declines to close to 0 kg/h as the oxygen flow rate rises. Generally, the generation of SO2 increases with the combustion zone temperature. Before m_O2 = 7000 kg/h, with the increase in the oxygen flow rate, the ideal oxygen-to-coal ratio is progressively approached and the combustion zone temperature increases. However, the continuing rise in the quantity of oxygen means that part of the heat is used to heat the extra oxygen, thereby decreasing the temperature of the combustion zone and leading to a subsequent drop in the flow of SO2. CO2 increases linearly up to 29,061 kg/h at m_O2 = 7000 kg/h, and then remains constant. This is because at this point the coal powder has been totally consumed, and under the circumstance of abundant oxygen no additional CO2 is formed. A comparison of Figures 5 and 6 shows that the exhaust gas composition is more sensitive to variations in the oxygen flow rate than to variations in the coal powder flow rate.
Results under Different Working Conditions
Figure 7 compares various production parameters between conventional conditions and oxygen-enriched combustion conditions. The graph shows that the coal powder feed decreased from 5000 kg/h in the real operating state to 3500 kg/h under the oxygen-rich condition, saving almost 30% of the coal; if the actual combustion efficiency and heat losses are taken into account, the saving should be somewhat lower. In addition, the specific energy consumption decreases from 4200 kJ/kg clinker to 1174 kJ/kg clinker. The primary air flow rate is reduced from 11,438 kg/h to 8750 kg/h, owing to the lower gas supply volume under pure oxygen circumstances. The secondary air flow rate is reasonably close, since the circulating flue gas is important for recovering the heat of the clinker and increasing the heat transfer properties of the kiln. The exhaust gas flow rate decreases from 67,536 kg/h to 30,647 kg/h, a reduction of 54.6%. In the oxygen-enriched combustion process, the source of the secondary air shifts from air to circulating flue gas, resulting in a considerable drop in the exhaust gas flow rate. At the same time, the decreased exhaust gas flow rate helps to minimize the energy consumption of the later CO2 collection operation. Table 6 compares the various components of the exhaust gas under normal operating conditions and oxygen-enriched combustion conditions. From the table, it can be observed that the mass fraction of CO2 in the exhaust gas is 94.828% under oxygen-enriched combustion circumstances, whereas the mass fraction of CO2 at normal operating settings is only 34.642%. This creates suitable circumstances for later CO2 collection and may considerably lower the cost of collecting CO2 in cement kilns. Due to the employment of high-purity oxygen as a combustion aid, the content of N2 drops, resulting in reductions in the amounts of NO and NO2. The increases in SO2 and SO3 may be due to the favorable conditions for the formation of SO2 at the higher combustion zone temperature under oxygen-enriched combustion conditions. In addition, the mass fraction of N2 in the exhaust gas under oxygen-enriched combustion circumstances is reduced dramatically to 0.318%, whereas the mass fraction of N2 under normal operating settings is 63.534%. During the production process, a considerable amount of heat is used to heat this N2, which is then lost in a subsequent heat transfer, resulting in a decrease in the overall system thermal efficiency.
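The headline reductions quoted from Figure 7 follow directly from these flow rates; a short check is given below (the numbers are the ones quoted above, rounded as in the text).

```python
baseline = {"coal feed (kg/h)": 5000.0,
            "exhaust gas flow (kg/h)": 67_536.0,
            "specific energy consumption (kJ/kg clinker)": 4200.0}
oxy_fuel = {"coal feed (kg/h)": 3500.0,
            "exhaust gas flow (kg/h)": 30_647.0,
            "specific energy consumption (kJ/kg clinker)": 1174.0}

for label, base in baseline.items():
    new = oxy_fuel[label]
    reduction = 100.0 * (base - new) / base
    print(f"{label:45s}: {reduction:5.1f} % lower under oxygen-enriched conditions")
# Gives roughly 30 %, 54.6 % and 72 %, matching the values quoted in the text.
```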
Energy analysis and exergy analysis are well-established thermodynamic analytical methodologies. Energy analysis is based on the first law of thermodynamics and quantitatively reveals the energy transmission in the system. In contrast, exergy analysis employs the second law of thermodynamics to assess energy from a quality viewpoint, and it can also pinpoint the origins and locations of thermodynamic losses [46,47]. Exergy analysis is crucial for assessing industrial process systems in the context of sustainable energy development [48,49]. Therefore, in this part, the performance of the oxygen-enriched combustion cement manufacturing system is analyzed utilizing the benefits of exergy analysis. The exergy balance of each unit of the whole system was computed from
E_d,j = Σ E_in − Σ E_out + E_Q + W,
where E_in and E_out, respectively, indicate the exergy flow rates entering and exiting each unit of the process (kW), E_d,j denotes the exergy destruction rate of subsystem j (kW), and W stands for the shaft power of compressors and pumps (kW). The environmental reference for exergy was specified as T_0 = 301.15 K and P_0 = 101.325 kPa (the average ambient temperature and pressure during system operation). E_Q signifies the heat exergy entering the system (kW), calculated as
E_Q = Σ Q_j (1 − T_0/T_j).
The physical exergy flow rate E_PH is
E_PH = F [(h_i − h_0) − T_0 (s_i − s_0)],
where F represents the molar flow rate of the stream (mol/h), h_i stands for the specific enthalpy of the stream (kJ/mol), s_i denotes the specific entropy of the stream (kJ/(mol·K)), h_0 is the specific enthalpy at the reference state (kJ/mol), and s_0 is the specific entropy at the reference state (kJ/(mol·K)). The exergy value of a process unit or stream, under the condition that there are no substantial changes in velocity and elevation between the inlet and outlet, was determined by adding the physical and chemical exergies together, E = E_PH + E_CH; the kinetic and potential exergies of the system in this investigation may be disregarded. The cement production process involves the flow of solid and gas mixtures at different temperatures and pressures, and various chemical reactions occur within the kiln system. The chemical exergy of ideal gas mixtures was calculated from the standard chemical exergies of the components and their mole fractions [50,51]. For multi-component mixtures, the chemical exergy [52] is
E_CH = L_0 (Σ_i x_0,i e_CH,ol,i + R T_0 Σ_i x_0,i ln x_0,i) + V_0 (Σ_i y_0,i e_CH,ov,i + R T_0 Σ_i y_0,i ln y_0,i),
where L_0 and V_0 represent the liquid-phase and vapor-phase flow rates of the n components, e_CH,ol and e_CH,ov denote the standard chemical exergies of the liquid-phase and vapor-phase components, and x_0,i and y_0,i, respectively, indicate the liquid and gas molar fractions of the stream. The total exergy efficiency of the entire system is expressed as the ratio of the total exergy output to the total exergy input; the exergy efficiency of a subsystem is defined in the same way for that subsystem; and the exergy destruction ratio is expressed as the ratio of the exergy destruction rate of a subsystem to the total exergy destruction rate of the system. The exergy performance of the process units in the design may be assessed by examining the exergy information, including both the physical and chemical exergies of each process stream, which is shown in Table 7. The method assumes that exergy variations resulting from heat loss in each process unit and from mixing operations are not taken into account. The chemical exergy value of coal was determined using the correlation of Ref. [53].
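A minimal sketch of how these definitions combine for a single unit is shown below. The stream enthalpies and entropies are invented placeholder values (they are not the Table 7 data); the point is only the order of operations: physical exergy from (h, s), then destruction and efficiency from the unit balance.

```python
T0 = 301.15  # K, reference temperature used for the exergy analysis

def physical_exergy(flow_mol_h, h, h0, s, s0):
    """Physical exergy flow rate E_PH = F * [(h - h0) - T0*(s - s0)], in kJ/h."""
    return flow_mol_h * ((h - h0) - T0 * (s - s0))

def exergy_destruction(e_in, e_out, e_heat=0.0, shaft_work=0.0):
    """Steady-state unit balance: destruction = (inputs + heat exergy + work) - outputs."""
    return e_in + e_heat + shaft_work - e_out

# Placeholder inlet/outlet states of a single heat-exchange unit (not simulated data):
e_in = physical_exergy(flow_mol_h=1.0e4, h=52.0, h0=0.0, s=0.080, s0=0.0)
e_out = physical_exergy(flow_mol_h=1.0e4, h=35.0, h0=0.0, s=0.035, s0=0.0)

e_d = exergy_destruction(e_in, e_out)
efficiency = e_out / e_in
print(f"E_in  = {e_in:,.0f} kJ/h")
print(f"E_out = {e_out:,.0f} kJ/h")
print(f"Exergy destruction = {e_d:,.0f} kJ/h, exergy efficiency = {100 * efficiency:.1f} %")
```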
Exergy Analysis and Discussion
The whole system's mass and energy balances were computed using the simulator. In addition, the exergy efficiencies and exergy destruction rates of both the overall system and each individual subsystem were computed using the reference environmental model established by Morris and Szargut [54]. This analysis helped identify specific locations within the system where improvements might be made.
Figures 8 and 9 show the exergy efficiencies of the overall system and each subsystem for the cement manufacturing process. The comparison covers the process coupled with oxygen-enriched combustion technology and CO2 capture, purification, and compression technology, as well as the standard cement production process. Based on these data, an exergy evaluation was undertaken for the process. For the complete cement manufacturing system, the exergy efficiency of the traditional production process was 37.75%, whereas the system's exergy efficiency after coupling with oxygen-enriched combustion improved to 44.38%, an increase of roughly 17.56%. Figure 7 and Table 6 indicate that, by using high-purity oxygen instead of air, the exhaust gas flow and N2 content decreased significantly, thereby reducing the drop in exergy efficiency caused by heat loss. Moreover, pure oxygen enhances coal combustion, improving thermal efficiency and resulting in a higher exergy efficiency for COMBUST under oxygen-enriched combustion conditions. Among the other subsystems, the exergy efficiency of the cryogenic air separation system is poor owing to repeated compression, decompression, and heat exchange operations, which considerably lower the system's exergy efficiency. The PHT (preheater tower) subsystem employs a more idealized model and involves no chemical reactions, thus exhibiting a high exergy efficiency with almost no exergy loss. The BURNING reactor demonstrates a lower exergy efficiency, as it simulates the sintering stage of the kiln, which involves complex physical and chemical changes and results in significant exergy losses. Except for the WALL-LOSS module, the exergy efficiencies of all other modules show improvement.
Figures 10 and 11 show the exergy destruction ratios of the various subsystems in the cement manufacturing process. In the cement production process employing coupled oxygen-enriched combustion technology, the clinker cooling system and the flue gas circulation system exhibit higher proportions of exergy destruction, owing to significant heat losses in these two processes. Therefore, developing better techniques to recover heat from the clinker and lowering heat losses in the flue gas circulation process would effectively decrease the exergy destruction rate of the system. In the case of the typical manufacturing process, the rotary kiln system displays the greatest exergy destruction ratio, owing to irreversible exergy losses generated by chemical reactions and gas-solid heat transfer processes.
Conclusions
In this study, the traditional cement production process was combined with low-temperature air separation technology and carbon dioxide purification and compression technology to deal with the high exhaust gas volume, low carbon dioxide concentration, high pollutant content, and high carbon capture difficulty of the cement production process. A complete production model was established using Aspen Plus V11 software, and a sensitivity analysis of the related production parameters was carried out. An exergy analysis of the whole kiln system revealed the major energy loss sources in the traditional production process and suggested improvement measures, which provide a path for the technical transformation of the cement industry. The results indicate that, following the adoption of coupled oxygen-enriched combustion technology, with the same raw material treatment volume, the overall coal consumption was reduced by approximately 30% and the specific energy consumption (q) decreased by about 72%. Simultaneously, there was a decrease of around 54.6% in the exhaust gas flow rate, with the CO2 mass fraction in the exhaust gas rising to 94.83%. A further exergy analysis found a 17.56% improvement in the total exergy efficiency of the kiln system compared to the traditional method, and the RTK (rotary kiln) and PHT subsystems demonstrated better exergy efficiencies in the oxygen-enriched combustion process. However, the cryogenic air separation subsystem in the ancillary system showed a lower exergy efficiency. Additionally, an analysis of the exergy destruction ratio indicated significant exergy losses in the clinker cooling system and the flue gas circulation system in the oxygen-enriched combustion process. Thus, minimizing heat losses in these two systems could effectively reduce the energy consumption of the overall system. In contrast, for the traditional process, the rotary kiln system was the major source of exergy losses, highlighting its potential for improvement. Future work will emphasize studying the process of integrating novel oxygen supply systems and CO2 capture systems, along with an economic evaluation to verify the industrial potential of this process.
Figure 1. Simulation flow of cement oxygen-enriched combustion process based on Aspen Plus.
Figure 2. The coupling of the CASU and CO2 purification unit within the low-carbon cement production process.
Figure 4. Effects of different oxygen and combustion flows on the combustion system.
Figure 5. Effect of pulverized coal flow on flue gas emission.
Figure 6. Effects of oxygen flow on flue gas emission.
Figure 7. Parameter comparison under different working conditions.
Figure 10. Exergy destruction ratios of individual subsystems under oxygen-rich combustion settings.
Figure 11. Exergy destruction ratios of various subsystems under normal operating circumstances.
Table 2. Coal compositions and heating values.
Table 3. Aspen Plus cement production system module descriptions.
Table 4. Simulation results of Aspen Plus compared with actual clinker composition data.
Table 5. Aspen Plus simulation results compared with operating parameters.
Table 6. Comparison of exhaust gas components in different working conditions.
Table 7. Mass flow rates and exergy flow rates of process streams.
Role of genetic heterogeneity in determining the epidemiological severity of H1N1 influenza
Genetic differences contribute to variations in the immune response mounted by different individuals to a pathogen. Such differential response can influence the spread of infectious disease, indicating why such diseases impact some populations more than others. Here, we study the impact of population-level genetic heterogeneity on the epidemic spread of different strains of H1N1 influenza. For a population with known HLA class-I allele frequency and for a given H1N1 viral strain, we classify individuals into sub-populations according to their level of susceptibility to infection. Our core hypothesis is that the susceptibility of a given individual to a disease such as H1N1 influenza is inversely proportional to the number of high affinity viral epitopes the individual can present. This number can be extracted from the HLA genetic profile of the individual. We use ethnicity-specific HLA class-I allele frequency data, together with genome sequences of various H1N1 viral strains, to obtain susceptibility sub-populations for 61 ethnicities and 81 viral strains isolated in 2009, as well as 85 strains isolated in other years. We incorporate these data into a multi-compartment SIR model to analyse the epidemic dynamics for these (ethnicity, viral strain) epidemic pairs. Our results show that HLA allele profiles which lead to a large spread in individual susceptibility values can act as a protective barrier against the spread of influenza. We predict that populations skewed such that a small number of highly susceptible individuals coexist with a large number of less susceptible ones, should exhibit smaller outbreaks than populations with the same average susceptibility but distributed more uniformly across individuals. Our model tracks some well-known qualitative trends of influenza spread worldwide, suggesting that HLA genetic diversity plays a crucial role in determining the spreading potential of different influenza viral strains across populations.
Introduction
A central aim of epidemiological studies is to identify factors that place some populations at greater risk of contracting an infectious disease than others [1]. Such factors can be associated with each of the three legs of the "epidemiologic triad" for infectious diseases, the combination of an external causative agent, a susceptible host, and an environment that links these two together [2]. Each of these could vary across populations. However, even if the causative agent was unique and environmental factors assumed to be largely common, variations intrinsic to the host can lead to large inhomogeneities in epidemic progression across populations [1,2]. Such variations are ignored in standard formulations of compartment models for infectious diseases, which project all properties of the host onto a small set of states describing the host status. These states are typically taken to be susceptible, infected or recovered, with respect to the progress of the disease [3].
The influenza pandemic of 2009 originated in a new influenza virus, pandemic H1N1 2009 influenza A (pH1N1), to which a large fraction of the population lacked immunity [4]. The virus responsible is thought to have arisen from a mixture of a North American swine virus that had jumped between birds, humans and pigs, with a second Eurasian swine virus that circulated for more than 10 years in pigs in Mexico before crossing over into humans [4]. This pandemic caused extensive outbreaks of disease in the summer months of 2009, across the USA, Brazil, India and Mexico, leading on to high levels of disease in the winter months. The pandemic virus had almost complete dominance over other seasonal influenza viruses and was unusual in its clinical presentation, with the most severe cases occurring in younger age groups [4].
The severity of the H1N1 2009 pandemic can be assessed in terms of the basic reproduction number (R 0 ), a fundamental dimensionless epidemiological parameter representing the average number of secondary infections caused by a typical infectious individual in a fully susceptible population. An R 0 > 1 leads to an expected exponential increase in the number of infected individuals at early times, an increase which saturates before decreasing as infected individuals recover, whereas for R 0 < 1, the number of infected individuals decreases monotonically. We compile estimates of R 0 values for the pH1N1 epidemic across several countries from the literature, and list them in Table 1. Substantial variation in R 0 values, ranging from about 1.2 at the lower end to values of 3 and above at the upper end, is evident from this table. This variation across countries illustrates the need to account for host-specific susceptibilities to disease. The immune response of the host, modulated by prior infections and vaccinations, is usually the central factor influencing R 0 , although location-specific contact rates and health-seeking behaviour contribute as well. In this work, we study how the spread of influenza in a population is affected by variation in naïve host immune response.
Epidemics are typically modelled through deterministic compartmental-type models, represented by coupled non-linear ordinary differential equations. The SIR model is particularly well suited for studying the spread of influenza, since H1N1 is a virus which spreads from person-to-person through contact, without requiring a vector for transmission. The lack of a long incubation period and a relatively rapid recovery makes it possible to ignore the effects of immigration and emigration, as well as of births and deaths due to natural causes [3]. Models such as the SIR model and related models typically assume that individuals in the population are all alike, which allows one to reduce the number of model parameters to be estimated from data, and leads to mathematical models that can be more feasibly studied from an analytical or computational perspective. However, increasing efforts have been devoted during recent years to assessing the impact of individual heterogeneities in disease spread [14]. These heterogeneities can be of very different nature, when considering for example populations structured in specific spatial configurations [15][16][17][18][19], such as households [20] or age-structured populations [15], or when there exist heterogeneous individual susceptibilities, infectivities or recovery periods due, for example, to genetic [15,21] or behavioural [22] reasons. Network or individual-based models provide a methodology for simulating each individual as a separate entity (an agent) with a specified susceptibility, an individual-specific ability to infect others as well as a specified time to recovery, while also being flexible enough to incorporate specific interaction patterns between agents. Such models, however, typically require estimating a large number of parameters. Individual-based models come with substantial overheads in terms of computational resources. In addition, their inherent stochasticity makes extensive averaging necessary [23,24].
A straightforward generalisation of the simplest version of the SIR model involves subdividing populations into smaller groups or sub-populations. Individuals in each sub-population can be considered to be homogeneous, but individuals across different sub-populations can be modelled as responding differentially to the disease, as in the models of [18][19][20][21][25][26][27][28][29]. Prior work has mainly focused on the theoretical analysis of these models, and relatively few attempts have been made to incorporate clinical or biological heterogeneities known to be relevant at the individual level, into population-level epidemic models. Incorporating such individual-level immunological information into population-level epidemic models accounting for susceptibility or infectivity heterogeneities has been recently identified as a major challenge for mathematical epidemiology [30]. Both innate and adaptive immune responses are initiated when an individual is exposed to the influenza virus. The innate response induces chemokine and cytokine production. Type I interferons are among the most important cytokines produced by the innate immune response and act to stimulate dendritic cells (DCs), enhancing their antigen production. The adaptive immune system can recognise the presence of an intracellular virus and mount a response only if a molecule called the human leukocyte antigen (HLA) binds to and 'presents' fragments of viral proteins (epitopes) to the extracellular environment. Professional antigen presenting cells such as DCs present viral antigens to CD4 + T-cells through HLA class-II and to CD8 + Tcells through HLA class-I molecules. The CD4 + T-helper cells promote a B-cell response and antibody secretion. HLA class-I molecules can be found on the surface of all cells, and interact with T-cell receptors (TCRs) present on CD8 + T-cells [31,32]. These cells are also called cytotoxic T lymphocytes, or CTLs.
The central role of HLA-mediated presentation of antigens in the magnitude and specificity of CTL response in infectious diseases in general [33], and in influenza A in particular [34,35], has been well studied. A recent study shows that the targeting efficiency of HLA, a function of the binding score of a given HLA allele and the conservation score of a given protein, correlates with the magnitude of the CTL response, and also with the mortality due to influenza A infection [36]. These studies also show that considering a single HLA allele is insufficient to determine the strength of the CTL response [34,37].
Each individual has 6 HLA class-I alleles, the combination of all 6 alleles being referred to as an HLA genotype. Cross-reactivity between HLA alleles can result in two individuals with completely different HLA genotypes presenting the same number of high affinity epitopes [38]. Also, some alleles correlate with stronger (HLA-A*02 [34]) or weaker (HLA-A*24 [36]) CTL response to the influenza A virus. This raises a number of questions. Does a high risk allele always correlate with a severe influenza epidemic, or can the presence of diverse HLA alleles offset this risk? Are there specific patterns of susceptibility resulting from diversity in HLA, which can confer greater protection to a population? We answer these questions by using the full HLA genotype of each individual, and with an assumption that a person who presents a larger number of high affinity viral epitopes will mount a stronger CTL immune response than one who presents a smaller number [33][34][35][36][37][39][40][41][42][43]. We use genetic diversity in HLA alleles to inform epidemiological parameters at the population level and study their influence on the epidemiological spread of H1N1 influenza.
We assume that all other factors affecting disease spread, such as contact patterns [44], health-seeking behaviour [45] and migration [46] are uniform among all individuals in a population, and across all populations. Such factors have been studied in the literature [44][45][46], largely using theoretical models or data collected for small cohorts. Immunological memory of an individual is also an important aspect of the immune response, and can be affected by factors such as the strain with which an individual was first infected [47,48], prior history of infections [48] and inherited factors [49]. For lack of data regarding these factors, the model described in this work does not incorporate age and immunological history explicitly. To offset this limitation, we focus first on H1N1 strains isolated during the 2009 pandemic, for which immunological memory and vaccination proved insufficient to curb the spread of disease [4,50]. We mine this data for characteristics which correlate with epidemic size, and test whether these correlations hold for strains isolated in years other than 2009.
In a previous paper [51], we developed a method to group together individuals who can be expected to have a similar CTL response, using the frequency of occurrence of HLA class-I alleles and the full proteome of the pathogen. We formulated an algorithm to generate all possible HLA genotypes given the frequency of occurrence of each allele in a particular population. Algorithms available through the IEDB resource [52] were used to predict the epitopes presented by each such HLA genotype. Clustering was then carried out on these HLA genotypes based on the number of epitopes presented from each viral protein. In this work, we use the algorithm presented in [51] to generate HLA genotypes and thereby predict high affinity epitopes presented by each such genotype. We thus identify sub-populations of individuals with comparable susceptibility to the virus. The relevant parameter in this case is the total number of such epitopes presented, irrespective of the viral protein from which these epitopes originate. We cluster individuals into groups based on this information, and use the clustering results to calculate the rate at which susceptible individuals become infected. This rate can be connected to the parameter β which appears in the conventional compartmental SIR model, which can be used to track the progress of the epidemic through the population. The prevalence of different HLA class-I alleles in different parts of the world is available through the Allele Frequency Net Database (AFND) [53]. Each population in the AFND is given an ethnicity tag. We predict epidemic sizes using our model for 61 such ethnicities, as well as for 81 strains of influenza A (H1N1) virus isolated in 2009, and 85 strains isolated before or after 2009, for which the genome (and hence proteome) sequence is known [54,55].
Our results show that if we assume that the susceptibility of a given individual is inversely proportional to the number of high affinity epitopes that this individual presents for a given viral strain, we can qualitatively reproduce some known trends of influenza spread worldwide. Moreover, although the basic reproduction number R 0 for a given population and a given viral strain remains the main parameter that controls the epidemic size, other characteristics of the population can also significantly impact epidemic spread. In particular, we show that a composition of HLA genotypes which results in sub-populations with widely differing susceptibilities confers protection against the spread of influenza. Moreover, populations where most of the individuals are less susceptible but where a small sub-set of individuals is highly susceptible, are better in terms of containing the disease than populations that are otherwise configured, even if they have the same value of R 0 . We show that the full distribution of susceptibilities across a population is required to predict the final epidemic size, but that one can extract useful information from low order moments of this distribution. Although these results are derived from pH1N1 strains, we find that the same trends apply even for viral strains isolated before or after 2009. We also show that populations with frequent occurrence of an allele associated with high risk for one strain do not always experience severe epidemics when considering influenza strains in general. We verify these conclusions by comparisons to synthetic data.
Materials and methods
To model epidemics at the population level, we use a deterministic SIR epidemic model. We describe a population as being formed out of a number of sub-populations. Each sub-population is defined according to its specific susceptibility to the viral strain. To define these sub-populations in practice, starting from biological data, we employ the probabilistic method developed in [51]. This method uses well-tested and benchmarked algorithms for epitope prediction [52] to predict the viral epitopes presented by individuals represented by different HLA class-I genotypes. We link these genotypes to individual susceptibility against the pathogen. We can then group individuals with comparable susceptibilities into well-defined sub-populations.
We represent different epidemic scenarios in terms of epidemic pairs, formed by considering both the pathogen (different influenza strains) and the specific population (in this work, ethnicities) with different sub-population structures. We then use the SIR framework to track the spread of influenza through the population. The ordinary differential equations used in the model are coded in Matlab and solved numerically using Matlab's ode45 solver.
Generating HLA class-I genotypes
The frequency of different HLA class-I alleles for different ethnicities estimated through large-scale genotyping is available from public databases [53]. Each individual possesses three pairs of HLA class-I genes. One HLA-A, -B and -C allele is obtained from each parent. Provided we assume that these 6 alleles occur independently of each other, we can draw 2 genes each from the full set of possible A, B and C alleles, sampling them according to the empirically measured prevalence of that allele in the population. Each combination of 6 alleles is referred to as an HLA genotype. The likelihood of finding an individual with the exact HLA genotype generated is given by the product of the likelihoods of finding each of the 6 alleles comprising the genotype. A generated genotype is only accepted if the likelihood of finding an individual with that genotype is larger than 10^−6.
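A minimal sketch of this genotype-generation step is given below. The allele frequency table is invented for illustration (real values come from the Allele Frequency Net Database); genotype likelihoods are computed as the product of the six allele frequencies, with the 10^−6 acceptance cut-off described above.

```python
# Sketch of HLA class-I genotype enumeration for one ethnicity.
import itertools

# Hypothetical allele frequencies per locus (illustrative only).
freqs = {
    "A": {"A*02:01": 0.30, "A*24:02": 0.25, "A*01:01": 0.45},
    "B": {"B*07:02": 0.40, "B*35:01": 0.60},
    "C": {"C*04:01": 0.55, "C*07:01": 0.45},
}

MIN_LIKELIHOOD = 1e-6  # genotypes rarer than this are discarded


def enumerate_genotypes(freqs, min_likelihood=MIN_LIKELIHOOD):
    """Enumerate genotypes (two alleles per locus, drawn independently) and
    keep those whose likelihood, the product of the six allele frequencies,
    exceeds the cut-off."""
    per_locus_pairs = [
        list(itertools.combinations_with_replacement(table.items(), 2))
        for table in freqs.values()
    ]
    genotypes = []
    for combo in itertools.product(*per_locus_pairs):
        alleles = tuple(allele for pair in combo for (allele, _) in pair)
        likelihood = 1.0
        for (_, f1), (_, f2) in combo:
            likelihood *= f1 * f2
        if likelihood > min_likelihood:
            genotypes.append((alleles, likelihood))
    return genotypes


genotypes = enumerate_genotypes(freqs)
print(len(genotypes), "genotypes above the likelihood cut-off")
```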
Forming susceptibility sub-populations
An adaptive CD8 + T-cell mediated immune response can only be mounted against a virus if epitopes from the virus are presented by HLA class-I molecules. The binding between the epitope and the passing CTL takes place through a receptor called the T cell receptor (TCR). Not all TCRs are capable of recognising all viral epitopes. Thus if an individual presents a large number of high affinity epitopes, it is reasonable to assume that there is an enhanced probability that one or more of these epitopes can be recognised by their TCRs. Such individuals can be argued to have low susceptibility to the virus. Conversely, the ability of the immune system to present only a small number of epitopes will reduce the chance that they can be recognised. Such individuals can be argued to be more susceptible to the viral infection. This link between HLA class-I genotypes and disease susceptibility is supported, among others, by [33][34][35][36][37][39][40][41][42][43].
Predicting epitopes. For a given H1N1 influenza viral strain V and particular ethnicity E forming an epidemic pair (E, V), we predict the entire set of epitopes presented by each HLA class-I allele in that ethnicity using different algorithms available through the IEDB analysis resource [52]. A consensus of three algorithms is used: an artificial neural network [56], a stabilized matrix method [57], and a combinatorial peptide-library based method [58]. These three algorithms use very different approaches for predicting epitopes for a given HLA allele. In a study carried out by Sette et al., all peptides with strong binding affinity (IC50 < 50 nM) with their cognate allele were found to be immunogenic [59]. We restrict ourselves to predictions with high likelihood of being immunogenic by ensuring coincident prediction by all three algorithms, and by only considering epitopes with predicted IC50 < 50 nM. From these results, we compute the number of high affinity epitopes presented by each individual, represented by their HLA genotype, in the population.
Susceptibility sub-populations. The clustering of HLA genotypes into sub-populations is carried out on the basis of the number of epitopes presented, under the hypothesis that more susceptible individuals present fewer epitopes. Thus, we cluster individuals so that individuals within the same group present a similar number of epitopes, whereas individuals from different groups present different numbers of epitopes. The susceptibility of each such group is then

s_i ∝ 1/e_i,   (1)

where s_i relates to the susceptibility of individuals in group i, and e_i denotes the average number of epitopes presented by the HLA genotypes belonging to sub-population i. A discussion of the proportionality constant is provided in the section Estimating the proportionality constant.
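The grouping step can be sketched as follows. The epitope counts are invented for illustration, and a simple one-dimensional k-means stands in for the clustering procedure of [51].

```python
# Sketch: group generated genotypes into susceptibility sub-populations
# from their predicted epitope counts.
import numpy as np
from sklearn.cluster import KMeans

# Epitope counts per generated genotype (illustrative numbers).
epitope_counts = np.array([3, 4, 4, 5, 11, 12, 13, 29, 30, 31], dtype=float)
genotype_weights = np.full(len(epitope_counts), 1.0 / len(epitope_counts))

m = 3  # number of susceptibility sub-populations for this epidemic pair
labels = KMeans(n_clusters=m, n_init=10, random_state=0).fit_predict(
    epitope_counts.reshape(-1, 1)
)

e = np.array([epitope_counts[labels == i].mean() for i in range(m)])   # e_i
x = np.array([genotype_weights[labels == i].sum() for i in range(m)])  # fraction of the population in each group
s = 1.0 / e  # susceptibility up to the proportionality constant z (Eq 1)
print(e, x, s)
```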
Using the number of individuals N in the population and the classification of genotypes in clusters, we can calculate the fraction of individuals x_i in each sub-population i ∈ {1, . . ., m} as

x_i = N_i/N,   (2)

where N_i denotes the number of individuals in sub-population i. All the calculations described above are for a single (ethnicity, viral strain) epidemic pair. The values of all these parameters must be recalculated for each such epidemic pair being studied, since, among others, the parameter m depends on (E, V). The model relies on the following assumptions:
1. The population is closed and spatially well-mixed.
2. All individuals in the population have equal infectivity and recovery rates.
3. Individuals in each sub-population have the same susceptibility.
4. Individuals in different sub-populations have different susceptibilities.
We use the SIR epidemic model of [21], considering a closed population of N susceptible individuals and a initially infected individuals. The dynamics of the epidemic are represented by the coupled equations

dS_i(t)/dt = −β_i S_i(t) I(t),  i ∈ {1, . . ., m},
dI(t)/dt = Σ_{i=1}^{m} β_i S_i(t) I(t) − γ I(t),
dR(t)/dt = γ I(t).

Here S_i(t), I(t) and R(t) are the numbers of susceptible (in sub-population i), infected and recovered individuals at time t, and the initial conditions are given by

S_i(0) = N_i,  I(0) = a,  R(0) = 0.

We use a = 1 in our numerical calculations to represent a single infective individual who introduces the disease into a fully susceptible population. The parameter β_i governing the infection of susceptible individuals belonging to the i-th sub-population is assumed to be a composite of three factors,

β_i = α s_i c.

We take α s_i ∈ [0, 1] to represent the probability of a successful contact between a susceptible individual from the i-th sub-population and an infective individual leading to infection. The quantity α accounts for factors such as the infectiousness of the pathogen, or the infectivity of the infective individual, while s_i is related to the susceptibility of individuals in sub-population i. The parameter c represents the average number of contacts per individual per unit time. We note here that, since the dimensions of c are person^−1 time^−1, β_i has dimensions person^−1 time^−1. An alternative notation in the literature takes the infection rate to have units time^−1, with S and I representing proportions of susceptible or infected individuals rather than numbers; this would be equivalent to working with the alternative parameter β̃_i = β_i (N + a). Since individuals in all the ethnicities are considered to be homogeneously mixed and all our numerical computations are carried out with the same number of individuals (N + a = 10^4), we assume the parameter c to be the same for all the epidemic pairs under consideration. Further, since our interest is in analysing the impact of susceptibility heterogeneities in the spread dynamics, we take α to be the same regardless of the epidemic pair (E, V) under consideration. Thus, when comparing the spread dynamics between two epidemic pairs, heterogeneity in susceptibilities emerges as the main factor in our model determining the difference in these dynamics.
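A minimal sketch of these dynamics, solved with SciPy's solve_ivp in place of Matlab's ode45, is shown below; the β_i, N_i and γ values are illustrative placeholders rather than values estimated for any particular epidemic pair.

```python
# Sketch of the multi-group SIR dynamics with heterogeneous susceptibility.
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([0.5e-4, 1.5e-4, 2.5e-4])  # per-sub-population rates (person^-1 day^-1), illustrative
N_i = np.array([6000, 3000, 999])          # susceptible individuals per sub-population, illustrative
gamma = 1.0 / 3.0                          # recovery rate (day^-1)
a = 1                                      # initially infected individuals


def rhs(t, y):
    S, I, R = y[:-2], y[-2], y[-1]
    dS = -beta * S * I                     # dS_i/dt = -beta_i S_i I
    dI = (beta * S).sum() * I - gamma * I  # dI/dt = sum_i beta_i S_i I - gamma I
    dR = gamma * I                         # dR/dt = gamma I
    return np.concatenate([dS, [dI, dR]])


y0 = np.concatenate([N_i.astype(float), [float(a), 0.0]])
sol = solve_ivp(rhs, (0.0, 365.0), y0, rtol=1e-8, atol=1e-8)
R_final = sol.y[-1, -1]
print("final epidemic size:", R_final / (N_i.sum() + a))
```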
Finally, we note that the parameter β, given by

β = Σ_{i=1}^{m} x_i β_i,   (12)

can be seen as the counterpart of (β_1, . . ., β_m) when the population is considered homogeneous. It corresponds to the parameter widely used and estimated, usually by estimating the basic reproduction number R 0, in the literature from epidemiological data for different pathogens and populations.
Estimating the proportionality constant
For a given (E, V) pair, and using Eq (1), the susceptibility of each sub-population is inversely proportional to the average number of epitopes presented by individuals in that group. Thus we can write s_i = z (1/e_i), where z is a proportionality constant which captures other components of the immune system that affect susceptibility, including all aspects of the innate and humoral immune response. We assume these aspects to be the same across all individuals and pairs, since only heterogeneities related to HLA profiles are considered in this work. Then, β_i is given by

β_i = α c s_i = y/e_i,

where y = αcz accounts for contributions to β_i that are assumed to be the same across different individuals and pairs. The value of β in Eq (12) can then be calculated as a weighted average of the β_i values,

β = Σ_{i=1}^{m} x_i β_i = y Σ_{i=1}^{m} x_i/e_i.   (14)

The quantity β is henceforth referred to as the average susceptibility. We note that our algorithm reports e_i = 0.07 as the minimum value of the average number of epitopes presented by a sub-population in any epidemic pair, so that β is always finite.
One way to obtain y is to scale to an experimentally determined value for β, given a specific ethnicity and viral strain (E 0 , V 0 ). Values for β have historically been estimated using techniques such as serotyping the same set of people at different time points to estimate the change in the fraction of individuals susceptible to a given pathogen. Other methods are reviewed in [60]. Once we have a value of β for one epidemic pair (E 0 , V 0 ), we can calculate values x i and e i for this epidemic pair using the HLA genotype generation, epitope prediction and clustering methods outlined above. These can be inserted into Eq (14), allowing us to compute the value y, which we have assumed to be the same across all epidemic pairs. Values of x i and e i for each pair (E, V) can be used, together with this value of y, to get a β for any pair (E, V).
In this work, we use the value of R 0 estimated in [13] for the Mexico City population for the 2009 H1N1 pandemic originating in Mexico La-Gloria. This was chosen as a reference because the HLA class-I allele frequencies for this ethnicity, as well as the protein sequence of this viral strain, were available. In [13], an exponential curve was fit to the data of number of infections over time during the initial phase of the epidemic. The distribution thus estimated was used to compute R 0. The R 0 estimated in this manner was 1.72. We use this R 0 to compute β for this epidemic pair, and use the epitopes and sub-populations for the pair (E_0, V_0) = (Mexico City Mestizo pop 2, A/Mexico/LaGloria-8/2009) to estimate y. We note that we are using a particular β estimated in the literature for a specific pair (E_0, V_0) for computing y, and then considering y to be the same across different pairs. By doing this, we are scaling the rate of the event S_i + I → I + I in all the simulations for any pair (E, V) to the value of β obtained from data for the given pair (E_0, V_0).
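The scaling step can be sketched as follows, assuming x_i = N_i/N and the heterogeneous-population expression R_0 = (N/γ) Σ_i x_i β_i derived in the next section; the sub-population fractions and epitope counts below are illustrative, not the actual values for the reference pair.

```python
# Sketch: fix the proportionality constant y from a reference R0, then
# reuse it to obtain beta_i for any other epidemic pair.
import numpy as np

gamma = 1.0 / 3.0   # day^-1
N = 9999            # susceptible individuals used in the simulations (N + a = 10^4)
R0_ref = 1.72       # literature estimate for the reference pair [13]

x0 = np.array([0.6, 0.3, 0.1])    # sub-population fractions of the reference pair (illustrative)
e0 = np.array([25.0, 10.0, 2.0])  # average epitope counts of the reference pair (illustrative)

# R0 = (N / gamma) * sum_i x_i * beta_i with beta_i = y / e_i,
# hence y = R0_ref * gamma / (N * sum_i x_i / e_i).
y = R0_ref * gamma / (N * np.sum(x0 / e0))

# beta_i for some other pair (E, V) with epitope profile e_new (illustrative):
e_new = np.array([30.0, 8.0, 3.0, 1.0])
beta_new = y / e_new
print("y =", y, "beta_i =", beta_new)
```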
Summary statistics for comparing epidemics
We focus on the following global epidemiological characteristics: the final epidemic size FI_∞, the fraction of individuals in the population who become infected at some point during the outbreak and remain infected until they recover, and the basic reproduction number R 0. In our model, the ability of an individual to transmit the disease does not depend on the sub-population that the infected individual belongs to, since infectivity is considered to be the same across sub-populations. The SIR model of Eqs (3)-(10) was analysed in [21], where it was proved that R(∞) is the only positive solution of

R(∞) = N + a − Σ_{i=1}^{m} N_i exp(−β_i R(∞)/γ),   (15)

and FI_∞ can be derived from R(∞) by applying FI_∞ = R(∞)/(N + a). The basic reproduction number R 0 is the number of secondary infections that a typical infected person causes when introduced into a large population of susceptible individuals. In the classical SIR model for homogeneous populations, R 0 is given by

R 0 = βN/γ.   (16)

In order to calculate R 0 for our system of equations (Eqs (3)-(10)), we consider the case when a small number of infected individuals is introduced into a large population of N susceptible individuals. We assume the number of susceptible individuals (S_i(0) = N_i for all i) to be large, such that a ≪ N_i. This approaches the limit in which there is an unlimited source of susceptible individuals at the beginning of the epidemic. Then the dynamics of the initially infected population, in terms of a(t), the number of initially infected individuals at time t, decline as

da(t)/dt = −γ a(t),

and thus a(t) = a(0)e^{−γt}. Let I^(1)(t) be the number of secondary infections caused up to time t, with I^(1)(0) = 0, by the a initially infected individuals. Then

dI^(1)(t)/dt = Σ_{i=1}^{m} β_i N_i a(t),

so that I^(1)(t) = a(0) (Σ_{i=1}^{m} β_i N_i)(1 − e^{−γt})/γ. The basic reproduction number is given by R 0 = lim_{t→∞} I^(1)(t), so that by setting a(0) = 1 we get

R 0 = (1/γ) Σ_{i=1}^{m} β_i N_i.   (20)

For m = 1, this expression leads to the well-known basic reproduction number for the homogeneous case (Eq (16)).
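The two summary statistics can be computed as in the sketch below; the final-size relation solved for R(∞) is the reconstruction of Eq (15) given above, and the input numbers are illustrative.

```python
# Sketch: R0 and the final epidemic size for a heterogeneous-susceptibility SIR model.
import numpy as np
from scipy.optimize import brentq

beta = np.array([0.5e-4, 1.5e-4, 2.5e-4])  # illustrative
N_i = np.array([6000, 3000, 999])          # illustrative
gamma = 1.0 / 3.0
a = 1
N_tot = N_i.sum() + a

R0 = np.sum(beta * N_i) / gamma  # Eq (20); reduces to beta*N/gamma when m = 1


def final_size_residual(R_inf):
    # Eq (15): R_inf = N + a - sum_i N_i * exp(-beta_i * R_inf / gamma)
    return N_tot - np.sum(N_i * np.exp(-beta * R_inf / gamma)) - R_inf


R_inf = brentq(final_size_residual, 1e-9, N_tot)  # root-find the final size
print("R0 =", R0, "FI_inf =", R_inf / N_tot)
```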
Parameters characterising epidemic pairs. Our model predicts values of FI_∞ and R 0 for each pair (E, V). Any given epidemic pair (E, V) corresponding to an ethnicity E and a viral strain V has a susceptibility profile described by the number m of sub-populations, and by vectors (β_1, . . ., β_m) and (N_1, . . ., N_m). The susceptibility profile of any epidemic pair (E, V) is described by a Susceptibility Profile Vector (SPV),

SPV(E, V) = (β_1, . . ., β_m; N_1, . . ., N_m).

The quantities FI_∞ and R 0 can be expected to directly depend on the SPV(E, V), where we omit (E, V) from now on for ease of notation. For example, it is clear that for a given epidemic pair, R 0 directly depends on the total number of individuals, N, the recovery rate, γ, and the average susceptibility β of Eq (14). On the other hand, the quantity of central interest to epidemic modeling, the final epidemic size FI_∞ for a given epidemic pair, could depend on the full distribution of the SPV. For concreteness, we examine the dependence of FI_∞ on the lower order moments of the distribution, such as the standard deviation, the skewness and the coefficient of variation, defined respectively as

σ(SPV) = (Σ_{i=1}^{m} x_i (β_i − β)^2)^{1/2},
Sk(SPV) = Σ_{i=1}^{m} x_i (β_i − β)^3 / σ(SPV)^3,
CV(SPV) = σ(SPV)/β.

We note that a long left tail of the distribution represented by SPV would result in Sk(SPV) < 0, indicating the presence of a small number of individuals with susceptibility significantly lower than the mean. On the other hand, when the population has a small representation of individuals with susceptibility significantly higher than the mean, we have Sk(SPV) > 0.
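A sketch of these summary statistics is given below; weighting the moments by the sub-population fractions x_i is an assumption of this illustration, consistent with the weighted average β of Eq (14), since the printed definitions were lost in extraction.

```python
# Sketch: low-order moments of the susceptibility profile vector (SPV).
import numpy as np


def spv_statistics(beta, N_i):
    x = N_i / N_i.sum()                            # sub-population fractions x_i
    mean = np.sum(x * beta)                        # average susceptibility (Eq 14)
    sigma = np.sqrt(np.sum(x * (beta - mean) ** 2))
    skew = np.sum(x * (beta - mean) ** 3) / sigma ** 3
    cv = sigma / mean
    return mean, sigma, skew, cv


beta = np.array([0.5e-4, 1.5e-4, 2.5e-4])  # illustrative
N_i = np.array([6000.0, 3000.0, 999.0])    # illustrative
print(spv_statistics(beta, N_i))
```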
Workflow
The workflow used in this paper is summarised in Fig 2.
Results
To compute FI_∞, we solve Eqs (3)-(10) with N + a = 10^4 individuals, a = 1. Each simulation is allowed to run for (0, T), where time T is large enough to ensure that the epidemic has died out. In particular, T is chosen to be large enough for each considered epidemic pair so that R(T) ≈ R(∞) obtained from the simulation satisfies Eq (15) with some error < 10^−2. The recovery rate used was γ = 1/3 day^−1 [13].
The input to Eqs (3)-(10) was determined for 61 ethnicities and 81 viral strains isolated in 2009, leading to the study of 4,941 epidemic pairs. Of these, 1,392 cases had R 0 > 1, and 718 cases had FI_∞ > 0.5. The distributions of SPV characteristics across these 4,941 epidemic pairs are provided in Fig 3. The number m of susceptibility sub-populations varied from 1 (578 cases) to 23 (1 case, A/Giessen/6/2009 with Kenya Nandi ethnicity). The most common value for m was 5, seen in 647 cases spanning 80 strains and 32 ethnicities. Details regarding ranges of calculated parameters for strains isolated before or after 2009 can be found in the supporting information; see S1 Fig. All estimated parameters are provided for all epidemic pairs in a supplementary file; see S1 File.
Results presented in upcoming sections are for H1N1 strains isolated in 2009, unless stated otherwise.
Dependence of epidemic size and R 0 on average susceptibility
We first examine the relationship between the average susceptibility (β), the basic reproduction number (R 0 ) and the epidemic size (FI 1 ); see Fig 4. We note that Eq (16) predicts a linear relationship between R 0 and β. As can be seen in Fig 4(a), most (E, V) pairs have β < 2 × 10 −4 person −1 day −1 , while pairs with higher values of β correspond to those with large epidemic sizes (FI 1 > 0.6). These pairs have R 0 > 7, implying β > 2.33 × 10 −4 person −1 day −1 from Eq (20).
In Fig 4(b) we focus on epidemic pairs with R 0 < 7. In this plot, there are a large number of points with epidemic size FI_∞ ≈ 0. Upon closer examination, these points turn out to have R 0 < 1, as expected. We note that R 0 = 1 implies β = 0.33 × 10^−4 person^−1 day^−1, which corresponds to the point in Fig 4(b) where the epidemic size starts to rise above 0. In all further plots, we focus on the (E, V) pairs where 1 < R 0 < 7 and m > 1, leading to the analysis of 956 epidemic pairs.
No single parameter predicts epidemic size
In Fig 4(b), where the relationship between β and FI_∞ is shown, it can be seen that a high value of average susceptibility leads to a larger epidemic. The red line corresponds to the epidemic size when the susceptibility compartment is homogeneous (i.e., m = 1). We see that this line forms an upper bound on the FI_∞ values for epidemic pairs with m > 1. It has been proved that the final epidemic size is always lower in an epidemic pair with heterogeneous susceptibility than in an epidemic pair with the same average susceptibility but with homogeneous susceptibility [21,28,44,61]. The predictions in our simulations agree with this result. However, we observe a spread of FI_∞ values when considering epidemic pairs containing heterogeneous susceptibilities and having the same average susceptibility β; see Eq (12). This shows that heterogeneity plays a role in determining the extent of an epidemic even when the average susceptibility remains constant. To study what aspects of this heterogeneity have the greatest impact on epidemic size, we examine the dependence of FI_∞ on the characteristics of the susceptibility profile vector discussed above (m, β, σ(SPV), CV(SPV) and Sk(SPV)); see Fig 5. The main trends that can be identified are discussed in the case studies below. In our data, a positive value for Sk(SPV) corresponds to epidemic pairs with R 0 < 2. We provide correlation coefficients r(θ, τ) ∈ (−1, 1) between our summary statistics τ ∈ {FI_∞, R 0} and SPV characteristics θ ∈ {m, β, CV(SPV), σ(SPV), Sk(SPV)} in Table 2. The parameter β provides the best predictor for both R 0 and FI_∞. On the other hand, the heterogeneity described by CV(SPV), and the skewness of the susceptibility distribution described through Sk(SPV), also emerge as good predictors of FI_∞.
Case study 1-σ(SPV). Fig 5(b) shows that most of the epidemic pairs in our data set have σ(SPV) < 10^−4 person^−1 day^−1. Although the correlation between σ(SPV) and FI_∞ is not statistically significant (see Table 2), we notice that a high value of σ(SPV) (> 1.5 × 10^−4 person^−1 day^−1) corresponds to moderate values for FI_∞. We examine two (E, V) pairs with high σ(SPV); see pairs 1 and 2 in Table 3 and their corresponding epidemic dynamics in Fig 6. These two pairs have similar values for σ(SPV), and yet have significantly different epidemic sizes (0.48 for pair 1, and 0.74 for pair 2). We also see from Fig 6(b) and 6(d) that the infection runs its course faster in pair 2 than in pair 1. Both these phenomena can be explained by the fact that pair 2 has a significantly higher β (2.33 × 10^−4 person^−1 day^−1, compared to 1.42 × 10^−4 person^−1 day^−1 for pair 1). We can also see from Fig 6 that the sub-population with the highest β_i is the one most affected by the infection, while the sub-populations with low β_i remain largely uninfected, in both pairs 1 and 2. We will see in further sections that β and σ(SPV) together correlate well with epidemic size. Case study 2-m. In Fig 5(d) there appears to be some negative correlation between m and FI_∞, with larger values of m corresponding to smaller epidemic sizes; see Table 2. However, we note that this is more an artefact of the data than a predictive trend, and it is possible to have epidemic pairs with a large value of m but very different final epidemic sizes and epidemic time-course dynamics. This can be seen for example in Fig 7 for epidemic pairs 3 and 4 from Table 4. Once again, the pair with higher average susceptibility has both a larger epidemic size and a faster time course for the spread of the disease. The linear relationship between β and R 0 follows from Eq (16). In Fig 8(d), we once again observe that (E, V) pairs with m > 10 have low R 0. As observed in case study 2, this is more an artefact of biases in the real data than a general trend. In Fig 8(b), we plot σ(SPV) against R 0. Although σ(SPV) does not directly affect R 0, we see this shape due to the relationship in the data between β and σ(SPV).
Epidemic size largely correlates with select pairs of parameters
Earlier, we examined the correlation between FI_∞ and the SPV characteristics θ ∈ {m, β, CV(SPV), σ(SPV), Sk(SPV)} independently. This raises the question of whether accounting for pairs of such parameters might provide a more accurate prediction of FI_∞. We study here how pairs of parameters are related to epidemic size in all (E, V) pairs with 1 < R 0 < 7 and m > 1. We find that pairs involving the average susceptibility β, as well as the heterogeneity parameters Sk(SPV), CV(SPV) and σ(SPV), are better predictors of the final epidemic size than these quantities individually. Plots involving these parameters are shown in Fig 9, while multiple correlation coefficients are shown in Table 5. In particular, epidemic pairs with positive Sk(SPV) are also the ones with small average susceptibility, and they lead to small final epidemic sizes.
From Fig 9, we see that for a given β, FI_∞ decreases with increasing σ(SPV). It also decreases as Sk(SPV) is made more positive, or as CV(SPV) is increased. This shows that for intermediate values of β such as the ones shown in Fig 9, a higher spread in β_i values helps to protect the population against the epidemic spread. In other words, a population with higher genetic heterogeneity in susceptibility to a virus, leading to susceptibility sub-populations with a large spread in susceptibilities, can be expected to have a smaller epidemic than a population where susceptibility is more homogeneously distributed. We also observe that for a given value of β, an (E, V) pair with Sk(SPV) > 0 or only slightly negative corresponds to a smaller FI_∞ than one for which Sk(SPV) is a large negative value. We interpret this in the following way: populations containing a small sub-set of individuals with heightened susceptibility, but in which most of the individuals are less susceptible, are better protected against the disease than populations where the susceptibility is more uniformly distributed, even if the mean susceptibility is the same.
Predictions broadly track trends in pH1N1 (2009) burden
The 2009 pandemic of H1N1 was closely tracked by many organisations in the world, including the World Health Organization (WHO). For example, [62, Fig 3] indicates that certain areas of the world experienced a larger number of cases than others. In particular, we see that China and Japan experienced worse epidemics than Russia, which tends to have relatively smaller epidemics.
To compare the predictions of our model with these observations, we select viral strains isolated in these regions during the 2009 pandemic, and ethnicities corresponding to these countries. We would like to mention here that our model works with individual ethnicities, while the data available is for countries, which are comprised of multiple ethnicities. We find that different ethnicities from the same country experience widely differing epidemic sizes for the same viral strain; see S1 File. For this comparison, we select ethnicities available in our data set from each of these countries, for which the predictions most closely resemble the observations in [62, Fig 3]; see Fig 10. As can be seen in Fig 10, our method predicts that most Chinese ethnicities will experience severe epidemics regardless of the viral strain. On the other hand, Russia and Japan are predicted to experience smaller epidemics for most viral strains. However, we note that for most of the Japanese strains, the Japanese ethnicity will suffer larger epidemic sizes than the Russian one, thus qualitatively agreeing with what can be observed in [62, Fig 3]. An interesting case is the strain A/Japan/921/2009 (H1N1), which was one of the strains circulating in Japan during the 2009 pandemic. This strain is predicted to cause severe epidemics in most ethnicities, and this holds true across all the 61 ethnicities considered in the data set.
The results described above show that even with a model that only incorporates susceptibility heterogeneities in terms of epitope presentation through HLA class-I alleles, we can qualitatively explain some essential trends observed across the world during the 2009 H1N1 pandemic. This serves as a qualitative validation of our methodology. Moreover, our results suggest that while some trends in influenza spread worldwide can be explained by the average susceptibility of each ethnicity to each strain, others might have an explanation related to the particular genetic diversity within each ethnicity for a given strain. For example, when analysing pairs 5 and 6 in Table 6, we can see that the same value of β can lead to different epidemic sizes for the same strain when considering the China Yunnan Province Lisu and the Japan pop 5 ethnicities. This is likely related to the fact that Sk(SPV) is significantly more negative for the Chinese ethnicity, and the coefficient of variation is smaller, leading to a larger epidemic size. A similar behaviour can be seen when considering pairs 7-9 in Table 6. Larger reproduction numbers can still arise from smaller epidemic sizes if Sk(SPV) is closer to 0 (or positive), and for more heterogeneous populations (larger values of CV(SPV)), which might explain smaller epidemic sizes in, for example, the Kenya Luo ethnicity compared to the Chinese ones [62, Fig 3].
Response of some indigenous ethnicities to H1N1
Several studies have reported that during the 2009 pandemic, indigenous ethnicities experienced more severe epidemics than their non-indigenous counterparts [63][64][65]. The indigenous ethnicities in our data set are USA Alaska Yupik, Australia Yuendumu Aborigine, and Australia Cape York Peninsula Aborigine. We find that the ethnicity USA Alaska Yupik is always predicted to have a worse epidemic than non-indigenous ethnicities from the USA, irrespective of the strain being considered. Since our data set does not include any non-indigenous ethnicities from Australia, we are unable to verify whether or not a similar statement holds true for the Australian aboriginal ethnicities.
In general, we find the ethnicity Australia Cape York Aborigine, with average FI 1 = 0.14 when considering all 166 viral strains, is predicted to experience a marginally worse epidemic than Australia Yuendumu Aborigine whose average FI 1 = 0.08. Interestingly, this trend is reversed when we focus on the strains A/Auckland/1/2009 and A/Auckland/597/2000 isolated in Australia. For these strains, Australia Cape York Aborigine has R 0 < 1 for both these strains, but Australia Yuendumu Aborigine has R 0 = 1.49 for the strain A/Auckland/1/2009; see Table 7.
Based on the observations during the 2009 pandemic, it has been suggested that aboriginal communities should be prioritised during vaccination [63,64]. However the predictions in Table 7 suggest that at least from the perspective of HLA alleles and downstream CTL response, each influenza strain and each aboriginal community needs to be assessed independently. Using our model, it is possible to predict whether or not a new strain will cause a worse epidemic than a strain in the data set, within the constraints of the assumptions made. Predictions such as these could help optimise the deployment of resources when combating a new strain of influenza.
High risk alleles for one strain do not always correlate with severe epidemics in general
The frequency of the HLA class-I allele HLA-A*24 has been found to correlate with mortality rate due to the pandemic H1N1 (2009) influenza virus [36]. We rank ethnicities in our data set in descending order of their average FI_∞ across all 166 strains of influenza, and find that the ethnicity USA Alaska Yupik has the highest prevalence of allele HLA-A*24:02, and also has the worst average epidemic size; see Table 8. The ethnicity with the next highest frequency of allele HLA-A*24:02, Japan Central, has a very low average epidemic size, and ranks 52nd among 61 ethnicities. The ethnicity Japan pop 3 has a comparable frequency of the allele HLA-A*24:02 to Japan Central, but is ranked 28th based on its average epidemic size. These results show that an allele whose frequent occurrence correlates with a high risk for one influenza strain does not always correlate with a severe epidemic when considering influenza strains in general. Rather, we need to estimate the full profile of the SPV, or at least the summary characteristics with strong correlation as described in previous sections.
Synthetic data supports the observed behaviour
Does the behaviour discussed in the preceding sections rely on correlations between SPV characteristics that are specific to the epidemic pairs we analyse? These correlations arise directly from genetic heterogeneities at the HLA genotype level corresponding to the 61 ethnicities and 166 viral strains considered here. However, we could frame our questions more generally. For example, we could ask if a positive skewness of the SPV would always be a protective characteristic for the population, given a fixed average β?
To address these and similar questions, we construct a synthetic data set of 10^4 epidemic pairs created within the following parameter ranges: m ~ U_int({2, . . ., 15}); u ~ U(0, 1); p_i ~ U(log10(e_min), log10(e_max)) for 1 ≤ i ≤ m; and N_i ~ U_int({1, . . ., N}) for 1 ≤ i ≤ m, subject to Σ_i N_i = N, where e_min and e_max are the minimum and maximum values of e_i in the real data set analysed in previous sections. These distributions have been chosen so that we obtain 10^4 epidemic pairs with values in the interval 1 < R 0 < 7, m > 1, with N_i and β_i distributed within ranges that are comparable to those of the original data set. For this synthetic data set, we plot in Fig 11 the predicted final epidemic size as a function of the different SPV characteristics. In Tables 9 and 10, correlation coefficients for single and paired SPV characteristics, and summary statistics FI_∞ and R 0, are provided for the epidemic pairs of the synthetic data set. The main observations are the following:
• Positive skewness leads to smaller epidemic sizes than negative skewness scenarios, as observed for the original data set; see Fig 11(b).
• The larger the heterogeneity (in terms of σ(SPV) or CV(SPV)), the more protected the population is against epidemic spread. This is not a consequence of the value of m. Rather, it is the particular combination of β_i and N_i values which has an impact on the epidemic dynamics; see Fig 11.
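A sketch of such a synthetic-data generator is given below; treating the log-uniform draws as epitope counts (e_i = 10^p_i), constraining the N_i to sum to N, and the numerical bounds and constant y are all assumptions of this illustration.

```python
# Sketch: generate synthetic susceptibility profile vectors and keep those
# with 1 < R0 < 7, mirroring the sampling scheme described above.
import numpy as np

rng = np.random.default_rng(0)
N, gamma = 9999, 1.0 / 3.0
e_min, e_max = 0.07, 60.0  # illustrative bounds for the epitope counts
y = 5e-5                   # proportionality constant (illustrative)


def sample_pair():
    m = rng.integers(2, 16)                                  # m ~ U_int({2, ..., 15})
    p = rng.uniform(np.log10(e_min), np.log10(e_max), m)     # p_i ~ U(log10 e_min, log10 e_max)
    e = 10.0 ** p                                            # assumed: e_i = 10**p_i
    # sub-population sizes: positive integers constrained to sum to N
    cuts = np.sort(rng.choice(np.arange(1, N), size=m - 1, replace=False))
    N_i = np.diff(np.concatenate([[0], cuts, [N]]))
    beta = y / e
    R0 = np.sum(beta * N_i) / gamma
    return beta, N_i, R0


pairs = [sample_pair() for _ in range(10_000)]
kept = [p for p in pairs if 1 < p[2] < 7]
print(len(kept), "synthetic pairs with 1 < R0 < 7")
```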
Discussion
Theoretical studies on epidemiological spread of disease in the presence of susceptibility heterogeneities have shown that final epidemic size is typically lower when susceptibility sub-populations are factored in, as compared to the case of homogeneous susceptibility [21,44,61]. We find that this result holds true when the sub-population sizes and disease transmission rates are informed by real-world data about immunological factors. The novelty in our approach is to propose how the susceptibility profile vector can be estimated from genetic sequence data, so that we can then deal with particular SPVs that might exist in reality for different ethnicities and viral strains. We also show that some summary statistics of the SPV (such as the skewness or the coefficient of variation) can help to better understand the predicted final size of the epidemic. A limitation of our model is that factors such as age, prior infection history and vaccination are not included. While there have been studies which collect and analyse such data for small cohorts [47,48,65], gathering such information on the global scale required for this analysis requires the formation of consortia such as those existing for diseases such as cancer [37]. Also, we make the strong simplifying assumption that all aspects of the innate and adaptive immune system not affected by HLA class-I presentation can be pooled into a single proportionality constant, and are considered uniform among individuals within an ethnicity, and across ethnicities. While this helped focus the analysis on the role of HLA alleles in disease spread, incorporating other aspects of the immune system into epidemiological models is an important problem that must be addressed. Due to these limitations, predictions made by our model can only be used to draw comparisons between different epidemic pairs, particularly epidemic pairs consisting of the same ethnicity and different viral strains, and not for making absolute quantitative predictions.
A number of extensions of the line of work presented in this manuscript are possible. Presentation of epitopes by HLA class-I alleles is preceded by a number of steps including internalisation of the virus, proteasomal cleavage of viral proteins into shorter peptides, and transport of peptides through the TAP transport system [31]. The epitope prediction tools used in this work do not explicitly consider all these pre-processing steps in any single tool. Also, the prediction algorithms have lower accuracy for rare alleles. The model can be improved by plugging in different epitope prediction methods which overcome these limitations. Also, it would be useful to establish a more accurate, quantitative connection between s_i and e_i than the simple inverse relation we have assumed. Two other possible mathematical forms, s_i ∝ 1/ln(e_i + 1) and s_i ∝ 1/(e_i + 1)^2, are explored in the supporting information; see S4 Fig.
Spatial heterogeneities are known to allow for disease persistence, since asynchrony in the epidemic spread among different sub-populations located in different geographical locations can allow for global persistence, even if the epidemic locally dies out [19]. Since HLA alleles are inherited, it can be expected that families and households will have similar HLA genotypes, potentially introducing spatial inhomogeneity in the distribution of HLA alleles in a population. If such spatial information regarding HLA genotypes were gathered, it would be interesting to study how this affects epidemic dynamics and persistence. An agent based model incorporating variations in agent susceptibility along the lines indicated here, along with spatial information regarding each susceptible agent, would provide an idea of how such factors might modify the general conclusions described in this paper. A network model incorporating the social structure of individual contacts would indicate if the combination of varied susceptibility with a specified contact network structure between individuals might accelerate epidemic progress or retard it.
Conclusions
The incorporation of within-host immunological information into population-level epidemic models is a major challenge for epidemiological modeling [30]. In this paper, we address this question in a specific case, by modeling the impact of genetic diversity in terms of the HLA class-I genotype on the predicted epidemic dynamics of H1N1 influenza. To do this, we made use of HLA allele frequencies measured across different ethnicities, focusing on the number of high affinity epitopes presented by individuals within 61 ethnicities and for 81 H1N1 influenza A viral strains isolated in 2009 as well as 85 H1N1 influenza A viral strains isolated in other years. Our main hypothesis was that the susceptibility of individuals in a given ethnicity, for a given viral strain, is inversely proportional to the number of high affinity epitopes that these individuals can present. We then used a multi-compartment SIR model to study the spread dynamics of influenza for each (ethnicity, viral strain) epidemic pair, where the final epidemic size FI 1 and the basic reproduction number R 0 are used as the summary statistics for the purpose of comparison.
While the average susceptibility β is a central parameter, the susceptibility profile corresponding to each epidemic pair also plays an important role governing epidemic spread. In particular, when analysing epidemics with intermediate values of β (i.e., intermediate values of R 0 ), more heterogeneous susceptibility profiles, as well as profiles showing positive skewness Sk(SPV), are more protective for the population as a whole against H1N1 influenza. Our model only considers heterogeneity from the perspective of the ability of a person's HLA genotype to present epitopes from a given virus. However, even if at a qualitative level, our results support the idea that having a wide variety of HLA alleles represented among its individuals, resulting in a wide range of susceptibilities, benefits a population as a whole in terms of restricting the spread of an infectious disease.
Although our model does not incorporate other factors such as social and economic characteristics of each particular population or potential different infectivities for each viral strain, our results qualitatively capture several central trends of influenza spread worldwide. Thus, we can conclude that susceptibility of individuals in terms of the HLA genotype is an important factor that could explain the spread potential of different influenza viral strains among different ethnicities and populations. While some of these trends can just be explained due to larger or smaller values of R 0 (i.e., the average susceptibility β), the reason for small epidemic sizes occurring for some particular ethnicities and viral strains might be related to the existence of high genetic diversity resulting in a wide range of susceptibilities in these populations, for these viral strains, with a positively skewed susceptibility profile vector.
Flexible organic thin-film transistor immunosensor printed on a one-micron-thick film
Flexible and printed biosensor devices can be used in wearable and disposable sensing systems for the daily management of health conditions. Organic thin-film transistors (OTFTs) are promising candidates for constructing such systems. Moreover, the integration of organic electronic materials and biosensors is of extreme interest owing to their mechanical and chemical features. To this end, the molecular recognition chemistry-based design for the interface between sensor devices and analyte solution is crucial to obtain accurate and reproducible sensing signals of targets, though little consideration has been given to this standpoint in the field of device engineering. Here, we report a printed OTFT on a 1 μm-thick film functionalized with a sensing material. Importantly, the fabricated device quantitatively responds to the addition of a protein immunological marker. These results provide guidelines for the development of effective healthcare tools. Flexible sensors that respond to chemical stimuli in the body are promising for personal health monitoring. Here, a flexible immunosensor is printed onto a 1 μm-thick film, which can detect an immunological protein marker.
Related to the global increase in the incidence of lifestyle diseases, a rise in health awareness, and public hygiene concerns resulting from globalization, the development of point-of-care testing (POCT) technologies has attracted attention for the management of health conditions based on quantitative and chronological data 1,2 . In particular, the medical industry is devoting a lot of effort toward the realization of rapid and disposable assays to detect immunological markers or viruses for preventing the spread of serious infectious diseases or inflammations. For these purposes, wearable biosensors are of great interest because they can inspect our bioactivity unobtrusively and continuously 3,4 . To achieve the real-time monitoring of health conditions utilizing wearable sensors, the devices should fulfill the following requirements: (1) To evaluate bioactivity in accordance with quantitative data, devices should respond linearly to changes in biomarker levels. (2) For the qualitative investigation of health conditions, detection units in wearable devices must be functionalized with specific recognition materials for biomarkers. (3) To maintain cleanliness on human skin or tissues, wearable devices should be disposable. This point is also significant in protecting against the spread of secondary infections from patients to medical workers. Hence, the development of biosensing devices should focus on low-cost processes so that biosensors can be produced at a sufficiently low cost to allow their regular replacement.
To satisfy the above-mentioned requirements for the development of wearable biosensors, organic thin-film transistors (OTFTs) are promising device platforms owing to their data handling capacity in integrated circuits, mechanical durability and flexibility, and printing processability 5 . More importantly, OTFTs can be integrated with transmitting systems, meaning that the big-data collection of bioactivity data can be realized by using OTFTs as wearable biosensors. Furthermore, it is possible to compare and accumulate physiological information obtained from each person. Therefore, OTFT-based biosensors can also contribute to the expansion of basic medical research. In fact, the monitoring of biophysical or biochemical information by using flexible OTFTs has been widely reported 6,7 . OTFT characteristics are affected by the charges of molecules adsorbed onto the surfaces of the semiconductor layer or terminal electrodes. Although the origins of the shifts in threshold voltage (ΔV TH ) in OTFT (FET)-based sensors remain under debate, researchers in the field of OTFT-based sensors still use ΔV TH as the sensing parameter. For example, it has been reported that ΔV TH stemmed from the adsorption of gas molecules onto a metal-ligand complex-based organic semiconductor 8 . Furthermore, there are many reports that biomolecules (e.g., nucleotides) adsorbed onto the semiconductor or the gate electrode not only affect ΔV TH but also induce changes in other parameters such as the drain current (I DS ), the field-effect mobility (μ), etc. 9,10 . These results indicate that the parameters of OTFT devices can be experimentally utilized as sensing signals for the determination of captured analytes. However, the detectable signals of previously reported devices were restricted to vital signs and changes in concentrations of small molecules, in spite of the fact that proteins are important biomarkers related to the onset of various diseases. To obtain accurate and reproducible sensing signals of targets, the alignment of analytes on devices should be tuned by molecular recognition materials. This is because the ΔV TH of OTFT-based sensors upon addition of target proteins is unstable when the proteins are merely physically adsorbed on the electrode surface. In the physical adsorption model, the amount of protein adsorbed onto the electrode fluctuates owing to the non-specificity of the adsorption. In addition, the variability of ΔV TH also derives from the varied charge distributions of proteins (see Supplementary Fig. 1a). In contrast, an antibody-modified electrode can detect proteins based on the specific immune interaction (see Supplementary Fig. 1b, c), which means that the amount of target protein adsorbed on the electrode surface is defined by the binding affinity and the immobilized amount of the antibody. Importantly, the distribution of surface charge on proteins is not homogeneous because acidic and basic amino acids are placed heterogeneously on the protein surface. Hence, the well-ordered immobilization of the antibody on the electrode (i.e., alignment of the orientation of the captured protein) is required for achieving the accurate and reproducible detection of proteins (see Supplementary Fig. 1c). These observations suggest that the surface design of sensing electrodes is crucial to obtain biomolecule information. Furthermore, a self-assembled monolayer (SAM)-modified electrode can protect against the non-specific adsorption of proteins and interferents on the electrode.
Therefore, this standpoint from molecular recognition chemistry is important to achieve the accurate quantitative determination of biomarkers by using OTFTs. Toward this end, molecular recognition researchers have already established a strategy for achieving these requirements of protein detection [11][12][13] ; nevertheless, suitable materials and functionalization for wearable sensors for the detection of proteins have not been considered in the field of device engineering. In this regard, although we have carefully examined the protein sensing ability of OTFT devices functionalized with several sensing materials 14 , further consideration of flexible and printed OTFTs for protein determination is required. This is because the development of flexible devices for biosensing is still in its infancy.
Herein, we realize a flexible disposable biosensor that can electrically detect an immunological marker protein by combining different standpoints and knowledge of device engineering and molecular recognition chemistry. In this paper, we propose a conceptual design of an immunosensor device on an ultra-thin film with potential applicability to wearable biomedical systems. The fabricated OTFT with a dual-gate configuration 15 (Fig. 1) showed excellent operation stability under electrical or mechanical loading. More importantly, the designed immunosensing portion achieved the quantitative detection of proteins on a printed OTFT with mechanical flexibility. The obtained results suggest that the proposed device structure, process, and applied materials will contribute to the achievement of OTFT-based biosensors with these attractive characteristics. Furthermore, our OTFT-based immunosensor was fabricated by printing processes, meaning it could be used in low-cost biomedical systems.
Results
Fabrication of a dual-gate OTFT on an ultra-thin-film substrate. The external appearance and schematic of the flexible OTFT are shown in Fig. 1. The fabrication procedure for the ultra-thin-film device, including printing processes, was in accordance with our previous report (see the "Methods" section) 16 . The fabricated device was easily peeled from the support substrate after finishing the preparation process. The thickness of the ultra-thin-film substrate was 1 μm, resulting in the OTFT device having a total thickness of <3 μm. In this study, we employed a dual-gate structure for the OTFT as the basic structure for flexible biosensors. This is because dual-gate OTFTs have a practical configuration for biosensors owing to their excellent robustness to mechanical stresses and their reproducible operation under ambient conditions. More importantly, dual-gate OTFTs can be easily integrated into logic circuitry 17 . This suggests that the processing and transmission of biometric information could be achieved by using an electronic system integrated into a single chip.
To accomplish the electrical detection of analytes in aqueous media using the OTFT, the sensing portion should be separated from the channel region of the OTFT because the electrical performance of organic semiconductors is easily deteriorated by exposure to water. Hence, the sensing electrode was extended apart from the area just above the semiconductor layer ( Fig. 1) 18 . Consequently, we obtained excellent performance characteristics of the OTFT-based sensor as a consequence of this design strategy for the device (vide infra).
Electrical performance of the dual-gate OTFT. Basic operation behaviors (transfer and output characteristics) of the fabricated OTFT with the dual-gate configuration are summarized in Fig. 2 (see Supplementary Table 1). First, we carried out electrical measurements for the OTFT in the single-gate configuration (i.e., a top-gate or bottom-gate OTFT). Notably, the square roots of |I DS | versus V GS in each device configuration exhibited linear behavior (see Supplementary Fig. 3), supporting the high reliability of these analyses 19 . Next, the operation of the dual-gate OTFT was examined by applying the gate bias at both the top-gate and bottom-gate terminals (Fig. 2c). Figure 2d shows the correlation between the threshold voltage (V TH ) of the OTFT and the external sweep bias for the top-gate terminal (V TG ). A linear relationship between the output parameter (V TH ) and the input signal (V TG ) was observed, suggesting that the fabricated OTFT can be applied to the sensitive detection of molecular recognition behavior on the top-gate electrode. Notably, low-voltage operation of the fabricated OTFT was observed in both the single-gate and dual-gate configurations (Fig. 2d). The application of high electric fields to water may drive various electrochemical processes, such as electrolysis and the electrophoresis of charged molecules in the water. These uncontrollable phenomena in the analyte solution might interfere with the readout of sensing signals from the solution. Thus, the device can be utilized for sensing applications because its low operation voltage limits the occurrence of undesirable electrochemical reactions in aqueous media.
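For readers unfamiliar with this kind of analysis, the sketch below illustrates how V TH and the field-effect mobility are typically extracted from a saturation-regime transfer curve via the standard relation |I DS | = (W·Ci·μ/2L)·(V GS − V TH )², i.e., a linear fit of sqrt(|I DS |) versus V GS . It is a generic recipe, not code from this work; the channel dimensions W and L, the dielectric capacitance Ci, and the input data are all assumed or synthetic.

```python
# Generic saturation-regime parameter extraction; W, L, Ci and the data are assumptions.
import numpy as np

W, L = 1e-3, 50e-6        # channel width and length [m] (placeholder values)
Ci = 7e-5                 # gate-dielectric capacitance per area [F/m^2] (~7 nF/cm^2, assumed)

def extract_vth_mu(v_gs, i_ds):
    y = np.sqrt(np.abs(i_ds))
    slope, intercept = np.polyfit(v_gs, y, 1)      # sqrt(|I_DS|) = slope*V_GS + intercept
    v_th = -intercept / slope                      # V_GS-axis intercept
    mu = 2.0 * L * slope**2 / (W * Ci)             # [m^2 V^-1 s^-1]
    return v_th, mu * 1e4                          # mobility reported in cm^2 V^-1 s^-1

# Synthetic p-type transfer data (mu = 0.5 cm^2/Vs, V_TH = -3 V) to check the routine.
v = np.linspace(-10.0, -4.0, 25)
i = -(W * Ci * 0.5e-4 / (2.0 * L)) * (v - (-3.0)) ** 2
print(extract_vth_mu(v, i))   # -> approximately (-3.0, 0.5)
```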
To evaluate the operation stability of the OTFT under long-term use, direct-current (DC) bias-stress measurement was also carried out for the fabricated device (Fig. 2e). Here, it is unnecessary for the bottom-gate portion of the designed OTFT to have stability under long-term continuous bias. This is because the intermittent operation of the bottom-gate device is sufficient for the readout of biosensing information from changes in the electrical potential of the top-gate terminal. Hence, we only evaluated the operation stability of the bottom-gate device under the application of continuous bias stress to the top-gate terminal. The transfer characteristics obtained under stress at a constant bias voltage (V TG = 0 V) were employed to calculate and evaluate the changes in threshold voltage (ΔV TH , Fig. 2f), field-effect mobility (μ/μ 0 , Fig. 2f), and drain current (I DS /I DS0 , Fig. 2g). Although a slight shift of the transfer characteristic was observed, the electrical parameters of the fabricated device were almost unaffected by the bias stress under ambient conditions.
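As a side note, if a measurable bias-stress drift were present, a common way to summarize it is a stretched-exponential fit of ΔV TH (t); the sketch below shows that generic recipe on synthetic numbers and is not the analysis performed in this work.

```python
# Generic stretched-exponential bias-stress fit; the data points below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, dv0, tau, beta):
    # dV_TH(t) = dV0 * (1 - exp(-(t/tau)^beta))
    return dv0 * (1.0 - np.exp(-(t / tau) ** beta))

t = np.array([0.0, 100.0, 300.0, 1000.0, 3000.0, 10000.0])    # stress time [s]
dvth = np.array([0.0, 0.02, 0.05, 0.09, 0.15, 0.22])           # hypothetical shift [V]

popt, _ = curve_fit(stretched_exp, t, dvth, p0=[0.3, 5000.0, 0.5])
print(dict(zip(["dV0 [V]", "tau [s]", "beta"], popt.round(3))))
```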
Mechanical durability of the flexible OTFT. To determine the feasibility of future wearable sensors based on the OTFT, the mechanical durability of the flexible OTFT (Fig. 3a) was evaluated with bending and compression stress tests. Initially, bending stresses were applied to the OTFT as shown in Fig. 3b. The transfer characteristics of the dual-gate OTFT scarcely changed during the application of tensile stress (Fig. 3c). Hence, no changes in V TH and mobility were obtained from the transfer curves of the ultra-thin-film device when bending stress was applied at each radius of curvature (Fig. 3d). Secondly, the electrical performance of the OTFT was measured under compressive strain (Fig. 3e). Although the device exhibited stable operation when applying stress with compression up to 15% (Fig. 3f, g), the device was broken under higher compressive stress (>20%). This phenomenon might be due to the materials employed in the flexible OTFT (vide infra). Fortunately, the durability observed upon applying each stress was sufficient for the electrical device to be mounted on human skin 20 . In other words, the device structure of the designed OTFT can be used as a wearable biosensing platform.
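As a rough, assumption-based orientation (not a value reported here), the surface strain of a film of total thickness d bent to a radius R is approximately d/(2R), which is why a device thinner than 3 μm experiences only a tiny strain even at millimeter-scale bending radii:

```python
# Back-of-the-envelope bending-strain estimate; thickness and radii are assumed values.
def bending_strain(thickness_m, radius_m):
    return thickness_m / (2.0 * radius_m)

for r_mm in (10.0, 1.0, 0.1):
    eps = bending_strain(3e-6, r_mm * 1e-3)
    print(f"R = {r_mm:5.1f} mm -> surface strain ~ {eps * 100:.3f} %")
```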
Confirmation of the biotin-streptavidin complexation on the top-gate electrode. Encouraged by the stable low-voltage operation of the OTFT, we decided to use the designed device for biosensing applications. Herein, we constructed a protein-sensing portion on the top-gate electrode by using the biotin-streptavidin complex as the sensing scaffold. Owing to the extremely high affinity of biotin (also known as vitamin B7) for streptavidin (K d ≈ 10 −15 M), the system is often utilized to evaluate the sensing abilities of newly developed sensors. Furthermore, antibodies immobilized on substrate surfaces can be aligned by the biotin-streptavidin system 21 , which indicates that the accuracy of immunosensors could be improved by using a biotin-streptavidin-complex-based scaffold.
To evaluate the sensing capability of the OTFT for a specific interaction, the sensing portion (the extended area of the top-gate electrode) of the OTFT was decorated with a biotin-terminated SAM (biotin-SAM). Then, a phosphate-buffered saline (PBS) solution with streptavidin was added dropwise to the top-gate electrode, and the transfer characteristic of the OTFT was subsequently measured. To confirm the decoration, we performed surface characterization of the top-gate electrode (see Supplementary Fig. 9). Water contact angle goniometry (WCAG) measurement revealed a marked increase in the hydrophilicity of the biotin-modified top-gate electrode (Θ W = 26°) upon its reaction with streptavidin (Θ W = 10°) in comparison with that of an untreated gold film (Θ W = 43°), indicating that the gold surface was covered with hydrophilic molecules (i.e., the biotin-SAM and/or the streptavidin protein) (see Supplementary Fig. 9). In addition, photoemission yield spectroscopy (PYS) measurement showed a slight shift toward a deeper work function from the untreated electrode (4.7 eV) to the SAM-modified electrode (4.8 eV), suggesting that the top-gate surface was covered with the electronegative group (i.e., the carboxy moiety of biotin) of the SAM (see Supplementary Fig. 9). The observed slope change in the PYS spectrum of the electrode after treatment with streptavidin implies that the emission of photoelectrons from the gold surface was suppressed by the macromolecules (streptavidin) adsorbed on the gold film. Furthermore, we carried out direct observation of the functionalized electrodes by atomic force microscopy (AFM) measurement (see Supplementary Fig. 10). As expected, the surface topography of the electrodes changed with each functionalization step. Importantly, the obtained changes in topography agreed with those in a previous study 22 . From these characterization results, we concluded that the biotin-streptavidin complexation on the electrode was successful. We then carried out a titration of streptavidin using the OTFT modified with the biotin-SAM. The fabricated OTFT responded predictably to the addition of streptavidin to the PBS solution at pH 7.0 (see Supplementary Fig. 11). The observed positive shift of the transfer characteristic with increasing streptavidin level showed that negatively charged molecules (i.e., streptavidin, with its mildly acidic isoelectric point, pI ≈ 5) were captured on the surface of the top-gate electrode.
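One common way to parameterize such a titration curve is a Langmuir-type binding isotherm relating the threshold-voltage shift to the analyte concentration, ΔV TH (C) = ΔV max ·C/(K D + C). The sketch below is only an illustration of that generic fit; the concentrations, voltage shifts, and fitted values are synthetic, not the streptavidin data of Supplementary Fig. 11.

```python
# Generic Langmuir-isotherm calibration fit; all data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, dv_max, kd):
    return dv_max * c / (kd + c)

conc = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0])      # analyte concentration [ug/mL]
dvth = np.array([0.0, 0.03, 0.10, 0.16, 0.27, 0.30])   # threshold-voltage shift [V]

(dv_max, kd), _ = curve_fit(langmuir, conc, dvth, p0=[0.35, 1.0])
print(f"dV_max = {dv_max:.2f} V, apparent K_D = {kd:.2f} ug/mL")
```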
Label-free electrical immunoassay for immunoglobulin G (IgG). Finally, to demonstrate the biosensing ability of the designed OTFT, we performed an immunoassay for IgG, which is known as a biomarker protein for rheumatoid arthritis, infectious diseases, and inflammations 23 . Although immunoassays are among the most general analytical methods for target proteins due to their high specificity, it is relatively complicated to use conventional immunoassays (e.g., enzyme-linked immunosorbent assays) to determine analyte information because of the necessity to label target proteins. Therefore, the development of label-free immunoassays utilizing flexible OTFTs is required to achieve the real-time monitoring of biomarker levels. In this study, a biotinylated polyclonal antibody was utilized as a label-free protein receptor for IgG on the top-gate sensing electrode treated with the biotin-terminated SAM/streptavidin complex (Fig. 4a). The protocol for the immunoassay is described in the "Methods" section. Figure 4b shows the titration results for the IgG protein, showing the observed shifts of V TH with increasing IgG level. Note that the quantification of interfacial states between the top-gate electrode and the aqueous solution in FET-based sensors is difficult. For example, electrolyte conditions strongly affect the interfacial potentials at the electrode surface. Although this point is well understood in the field of surface science, there may be cases where it differs from actual results 24 . This is because the reference plane on prepared electrodes is generally unclear owing to the complicated distributions in shape and charge of proteins. Hence, it is difficult to determine a valid value of the Debye length (i.e., the region influenced by electrolyte conditions) from the electrode unambiguously. Furthermore, the changes in electrical potential at the interface might also be triggered by the modulation of the dipole moment in the immobilized complex consisting of the antibody, streptavidin, and the biotin-SAM on the electrode 25 . Nevertheless, the charge of the captured molecules on the device is one way of accounting for the driving force of electrical responses in FET-based sensors. Hence, the shifts of V TH of the prepared device with increasing IgG level (Fig. 4b and Supplementary Fig. 12) are probably due to the shift of the electrical potential at the top-gate/solution interface; that is, a positively charged IgG protein might be captured on the top-gate electrode.
(Figure 3 caption, panels c-g: transfer characteristics and threshold-voltage changes of the dual-gate OTFT before, during, and after application of bending and compressive stress (compression rates of 0-15%); V TG was kept at 0 V; five measurement cycles per condition.)
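For orientation, the Debye screening length mentioned above can be estimated from the electrolyte ionic strength using λ_D = sqrt(ε_r·ε_0·k_B·T / (2·N_A·e²·I)). The sketch below uses a typical PBS ionic strength (~160 mM) as an assumption; it is not a value reported in this work.

```python
# Debye-length estimate for an aqueous electrolyte; ionic strengths are assumed values.
import math

def debye_length_nm(ionic_strength_molar, temp_k=298.0, eps_r=78.0):
    eps0, kb, e, na = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23
    i_si = ionic_strength_molar * 1e3               # mol/L -> mol/m^3
    lam = math.sqrt(eps_r * eps0 * kb * temp_k / (2.0 * na * e**2 * i_si))
    return lam * 1e9

print(f"PBS (~0.16 M):   lambda_D ~ {debye_length_nm(0.16):.2f} nm")
print(f"0.0016 M buffer: lambda_D ~ {debye_length_nm(0.0016):.1f} nm")
```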
Discussion
The fabricated OTFT was composed of printed materials and an ultra-thin-film substrate, and the device exhibited both operation stability and mechanical durability due to its adopted configuration (top-gate structure). First, the channel region of the OTFT was fully shielded by the top-gate dielectric (parylene film), resulting in the environmental robustness of the fabricated OTFT against the diffusion of water or oxygen into the semiconductor film 26 . In fact, even though all the measurements were performed under atmospheric conditions, the fabricated device showed excellent electrical performance characteristics. Second, the semiconductor layer was embedded at a neutral strain position in the fabricated device, i.e., sandwiched between both parylene layers (the top-gate dielectric and the substrate). Here, the calculated distance of the neutral strain position (r n ) from the bottom plane of the parylene film substrate was very close to the bottom-gate and top-gate channel regions (see Supplementary Fig. 7). In previous studies [27][28][29][30] , it was demonstrated that an appropriate sandwich structure can contribute to the suppression of compressive or tensile strain at the neutral strain position. Therefore, we attribute the excellent mechanical durability during device operation to this structure, which suppresses mechanical strains on both channel planes (i.e., the interfaces between the semiconductor and each dielectric layer), and conclude that the designed OTFT has excellent mechanical stability [27][28][29][30] . Because the maximum compression rate of human skin is around 20% 20,31 , our results demonstrate that the fabricated device has sufficient robustness against compressive stress, although the device broke above a compressive strain of 20%. Incidentally, the malfunction of the OTFT subjected to more than 20% compressive strain might have been due to contact between the polycrystalline organic semiconductor and the dielectric films 32 . These results show that the designed OTFT with the top-gate configuration has satisfactory characteristics for the development of wearable biosensors to monitor biomarker behaviors in real time.
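As a rough illustration of the neutral-strain-position argument above, the neutral plane of a layered stack under pure bending can be estimated as the stiffness-weighted centroid, z_n = Σ E_i·t_i·z̄_i / Σ E_i·t_i. The layer thicknesses and moduli below are placeholder assumptions, not the actual device parameters; they are chosen only to show how such an estimate can place the neutral plane near the channel interfaces.

```python
# Stiffness-weighted neutral-plane estimate; all layer values are placeholder assumptions.
def neutral_plane_nm(layers):
    """layers: list of (thickness_nm, youngs_modulus_GPa), ordered bottom to top."""
    z, num, den = 0.0, 0.0, 0.0
    for t, e in layers:
        zbar = z + t / 2.0          # centroid of this layer measured from the bottom plane
        num += e * t * zbar
        den += e * t
        z += t
    return num / den

stack = [(1000.0, 2.8),   # parylene substrate (~1 um, ~2.8 GPa, assumed)
         (450.0, 4.0),    # c-PVP bottom-gate dielectric (assumed)
         (50.0, 15.0),    # semiconductor layer (assumed effective values)
         (500.0, 2.8),    # parylene top-gate dielectric (assumed)
         (50.0, 78.0)]    # Au top-gate electrode (assumed)
print(f"estimated neutral plane ~ {neutral_plane_nm(stack):.0f} nm above the substrate bottom")
```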
In conclusion, we have successfully demonstrated the biosensing abilities of a printed dual-gate OTFT on a 1-μm-thick film. Our fabricated device electrically responded to increasing biomolecule concentrations on the sensing electrode. The sensitivity of the ultra-thin-film immunosensor device to IgG (linear response range: 0-10 μg/mL) was comparable to that of conventional immunoassays, including TFT-based immunosensors (see Supplementary Table 2). Notably, biosensing was accomplished even in the presence of a protein interferent, suggesting that the designed immunosensor can rapidly detect biomarkers in body fluids without any pretreatment. In addition, while the ionic compositions of the PBS solution and a biological fluid (e.g., sweat) are slightly different, the ionic strength of the PBS solution is much higher than the concentration of the ionic interferents in sweat (see Supplementary Table 3). Hence, the obtained result indicates that the fabricated device can sensitively monitor changes in marker levels in sweat on the human skin. Moreover, OTFT-based sensors can detect the electrical charges of captured proteins statically and directly. This suggests that OTFTs are more suitable platforms for constructing protein-sensing systems than conventional electrochemical sensors 8,10 . General electrochemical sensors detect electron transfer associated with the oxidation or reduction of target molecules. Therefore, these types of sensors can sensitively detect targets in an ideal electrolyte system. However, the electrochemical detection of protein markers is difficult for the following reasons: (1) Since proteins are composed of various amino acids with different electrical potentials, electrochemical peaks derived from these amino-acid residues interfere with each other. In addition, biological fluids containing proteins also contain various types of electrolytes at high concentrations. For these reasons, it is difficult to secure selectivity and sensitivity in electrochemical sensors for protein analyses. (2) Electrochemical stimulation breaks the higher-order structures and minute residue sequences of proteins. This means that the accurate analysis of protein markers by electrochemical methods is difficult. In contrast, FETs (including OTFTs) can detect the charge of captured proteins without irreversible electrochemical reactions. (3) Amperometric sensors are incapable of detecting redox-inactive proteins. In addition, although sensing electrodes should cover the redox potentials of target proteins, the electrochemical window of typical electrode materials is narrower than the high-potential peaks derived from proteins. Therefore, we conclude that OTFT-based sensors are more suitable platforms for the detection of proteins than electrochemical measurements. On the basis of these points, we believe that our proposed strategy of developing an OTFT-based immunosensor with printed components could pave the way to a new approach for disposable and wearable biosensors for the on-site detection of various biomarkers.
Methods
General. All solvents and reagents employed for this research were used as supplied. More details are given in Supplementary Methods.
Device fabrication. The device structure of the dual-gate OTFT is shown in Fig. 1. The c-PVP film was not only employed as a planarization layer for the surface of the parylene film, but also contributed to controlling the surface energy (droplet wettability) of the underlayer, enabling uniform and stable formation of laminated materials (i.e., a silver (Ag)-nanoparticle ink) on the substrate. Next, the c-PVP-film surface was treated with an O 2 plasma cleaner (Samco, PC-300). The treatment duration was 2 min and the plasma power was 100 W. Subsequently, a water-based Ag-nanoparticle ink (DIC, JAGLT-01) was used as the bottom-gate electrode and deposited on the substrate by using inkjet printer equipment (Fujifilm Dimatix, DMP-2831) with a 10 pL nozzle head. The dot-to-dot spacing for the droplet deposition was fixed at 60 µm, and the substrate temperature was kept at 30°C during printing to form the bottom-gate layer. To planarize the electrode surface, the substrate was then stored in an environmental control chamber (ESPEC, SH-221) at 95% RH (=relative humidity) and 30°C for 30 min 33 . To sinter the Ag nanoparticles, the printed bottom-gate layer was annealed at 120°C for 1 h under ambient conditions. Next, a second c-PVP film was deposited on the printed gate electrode by the spin-coating (rotation speed: 2000 rpm). The second c-PVP film was utilized as the bottom-gate dielectric layer (film thickness: 450 nm). A tetradecane-based Ag-nanoparticle ink (Harima Chemicals, NPS-JL) was applied as the drain and source electrodes and patterned on the c-PVP dielectric film by the inkjet printer. These electrodes were deposited under the same conditions as the bottom-gate electrode. The printed electrodes were sintered at 120°C for 30 min in ambient air. To improve the carrier injection efficiency from the source to the semiconductor, the surface of the printed electrodes was dipped in a 2-propanol solution of perfluorobenzenethiol (PFBT, 30 mM) for 5 min, and then the substrate was rinsed with 2-propanol and dried with N 2 gas 34 . After this, a blend ink consisting of 2,8-difluoro-5,11-bis(triethylsilylethynyl)anthradithiophene (diF-TES-ADT, 2 wt%) and polystyrene (PS, M w :~280,000, 0.5 wt%) in mesitylene was used as an organic semiconductor layer and deposited in the aperture region between the drain and source electrodes (channel area) by using dispensing equipment (Musashi Engineering, IMAGEMASTER 350). The diF-TES-ADT material was synthesized in accordance with the literature 35 . A parylene-based top-gate dielectric was then formed on the substrate (thickness: 500 nm). We utilized the parylene film as the top-gate dielectric because the CVD process can completely cover the rough surface of the underlayer, i.e., the crystalline semiconductor (diF-TES-ADT). Next, a gold (Au) film was formed on the parylene dielectric layer by resistance heating evaporation at a rate of 0.1-0.5 Å s −1 under a pressure of 10 −4 Pa (thickness: 50 nm). The Au thin film was utilized as extended top-gate and control-gate (pseudoreference) electrodes. The control gate was covered with Ag/AgCl paste. Finally, the completed device was peeled from the glass substrate. Importantly, the parylene film can be easily peeled from the supporting carrier owing to the high release performance of the amorphous fluoropolymer film (surface energy: about 8.1 mN m −1 ).
Modification of the immune-sensing electrode. First, an ethanol solution containing 1 mM biotin-SAM formation reagent (Dojindo, B564) was added dropwise on the Au top-gate electrode at room temperature (reaction time: 1 h). After washing the top-gate electrode with ethanol and water, a PBS (Sigma-Aldrich, D8537) solution of 500 μg/mL streptavidin was cast on the biotin-modified electrode for 15 min at room temperature. Afterward, a PBS solution of the biotinylated antibody for IgG (30 μg/mL) with 0.1 wt% bovine serum albumin (BSA) protein and 0.05 wt% Tween 20 surfactant was cast and incubated on the top-gate electrode modified with streptavidin at room temperature (incubation time: 30 min). Then, the sensing electrode was washed with the PBS solution. The characterization results for the extended top-gate electrode are summarized in the Supplementary Information.
Label-free immunoassay. The PBS solution of the IgG (0-80 μg/mL) with the BSA additive (0.1 wt%) was added dropwise on the extended top-gate electrode and incubated at 37°C (incubation time: 1 h). Then, the IgG was electrically detected by the dual-gate OTFT without further treatment.
Characterization of the fabricated device. The electrical properties of the fabricated device were analyzed using a semiconductor parameter analyzer (Keithley, 4200-SCS). All measurements were performed under atmospheric conditions. The basic characteristics of the fabricated OTFT were measured before the peeling of the device from the supporting substrate. To investigate the mechanical durability of the fabricated OTFT, the tensile strain or compressive stress was applied to the device by using a support stick or rubber (Fig. 3b, e). Here, the flexible OTFT was carefully peeled from the support substrate (the glass plate). However, ΔV TH of the OTFT was slightly different before (Fig. 2d) and after (Fig. 3d) the peeling process of the device from the supporting substrate. Although the OTFT device might be slightly damaged by the process, the peeled OTFT operated reproducibly in this study. The work function of the Au-sensing electrodes was analyzed by photoemission yield spectroscopy in air (PYS, Riken Keiki, AC-3). The surface wettability of the Au films was determined using a contact angle goniometer (CAG, Biolin Scientific, Theta T200). The surface topography of the sensing electrode at each modification step was collected by AFM (Shimadzu Corp., SPM-9700).
Progress in Psychological Science. The Importance of Informed Ignorance and Curiosity-Driven Questions
In recent decades we have seen an exponential growth in the amount of data gathered within psychological research without a corresponding growth of theory that could meaningfully organize these research findings. For this reason, considerable attention today is given to discussions of such broader, higher-order concepts as theory and paradigm. However, another area important to consider is the nature of the questions psychologists are asking. Key to any discussion about the scientific status of psychology or about progress in the field (scientific or otherwise) is the nature of the questions that inspire psychological research. Psychologists concerned about scientific progress and the growth of theory in the field would be well served by more robust conversations about the nature of the questions being asked. Honest, curiosity-driven questions—questions that admit to our ignorance and that express an active and optimistic yearning for what we do not yet know—can help to propel psychology forward in a manner similar to the development of theory or paradigm. However, existing as it does in the “twilight zone” between the natural sciences and the humanities, psychology is fertile ground for questions of wide-ranging natures, and thus the nature of progress in the field can be variously understood, not all of which will be “scientific.” Recent psychological research in three areas (cognition, memory, and disorders/differences of sex development) are discussed as examples of how curiosity-driven questions being asked from a position of informed ignorance can lead to progress in the field.
Introduction
It is "the right question, asked the right way, rather than the accumulation of more data, that allows a field to progress." (Firestein 2012, p. 98) In recent decades we have seen an exponential growth in the amount of data gathered within psychological research. Study after study is conducted to test various hypotheses, and yet in psychology at large there is little consensus regarding the core concepts in use, and there is a general absence of broader theories or overarching paradigms that would allow for the meaningful organization of research findings (Valsiner 2017;Zagaria et al. 2020). For this reason, considerable attention today is given to discussions of such broader or higher-order concepts as theory and paradigm, and rightly so. Within such discussions, some psychologists have argued that the field would be well served by adopting an inclusive, pluralistic, "meta-theory," such as evolutionary psychology (Zagaria et al. 2020). The main aim of the current piece is to suggest that, before looking for unifying ("scientific") theories, psychologists interested in the scientific progress of the field should look into the nature of the questions being asked and the degree to which those questions open the door to the possibility of scientific progress in the first place. We will examine some characteristics of the kinds of questions that allow for the possibility of scientific progress, and we will briefly look at some areas in psychology in which we see such questions being asked. Much psychological work-valid and valuable psychological work-is not scientific, and much good work would be difficult to link with any notion of progress in the field, scientific or otherwise. For many psychologists, and within a considerable amount of psychological work, the "soft" status of the field is not a concern. Thus, before we can assess the potential merits of various meta-theories, it would seem to be important that we first ask if we are generally studying psychology in a manner that is suggestive of a collective, scientific undertaking, and thus one in which inclusive, pluralistic, meta-theories would potentially be of use. By examining the nature of our questions, we can explore the degree to which the practices of psychologists point towards the possibility of various versions of scientific progress in the field, or perhaps in different, non-scientific directions.
As assessed along a number of metrics (e.g., the theories-to-laws ratio, the rate of consultation between researchers in the field, citation frequency of new researchers, citation concentration), psychology is considered a "soft science" relative to the natural sciences (Zagaria et al. 2020). In addition to these factors, the soft status of psychology also arises from the nature of the questions that give impetus to psychological research. In as far as those questions are curiosity-driven-which is to say that they profess an "informed ignorance" and an honest curiosity about the world-they afford us a way to meaningfully assess progress (scientific or otherwise). Honest, curiosity-driven questions are both a confession of ignorance and a profession of openness to the as-yet-unknown that lies beyond. "Thoroughly conscious ignorance is the prelude to every real advance in science" (James Clerk Maxwell quoted in Firestein 2012, p. 7).
While reflecting on the nature of particular questions or of questions in general can feel philosophically naïve or overwhelmingly complicated in turn, the matter is worth raising here in this brief piece as in practice psychologists rarely explicitly engage in such reflection. Psychologists rarely, in practice, publicly profess ignorance, and do so even more rarely as a badge of honor, as an assertion of hope, as a rallying cry. Rather than reflecting on the sense of wonder that can appear in our ignorance, psychologists often in effect seem to want themselves and others to wonder at what is already known. Rather than exploring the unknown, psychologists focus more on the methods we use to get to know it, and the theories we hope to develop to know it better. By focusing excessively on issues of method, methodology, hypothesis, theory, or goals, psychologists often lose sight of the object of study, and in the process the curiosity driving our questions about the world often becomes of secondary importance. Unless we are pushed forward by our ignorance and curiosity, and allow ourselves to be pulled forward by what we find outside of ourselves in response, the scientific status of psychology will remain standing on what Zagaria et al. (2020) call "clay feet." The question of psychology's scientific status and the possibilities of scientific progress in the field pertain not so much to whether or not psychology can stand on its own, as whether or not it has anywhere to go.
Progress Arising from Informed Ignorance
The notion of progress in science is both exceedingly popular and particularly slippery (e.g., Debrock and Hulswit 1994; Laudan 1978). Our current understanding of scientific progress developed in the West over the past several centuries, and has been understood in several different ways over that time (for a particularly helpful explication of that history, see Harrison 2015). In as far as we can speak of science centuries ago, the idea of scientific progress was initially understood as the perennially new, individual-level cultivation of internal virtue with the assistance of a semiotically pregnant world (as in Aquinas's understanding of scientia). More recently, it has come to be seen as a linear, collective-level accumulation of external, objective data from a mechanistic world. The philosopher Charles Taylor (2007) refers to these changes as part of the shift from an "enchanted" to a "disenchanted" world. Over the course of this transition, science came to be understood as an endeavor divorced from other areas of knowledge, such as those found in the humanities and the arts (Daston and Galison 2007). For many, science has come to be seen not just as a method, but also as a philosophy; a shift that others find problematic (e.g., Sheen 2019). Psychology emerged as an independent field in the nineteenth century during a particularly intense period of this transition, and debates regarding psychology's academic allegiances were there from the very beginning (Rzepa and Dobroczyński 2019; Valsiner 2012). Much like the historical development of nationally-conscious languages whose promoters made decisions regarding phonetics, grammar, and lexicon so as to differentiate their languages from those of neighboring peoples (Snyder 2003), psychologists worked very hard to distance the field from those with which it was the most similar, particularly philosophy (e.g., as seen in the work of Gustav Fechner on "psychophysics," in the famous laboratory of Wilhelm Wundt, or in the intellectual development and interesting career path of Władysław Heinrich between psychology, philosophy, and pedagogy; Danziger 1990; Rzepa and Dobroczyński 2019). Many early psychologists fought to steer psychology in the direction of the natural sciences, and in doing so, largely adopted a "disenchanted" understanding of progress.
Closely related to the notion of scientific progress, and similarly complex, is an activity that also appears at first glance to be patently simple, namely, asking questions (Firestein 2012). The nature of the questions we ask about the world gives shape to the nature of our scientific theories. Similarly, our broader understandings of the world give shape to the questions we ask. Thus, questions can be thought of as containing both deductive and inductive elements (and constituting a "chicken or egg" dilemma). Amidst the intellectual battles accompanying the historical "disenchantment" of the world, nineteenth century social scientists were well aware of how the questions we pose are expressions not only of what we don't know, but also of our intellectual allegiances to existing schools of thought (e.g., Sheen 2019). For example, Auguste Comte (1798-1857) wanted the new field of "social physics" (later "sociology") to be a purely "scientific" endeavor, free from metaphysical or philosophical musings. In its allegiance to empirical science and its rejection of metaphysics, Comte's positivism finds expression in the questions he believed social scientists should ask and therefore "takes the terrestrial horizon as its boundary, without prejudice to what might lie beyond it. Abandoning too high a level of speculation, it retired to lowlier positions of more immediate interest" (de Lubac 1995, p. 159). Thus, in the writings of many early social scientists-academics who were acutely aware that the intellectual allegiances and academic independence of their newly emerging fields were in doubt-we see a clear awareness that questions come in all shapes and sizes, and that not all questions are equal regarding the direction in which they propel us (moral and value judgments aside) (Sehon 2005). They were aware that it is not only important to ask questions, but also that it is important to ask questions about the questions we are asking.
Curiosity-Driven Questions
It is not easy to ask meaningful questions, but doing so is of fundamental importance for the advance of any scientific field. Neurobiologist, Stuart Firestein (2012), has argued that within the natural sciences it is "the right question, asked the right way, rather than the accumulation of more data, that allows a field to progress" (p. 98). In as far as we see its status as a "soft science" as problematic, we can say that psychology, on the whole, is struggling to ask meaningful questions in the right way, that is, in a way that would allow for scientific progress to be made. In as far as psychology can be considered a science or even an academic pursuit, it is important that psychologists reflect on the questions being asked in the field. Asking honest questions indicates an awareness of our ignorance, as well as a curiosity about what lies beneath.
We have been long aware of differences in quality between the questions we ask. For example, C. S. Peirce (1958) argued that real questions arise from genuine, "living" doubt, not what he calls "paper doubts," that is, questions that arise from the compulsive need to ask questions-as often found among academics-rather than from honest curiosity and an awareness of one's own ignorance. "According to Peirce, the doubt that brings forth inquiry must be genuine. It is not sufficient to say or write that one doubts; 'paper doubt' does not amount to legitimate disbelief" (Bergman 2009, p. 16). With this distinction in mind, we can also see that there is a subtle, but fundamental, difference between curiosity-driven questions and hypothesis testing. While not mutually exclusive, and ideally complementary, in practice hypothesis testing often overshadows genuine questions in psychological research. Not only is there the overwhelming pressure in academia to publish "significant findings," but those findings are generally expected to confirm the initial hypotheses. Within published psychological research there are few surprises. This applies to both quantitative and qualitative research (even in the absence of explicit hypotheses, as is often the case of the latter). In this climate, unsurprisingly, psychologists often become cheerleaders for their own hypotheses (or general expectations). Additional theoretical, political, professional, and social biases also abound (Campbell and Manning 2018;Duarte et al. 2015;Ferguson and Heene 2012;Gerber et al. 2001), making the majority of research presented in published articles or at conferences thoroughly predictable. While hypothesis testing can be helpful to the extent that it contains within itself the mechanisms for assessing "significance" and thereby for meaningfully judging outcomes, it is unhelpful to the extent that we favor an outcome before the fact. Thoroughly conscious ignorance requires the meaningful assessment of research outcomes, as afforded by hypothesis testing, but also an openness to, and genuine curiosity of, those outcomes. If the outcomes are essentially contained within the hypotheses, and known in advance, the process is not future oriented, and no progress can be made. Having lost sight of the tension within every honest question between what is known and what is unknown, hypotheses in psychology are often, in practice, rhetorical questions.
Another relatively small, but not unimportant, indicator of this can be found in the questions that appear in the limitations section at the end of research articles (rather than lying at the core of the work). What is more, the limitations or suggestions for future research are often actually backhanded suggestions in support of the given claims (e.g., usually by suggesting extensions or replications). Such practices in effect anchor psychological questions to the past and to the already-known, rather than projecting them into the future and the as-yet-unknown. In as far as we root for a hypothesis, and do so at the expense of genuine, curiosity-driven questions, we run the risk of silencing the data and ignoring the subject. This can also happen when our constructs are too rigidly defined, indicating that a lack of universal agreement about even core constructs in the field is not necessarily a bad thing, that is, as long as this flexibility allows us to turn meaningfully to the data, or rather, to the subject under study. In fact, intentionally loosening up on widely held definitions can shift priority from the researcher and the past (the known), to the subject and the future (the as-yet-unknown), and yet, "That principle is one that is difficult for many scientists to swallow, because it relaxes control, gets the experimenter out of the driver's seat, and leaves it up to the subjects […] to produce the results" (Firestein 2012, p. 97). The generative quality of honest questions, as well as the thorough excitement and deep discomfort that such inquiry can cause, is wonderfully expressed in the following statement made by Robert Boyle (1627-1691): "…an Inquisitive Naturalist finds his work to increase daily upon his hands, and the event of his past Toils, whether it be good or bad, does but engage him into new ones, either to free himself from his scruples, or improve his successes. So, that, though the pleasure of making Physical Discoveries, is, in it self consider'd, very great; yet this does not a little impair it, that the same attempts which afford that delight, do so frequently beget both anxious Doubts, and a disquieting Curiosity." (quoted in Hunter 2000, p. 13).
Criteria for Judging the Data
Since at least the times of ancient Greece we have been wrestling with the "dilemma of dualism," i.e., how it is that the seeker of knowledge can come to know that which is unknown (Debrock and Hulswit 1994). How can one recognize that which one has never yet seen? While not resolving this age-old riddle, we can recognize that asking questions presupposes the ability to assess the meaningfulness of what one hears in response. It presupposes the (at least partial) intelligibility of what follows the question; it presumes the ability to see in a response an answer. In this sense, questions contain broader conceptualizations that meaningfully combine individual data. While grounded in what we already know (or think we know), asking curiosity-driven questions is also essentially about the future and our confident movement into that future (Firestein 2012;Valsiner 2017). Honest, curiosity-driven questions-questions that admit to our ignorance and that express an active yearning for what we do not yet know-can help to propel psychology forward in a manner similar to the development of theory or paradigm. The advances that we see in the natural sciences have come from the ability of scientists in those fields to ask honest, curiosity-driven questions and to meaningfully judge the various responses they get from the natural world in reply. Answered questions inspire new questions, more data are gathered, theories and paradigms emerge to meaningfully organize and compare the "answers" we have found, and the march of progress is set afoot. Such progress is seen relatively rarely in psychology, at least in part, because of the nature of the questions being asked. The range of questions asked by psychologists reflects the wide range of epistemological and ontological positions professed in the field. For example, broadly speaking, there are psychologists who understand psychology to be an empirical, experimental science (e.g., "all psychology must be based on experiment, and that it is quite improper to set aside an area of study labeled 'experimental' as if to suggest that other areas of psychology exist which are not experimental"; Bugelski 1951), just as there are those who believe it can never be an empirical science (e.g., "an objective, accumulative, empirical and theoretical science of psychology is an impossible project"; Smedslund 2016, p. 185). These two broad camps exist, and will continue to exist, convinced of the value of their enterprise (Mazur and Watzlawik 2016). Gustaw Ichheiser (1897-1969) saw this fundamental tension not as an impasse, but as an opportunity: "[S]ocial scientists should, in my opinion, not aspire to be as 'scientific' and 'exact' as physicists or mathematicians, but should cheerfully accept the fact that what they are doing belongs to the twilight zone between science and literature" (cited in Rudmin et al. 1987, p. 171).
Psychology belongs to this twilight zone in part because the body of criteria psychologists use for perceiving data as answers to their questions is broad, inclusive, and often shifting-and arguably more so than in other fields. While literary scholars generally do not use "scientific" criteria to study literature, and physicists generally do not use the judgement criteria of literature within physics, psychologists more readily and more often shift between the criteria associated with one or the other of what C. P. Snow (1959) called "the two cultures" (i.e., the sciences and the humanities). While many, such as Ichheiser, see this as a strength of psychology, it is also arguably responsible for a considerable degree of confusion (Kagan 2009). Although the general conceptual separation of the "two cultures" is a relatively recent historical development (Harrison 2015), and one that many have argued is not as clear-cut even today as may appear (Gould 2003;McAllister 1996;Sullivan 1933), in as far as we give credence to this and similar distinctions between fields of study, we ought to take seriously the differences in judgment criteria both between and within fields. This is particularly important within psychology precisely because psychologists often oscillate between various judgment criteria, none of which has primacy within the field. Judging the meaningfulness of data involves the tension between the restrictiveness of judgment criteria that focus our vision and an openness of those criteria so as to not lose sight of their limitations relative to the object of study, however, this essential tension can lose its "bite" if one can in effect shape that balance at will.
As suggested by Ichheiser and others, flexibility in method and methodology within psychology can be a strength, that is, as long as it opens up new avenues for fruitfully studying the subject. After all, polyvalence is understood to be part of our psychological lives (Boesch 1991). However, there is a double-sided risk that comes with this flexibility. On the one hand, we can switch between judgment criteria too often and too easily, thereby ultimately devaluing what the subject says to us in our research; after all, criteria for determining "significance" (quantitative or otherwise) are ultimately intended to serve our ability to perceive the subject in ways that would otherwise be unavailable to us. On the other hand, we can swing too far in the other direction, becoming too wedded to any one judgment criteria over others, thereby undercutting the richness of our conceptual toolkit and, more importantly, of the subject. When facing the complexities of the world, judgment criteria are helpful precisely because they allow us to perceive the world, to perceive the subject, in ways that would otherwise have escaped our awareness. Balance in the use of judgment criteria saves us from both "trivial order" and "barbaric vagueness"; from the "extremes of premature closure and narrow-mindedness on the one hand, and interminable indecision and 'broadmindedness' on the other" (Aeschliman 1983, p. 69-71). To simplify matters somewhat, we see this balancing act illustrated in the tension that often emerges in psychology between qualitatively-minded researchers and quantitatively-minded researchers. Qualitatively-minded researchers often criticize quantitative, experimental research as being overly restrictive or even closed to the subject ("forcing the subject into the empirical methods"), while quantitatively-minded researchers often criticize qualitative research as using weak or "wishy-washy" criteria for evaluating the subject. By holding onto our techniques too tightly, we ignore other options, overlook the limits of method, and ultimately restrict potential discoveries. By letting go too much of such criteria, we ignore the power of those techniques and thus we deny the subject the chance to speak to us through them. By either overly restricting or overly expanding our criteria we in effect lose sight of the subject.
Examples of Curiosity-Driven Questions
To suggest that we reflect more on the nature of the questions we ask, or to suggest the value of curiosity-driven questions for psychological science, is not to suggest the value of any concrete question(s) in particular. What is more, given the nature of curiosity-driven questions arising from informed ignorance, it is impossible to identify such questions on face value (that is, on the basis of any particular wording). In other words, while we have been referring to this as a "question," or rather a type of question, it is in reality more of an approach or practice. Just as the question "Why am I sick?" can be scientific, moral, rhetorical, etc. (Sehon 2005), any particular motivating question posed by psychologists can be of various natures. Similarly, as curiosity-driven questions are defined by their relation to informed ignorance and their receptiveness to the subject, they necessarily extend in time and space beyond what we usually think of as the question itself. Thus, any example of such a question would need to explore the foundations provided by informed ignorance and the manner in which the question indicates a responsiveness to the subject (and the "data"). For this reason it is perhaps more accurate to think of them as research practices or processes.
An example of such a process can be seen in psychological research on consciousness. The notion of consciousness is of fundamental importance within psychology, but also in broader, non-academic discussions. It is also one of those often used, but incredibly "fuzzy," concepts within the field (Zagaria et al. 2020). One area of research within this general topic concerns whether non-human animals have what we call consciousness. As we remain somewhat unsure about just exactly what this term means, it is particularly difficult to look for it empirically. However, as researchers have learned to let go of their "human biases" and listened to their non-human subjects, they have expanded their view on how consciousness might "look" in non-human animals and how we might study it there (Firestein 2012). For example, rather than expecting consciousness to appear like a conversation between two adult humans, researchers have been making strong claims for the presence of consciousness in a wide range of animals (e.g., Plotnik et al. 2010). In a related line of research, work on theory of mind (ToM) continues to produce new tests to identify the appearance and development of such elements of consciousness in children at various ages (Jakubowska and Bialecka-Pikul 2020;Wellman et al. 2001). This has required researchers to in effect stop thinking like adult scientists and to start thinking like younger and younger children. By listening to how children see and interact with the world, psychologists have come to better understand the development of the perception of mental states, both one's own and those of others. In the case of both children and animals, psychologists have been asking questions out of a position of informed ignorance and they have been listening to their subjects. As a result, not only is it fairly safe to say that progress has been made, but new questions have emerged, and continue to emerge, as a result. Such research also builds the collective body of knowledge on the subject, a hallmark of modern science (Harrison 2015), and such curiosity towards the subject makes the research of others relevant, especially more recent research-another hallmark of "hard" science (Zagaria et al. 2020).
Another area of research that constitutes a nice example of the kinds of questions here under consideration concerns the nature of memory. Psychologists have been interested in the nature of memory since the very beginnings of the field. When looking across the history of memory research, hiccups and oddities aside, one would be hard-pressed to defend the position that progress has not been made, even "scientific" progress. Across a wide range of research methodologies and methods, a diverse group of researchers have expanded our knowledge of the complex, plastic, and dynamic nature of memory (e.g., Lamprecht and LeDoux 2004). Researchers have been open to their subject(s) and, as a result, have come to suggest radical changes to our conceptualizations of memory, or rather, memories (Bourtchouladze 2004). That the issue of memory continues to puzzle and fascinate researchers indicates not the scientific failure of the field, but the expansive, generative nature of honest inquiry.
Another example can be seen in research over the past several decades on gender within cases of "Disorders/Differences of Sex Development" (DSD; formerly called intersex, hermaphroditism, pseudohermaphroditism, sex errors of the body, or ambiguous genitalia). DSD should not be confused with what is known as "gender dysphoria" (APA 2013), that is, cases in which a person believes their gender identity to not match their biological sex or the gender to which they were assigned. DSDs have been defined as "congenital conditions in which development of chromosomal, gonadal, or anatomical sex is atypical" (Lee et al. 2006). Cases of DSD challenge traditional thinking regarding gender identity development by showing that such development is complex, involving numerous prenatal and postnatal factors. For example, someone could be born with 46,XY chromosomes (i.e., a male karyotype), internal testes and no internal female reproductive organs, but female-appearing external genitalia. Such a person is likely to be assigned female at birth and/or thought of as female by their family, thereby beginning a process of female gender identity development which is discordant with their chromosomal and gonadal status. What is more, in the presence of complete androgen insensitivity syndrome (CAIS), such an individual may come to further develop an external female phenotype (e.g., developing breasts). Similarly, there is a large and growing body of research on "classical" congenital adrenal hyperplasia (CAH) in individuals with a 46,XX karyotype, wherein these individuals show marked hormone abnormalities (e.g., related to androgens), which result in various forms and degrees of "masculinization" (Meyer-Bahlburg 2014). DSDs raise fascinating questions regarding the nature of gender, not to mention the countless other areas of life connected thereto (e.g., sexual attraction, sexual functioning, reproduction, various non-sexual behavioral patterns, self-image, cognitive functioning, and mental health).
Scientific knowledge regarding various DSDs has increased over the past several decades (e.g., Lee et al. 2006; Lee et al. 2016), as has our understanding of the biological, psychosocial, and cultural factors that contribute to what we generally call gender, including "gendered behavior" and "gender identity" (Meyer-Bahlburg 2014). By better understanding DSDs in particular, we are able to better understand gender in general. What is more, not only does there remain a great wealth of ignorance in this area from which curiosity-driven questions can arise, but the more "informed" that ignorance becomes, the more curiosity-driven questions emerge. However, psychological research in this area is not only concerned with scientific progress, as generally understood, but also with issues related to such matters as quality of life, quality of relationships, purpose, sense of self, and personal growth, issues which may or may not be what we would often consider "scientific" considerations. What is more, there is also a considerable and growing amount of activism related to DSDs. Progress can mean different things to different people working in this field. In other words, in cases of DSD (as elsewhere in psychology, including work on consciousness and memory), the questions psychologists ask are not necessarily those that lead to scientific progress. In fact, psychological work in this area is often clearly non-scientific. There also remains considerable disagreement regarding just what the science is supposed to say beyond the realms of method. Taken together, this is an example of the "twilight zone" of which Ichheiser spoke and with which he was both comfortable and confident.
Within research on all three of the areas discussed above we not only find strong claims that scientific progress has been made, but also strong claims that a considerable portion of psychological work is not scientific. What is more, there are also disagreements about the ways in which, or the degree to which, science can, or should, inform other areas of our lives. Again, this is not a judgment regarding the value of scientific or non-scientific work, but merely to point out the existence of differing undertakings within the field. One such division can be found between the assertion of the known that we see in advocacy and the assertion of the unknown that we see in science. Much has been written about the complex relationship between science and advocacy, and it is a topic that extends well beyond the scope of the current piece. However, for practical purposes it is worth reflecting on a basic difference between science and advocacy, even if that involves somewhat stereotyped and simplified images of both. Broadly speaking, while advocacy involves the promotion of what we know, or think we know, science promotes our ignorance and directs us towards what we do not yet know-and it does so again and again. Advocacy can stymie scientific progress by restricting what can and cannot be asked, by pushing an agenda regardless of what research might be produced-in other words, by restricting our access to the subject. However, advocacy can also encourage researchers to pose new, curiosity-driven questions that are responsive to the subject. Just as advocacy can inspire or restrict science, science can support and/or hamper advocacy. There is no inherent moral value to this conflict, as science and advocacy can work both wonders and horrors. While the intertwining of science and advocacy is certainly much more complex than this, this distinction remains of considerable practical use, especially when looking at the questions that drive psychological work.
The point here is that if we listen closely to the questions psychologists ask today we will often hear many different theoretical positions and numerous fundamentally different understandings of progress (e.g., see the questions posed by 33 "influential psychologists" in APA 2018). In that regard, such distinctions as the science/advocacy difference (among others) remain of practical utility, at least in as far as psychologists are concerned with the scientific development of the field, or with progress in the field (however defined). As discussed above, within discussions of the scientific status of psychology it is worth reflecting on the assumptions contained in the questions posed, the degree to which they are future-oriented, and the degree to which they are responsive to, and receptive of, the subject. At the same time, not all meaningful and valuable questions need to lay the foundations for scientific progress, or any kind of progress for that matter (e.g., Sehon 2005). Nevertheless, questions remain of fundamental importance for science and for the notion of progress, scientific or otherwise.
Conclusion
Ideally, theory both grounds us in what we know and propels us into the unknown. In light of this, it is certainly worth reflecting on, and attempting to readjust, the current imbalance in psychology between data and theory. At the same time, it is worth examining another important part of the research process, namely, the nature of our questions. What is it that captures our genuine wonder? Where does the recognition of our ignorance simultaneously evoke a belief that we can overcome it? It is there that we find the liminal space between the known and the as-yet-unknown; it is there that we can begin to meaningfully trace potential progress and to identify the nature of that progress (not all of which will be "scientific" as currently understood). In our concern for the progress of "psychological science," it is worth reflecting on the potentially progressive quality of our questions. Do they reflect an honest curiosity about the world? Are they boldly expressive and humbly receptive? To the extent that they balance between the known and the unknown, and between the "trivial order" and the "barbaric vagueness" of the given field, it is important that we recognize that not all questions are equal. Questions come in a wide range of types and they serve all sorts of ends, not all of which bespeak the possibility of scientific progress as currently understood. This is not in itself problematic, far from it. Yet, for psychologists concerned with the scientific status of the field or with the possibility of its progress as an empirical science, it is worth asking if our questions provide fertile ground for the notion of such progress in the first place.
Funding There is no funding to report.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflict of interest.
Ethical Approval Not applicable.
Informed Consent Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The Effect of Interaction NGF/p75NTR in Sperm Cells: A Rabbit Model
Background: Nerve Growth Factor (NGF) plays an important role in the reproductive system through its receptor’s interaction (p75NTR). This paper aims to analyze the impact of NGF p75NTR in epididymal and ejaculated rabbit semen during in vitro sperm storage. Methods: Semen samples from 10 adult rabbit bucks were collected four times (n = 40) and analyzed. NGF was quantified in seminal plasma, and the basal expression of p75NTR in sperm was established (time 0). Moreover, we evaluated p75NTR, the apoptotic rates, and the main sperm parameters, at times 2–4 and 6 h with or without the administration of exogenous NGF. Results: Based on the level of p75NTR, we defined the threshold value (25.6%), and sperm were divided into High (H) and Normal (N). During sperm storage, p75NTR of H samples significantly modulated some relevant sperm parameters. Specifically, comparing H samples with N ones, we observed a reduction in motility and non-capacitated cell number, together with an increased percentage of dead and apoptotic cells. Notably, the N group showed a reduction in dead and apoptotic cells after NGF treatment. Conversely, the NGF administration on H sperm did not change either the percentage of dead cells or the apoptotic rate. Conclusion: The concentration of p75NTR on ejaculated sperm modulates many semen outcomes (motility, apoptosis, viability) through NGF interaction affecting the senescence of sperm.
Introduction
In addition to the already well-known role of Nerve Growth Factor (NGF) as a neurotrophin involved in the regulation of neuronal survival and differentiation [1], recent studies showed a ubiquitous distribution of NGF in different body districts, including the reproductive system. This uncommon localization, and its suggested biological significance, is shared among several animal species. In particular, in bovines, NGF exerts a luteotropic effect [2], and in llamas, NGF was identified as an ovulation-inducing factor protein [3,4]. NGF modulates endocrine events, controlling reproduction in both induced [5] and spontaneous [6] ovulatory species. In particular, in the reproductive system of male rabbits, we demonstrated that NGF is expressed by different cell types: Leydig cells, Sertoli cells, germinal cells, and prostate cells [7].
NGF may trigger several processes generally mediated by the interaction with two receptors: tropomyosin receptor kinase A of 140-kDa (TrKA) and the NGF receptor of 75-kDa (p75 NTR ) [8], with high and low affinity, respectively. TrKA transduces the classical pathways, commonly downstreaming growth factor receptors: mitogen-activated protein kinase (MEK/MAPK), extracellular signal-regulated kinase (Erk), phosphatidylinositol 3-kinase (PI3K), and phospholipase C gamma [9]. The p75 NTR is a cell death receptor, a member of the tumor necrosis factor receptor superfamily [8], which is able to induce either (i) apoptosis, through c-Jun N-terminal kinases/caspases 3, 6, and 9, or (ii) survival, via nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB) [10], both depending on the binding with TrKA. Previous studies described the role of NGF and its receptors on sperm properties, focusing on its dose- [11,12] and time-dependent effects on sperm survival, apoptosis, and motility [13]. As shown also for other non-neurological compartments, the biological functions of NGF in sperm are mainly related to the interactive receptors involved, i.e., TrKA is maintained nearly stable during sperm storage, and modulates viability and sperm acrosomal reaction [14], whereas p75 NTR strongly increases throughout an 8 h storage, which seems correlated to its apoptotic role, as demonstrated in other cell lines, e.g., brain [15][16][17].
The modulating role of NGF on reproductive activity has been shown in several animal model studies and humans [6,[18][19][20]. Specifically, in rabbit bucks, we previously described NGF and its receptors in epididymal and ejaculated sperm, with p75 NTR located mainly in the midpiece and tail, whereas TrKA resides in the head and acrosome [13]. In addition, the concentration and functions of seminal NGF showed some discrepancies among different animal species [7,21,22]. For instance, the age of rabbit bucks, the collection rhythm, and other not-yet-defined factors deeply affected the levels of NGF and its receptors in the semen [23].
This paper aims at deepening our understanding of some factors (i.e., receptors proportion, endogen NGF concentration, exogenous NGF addition) involved in the role of NGF in the rabbit reproductive system, analyzing the concentration of NGF, and focusing on the impact of p75 NTR in epididymal and ejaculated rabbit semen. This work may contribute to the understanding of the role and biological effects of NGF and p75 NTR in issues of fertility and reproduction.
Materials and Methods
If not otherwise specified, all chemicals were purchased from Sigma Aldrich (St. Louis, MO, USA).
Experimental Design
The experimental design comprised two experiments (experimental design figure generated with Biorender.com, last accessed on 18 January 2022). Exp 1: quantification of NGF concentration and p75 NTR expression in epididymal and in ejaculated sperm, and the effect of p75 NTR expression at time 0 on the main sperm outcomes. Exp 2: effects of exogenous NGF (100 ng/mL) during storage (up to 6 h) on ejaculated sperm; the same semen samples were divided into two aliquots, one used as control and the other supplemented with NGF (storage times: 2-6 h).
Animals and Semen Sampling
Ten healthy New Zealand white rabbit bucks aged from 10 to 24 months were raised in the experimental facility of the Department of Agriculture, Food and Environmental Science of Perugia (Italy) and used for semen collection. Four consecutive semen samples were collected (two per week) for a total of 40 samples (10 rabbit bucks × 4 replicates).
Animals were not subjected to stressful treatment that caused pain or suffering, and semen collection was performed weekly using a doe-like dummy and an artificial vagina maintained at 37 °C internal temperature. Specific guidelines for rabbit bucks [24] and the International Guiding Principles for Biomedical Research Involving Animals [25] were followed. Animals were bred in compliance with the 2010/63/EU Directive transposed into the 26/2014 Italian Legislative Decree.
The collection of epididymal sperm was performed in a slaughterhouse, from three rabbit bucks of the same genetic strain, through washing with 1 mL of saline solution for each epididymal region (from caput, corpus, and cauda) and directly recovered in 1.5 mL tubes ready for FACScan analysis.
Semen Handling
Immediately after collection, the sperm concentration was measured using a Thoma-Zeiss counting chamber and a light microscope (Olympus CH2, Tokyo, Japan) with a 40× magnification.
An aliquot of the semen (about 0.5 mL) was centrifuged at 700× g for 15 min to obtain seminal plasma (SP) and to quantify NGF concentration by using ELISA.
An aliquot of each semen sample derived from the different collections was diluted with a modified TALP [13] to achieve a final concentration of 10⁷ sperm/mL. The effect of storage was evaluated at different time points (0, 2, 4, and 6 h) in semen samples supplemented at time 0 with 100 ng/mL exogenous NGF (Merck, Milan, Italy), according to the dose-response curve previously described [13], and compared to vehicle samples diluted with PBS instead of NGF. The samples were evaluated for motility, viability, and necrotic/apoptotic processes as described below. Furthermore, receptor expression was also evaluated.
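The dilution step is simple proportional arithmetic: from the concentration measured in the counting chamber, the volume of modified TALP needed to bring an aliquot to 10⁷ sperm/mL follows directly. A minimal sketch, with hypothetical numbers (the function name and example values are illustrative, not from the study):

```python
def talp_volume_to_add(measured_conc, aliquot_ml, target_conc=1e7):
    """Volume of extender (mL) to add so that an aliquot reaches the target
    concentration (sperm/mL). Assumes a simple additive dilution."""
    final_volume_ml = measured_conc * aliquot_ml / target_conc
    return final_volume_ml - aliquot_ml

# Hypothetical example: 0.2 mL of raw semen at 3.5 x 10^8 sperm/mL
print(f"Add {talp_volume_to_add(3.5e8, 0.2):.1f} mL of modified TALP")
```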
NGF Quantification in Seminal Plasma
Seminal plasma was collected from the above-described semen samples. Semen collection was performed by means of an artificial vagina kept at 37 °C and filled with water heated to 39-40 °C; seminal plasma was then obtained by centrifugation at 700× g for 15 min. The NGF concentrations in seminal plasma were detected by enzyme-linked immunosorbent assay (ELISA), according to the manufacturer's instructions for the DuoSet ELISA for NGF (R&D System, Milan, Italy). The standard curve demonstrated a direct relationship between optical density and NGF concentration. All samples were run in duplicate. The NGF concentration was expressed in pg/mL (detection limit 31.25 pg/mL) [7].
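Reading concentrations off an ELISA standard curve amounts to averaging the duplicate wells and interpolating their optical density against the standards. The sketch below is a simplified linear-interpolation illustration with hypothetical standard and sample values; kit software typically fits a four-parameter logistic curve instead:

```python
import numpy as np

# Hypothetical standard curve: NGF standards (pg/mL) and their optical densities
standards_pg_ml = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])
standards_od = np.array([0.05, 0.09, 0.17, 0.33, 0.62, 1.15, 2.05])

def ngf_from_od(od_duplicates, detection_limit=31.25):
    """Average duplicate wells and interpolate against the standard curve.

    Values below the kit detection limit are censored at that limit; readings
    outside the standard range are clamped to the curve endpoints by np.interp.
    """
    mean_od = np.mean(od_duplicates)
    conc = np.interp(mean_od, standards_od, standards_pg_ml)
    return max(conc, detection_limit)

# Hypothetical duplicate ODs for one seminal plasma sample
print(f"NGF = {ngf_from_od([1.40, 1.50]):.0f} pg/mL")
```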
Motility (CASA Analysis)
Automated semen analysis (model ISAS, Valencia, Spain) was performed according to previously assessed parameters [26]. Briefly, two drops of the semen samples and three microscopic fields were analyzed, and the kinetic properties of at least 300 sperm were recorded. All kinematic parameters were verified, but only the leading indicators (motility rate expressed as % of total motile cells on the total sperm and curvilinear velocity, VCL, expressed as sperm speed [µm/sec] in the curvilinear trajectory) were reported.
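CASA kinematic indicators are computed from the tracked head positions of each sperm. As an illustration, curvilinear velocity (VCL) is the point-to-point path length divided by the elapsed time; the sketch below assumes hypothetical (x, y) coordinates in micrometres sampled at a fixed frame rate and is not the ISAS implementation itself:

```python
import numpy as np

def curvilinear_velocity(xy, frame_rate_hz):
    """VCL (um/s): total point-to-point track length divided by elapsed time.

    xy            : (n_frames, 2) array of sperm head coordinates in micrometres
    frame_rate_hz : acquisition frame rate of the CASA system
    """
    xy = np.asarray(xy, dtype=float)
    step_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    elapsed_s = (len(xy) - 1) / frame_rate_hz
    return step_lengths.sum() / elapsed_s

# Hypothetical 5-frame track sampled at 25 Hz
track = [(0, 0), (4, 3), (9, 3), (13, 7), (18, 7)]
print(f"VCL = {curvilinear_velocity(track, 25):.1f} um/s")
```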
Sperm Capacitation and Acrosomal Reaction
The chlortetracycline (CTC) fluorescence assay was performed as reported by Cocchia et al. [27]. Briefly, a solution composed of 45 µL of sperm suspension, 45 µL of CTC stock, and propidium iodide at a final concentration of 1 µg/mL was added to 1.5 mL foil-wrapped Eppendorf tubes. The cells were fixed by adding 8 µL of 12.5% paraformaldehyde and one drop of 1,4-diazabicyclo[2.2.2]octane dissolved in PBS to delay the fading of fluorescence. The CTC staining of the viable sperm was examined under an epifluorescence microscope (OLYMPUS-CH2, excitation filters 335-425 and 480-560 nm for CTC and propidium iodide detection, respectively), detecting different sperm fluorescence patterns: fluorescence over the entire head (non-capacitated cells; NCP); a non-fluorescent band in the post-acrosomal region of the sperm head (capacitated cells; CP); or absent fluorescence on the sperm head (cells with an acrosomal reaction; Ar). Two drops for every sample (n = 40) were analyzed. Three hundred sperm cells per sample were counted.
Viable, Apoptotic, and Necrotic Sperm Analysis
The externalization of phosphatidylserine was assessed with the Annexin V Apoptosis Detection Kit (K101-100, BioVision, Waltham, MA, USA). The aliquots of semen samples were washed with PBS and resuspended in 500 µL of Annexin-binding buffer (about 1 × 10⁵ cells). After the addition of 5 µL of FITC-conjugated Annexin V (AnV-FITC) and 5 µL of Propidium Iodide (PI, 50 µg/mL), the samples were incubated and then analyzed by a flow cytometer (FACScan Calibur, Becton Dickinson, Franklin Lakes, NJ, USA). The gating strategy was performed as follows: an FSC/SSC dot plot was obtained from each semen sample; a "flame-shaped region" (R1) was established to exclude debris, large cells, and aggregates; 10,000 live-gated events were collected for each sample; all samples were run in duplicate. The combination of both AnV and PI allowed for the discrimination of four sperm categories: AnV−/PI− viable, AnV+/PI− early apoptotic, AnV+/PI+ late apoptotic, and AnV−/PI+ necrotic cells. The analysis was performed with CellQuest Software (Becton Dickinson).
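The four-way AnV/PI classification reduces to a simple thresholding rule on the two fluorescence channels. A minimal sketch in Python, assuming hypothetical fluorescence thresholds and simulated events (in practice the gates are set on the cytometer against unstained and isotype controls):

```python
import numpy as np
import pandas as pd

def classify_sperm(anv_fitc, pi, anv_threshold=100.0, pi_threshold=100.0):
    """Assign each event to one of the four AnV/PI categories.

    anv_fitc, pi : arrays of fluorescence intensities (arbitrary units)
    The threshold values are placeholders; real gates come from controls.
    """
    anv_pos = np.asarray(anv_fitc) > anv_threshold
    pi_pos = np.asarray(pi) > pi_threshold

    labels = np.empty(anv_pos.shape, dtype=object)
    labels[~anv_pos & ~pi_pos] = "viable"           # AnV- / PI-
    labels[anv_pos & ~pi_pos] = "early apoptotic"   # AnV+ / PI-
    labels[anv_pos & pi_pos] = "late apoptotic"     # AnV+ / PI+
    labels[~anv_pos & pi_pos] = "necrotic"          # AnV- / PI+
    return labels

# Example with simulated events (10,000 live-gated events per sample)
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "anv": rng.lognormal(4, 1, 10_000),
    "pi": rng.lognormal(4, 1, 10_000),
})
events["category"] = classify_sperm(events["anv"], events["pi"])
print(events["category"].value_counts(normalize=True).round(3))
```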
p75 NTR FACScan Analysis
The p75 NTR receptors were evaluated, as described in our previous paper [13], in ejaculated semen immediately after collection (n = 40) and in epididymal sperm cells (n = 3). Briefly, aliquots of 10⁶ sperm cells/mL were placed in FACScan tubes and preincubated with PBS/BSA for 30 min at 4 °C. After washing procedures (three times in PBS supplemented with 0.5% BSA), sperm cells were incubated at 4 °C for 1 h in PBS/0.5% BSA containing 2 µg/10⁶ cells of anti-p75 NTR (MA5-13314, Thermo Fisher Scientific, Waltham, MA, USA), then washed and labelled with a secondary antibody (ab6785, FITC-conjugated, for p75 NTR; Abcam, Cambridge, UK) for 30 min at 4 °C. p75 NTR -positive cells were quantified by FACS analysis. An FSC/SSC dot plot was obtained from each semen sample. A "flame-shaped region" was established to exclude debris, large cells, and aggregates. The count of p75 NTR+ cells was executed by plotting green fluorescence (FL1)/FITC. Briefly, the gating strategy was as follows: FSC/SSC flame shape, dot plot TrKA/SS, gating of TrKA+ cells, and histogram of the distribution of p75 NTR+ staining on TrKA+ cells (p75 NTR+ vs. p75 NTR−). Ten thousand live-gated events were collected for each sample and isotype-matched antibodies were used to determine binding specificity. The results were expressed as percentages of positive cells/antibodies used for staining (% positive cells). All experiments included a negative control incubated with Normal Goat IgG Control Mouse IgG Isotype Control (# 31903, Thermo Fisher Scientific, for p75 NTR). The analysis was performed with CellQuest Software (Becton Dickinson).
Statistical Design
Exp 1. Diagnostic graphics and the Kolmogorov-Smirnov and Levene's tests were used for testing assumptions and outliers. Because non-normality of the data was detected for CP and motility, log transformation was used for analysis. The p75 NTR -positive cells variable at time 0 was categorized using the median as a cut-point to create two groups [28,29] called Normal p75 NTR (N = p75 ≤ 25.6%) and High p75 NTR (H = p75 > 25.6%). The means of the other parameters at time 0 were then compared between Normal and High p75 NTR groups using unpaired t-tests. Data were reported as means and standard deviations (SD). Associations between parameters at time 0 were further investigated using the Pearson correlation coefficient (r), including p75 NTR as a continuous variable. The correlation was considered poor if |r| < 0.3, medium if 0.3 ≤ |r| < 0.5, and large if |r| ≥ 0.5 [30].
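The Exp. 1 workflow just described (log transformation of non-normal variables, median split into Normal and High p75 NTR groups, unpaired t-tests at time 0, and Pearson correlations with the stated strength thresholds) can be sketched in Python as follows. Column names are hypothetical and `df` stands for one row per semen sample at time 0; this is an illustrative reconstruction, not the authors' analysis script:

```python
import numpy as np
import pandas as pd
from scipy import stats

def exp1_analysis(df):
    """df: one row per ejaculated semen sample at time 0, with hypothetical
    columns "p75", "motility", "vcl", "ncp", "cp", "ar", "concentration", "ngf"."""
    df = df.copy()
    # Log-transform the variables that failed normality checks (CP and motility)
    df["log_cp"] = np.log(df["cp"])
    df["log_motility"] = np.log(df["motility"])

    # Median split on p75NTR-positive cells: Normal (<= median) vs High (> median)
    cut = df["p75"].median()                     # reported as 25.6% in the paper
    df["group"] = np.where(df["p75"] > cut, "High", "Normal")

    # Unpaired t-tests comparing the two groups at time 0
    for var in ["log_motility", "vcl", "ncp", "log_cp", "ar", "concentration", "ngf"]:
        high = df.loc[df["group"] == "High", var]
        normal = df.loc[df["group"] == "Normal", var]
        t, p = stats.ttest_ind(high, normal)
        print(f"{var}: t = {t:.2f}, p = {p:.3f}")

    # Pearson correlation, keeping p75NTR as a continuous variable
    r, p = stats.pearsonr(df["p75"], df["concentration"])
    strength = "poor" if abs(r) < 0.3 else "medium" if abs(r) < 0.5 else "large"
    print(f"p75 vs concentration: r = {r:.3f} ({strength}), p = {p:.3f}")
    return df
```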
Exp 2. Changes in sperm-quality parameters during storage were analyzed by Linear Mixed Models (LMM), including sample as subject and time as a repeated factor. The LMMs evaluated the effects of treatment (2 levels: control and 100 ng/mL NGF-supplemented), p75 NTR group (2 levels: Normal and High p75 NTR), time (3 levels: 2, 4, and 6 h), and the interaction of the treatment with the p75 NTR group, while baseline values were included as a covariate. Šidák adjustment was used for multiple comparisons.
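A simplified version of these repeated-measures models can be fitted in Python with statsmodels; the sketch below uses a random intercept per sample as an approximation of the repeated-measures covariance structure described above, with hypothetical column names, and shows the Šidák-adjusted per-comparison alpha for the follow-up comparisons. It is a sketch under those assumptions, not the software used in the study:

```python
import statsmodels.formula.api as smf

# long: one row per sample x time point (2, 4, 6 h), hypothetical columns:
# "sample", "treatment" (control / NGF), "p75_group" (Normal / High),
# "time", "baseline", and the outcome of interest, e.g. "motility".

def fit_storage_model(long, outcome="motility"):
    formula = f"{outcome} ~ C(treatment) * C(p75_group) + C(time) + baseline"
    # A random intercept per sample approximates the repeated-measures design;
    # dedicated repeated-covariance structures would require other software.
    model = smf.mixedlm(formula, long, groups=long["sample"])
    result = model.fit()
    print(result.summary())
    return result

def sidak_alpha(n_comparisons, alpha=0.05):
    """Šidák-adjusted per-comparison alpha for multiple comparisons."""
    return 1.0 - (1.0 - alpha) ** (1.0 / n_comparisons)

print(f"Šidák alpha for 6 comparisons: {sidak_alpha(6):.4f}")
```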
Results

Exp 1. Quantification of NGF Concentration and p75 NTR Expression in Epididymal and Ejaculated Sperm and the Effect of p75 NTR Expression on Main Sperm Outcomes
While the percentage of p75 NTR positivity in epididymal sperm samples steadily amounted to 40%, the p75 NTR -positive ejaculated cells at time 0 needed a categorization. To this end, we used the median as threshold to create two groups named Normal (p75 NTR ≤ 25.6%) and High (p75 NTR > 25.6%). Table 1 shows descriptive statistics of the parameters at time 0 in the Normal and High p75 NTR groups. No difference was found between the Normal p75 NTR and High p75 NTR groups in any of the parameters evaluated. Furthermore, NGF plasma levels and p75 NTR expression showed high repeatability, of about 50.27% for NGF and 41.43% for p75 NTR (data not shown). Finally, the correlation analysis showed a medium negative linear association between p75 NTR+ cells and sperm concentration (r = −0.403, p < 0.01, Table 2). Table 2 note: Abbreviations: VCL = curvilinear velocity; NCP = non-capacitated; CP = capacitated sperm; Ar = acrosomal-reacted. Significant correlations with p < 0.05 (*) and p < 0.01 (**) are indicated with bold characters (2-tailed); # analyzed after log transformation.
Exp 2. p75 NTR Expression in Ejaculated Sperm and Impact of Exogenous NGF (100 ng/mL) during Storage (up to 6 h) on H and N Sperm Samples
Samples with High p75 NTR (H) at T0 and during storage showed a lower % of motility (p < 0.001) and NCP (p < 0.001), while they showed a higher % of dead (p = 0.001) and apoptotic cells (p = 0.001) than normal (N) sperm samples (Table 3; Figure 2). The Normal p75 NTR group showed a lower percentage of both dead and apoptotic cells after NGF treatment than control (p < 0.05). Conversely, the NGF addition did not change either of these values in H sperm samples. Table 3 note: Values are displayed as estimated marginal means ± standard errors. Models included baseline values as covariates. Abbreviations: VCL = curvilinear velocity; NCP = non-capacitated; CP = capacitated sperm; Ar = acrosomal-reacted; C = control; T = NGF-treated; # indicates back-transformed estimated marginal means ± standard errors; †: equal variances not assumed. Sample size n = 40.
The motility rate was not associated with the level of p75 NTR (Figure 2A), while NGF was a relevant factor in the sperm kinematics. In particular, the treatment induced a significant reduction in motility percentage only in samples with H p75 NTR compared with N p75 NTR (Figure 2E). According to motility and VCL values, sperm capacitation was also consistently decreased by the NGF treatment in the H p75 NTR group (Figure 2F; p = 0.039 at 4 h and p = 0.0846 at 6 h, two-way ANOVA with Šidák adjustment). In the meantime, the impact of NGF treatment on the same H group revealed a larger percentage of both dead and apoptotic cells than in the N group (Figure 2G,H). We displayed NGF treatment and p75 NTR levels in Figure 3. No differences were observed in either H or N sperm cells for any of the parameters (Figure 3A-F).
Discussion
The present findings indicate that about 47.5% of rabbit semen samples, immediately after ejaculation, showed a sperm population with High p75 NTR , while about 52.5% showed Normal p75 NTR values.
In these two groups of samples, the endogenous NGF levels were comparable (1591.71 ± 413.07 vs. 1621.13 ± 428.86 pg/mL in N and H p75 NTR samples, respectively), suggesting that NGF does not directly affect sperm characteristics, but rather that its effect is modulated by its receptors (Figure 2A-D).
Furthermore, ejaculated rabbit sperm samples with H and N p75 NTR levels have a different response to the in vitro addition of exogenous NGF during storage. TrKA and p75 NTR have already been identified in sperm of the golden hamster and humans [20,21] and in ejaculated rabbit sperm [14], and these two receptors strongly modulate the effect of NGF.
In our previous paper, we demonstrated the presence of NGF and p75 NTR in seminal plasma and in different cell types of the male rabbit reproductive system. In particular, p75 NTR was identified in both the somatic and germ cells of the gonad as well as in the glandular epithelial and stromal cells of seminal vesicles and prostate glands [7]. In sperm cells, p75 NTR is mainly localized in the midpiece and tail [13].
Within the testes, NGF is synthesized by the Leydig cells, Sertoli cells, and germinal cells at different stages of development, including spermatogonia, spermatocytes, and spermatids. A strongly positive reaction for NGF was detected in the columnar secretory epithelial cells, and in the stromal cells of the rabbit prostate. The presence of NGF and its receptors in those cell types suggests that their growth and differentiation are finely regulated by this neurotrophin via a complex autocrine and paracrine mechanism [13,22].
Our previous results showed that after storage, TrKA level remained almost stable while low-affinity receptors suddenly increased, connected with an increase in apoptosis [14], probably due to a process of p75 NTR externalization. A similar trend was also registered in leiomyosarcoma cells [32], suggesting that the role of NGF is dependent on the balance between its receptors. In particular, the distinctive pathways "death/survival" generated by NGF-receptors' interactions were determined by the concentration (ratio) of p75 NTR to TrKA.
The novelty of this paper regards the fact that a high level of p75 NTR in rabbit sperm cells modulates the role of NGF in sperm characteristics. Many studies have shown a positive effect of in vitro NGF addition on the main sperm traits in several animal species and humans [11,14]. In particular, Lin et al. demonstrated that NGF promoted human sperm motility by increasing the movement distance and the percentage of A-grade spermatozoa in a dose-dependent manner [20]. However, the NGF receptors' concentration was not considered by the authors.
The body of literature reports two distinct p75 NTR signaling pathways, one including TrKA activation-which suppresses JNK activity-and another focused only on p75 NTR activity via NFkB activation [33]. When TrKA is lacking, p75 NTR can act as an inducer of apoptosis and then cell death [34], modulated by NGF. Thus, it could be hypothesized that the interaction between NGF and receptor is a sort of selector for semen samples of poor quality, contributing to selecting (through apoptosis) optimal sperm cells. Indeed, apoptotic processes in sperm should be distinguished from somatic cells, considering their prominent role in both selections of defective sperm produced during spermatogenesis and the programed senescence of ejaculated sperm [35].
On the other hand, the high value of p75 NTR found in the epididymal sperm cells could be related to the need to select/re-absorb defective sperm. Although the reason is still debated (collection order, buck effects), some of these cells, candidates for elimination during epididymal transit, are nevertheless ejaculated; it is therefore reasonable to suggest that the high p75 NTR value determines a rapid decline (Figure 2G,H), contributing to the sperm selection occurring in the female reproductive apparatus.
At the same time, we observed reproducible levels of NGF and p75 NTR among different bucks, indicating that H semen donors tend to produce sperm which is highly fragile and prone to apoptosis/death (senescence, early aging). Although further studies are needed to better define NGF and p75 NTR values in different males, our results are relevant for fertility and reproduction issues.
Conclusions
The p75 NTR level in ejaculated sperm could be considered a biomarker of semen quality for its ability to modulate, through NGF interaction, several sperm quality parameters and distinct features (motility, apoptosis, viability).
High p75 NTR concentration in the sperm samples is related to cell senescence and poor sperm quality. Therefore, the measurement of p75 NTR in semen samples could be proposed as a screening marker for fertility-selective programs, ideally not only in rabbits, after defining the distinct cut-off for each species. The present findings could also be exploited in semen conservation and thus could improve assisted reproduction techniques.
Smartphone-Based Virtual Reality as an Immersive Tool for Teaching Marketing Concepts
With the advent of virtual reality (VR) technology and the ubiquity of mobile devices, smartphone-based VR has become more affordable and accessible to business educators and millennial students. While millennials expect learning to be fun and prefer working with current technology, educators are constantly challenged to integrate new technology into the curriculum and evaluate the learning outcomes. This study examines the gain in learning effectiveness and students' intrinsic motivation that would result from the use of VR as compared to the use of a traditional learning activity, namely think-pair-share. The results show that students who took part in the VR simulation demonstrated a better understanding of concepts and reported a better learning experience as compared to those who participated in the think-pair-share activity. In particular, the findings show evidence of higher intrinsic motivation and better learning outcomes.
Introduction
Virtual reality (VR) refers to an immersive, interactive, multisensory, viewer-centered, three-dimensional computer-generated environment (Mandal, 2013; Toshniwal & Dastidar, 2014). Though VR was first introduced to target entertainment and gaming, recent studies have shown its potential use for educational purposes (see Choi et al., 2016 for a review). With the advent of smartphone devices and free VR apps available for download (e.g., from the Google Play and Apple app stores), VR technology has become more affordable and accessible than it used to be a few years ago. For instance, students can use their smartphones (Android or iOS) to download free apps and slide the phone into a cardboard headset.
In fact, there is a great potential in using smartphone-based VR as a teaching tool that would complement and improve the teaching effectiveness and the overall students' learning experience (Jensen & Konradsen, 2018). In particular, many studies in management education have reported opportunities for using VR in teaching retailing principles (Drake- Bridges et al., 2011), social marketing (Dietrich et al., 2019), tourism marketing (Hassan & Jung, 2018), and brand management (Belei et al., 2011).
One of the most important courses in the marketing curriculum is marketing research. For many instructors, teaching marketing research depends heavily on concepts drawn from consumer behavior (Bridges, 2020). While many students recognize the psychological complexities of consumer behavior, they often find it challenging to integrate the learned concepts into a coherent framework that facilitates learning (Lincoln, 2016). For instance, the concepts of hedonic shopping (i.e., the enjoyment and pleasure that consumers may experience while shopping), psychological time (i.e., the sense of the passage of time when purchasing a product), and the flow state (i.e., the sense of playfulness and distorted sense of time) are experiential by nature. They would be better taught if they are integrated within a comprehensive framework related to the consumer's shopping experience. As VR enables simulating shopping experiences, the relevance of VR to teach these concepts becomes instrumental.
Hence, the purpose of this paper is to highlight some merits of using smartphone-based VR to teach marketing concepts. This article reports on the results of a VR simulation activity used in a marketing research class to teach students a few consumer behavior (CB) concepts before setting up experimentations.
The paper is organized as follows. First, the opportunity of using smartphone-based VR as an immersive teaching tool is highlighted. Second, the paper explains how the proposed VR innovation relates to the marketing curriculum objectives. The paper also describes the VR simulation and positions its novelty with regard to learning taxonomies. Thereafter, the paper reports some findings from assessing the VR effectiveness and concludes with some challenges and the potential adaptability of the VR simulation to other marketing courses.
While these activities foster active learning, the increasing interest in virtual reality (VR) among students provides a compelling reason to incorporate VR into the marketing curriculum. In particular, millennials are technologically literate, being immersed in a variety of emerging technologies since their birth, they often quote traditional teaching and learning environments as boring (Mangold, 2007).
As nowadays students are looking for immersive learning experiences, it is not surprising to find millennials embracing VR. Indeed, a survey conducted by Touchstone Research in 2015 points to the same conclusion: 73% of millennials are highly interested in VR. In the same vein, many businesses and particularly retailers (e.g., Lowes, Walmart), have been working on integrating VR into their marketing activities, to improve their customers' experience. The advent of VR technology and the ubiquity of mobile devices and free apps provide sound arguments to embrace VR while teaching marketing and preparing business students for future jobs.
Despite its popularity and unquestionable appeal to today's students, little is known about opportunities that VR would offer to marketing educators. From an instructional perspective, VR could potentially simulate a virtual store or a mall environment (Drake- Bridges et al., 2011;Van Kerrebroeck et al., 2017) and affords flexibility in outlets design and atmospherics that could be otherwise difficult to create or manipulate in a real learning environment. Such flexibility offers possibilities to marketing educators to enable illustrating a large variety of concepts, remotely in a simulated environment.
The Pedagogical Novelty of the VR Simulation
Unlike other conventional instructional methods, VR affords an immersive learning experience that helps students to grasp concepts by experiencing them fully. Indeed, when using VR activities in class, instructors could blend a constructivist approach whereby students create knowledge through learning experiences (Sharma et al., 2013) with a behavioral approach whereby students learn by reacting to or observing others' behavior (Bailenson et al., 2008). From this perspective, VR appears to be of greater flexibility in using mixed approaches than other existing instructional methods. Prior empirical studies and meta-analyses have shown that immersion simulations would improve learning outcomes and increase learning session effectiveness to a larger extent than traditional teaching methods (Merchant et al., 2014). When immersed in an interactive VR environment, students become more engaged, and their interest in the material would increase (Parong & Mayer, 2018).
As VR enables immersion, interaction, and involvement (Pinho et al., 2009), three levels of learnings would result from the current VR simulation, namely, the cognitive, affective and psychomotor learnings. For instance, when immersed in a VR store environment, students could recognize attributes of the shopping environment (i.e., elements of atmospherics), and categorize these into dimensions, while in fact, both tasks of recognizing and categorizing fall under two domains of Bloom's (1956) taxonomy of cognitive learning, respectively, comprehension and analysis domains. This argument draws on elaborative learning (Dunlosky et al., 2013), where learning is informed by examining reasons (e.g., hedonic shopping) and related concepts (e.g., psychological time, flow state) behind the facts (e.g., shopping task), which fosters deep processing of information resulting in better retention of information. From this perspective, VR boosts the process of knowledge construction and makes learning heuristic and highly interactive (Lau & Lee, 2015), allowing the students' cognition to move from abstract concepts to concrete ones.
Furthermore, when instructed to shop in a VR store environment, students would interact with their environment to operate and perform purchasing tasks. In this vein, students would learn skilled movements that represent one of the learning domains of Harrow's (1972) taxonomy of psychomotor learning. Doing so would lend support to the practice effect (Dunlosky et al., 2013), where students' performance would improve as a result of repeated task evaluation.
Besides, when involved in the VR simulations, students would experience a set of related concepts such as the flow state, hedonic shopping, and psychological time, which are consistent with the domain of receiving (i.e., awareness) and responding (i.e., reacting to a stimulus) of Krathwohl's (1964) taxonomy of affective learning. In sum, the use of VR simulation would provide instructors with greater flexibility in using mixed approaches (constructivist and behaviorist), while covering various levels and domains of learning ( Figure 1).
Integrating VR into Marketing Curriculum
A primary goal of the marketing curriculum is to prepare students for the professional workforce (Drake-bridges et al., 2011). The call for integrating VR into the marketing curriculum has been echoed in both the business field (see for example, Why Should You Care About Virtual Reality In Marketing? in Forbes, by Clark, 2017) and academia (see for example, Pros and Cons of Virtual Reality in the Classroom in The Chronicle of Higher Education, by Evans, 2018). With the advent of highly immersive VR technology (e.g., VR head-mounted devices, Google Cardboard, Oculus Rift) and the increasing accessibility of VR applications (e.g., free apps on Google Play, App Store), many companies have embraced the creative potential inherent of VR and its applications for marketing purposes, such as educating customers (see for example, Lowe's Wants to Use a VR Holoroom to Teach You Home Improvement in Popular Mechanics by Dhal, 2017), enhancing their shopping experience (see for example, Walmart has acquired a virtual reality startup as part of its tech makeover, in Recode by DelRey, 2018), and setting a better product testing (see for example, VR could take product testing to the beach and beyond in Science and Technology, by Hundborg Koss, 2018). Furthermore, pedagogy research provides mounting evidence that VR could increase students' engagement and enhance learning. The use of VR in classes provides a heuristic and highly interactive learning environment and offers a playful and enjoyable learning experience to students (Lau & Lee, 2015;Merchant et al., 2014).
The VR simulation proposed here provides an illustrative example of how this innovation could be used in teaching marketing concepts. Hence, the purpose of this simulation is threefold: to enhance both learning and applications of concepts, to introduce students to the capabilities of VR technology in the marketing world, and to provide students with hands-on experience in using VR for marketing applications. More specifically, the current VR simulation was designed to initiate students to VR and introduce some CB concepts. These concepts are important to know before taking part in the experimentation activity of week #10 of the marketing research class. The VR simulation helps students (1) to understand the concept of flow state, (2) to comprehend the notion of psychological time (perceived vs. actual time), (3) to know how to manipulate the perception of time, and (4) to differentiate between a high-involvement and a low-involvement purchase.
Description of the VR Simulation
To participate in the VR simulation, students need to download the free app My3Dstore on their smartphone and use a VR headset (an economical model is available for about $5). The app simulates a 3D virtual environment that combines a real shopping experience at a retail grocery store with the benefits of online shopping. While visiting a virtual supermarket, where the products and aisles are laid out as if they are in a bricks-and-mortar store, students can move around an aisle, find out the name and price of a product by gazing at the product, add a product to the cart, and check the amount to pay by looking at the cart (see Figure 2).
Students would experience a flow state when they become immersed in a shopping activity to the point that they cannot notice anything else. In situations of high involvement, students enter a flow state characterized by a sense of playfulness and a distorted sense of time. The flow state occurs when the temporal duration (i.e., perceived time) of a virtual walk in the store is thought to be relatively shorter than its actual elapsed time. In other words, there is evidence of a flow state when the perceived duration of the shopping task is shorter than its actual elapsed time (see Table 1).
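One simple way to operationalize this in the activity is to compare each student's time estimate with the duration recorded by their partner's timer; a ratio below 1 suggests time compression consistent with flow. A minimal sketch with hypothetical values (the numbers below are illustrative, not data from the study):

```python
def time_compression_ratio(perceived_minutes, actual_minutes):
    """Ratio < 1 indicates perceived time shorter than actual elapsed time."""
    return perceived_minutes / actual_minutes

# Hypothetical pair: the student estimates 6 minutes, the partner's timer read 9 minutes
ratio = time_compression_ratio(6, 9)
print(f"ratio = {ratio:.2f} -> {'flow-consistent' if ratio < 1 else 'no compression'}")
```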
To participate in the VR simulation, students need to work in pairs (i.e., student #1 uses the timer while student #2 experiences the VR, and then they rotate). Students are provided with some materials to read before coming to the class and a booklet of instructions (for full instructions, see Table 2). The VR activity lasts for 75 minutes. The first 10 minutes of class are dedicated to reviewing some concepts discussed in class (e.g., high vs. low involvement, flow state, psychological time) and showing students how to navigate in a VR space. The next 15 to 20 minutes are used for a trial on how to set up a VR experience and time measurement. Thereafter, students are instructed to set up and run the shopping VR simulation for 30 minutes. The simulation ends with a debrief session to wrap up. The implementation of the simulation integrates five steps in addition to the brief and debrief sessions. All steps, with time slots and supporting materials and instructions, are detailed in Table 2 below.
Participants
All participants in the current study were marketing students at Kent State University at Stark, who took part in this research for course credit. A total of sixty students were randomly assigned to one of the two learning conditions: think-pair-share and VR. Hence, two groups of equal size (n = 30) participated in each learning condition of the study. This study was reviewed and approved by the Kent State Institutional Review Board; all students gave informed consent and were aware of their right to withdraw at any time. In the end, no withdrawals were reported, as all students fully participated in the study.
Procedure
The study was carried out halfway through 16-week fall and spring semesters. Students from the first group were instructed to read a short article (a text-based resource on psychological time, flow state, and hedonic shopping, created by the professor), then answer a few questions (e.g., What is objective time versus subjective time, and how can you measure it? Define the flow state and explain how it correlates with subjective and objective time.) and share their understanding of the concepts with their peers (think-pair-share). At the end of this activity, students completed the short survey. Students from the second group were asked to pair up and participate in the VR simulation. Likewise, a survey was administered right after the VR activity to assess their intrinsic motivation and learning effectiveness.
Measures
The effectiveness of the VR simulation was assessed by comparison to the think-pair-share activity with a short survey measuring students' motivation, self-efficacy, and learning effectiveness (all dependent variables). Items used in the surveys (see Table 3) were adapted from the survey instruments by Lee et al. (2010) and DeNoyelles et al. (2014). All measures used a seven-point scale, asking participants to rate the extent to which they agreed or disagreed with each item's statement.
Analyses and Results
Students overwhelmingly reported enjoying the VR simulation, with a mean of 6.17 (95% CI, 6.03-6.30) on a seven-point scale and a small standard deviation (SD = .379). Table 3 summarizes the main descriptive statistics. Items 1 to 3 were averaged to create a score for perceived teaching effectiveness. Likewise, items 4 and 5 were averaged to create a score for intrinsic motivation, while items 6 and 7 were averaged to calculate a score for self-efficacy. Then a one-way ANOVA was performed to test for differences in intrinsic motivation and learning effectiveness across the VR simulation and think-pair-share activity scenarios. ANOVA results show significant differences between the two scenarios in terms of intrinsic motivation (F(1,58) = 230.41; p = .000 < .05) and perceived learning effectiveness (F(1,58) = 319.39; p = .000 < .05); the use of the VR simulation resulted in higher intrinsic motivation (M = 6.26) and a better learning outcome (M = 5.74) as compared to the use of the think-pair-share activity (M = 4.36; M = 4.55). However, the use of VR may not lead to a better ability to apply concepts as compared to the use of the think-pair-share activity (F(1,58) = .105; p = .747 > .05; M = 3.25, M = 3.28).
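The scoring and comparison described above amount to averaging the relevant survey items into composite scores and running a one-way ANOVA per dependent variable across the two conditions. A minimal sketch, with hypothetical column names for the survey data (not the authors' analysis script):

```python
from scipy import stats

def score_and_compare(survey):
    """survey: one row per student; "condition" is "VR" or "think-pair-share",
    and item1..item7 are the seven-point ratings (hypothetical column names)."""
    survey = survey.copy()
    survey["effectiveness"] = survey[["item1", "item2", "item3"]].mean(axis=1)
    survey["motivation"] = survey[["item4", "item5"]].mean(axis=1)
    survey["self_efficacy"] = survey[["item6", "item7"]].mean(axis=1)

    for dv in ["effectiveness", "motivation", "self_efficacy"]:
        vr = survey.loc[survey["condition"] == "VR", dv]
        tps = survey.loc[survey["condition"] == "think-pair-share", dv]
        # With two groups, the one-way ANOVA is equivalent to a t-test (F = t^2)
        f, p = stats.f_oneway(vr, tps)
        print(f"{dv}: F(1,{len(survey) - 2}) = {f:.2f}, p = {p:.3f}, "
              f"M_VR = {vr.mean():.2f}, M_TPS = {tps.mean():.2f}")
    return survey
```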
Discussion
This study aimed to assess the learning effectiveness and students' intrinsic motivation when using a smartphone-based VR app to teach marketing concepts. To this end, a comparison was made between two independent groups of students. Students from the first group were tasked to read a short article, answer a few questions, and share their understanding of concepts with their peers (think-pair-share activity). Students from the second group were asked to pair up and participate in the VR simulation. A survey was administered to participants in each group to collect data and assess their intrinsic motivation, self-efficacy, and learning effectiveness. Results show that students who took part in the VR simulation reported a better gain in understanding and learning concepts and a better learning experience as compared to those who participated in the think-pair-share activity.
These findings are consistent with prior studies suggesting that VR immersion would lead to better retention of information and higher learning effectiveness (Lee et al., 2010;Zhang et al., 2017). Besides, the results from this study show that VR could spark student's motivation to learn. In contrast with students who took part in the think-pair-share activity, those who have experienced the VR simulation showed more interest in learning about the subject. Clearly, the use of VR simulation could prime student's interest more than any conventional learning activity, which is typically the case for millennials who grew up with immersive technology, in particular VR (Polger & Sheidlower, 2017). In fact, a major perspective about motivation is based on interest theory (Schiefele, 1991). According to the theory, students will be intrinsically motivated if they find fun and playfulness in what they learn (Garris et al., 2002). Hence, motivation and learning go hand in hand. This lends support to prior research pointing towards student motivation to play a key role in learning concepts; those who are more motivated to use VR are more likely to engage in the lesson and put more effort into understanding the material (Lund & Wang, 2019).
However, results from the current study show that the use of VR may not lead to a better ability to apply concepts compared to the think-pair-share activity. This finding contradicts prior studies describing VR as a tool of self-efficacy empowerment (Nissim & Weissblueth, 2017). A plausible explanation of why self-efficacy (i.e., a person's judgment of their ability to perform a given task) was higher in the think-pair-share activity than in the VR simulation could be the complexity of the self-efficacy process itself. In this vein, Schunk (1989) describes the process of self-efficacy as a feedback loop. First, the student holds beliefs about their self-efficacy (i.e., "I'm good at this"). This belief then affects the student's task engagement (i.e., "I will try hard"). After the task, the student receives feedback (i.e., "I did well on the task") and derives efficacy cues from that feedback (i.e., "the instructor thinks I'm good at this"). Finally, this aptitude feedback reshapes the student's self-efficacy. Applying this reasoning to the current study, it is possible that students found more value in the feedback they received on the think-pair-share activity, from their peers and the instructor, than in the feedback they received from the instructor on the VR simulation. In this regard, to achieve a better gain in self-efficacy from the VR simulation, it is recommended to incorporate into the VR environment a feedback system that tracks progress in executing tasks.

Overall, this study provides an illustrative example of how VR could be used to teach CB concepts and increase students' motivation to learn about marketing. Furthermore, the VR simulation could also be integrated into the teaching of other business classes, such as marketing research and retailing. For instance, the current simulation could be extended to illustrate some stages of the consumer buying decision process (e.g., information search, evaluation of alternatives), the affective-behavioral-cognitive model of attitudes, and planned vs. unplanned purchasing, to name a few. Likewise, the VR activity could be used in a retailing management class to illustrate and experience concepts related to atmospherics (e.g., store layout, display design, fixtures) and merchandising. Furthermore, VR simulation could be used to teach students how to set up an experiment properly, covering experimental design, randomization, control, internal vs. external validity, and more. In sum, VR simulations provide a flexible and adaptable teaching tool for many marketing classes.
LIMITATIONS AND FUTURE DIRECTIONS
The current research highlights some merit in incorporating VR into the teaching of marketing concepts. The study nevertheless comes with some limitations. First, the sample size for this study is limited to 30 students in each group; a larger sample of students would increase the validity of the results. Second, the measures used in this research are subjective measures that capture students' perceptions of what they learned and of their ability to apply concepts. These measures should be supplemented with direct measures, such as quizzes, to assess VR learning outcomes objectively.
While VR simulation could boost students' motivation to learn and their learning effectiveness, it may not appeal to all students. For instance, students who have prior experience with VR as applied to video games and entertainment may take greater advantage of VR simulations than those who are not familiar with VR applications. In fact, the use of VR simulation may not fit all students' learning styles. Common sense suggests that active learners (those who learn by doing and participating) and sensory learners (who learn by experimentation) would benefit more from VR simulations than passive and verbal learners (Bell & Fogler, 1997).
In addition, setting up the VR simulation in class comes with some challenges. For instance, some free apps cannot simulate a highly immersive environment (i.e., due to the low resolution of the simulated environment) and therefore significantly limit the possibilities for instructing students to perform tasks in the VR environment. Furthermore, some apps may not be compatible with all types of smartphones; for example, some apps can be installed only on iPhone or Android devices with screens up to 6 inches. Moreover, the use of VR in classrooms with limited space would require VR headsets equipped with a remote control, which would facilitate physical movement while navigating the virtual environment and help avoid in-class traffic. However, the cost of acquiring VR headsets with remote controls is higher than that of conventional VR headsets.
Notably, from an instructional perspective, the use of VR cannot replace lectures, textbooks, or laboratories. However, VR could be used to supplement traditional educational methods. For example, VR could be offered as an available resource for students who did not fully grasp the material discussed in class or in the text. Nevertheless, instructors should carefully consider the
Outage Probability of Intelligent Reflecting Surfaces Assisted Full Duplex Two-way Communications
In this letter, we study the outage probability of intelligent reflecting surface (IRS) assisted full duplex two-way communication systems, which characterizes the performance in overcoming the transmitted data loss caused by long deep fades. To this end, we first derive the probability distribution of the cascaded end-to-end equivalent channel with an arbitrarily given IRS beamformer. Our analysis shows that deriving this probability distribution in the considered case is more challenging than in the case with a phase-matched IRS beamformer. Then, with the derived probability distribution of the equivalent channel, we obtain a closed-form expression for the outage probability. It theoretically shows that the number of reflecting elements has a conspicuous effect on the improvement of system reliability. Extensive numerical results verify the correctness of the derived results and confirm the superiority of the considered IRS assisted two-way communication system compared to its one-way counterpart.
I. INTRODUCTION
INTELLIGENT reflecting surface (IRS) has recently emerged as a promising technology to reconstruct high-quality channel links in sixth-generation wireless networks [1]. On the one hand, based on the massive number of reflecting elements, whose phase shifts can be adjusted in a desired direction independently, an IRS can flexibly control the propagation environment of radio signals [2]. On the other hand, full duplex two-way communication systems, in which users can transmit and receive messages simultaneously over the same channel, can significantly improve spectral efficiency [3]. Since an IRS consumes no transmit power, it avoids the rate loss incurred in full-duplex relaying [4]. Therefore, introducing IRS into two-way communications can enhance the reliability and availability of wireless networks.
The system performance of IRS-assisted communications has been investigated in a large body of literature in terms of energy efficiency [5], [6], spectral efficiency [7], and outage probability and the asymptotic distribution of the sum-rate [8]. However, the performance of IRS-assisted two-way communications has not been studied thoroughly, and only a limited number of results are available so far. In [9], the outage probability and spectral efficiency of two-way communications assisted by IRS are investigated for reciprocal and non-reciprocal channels. In [10], the fluctuating two-ray distribution at mmWave frequencies is used to derive the outage probability and the average bit error probability for IRS-aided systems. Almost all of the aforementioned works assume that all reflecting elements at the IRS reflect incident signals with the same constant amplitude and that the IRS beamformer is designed to be optimally phase-matched according to the channel state information (CSI). However, in practice the amplitude and phase of each IRS element can be controlled independently by adjusting the resistance and capacitance, respectively, of the integrated circuits in the element [11]. Besides, as a completely passive piece of equipment, an IRS cannot design the reflecting beamformer itself; appointing another node in the network to design the IRS reflecting beamformer with collected CSI and then deliver the designed beamformer to the IRS may cause expensive signaling overhead, which makes such a beamforming design paradigm infeasible. Therefore, it is valuable to consider an IRS assisted communication system with given reflecting beamformers, which are stored at the IRS node and obtained based on certain experience, such as a random phase rotation scheme [12].
In this letter, we focus on studying the outage probability of IRS-assisted full duplex two-way communication systems for any given reflecting coefficients of the IRS. It is worth noting that deriving the outage probability with a given IRS beamformer, especially in the case with different amplitudes at different reflecting elements, is much more challenging than the case with an optimally designed phase-matched IRS beamformer. To achieve this goal, we first transform the problem of deriving exact signal-to-noise ratio (SNR) expressions into the task of obtaining the distribution of the inner product of two independent complex Gaussian random vectors. Then, we develop the probability density function (PDF) and cumulative distribution function (CDF) of the inner product by applying an analytical framework based on the joint characteristic function of the real and imaginary parts of a complex variable. Finally, closed-form outage probability expressions are obtained for IRS assisted two-way systems. The correctness of the derived results is verified by numerical results, which also illustrate the superiority of the considered IRS assisted two-way system compared to its one-way counterpart, where residual loop interference is considered.
II. SYSTEM MODEL
Consider a two-way wireless communication system with two mobile users (namely, U_1 and U_2) and a single IRS, as shown in Fig. 1. There is no direct link between the mobile users due to serious fading and shadowing. The two users exchange messages via the IRS with a full-duplex communication strategy, which means that the two nodes transmit and receive information symbols simultaneously. Each user has two antennas, responsible for signal transmission and reception, respectively. The pair of antennas is implemented far apart from each other or constituted by non-reciprocal hardware. Therefore, we assume that the forward and backward channels are non-reciprocal. We assume that each reflecting element can continuously control both the amplitude and phase of the reflected signal. Most importantly, the phase shifts at the IRS are arbitrarily given based on certain experience, not just the simple case with a phase-matched beamformer. They are denoted by a diagonal matrix Θ = diag[θ_1, ..., θ_N], where the phase of each reflecting coefficient θ_i lies in [0, 2π) and N denotes the number of reflecting elements. Many prior works have been conducted under the assumption that the amplitudes of all reflecting elements are constant, which can only be seen as a special case of this letter.
The channel coefficients of the U_1-IRS link and the U_2-IRS link are denoted as h_t = [h_t1, ..., h_tN]^T and g_t = [g_t1, ..., g_tN]^T, respectively. Accordingly, the channels from the IRS to U_1 and U_2 are denoted as h_r = [h_r1, ..., h_rN]^T and g_r = [g_r1, ..., g_rN]^T, respectively. All the channels are assumed to be independent and identically distributed (i.i.d.) complex Gaussian fading. Furthermore, we denote the loop channels between the transmitting and receiving antennas of each user by h̃ and g̃, respectively.
At each time slot, U_1 and U_2 transmit their data to the IRS, and the IRS reflects the received signal to U_1 and U_2. Therefore, the signals received by U_1 and U_2 are given accordingly, where P_i, i ∈ {1, 2}, is the transmit power of U_i and s_i is the symbol that U_i wants to transmit to the other user. The additive white Gaussian noise terms n_1 and n_2 have zero mean and variances σ_1^2 and σ_2^2, respectively. Since the users have global CSI, they can completely eliminate the self-interference. To avoid loop interference, U_1 and U_2 apply sophisticated loop interference cancellation, which results in residual interference [9]. After interference cancellation, the signals received by U_1 and U_2 contain the residual loop interference m_i, i ∈ {1, 2}, resulting from several stages of cancellation. We adopt the model in which m_i follows a Gaussian distribution with zero mean and variance σ_mi^2 [9]. Further, the variance is characterized as σ_mi^2 = q_i P_i^{v_i}, where the two parameters, q_i > 0 and v_i ∈ [0, 1], depend on the cancellation method applied by the users. Therefore, the SNRs γ_1 and γ_2 are obtained accordingly. We say that a system outage occurs when at least one of the two users is in outage, i.e., γ_1 or γ_2 drops below its acceptable SNR threshold γ_t1 = 2^{R_1} − 1 or γ_t2 = 2^{R_2} − 1, respectively, where R_1 and R_2 are the target rates of U_1 and U_2. The outage probability is then the probability of this event.
III. ANALYSIS OF OUTAGE PROBABILITY
In this section, we focus on the outage probability of the system. Firstly, we give the following lemma to obtain the PDF of the modulus of the cascade channel h_r^T Θ g_t in (3a) with a given Θ as an example. Due to the symmetry between the users, we assume that σ_ht^2 = σ_hr^2 = σ_h^2, σ_gt^2 = σ_gr^2 = σ_g^2, and that U_1 and U_2 have the same target rate R.

Lemma 1. The probability density function (PDF) of the cascade channel z = h_r^T Θ g_t in IRS assisted two-way systems with a given phase shift matrix Θ can be obtained in closed form in terms of r_z = |z| and K_v(·), the modified Bessel function of the second kind.

Proof. To present the proof efficiently, we construct a new random vector t with entries t_i = h_ri θ_i, i ∈ {1, 2, ..., N}. Since h_r follows the complex Gaussian distribution CN(0, σ_h^2 I_N), we obtain the distribution of t according to [13], i.e., t ∼ CN(0, σ_h^2 Θ̄), where Θ̄ = diag[|θ_1|^2, ..., |θ_N|^2]. We then write z in terms of its real and imaginary parts z_R and z_I, as in (7). According to (7), it is clear that z_R and z_I, conditioned on t, are independent, and their conditional distributions can be expressed in Gaussian form. The conditional joint characteristic function of z_R and z_I is given in (9) [13]. Since each element t_i of the random vector t follows a complex Gaussian distribution, the probability density function of t is given in (10). Then, combining (9) and (10), the joint characteristic function of z_R and z_I is given by (11), where step (11c) follows from ∫_{−∞}^{+∞} exp(−a x^2) dx = √(π/a), x ∈ R. Supposing that the PDF of z is p_z(x, y), where x and y are its real and imaginary parts, and converting the joint characteristic function from Cartesian form to the polar form p_z(r_z, β_z), we obtain (12). Substituting (11) into (12), the PDF of the cascade channel z is obtained in (13), as shown at the top of the next page, where I_α(·) is the modified Bessel function of the first kind and J_α(·) is the Bessel function of the first kind of order α. Equality (a) of (13) follows from (3.339) in [14], and equality (b) follows from (8.406) in [14].
Since it is arduous to compute the integral in (11), we convert it from a product of fractions into a sum of fractions, as shown in (14), where C_i, i ∈ {1, 2, ..., N}, is a constant coefficient related to a_i = σ_g^2 σ_h^2 |θ_i|^2 / 4, i ∈ {1, 2, ..., N}. Each coefficient C_i can be obtained by partial fraction expansion, as given in (15). Substituting (14) and (15) into (13), we transform the integration into a sum of a series of Bessel functions. Therefore, the PDF of z is given by (16), where step (16b) follows from Eq. (6.532.4) in [14].
Similarly, we can obtain the PDF of the cascade channel z′ = g_r^T Θ h_t for U_2 in the same form, with the corresponding coefficients for i ∈ {1, 2, ..., N}. As a consequence of Lemma 1, for the special case of the IRS model in which each element has a constant amplitude and a continuous phase shift, the joint characteristic function in (11) simplifies, and the PDF of the cascade channel follows accordingly.

Theorem 1. If the SNR thresholds of U_1 and U_2 are set as γ_t1 and γ_t2, respectively, the outage probability of the IRS assisted two-way communication system is given in closed form, where, for j ∈ {1, 2}, C_i and a_i are given in Lemma 1.
Proof. To derive the outage probability of the two-way system, the CDF of the power of the cascade channel, |h_r^T Θ g_t|^2, needs to be obtained. Firstly, based on the PDF of r_z obtained in Lemma 1, we can derive the PDF of r_z^2, which is denoted by R_z. Then, the CDF of R_z is expressed as in (24), where equality (b) of (24) follows from (12) in [15] and (6.561.8) in [14].
Secondly, we define the equivalent threshold γ′_tj = γ_tj (σ_j^2 + σ_mj^2) / P_j. Since γ_j = R_z P_j / (σ_j^2 + σ_mj^2), we have P_outj = Pr(γ_j < γ_tj) = Pr(R_z < γ′_tj). Expressing this in terms of the transmit signal-to-noise ratio (SNR) of U_j, ρ_j = P_j / σ_j^2, and the residual loop interference σ_mj^2 = q_j P_j^{v_j}, the equivalent SNR threshold of U_j becomes γ′_tj = (1/ρ_j + q_j (ρ_j σ_j^2)^{v_j − 1}) γ_tj. Therefore, the outage probability of U_j is obtained accordingly, where γ_tj is the SNR threshold of U_j, j = 1, 2.
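For clarity, the two forms of the equivalent threshold quoted above are consistent; substituting σ_mj^2 = q_j P_j^{v_j} and ρ_j = P_j / σ_j^2 and writing out the algebra:

```latex
\gamma'_{tj}
  = \frac{\gamma_{tj}\,\bigl(\sigma_j^2 + \sigma_{m_j}^2\bigr)}{P_j}
  = \gamma_{tj}\left(\frac{\sigma_j^2}{P_j} + \frac{q_j P_j^{v_j}}{P_j}\right)
  = \gamma_{tj}\left(\frac{1}{\rho_j} + q_j P_j^{\,v_j-1}\right)
  = \left(\frac{1}{\rho_j} + q_j \bigl(\rho_j \sigma_j^2\bigr)^{v_j-1}\right)\gamma_{tj},
\qquad \text{since } P_j = \rho_j \sigma_j^2 .
```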
Finally, the outage probability of the two-way system is obtained accordingly, where γ′_tj is the equivalent SNR threshold of U_j.
Proof. Note that when N is large and the amplitudes of all elements are constant, i.e., |θ_i| = 1, the variable in (19) obeys the Nakagami-m distribution. The corresponding outage probability of U_j, j ∈ {1, 2}, then degrades into the form in (28) [15]. Since the Bessel function can be approximated [14] for n ≥ 2 and z → 0, (28) can be simplified accordingly. Combining (30) and (4), the outage probability of the system is obtained in (27).
Based on the above derivation, we conclude that with an arbitrarily given Θ, the outage probability does not depend on the phase shifts of the elements, but only on their amplitudes. The reason is that Θ is arbitrarily given and independent of the channel coefficients. Besides, the outage probability is a statistical measure of system reliability. This conclusion is encouraging because it indicates that, with only statistical CSI, the reliability of the network can be improved by adjusting only the moduli of the reflection coefficients.
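As a quick numerical sanity check of the distribution analysis above, and of the observation that only the amplitudes matter, the following sketch draws Monte Carlo samples of the cascade channel z = h_r^T Θ g_t for two beamformers that share the same amplitudes but have different random phases; all parameter values (N, variances, number of trials) are illustrative assumptions, not the letter's settings.

```python
# Illustrative Monte Carlo sketch (assumed parameters, not the paper's code):
# empirical distribution of |z| = |h_r^T Θ g_t| for two beamformers with
# identical amplitudes but different random phases. The closeness of the two
# empirical CDFs illustrates the phase-independence observation above.
import numpy as np

rng = np.random.default_rng(0)
N = 16                        # number of reflecting elements (illustrative)
sigma_h2 = sigma_g2 = 1.0     # channel variances
trials = 200_000

amplitudes = np.arange(1, N + 1) / N      # |theta_i| = i/N, as in Section IV

def cn(var, size):
    """Draw i.i.d. circularly symmetric complex Gaussian CN(0, var) samples."""
    return np.sqrt(var / 2) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

def cascade_magnitude(phases):
    theta = amplitudes * np.exp(1j * phases)          # diagonal of Θ
    h_r = cn(sigma_h2, (trials, N))
    g_t = cn(sigma_g2, (trials, N))
    return np.abs(np.sum(h_r * theta * g_t, axis=1))  # |h_r^T Θ g_t|

r1 = cascade_magnitude(rng.uniform(0, 2 * np.pi, N))
r2 = cascade_magnitude(rng.uniform(0, 2 * np.pi, N))

# The empirical CDFs should nearly coincide, since the distribution depends
# only on the amplitudes |theta_i|.
for x in [0.1, 0.3, 0.6, 1.0]:
    print(f"x = {x:.1f}: F1(x) = {np.mean(r1 <= x):.4f}, F2(x) = {np.mean(r2 <= x):.4f}")
```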
IV. SIMULATION RESULTS
In this section, simulation results are presented to demonstrate the performance of IRS assisted full duplex two-way communication systems with σ_ht^2 = σ_hr^2 = 1, σ_gt^2 = σ_gr^2 = 1 and channel noise σ_1^2 = σ_2^2 = 1. The variable amplitudes of the elements are set to |θ_i| = i/N, i ∈ {1, 2, ..., N}, and the phase shifts are random. Fig. 2 plots the outage probability as a function of ρ (we assume that ρ_1 = ρ_2 = ρ) when the loop interference is σ_m1^2 = σ_m2^2 = ω = 10^{−4}, i.e., σ_mi^2 = ω_i P_i^{v_i} with v_i = 0 and ω_i = 10^{−4}. It confirms the accuracy of the analytical CDF results developed in Theorem 1. The presented illustrations include Monte Carlo simulation results with 10^6 independent channel realizations for the outage probability. As the number of reflection elements N increases, the outage probability decreases significantly. Fig. 3 compares the outage performance of two-way and one-way communications, where the two-way target rates are 1, 8, and 16 bits per channel use, respectively, and the loop interference is σ_m1^2 = σ_m2^2 = ω = 10^{−4}. We can observe that the outage performance of two-way communication is always better than that of one-way communication, which indicates that the IRS assisted two-way scheme is more reliable than the one-way scheme at the same effectiveness. Fig. 4 compares the outage probability for an IRS with constant amplitude (|θ_i| = 1, i ∈ {1, ..., N}) and variable amplitude (|θ_i| = i/N, i ∈ {1, ..., N}) under different residual loop interference models, σ_mj^2 = ω_j and σ_mj^2 = ω_j P_j, i.e., v_j = 0 and v_j = 1, respectively, where j = 1, 2. It shows that the outage probability for constant-amplitude elements is lower than that for variable-amplitude elements. Furthermore, increasing the number of reflection elements N has a conspicuous effect on the improvement of reliability, such that it is feasible to compensate the loss caused by residual loop interference by increasing N.
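A hedged sketch of the kind of Monte Carlo evaluation described above is shown below; it follows the stated settings (unit channel and noise variances, |θ_i| = i/N with random phases, ω = 10^{−4} with v_j = 0, equal target rates), while the number of elements, the SNR grid, and the number of trials are assumptions chosen for illustration.

```python
# Monte Carlo sketch of the system outage probability under the simulation
# settings described in the text; unspecified values (N, trial count, SNR grid)
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 32                          # reflecting elements (assumed)
R = 1.0                         # target rate in bits per channel use for both users
gamma_t = 2.0**R - 1.0          # SNR threshold gamma_t = 2^R - 1
sigma2 = 1.0                    # noise variance at each user
omega = 1e-4                    # residual loop interference (v_j = 0 case)
trials = 200_000

amplitudes = np.arange(1, N + 1) / N
theta = amplitudes * np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # fixed, arbitrary beamformer

def cn(size):
    return np.sqrt(0.5) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

for rho_db in range(0, 31, 5):                # transmit SNR sweep in dB
    P = 10.0**(rho_db / 10.0) * sigma2        # equal transmit power for both users
    # Independent cascade channels seen by U1 and U2 (non-reciprocal links)
    z1 = np.sum(cn((trials, N)) * theta * cn((trials, N)), axis=1)
    z2 = np.sum(cn((trials, N)) * theta * cn((trials, N)), axis=1)
    gamma1 = P * np.abs(z1)**2 / (sigma2 + omega)   # gamma_j = P_j |z|^2 / (sigma_j^2 + sigma_mj^2)
    gamma2 = P * np.abs(z2)**2 / (sigma2 + omega)
    p_out = np.mean((gamma1 < gamma_t) | (gamma2 < gamma_t))   # outage if either user fails
    print(f"rho = {rho_db:2d} dB: P_out ~ {p_out:.4f}")
```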
V. CONCLUSION

In this letter, we analyzed the PDF and CDF of the cascade channel in IRS-assisted two-way communication systems with an arbitrarily given IRS beamformer. The outage probability was derived in closed form under the assumption that the reflecting elements have continuous amplitudes and phase shifts. The numerical results show that the outage probability decreases significantly as the number of elements increases. Based on the outage probability derived in this letter, the joint optimization of the outage probability and other performance metrics, such as the sum-rate of two-way systems, can be further investigated.
What is the clinical evidence on psilocybin for the treatment of psychiatric disorders? A systematic review
Abstract Background: Psilocybin is a predominant agonist of 5HT1A and 5HT2A/C receptors and was first isolated in 1958, shortly before it became a controlled substance. Research on the potential therapeutic effects of this compound has recently re-emerged alongside what is being addressed as a psychedelic renaissance. Methods: In this paper we performed a systematic review of the clinical trials conducted so far regarding the therapeutic effects of psilocybin on psychiatric disorders. The eligibility criteria included clinical trials that assessed psilocybin's potential therapeutic effects in patients with psychiatric disorders. Nine hundred and seven articles were found and screened by title, from which 94 were screened by abstract, and 9 met the eligibility criteria and were included. Results: The published papers focused on 3 disorders: depression, obsessive-compulsive disorder (OCD) and substance use disorder (namely tobacco and alcohol). Psilocybin has shown a relatively safe profile and very promising results, with reductions found in most of the psychiatric rating scale scores. Research on depression showed the most solid evidence, supported by 3 randomized controlled trials. Studies on OCD and substance use disorder showed more limitations due to their open-label design. Conclusions: Altogether, the results from the studies reviewed in this paper suggest a substantial therapeutic potential. This calls for further research to confirm the results observed so far and further explain the underlying mechanisms.
Introduction
Psilocybin is a substituted indolealkylamine from the tryptamine group of compounds. Its main active metabolite, psilocin, is obtained after dephosphorylation of psilocybin in the intestinal mucosa. 1,2 Psilocybin administration can lead to changes in perception, derealization, depersonalization, impaired attention, thought content disorder, symptoms of anxiety or elation, and changes of intuition. 3 Psilocybin as well as psilocin are substances with predominant agonist activity on serotonin's 5HT1A and 5HT2A/C receptors, 2 the latter being considered necessary for the hallucinogenic effects. 4 Psilocybin can be found in over 100 species of mushrooms, many of them belonging to the genus Psilocybe, 5 and their use dates back more than 3500 years in Mexico. 6 However, psilocybin was first isolated only in 1958 by Albert Hofmann. 7 Shortly after, it became marketed as Indocybin; however, clinical research on psilocybin was scarce, limited to anecdotal case reports. 8,9 As a result of the increasing widespread use of psychedelics in the 1960s, psilocybin research stopped almost completely after the Controlled Substances Act classified it as a Schedule I substance. 10 Nevertheless, psychedelics, such as psilocybin, are one of the safest known classes of central nervous system drugs, showing a very low potential for addiction. 2,11,12 A pooled analysis 13 of 8 double-blind placebo-controlled studies with 110 healthy subjects receiving psilocybin suggested that the administration of moderate doses of psilocybin to healthy, high-functioning and well-prepared subjects in the context of a carefully monitored environment was associated with an acceptable level of risk. Given the risks associated with its use, Johnson and his colleagues have developed guidelines for research safety that include measures such as careful volunteer preparation and having a safe physical session environment. 12 After decades of suspension, human hallucinogen research has resumed in what is being addressed as a Psychedelic Renaissance. 14 In fact, by 2005, around 2000 subjects had undergone psychotherapy in clinical studies with psilocybin. 15 In this paper, we review all the clinical trials conducted so far on the potential therapeutic effects of psilocybin in patients with psychiatric disorders.
Methods
The search was last performed in the PubMed database on the 5th of April 2019 for the term "psilocybin" (search details: "psilocybin"[MeSH Terms] OR "psilocybin"[All Fields]). The eligibility criteria included clinical trials that assessed psilocybin's potential therapeutic effects in patients with psychiatric disorders. Studies were excluded if they were not in English, if an abstract was not available, if they included only healthy volunteers, if they did not assess therapeutic effects, or if they were secondary analyses. No further restrictions were imposed in regard to the type of intervention, outcomes, population of study, or comparators. Nine hundred and seven articles were found and screened by title, from which 94 were screened by abstract, and 9 met the eligibility criteria and were included (see Fig. 1). A summary of the included studies is presented in Table 1.
Depression
Anxiety and depression in life-threatening illness. In a randomized double-blind placebo-controlled crossover study, 16 12 patients with advanced-stage cancer and reactive anxiety were recruited. They underwent 2 sessions in a comfortable environment, spaced several weeks apart, during which they randomly ingested 0.2 mg/kg of psilocybin in 1 session and 250 mg of niacin in the other. The main goals were to establish cardiac safety [through blood pressure (BP) and heart rate (HR) control], evaluate the subjective experience during the sessions (with the 5-Dimensional Altered States of Consciousness Scale 17 ) and follow up with the Beck Depression Inventory (BDI), 18 the Profile of Mood States (POMS) 19 and the State-Trait Anxiety Inventory (STAI) 20 efficacy measures.
All 12 participants completed the 3-month follow-up, 11 completed the 4-month follow-up, and 8 completed the 6-month follow-up. The mean score for BDI decreased at the 6-month follow-up (BDI mean ∼7.4) in comparison to baseline (BDI mean = 16.1). The mean STAI Trait anxiety score was lower than baseline (mean STAI ∼43.0) after 1 month (mean STAI ∼34.0) and after 3 months (mean STAI ∼32.6). No significant changes were found in POMS mean scores. Compared to niacin, psilocybin increased the HR and BP (Table 2). Additionally, no adverse psychological effects were observed and all subjects tolerated the treatment sessions well. 16

Another randomized double-blind placebo-controlled crossover study 21 assessed the efficacy of a single dose of 0.3 mg/kg psilocybin vs 250 mg of niacin, both in combination with psychotherapy, to treat anxiety and depression in 29 patients with life-threatening cancer. The crossover occurred 7 weeks after dose 1 and patients were then followed for another 6.5 months. Primary outcome measures were assessed with self-reports on: the State (STAI-S) and Trait (STAI-T) subscales of the STAI; 20 the Hospital Anxiety and Depression Scale (HADS) 22 subscales of anxiety (HADS-A), depression (HADS-D) and total (HADS-T); and the BDI. 18 No serious adverse events were reported.
For the pre-crossover period, the psilocybin group showed statistically significant reductions on all 6 primary outcome measures at every time point, with large effect sizes (d ≥ 0.80). Similarly, at every time point after the second session, on every primary outcome measure, the psilocybin-first group reported significant within-group reductions in anxiety and depression scores. Clinically significant response rates were defined as a 50% or greater reduction in a score relative to baseline. Accordingly, at the 7-week time point post-dose 1, 83% of the subjects in the psilocybin-first group vs 14% in the niacin-first group met criteria for anti-depressant response (with BDI). Regarding anxiolytic response, for the same time point, 58% vs 14% (with HADS-A) and ∼75% vs ∼25% (with HADS-T) met criteria, favouring the psilocybin-first group. In addition, about 85% of the psilocybin-first group vs ∼15% of the control group (with BDI) met criteria for anti-depressant remission, defined as a 50% or greater reduction plus a HADS-D score of ≤ 7. 23

Table 2. Incidence of adverse effects attributable to psilocybin administration across the studies.

Thirdly, a randomized double-blind crossover study 27 compared the effects of a low dose (1 or 3 mg/70 kg) vs a high dose (22 or 30 mg/70 kg) of psilocybin in 51 patients with life-threatening cancer and symptoms of depression or anxiety. The doses were administered in 2 sessions, 5 weeks apart. Preparatory meetings were conducted to establish rapport. The drug sessions took place in a living-room-like environment where psychological support was available. The primary outcome measures chosen were the GRID-Hamilton Depression Rating Scale (GRID-HAM-D-17) 28 and the Structured Interview Guide for the Hamilton Anxiety Rating Scale (HAM-A). 29 A clinically significant response was defined as a ≥50% score decrease compared to baseline. Symptom remission was defined as a ≥50% reduction from baseline scores plus a score of ≤ 7 on either GRID-HAMD or HAM-A. 30,31 Fifteen secondary measures were also applied, including the BDI, 18 HADS 22 and STAI. 20 No serious adverse events were reported, although some adverse events were registered. Five weeks after session 1, 92% in the high-dose-first group vs 32% in the low-dose-first group showed clinically significant responses, according to GRID-HAMD-17 scores. For the same time point and outcome measure, 60% vs 16% of the participants showed symptom remission, favouring the high-dose-first group. According to the HAM-A scores, the high-dose-first group also presented larger response rates (76% vs 24%) and remission rates (52% vs 12%) compared to the low-dose-first group. Furthermore, MEQ30 26 scores were substantially correlated with score reductions in HADS Anxiety (−1.50), HADS Depression (−1.11), HADS Total (−2.62), and HAM-A (−3.93).
Treatment-resistant depression
An open-label study investigated the feasibility and efficacy of psilocybin administration alongside psychological support in treatment-resistant depression. 32 The study included 12 subjects with major depression of a moderate to severe degree (17+ on the 21-item Hamilton Depression Rating Scale [HAM-D]) and no improvement after treatment with 2 adequate courses of antidepressants from distinct pharmacological classes, lasting at least 6 weeks within the current depressive episode. Every subject took an initial low dose of 10 mg and a high dose of 25 mg of psilocybin, 7 days apart. Beforehand, a 4-hour preparatory session with psychiatrists was provided. Dosing sessions took place in a pre-decorated room and patients were encouraged to relax and listen to music while under supervision by 2 psychiatrists. To evaluate feasibility and safety, BP, HR, observer ratings of psilocybin's acute effects, as well as the revised 11-Dimension Altered States of Consciousness questionnaire (11D ASC), 33 were assessed. The primary outcome chosen for efficacy was the mean change in the severity of depressive symptoms, assessed with the QIDS, from baseline to 1 week after the high-dose session. Additionally, the HAM-D, Montgomery-Åsberg Depression Rating Scale (MADRS), 34 Global Assessment of Functioning (GAF), 35 Snaith-Hamilton Pleasure Scale (SHAPS) 36 and STAI-T 37 were assessed.
One patient reported deterioration during the 3-month follow-up. However, mean QIDS scores were significantly lower from baseline to 1, 2, 3, and 5 weeks and 3 months (final follow-up) after treatment. The maximum reduction was seen at 2 weeks, with a mean difference of −12.9 points compared to baseline. All patients showed reduced depression severity 1 week [mean BDI (SD) = 8.7 (8.4)] and 3 months [mean BDI (SD) = 15.2 (11.0)] after the high-dose session, in comparison to baseline [mean BDI (SD) = 33.7 (7.1)]. Taking into account the criteria for remission (BDI score ≤ 9), 8 patients achieved complete remission 1 week after treatment. Furthermore, 7 patients continued to meet criteria for response (defined as a 50% BDI score reduction vs baseline) 3 months after treatment, of whom 5 were still in remission at this point. STAI-T anxiety scores were significantly reduced 1 week after treatment [STAI-T mean score (SD) = 40.6 (14.

Later, a 6-month follow-up study 38 was conducted with the sample increased to 20 patients. Treatment with psilocybin was well tolerated and no serious adverse events were registered. Nineteen subjects completed all assessments. QIDS-SR16 scores were significantly decreased at all post-treatment time points, with the maximum effect size seen at 5 weeks (−9.2, Cohen's d = 2.3), compared to baseline. BDI scores were significantly lower at 1 week (mean reduction = −22.7), 3 months (mean reduction = −15.3) and 6 months post-treatment (mean reduction = −14.9); STAI-T anxiety scores were significantly reduced 1 week (mean reduction = −23.8), 3 months (mean reduction = −12.2) and 6 months after treatment (mean reduction = −14.8); SHAPS anhedonia scores decreased 1 week (mean reduction = −4.6) and 3 months after treatment (mean reduction = −3.3); HAM-D scores dropped 1 week post-treatment (mean reduction = −14.8); and GAF scores increased 1 week post-treatment (mean increase = +25.3). A significant reduction in the suicidality scores of the QIDS-SR16 was seen 1 week (mean reduction = −0.9), 2 weeks (mean reduction = −0.85), 3 weeks (mean reduction = −0.8) and 5 weeks post-treatment (mean reduction = −0.7). Similarly, reductions were observed on the suicide item of the HAM-D 1 week post-treatment (mean reduction = −0.95), at which point 16 of the 19 patients were scoring 0 and none showed an increase from baseline or reached the maximum score on this measure. After assessing relapse at 6 months, out of the 9 subjects who met response criteria at 5 weeks post-treatment, 3 had relapsed, according to the criterion of a QIDS score of 6 or more.
Obsessive-compulsive disorder (OCD)
In a modified double-blind study 39 designed to evaluate the safety, tolerability, and potential therapeutic effects of psilocybin in OCD, 9 subjects who met criteria for current OCD were recruited. Inclusion criteria included at least 1 "treatment failure", defined as a lack of significant improvement after an adequate treatment course with a serotonin reuptake inhibitor (SRI) for at least 12 weeks. Subjects were required to have tolerated well at least 1 prior exposure to indole-based psychedelics, and took psilocybin in a dose escalation protocol comprising very low (VLD), low (LD), medium (MD), and high (HD) doses of 25, 100, 200, and 300 µg/kg of body weight, respectively. The LD, MD, and HD were administered in that order, whereas the VLD was given randomly in a double-blind fashion at any session after the first dose (LD) session. During sessions, subjects wore eyeshades and listened to a standardized set of music in the presence of trained sitters. In order to evaluate obsessive-compulsive symptom severity, the Yale-Brown Obsessive Compulsive Scale (YBOCS) and a visual analog scale (VAS) were used. In addition, the Hallucinogen Rating Scale (HRS) 40 was assessed and vital signs were monitored.
Two subjects discontinued their participation after the first session (LD) due to discomfort with hospitalization. Regarding efficacy, a significant main effect of time on YBOCS scores was found, but no significant effect of dose or interaction of time and dose. Mean YBOCS scores immediately before (baseline) and 24 hours after psilocybin ingestion (T-24h) for each dose were as follows: 25 µg/kg (VLD), baseline = 18.29, T-24h = 11.14; 100 µg/kg (LD), baseline = 24.11, T-24h = 10.67; 200 µg/kg (MD), baseline = 19.57, T-24h = 11.00; 300 µg/kg (HD), baseline = 18.83, T-24h = 11.33. The combined baseline mean YBOCS scores, after stratification by dose groups, were significantly reduced 24 hours after administration, from a range of 18.3 to 24.1 to a range of 10.7 to 11.3. The comparison of baseline vs post-ingestion VAS scores for all doses combined was also statistically significant; however, no significant effect of time or dose was found on VAS scores for all doses combined. The HRS total score and each of its subscales, with the exception of volition, showed a statistically significant linear main effect of dose.
Substance use disorder
Alcohol

An open-label study 41 investigated the acute effects of psilocybin and its preliminary efficacy and safety in 10 subjects diagnosed with active alcohol dependence. Volunteers were required to have had at least 2 heavy drinking days (defined as ≥5 drinks/day for males and ≥4 drinks/day for females) in the past 30 days, not to be under treatment, and to express concern about their drinking habit. Participants underwent 14 sessions of manualized intervention, including 2 psilocybin sessions. The other sessions consisted of 7 motivational enhancement therapy sessions, 3 preparation sessions and 2 debriefing sessions. Participants had to be abstinent and not in alcohol withdrawal. The drug sessions took place in a living-room-like environment, in the presence of 2 therapists. On week 4, participants took 0.3 mg/kg of psilocybin and, on week 8, 0.4 mg/kg. To measure psilocybin's acute effects, the HRS Intensity subscale, 40 the 5D-ASC 17 (which includes the MEQ 26 ) and a Monitor Session Rating Form 42 were assessed. Moreover, after each psilocybin session, the Addiction Research Center Inventory (ARCI), 49-item version, 43 was evaluated. To evaluate substance use, the past-3-month version of the Short Inventory of Problems (SIP) 44 and Breath Alcohol Concentration (BAC) were assessed. The primary efficacy outcome measure was the change in percent heavy drinking days (%HDD) at baseline and at weeks 5 to 12, assessed with the Time-Line Follow-Back procedure. Heavy drinking days were defined as days during which participants consumed ≥5 standard drinks, if the participant was male, or ≥4 standard drinks, if the participant was female. A standard drink was defined as containing 14 g of alcohol. An analysis of the percent drinking days (%DD), defined as days during which participants consumed any amount of alcohol, was also performed. In addition, the following measures were assessed: the Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES 8A), 45 the Alcohol Abstinence Self-Efficacy Scale (AASE), 46 the Penn Alcohol Craving Scale (PACS) 47 and the Profile of Mood States (POMS). 19 Safety was controlled through vital signs.
One subject left the study with no justification provided, and their data were not used. No serious adverse events were reported. Mean percent heavy drinking days (%HDD) dropped during weeks 5 to 12 (∼9% HDD) compared to baseline (∼35% HDD) and compared to weeks 1 to 4 (∼27% HDD), during which only psychosocial treatment was provided. These %HDD reductions persisted from 13 to 24 weeks and from 25 to 36 weeks, compared to baseline values. The mean %HDD decrease observed at weeks 1 to 4 was not statistically significant in comparison to baseline. On the other hand, the mean %DD showed significant reductions at every time point when compared to baseline (∼42% DD). For this measure, the mean (SD) reduction at weeks 5 to 12 was 27.2 (23.7) vs baseline, and 21.9 (21.8) vs weeks 1 to 4. The reductions reported in both mean %HDD and mean %DD at weeks 1 to 4 were not significant in comparison to baseline. However, compared to weeks 1 to 4, these outcomes showed statistically substantial decreases at all time points. The only exception was %HDD at weeks 9 to 12, at which the reduction did not reach significance. Furthermore, significant correlations were found between measures of acute effects (HRS Intensity subscale, MEQ total and ASC summary score) and favorable changes regarding drinking, craving and self-efficacy (%HDD, %DD, PACS scores and AASE). In addition, higher scores on the HRS Intensity subscale (r = −0.76), 5D-ASC (r = −0.89), and MEQ (r = −0.85) were correlated with fewer heavy drinking days.
Tobacco
An open-label pilot study researched the safety and preliminary efficacy of psilocybin in treating tobacco addiction. 48 This study included 15 subjects who smoked a minimum of 10 cigarettes a day, had multiple unsuccessful quit attempts, and still showed a desire to stop smoking. In a 15-week treatment course, subjects attended weekly preparatory meetings in the first 4 weeks and took psilocybin on weeks 5, 7 and 13; the last session, however, was optional. A moderate dose (20 mg/70 kg) was administered in the first session and a high dose (30 mg/70 kg) was provided in the other 2 sessions; however, participants were allowed to repeat the moderate dose instead. A Target to Quit Date (TQD) was set for the same day as the first psilocybin session. Participants underwent cognitive behavior therapy as well as preparation for the psilocybin sessions. Treatment also included 2 components of an effective group-based smoking cessation therapy. 49 In addition, integration meetings and weekly support meetings were conducted after the TQD for 10 weeks. Short daily phone calls were made to participants for 2 weeks post-TQD to encourage abstinence. The primary efficacy outcome measures were the self-reported Timeline Follow-Back (TLFB), 50 for the retrospective number of cigarettes smoked per day, and the biological markers exhaled carbon monoxide (CO) and urinary cotinine levels. Safety was evaluated through BP and HR monitoring during psilocybin sessions. Additionally, post-session States of Consciousness Questionnaire 42,51 ratings of acute adverse psychological effects, next-day headache ratings, and Visual Effects Questionnaire data were assessed.
Three patients did not undergo the third session and 1 participant chose the moderate dose for the second psilocybin session; the remaining participants followed the default doses for each session. No clinically significant adverse events occurred. According to the TLFB, and confirmed by CO and urinary cotinine measures, 12 of the 15 participants (80%) showed 7-day point prevalence abstinence at the 6-month follow-up. One of these 12 participants self-reported quitting on the TQD and was biologically verified as abstinent at all attended meetings, but was unable to attend the third psilocybin session or provide CO and urine samples for weeks 6 to 10 post-TQD. The other 11 participants self-reported quitting smoking on the TQD and showed biologically confirmed smoking abstinence up to 10 weeks post-TQD. Three of these 12 patients reported self-corrected lapses (defined as any discrete instances of smoking post-TQD) and 1 participant reported a relapse (defined as smoking on 7 or more consecutive days post-TQD 52 ) after 13 weeks of continuous abstinence. Further analysis showed substantial reductions in mean self-reported daily smoking: ∼15 cigarettes a day at intake vs ∼3 at the 6-month follow-up.
For the 3 participants who tested positive for smoking at the 6-month follow-up, an analysis of their TLFB assessments also revealed a significant reduction, from a reported mean of 20 cigarettes/day at intake to 14 at the 6-month follow-up. Post-hoc testing for linear contrast showed significantly higher confidence to abstain from smoking, from the time of administration to the 6-month follow-up, as well as significantly reduced craving and temptation to smoke across all time points.
Later, a long-term follow-up 53 was conducted on the same 15 subjects. The primary outcome measures chosen were CO, urinary cotinine, the TLFB 50 and a persisting effects questionnaire. Out of the 15 subjects, 3 did not complete the long-term follow-up (at a mean of 30 months post-TQD) and were confirmed as daily smokers at the 12-month follow-up. At the 12-month follow-up, 10 (67%) participants were biologically verified as smoking abstinent, of whom 8 self-reported continuous abstinence since the TQD. At the long-term follow-up, 9 (60%) participants were biologically confirmed as smoking abstinent, of whom 7 reported continuous abstinence since their TQD. Furthermore, statistically significant reductions in the self-reported TLFB were observed, in comparison to intake, at 10 weeks, 6 months, 12 months, and at a mean of 30 months (long-term follow-up) post-TQD, as follows: from a mean (SD) of 16.5 (4.3) cigarettes per day (CPD) at study intake, to 1.4 (3.8) CPD at 10 weeks; 2.7 (5.5) CPD at 6 months; 3.3 (6.5) CPD at 12 months; and 4.3 (6.6) CPD at the long-term follow-up.
Discussion
All studies included in this review have suggested that psilocybin has a favorable safety profile, being well tolerated in general (see Table 2). The most common adverse events reported were transient hypertension, anxiety, nausea, and headaches, which were limited to the experimental sessions in the majority of cases. These events are in accordance with previous reports. 13,42,54,55 The occurrence of adverse events was more often reported with higher doses of psilocybin 27 ; however, across all studies, there were no reports of serious adverse events, and all adverse events were readily managed by the staff without the need for pharmacological intervention. The safety of psilocybin use is conditioned mainly by the individual's expectations and the surrounding environment, which explains the wide range of subjective effects 4 and the concern about the conditions under which drug sessions were conducted in many of the studies cited.
Regarding results on depression and anxiety symptomatology, 4 studies 16,21,27,32 consistently showed immediate and enduring antidepressant and anxiolytic effects. Notably, results have shown BDI score reductions lasting as long as 6 months 16 and antidepressant response and remission rates reaching as high as 83% and 85%, respectively (defined by BDI scores), 7 weeks following a single dose of psilocybin. 21 Significant reductions were also reported on other depression measures such as the QIDS, GRID-HAMD-17 and MADRS. Furthermore, significant evidence for anxiety symptom reduction was also observed on scales such as the STAI and HAM-A, with reports of 76% response and 52% remission rates, respectively 27 (defined by HAM-A scores). Regarding OCD, 1 open-label study found symptomatic relief and significant reductions in YBOCS scores. 39,56-58 On tobacco addiction, 2 open-label studies regarding the same experiment showed abstinence in 80% of the subjects after 3 months and in 67% after 9 months from the final dose session. In addition, 60% were biologically verified as abstinent roughly 27 months after the last dose session. Nevertheless, these 2 studies did not differentiate the effects of moderate and high doses. In regard to alcohol abuse, the open-label study reported significant reductions in the percentage of heavy drinking days and the percentage of drinking days for up to 28 weeks. However, it was not able to differentiate the effects of the 2 doses used. Although the level and quality of evidence for psilocybin in substance use disorder are low, the preliminary results published so far are promising and have motivated researchers to conduct larger controlled trials: a 50-participant study on psilocybin-facilitated smoking cessation treatment (NCT01943994), a 180-participant study on psilocybin-assisted treatment of alcohol dependence (NCT02061293), the first study on psilocybin-facilitated treatment for cocaine use (NCT02037126), and multiple studies on depression (NCT03775200, NCT03429075, NCT03715127, NCT03181529, NCT03380442, NCT03554174, NCT03866174) and OCD (NCT03356483, NCT03300947). Overall, the results obtained across studies are impressive, given that a few administrations show long-lasting effects, well beyond the time course of the acute drug effects. Moreover, the studies on different disorders have coherently shown a significant therapeutic effect.
The mechanisms underlying the effects measured are, however, yet to be confirmed, and several explanations have been proposed. Given that the agonism of psilocybin at the 5-HT2A receptor is well established, there is evidence supporting that it plays a role in the therapeutic effects reported, especially in depression. Some authors 59 propose that one of the mechanisms through which psilocybin improves depression symptoms is by blocking the activity of inflammatory cytokines, namely TNF-α, whose levels have been found to be significantly higher in depressed patients. 60 Research with fMRI in healthy volunteers after psilocybin administration has shown reduced activity in the medial prefrontal cortex (PFC) and decreased connectivity within the default mode network (DMN). 61,62 This becomes more relevant given that depressive symptoms have been associated with increased activity in the medial PFC 63,64 and that normalization of medial PFC activity has been shown with antidepressant treatment. 65-67 In the most recent paper regarding fMRI studies in patients with treatment-resistant depression under psilocybin treatment, 68 increased DMN connectivity was observed 1 day after psilocybin administration. The authors proposed that, after psilocybin administration, DMN connectivity is reduced acutely and normalized afterwards alongside mood improvements, in a sort of "reset" mechanism. In the same study, 68 a significant relationship was observed between reductions in amygdala cerebral blood flow (CBF) and reductions in depression symptomatology after psilocybin administration. Moreover, psycho-spiritual mechanisms have been proposed and explored before, 51,69 as well as in several studies 21,27,41,53 included in this review. Significant correlations have been found between mystical-type experiences and the outcomes. These mystical-type experiences are defined by feelings of positive mood, sacredness, a noetic quality, transcendence of time and space, and ineffability. 42 In one of the trials, 27 the correlations were still significant when the overall intensity of psilocybin effects was controlled for in a partial correlation analysis, suggesting that mystical-type experiences per se play an important role, independent of the overall intensity of psilocybin effects. A mediation analysis also suggested that mystical-type experiences mediated the therapeutic effects of psilocybin.
Regarding limitations, generalizability is limited, since this review includes 6 open-label studies. When it comes to OCD, the results of the single open-label study 39 should be analyzed carefully. Because even the lowest dose of psilocybin produced significant symptom reduction, either there is a placebo effect present that cannot be measured for lack of a true placebo, or psilocybin can be effective at such a low dose. Thus, further research on the effects of psilocybin in this disorder should try to clarify this by comparing it with a true placebo or a non-psychedelic active comparator. Furthermore, the authors could not find a clear dose-response relationship for the change in YBOCS score, nor a correlation between YBOCS score reduction and the perceived psychedelic intensity. Additionally, the dose escalation protocol, which was also used in the tobacco addiction study, may have contributed to expectancy bias in both subjects and staff. Moreover, the need to stay at the hospital overnight may have introduced bias by selecting patients who could tolerate hospitalization. In the alcohol dependence study, 41 biological verification of alcohol use, had it been conducted, would have decreased the risk of bias associated with self-report-only measures. Another limitation refers to the significant number of subjects across studies who reported previous psychedelic use, which contributes to expectancy bias. On the other hand, because psilocybin produces highly discriminable effects, blinding becomes a challenge, thus increasing the risk of bias. The best option seems to be the use of an inactive low dose of psilocybin, as it showed some protection against monitor expectancy and preserves the benefit of the instruction that psilocybin is going to be administered in each session. 27 Additionally, because psilocybin administration was accompanied by psychological support in most studies, it is not possible to make strong inferences regarding the extent of the effects. Nevertheless, a synergistic interaction between the psilocybin administration and the psychological support is likely and should be explored.
Further trials with larger samples should be conducted to confirm the results found so far, including research on the mechanisms underlying the effects reported. Altogether, the results from the studies reviewed in this paper suggest a very promising therapeutic potential for psilocybin. The results obtained so far, alongside the need for more effective psychiatric treatments, justify a call for further research.
Zooming into lipid droplet biology through the lens of electron microscopy
Electron microscopy (EM), in its various flavors, has significantly contributed to our understanding of lipid droplets (LD) as central organelles in cellular metabolism. For example, EM has illuminated that LDs, in contrast to all other cellular organelles, are uniquely enclosed by a single phospholipid monolayer, revealed the architecture of LD contact sites with different organelles, and provided near‐atomic resolution maps of key enzymes that regulate neutral lipid biosynthesis and LD biogenesis. In this review, we first provide a brief history of pivotal findings in LD biology unveiled through the lens of an electron microscope. We describe the main EM techniques used in the context of LD research and discuss their current capabilities and limitations, thereby providing a foundation for utilizing suitable EM methodology to address LD‐related questions with sufficient level of structural preservation, detail, and resolution. Finally, we highlight examples where EM has recently been and is expected to be instrumental in expanding the frontiers of LD biology.
Historic perspective of EM imaging in LD research
Since the first electron microscopes became available in the 1940s to researchers in the Life Sciences [1], cells of various origins have been imaged. The first micrographs of cultured cells already pictured LDs, albeit at resolutions comparable with modern light microscopy (Fig. 1A) [2]. Based on such early EM observations, and despite important improvements in EM sample preparation that allowed for better lipid preservation over the decades that followed (Fig. 1B) [3,4], cellular LDs were long considered to be passive lipid inclusions, described to have 'no bounding membrane and appear to be held together by their hydrophobic interaction with the aqueous environment' [5]. In fact, their lack of a delimiting membrane was considered a 'distinguishing feature' [5]. However, in the early 1990s, LD-specific proteins, perilipins, were discovered [6]. EM revealed that LD-targeted proteins localize to the LD surface (Fig. 1C,D), which exhibits a single, electron-opaque line. Freeze-fracture EM suggested an occasional membrane continuity between the endoplasmic reticulum (ER) and LDs [7].
These observations led to the notion that the surface layer of LDs may be a 'specialized area of the endoplasmic reticulum membrane leaflet' [6] or a 'novel membrane domain' [8]. The ultimate proof that LDs are truly surrounded by a monolayer of phospholipids (as opposed to the bilayers bounding all other cellular organelles) was provided in 2002 by cryogenic-EM images of isolated LDs (Fig. 1E) [9]. Interestingly, plant cytologists had already proposed in 1972 that LDs or 'intracellular oil-containing particles', called 'spherosomes' in peanuts, show the presence of an 'atypical, single-line [...] biological membranes that correspond to half unit-membranes [...] whose polar surfaces face the hyaloplasm and whose lipoidal nonpolar surfaces contact internal storage lipid' [10]. These fundamental discoveries enabled by EM were instrumental in shaping the definition and our current understanding of oil bodies, oil droplets, spherosomes, oleosomes, and adiposomes, unified today under the common name of LDs [11].
Overview of EM methods in LD research
Versatility of EM imaging techniques
EM offers high resolution, down to the Angstrom range, owing to the extremely short wavelength of electrons, but at the cost of having to image the biological object in a high vacuum chamber (to prevent scattering of the electrons by air molecules). Most traditional EM methods therefore deal with dehydrated specimens, typically treated with chemical fixatives, which are meant to retain cellular ultrastructure but can also cause severe alterations. EM also comes in different flavors. Depending on the biological question at hand, the EM imaging method should be carefully selected based on considerations that include: (a) the required size of the imaging area, (b) the desired resolution, (c) the level of preservation of cellular features, (d) the compromise between 2D high throughput imaging and comprehensive 3D visualization, and (e) the available equipment and expertise. Specifically, in relation to the study of LDs by EM, the preservation of lipids in the sample further requires special considerations [12], which we discuss in more detail in Section "EM sample preparation specifics for LDs". We provide an overview of EM imaging and related preparation methods in the context of LD research, their current capabilities, and their limitations in Table 1.
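To make the wavelength argument concrete, a worked example can be added here under an assumed accelerating voltage of 300 kV, a common value for modern TEMs that is not taken from the text above:

```latex
% Relativistically corrected de Broglie wavelength of an electron
% accelerated through a potential V:
\lambda \;=\; \frac{h}{\sqrt{\,2 m_e e V \left(1 + \dfrac{eV}{2 m_e c^{2}}\right)}}
\qquad\Longrightarrow\qquad
\lambda(300\ \mathrm{kV}) \approx 1.97\ \mathrm{pm} \approx 0.02\ \text{\AA}.
```

In practice, the attainable resolution is limited by lens aberrations and, for biological material, by preservation and radiation damage rather than by the electron wavelength itself.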
As a general rule, room temperature (RT) EM procedures start with live, hydrated specimens and end with water-free, heavy-metal-stained, and resin-embedded material [5]. To gain a general understanding of LD sizes, numbers, and the immediate cellular context of surrounding organelles, conventional transmission EM (TEM) imaging of thin sections (typically of 70 nm thickness, generated in an ultramicrotome by mechanical sectioning with a diamond knife) represents a good choice [13-16]. However, being 2D projections, they lack 3D volumetric information. For high-resolution 3D imaging of LDs, TEM tomography of thick sections (200-300 nm) is more suitable, albeit with the data acquisition and processing being more time-consuming (Fig. 1B,F) [17-19]. This technique can also be applied to consecutive serial sections (serial section TEM), which significantly increases the contextual information and volume covered, but further reduces throughput when considering the need to investigate multiple samples across different conditions [17,18,20-23]. The resolution in these TEM techniques is in the order of a few nanometers, whereas the resolution in the z direction (parallel to the beam direction) in the 3D reconstructed volumes from tomography is further distorted by the incomplete angular sampling, resulting in an anisotropic 3D representation of the specimen [24]. The signal-to-noise in such data is dictated by the thickness of the section, but ultimately, the apparent nanometer scale resolution is a result of the limited structural preservation in traditional RT preparations (detailed below) and the fact that the signal comes from heavy metal staining of the cellular structures rather than from the biomolecules themselves.
For the 3D analysis of entire cells or tissue subregions, a number of recently developed methods allow for automatic sectioning and scanning EM (SEM) of large volumes, ranging from 100s to 100 000s of µm³ [25,26]. In array tomography, ultra-thin sections are obtained using an ultramicrotome and deposited manually on a slide or automatically on tape using ATUM (Automatic Tape-collecting Ultra-Microtome) to be imaged by SEM [27]. In serial block face SEM (SBEM) [28-31] and focused ion beam SEM (FIB-SEM) (Fig. 1G) [32-35], the sample is sequentially imaged by SEM followed by repeated material removal by an ultramicrotome knife or a FIB column integrated inside the SEM chamber, respectively. Currently available knife-based approaches are limited in z resolution, dictated by slicing thickness, to ~30 nm, whereas ion beam ablation can remove material with 5 nm precision [24]. Furthermore, it is important to note that while the theoretical resolution in a TEM is in the Angstrom range, the SEM image resolution falls in the range of a few nanometers. Therefore, the maximal resolution of 5 nm achievable with these volume EM techniques precludes fine ultrastructural information on, e.g., the details of direct LD-organelle interactions, but parameters such as LD number, cellular localization, and clustering can be precisely analyzed in relatively large volumes (Fig. 1G).
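To give a rough feel for the throughput trade-offs described above, the following sketch estimates the number of sections and the raw data volume for a knife-based volume EM acquisition; the cell dimensions, pixel size, slice thickness, and bit depth are illustrative assumptions rather than values from any cited study.

```python
# Rough acquisition estimate for a knife-based volume EM (SBEM-style) run.
# All input values below are illustrative assumptions.

def volume_em_estimate(xy_extent_um=20.0,   # lateral field of view (micrometers)
                       depth_um=10.0,       # depth to be sectioned (micrometers)
                       pixel_nm=10.0,       # XY pixel size (nanometers)
                       slice_nm=30.0,       # slice (Z step) thickness (nanometers)
                       bytes_per_pixel=1):  # 8-bit detector readout
    n_slices = int(depth_um * 1000 / slice_nm)
    pixels_per_slice = (xy_extent_um * 1000 / pixel_nm) ** 2
    total_bytes = n_slices * pixels_per_slice * bytes_per_pixel
    return n_slices, total_bytes / 1e9  # number of slices, gigabytes

if __name__ == "__main__":
    slices, gb = volume_em_estimate()
    print(f"~{slices} slices, ~{gb:.1f} GB of raw data")
    # A 20 x 20 x 10 micrometer volume at 10 nm pixels and 30 nm slices gives
    # ~333 slices and ~1.3 GB, before any tiling, overviews, or re-imaging.
```

Real acquisitions add tiling, overview imaging, and re-imaging overhead, so actual data volumes and session times are usually larger.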
Another flavor of EM that has been successfully used for visualizing LDs is freeze-fracture EM, wherein a frozen (multi)cellular specimen is mechanically fractured to expose the cell interior. Here, fractures in frozen biological specimens commonly and spontaneously occur along the membrane plane. After further processing, the specimen can be imaged in TEM (by generating a metal replica of the fracture surface), SEM (following sublimation to achieve dehydration), or directly in a SEM equipped with a cryogenic stage (cryo-SEM). In combination with immunogold labeling of specific proteins on the metal replica (Fig. 1C), this technique provides detailed insight on LD membrane topology and protein localization [7,36-39].
While RT EM techniques provide diverse means to investigate LDs, emerging cryogenic EM techniques allow the visualization of frozen-hydrated biological material preserved in a near-native state. Cryo-FIB-SEM volume imaging is similar to its RT counterpart in terms of its ability to image multicellular specimens in 3D, but typically suffers from somewhat poor contrast and resolution in comparison to its RT parallels that employ heavy-metal staining combined with backscattered electron detection [40]. In cryo-FIB-SEM, charging of the nonconductive frozen-hydrated sample, which is particularly pronounced at the neutral lipid-rich LDs, significantly decreases imaging quality (Fig. 2A). Nevertheless, cellular LDs are typically well distinguishable, suggesting that cryo-FIB-SEM has the potential to become a useful technique for their study in complex scenarios and tissues [41].
Table 1 (EM methods and notes):

Conventional plastic-embedded RT EM (requires fixation, dehydration, staining, and resin embedding):
- 2D TEM [13,14,16,65,87]: thin sections; high throughput; XY resolution of a few nm.
- TEM tomography [17-19]: thick sections; 3D information; laborious data acquisition and analysis; XYZ resolution of a few nm.
- Serial section TEM and tomography [17,18,20-23]: good resolution in XYZ for larger volumes; laborious data acquisition and analysis.
- Pre-embedding immunolabeling EM [7,30,149]: good localization precision of proteins of interest; requires good antibodies.
- On-section CLEM [104]: direct correlation of fluorescence and TEM on the embedded material; good localization precision of proteins of interest; requires genetic engineering when using a fluorescently tagged target.
- Genetically encoded tags (e.g. Apex, Apex2) [84,101,102]: good localization in TEM; requires genetic engineering and is not readily suitable for low abundance targets.

Alternative RT 2D EM sample preparation:
- HPF and freeze substitution [61,66,98]: often better structural preservation of LDs in comparison to chemical fixation at room temperature, but some protocols result in low membrane contrast.
- Freeze-fracture EM [7,37,39]: especially suited for questions related to the fine structure of membranes; combinable with immunolabeling in replica preparations.
- Tokuyasu method [8,97,150]: good membrane visibility and antibody labeling sensitivity.

RT volumetric EM (3D imaging of large volumes, from cells to tissues; a compromise between acquisition time, area, and level of detail; LD numbers, sizes, localization, and juxtaposition with other organelles can be analyzed for large volumes, but with more limited detail/resolution compared to TEM):
- Array tomography [27]: useful for large volumes (e.g. multiple cells); XY resolution of 5-20 nm; Z resolution dictated by the section thickness (~30-70 nm).
- Serial block face SEM [30]: useful for large volumes (e.g. multiple cells); XY resolution of 10-20 nm; Z resolution dictated by the thickness of the sectioned area (~30 nm).
- RT focused ion beam SEM [32]: typically used for volumes of one cell; isotropic XYZ pixel size with a maximum resolution of ~5 × 5 × 5 nm.

Cryo-EM (optimal structural preservation of hydrated samples, close to the native state; fine details, such as LD core structural features and molecular tethers, can be directly observed):
- In situ cryo-ET [46,47,76]: high resolution in XY in the nm range; currently low throughput due to non-routine multistep sample preparation.
- Cryo-FIB-SEM [40,151,152]: imaging still requires improvement due to charging-related artifacts and a low signal-to-noise ratio; 3D reconstruction of large volumes (whole single or multiple cells) is possible.

Cryo-FIB has also been refined over the last decade as a minimally perturbing preparation method for single-cell cultures or suspensions for cellular cryogenic transmission electron microscopy and tomography (cryo-EM/ET). Cells frozen on EM grids are thinned using cryo-FIB-assisted micromachining at a shallow angle to generate cellular sections of 200-300 nm thickness, called lamellae, that remain attached to the biological material on the grid, and the entire grid is then transferred to the TEM for cryo-EM/ET [42-44]. This technique preserves LDs in their native milieu in a potentially unaltered state, allowing imaging at sub-nm resolution (Fig. 2B,C) [45-47]. Cellular cryo-ET is currently relatively low-throughput and requires expensive equipment not commonly available in EM laboratories.
However, recent advances in instrumentation [48,49], automation of sample thinning and data acquisition [41,50,51], as well as computational analysis [52,53], have begun to broaden the scope and breadth of applications of cellular cryo-ET [54]. LDs are, on the one hand, easily distinguishable in cryo-TEM and cryo-FIB-SEM imaging of unstained specimens. On the other hand, increased LD abundance (e.g. in cells with high LD levels, such as adipocytes, or after lipid loading) introduces challenges during lamella preparation; the high density of material in LDs, compared to the surrounding cytoplasm, significantly extends the FIB ablation time and can lead to 'curtaining artifacts', which negatively influence lamella quality and the subsequent cryo-TEM data quality [41].
EM sample preparation specifics for LDs
Biological specimens aimed at RT EM must undergo procedures consisting of several major steps: (a) fixation, (b) staining, (c) dehydration, and (d) embedding. Each of these steps can be optimized to improve LD visualization.
Fixation immobilizes and arrests all cellular processes, making cellular structures rigid and stable for the following steps. Fixation can be chemical or physical (i.e. cryopreservation; detailed below). For chemical fixation, paraformaldehyde (PFA), glutaraldehyde (GA), or osmium tetroxide (Os) are most commonly used. PFA and GA cross-link mainly proteins and nucleic acids, while lipids and membranes are stabilized to a lesser extent [5]. To prevent the washing away of unfixed LDs by organic solvent during the following dehydration steps, the use of Os is crucial. Os reacts primarily with lipids and their unsaturated hydrocarbon bonds, and protects LDs from extraction during dehydration. It has been shown that the use of Os in multiple steps, first together with GA at the initial fixation and later alternately with thiocarbohydrazide [3,55] at the staining stage, significantly protects LDs from extraction. Moreover, the addition of malachite green to the primary fixative enhances LD staining [56,57].
Fixation is typically followed by the dehydration step, wherein all the water in the sample is gradually substituted with an organic solvent. This step is critical, and LDs, especially their hydrophobic cores, are often dissolved and washed away by commonly used solvents like acetone and ethanol [58]. To minimize this unwanted effect, it is vital to optimize the preceding fixation step as described above, but it is important to note that acetone is a more powerful solvent for lipids than ethanol. Thus, the use of ethanol in LD EM studies is recommended over acetone for dehydration. At the embedding stage, omitting the transitional solvent, typically propylene oxide, is beneficial. No significant benefits from specific resins have been reported for LD preservation. Before imaging, sections may be additionally stained with lead citrate and uranyl acetate to enhance LD visibility.
RT fixation often induces LD deformation (Fig. 1F). This can be circumvented using cryopreservation or vitrification (Table 1). Here, molecules are immobilized by transferring the specimen to cryogenic temperature (below −143 °C) rapidly enough to avoid the formation of ice crystals [59], which would be detrimental to fine cellular structure. Cells up to 10 µm thickness can be seeded or deposited on EM grids and plunge-frozen into a cryogenic liquid, such as liquid ethane. Thicker cells, small multicellular organisms, and up to 200 µm-thick tissue sections can be frozen by high-pressure freezing (HPF), where the sample is cooled while being pressurized to 2100 bar. Cryopreservation, followed by the slow introduction of the fixatives, stains, and dehydration at sub-zero temperatures (−90 to −20 °C) (freeze substitution) where thermal motion is minimal [60], substantially mitigates large-scale structural distortions and artifacts like the LD and membrane deformations observed in RT fixation (see Fig. 1B vs. Fig. 1F) [17,61]. Importantly, both HPF-freeze substitution and RT preparations very often result in significant removal of the LD core lipids, the extent of which is a result of a multifactor combination of the type of biological material, and the form and timing of the fixatives, stains, and solvents introduced at specific temperatures. The surface phospholipid monolayer, with the embedded proteins, may remain intact and can be used as a labeling target with specific antibodies. Alternatively, cryopreserved samples can be imaged directly in their frozen-hydrated state using cryo-FIB-SEM and in situ cryo-EM/ET, as described in the previous section, which allow for more pristine sample preservation compared to RT approaches (Fig. 2A-C). These preparations, however, preclude the use of specific immunolabeling. In summary, due to their high lipid content, LDs are exceptionally sensitive organelles to RT EM sample preparation, and special care must be taken to preserve their presence and structure.
Correlative EM methods for LD research
Since most EM methods only allow detailed imaging of a small fraction of the entire specimen, it is often challenging to capture specific biological events such as LD-organelle interactions, or even LDs altogether if LD abundance is low [46]. To increase the success rate in specific LD targeting and visualization by EM, correlative light and EM (CLEM) approaches are instrumental. In a typical CLEM experiment, specimens are imaged using fluorescence microscopy after fixation, and this information is used for site-specific targeting (pre-embedding CLEM). Correlation can also be performed after embedding, but it requires sample processing that takes into account minimal heavy metal staining to preserve the fluorescence signal that is used to guide the subsequent EM acquisition [62-64]. For example, cells may be grown on specialized finder EM grids or sapphire discs with engraved markings that can be located in both light microscopy and EM, allowing the electron microscope to target the cell-of-interest. Fluorescence imaging can be performed on the embedded resin block or at the level of thin sections (on-section CLEM). In such workflows, commonly available neutral lipid stains such as Bodipy, fluorescently labeled lipids, or expression of fluorescently tagged LD-resident proteins can be used to guide the selection of specific targets [65]. It is of note that neutral lipid stains suffer from the same pitfalls in the fixation and dehydration steps as do lipids, owing to their similar hydrophobic nature.
To increase localization precision in the TEM, pre-embedding immuno-EM with gold-particle-conjugated antibodies or immunoperoxidase, which are detectable in the EM due to the high scattering power of the metal, is also possible [66,67]. However, these protocols, similar to immunolabeling for light microscopy experiments, usually require permeabilization to facilitate reagent entry into cells, which will inevitably somewhat compromise structural integrity [68,69]. To avoid permeabilization, Tokuyasu (Fig. 1D) and post-embedding on-section CLEM are viable alternatives. The Tokuyasu technique (cryosectioned, solvent- and resin-free) shows excellent antibody specificity and a high signal-to-noise ratio [8,70]. However, it is also not free from structural artifacts, and LDs require special considerations, like the use of uranyl acetate and osmium tetroxide to preserve and visualize LDs (Os is incompatible with immunocytochemistry) [12]. In on-section CLEM, a high-pressure frozen and freeze-substituted sample is embedded in a specific hydrophobic resin that preserves the immunoreactivity of epitopes, permitting antibody labeling and light microscopy imaging on the section [62]. In Apex2-EM, a peroxidase-derived genetic tag is expressed in live cells, allowing for signal development after fixation by reaction with DAB (3,3′-diaminobenzidine) and H2O2, enhanced by osmium tetroxide, resulting in electron-dense contrast agent deposition at the tagged protein location [71,72]. Moreover, pretreatment of cells with the unsaturated fatty acid DHA (docosahexaenoic acid), which is incorporated into LDs, increases reactivity with Os and enhances LD contrast [73].
In cryo-CLEM workflows, cryopreserved cells may be imaged before or after cryo-FIB micromachining (when cell thinning is required) using widefield or confocal fluorescence microscopes equipped with cryogenic stages [74-77] and a similar palette of fluorescent molecules as in RT approaches [33,78]. Immunolabeling and other chemical reactions deteriorate the pristine structural preservation offered by cryogenic approaches, and cryogenic alternatives for the specific labeling of macromolecules-of-interest in cellular cryotomograms are being actively developed [79,80].
In summary, to increase the success rate in specific LD-targeting and visualization by EM, it is worth considering correlative strategies already at the sample preparation stage.
Characterization of LDs at the organelle and sub-organelle level by EM
With this versatile range of EM methodologies, we proceed to highlight some examples where EM has played a pivotal role in broadening our understanding of LD biology at different scales of spatial resolution.
LD lifecycle
EM has offered important snapshots into the different stages of the LD life cycle, from biogenesis to breakdown. During LD formation, neutral lipids are speculated to form phase-separated lenses in the ER membrane, followed by their budding into the cytoplasm [81,82]. Putative neutral lipid lenses of 30 to 60 nm diameter and enclosed within the ER bilayer have been observed in yeast cells in HPF, freeze-substituted, resin-embedded sections imaged by RT tomography (Fig. 1H) [83]. RT EM of chemically fixed specimens has also been used to visualize early LD formation in mammalian cells, with observations of putative nascent LD structures that are 30 to 100 nm in diameter with thread-like bridges to the ER [22], or papillary ER protrusions and vesicular structures closely associated with the ER but lacking a distinct neutral lipid core [84]. ER phospholipid composition and membrane asymmetry are key factors contributing to LD budding, and EM has allowed for the direct visualization of aberrantly ER-enwrapped LDs upon manipulation of these conditions [85,86]. Despite these intriguing observations, the detailed membrane architecture at early LD biogenesis remains unclear, mainly due to the small size and expected metastability of early LD intermediates, rendering them difficult to preserve and observe using traditional RT EM techniques.
EM has also offered fascinating glimpses into LD degradation, including snapshots of direct interactions of LDs with autophagosomal membranes [87], piecemeal engulfment of LD constituents directly into lysosomes in human hepatocytes (Fig. 1I) [88], and detailed characterization of LD-vacuole interactions in yeast [89-91]. In situ cryo-EM has further revealed surprising dynamic alterations in the structural arrangement of LD core lipids upon mobilization of triglycerides for membrane synthesis or energy production, leading to increased cholesterol ester levels and their phase transition into a crystalline organization in LDs (Fig. 2C) [45,46]. Freeze-fracture EM of cells incubated with cholesterol-rich lipoproteins has similarly hinted at structuring within LDs, showing the presence of multiple lamellae enclosing amorphous areas in the LD cores [92]. Dynamic fatty acid fluxes have been investigated using EM, with the DHA-enhanced neutral lipid contrast allowing for analysis of lipid incorporation into pre-existing LDs, revealing that most pre-existing LDs can attain newly synthesized neutral lipids, presumably from the ER [73].
EM of LD-organelle interactions
Owing to the unique advantages of heavy metals as a general stain for biomolecules, or cryo-EM as a label-free imaging method, EM offers a holistic view of LDs in their native milieu. This has expedited the discovery and characterization of many LD-organelle interactions. Indeed, the special relationship between LDs and their mother organelle, the ER, was already evident in early EM studies [7]. There appear to be at least two main morphological categories of ER-LD contact sites [95]: (a) discrete ER-LD necks with membrane continuity that are regulated by seipin (Fig. 1K) [19,22,66,96-98], and (b) more extensive, egg-in-a-cup-like contact sites, where LD and ribosome-free ER membranes are closely apposed but lack direct membrane continuity (Fig. 1B) [39,61,99]. ER-LD necks may be a consequence of LD biogenesis occurring at the seipin foci [100], whereas the more extensive contact sites may be formed by ER-LD tethering proteins such as Rab18, Snx14, VPS13, and MOSPD2, possibly functioning in LD expansion [101-104]. RT EM has provided direct observations of proteinaceous ER-LD tethers [18,22], paving the way for future work to decipher the in situ structures of such tethers, similarly to those recently elucidated for the tunnel-like lipid transport protein VPS13 at ER-lysosomal contact sites [105,106]. It should be noted that, due to the ubiquitous nature of the ER, detailed investigations of ER-LD contact sites truly necessitate the enhanced resolution offered by EM. Importantly, EM has been instrumental in unambiguously visualizing the membrane continuities between these two organelles.
Early EM studies also noted the high prevalence of LD-mitochondria contacts in differentiating adipocytes (Fig. 1B) [107]. More recent studies with in situ cryo-ET are starting to unravel the molecular machinery acting at these membrane contact sites (Fig. 2B) [108-113], including direct visualizations of molecular tethers [45]. An emerging theme is that mitochondria associated with LDs may have distinct bioenergetic and enzymatic properties compared to other mitochondria, possibly fine-tuned to promote LD expansion or breakdown as dictated by the metabolic needs of the cell. EM has indeed been instrumental in quantifying LD-mitochondria interactions under various metabolic conditions [114-118].
Less well studied intracellular interactions, such as those of LDs with peroxisomes [33,119], pathogens [120-123], and the cytoskeleton [124,125], have also been supported by EM observations. Recent investigations into LD-LD contact sites support a model where a potentially phase-separated Cidec condensate facilitates the flux of neutral lipids along an internal pressure gradient from smaller to larger LDs [47,113,126,150]. Cryo-ET showed that at these contact sites, the apposed LD monolayers remain separate, maintaining an approximate 10 nm distance from each other. Importantly, large LDs were often indented by smaller LDs, directly informing on their respective relative internal pressures, the difference of which drives the lipid transfer process [47,126]. This exemplifies how EM can help bridge the gap between the biophysical, structural, and cell biology perspectives on LDs.
Molecular machinery in LDs
The life cycle of LDs is regulated by an array of lipid-metabolic enzymes and molecular machineries that control biogenesis and membrane contact sites. Cryo-EM of purified macromolecular complexes has been used in recent years to provide atomistic structural models that inform on the molecular mechanisms involved for many key players in LD biology. Cryo-EM single-particle analysis (SPA) entails capturing tens-to-hundreds of thousands of snapshots of such purified macromolecules frozen in many different orientations and computationally averaging them to provide a high-resolution 3D reconstruction of the target molecule (Fig. 2D,E). Compared to traditional structural biology methods such as X-ray crystallography, SPA can be especially useful in the structural characterization of challenging targets such as integral membrane proteins, as exemplified by the recent structures of key triglyceride and cholesterol ester synthesis enzymes, DGAT1 (diacylglycerol O-acyltransferase 1) and ACAT1 (acyl-coenzyme A:cholesterol acyltransferase 1) (Fig. 2D-F) [127-130]. The structures of these intricate multi-transmembrane ER-resident proteins strongly suggest that the newly synthesized neutral lipids are deposited within the leaflets of the ER, consistent with the lens model of LD biogenesis. Following neutral lipid synthesis within the ER leaflets, the lipodystrophy protein seipin is postulated to catalyze LD nucleation, a model suggested by the partial cryo-EM structures of seipin [131-134] and molecular dynamics simulations supported by cell biology experiments [135-138]. For a more detailed discussion on seipin and its structures, we point the reader to a thorough review included in this issue of FEBS Letters [139]. Altogether, these recent works start to build a molecular view of LD assembly.
Classical immuno-EM approaches have provided a detailed view on the localization of perilipins on the LD surface (Fig. 1C) [7,39,140], where they mainly function as a protective barrier against lipolysis [141]. Interestingly, some freeze-fracture immunolabeling EM also indicated that perilipins could reside in the LD core [92], although this intriguing finding has yet to be confirmed using other techniques. Perilipins are defined by a domain architecture composed of an N-terminal PAT domain (named after the historical names of perilipins 1-3: Perilipin1, Adipophilin, and TIP47), followed by variable stretches of amphipathic helix forming 11-mer repeats and a C-terminal 4-helix bundle [142]. There is currently limited direct structural data on perilipins, with only the perilipin 4-helix bundle solved by X-ray crystallography. Emerging evidence from in vitro, cell biology, and structural studies indicates that the unique biophysical properties of the LD phospholipid monolayer are key to perilipin recruitment to the LD surface [143-145], with their binding to the monolayer imposing order on the otherwise disordered domains [143,146]. Such structural dynamics may be the reason why perilipins have thus far been a challenging target for structural studies, as most methods typically require structurally defined and stable complexes to arrive at high-resolution models. Finally, a detailed molecular understanding of the LD breakdown machinery is also still largely lacking.
Future opportunities and challenges in connecting structures to LD cell biology
An overarching goal in modern structural biology is to resolve macromolecular structures in action and in their native functional environment. Recent technological advances in cryo-ET of FIB-milled cells have begun to realize this goal [41,48,49,50,51,52,54]. In combination with advanced structure determination algorithms being implemented in subtomogram averaging (STA) workflows [53], an analysis technique analogous to SPA, the structures of molecular machines such as ribosomes are starting to be resolved to atomistic detail and in a number of functional states directly within intact cells [147]. It will be exciting to start leveraging these tools in the context of LD biology. However, several bottlenecks remain. Despite significant recent improvements, cryo-FIB and cryo-ET are still relatively low-throughput techniques requiring highly specialized infrastructure and expertise. Cryo-FIB preparations are also not completely artifact-free and can introduce damage to the sample. Furthermore, successfully attaining high resolutions to visualize protein secondary structure elements in STA requires capturing thousands of instances of the macromolecule-of-interest, which is difficult for low-abundance proteins (e.g. seipin at ER-LD contacts) and is further complicated by the large structural flexibility expected for macromolecular complexes in action in a live cell. Nevertheless, even medium-to-low-resolution maps obtained in situ, especially when interpreted with integrative structural modeling, can lead to important biological insights [75,106,148]. Finally, a challenging aspect is how to identify specific macromolecules-of-interest in the cryo-tomograms, especially if the targets are small in size, and/or lack well-defined structural features, and are embedded in dense lipid environments, as is likely the case for many LD-related proteins. Current cryo-CLEM approaches do not provide the resolution and precision required to localize individual macromolecular complexes. Encouragingly, labeling of proteins-of-interest with probes designed to assemble into structurally well-defined particles, allowing for their identification in cryo-tomograms and localization with nanometer precision, shows promise in this regard [79,80]. For example, using bacterial-derived protein probes as localization markers enabled pinpointing the precise location of seipin at ER-LD contacts in cryo-FIB-milled human cells [80]. Considering the biophysically unique membrane environment of LDs, which may be challenging to mimic in vitro, it is conceivable that approaches such as in situ cryo-ET will be required, and may in the future be able to resolve the structures of LD-related proteins.
Conclusions and perspectives
Here, we have summarized the basic principles of EM techniques employed in LD research. The unique lipid-rich nature of LDs amongst cellular organelles necessitates careful consideration when choosing a suitable EM method to minimize specimen preparation artifacts but also to gain sufficient resolution and potentially volumetric information to answer the questions under study. No single EM technique alone will be suitable to address all questions, and most key EM studies in the LD field have employed a rich palette of complementary techniques.
The label-free nature of cryo-EM holds promise to provide unprecedented, possibly even hypothesis-fueling, snapshots into the intracellular structure and diverse interactions of LDs. On the other hand, atomistic-level insight into LD assembly and breakdown can be attained with meticulous characterization of the relevant purified or reconstituted macromolecular complexes. Continuous advances in these technologies, which allow the acquisition and processing of large datasets, are likely to transform cellular cryo-EM from a descriptive to a quantitative imaging tool, providing far more than just beautiful pictures. In combination with complementary methods, including advanced light microscopy, cell biological approaches and perturbations, molecular modeling, and dynamic simulations, these techniques will deliver a detailed mechanistic understanding of the different stages of the LD lifecycle in health and disease.
Table 1. An overview of the basic technical details and requirements of cellular EM imaging and preparation methods. References are provided as examples for studies related to LD research.
Floquet states of Valley-Polarized Metal with One-way Spin or Charge Transport in Zigzag Nanoribbons
Two-dimensional Floquet systems consisting of irradiated valley-polarized metal are investigated. For the corresponding static systems, we consider two graphene models of valley-polarized metal with either a staggered sublattice or uniform intrinsic spin-orbital coupling, whose Dirac point energies are different from the intrinsic Fermi level. If the frequency of irradiation is appropriately designed, the largest dynamical gap (first-order dynamical gap) opens around the intrinsic Fermi level. In the presence of the irradiation, two types of edge state appear at the zigzag edge of semi-infinite sheet with energy within the first-order dynamical gap: the Floquet edge states and the strongly localized edge states. In narrow zigzag nanoribbons, the Floquet edge states are gapped out by the finite size effect, and the strongly localized edge states remain gapless. As a result, the conducting channels of the nanoribbons consist of the strongly localized edge states. Under the first and second model, the strongly localized edge states carry one-way spin polarized and one-way charge current around the intrinsic Fermi level, respectively. Thus, the narrow zigzag nanoribbons of the first and second model have asymmetric spin and charge transmission rates, respectively. Quantum-transport calculations predict sizable pumped currents of charge and spin, which could be controlled by the Fermi level.
Two-dimensional Floquet systems consisting of irradiated valley-polarized metals are investigated. For the corresponding static systems, we consider two graphene models of valley-polarized metal, which have staggered sublattice or uniform intrinsic spin-orbital coupling. For the first and second models, the strongly localized edge states at the zigzag edge carry one-way spin-polarized and one-way charge current around the intrinsic Fermi level, respectively. In the presence of irradiation with appropriate frequency, first-order dynamical gaps open around the intrinsic Fermi level. Weakly localized Floquet edge states with energy within the first-order dynamical gaps appear at the zigzag edge of the semi-infinite sheet. In narrow zigzag nanoribbons, the Floquet edge states are gapped out by the finite size effect, while the strongly localized edge states remain gapless. Thus, the conductivity of the narrow zigzag nanoribbons is determined by the properties of the strongly localized edge states. Specifically, the narrow zigzag nanoribbons of the first and second models have one-way spin and charge conductivity, respectively. These features are confirmed by quantum transport calculations.
I. INTRODUCTION
Floquet theory describes the quantum states of systems with a time-periodically driven Hamiltonian, such as optically irradiated graphene [1,2] or time-periodically strained graphene [3,4]. Novel types of topological phases have been predicted to appear in dynamical systems of 2D materials of the graphene family [5-13]. One of the motivations to study Floquet states in 2D materials is to construct topologically protected edge states [14-19] for electronic and spintronic applications. Optically irradiated graphene, which has low energy excitations near the K and K′ Dirac points of the Brillouin zone of the honeycomb lattice, has Floquet gaps of all orders around the energy levels ε = NℏΩ/2 [20], with Ω being the optical frequency and N being an integer. The first-order gaps (the dynamical gaps that are induced by the first-order electron-photon coupling) are around ε = ±ℏΩ/2, and the second-order gaps are around ε = 0. At the edge of semi-infinite graphene, the topological edge states appear within the first-order and higher-order gaps. Because the first-order gap is larger than the higher-order gaps, we aim to engineer Floquet systems with the first-order gap around the intrinsic Fermi level (ε = 0). For graphene models with particle-hole symmetry, the Dirac points of both valleys are at ε = 0, so that the first-order gap of the corresponding Floquet systems is always around ε = ±ℏΩ/2. The naive idea is to move the energy level of one Dirac point to ε = ±ℏΩ/2, such that the first-order gap is moved to ε = 0. In order to keep the system neutral, the energy level of the other Dirac point is moved to ε = ∓ℏΩ/2. If the pair of Dirac points are in opposite valleys, the static system is a valley-polarized metal (VPM).
This article considers two graphene models of VPM. For the static systems, zigzag nanoribbons of the two models host strongly localized edge states (SLESs) at the zigzag edge, whose band structures connect the two valleys. The first model of VPM is graphene with staggered sublattice intrinsic spin-orbital coupling (SOC). The staggered sublattice intrinsic SOC is found in graphene with proximity coupling to transition metal dichalcogenides (TMDCs) [21-23]. The SLESs were recently proposed to be pseudohelical edge states (PHESs) [24]. The PHESs carry a one-way spin-polarized current around ε = 0. The second model of VPM is graphene with uniform intrinsic SOC as well as an appropriate staggered sublattice on-site potential and magnetic exchange field. The model is more conveniently realized in silicene-like 2D materials [25], because of the large SOC and tunable staggered sublattice on-site potential. The SLESs carry a one-way charge current around ε = 0. For both models, the bulk states in the zigzag nanoribbons carry spin or charge currents that offset the one-way currents of the SLESs. Thus, the conductivity of the zigzag nanoribbons is conventional, i.e., the spin conductivity is zero and the charge conductivities under forward and backward bias are the same.
In the presence of irradiation with appropriate frequency, Floquet systems based on the two models of VPM have a first-order gap around ε = 0 in the bulk band structures. The zigzag edge of the semi-infinite sheet hosts Floquet edge states, with energy within the first-order gap. The Floquet edge states are weakly localized at the zigzag edge. In narrow zigzag nanoribbons, the Floquet edge states are gapped out due to the finite size effect. On the other hand, the SLESs are negligibly influenced by the irradiation or the finite size effect. Thus, the SLESs become the dominating conductive states around ε = 0, which determine the conductivity of the nanoribbons. As a result, the irradiated zigzag nanoribbons of the first and second models exhibit one-way spin and charge conductivity, respectively.
The article is organized as follows. In section II, the tight binding model of the VPM on the honeycomb lattice with a time-dependent Hamiltonian is given, and the calculation methods for the Floquet band structure and conductivity are presented. In section III, the numerical results for the Floquet system based on the first model of VPM are presented. In section IV, the numerical results for the Floquet system based on the second model of VPM are presented. In section V, the conclusion is given.
II. MODEL HAMILTONIAN AND CALCULATION METHOD
The tight binding model on the honeycomb lattice is a general model that describes graphene as well as silicene and germanene. The effect of optical irradiation is described by time-dependent Peierls phases on the nearest and next-nearest neighbor hoppings. The time-dependent Hamiltonian is given as

H(t) = Σ_{⟨i,j⟩,s} γ(t) c†_{is} c_{js} + i Σ_{⟨⟨i,j⟩⟩,s,s′} ν_{ij} λ^i_I(t) c†_{is} (ŝ_z)_{ss′} c_{js′} + ∆ Σ_{i,s} ξ_i c†_{is} c_{is} + λ_M Σ_{i,s,s′} c†_{is} (ŝ_z)_{ss′} c_{is′},

where γ(t) = γ_0 f_{⟨i,j⟩}(t) is the time-dependent nearest neighbor hopping energy, with γ_0 being the hopping parameter and f_{⟨i,j⟩}(t) being the time-dependent function; c†_{is} (c_{is}) is the creation (annihilation) operator of an electron at the i-th lattice site with spin s; ŝ_z is the spin-z Pauli matrix; and ν_{ij} = ±1 for clockwise or counterclockwise next-nearest neighbor hopping. The summation with indices ⟨i,j⟩ (⟨⟨i,j⟩⟩) covers the nearest neighbor (next-nearest neighbor) lattice sites. The value of γ_0 is 2.8, 1.6, and 1.3 eV for graphene, silicene, and germanene, respectively. λ^i_I(t) is equal to λ^A_I f_{⟨⟨i,j⟩⟩}(t) and λ^B_I f_{⟨⟨i,j⟩⟩}(t) for the A and B sublattices, respectively, which also include the time-dependent function f_{⟨⟨i,j⟩⟩}(t). ∆ is the strength of the staggered sublattice on-site potential, and ξ_i = ±1 for the A and B sublattices. In graphene, ∆ could be induced by an h-BN [26,27] or SiC [28] substrate; in silicene and germanene, ∆ is induced by a vertical static electric field E_z. Because of the buckled structure of silicene and germanene, the A and B sublattice planes are separated by 2l, so that ∆ = E_z l. The exchange field λ_M is induced by proximity with a ferromagnetic insulator. ∆ and λ_M are not time-dependent. For the corresponding static systems, the first model of VPM has parameters λ^A_I = −λ^B_I = λ_I, ∆ = 0, and λ_M = 0; the second model of VPM has parameters λ^A_I = λ^B_I = λ_I and ∆ = λ_M = 3√3 λ_I. In addition to graphene-like 2D materials, the two models could be experimentally realized in cold atomic systems [29-31].
In the presence of a normally incident optical field with the in-plane electric field being E = x̂ E_x sin(Ωt) + ŷ E_y sin(Ωt − ϕ), the time-dependent function of the nearest neighbor hopping terms is given as

f_{⟨i,j⟩}(t) = exp[i (2π/Φ_0) A(t) · r_{ij}],

where A(t) is the vector potential of the optical field, Φ_0 is the magnetic flux quantum, and r_{ij} = r_j − r_i, with r_i being the location of the i-th lattice site. The time-dependent function of the intrinsic SOC, f_{⟨⟨i,j⟩⟩}(t), has the same form. In this article, we consider only the circularly polarized optical field with E_x = E_y = E_0 and ϕ = π/2. According to Floquet theory, the Floquet state is a time-periodic function written as |Ψ_α(t)⟩ = e^{−iε_α t/ℏ} Σ_{m=−∞}^{+∞} |u^α_m⟩ e^{imΩt}, with ε_α being the quasi-energy level of the α-th eigenstate and |u^α_m⟩ being the corresponding eigenstate in the m-th Floquet replica. The Floquet states and the corresponding quasi-energy levels are the solutions of the equation

H_F Σ_m |u^α_m⟩ e^{imΩt} = ε_α Σ_m |u^α_m⟩ e^{imΩt},

where H_F = H − iℏ ∂/∂t is the Floquet Hamiltonian. The time-dependent factor in the Hamiltonian can be expanded in the set of time-periodic functions e^{imΩt} as f_{⟨i,j⟩}(t) = Σ_m i^m f^m_{⟨i,j⟩} e^{imΩt}, where f^m_{⟨i,j⟩} = J_m(x_{ij}) and J_m(x) is the m-th order Bessel function of the first kind of argument x, with x_{ij} the amplitude of the Peierls phase on the corresponding bond. A similar expansion is applied to f_{⟨⟨i,j⟩⟩}(t). In the direct product space (Sambe space), R ⊗ T, with R being the Hilbert space and T being the space of time-periodic functions, the set of functions {|u^α_m⟩} forms the time-independent basis of the Floquet states. In this space, the Floquet Hamiltonian can be expressed as a time-independent block matrix, H^{(m_1,m_2)}, with m_1 and m_2 being the indices of the replicas. The diagonal blocks H^{(m,m)} include three parts: the nearest and next-nearest neighbor hopping terms, whose hopping coefficients are renormalized by the factors f^0_{⟨i,j⟩} and f^0_{⟨⟨i,j⟩⟩}, respectively; the staggered sublattice on-site potential and magnetic exchange field; and the diagonal matrix mℏΩ I. The non-diagonal blocks include the nearest and next-nearest neighbor hopping terms, whose hopping coefficients are renormalized by the factors i^{m_2−m_1} f^{m_2−m_1}_{⟨i,j⟩} and i^{m_2−m_1} f^{m_2−m_1}_{⟨⟨i,j⟩⟩}, respectively. The quasi-energy band structures of the bulk or nanoribbon of the model can be obtained by diagonalization of the Floquet Hamiltonian with the appropriate Bloch periodic boundary condition. For the eigenstate of the α-th quasi-energy, the weight of the static component (the m = 0 replica) is given as ⟨u^α_0|u^α_0⟩. In the numerical calculation, the Floquet index m is truncated at a maximum value with m ∈ [−m_max, m_max]. In general, a calculation with larger m_max gives a more accurate result. If the investigation focuses on the first-order gap of the quasi-energy dispersion in the m = 0 replica, m_max = 2 gives sufficiently accurate results, because all single-photon transitions to the dynamical gap are considered. Similarly, if the investigation focuses on the P-th order gap, m_max = 2P is required to have sufficient accuracy.
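As a minimal numerical sketch of the Sambe-space construction just described, the snippet below assembles and diagonalizes a truncated Floquet Hamiltonian for a generic driven Bloch Hamiltonian at a fixed wave vector. The Fourier blocks, drive frequency, and truncation are placeholder values (with ℏ = 1), not the honeycomb model of the text, whose blocks would carry the Bessel-function factors discussed above.

```python
import numpy as np

# Minimal sketch of a truncated Floquet (Sambe-space) Hamiltonian for a
# generic Bloch Hamiltonian H(k, t) with period 2*pi/Omega.  The honeycomb
# model of the text would enter through the Fourier blocks h_blocks[m];
# here the blocks are arbitrary small matrices for illustration.

def floquet_hamiltonian(h_blocks, omega, m_max):
    """h_blocks[m] is the m-th Fourier component H^(m) (dim x dim array);
    omega is the drive frequency (hbar = 1)."""
    dim = h_blocks[0].shape[0]
    n_rep = 2 * m_max + 1
    hf = np.zeros((n_rep * dim, n_rep * dim), dtype=complex)
    for i, m1 in enumerate(range(-m_max, m_max + 1)):
        for j, m2 in enumerate(range(-m_max, m_max + 1)):
            block = h_blocks.get(m1 - m2, np.zeros((dim, dim)))
            hf[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = block
        # replica energy shift m*Omega on the diagonal
        hf[i*dim:(i+1)*dim, i*dim:(i+1)*dim] += m1 * omega * np.eye(dim)
    return hf

# Illustrative example: a driven two-level "Dirac point" coupled to its
# neighboring replicas only through the m = +/-1 Fourier components.
h0 = np.array([[0.0, 0.1], [0.1, 0.0]])        # static block at fixed k
h1 = 0.05 * np.array([[0, 1], [0, 0]])          # H^(+1)
h_blocks = {0: h0, 1: h1, -1: h1.conj().T}      # H^(-1) = (H^(+1))^dagger
quasi_energies = np.linalg.eigvalsh(floquet_hamiltonian(h_blocks, omega=1.0, m_max=2))
print(np.round(quasi_energies, 3))
```

Truncating at m_max = 2 mirrors the rule stated above for resolving the first-order gap; increasing m_max simply enlarges the block matrix without changing its structure.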
The density of states of the semi-infinite sheet with a zigzag edge is calculated to visualize the dispersion of the SLESs and Floquet edge states. Similar to the Floquet Hamiltonian, the Floquet Green's function can be expressed as a time-independent block matrix in the Sambe space, G_F(ε) = (ε − H_F)^{−1}. The Floquet Green's function of the primitive cells at the left or right zigzag edge can be obtained by applying the recursive method [32-34] to the Floquet Hamiltonian. Because the edge states are not completely localized at the first primitive cell at the zigzag edge, a backward recursive process is performed to calculate the Floquet Green's function of the primitive cells near the zigzag edge. Numerical results show that the SLESs are strongly localized at the first primitive cell, while the Floquet edge states are weakly localized. In our calculation, the density of states for each zigzag edge is the summation of the local density of states of the fifty primitive cells nearest to the corresponding zigzag edge, given as ρ_edge(ε) = −(1/π) Σ_{n=1}^{50} Im Tr[G^{(0,0)}_{F,nn}(ε)].

In reality, the transport is measured for a zigzag nanoribbon of finite length that is connected to two leads. The structure of the transport simulation is shown in Fig. 1(a). In our calculation, we construct the scattering region as the zigzag nanoribbon on the x-y plane with width 2.13 nm and longitudinal length (along the y axis) 73.79 nm. The optical irradiation is restricted to the middle part of the scattering region. The amplitude of the optical field, E_0, is assumed to be uniform along the x axis, and a Gaussian function along the y axis, as shown in Fig. 1(b). The leads are not irradiated by the optical field, so that both leads are static systems, i.e., VPM. At the buffering unit cells (three unit cells in our calculation) between the leads and the scattering region, E_0 is slowly turned on from zero to the small value at the tails of the Gaussian function.

FIG. 1: (a) The structure of the transport calculation. The leads, which are not irradiated, are within the dashed rectangle. Overlaid is the amplitude of the optical field, E_0. (b) E_0 versus the y coordinate, which is a Gaussian function.
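The recursive construction of the edge Green's function referred to above can be illustrated with a common decimation variant (in the spirit of refs. [32-34]) applied to a toy semi-infinite stack of identical principal layers. In the actual calculation the on-site and coupling blocks would be the Floquet Hamiltonian blocks in Sambe space at fixed k_y; here they are placeholder matrices.

```python
import numpy as np

# Minimal sketch of an iterative (decimation-style) surface Green's function
# for a semi-infinite stack of identical principal layers with on-site block
# h00 and inter-layer coupling h01.  The blocks below are placeholders.

def surface_greens_function(energy, h00, h01, eta=1e-6, max_iter=100):
    dim = h00.shape[0]
    z = (energy + 1j * eta) * np.eye(dim)
    eps_s, eps = h00.copy(), h00.copy()
    alpha, beta = h01.copy(), h01.conj().T.copy()
    for _ in range(max_iter):
        g = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha = alpha @ g @ alpha
        beta = beta @ g @ beta
        if np.max(np.abs(alpha)) < 1e-14:   # effective coupling decimated away
            break
    return np.linalg.inv(z - eps_s)

# Surface local density of states rho(E) = -(1/pi) Im Tr G_surface(E)
# for a semi-infinite 1D chain with unit hopping (bandwidth 4), as a sanity check.
h00, h01 = np.array([[0.0]]), np.array([[1.0]])
for e in (0.0, 1.0, 3.0):
    g = surface_greens_function(e, h00, h01)
    print(e, round(-np.trace(g).imag / np.pi, 3))
```

The same routine, fed with larger Floquet blocks, yields the edge-resolved quantities whose sums over the primitive cells near the edge give the plotted density of states.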
The Floquet Green's function of the zigzag nanoribbon in the scattering region is calculated by the recursive algorithm [32-34]. For static systems, the transmission coefficient at energy level ε from lead L to lead R is determined by the Landauer-Buttiker formula, T_LR(ε) = Tr[Γ_L G_LR Γ_R G†_LR], with Γ_L(R) being the line width matrix of the L (R) lead and G_LR being the Green's function between the lattice sites at the L and R leads. For the Floquet systems, the transmission accompanied by the m-photon process (absorption or emission) has the coefficient T^m_LR(ε) = Tr[Γ_L G^{(0,m)}_LR Γ_R (G^{(0,m)}_LR)†]. The additional superscript m is the index of the Floquet replicas, and G^{(0,m)}_LR is the m-th row, 0-th column block of the Floquet Green's function. The conductance with the Fermi level being ε under infinitesimal bias from lead L to R is the summation of the transmissions over all photon processes [35-38], i.e., G = (2e²/h) Σ_m T^m_LR(ε). With moderate optical intensity, the amplitude of T^m_LR decays quickly as |m| increases. The transmission of the m = 0 replica, T^0_LR, contributes the major part of the conductance.
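Once the line-width matrices and the (0, m) Green's function blocks are known, the photon-resolved Landauer sum above reduces to a trace and a summation over sidebands. The sketch below illustrates only this final step; the matrices and the helper name are illustrative placeholders, since in practice they come from the full recursive Floquet calculation.

```python
import numpy as np

# Minimal sketch of the photon-resolved Landauer sum described in the text.
# gamma_L, gamma_R stand for the lead line-width matrices and g_blocks[m]
# for the (0, m) block of the Floquet Green's function between the leads.

def floquet_conductance(gamma_L, gamma_R, g_blocks):
    """Conductance in units of 2e^2/h: sum of transmissions over photon sidebands."""
    total = 0.0
    for g0m in g_blocks.values():
        t_m = np.trace(gamma_L @ g0m @ gamma_R @ g0m.conj().T).real
        total += t_m
    return total

# Toy numbers only: one conducting channel in which the elastic (m = 0)
# process dominates and the one-photon sidebands contribute weakly.
gamma_L = gamma_R = np.array([[1.0]])
g_blocks = {0: np.array([[0.95]]), 1: np.array([[0.1j]]), -1: np.array([[0.1j]])}
print(floquet_conductance(gamma_L, gamma_R, g_blocks))  # 0.95**2 + 2*0.1**2 = 0.9225
```

The rapid decay of T^m with |m| noted above means that, in practice, only a few sidebands need to be retained in the sum.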
III. GRAPHENE WITH STAGGERED SUBLATTICE INTRINSIC SOC
The first model of VPM is given by the Hamiltonian (1) with parameters λ^A_I = −λ^B_I = λ_I, ∆ = 0, and λ_M = 0. In realistic heterostructures of graphene on TMDCs, the model should additionally include a staggered sublattice on-site potential and Rashba SOC. The heterostructure is then an insulator instead of a VPM. Engineering of the heterostructure, such as doping or additional proximity to another substrate, could offset the staggered sublattice on-site potential and Rashba SOC. We focus on the conceptual VPM model without Rashba SOC and with zero or small staggered sublattice on-site potential.
A. Floquet states of bulk
The appropriate optical frequency depends on the model parameters as well as the optical parameters. If λ_I is much smaller than γ_0, the low energy excitations near the K and K′ points of the Brillouin zone can be described by the Dirac Fermion model with Hamiltonian

H_{τ,s} = ℏ v_F (τ k_x σ̂_x + k_y σ̂_y) + 3√3 λ_I τ s,

where v_F is the Fermi velocity, σ̂_{x,y} are the sublattice Pauli matrices, and τ = ±1 stands for the K and K′ valleys. Because λ^A_I = −λ^B_I = λ_I, the intrinsic SOC terms become a constant potential 3√3 λ_I τ s. The model has particle-hole-valley symmetry, i.e., the static band structure is symmetric under the simultaneous operation of particle-hole and K-K′ valley exchanges. For the static systems, the energy levels of the Dirac points are 3√3 λ_I s τ; all four Dirac Fermion models are gapless. The intrinsic Fermi level cuts through all Dirac cones at finite energy, so that the model is a VPM. Although the particle-hole symmetry is broken, the band structure of each Dirac cone is symmetric about the energy level 3√3 λ_I τ s. In the Floquet solution with E_0 = 0, the band structures of the m replicas are obtained by adding mℏΩ to the static band structures. The crossings between the band structures of all replicas are at energy ε = 3√3 λ_I τ s + ℏΩ/2. Therefore, the appropriate optical frequency would be ℏΩ = 2 × 3√3 λ_I. If λ_I is sizable compared to γ_0, the quantum states at ε = 0 cannot be considered as low energy excitations. The static band structures deviate from the linear dispersion of the Dirac cone, such that the band structures are no longer symmetric about the energy level 3√3 λ_I τ s. Thus, in the Floquet solution with E_0 = 0, the crossings between the band structures of all replicas are no longer aligned at ε = 3√3 λ_I τ s + ℏΩ/2. The optical frequency needs to be tuned around ℏΩ = 2 × 3√3 λ_I, so that the crossings between the bands of the m = 0 and m = ±1 replicas align at ε = 0. With E_0 ≠ 0, the first-order gaps of the quasi-energy band structures open around ε = 0. However, the energy ranges of the first-order gaps in the two valleys are different from each other, so that the global first-order gap is smaller than the first-order gap in each valley. Because the irradiation changes the hopping parameters in the diagonal blocks of the Floquet Hamiltonian by the factors f^0_{⟨i,j⟩} and f^0_{⟨⟨i,j⟩⟩}, the energy levels of the Dirac points become dependent on E_0. As a result, for a given E_0, the frequency needs to be further tuned to maximize the global first-order gap.
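As a worked order-of-magnitude example of this frequency-matching condition, taking the graphene hopping γ_0 = 2.8 eV quoted earlier and λ_I = 0.06γ_0 (the value used for Fig. 2 below):

```latex
% Worked estimate of the frequency-matching condition:
3\sqrt{3}\,\lambda_I = 3\sqrt{3}\times 0.06\times 2.8\ \mathrm{eV} \approx 0.87\ \mathrm{eV},
\qquad
\hbar\Omega \approx 2\times 3\sqrt{3}\,\lambda_I \approx 1.75\ \mathrm{eV}.
```

The fine-tuning to ℏΩ = 3√3 λ_I × 1.95 quoted below lowers this estimate by a few percent.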
The quasi-energy band structure of the spin-up electrons in the bulk with parameters λ_I = 0.06γ_0 and E_0 = 0.3 V/nm is plotted in Fig. 2(a). The band structure of the spin-down electrons is obtained by mirroring the band structure of the spin-up electrons about the M point. The optical frequency is tuned to ℏΩ = 3√3 λ_I × 1.95, so that the first-order gaps in the K and K′ valleys are in the same energy range. Around the energy ε = 0, multiple side bands with small weight in the m = 0 replica (⟨u^α_0|u^α_0⟩ ≪ 1) are gapless. As a result, the Floquet system is not an insulator, and its topological properties are not well defined. In the additional presence of the staggered sublattice on-site potential, local static gaps of 2∆ at the two Dirac points are opened. Assuming ∆ = −0.15γ_0, the quasi-energy band structure is plotted in Fig. 2(b). The first-order gaps in the two valleys are both around ε = 0. The gap size at the K (K′) valley decreases (increases).
B. Semi-infinite zigzag edge
For a semi-infinite zigzag edge of the irradiated VPM, two types of edge states appear: the SLESs and the Floquet edge states. The Floquet edge states only appear in the Floquet systems, with energy within the dynamical Floquet gaps. The SLESs appear in both the static and Floquet systems. With appropriate model parameters, the SLESs are negligibly impacted by the irradiation. For the static system of pristine graphene with λ_I = 0, the bands of the SLESs are the zero-energy flat bands of the zigzag edge, which connect the two valleys. The SLESs are strongly localized at the last atom of one zigzag edge and weakly distributed among the other atoms in the same sublattice. With λ_I ≠ 0, the bands of the SLESs become nearly linearly dispersive with nonzero slope. For the graphene with staggered sublattice intrinsic SOC, the SLESs are PHESs. The PHESs with the same spin at the left and right zigzag edges travel along the same direction. By contrast, in the quantum spin Hall (QSH) model with uniform intrinsic SOC [39], the SLESs (referred to as helical edge states) with the same spin at the left and right zigzag edges travel along opposite directions. Because the staggered sublattice intrinsic SOC does not induce band inversion, the PHESs only appear at the zigzag edge, and do not appear at the armchair edge. The bulk band structures along the K-M-K′ line in the Brillouin zone (Fig. 2) imply the band edges of the bulk states in the zigzag semi-infinite sheet or zigzag nanoribbon. One can plot the bands of the PHESs in the bulk band structure (the thick blue lines in Fig. 2), and then estimate the Floquet coupling strength between the bulk states and the PHESs. If the difference between the energy levels of the PHESs and the bulk states is more than ℏΩ, the PHESs and the bulk states are negligibly coupled by higher order photon transitions. The PHESs that satisfy this condition are in the sections of the bands with solid blue lines in Fig. 2. These sections of the bands would remain gapless in the Floquet systems. In contrast, the sections of the bands with dashed blue lines would be split by multiple Floquet gaps. For the model with ∆ = 0 in Fig. 2(a), the bands of the PHESs around ε = 0 would remain gapless, so that the PHESs are the dominating conductive states. In contrast, for the model with ∆ = 0.15γ_0 in Fig. 2(b), the bands of the PHESs around ε = 0 would be split, so that the PHESs are not the dominating conductive states. Thus, a small or vanishing staggered sublattice on-site potential is preferred. In the rest of this section, ∆ = 0 is assumed.
For the model with λ_I = 0.02γ_0, ℏΩ = 3√3 λ_I × 1.99, and E_0 = 0.1 V/nm, the densities of states of the spin-up (spin-down) electrons at the left and right zigzag edges are plotted in Fig. 3(a) and (b) ((c) and (d)), respectively. The local density of states of the Floquet edge states slowly decays as the spatial distance from the zigzag edge increases. At the fiftieth primitive cell away from the zigzag edge, the local density of states decays to 0.1%, so that the summation in Eq. (6) covers the fifty primitive cells nearest to the zigzag edge. The figures show the distribution of the bulk states, Floquet edge states, PHESs, and side bands in the (ε, k_y) space. The numerical results confirm that the PHESs have linearly dispersive bands with wavenumber between the K and K′ points; the PHESs with spin up (down) at both zigzag edges travel along the forward (backward) direction. Thus, the PHESs at the two zigzag edges carry spin currents along the same direction. The first-order gaps of the bulk states around ε = 0 are about 0.1 eV. Within the first-order gap, the Floquet helical edge states (FHESs) appear. For the same zigzag edge and the same spin, the FHESs in the K and K′ valleys travel along the same direction. The FHESs at the same zigzag edge with different spin travel along opposite directions. Thus, the FHESs at the two zigzag edges carry spin currents along opposite directions. The side bands have a small density of states around ε = 0, which also contributes to the conductivity along the zigzag edge.
C. Spin-polarized transport of zigzag nanoribbon
In the narrow zigzag nanoribbons, the appropriate optical frequency depends on the width of the nanoribbons. The band structures of the bulk states and the FHESs significantly deviate from those at the zigzag edge of the semi-infinite sheet (Fig. 3) due to the finite-size effect. The finite-size effect mixes the bulk states and the FHESs at the two zigzag edges, forming the nanoribbon mixed states. The optical frequency needs to be retuned so that the first-order gaps of the nanoribbon mixed states in the two band valleys lie in the same energy range. On the other hand, the band structure of the PHESs with wavenumbers between the two band valleys is hardly impacted by the finite-size effect, owing to their strong spatial localization at the zigzag edge. We study the zigzag nanoribbon with a width of 2.13 nm as an example. For the model with λ_I = 0.02γ_0 and E_0 = 0.1 V/nm, the optical frequency is tuned to ℏΩ = 3√3 λ_I × 1.8. The quasi-energy band structures of the spin-up and spin-down electrons are plotted in Figs. 4(a) and (b), respectively. Within the energy range of the first-order gaps, the side bands have small weight on the m = 0 replica, so the PHESs become the dominant conductive bands. The quantized conductance of the spin-up (spin-down) electron carried by the PHESs under forward (backward) bias is 2 × 2e²/h; that under backward (forward) bias is 0. As a result, the zigzag nanoribbon has a one-way spin-polarized conductivity. Because of the presence of the conductive side bands, the spin conductance is slightly smaller than 2 × 2e²/h.
In order to confirm the one-way spin-polarized conductivity of the irradiated zigzag nanoribbon, the conductivities of the spin-up and spin-down electrons, G_{+1} and G_{−1}, under forward infinitesimal bias are plotted versus the Fermi level in Fig. 4(c). The length of the zigzag nanoribbon is finite, and the irradiated region is restricted, as shown in Fig. 1. The spin conductivity, defined as the difference between the conductivities of the spin-up and spin-down electrons, P_G = G_{+1} − G_{−1}, is plotted in Fig. 4(d). With the backward bias, G_{+1} and G_{−1} in Fig. 4(c) are exchanged, so that the spin conductivity in Fig. 4(d) flips sign. Within the first-order gap, P_G is peaked near 2(2e²/h). Thus, a forward or backward infinitesimal bias with the Fermi level around ε = 0 excites spin-up or spin-down polarized conductivity, respectively. The spin conductivity at ε = 0 is not exactly quantized because the side bands contribute a small conductivity as well. In the absence of the optical irradiation, both G_{+1} and G_{−1} are exactly 2(2e²/h) at ε = 0, and the spin conductivity is zero. Therefore, the one-way spin conductivity is controlled by the presence of the optical irradiation.
The optical parameters for an experimental implementation of the Floquet system are discussed here. For the bulk or the semi-infinite sheet, we assume that the graphene is irradiated by a normally incident Gaussian beam. If the width of the beam waist is larger than the wavelength, the optical field in the middle of the Gaussian beam can be approximated as a plane wave. We denote w_0 = w_1 λ as the width of the beam waist, with λ = 2πc/Ω the wavelength and w_1 ≥ 1. The power of the Gaussian beam is P_0 = π|E_0|² w_0²/(4Z_0), with Z_0 = √(µ/ε) the impedance of the background medium. The first-order gap can be estimated by first-order perturbation theory as ηℏΩ = e v_F E_0/Ω [15], with η < 0.5 and v_F ≈ c/330. The power of the Gaussian beam then follows from combining these relations. For the model with the parameters in Figs. 3 and 4, assuming w_1 = 1 and η = 0.2, we have P_0 ≈ 12 W. For the system in Fig. 4, the optical field pattern has a subwavelength size. Plasmonic devices, such as a metallic tip or a plasmon cavity [41], could focus the Gaussian beam into a subwavelength field pattern. The local electric field is enhanced by a factor F. Thus, the required power of the laser beam is reduced by a factor of √F.
IV. GRAPHENE WITH UNIFORM INTRINSIC SOC
The second model of VPM is given by the Hamiltonian (1) with parameters λ_I^A = λ_I^B = λ_I and ∆ = λ_M = 3√3 λ_I. This model is more conveniently realized in 2D staggered semiconductors such as silicene, germanene, stanene, and plumbene [40].
A. Floquet states of bulk
The appropriate optical frequency depends on the model parameters, but not on the optical parameters. The low-energy excitations near the K and K′ points of the Brillouin zone can be described by the Dirac fermion model. The model is particle-hole-valley-spin symmetric, i.e., the static band structure is invariant under the simultaneous operation of particle-hole conjugation, valley exchange, and spin flip. Because the Floquet systems are insulators, the topological property is well defined. One can define the Chern number C of the Floquet systems [20,42,43] as C = (1/2π) Σ_v ∫_BZ d²k B_v, where B_v is the Berry curvature of the v-th quasi-energy band, the integral covers the whole Brillouin zone, and the sum over v includes only the 2m_max + 1 − s − floor[6γ_0/(ℏΩ)] bands below the intrinsic Fermi level. Once these conditions are satisfied, the Chern number is independent of the truncation. The numerical results show that the Chern number for each spin is zero, so that the Floquet insulators are topologically trivial.
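Since the Chern numbers of the quasi-energy bands are evaluated numerically, a generic sketch of how such a Berry-curvature integral can be discretized may be helpful. The Python snippet below implements the standard link-variable (Fukui-Hatsugai) method for the lowest band of an arbitrary two-band Bloch Hamiltonian; it is illustrative only, uses a textbook test model rather than the truncated Floquet Hamiltonian of the present work, and its overall sign depends on orientation conventions:

```python
import numpy as np

def chern_number(h_k, nk=60):
    """Chern number of the lowest band of a Bloch Hamiltonian h_k(kx, ky),
    computed with the lattice (link-variable) method on an nk x nk grid of
    the Brillouin zone [0, 2*pi)^2."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)   # lowest-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vec = np.linalg.eigh(h_k(kx, ky))
            u[i, j] = vec[:, 0]
    c = 0.0
    for i in range(nk):
        for j in range(nk):
            u1, u2 = u[i, j], u[(i + 1) % nk, j]
            u3, u4 = u[(i + 1) % nk, (j + 1) % nk], u[i, (j + 1) % nk]
            # Berry flux through one plaquette from the four link variables
            c += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return c / (2.0 * np.pi)

# Hypothetical two-band test model (Qi-Wu-Zhang); its lower band carries
# |C| = 1 for 0 < m < 2, so the printed value should be close to +/-1.
def qwz(kx, ky, m=1.0):
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

print(chern_number(qwz))
```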
B. Semi-infinite zigzag edge
For a semi-infinite zigzag edge of the irradiated VPM, the SLESs appear in both the static and the Floquet systems. Similar to the analysis in Section III B and Fig. 2, the bands of the SLESs are plotted on the bulk band structure in Fig. 5 as thick blue lines. For each spin, only one of the two SLESs has a band that crosses the energy ε = 0. For the model parameters in Fig. 5, the bands of the SLESs around ε = 0 remain gapless in the Floquet systems. This feature remains valid for realistic systems with a smaller strength of the intrinsic SOC.
C. One-way Charge Transport of Zigzag Nanoribbon
In the narrow zigzag nanoribbons, the appropriate optical frequency (ℏΩ = 2λ_M) is not changed by the finite-size effect, because the static band structure of spin s is symmetric about the energy level λ_M s. For the zigzag nanoribbon with a width of 2.13 nm, λ_I = 0.02γ_0, E_0 = 0.1 V/nm and ℏΩ = 2λ_M, the quasi-energy band structures of the spin-up and spin-down electrons are plotted in Figs. 7(a) and (b), respectively. The dominant conductive states around ε = 0 are the SLESs, which carry a one-way charge current. As a result, with the Fermi level around ε = 0, the charge conductivity under forward or backward bias is expected to be zero or 2 × 2e²/h, respectively. The large band gaps of the spin-up (spin-down) band structure around the energy ℏΩ/2 (−ℏΩ/2) are not due to the optical irradiation, but due to the finite-size effect. The optical irradiation induces the Floquet side bands within these gaps.
The conductivity of the zigzag nanoribbon with finite length (300 unit cells along the longitudinal direction) and a restricted irradiated region in the scattering region is calculated. The charge and spin conductivities under infinitesimal bias are plotted versus the Fermi level in Figs. 7(c) and (d), respectively. The conductivities under forward and backward bias are plotted as solid (blue) and dashed (red) lines, respectively. At ε = 0, the only conductive states are the SLESs that travel along the backward direction, so the conductivity under forward bias should be zero. However, the calculated conductivity is not exactly zero at ε = 0: the small length of the irradiated region allows significant tunneling between the leads. For a scattering region with 900 unit cells, the charge conductivity is reduced to 10⁻⁶ × 2e²/h. Under backward bias, the SLESs contribute a quantized conductivity around ε = 0. Nonzero spin conductivity appears at energies away from ε = 0. The plateaus of quantized spin conductivity around ±ℏΩ/2 are due to the finite-size effect, not to the optical irradiation. The dips in these plateaus are due to the side bands induced by the irradiation. The irradiation induces other dips and peaks in the charge and spin conductivities, including the two peaks of spin conductivity around ε = 0 near the band edges of the first-order gap. All of these dips and peaks are due to the presence of the Floquet side bands with small weight on the m = 0 replica.
V. CONCLUSION
In conclusion, the Floquet systems of optically irradiated VPMs consisting of 2D graphene-like materials are investigated. Two graphene models of VPM are considered. For the corresponding static systems, the SLESs of the first (second) model carry a one-way spin-polarized (one-way charge) current. By choosing the appropriate optical frequency and strength, the Floquet systems have a first-order dynamical gap around the intrinsic Fermi level, which gaps out the conductive bulk states in the bulk, the semi-infinite sheet, or the nanoribbon. At the zigzag edge of the semi-infinite sheet, the conductive states are the SLESs, the Floquet edge states, and the side bands. In narrow zigzag nanoribbons, the conductive states are the SLESs and the side bands. The conductivities of the side bands are negligible, so the conductivities of the narrow zigzag nanoribbons are essentially determined by the properties of the SLESs. As a result, one-way spin or charge conductivities are optically induced.
Implementing and testing a U-space system: lessons learnt
Within the framework of the European Union’s Horizon 2020 research and innovation program, one of the main goals of the Labyrinth project, which ended on May 2023, was to develop and test the Conflict Management services of a U-space-based Unmanned Traffic Management (UTM) system. The U-space ConOps provides a high-level description of the architecture, requirements and functionalities of these systems, but the implementer has a certain degree of freedom in aspects like the techniques used or some policies and procedures. The current document describes some of those implementation decisions. The prototype included, at least in a basic version, part of the services defined by the ConOps, namely e-identification, Tracking, Geo-awareness, Drone Aeronautical Information Management, Geo-fence Provision, Operation Plan Preparation/Optimization, Operation Plan Processing, Strategic and Tactical Conflict Resolution, Emergency Management, Monitoring, Traffic Information and Legal Recording. Besides, a web app interface was developed for the operator/pilot. The system was tested in simulations and real visual line of sight (VLOS) and beyond VLOS (BVLOS) flights, with both vertical take-off and landing (VTOL) and fixed-wing platforms, while assisting final users interested in incorporating drones to support their daily tasks. The three-year experience developing and testing the environment provided many lessons at different levels: functionalities, compatibility, procedures, information, usability, ground control station (GCS) integration and aircrew roles during the mission.
Introduction
A safe use of drones sharing the airspace, as with manned aviation, requires management of the separation. This separation cannot always rely on the skills of the pilots, since many of the operations are expected to be BVLOS. Besides, the use of human air traffic controllers does not fit well with the particularities of small drones and the city environment. A better option is an automated UTM system. To harmonize such UTMs in Europe, under the European Union's (EU) Horizon 2020 research and innovation program, the CORUS project developed and published the U-space Concept of Operations (ConOps) [1], which would be regulated and adopted in April 2021 as the EU's UTM ConOps, entering into force in January 2023. U-space does not only address drone traffic management, but also other relevant aspects like safety, privacy, security, social acceptance, or the co-existence of all Very Low-Level airspace operations, and its architecture is aimed at facilitating the proliferation of an entire industry around the offer of services provided by and for unmanned aerial vehicles (UAV). This is permitted by its modular architectural design, which can be seen as a system of systems. Part of these systems are called services and represent specialized functionalities. Services can depend on or make use of other services. For a single U-space, these can be provided by different companies or institutions (hence the need for harmonization towards a compatible market) and do not necessarily have to run on the same infrastructure. In the Labyrinth project, two of the key U-space services were developed and tested: Strategic and Tactical Conflict Resolution. These help to provide the drone operator with a deconflicted trajectory based on their flight intentions, needs and other existing constraints (e.g., geo-fences, other traffic, obstacles). The Robotics Lab research group at the University Carlos III of Madrid (UC3M) developed the algorithms [2] to find a feasible path, which was returned as a 4D trajectory. One of the main goals of the project was to test these services while supporting the activities of final users willing to integrate drones into their tasks to increase efficiency. The use cases included: bird shepherding, runway inspection in airports, sanitary supplies delivery, evaluation of an emergency area, support for the evacuation of a mass of people, monitoring of loading activities and surveillance in maritime ports, infraction detection of car drivers and speed measurement of ground traffic.
DLR was responsible for implementing the minimum set of services, at least in a basic version, necessary to test the deconfliction algorithms in real scenarios, namely e-identification, Tracking, Geo-awareness, Drone Aeronautical Information Management, Geo-fence Provision, Operation Plan Preparation/Optimisation, Operation Plan Processing, Strategic Conflict Resolution, Tactical Conflict Resolution, Emergency Management, Monitoring, Traffic Information, and Legal Recording. This document collects experiences, needs identified and lessons learnt after three years of implementation and testing of Labyrinth's U-space system, which was based on the first version of the ConOps.
The exposition has been divided into the different pieces of the environment, starting with the interfaces with the system, followed by the description of the services, and ending with the developments of the operators to integrate their platforms in the environment. These two Spanish operators were the Instituto Nacional de Técnica Aeroespacial (INTA) and Arquimea. The former provided and operated the multi-rotors in the tests, and the latter did the same with their fixed-wing aircraft.
Web app
This interface allows the operator to manage the flights, and the pilots to visualize the information and instructions from the U-space and to send requests to it. These communications can also be done using the API provided, and operators can implement the display of dialogues and information in their GCS. Fig. 1 shows this option, chosen by Arquimea. For this, operators must have access to the source code of the GCS software, which is not always possible, and this integration can be expensive and take time. Besides, embedding more elements could result in a cluttered screen. Instead, if the web app is used, it requires a dedicated screen, something to which the flight crew was reluctant. However, a benefit is that one member of the flight crew can take care of the U-space communications while the pilot flies manually or monitors the flight status, reducing the cognitive workload of the pilot, which is important in the case of flying swarms. This was the option chosen by INTA (Fig. 2).
The Application Programming Interface
The API defines the way users (GCSs, UAVs, and other clients) send requests or reports to the U-space, and how the information or instructions are received. Part of the calls can be used to embed the communications in the GCS. The rest is a reduced set that operators must use to integrate the GCS/UAV in the U-space environment (e.g., for position reporting). Arquimea's GCS embedded dialogs with the U-space. JSON [3] was chosen as the message format. It allows adding new fields as needed without affecting the existing code, and any programming language has libraries to de-/serialize it. Its flexibility permits the use of different fields depending on the capabilities of the drone, or referring to the same variable with a specific unit/format. It is human-readable, and the logs can be easily processed to analyze the tests. To represent the trajectories, the GeoJSON standard [4] was selected. It allows the definition of sequences of points and polygonal or round areas that can be used to represent geo-fences or areas to scan. Another benefit is the possibility of adding more components to the coordinate vectors, which has been used to register the constraints or any other information related to a waypoint. Concretely, these values were: longitude, latitude, altitude relative to the ground, sea level altitude, speed, and time of arrival (ToA). A minimal sketch of such an extended trajectory is given below.
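As an illustration only (the structure below is hypothetical and does not reproduce the exact Labyrinth message schema), the following Python sketch serializes a short trajectory as a GeoJSON LineString whose coordinate vectors are extended with the additional per-waypoint values listed above:

```python
import json

# Hypothetical trajectory: each coordinate vector is extended beyond the
# standard GeoJSON (lon, lat) pair with altitude above ground, altitude
# above sea level, speed and time of arrival, as described in the text.
trajectory = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [
            # [lon, lat, alt_AGL_m, alt_MSL_m, speed_m_s, toa_unix_s]
            [-7.4701, 43.1169, 30.0, 470.0, 5.0, 1684140000],
            [-7.4690, 43.1175, 30.0, 472.0, 5.0, 1684140030],
        ],
    },
    "properties": {"flight_id": "FL-2023-0815"},  # hypothetical identifier
}

print(json.dumps(trajectory, indent=2))
```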
The U-space services
The ConOps defines different types of airspace volumes, and these require a different number of services. It also determines four deployment phases, which gradually add services to allow more complex operations at each step. For simplicity, and due to space limitations, it will just be indicated that the environment developed was addressed to the most restrictive Zu volumes. However, regarding the phases, since this was a research prototype oriented to test only some services in controlled scenarios, not all of the services of each phase were implemented before developing the services of the following one. It must be noted that many of the services could deserve their own project. The description of the implementation of these services has been ordered following the categories specified for them in the ConOps.
Identification and Tracking
• e-identification Service. The system must maintain updated information on the flights and, if requested, provide the different users (authorities, citizens, and operators) with different details depending on their data access privileges. Concerning the drone tags in the web app, which was focused on the pilot's view, and since it is important to provide a decluttered screen, only the drone's unique identifier and some telemetry chosen by the pilots were displayed. An important value among these is the timestamp that indicates the moment when the telemetry was measured. Position reports were expected once per second, so a timestamp much older than that could mean a lost-link event. In any case, users received a warning when the reporting frequency was not met.
• Tracking Service. Tracking depends on the Position Report Submission subservice, which receives and handles the reports with the drone telemetry during the flight. In these reports, some fields were mandatory (drone and flight IDs, origin [UAV or GCS], 3D coordinates, speed, timestamp, security token) and others optional (e.g., the uncertainty of the values, or the source, like altitude from GPS, barometry or infrared). It is important to allow flexibility in the fields, since different platforms can provide different data, or data in different units and formats. In some cases, this can be solved with a conversion in the GCS or the UAV, but that is not always convenient. An example experienced was the attempt to estimate programmatically in the GCS the altitude relative to the ground, which resulted in great inaccuracies on uneven ground, leading to fatal altitude errors when the U-space had to modify the trajectories during the flight based on those estimations. The issue of altitude references was the topic of the project ICARUS [5], which was running at the same time as Labyrinth. The lesson learnt was that it is better to work with the existing reliable data and handle or mitigate any related limitation in the services, rather than interpolate or estimate the missing values.
While a minimum of one report per second was requested, increasing or decreasing the frequency based on the speed or the traffic nearby could be considered. That would help to reduce the workload on the server, which is of special interest with high traffic densities. Each report triggers a good number of checks in different services, and the updates are broadcast to all connected users. Reports could be sent by the UAV, the GCS or by both redundantly. In this last case, the coherence between them was checked, which can help to detect spoofing. A minimal sketch of such a report is given below.
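As an illustration only (the field names are hypothetical and do not reproduce the exact Labyrinth API schema), the following Python sketch builds a position report with the mandatory fields described above plus two optional ones:

```python
import json
import time

# Hypothetical position report: mandatory fields per the text are the drone
# and flight identifiers, the origin (UAV or GCS), the 3D coordinates, the
# speed, the timestamp and a security token; uncertainty and altitude source
# are optional extras.
report = {
    "drone_id": "UAS-ES-0042",          # hypothetical identifier
    "flight_id": "FL-2023-0815",        # hypothetical identifier
    "origin": "GCS",                    # "UAV" or "GCS"
    "position": {"lon": -7.4701, "lat": 43.1169, "alt_agl_m": 55.0},
    "speed_m_s": 6.2,
    "timestamp": time.time(),           # seconds since the Unix epoch
    "token": "<security-token>",        # placeholder, not a real credential
    # optional fields
    "position_uncertainty_m": 1.5,
    "altitude_source": "barometric",
}

print(json.dumps(report, indent=2))
```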
Airspace Management/Geo-awareness
• Drone Aeronautical Information Management Service. It is in charge of keeping the map of existing geo-fences, whether static or dynamic. Dynamic ones can be created or deleted using the API or the web app, and have an associated duration, name, description, and a specific bottom and/or top altitude. The duration can be specified as a number of minutes since creation or as a date timespan, which is useful to define temporary corridors for priority flights.
• Geo-awareness Service. When users log in, they receive all active geo-fences, so that these can be seen while designing a flight plan, and they will receive notifications of new geo-fences being created or terminated. Apart from its informative role, this service also monitors that the UAVs comply with geo-fence avoidance or that they do not leave a geo-cage. If this happens, pilots receive a warning, and tactical deconfliction is triggered if necessary. For geo-cages, a friendly warning is first sent if the UAV gets close to its limits. This service included the Geo-fence Provision Service, to directly inform the UAV of any existing or new geo-fence.
Mission Management
• Operation Plan Preparation/Optimisation. Using this service, operators can submit flight plan approval requests or cancel already approved ones. The web app allows displaying one or more trajectories in a 2D/3D map view that can be rotated, tilted or zoomed over a clean topographic map background or an orthophoto (the CesiumJS [6] open source library for geo-spatial visualization was used). The trajectories are rendered as a list of waypoints. If one is clicked, the 4D constraints on it are displayed. These features are extremely useful for research purposes, for example, to analyse the separation decisions of the path planner.
• Operation Plan Processing. It handles the list of pending flights and processes new ones.
In Labyrinth, operators sent their flight requests as a draft of the desired trajectory and a set of preferences, and, if found, they received a feasible flight plan based on it. The flexibility of the trajectory design and preferences was determined by the path planning subservice. Such flexibility will be key to satisfying the needs of the final users. Labyrinth's path planner calculated trajectories for drafts defined as origin-destination, origin-area to scan-destination (the planner returned a pattern to scan the area), or origin-list of waypoints-destination. However, it was learnt, after the exercises with the first responders, that the option origin-geo-cage-destination should be added. Geo-cages were requested when trajectories in the area could not be decided beforehand, since they depended on the needs of each situation. An example would be a drone arriving at an area of interest to follow the unpredictable movements of a mass of people. When this service receives a new request for flight permission, after successful syntactic and semantic checks, it adds the geo-fences and traffic affecting the requested trajectory at the moment of the flight, as well as the drone capabilities (maximum and minimum speed, climb/descent rate, type [VTOL/fixed-wing]), and forwards the request to the Strategic Conflict Resolution Service, which subsequently calls the path planner subservice. The planner returns, if found, a feasible and optimized trajectory with the hard constraints applied and the soft ones when possible. Paths are in fact treated as a tube by the planner, providing a radius of separation along them to account for relatively light and unavoidable deviations due to navigation inaccuracies or wind. The resulting flight plan is sent to the operator, who must check if it fits their needs and accept or reject it. To ease this decision step, it is important to provide a clear and informative display of the plan returned. Besides, the number of waypoints returned by the planner should be as small as possible, since some drones have problems loading or managing trajectories with a high number of points.
Conflict Management
• Strategic Conflict Resolution. This service is in charge of returning deconflicted 4D flight plans, where the fourth component can be given by the ToA at the waypoint or the speed on it. Some drones are only able to try to comply with one of these constraints. This service depends on the path planner subservice, whose capabilities and flexibility have a substantial impact on this and other services. In Labyrinth, the trajectory was not only deconflicted from the traffic and geo-fences at the moment of the flight; the elevation map of the area (a Digital Surface Model) was also considered. The availability of these maps is key for optimized and safe trajectory planning. No distinctions between the airspace users were implemented during the project, in the sense that, for example, the airspace was not partitioned into areas or levels for different UAV capabilities, and it was assigned on a first-come, first-served basis. When such policies exist, though, they should also be taken as input by the path planner. A minimal sketch of the tube-based separation check involved is given at the end of this subsection.
There was a difference between the plans calculated for the fixed-wing and those for the VTOL UAVs. The first waypoint for the fixed-wing would be at a certain altitude, and the drone would autonomously calculate the manoeuvre to reach it. Therefore, in this case, two segregated volumes around the take-off and landing points were considered, big enough for the aircraft to execute its climbing/descent manoeuvre.
• Tactical Conflict Resolution. Some events can have an impact on the ongoing flights; aircraft unable to comply with the expected 4D constraints, malfunctions or sudden geo-fences (Fig. 3) are some examples. When this happens, thanks to this service, the affected traffic automatically receives instructions to avoid the conflict. This service also makes use of the path planner, which, together with the new constraints, should consider which kind of instructions the aircraft is able to execute during the flight given its capabilities or status. The importance of this was seen while testing with a drone connected to the U-space via satellite communications (satcom). Together with Telefonica I+D and the Network Technologies research group from UC3M, part of the project was focused on identifying requirements and suggesting an infrastructure, both ground and airborne, to support reliable connectivity of the drones. A smart link switcher was developed [7], and DLR studied the impact of satcom users in the environment and the need for any special measures for them. From the tests, it was learnt that sending large portions of data, e.g., an entirely new trajectory, to a satcom-linked drone should be avoided. In this case, only simpler instructions like changes in altitude or speed were feasible. Also important is to include in the conflict resolution procedure an estimation of the time needed for the instruction to reach the pilot and be accepted and applied. An example is illustrated by the instructions to overwrite the remaining trajectory: if the first new waypoints are close to the current position, the drone could already have overflown them while the instruction is wilcoed and loaded, forcing it to backtrack. One particular tactical deconfliction situation appears when the operator needs to abort and change the current flight plan. This capability was fundamental for missions like those of the first responders, who must adapt their support to the development of the events. In those situations, the procedure decided was the following. First, the pilot requested a cancellation of the current flight. The UAV should then be kept hovering until the end of the procedure. Automatically after the cancellation request, a safety dynamic geo-fence was created around the UAV and any traffic affected was deviated. In the case of fixed-wings, the geo-fence size was tailored to give room to the holding loop pattern required by the aircraft. Pilots could then design and submit the new requested plan. After a successful approval process (i.e., no conflicts found), and before starting to execute it, they had to send a start-of-flight message. When doing so, the geo-fence was removed and they could start the new plan. With a little practice, the aircrew got used to this procedure and executed it quickly. If they forgot to send the start-of-flight message, they received a warning when the safety geo-cage was left, so they could immediately solve it. A similar procedure was provided for those cases where the need was to stop and fly manually in the area without a predefined trajectory. Here, after cancelling the ongoing
plan, operators would ask for a geo-cage, which was broadcast as a new geo-fence to the rest of the traffic. When the area was segregated, the operator could start working inside it freely.
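To make the tube-based separation idea concrete, the following Python sketch (an illustration only; it does not reproduce the Labyrinth path planner [2] or its actual thresholds) flags a strategic conflict whenever two sampled 4D trajectories come closer than the sum of the tube radii reserved around them at approximately the same time:

```python
from math import dist

def conflict(traj_a, traj_b, radius_a, radius_b, dt_max=1.0):
    """Flag a conflict between two sampled 4D trajectories.

    Each trajectory is a list of (t, x, y, z) samples in seconds and metres.
    Two samples are compared only if their times differ by less than dt_max;
    a conflict is declared if the 3D distance is below the sum of the tube
    radii reserved around each path.
    """
    for ta, *pa in traj_a:
        for tb, *pb in traj_b:
            if abs(ta - tb) < dt_max and dist(pa, pb) < radius_a + radius_b:
                return True, ta
    return False, None

# Hypothetical example: two straight, level segments crossing the same area.
a = [(t, 10.0 * t, 0.0, 30.0) for t in range(20)]
b = [(t, 100.0, -50.0 + 10.0 * t, 30.0) for t in range(20)]
print(conflict(a, b, radius_a=20.0, radius_b=20.0))
```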
Emergency Management
This service was not developed in depth, since non-nominal situations were out of the scope of the project. However, warnings were included to inform the pilots of relevant events. Besides, a warning message type could be used by pilots, or by the UAV autonomously, to report any problem. If sent, it was broadcast to the traffic nearby and the web app showed the drone surrounded by a red cube representing a wider separation, to provide a margin of time for a tactical deconfliction of the close traffic. In the current implementation, the affected traffic is only re-routed when its trajectory conflicts with the mentioned emergency geo-fence, but future work could consider starting a preventive separation when a problem is reported in the area.
Monitoring
• Monitoring. As mentioned, different drones are able to comply with different waypoint constraints, like speed, ToA, or none of them. This conformance monitoring service would consider this while checking whether the constraints were met. With respect to the 3D position, a light deviation warning was sent when the aircraft came notably apart from the planned route but was still inside the deconflicted tube reserved by the path planner around it. If leaving this tube, a severe deviation warning was sent, the traffic around was warned, and the tactical separation service was triggered if conflicts appeared. A small sketch of this two-level check is shown after this list.
• Traffic Information. Users receive constant updates on the positions of the close traffic, warnings, or information like flight cancellations and their reason. In the web app, the icons of non-nominal flights appear with a warning icon, together with the mentioned red shield box. Lost-link events between the UAV and the U-space or the GCS appear with different icons in the flight strip.
• Legal Recording. Communications related to the flights, events, user requests, and errors were registered in log files to allow later analysis. These would also be used by the partner Austrian Institute of Technology (AIT) to train the algorithms that would detect unexpected values and therefore possible spoofing [8]. AIT also developed a methodology to analyse the information exchange in the environment, identify weaknesses and weigh the impact of a possible cyber attack [9].
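A small, purely illustrative Python sketch of the two-level conformance check described in the Monitoring bullet (the threshold fraction and function name are assumptions, not the values used in Labyrinth):

```python
from math import dist

def deviation_level(reported_pos, planned_pos, tube_radius_m,
                    light_fraction=0.5):
    """Classify route conformance from the distance to the planned position.

    Below light_fraction * tube_radius_m the flight is considered conforming;
    between that and the tube radius a light deviation warning is raised;
    outside the reserved tube the deviation is severe and tactical
    deconfliction may be triggered.
    """
    d = dist(reported_pos, planned_pos)
    if d < light_fraction * tube_radius_m:
        return "conforming"
    if d < tube_radius_m:
        return "light_deviation"
    return "severe_deviation"

# Hypothetical usage with a 20 m tube radius around the planned route.
print(deviation_level((10.0, 5.0, 30.0), (0.0, 0.0, 30.0), 20.0))
```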
GCS integration
Fixed-wing
The main reason for Arquimea to embed the communications with the U-space in their GCS was related to the peculiarities of fixed-wing take-off and landing manoeuvres. During take-off, fixed-wing aircraft heavily rely on the wind at the precise moment of the manoeuvre, making it challenging to plan the trajectory well in advance. Moreover, there exists a transition period after take-off, during which the aircraft reaches the desired altitude and then follows the planned flight path. Predicting this transition accurately is complex. Similarly, during the landing manoeuvre, there is a transition between the last point of the flight plan and the beginning of the descent orbit. This descent orbit may be performed multiple times to achieve the desired altitude and start the landing phase. The management of these peculiarities was addressed by the GCS software. There was no need or reason to delegate it to the path planner. The GCS would add the necessary manoeuvres to the trajectory returned by the Operation Plan Processing service. The processing of the JSON messages was straightforward thanks to the existing libraries for C#, the only problem being the incompatibility between some JSON field names and the variable names permitted in C#, which prevented an automated deserialization.
Vertical take-off and landing
INTA's aircrew was formed by a Pilot-in-Command (PiC), an external pilot, two payload operators (one of them also in charge of the U-space communications), one visual observer, the subject matter expert of the exercise (the final user) and a platform engineer. At a given moment, the PiC was managing up to three drones at the same time flying BVLOS, including communications with air traffic control, since the drones took off from the aerodrome Rozas Airborne Research Center (CIAR). However, INTA concluded that an improved integration would allow the PiC to also assume the U-space communication tasks. Examples of that improvement would be the automatic loading of the plan in the drone when the PiC accepts the route suggested by the Plan Processing service, or the automated loading in the drone of the commands when an instruction is wilcoed by the pilot.
Conclusions
The development of the U-space prototype, and especially testing it in different use cases, provided valuable knowledge of the needs of final users, operators and pilots, the limitations and capabilities of the drones, and the challenges of the environment and missions. The experience showed that, to facilitate different kinds of operations, the system must provide flexibility, both to design the plan and to modify the ongoing flight. The implementation must be able to adapt its services to the heterogeneity of UAV platforms with different capabilities, especially with regard to the differences between VTOL and fixed-wing. Some particular needs were identified. If a definition of the messages between the U-space and the drone/GCS were agreed, with the expected information, units or format, manufacturers could provide tools or functionalities oriented to ease the integration of their drones. Another request was to clearly define the procedures and responsibilities in the event of contingencies. An important pending work regarding tactical deconfliction is to consider the battery level; that way, sending the drone instructions that could result in an emergency procedure could be avoided.
With respect to the graphic interface, its usability, flexibility and a clear display of the relevant information and events will impact the mental workload of the flight crew and how quickly tasks can be performed, which will be key in the case of operating swarms. In this regard, the feedback of users is fundamental. Therefore, a successful development should go hand in hand with operators, pilots, and GCS developers, this being the best way to identify needs and constraints at all levels.
Figure 3. The pilot receives a deconflicted trajectory if affected by a new dynamic geo-fence.
A micropolar shell model for hard-magnetic soft materials
Hard-magnetic soft materials (HMSMs) are particulate composites that consist of a soft matrix embedded with particles of high remnant magnetic induction. Since the application of an external magnetic flux induces a body couple in HMSMs, the Cauchy stress tensor in these materials is asymmetric, in general. Therefore, the micropolar continuum theory can be employed to capture the deformation of these materials. On the other hand, the geometries and structures made of HMSMs often possess small thickness compared to the overall dimensions of the body. Accordingly, in the present contribution, a 10-parameter micropolar shell formulation to model the finite elastic deformation of thin structures made of HMSMs and subject to magnetic stimuli is developed. The present shell formulation allows for using three-dimensional constitutive laws without any need for modification to apply the plane stress assumption in thin structures. Due to the highly nonlinear nature of the governing equations, a nonlinear finite element formulation for numerical simulations is also developed. To circumvent locking at large distortions, an enhanced assumed strain formulation is adopted. The performance of the developed formulation is examined in several numerical examples. It is shown that the proposed formulation is an effective tool for simulating the deformation of thin bodies made of HMSMs.
Introduction
Magneto-active soft materials consist of magnetic particles dispersed into a soft elastomeric matrix and undergo large deformations under magnetic loading. This class of materials has been used in vibration absorbers, sensors, actuators, soft robots, flexible electronics, and isolators (see, e.g., [1][2][3][4][5][6] and references therein). Therefore, developing reliable theoretical models plays an essential role in the optimum and cost-effective design of the aforementioned devices and instruments.
Based on the type of the embedded particles, magneto-active soft materials are divided into two sub-classes, namely soft-magnetic soft materials (SMSMs) and hard-magnetic soft materials (HMSMs). The former contains particles with low coercivity, such as iron or iron oxides, and their magnetization vector varies under external magnetic loading. This sub-class has been the subject of a huge amount of research work in this century (e.g., [7][8][9][10][11][12][13]). The latter sub-class is composed of particles of high coercivity, such as CoFe 2 O 4 or NdFeB, so that their magnetization vector, or equivalently, their remnant magnetic flux, remains unchanged for a wide range of the applied external magnetic flux (e.g., [14,15]). One of the main characteristics of HMSMs is that external magnetic induction of relatively small magnitude causes rapid finite deformations in these materials (e.g., [16,17]). Moreover, the 3D printing technologies have enabled the researchers to program the ferromagnetic domains in complex structures, which leads to the desired deformations [18][19][20][21][22].
Theoretical modeling of HMSMs has been the subject of a plethora of research articles in recent years (e.g., [23-32]). In particular, Zhao et al. [24] developed a continuum formulation with asymmetric Cauchy stress so that the external magnetic induction directly contributes to the expression of the stress tensor. Their theory has been the foundation for the analysis of hard-magnetic soft beams (HMSBs) in Yan et al. [33], Wang et al. [34], Rajan and Arockiarajan [35], and Chen et al. [36,37], among others. The same formulation has been employed to model the deformation of magneto-active shells by Yan et al. [38]. Dadgar-Rad and Hossain [39] enhanced the formulation of Zhao et al. [24] to account for viscoelastic effects and analyzed the time-dependent dissipative response of HMSBs. Some researchers have developed micromechanical and lattice models for HMSMs (e.g., Refs. [28-31]). From a different point of view, Dadgar-Rad and Hossain [32] focused on the well-known phenomenon that, due to the presence of the remnant magnetic induction in HMSMs, applying an external magnetic loading produces a body couple that plays the role of the main driving load to generate mechanical deformation in the body (e.g., [40]). Moreover, the Cauchy stress loses its symmetry, as had been previously pointed out by Zhao et al. [24]. However, instead of following the methodology advocated in [24], the authors developed a formulation based on the micropolar continuum theory to predict the deformation of 3D hard-magnetic soft bodies. Two significant differences between the results of the formulation of Zhao et al. [24] and those based on the micropolar-enhanced formulation have been expressed by Dadgar-Rad and Hossain [32].
The current research is essentially the continuation of the previous work of the authors, namely Dadgar-Rad and Hossain [32], which had been developed for three-dimensional bodies. However, most bodies made of HMSMs are thin structures, and using three-dimensional elements is computationally expensive. Accordingly, the purpose of this research is to develop a micropolar-based shell model to predict the deformation of thin HMSMs. To do so, the 7-parameter shell formulation of Sansour (e.g., [66,67]) has been extended to a 10-parameter one that involves the micro-rotation of the microstructure. On the other hand, the enhanced assumed strain method (EAS) is a widely used strategy to eliminate locking in shell structures, e.g., [68-71]. Therefore, this method is adopted here to circumvent locking effects in the present micropolar shell formulation.
The next sections of this paper are as follows: The basic kinematic and kinetic relations of the micropolar continuum theory are summarized in Section 2. In Section 3, the main characteristics of HMSMs are presented. In Section 4, the kinematic equations describing a 10-parameter micropolar shell model are provided. Section 5 presents the variational formulation, followed by a FE formulation in Section 6. Numerical examples are studied in Section 7, and the paper concludes in Section 8.
Notation: In this work, Greek indices take the values 1 and 2. All upper-case and lower-case Latin indices take the values 1, 2, and 3. Upper-case indices with calligraphic font, e.g., K and L, take the values specified in the corresponding equations. Repeated Latin and Greek indices obey Einstein's summation convention. If P and Q are two 2nd-order tensors, the tensorial products defined via the symbols ⊗, ⊙, and ⊠ generate 4th-order tensors whose components are given by (P ⊗ Q)_ijkl = P_ij Q_kl, (P ⊙ Q)_ijkl = P_ik Q_jl, and (P ⊠ Q)_ijkl = P_il Q_kj, respectively. For numerical simulations, the 9 × 1 vectorial representation {U_11, U_22, U_33, U_12, U_21, U_13, U_31, U_23, U_32}^T of an arbitrary 2nd-order tensor U will be used.
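As a trivial implementation aid (the function name below is ours, not from the paper), the 9 × 1 ordering just defined can be produced as:

```python
import numpy as np

def tensor_to_vec9(U):
    """Arrange a 3x3 second-order tensor into the 9x1 ordering
    {U11, U22, U33, U12, U21, U13, U31, U23, U32} used in the text
    (indices below are zero-based)."""
    U = np.asarray(U)
    order = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 0),
             (0, 2), (2, 0), (1, 2), (2, 1)]
    return np.array([U[i, j] for i, j in order]).reshape(9, 1)

# Example: the identity tensor maps to (1, 1, 1, 0, 0, 0, 0, 0, 0)^T.
print(tensor_to_vec9(np.eye(3)).ravel())
```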
A brief review of the micropolar theory
The purpose of this section is to introduce some concepts and relations of the micropolar theory. The interested reader may refer to the pioneering works developed in Refs. [42,43,45] for more details and discussions.
In this section, two coincident Cartesian coordinate systems {X_I} and {x_i}, with {E_I} and {e_i} as the corresponding basis vectors, are considered. The center of a macro-element in the reference configuration B_0 is denoted by X. After deformation by the deformation mapping ψ, the center of the macro-element in the current configuration B at time t is denoted by x, so that x = ψ(X, t). The deformation gradient is F = ∂x/∂X, which can be uniquely decomposed as F = RU = VR. Here, R is the macro-rotation tensor, and U and V are the symmetric positive definite right and left stretch tensors, respectively.
For later use, the variation of the deformation gradient is written as δF = Grad(δu), where u = x − X and δu = δx are the actual and virtual displacement fields, respectively.
As a basic assumption in the micropolar theory, there exists a microstructure inside each macro-element that experiences rigid micro-rotations independent of the macro-motion x.
Next, in the current configuration B, let dA and n be an infinitesimal area element and its corresponding outward unit normal vector, respectively. In the micropolar theory, the traction t^(n) and the couple vector c^(n) (the moment per unit area) act on dA. Let σ and m be the asymmetric Cauchy stress and the asymmetric couple stress corresponding to t^(n) and c^(n), respectively. Accordingly, the well-known Cauchy stress principle is extended so that both t^(n) and c^(n) are obtained by contracting σ and m with n (e.g., [32,42]). For later use, the first Piola-Kirchhoff stress P, the material stress P̃, the first Piola-Kirchhoff couple stress M, and the material couple stress M̃ are defined from σ and m in terms of the deformation gradient and the micro-rotation.
Basic relations of HMSMs
The main property of hard-magnetic soft materials is the existence of a remnant magnetic flux density that remains almost unchanged under a wide range of the applied external magnetic flux b^ext (e.g., [16,17,24]). Let B̃^rem and b^rem be the remnant magnetic flux in the reference and current configurations, respectively; they are related through the push-forward b^rem = J^{-1} F B̃^rem, with J = det F [24]. The action of b^ext on b^rem leads to a body couple (moment per unit volume) in HMSMs. The body couple per unit current volume p and the body couple per unit reference volume p* may be written as p = µ_0^{-1} b^rem × b^ext and p* = J p = µ_0^{-1} (F B̃^rem) × b^ext (e.g., [24,40]), where the constant µ_0 = 4π × 10^{-7} N/A² is the free-space magnetic permeability. For HMSMs, the external magnetic flux density b^ext is often assumed to remain constant in space (e.g., Refs. [24,34-37]). Using this point, it has been proven that the Maxwell equations Div B = 0 and Curl H = 0 are satisfied in HMSMs (e.g., [24,40]), where B is the referential magnetic flux density, H is the referential magnetic field, and the referential curl operator "Curl" is defined with respect to the basis vectors E_I of the referential coordinate system {X_I} introduced in the previous section.
Kinematics of a 10-parameter micropolar shell model
The geometry of a part P of a shell in the reference and current configurations is displayed in Fig. 1. Let S_0 be the mid-surface of the shell in the reference configuration, which deforms into the surface S in the current one. As shown in Fig. 1, in addition to the two in-plane convective coordinates ζ^α, the thickness coordinate ζ^3, with h as the initial thickness of the shell, is considered to be perpendicular to S_0 in the reference configuration. However, it does not remain perpendicular to S in the current configuration, in general. In the sequel, for the sake of simplicity, the coordinate ζ^3 may be replaced by z.
The position of the material particle q on the mid-surface S_0 may be described by the vector X̄(ζ^1, ζ^2). Let {A_α, A^α, A_αβ, A^αβ, N, B} be, respectively, the covariant and contravariant basis vectors, the covariant and contravariant components of the metric tensor, the outward unit normal vector, and the curvature tensor on the undeformed mid-surface S_0. The standard relations of the differential geometry of surfaces then hold among these quantities (e.g., [72]), with δ^β_α the two-dimensional Kronecker delta. For later use, the surface contravariant basis vectors may be written as A^α = A*_{αJ} E_J, where A*_{αJ} are the Cartesian components of A^α. The position of the material particle p located at the elevation z with respect to S_0 follows from X̄ and N, from which the covariant basis vectors at that point are obtained. Motivated by Eqs. (13)_8 and (15), the symmetric shifter tensor Q, with I as the identity tensor, is defined. The shifter tensor can be used to map the covariant and contravariant basis vectors from an elevation z ≠ 0 to the mid-surface with z = 0, and vice versa; the corresponding relations make use of the symmetry property of Q. For later use, the three-dimensional material gradient operator Grad_ζ, with respect to the convective coordinates {ζ^i} in the reference configuration, and the material surface gradient operator Grad_{S_0}, with respect to {ζ^α}, are also defined. By assuming that a straight material fiber perpendicular to S_0 remains straight during deformation, the macro deformation field of Eq. (18) is considered (e.g., [66,67]), where x is the image of X̄ on S and d is a director vector along the deformed z-axis. Moreover, the scalar field φ describes the through-the-thickness stretching of the shell. Similar to the quantities defined on S_0 in Eq. (13), let {a_α, a^α, a_αβ, a^αβ, n, b} be the corresponding surface quantities defined on S. Based on Eq. (18), the covariant basis vectors g_i at x follow, and it is observed from Eq. (20)_2 that the director d and the basis vector g_3 are in the same direction. However, the normal vector n and g_3 are not in the same direction, in general. Next, the vectors u = u_i e_i and w = w_i e_i are defined as the mid-surface and the director displacements, respectively. From Eqs. (16) and (17)_1, the deformation gradient tensor F can be described in the convective coordinates. In the present shell model, using Eqs. (15), (16)_3, (20), and (22), and neglecting the higher-order terms involving z², the deformation gradient is approximated in terms of the tensors F^[0] and F^[1], which, with the aid of Eqs. (13)_8 and (21), are obtained explicitly. To circumvent numerical difficulties in the finite element solution, the in-plane deformation gradient term F^[0] is enhanced by the second-order tensor F̃, to be introduced in Section 6. Accordingly, the term F^[0] in Eq. (23)_2 is replaced by F^[0] + F̃. Moreover, following Ramezani and Naghdabadi [73] in the context of the micropolar Timoshenko beam model, it is assumed that the micro-rotation pseudo-vector is constant along the shell thickness, namely θ̄ = θ̄(ζ^1, ζ^2).
Accordingly, the micro-rotation tensor R̄(θ̄) is independent of the z coordinate. Keeping this in mind and using Eqs. (23), the micropolar deformation measures Ũ and Γ of the present shell formulation may be written in terms of F* = F^[0] + F̃, the enhanced form of F^[0]. The present formulation, with {u, w, φ, θ̄} as the unknown field variables, may be regarded as a 10-parameter micropolar shell model. In other words, the present formulation is the extension of the classical 7-parameter shell model, with {u, w, φ} as its unknowns, introduced by Sansour (e.g., [66,67]).
Variational formulation
Let δU be the virtual internal energy and δW denote the virtual work of external loads. The principle of virtual work states that δU − δW = 0 [74]. In what follows, the expressions for δΨ and δŴ, as, respectively, δU and δW per unit reference volume, are derived. Moreover, for the linearization purpose to be used in the next section, the increments of δΨ and δŴ are also calculated.
By neglecting thermal effects, assuming that the material is hyperelastic, and in order to develop a material formulation, the internal energy per unit reference volume may be written as Ψ = Ψ(Ũ, Γ) [42,45]. Using this point and Eq. (7) furnishes the expression for δΨ. Moreover, the constitutive equations for the pairs {P̃, M̃} and {P, M} follow by differentiating Ψ with respect to the work-conjugate deformation measures [32]. It is noted that Eqs. (26) and (27) hold for all three-dimensional micropolar hyperelastic solids.
FE formulation
A nonlinear finite element formulation for the present shell model is developed in this section.
Let S^e_0 be a typical element of the referential mid-surface S_0. To perform the numerical integration, the element field variables are interpolated from the nodal values, where n_e = 3(n_u + n_w + n_θ) + n_φ is the number of nodal DOFs of the element.
The differential volume element dV e 0 located at the elevation z with respect to the typical element S e 0 is given by (e.g., [66]) Now, the element virtual internal energy is given by δU e = V e 0 ΨdV e 0 . Similarly, the expression for the virtual work over the element can be calculated via δW e = V e 0Ŵ dV e 0 . Using Eqs. (28) and (42), the expressions for δU e and δW e may be written as where the internal force vectors intv and intα are as follows: Moreover, the external force vector θ exti , work conjugate to Θ i , is given by Next, the linearized equations resulting from Eqs.
The system of algebraic equations extracted from Eq. (47) may be written as where the subscripts "mat", "geo", and "load", represent the material, geometric, and load part of the element stiffness matrix. In particular, the material sub-matrices à vv mat , à vα mat , and à αα mat in Eq. (48) are as follows: where [J ] are the matrix forms of C [J ] . Moreover, the load sub-matrices à vv load , à vα load are given by , is a column vector whose nonzero entry is AE θ . The expressions for the geometric sub-matrices, resulting from the terms P [0] : ∆δH [1] and M [0] : ∆δH [2] in Eq. (30), are too lengthy and are not presented here.
The assembled system of equations is of the form K ∆q = −r, where K, ∆q, and r are the assembled forms of the stiffness matrix, the incremental generalized displacements, and the residual vector, respectively. After finding ∆q, the non-rotational quantities are updated via the relations u + ∆u → u, w + ∆w → w, and φ + ∆φ → φ. However, the update procedure for the rotation pseudo-vector is completely different. Let ∆θ̄ be the increment of the rotation pseudo-vector.
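The text above notes that the rotation pseudo-vector cannot be updated additively. A minimal sketch of one common choice, a multiplicative update through the exponential map evaluated with Rodrigues' formula, is given below; whether the increment acts from the left or the right, or whether the pseudo-vector itself is composed, depends on the parametrization adopted in the paper and is not asserted here:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_pseudovector(theta):
    """Rodrigues' formula: rotation tensor of a rotation pseudo-vector."""
    angle = np.linalg.norm(theta)
    if angle < 1.0e-12:
        return np.eye(3)
    K = skew(theta / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def update_rotation(R_old, dtheta):
    """Multiplicative update of the micro-rotation by an increment dtheta."""
    return rotation_from_pseudovector(dtheta) @ R_old

# Hypothetical usage: a 0.1 rad incremental rotation about the x3 axis.
R = update_rotation(np.eye(3), np.array([0.0, 0.0, 0.1]))
print(np.round(R, 4))
```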
Numerical examples
To examine the applicability of the developed formulation, six examples are solved in this section. The formulation has been implemented in our in-house finite element code. The 10-parameter micropolar shell element designed for the present numerical simulations is an eight-node quadrilateral. All eight nodes contain the three displacement components u_i. However, only the corner nodes contain the w_i, φ, and θ_i DOFs. In other words, the DOF parameters defined after Eqs. (38) and (39) are n_u = 8 and n_w = n_θ = n_φ = 4. Following Korelc and Wriggers [70], the enhancing deformation gradient F̃ is constructed from F̃_ref, the enhancing deformation gradient defined in the parent {ξ, η} space, by means of the Jacobi matrix J between the physical and parent elements. In this work, the nonzero components of F̃_ref are taken as linear functions of the parent coordinates ξ and η. This indicates that F̃ contains six enhanced parameters. To evaluate the integrals over the element surface, 2 × 2 Gauss-Legendre integration is used. Moreover, the two-point rule is employed for integration along the shell thickness.
VERIFICATION EXAMPLE: bending of beam-like strips
To examine the validity of the results of the proposed formulation, the flexural deformation of four beam-like strips under magnetic loading is studied in this example. Extensive experiments on these structures have been previously conducted by Zhao et al. [24]. The values of the mechanical properties λ and µ are, respectively, 7300 and 303 kPa. A convergence analysis reveals that the minimum required number of elements along the length of the strips is 10, 15, 30, and 40, respectively. Additionally, two elements in the width direction are necessary for the four strips. Furthermore, for the micropolar parameter η = µ/10 and the material length-scale l = h/10, the present results are in good agreement with the available data obtained in [24]. Therefore, these relations will be employed for the next examples as well. Fig. 2(a) displays the normalized deflection u_3^T/L at the tip of the strips versus the nondimensional load 10³ µµ_0 |b^ext||B̃^rem|. From the figure, it is clear that the results based on the present shell formulation are close to the numerical as well as experimental data reported in Ref. [24]. The deformation patterns of the strips for four values of the external magnetic flux are displayed in Figs. 2(b,c,d,e). To allow a comparison between the deformations of the strips for a specific value of |b^ext|, the four strips are plotted in the same figure. The importance of the aspect ratio can be observed in Fig. 2(b), where the strip with AR = 41 experiences considerably large deformation even for |b^ext| = 2 mT, which is a small value for the applied magnetic flux.
It is recalled from Eq. (18) that the present formulation employs the through-the-thickness stretching parameter φ, which leads to a linear shear strain as well as a linear normal strain in the thickness direction. To show the effect of this parameter, two new cases are considered. In the first case, the condition φ = 0 is enforced in the formulation, while the 3D constitutive equations are still employed. In the second case, the plane stress assumption P_33 = 0 is enforced and φ is not considered in the formulation; the constitutive equation is then modified to include the plane stress assumption. For the thick beam with AR = 10 and the thin one with AR = 40.5, the results are displayed in Figs. 3(a,b). It is noted that for the beams with AR = 17.5 and AR = 20.5, similar results are obtained, which are not shown in the figures. It is observed that the two new cases exhibit the locking phenomenon in the resulting elements. The second new case is better than the first one; however, the improvement is negligible. In other words, including the through-the-thickness stretching φ in the formulation and employing the 3D constitutive equations is an effective method for improving the performance of the present micropolar shell element.
Deformation of a hollow cross
In the present example, the finite deformation of a hollow cross under magnetic loading is simulated. Following Kim et al. [18] and Zhao et al. [24], the geometry of the hollow cross is composed of 24 trapezoidal blocks (Fig. 4(a)). The thickness is 0.41 mm, and the remaining dimensions and material properties are identical to those given in the previous example. As can be seen in Fig. 4(a), the direction of $\tilde{\mathbf{B}}^{\mathrm{rem}}$ is constant in each block, but it varies between blocks. The constant value of $|\tilde{\mathbf{B}}^{\mathrm{rem}}| = 102$ mT has been considered for the referential remnant magnetic flux [18,24]. The maximum external magnetic flux density $\mathbf{B}^{\mathrm{ext}}_{\max} = -200\,\mathbf{e}_3$ mT acts on the body. The symmetry of the geometry allows us to discretize merely 1/4 of the body in the $X_1X_2$ plane. Moreover, the displacement component $u_3$ at the points A and G is assumed to be zero.
By performing various numerical simulations, it is found that a 6 × 6 mesh of shell elements in each trapezoidal block leads to convergent results. Fig. 4(a) displays the displacement component $u_3$ at some material points against the normalized loading parameter $10^3 \mu\mu_0|\mathbf{B}^{\mathrm{ext}}||\tilde{\mathbf{B}}^{\mathrm{rem}}|$. At the final stage of deformation, the lateral displacements at the points E and C are very close to each other. More precisely, the maximum $u_3$ displacement, achieved at the point C, is 10.39 mm.
Deformation analysis of a thin cross
The finite elastic response of a thin cross made of HMSMs is simulated in this example. The geometry of the cross involves nine welded 6 × 6 (mm) square-shaped blocks (Fig. 5(a)), and the thickness is 0.9 mm. The magnitude of $\tilde{\mathbf{B}}^{\mathrm{rem}}$ has the constant value of 94 mT. To deform the body by magnetic loading, the maximum value of $|\mathbf{B}^{\mathrm{ext}}|$ is considered to be 40 mT, which is applied perpendicular to the plane of the cross in the $X_3$ direction. Moreover, the mechanical properties are µ = 135 and λ = 3250 (kPa). Due to symmetry in the $X_1X_2$ plane, only 1/4 of the cross is used in the simulations.
From numerical experiments, it is found that a mesh containing 15 shell elements along AC and 3 elements along AA′ leads to convergence in the results. The displacement component $u_3$ at some points is plotted in Fig. 5(a). As usual, the horizontal axis is the nondimensional loading $10^3 \mu\mu_0|\mathbf{B}^{\mathrm{ext}}||\tilde{\mathbf{B}}^{\mathrm{rem}}|$. It is noted that the $u_3$ displacement of the point D has been taken to be zero, and a maximum lateral displacement of 22.78 mm is predicted; the deformed configuration can be compared with the experimental observations of Ref. [20] in Fig. 5(b).
Magnetostrictive response of an H-shaped structure
In this example, the mechanical response of an H-shaped thin structure to magnetic stimuli is simulated. The geometry of the structure, consisting of fifteen blocks, is displayed in Fig. 6(a).
The dimensions and the mechanical and magnetic properties of the blocks are identical to those given in the example 7.3. The maximum applied magnetic loading is $\mathbf{B}^{\mathrm{ext}}_{\max} = -50\,\mathbf{e}_3$ mT. By considering the symmetry properties of the geometry, merely 1/4 of the geometry is analyzed. Numerical experiments indicate that a 4 × 4 mesh of shell elements in each block yields convergent results. In other words, the number of elements along AA′, AC and CD is 2, 14, and 10, respectively. Fig. 6(a) demonstrates the variations of the displacement component $u_3$ at some points against the normalized loading $10^3 \mu\mu_0|\mathbf{B}^{\mathrm{ext}}||\tilde{\mathbf{B}}^{\mathrm{rem}}|$. By assuming zero $u_3$ displacement at the point D, the maximum value of $u_3 = 24.65$ mm is achieved at the point A. The fully deformed shape of the H-shaped structure from the experimental observations of Kuang et al. [20] is also illustrated in Fig. 6; Fig. 6(e) shows that the deformed structure obtained by the present shell formulation is qualitatively similar to that reported in the experiments of Ref. [20].
Deformation of a cylinder (magnetic pump)
The finite elastic response of a cylindrical shell to magnetic loading is simulated in this example. As will be shown below, the deformation pattern of the cylinder is such that it may be used as a macro- or micro-fluidic magnetic pump in practical applications. In a relatively similar context, an electro-active polymer-based micro-fluidic pump can be seen in Yan et al. [76]. In the present case, it is assumed that the cylinder has been made of the same blocks as described in the example 7.3. To construct the geometry, 24 blocks in the circumferential direction and 20 along the axis of the cylinder are used. Therefore, the mean radius and length of the cylinder are R = 22.9 and L = 120 (mm), respectively. The remnant magnetic flux $\tilde{\mathbf{B}}^{\mathrm{rem}}$ is assumed to be tangent to the cylinder surface and perpendicular to the $X_2$ axis; moreover, it has a positive component along the $X_3$ axis. The maximum magnetic flux $\mathbf{B}^{\mathrm{ext}}_{\max} = 150\,\mathbf{e}_3$ mT acts on the cylinder, and both ends of the cylinder are considered to be clamped. Symmetry considerations allow us to simulate 1/4 of the full geometry.
Numerical simulations show that a mesh of 24 × 20 elements is sufficient to obtain convergent results. Variations of the displacement components $u_1$ and $u_3$ against $10^3 \mu\mu_0|\mathbf{B}^{\mathrm{ext}}||\tilde{\mathbf{B}}^{\mathrm{rem}}|$ are plotted in Fig. 7(a). The material points A, B and C lie in the XZ-plane. Under the applied magnetic flux, the cylinder contracts at its middle section; this is the reason why it can be used as a magnetic pump in real applications.
A magnetic gripper
The elastic response of a spherical gripper is simulated in this example. Soft grippers made of magneto-active materials have the potential to serve as actuating components in soft robotics. For instance, Ju et al. [77] and Carpenter et al. [78] demonstrated additively manufactured magneto-active grippers, while Kadapa and Hossain [79] simulated the viscoelastic influences of the underlying polymeric materials. In our case, the gripper is composed of 12 equal arms. In the undeformed configuration, the arms cover the surface of an incomplete sphere of radius R. It is assumed that the mechanical and magnetic properties and the thickness of the HMSM are the same as those given in the example 7.3. The geometry of a single arm is shown in Fig. 8(a). The arc DE lies in the $X_1X_2$ plane, its length is 12 mm, and it covers 30° of a full circle; therefore, the mean radius of the arm is $R = 12/(\pi/6) \approx 22.92$ mm. The arc AC lies in the $X_1X_3$ plane and its length is 60 mm. The angle between the radius OA and the $X_3$-axis is 15°, and the geometry is symmetric w.r.t. the $X_1X_2$ plane. Moreover, the topmost arc of the arm is assumed to be clamped. As shown in the figure, let $\hat{\boldsymbol{\varphi}}$ be the standard meridian unit tangent vector to the sphere. It is assumed that $\tilde{\mathbf{B}}^{\mathrm{rem}}$ is along $\hat{\boldsymbol{\varphi}}$ for $X_3 > 0$, and along $-\hat{\boldsymbol{\varphi}}$ for $X_3 < 0$. It is noted that applying $\mathbf{B}^{\mathrm{ext}}$ in the $X_3$ direction opens the arms of the gripper. Here, the maximum magnetic loading $\mathbf{B}^{\mathrm{ext}}_{\max} = 10\,\mathbf{e}_3$ mT acts on the arms.
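As a sanity check on this setup, the short sketch below (our own illustration, not the paper's code) discretizes the mid-surface of one arm, computes the meridian unit tangent at each point, and assigns the referential remnant flux with the sign convention stated above. The angular extent of the arm (15° to roughly 165° from the X3-axis, implied by the 60 mm arc length and R ≈ 22.92 mm) and the magnitude of 94 mT carried over from example 7.3 are assumptions.

```python
import numpy as np

R = 12.0 / (np.pi / 6.0)      # mean radius of the arm, ~22.92 mm
B_REM = 94e-3                 # remnant flux magnitude in T (94 mT, assumed from example 7.3)

# Spherical parametrization: theta measured from the +X3 axis, phi is the azimuth.
# The 60 mm long arc AC then spans theta in [15 deg, 15 deg + 60/R] ~ [15 deg, 165 deg].
thetas = np.linspace(np.radians(15.0), np.radians(15.0) + 60.0 / R, 31)
phis = np.linspace(-np.radians(15.0), np.radians(15.0), 7)   # 30 deg wide arm

def remnant_flux(theta, phi):
    """Referential remnant flux vector at a mid-surface point of the arm."""
    x3 = R * np.cos(theta)
    # Meridian unit tangent (derivative of the position w.r.t. theta, normalized).
    e_theta = np.array([np.cos(theta) * np.cos(phi),
                        np.cos(theta) * np.sin(phi),
                        -np.sin(theta)])
    # Along +e_theta for X3 > 0 and along -e_theta for X3 < 0 (sign convention of the text).
    sign = 1.0 if x3 > 0 else -1.0
    return B_REM * sign * e_theta

field = np.array([[remnant_flux(t, p) for p in phis] for t in thetas])
print(field.shape)            # (31, 7, 3): one flux vector per surface point
```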
Numerical experiments indicate that a 6 × 30 mesh of shell elements in the arm provides convergence in the results. The displacement components u 1 and u 3 at the points B and C versus 10 3 µµ 0 | ext ||˜ rem | are plotted in Fig. 8(a). For a single arm under the maximum external magnetic flux of 10 mT, the maximum value of the displacement component u 3 is obtained to be about 43.9 mm. The deformed shapes of the gripper under four different values of the external magnetic flux are illustrated in Figs. 8(b,c,d,e). It is noted that the maximum value of the external magnetic flux to avoid intersection between the arms is 6.8 mT. In this case, the maximum u 3 component of displacement is about 41.6 mm.
Summary
In this research, a 10-parameter micropolar shell model for simulating the finite elastic deformation of thin hard-magnetic soft structures was formulated. The idea of employing the micropolar theory comes from the fact that magnetic stimulation induces a body couple on these materials, which in turn leads to an asymmetric Cauchy stress tensor. Since the governing equations at finite strains, including magnetic effects, cannot be solved analytically, a nonlinear finite element formulation for simulating problems with arbitrary thin geometries, boundary conditions, and loading cases was also presented. Six different numerical examples were solved to assess the applicability of the present formulation. It was shown that the results of the proposed formulation are in good agreement with the available experimental and numerical ones. Viscoelastic and thermal effects will be taken into account in forthcoming contributions.
Declaration of competing interest
The authors declare no competing interests.
Numerical operator method for the real time dynamics of strongly-correlated quantum impurity systems far from equilibrium
We develop a method for studying the real time dynamics of Heisenberg operators in strongly-interacting nonequilibrium quantum impurity models. Our method is applicable to a wide range of interaction strengths and to bias voltages beyond the linear response regime, works at zero temperature, and overcomes the finite-size limitations faced by other numerical methods. We compare our method with quantum Monte Carlo simulations at a strong interaction strength, for which no analytical method has been applicable up to now. We find very good agreement between the results at high bias voltage, and in the short-time period at low bias voltage. We discuss the possible reason for the deviation in the long-time period at low bias voltage. We also find good agreement of our results with perturbation theory at weak interactions.
I. INTRODUCTION
Understanding strongly-correlated open quantum impurity systems is an important unsolved problem in condensed matter physics, and is relevant to a wide variety of experimental fields.
In cases ranging from molecular electronics junctions [1-6], low-dimensional mesoscopic systems [7-9] and out-of-equilibrium correlated materials [10-12], a need exists for reliable theoretical treatments which go beyond linear response from equilibrium. The combination of correlated quantum physics and the lack of a description in terms of equilibrium statistical mechanics presents a major challenge in this regard, and while a variety of powerful approximate schemes have been developed [13-23], the entire spectrum of interesting parameters is not reliably covered by these methods. This makes numerically exact results highly desirable; however, even when one is interested only in the properties of a stationary state, the only recourse which does not involve approximations is often to consider time propagation from some simple initial state to the nonequilibrium steady state. Moreover, in some cases one can observe a system's time-dependent response to a quench [24-27], and thus a theoretical treatment capable of following the system's time evolution is needed [20,28-33]. A great deal of progress has been made by considering the special case of nonequilibrium quantum impurity models, where transport between sets of infinite, trivial, noninteracting leads occurs through a finite, nontrivial, interacting region [7,34,35]. The nonequilibrium Anderson model, where the dot is modeled as a single spin-degenerate electronic level with on-site Coulomb interaction, exhibits a range of exotic phenomena related to Kondo physics [36] and has drawn a particularly great deal of attention. While this model and its extensions still constitute an infinite many-body problem and remain under active research even at equilibrium [36-42], the local nature of the interactions in impurity models results in major simplifications, and in recent years a great deal of progress has been made in the development of numerically exact methods which solve for transport properties in impurity models [43-62]. These controlled methods remain limited in their applicability; in particular, quantum Monte Carlo (QMC) methods [45,51,55] are generally slow to converge at low temperatures and long times, while time-dependent density matrix renormalization group (tDMRG) [49,50] and numerical renormalization group (NRG) [44,63] methods are effectively limited to systems where the leads can be efficiently mapped onto reasonably short 1D chains with a small site dimension, in such a way that entanglement along the chain remains low. In recent years, impurity models have also been of interest as auxiliary systems in the study of large or infinite interacting lattices, by way of the DMFT approximation [64-66]. It is therefore necessary to continue exploring new numerical methods. In this regard, the following observation is of some interest: when studying real time dynamics, it is possible to solve the Schrödinger equation and obtain the evolution of quantum states. However, quantum states contain all the information about a quantum system, most of which is redundant when one is only interested in very few or even a single observable. An alternative route is to solve the Heisenberg equation and obtain the evolution of specific observable operators. Solving the Heisenberg equation may allow for some simplification or optimization.
We note that similar ideas have been discussed in the context of tDMRG, where matrix product operators can be more efficient for describing dynamics than matrix product states. [67][68][69][70][71][72] The entanglement area laws which make DMRG perform so well for the ground states of gapped systems, however, do not extend to nonequilibrium situations and metallic systems. It is not therefore clear that tDMRG should be an optimal scheme for such systems as impurity models.
We propose a different approach: to express the solution of the time-dependent observable operators, we construct a set of basis operators, similar to how one might choose basis vectors in the Hilbert space. As will be shown, we can choose the basis operators so that the observable that we are interested in is itself a basis operator at time zero. The time-dependent observable operator then starts from a point on an axis of the operator space and explores the other dimensions at a finite rate, thus facilitating an efficient evaluation of the time evolution in our basis. This allows us to work with an infinite model and circumvent finite-size effects.
In this paper, we develop a numerical operator method (NOM) for nonequilibrium quantum impurity models, based on previous work by one of the authors [73,74]. We apply the method to the real time transport dynamics of the Anderson impurity model. Our method is in principle applicable to arbitrary bias voltage and interaction strength. It describes infinite reservoirs (which are difficult within tDMRG) at absolute zero temperature (which is difficult for QMC), and can therefore be expected to be advantageous in some regimes. We compare our results with perturbation theory at weak interactions and with QMC at strong interactions. We find excellent agreement in most cases where it is expected, and discuss possible reasons for deviations in problematic regimes.
The contents of the paper are arranged as follows. In Sec. II, we introduce the model and the preliminary transformation of the Hamiltonian. In Sec. III, we show the details of the NOM. In Sec. IV, we demonstrate the validity and power of our method by giving some examples and comparing it with other methods. Finally, a concluding section summarizes our results.
A. The Anderson impurity model
The NOM is designed for solving the Heisenberg equation of motion. In this paper, we discuss its application to the study of transport through the nonequilibrium Anderson impurity model, an archetypal model for the description of electron-electron interactions in quantum junctions [75]. The model involves an impurity site coupled to two ("left" and "right") electronic reservoirs or leads:
$$\hat{H} = \hat{H}_{\text{imp}} + \sum_{k\alpha\sigma}\epsilon_k\,\hat{c}^{\dagger}_{k\alpha\sigma}\hat{c}_{k\alpha\sigma} + g\sum_{k\alpha\sigma}\left(\hat{c}^{\dagger}_{k\alpha\sigma}\hat{d}_{\sigma} + \text{H.c.}\right). \quad (1)$$
Here $\hat{d}_{\sigma}$ is an electronic annihilation operator at the impurity, while $\hat{c}_{k\alpha\sigma}$ is an electronic annihilation operator in the reservoirs. $\alpha \in \{L, R\}$ denotes the left and right reservoir, respectively, $\sigma \in \{\uparrow, \downarrow\}$ denotes the spin, and $k$ is an index corresponding to a reservoir level with energy $\epsilon_k$. $g$ describes the coupling strength between the impurity and the reservoirs (taken to be level-independent here). $\hat{H}_{\text{imp}}$ is the local Hamiltonian at the impurity site, and is expressed by
$$\hat{H}_{\text{imp}} = \epsilon_d\sum_{\sigma}\hat{d}^{\dagger}_{\sigma}\hat{d}_{\sigma} + U\,\hat{d}^{\dagger}_{\uparrow}\hat{d}_{\uparrow}\hat{d}^{\dagger}_{\downarrow}\hat{d}_{\downarrow}, \quad (2)$$
where $\epsilon_d$ is the level energy and $U$ is the Coulomb interaction. We concentrate on the particle-hole symmetric point $\epsilon_d = -U/2$ throughout the paper. We further define the impurity level broadening $\Gamma = \pi\rho g^2$ ($\rho$ denoting the density of states of the reservoir). As is customary in the field, $\Gamma$ will be used as the unit of energy. We assume an infinitely sharp cutoff in the reservoirs at a finite bandwidth $D$. It is worth noting that our method can in principle be used for an arbitrary frequency-dependent coupling $\Gamma(\omega)$. We work at zero temperature, and take the chemical potentials of the left and right reservoirs to be $\mu_L = V/2$ and $\mu_R = -V/2$, respectively; $V$ is therefore a bias voltage across the junction. At large $V$, the system is driven beyond the linear response regime and can no longer be described well in equilibrium terms.
B. The Wilson transformation
We will not directly apply the NOM to the Hamiltonian Eq. (1), but instead begin by discretizing the system and mapping it onto a one-dimensional chain with only nearest-neighbor couplings by way of a Wilson transformation. The reason for this is one of numerical efficiency: the NOM works well when each creation and annihilation operator appears in only a few terms of the Hamiltonian. In the transformed Hamiltonian, this is true for operators either at the impurity site or on the Wilson chain. However, in the original Hamiltonian Eq. (1), the operator $\hat{d}_\sigma$ appears in an infinite number of terms of the form $(\hat{c}^{\dagger}_{k\alpha\sigma}\hat{d}_\sigma + \text{H.c.})$, since the infinite reservoirs must be described by an infinite (or at least large) number of $k$ indices. We note that the Wilson transformation, which entails logarithmic discretization, is not a unique choice in this regard: a more general Lanczos transformation allows for arbitrary discretization schemes, and has been successfully employed in performing similar mappings, for example in the context of recent DMRG [41] and configuration interaction [42] solvers for equilibrium impurity models. It should also be mentioned that the Wilson transformation was employed within the time-dependent numerical renormalization group (tNRG) method to access the real time dynamics of quantum impurity models coupled to both a single bath [43] and multiple baths [63]. To proceed, it is useful to recombine the field operators in the two reservoirs into symmetric and antisymmetric pairs, $\hat{c}_{k\pm\sigma} = \frac{1}{\sqrt{2}}\left(\hat{c}_{kL\sigma} \pm \hat{c}_{kR\sigma}\right)$, so that the Hamiltonian splits into a symmetric part (which contains the impurity and the symmetric band) and an antisymmetric part (Eqs. (3) and (4)). The symmetric Hamiltonian describes an impurity coupled to a single band, and can be transformed into a Wilson chain by a logarithmic discretization of the band. Following Ref. [36], we then obtain the chain Hamiltonian of Eq. (5), where $\hat{d}_{n\sigma}$ with $n = 0, 1, \cdots$ denotes the field operator on the Wilson chain, $\Gamma$ the impurity level broadening, $D$ the bandwidth of the reservoir, and $t_n$ the coupling between neighboring sites on the chain. For constant $\Gamma$, an analytical expression for the coupling strength can be obtained (Eq. (6)), where $\Lambda > 1$ is a discretization parameter. Finally, the full Hamiltonian consists of the combination of the antisymmetric part and the Wilson chain, which commute with each other; it can be expressed as $\hat{H} = \hat{H}_+ + \hat{H}_-$ (Eq. (7)). In the limit $\Lambda \to 1$, the transformed Hamiltonian Eq. (7) is equivalent to the original Hamiltonian Eq. (1). We can therefore fully eliminate the error caused by the Wilson transformation by taking the limit $\Lambda \to 1$. It has been argued [76] that the Wilson chain is not a thermal reservoir due to a finite heat capacity, which scales as $1/\ln\Lambda$ in the limit $\Lambda \to 1$, such that the temperature of the Wilson chain is not fixed in a transport setup. However, the dissipated energy in the setup within a finite time is also finite, such that the chains simulate true reservoirs for any given finite time if $\Lambda$ is close enough to 1. Due to the fact that the NOM operates on a set of truncated Heisenberg-picture operators, the length of the Wilson chain does not significantly impact the computational scaling and can essentially be taken to infinity (see Fig. 1 and the corresponding discussion there).
It can therefore be expected that studying the dynamics up to some finite timescale remains valid even after a Wilson transformation with Λ > 1. In practice, for the timescales explored here, we find that setting Λ = 1.2 is sufficient for converging the discretization error. We have verified that further reducing Λ to 1.02 does not significantly modify the results; we further note that within standard tDMRG this is generally difficult to achieve and often values of Λ ≃ 2 are used [76].
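For readers who want a feel for this Λ-dependence, the snippet below evaluates the textbook logarithmic-discretization hopping amplitudes of a Wilson chain for a flat band (the standard expression from the NRG literature; the normalization convention of the paper's Eq. (6) may differ) and shows how the couplings become nearly uniform deep in the chain as Λ approaches 1.

```python
import numpy as np

def wilson_hoppings(Lambda, D=1.0, n_max=20):
    """Hopping amplitudes t_n of a Wilson chain for a flat band of half-width D.

    Standard logarithmic-discretization expression from the NRG literature;
    the paper's Eq. (6) may use a different normalization convention.
    """
    n = np.arange(n_max)
    return (D * 0.5 * (1 + 1 / Lambda) * (1 - Lambda**(-(n + 1)))
            / np.sqrt((1 - Lambda**(-(2 * n + 1))) * (1 - Lambda**(-(2 * n + 3))))
            * Lambda**(-n / 2.0))

for Lam in (2.0, 1.2, 1.02):
    t = wilson_hoppings(Lam)
    # For Lambda -> 1 the couplings deep in the chain approach the uniform value D/2,
    # i.e. the discretized chain approaches the continuum band.
    print(f"Lambda={Lam}: t_0={t[0]:.4f}, t_5={t[5]:.4f}, t_19={t[19]:.4f}")
```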
A. The current operator
We study the current $I(t) \equiv \langle\hat{I}(t)\rangle$ through the impurity at a finite bias voltage $V$. The current operator $\hat{I}$ (Eq. (8)) is written in terms of the antisymmetric field operators $\hat{c}_{k-\sigma}(t)$, in order to express the difference between the currents as measured in the left and right reservoirs. We assume that the reservoirs and the impurity site are initially decoupled from each other. The two reservoirs begin in their respective equilibrium states as determined by the Fermi distribution $f_\alpha(\epsilon_k) = \theta(\mu_\alpha - \epsilon_k)$, while the impurity site is empty. We switch on the coupling $g$ at $t = 0$ and track the time evolution of the current. Since $[\hat{H}_+, \hat{c}_{k-\sigma}] = 0$, it is straightforward to find that $\hat{c}_{k-\sigma}(t) = e^{-i\epsilon_k t}\,\hat{c}_{k-\sigma}$. We therefore introduce a new field operator $\hat{c}_{-\sigma} = \frac{1}{\sqrt{\rho}}\sum_k e^{-i\epsilon_k t}\,\hat{c}_{k-\sigma}$ and re-express the current accordingly (Eq. (9)). The problem is therefore reduced to the calculation of $\hat{d}^{\dagger}_{\sigma}(t)$, which will be addressed by computational means in the following subsection.
B. Iterative solution of the Heisenberg equation of motion
To obtain the time dependence of $\hat{d}^{\dagger}_{\sigma}$, we solve the Heisenberg equation of motion
$$\frac{d}{dt}\hat{d}^{\dagger}_{\sigma}(t) = i\left[\hat{H}, \hat{d}^{\dagger}_{\sigma}(t)\right]. \quad (10)$$
Since the antisymmetric Hamiltonian $\hat{H}_-$ commutes with $\hat{d}^{\dagger}_{\sigma}$, this becomes
$$\frac{d}{dt}\hat{d}^{\dagger}_{\sigma}(t) = i\left[\hat{H}_+, \hat{d}^{\dagger}_{\sigma}(t)\right]. \quad (11)$$
The symmetric Hamiltonian $\hat{H}_+$ describes a semi-infinite Wilson chain with the impurity site as its first site. To simplify the notation, we relabel the impurity site index as $-1$, such that the impurity annihilation operator is $\hat{d}_{-1,\sigma} \equiv \hat{d}_{\sigma}$. The symmetric Hamiltonian is then
$$\hat{H}_+ = \hat{H}_{\text{imp}} + \sum_{\sigma}\sum_{n=-1}^{\infty} t_n\left(\hat{d}^{\dagger}_{n\sigma}\hat{d}_{n+1,\sigma} + \text{H.c.}\right), \quad (12)$$
where $t_{-1} \equiv \sqrt{\Gamma D/\pi}$. In order to express $\hat{d}^{\dagger}_{-1,\sigma}(t)$, we construct a set of basis operators. At each site $j = -1, 0, 1, \ldots$, we choose some linearly independent set of sixteen local operators generated by $\hat{d}^{\dagger}_{j\sigma}$ and $\hat{d}_{j\sigma}$ and including the unit operator. These are denoted by $\hat{\omega}^{i}_{j}$ for $i \in \{1, 2, \ldots, 16\}$, and every operator acting only on site $j$ can be expressed as a linear combination of the sixteen $\hat{\omega}^{i}_{j}$. We then propose that a basis operator $\hat{O}_{\alpha}$ in the full symmetric subspace be expressed as the product of on-site operators in an ascending order:
$$\hat{O}_{\alpha} = \hat{\omega}^{\alpha_{-1}}_{-1}\,\hat{\omega}^{\alpha_{0}}_{0}\,\hat{\omega}^{\alpha_{1}}_{1}\cdots. \quad (13)$$
Here $\alpha$ is an aggregate index representing a vector $\alpha_j$ with $j \in \{-1, 0, 1, \ldots\}$, which identifies the basis element in the full operator Hilbert space. The basis $\{\hat{O}_{\alpha}\}$ is complete in the sense that any operator can be decomposed as a linear combination of its members. Additionally, the coefficients of this decomposition are unique. In the basis just described, the solution of the Heisenberg equation Eq. (11) can be written as
$$\hat{d}^{\dagger}_{\sigma}(t) = \sum_{\alpha} a_{\alpha}(t)\,\hat{O}_{\alpha}. \quad (14)$$
In tDMRG terms, one might say that our operator is represented by a sum of terms of bond order 1, which is clearly very different from a matrix product operator. To obtain the coefficient $a_{\alpha}(t)$ corresponding to each basis operator $\hat{O}_{\alpha}$, we derive an iterative equation by propagating from time $t$ to time $t+\Delta t$ (where $\Delta t$ is some small time interval) using the forward Euler method:
$$\hat{d}^{\dagger}_{\sigma}(t+\Delta t) = \hat{d}^{\dagger}_{\sigma}(t) + i\Delta t\left[\hat{H}_+, \hat{d}^{\dagger}_{\sigma}(t)\right]. \quad (15)$$
We throw out terms of $O(\Delta t^2)$ and above, a valid approximation in the limit $\Delta t \to 0$. Next, substituting Eq. (14) into Eq. (15), we obtain
$$\hat{d}^{\dagger}_{\sigma}(t+\Delta t) = \sum_{\alpha} a_{\alpha}(t)\left(\hat{O}_{\alpha} + i\Delta t\left[\hat{H}_+, \hat{O}_{\alpha}\right]\right). \quad (16)$$
Calculation of the commutator $[\hat{H}_+, \hat{O}_{\alpha}]$ is trivial and easily computerized. The important point is now that at every stage of the computation, each $\hat{O}_{\alpha}$ appearing in the expansion with a nonzero coefficient can be written as the product of a finite number of (non-unit) local operators. Meanwhile, each term in the Wilson Hamiltonian Eq. (12) involves at most four local operators which act either at the same site or at two adjacent sites. Therefore, even though the Hamiltonian $\hat{H}_+$ contains an infinite number of terms, the commutator $[\hat{H}_+, \hat{O}_{\alpha}]$ is always finite in length for any finite $\hat{O}_{\alpha}$. In fact, $[\hat{H}_+, \hat{O}_{\alpha}]$ generates only very few terms when $\hat{O}_{\alpha}$ is short, as is the case when the propagation time $t$ is not too large. That the commutator between the Hamiltonian and the basis operator contains a finite number of terms is a necessary condition in order for the NOM to be applicable. This condition can generally be satisfied for lattice models with only short-ranged interactions, but will obviously work best on low-dimensional lattices, which have a smaller coordination number.
Let us write $[\hat{H}_+, \hat{O}_{\alpha}] = \sum_{\alpha'} h_{\alpha,\alpha'}\hat{O}_{\alpha'}$ and substitute this into Eq. (16). We get
$$\hat{d}^{\dagger}_{\sigma}(t+\Delta t) = \sum_{\alpha'}\Big(a_{\alpha'}(t) + i\Delta t\sum_{\alpha} h_{\alpha,\alpha'}\,a_{\alpha}(t)\Big)\hat{O}_{\alpha'}. \quad (17)$$
Noticing that $\hat{d}^{\dagger}_{\sigma}(t+\Delta t) = \sum_{\alpha} a_{\alpha}(t+\Delta t)\hat{O}_{\alpha}$ according to Eq. (14), and that the expression of $\hat{d}^{\dagger}_{\sigma}(t+\Delta t)$ is unique due to our definition of basis operators, we finally obtain the recurrence relation
$$a_{\alpha'}(t+\Delta t) = a_{\alpha'}(t) + i\Delta t\sum_{\alpha} h_{\alpha,\alpha'}\,a_{\alpha}(t). \quad (18)$$
By using Eq. (18), we can in principle obtain the coefficients $a_{\alpha}(t)$ at arbitrary times by an iterative procedure: we begin from an input $a_{\alpha}(t=0)$ which depends on the operator we wish to evaluate, and advance by a sequence of time steps of size $\Delta t$.
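As a concrete, if deliberately tiny, illustration of the update rule Eq. (18), the sketch below propagates a Heisenberg operator for a single spin-1/2 with $\hat{H} = \tfrac{\omega}{2}\sigma_z$, using a sparse dictionary of expansion coefficients over the basis operators $\{\mathbb{1}, \sigma_x, \sigma_y, \sigma_z\}$ and a forward Euler step. This is our own toy demonstration of the bookkeeping, not the fermionic impurity implementation of the paper; there the commutator expansion and the sixteen-operator local basis are considerably more involved.

```python
import numpy as np

omega, dt, n_steps, M = 1.0, 1e-3, 500, 10

# Expansion coefficients h_{alpha,alpha'} of [H, O_alpha] in the operator basis,
# for H = (omega/2) * sigma_z:  [H, sx] = i*omega*sy,  [H, sy] = -i*omega*sx.
h = {
    "sx": {"sy": 1j * omega},
    "sy": {"sx": -1j * omega},
    "sz": {},
    "id": {},
}

# Start from the observable of interest: sigma_x itself is a basis operator at t = 0.
a = {"sx": 1.0 + 0.0j}

for _ in range(n_steps):
    new_a = dict(a)
    # Forward Euler step, Eq. (18): a'(t+dt) = a'(t) + i*dt * sum_alpha h_{alpha,alpha'} a_alpha(t)
    for alpha, coeff in a.items():
        for alpha_prime, h_val in h[alpha].items():
            new_a[alpha_prime] = new_a.get(alpha_prime, 0.0) + 1j * dt * h_val * coeff
    # Truncation: keep only the M coefficients with the largest weight
    # (here simply |a|; the paper uses an observable-adapted weight, Eq. (21)).
    a = dict(sorted(new_a.items(), key=lambda kv: abs(kv[1]), reverse=True)[:M])

t = n_steps * dt
# Exact Heisenberg evolution: sigma_x(t) = cos(omega t) sigma_x - sin(omega t) sigma_y.
print(a.get("sx", 0).real, np.cos(omega * t))   # close to  0.8776
print(a.get("sy", 0).real, -np.sin(omega * t))  # close to -0.4794
```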
We must store all nonzero coefficients $a_{\alpha}(t)$ along with their corresponding $\hat{O}_{\alpha}$ at time $t$ in order to compute $a_{\alpha}(t+\Delta t)$. This demands that the number of nonzero coefficients remain manageable. Fortunately, at $t = 0$ only a single nonzero coefficient is needed to express $\hat{d}^{\dagger}_{\sigma}$ (assuming that we choose $\hat{d}^{\dagger}_{j\sigma}$ as one of our $\hat{\omega}^{i}_{j}$). In a geometric picture, one could say that our target $\hat{d}^{\dagger}_{\sigma}(t)$ begins at $t = 0$ exactly on an axis of the operator space, and the superoperator $[\hat{H}_+, \cdot\,]$ acting on $\hat{d}^{\dagger}_{\sigma}$ is quite inefficient at generating new nonzero dimensions. This property guarantees that our algorithm is extremely fast at short times. This property can only be exploited within the Heisenberg picture: in the Schrödinger picture, the operator $\hat{H}_+$ acting on a unit basis vector (which is neither an eigenstate of the Hamiltonian nor extremely local) will immediately generate an infinite number of additional terms, stemming from the infinite number of terms in $\hat{H}_+$. This difference between the two pictures is what makes the Heisenberg picture a far more efficient one for studying real time dynamics within the NOM. This is understandable, since in the Heisenberg picture we restrict our focus to a single observable at a time, while in the Schrödinger picture obtaining the quantum state is equivalent to obtaining all possible observables and should in general be harder.
C. The truncation scheme
While the number of nonzero $a_{\alpha}(t)$ is always finite, it also increases exponentially as the operator is propagated to longer times. To keep the iterative process feasible, we must limit the number of pairs $\left(a_{\alpha}, \hat{O}_{\alpha}\right)$ that are stored in memory. When the number of stored pairs exceeds a given value (which we will denote by $M$), we perform a truncation and throw out some number of the least important pairs. The determination of the relative significance of the $\left(a_{\alpha}, \hat{O}_{\alpha}\right)$ at each step, i.e., the truncation scheme, is critical: for our purposes, a good scheme is needed to allow the algorithm to accurately describe $I(t)$ at long times.
To proceed in deriving an optimal truncation scheme for the current, let us substitute the expansion for $\hat{d}^{\dagger}_{\sigma}(t)$ into Eq. (9) to obtain an expression for the current in terms of the coefficients, Eq. (19). In considering this equation, an immediate and naive idea might be to relate the magnitude of $\mathrm{Im}\left[a_{\alpha}(t)\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle\right]$ to the significance of $\left(a_{\alpha}, \hat{O}_{\alpha}\right)$. However, this idea fails, since it leads to an underestimation of the importance of $|a_{\alpha}(t)|$, whose significance is inherited by later time steps. To see this, one should consider the fact that in the iterative relation Eq. (18), a coefficient $a_{\alpha}(t)$ with a large magnitude also makes an important contribution to $a_{\alpha'}(t+\Delta t)$. On the other hand, a small coefficient $|a_{\alpha}(t)| \sim 0$ can generally be thrown out, since its contribution to $a_{\alpha'}(t+\Delta t)$ is bounded as in Eq. (20), where the second term vanishes in the limit $\Delta t \to 0$ (since $|h_{\alpha,\alpha'}|$ is bounded). In other words, if $|a_{\alpha}(t)|$ is very small, throwing out $\left(a_{\alpha}, \hat{O}_{\alpha}\right)$ has no effect on the current at the subsequent time. However, a coefficient with small $\mathrm{Im}\left[a_{\alpha}(t)\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle\right]$ cannot be safely thrown out, since it may have a significant contribution to $I(t+\Delta t)$ even though its contribution to $I(t)$ is essentially zero.
With this in mind, it is clear that using $|a_{\alpha}(t)|$ as our measure of significance is reasonable. However, it also has the disadvantage of not being optimized specifically for the current. Therefore, it may be better to give weight to the contributions of both $|a_{\alpha}(t)|$ and $\mathrm{Im}\left[a_{\alpha}(t)\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle\right]$. At short times, it is easy to see that the value of $|a_{\alpha}(t)|$ fluctuates strongly with different $\alpha$ (for instance, consider $|a_{\alpha}(0)|$), such that $|a_{\alpha}(t)|$ is a more important measure than $\mathrm{Im}\left[a_{\alpha}(t)\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle\right]$. At long times, however, we find that the values of the contributing $|a_{\alpha}(t)|$ at different $\alpha$ are of similar size, a fact which may be understood as a kind of thermalization of $\hat{d}^{\dagger}_{\sigma}(t)$ in the operator space. This leads to $\mathrm{Im}\left[a_{\alpha}(t)\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle\right]$ being more important at long times. In practice, we have used the weight function of Eq. (21), which combines both measures; here $\gamma, \beta > 0$ are numerical parameters, and the choice of their values (which should affect the performance of the algorithm but not the physical result) is decided empirically. After each time step, we arrange the current set of stored pairs $\left(a_{\alpha}, \hat{O}_{\alpha}\right)$ in descending order according to their respective weights $W_{\alpha}(t)$, and keep the $M$ pairs with the largest $W_{\alpha}$. The remaining pairs are discarded. Our experience suggests that this truncation scheme performs far better at long times than the more general alternative truncation scheme $W_{\alpha}(t) = |a_{\alpha}(t)|$, which is not specifically tailored to the current.
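The bookkeeping of such a truncation step is simple; the sketch below (our own illustration) keeps the M pairs with the largest weight. Since Eq. (21) is not reproduced here, the weight used in the sketch, a plain sum of the coefficient magnitude and the magnitude of the term's contribution to the current, is only a stand-in for the paper's observable-adapted weight function.

```python
def truncate(coeffs, contribution, M):
    """Keep the M entries of `coeffs` with the largest weight.

    coeffs:       dict mapping a basis-operator key to its coefficient a_alpha(t)
    contribution: dict mapping the same keys to Im[a_alpha(t) <O_alpha c>], i.e. the
                  term's contribution to the current
    The weight below is a stand-in: the paper's Eq. (21) combines the two measures
    through empirical parameters gamma and beta.
    """
    def weight(key):
        return abs(coeffs[key]) + abs(contribution.get(key, 0.0))

    kept = sorted(coeffs, key=weight, reverse=True)[:M]
    return {key: coeffs[key] for key in kept}

# Toy usage: four stored terms, keep the two most significant ones.
a = {"op1": 0.9, "op2": 1e-6, "op3": 0.2, "op4": 1e-7}
contrib = {"op1": 0.01, "op3": 0.5, "op2": 0.0, "op4": 0.0}
print(truncate(a, contrib, M=2))   # -> {'op1': 0.9, 'op3': 0.2}
```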
D. Evaluating the expectation value
To obtain the current as expressed in Eq. (19), we need to calculate the expectation value $\langle\hat{O}_{\alpha}\hat{c}_{-\sigma}\rangle$ with respect to the initial state. According to the definition of the basis operators, this requires evaluating expectation values of the form given in Eq. (22). Since each $\hat{\omega}^{\alpha_j}_{j}$ is a product of second-quantization operators, Eq. (22) can be expanded using Wick's theorem, and we are able to calculate the contraction of an arbitrary pair of field operators.
There are two kinds of nonzero contractions in Eq. (22), namely $\langle\hat{d}^{\dagger}_{n\sigma}\hat{d}_{n'\sigma}\rangle$ and $\langle\hat{d}^{\dagger}_{n\sigma}\hat{c}_{-\sigma}\rangle$. Using the initial condition of subsection III A, a tedious but straightforward calculation gives closed expressions for these contractions in terms of the energies and widths of the discretized band intervals and of $u_{nm}$, a set of orthogonal coefficients in the Wilson transformation; $u_{nm}$ is generated by a recurrence relation with the initial conditions $u_{0m} = 1$. There are four numerical (as opposed to physical) parameters in the algorithm: the time interval $\Delta t$, the maximum number of stored coefficients $M$, and the two parameters $\gamma$ and $\beta$ which define the truncation weight function $W_{\alpha}$. Our algorithm becomes numerically exact in the limit $M \to \infty$ and $\Delta t \to 0$, regardless of $\gamma$ and $\beta$. To obtain convergence, we start from an initial guess $(\Delta t, M)$ for these values and calculate the current $I(t)_{(\Delta t, M)}$. We then set $\Delta t \to \Delta t/2$ and $M \to 2M$ and repeat the calculation to obtain $I(t)_{(\Delta t/2, 2M)}$. The difference $I(t)_{(\Delta t, M)} - I(t)_{(\Delta t/2, 2M)}$ provides us with an approximate estimate of the error, and we can now iterate this procedure until the error is small enough for our requirements, at which point we say that convergence is reached.
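The convergence procedure described above is a straightforward loop; a schematic version is given below. The function `compute_current` is a placeholder standing in for a full NOM propagation at the given Δt and M; here it is replaced by a synthetic model with discretization- and truncation-like error terms purely so that the loop is runnable.

```python
import numpy as np

t_grid = np.linspace(0.0, 2.5, 51)   # times (in units of 1/Gamma) at which I(t) is wanted

def compute_current(dt, M):
    """Placeholder for a full NOM run: returns I(t) on t_grid for the given dt and M.

    The synthetic error terms ~dt and ~1/M only mimic the behaviour of the real
    calculation; in practice this function would perform the operator propagation.
    """
    exact = 0.5 * (1.0 - np.exp(-2.0 * t_grid))
    return exact + 0.3 * dt * t_grid + 0.5 * t_grid / M

dt, M, tol = 0.032, 10000, 1e-3
current = compute_current(dt, M)
while True:
    dt, M = dt / 2.0, 2 * M                    # refine both numerical parameters
    refined = compute_current(dt, M)
    error = np.max(np.abs(refined - current))  # difference between successive refinements
    current = refined
    if error < tol:                            # converged to the requested accuracy
        break
print(f"converged with dt={dt}, M={M}, estimated error={error:.2e}")
```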
This concludes our discussion of the algorithm. In the next section, we will present several examples and comparisons with other methods.
A. Numerical parameters and convergence
We set the bandwidth of the reservoirs to D = 20Γ and calculate I (t) at the particle-hole symmetric point ǫ d = −U/2 for different interaction strengths U and bias voltages V . The choice of the truncation parameters γ and β is empirical. We have tried different values in order to minimize the error of I (t) at a given time for given ∆t and M , and found that γ = 2 and β = 3 is a good choice for a wide range of parameters. With the truncation scheme we have proposed, computation time is efficiently reduced to a manageable level: in practice we find that M = 40000 and ∆t = 0.008/Γ provide a good estimate of I (t) for the parameters treated in this paper. In general, however, we find that to obtain accurate results at longer times a larger M is required, such that the computational scaling in the propagation time t is in practice substantially more than linear. In the general case, this is of course a universal problem in all numerically exact methods. We briefly mention that under certain conditions it can be overcome by reduced dynamics techniques, 56,59,60 but this is beyond the scope of this paper.
An important problem that has been mentioned in Sec. II is that the Wilson chain is not a thermal reservoir at Λ > 1. We circumvent this issue by converging the data towards the limit Λ → 1 at finite times. In Fig. 1, we present the results at different Λ for different interaction strengths and bias voltages. We find that Λ = 1.2 and Λ = 1.02 give comparable results, indicating that the data is converged.
B. Weak and intermediate interactions
Having established that the NOM converges to a welldefined answer, we now continue to argue that this answer is correct. This will be done by performing a set comparisons with trustworthy analytical and numerical results. We begin at the limit of weak interaction. For noninteracting (quadratic) systems, the NOM has previously been discussed in the literature and was shown to agree perfectly with the results of exact diagonalization. 77 We therefore begin with weakly interacting systems at interaction strength U = 0.01Γ, where second order perturbation theory in U may be expected to work well.
In Fig. 2 we present a comparison between the NOM at a bandwidth of D = 20Γ and perturbation theory. The perturbation theory data comes from ref. 18, where the flow equation technique (which is equivalent at steady state to the Keldysh technique of ref. 78) was applied to the Anderson impurity model at the wide band limit D → ∞. As expected, the two data sets converge at low voltages, e.g., at V = 0.5Γ and V = Γ. We also find moderate deviation of our results from the perturbation theory at high bias voltage, where one might expect the bandwidth to be more important. In the presence of interactions, the bandwidth affects the current even when the transport window is narrower than the bandwidth (V < D): due to inelastic scattering effects, levels in the bands are effectively mixed.
In Fig. 3, we further compare our method with perturbation theory at an intermediate interaction strength of U = 4Γ . Interestingly, the numerical curves continue to fit well with the perturbative theory at low bias voltages, but deviate from it qualitatively at high bias voltage. This suggests that the second order perturbation theory still works well at U = 4Γ and low voltage. At higher bias voltages the deviation may be attributed to either the failure of the perturbative approximation or the difference in bandwidth. The inelastic scattering at U = 4Γ is stronger than that at U = 0.01Γ, such that the finite D affects the current to a greater degree.
C. Strong interactions
Finally, we study the current I(t) within the strong coupling regime by considering U = 8Γ. In nonequilibrium, that is, beyond linear response in the voltage, exact analytical results are not available. Linear response is thought to be valid for voltages approximately limited by the Kondo temperature $T_K \sim e^{-U/\Gamma}$, and therefore quite small. We compare with numerically exact QMC results at a low but finite temperature of T = 0.1Γ, as opposed to T = 0 in the NOM. The short time behavior is accessible with continuous time quantum Monte Carlo techniques [39,51], at least for temperatures which are not too low. Newer bold-line techniques [55,79] allow access to longer times and lower temperatures. We have compared the QMC results with similar results at T = 0.2Γ (not shown) and verified that within the timescales accessed here, the effect of the finite temperature is small. Also, within QMC we take soft band edges with a width of 0.1Γ, as in Ref. [51]. The QMC simulations are otherwise performed at the same parameters as the NOM.
The current I(t) at different bias voltages up to Γt = 2.5, as calculated by both methods, is displayed in Fig. 4. The results are consistent at the highest voltage V = 5Γ. However, even at V = 5Γ, it is clear that the NOM gives a slightly larger current around the first peak at Γt ∼ 0.5, which can be attributed to the difference in temperature and band shape. At lower voltages the two methods exhibit good agreement at short times, but deviate significantly from each other at longer times. Though steady state is not reached here, the NOM appears to predict a smaller steady state current than QMC. We do not believe that this can be explained by the difference in temperatures, since from the previously mentioned check we find that reducing the temperature in QMC leads to the opposite trend. QMC studies similarly suggest that the band cutoff width ν is relatively unimportant at these parameters. The rise in current at low voltages may be associated with the formation of Kondo resonances at the chemical potentials [33], and the failure of the NOM at these parameters indicates a problem either with the Wilson mapping (which implies low resolution at high energies) or with our truncation scheme. In either case, it requires further investigation which will be left to future studies.
V. CONCLUSIONS
In summary, we have developed the numerical operator method, or NOM, and applied it to the study of the real time dynamics of strongly-correlated quantum impurity models in nonequilibrium. This is a notoriously difficult problem to which many techniques have been applied. Our method is distinguished by three important features, which we briefly outline below.
First, in the mapping of the reservoirs onto 1D chains, any discretization scheme is supported. This is similar to DMRG, but differs from NRG; in QMC the issue of mapping onto a 1D chain need not arise. It also allows us to efficiently take the limit Λ → 1 when using the Wilson mapping.
Second, our method revolves around the solution of the Heisenberg equation of motion. We carefully select a basis in the operator space such that the superoperator [Ĥ, . . .] acting on our chosen observable in this basis generates only a small number of terms. This allows us to effectively set the length of the reservoir chains to infinity, thus circumventing the finite-size scaling problems encountered by other (non-QMC) numerical techniques.
Third, we provide a truncation scheme (see Eq. (21)) suited to the characteristics of the solution of the Heisenberg equation and optimized for specific observables. Therefore, our algorithm is extremely fast and accurate in the short time limit, and additionally provides a controllable scheme for obtaining high quality approximations of physical observables at longer times. The downside of this is that our method only addresses a single observable per computation: to calculate an additional observable, the entire time evolution process must be repeated. However, in many cases, only one or very few observables are of interest.
We note that while these features are also shared by tDMRG schemes formulated for matrix product operators, 71 our truncation scheme is different and does not rely on the assumption of low entanglement, which may not be appropriate for nonequilibrium dynamics.
As an example, we apply our method to the real time dynamics of transport through the nonequilibrium Anderson impurity model. We calculate the time dependence of the current in a wide range of interaction strengths and bias voltages going far beyond the linear response regime in both quantities. We show that at small interaction strengths, our results coincide with perturbation theory in the interaction. We further compare our results with QMC data at a large interaction strength for which no analytical method is known to be applicable, and find good agreement as long as Kondo physics does not come into play.
We have therefore established the NOM as a reliable new formalism for exploring nonequilibrium transport properties in impurity models over a wide range of parameters and at zero temperature. We expect to generalize our method to more complicated quantum impurity models in the future and to further analyze its advantages and limitations. In particular, a comparison with tDMRG is in order, and the relative merits of the NOM basis and the truncation scheme as opposed to those of matrix product operator algorithms should be studied in detail. Finally, modifications to DMRG which have allowed for the study of finite temperatures [80-82] and open systems coupled to Markovian baths [83-85] should also be usable within the NOM.
On the Nash Equilibria of a Duel with Terminal Payoffs
We formulate and study a two-player duel game as a terminal payoffs stochastic game. Players P_1, P_2 are standing in place and, in every turn, each may shoot at the other (in other words, abstention is allowed). If P_n shoots P_m (m ≠ n), either they hit and kill them (with probability p_n) or they miss and P_m is unaffected (with probability 1 − p_n). The process continues until at least one player dies; if no player ever dies, the game lasts an infinite number of turns. Each player receives a positive payoff upon killing their opponent and a negative payoff upon being killed. We show that the unique stationary equilibrium is for both players to always shoot at each other. In addition, we show that the game also possesses "cooperative" (i.e., non-shooting) non-stationary equilibria. We also discuss a certain similarity that the duel has to the iterated Prisoner's Dilemma.
Introduction
In this paper, we study a two-player duel game played in turns. Players P_1, P_2 are standing in place and, in each turn, each player may shoot at the other; in other words, abstention is allowed. If P_n shoots at P_m (m ≠ n), either they hit and kill them or they miss and P_m is unaffected; the respective probabilities are p_n (P_n's marksmanship) and 1 − p_n. The process continues until at least one player dies; if no player ever dies then the game lasts an infinite number of turns. We formulate the above as a stochastic game with terminal payoffs. The precise game rules and players' payoffs will be presented in Section 2.
Little work has been done on the duel. In fact, to the best of our knowledge, it has only been studied as a preliminary step in the study of the "truel", in which three stationary players shoot at each other. In early works on the truel [1-4], the postulated game rules guarantee the existence of exactly one survivor ("winner"). In an important early paper [5], the somewhat paradoxical result of "survival of the weakest" is established; namely, for certain marksmanship combinations, the player with the lowest marksmanship has the highest probability of survival. A more general analysis appears in a further study [6], which considers the possibility of "cooperation" between the players, in the sense that each player has the option of abstaining, i.e., not shooting at their opponent in one or more turns of the game. This idea is further studied by Kilgour for the simultaneous truel [7] and for the sequential truel [8,9]. These papers are, to the best of our knowledge, the first to address the truel problem using a rigorous game theoretic analysis. Kilgour formulates both the simultaneous and sequential truel as stochastic games with terminal payoffs (i.e., the players receive a single payoff at the end of the game) and obtains Nash equilibria, under appropriate conditions. A similar analysis appears in a further study [10], where, however, the truel is formulated as a discounted stochastic game. Recent papers on the truel include: Refs. [11-14], where, among other innovations, the truel is formulated as an extensive form game; Refs. [15-18], where a Markov chain formulation of several truel variants is presented; and Refs. [19-21], in which truels among N players are studied, with each player being represented by a node in a scale-free network. Several applications of the duel and, more frequently, of the truel have been proposed in the above literature. The truel has been used to model behavior in confrontation situations [25] and in political conflicts [26]. A truel variant has been used as a model of opinion dissemination [17]. Business applications have been presented in a further study [27], in which it is shown that, under certain conditions, weaker companies can grow stronger and stronger companies can grow weaker, with all the parties eventually converging. In legal studies, the truel has been used to explore equality issues [28]. Last but not least, the nuel (an N-person generalization of the duel and truel) has been used in biology to explain the maintenance of variation in natural populations [29] and to study marriage and reproduction mechanisms [30]. Furthermore, the truel is relevant to the existence of "suicidal strategies" employed by cells and bacteria [31,32].
A common characteristic of all the above-mentioned works is that they limit themselves to the study of stationary strategies. As we will show in the current paper, the duel also possesses Nash equilibria in non-stationary strategies and it is safe to assume that the same is true of the truel and the nuel (the N-player generalization of the duel and truel).
While the above papers focus on various forms of the truel, we believe that the duel is interesting in its own right and has not received the attention it deserves. In particular we will show that, under our formulation, the duel has a certain similarity to the iterated Prisoner's Dilemma (IPD) and possesses "cooperative" Nash equilibria in non-stationary strategies.
In this paper, we study two versions of the duel with terminal payoffs. The rest of the paper is structured as follows. In Section 2, we define the game rigorously. In Section 3, we establish the existence of equilibria in stationary strategies. In Section 4, we discuss some similarities between the game and the IPD. In Section 5, we prove that the duel also has equilibria in non-stationary strategies (namely grim cooperation and Tit-for-Tat). In Section 6, we summarize our results and propose some future research directions.
The Game

Starting from the initial state 11 (both players alive), one of the following two things can happen:
1. The game stays in state 11 ad infinitum (no player is ever killed);
2. At some t the game moves to a state s(t) ∈ {10, 01, 00} (one or both players are killed). These are terminal states, i.e., as soon as they are reached, the game terminates.
When the game reaches a terminal state s, P_n (n ∈ {1, 2}) receives payoff q_n(s) as follows: P_n receives a_n if only their opponent has been killed, −b_n if only P_n has been killed, and a_n − b_n if both players have been killed (so, e.g., q_1(10) = a_1, q_1(01) = −b_1, q_1(00) = a_1 − b_1), where we assume that for n ∈ {1, 2}, a_n > 0 and b_n > 0. We set a = (a_1, a_2) and b = (b_1, b_2).
A finite history is an h = s(0)f(1)s(1)...f(T)s(T); a non-terminal finite history is an h = s(0)f(1)s(1)...f(T)s(T) where s(T) = 11; and an infinite history is an h = s(0)f(1)s(1).... An admissible history is one which conforms to the game rules; the set of all admissible finite (resp. infinite) histories is denoted by H* (resp. H∞); H* denotes the set of all non-terminal finite histories. The set of all histories is H = H* ∪ H∞. It will be useful to define the payoff as a function Q_n : H → R, with Q_n(h) = q_n(s(T)) when h is a finite history terminating in state s(T), and Q_n(h) = 0 when h is infinite. Note that if the game never terminates, both players receive zero payoff.
A strategy for P n is a function σ n : H * → [0, 1]; it corresponds to, for every non-terminal finite history h, the probability that, given that the current history is h, P n will shoot P −n : σ n (h) = Pr("P n shoots P −n ").
A stationary strategy is a σ n depending only on the current state s, hence we simply write σ n (s).Since a stationary strategy σ n depends only on the current state, it is fully determined by the values σ n (s) for s ∈ {00, 01, 10, 11}, i.e., from σ n (00), σ n (01), σ n (10), σ n (11).
But any admissible strategy (i.e., one compatible with the game rules) has its values at the terminal states fixed, since no action is taken there. Consequently, a stationary strategy is determined by a single number x_n = σ_n(11).
A strategy profile is a vector σ = (σ 1 , σ 2 ).We denote the set of all admissible strategies by Σ and the set of all admissible stationary strategies by Σ.
An initial state s(0) and two strategies σ_1 and σ_2 (used, respectively, by P_1 and P_2) determine a probability measure on the set of all histories; hence we can define, for each n ∈ {1, 2}, the expected payoff V_n(σ_1, σ_2) as the expectation of Q_n under this measure. We have, thus, formulated the terminal payoffs duel as a game. We are interested in the game that starts at s(0) = 11, which we will denote by Γ(p, a, b). We assume that P_1 and P_2 are looking for a Nash equilibrium (NE), i.e., a strategy profile from which neither player can profitably deviate unilaterally.
Stationary Equilibria
As already noted, an admissible stationary strategy σ_1 for P_1 is fully determined by x_1 = σ_1(11) = Pr(P_1 shoots P_2); i.e., σ_1 is determined by a single variable x_1 ∈ [0, 1]. Similarly, every admissible stationary strategy σ_2 for P_2 is fully determined by a single variable x_2 ∈ [0, 1]. Hence, we will often speak of the strategy x_n (rather than σ_n) and the strategy profile (x_1, x_2) (rather than (σ_1, σ_2)). When P_1 and P_2 use strategies x_1 and x_2, the state sequence is a Markov chain, and using the previous numbering of states we can write down its transition probability matrix. If (x_1, x_2) ≠ (0, 0) then we have the following equation for V_1 (temporarily omitting the x_1, x_2 arguments for brevity of notation):

V_1 = x_1 p_1 (1 − x_2 p_2) a_1 + x_2 p_2 (1 − x_1 p_1)(−b_1) + x_1 p_1 x_2 p_2 (a_1 − b_1) + (1 − x_1 p_1)(1 − x_2 p_2) V_1.

The equation is obtained as follows: the expected payoff from state 11 is the sum of four terms:
1. The transition to state 10 gives payoff a_1 and takes place with probability x_1 p_1 (P_1 shot and hit P_2) multiplied by x_2(1 − p_2) + (1 − x_2) (P_2 either shot and missed or did not shoot);
2. The transition to state 01 gives payoff −b_1 and takes place with probability x_2 p_2 (P_2 shot and hit P_1) multiplied by x_1(1 − p_1) + (1 − x_1) (P_1 either shot and missed or did not shoot);
3. The transition to state 00 gives payoff a_1 − b_1 and takes place with probability x_1 p_1 (P_1 shot and hit P_2) multiplied by x_2 p_2 (P_2 shot and hit P_1);
4. The transition to state 11 gives payoff V_1 (it is as if the game starts from the beginning) and takes place with probability (1 − x_1) + x_1(1 − p_1) (P_1 either did not shoot or shot and missed) multiplied by x_2(1 − p_2) + (1 − x_2) (P_2 either shot and missed or did not shoot).
After some algebra, the V_1 equation is simplified and has the following solution:

V_1(x_1, x_2) = (x_1 p_1 a_1 − x_2 p_2 b_1) / (x_1 p_1 + x_2 p_2 − x_1 p_1 x_2 p_2),    (1)

with the analogous expression (2) for V_2(x_1, x_2).
Proposition 1. The unique stationary NE of Γ(p, a, b) is (x_1, x_2) = (1, 1).

Proof. Suppose that P_1 and P_2 use the profile (x_1, x_2). To determine whether this is an NE, from P_1's point of view we have to check whether they have anything to gain by unilaterally deviating to some other strategy σ_1. A crucial fact is that we only have to check whether P_1 gains by switching to another stationary strategy. This is true because, if P_2 uses the stationary strategy x_2, then P_1 must solve a Markov Decision Process problem; it is well known that in this case they gain nothing by using non-stationary strategies [33].
Let us first check whether (0, 0) is a Nash equilibrium. If P_1 deviates to another stationary strategy x_1 > 0, we will have V_1(x_1, 0) = a_1 > 0 = V_1(0, 0). Hence, (0, 0) cannot be an NE. Next, take any (x_1, x_2) ≠ (0, 0) and suppose P_1 deviates to y_1. Then consider the difference V_1(x_1, x_2) − V_1(y_1, x_2). The denominator is positive. The numerator has the sign of x_1 − y_1. Hence, the sign of V_1(x_1, x_2) − V_1(y_1, x_2) is the same as that of x_1 − y_1 and consequently, P_1 never (resp. always) has an incentive to deviate from x_1 to a smaller (resp. greater) y_1. The same arguments can be applied to P_2 and their strategy x_2. It follows that the only stationary NE is (x_1, x_2) = (1, 1) and this completes the proof.
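The closed-form stationary payoff makes this equilibrium easy to check numerically. The sketch below (our own illustration, with arbitrary parameter values) implements V_1 and verifies on a grid that, for any opponent strategy x_2 > 0, player 1's payoff is nondecreasing in x_1, so shooting with probability one is always a best response.

```python
import numpy as np

def V1(x1, x2, p1, p2, a1, b1):
    """Stationary expected payoff of player 1 under the profile (x1, x2)."""
    if x1 == 0 and x2 == 0:
        return 0.0  # nobody ever shoots, so the game never terminates
    num = x1 * p1 * a1 - x2 * p2 * b1
    den = x1 * p1 + x2 * p2 - x1 * p1 * x2 * p2
    return num / den

p1, p2, a1, b1 = 0.6, 0.4, 1.0, 2.0   # arbitrary marksmanships and payoffs
grid = np.linspace(0.0, 1.0, 101)

# For every opponent strategy x2 > 0, V1 should be nondecreasing in x1,
# so x1 = 1 is a best response; hence (1, 1) is the stationary equilibrium.
for x2 in grid[1:]:
    payoffs = [V1(x1, x2, p1, p2, a1, b1) for x1 in grid]
    assert all(payoffs[i + 1] >= payoffs[i] - 1e-12 for i in range(len(grid) - 1))
print("x1 = 1 is a best response against every x2 > 0")
```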
Connection to the Iterated Prisoner's Dilemma
Applying Formulas (1) and (2) to (x_1, x_2) ∈ {(0, 0), (0, 1), (1, 0), (1, 1)}, we get

V_1(0, 0) = 0, V_1(1, 0) = a_1, V_1(0, 1) = −b_1, V_1(1, 1) = (p_1 a_1 − p_2 b_1)/(p_1 + p_2 − p_1 p_2),

and the corresponding expressions for V_2. It can immediately be seen that V_1(1, 0) > V_1(0, 0) > V_1(0, 1) and, if we identify the strategy x_n = 0 (never shooting at the opponent) with "cooperation" and the strategy x_n = 1 (always shooting at the opponent) with "defection", the above inequalities remind us of the Prisoner's Dilemma (PD). The similarity would be complete if additional inequalities also held, because in this case we would have, for each player, the ordering

V_n(D, C) > V_n(C, C) > V_n(D, D) > V_n(C, D),

which corresponds exactly to the well known sequence of PD inequalities [22]. For P_1, the first inequality is equivalent to a_1 > 0, which is always satisfied. The second inequality is V_1(1, 1) < 0, which will be satisfied iff p_1 a_1 < p_2 b_1. The third inequality is always satisfied. Similarly, for P_2 the first and third inequalities always hold, while the second holds iff p_2 a_2 < p_1 b_2. Combining the above, we get the following "PD-like condition"

p_1 a_1 < p_2 b_1 and p_2 a_2 < p_1 b_2,    (5)

which is necessary and sufficient to have the above ordering of the payoffs for both players. In light of this ordering, we will call the never-shooting strategy x_n = 0 (which henceforth will also be denoted by σ_C) the cooperating strategy, and the always-shooting strategy x_n = 1 (which henceforth will also be denoted by σ_D) the defecting strategy. The terminology is inspired by the analogy to the PD. Namely, in both the PD and the duel, both players would have a higher payoff if they adhered to (σ_C, σ_C); but this is not an NE and each player has an incentive to switch to σ_D. Consequently, rational players will follow the strategy profile (σ_D, σ_D), which, while being an NE, yields a lower payoff to both players. As is well known, cooperative NE do exist for the iterated PD, and these involve the use of non-stationary strategies, such as grim-cooperation and Tit-for-Tat (TfT). Hence, in the next section, we will show that there exist corresponding non-stationary cooperative strategies which are NE of Γ(p, a, b).
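A quick numerical illustration of this analogy is given below (our own sketch, with arbitrary parameter values chosen to satisfy the PD-like condition); it evaluates the four pure stationary payoffs for player 1 and confirms the Prisoner's Dilemma ordering.

```python
def V1(x1, x2, p1, p2, a1, b1):
    if x1 == 0 and x2 == 0:
        return 0.0
    return (x1 * p1 * a1 - x2 * p2 * b1) / (x1 * p1 + x2 * p2 - x1 * p1 * x2 * p2)

# Parameters chosen (arbitrarily) so that the PD-like condition p1*a1 < p2*b1 holds.
p1, p2, a1, b1 = 0.3, 0.6, 1.0, 1.5

T = V1(1, 0, p1, p2, a1, b1)   # "temptation": defect against a cooperator -> a1
R = V1(0, 0, p1, p2, a1, b1)   # "reward": mutual cooperation -> 0
P = V1(1, 1, p1, p2, a1, b1)   # "punishment": mutual defection
S = V1(0, 1, p1, p2, a1, b1)   # "sucker": cooperate against a defector -> -b1

print(T, R, P, S)              # e.g. 1.0  0.0  -0.833...  -1.5
assert T > R > P > S           # the Prisoner's Dilemma ordering of this section
```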
Before concluding this section, it is worth discussing in what ways our duel game Γ(p, a, b) differs from the IPD. Three obvious differences are:
1. The IPD is a deterministic game, while Γ(p, a, b) involves randomness;
2. In the IPD, each player receives a payoff in every turn and the total payoff is the discounted (by a discount factor γ) sum of turn payoffs, while in Γ(p, a, b), payoff is obtained only at the final turn and is undiscounted;
3. The IPD will last an infinite number of turns, while Γ(p, a, b) may (depending on the p values and the strategy used) terminate in a finite number of turns (in fact, it may be the case that it will terminate in a finite number of turns with probability one).
However, there is a formulation of the IPD in which the payoffs are not discounted but the game may terminate in every turn with a positive probability p = 1 − γ > 0. In this formulation, the IPD is also a random game and will terminate in a finite number of turns with probability one; the total expected payoff of each player equals the discounted payoff of the deterministic IPD version.
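This equivalence is easy to verify by simulation; the sketch below (our own illustration, with an arbitrary constant per-turn payoff) compares the Monte Carlo average of the undiscounted total payoff under random termination with the geometric series of the discounted version.

```python
import random

gamma = 0.9          # discount factor of the deterministic IPD
r = 3.0              # some constant per-turn payoff (arbitrary choice)
trials = 200_000

# Undiscounted payoffs, but the game ends after each turn with probability 1 - gamma.
total = 0.0
for _ in range(trials):
    payoff = 0.0
    while True:
        payoff += r
        if random.random() > gamma:   # termination with probability 1 - gamma
            break
    total += payoff

print(total / trials)        # Monte Carlo estimate, close to 30
print(r / (1.0 - gamma))     # discounted sum r * (1 + gamma + gamma^2 + ...) = 30
```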
Non-Stationary Equilibria
Drawing upon similar results for the IPD, we will now show that the duel has cooperative NE in non-stationary strategies. The first such strategy we introduce is the grim cooperation strategy σ_G, which is defined as follows for P_n (n ∈ {1, 2}): as long as P_{−n} does not shoot at P_n, P_n never shoots at P_{−n}; if P_{−n} shoots at P_n at round t, then P_n shoots at P_{−n} at all rounds t′ > t.
This strategy was originally used in the analysis of the IPD.
Proposition 2. Under condition (8), (σ_G, σ_G) is an NE of Γ(p, a, b).

Proof. We have V_1(σ_G, σ_G) = V_2(σ_G, σ_G) = 0 since, if both players adhere to σ_G, nobody will ever get killed. Next, let us consider possible P_1 strategies σ_1 deviating from σ_G. It is easy to see that it suffices to consider the strategy σ_D, because, as soon as P_1 deviates from σ_G, P_2 will shoot at P_1 on every turn and hence, P_1 has no incentive not to shoot; furthermore, if P_1 deviates from σ_G, they might as well deviate on the first turn. Now, let us compute V_1(σ_D, σ_G). If P_1 uses σ_D at t = 1, then P_2 will also revert to σ_D at times t ∈ {2, 3, ...}. Hence, P_1's expected payoff will be V_1(σ_D, σ_G) = p_1 a_1 + (1 − p_1)V_1(σ_D, σ_D). By assumption (8), this quantity does not exceed zero. Hence, (9) holds and P_1 has no incentive to deviate from σ_G. By a similar analysis, we can also show that P_2 has no incentive to deviate from σ_G. This completes the proof.
Hence, the conditions (8) are stronger than the originally postulated condition (5) for the existence of a "PD-like" ordering in the duel. Now, we will define another non-stationary cooperative strategy, which will turn out to be an NE of the duel. This is the Tit-for-Tat strategy σ_TfT, defined for P_n (n ∈ {1, 2}) as follows: in the first turn P_n does not shoot P_{-n}; at every other turn P_n performs the same action (shooting or not shooting) that P_{-n} performed in the previous round. This strategy was also originally used in the analysis of the iterated PD.
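To make the two non-stationary strategies concrete, they can be written as decision rules mapping the opponent's observed action history to the current action (1 = shoot, 0 = do not shoot). The following Python sketch is purely illustrative; the function names and the list representation of histories are ours and are not part of the formal model.

    def grim(opponent_history):
        # sigma_G: never shoot until the opponent has shot at least once; afterwards shoot forever.
        return 1 if any(opponent_history) else 0

    def tit_for_tat(opponent_history):
        # sigma_TfT: do not shoot in the first turn; afterwards repeat the opponent's previous action.
        if not opponent_history:
            return 0
        return opponent_history[-1]

    # Example: after the opponent shoots in turn 2 and then stops, grim keeps punishing while TfT relents.
    print(grim([0, 1, 0]))         # -> 1
    print(tit_for_tat([0, 1, 0]))  # -> 0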
Proof. If both players play the strategy σ_TfT, then they never shoot at each other and, for every n ∈ {1, 2}, player P_n receives the payoff of mutual survival. Now, suppose that P_2 adheres to σ_TfT but P_1 deviates. If P_1 gains by deviating from σ_TfT at some turn, then they must also gain by shooting at P_2 in the first turn. If they do so, then P_2 shoots at P_1 for all subsequent turns, until P_1 reverts to not firing. Thus, P_1 has two options after their first deviation.
1. They can continue shooting in all subsequent turns, in which case, so will P_2;
2. They can revert to not shooting, in which case, in the next turn, they are in the same situation as at the start of the game.
Consequently, if P_1 can increase their payoff by deviating, then they can do so either (a) by shooting in every turn, or (b) by alternating between shooting and not shooting. If we find conditions under which P_1 cannot increase their payoff by either of the above strategies, then, under the same conditions, P_1 cannot increase their payoff by deviating, which implies that (σ_TfT, σ_TfT) is an NE.
1. Consider first the case in which P_1 adopts the strategy σ_D of shooting in each turn. Then P_1's payoff can be computed as before and, by the same analysis as in the proof of Proposition 2, we know that this deviation does not increase P_1's payoff.
2. Next consider the case in which P_1 alternates between shooting and not shooting. Then their payoff V_1^S satisfies

V_1^S = p_1 a_1 + (1 − p_1) p_2 (−b_1) + (1 − p_1)(1 − p_2) V_1^S.

The above equation holds because the expected payoff V_1^S is computed by summing the following possibilities. P_1 will certainly shoot and then: (a) with probability p_1, P_1 will kill P_2 and hence receive payoff a_1; (b) with probability 1 − p_1, P_1 will miss (and receive zero payoff) and in the next turn P_2 will shoot and kill P_1; this combination has probability (1 − p_1)p_2 and gives to P_1 payoff −b_1; (c) with probability 1 − p_1, P_1 will miss and in the next turn P_2 will shoot and miss P_1; this combination has probability (1 − p_1)(1 − p_2) and returns the game to the original state, in which P_1 receives payoff V_1^S. Simplifying the above equation and solving, we obtain

V_1^S = (p_1 a_1 − (1 − p_1) p_2 b_1) / (1 − (1 − p_1)(1 − p_2)).

For an NE we must have V_1^C − V_1^S > 0, and this will hold when a corresponding condition on the parameters is satisfied; from our assumption (10), this condition does hold. Combining 1 and 2, we see that P_1 has no advantage in deviating from σ_TfT; by a similar analysis, the same holds for P_2 and hence the proof is completed.
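As a quick numerical sanity check of the algebra above, the following snippet (ours, with arbitrarily chosen parameter values; a_1 is P_1's payoff for killing P_2 and b_1 the penalty for being killed, as in the enumeration of cases) verifies that the closed-form expression satisfies the recursion for V_1^S.

    p1, p2 = 0.4, 0.6   # hit probabilities (example values)
    a1, b1 = 1.0, 2.0   # example payoff parameters for P_1

    # Closed form obtained by solving V = p1*a1 - (1-p1)*p2*b1 + (1-p1)*(1-p2)*V
    V = (p1 * a1 - (1 - p1) * p2 * b1) / (1 - (1 - p1) * (1 - p2))

    # Right-hand side of the recursion evaluated at the closed-form value
    rhs = p1 * a1 + (1 - p1) * p2 * (-b1) + (1 - p1) * (1 - p2) * V

    assert abs(V - rhs) < 1e-12   # the two quantities coincide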
Corollary 1. The duel NE conditions (8) and (10) are the same. In other words, (σ_G, σ_G) is an NE of Γ(p, a, b) iff (σ_TfT, σ_TfT) is an NE of Γ(p, a, b).
Let us compare the stationary and non-stationary NE. Initially, we made no assumption regarding the relative size of a_n and b_n (although we did assume they are both positive). In other words, a_n − b_n may be positive (P_n sets more value in surviving), negative (P_n hates their opponent so much that they value killing them more than surviving) or zero. However, even when a_n > b_n for both n, if the players limit themselves to using stationary strategies, then the only Nash equilibrium consists of both players shooting at each other with probability one (by Proposition 1); for the more desirable outcome of both players surviving to be (another) Nash equilibrium, they must use non-stationary strategies.
Conclusions
We have defined a turn-based duel game with terminal payoffs and shown that it has both stationary and non-stationary Nash equilibria. The non-stationary equilibria that we have established are the grim cooperation and Tit-for-Tat pairs. These are of the same form as the synonymous strategies used in the iterated Prisoner's Dilemma; we were motivated to use these in the duel by the previously explained similarity between the payoff structure of our duel game and that of the IPD.
In addition to their independent interest, the above results have potential application to the truel and nuel problems. As we have pointed out, to the best of our knowledge, the literature on truel and nuel is limited to the study of stationary strategies. In the case of the duel, in addition to stationary NE, we also have non-stationary NE. We reported here two such non-stationary NE ((σ_G, σ_G) and (σ_TfT, σ_TfT)), and it is not hard to construct additional ones, using an approach similar to that used in the study of repeated games [35]. We conjecture that, using the methods of the current paper, it is also possible to establish a plethora of non-stationary NE for the general, N-player nuel; we intend to pursue this research direction in the future.
Several variants of the duel can be formulated and are worth exploring. In addition to the variant described in this paper, we have explored a variant in which each player receives some discounted payoff for every turn in which they stay alive. Including those results (and the techniques required for their proof) would increase the size of the current paper inordinately; hence, they will be reported in a separate publication. Further variants to be explored in the future include:
1. sequential play, in which a single player is allowed to shoot in each turn;
2. random play, in which the player allowed to shoot in each turn is chosen randomly and equi-probably.
In addition, in the future we intend to study the use of non-stationary strategies in truels and nuels.
Institutional Selectivity and Good Practices in Undergraduate Education: How Strong is the Link?
Academic selectivity plays a dominant role in the public's understanding of what constitutes institutional excellence or quality in undergraduate education. In this study, we analyzed two independent data sets to estimate the net effect of three measures of college selectivity on dimensions of documented good practices in undergraduate education. With statistical controls in place for important confounding influences, an institution's median student SAT/ACT score, a nearly identical proxy for that score, and the Barron's Selectivity Score explained from less than 0.1% to 20% of the between-institution variance and from less than 0.1% to 2.7% of the total variance in good practices. The implications of these findings for what constitutes quality in undergraduate education, college choice decisions, and the validity of national college rankings are discussed.
The logic underlying the use of average or median student test scores as a proxy for undergraduate educational quality is not unreasonable. Students are not simply the passive recipients of undergraduate education delivered by a college or university's faculty. Rather, interactions with other students constitute a major dimension of the educational impact of an institution on any one student (e.g., Astin, 1993; Kuh, Schuh, Whitt, & Associates, 1991; Pascarella & Terenzini, 1991; Whitt, Edison, Pascarella, Nora, & Terenzini, 1999). Consequently, the more academically adroit one's peers are, the greater the likelihood of one's being intellectually stimulated and challenged in his or her classroom and nonclassroom interactions with them, or so the argument goes. Similarly, a well-prepared student body may provide faculty with greater latitude to increase academic expectations and demands of students in the classroom and, thereby, even further enhance the impact of an institution's academic program.
Measures of an institution's academic selectivity, such as average student SAT/ACT scores, are not just a convenient and easily obtainable proxy for college quality. They also play a dominant, if perhaps unintended, role in more elaborate and public attempts to identify the nation's "best" colleges and universities. Probably the most nationally visible and credible of these attempts to identify and rank postsecondary institutions based on the quality of their undergraduate education is the annual report by U.S. News & World Report (USNWR) (Ehrenberg, 2003). USNWR "bases its college and university rankings on a set of up to 16 measures of academic quality that fall into seven broad categories: academic reputation, student selectivity, faculty resources, student retention, financial resources, alumni giving, and ... graduation rate performance" (Webster, 2001, p. 236). In a creative approach employing principal component analysis, Webster (2001) found that, although the average SAT (or ACT equivalent) score of enrolled students constituted only 6% of an institution's overall USNWR quality score, it was by far the criterion that most clearly determined an institution's rank. Consistent with Webster's more extensive analyses, we conducted a preliminary analysis for this study that estimated the simple correlation between average SAT/ACT score and the 2002 USNWR ranking for the "top 50" national universities. Even in this substantially attenuated distribution of schools, the correlation between the USNWR ranking (1 = highest, 50 = lowest) and the average SAT/ACT scores of enrolled students was -.89. (Average SAT/ACT equivalent score was obtained from the 2002 report America's Top Research Universities; Lombardi, Craig, Capaldi, & Carter, 2002.) For all practical purposes, then, the USNWR ranking of "best" colleges can be largely reproduced simply by knowing the average SAT/ACT scores of the enrolled students. Beyond this index, the other so-called "quality" indices make little incremental contribution to the USNWR rankings.
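The preliminary analysis described above is a simple bivariate correlation between ranks and average test scores. A minimal sketch of such a computation is shown below; the numbers are invented placeholders, not the USNWR or Lombardi et al. data.

    import numpy as np
    from scipy.stats import pearsonr

    rank = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])   # 1 = highest ranked
    avg_sat = np.array([1520, 1505, 1490, 1470, 1455, 1440, 1430, 1410, 1395, 1380])

    r, p_value = pearsonr(rank, avg_sat)
    print(round(r, 2))   # strongly negative: better (lower) ranks go with higher average scores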
While it is clear that an aggregate institution-level index of student body selectivity is a widely used and frequently accepted proxy for college "quality," the net impact of such an index on the outcomes of college is less certain. Over the past 30 years, three large-scale syntheses of the college impact literature have estimated the influence of institutional selectivity on a wide range of student and alumni outcomes (Bowen, 1977;Pascarella & Terenzini, 1991, in press). These three syntheses are consistent in arriving at two general conclusions about the role of college selectivity. First, although there is considerable uncertainty over causality (e.g., Arcidiacono, 1998;Dale & Krueger, 1999;Kane, 1998;Knox, Lindsay, & Kolb, 1993), the academic selectivity of one's undergraduate institution is, nevertheless, positively linked to career success and, in particular, to earnings. The second conclusion, however, is that when student cognitive, developmental, and psychosocial outcomes are considered, the net impact of college selectivity tends to be small and inconsistent.
It is not entirely clear why institutional selectivity demonstrates such equivocal impacts on student cognitive, developmental, and psychosocial growth during college. However, a number of scholars and social scientists who study the impact of college on students have suggested a reasonable explanation. They argue that there is substantially more variation within than between institutions. Put another way, the vast majority of colleges and universities in the American postsecondary system have multiple subenvironments with more immediate and powerful influences on individual students than any aggregate institutional characteristic (Baird, 1988, 1991; Hartnett & Centra, 1977; Smart & Feldman, 1998). Consequently, institutional selectivity (as measured by average student SAT/ACT score or a similar aggregate index) may simply be too global and remote an index to tell us much about the quality and impact of a student's classroom and nonclassroom experiences (Chickering & Gamson, 1987; Kuh, 2001, 2003, 2004; Pascarella, 2001a; Pike, 2003). In an attempt to shed further light on this issue, the present study estimated the relationships between college selectivity and students' experiences in what existing evidence suggests are dimensions of good practice in undergraduate education. To determine the robustness of our findings, parallel analyses were conducted on two independent data sets. The first was the National Study of Student Learning (NSSL), a federally funded longitudinal investigation conducted in the mid-1990s, and the second was the National Survey of Student Engagement (NSSE), a cross-sectional investigation carried out in 2002. In the NSSL data, a proxy selectivity measure consisting of incoming student test scores on the Collegiate Assessment of Academic Proficiency (ACT, 1990) was used. In the aggregate, this measure correlated .95 with average institutional SAT/ACT score. In analyses of the NSSE data, two measures of selectivity were employed: median student SAT/ACT equivalent score and, for comparative purposes, the Barron's Selectivity Score, which combines median SAT/ACT score with other measures of the stringency of an institution's admissions requirements.
Good Practices in Undergraduate Education
In a project sponsored by the American Association for Higher Education, the Education Commission of the States, and The Johnson Foundation, Chickering and Gamson (1987, 1991) synthesized the existing evidence on the impact of college on students and distilled it into seven broad categories or principles for good practice in undergraduate education. These seven principles or categories are: (1) student-faculty contact; (2) cooperation among students; (3) active learning; (4) prompt feedback to students; (5) time on task; (6) high expectations; and (7) respect for diverse students and diverse ways of knowing (Chickering & Gamson, 1991). The influence of Chickering and Gamson's seven principles has been extensive. For example, the NSSE, one of the most broad-based annual surveys of undergraduates in the country, is based on questionnaire items that attempt to operationalize the seven good practices (Kuh, 2001).
Although a detailed review of the extensive research on good practices is far beyond the scope of this paper, several examples are nevertheless useful for illustrative purposes. Analyzing longitudinal data from the Cooperative Institutional Research Program, Avalos (1996) found that a scale measuring the quantity and frequency of student-faculty interaction (e.g., informal conversations outside of class, working on a research project, being a guest in a professor's house) had a significant, positive link with postcollege occupational status. This association persisted even when controls were made for such factors as precollege occupational status, family background, and grades.
Meta-analyses of experimental and quasi-experimental research have clearly indicated that cooperative learning experiences provide a distinct advantage over individual learning experiences in fostering growth in both knowledge acquisition and problem-solving skills (Johnson, Johnson, & Smith, 1998a, 1998b; Qin, Johnson, & Johnson, 1995). There is also evidence to suggest that involvement in cooperative group class projects has a positive net effect on growth in leadership abilities and job-related skills (Astin, 1993).
A growing body of evidence has suggested that involvement in diversity activities (i.e., racial, cultural, intellectual, and political) has a positive net influence on critical thinking skills and other measures of cognitive growth during college (e.g., Gurin, 1999; Pascarella, Palmer, Moye, & Pierson, 2001). However, the benefits of involvement in diversity experiences during college appear to extend to one's postcollegiate life. Gurin (1999) found that involvement in diversity experiences during college (e.g., discussing racial/ethnic issues, socializing with students from a different ethnic group) had a positive net influence not only on alumni reports of community involvement but also on alumni reports of the extent to which their undergraduate experience prepared them for their current jobs.
There is ample experimental and correlational evidence that effective teaching (e.g., teacher clarity and teacher organization) has positive effects on both knowledge acquisition and more general cognitive competencies such as critical thinking (Hines, Cruickshank, & Kennedy, 1985; Pascarella et al., 1996; Wood & Murray, 1999). However, it also appears that receipt of effective teaching at the undergraduate level may positively influence one's plans to obtain a graduate degree; this influence is independent of background characteristics, tested ability, precollege plans for a graduate degree, college grades and social involvement, and the average graduate degree plans of students at the college attended.
It is clear from the existing evidence that we can identify empirically validated dimensions of good practice in undergraduate education that not only enhance cognitive and personal development during college, but also are linked to a range of postcollege benefits. The present investigation sought to estimate the extent to which these good practices are influenced by the academic selectivity of the institution one attends. In operationalizing good practices, we were guided by the research on the predictive validity of different dimensions of good practice reviewed above. Indeed, many of the operational definitions of good practices employed in this investigation were either adapted or taken directly from the studies on predictive validity previously cited (e.g., Cabrera et al., 2002; Feldman, 1997; Hagedorn et al., 1997; Pascarella et al., 1996; Terenzini et al., 1994; Whitt et al., 1999).
Method
The results of this investigation are based on analyses of data from two multi-institutional samples: the longitudinal National Study of Student Learning (NSSL) and the cross-sectional National Survey of Student Engagement (NSSE). The sample descriptions, data collection procedures, and variables for each database are described below.
NSSL Sample
The NSSL institutional sample consisted of 18 four-year colleges and universities located in 15 states throughout the country. Institutions were chosen from the National Center on Education Statistics IPEDS data to represent differences in colleges and universities nationwide through a variety of characteristics, including institutional type and control (e.g., private and public research universities, private liberal arts colleges, public and private comprehensive universities, historically Black colleges), size, location, commuter versus residential character, and ethnic distribution of the undergraduate student body. Our sampling technique produced a sample of institutions with a wide range of selectivity. For example, we included some of the most selective institutions in the country (average SAT = 1400) as well as some that were essentially open-admission. The result of our sampling technique was a student population from 18 schools that approximated the national population of undergraduates in four-year institutions by ethnicity, gender, and selectivity level. However, the small number of institutions makes statistical generalizations to the population of American four-year colleges and universities problematic.
The individuals in the sample were students who had participated in the first and second follow-ups of the NSSL. We selected the initial sample randomly from the incoming first-year class at each participating institution, informed them that they would be participating in a national longitudinal study of student learning, and assured them of a cash stipend for their participation in each data collection. We also gave them assurances that the information they provided would be kept confidential and never become part of their institutional records.
NSSL Data Collection
The initial data collection for NSSL was conducted in the fall of 1992 with 3,331 students from the 18 participating institutions. We asked the participants to fill out an NSSL precollege survey that sought information on student background (e.g., sex, ethnicity, age, family socioeconomic status, secondary school achievement) as well as on aspirations, expectations of college, and orientations toward learning (e.g., educational degree plans, intended major, academic motivation). Participants also completed Form 88A of the Collegiate Assessment of Academic Proficiency (CAAP), developed by the American College Testing Program (ACT) to assess general skills typically acquired by students during college (ACT, 1990). The total CAAP consists of five 40-minute, multiple-choice test modules: reading comprehension, mathematics, critical thinking, writing skills, and science reasoning. We administered only the reading comprehension, mathematics, and critical thinking modules with this stage of data collection.
The first and second NSSL follow-up data collections were conducted at the end of the first year of college (spring 1993) and the end of the second year of college (spring 1994), respectively. In both data collections, each participant completed different CAAP tests as well as the College Student Experiences Questionnaire (CSEQ) (Pace, 1990) and a detailed NSSL follow-up questionnaire. The CSEQ and the NSSL questionnaires gathered extensive information about each student's classroom and nonclassroom experiences during the preceding school year. At the end of the second follow-up, complete data was available for 1,485 students, or 44.6% of the original sample tested 3 years previously at the 18 participating institutions. Because of attrition from the sample, a weighting algorithm was developed to adjust for potential response bias by gender, ethnicity, and institution. Within each of the 18 institutions, participants in the second follow-up were weighted up to that institution's end-of-second year population by sex (male or female) and race/ethnicity (White, Black, Hispanic, Other). For example, if an institution had 100 Black men in its second-year class and 25 Black men in the sample, each Black man in the sample was assigned a weight of 4.00.
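The weighting rule just described is a standard post-stratification adjustment: within each institution, every sex-by-ethnicity cell receives a weight equal to the ratio of its population count to its sample count. A minimal sketch (ours; the first cell echoes the example in the text, the second is invented) is:

    import pandas as pd

    # Population and sample counts for two illustrative cells at one institution
    population = pd.Series({("male", "Black"): 100, ("female", "White"): 240})
    sample = pd.Series({("male", "Black"): 25, ("female", "White"): 80})

    weights = population / sample   # 100 / 25 = 4.0 for Black men, matching the example above
    print(weights)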
While applying sample weights in this way corrects for bias in the sample we analyzed by sex, ethnicity, and institution, it cannot adjust for nonresponse bias. However, we conducted several additional analyses to examine differences in the characteristics of students who participated in all years of the NSSL and those who dropped out of the study. The dropouts consisted of two groups: (a) those who dropped out of the institution during the study, and (b) those who persisted at the institution but dropped out of the study. Initial participants who left their respective institutions had somewhat lower levels of precollege cognitive test scores (as measured by fall 1992 scores on the CAAP reading comprehension, mathematics, and critical thinking modules), socioeconomic background, and academic motivation than their counterparts who persisted in the study. Yet students who remained in the study and those who dropped out of the study but persisted at the institution differed in only small, chance ways with respect to precollege cognitive test scores, age, race, and socioeconomic background (Pascarella, Edison, Nora, Hagedorn, & Terenzini, 1998).
NSSE Sample
The NSSE is an annual survey of first-year and senior students designed to assess the extent to which students engage in empirically-derived good practices in undergraduate education and what they gain from their experience (Kuh, 2001). The NSSE sample for this study consisted of 76,123 undergraduates (38,458 first-year students and 37,665 seniors) from 271 different four-year colleges and universities who completed the NSSE in the spring of 2002. The sample of institutions is described in Table 1 and closely resembled the national profile of four-year colleges and universities. More than 40% were Master's institutions. Approximately one fourth of the institutions were Doctoral (Extensive or Intensive Universities) and approximately one fifth (19%) were Liberal Arts Colleges. Less than 10% of the institutions were Baccalaureate General Colleges.
The largest representation of students came from Master's Universities, with more than one third of all first-year students and seniors in the sample. Approximately one fourth of all students were from Doctoral Extensive Universities, and just 10% were from Doctoral Intensive Universities. Slightly more than 18% of the sample were enrolled in Baccalaureate Liberal Arts Colleges, and only 7% were students at Baccalaureate General Colleges.
NSSL Variables
We initially attempted to use the average SAT/ACT equivalent score for entering students at each of the 18 participating institutions as the independent variable in our analyses of the NSSL data. However, two of the institutions in the NSSL sample were essentially open-admissions schools, and it was not possible to obtain reliable average SAT/ACT data for them. Consequently, we used a composite of the average precollege CAAP reading comprehension, mathematics, and critical thinking scores as a proxy for the average SAT/ACT score of incoming students at each institution in the study. For the 16 participating schools that had reliable average SAT/ACT scores, the correlation between this index and the average precollege CAAP composite score was .95. Thus, our measure of institutional selectivity based on the precollege CAAP appeared to be a very strong proxy for average SAT/ACT score, and was the independent variable in the NSSL analyses. In selecting and creating dependent measures in the NSSL analyses, we were guided by the principles of good practice in undergraduate education outlined by Chickering and Gamson (1987, 1991), and by additional research on effective teaching and influential peer interactions in college (e.g., Astin, 1993; Feldman, 1997; Pascarella & Terenzini, 1991; Whitt et al., 1999). In the NSSL analyses, there were 20 measures or scales of "good practices" grouped in the following eight general categories:
1. Student-Faculty Contact: quality of nonclassroom interactions with faculty, faculty interest in teaching and student development;
2. Cooperation Among Students: instructional emphasis on cooperative learning, course-related interaction with peers;
3. Active Learning/Time on Task: academic effort/involvement, essay exams in courses, instructor use of high-order questioning techniques, emphasis on high-order examination questions, computer use;
4. Prompt Feedback: instructor feedback to students;
5. High Expectations: course challenge/effort, scholarly/intellectual emphasis, number of textbooks or assigned readings, number of term papers or other written reports;
6. Quality of Teaching: instructional clarity, instructional organization/preparation;
7. Influential Interactions with Other Students: quality of interactions with students, non-course-related interactions with peers, cultural and interpersonal involvement;
8. Supportive Campus Environment: emphasis on supportive interactions with others.
All dependent measures were formed by summing student responses on the CSEQ and NSSL follow-up questionnaire during the first and second follow-up data collections. Table 2 provides detailed operational definitions and, where appropriate, psychometric properties of all independent and dependent variables employed in the NSSL analyses.
Table 2. Operational definitions and psychometric properties of variables in the NSSL analyses
College selectivity: Operationally defined as the average tested precollege academic preparation (composite of Collegiate Assessment of Academic Proficiency reading comprehension, mathematics, and critical thinking tests) of students at the institution attended. Where institutional data were available, this score correlated .95 with the average ACT score (or SAT score converted to the ACT).
Student-Faculty Contact
Quality of nonclassroom interactions with faculty: An individual's responses on a five-item scale that assessed the quality and impact of one's nonclassroom interactions with faculty. Examples of constituent items were "Since coming to this institution I have developed a close personal relationship with at least one faculty member," "My nonclassroom interactions with faculty have had a positive influence on my personal growth, values and attitudes," and "My nonclassroom interactions with faculty have had a positive influence on my intellectual growth and interest in ideas." Response options were 5 =strongly agree, 4 = agree, 3 = not sure, 2 =disagree, and I =strongly disagree. Alpha reliability = .83. The scale was summed through the first and second year of college.
Faculty interest in teaching and student development: An individual's responses on a five-item scale assessing students' perceptions of faculty interest in teaching and students. Examples of constituent items were "Few of the faculty members I have had contact with are genuinely interested in students" (coded in reverse), "Most of the faculty members I have had contact with are genuinely interested in teaching," and "Most of the faculty members I have had contact with are interested in helping students grow in more than just academic areas." Response options were 5 = strongly agree, 4 = agree, 3 = not sure, 2 = disagree, I = strongly disagree. Alpha reliability = .71. The scale was summed through the first and second year.
Cooperation Among Students
Instructional emphasis on cooperative learning: An individual's responses on a four-item scale that assessed the extent to which the overall instruction received emphasized cooperative learning. Examples of constituent items were "I am required to work cooperatively with other students on course assignments," "In my classes, students teach each other in groups instead of only having instructors teach," and "Instructors encourage learning in student groups." Response options were: 4 = very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability = .81. The scale was summed through the first and second year.
Course-related interaction with peers: An individual's responses on a lO-item scale that assessed the nature of one's interactions with peers focusing on academic coursework. Examples of constituent items were "Studying with students from my classes," "Tried to explain the material to another student or friend," and "Attempted to explain an experimental procedure to a classmate." Response options were 4 = very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability = .79. The scale was summed through the first and second year.
Active Learning/Time on Task
Academic effort/involvement: An individual's response on a 37-item, factorially derived, but modified scale that assessed one's academic effort or involvement in library experiences, experiences with faculty, course learning, and experiences in writing. The scale combined four lO-item involvement dimensions from the CSEQ, minus three items that were incorporated into the Course-Related Interaction with Peers Scale described above. Examples of constituent items were "Ran down leads, looked for further references that were cited in things you read," "Did additional readings on topics that were discussed in class," and "Revised a paper or composition two or more times before you were satisfied with it." Response options were 4 = very often, 3 = often, 2 = occasionally, and I never. Alpha reliability = .92. The scale was summed through the first and second year.
Number of essay exams in courses: An individual's response to a single item from the CSEQ. Response options were I =none, to 5 =more than 20. The item was summed through the first and second year.
Instructor use of high-order questioning techniques: An individual's responses on a four-item scale that assessed the extent to which instructors asked questions in class that required high-order cognitive processing. Examples of constituent items were "Instructors' questions in class ask me to show how a particular course concept could be applied to an actual problem or situation," "Instructors' questions in class ask me to point out any fallacies in basic ideas, principles or points of view presented in the course," and "Instructors' questions in class ask me to argue for or against a particular point of view." Response options were 4 = very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability =.80. The scale was summed through the first and second year.
Emphasis on high-order examination questions: An individual's responses on a five-item scale that assessed the extent to which examination questions required high-order cognitive processing. Examples of constituent items were "Exams require me to point out the strengths and weaknesses of a particular argument or point of view," "Exams require me to use course content to address a problem not presented in the course," and "Exams require me to compare or contrast dimensions of course content." Response options were 4 = very often, 3 =often, 2 = occasionally, and I = never.
Alpha reliability = .77. The scale was summed through the first and second year.
Using computers: An individual's response on a three-item scale indicating extent of computer use: "Using computers for class assignments," "Using computers for library searches," and "Using computers for word processing." Response options were 4 =very often, 3 =often, 2 =occasionally, and I =never. Alpha reliability =.65. The scale was summed through the first and second year.
Prompt Feedback
Instructor feedback to students: An individual's response on a two-item scale that assessed the extent to which the overall instruction received provided feedback on student progress. The items were "Instructors keep me informed of my level of performance" and "Instructors check to see if I have learned well before going on to new material," Response options were 4 = very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability = .70. The scale was summed through the first and second year.
High Expectations
Course challenge/effort: An individual's responses on a six-item scale that assessed the extent to which courses and instruction received were characterized as challenging and requiring high level of effort. Examples of constituent items were "Courses are challenging and require my best intellectual effort," "Courses require more than I can get done," and "Courses require a lot of papers or laboratory reports." Response options were 4 = very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability =.64. The scale was summed through the first and second year.
Number of textbooks or assigned readings: An individual's response on a single item from the CSEQ. Response options were I =none, to 5 =more than 20. The item was summed through the first and second year. Number of term papers or other written reports: An individual's response on a single item from the CSEQ. Response options were I =none, to 5 =more than 20. The item was summed across the first and second year.
Scholarly/intellectual emphasis: An individual's responses on a three-item scale that assessed perceptions of the extent to which the climate of one's college emphasized: 1) the development of academic, scholarly, and intellectual qualities; 2) the development of esthetic, expressive, and creative qualities; or 3) being critical, evaluative, and analytical. Response options were on a semantic differential-type scale where 7 = strong emphasis and 1 = weak emphasis. Alpha reliability = .79. The scale was summed through the first and second year.
Quality of Teaching
Instructional skill/clarity: An individual's responses on a five-item scale that assessed the extent to which the overall instruction received was characterized by pedagogical skill and clarity. Examples of constituent items were "Instructors give clear explanations," "Instructors make good use of examples to get across difficult points," and "Instructors interpret abstract ideas and theories clearly." Response options were 4 =very often, 3 =often, 2 =occasionally, and I = never. Alpha reliability =.86. The scale was summed through the first and second year.
Instructional organization and preparation: An individual's responses on a five-item scale that assessed the extent to which the overall instruction received was characterized by good organization and preparation. Examples of constituent items were "Presentation of material is well organized," "Instructors are well prepared for class," and "Class time is used effectively." Response options were 4 =very often, 3 =often, 2 =occasionally, and I =never. Alpha reliability =.87. The scale was summed through the first and second year.
Influential Interactions With Other Students
Quality of interactions with students: An individual's responses on a seven-item scale that assessed the quality and impact of one's interactions with other students. Examples of constituent items were "Since coming to this institution I have developed close personal relationships with other students," "My interpersonal relationships with other students have had positive influence on my personal growth, attitudes and values," and "My interpersonal relationships with other students have had a positive influence on my intellectual growth and interest in ideas." Response options were 5 = strongly agree, 4 =agree, 3 =not sure, 2 =disagree, and I =strongly disagree. Alpha reliability = .82. The scale was summed through the first and second year.
Non-course-related interactions with peers: An individual's response on a ten-item scale that assessed the nature of one's interactions with peers focusing on non-class, or non-academic issues. Examples of constituent items were "Talked about art (painting, sculpture, architecture, artists, etc.) with other students at the college," "Had serious discussions with students whose philosophy of life or personal values were very different from your own," and "Had serious discussions with students whose political opinions were very different from your own." Response items were 4 = very often, 3 =often, 2 =occasionally, and I =never. Alpha reliability =.84. The scale was summed through the first and second year.
Cultural and interpersonal involvement: An individual's response on a 38-item, factorially-derived, but modified scale that assessed one's effort or involvement in art, music, and theater, personal experiences, student acquaintances and conversations with other students. The scale combined items from five involvement dimensions of the CSEQ, minus eight items that were incorporated into the Non-Course-Related Interactions With Peers Scale described above. Examples of constituent items were "Seen a play, ballet, or other theater performance at the college," "Been in a group where each person, including yourself, talked about his/her personal problems," "Made friends with students whose interests were different from yours," "Had conversations with other students about major social problems such as peace, human rights, equality, and justice," and "In conversations with other students explored different ways of thinking about the topic." Response options were 4 =very often, 3 = often, 2 = occasionally, and I = never. Alpha reliability = .92. The scale was summed through the first and second year.
Supportive Campus Environment
Emphasis on supportive interactions with others: An individual's responses on a three-item scale that assessed the extent to which one's relationships with faculty, administrators/staff, and other students could be described as friendly, supportive, helpful, or flexible (coded 7) to competitive, remote, impersonal, or rigid (coded I). Alpha reliability = .70. The scale was summed through the first and second year.
NSSE Variables
The NSSE data permitted us to employ two different measures of college selectivity. The first was the median composite verbal and mathematics SAT (or ACT equivalent) score of first-year undergraduate students at each institution in the sample. The second was the Barron's Selectivity Score. This index has nine categories ranging from "noncompetitive" to "most competitive" and combines the median composite verbal and mathematics SAT/ACT equivalent score with four other criteria: percentage of first-year students above certain SAT/ACT scores; percentage of first-year students within specific quintiles of their high school graduating class; minimum class rank and grades needed for admission; and percent of applicants admitted.
The NSSE measures several dimensions of good practice. They include the following: student-faculty interaction, active and collaborative learning, academic challenge, diversity-related experiences, and supportive campus environment. Incorporated within several of these general scales are a number of subscales. The student-faculty interaction scale included two subscales, course-related interactions and out-of-class interactions; the academic challenge scale included a subscale tapping high-order thinking activities; and the supportive campus environment scale contained subscales focusing on interpersonal support and support for learning. Table 3 provides detailed operational definitions of the independent and dependent variables employed in our analysis of the NSSE data.
Table 3. Operational definitions of variables in the NSSE analyses
College selectivity was operationally defined as two variables. The first was the median composite verbal and mathematics SAT/ACT equivalent score of first-year students at each institution in the sample. The second was the Barron's Selectivity Score. This index has nine categories ranging from "Noncompetitive" to "Most competitive" and uses five criteria to determine an institution's selectivity index or score: 1) median composite verbal and mathematics SAT/ACT equivalent score; 2) percentage of first-year students scoring 500 and above and 600 and above on the SAT, and percentage of first-year students scoring 21 and above and 27 and above on the ACT; 3) percentage of first-year students who ranked in the upper fifth and upper two fifths of their secondary school graduating class; 4) minimum class rank and grade point average required for admission; and 5) percentage of applicants who were admitted.
Student-faculty interaction was a six-item scale with alpha reliabilities of .70 for first-year students and .71 for seniors. Constituent items were:
• Discussed grades or assignments with an instructor
• Received prompt feedback from faculty on your academic performance (written or oral)
• Discussed ideas from your readings or classes with faculty members outside of class
• Talked about career plans with a faculty member or advisor
• Worked with faculty members on activities other than coursework (committees, orientation, student life activities, etc.)
• Worked on a research project with a faculty member outside of course or program requirements
A three-item subscale with alpha reliabilities of .62 for first-year students and .61 for seniors. Constituent items were:
• Discussed grades or assignments with an instructor
• Received prompt feedback from faculty on your academic performance (written or oral)
• Discussed ideas from your readings or classes with faculty members outside of class
• Talked about career plans with a faculty member
• Worked with a faculty member on activities other than coursework (committees, orientation, student life activities, etc.)
• Worked on a research project with a faculty member outside of course or program requirements
Active and collaborative learning was a seven-item scale with alpha reliabilities of .61 for first-year students and .63 for seniors. Constituent items were:
• Asked questions in class or contributed to class discussions
• Made a class presentation
• Worked with other students on projects during class
• Worked with classmates outside of class to prepare class assignments
• Tutored or taught other students (paid or voluntary)
• Participated in a community-based project as part of a regular course
• Discussed ideas from your readings or classes with others outside of class (students, family members, coworkers, etc.)
Academic challenge was an eleven-item scale with alpha reliabilities of .73 for first-year students and .76 for seniors. Constituent items were:
• Preparing for class (studying, reading, writing, rehearsing, and other activities related to your academic program)
• Worked harder than you thought you could to meet an instructor's standards or expectations
• Number of assigned textbooks, books, or book-length packs of course readings
• Number of written papers or reports of 20 pages or more
• Number of written papers or reports between 5 and 19 pages
• Number of written papers or reports of fewer than 5 pages
• Analyzing the basic elements of an idea, experience, or theory
• Synthesizing and organizing ideas, information, or experiences into new, more complex interpretations and relationships
• Making judgments about the value of information, arguments, or methods
A three-item subscale with an alpha reliability of .77 for both first-year students and seniors. Constituent items were:
• Campus Environments Emphasize: Providing the support you need to help you succeed academically
• Campus Environments Emphasize: Helping you cope with your non-academic responsibilities (work, family, etc.)
• Campus Environments Emphasize: Providing the support you need to thrive socially
NSSL Analyses
Analyses of the NSSL data proceeded in a series of steps that used both institutions and individuals as the units of analysis. In all analyses, we used the percent of variance in good practice dimensions associated with or explained by selectivity as an estimate of effect size (Hays, 1994). In the first step of the analyses, individual student responses on each of the 20 good practice variables were regressed on a series of dummy variables (i.e., coded 1 and 0) representing the 18 four-year institutions in the sample. This estimate yielded the percent of total variance (or differences) in each good practice variable between institutions.
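A compact way to express this first step is a regression of each good-practice score on institution indicator variables; the model R-squared is then the share of total variance lying between institutions. The sketch below is ours (the column names and values are invented), and it ignores the sample weights that the actual NSSL analyses applied.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "good_practice": [3.2, 2.8, 3.9, 3.5, 2.7, 3.1],
        "institution": ["A", "A", "B", "B", "C", "C"],
    })

    # R-squared of the dummy-variable model = percent of total variance between institutions
    fit = smf.ols("good_practice ~ C(institution)", data=df).fit()
    print(round(fit.rsquared * 100, 1))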
In the next step in the analyses, we sought to determine the percentage of between-institution variance in good practices that was uniquely explained by college selectivity. Because our measures of good practices were based on student reports, we could not simply compute correlations between institutional selectivity and each good practice dimension. The reason for this is that such estimates do not account for the potential confounding influence of differences between institutions in the characteristics of the students who are reporting on good practices (Astin, 2003; Pascarella, 2001b). As Astin and Lee (2003) have demonstrated, a substantial portion of the differences in student reports of academic and nonacademic experiences during college are explained by differences in the background characteristics of the students themselves. Thus, failure to control for such student precollege characteristics could lead one to conclude that differences in reported good practices are institutional effects when, in fact, they may be simply the result of differences between institutions in the characteristics of the students enrolled. To address this methodological issue, we estimated a model that regressed each average good practice variable on institutional selectivity and three composite measures of student precollege characteristics: a student background composite (age, sex, race, parents' education, and parents' income); a precollege academic composite (secondary school grades, precollege plans to obtain a graduate degree, a measure of precollege academic motivation, and if the college attended was one's first choice); and a high school involvement composite consisting of time spent in high school in eight separate activities (studying, socializing with friends, talking with teachers outside of class, working, exercising or sports, studying with friends, volunteer work, and extracurricular activities).
Our final step in the analyses was to estimate the impact of college selectivity on the total variance in good practices. In these analyses, we estimated a model with individuals as the unit of analysis. This model closely paralleled the model employed with institutions as the unit of analysis. Each individual-level measure of good practices was regressed on college selectivity and the following individual-level student precollege characteristics: tested precollege academic preparation (composite of CAAP reading comprehension, mathematics, and critical thinking, alpha reliability = .83); precollege plans to obtain a graduate degree; a measure of precollege academic motivation (alpha reliability = .65); whether or not the college attended was one's first choice; age; sex; race; parents' education; parents' income; secondary school grades; and time spent in high school in eight separate activities (studying, socializing with friends, talking with teachers outside of class, working for pay, exercising or sports, studying with friends, volunteer work, and extracurricular activities).
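In both the institution-level and the individual-level models, the "unique" variance attributable to selectivity is an increment in R-squared when selectivity is added to a model that already contains the precollege controls. A minimal individual-level sketch (ours; the variable names are invented and the control set is abbreviated to two stand-ins) is:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "good_practice": [3.2, 2.8, 3.9, 3.5, 2.7, 3.1, 3.6, 2.9],
        "selectivity": [24.5, 24.5, 27.0, 27.0, 21.0, 21.0, 25.5, 25.5],
        "precollege": [22.0, 20.5, 26.0, 25.0, 19.5, 21.0, 24.0, 22.5],
        "motivation": [3.0, 2.5, 3.8, 3.4, 2.6, 2.9, 3.3, 2.8],
    })

    controls_only = smf.ols("good_practice ~ precollege + motivation", data=df).fit()
    full_model = smf.ols("good_practice ~ precollege + motivation + selectivity", data=df).fit()

    unique_r2 = full_model.rsquared - controls_only.rsquared   # variance uniquely tied to selectivity
    print(round(unique_r2 * 100, 1))
    # With 20 good-practice outcomes, a Bonferroni-corrected test would use alpha = 0.05 / 20.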
All NSSL institution-level and individual-level estimates were based on the weighted sample. Because of the very small sample size (n =18) in the analyses of institution-level data, the critical alpha was set at .10. An alpha of .05 was used for all individual-level analyses. However, because we were conducting multiple analyses on 20 separate good practice dimensions, we applied the Bonferroni correction to all tests of significance.
It is important to point out that even though they differed widely in selectivity, the small number of institutions in the NSSL sample is a clear limitation of this part of the study. We report the results for purposes of consistency in our analyses, but caution against overgeneralization of results based on institutions as the unit of analysis.
NSSE Analyses
For the reasons specified above, the analyses of the NSSE data paralleled those of the NSSL analyses. Thus, two models were estimated. The first employed institutions as the unit of analysis in an attempt to explain the percentage of between-institution variance in good practice dimensions uniquely associated with institutional selectivity. The second model used individuals as the unit of analysis and estimated the percentage of total variance in good practices uniquely associated with selectivity. Both models introduced controls for student precollege characteristics, either at the institutional aggregate or individual level. These characteristics included age, race, sex, whether or not one was a first-generation college student, and whether or not one was a transfer student. Separate analyses were conducted for first-year students and for seniors. Because of the somewhat more restricted range of institutional selectivity and scale reliabilities in the NSSE sample, the estimates of total variance explained by selectivity were based on a correction for attenuation (Pedhazur & Schmelkin, 1991). Because of multiple dependent variables, the Bonferroni correction was applied to all tests of significance.
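Pedhazur and Schmelkin (1991) are cited for the correction for attenuation, but the formula itself is not reproduced in the text. For orientation, the classical correction for unreliability of the two measures is

r_corrected = r_observed / sqrt(r_xx · r_yy),

where r_xx and r_yy are the reliabilities of the two measures. The authors mention both restricted range and scale reliabilities as the motivation; the formula above addresses only the reliability component and is the generic textbook form, so it may not correspond exactly to the variant they applied.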
Results
The estimated associations between college selectivity and dimensions of good practice in undergraduate education based on the longitudinal NSSL data are summarized in Table 4. Column 1 in Table 4 indicates the percent of total variance between institutions in each good practice variable. These percentages ranged from 5.6% to 19.7%, with a median of 9.7%. Thus, on average, institutional differences accounted for about 10% of the total variance in good practices. Column 2 shows the percentages of between-institution variance in good practices explained by college selectivity when differences in average student precollege characteristics among institutions were taken into account. As Column 2 indicates, these estimates of college selectivity's unique or net influence on between-institution variance in good practices tend to be modest. They range from less than 0.1 % to 20.4%, with a median across the 20 good practice variables of 3.5%. Thus, on average, something less than 5% of the between-institution differences in good practices was uniquely associated with institutional selectivity. When the Bonferroni correction was applied, none of the unique variance estimates associated with selectivity in Column 2 were significant at even the .10 level.
Column 3 in Table 4 shows the percentages of total variance in good practices accounted for by college selectivity after differences in student precollege characteristics were taken into account. The unique variance percentages associated with selectivity ranged in magnitude from less than 0.1% to 2.7%, with a median across all 20 good practice variables of 0.5%. After applying the Bonferroni correction, college selectivity explained a statistically significant percentage of the variance in 10 of the 20 good practice dimensions. Net of student precollege characteristics, college selectivity explained small but statistically significant percentages of total variance in all four measures of high expectations: course challenge/effort (2.1%); number of textbooks/assigned readings (2.7%); number of term papers/written reports (1.2%); and scholarly/intellectual emphasis (1.7%). Selectivity also accounted for small but statistically significant variance percentages in all three measures of influential interaction with other students: quality of interactions with students (0.9%); non-course-related interactions with peers (2.6%); and cultural and interpersonal involvement (1.5%). The significant net relationships between college selectivity and two dimensions of active learning/time on task were somewhat contradictory; emphasis on high-order examination questions (0.7%) was positive, but number of essay exams in courses (1.1%) was negative. Finally, college selectivity accounted for a statistically significant part of the variance in instructor feedback to students (0.7%), but the relationship was negative. The estimated associations between college selectivity and dimensions of good practice in undergraduate education based on the cross-sectional NSSE data are summarized in Table 5. Part A of Table 5 summarizes the results for first-year students while Part B summarizes the results for seniors. Columns 2 and 3 in Table 5 summarize the results when median SAT/ACT score was the measure of institutional selectivity. For both first-year students and seniors, college selectivity (as estimated by median institutional SAT/ACT score) had very small, and perhaps trivial, relationships with the various measures of good practices operationalized by the NSSE. Net of student precollege characteristics, median SAT/ACT score explained from less than 0.1% to 0.3% of the between-institution variance in good practices for first-year students, and from less than 0.1% to 0.6% of the between-institution variance in good practices for seniors. Even after a correction for attenuated range, the corresponding percentages of total variance in good practices explained by median SAT/ACT score ranged from 0.0% to 0.1% for first-year students and from 0.0% to less than 0.1% for seniors.
Columns 4 and 5 in Table 5 summarize the unique relationships between selectivity and good practices when college selectivity was operationally defined as the Barron's Selectivity Score. Consistent with our other findings, the associations were quite small, although larger than those found when selectivity was defined as median institutional SAT/ACT equivalent score. Net of student precollege characteristics, the Barron's Score explained from less than 0.1% to 3.8% of the between-institution variance in good practices for first-year students, and from less than 0.1% to 1.2% of the between-institution variance in good practices for seniors. After a correction for attenuation, the corresponding percentages of total variance in good practices explained by the Barron's Score ranged from 0.1% to 2.3% for first-year students and from less than 0.1% to 1.0% for seniors. As with other measures of selectivity (average CAAP test scores and median SAT/ACT scores), the Barron's Selectivity Score had its strongest positive associations with measures of high expectations. However, even on this good practice dimension, the Barron's Score explained only 3.8% of the between-institution variance and 2.3% of the total variance for first-year students, and 1.2% of the between-institution variance and 1.0% of the total variance for seniors.¹

Notes to Table 5: Median SAT/ACT score was operationally defined as the median composite verbal and mathematics SAT/ACT equivalent score of first-year students at each institution in the sample. The Barron's Selectivity Score has nine categories ranging from "Noncompetitive" to "Most competitive" and uses five criteria to determine an institution's selectivity index or score: 1) median composite verbal and mathematics SAT/ACT equivalent score; 2) percentage of first-year students scoring 500 and above and 600 and above on the SAT, and percentage of first-year students scoring 21 and above and 27 and above on the ACT; 3) percentage of first-year students who ranked in the upper fifth and upper two-fifths of their secondary school graduating class; 4) minimum class rank and grade point average required for admission; and 5) percentage of applicants who were admitted. Calculated by random effects ANOVA using student-level data, unweighted N = 38,456 first-year students and 37,665 seniors. Calculated by regression analysis using aggregate-level data, N = 271. Statistical adjustments made for the following institution-level aggregates: age, race, sex, transfer status, and first-generation student status. Calculated by regression analysis using student-level data, unweighted N = 38,456 first-year students and 37,665 seniors. Statistical adjustments made for the following student characteristics: age, race, sex, transfer status, and first-generation student status. (-) indicates a negative relationship between the dependent variable and selectivity. *p < .05, **p < .01; significant at the respective level after a Bonferroni correction.
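The "variance explained net of student precollege characteristics" figures reported throughout this section come from comparing regression models with and without the selectivity measure. The following minimal sketch is not the authors' code; the simulated data, variable names, and coefficients are illustrative assumptions, but it shows the basic computation of the increment in explained variance.

```python
# Sketch of a hierarchical (blockwise) regression: fit the controls first, then add
# selectivity, and report the change in R^2. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
controls = rng.normal(size=(n, 3))            # e.g., background, academic, involvement composites
selectivity = rng.normal(size=n)              # institutional selectivity assigned to each student
good_practice = controls @ [0.4, 0.3, 0.2] + 0.05 * selectivity + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_controls = r_squared(controls, good_practice)
r2_full = r_squared(np.column_stack([controls, selectivity]), good_practice)
print(f"variance explained net of controls: {100 * (r2_full - r2_controls):.2f}%")
```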
Conclusions and Implications
College academic selectivity plays a dominant role in the public's understanding of what constitutes "institutional excellence" in undergraduate education. In this study, we conducted analyses of two independent data sets to estimate the net effect of three measures of college selectivity on dimensions of documented good practices in undergraduate education. Consistent with the methodological arguments and evidence of Astin (2003), Astin and Lee (2003), and Pascarella (2001b), we estimated the effects of institutional selectivity while statistically controlling for the characteristics of the students who reported on good practices. The results were consistent across both samples, suggesting two conclusions.
Conclusions
First, with statistical controls in place for important confounding influences, three measures of institutional selectivity accounted for significant percentages of the variance in established good practices in undergraduate education. Some of these significant, net relationships between selectivity and good practices were negative (e.g., selectivity and number of essay exams in courses; selectivity and instructor feedback to students; selectivity and supportive campus environment). However, the clear majority of them were positive. Thus, from the standpoint of statistically reliable associations, one could conclude from our findings that institutional selectivity does indeed count in terms of fostering good practices in undergraduate education.
A second conclusion, however, is that while institutional selectivity may count in terms of fostering good practices, the magnitude of the net relationships we uncovered suggests that it may not count very much. Net of student precollege characteristics, the between-institution variance in good practices linked to an institution's median student SAT/ACT score, a nearly identical proxy for that score, and the Barron's Selectivity Score ranged from less than 0.1% to approximately 20%.
Similarly, the net total variance in good practices explained by college selectivity ranged from less than 0.1% to 2.7%. Not surprisingly, perhaps, college selectivity had its most consistent and strongest net impact on high academic expectations. Yet, even here the major proportion of differences in measures of high academic expectations reported by students (about 80-100% of the between-institution variance, and about 97-99% of the total variance) was unexplained by the selectivity of the college one attended. On average, across all good practice variables we considered, more than 95% of the between-institution differences and almost 99% of the total differences were unexplained by the academic selectivity of a college or university. Put another way, attending a selective institution in no way guarantees that one will encounter educationally purposeful academic and out-of-class experiences that are linked to a developmentally influential undergraduate experience. If a selective institution makes some of those experiences more likely, it does so in rather minimal ways. The student body selectivity of an institution may bear too great a burden as a signal for the impact or quality of the undergraduate education actually received.
Implications
The absence of a substantial relationship between selectivity and good practices in undergraduate education may explain in large part why the weight of evidence accumulated over time indicates a similar inconsistent and trivial relationship between institutional selectivity and measures of learning and cognitive development during college (e.g., Anaya, 1996, 1999; Astin, 1968; Flowers, Osterlind, Pascarella, & Pierson, 2001; Hagedorn, Pascarella, Edison, Braxton, Nora, & Terenzini, 1999; Knox, Lindsay, & Kolb, 1992; Opp, 1991; Toutkoushian & Smart, 2001). If more selective colleges and universities are not particularly effective in fostering good practices in undergraduate education, it should not be particularly surprising that institutional selectivity makes only an inconsistent and typically small or trivial value-added contribution to student learning and cognitive growth during college.
Because our findings are robust across two distinct samples of students and come from two different decades, they raise questions about the validity of national rankings of college "quality" that are essentially de facto rankings of institutional selectivity. Indeed, two analyses of cross-sectional NSSE data (National Survey of Student Engagement, 2001; Pike, 2003) found little relationship between USNWR rankings of colleges and good practices in undergraduate education. If these rankings are poor indicators of documented good practices in undergraduate education, do they merely reflect rigorous entrance requirements and other so-called "quality" dimensions that tend to be highly collinear with institutional selectivity (e.g., reputation, wealth, high graduation rates, and the like)? Such "quality" dimensions are certainly of some consequence in terms of an institution's capacity to allocate status or advantage in one's postcollege career. For some, this in itself may be sufficient justification to take the annual college rankings tournament seriously, even if institutional "quality" serves only as a signal for graduates' intelligence and ambition and not the actual impact or quality of the education one receives. If, however, one is concerned about exposure to undergraduate academic and nonacademic experiences that tend to foster personal and intellectual growth, national magazine rankings based essentially on selectivity may offer little guidance for selecting a college.
The academic selectivity of an undergraduate student body is an institutional characteristic that is quite difficult, if not impossible, to change in any meaningful way. For some public institutions, admissions requirements are mandated by state legislatures, while many private colleges and universities may be largely limited to a specific demographic segment of prospective students. Thus, if institutional selectivity were in fact a major determinant of influential good practices, purposeful attempts to enhance or reshape the impact of the undergraduate experience might prove largely futile. The evidence from this study, however, suggests that good practices in undergraduate education are essentially independent of the academic preparation of the students enrolled. At the same time, such practices are amenable to the influence of institutional policies and practices (Kuh, 2001, 2003). To illustrate, consider two measures of effective teaching used in this study: instructional clarity and instructional organization/preparation. Not only have they been validated by experimental research (Hines, Cruickshank, & Kennedy, 1985; Wood & Murray, 1999), their constituent skills (e.g., use of examples, identifying key points, providing class outlines, and using course objectives) may themselves be learnable by faculty (Weimer & Lenze, 1997). Similarly, recent evidence has suggested the potential benefits of learning communities and living-learning centers that attempt to create subenvironments that are more effective within large universities (Inkelas & Wiseman, 2003; Zhao & Kuh, 2004).
Finally, it is important to be clear about what our findings do and do not indicate. Essentially, we found replicated evidence to strongly suggest that institutional selectivity (as estimated by average test scores of incoming or enrolled students) has little, if any, net impact on established good practices in undergraduate education. What our findings do not indicate is the absence of between-college effects on good practices.
Some institutions may be particularly effective in fostering those academic and non-academic experiences that lead to an influential undergraduate education. For example, recent analyses of a subsample of the NSSL data suggest that some small liberal arts colleges may maximize good practices in undergraduate education irrespective of their academic selectivity, residential character, or the full-time nature of their student bodies (Pascarella, Cruce, Wolniak, & Blaich, 2003). Similarly, the NSSE Institute for Effective Educational Practices is studying 20 colleges and universities that have higher-than-predicted graduation rates and higher-than-predicted engagement scores (Kuh, Kinzie, & Umbach, 2003). Within this set of institutions are some that are highly selective. Thus, selectivity and effective educational practice are not mutually exclusive. Rather, the findings presented in this paper suggest that one cannot readily identify those institutions providing a developmentally powerful undergraduate experience simply by considering the academic selectivity of their student bodies. Because national magazine rankings of the nation's "best" colleges essentially reflect institutional selectivity, the information they provide about the quality of undergraduate education is likely limited in the same way.
Endnote

1. Because our interest in this study was primarily the magnitude of effect, and because our intra-class correlations (between-institution variances) for the dependent variables were small, we expected our results based on ordinary least squares to be quite close to those of more complex mixed-level approaches such as hierarchical linear modeling (HLM) (Ethington, 1997). To be safe, we conducted parallel analyses using HLM and found essentially the same results as in our multi-level ordinary least squares estimates.
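As a companion to the endnote, here is a minimal sketch (assumed, not taken from the article) of the intraclass correlation it refers to: the share of total variance in a good-practice measure that lies between institutions rather than between students within institutions, estimated from a one-way random effects decomposition. The data are simulated purely for illustration.

```python
# One-way ANOVA estimate of the between-institution variance share (ICC).
import numpy as np

def intraclass_correlation(scores_by_institution):
    """Share of total variance attributable to between-institution differences."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_institution]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    n_bar = n_total / k                      # assumes roughly balanced group sizes
    var_between = max((ms_between - ms_within) / n_bar, 0.0)
    return var_between / (var_between + ms_within)

# Toy example: 5 institutions, 200 students each, little between-institution variation.
rng = np.random.default_rng(1)
data = [rng.normal(loc=mu, scale=1.0, size=200) for mu in (0.0, 0.05, -0.05, 0.1, 0.0)]
print(f"estimated ICC: {intraclass_correlation(data):.3f}")
```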
Solvent-Free Patterning of Colloidal Quantum Dot Films Utilizing Shape Memory Polymers
Colloidal quantum dots (QDs) with properties that can be tuned by size, shape, and composition are promising for the next generation of photonic and electronic devices. However, utilization of these materials in such devices is hindered by the limited compatibility of established semiconductor processing techniques. In this context, patterning of QD films formed from colloidal solutions is a critical challenge and alternative methods are currently being developed for the broader adoption of colloidal QDs in functional devices. Here, we present a solvent-free approach to patterning QD films by utilizing a shape memory polymer (SMP). The high pull-off force of the SMP below glass transition temperature (Tg) in conjunction with the conformal contact at elevated temperatures (above Tg) enables large-area, rate-independent, fine patterning while preserving desired properties of QDs.
Introduction
The variable size, shape, and composition of colloidal semiconductor quantum dots (QDs) permit convenient tailoring of electrical and optical properties [1][2][3][4]. In addition, solubility of colloidal QDs in a variety of solvents can be beneficial to low-cost, large-area manufacturing compared to conventional micro-manufacturing [5]. In particular, the use of solution-based fabrication techniques [6][7][8] has attracted much attention for next-generation devices from colloidal QDs in the fields of light-emitting diodes (LEDs) [9][10][11], solar cells [12][13][14][15], and thin film transistors (TFTs) [8,16]. While solubility allows for convenient spin-casting of thin films of QDs, further processing of the deposited films is limited due to solvent stripping/damage that can easily occur, which then hinders the broader adoption of QDs for such photonic and electronic devices. In this context, direct patterning of QDs through both additive and subtractive means has been studied. Additive patterning schemes such as dip-pen and polymer pen nanolithography have successfully demonstrated patterning at single-QD resolution [17][18][19]. On the other hand, subtractive patterning of QD films through transfer printing-based mechanical peeling using elastomeric stamps has been investigated and high-quality transferred films have recently been demonstrated [20][21][22][23].
Transfer printing with a structured polydimethylsiloxane (PDMS) elastomeric stamp requires certain thresholds with respect to the applied preload when the PDMS stamp is making a contact with a QD film and with respect to separation rate, i.e., peeling rate, for proper patterning [20,21]. This method suffers from inconsistent patterning due to weak adhesion between the QDs, high preload, a high peeling rate, and poor resolution. Introducing water-soluble polyvinyl alcohol (PVA) can enhance transfer of QD films to PDMS stamps and improve patterning yields, but the process still suffers from high preload, high peeling rate, and moderate resolution [22]. Recently introduced intaglio transfer printing has successfully demonstrated ultra high resolution, but the drawback of the high preload and high peeling rate (10 cm/s) still remains [23].
We propose an alternative method of patterning QD films that utilizes the shape memory effect of shape memory polymers (SMPs) to alleviate such high preload and high peeling rate requirements without compromising high pattern resolution. SMPs are a class of thermosensitive materials that change their mechanical compliance across the polymer's glass transition or melting temperature (T g , or T m ) [24,25]. The reversibility in the elastic modulus can be exploited to provide conformal contact and high pull-off force, useful for transfer printing and patterning [26]. Here, a structured SMP surface is used to make conformal contact to QD films with low preload at elevated temperature and to provide high pull-off forces enabling high-resolution large-area patterning, even with a low separation rate (~2 µm/s) at room temperature. In such a way, solvent-free, rate-independent, high yield, highly scalable QD film patterning is achieved via the SMP surface.
Synthesis of CdSe/CdS Core/Shell Quantum Dots
CdSe/CdS core/shell QDs were synthesized by an established method with slight modifications [27]. Briefly, 1 mmol of CdO, 4 mmol of oleic acid (OA), and 20 mL of 1-octadecene (ODE) were degassed at 100 °C for 1 h before being heated up to 300 °C to form clear Cd(OA)2 precursors. Then 0.25 mL of 1 M Se in trioctylphosphine was swiftly injected into the reaction mixture. After 90 s of growth, 0.75 mmol of 1-octanethiol in 2 mL of ODE was injected dropwise. The reaction was terminated by cooling with an air jet after another 30 min of growth. The final reaction mixture was purified twice by adding 1 part toluene and 2 parts ethanol and centrifuging at 2000 rpm.
Preparation of Si-ODTS-QD Substrate
The preparation of octadecyltrichlorosilane (ODTS)-coated Si substrates followed an established method [21] with slight modifications. Silicon substrates were cleaned by acetone/isopropanol and dried with N2 flow, and then cleaned using UV/O3 for 30 min. After UV/O3 exposure, the Si substrates were immediately transferred to ODTS in anhydrous hexane solution (1:600 volume ratio) and left to self-assemble for 1 h. The resulting substrates were sonicated in chloroform to remove excess ODTS molecules and then baked on a hot plate at 120 °C for 20 min. Once the ODTS-coated Si substrate was prepared, a solution of QDs in octane (~60 mg/mL) was spin-casted onto the substrate at 2000 rpm for 30 s. The schematics and an optical image of the prepared substrate are included in Figure 1a.
Preparation of the Shape Memory Polymer (SMP) Stamp
For the fabrication of SMP stamps, an SU-8 (SU-8 50, MicroChem, Westborough, MA, USA) mold was first made. A Si wafer was thoroughly degreased with acetone and isopropyl alcohol. Then, the Si wafer was treated with oxygen plasma for further cleaning. SU-8 was then spin-coated to form a 50 µm thin layer on the Si wafer. After a soft baking process on a hotplate at 65 °C for 6 min and at 95 °C for 20 min, the SU-8 layer was patterned through UV exposure with a pre-designed FeO2 mask. Post-exposure baking on a hotplate at 65 °C for 1 min and 95 °C for 5 min and developing with SU-8 developer resulted in a 50-µm-thick SU-8 mold. For the double molding process, the SU-8 mold on the Si wafer was coated with trichloro-1H,1H,2H,2H-perfluorodecylsilane (FDTS) through molecular vapor deposition (MVD) to form an anti-stick monolayer. After fully mixing and degassing, a PDMS precursor was carefully poured into the SU-8 mold and cured in a convection oven at 60 °C for 120 min. The PDMS mold was then used for the successive SMP molding process. A previously developed SMP, NGDE2 with a Tg of ~40 °C [28], was used in this work. A mixed SMP precursor was poured into the PDMS mold and cured between the mold and a glass slide in a convection oven at 100 °C for 90 min. Final demolding of the SMP from the PDMS mold leads to an SMP stamp 50 µm in thickness on the glass slide, as shown in Figure 1b.
Patterning Procedure
The prepared Si-ODTS-QD substrate is placed on a high-precision translational and rotational mechanical stage. The SMP stamp is attached to an indium tin oxide (ITO) heater designed to raise the temperature to 100 °C upon a 16 V bias from an external power source. This ITO heater with the SMP stamp is attached to a fixture, which is placed over the Si-ODTS-QD substrate. An optical microscope is located over the ITO heater, with which all patterning steps are monitored during the procedure. After heating the SMP stamp above Tg, which makes the stamp compliant, the mechanical stage with the Si-ODTS-QD substrate is manually raised and brought into conformal contact with the SMP stamp with minimal preload, as depicted in Figure 1c. While the preload is applied, the stamp is cooled below Tg, as shown in Figure 1d, to induce a high pull-off force to remove the QDs from the substrate. Subsequently, the SMP stamp and the Si-ODTS-QD substrate are separated at ~2 µm/s, which removes the QDs in the contact area, as schematically shown in Figure 1e.
Characterization
The photoluminescence (PL) spectra of QDs were collected with a Horiba Jobin Yvon FluoroMax-3 spectrofluorometer (Horiba, Ltd., Kyoto, Japan). The scanning electron microscope (SEM) images were obtained on a Hitachi S4800 SEM (Hitachi, Ltd., Tokyo, Japan). The thickness of the QD film was measured to be 47 nm using a J. A. Woollam VASE Ellipsometer (J. A. Woollam Co., Inc., Lincoln, NE, USA) in the wavelength range of 400-900 nm at an incidence angle of 60°. Data analysis was performed with WVASE32 software (J. A. Woollam Co., Inc.) using the Cauchy model. PL imaging was carried out on a Jobin Yvon Labram HR800 confocal Raman spectrometer (Horiba, Ltd.) using a 10× air objective with a 532 nm laser excitation source. The laser intensity was kept below 0.5 mW to prevent damage to the QD films.
Optical and SEM Images of Patterned QD Films under Different Process Conditions
Optical and scanning electron microscopy (SEM) images of QD films patterned using different states of the SMP stamp with a slow separation rate (~2 µm/s) are shown in Figure 2a-c. The result in Figure 2a was achieved with the SMP stamp making contact with the QD substrate below Tg. Owing to the high elastic modulus of the SMP stamp (2.5 GPa [25]) at room temperature (below Tg), the stamp has a very low probability of making highly conformal contact over the entire desired contact area. Any tilting misalignment between the SMP stamp and the QD substrate causes the stamp to slide, leading to an accumulation of QDs. Figure 2b reveals the situation when the SMP stamp is heated at ~80 °C (above Tg) throughout the patterning experiment. Above Tg, the stamp becomes compliant (10 MPa [25]). Relatively well defined square regions suggest conformal contact between the stamp and the QD substrate. However, the pull-off force applied to the QD film during separation is limited by the low elastic modulus, leading to an incomplete removal of the QDs. The effect of changing the elastic modulus of the SMP stamp during QD patterning is shown in Figure 2c, in which the stamp is heated during the initial conformal contact and cooled prior to separation. The patterned region exhibits sharp edges, and the region surrounding the central square pattern shows near-complete removal of QDs.

Figure 2 (caption): (a) Contact and separation both conducted at a temperature below Tg. (b) Both contact and separation conducted at an elevated temperature (above Tg). (c) The stamp heated above Tg and brought into contact, then cooled to room temperature (below Tg) prior to separation. All experiments were conducted at the same preload (~5.5 kPa) and separation rate (~2 µm/s). The scale bars indicate 100 µm.
The pull-off force that the SMP stamp exhibits against a rigid surface during separation can be expressed as [29]

F_pull-off = 25.31 [γ0 (2E_SMP) L^3]^(1/2)    (1)

where γ0 is the work of adhesion between the SMP stamp and a rigid surface, and E_SMP and L are the elastic modulus and the width of the SMP stamp, respectively. Equation (1) indicates that the pull-off force of the SMP stamp is a function of E_SMP, which in turn depends on whether the SMP stamp is above or below Tg. Although γ0 at the SMP-QD interface is unknown and the effect of the elastic moduli mismatch [30] between the SMP and the QDs is simplified, the equation qualitatively shows the different pull-off forces of the SMP stamp under above-Tg and below-Tg separation conditions. This difference in pull-off (i.e., separation) force, arising from whether or not the stamp is cooled prior to separating the SMP stamp from the QD substrate, causes differences in the quality of the SMP stamp patterning of QD films. Because of the high pull-off force afforded by cooling the SMP stamp below Tg, patterning yield can be completely independent of separation rate.
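As a rough numerical illustration (not from the original article), the short sketch below uses the modulus values quoted above (about 2.5 GPa below Tg and about 10 MPa above Tg) and assumes the square-root dependence of the pull-off force on E_SMP implied by Equation (1) as reconstructed here; under that assumption, cooling the stamp before separation raises the achievable pull-off force by roughly a factor of 15, with γ0 and L cancelling in the ratio.

```python
# Illustrative only: ratio of pull-off force below vs. above Tg,
# assuming F_pull-off scales as sqrt(E_SMP) per Equation (1).
import math

E_BELOW_TG = 2.5e9   # Pa, glassy SMP (room temperature, below Tg)
E_ABOVE_TG = 10.0e6  # Pa, rubbery SMP (above Tg)

force_ratio = math.sqrt(E_BELOW_TG / E_ABOVE_TG)
print(f"Pull-off force below Tg is ~{force_ratio:.1f}x that above Tg, all else being equal.")
```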
Characterization of Patterned QD Films
High magnification SEM images were obtained to characterize the morphology of the patterned QD films. Figure 3a shows a very well-defined edge of the QD pattern obtained using the hot-contact and cold-separation condition, which corresponds to Figure 2c. A further magnified SEM image of the QD film is shown in Figure 3b. Individual QDs can be seen and are uniformly distributed and densely packed in the film. Additionally, spatially resolved photoluminescence (PL) spectra were obtained to examine the possible effects of the SMP stamp-based patterning process on the optical properties of the QDs. As demonstrated in Figure 3c, the PL intensity mapping matches the SEM image and shows a distinctively different spectrum between the QD pattern and the background. Figure 3d reveals that the PL from the dark regions where QDs have been removed by the SMP stamp is 4 orders of magnitude lower in peak intensity than that from the remaining QD patterns. Furthermore, the PL line shape and peak position are well preserved when compared with those of QDs in solution.

Figure 3d (caption): The PL spectrum measured within the patterned area of the QD film (solid red curve) is very similar to that measured from a solution of the QDs (dotted blue curve); the slight red-shift of the PL peak in the film state is presumably due to energy transfer between closely packed QDs. The PL from the dark background region (solid black line) surrounding the square pattern of the QD film is significantly reduced.
Large Scale Patterning of QD Films
The QD patterning achieved here through the SMP stamp is a two-dimensional planar process that can be developed into a large-scale patterning method. In order to demonstrate the feasibility of large scale patterning, an array of 200 µm by 200 µm square SMP stamps is prepared as shown in Figure 4a. The prepared SMP stamp array is brought into contact with a QD substrate at a temperature above T g and subsequently cooled below T g prior to separation as described in methods. The resulting pattern is shown in Figure 4b. As can be seen in the optical and SEM images, an array of QD patterns covering 1.2 cm × 1.2 cm area can be formed in a simple single patterning step with a low peeling rate using the SMP stamp array.
Conclusions
This work demonstrates a novel method of patterning QD films under solvent-free, dry conditions. While existing PDMS-based transfer printing approaches can lead to successful QD film patterning, those methods require high peeling rates to induce sufficient pull-off forces. In this context, utilizing the shape memory effect of a SMP that switches its elastic modulus as a function of temperature provides advantages while maintaining desirable solvent-free conditions for patterning QD films. Sufficiently large pull-off forces by the SMP at below T g in conjunction with highly conformal contact of the SMP stamp at elevated temperature (above T g ) enable dry patterning of QD films, even at an extremely slow separation rate (2 µm/s), which verifies the rate-independent patterning yield of the SMP stamp. Arrays of QD patterns formed through this method as demonstrated here can facilitate various device applications, such as solar cells, LEDs, and transistors, thereby broadening the applicability of QDs.
Controlled Attenuation Parameter for Quantification of Steatosis: Which Cut-Offs to Use?
Chronic liver diseases (CLDs) are a public health problem, even if frequently they are underdiagnosed. Hepatic steatosis (HS), encountered not only in nonalcoholic fatty liver disease (NAFLD) but also in chronic viral hepatitis, alcoholic liver disease, etc., plays an important role in fibrosis progression, regardless of CLD etiology; thus, detection and quantification of HS are imperative. Controlled attenuation parameter (CAP) feature, implemented in the FibroScan® device, measures the attenuation of the US beam as it passes through the liver. It is a noninvasive technique, feasible and well accepted by patients, with lower costs than other diagnostic techniques, with acceptable accuracy for HS quantification. Multiple studies have been published regarding CAP performance to quantify steatosis, but due to the heterogeneity of CLD etiologies, of steatosis prevalence, etc., it had widely variable calculated cut-off values, which in turn limited the day-to-day utility of CAP measurements in clinical practice. This paper reviews published studies trying to suggest cut-off values usable in clinical practice.
Introduction
Chronic liver diseases (CLDs) are a public health problem, even if they are frequently underdiagnosed. A study from 2014 estimated that 844 million individuals are affected by CLD, with a mortality rate of 2 million per year [1]. The most frequent CLDs are chronic viral hepatitis, alcoholic liver disease (ALD), and nonalcoholic fatty liver disease (NAFLD) with its progressive variant, nonalcoholic steatohepatitis (NASH). Even if effective treatments are available for chronic viral hepatitis, this is not the case in NAFLD and NASH, an alarming fact considering that the worldwide pooled prevalence of NAFLD is estimated to be 25.24% [2], ranging from approximately 13% in Africa to approximately 30% in Asia and South America. Furthermore, the prevalence of NAFLD is expected to increase, since the prevalence of its etiologic factors (obesity, diabetes mellitus, hypertriglyceridemia) is increasing.
Hepatic steatosis (HS) is encountered not only in NAFLD, but also in chronic viral hepatitis, alcoholic liver disease, etc. Several studies demonstrated that HS plays an important role in fibrosis progression, regardless of CLD etiology [3,4], and that it impairs response to treatment in chronic viral hepatitis [5].
Diagnosis of Hepatic Steatosis
Considering all these facts, detection and quantification of HS are imperative, but also a challenge. Detection of HS relies mainly on imaging methods. B-mode ultrasonography is usually the first-line imaging method to detect HS, but it cannot assess the presence of inflammation and it is imprecise to assess steatosis severity, especially mild [6,7]. Magnetic resonance imaging (MRI) techniques, especially proton density fat fraction (PDFF), are very accurate to detect and quantify HS [8], but they are very expensive and not available enough to be used for assessment of such a large number of patients.
Liver biopsy is considered the gold standard for assessing HS severity, as well as inflammation and fibrosis, when they are present [6,7]. According to histologic findings, liver steatosis is classified as absent (S0, normal liver) when less than 5% of the hepatocytes show fatty infiltration; mild (S1) when 5% to 33% of the hepatocytes show fatty infiltration; moderate (S2) when 33-66% of the hepatocytes show fatty infiltration; and severe (S3) when more than 66% of the hepatocytes show fatty infiltration [6,7]. However, liver biopsy is an invasive method, poorly accepted by patients, especially if repeated, and there are problems regarding inter-observer variability in assessing the sample, as well as sampling errors [9]. Furthermore, the applicability of liver biopsy for assessing such a huge number of patients is highly questionable.
Considering all these facts, noninvasive methods have been developed to assess HS, as well as inflammation and fibrosis (when present). They include biomarkers and imaging techniques [6,7]. Among the imaging techniques, the controlled attenuation parameter (CAP) feature, implemented on the FibroScan® device, seems the most promising noninvasive test to quantify HS.
Controlled Attenuation Parameter (CAP): Technical Data
Vibration-controlled transient elastography (VCTE) (FibroScan®, EchoSens, Paris, France) is an ultrasound-based elastography technique developed more than 15 years ago, initially used for fibrosis assessment in chronic liver diseases. It is the most validated elastography technique, accepted by international guidelines as a reliable tool to quantify liver fibrosis [10,11]. VCTE measures the velocity of shear waves generated inside the liver by a mechanical impulse. In CLD, liver stiffness increases with the progression of fibrosis: the stiffer the liver, the higher the shear wave velocity. Several years later, the CAP feature was added to the FibroScan® device. It measures the attenuation of the US beam as it passes through the liver. CAP correlates with the viscoelastic characteristics of the liver, which in turn depend on the quantity of fat droplets in the hepatocytes [12]. CAP measurements can be performed with either the M or XL probe (chosen according to the skin-to-liver-capsule distance), and the results are expressed in decibels per meter (dB/m), ranging from 100 to 400 dB/m [13]. At the beginning, CAP was available only on the M probe of the FibroScan®. Later, it was also implemented on the XL probe developed for obese subjects.
The initial studies regarding CAP showed excellent feasibility (92.3% of cases with the M probe alone [13], improving to 96.8% when both M and XL probes were used [14]) and excellent reproducibility, with inter-rater agreement of 0.82-0.84 for the M probe [15,16] but lower values for the XL probe (0.75 and 0.65, respectively) [14,15].
No technical quality parameters have been recommended by the manufacturer to ensure reliable measurements. Therefore, most authors used the quality criteria recommended for VCTE: 10 valid measurements with an IQR/M < 30% [17,18]. A study published in 2017 recommended an IQR < 40 dB/m as a quality criterion for CAP measurements [19]. When this quality criterion was used, the AUROC of CAP to assess steatosis, as compared to liver biopsy, increased from 0.77 to 0.9. Another study set the IQR upper limit at 30 dB/m [8], while a further study found no difference in CAP performance when the IQR was ≥30 dB/m or ≥40 dB/m [20]. A recently published study demonstrated that CAP-IQR/M < 0.3 as a quality criterion improves the accuracy and feasibility of CAP measurements, performing better than the IQR < 40 dB/m criterion [21].
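As a small illustration of how such quality criteria can be applied in practice, the helper below (purely illustrative, not a validated or manufacturer-endorsed tool) flags a CAP examination as reliable only if at least 10 valid measurements are available and the IQR/median ratio is below 0.3; the threshold can be swapped for the IQR < 40 dB/m rule discussed above.

```python
# Illustrative quality check for a CAP examination (readings in dB/m); thresholds follow
# the criteria discussed in the text and are assumptions, not manufacturer recommendations.
from statistics import median, quantiles

def cap_exam_reliable(cap_readings_db_m, max_iqr_over_median=0.30, min_valid=10):
    """Return True if there are >= min_valid readings and IQR/median < max_iqr_over_median."""
    if len(cap_readings_db_m) < min_valid:
        return False
    q1, _, q3 = quantiles(cap_readings_db_m, n=4)
    iqr = q3 - q1
    return iqr / median(cap_readings_db_m) < max_iqr_over_median

readings = [255, 248, 262, 251, 259, 247, 253, 260, 249, 256]
print(cap_exam_reliable(readings))  # True for this fairly tight set of readings
```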
Regarding the use of the M vs. the XL probe to assess steatosis grade by CAP, data are still conflicting. In a study performed in a Caucasian population, the cut-offs and performance were similar for the M and XL probes [22], while in a smaller study performed in a Chinese population, cut-off values were higher with the XL probe but the performance was similar [23]. In a very recent study in a Japanese population, cut-off values were higher for the XL probe, but there were no significant differences in accuracy [24].
Several studies demonstrated that CAP measurements are not influenced by the severity of liver fibrosis, nor by the presence of cirrhosis [25][26][27][28].
Controlled Attenuation Parameter (CAP): Predictive Value for Steatosis Severity in Individual Studies
To date, numerous studies have been published regarding the predictive value of CAP for steatosis severity. We summarized in Table 1 data from studies including more than 100 subjects, with liver biopsy as the reference method and CAP measurements performed with the M probe.
As can be seen, the performance of CAP for detecting any steatosis (S ≥ 1) is very good, the AUROC usually being higher than 0.8. In populations with mixed CLD etiology, the AUROCs also remain high for diagnosing more severe steatosis (S2 and S3). However, in NAFLD populations, the AUROCs for diagnosing moderate (S2) and severe (S3) steatosis decrease, sometimes to as low as 0.58 [39] or even 0.37 [38]. Nevertheless, the severity of fat infiltration in NAFLD does not affect prognosis [45], so the important thing is to detect even mild steatosis (S1), for which CAP is much better than B-mode ultrasonography [46]. The largest individual study assessing the value of CAP for predicting steatosis severity was published in 2019 by Eddowes et al. [20]. It was a multicenter prospective study that included 450 patients with NAFLD evaluated by CAP/TE and liver biopsy. The AUROCs of CAP to identify steatosis were 0.87 for S ≥ S1, 0.77 for S ≥ S2, and 0.70 for S3. Youden cut-off values were 302 dB/m for S ≥ S1, 331 dB/m for S ≥ S2, and 337 dB/m for S3.
The cut-offs also vary considerably among studies. An explanation could be the relatively small number of patients included in each study and the heterogeneity among cohorts regarding etiology, overall steatosis prevalence, and the distribution of steatosis severity.
To overcome these shortcomings, meta-analyses have been performed.
Controlled Attenuation Parameter (CAP): Predictive Value for Steatosis Severity in Meta-Analyses

The first published meta-analysis included nine studies with 11 cohorts, totaling 1771 patients with CLD of diverse etiologies [47]. The summary sensitivities and specificities were 0.78 and 0.79 for S ≥ 1; 0.85 and 0.79 for S ≥ 2; and 0.83 and 0.79 for S3, respectively. The HSROCs were 0.85 for S ≥ 1, 0.88 for S ≥ 2, and 0.87 for S3. The median optimal cut-off values of CAP for S ≥ 1, S ≥ 2, and S3 were 232.5 dB/m (range 214-289 dB/m), 255 dB/m (range 233-311 dB/m), and 290 dB/m (range 266-318 dB/m), respectively.

The second meta-analysis included 11 studies with 13 cohorts, all of high methodological quality, totaling 2076 patients with CLD of diverse etiologies [48]. The summary sensitivity, specificity, and AUC were 0.78, 0.79, and 0.86 for S ≥ 1; 0.82, 0.79, and 0.88 for S ≥ 2; and 0.86, 0.89, and 0.94 for S3, respectively. Significant heterogeneity was found among the studies for S ≥ 1 and S3. CAP cut-off values ranged from 214 to 289 dB/m (median 238 dB/m) for S ≥ 1, from 230 to 311 dB/m (median 259 dB/m) for S ≥ 2, and from 266 to 327 dB/m (median 290 dB/m) for S3.

Neither of these meta-analyses was able to provide optimized cut-offs with high predictive values, owing to the limitations of conventional meta-analysis and to the heterogeneity of the included studies, so a third meta-analysis was performed, this time using individual patient data from 19 studies, including 2735 CLD cases of various etiologies, with liver biopsy and CAP measurements [29]. The overall performance of CAP in this meta-analysis was as follows: for S ≥ 1, the calculated cut-off was 248 dB/m, with 0.68 sensitivity and 0.82 specificity (AUROC 0.82); for S ≥ 2, the calculated cut-off was 268 dB/m, with 0.77 sensitivity and 0.81 specificity (AUROC 0.86); and for S3, the calculated cut-off was 280 dB/m, with 0.88 sensitivity and 0.77 specificity (AUROC 0.88).
Another important finding of this last meta-analysis is that, among etiologies, only NAFLD seems to influence CAP values: NAFLD patients have higher CAP values (by about 10 dB/m) than patients with all other etiologies of CLD for the same grade of histologic steatosis [29]. Furthermore, it was calculated that BMI and the presence of diabetes mellitus also influence CAP values.

Finally, a recently published meta-analysis assessed only NAFLD patients (1297 subjects) evaluated by liver biopsy and CAP in nine studies [49]. The mean AUROC, pooled sensitivity, and pooled specificity were 0.96, 0.87, and 0.91 for diagnosing S ≥ 1; 0.82, 0.85, and 0.74 for S ≥ 2; and 0.70, 0.76, and 0.58 for S3, respectively. As observed in the individual studies (Table 1), in NAFLD patients the performance of CAP to diagnose steatosis severity decreases as steatosis progresses. No pooled cut-off values were calculated in this meta-analysis.
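The pooled cut-offs above can be turned into a simple grading rule. The sketch below is illustrative only and is not a clinical decision tool: it applies the 248/268/280 dB/m cut-offs from the individual patient data meta-analysis and, optionally, shifts them upward by 10 dB/m for NAFLD patients to reflect the higher CAP values reported for that etiology; the exact form of such an adjustment is an assumption.

```python
# Illustrative mapping from a CAP value (dB/m) to a steatosis grade using the pooled
# cut-offs cited above; the NAFLD shift is an assumed handling of the ~10 dB/m offset.
def steatosis_grade_from_cap(cap_db_m, nafld=False):
    """Return S0-S3 for a CAP value using cut-offs 248/268/280 dB/m (optionally NAFLD-shifted)."""
    cutoffs = {"S1": 248, "S2": 268, "S3": 280}
    shift = 10 if nafld else 0
    if cap_db_m >= cutoffs["S3"] + shift:
        return "S3"
    if cap_db_m >= cutoffs["S2"] + shift:
        return "S2"
    if cap_db_m >= cutoffs["S1"] + shift:
        return "S1"
    return "S0"

print(steatosis_grade_from_cap(275))              # -> S2
print(steatosis_grade_from_cap(275, nafld=True))  # -> S1
```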
Controlled Attenuation Parameter, Transient Elastography, and NAFLD/NASH
As mentioned before, the prevalence of NAFLD/NASH is increasing worldwide, and in the future it will be the main cause of liver-related morbidity and mortality. Considering the high number of patients and the fact that not all patients with NAFLD will develop NASH and liver-related events, it is not feasible to try to evaluate all of them by liver biopsy, hence the utility of noninvasive methods. As shown before, individual studies [20,24,30,32,[35][36][37][38][39] and meta-analyses [49] proved the value of CAP for diagnosing steatosis in patients with NAFLD/NASH, even if accuracy decreases with the severity of steatosis [49]. VCTE is the most validated elastographic method for fibrosis assessment in NAFLD/NASH. The cut-off values for different stages of fibrosis vary according to the probe used. For the XL probe (developed especially for obese patients), the cut-offs are as follows: 6.2 kPa for F ≥ 2, 7.2 kPa for F ≥ 3, and 7.9 kPa for F4 [50]. For the M probe, the cut-offs are as follows: 7 kPa for F ≥ 2, 8.7 kPa for F ≥ 3, and 10.3 kPa for F4 [51]. In a recent meta-analysis that included 854 NAFLD patients from eight studies, TE had 79% Se and 75% Sp for diagnosing F ≥ 2 and 85% Se and Sp for diagnosing F ≥ 3, while for cirrhosis the Se and Sp were 92% [52]. No cut-offs were provided. The accuracy of TE increases with the severity of fibrosis; thus, TE is a very good method to rule in and to rule out cirrhosis.
Final Considerations
The ideal diagnostic test should be accurate, available, noninvasive, feasible, inexpensive, and acceptable to the patient. All the data presented above suggest that CAP is a feasible test with good accuracy for the detection and quantification of hepatic steatosis, provided that clinical aspects such as BMI and the presence of diabetes mellitus and of NAFLD/NASH are taken into consideration. Regarding availability, the FibroScan® device is readily available in European countries such as France and even Romania, and, a few years ago, the FDA accepted it as a valuable tool to assess fibrosis in the United States. Since the examination is noninvasive and takes only a few minutes to perform, VCTE and CAP are well accepted by patients.
Thus, in some countries, VCTE and serologic markers have almost entirely replaced liver biopsy for fibrosis severity assessment [53]. Regarding costs, CAP measurement is included in the VCTE fibrosis assessment and is much less expensive than PDFF-MRI, even if with a small loss of accuracy.
Considering all of the above, the rise in NAFLD/NASH prevalence, and the impact of steatosis on the prognosis of CLD, CAP could be used as a screening tool in patients at risk for NAFLD/NASH (diabetics, obese patients, and patients with metabolic syndrome). Regarding the cut-offs to be used, those calculated by the Karlas meta-analysis seem the most robust, since they were derived from a large individual patient data meta-analysis and take into consideration factors known to influence CAP measurements [29]. The main advantages and weaknesses of CAP/VCTE are summarized in Table 2.
Conclusion
Controlled attenuation parameter is a valuable tool to detect hepatic steatosis in day-to-day clinical practice. Cut-off values of 248 dB/m, 268 dB/m, and 280 dB/m, corrected for BMI and the presence of co-morbidities, can be taken into consideration to diagnose S ≥ 1, S ≥ 2, and S3, respectively.
Lipoprotein (a) and Ultrasensitive C-Reactive Protein in Overweight Adolescents
Introduction: The more intense and early the development of obesity, the greater the risk of persistence and severity of co-morbidities, such as cardiovascular disease. There is evidence that high serum concentrations of lipoprotein (a) [Lp (a)] and C-reactive protein (CRP) are associated with increased risk of cardiovascular diseases. Objectives: To verify changes in the levels of Lp (a) and ultrasensitive CRP and their relationship with the nutritional status of adolescents. Methods: Cross-sectional study conducted with overweight children and adolescents attended at the Center for Childhood Obesity between August 2012 and July 2013. The measurement of inflammatory markers was performed in the Clinical Laboratory of the State University of Paraiba. Comparison of sociodemographic variables by sex was tested by chi-square; the association of the risk markers with age was evaluated by Student's t test, and their relationship with body mass index by Pearson correlation analysis. Normality of distribution was tested by the Kolmogorov-Smirnov test. A 95% confidence level was adopted in all analyses. The study was approved by the Ethics Research Committee of UEPB (CAAE 0256.0.133.000-11). Results: Of the 133 children and adolescents evaluated, 60.9% were female and 72.2% were adolescents. Body mass index, Lp (a), and u-CRP showed a statistically significant association with age (p < 0.01). There was a positive correlation (r = 0.273, p < 0.01) of u-CRP with BMI, which was not verified for lipoprotein (a). Conclusion: As a cardiovascular risk marker already established in the literature, the association of u-CRP with the nutritional status of adolescents shows the need for weight loss in this population, especially at an early age. Deeper, long-term investigation should be carried out for a more effective and consistent contribution to public health.
Introduction
Obesity is a multifactorial condition involving genetic and environmental components [1] that represents a public health problem, since it affects different populations regardless of stage of life and socioeconomic status [2]. In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) indicates that among children aged 5-9 years one in three is overweight and 14.3% are obese [3]. In the adolescent population, 19.4% of girls and 21.7% of boys present a similar diagnosis [4].
This physical condition is considered the most relevant nutritional problem among children [1], since individuals with excess fat in their body composition are more likely to develop chronic degenerative diseases [5]. The more intense and early the development of obesity, the greater the risk of persistence and severity of co-morbidities, such as cardiovascular disease, hypertension, diabetes, and some types of cancer [2]. These diseases used to be more frequent among adults but are now occurring increasingly early, and their identification is essential in the early stages of life [5].
There is a consensus that cardiovascular disease (CVD) has a multifactorial etiology including, in addition to lifestyle, atherosclerotic, prothrombotic, and inflammatory components. Thus, in addition to the evaluation of conventional risk factors, new markers have been explored in prospective observational studies in order to improve the capacity to predict the risk of cardiovascular events [6].
There is evidence that elevated serum lipoprotein (a) [Lp (a)] and C-reactive protein (CRP) concentrations are associated with increased risk of cardiovascular diseases, which qualifies them as potential risk markers [7].
Lp (a) is a plasma lipoprotein similar to the LDL (low density lipoprotein) particle, with an apolipoprotein B molecule (Apo B) and an additional protein, apolipoprotein A (Apo A). Recent studies have reported that Lp (a) is a stable risk marker of major forms of vascular diseases, with atherogenic and thrombotic properties [8].
Ultrasensitive CRP (u-CRP) is produced in the liver in response to stimulation by inflammatory cytokines. As an inflammatory marker, it has been widely used for the detection of CVD. Prospective studies have demonstrated that elevated u-CRP levels are associated with increased risk of several manifestations of CVD, including myocardial infarction, stroke, sudden death and systemic blood hypertension (SBH) [9].
Diagnosis and intervention during childhood and adolescence have been recommended to prevent the development of chronic diseases in adulthood [10], since studies show that obese children have at least twice the risk of developing obesity in adulthood compared to non-obese children [11]. Thus, this study aims to assess the prevalence of changes in the risk markers lipoprotein (a) and ultrasensitive C-reactive protein and their relation to the nutritional status of children and adolescents.
Study Location and Design
Cross-sectional study conducted at the Centre for Childhood Obesity (CCO) and Clinical Laboratory (LAC) of the State University of Paraiba, Campina Grande, PB, Brazil, between August 2012 and July 2013.
Population and Sample
The sample was composed of children and adolescents aged 2 - 19 years with a diagnosis of overweight or obesity attended at the CCO. Those using medications or presenting conditions that could compromise glucose or lipid metabolism, such as kidney or liver diseases, pregnancy, or the presence of inflammatory diseases, were excluded.
Study Variables
The recruited patients completed a checklist to verify the inclusion/exclusion criteria of the study, and parents/guardians signed the informed consent form for participation. The following variables were considered: age (classified into two age groups: 2 - 9 years and 10 - 19 years); sex; color (classified as white or nonwhite); income (in accordance with the minimum wage at the time of the study) and per capita income, considering the family members residing with the patient; and type of school (public or private). In addition to the socioeconomic and demographic information obtained through a questionnaire, anthropometry was performed (weight and height) for classification of nutritional status, and blood was collected for biochemical testing of Lp (a) and u-CRP.
A Welmy® digital scale with a sensitivity of 100 g and a stadiometer with a 1 mm scale were used. In addition to its evaluation as a continuous variable, body mass index (BMI) was used to determine nutritional status according to the z-score for age: overweight (+1 ≤ z-score < +2), obesity (+2 ≤ z-score < +3) and severe obesity (z-score ≥ +3). For participants over 18 years of age, the cutoff points for BMI (in kg/m²) were: overweight (25.0 ≤ BMI < 30.0) and obesity (BMI ≥ 30.0 kg/m²) [12]. For purposes of analysis, the obesity and severe obesity categories were grouped into a single category.
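For illustration, the classification rules above can be encoded in a small helper function. The sketch below is hypothetical Python, not part of the study's workflow; it simply applies the quoted cutoffs (BMI-for-age z-score below 18 years, absolute BMI from 18 years onward).

```python
def classify_nutritional_status(z_score=None, bmi=None, age_years=None):
    """Return the nutritional status category using the cutoffs quoted
    in the text.  Hypothetical helper for illustration only."""
    if age_years is not None and age_years >= 18:
        # Adults: absolute BMI cutoffs (kg/m^2)
        if bmi >= 30.0:
            return "obesity"
        if 25.0 <= bmi < 30.0:
            return "overweight"
        return "not overweight"
    # Children and adolescents: BMI-for-age z-score cutoffs
    if z_score >= 3:
        return "severe obesity"   # grouped with obesity in the analysis
    if 2 <= z_score < 3:
        return "obesity"
    if 1 <= z_score < 2:
        return "overweight"
    return "not overweight"

print(classify_nutritional_status(z_score=2.4))            # -> obesity
print(classify_nutritional_status(bmi=27.1, age_years=19)) # -> overweight
```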
Lipoprotein (a) was measured by the immunoturbidimetric technique using an in vitro lipoprotein (a) turbidimetric diagnostic kit. For interpretation of Lp (a), values above 30 mg/dL were considered elevated. Serum u-CRP was determined by chemiluminescence. The levels of this protein were classified as low cardiovascular risk for values < 1 mg/L, moderate risk for values between 1 and 3 mg/L, and increased risk for levels > 3 mg/L. Samples ≥ 10 mg/L were excluded as suggestive of an acute infectious or inflammatory process.
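The interpretation cutoffs for the two markers can likewise be captured in a few lines. The following is a hypothetical Python helper that applies the thresholds quoted above, including the exclusion rule for u-CRP ≥ 10 mg/L; it is illustrative only.

```python
def classify_markers(lp_a_mg_dl, ucrp_mg_l):
    """Classify Lp(a) and u-CRP using the cutoffs quoted in the text.
    Returns None for u-CRP when the sample would be excluded."""
    lp_a_status = "elevated" if lp_a_mg_dl > 30 else "normal"
    if ucrp_mg_l >= 10:
        ucrp_status = None                 # excluded: suggests acute process
    elif ucrp_mg_l > 3:
        ucrp_status = "increased risk"
    elif ucrp_mg_l >= 1:
        ucrp_status = "moderate risk"
    else:
        ucrp_status = "low risk"
    return lp_a_status, ucrp_status

print(classify_markers(45, 2.1))   # -> ('elevated', 'moderate risk')
```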
Data Collection Procedures
Data collection was performed after the clarification of all procedures adopted, including the need to fast for 12 hours prior to the day of blood collection. Parental consent was obtained by signing the Free and Informed Consent Form (ICF). Blood for the determination of lipids was collected in schools by specialized technicians on a previously scheduled day, always in the morning. Samples were processed and analyzed by an outsourced laboratory (enzymatic colorimetric method), hired for this purpose, with the Lab-SPC/ML quality control seal.
Data Processing and Statistical Analysis
For statistical analysis, data were described as means, standard deviations and frequencies, analyzed using the SPSS 22.0 program. The distribution of sociodemographic variables by sex was tested by chi-square; the association of risk markers with age by Student's t test; and the correlation with BMI by Pearson correlation analysis. The normality of the distribution was tested by the Kolmogorov-Smirnov test. A confidence interval of 95% was adopted in all analyses.
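As an illustration of this analysis plan, the sketch below reproduces the same battery of tests with pandas and scipy rather than SPSS; the DataFrame, variable names and simulated values are hypothetical stand-ins for the study's dataset.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated data standing in for the 133 participants (illustrative only).
rng = np.random.default_rng(0)
n = 133
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], size=n, p=[0.61, 0.39]),
    "school": rng.choice(["public", "private"], size=n, p=[0.66, 0.34]),
    "age_group": rng.choice(["2-9", "10-19"], size=n, p=[0.28, 0.72]),
    "bmi": rng.normal(28, 4, n),
    "ucrp": rng.gamma(2.0, 1.2, n),
})

# Chi-square: a sociodemographic variable (type of school) by sex
chi2, p_chi, _, _ = stats.chi2_contingency(pd.crosstab(df["sex"], df["school"]))

# Student's t test: risk marker (u-CRP) by age group
child = df.loc[df["age_group"] == "2-9", "ucrp"]
adol = df.loc[df["age_group"] == "10-19", "ucrp"]
t_stat, p_t = stats.ttest_ind(child, adol)

# Pearson correlation: u-CRP vs BMI
r, p_r = stats.pearsonr(df["ucrp"], df["bmi"])

# Kolmogorov-Smirnov normality check on standardized u-CRP
ks, p_ks = stats.kstest(stats.zscore(df["ucrp"]), "norm")

print(f"chi2 p={p_chi:.3f}  t-test p={p_t:.3f}  r={r:.3f} (p={p_r:.3f})  KS p={p_ks:.3f}")
```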
Ethical Aspects
Ethical standards for research with human beings, in accordance with the Declaration of Helsinki, were applied. The project was approved by the Ethics Research Committee of UEPB (CAAE: 0256.0.133.000-11).
Results
Of the total of 133 children and adolescents evaluated, 60.9% (n = 81) were female, 72.2% (n = 96) were adolescents, and 63.2% (n = 84) had an income below or equal to two minimum wages, with a per capita income of R$366.93. Sociodemographic variables are described in Table 1.
The prevalence of obesity was 80.5% (n = 107). Altered Lp (a) occurred in 42.1% (n = 56) of cases and altered u-CRP in 39.8% (n = 53). Tests with the clinical and biochemical variables (BMI, Lp (a) and u-CRP) stratified by sex were performed; however, no statistically significant differences (p > 0.05) were found. When these variables were assessed by age group, differences were found, as expected. Age, therefore, appears as an independent risk factor, even at early ages (Table 2).
Figure 1 and Figure 2 show the correlation of the two markers evaluated with body mass index. Contrary to expectations, lipoprotein (a) showed an inverse, although weak, correlation with BMI (r = −0.195, p = 0.03). This may have occurred due to the amplitude of the results (2 - 113 mg/dL), probably because of the wide age range, since both children and adolescents were evaluated. With respect to u-CRP, an ascending positive correlation with BMI was found (r = 0.273, p < 0.01).
Discussion
The prevalence of obesity during childhood and adolescence has increased rapidly in both developed and developing countries, reaching proportions considered epidemic [13]. This condition is a major risk factor for many health conditions, especially cardiovascular diseases, certain types of cancers and respiratory diseases, adversely affecting people's quality of life. In addition, obese children have a much higher risk of remaining obese in adulthood [14]. This has led to an increasing interest in research on risk markers that may predict possible future cardiac events, such as lipoprotein (a) and ultrasensitive C-reactive protein.
Similar to the findings of Nascimento et al. (2012) [15], who in a cross-sectional study found 54.7% females with a mean age of 11 years, among the 133 patients evaluated in this study there was a predominance of girls (60.9%) and of the adolescent age group (72.2%).
As for the skin color variable, 66.2% were nonwhite, corresponding to the findings of Wood et al. (2009) [16], in which the prevalence ranged from 54.8% in the overweight group to 63.3% in the obese group. Although in the present study this variable was not analyzed according to nutritional status, it appears that the prevalence remains high.
With regard to family income, only 33.8% of families reported two or more minimum wages. This value is lower than that obtained by Ramos et al. (2011) [17] in a study conducted in the city of Campina Grande in 2009, which showed 42.2% for this variable, a difference of 9.2% in economic condition.
Regarding the type of school, 66% reported public schools, corroborating the findings of Rossetti et al. (2009) [18], who identified an even higher percentage in their research, 80%. The fact that this study included children and adolescents attended at a public specialized service, with a predominance of low family income, may explain this high percentage.
The evaluation of socioeconomic and demographic variables stratified by sex showed no statistically significant difference. Similar results were observed by Costa et al. (2012) [19], who found no variation by gender.
BMI was associated with age; the same was not observed for lipoprotein (a) and u-CRP. Similarly, in the study by Khan et al. (2010) [20], average lipoprotein (a) was greater in adolescents, which may be due to the fact that body mass index was also higher in this population. u-CRP showed similar averages for both children and adolescents, corroborating the findings of Kitsios et al. (2009) [21]. In that case, a positive correlation between u-CRP levels and overweight and obesity in children and adolescents was also observed. The correlations observed for the markers investigated in this study differ from the studies conducted by Cordero et al. (2011) [22] and Nascimento et al. (2012) [15], which showed a significant correlation for both variables. As for u-CRP, the ascending correlation with BMI was similar to the results shown by Nascimento et al. (2012) [15].
Recent studies with similar populations have found associations between CRP and BMI. A study covering eight European countries in 2013 showed a correlation between these markers; even when stratified by sex, which differs from the present study, the p values were statistically significant (male: p = 0.000062; female: p = 0.001) [23]. Another study conducted in the same year, in the city of Rio de Janeiro (Brazil), evidenced a similar association between the same markers, with r = 0.51 and p < 0.0001 [24].
With respect to Lp (a), Cohen and Damasceno (2013), in a study conducted in São Paulo with 137 schoolchildren analyzing the apolipoprotein A and B molecules, plasma markers that comprise Lp (a), according to BMI classification, found that the obese group (n = 66) showed higher values than the normal weight group (n = 71) for Apo B (p < 0.01). The opposite profile was observed for the variable Apo AI (p < 0.01). However, when the sample was stratified by gender, it was found that in the eutrophic group the variable Apo AI (p = 0.027) and the Apo B/Apo AI ratio (p = 0.027) were significantly higher in males [25].
Conclusions
Lp (a) and u-CRP are risk markers for CVD and may be altered in early stages of life, as in the pediatric age group. These markers seem to be related to excess weight, with a high prevalence of alterations in the overweight population. Although Lp (a) did not show an increasing relationship with BMI, this result was observed for CRP. This reinforces, in both cases, the need for early detection and intervention.
Given the above, a deeper and long-term investigation to complement these results is needed for a more effective and consistent contribution to public health. Health professionals should be prepared to identify possible risk groups early and to treat them with greater attention. The prevention and treatment of obesity from childhood can aid in reducing the risk of developing chronic diseases in adulthood, thus improving the quality of life of this population.
Figure 2. Correlation between ultrasensitive C-reactive protein and body mass index in children and adolescents. Campina Grande, PB, 2012-2013.
Table 1. Sample distribution in terms of socioeconomic and demographic characteristics, according to sex. Campina Grande, PB, 2012-2013.
Table 2. Sample distribution of cardiovascular risk markers according to age. Campina Grande, PB, 2012-2013.
|
2018-12-05T22:40:02.492Z
|
2014-09-30T00:00:00.000
|
{
"year": 2014,
"sha1": "110b3904d9baf238f489857d056afdf86df195db",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=50815",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "110b3904d9baf238f489857d056afdf86df195db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235411615
|
pes2o/s2orc
|
v3-fos-license
|
Phenotypes of caregiver distress in military and veteran caregivers: Suicidal ideation associations
The United States (US) has been at war for almost two decades, resulting in a high prevalence of injuries and illnesses in service members and veterans. Family members and friends are frequently becoming the caregivers of service members and veterans who require long-term assistance for their medical conditions. There is a significant body of research regarding the physical, emotional, and social toll of caregiving and the associated adverse health-related outcomes. Despite strong evidence of the emotional toll and associated mental health conditions in family caregivers, the literature regarding suicidal ideation among family caregivers is scarce and even less is known about suicidal ideation in military caregivers. This study sought to identify clusters of characteristics and health factors (phenotypes) associated with suicidal ideation in a sample of military caregivers using a cross-sectional, web-based survey. Measures included the context of caregiving, physical, emotional, social health, and health history of caregivers. Military caregivers in this sample (n = 458) were mostly young adults (M = 39.8, SD = 9.9), caring for complex medical conditions for five or more years. They reported high symptomology on measures of pain, depression, and stress. Many (39%) experienced interruptions in their education and 23.6% reported suicidal ideation since becoming a caregiver. General latent variable analyses revealed three distinct classes or phenotypes (low, medium, high) associated with suicidality. Individuals in the high suicidality phenotype were significantly more likely to have interrupted their education due to caregiving and live closer (within 25 miles) to a VA medical center. This study indicates that interruption of life events, loss of self, and caring for a veteran with mental health conditions/suicidality are significant predictors of suicidality in military caregivers. Future research should examine caregiver life experiences in more detail to determine the feasibility of developing effective interventions to mitigate suicide-related risk for military caregivers.
Introduction
Military caregivers (family and friends assisting a service member or veteran with their activities of daily living (ADL) or the instrumental activities of daily living (IADL)) have always been part of the military community, but this role has received greater visibility for veterans of post-9/11 conflicts (i.e., Afghanistan/Iraq wars) due to the improved survivability of previously fatal injuries resulting from better protective gear and the long duration of the conflicts. Military caregivers are the first line of response to the long-term home care of veterans with war-related injuries. Due to the young age of post-9/11 military caregivers, the duration of caregiving could be decades, with the possibility of 50 years or more [1]. Thus, military caregivers of post-9/11 veterans may face unique challenges related to long-term health, financial, and social aspects of caregiving [2].
Theoretical models of caregiving such as the "Military and Veteran Caregiver Experience Map" [3] suggest that baseline characteristics influence the caregiver's ability to meet the new demands of caregiving, which may lead to caregiver stress/strain depending on the extent to which current identity and roles are altered by caregiving requirements. The Military and Veteran Caregiver Experience Map depicts the factors that contribute to the caregiver trajectory, the dynamics across a span of time, and events in the caregiving journey. Over time, the caregiver may shift priorities and seek help within current social/family circles or the healthcare system, and may develop new coping skills. When the caregiver is not able to shift priorities and/or obtain needed support within social relationships or the healthcare system, there is a negative impact on caregiver well-being, continued dysfunction, and diminished veteran, caregiver, and family function, which can lead to a negative impact on baseline characteristics and a negative spiral of health and well-being. When the caregiver is able to adapt/cope with new roles and responsibilities, there is a positive impact on veteran, caregiver, and family wellbeing and baseline characteristics with a positive trend for health and well-being.
Models of suicidality likewise suggest that suicide risk is a combination of stable and dynamic properties [4]. Suicide risk has stable characteristics that resist change over time (e.g., biological/ genetic characteristics) and dynamic characteristics that fluctuate in response to environmental and individual processes. According to the fluid vulnerability theory, suicidal behaviors emerge as a result of the interaction between dynamic and stable risk processes [4]. The fluid vulnerability theory suggests that there are factors that play an important role in the chronicity of suicide symptoms that are aggravated over time due to associated factors [5]. Given the stresses associated with caregiving (e.g., social isolation, lack of sleep, interrupting education, ending employment, changes in roles/identity), which may add to or accentuate the stable risk processes, suicidal ideation, attempt, and completion may be significantly elevated in caregivers. Indeed, more than three decades of research has shown that caring takes a significant toll on the physical health, mental health, social engagement, career prospects, sense of self, and financial security of family caregivers [6][7][8][9][10]. Estimates show that 40-70% of caregivers have clinically significant symptoms of depression, with approximately one quarter to one half of these caregivers meeting the diagnostic criteria for major depression [11]. Emerging literature has also found that among caregivers, significant risk factors for suicidal thoughts included being unemployed, living without a partner, having lower levels of social support, having a chronic physical disorder, a mood disorder or an anxiety disorder, and having impaired social, physical, and emotional functioning [12].
While prior studies have started addressing risk for self-harm such as suicidal ideation/attempt, to our knowledge none of the studies have addressed this serious issue in military caregivers, a population whose young age and lengthy duration as a caregiver may put the caregiver at significant risk of suicidal ideation/attempt due to changes in mental health, physical health, and roles/identity that occur after becoming a caregiver. To address this gap we identified phenotypes (clusters of symptoms/characteristics) of risk for suicidal ideation/attempt in a national sample of caregivers of wounded, ill, and injured veterans. We hypothesized that there would be two to four phenotypes (e.g., low-, medium-, high-risk) associated with measures of perceived stress, depression and, consistent with the fluid vulnerability theory, a higher loss of self in the caregiver since becoming a caregiver. We further hypothesized these phenotypes would be associated with suicidal ideation/attempt in military caregivers during the period since becoming a caregiver.
Methods
Upon approval by the Institutional Review Board (IRB) at The University of Texas Health Science Center at San Antonio (UT Health San Antonio), we convened a "Military and Veteran Caregiver Advisory Group" and engaged the support of national organizations serving military caregivers for distribution of our web-based survey. The Military and Veteran Caregiver Advisory Group consisted of ten caregivers caring for a wounded, ill, and injured veteran: spouses (4) between the ages of 30 and 60 years old; parents (3) older than 55 years old caring for a veteran of the wars in Iraq and Afghanistan (post-9/11); and spouses of veterans (3) of the wars before Iraq and Afghanistan (pre-9/11). Members represented the various characteristics of military caregivers (i.e., pre-and post-9/11, length of caregiving time, myriad of injuries) across the United States [2]. The advisory group provided recommendations for the survey that helped address the diversity of caregivers in the community and assisted with the testing of the web-based survey instrument. Goals of the instrument testing were to measure the average time of completion, content, language, and order of the questions/instruments. The advisory group also helped facilitate recruiting participants from interest groups related to the advisory group's caregiving focus. Caregivers responded to web-based links in social media or e-mail information from caregiver organizations. Caregivers were not compensated for their time. To increase the response rate, survey length was limited to allow for completion within 30 minutes.
Participants
A convenience sampling technique was used to collect data via the web-based survey from individuals who were: 1) 18 years or older; 2) self-identified as the caregiver of a wounded, ill or injured service member or veteran; and 3) proficient English speakers.
The only identifier collected was an e-mail address to provide a mitigation strategy for those who reported prior suicidal ideation (see below).
Sociodemographic characteristics. The survey collected information regarding general demographics (age, sex, race, and education), the number of years as a caregiver, and total number of children.
Context of caregiving. Participants were first asked to describe characteristics unique to military and veteran populations (e.g., number of deployments) followed by information about their caregiving situation, including characteristics of caregiving (time, tasks, and the veteran's medical conditions with which the caregiver assists) and compensation from the Veterans Affairs (VA) Caregiver Support Program. Participants were also asked to identify the types of conditions for which they provided care (e.g., amputation, burn, TBI, PTSD, depression).
Physical health. Physical health status was measured using the Quality of Life, General Health Questionnaire, also known as the VR-12 [13], which measures health-related quality of life in the domains of general health perceptions, physical functioning, role limitations due to physical and emotional problems, bodily pain, energy-fatigue, social functioning, and mental health. The VR-12 is calculated using an algorithm developed by Selim et al. [13] with normative values of 50 (SD 10) on each scale, with higher numbers indicating a more positive self-reported health status. Research has found that a one-point increase on the VR-12 is associated with lower health expenditures [14].
Pain was measured with the six-item Pain Impact Questionnaire (PIQ-6™ or "PIQ-6") [15], a patient-based assessment designed to measure pain severity and the impact of pain on an individual's health-related quality of life (HRQOL). Each item is rated on a six-point Likert scale ranging from "none" to "very severe." Scores on the PIQ-6 of 58 or higher indicate pain that should be assessed and treated by a medical professional, with scores 64 and higher indicating severe impact [16].
Participants were also asked about caregiver suicidal ideation using the question, "Since becoming a caregiver, have you thought of harming yourself or trying to take your own life?" based on the Assessment of Suicidal Ideation and Plan [20,21]. Participants who reported previous and current suicidal ideation were contacted by the research team by e-mail and provided a referral to mental health programs that serve military caregivers and a resource guide for additional services in the participant's community.
Social function/Well-being domain. The 16-item Caregiver Well-Being Scale short-form [22] measures basic needs (BN) and activities of daily living (ADL). The BN items represent biological, psychological, and social needs and the ADL items represent ways to meet these needs. Scores for BN and ADL each ranged from 0-5, with a higher score indicating better social function and well-being [22].
In addition, caregivers may experience a change in the way they perceive themselves socially, prompting the measurement of the participant's sense of self-loss. The Loss of Self instrument is a two-item questionnaire that measures the extent to which the caregiver reported a self-loss due to caregiving and engulfment resulting from being consumed by the caregiving role [23]. The two questions were: How much have you lost: a) a sense of who you are and b) an important part of yourself? Each item is measured with a four-point Likert scale ranging from "not at all" (1) to "completely" (4).
Caregiver health history. We identified previously diagnosed health conditions by asking caregivers to provide information about their health history and identify conditions for which they received a diagnosis by a healthcare provider during the period of time since beginning their caregiver role. We included the following medical conditions as part of the health history: anxiety, insomnia, autoimmune disorders, and migraines/headaches.
Data analysis
We conducted a descriptive analysis of the sociodemographic characteristics of the caregiver, context of caregiving, and health outcomes from each of the four domains, including the point prevalence of medical conditions reported by the caregivers at the time of the survey.
After examining the distribution of measures across the four domains (cognition, emotion, physiology, and behavior), we conducted general latent variable modeling (GLVM) to identify distinct caregiver distress phenotypes based on scores from self-report measures of caregivers (PSS, ADL, burn, PIQ, aLOS of self, bLOS of self, years of care, VR12) and conditions for which they provided care (ALS, depression, PTSD, suicide ideation, TBI, and trauma), adjusting for the covariates associated with the clusters.
GLVM is a robust parametric modeling technique used to identify distinct unobserved subgroups within a population based on mixed multivariate outcomes (continuous and categorical), such that individuals of the same latent class share a similar joint distribution of these outcomes. Each class identified by GLVM is characterized by a distinct pattern of means and variances associated with the continuous outcomes and frequencies associated with the categorical outcomes. The GLVM consists of: (i) pre-specifying the number of latent classes; (ii) conditioned on each latent class, modeling the joint distribution of continuous and categorical outcomes with varying model parameter estimates to differentiate between classes; and (iii) identifying predictors associated with class membership based on the modified Bolck, Croon, and Hagenaars method [24], which accounts for classification errors and corrects the underestimation of predictor effects. In the GLVM, covariates of class membership included distance from residence to VA care, interruption of education, and days per week of caregiving. Each GLVM was run using Mplus 8.2, allowing 20 different start values to ensure global maximization of the model estimates and stable results regardless of the start value. The best-fitting GLVM was identified primarily using the Bayesian information criterion (BIC): models with smaller BIC values indicate a better fit. The GLVM for this study explored two to four classes since the model fit did not improve nor converge when assuming four or more classes. In addition to goodness of fit, the clinical relevance of the models was evaluated, interpreting the meaning of classes as caregiver distress phenotypes.
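As a rough illustration of class enumeration by BIC, the sketch below fits two- to four-component Gaussian mixtures with scikit-learn and selects the solution with the smallest BIC. This is only an approximation of the Mplus GLVM described above: it handles continuous measures only and omits the categorical outcomes, the covariates, and the Bolck, Croon, and Hagenaars correction. The data are placeholders, not the study's measures.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: 458 caregivers by 8 continuous measures (e.g., stress,
# depression, pain, loss-of-self items, VR-12 components); values simulated.
rng = np.random.default_rng(42)
X = rng.normal(size=(458, 8))

fits = {}
for k in range(2, 5):                                    # 2- to 4-class solutions
    gm = GaussianMixture(n_components=k, n_init=20, random_state=0).fit(X)
    fits[k] = gm
    print(f"{k}-class solution: BIC = {gm.bic(X):.1f}")

best_k = min(fits, key=lambda k: fits[k].bic(X))          # smaller BIC = better fit
classes = fits[best_k].predict(X)                         # most likely class per caregiver
print("selected solution:", best_k, "classes")
```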
Logistic regression analysis then examined the odds of suicide ideation by caregiver distress phenotype, controlling for age, time (hours) providing daily care, interruption in education, and distance to the VA.
Results
Table 1 provides the descriptive characteristics of the sample. Of the participants that were screened (N = 502), 93% (n = 458) were eligible and completed the survey between April and August 2017. Approximately half (56.3%) were under the age of 40, caring for veterans of similar age (M = 40.8, SD = 9.9); most had at least one combat deployment (93.3%). The majority of the participants were Caucasian (84.4%); there were also Hispanics (8.5%), Asian/Pacific Islanders (1.8%), African Americans (1.8%), and 3.5% identified as other. Nearly a third of the participants had completed four or more years of college and 39% reported an interruption in their education due to their caregiving role, mostly while pursuing a bachelor's degree. On average, participants had been married for 12.5 years (SD = 8.5) and the majority (60.4%) had children 18 years old and younger. Most participants were the care recipient's spouse (Table 1). Over 60% of caregivers had been providing care for five or more years, with 14% providing care for over 11 years. The majority (60.9%) reported caring for veterans with five or more medical conditions. Caregivers experienced poor health-related quality of life as measured by the VR-12 [13] (Table 1). Pain was highly prevalent in the sample, with more than half meeting the criteria for pain requiring clinical intervention [16]. A large number of participants (84.1%) exhibited depression symptoms as measured by the PHQ-9, with 29.6% exhibiting moderate to severe depression [17]. Perceived stress [18] among caregivers was also high, and scores on measures of well-being and loss of self were low compared to population/prior sample norms [22,23]. Over half had been diagnosed with depression and nearly 30% had previously been diagnosed with insomnia or migraines/headaches. Table 1 also shows that the health conditions most common in care recipients were depression, chronic pain, post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), and suicidal ideation.
General latent variable model analysis
GLVM analysis found that among the GLVMs that converged, a three-class solution had the best fit based on the lowest BIC and clinically meaningful interpretation (
The three phenotypes identified by GLVM reflected high, medium, and lower distress. The high-distress phenotype (20% of the cohort) had the highest reported levels of stress, depression, pain, loss of self, and the highest prevalence of previously diagnosed anxiety, depression, insomnia, and migraines. The high-distress phenotype also reported the lowest levels of well-being and financial security. The medium-distress phenotype (55% of the cohort) had patterns similar to the high-distress phenotype, but with levels that were significantly less extreme than those of the high-distress phenotype. Finally, the low-distress phenotype (25% of the cohort) had the lowest (or highest for well-being/financial security) scores of all groups, except for reporting the highest probability of potentially high-risk alcohol use compared to those in the medium- and high-distress phenotypes. Caregiver distress phenotypes were also associated with the conditions for which care was provided, but the relationships were not as seemingly linear as was found for caregiver measures (Table 2). For example, caregivers in the low suicide risk behavior (SRB) phenotype had far lower probabilities of caring for depression and PTSD than the medium and high SRB risk phenotypes, respectively. Caring for a veteran with suicidal ideation was one variable, however, that had a more linear relationship with probabilities in the medium and high SRB risk phenotypes. Conversely, the probability of caring for more physically focused (visible) conditions (e.g., ALS, amputation/burn injuries) was significantly higher for the low-distress phenotype compared to the medium- and high-distress phenotypes. Covariates significantly associated with the high-distress phenotype were interruption of education and shorter travel distance to a VA medical facility. The distress phenotypes of caregivers that reported being part of the VA caregiver program were not different, X2(2, N = 456) = 5.90, p = .06, from those who were not part of the program.
Based on the Bolck, Croon, and Hagenaars method [24], the estimated proportions of caregiver suicidal ideation were 6%, 20%, and 50% for the low-, medium-, and high-risk phenotypes, respectively (p-value < 0.001 based on chi-square test). Logistic regression analysis adjusted for covariates found that individuals in the medium- and high-distress classes were significantly more likely than those in the low-distress class to report prior suicidal ideation (adjusted odds ratios [
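A minimal sketch of the kind of covariate-adjusted logistic regression described above is given below, using statsmodels with simulated data; the column names, reference coding, and values are hypothetical stand-ins, and exponentiated coefficients are read as adjusted odds ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated caregiver-level data; in practice this would hold the assigned
# distress class plus the covariates listed in the text.
rng = np.random.default_rng(1)
n = 458
df = pd.DataFrame({
    "phenotype": rng.choice(["low", "medium", "high"], size=n, p=[0.25, 0.55, 0.20]),
    "age": rng.normal(40, 10, n),
    "daily_care_hours": rng.integers(1, 16, n),
    "education_interrupted": rng.integers(0, 2, n),
    "within_25_miles_of_va": rng.integers(0, 2, n),
})
base_rate = df["phenotype"].map({"low": 0.06, "medium": 0.20, "high": 0.50})
df["suicidal_ideation"] = rng.binomial(1, base_rate)

# Low-distress class as the reference level for the phenotype factor.
model = smf.logit(
    "suicidal_ideation ~ C(phenotype, Treatment('low')) + age "
    "+ daily_care_hours + education_interrupted + within_25_miles_of_va",
    data=df,
).fit(disp=False)

summary = pd.DataFrame({
    "adjusted_OR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
})
print(summary.round(2))
```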
Discussion
Prior studies have described the association of caregiving with individual symptoms such as depression, anxiety, and hopelessness; and these characteristics have all been linked to suicide ideation and suicide-related behaviors in the general caregiver population [12,25,26]. This study is the first, to our knowledge, to develop computational phenotypes of caregiver distress using diverse measures of physical, social, and emotional well-being. We found high-, medium-, and low-distress phenotypes that were significantly associated with suicide ideation since becoming a caregiver. The GLVM examining the caregiving context, mental health, physical health, and social function/well-being revealed three distinct phenotypes that were strongly associated with suicidal ideation. These caregiver distress phenotypes were characterized by seemingly linear associations in risk predictors such that the high-distress phenotype exhibited significantly worse emotional outcomes, followed by medium-risk and low-risk. Studies have shown that the emotional toll of caregiving, especially for those caring for mental health conditions, significantly increases the caregiver's risk for suicidal behavior [26][27][28]. The high prevalence of mental health conditions, polytrauma, and comorbidities among veterans may worsen the mental health-related outcomes in their caregivers. In fact, caregivers assisting individuals with mental health conditions or psychiatric disorders are more prone to experiencing a negative impact on their emotional well-being [28,29]. This was evident in the medium- and high-distress phenotypes showing a high probability of suicide ideation among caregivers of veterans with behavioral health conditions. The long-term care of veterans with complex medical conditions and mental health comorbidities may pose additional risks for suicidal thoughts among their caregivers. Suicidal ideation rates in this sample of military caregivers were higher than rates in other published studies measuring suicide-related behavior among family caregivers [26,30,31].
There is limited existing literature on suicide in caregivers in the general population, and it primarily focuses on caring for a loved one with mental health conditions, Alzheimer's disease, or dementia. When compared with studies addressing caregiving and suicide in the general population, this sample of military caregivers exhibited higher rates of suicidal ideation. This marked difference could be attributed to various factors, including that military caregivers perform a higher complexity of care at a younger age, during the prime of their adulthood [32][33][34]. We recommend additional epidemiological studies to identify and fully understand caregiving and suicide in military and civilian caregivers. When compared to other studies that examined suicide-related behavior among family caregivers, anxiety and depression were also contributors to the worsening of the caregiver's emotional well-being. Similar findings were reported in studies of caregivers assisting family members with dementia, cancer, and disability [25,31]. Our sample exhibited a higher risk of suicidal ideation among those with depression and perceived stress. Additionally, unemployment and lack of social support are factors associated with suicide ideation in caregivers [12].
Often, when a person acquires a disability during adulthood, some years of productive work are lost. Similarly, when a family member assumes the role of caregiver, a sudden interruption of the caregiver's education and/or career goals can occur, with a resulting loss of productive years. Post-9/11 caregivers, such as many of those in this study's sample, were young to middle-aged adults, many of whom stopped their education and/or their professional endeavors to care for their veteran. This study found that individuals in the high- and moderate-distress phenotypes had very high odds of suicide ideation since beginning the caregiving role. Studies of the long-term impact on productivity, health, and adverse outcomes such as suicide ideation are needed. However, future research should quantify the possible impact of caregiving using an approach similar to the disability-adjusted life years (DALY), which represents the gap between the actual health status and an ideal health situation where the entire population lives to its life expectancy with no disease and disability [35][36][37].
Loss of self has been studied in the context of the patient, with some evidence of its impact in cancer survivors [38] and chronically ill patients [39] and a strong association with self-esteem, but loss of self has rarely been examined in caregiver research. Extant studies have found loss of self mostly among caregivers that are spouses, younger, and women [23]. Factors like interruption in career and school, resulting from engulfment in the caregiver role, may be contributors to this loss of sense of identity. This study found that the high- and moderate-distress phenotypes had scores indicating the highest loss of self, the highest probability of interrupting education to provide care, and the highest odds of suicide ideation. It is possible that the complex interaction of interrupting education, loss of self, and possible changes in the relationship are associated with a mismatch between the actual and ideal self, which is then associated with ambiguous loss or alteration in identity [32][33][34][40].
While research has examined the concept of ambiguous loss [40][41][42] among care recipients that have suffered an injury or illness, the self-loss of the caregiver and its association with their health and well-being merits further exploration. Ambiguous loss explains the grieving process of family caregivers of people with dementia, Alzheimer's disease, TBI, and other neurological disorders [43,44], where the changes and perceived loss concern the care recipient rather than a loss pertaining to the caregiver's own essence and identity. Additional research is needed to understand the complex relationships and mediating factors that contribute to changes in identity and health outcomes in a population of military caregivers. Research is also needed to identify the most appropriate interventions and timing of those interventions to optimize the health and well-being of this population of caregivers.
The travel time to the nearest VA facility and its association with the high-distress phenotype may relate to confounding by indication, in that people caring for veterans with more severe mental health symptoms (e.g., depression/suicidality) may want to live closer to a VA to make it easier to obtain urgent care from clinicians who are familiar with their care rather than a community emergency room where continuity of care may not be available. The increased burden of this care was demonstrated in the distress phenotypes, which is one possible explanation for this finding (this finding is further discussed in the methods section). Why caregivers in the low-distress group were more likely to report risky alcohol use is less clear. We hypothesize that alcohol use may be a primary stress management approach that leads to less perceived stress, fewer depressive symptoms, and lower loss of self in addition to lower risk for suicide ideation or attempt.
The limitations of this study included a convenience sample with self-reported measures. The potential selection bias that resulted from the study eligibility requirement for participants to self-identify as a caregiver was mitigated by the development of an internal algorithm that evaluated the responses provided and compared them to unique characteristics meeting the definition of caregiving. The participants of this study self-identified as a military and veteran caregiver. Each survey was evaluated based on the information provided regarding the caregiver role, performed tasks, and period of time of caregiving, as well as the care recipient's military service and medical history. These variables served as verification of caregiving status, specifically as a caregiver of a wounded, ill, and injured service member or veteran. In addition, a comparison of the characteristics in this study's sample to those reported by the RAND study [2] (average age, education, and health-related outcomes such as mental health symptoms of depression, stress, and anxiety) suggests that they are remarkably similar. To date, the RAND study is the only population-based data source on military and veteran caregivers, a group that has been established in the literature as significantly different from caregivers in the general population [45]. However, the RAND study did not study suicide ideation among caregivers; and with no other studies exploring the topic of suicidal ideation in military and veteran caregivers, we cannot compare this study's results with a similar sample of caregivers. Minority groups in the sample of this study were underrepresented. This is a limitation that could have resulted from various factors, including recruitment strategies. This limitation may play a role in the rate of behavioral health conditions reported in the sample, which are typically higher in minority groups. The study did not collect information regarding the caregiver's pre-existing conditions, especially those associated with behavioral health that could have an association with suicidal ideation.
This study's findings identify a public health concern and a call to action. Suicide is the tenth leading cause of death in the US, and suicide among veterans is already a public health problem. This study also identifies significant suicide risk for military caregivers. Presently, there are no registries or efforts to monitor suicide among military families. Military caregivers of the latest era of war are younger, in the prime of their adult life, not yet at the age of retirement, and not yet at the age when a caregiver role is typically assumed, which usually coincides with retirement. As a result, the emotional, physical, and social toll of caregiving in this group of younger caregivers may last for decades. A healthy and productive life while being a caregiver may require significant support from programs and policies that can strengthen the health and well-being of caregivers, with an emphasis on career progression. To date, few programs address suicide in military and veteran caregivers. This study sheds light on some of the factors and caregiver characteristics associated with suicidal ideation, a contribution we hope can be incorporated into future prevention initiatives. This study suggests that education interruption, loss of self, and the stress of caregiving are important contributors that warrant further evaluation.
|
2021-06-13T06:16:29.171Z
|
2021-06-11T00:00:00.000
|
{
"year": 2021,
"sha1": "a9ddd938889a0b4c6f27e5e183408aaff1ed1893",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0253207&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adaa9ed9bc0d92712cb7d920efa2227220d96fe7",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17603324
|
pes2o/s2orc
|
v3-fos-license
|
Transcript Dynamics at Early Stages of Molecular Interactions of MYMIV with Resistant and Susceptible Genotypes of the Leguminous Host, Vigna mungo
Initial phases of the MYMIV-Vigna mungo interaction are crucial in determining the infection phenotype upon challenge with the virus. During incompatible interaction, the plant deploys multiple stratagems, including extensive transcriptional alterations, to defy the virulence factors of the pathogen. Such molecular events are not frequently addressed by genomic tools. In order to obtain a critical insight into how V. mungo responds to Mungbean yellow mosaic India virus (MYMIV), we have employed the PCR-based suppression subtractive hybridization technique to identify genes that exhibit altered expression. The dynamics of 345 candidate genes that were differentially expressed in either compatible or incompatible reactions are illustrated, and their possible biological and cellular functions are predicted. The MYMIV-induced physiological aspects of the resistant host include reactive oxygen species generation, induction of Ca2+ mediated signaling, and enhanced expression of transcripts involved in the phenylpropanoid and ubiquitin-proteasomal pathways; all these together confer resistance against the invader. Elicitation of genes implicated in the salicylic acid (SA) pathway suggests that the immune response is under the regulation of SA signaling. A significant fraction of the modulated transcripts are of unknown function, indicating the participation of novel candidate genes in restricting this viral pathogen. Susceptibility, on the other hand, as exhibited by V. mungo cv. T9, is perhaps due to poor execution of these transcript modulations, with remarkable repression of photosynthesis-related genes resulting in chlorosis of leaves followed by a penalty in crop yield. Thus, the present findings reveal insights into the molecular warfare during host-virus interaction, suggesting plausible signaling mechanisms and key biochemical pathways overriding MYMIV invasion in the resistant genotype of V. mungo. In addition to expanding the existing knowledge base, the genomic resources identified in this orphan crop would be useful for integrating the MYMIV-tolerance trait into susceptible cultivars of V. mungo.
Introduction
Plant viral pathogens exhibit an obligate intracellular mode of parasitism that manipulates host resources for their own survival and proliferation [1]. Due to their limited genetic capability, these pathogens entirely depend on the host's machinery to complete their life cycle. In doing so, they utilize diverse stratagems to conquer the host, upsetting the host's immune response by escaping its surveillance [2,3]. Disease manifestation is therefore an organized consequence of pathogen establishment and emergence of disease symptoms in the compatible host. In contrast, incompatible response is a different phenomenon altogether. Here, resistant hosts are equipped with a repertoire of specialized armours that recruit a series of coordinated cellular processes to counter infection by impairing the pathogenic effectors. Subsequently, a cascade of signaling events is elicited that culminates in the formulation of appropriate responses enabling the host to defend against the potential invader.
The key aspect of mounting an effective response relies on the timely perception of the intruder [4,5]. In plants, this process is unique and achieved by recognition of pathogen-associated molecular patterns (PAMPs) by host encoded pattern recognition receptors [6,7]. As a consequence, a multitude of reactions is initiated, including rapid ion fluxes, generation of reactive oxygen species (ROS), reinforcement of cell walls, transcriptional activation of pathogenesis-related (PR) genes and synthesis of antimicrobials [8]. Depending on the type of interaction, signal molecules like salicylic acid (SA), jasmonic acid and ethylene are generated concomitantly. Accordingly, two hallmarks of resistance are activated: the hypersensitive reaction (HR), or localized cell death in the vicinity of attack, and systemic acquired resistance (SAR) in distal parts of the host. In addition, resistant plants are also furnished with a gene-for-gene recognition system for those pathogens that suppress PAMP-triggered immunity. This reaction is mediated by a receptor-ligand interaction involving host resistance (R) proteins and the pathogenic effector molecules, thereby neutralizing the activity of the effectors; this is known as effector-triggered immunity [7,9].
Mungbean yellow mosaic India virus (MYMIV), classified within the Geminiviridae, draws particular attention as it infects several legume species, inciting the yellow mosaic disease (YMD). YMD inflicts heavy yield reduction at high infection rates in all blackgram (V. mungo) producing areas but is exceptionally devastating in the South-East Asian countries [10]. The disease forms large-scale epidemics under optimal environmental conditions, imposing 20-80% loss in harvest. Successful pathogenesis is accompanied by a high degree of morphological and physiological alterations, resulting in the appearance of bright yellow chlorotic spots on infected leaves. Enhancement of genetic resistance is an aim in any crop improvement programme and can be accomplished by acquiring knowledge on the genetic basis of resistance. Most of the previous studies focused mainly on the inheritance pattern of MYMIV-resistance [11,12] and its introgression to achieve durable resistance in V. mungo [10]. Possible involvement of the R-gene in V. mungo defence has also been investigated, focusing on the molecular basis of recognition of the MYMIV effector coat protein [13]. Nevertheless, mounting durable resistance not only necessitates introgression of the R-gene but also entails inclusion of other sources of horizontal resistance to broaden the genetic competence to restrict pathogen invasion [14,15]. Such non-race-specific resistance is governed by several interacting genes that act in concert, limiting the pathogen at the site of infection. Use of this strategic control based on horizontal resistance relies on prior knowledge of the genes associated with the resistance mechanism. However, no information was available that could shed light on the immune response of V. mungo in general.
Previously we had undertaken a time course proteomic study to identify the differentially expressed proteins during MYMIV-V. mungo interactions [16]. The study demonstrated biochemical and proteomic alterations associated with YMD to comprehend the protein modulations that commence upon viral infection in resistant and susceptible backgrounds. However, that study was conducted at later stages of infection (3, 7 and 14 dpi); hence it is difficult to speculate on the early events that govern MYMIV-resistance or susceptibility. As the molecular controls of such complex interactions are largely unknown, one approach should be the exploration of post-infection transcriptional modulations occurring in the host system. Against this backdrop, the technique of suppression subtractive hybridization (SSH) was adopted in this investigation to compare the patterns of infection-responsive reactions during the early phases of plant-pathogen interactions. The present study illustrates transcriptional reprogramming as an indicator of host immune responses, demonstrating a cascade of signaling events that allow rapid switching to a defence mode, deciding the phenotype upon MYMIV inoculation.
Plant material and growth conditions
MYMIV-resistant Vigna mungo VMR84 and the susceptible cultivar T9 were selected based on their contrasting responses to MYMIV. T9 is a high-yielding cultivar of V. mungo, characterized by its wide adaptability and high sensitivity to YMD [10]. VMR84 is a MYMIV-resistant recombinant inbred line, genetically related to T9 and having superior yield performance [17]. Seeds, after disinfection (0.1% HgCl2 for 5 min and thorough washing in distilled water), were sown in sterile Soilrite soil mix (a mixture of peat, vermiculite and perlite) and grown under glasshouse conditions. Growth conditions were adjusted to 25±1°C with a 16/8 h light/dark photoperiod (light intensity of 500 μmol m−2 s−1) and 80% relative humidity. After roughly 21 days of luxuriant growth (until the first trifoliate leaf had expanded completely), young plants were subjected to MYMIV-stress treatments.
Pathogen inoculation procedures and evaluation of MYMIV accumulation
To impart MYMIV-stress, Bemisia tabaci (whitefly) populations were reared on susceptible V. mungo plants maintained within insect-proof cages in the insectary facility at the Madhyamgram Experimental Farm (MEF), Bose Institute (BI), Kolkata, India (22°40′N, 88°27′E; altitude 9 m). Approximately 25-30 adult flies were confined in each glass trapper and allowed an acquisition access period of 24 h on symptomatic leaves of naturally infected plants to ascertain their viruliferous nature. Subsequently, the trappers, along with the viruliferous flies, were attached to the first trifoliate leaf of healthy V. mungo plants and allowed a 24 h inoculation access period. During this period virus particles were transmitted from the guts of the viruliferous flies to the phloem cells of the inoculated plants. First trifoliate leaves of both genotypes were mock inoculated with aviruliferous vectors and kept in separate glass cages under identical conditions. Pathogen proliferation in inoculated leaf samples was assessed by PCR amplification of a 575 bp DNA fragment (Accession no. HQ221570) encoding a part of the MYMIV coat protein and later confirmed visually by the appearance of yellow mosaic symptoms on the inoculated leaves of the susceptible genotype. Samples were harvested at 3, 6, 9, 12, 18, 24, 36 and 48 hours post inoculation (hpi) and immediately frozen in liquid nitrogen for RNA isolation. In this experiment, biological replicates of three infected and three mock-inoculated samples were employed for each time point. TNA (total nucleic acid) was extracted following the modified CTAB method [10]. Relative accumulation of MYMIV was evaluated by qPCR following the method of García-Neria and Rivera-Bustamante [18]. Three biological replicates were carried out to determine the viral titer in the inoculated leaves of the susceptible and resistant hosts, as stated by Boyle et al. [19]. Primers for the MYMIV coat protein (CP) gene (CP-F: 5′-GAAACCTCGGTTTTACCGACTGTATAG-3′ and CP-R: 5′-TTGCATACACAGGATTTGAGGCATGAG-3′) were used as an indicator of MYMIV load, and the actin gene was used for data normalization.
RNA isolation, mRNA extraction and cDNA synthesis
Total RNA was isolated from mock-inoculated and infected leaf tissues using the Trizol reagent (Invitrogen, Carlsbad, CA), treated with DNase-I (Sigma-Aldrich, USA) to eliminate traces of genomic DNA, and purified using the RNeasy Plant Mini Kit (Qiagen, USA) following the manufacturer's instructions. Integrity of the isolated RNA samples was assessed by agarose gel electrophoresis, and the purity and quantity of individual samples were determined spectrophotometrically (NanoDrop 1000 Spectrophotometer, Thermo Scientific, USA). Subsequently, an equal amount of total RNA (50 μg from each time point) was pooled for each sample and used as the starting material for mRNA extraction. mRNA was purified from the pooled total RNA samples using the NucleoTrap mRNA Mini kit (Macherey-Nagel, Germany).
Construction of SSH library and EST-sequencing
Experiments were conducted using the experimental design outlined in Fig 1 to identify genes differentially expressed during incompatible and compatible reactions. Leaves, being the feeding sites of whiteflies, are considered the primary site of MYMIV perception, where a signaling cascade initiates the expression of genes in response to the recognition of the foreign intruder. Two V. mungo cultivars, T9 and VMR84, were selected for this purpose on the basis of their contrasting responses to MYMIV. Both genotypes were artificially challenged with MYMIV and subtracted in both directions from the respective mock controls, generating forward and reverse SSH libraries for each genotype following the method given below.
Pooled mRNA from MYMIV- or mock-infected T9 and VMR84 was reverse transcribed to double-stranded cDNA using the SMART PCR cDNA synthesis kit (Clontech, USA) as per the manufacturer's protocol. The obtained cDNA libraries were then subtracted using the PCR-Select cDNA subtraction kit (Clontech, USA) to compare the transcript samples of T9 and VMR84. Consequently, four libraries were generated by reciprocal cDNA subtractions, i.e., in both forward and reverse directions. The forward and reverse libraries were obtained by subtracting the mock-inoculated controls (driver cDNA) from the MYMIV-inoculated samples (tester) and vice versa, respectively, following the manufacturer's instructions. All PCR amplification reactions were carried out using the Advantage 2 polymerase mix (Clontech, USA). The subtracted cDNA sequences were non-directionally ligated into the pGEM-T Easy vector (Promega, USA) and transformed into competent E. coli DH5α cells. Selection of the transformed clones was based on blue-white screening. The transformed white colonies were picked from an initial Luria agar plate [containing Ampicillin (50 μg/ml), X-gal (20 μg/μl) and IPTG (0.1 mM)], streaked individually on another culture plate and grown overnight at 37°C. After an initial screening through colony PCR, plasmids from the positive clones were recovered using the Plasmid Mini kit (Qiagen). Single-pass Sanger sequencing was performed on an ABI Prism 3100 automated DNA sequencer using 25-50 ng of plasmid DNA template and the universal sequencing primers T7 and SP6.
Sequence processing and EST annotation
EST sequences were trimmed from both sides to remove vector contamination and adapter sequences using the web-based application NCBI VecScreen (http://www.ncbi.nlm.nih.gov/VecScreen/VecScreen.html). ESTs from each library were assembled into contigs using the CAP3 assembly programme (http://pbil.univ-lyon1.fr/cap3.php) with default parameters. Only non-redundant sequences greater than 100 bp were included to produce a differentially expressed unigene dataset.
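As an illustration of the length filter applied to the assembled unigene set, the following is a minimal Python sketch (using Biopython); the file names are hypothetical placeholders, not the actual files used in this study.

```python
from Bio import SeqIO  # Biopython

MIN_LEN = 100  # length cutoff (bp) used for the unigene dataset

def filter_unigenes(in_fasta, out_fasta, min_len=MIN_LEN):
    """Keep only assembled sequences longer than min_len bases."""
    kept = [rec for rec in SeqIO.parse(in_fasta, "fasta") if len(rec.seq) > min_len]
    SeqIO.write(kept, out_fasta, "fasta")
    return len(kept)

# Hypothetical file names, for illustration only
n = filter_unigenes("cap3_contigs_and_singletons.fasta", "unigenes_gt100bp.fasta")
print(f"{n} unigenes retained")
```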
EST sequences were annotated by homology comparison against the non-redundant NCBI databases using the BLASTn and BLASTx algorithms to assign putative functions [20]. Similarity to an annotated sequence was considered significant at an E-value less than 1 × 10⁻⁵. Sequences were also compared using the TBLASTX programme of the Dana-Farber Cancer Institute (DFCI) plant gene indices. Functional categorization of the ESTs was done manually according to the functional catalogue (FunCat) of the Munich Information Center for Protein Sequences (MIPS). The EST sequences obtained were submitted to the EST database (dbEST) of the NCBI (http://www.ncbi.nlm.nih.gov/dbEST/) in compliance with the GenBank guidelines.
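The E-value cut-off used for annotation can be illustrated with a short sketch that parses tabular BLAST output and retains the best hit per query below 1 × 10⁻⁵; the input file name is a hypothetical placeholder and this is not the exact pipeline used in the study.

```python
import csv

E_CUTOFF = 1e-5  # significance threshold used for annotation

def best_hits(blast_tab):
    """Parse tabular BLAST output (e.g. blastx run with -outfmt 6) and keep
    the best hit per query with an E-value below the cutoff."""
    best = {}
    with open(blast_tab) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject, evalue = row[0], row[1], float(row[10])  # col 11 is E-value
            if evalue < E_CUTOFF and (query not in best or evalue < best[query][1]):
                best[query] = (subject, evalue)
    return best

# Hypothetical file name, for illustration only
annotations = best_hits("unigenes_vs_nr.blastx.tab")
```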
qPCR analysis of selected genes
Real-time qPCR was conducted to compare the expression patterns of selected genes during compatible and incompatible interactions. Total RNA was isolated in three independent biological replicates from leaves of MYMIV- and mock-inoculated plants at 6, 12, 24, 36 and 48 hpi. DNase-treated total RNA was reverse transcribed to first-strand cDNA using the RevertAid first strand cDNA synthesis kit (Fermentas, Canada) following the manufacturer's instructions. Gene-specific primers for 15 differentially regulated transcripts were custom designed using Primer3Plus with a GC content of 55-60%, a Tm >50°C, primer lengths ranging from 18-22 nucleotides and an expected amplicon size of 150-250 bp (S1 Table). Each 20-μl reaction comprised 0.2 μM forward and reverse primers and cDNA synthesized from 50 ng of total RNA. qPCR reactions were carried out with SYBR Advantage qPCR Premix (2X) (Clontech, USA) in a BioRad iQ5 quantitative real-time PCR system (Bio-Rad, USA) under the following conditions: an initial denaturation at 95°C for 30 sec followed by 40 cycles of 5 sec at 95°C and 30 sec at 60°C. On completion of each run, a dissociation curve analysis was performed to check the specificity of the primers by heating the samples from 65°C to 95°C in increments of 0.5°C, each lasting 5 s. All reactions were carried out in triplicate, including three non-template controls. The obtained threshold cycle (Ct) values were normalized against the reference gene actin [21] and relative fold changes were calculated by the comparative 2⁻ΔΔCt method [22].
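For readers unfamiliar with the comparative 2⁻ΔΔCt calculation [22], the following minimal Python sketch shows the arithmetic with purely illustrative Ct values (not measured data).

```python
def fold_change(ct_target_infected, ct_actin_infected,
                ct_target_mock, ct_actin_mock):
    """Relative expression by the comparative 2^-ddCt method:
    dCt = Ct(target) - Ct(actin); ddCt = dCt(infected) - dCt(mock)."""
    d_ct_infected = ct_target_infected - ct_actin_infected
    d_ct_mock = ct_target_mock - ct_actin_mock
    dd_ct = d_ct_infected - d_ct_mock
    return 2 ** (-dd_ct)

# Illustrative Ct values only
print(fold_change(22.1, 18.0, 25.4, 18.1))  # fold induction in infected vs mock
```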
DAB staining of infected leaves
Hydrogen peroxide accumulation in inoculated V. mungo leaves was analyzed using the 3,3′-diaminobenzidine tetrahydrochloride (DAB) staining method according to Schraudner et al. [23]. DAB polymerizes in the presence of H2O2 and generates a brown colouration. Fully expanded leaves of both genotypes (VMR84 and T9) were harvested at 0, 12, 24 and 48 h after challenge with MYMIV. Excised leaves were vacuum infiltrated with 1 mg ml⁻¹ DAB staining solution, pH 3.8, for 5 min. The vacuum was released gently and the procedure was repeated 2-3 times until the leaves were completely infiltrated with the solution; the leaves were then kept in a plastic box for 5-6 h under high humidity until reddish brown precipitates were observed. Chlorophyll was removed by heating with 96% (v/v) ethanol at 40°C. DAB-stained leaves were then fixed with a 3:1:1 solution of ethanol:lactic acid:glycerol and photographed.
H2O2 in DAB-infiltrated leaves was quantified according to Kotchoni et al. [24]. After grinding the DAB-stained leaves in liquid nitrogen, 0.2 M HClO4 was added and the homogenate was centrifuged for 15 min at 12,000g. The DAB reaction product was determined spectrophotometrically by measuring the absorbance at 450 nm. The obtained readings were compared against a standard curve generated from known amounts of H2O2 in 0.2 M HClO4-DAB and expressed as mmol g⁻¹ FW. Experiments were repeated three times on 3 individual plants.
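The conversion of A450 readings to H2O2 content via a standard curve can be sketched as follows; the standard-curve values below are hypothetical and serve only to illustrate the calculation, not to reproduce the measurements of this study.

```python
import numpy as np

# Hypothetical standard-curve data: known H2O2 amounts (mmol) vs A450
std_h2o2 = np.array([0.0, 2.0, 4.0, 6.0, 8.0])      # mmol
std_a450 = np.array([0.02, 0.18, 0.35, 0.52, 0.70])  # absorbance at 450 nm

slope, intercept = np.polyfit(std_a450, std_h2o2, 1)  # linear standard curve

def h2o2_per_gram(a450_sample, fresh_weight_g):
    """Convert a sample A450 reading to mmol H2O2 per gram fresh weight."""
    return (slope * a450_sample + intercept) / fresh_weight_g

print(h2o2_per_gram(0.62, 1.0))  # e.g. extract prepared from 1 g FW of leaf
```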
Lipid peroxidation assay
Lipid peroxidation was assayed spectrophotometrically by quantifying malondialdehyde (MDA), the end product of lipid peroxidation, as reported by Cakmak and Horst [25]. Briefly, 100 mg leaf tissue was homogenized and extracted with 20 ml of 0.1% TCA solution and the extract was centrifuged for 10 min at 12,000g at 4°C. One ml of the recovered supernatant was then incubated with 4 ml of 20% TCA containing 0.5% thiobarbituric acid (TBA) for 30 min at 95°C. The reaction was terminated by placing the tubes on ice and centrifuging at 12,000g for 10 min. The extent of MDA-TBA complex formation was assayed by measuring the absorbance at 532 nm and subtracting the non-specific absorbance at 600 nm. The concentration of MDA (expressed as μmol g⁻¹ FW) was calculated using the extinction coefficient of 155 mM⁻¹ cm⁻¹ with the formula: MDA content (nmol) = ΔAbs(532 − 600 nm) / (1.55 × 10⁵).
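The MDA calculation can be expressed as a short function; the scaling to the 20 ml extract volume is an assumption made for illustration, since the exact normalization to fresh weight is not fully specified above, and the absorbance readings are illustrative only.

```python
EXT_COEFF = 155.0  # mM^-1 cm^-1, extinction coefficient of the MDA-TBA complex

def mda_umol_per_g(a532, a600, extract_vol_ml, tissue_fw_g, path_cm=1.0):
    """MDA content from the corrected absorbance (A532 - A600), via Beer-Lambert."""
    d_abs = a532 - a600
    conc_mM = d_abs / (EXT_COEFF * path_cm)  # mmol per litre of assay = umol per ml
    umol_total = conc_mM * extract_vol_ml    # micromole in the extract (assumed scaling)
    return umol_total / tissue_fw_g          # umol per g fresh weight

# Illustrative readings only
print(mda_umol_per_g(a532=0.41, a600=0.05, extract_vol_ml=20.0, tissue_fw_g=0.1))
```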
Quantification of photosynthetic efficiency
After MYMIV and mock inoculations, three leaves from both the resistant and susceptible genotypes were analyzed for their chlorophyll fluorescence parameters. Measurements were performed on young fully expanded leaves and the values were normalized to the foliar area of each trifoliate leaf. After dark adaptation, chl a fluorescence was measured using a portable fluorescence spectrometer Handy PEA (Hansatech, King's Lynn, Norfolk, UK) according to the protocol adopted by Strasser et al. [25]. The initial (Fo) and the maximum fluorescence (Fm) were recorded and the variable fluorescence was calculated (Fv = Fm − Fo). Finally, the Fv/Fm ratio, which correlates with the quantum yield of PSII, was calculated. Additionally, an energy pipeline model was generated from the measured parameters with Biolyzer HP 3 (Bioenergetics Laboratory, Switzerland) to compare the energy flow of PSII at different levels between the genotypes in mock inoculations and after challenge with the virus.
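As a simple illustration of the Fv/Fm calculation from the dark-adapted fluorescence parameters, with purely illustrative readings:

```python
def fv_fm(f0, fm):
    """Maximal PSII photochemical efficiency from dark-adapted minimal (Fo)
    and maximal (Fm) chlorophyll a fluorescence."""
    fv = fm - f0
    return fv / fm

# Illustrative fluorescence readings only
print(fv_fm(f0=350.0, fm=1900.0))  # unstressed leaves typically give ~0.8
```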
Results
Accumulation of MYMIV in the two V. mungo genotypes after challenge with the virus

The fate of artificial MYMIV infection was evaluated based on phenotypic changes along with molecular detection of the MYMIV coat protein (CP) fragment. In the present study, 10 plants from each genotype were challenged with the virus; the phenotypic responses revealed the highly resistant nature of VMR84 while T9 demonstrated a susceptible reaction. Development of the yellow mosaic pattern on leaves was closely observed in susceptible plants, starting with the appearance of bright yellow specks that ultimately coalesced into larger lesions. Although V. mungo plants were fully symptomatic at 15 dpi, visual symptoms started appearing from 5-7 dpi in the compatible host, while establishment of the pathogen was severely impaired in resistant VMR84 (Fig 2A). Assuming that the fraction of viral DNA corresponds to the degree of infection, quantitation of the MYMIV coat protein (CP) fragment was carried out for a period of up to 15 dpi (Fig 2B) to assess the viral titer within inoculated foliar tissue. Low levels of MYMIV-CP were observed in resistant VMR84 plants, indicating hindrance of virus proliferation and thereby restricted spread of the disease. In contrast, exponential accumulation of MYMIV-CP was recorded from 5 dpi onwards in susceptible T9. Finally, at 15 dpi, the pathogen population surged to a level 1000-fold higher in susceptible T9 than that measured in the resistant VMR84 genotype, correlating with the symptomatic changes in leaf morphology. The results also showed that the mock-inoculated controls remained asymptomatic at all evaluated time points (data not shown). Transcript analysis (Table 1) revealed a total of 75.1% ESTs induced during the incompatible interaction while the majority (57%) were repressed in the compatible reaction. The SSH data also revealed overlapping expression of 29 ESTs between the compatible and incompatible interactions, while the remaining 176 and 111 are library-specific sequences of VMR84 and T9, respectively (S1 Fig). All EST sequences were deposited in the GenBank EST database (VMR84: JZ168080-JZ168359; T9: JZ168360-JZ168399).
Gene annotation and functional classification
Putative functions were assigned to the ESTs with various degrees of confidence. Non-redundant ESTs from both the resistant and susceptible libraries were searched for homology against the GenBank non-redundant sequence databases using the BLASTX and BLASTN algorithms. About 81% of the obtained ESTs could be annotated with potential functions using the NCBI GenBank database. The rest of the sequences were further compared using the TBLASTX programme of the Dana-Farber Cancer Institute Vigna unguiculata (cowpea) Gene Index (DFCI-VuGI, release 8.0) and the TAIR BLAST programme. This strategy allowed annotation of a further 4%; still, 53 (15%) sequences did not share homology with any sequence in either database within the set threshold E-value of 1 × 10⁻⁵. The homology search showed that the majority of the ESTs (49%) shared homology with soybean sequences (S2 Fig). The percentage sharply decreased to 16.1% in Medicago and 9.2% in Arabidopsis, followed by Phaseolus (4.5%) and Ricinus (3.8%).
These results indicate a predominance of legume species in the homology list over other dicot or monocot species. However, the absence of V. mungo in the list signifies the meager amount of sequence information in the database; the collection of novel V. mungo ESTs resulting from this study therefore adds to that resource. The ESTs were subsequently categorized into 9 different functional categories based on their putative functions (Fig 3), providing a broad impression of the types of genes modulated in the evaluated genotypes. The majority of ESTs fall under the metabolism category, which was altered during both compatible (24.8%) and incompatible (18.6%) interactions. ESTs involved in various metabolic pathways such as glycolysis (glyceraldehyde 3-phosphate dehydrogenase, fructose bisphosphate aldolase), the Krebs cycle (malate dehydrogenase, citrate synthase) and photorespiration (glycolate oxidase, serine glyoxylate aminotransferase) showed up-regulation in their expression. Several enzymes (cysteine synthase, tryptophan synthase) implicated in amino acid biosynthesis were also overrepresented during the incompatible reaction. A significant portion of these ESTs also belongs to the class 'signal transduction', showing a higher percentage (6.9%) in the resistant library than in the susceptible library (4.4%). Genes coding for protein kinases (MAPK6, STKs) and calcium signaling components (calmodulin, calreticulin) were also abundantly expressed in VMR84. Modulation of ESTs under the "transport" category was higher during the incompatible (8%) than the compatible (2.7%) interaction, including a putative potassium transporter, H⁺-ATPases, etc. Sizable differences in the number of ESTs were also observed in the category "transcription". At least 8.5% of the SSH clones comprise the transcription category in the resistant library, while the number declined to 4.4% in the susceptible genotype. The known stress-responsive transcription factors and regulators, including WD repeat-containing protein, WRKY, ZF and bHLH, are the key components of this category. Although 12% of the ESTs in the susceptible genotype correspond to stress and defence related genes, defence as a category remains overrepresented in resistant VMR84 (16%). Amongst these, several are implicated in pathogen recognition (NBS-LRR, ankyrin, RLKs), oxidative stress (GST, PRX, SOD, TRX), heat shock proteins (HSP90) and other abiotic stresses (CPRD14, CPRD32). Transcripts coding for pathogenesis related proteins such as PR1, PR5 and PR17, having a direct role in host immunity, were also abundant in the library of the incompatible interaction.
Other relevant processes affected during MYMIV infestation included "secondary metabolites" and "protein biogenesis and metabolism". Several transcripts corresponding to the phenylpropanoid pathway were induced in VMR84, reaching 17% of the total. Processes related to "protein biogenesis and metabolism" included 18% of the induced genes, but this group also contained repressed genes, attaining 10% in this case. Protein turnover was also increased, as ribosomal proteins, proteases and ubiquitin-related genes are amongst those induced by MYMIV in the incompatible interaction.
Contrasting modulation was observed in the photosynthesis/energy class as far as expression in the two genotypes is concerned. While a majority of these genes showed induced expression in resistant VMR84, the abundance of photosynthesis related genes in the repressed list reflects the lower photosynthetic competence accompanying the susceptible phenotype of T9.
A major share of ESTs was assigned to the category "unknown/unclassified": 17.6% in the resistant library and 17.7% in the susceptible library. The efficiency of SSH is further evidenced by the poor representation of constitutively expressed genes, indicating the richness of transcripts associated with the defence response. Nonetheless, several transcripts coding for protein disulfide isomerase, alcohol dehydrogenase etc. that have no apparent defensive roles have also been retrieved.
Validation of YMD-regulated genes using qPCR
It is of special interest to explore the expression kinetics of transcripts that clearly discriminate susceptible from resistant reactions. Since SSH does not provide any quantitative estimation of gene expression, qPCR analyses of selected ESTs were undertaken to elucidate the dynamic alterations of gene expression over time. qPCR analyses were carried out at 6, 12, 24, 36 and 48 hpi on different biological replicates (Fig 4). Levels of the accumulated transcripts were normalized against the expression of the VmACT gene (JZ078743), which we showed earlier to be the internal standard of choice for V. mungo under MYMIV-stress conditions [21]. Amplification efficiencies for the primer combinations were around 2, with a 0.97 correlation in the dilution curve analyses (data not shown). The qPCR expression data obtained for the selected genes are consistent with the results of the SSH analyses. Transcripts coding for SGT1 and HSP90, the two functional components of the R-protein complex, were quantified; maximum expression levels of 7- and 10-fold, respectively, were noted at 48 hpi in VMR84 (Fig 4). The SAR indicator gene PR1 was increased ten-fold at 48 hpi, triggering the orthodox defence responses against MYMIV.
As anticipated, transcripts involved in ROS homeostasis (SOD, APOX, TRX and MET) were all upregulated at 6, 12, 24, 36 and 48 hpi with their highest expression at 48 hpi, except for GST, which showed a peak of expression (20-fold) at 36 hpi in the incompatible host (Fig 4). In the susceptible host, a late induction of SOD, MET and GST was recorded, yet at a much lower magnitude, while levels of APOX and TRX could hardly be detected at 48 hpi. The expression trends of these genes indicate further accumulation beyond the observed time points. Transcripts corresponding to RuAc remained below detectable levels in infected susceptible plants; however, a cumulative accumulation was noted in the resistant host. MAPK and CAM, representing the category "signal transduction", were significantly induced early upon inoculation, at 12 hpi (9-fold) and 36 hpi (17-fold), respectively. Expression of TS and UBL showed a modest increase with time in the resistant host, VMR84, in which the pathogen was restricted. Transcripts corresponding to PAL and WRKY were also present in considerable amounts in the resistant host but were barely noticed in infected susceptible tissues. Additionally, three differentially repressed genes were also quantified.
ROS accumulation in response to MYMIV-inoculation
The abundance of transcripts regulating ROS homeostasis in the SSH libraries prompted us to detect and quantify endogenous H2O2 levels by DAB infiltration of pathogen-inoculated tissues. The examined leaves showed basal levels of H2O2, and there was not much difference in colouration between the susceptible and resistant hosts at 0 hpi (Fig 5A and 5B). However, the time-course study revealed cumulative accumulation of H2O2 producing intense colouration in the resistant host, VMR84. Quantitative analysis revealed 7.5 mmol g⁻¹ FW of H2O2 at 48 hpi, with detectable levels observed from 12 hpi onwards. In contrast, staining intensity at the site of pathogenic invasion was less prominent in the susceptible reaction, exhibiting weaker accumulation. Changes in the expression of ROS regulatory transcripts in both circumstances are essential to cope with the amplified ROS levels, maintaining a state of balance and initiating systemic redox signaling upon pathogenic invasion.
Membrane damage due to oxidative stress was assayed by quantifying MDA, a product of reactions involving superoxide radicals generated by pathogenic infection that lead to peroxidation of lipids. MDA content was similar in mock-inoculated plants of both the resistant and susceptible genotypes, while its abundance differed in the MYMIV-inoculated samples. The extent of lipid peroxidation was significantly elevated (35.6 μmol g⁻¹) at 48 hpi in the compatible interaction but was not altered at the earlier time points (S3 Fig). This suggests that membrane damage occurred concomitantly with the appearance of chlorotic symptoms in the susceptible background upon MYMIV inoculation. In comparison, a less pronounced increase in MDA content (27.6 μmol g⁻¹) was observed at 48 hpi in the resistant genotype.
Measurement of photosynthetic parameters
The SSH data revealed a gradual attenuation in the expression of genes involved in photosynthesis shortly after infection. To confirm whether such transcript modulation is pathogen induced, we measured the fluorescence induction kinetics of chl a molecules in both mock and MYMIV inoculated leaves at 0, 24 and 48 hpi; however, no significant decrease in photosynthetic efficiency was observed during the early hours in the resistant host, VMR84, unlike the observed reduction at the gene expression level. There was a slight decline in PSII electron transport and partial blockage of the reaction centers in the susceptible host (S4 Fig), while PSII electron transport and the other photosynthetic parameters continued to be efficiently shielded from the hostile effects of the pathogen and remained unaffected in VMR84.
Discussion
The aim of the present study was to gain insight into the functional and regulatory networks that are operational in V. mungo during the host-pathogen interaction. A previous study hypothesized a plausible mechanism for MYMIV recognition via the gene-for-gene interaction [13], but the questions of how V. mungo activates immune signaling upon pathogen ingression and which molecular determinants define the reaction types still remain elusive. Therefore a comparison between the molecular events occurring during compatible and incompatible reactions provides a better handle on the host's response to pathogen attack. Here we report first-hand information on the immune responses shown by the resistant V. mungo genotype upon MYMIV infection, which helped to decipher the molecular mechanisms underlying YMD resistance.
Transcript remodeling after pathogen ingression independent of basal defence
Successful MYMIV inoculation of the host plant is manifested by a surge in viral titer that correlates with progressive symptom development in the susceptible host. On the contrary, the resistance machinery appears to be operational from the early hours of pathogen ingress, as indicated by the low copies of detectable coat protein after artificial inoculation. That the induced modulations of gene expression are solely due to MYMIV ingress has been confirmed by judicious subtraction against the mock-inoculated controls. In addition, the genetic relatedness of VMR84 with T9 [17] further eliminates the possibility that YMD resistance is an outcome of constitutive differences in the basal expression of defence related genes between these two genotypes. Taken together, the present findings suggest that the observed transcript remodeling is explicitly a pathogen induced phenomenon and not due to the constitutively expressed basal defence of the host.
R-gene and its component transcripts during incompatible interaction
Typically, the resistance machinery is recruited upon perception of the invader, which activates immune signaling and provides the necessary arsenal to arrest pathogen proliferation. In the present study, comparative transcriptional profiling was conducted to gain insight into the functional and regulatory networks of the early MYMIV stress-responsive genes, on the basis of which a hypothetical model illustrating the resistance mechanism operative in V. mungo against MYMIV has been proposed (Fig 6). Although a broad transcriptional inflection was observed, the modulated transcripts do not exhibit any explicit machinery associated with susceptibility.
Here, compatibility seems to be a consequence of the weak implementation of pathways that are strongly enforced by the resistant host against the intruder. This may be due to the absence of the requisite signaling in the susceptible genotype, which lacks the candidate resistance gene CYR1, as shown by Maiti et al. [13].
Perception of pathogenic attack involves the recognition of pathogen or pathogen derived elicitors (Avr) by the host encoded R proteins [9,26]. Maiti et al. [13] have reported the R-gene mediated recognition of MYMIV-coat protein by the CYR1 of resistant V. mungo genotype similar to those demonstrated during host pathogen interactions in Vitis-downy mildew [5] and Glycine-Heterodera [27]. In the present study, one NBS-LRR candidate R-gene was identified which showed induced expression solely in the resistant background. There was also concomitant increase in a tobacco HSP90 homologue, an integral component of the R-gene complex along with its interacting partner SGT1 that participates in the assembly, activation and stability of the R-gene products [28]. Hubert et al. [29] and Takahashi et al. [30] showed that silencing of HSP90 and SGT1 has led to reduced accumulation of R-proteins compromising the resistance property of the host. Thus, the role of R-gene complex in resistant genotype of V. mungo is apparent.
ROS generation vs. scavenging in concurrence with proteomics data
Pathogen induced ROS accumulation at the site of attempted invasion is amongst the earliest cellular events associated with the host's responses. We noted that a state of balance is maintained between ROS generation and scavenging, as an increase in ROS level is detrimental to cell viability [24]. In V. mungo several redox regulators including PRX, SOD, TRX and GST showed a boost in expression level, suggesting an imbalance in redox potential. In line with a previous proteomic analysis [16], transcripts of APOX, SOD and GST differed in their induction kinetics, with a lower magnitude of expression in the susceptible background. Consistent with the transcript expression data, DAB staining assays provided additional confirmation, showing differential ROS accumulation in the two genotypes. An early state of redox imbalance, especially in the resistant host, created the ideal foundation for systemic redox signaling, parallel to the findings reported by Venisse et al. [31] and Manickavelu et al. [4]. Additionally, the greater extent of lipid peroxidation in the susceptible background indicates a more pronounced oxidative damage due to the gradual pathogen ingress than in the resistant background.
Involvement of transcripts in immune signaling
Plants engage in an intricate interplay of signaling cascades; therefore it can be anticipated that several molecular pathways are in effect during the early hours of pathogen challenge [32,33]. Defence signaling is well manifested by a rise in Ca²⁺-responders and MAP kinases that participate in the activation of host surveillance against attempted viral invasion. Within the cell, Ca²⁺ levels are maintained precisely through regulation of influx and efflux processes. Effective recognition of the intruder allegedly triggers selective activation of Ca channels, elevating its cellular concentration and thereby providing the necessary information for signaling [34,35]. The rise in cytosolic Ca²⁺ level is accompanied by a parallel increase in the expression of calreticulins (CRTs) and calmodulins (CAM). A biphasic induction of CAM was witnessed, showing an initial burst within a few hours in both genotypes while a more pronounced second phase (16-fold) was observed exclusively in the resistant line. These data corroborate the proteomic study in which the level of CaM was reported to be much higher during the incompatible interaction than in the compatible interaction [16]. Besides their role as Ca²⁺ sensors, CRTs are also known to hinder cell-to-cell trafficking of virion particles, restricting pathogen spread in the resistant background [36]. Oscillation in the intracellular Ca²⁺ level stimulates the sensor responders (calcium-binding protein, calcium homeostasis regulator CHoR1) that activate Ca-dependent protein kinases (CDPKs) [35] and transmit the encrypted pathogenic signals further downstream. Although the present findings indicate an early and rapid generation of Ca²⁺ signals, their frequency, duration and amplitude need further introspection.
Role of protein kinases in immune reaction
Compelling evidence suggests a role for protein kinases in bridging pathogen recognition and the transcription of responsive genes [37,38]. Abundant expression of ESTs belonging to the kinase family was noted in the resistant background, supporting this conjecture. Of these, AtMAPK6 showed 9-fold expression at 12 hpi in the resistant background. The SA induced protein kinase (SIPK), a tobacco ortholog of MAPK6, stimulates N-gene mediated resistance in tobacco [39], whereas silencing of this gene resulted in weak disease resistance in Arabidopsis [40]. The notable difference in its abundance in MYMIV-infected VMR84 suggests its participation in SAR. Another fascinating observation is the repression in VMR84 of MPK4, which negatively regulates SA mediated responses [41,42]. Interestingly, Clarke et al. [43] demonstrated that mpk4 mutants maintained increased levels of endogenous SA, resulting in the expression of defence related genes and elevated resistance to pathogens. The contrasting expression of these kinases therefore favours the SA pathways, suppressing the antagonistic JA/ET pathways. In an independent proteomics based approach, Kundu et al. [44] have shown that SA confers tolerance against MYMIV in susceptible V. mungo plants.
Dynamics of transcription factors during immune reaction
Transcription factors represent critical hubs in mounting defence reaction by regulating the initial step in gene expression. Here we report four transcription factors ZF, WD40, bHLH and WRKY whose expression were more prominent in incompatible than in compatible reactions. WRKY, a class of plant DNA binding protein recognizes 'W box' motifs positioned in the promoters of defence genes induced by biotic stress signals [45]. Here increased abundance of WRKY transcription factors in resistant background at 36 hpi is in conformity with the activation of defence reactions. There are reports demonstrating activation of WRKY transcription factors mediated through MAPK, followed by expression of an array of defence responsive genes to combat pathogen attack [46,47]. Although interaction between the two partners is beyond the scope of this study, expression profiling suggests WRKY to act downstream of MAPK in the signaling cascade. Even though several members of WRKY have been characterized, only a limited number of ZF, WD40 and bHLH transcription factors have so far been demonstrated in the plant immune system. Overall, the transcription factors play a complementary and/or overlapping role in enhancing expression of downstream components of the defence machinery.
Ubiquitin proteasome system in determining host-pathogen interaction
The emergence of the ubiquitin proteasome system (UPS) as an influence on plant responses to viruses is somewhat fascinating [48]. The coordinated effort of polyubiquitination of target proteins by ubiquitin ligases and their subsequent degradation by the UPS determines the fate of the interaction. However, previous endeavors demonstrating involvement of the UPS in restricting the pathogen or facilitating infection are somewhat ambiguous [49]. Substantial upregulation of a ubiquitin ligase, the 26S proteasome subunit beta and RPN7 throughout the course of pathogenesis was noted in VMR84 in this study. Transcript levels of the ubiquitin ligase increased 6-fold at 48 hpi in the resistant host, while the expression kinetics of the other two genes were not determined.
Differential transcript modulations in resistant and susceptible backgrounds
In general, the MYMIV-elicited transcripts of the resistant and susceptible reactions revealed more dissimilarities than commonalities. However, the majority of the overlapping responses exhibited a contrasting expression profile, with induced levels in VMR84 but suppressed levels in T9. In particular, several transcripts belonging to the stress/defence category demonstrated such disparity, confirming the role of paradoxical expression in defining the contrasting reactions. Responses associated with the resistant phenotype include the expression of PR genes, which are hallmarks of SAR [50]. Three PR gene homologs of Arabidopsis PR1, PR5 and PR17 were identified in V. mungo that were specifically induced in the resistant background but not in the susceptible one. The progressive abundance of PR1 and PR5 suggests SA mediated resistance against MYMIV. The limited expression of the PR genes in T9 may be considered one of the pathogen's defence suppression strategies. Abundance of PR1 and PR5 proteins during the incompatible interaction was also observed through a proteomics investigation [16]. Serine/glycine hydroxymethyl transferase, a positive regulator of PR proteins, was also found in the resistant V. mungo genotype, indicating a complex interaction in ROS regulation maintaining cellular redox potential. Indeed, susceptibility in T9 is conferred by an inadequate immune response that may be a direct consequence of unsuccessful pathogen recognition, resulting in a compatible interaction. Up-regulation of the drought responsive ESTs (CPRD2 and CPRD14) in the resistant genotype reflects crosstalk between biotic and abiotic stress responses. MYMIV is known to depolarize the plasma membrane potential due to rapid anion efflux, decreasing the water potential of cells and creating drought-like consequences [16]. Strong modulation in the expression of CPRD2 and CPRD14 therefore addresses such circumstances, eliciting HR in VMR84 as envisaged by Brini and Masmoudi [51].
Alteration of metabolic and physiological status of infected host
Alteration in the metabolic and physiological status of infected tissue is supported by the existence of a large set of transcripts (18.6% in the resistant and 24.6% in the susceptible background) that function in host metabolism. The overall repression of this group in T9 correlates with the pathogen's ability to compete with and capitalize on the host's metabolism. On the contrary, the general induction of transcripts of this category in the resistant host suggests a metabolic reprogramming implemented to support the defence mode. Broad analysis of the SSH results indicates that specific processes including glycolysis (fructose bisphosphate aldolase), the TCA cycle (malate dehydrogenase), starch synthesis (granule bound starch synthase) and generation of aromatic amino acids (tryptophan synthase) are upregulated in the resistant genotype, serving as foundations for the synthesis of antimicrobials. Up-regulation of proteins involved in the TCA cycle in the resistant V. mungo genotype was also perceived through differential proteomic analyses [16], indicating provision of additional energy, an attribute essential for mounting an immune response through the generation of pyruvate and NADPH.
Although biotic stressors are known to repress metabolic processes [5], the induction of these processes during the early immune phase suggests protection of primary metabolism from the foreign intruder. A major shift in protein synthesis is accompanied by induction of ribosomal protein genes encoding the large subunit proteins L21 and L11 and the small subunit proteins S13, S14 and S18. MYMIV aggression also stimulates photorespiratory pathways irrespective of the host plant. The early boost in the expression of photorespiratory enzymes (e.g. glycolate oxidase, serine glyoxylate aminotransferase) accounts for the intracellular generation of ROS triggered upon pathogen recognition, which further increased with disease progression.
Upsurge of phenylpropanoid pathway at the early hours of virus infection
Among the SSH-enriched cDNA clones, it is important to highlight those involved in the phenylpropanoid pathway, which guides the synthesis of physico-chemical barriers and signal molecules implicated in systemic and locally acquired resistance [52]. In this study, up-regulation was noted in the expression of PAL, the key regulatory enzyme catalyzing the conversion of phenylalanine to cinnamic acid, an intermediate in the primary and secondary metabolic pathways [53]. A progressive accumulation in the transcript level (6-fold at 48 hpi) suggests its participation in establishing resistance in VMR84. Surveying time points beyond 48 hpi may reveal further accumulation, whereas its levels did not reach comparable amounts within the same time frame in T9 plants. In fact, an abundance of phenylpropanoid pathway enzymes in the resistant genotype was noted through a proteomic investigation at a later stage of MYMIV infection, leading to accumulation of SA, phytoalexins, antimicrobials, proline-rich cell wall precursors, glycoproteins etc. [16]. All these biomolecules participate in an orchestrated manner in combating virus invasion. However, up-regulation of the ubiquitin proteasome system is noted only at early hpi, whereas at later stages of the viral infection carbohydrate flux was redirected towards the pentose phosphate pathway to support cellular energy demands to the maximum extent [16]. Exogenous application of SA was shown to mimic natural resistance, alleviating the post-infection phenotype in MYMIV-susceptible plants [44]. The increment of SA-induced UDP-glycosyltransferases and FMO1, a hallmark of SA mediated SAR, supports this hypothesis. Taken together, all these findings strongly support the role of SA in establishing resistance against MYMIV. Additional experiments are required to determine the magnitude and timing of SA accumulation that is concomitant with effective pathogen arrest.
Another trade-off of successful MYMIV infection is the quenching of photosynthesis related genes. A previous proteomics based effort strongly endorses such modulation, accompanied by a drop in chlorophyll content at an advanced stage of infection in susceptible T9 [16]. However, the inductive expression of related transcripts in VMR84 may account for the production of sugar and energy necessary to avert pathogen colonization. Contrasting modulation in the expression of rubisco and rubisco activase corroborates this situation, with reduced levels in the susceptible background at all evaluated time points. The characteristic mosaics associated with YMD may be the outcome of viral interference de-prioritizing resources directed towards the host's defence. Recent works on the rice-blast fungus [54] and soybean-Pseudomonas [55] interactions identified PSII electron transport as a primary target of plant pathogens. During HR, the altered redox state of the cells may interrupt PSII electron transport by damaging its components. FtsH, a chloroplastic zinc dependent metalloprotease, replaces the damaged components, restoring PSII function [56,57]. An increased level of this transcript in VMR84 plants suggests an active PSII repair mechanism, while susceptible T9 plants surrender to patho-destruction of PSII. However, repression of the photosynthetic genes has no significant effect on host physiology in the early hours, demonstrating that the post-infection phenotype due to impaired PSII is restricted to the advanced stages of infection.
Beyond the involvement of reported interactions, participation of non-canonical genes in mediating host defence is of particular interest. This is apparent from the high percentage of ESTs that remained unannotated suggesting a reservoir of uncharacterized genes that are involved in immune response or illustrate novel pathogenicity factors. Annotation of these unknown genes in future will definitely unveil novel mechanisms involved in the pathosystem that remained elusive as yet.
In summary, the present study not only advanced our knowledge of the regulation of MYMIV resistance in V. mungo from a genomics perspective, but also highlighted the physiological responses that accompany the mounting of a fruitful response against MYMIV. The dynamics of gene expression confirmed that the outcome of the plant-pathogen interaction is a function of gene regulation, showing a clear correspondence between the elicitation of immune responsive genes and the establishment of the host's response. Comparative transcript analyses representing the early stages of pathogenesis have revealed the participation of both canonical and non-canonical genes during compatible as well as incompatible reactions. Presumably, resistance is an outcome of extensive transcriptional reprogramming demonstrating an intense interplay of signaling events followed by metabolic changes to implement a defence mode. The present findings depict the coordinated action of SA-responsive pathways, Ca²⁺ signaling, redox imbalance and PR genes in restricting MYMIV, complementing the proteomic data. In contrast, susceptible plants demonstrated a weak implementation of these pathways, differing in the induction kinetics and transcript dynamics of stress-responsive genes. The overall repression of ESTs in the susceptible background can be interpreted as a pathogenic strategy to limit the pathways that are insignificant for the virus life cycle, paving the way for rapid multiplication and impeding the host defence machinery.
Conclusion
The present study offers several promising candidate genes and provides a valuable genomic resource for future functional analyses addressing mechanisms that can be translated directly into engineering durable resistance in V. mungo against the pathogen.

Supporting Information

S1 Table. List of primers used for qPCR analyses. Sequence information and amplicon characteristics of gene-specific primers used for qPCR analyses. (DOC)

S2 Table. Complete list of transcripts showing differential expression in response to MYMIV in the resistant background. Two hundred and five sequenced ESTs obtained from the resistant genotype are tabulated with EST IDs, annotations (BLASTX similarity), putative function, accession no., size, closest database match, E-value and expression. The ESTs marked with "#" after the EST ID represent contig sequences while the rest are singletons. (DOC)

S3 Table. Complete list of transcripts showing differential expression in response to MYMIV in the susceptible background. One hundred and forty sequenced ESTs obtained from the susceptible genotype, T9, are tabulated with EST IDs, annotations (BLASTX similarity), putative function, accession no., size, closest database match, E-value and expression. The ESTs marked with "#" after the EST ID represent contig sequences while the rest are singletons. (DOC)
Simulation of Secondary Ion Position on the Detector for Three-dimensional Shave-off Method
The concept of three-dimensional (3D) shave-off secondary ion mass spectrometry (SIMS) is to obtain the depth information of the sample simultaneously with the mass information, using the vertical axis of a two-dimensional position-sensitive detector in the mass analyzer. In this study, we simulated the trajectory of secondary ions sputtered from a virtual sample in the 3D shave-off SIMS system and investigated the magnification ratio of the ions. The simulation results showed that, in our concept of 3D shave-off SIMS, the depth position of the secondary ions sputtered from a sample can be distinguished by their detected position.
I. INTRODUCTION
Secondary ion mass spectrometry (SIMS) is a useful surface analysis technique with high sensitivity as well as high spatial resolution. The shave-off method was proposed to quantitatively analyze uneven samples with a multichannel parallel detection system in SIMS [1]. Shave-off depth profiling has enabled the analysis of various materials and devices [2−4] and was developed to obtain two-dimensional data with high resolution [5]. Recently, our group designed a new concept of three-dimensional (3D) shave-off SIMS introducing a magnification lens system. Unlike conventional 3D SIMS, which is obtained by combining two-dimensional data and images along surface erosion, the 3D shave-off system can obtain the depth and mass information of the sample simultaneously with surface erosion. Therefore, this system makes it possible to reduce the analysis time as well as to overcome a drawback of SIMS imaging concerning topographical information. It will be useful for analyzing the 3D internal structure and element distribution of complex structures such as Li transition metal oxides. In general, characterization of the elemental distribution, chemical composition, and structure in Li-ion battery cathode materials consisting of several micro-sized spherical particles is very important in relation to battery performance and degradation. Mapping of these materials has required a variety of techniques such as energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), focused ion beam and scanning electron microscopy (FIB-SEM), time-of-flight secondary ion mass spectrometry (ToF-SIMS), etc., because of their respective drawbacks. However, the new 3D shave-off SIMS system would be able to quickly analyze the distribution of particles as well as the 3D structure of electrodes independently.
In previous studies, it was verified that the introduced magnification lens system enlarged the secondary ions only in the depth direction (Z axis) and converged them onto the Mattauch-Herzog type mass analyzer in the SIMS [6]. According to the simulation results, the secondary ions sputtered from a micrometer-sized sample were magnified to a millimeter size at the detector and had a Z-axial resolution of 1.4 μm [7].
In this study, we simulated the trajectory of the secondary ions in the 3D shave-off SIMS system using several simulation tools before fabricating a prototype of the system. The simulation followed the trajectories of the secondary ions from the sputtering process to their convergence at the mass detector. The secondary ions were assumed to be sputtered by the focused ion beam from a virtual sample composed of two elements. The simulation was performed using the SDTrimSP code (Static and Dynamic Trim for Sequential and Parallel computer, version 5.07) [8] and the SIMION program (version 8.1) [9]. SDTrimSP, one of the binary collision approximation codes, was used to estimate the sputtering yield and to calculate quantities related to the ion bombardment by the focused ion beam. The SIMION program was used to simulate the trajectory of the ions in the 3D shave-off SIMS system.
II. SIMULATION MODEL

A. Sample
The simulation was conducted with a virtual sample composed of two elements. One of the elements was silicon (Si), and the other was germanium (Ge). Figure 1(a) illustrates a schematic diagram of the simulation sample. The sample was a combination of 27 voxels: 22 voxels of Si and 5 voxels of Ge. Each voxel was 2 μm wide in all lateral dimensions. To investigate the detected position of secondary ions according to their sputtering position, 9 points (P1 to P9) were designated as sputtering positions on the sample, as shown in Figure 1(b). Each point was determined by the type of element and the depth within the sample.
B. Procedure of the simulation
Three parameters of the sputtered particles, namely the sputtering position on the sample, the initial energy, and the emission angles, were required to simulate the trajectories of the secondary ions in the 3D shave-off SIMS. Figure 2 depicts the three steps of this simulation used to obtain the required parameters.
Firstly, the erosion of the sample by the focused ion beam (current: 260 pA, diameter: 147 nm) with the shave-off method (X-axis interval length: 10 nm, Y-axis interval length: 1.68 nm) was calculated using the erosion equation [10,11]. The sputtering yield as a function of the ion incidence angle for 30 keV Ga ion irradiation of Ge and Si followed the data obtained with SDTrimSP, shown in Figure 3. The angle of incidence was determined at each designated sputtering position (P1−P9), and the values were used as input parameters for the next step. In the second step, the initial energy and emission angle of the sputtered particles were obtained with the SDTrimSP code. The sputtered particles were generated by irradiating the sample with 10⁷ Ga⁺ ions of 30 keV.
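The angle-dependent sputtering yield that drives the erosion calculation can be illustrated with a small interpolation sketch; the yield values below are placeholders for illustration only, not the actual SDTrimSP results of Figure 3.

```python
import numpy as np

# Placeholder yield tables Y(theta) for 30 keV Ga on Si and Ge;
# the real values come from SDTrimSP runs (Figure 3), not from here.
theta_deg = np.array([0, 20, 40, 60, 80, 85, 89])
yield_si  = np.array([2.5, 3.0, 4.5, 8.0, 20.0, 25.0, 15.0])
yield_ge  = np.array([4.0, 4.8, 7.0, 12.0, 24.0, 27.0, 16.0])

def sputter_yield(theta, element):
    """Linear interpolation of the tabulated angle-dependent yield."""
    table = yield_si if element == "Si" else yield_ge
    return np.interp(theta, theta_deg, table)

# In shave-off scanning the local incidence angle typically exceeds 80 degrees
print(sputter_yield(82.0, "Si"), sputter_yield(82.0, "Ge"))
```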
Subsequently, the obtained information for sputtered particles with an energy of less than 10 eV was provided as input to the SIMION program for calculation of the ion trajectories. On average, particles with an energy under 10 eV accounted for 48% of the total sputtered particles.
The design and settings of the 3D shave-off SIMS system in the SIMION program are shown in Figure 4. The workbench of the 3D shave-off SIMS system was described in more detail in previous studies [6,7].
III. RESULTS
In 3D shave-off SIMS, the depth information of the secondary ions would be obtained as the vertical-axis position on the microchannel plate (MCP) detector of the mass spectrometer. Figure 5 presents the cross-sectional shape of the sample (upper) and the positions of sputtered secondary ions on the detector (lower) after (a) 1, (b) 8, (c) 14, and (d) 20 line scans (one line scan being a step of the Y-axis scanning). The sample was totally removed by the beam after 3571 line scans. After 20 line scans, the cross-sectional shape of the sample and the angle of incidence were similar regardless of the element. This is because Ge and Si have similar sputtering yields when the angle of incidence is over 80°, as shown in Figure 3. The shave-off method always has a high angle of incidence of over 80° as a result of using only the side of the beam to shave the edge of the sample [12−15]. Therefore, the sample shows a difference in erosion depth between Ge and Si at the start of the scan; however, once the shave-off scanning reaches a steady state with an angle of incidence over 80°, the difference decreases and the two elements eventually show the same erosion depth.
The dotted lines in the scatterplot of Figure 5 show the intensity of the secondary ions within the detected range. Table 1 summarizes the Z-axis positions of the P1−P9 distributions of Figure 6. When the sputtering position of the secondary ions was lowered by 2 μm from the top of the sample, the peak position of the distribution increased by about 0.4 mm for Ge and 0.2 mm for Si. The blue dashed lines in Figure 6 show least-squares fits of the distribution peaks. From this result, we could determine the magnification ratio of each element in 3D shave-off SIMS.
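The least-squares estimate of the magnification ratio can be sketched as follows; the peak positions below are illustrative values chosen to be consistent with the reported shifts, not the data of Table 1 or Figure 6.

```python
import numpy as np

# Illustrative peak positions on the detector (mm) vs sputtering depth (um),
# consistent with shifts of ~0.4 mm (Ge) and ~0.2 mm (Si) per 2 um of depth
depth_um   = np.array([0.0, 2.0, 4.0])
peak_ge_mm = np.array([0.0, 0.4, 0.8])
peak_si_mm = np.array([0.0, 0.2, 0.4])

def magnification(depth_um, peak_mm):
    """Slope of a least-squares line through the peak positions,
    converted to a dimensionless magnification (1 mm per 1 um = 1000x)."""
    slope_mm_per_um, _ = np.polyfit(depth_um, peak_mm, 1)
    return slope_mm_per_um * 1000.0

print(magnification(depth_um, peak_ge_mm))  # ~200 for Ge
print(magnification(depth_um, peak_si_mm))  # ~100 for Si
```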
The detection positions of the secondary ions were compared at the peaks of the intensity distributions. The peaks of the intensity distributions were clearly distinguished according to the sputtered depth position on the sample.
In addition, the two elements (Ge, Si) differed in the length of their dispersion on the detector. The average lengths of the distributions of the secondary ions were about 1 mm for ⁷²Ge⁺ ions and 0.7 mm for ²⁸Si⁺ ions. This is due to the difference in mass-to-charge ratio: Ge has a larger magnification with a longer flight path than Si.
IV. CONCLUSIONS
The concept of 3D shave-off SIMS has been developed to obtain the sample depth information together with the secondary ion mass spectra. Prior to installation of the system, the trajectories of the secondary ions from sputtering to detection were simulated using several simulation programs. The simulation was conducted with a virtual sample composed of Ge and Si. The secondary ions of Ge and Si sputtered with a depth difference of about 1 μm on the sample showed differences of 0.2 mm (magnification: 200 times) and 0.1 mm (100 times), respectively, on the detector. The depth position of the secondary ions sputtered from the sample was distinguished on the mass detector by the peak of the secondary ion intensity distribution. In this study, we confirmed that the 3D shave-off SIMS system can obtain the depth position of the sample. Although the distribution of the secondary ions was still broad and there were overlapping parts, improvement is expected through several methods such as optimizing the lens systems and introducing an energy filter and an alignment system. When these problems are addressed, 3D shave-off SIMS is expected to become a new SIMS method that provides 3D compositional data with a high axial resolution and a reduction of shape effects.
Gendhing, King, and Events: The Creation of Gendhing Panembrama During Pakubuwana X
Gendhing Panembrama was a creative and innovative product of the reign of Paku Buwana X in the Negari Surakarta. Gendhing Panembrama emerged when the Dutch East Indies government imposed restrictions on political and economic activities in the Karaton Surakarta. The restrictions aimed to remove Paku Buwana X's legitimacy on the political and economic side. Gendhing Panembrama became one of the symbols of Paku Buwana X's resistance to the policy of the Dutch East Indies government. Resistance through the creation of Gendhing Panembrama resulted in the assertion of the king of Surakarta's sovereignty and legitimacy. This study aims to reveal the background of the creation of Gendhing Panembrama and the events surrounding it. This study employed a historical approach and symbolic interpretation. From the study conducted, it was concluded that the creation of Gendhing Panembrama was a means for Paku Buwana X to maintain his sovereignty and legitimacy. The intelligence of Paku Buwana X in managing the musical arts amid the political pressure of the Dutch East Indies government could, in the end, radiate his majesty, authority, and power and maintain the Javanese tradition of the Karaton Surakarta.
INTRODUCTION
Karaton Surakarta (Surakarta Palace) is one of the successors of the Mataram Kingdom, which was divided by the Giyanti agreement (1755). The contents of the agreement stated that Mataram was divided into two, namely Surakarta and Yogyakarta. The Giyanti Agreement was signed on February 13, 1755, and is known as the Palihan Nagari (Division of Mataram Territory) incident (Joebagio 2002, 182). The existence of the Karaton Surakarta is still quite strong, marked among other things by its physical buildings. The karawitan activities of the Karaton Surakarta performed by the abdi dalem niyaga resulted in a product in the form of gendhing (songs). Abdi dalem niyaga are creators and musicians who play gendhing for various purposes in the karaton. Gendhing, in the reality of Javanese life, is interpreted and functions in various ways. Gendhing is used to meet aesthetic needs and is also used for various purposes: social rituals, religious expressions, and functions related to various branches of art. Gendhing is employed to refer to musical compositions in Javanese karawitan. A musical composition is understood as a unified system that includes the relationships between musical elements that are united and harmoniously integrated (Sunarto 2020, 106). These elements comprise instrumental and vocal melodies, rhythm, laya (tempo), tuning, pathet (tone setting), instrument selection, dynamics, and form. Gendhing is understood as an imaginary musical expression full of aesthetic, ethical, symbolic, and philosophical values (Waridi 2003, 300-301). The Karaton Surakarta's karawitan places gendhing as one of the pillars of the Karaton Surakarta's karawitan life. Creativity in the creation of gendhing is one of the parameters of the progress of the Karaton Surakarta's karawitan.
The karawitan life of the Karaton Surakarta reached its peak during Pakubuwana X's reign (Priyatmoko 2019, 120). At that time, gendhing with various functions were created. The gendhing of the Karaton Surakarta's karawitan are grouped according to their function; for example, Gendhing Beksan are gendhing used to support dance performances, Gendhing Pakeliran refers to the gendhing that support wayang performances, and so on. One of the gendhing categories created during Pakubuwana X's reign was Gendhing Panembrama. It is called Gendhing Panembrama because gendhing in this category were created to honor important events involving the king, the empress, the king's sons, and people considered essential by the Karaton Surakarta community. Gendhing Panembrama were created to mark important events, such as the awarding of honorary stars or other honors to the king and his relatives. The creation of Gendhing Panembrama is also related to visits from domestic and foreign guests. During Pakubuwana X's reign, gendhing categorized as Gendhing Panembrama emerged. At this time, Pakubuwana X received many honorary stars and other honors from partner or fellow countries. At the behest of Pakubuwana X, the abdi dalem niyaga were ordered to commemorate the events in which honorary stars or other honors from the Karaton Surakarta's partner or fellow countries were given to Pakubuwana X.
The innovation in the Karaton Surakarta's karawitan works resulted from the thoughts of the masters and musical creators of that time. Gendhing Panembrama functions as gendhing in honor of the Sunan (the title of the king of the Karaton Surakarta) or of royal guests visiting the Karaton Surakarta. The karawitan works in the form of Gendhing Panembrama during the reign of Pakubuwana X were motivated by the many awards received by the Sunan. The event of receiving an award in the form of an honorary star (especially from abroad) is vital for the Karaton community. The giving of the award strengthens the belief of the Karaton people, and of Surakarta in general, that the Sunan is the most respected king (Larson 1990, 45). The Gendhing Panembrama works during Pakubuwana X's reign were reported by Pradjapangrawit, as in the following passage (Pradjapangrawit 1990, 150): [Since ascending the throne as king, Pakubuwana X has often received star awards from abroad, which has increased the king's authority. Every receipt of an honorary star is commemorated in a Serat Panembrama in the form of gerongan song lyrics for a gendhing.] Based on Pradjapangrawit's notes, it can be ascertained that Gendhing Panembrama appeared during Pakubuwana X's reign as markers of the events in which honorary star awards from abroad were given to Pakubuwana X. The naming of Gendhing Panembrama was based on the function of the gendhing, namely to honor and mark the event of awarding an honorary star to Pakubuwana X.
The period of Pakubuwana X's reign was marked by the issuance of Dutch East Indies government policies that did not side with the Karaton Surakarta. Pakubuwana X was likened to a prisoner in his own palace due to the political policies of the Dutch East Indies government (Kuntowijoyo 1999, 85). Activities related to politics were restricted by the Dutch East Indies government. The Dutch East Indies government also limited the economic activities of the karaton, so that Pakubuwana X lost legitimacy in politics and the economy. Pakubuwana X held the position of king but was no longer in full power. Pakubuwana X only had the status of the Dutch administrator (houfhoudende) for Surakarta (Kuntowijoyo 2004, 22). If a king's political and economic authority and power are limited, it can be said that the legitimacy of the king's power has run out. The Dutch East Indies government's limitation of authority and power resulted in a reduction of the king's political power and a decline in the economic field.
The limitation of political and economic activities and the glorious karawitan life of the Karaton Surakarta during Pakubuwana X's reign are two opposing sides. The restrictions imposed by the Dutch East Indies government did not dampen the abdi dalem niyaga's creativity in creating gendhing as event markers, or Gendhing Panembrama. These two opposing sides raise the question that forms the problem of this research: why did a situation in which political and economic activities were restricted give rise to creativity in the form of Gendhing Panembrama? Sunarto (2005), Waridi (2006), and Rustopo (2007) wrote about Karaton Surakarta's karawitan during the reign of Paku Buwana X, but their studies did not mention the creative process of creating Gendhing Panembrama. Sumarsam (2003) mentioned the emergence of the Gendhing Panembrama creation during the reign of Paku Buwana X, but that research did not reveal the background of the Gendhing Panembrama creation. Santosa (2007), Kuntowijoyo (1999, 2004), Joebagio (2017), and Soeratman (2000) wrote about the Karaton Surakarta during the reign of Paku Buwana X; nevertheless, the creative process of creating Gendhing Panembrama and the events surrounding it have not been revealed in these studies. Therefore, the background of the Gendhing Panembrama creation process is interesting to reveal, until an answer is found to the question of why Paku Buwana X initiated the creation of Gendhing Panembrama.
METHOD
This qualitative research applied a historical approach, so that historical research procedures, including heuristics, source criticism, interpretation, and historiography, were conducted in this study. The data sources in this study were historical data, which differed in the quality of the information they provided, so a classification of data sources was required. The main source of historical data used in this research was the archives. Archives were used as a data source in this study because archives occupy a higher position than other historical sources (Lohanda 2011, 2). The data collection techniques employed literature study, document study, and interviews. Historical analysis, which is identical to the interpretation of historical data, was used in this study. Interpretation assigns meaning to the facts to be written, so that they have meaning in the context of a study. These facts are viewed as being related, then adjusted to the study's focus and use, so that they truly deserve to be used as a basis for writing history. Historical interpretation or analysis is carried out by synthesizing a number of facts obtained from historical sources, supported by appropriate theories, so that the facts found can be interpreted as a whole within a framework that comprises the concepts and theories used in making the analysis (Kartodirdjo 1992, 2). The data obtained were interpreted and analyzed based on the theoretical framework used, to produce facts that were relevant to the research objectives.
Gendhing Panembrama and the King's Sovereignty
"Tuhu lamun nugraha dhatêngi, dènnya madêg katong, sri narendra ping sadasa dene..." (Gunamardawa 1936, 33-35). A sentence cut from one of the Gendhing Panembrama, namely Ladrang Mijil Ludira Laras Pelog Pathet barang, tells that since he became king of Surakarta, Paku Buwana X often received honorary stars or awards. The event when Paku Buwana X received an honorary star was always commemorated by Gendhing Panembrama. Paku Buwana X used Gendhing Panembrama as a monument or marker of events. Every time he received an honorary star, Paku Buwana X ordered abdi dalem niyaga to make a new piece specially used as an event marker called Gendhing Panembrama (Pradjapangrawit 1990, 154). Gendhing Panembrama was specially made when Paku Buwana X received an honorary star, received guest visits from outside the Surakarta area, and commemorated essential events at the Karaton Surakarta.
For the palace's people, visiting guests from outside Surakarta was a form of recognition of the king of Surakarta's power. The arrival of the King of Siam, namely King Rama VII, and his consort to the Karaton Surakarta on Sunday, July 30, 1901, or Akad Pahing, 11 Mulud 1831 Dal, was interpreted as a form of recognition of Paku Buwana X's sovereignty as the king of Surakarta. Ladrang Siyem laras slendro pathet nem was a marker of this important event.
Paku Buwana X's ability to establish relations with foreign countries was increasingly evident in the awarding of honorary stars by several countries, on the European Continent and beyond. The countries that gave Paku Buwana X an honorary star, apart from the Netherlands, included Germany, Cambodia, Africa, and China (Pradjapangrawit 1990, 154-160).
The palace cultural placement, including the creation and sounding of Gendhing Panembrama, was an indicator of Paku Buwana X's sovereign consciousness as a king. Gendhing Panembrama as a marker of the event was sounded at the same time the ceremony was held for the awarding of an honorary star to Paku Buwana X. During the reign of Paku Buwana X, almost every ceremony was always held on a large scale. The ceremony, which was held on a large scale, involved the presence of karawitan to sound the Gendhing Panembrama. The sounding of Gendhing Panembrama in the ceremony of awarding the honorary star was one of the ways Paku Buwana X showed his power. Various regulations made by the Dutch East Indies government that narrowed the space for Paku Buwana X were not visible when Gendhing Panembrama sounded amidst the splendor of the ceremony. Paku Buwana X was still seen as a king who had all the power and authority during the ceremony. The sound of Gendhing Panembrama during the ceremony of awarding the honorary star also showed that Paku Buwana X had a sovereign consciousness as a king.
The number of honorary stars from abroad also indicated that Paku Buwana X had an exceptional ability to manage good relations with foreign countries. Paku Buwana X could position himself in his associations with rulers from abroad. His emotional intelligence made him a king with an advantage in socializing, that is, in social skills. These skills in social relationships enabled Paku Buwana X to build good relations with foreign rulers. The ability to socialize and develop good relationships with foreign rulers indicates that Paku Buwana X had high emotional intelligence; he has even been described as a genius in emotional intelligence (Kuntowijoyo 2004, 8). Paku Buwana X's skill in managing good relations with foreign countries earned him appreciation in the form of honorary stars from rulers in parts of Europe, Asia, and Africa.
Gendhing Panembrama as a Symbol of Hidden Resistance
The Dutch East Indies government, which oversaw the vorstenlanden region, acknowledged that Paku Buwana X was a king who was obedient to the Dutch East Indies government. Paku Buwana X never showed open resistance to the Dutch East Indies government. During the reign of Paku Buwana X, the resident of Surakarta changed 13 times. However, there are no reports that Paku Buwana X reneged on the korte verklaring, which he had signed before ascending to the throne. Paku Buwana X's obedience to the Dutch East Indies government was shown by paying tribute to the Queen of the Netherlands at her coronation and at the 25th anniversary of her reign (Wangsaleksana 1936, 12; Pradjapangrawit 1990, 158). Gendhing Panembrama dominated karawitan work during the reign of Paku Buwana X.
Paku Buwana X, who ruled from 1893 to 1939, fought against the Dutch East Indies government through cultural symbols. Paku Buwana X optimized cultural symbols, including karawitan, out of the awareness that culture was the sovereignty that remained and had not been intervened in by the Dutch East Indies government. Adapting to political realities through cultural paths allowed Paku Buwana X to arrange for Surakarta residents to conform to the prevailing protocol whenever a cultural event was held at the Karaton Surakarta. Resistance through this cultural path meant that the resident, who should have been above the Sunan in the context of domination and subordination, could in practice be managed by Paku Buwana X (Kuntowijoyo 2004, 94-95). Resistance through this cultural route signified the success of Paku Buwana X in maintaining his existence and legitimacy as the king of Surakarta.
Gendhing Panembrama as a Symbol of Legitimating Paku Buwana X's Power
The rapid development of karawitan during Pakubuwana X's reign, which was marked by the emergence of innovation and creativity in the creation of gendhing, could not be separated from the king's interests in karawitan. A king, of course, wants his position to be recognized by all. Therefore, all efforts were made to ensure that the reigning king was seen as the one most entitled to occupy the kingdom's throne. Phrases in Javanese literature describe the king as ratu gung binathara (deified king), ber bandha-ber bandhu (rich in property and relatives), and mbaudhendha nyakrawati (having the power to punish and rule the world). These indicate that the king's position is above all. In the traditional Javanese leadership system, a king, as the country's leader, must possess the qualities expressed in these phrases. Apart from that, a king must also have four other elements: sekti, mandraguna, mukti, and authority. Sekti means having supernatural powers, mandraguna means being proficient in all fields, mukti means having a high position, and authority means having a strong influence. If a king possesses these elements, his power becomes solid. For this reason, various means or tools are needed so that the king's position is recognized by the people, who will then obey and submit to all his orders (Kasdi 2003, 18).
The next elements that can be used to strengthen the position of a king are called king legitimacy tools. Soemarsaid Moertono used the term the cult of pomp to refer to a means of legitimacy or means of strengthening the position of a king and classified it into two types: the cult of pomp, which is immaterial or abstract, and the cult of pomp, which is material or more concrete and visible. However, the two means ultimately lead to the same goal: the disclosure of the microcosmic and macrocosmic relationships that make a king a replica of government in heaven. The Javanese consider heaven's kingdom to have incomparable advantages and abundant wealth apart from having advantages in the spiritual realm (Moertono 1985, 73).
In this case, gendhing is a means that can be used to strengthen the position of the king of Surakarta. The creative process, in the form of the development of the work, wilet (sound connection), and instrument play patterns, culminates in the creation of gendhing. At Karaton Surakarta, the creative process was carried out by empu (masters) and abdi dalem niyaga. The role of the abdi dalem niyaga in the development of the Karaton Surakarta's karawitan is huge. Karaton Surakarta's abdi dalem niyaga are people who have an inner inclination toward, and are fully attentive to, the musical culture of karawitan, bringing it to life through the aesthetic format of gamelan sounds (Soenarto 2005, 16). They are the creators and the music organizers capable of thinking about the format of karawitan development from time to time. The abdi dalem niyaga played such a vital role that the karaton's karawitan could last for so long; it even reached a golden age during Pakubuwana X's reign. The importance of karawitan's role in the traditional life of the karaton, in the end, made the music one of the legitimacy tools for the power of the Karaton Surakarta's king. Gendhing Panembrama, the result of the innovation and creativity of the Karaton Surakarta's abdi dalem niyaga, was a means of affirming the legitimacy of Pakubuwana X's power. Gendhing Panembrama marked the awarding of service stars and other honors to Pakubuwana X from fellow countries. Gendhing Panembrama's creation denoted that Pakubuwana X still had a great deal of power and authority.
Surakarta people during Pakubuwana X's reign often referred to Surakarta as Negari Gung (the great state). A great state was formed because there was a close relationship between the kingdom, the king, and the people. On the other hand, the great country placed the kingdom at the center of the government cosmology. Meanwhile, the relationship between the king and the people in the kingdom's political language was called manunggaling kawula gusti. The relationship between the king and the people (kawula-gusti) showed the relationship between the high and the low; it indicated the close interdependence between two different but inseparable elements, two elements that are actually two aspects of the same thing (Moertono 1985, 25). It can be seen that the king and the people are one unit and have interdependent characteristics. In this case, the Javanese likens the unity of this relationship to a ring (sesupe). The king was the sesotya (gemstone), and the people were as the embanan (bond). In a ring, the two parts found a symbiotic mutualism.
The cosmology concept, as the basic concept of the Javanese king's power, was of course reflected in the king's mind and produced various means of strengthening power, one of which was karawitan. In other words, the cosmology concept entered the realm of the Javanese traditional music that developed in the karaton, namely karawitan. The gamelan's function in legitimating the king's power was not only seen in the gamelan instruments as heirlooms. When the gamelan is played, the Javanese gamelan's music system is also related to strengthening the king's position. In Javanese karawitan, various structures are found, for example, the lancaran structure, the ketawang structure, the ladrang structure, the merong structure, and the inggah structure. The structure is determined by the interweaving of the beating patterns of the structural ricikan (instruments), namely the kethuk, kenong, and kempul. Structural ricikan are ricikan whose beating patterns (play) link with one another to form a structure according to the gendhing form; in other words, the structure of the beating patterns of the structural ricikan determines, or is determined by, the gendhing form (Supanggah 1984, 6).
The Javanese Karawitan knows a group of ricikan whose task is to play the main melody, namely the ricikan balungan group. The ricikan balungan category includes saron barung, saron penerus, demung, slenthem, and bonang panembung. These ricikans play the main melody, which can then be identified with a dot. When gendhing runs for a certain amount of time, the structural ricikan weaves a wasp pattern to form a particular structure. The points where the structural ricikan comes into play can hereafter be called salient points. The salient points on the several horizontal lines, in the end, meet at the same place. The greater the number of horizontal lines that meet at a certain point, the more influential the position of that point is in the formal structure. In this case, the gong's seleh (sound) is the central point used as the center or estuary of all instrument playing.
Hierarchically, the gong is the most salient point, namely the meeting of all parts of the kenong points contained in the gendhing structure. Kenong is a meeting between kethuk and kempul. This analysis reveals that in the Javanese gendhing structure, there are two forms of the cycle: a large cycle and a smaller cycle. The cycle that occurs in the gendhing structure is the repetition of large cycle forms and smaller cycle encounters in the form of larger cycles, or cycles in cycles. This gamelan music system ultimately leads to strengthening cosmological concepts and king position (Becker 1980, 26-29). It is said to strengthen cosmological concepts and the position of a king because such a music system was built with stringent rules according to Javanese musical karawitan principles such as pathet, structure, form, and laras principles.
Gong's seleh, which is the most crucial point in the Javanese music system, is personified as a king because it occupies the highest hierarchy. Meanwhile, the small dots that form small cycles can be personified as people (kawula). The presentation of Javanese gendhing reflects the relationship between the king and the people or manunggaling kawula gusti. The closeness between the king and the people can be seen from the length of the gendhing composition. Thus, not all people can be close to the king even though the closeness between each subject and the king has a different degree. However, the relationship between the king and the people is well established. In general, the relationship between the king and the people can be formulated in three main concepts: the relationship as a person, the relationship between the abdi dalem and the bendara (lord), and the relationship between officials and the people (Moertono 1985, 32).
The personal relationship between the king and his people does not necessarily eliminate the boundaries between king and kawula. The relationship is still accompanied by feelings of mutual respect and love. The relationship between a king as a bendara and a kawula with the status of abdi dalem implies a predestined order, whether one is born as abdi dalem or as a bendara or lord. The result is that humans have no other choice but to carry out their obligations as determined by fate. It is this that ultimately resulted in a model of government. The third relationship is that the king is the ruler of the whole kingdom, including the kawula. As an official, from the side of wisdom, a king cares for his people. Thus, in fact, the rulers have an attitude of protective superiority, while those who are governed have an attitude of sincere devotion.
The relationship between the king and the people can be said to be two sides that cannot stand alone. The close relationship between the king and the people means that both must be able to position themselves properly and realize their respective positions with all the consequences. The relationship between the king and the people (kawula) in the context of the Karaton Surakarta life can be identified with the relationship between the king and abdi dalem niyaga. The position of abdi dalem niyaga is the subordinate who is in charge when the palace holds a ceremony and involves the gamelan's sounding at the ceremony. Besides, on certain days, they are obliged to come at the time of pisowanan padintenan. Their main task is to sound the gamelan according to the ceremonial needs.
Abdi dalem niyaga are the most important element in the gamelan playing activity in the karaton. A gendhing is realized only when the gamelan is played by the abdi dalem niyaga. The sense of taste (rasa) raised is also very dependent on the abdi dalem niyaga who sound it. In other words, the taste raised depends on how the abdi dalem niyaga express it through the ricikan beating. The karawitan tradition does not provide details on the provisions made by the composers regarding the ricikan beating patterns. The role of the abdi dalem niyaga is an essential factor in realizing the sense of a gendhing through the concept of garap (working). Garap is an action that concerns imagination, interpretation, and creativity in traditional arts. Working in a karawitan context can be interpreted as an activity or creative action in interpreting a gendhing. How the shape and feel of a gendhing appear depends on how it has been worked on; this garap is what determines the taste of a gendhing. Of course, creative action or work requires specific devices called tools or elements of garap. The elements of garap include ricikan, gendhing and balungan gendhing, vocabulary (cengkok and wiledan), and a pengrawit (gamelan musician) or karawitan artist (Supanggah 2009, 4-6).
When interpreting or working on a gendhing, every pengrawit, with his imagination, all his provisions, and his interpretative skills, has the freedom to translate the results of the interpretation into the ricikan play that is his responsibility. However, the freedom to translate is still framed by unwritten rules called conventions. The importance of the pengrawit's role in realizing the feeling of a gendhing places him as the most important element in the activity of working on a gendhing. In the context of the Karaton's karawitan, the role of its actor or artist, namely the abdi dalem niyaga, is essential. A gendhing can function as a means of legitimating the king's power and authority when the abdi dalem niyaga positions himself as kawula ingabdekaken mring ratu, namely a person who devotes himself entirely to the king. The abdi dalem niyaga's dedication strengthens the king's power by making the karaton the central source of musical work.
Working on the karaton's karawitan, which was initiated by the abdi dalem niyaga, is proof of the abdi dalem niyaga's responsibility in carrying out his obligations. The karaton people call the implementation of the duties and obligations of the abdi dalem with the expression netepi gawa gawene abdi dalem (carrying out obligations as abdi dalem). When abdi dalem carry out their duties and obligations, it is ensured that there is a harmonious relationship between the king and abdi dalem. A harmonious relationship between the king and abdi dalem can be interpreted as a well-established relationship between the king and the people. A well-established relationship between the king and the people has implications for the success of strengthening power through abdi dalem. As a continuation of the Mataram Dynasty, which adheres to the religious magic concept, Karaton Surakarta used abdi dalem as a means of enforcing power through a harmonious relationship between those who govern and are governed and the relationship between the king and his people even though the king, the people, and abdi dalem are in different classes.
Karaton Surakarta's karawitan, which is run by the abdi dalem niyaga, requires the figure of a king as patron and protector in carrying out their duties and obligations. The relationship between the king and the karaton's karawitan carried out by the abdi dalem niyaga can be identified as class harmony. The king and the karawitan carried out by the abdi dalem niyaga are bound by regulations, or angger-angger, that have been agreed upon in Karaton Surakarta. This means that the king and the abdi dalem niyaga belong to altogether different classes or positions. The king is the highest leader who holds sole power in Karaton Surakarta, while the abdi dalem niyaga are the king's employees. The difference in strata between the king and the abdi dalem niyaga gave rise to a relationship called class harmony. There is an interdependent relationship between the two, even though the karawitan is run by the abdi dalem niyaga and the king is at a different stratum or class. Class harmony occurs because the king and the abdi dalem niyaga each have a vital function in the customary life of the karaton. The king needs a means of legitimacy through karawitan, in the form of the creation and sounding of gendhing by the abdi dalem niyaga. On the other hand, the abdi dalem niyaga need the king as a patron and a creative space in Karaton Surakarta.
CONCLUSIONS
The existence of karawitan in the Karaton Surakarta was not merely a matter of enlivening the atmosphere with the sounds produced by the gamelan. More than that, the role of the gamelan was vital. Karawitan acted as a means of political articulation for Paku Buwana X through the creation and sounding of Gendhing Panembrama. The existence of this karawitan, marked by the creativity of Gendhing Panembrama's creation, was essentially also one of the means for Paku Buwana X to state and confirm his power. Besides, the creation of Gendhing Panembrama was a manifestation of Paku Buwana X's sovereign consciousness. The pressure and coercion of the Dutch East Indies government caused Paku Buwana X to find a way to keep his sovereignty and power visible. Gendhing Panembrama thus also became one of Paku Buwana X's tools for fighting against the Dutch East Indies government.
Since it was physically impossible to fight against the Dutch East Indies government, Paku Buwana X empowered Karaton Surakarta's karawitan as a symbol of his resistance. Moreover, the creation and sounding of Gendhing Panembrama was a symbol of affirming the legitimacy of Paku Buwana X's power.
|
2021-09-01T15:24:42.490Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "db47b8230b6afc6ddabde5469ac52f1db0569061",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/harmonia/article/download/29099/11693",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b5adf6bdc945fa4a0210f440011a291d3e76bb7f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
}
|
2644955
|
pes2o/s2orc
|
v3-fos-license
|
Positive selection in the hemagglutinin-neuraminidase gene of Newcastle disease virus and its effect on vaccine efficacy
Background To investigate the relationship between the selective pressure and the sequence variation of the hemagglutinin-neuraminidase (HN) protein, we performed the positive selection analysis by estimating the ratio of non-synonymous to synonymous substitutions with 132 complete HN gene sequences of Newcastle disease viruses (NDVs) isolated in China. Results The PAML software applying a maximum likelihood method was used for the analysis and three sites (residues 266, 347 and 540) in the HN protein were identified as being under positive selection. Codon 347 was located exactly in a recognized antigenic determinant (residues 345-353) and codon 266 in a predicted linear B-cell epitope. Substitutions at codon 540 contributed to the N-linked glycosylation potential of residue 538. To further evaluate the effect of positively selected sites on the vaccine efficacy, we constructed two recombinant fowlpox viruses rFPV-JS6HN and rFPV-LaSHN, expressing the HN proteins from a genotype VII field isolate Go/JS6/05 (with A266, K347 and A540) and vaccine strain La Sota (with V266, E347 and T540), respectively. Two groups of SPF chickens, 18 each, were vaccinated with the two recombinant fowlpox viruses and challenged by Go/JS6/05 at 3 weeks post-immunization. The results showed that rFPV-JS6HN could elicit more effective immunity against the prevalent virus infection than rFPV-LaSHN in terms of reducing virus shedding. Conclusions The analysis of positively selected codons and their effect on the vaccine efficacy indicated that the selective pressure on the HN protein can induce antigenic variation, and new vaccine to control the current ND epidemics should be developed.
Background
Newcastle disease (ND) is notorious for its devastations to the world poultry industry and listed as one of the notifiable terrestrial animal diseases by the World Organization for Animal Health (Office International des Epizooties). The causative agent, Newcastle disease virus (NDV), also known as avian paramyxovirus serotype 1, is a member of the family Paramyxoviridae [1]. The virus genome is a non-segmented, single-strand, negative sense RNA which codes for six major proteins including nucleocapsid protein (NP), phosphoprotein (P), matrix protein (M), fusion protein (F), hemagglutinin-neuraminidase (HN), and large RNA-directed RNA polymerase (L), in the order from the 3' to 5' terminus [2]. Since its emergence in fowls in 1926, NDV has undergone substantial genetic evolution and has developed into several distinct genotypes (I to IX) [3,4]. Among these, genotype VII is considered to be responsible for the severe outbreaks in Western Europe [5], South Africa and Southern Europe [3], and East Asia [6,7] in the 1990s. Presently, the genotype VII NDV is still prevalent in China [4,[8][9][10].
Although the cleavability of F protein is pivotal to NDV pathogenicity [11,12], recent studies have shown that HN protein also contributes to tissue tropism and virulence [13]. HN is an important immunoprotective glycoprotein on the envelope of ND virions and responsible for essential viral functions, such as binding to sialic acid-containing cell receptors, facilitating the fusion activity of the F protein and removing sialic acid to release progeny virus particles [14]. Despite the critical role that HN protein plays in NDV immunity and pathogenesis, the positive selection pressure acting on HN during the viral evolution has not been well analyzed.
The ratio of non-synonymous (d N ) to synonymous (d S ) substitutions (ω = d N /d S ) provides an important means for studying the selective pressure at the protein level, with ω = 1 denoting neutral mutations, ω < 1 purifying selection, and ω > 1 diversifying positive selection. As a high proportion of amino acids in many proteins is often largely invariable (with ω close to 0) due to strong structural and functional constraints, approaches conferring an average ω over all codons across the gene are not sensitive enough to detect positive selection [15]. The program PAML [16,17], which applies a maximum likelihood (ML) criterion and a few simple models allowing for heterogeneous ω ratios among sites, has been considered an efficient integrated method to estimate positive selection and has been commonly used to study virus evolution [18][19][20][21]. In this paper, the selective pressure on NDV HN protein was examined using 132 complete HN sequences (Chinese isolates), including 106 retrieved from GenBank (up to 14 April, 2009) and the other 26 obtained from field isolates. Based on the analysis, three codons of HN were identified under positive selection and their potential effect on the routine vaccine efficacy was then evaluated.
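To make the ω statistic above concrete, the sketch below (Python, not the codeml machinery actually used in this study) computes pairwise dN, dS and ω from counts of synonymous and non-synonymous sites and differences, applying a Jukes-Cantor correction for multiple hits; the counts are hypothetical placeholders, not values derived from the HN data set.

```python
import math

def jukes_cantor(p):
    """Correct an observed proportion of differences p for multiple substitutions."""
    if p >= 0.75:
        raise ValueError("proportion too large for the Jukes-Cantor correction")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def omega(n_sites, s_sites, n_diffs, s_diffs):
    """Return (dN, dS, omega) from counts of non-synonymous/synonymous sites
    and differences between two aligned coding sequences."""
    dN = jukes_cantor(n_diffs / n_sites)
    dS = jukes_cantor(s_diffs / s_sites)
    return dN, dS, dN / dS

# Hypothetical counts for a pair of aligned HN sequences (illustration only).
dN, dS, w = omega(n_sites=1250.4, s_sites=442.6, n_diffs=31, s_diffs=48)
print(f"dN = {dN:.4f}, dS = {dS:.4f}, omega = {w:.3f}")  # omega < 1 here -> purifying selection
```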
Viruses
Four pigeon isolates: NDV03-018, NDV03-044, NDV05-028 and NDV05-029 [22], were kindly provided by Dr. Zhiliang Wang (China Animal Health and Epidemiology Center). Two chicken isolates, QH-1/79 and QH-4/85 [23], were obtained from Dr. Dianjun Cao (Harbin Veterinary Research Institute, Chinese Academy of Agricultural Sciences). Twenty field strains were isolated from diseased chicken and goose flocks in China during 2005-2006. All of these viruses were subjected to three rounds of plaque purification in chick embryo fibroblast (CEF) monolayers and subsequently propagated in 10-day-old specific pathogen free (SPF) chicken embryos. Infective allantoic fluid containing virus stocks was aliquoted and stored at -80°C before use.
RNA preparation, PCR, and sequencing
Viral RNAs were extracted directly from the allantoic fluid with the Trizol LS reagent (Invitrogen, Carlsbad, CA), following the manufacturer's instructions. Reverse transcription (RT) was conducted with random primers, and PCR was performed with a pair of primers (sense: 5'-CTTCACAACATCCGTTCTACC-3', antisense: 5'-ACCTTCCGAGTTTTATCATTCT-3') to amplify the full-length HN gene of NDV. The PCR products were purified with a DNA purification kit (QIAGEN, Hilden, Germany) and sequenced directly using the ABI PRISM BigDye Terminator v3.1 Cycle Sequencing kit (Applied Biosystems, Foster City, CA).
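As an aside, a quick in silico sanity check of the published primer pair can be sketched with Biopython; the nearest-neighbour melting-temperature call and its default parameters are assumptions of this illustration, not part of the published protocol.

```python
from Bio.SeqUtils import MeltingTemp as mt

# Primer pair reported above for full-length HN amplification (sense / antisense).
primers = {
    "HN-F": "CTTCACAACATCCGTTCTACC",
    "HN-R": "ACCTTCCGAGTTTTATCATTCT",
}

for name, seq in primers.items():
    gc = 100.0 * sum(seq.count(base) for base in "GC") / len(seq)
    tm = mt.Tm_NN(seq)  # nearest-neighbour Tm estimate with Biopython defaults
    print(f"{name}: {len(seq)} nt, GC {gc:.1f}%, Tm ~{tm:.1f} C")
```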
Sequence information and phylogenetic analysis
GenBank accession numbers assigned to the 26 strains characterized in the present study were as follows: FJ751918, FJ751919, FJ766528, EF666110, GQ338309-GQ338311, and EU044809-EU044827. In addition, the other 106 full-length HN sequences of NDV isolates from China (vaccine strains of La Sota and Mukteswar were included as they are used extensively in poultry flocks, while recombinant strains were excluded to ensure the accuracy of detecting positive selection at amino acid sites [24]) were retrieved directly from GenBank and their accession numbers were listed in Table 1. All the 132 HN sequences were edited and aligned with the Lasergene software (DNASTAR Inc., Madison, WI). The GTR (general time reversible) + I (invariable sites) + G (gamma distribution) evolutionary model was selected as the optimal nucleotide substitution model with the program Modeltest 3.7 [25]. Phylogenetic tree was then constructed by employing the ML method implemented in PAUP* version 4.0b [26] and neighbor-joining (NJ) method in MEGA version 4.0 [27]. The robustness of the statistical support for the tree branch was evaluated by 1000 bootstrap replicates. The online server, BepiPred 1.0 [28], was used to predict the position of linear B-cell epitopes of all the HN sequences.
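For readers who want a lightweight, scriptable approximation of the tree-building step, the hedged sketch below uses Biopython's distance-based NJ constructor and bootstrap consensus. It does not reproduce the GTR+I+G maximum-likelihood analysis performed in PAUP*; the alignment file name is a placeholder and only 100 bootstrap replicates are drawn instead of the study's 1000.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Aligned full-length HN coding sequences (placeholder path, FASTA alignment).
alignment = AlignIO.read("hn_sequences_aligned.fasta", "fasta")

# Distance-based NJ tree; 'identity' is a simple stand-in for the GTR+I+G model,
# which distance methods cannot reproduce exactly.
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")
nj_tree = constructor.build_tree(alignment)

# Majority-rule consensus over bootstrap pseudo-replicates to attach support values.
consensus_tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)
Phylo.write(consensus_tree, "hn_nj_bootstrap.nwk", "newick")
```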
Positive selection detection
To estimate the selective constraints on the HN protein, the codeml program of the PAML package (version 4) [16] was utilized to calculate the site-to-site variation in ω. Two nested site-specific models, consisting of a neutral model that does not allow positive selection (ω≤1) and an alternative model that permits positive selection (ω > 1), were compared. As recommended [15], the following models were used: M0 (one-ratio) v. M3 (discrete) and M7 (beta) v. M8 (beta & ω). M0 assumes a constant ω for all codons whereas M3 allows for discrete classes of sites with different ω ratios. M7 supposes a beta distribution with 10 categories of ω over sites, each corresponding to a unique ω value that is always less than 1 while M8 has an extra category with ω > 1. Then the log likelihood values for each pair of the above nested models were compared by a likelihood ratio test (LRT) [15,17], in order to assess whether the model allowing for positive selection is significantly more suitable for the data. Finally, the Bayes empirical Bayes (BEB) procedure [29] was used to infer the particular codons under positive selection and to calculate their posterior probabilities.
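The likelihood-ratio test itself is simple arithmetic once codeml has reported the log likelihoods of the nested models: twice the difference in log likelihood is compared against a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters (2 for M7 versus M8). The sketch below illustrates this with SciPy, using placeholder log-likelihood values rather than those estimated from the 132 HN sequences.

```python
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df):
    """Likelihood-ratio test between nested codeml site models.
    Returns the 2*delta-lnL statistic and its chi-square p-value."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# Hypothetical log likelihoods read from codeml output files (placeholders only).
stat, p = lrt(lnL_null=-12250.7, lnL_alt=-12238.1, df=2)  # M7 vs M8, df = 2
print(f"2*dlnL = {stat:.2f}, p = {p:.3g}")
if p < 0.05:
    print("M8 fits significantly better -> evidence for sites with omega > 1")
```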
Animal experiment
To further investigate the effect of positively selected sites on vaccine efficacy against the prevalent NDVs, Go/JS6/05 (a field NDV strain) and La Sota (the most widely used vaccine strain in China) were chosen to construct corresponding recombinant fowlpox viruses (rFPVs) expressing each HN gene, based on the transfer vector pP12LS developed by Sun et al [30]. The expression was identified by indirect immunofluorescence assay (IFA) in secondary CEF cultures using anti-NDV polyclonal antibody as previously described [31], and the levels of HN expression were further compared between the two generated rFPVs by flow cytometry [32] on DF-1 cells (a stable cell line of CEF) at a multiplicity of infection (MOI) of 5. Subsequently, two groups of five-day-old SPF White Leghorn chickens (18 birds/group, Beijing Merial Vital Laboratory Animal Technology, Beijing, China) were immunized respectively with the above two rFPVs at a dose of 1 × 10^4 PFU. A third group served as a mock-vaccinated control. Three weeks later, all chickens were challenged oculonasally with 100 μL of PBS-diluted allantoic fluid containing 1 × 10^5 EID50 of Go/JS6/05. Tracheal and cloacal swabs were collected on days 3, 5 and 7 post-challenge (p.c.). Furthermore, six chickens from each vaccinated group were sacrificed humanely on day 5 p.c., and tissue samples including liver, brain, spleen, kidney, trachea and lung were collected. The swabs were immersed in PBS with antibiotics (8000 U/mL ampicillin, 5 mg/mL streptomycin and kanamycin, pH 7.2), and stored at -80°C until analyzed. The recovery of the challenge virus in these swabs or organ samples was confirmed by inoculation into embryonated chicken eggs. All animal work was approved by the Jiangsu Administrative Committee for Laboratory Animals (Permission number: SYXK-SU-2007-0005).
Sequence analysis
The full coding regions of the 132 HN genes analyzed in this study exhibited diverse phylogenetic phenotypes in Chinese poultry flocks, covering six of the nine recognized genotypes (II, III, VI, VII, VIII and IX), with the overwhelming majority (113/132) belonging to genotype VII (Additional file 1, Table S1). In addition, the phylogenetic tree (Figure 1) showed that our 26 sequences belonged to four different genotypes as follows: III (Ch/JS7/05 and Go/JS9/05), VI (NDV05-028 and NDV05-029), VIII (QH-1/79 and QH-4/85), while the remaining 20 fell into genotype VII.
Detection of positive selection
An ML method implemented in the software package PAML was used to identify any positive selection on the HN protein of NDV. The log likelihood differences between M0 and M3, as well as between M7 and M8, were found to be significant. Models that permitted positive selection showed a better fit to the data and contained a class of codons with a non-synonymous to synonymous substitution ratio greater than one (ω > 1), indicating the existence of positive selection (Table 2). A further analysis of the amino acids most likely responsible for the detected non-neutral pattern revealed that three codons, 266, 347 and 540, were under positive selection as identified by both M3 and M8, with posterior probabilities over 95% for residues 266 and 347, and over 90% for residue 540. As the M0-M3 comparison is more a test of heterogeneity in the ω value among sites and not actually a test of positive selection [33], only the results obtained by M8 were investigated further.
Amino acid variations of positively selected codons
As shown in Table 3, each of the three positively selected codons identified by M8 exhibited diversity in amino acid substitutions, which could induce variations in hydrophobicity or charge (Additional file 1, Table S1).
Generation of rFPVs expressing HN genes
Two rFPVs, rFPV-LaSHN and rFPV-JS6HN, respectively expressing the HN proteins of vaccine strain La Sota (with V266, E347 and T540) and genotype VII field isolate JS-06/05 (with A266, K347 and A540), were generated by homologous recombination between the corresponding transfer plasmid and the wild-type parental FPV. Fluorescence was readily observed in CEF cells transfected with either rFPV-LaSHN (Figure 2A) or rFPV-JS6HN, confirming HN expression by both recombinants.
Protective efficacies of rFPV-JS6HN and rFPV-LaSHN
On day 21 after immunization, effective antibody responses were induced in both rFPV vaccinated groups, with higher hemagglutinin-inhibition (HI) titers to the homologous antigen, and serum from birds inoculated with rFPV-LaSHN reacted poorly with Go/JS6/05 (Table 4). On day 5 p.c., virus replication in different visceral organs of rFPV-JS6HN or rFPV-LaSHN vaccinated chickens was examined by inoculation into 10-day-old embryonated chicken eggs. As shown in Table 4, the frequencies of virus isolation in the brain, spleen and lung from rFPV-LaSHN group were higher than that of the rFPV-JS6HN, though no statistically significant difference was observed.
Each of the rFPV-JS6HN and rFPV-LaSHN vaccinated chickens was fully protected against mortality after challenge, whereas unvaccinated birds died within 5 days p.c.. On day 3 p.c., virus shedding from the cloaca and trachea showed that both the rFPV vaccines remarkably decreased the level of virus excretion from cloaca, and that the rFPV-JS6HN group could significantly reduce the virus recovery rates from trachea when compared with the rFPV-LaSHN group. Moreover, on days 5 and 7 p.c., the frequencies of virus isolation from cloaca in rFPV-JS6HN vaccinated birds were much lower than that in rFPV-LaSHN immunized fowls (Table 5).
Discussion
Positive selection is an evolutionary process that can drive the fixation of emerging advantageous mutations in the population at higher frequencies compared to the wild-type allele [34]. Therefore, identifying proteins or protein domains that experience adaptive selection will improve the understanding of their genomic functions and the recognition of genetic variation that leads to phenotypic diversity [35].
Observations from previous genetic and antigenic studies of viruses such as FMDV (foot-and-mouth disease virus) [18], HIV-1 (human immunodeficiency virus type 1) [19], RHDV (rabbit hemorrhagic disease virus) [20] and influenza B virus [21] have indicated that signatures of positive selection are generally functionally important and/or associated with antigenicity. To date, seven antigenic determinants that form a continuum on the HN protein have been characterized by a panel of monoclonal antibodies (mAbs) against the HN of the velogenic Australia-Victoria/32 (AV) strain, including amino acid positions 193, 194, 201, 263, 287, 321, 332, 333, 345, 347, 350, 353, 356, 494, 513, 514, 516, 521 and 569 [36,37]. One of our positively selected sites, codon 347, was located exactly within those defined epitopes. In the present study, both La Sota and Mukteswar, which are widely used vaccine strains in China, have a glutamic acid (E) at codon 347, in contrast to the lysine (K) substitution, which results in the opposite residual charge and occurs exclusively in genotype VII viruses (accession numbers in italic in Table 1). Furthermore, with reference to the recent work of Cho et al. [38] and Hu et al. [39], it is reasonable to postulate that the emergence of the E347K substitution might be closely related to host immune pressure. However, codon 266 was not included in the aforementioned antigenic sites [36,37]; instead, it was involved in a predicted linear B-cell epitope that also contained the epitope residue 263, suggesting that site 266 may lie in an antigenic region yet to be recognized.
N-linked glycosylation, one of the most common forms of protein post-translational modification, is known to be correlated with viral infectivity and immune escape [40]. There are six potential N-glycosylation sites (amino acids 119, 341, 433, 481, 508 and 538) in the HN protein of the AV strain [41]. In our analysis, positive selection was detected at codon 540, which comprised three different amino acids: alanine (A), valine (V) and threonine (T). Residue 538 was conserved as asparagine (N) in all 132 HN sequences and tended to be a putative N-glycosylation site if T was present at site 540. However, the vast majority of prevalent strains possessed A or V at codon 540 (Additional file 1, Table S1), which would abolish the potential for N538 to be glycosylated. The exact consequence of the resulting loss of glycosylation at site 540 remains unknown and needs to be further explored. Compared to vaccine strain La Sota, most genotype VII NDV isolates possessed different amino acids at the three identified positively selected sites. To further evaluate the effect of those sites on the vaccine protective efficacy, Go/JS6/05 was chosen together with La Sota for the recombinant fowlpox-virus construction. Before challenge, serum collected from the rFPV-LaSHN immunized chickens displayed lower HI titers to Go/JS6/05 than to La Sota (Table 4), which may suggest that substitutions at the positively selected sites are partially responsible for the antigenic variation between the two HN proteins. After challenge, virus shedding results showed that rFPV-JS6HN could prevent the excretion of the challenge virus more efficiently than rFPV-LaSHN, indicating that the positively selected sites on the HN protein could affect the vaccine's immune efficacy against prevalent NDV infection.
Although an intensive vaccination program against ND has been executed in China in the last few decades, epidemic infections with velogenic genotype VII NDV in vaccinated birds are still frequently reported in recent years [4,[8][9][10]. Currently, the most extensively used vaccine strains, such as La Sota (genotype II) and Mukteswar (genotype III), were isolated and characterized in the 1940s and belonged to the "early" genotypes (I-IV and IX), which have evident amino acid sequence divergence from the "late" ones (V-VII), especially genotype VII [42]. The results in this study suggest that positive selection may play a role in the formation of such differentiation and even induce antigenic variations compared with the vaccine strains. Therefore, new vaccine to better control the ND epizootics of prevalent NDV strains carrying novel variations at identified positively selected sites should be developed to meet the challenge.
Additional material
Additional file 1: Table S1: Background information of the 132 HN sequences investigated in the study
|
2014-10-01T00:00:00.000Z
|
2011-03-31T00:00:00.000
|
{
"year": 2011,
"sha1": "1c41e30de4e0c6c71087f6bfec96db366de320fc",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-8-150",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c41e30de4e0c6c71087f6bfec96db366de320fc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
6100580
|
pes2o/s2orc
|
v3-fos-license
|
Preventable effect of L-threonate, an ascorbate metabolite, on androgen-driven balding via repression of dihydrotestosterone-induced dickkopf-1 expression in human hair dermal papilla cells
In a previous study, we recently claimed that dihydrotestosterone (DHT)-inducible dickkopf-1 (DKK-1) expression is one of the key factors involved in androgen-potentiated balding. We also demonstrated that L-ascorbic acid 2-phosphate (Asc 2-P) represses DHT-induced DKK-1 expression in cultured dermal papilla cells (DPCs). Here, we investigated whether or not L-threonate could attenuate DHT-induced DKK-1 expression. We observed via RT-PCR analysis and enzyme-linked immunosorbent assay that DHT-induced DKK-1 expression was attenuated in the presence of L-threonate. We also found that DHT-induced activation of DKK-1 promoter activity was significantly repressed by L-threonate. Moreover, a co-culture system featuring outer root sheath (ORS) keratinocytes and DPCs showed that DHT inhibited the growth of ORS cells, which was then significantly reversed by L-threonate. Collectively, these results indicate that L-threonate inhibited DKK-1 expression in DPCs and therefore is a good treatment for the prevention of androgen-driven balding. [BMB reports 2010; 43 (10): 688-692]
INTRODUCTION
The dermal papilla (DP) and dermal sheath of a mammalian hair follicle are derived from the mesenchyme. Hair follicles also contain epithelial cells in the outer root sheath (ORS), inner root sheath, matrix, and hair shaft that are derived from the epithelium (1). Reciprocal interactions between the epithelium and mesenchyme are essential for postnatal hair growth (2). The DP is known to play a key role in the regulation of hair growth and is encapsulated by the overlying follicular keratinocytes during hair growth period. Factors from the DP are believed to stimulate proliferation and differentiation of follicular keratinocytes into the hair shaft (3).
Male-pattern baldness (MPB) is the most common type of hair loss in men. Although its molecular pathogenic mechanism is not clear, dihydrotestosterone (DHT)-dependence has been well demonstrated in MPB (4, 5). Further, the treatment effects of finasteride, a selective inhibitor of type II 5α-reductase (5α-R II) that converts testosterone to DHT, support DHT-dependence in MPB (6). Circulating androgens such as DHT enter the follicle via capillaries in the DP, bind to androgen receptor (AR) within dermal papilla cells (DPCs), and then activate or repress target genes (7). Recent studies suggest that DHT-driven release of autocrine and paracrine factors from DPCs may be the key to androgen-potentiated balding (8-10).
We recently found that dickkopf 1 (DKK-1) is one of the most upregulated genes in balding DPCs (11). DKK-1 encodes a potent and specific endogenously-secreted Wnt antagonist that binds and inhibits low-density lipoprotein (LDL) receptor-related protein co-receptors that are involved in canonical Wnt signaling during hair induction and growth (12)(13)(14). Based on the finding that DHT-inducible DKK-1 expression in balding DPCs causes apoptosis in follicular keratinocytes, we claimed that DKK-1 is one of the key factors involved in androgen-potentiated balding (11).
L-ascorbic acid 2-phosphate (Asc 2-P) liberates L-ascorbic acid (AsA) via alkaline phosphatase present on the plasma membrane of various kinds of cells (15). This is followed by the incorporation of AsA into the cells. Very recently, we demonstrated that Asc 2-P represses DHT-induced DKK-1 expression in cultured DPCs of human hair follicles (16). In this study, we first investigated whether or not L-threonate, a metabolite of Asc 2-P, could attenuate DHT-induced DKK-1 expression in cultured DPCs by RT-PCR and ELISA. We next examined whether or not L-threonate could reverse the growth inhibitory role of DHT-inducible DKK-1 in follicular ORS keratinocytes using an in vitro co-culture system.
L-threonate represses DHT-induced DKK-1 expression
Consistent with our previous report (11), we observed by RT-PCR analysis that 100 nM DHT induced DKK-1 mRNA expression (Fig. 1A, compare lanes 1 and 2). When L-threonate was added together with DHT, DHT-induced DKK-1 mRNA expression in DPCs was significantly attenuated (Fig. 1A, compare lane 2 with lanes 3 and 4). We next measured the concentration of DKK-1 in conditioned medium using ELISA. The mean concentration of DKK-1 was 11.49 ng/ml in the presence of 100 nM DHT and 5.25 ng/ml in the absence of DHT, demonstrating upregulation of DKK-1 in response to DHT (Fig. 1B, compare lanes 1 and 2). When 0.25 and 1 mM L-threonate was added together with DHT, the mean amount of DKK-1 was reduced to 5.03 and 5.62 ng/ml, respectively, demonstrating that DHT-induced DKK-1 secretion was repressed by L-threonate in DPCs (Fig. 1B, compare lane 2 with lanes 3 and 4).
L-threonate represses DHT-induced activation of DKK-1 promoter activity
A pGL3-DKK-1 promoter plasmid, in which luciferase reporter expression is driven by the DKK-1 promoter and thus reflects its activity, was constructed and used to further confirm the repression of DHT-induced DKK-1 expression by L-threonate. We found that DKK-1 promoter activity was increased by DHT treatment (Fig. 2, compare lanes 1 and 2). When L-threonate was added together with DHT, the DHT-induced activation of luciferase activity in DPCs was significantly repressed (Fig. 2, compare lane 2 with lanes 3 and 4).
L-threonate attenuates DHT-induced growth inhibition of co-cultured keratinocytes
A co-culture system employing DPCs and keratinocytes has been used previously to analyze epithelial-mesenchymal interactions.
|
2018-04-03T03:30:55.772Z
|
2010-10-31T00:00:00.000
|
{
"year": 2010,
"sha1": "75d59965fe11b2b8bc2717db7d93211ee6d4c0f5",
"oa_license": "CCBYNC",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201030859416457&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "75d59965fe11b2b8bc2717db7d93211ee6d4c0f5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
245726257
|
pes2o/s2orc
|
v3-fos-license
|
Sarcopenia, Precardial Adipose Tissue and High Tumor Volume as Outcome Predictors in Surgically Treated Pleural Mesothelioma
Background: We evaluated the prognostic value of sarcopenia, low precardial adipose tissue (PAT), and high tumor volume in the outcome of surgically treated pleural mesothelioma (PM). Methods: From 2005 to 2020, consecutive surgically treated PM patients having a pre-operative computed tomography (CT) scan were retrospectively included. Sarcopenia was assessed by CT-based parameters measured at the level of the fifth thoracic vertebra (TH5) by excluding fatty infiltration based on CT attenuation. The findings were stratified for gender, and a threshold of the 33rd percentile was set to define sarcopenia. Additionally, tumor volume as well as PAT were measured. The findings were correlated with progression-free survival and long-term mortality. Results: Two hundred seventy-eight PM patients (252 male; 70.2 ± 9 years) were included. The mean progression-free survival was 18.6 ± 12.2 months, and the mean survival time was 23.3 ± 24 months. Progression was associated with chronic obstructive pulmonary disease (COPD) (p < 0.001), tumor stage (p = 0.001), and type of surgery (p = 0.026). Three-year mortality was associated with higher patient age (p = 0.005), presence of COPD (p < 0.001), higher tumor stage (p = 0.015), and higher tumor volume (p < 0.001). Kaplan-Meier statistics showed that sarcopenic patients have a higher three-year mortality (p = 0.002). While there was a negative correlation of progression-free survival and mortality with tumor volume (r = 0.281, p = 0.001 and r = −0.240, p < 0.001, respectively), a correlation with PAT could only be shown for epithelioid PM (p = 0.040). Conclusions: Sarcopenia as well as tumor volume are associated with long-term mortality in surgically treated PM patients. Further, while there was a negative correlation of progression-free survival and mortality with tumor volume, a correlation with PAT could only be shown for epithelioid PM.
Introduction
Pleural mesothelioma (PM) is a malignant and very aggressive cancer of the pleural surface [1]. The PM incidence has risen in the last ten years and predictions indicate that it will continue rising [2]. Patient survival in PM is very poor; even with the most advanced surgical techniques, median survival ranges only from 15 to 22 months [3,4]. Further, due to the radical and extensive nature of surgical techniques employed in PM, patients have a high post-operative morbidity and mortality [5,6]. Thus, thorough patient selection is of utmost importance to only select individuals with a positive predicted outcome and minimal risk of mortality for this extensive surgery.
There are already several clinico-pathological features to predict the outcome of surgery in patients with PM. These variables include age, weight loss, dyspnea, anemia, leukocytosis, thrombocytosis, tumor volume, C-reactive protein (CPR) level, epithelial tumor histology, and white blood cell count [7,8]. To more accurately predict the outcome of patients undergoing surgery in PM, the identification of further clinical and biological markers is the goal of ongoing research.
Cachexia comes with a host of complications and is estimated to be the main cause of death in up to 50% of cancer patients [9]. Cachexia is defined by loss of adipose tissue, lean body mass (LBM) and muscle tissue [10,11]. These parameters are quantifiable, and muscle loss below a certain threshold is called sarcopenia. There have been several studies, including a large meta-analysis, which have shown a significant increase in mortality for cancer patients suffering from sarcopenia [12,13]. However, this correlation has not yet been investigated for patients suffering from PM.
Another aspect of cachexia, as described above, is the loss of adipose tissue. A relatively new but established method of measuring fatty tissue is to estimate the amount of mediastinal or precardial adipose tissue (PAT) in computed tomography scans [14,15]. Studies have shown that PAT can be used as a pre-operative outcome predictor in some surgical treatments [16]. A similar correlation with post-operative outcome in PM patients has not yet been shown.
Another key feature of predicting the outcome of any kind of oncological surgery is the volume of the tumor involved [17][18][19]. There have already been some studies which analyzed the effect of tumor volume as a prognostic marker for the surgical treatment of PM patients [20][21][22].
The purpose of this study was to evaluate whether different CT-derived morphometric measures, such as the 33rd percentile of muscle area at the fifth thoracic vertebra as a surrogate for sarcopenia, precardial adipose tissue, and high tumor volume, can be used to predict the outcome of surgically treated PM patients. Furthermore, the aim of this analysis is to define selection criteria for surgical candidates in PM that can indicate positive outcomes and minimal risk of postoperative mortality more accurately.
Patients
From September 2005 to November 2020, consecutive surgically-treated PM patients having a pre-operative computed tomography (CT) scan were retrospectively included. The study protocol was approved by the institutional review board and local ethics committee (StV 29-2009, EK-ZH 2012-0094; 07.03.2019 EK-ZH 2019-00369) and written informed consent was obtained from all patients. The work has been carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki).
Imaging
All included patients underwent routine preoperative CT on 16 to 64-detector CT units from different vendors at tube voltages of 100 to 140 kVp with or without contrast media injection. Images were reconstructed with a soft tissue convolution kernel at slice thicknesses from 0.75 to 3.0 mm.
Muscle Area and Sarcopenia
Sarcopenia was semiautomatically assessed by CT-based parameters measured at the level of the fifth thoracic vertebra by excluding fatty infiltration using a CT-attenuation threshold of −29 to 150 Hounsfield units (HU). According to the literature [23], sarcopenia was defined as less than the sex-matched 33rd percentile of the respective muscle area: cross-sectional total paraspinal area (TPA), total rotator-cuff area (TRA), and total pectoral area (TPeA). Total muscle area (TMA) was defined as the sum of the former three measurements (Figure 1).
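To illustrate the measurement just described, the hedged Python sketch below shows how a fat-excluded muscle area and a sex-matched 33rd-percentile cut-off could be computed from a segmented axial slice; the variable names, the ROI mask, and the example usage are illustrative assumptions, not the software actually used in this study.

```python
import numpy as np

def muscle_area_mm2(ct_slice_hu, roi_mask, pixel_spacing_mm):
    """Muscle area in one axial CT slice at TH5: keep only voxels inside the
    drawn muscle ROI whose attenuation lies in -29..150 HU, thereby excluding
    fatty infiltration, and convert the voxel count to mm^2."""
    muscle = roi_mask & (ct_slice_hu >= -29) & (ct_slice_hu <= 150)
    return muscle.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]

def sarcopenia_threshold(total_muscle_areas, sexes, sex):
    """Sex-matched 33rd percentile of total muscle area (TPA + TRA + TPeA)
    used here as the sarcopenia cut-off."""
    values = [area for area, s in zip(total_muscle_areas, sexes) if s == sex]
    return np.percentile(values, 33)

# Hypothetical usage: a patient is classified as sarcopenic when their TMA
# falls below the sex-matched cohort threshold.
# is_sarcopenic = tma_patient < sarcopenia_threshold(cohort_tma, cohort_sex, "m")
```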
Long-Term Outcome
As long-term outcomes after surgically treated PM, overall mortality and tumor progression were assessed. Progression was defined as tumor recurrence, tumor progression or death.
Statistical Analysis
Continuous variables were expressed as mean ± SD, and categorical variables were expressed as frequencies or percentages. Mann-Whitney and Kruskal-Wallis tests were used to perform group comparisons as appropriate. Further, survival analysis was performed using Kaplan-Meier statistics. Tumor volume as well as anterior mediastinal fat volume were correlated to tumor progression and survival using Pearson correlation (r) and linear logistic regression (R). All statistical analyses were conducted with the statistical software SPSS (SPSS, release 26.0; SPSS, Chicago, IL, USA).
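As a rough illustration of this pipeline in open-source tools (the study itself used SPSS), the sketch below shows a Mann-Whitney group comparison, Kaplan-Meier curves stratified by sarcopenia status with a log-rank comparison (the log-rank test is not named in the text and is included only as the usual companion to Kaplan-Meier analysis), and a Pearson correlation of tumor volume with time to progression. The file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, pearsonr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pm_cohort.csv")  # hypothetical file, one row per patient

# Group comparison of a continuous variable (e.g., total muscle area by sex)
u_stat, p_sex = mannwhitneyu(df.loc[df.sex == "m", "tma_mm2"],
                             df.loc[df.sex == "f", "tma_mm2"])

# Kaplan-Meier survival, stratified by sarcopenia status
kmf = KaplanMeierFitter()
for label, grp in df.groupby("sarcopenic"):
    kmf.fit(grp["survival_months"], event_observed=grp["death"], label=str(label))
    kmf.plot_survival_function()

lr = logrank_test(df.loc[df.sarcopenic == 1, "survival_months"],
                  df.loc[df.sarcopenic == 0, "survival_months"],
                  event_observed_A=df.loc[df.sarcopenic == 1, "death"],
                  event_observed_B=df.loc[df.sarcopenic == 0, "death"])

# Correlation of tumor volume with time to progression
r, p_r = pearsonr(df["tumor_volume_cm3"], df["time_to_progression_months"])
```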
Muscle Area and Sarcopenia
Mean total muscle area was 18,661.9 ± 4517 mm². Muscle area in males was significantly higher than in their female counterparts (19,191.7 mm² vs. 13,526.5 mm²; p < 0.001) (Table 1).
Anterior Mediastinal Fat Volume
Mean anterior mediastinal fat volume was 25.6 ± 15 cm³. Anterior mediastinal fat volume in males was significantly higher than in their female counterparts (27.0 cm³ vs. 13.0 cm³; p = 0.001) (Table 1).
Tumor Volume
Mean tumor volume was 225.3 ± 268 cm³. Tumor volume in males was not significantly different from that in their female counterparts (228.6 cm³ vs. 198.5 cm³; p = 0.590) (Table 1). Tumor volume in male subjects with a history of asbestos exposure was significantly lower than in male patients without a history of asbestos exposure (194.2 cm³ vs. 331.9 cm³; p = 0.035).
Outcome
Mean time to progression was 18.6 months (SD 11.7). Mean survival time was 23.3 months (SD 24).
Discussion
The understanding of body composition in relation to chronic illnesses has been the focus of several studies in recent years. Sarcopenia, high tumor volume, and low precardial adipose tissue have been reported to be associated with poor post-operative outcome in cancer patients [13][14][15]21,24,25]. In this study, we hypothesized that outcome in PM patients is affected by these factors in a similar way, and we could show that sarcopenia, defined as the sex-related 33rd percentile of TPA at the level of the fifth thoracic vertebra, as well as tumor volume, are associated with long-term mortality in surgically treated PM patients. While there was a negative correlation of progression-free survival and mortality with tumor volume, a correlation with PAT could only be shown for epithelioid PM.
Several studies have shown that sarcopenia has a strong connection with overall survival in cancer patients [26][27][28][29][30] and in patients with chronic disease [30][31][32][33][34][35][36]. Martini et al. evaluated the short-term outcome after pneumonectomy in lung cancer patients and found that sarcopenia, defined as the gender-related 33rd percentile of fat-excluded total muscle area at the level of the third lumbar vertebra, was associated with a higher incidence of respiratory failure, ARDS and 30-day mortality [27]. Hsu and Kao et al. [35] analyzed the impact of sarcopenia on patients with chronic liver disease and concluded that sarcopenia was not only a predictor of survival for patients on the liver transplant waitlist, but also correlated with pre- and post-transplant adverse outcomes. Otten et al. [12], who evaluated the value of sarcopenia in different cancer patients, showed that the presence of sarcopenia was a strong predictor of 1-year mortality and correlated nearly as strongly as advanced tumor stage.
In line with these studies, and to the best of our knowledge, we are the first to show that sarcopenia is associated with shorter three-year survival in surgically treated PM patients. While tumor progression was associated with higher tumor stage, the presence of chronic obstructive pulmonary disease, and the type of operation, progression was not associated with the different muscle surface values defining sarcopenia.
At the moment, there is no consensus on the criteria defining sarcopenia [37,38]. Traditionally, walking speed or general musculoskeletal activity was used to assess patient frailty [39,40]. These measures, however, are very difficult if not impossible to objectify. In an attempt to make the evaluation more precise, several other approaches have been evaluated in recent years: Hasselager et al. [38] quantified sarcopenia by evaluating tricipital skin fold thickness and brachial circumference and could show that these measurements predicted poor long-term survival of patients with surgically treated non-small cell lung cancer. Ruby et al. [41] used speed-of-sound ultrasound as a quantitative indicator of muscle loss and fatty muscular degeneration in seniors. A widely used and established approach to determine sarcopenia is the quantification of muscle tissue on cross-sectional images [25][26][27][28]. Some authors propose performing measurements at the third lumbar vertebra, others at the thoracic level [28,38,[42][43][44][45][46]. Additionally, there is some evidence that the level of measurement does not play a major role in the definition: Nemec et al. [43] compared muscle mass at the level of TH12 and TH7 and found a very strong correlation with the muscle mass measured at L3. Swartz et al. [44] showed that muscle mass measurements at the level of the third cervical vertebra correlate very strongly with those measured at the level of the third lumbar vertebra. In our study, we measured the muscle mass according to Fintelmann et al. [47] at the thoracic level, at the height of the fifth thoracic vertebra (TH5). This was motivated by practical reasons, since chest CT is part of the routine workup in all PM patients and therefore all PM patients from our department could be included.
The main interest in recognizing sarcopenia as a risk factor lies in its potential reversibility. Rehabilitation programs, physical therapy and proper nutrition can potentially reverse sarcopenia and have a favorable impact on surgical outcome. Further studies are needed to show whether patients in whom sarcopenia was reversed with preoperative rehabilitation programs have a better outcome than those in whom it was not.
Mediastinal adipose tissue has been shown to be a potential predictor of patient frailty [14,15,48], and studies have shown that PAT can be used as a pre-operative outcome predictor in some surgical treatments [16]. Overall, we did not find evidence to support this claim in PM patients. A correlation of progression-free survival and mortality with PAT could only be shown in a subgroup of patients, namely those with epithelioid PM. One explanation is that it is very difficult to measure PAT accurately in patients with PM, since the tumor is often located in close proximity to the mediastinum or infiltration of mediastinal fat is present. These factors make differentiation of tumor mass from mediastinal fatty tissue potentially difficult and measurements prone to segmentation errors. With a significantly larger patient population, these measurement errors could possibly be minimized and significant results might have been obtained.
Interestingly, while asbestos exposure correlated with worse outcome in terms of death and progression-free survival, male patients with a history of asbestos exposure were found to have lower tumor volume than their male counterparts without asbestos exposure.
As already shown in different malignancies [17,18,49], it was not surprising that in our study cohort tumor volume also correlated well with time to progression and survival. This is in line with Pass et al. [50], who showed that tumor volume is a very good and accurate predictor of overall and progression-free survival in PM patients.
The limitations of this study are as follows. First, we measured muscle mass at TH5. Although the most common assessment method to quantify skeletal muscle area is measurement at the level of the third lumbar vertebra, studies described above have shown that measurements at different locations in the thoracoabdominal region correlate strongly [44]. Second, tissue measurements were performed on contrast- and non-contrast-enhanced scans with the same HU thresholds. Even though this approach might lead to small measurement errors, previous studies have used the same technique with positive results [28]. Third, slice thicknesses in CT images ranged from 0.75 to 3.0 mm. Different slice thicknesses result in different image noise levels and have a different impact on partial volume effects, which can result in measurement errors. The software we used for semiautomatic volume measurements takes different slice thicknesses into account in the volume calculation, and since the threshold used to define sarcopenia was set on the basis of the evaluated cohort, this measurement error plays a minor role in our study. Fourth, measurements were performed by only one reader. Due to the semiautomated process of deriving data with the help of sophisticated computer software, we would expect similar results for additional observers. Fifth, the retrospective nature of the study led to an inhomogeneous patient cohort undergoing different therapy approaches. Sixth, we did not distinguish whether patients underwent surgery with curative or palliative intent. This will have a negative impact on overall survival time and time to progression; however, the impact of PAT and sarcopenia should not be affected.
In conclusion, our study shows that tumor volume and sarcopenia are predictive markers for patient outcome in surgically treated PM. While there was a negative correlation of progression-free survival and mortality with tumor volume, a correlation with PAT could only be shown for epithelioid PM.
Author Contributions: O.G.V.: data curation, formal analysis, investigation, software, visualization, writing-original draft. L.J.: software, data curation, formal analysis, investigation, writing-review and editing. O.L.: formal analysis, investigation, resources, writing-review and editing. C.B.: investigation, software, writing-review and editing. I.O.: conceptualization, methodology, resources, writing-review and editing. T.F.: conceptualization, methodology, project administration, resources, supervision, validation, writing-review and editing. K.M.: conceptualization, data curation, formal analysis, methodology, project administration, supervision, validation, writing-review and editing. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.
Conflicts of Interest:
The authors declare no conflict of interest.
Human umbilical cord mesenchymal stem cell-loaded amniotic membrane for the repair of radial nerve injury
In this study, we loaded human umbilical cord mesenchymal stem cells onto human amniotic membrane with epithelial cells to prepare nerve conduits, i.e., a relatively closed nerve regeneration chamber. After neurolysis, the injured radial nerve was enwrapped with the prepared nerve conduit, which was fixed to the epineurium by sutures, with the cells on the inner surface of the conduit. Simultaneously, 1.0 mL aliquots of human umbilical cord mesenchymal stem cell suspension were injected into the distal and proximal ends of the injured radial nerve at 1.0 cm intervals. A total of 1.75 × 10⁷ cells were seeded on the amniotic membrane. In the control group, patients received only neurolysis. At 12 weeks after cell transplantation, more than 80% of patients exhibited obvious improvements in muscular strength and in touch and pain sensation. In contrast, these improvements were observed in only 55-65% of control patients. At 8 and 12 weeks, muscular electrophysiological function in the region dominated by the injured radial nerve was significantly better in the transplantation group than in the control group. After cell transplantation, no immunological rejection was observed. These findings suggest that human umbilical cord mesenchymal stem cell-loaded amniotic membrane can be used for the repair of radial nerve injury.
INTRODUCTION
Radial nerve injury generally refers to structural and functional impairment, or even loss of nerve continuity, caused by traction, compression or transection, leading to a series of dysfunctions in the region dominated by the injured radial nerve. Functional recovery after peripheral nerve injury is limited by complex pathological processes, slow nerve regeneration, adhesion of the regenerating nerve to surrounding tissue, neuromuscular atrophy and motor end plate degeneration, so clinical therapeutic effects are not satisfactory [1][2][3] .
The current methods of treating radial nerve injury mainly include neurolysis, suturing and nerve transplantation. The first key step after radial nerve injury is to restore the continuity of the nerve trunk in time to avoid neuronal death, actively promote axon regeneration and effectively prevent effector atrophy [4] . At present, for the treatment of peripheral nerve injury, the most common method with the best therapeutic effect is end-to-end anastomosis or nerve autografting, but both treatments have limitations. In end-to-end anastomosis, scar formation during healing and uncertain matching between motor and sensory nerve fibers hinder neurofunctional recovery, and may even lead to stretch injury of the proximal and distal normal nerve tissue. There are many limitations in the use of nerve autografts, including the limited source of nerve grafts, mismatch in size between donor and injured nerve, and loss of function at the donor site. With the development of tissue engineering, peripheral nerve tissue engineering has been considered one of the most effective approaches to address these limitations.
Human umbilical cord mesenchymal stem cells, one of the common seed cells used in peripheral nerve tissue engineering, have strong self-renewal and multi-differentiation potential and can be induced to differentiate into nerve cells in vitro [5] . Human umbilical cord mesenchymal stem cells have a rich source, can be easily harvested with little risk of contamination, high purity and low immunogenicity, and can tolerate HLA mismatch to a larger extent [5][6][7] . Umbilical cord mesenchymal stem cells can be induced to differentiate into dopaminergic neuron-like cells and choline acetyltransferase-positive cells, suggesting that these cells have great potential in the treatment of Parkinson's and Alzheimer's diseases [8][9] . Human umbilical cord mesenchymal stem cells can promote functional recovery after acute spinal cord injury [10] , and can be induced to differentiate into functional Schwann cells and promote peripheral nerve regeneration [11][12] . Intravenous administration of umbilical cord mesenchymal stem cells for axillary and radial nerve injury caused by oligotrophic nonunion has achieved obvious therapeutic effects in the clinic [13] .
Amniotic membrane has been used as a biomaterial for the repair of peripheral nerve injury. It generally consists of an epithelial cell layer, basilar membrane layer and compact layer. The epithelial cell surface is rich in microvilli. The basilar membrane provides mechanical support and is composed of extracellular matrix, collagen IV, heparan sulfate, proteoglycan and other macromolecules [14] , similar to Schwann cell basilar membrane components. All of these structural characteristics of the amniotic membrane play an important role in tissue engineering [15] . In this study, we investigated the therapeutic effect of transplantation of human amniotic membrane loaded with human umbilical cord mesenchymal stem cells for the repair of radial nerve injury.
Quantitative analysis of subjects
Twelve patients who received neurolysis followed by transplantation of human amniotic membrane loaded with human umbilical cord mesenchymal stem cells for the treatment of radial nerve injury (transplantation group) and twenty patients who received only neurolysis (control group) were included in the final analysis, without any loss.
Baseline information of patients from the two groups
There were no significant differences in gender, age distribution, injury cause and pre-treatment neurological function between the transplantation and control groups (P > 0.05; Table 1).
Identification of human umbilical cord mesenchymal stem cells
Under an inverted microscope, passage 2 human umbilical cord mesenchymal stem cells grew well and exhibited a shuttle-shaped, fibroblast-like appearance. By flow cytometry, CD34 and CD45 expression was not detected on the cell surface, whereas CD73, CD90 and CD105 expression was detected (Figure 1), indicative of umbilical cord mesenchymal stem cells [16] (Figure 2).
Under electron microscopy, human umbilical cord mesenchymal stem cells loaded on amniotic membrane exhibited a shuttle-shaped appearance and adhered evenly onto the human amniotic membrane (Figure 3). Amniotic membrane without loaded human umbilical cord mesenchymal stem cells exhibited a basilar membrane-like structure (Figure 4).
Recovery of radial nerve functions after transplantation of human amniotic membrane loaded with umbilical cord mesenchymal stem cells
Motor and sensory functional recovery after peripheral nerve injury was assessed according to the criteria formulated by the Neurotrauma Society, British Medical Academy [17]. At 12 weeks after transplantation of human amniotic membrane loaded with umbilical cord mesenchymal stem cells, muscular power, touch sensation and pain sensation were significantly improved in most patients. However, only 55-65% of patients from the control group exhibited obvious improvements in motor and sensory functions (Table 2).
(Table 2 note: there were 12 and 20 cases in the transplantation and control groups, respectively. Touch sensation and pain sensation were graded 0-V; a higher grade indicates better motor and sensory function. Significant symptom improvement was defined as more than two grades of improvement in sensory and motor functions after cell transplantation.)
(Figure captions: Figure 2, cells grow evenly and exhibit a shuttle-shaped appearance similar to fibroblast-like cells; Figure 3, cells exhibit a shuttle-shaped appearance and adhere to the human amniotic membrane; Figure 4, human amniotic membrane without umbilical cord mesenchymal stem cells exhibits a basilar membrane-like structure under an electron microscope, × 5,000.)
Muscular electrophysiological function in the region dominated by injured radial nerve after transplantation of human amniotic membrane loaded with umbilical cord mesenchymal stem cells
At 4 weeks after transplantation of human amniotic membrane loaded with umbilical cord mesenchymal stem cells, the muscular electrophysiological function in the region dominated by the injured radial nerve was not significantly changed (P > 0.05). However, at 8 and 12 weeks after cell transplantation, muscular electrophysiological function in the transplantation group was significantly improved compared to the control group (P < 0.05; Table 3).
Adverse events after transplantation of human amniotic membrane loaded with umbilical cord mesenchymal stem cells
During the 12 weeks after transplantation of human amniotic membrane loaded with human umbilical cord mesenchymal stem cells, no severe adverse events, complications or fever were observed. After cell transplantation, swelling, hemorrhage, exudation, infection and allergic reactions did not occur. The transplant did not need to be removed in any patient.
DISCUSSION
The mechanism underlying repair and regeneration after peripheral nerve injury is very complex and influenced by many factors. A single method has only limited effects on the regeneration of an injured peripheral nerve. A combined approach can overcome axon regeneration-inhibiting factors to the largest extent and effectively promote peripheral nerve regeneration and functional recovery. There is evidence that the most effective way to strengthen nerve regeneration is the combination of the following four methods: (1) removing axonal growth-inhibiting factors [18][19][20] ; (2) providing axonal growth-promoting factors, such as neurotrophic factors [21][22] ; (3) transplanting seed cells [23] ; and (4) providing a pathway suitable for axonal growth [24] .
The structural characteristics of the amniotic membrane provide the material basis for its use in nerve tissue engineering: a complex maze-shaped conduit system that promotes molecular exchange; sufficient collagen fibers to strengthen its elasticity and tenacity; a special basilar membrane structure that facilitates the migration of epithelial cells, increases epithelial cell adhesion and acts as a good cell carrier, benefiting cell adhesion and survival; and an avascular matrix that prevents excessive formation of fibrous scar tissue and inhibits inflammation [25] .
Strong evidence exists that human amniotic membrane offers distinct advantages and feasibility in the repair of peripheral nerve injury [26][27] . Yoshii et al [28][29] found that the abundant collagen fibers in the amniotic membrane can promote axonal regeneration in transected spinal cord and sciatic nerve. Toba et al [30] reported that the collagen component in the basilar membrane of the amniotic membrane plays an important role in the repair of peripheral nerve injury. In this study, we prepared a nerve conduit, a relatively closed nerve regeneration chamber, using human amniotic membrane with its epithelial cells retained and loaded with human umbilical cord mesenchymal stem cells. The prepared nerve conduit is an ideal biological nerve conduit because it can be twisted, expanded, easily sutured, and sterilized during preparation without affecting its physiological characteristics.
There is no consensus regarding whether the epithelial cells should be removed from the amniotic membrane to completely expose the basilar membrane for nerve regeneration. Mohammad et al [31] prepared a nerve conduit using amniotic membrane without epithelial cells to bridge 10 mm rat sciatic nerve defects. Results showed that in the amniotic membrane group, even nerve tissue grew and penetrated the amniotic membrane conduit, with repair effects similar to nerve grafting. Researchers have also used amniotic membrane with epithelial cells to prepare a nerve regeneration chamber, in which rat autologous nerve tissue, containing active Schwann cells and their secreted neurotrophic factors, was seeded to strengthen the effects of the nerve scaffold in guiding nerve regeneration [32][33][34] . Thus, the bridged graft can provide a pathophysiological microenvironment closer to that of the nerve autograft, providing an in vivo environment suitable for axon regeneration. These findings suggest that amniotic membrane with or without epithelial cells can be used as a bridge scaffold to guide nerve growth to a certain extent. In this study, we used amniotic membrane with epithelial cells because human amniotic membrane epithelial cells have characteristics similar to neural stem cells. Uchida et al [35] found that, under certain culture conditions, human amniotic membrane epithelial cells can synthesize and release various neurotrophic factors, including neurotrophin 3, nerve growth factor, basic fibroblast growth factor and brain-derived neurotrophic factor, and that these factors can regulate the proliferation and differentiation of neural stem cells and promote axon regeneration.
Results from this study showed that at 12 weeks after transplantation, sensory and motor functions in the region dominated by the injured radial nerve had recovered significantly; in particular, muscular strength was obviously improved compared with before transplantation. The possible mechanisms may be as follows: (1) The special structure of the human amniotic basilar membrane facilitates the migration of epithelial cells, and the adhesion and survival of human umbilical cord mesenchymal stem cells [35] .
(2) The injured radial nerve was enwrapped moderately with the artificial nerve conduit prepared using amniotic membrane (containing epithelial cells) loaded with human umbilical cord mesenchymal stem cells. Thus, a relatively closed nerve regeneration chamber was established to isolate the peripheral tissue and reduce the invasion of fibrous tissue and inflammatory cells, thereby effectively preventing the adhesion and pressurization caused by the scar formed in the anastomosis site.
(3) The human umbilical cord mesenchymal stem cell suspension injected into the nerve regeneration chamber exerted a synergistic effect together with the human umbilical cord mesenchymal stem cells adhered to the amniotic membrane, i.e., under the effect of various neurotrophic factors, increasing the number of Schwann cells and promoting their functional output, accelerating axon regeneration in the proximal injured nerve and elongating the axonal growth cone [35][36] .
Taken together, the nerve regeneration chamber formed by the nerve conduit prepared from human amniotic membrane (epithelial cells not removed) loaded with human umbilical cord mesenchymal stem cells provides not only a bridging role but also a microenvironment suitable for regeneration and repair of the injured radial nerve. In addition, human amniotic membrane and human umbilical cord mesenchymal stem cells can be easily obtained, and there is no need to perform a second surgery after transplantation. Therefore, amniotic membrane loaded with human umbilical cord mesenchymal stem cells has promising potential for clinical application in the repair of radial nerve injury.
SUBJECTS AND METHODS
Design
A non-randomized, concurrent controlled study.
Time and setting
This study was performed in Affiliated Central Hospital of Shenyang Medical College, China, between December 2011 and June 2012.
Subjects
Thirty-two patients with radial nerve injury who received treatment in the Department of Bone Surgery, Fengtian Hospital, Shenyang Medical College, China, were included in this study. According to disease condition and patient choice, 12 patients received treatment by transplantation with human amniotic membrane loaded with human umbilical cord mesenchymal stem cells, and the remainder received only neurolysis.
All cases suffered from a radial shaft fracture complicated by radial nerve injury. The interval between nerve injury and surgery was 1-9 months.
(2) Patients in whom intravenous injection or oral administration of neurotrophic medication after the first surgery did not achieve satisfactory therapeutic effects.
(3) Patients and their families agreed to accept human umbilical cord mesenchymal stem cell transplantation and signed the informed consent.
(2) Pregnancy. (3) Inability to communicate because of severe associated injuries, severe nerve injuries or severe mental disorders. (4) Alcoholism, diabetes mellitus, gout, collagenosis or the use of immunosuppressive drugs. (5) Acute infection, severe wound contamination, neurological defects or unstable vital signs. Umbilical cord and amniotic membrane source: healthy full-term parturient women from the Department of Gynaecology and Obstetrics, Fengtian Hospital, Shenyang Medical College, China, agreed to the use of umbilical cord and amniotic membrane in the experiments and signed informed consent.
Preparation of human umbilical cord mesenchymal stem cells
According to a previously described method [37] , human umbilical cord mesenchymal stem cells were cultured, purified and phenotyped. Cells were then identified by flow cytometry (BD, San Jose, CA, USA). A 10 mL aliquot of passage 2 cell suspension containing 1.75 × 10⁷ cells was used.
Preparation of human amniotic membrane
The dissected human amniotic membrane was washed repeatedly with physiological saline, sterilized in alcohol for 1 minute and washed again with physiological saline. The amniotic membrane was then cut into pieces of the required size using sterile shears and placed in a 30 mL Petri dish. According to the amniotic membrane size, human umbilical cord mesenchymal stem cells were seeded on the amniotic membrane at a density of 1 × 10⁵/cm². The Petri dish was incubated at 37°C in an incubator containing 5% CO₂ at saturated humidity (Kogyo). Cell growth was periodically observed under an inverted microscope (Olympus, Tokyo, Japan) and a scanning electron microscope.
Transplantation of human amniotic membrane loaded with human umbilical cord mesenchymal stem cells
In the transplantation group, after neurolysis of the radial nerve, three or four longitudinal incisions, each 1.0-1.5 cm, were made in the epineurium along the injured radial nerve segment. The injured segment was then enwrapped with the human amniotic membrane loaded with umbilical cord mesenchymal stem cells, with the cells on the inner surface; the wrapping was kept loose enough to avoid compression. The two ends of the amniotic membrane were sutured to the epineurium, creating a biomembrane nerve conduit (Figure 5). A 7.0 mL aliquot of human umbilical cord mesenchymal stem cell suspension was injected into the enwrapped injured radial nerve segment via seven points on the epineurium, from the center towards the distal (three points) and proximal (three points) ends at 1.0 cm intervals, 1.0 mL per point. Another 3.0 mL aliquot of cell suspension was injected in equal parts via two points, 0.5 cm distal and proximal to the center. In total, 10 mL of cell suspension containing 1.75 × 10⁷ human umbilical cord mesenchymal stem cells was injected. In the control group, only neurolysis and layer-by-layer skin suturing of the injured radial nerve were performed. After surgery, 10 g of a first-generation cephalosporin was administered orally, twice a day, for a total of 3 days in both groups to prevent infection.
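The following small Python check simply restates the dosing arithmetic of the injection scheme described above (7 × 1.0 mL plus 2 × 1.5 mL = 10 mL containing 1.75 × 10⁷ cells); the per-point cell numbers are derived values, not figures reported by the authors.

```python
# Bookkeeping for the injection scheme described above.
cells_total = 1.75e7            # cells in the full 10 mL suspension
volume_total_ml = 10.0

vol_seven_points_ml = 7 * 1.0   # seven epineurial points, 1.0 mL each
vol_two_points_ml = 3.0         # two further points sharing 3.0 mL (1.5 mL each)

assert vol_seven_points_ml + vol_two_points_ml == volume_total_ml

cells_per_ml = cells_total / volume_total_ml   # 1.75e6 cells/mL
print(cells_per_ml * 1.0)                      # ~1.75e6 cells per 1.0 mL point
print(cells_per_ml * 1.5)                      # ~2.6e6 cells per 1.5 mL point
```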
Assessment of functional recovery of injured peripheral nerve
According to the criteria formulated by the Neurotrauma Society, British Medical Academy, motor and sensory functional recovery after peripheral nerve injury was assessed [17]. Sensory function: 0, no recovery; I, recovery of deep skin sensation in the injured radial nerve-dominated region; II, partial recovery of superficial sensation and touch sensation in the dominated region; III, recovery of skin pain and touch sensation, with absence of hyperesthesia; IV, partial recovery of two-point discrimination; V, complete recovery. Motor function: 0, no muscle contraction; I, proximal muscle contraction; II, proximal and distal muscle contraction; III, all important muscles can contract against resistance; IV, can perform all activities, independently or synergistically; V, completely normal. After surgery, significant symptom improvement was designated as improvement of more than two grades in all sensory and motor functions.
Figure 5: Neurolysis of the radial nerve followed by transplantation of amniotic membrane loaded with human umbilical cord mesenchymal stem cells. The right arrow indicates the amniotic membrane and the left arrow indicates the injured radial nerve to be enwrapped.
Muscular electrophysiological examination
The amplitude (mV) of muscular electromyogram of the region dominated by the injured radial nerve was detected using a needle electrode to assess the functional recovery of injured radial nerve.
Safety observation
After transplantation of human amniotic membrane loaded with human umbilical cord mesenchymal stem cells, no foreign body reaction, inflammation, swelling, infection, hemorrhage, impaired wound healing, hyperesthesia or allergic reaction was observed.
Statistical analysis
All data were statistically processed using SPSS 13.0 software (SPSS, Chicago, IL, USA). Measurement data are expressed as mean ± SD. The two-sample t-test was used for comparison of means between the two groups. A level of P < 0.05 was considered statistically significant.
Research background: After peripheral nerve injury, a good microenvironment for neural regeneration protects injured neurons and promotes effective axon regeneration.
Research frontiers:
A conduit fabricated from amniotic membrane, which exhibits good histocompatibility and cellular compatibility, combined with human umbilical cord mesenchymal stem cells is an ideal biological nerve conduit because it can be twisted, expanded and easily sutured, and provides a good microenvironment for the injured peripheral nerve.
Clinical significance: This study was the first to validate the efficacy and safety of human umbilical cord mesenchymal stem cells combined with human amniotic membrane containing epithelial cells for the treatment of peripheral nerve injury, providing a novel strategy for the treatment of radial nerve injury.
Academic terminology: Nerve regeneration chambers refer to the nerve conduit, an empty conduit that guides peripheral nerve regeneration. The proximal and distal ends of injured peripheral nerve were firmly anastomosed to the two ends of the conduit.
Neural regeneration is achieved within the conduit cavity.
Peer review: This study used a novel method, human umbilical cord mesenchymal stem cell-loaded amniotic membrane, to repair injured radial nerve in patients, which is an increasing research area. However, this was a non-randomized clinical trial involving a small sample size. More meaningful outcomes need to be further validated by completely randomized or multi-center clinical trials involving more cases.
Assessment of the pregnancy rates using sequential day 3 and day 5 embryo transfer in IVF/ ICSI patients
The clinical pregnancy rate following IVF is usually 30-50% when there are no adverse maternal factors involving the uterus, endometrium or immune system, embryos are of good quality and adequate in number, and maternal and paternal karyotypes are normal. The process of implantation involves two main components: a healthy embryo with the potential to implant and a receptive endometrium that enables implantation. The "cross-talk" between the embryo and the endometrium that finally leads to apposition, attachment and invasion of the embryo is mandatory for successful implantation and subsequent normal placentation.
INTRODUCTION
The clinical pregnancy rate following IVF is usually 30-50% when there are no adverse maternal factors involving the uterus, endometrium or immune system, embryos are of good quality and adequate in number, and maternal and paternal karyotypes are normal. The process of implantation involves two main components: a healthy embryo with the potential to implant and a receptive endometrium that enables implantation. The "cross-talk" between the embryo and the endometrium that finally leads to apposition, attachment and invasion of the embryo is mandatory for successful implantation and subsequent normal placentation. 1 The embryos obtained after IVF or ICSI are transferred into the uterine cavity either on day 2 or 3 (cleavage stage transfer) or on day 5 or 6 (blastocyst transfer). Blastocyst transfer has inherent advantages compared with cleavage stage transfer, such as better synchrony with the endometrium, less uterine contractility, better embryo euploidy status and higher implantation potential per embryo. Inadequate uterine receptivity is responsible for approximately two-thirds of implantation failures, whereas the embryo itself is responsible for one-third of these failures. In humans, sequential embryo transfer may be used to increase endometrial receptivity while retaining the advantages of blastocyst transfer.
Sequential transfer spares the patient and the clinician the major disadvantage of "pure" blastocyst transfer, namely cycle cancellation due to embryonic block, in which embryos fail to proceed to the blastocyst stage.
The objective of the study was to assess the pregnancy rates using sequential day 3 and day 5 embryo transfer in IVF/ ICSI patients.
METHODS
This prospective study was conducted in the Department of Obstetrics and Gynecology, Aarogya Hospital and Test Tube Baby Centre, from 1 January 2013 to 30 November 2019. A total of 100 patients undergoing IVF/ICSI were offered sequential transfer. Our inclusion criteria for the female partner were age <37 years, day 3 FSH level <10 IU/L, E2 <80 pg/mL, a hysteroscopically normal endometrial cavity, at least 10 follicles >14 mm in diameter on the day of β-hCG administration, and 5 or more cleaved embryos on day 3 post fertilization. All patients were counseled thoroughly and informed consent was obtained.
Ovarian stimulation
The standard gonadotrophin-releasing hormone agonist long protocol (mid-luteal phase start) was used. Leuprolide acetate 1 mg (0.5 mL) was administered for down-regulation from day 21 of the preceding cycle, and 100-300 IU of recombinant FSH was administered daily for ovarian stimulation. Follicle growth monitoring included serum estradiol, progesterone and LH measurements and transvaginal ultrasound. When one follicle reached a diameter ≥18 mm or two or more follicles reached ≥17 mm, and at least 10 follicles were more than 14 mm, 250 µg of recombinant human chorionic gonadotrophin was administered and oocytes were retrieved 35-36 hours later. Routine IVF or ICSI was performed 4 hours after oocyte retrieval and the oocytes were checked for fertilization 16-18 hours later. Normal fertilization was indicated by the appearance of two pronuclei. Embryos were cultured in commercial sequential IVF medium and observed again at 48 hours (day 2) and 72 hours (day 3) after oocyte retrieval. The grading criteria for the embryos were as follows: Grade I, blastomeres of uniform size with no DNA fragmentation; Grade II, slightly uneven blastomere size with <20% DNA fragmentation; Grade III, heterogeneous blastomere size or 20-50% DNA fragmentation; and Grade IV, >50% DNA fragmentation. The number and grade of the embryonic blastomeres were recorded. Good-quality embryos were defined as embryos containing ≥4 cells on day 2 and ≥6 cells on day 3 with grade I-II.
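A minimal code sketch of the grading rules just described may make the thresholds explicit; it is a simplified illustration (for instance, it does not separately encode the "slightly uneven" versus "heterogeneous" blastomere descriptions) and the function names are our own, not part of any published grading software.

```python
def embryo_grade(fragmentation_pct, blastomeres_uniform):
    """Simplified grading by the fragmentation criteria given above (Grades I-IV)."""
    if fragmentation_pct == 0 and blastomeres_uniform:
        return "I"
    if fragmentation_pct < 20:
        return "II"
    if fragmentation_pct <= 50:
        return "III"
    return "IV"

def is_good_quality(day, cell_count, grade):
    """Good quality: >=4 cells on day 2 or >=6 cells on day 3, with grade I-II."""
    enough_cells = (day == 2 and cell_count >= 4) or (day == 3 and cell_count >= 6)
    return enough_cells and grade in ("I", "II")

# Example: a day 3 embryo with 7 cells, uniform blastomeres, no fragmentation
print(is_good_quality(3, 7, embryo_grade(0, True)))  # True
```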
In all the patients on day 3, two good quality embryos were transferred. Remaining embryos were placed in blastocyst culture medium and cultured until day 5. On day 5, one good quality blastocyst was transferred and the remaining blastocysts were frozen on the same day using vitrification technique. Blastocyst grading was done according to the Gardner and Schoolcraft grading system. 2 No embryo underwent assisted hatching before transfer. Luteal support consisted of 800 µg of micronized progesterone orally initiated on the day of oocyte retrieval and continued until the day of pregnancy testing. Estrogen supplementation was given in the form of oral estradiol valerate 2 mg tablets thrice a day.
Outcome measures
The primary outcome measures were the clinical pregnancy rate and the implantation rate in the fresh cycle only. The secondary outcome measure was the miscarriage rate. Pregnancy testing was performed 14 days after embryo transfer. Ultrasound examination was performed at week 7 (about 5 weeks after transfer) to assess the number of fetal sacs and the fetal heartbeat. Clinical pregnancy was defined as the presence of a fetal heartbeat on ultrasound examination at 7 weeks of pregnancy. The implantation rate was defined as the number of gestational sacs seen on ultrasound divided by the total number of embryos/blastocysts transferred. The implantation rate was calculated for all patients having embryo transfer, not just those who became pregnant. Spontaneous miscarriage was defined as a clinical pregnancy loss before 20 weeks of gestational age. Multiple pregnancy was defined as two or more gestational sacs observed on ultrasound. Frozen cycles were not taken into consideration, and therefore cumulative pregnancy rates are not reported in this study.
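For clarity, the outcome definitions above translate into simple ratios; the sketch below computes them, with the counts shown being illustrative placeholders rather than the study's raw data (only the rates themselves are reported in the Results).

```python
def outcome_rates(clinical_pregnancies, retrieval_cycles,
                  gestational_sacs, embryos_transferred, miscarriages):
    """Rates as defined above, for fresh cycles only."""
    return {
        "clinical_pregnancy_rate": clinical_pregnancies / retrieval_cycles,
        "implantation_rate": gestational_sacs / embryos_transferred,
        "miscarriage_rate": miscarriages / clinical_pregnancies,
    }

# Illustrative placeholder counts: 100 retrieval cycles with 3 embryos
# transferred per patient (two on day 3 plus one blastocyst on day 5).
print(outcome_rates(clinical_pregnancies=50, retrieval_cycles=100,
                    gestational_sacs=72, embryos_transferred=300,
                    miscarriages=10))
```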
The SPSS version 22.0 software program was used for statistical analysis. No ethical approval was needed for this study.
RESULTS
The largest proportion of patients (35%) were in the age group of 27-32 years, with a mean age of 31 years; mean BMI was 24, and 75% of patients had less than 10 years of infertility (Table 1). A majority of patients (72%) had an antral follicle count (AFC) between 8 and 12, and 90% of patients had adequate endometrial thickness (ET) between 7 and 12 mm (Table 3). Our clinical pregnancy rate per retrieval cycle was 50%, whereas the implantation rate was 24% (Table 5). Multiple pregnancy occurred in 12% of cases (twins in all cases) and the live pregnancy rate was 40%.
DISCUSSION
Various studies suggest improved clinical pregnancy and implantation rates with sequential transfer in patients with repeated IVF-ET failures. In the study by Ismail Madkour WA et al, the sequential transfer group had a significantly higher pregnancy rate (43.2% vs 27.4%), clinical pregnancy rate (37.8% vs 21.9%), implantation rate (17.1% vs 10.5%) and ongoing pregnancy rate (33.8% vs 19.2%) compared with conventional day 3 transfer. 3 We offered double or sequential ET to all our patients having ≥5 cleaved embryos on day 3, even if it was their first IVF cycle. Various studies now suggest that, to improve pregnancy rates, endometrial receptivity should be increased. During the first (day 3) transfer, the two embryos transferred in each patient may induce an increase in endometrial receptivity, thereby creating a better endometrial environment for the second transfer (one blastocyst) on day 5.
Also, co-culture of early-stage embryos with endometrial epithelium may increase the success rate of IVF, indicating that the interaction between the embryo and the endometrium is important. In addition, insertion of the catheter during the first transfer may mechanically stimulate the endometrium, and at the same time it makes the clinician aware of technical difficulties which may occur during transfer, such as uterine manipulation, cervical grasping or even cervical dilatation, so that these can be avoided during the blastocyst transfer. Additionally, sequential transfer increases the chance of hitting the "implantation window", thereby improving pregnancy rates. Goto et al suggested that this method may be used not only in patients with repeated failures but also in poor-prognosis patients, even in their first IVF-ET, in order to improve the pregnancy rate. 4 We achieved twin pregnancies in 12% of our patients; Dalal et al found a multiple pregnancy rate of 15.3% in the sequential transfer group. 5 Blastocyst-only transfer on day 5 can have a high cancellation rate, because if no blastocyst forms, no embryo can be transferred. The cycle cancellation rate was nil in our study. Fang et al also reported no cycle cancellations in their day 3 and day 5 transfer group. 6 With sequential transfer we achieved a clinical pregnancy rate of 50%, which is comparable with Loutradis D et al, who reported a 60% clinical pregnancy rate with an additional embryo transfer on day 4. 7 Tehraninejad et al found that the chemical and clinical pregnancy rates were similar in the sequential ET group (40%) compared with the day 5 ET group (38.3%) in patients with three repeated IVF-ET failures. 8 We would like to emphasize that surplus embryos add the additional cost of embryo freezing. If sequential transfer is done, the patient receives the best embryos and the surplus blastocysts can then be frozen. We used vitrification for surplus blastocysts, as the available data support its potential safety. 9 Limitation of the study: we had a limited number of cases who agreed to sequential transfer, as many couples refused to enroll in the study due to the risk of multiple gestation and wanted single embryo transfer only.
CONCLUSION
Our experience with sequential transfer has improved our clinical pregnancy rate from 30% to 50%. We have not faced the problem of cycle cancellation. All good-quality embryos are properly utilized. It has also reduced our cost of day 3 embryo freezing. Our experience with blastocyst freezing and its outcome still needs long-term follow-up. We advocate that this technique is useful in all patients having good-quality embryos in adequate numbers for double transfer, as this optimizes the chance of selecting the most viable embryo for transfer, which is probably the key to a successful IVF program.
Dynamic Landau Theory for Supramolecular Self-Assembly
Although pathway-specific kinetic theories are fundamentally important to describe and understand reversible polymerisation kinetics, they come in principle at the cost of having a large number of system-specific parameters. Here, we construct a dynamical Landau theory to describe the kinetics of activated linear supramolecular self-assembly, which drastically reduces the number of parameters and still describes most of the interesting and generic behavior of the system at hand. This phenomenological approach hinges on the fact that, if nucleated, the polymerisation transition resembles a phase transition. We are able to describe hysteresis, overshooting, undershooting and the existence of a lag time before polymerisation takes off, and pinpoint the conditions required for observing these types of phenomena in the assembly and disassembly kinetics. We argue that the phenomenological kinetic parameter in our theory is a pathway controller, i.e., it controls the relative weights of the molecular pathways through which self-assembly takes place.
INTRODUCTION
Supramolecular polymerisation, β-amyloid fibril formation, and actin and microtubule polymerisation all have two features in common: (i) some form of activation and (ii) reversible elongation. [1][2][3][4] Activation can happen in many different ways, the most important being conformational switching [5] and a minimum number of monomers coming together before elongation can take place (nucleation). [2,6] Generally, the activation constant for the above-mentioned systems is very small compared to the elongation constant, giving rise to a very sharp polymerisation transition as a function of, e.g., the temperature, concentration, acidity and so on. This makes the polymerisation transition reminiscent of a phase transition. [7,8] Hence, if we set aside details of the actual activation mechanism and other system-specific details, we can hope to understand the universal and most interesting behavior of many such systems by relying on notions from the statistical mechanics of phase transitions. [4,8] Even though our understanding of the thermodynamics of supramolecular polymers and of the role of activation or nucleation, [9,10] solvent, [11][12][13] and conformational [14,15] and compositional disorder [16,17] has made great advances, much less is known about the kinetics underlying reversible polymerisation processes. [4] The most extensively studied kinetic reaction rate models were initially set up to describe the equilibration of the length distribution of worm-like surfactant micelles in response to a temperature jump, and stress relaxation under shear flow via the dynamical breakdown and re-growth of polymeric assemblies. [18] Four kinetic pathways have been identified in this context: (i) end evaporation and addition, [18] (ii) scission and recombination, [19] (iii) end-interchange, [20] and (iv) bond-interchange. [21] In principle, one should consider not a single pathway but a hybrid of pathways, [22] but this is rarely done. [23] The reaction rate or master equations are invariably highly non-linear integro-differential equations, some of which elude exact analytical solutions even in linearised form. [18] Only in some limiting cases have asymptotic analytical results been obtained for the temporal evolution of the length distribution within the end-evaporation-and-addition and scission-recombination kinetics, applying the rate-equation approach outside of the linearised regime. [28,45,46] Not surprisingly, it is tempting to obtain closed-form equations for the first few moments only, presumably requiring the assumption that the shape of the probability distribution function does not change with time. This in turn requires the pathways to be tuned in such a way that it may not be possible to obtain the correct thermodynamic equilibrium (e.g., by assuming irreversible scission). [23][24][25] In this paper we present, as an alternative to the above-mentioned approaches, a phenomenological dynamical Landau theory for activated reversible polymerisation processes that makes use of what is known about the kinetics of phase transitions. The advantage of this method is that it allows for straightforward coupling of the equilibrium polymerisation to other kinds of macroscopic phase transition, such as phase separation and spontaneous alignment in liquid crystalline states, [36][37][38] as well as to flow fields. [39,40] The coupling is straightforward because all the phenomena (polymerisation, phase ordering of various kinds) can be treated on an equal footing in terms of appropriate order parameters.
Here, we focus on studying the dynamics of polymerisation in the absence of any symmetry breaking and flow fields, leaving these for future work. Our theory is able to describe kinetic phenomena such as hysteresis, overshooting, undershooting and the existence of a lag time observed experimentally, yet has only a single kinetic parameter. We argue that this parameter in some sense controls the relative weights of the different molecular pathways implicit in the theory. We find that our theoretical predictions are in qualitative agreement with experimental observations on the assembly kinetics of β-amyloids, at least if we renormalise the theory with an appropriate time scale.
The remainder of this article is organised as follows. In section II, we construct a thermodynamically consistent Landau free energy involving two relevant non-conserved order parameters, representing two moments of the full distribution function, and then use it to construct our dynamical equations. [41] The two moments we focus on are the polymer fraction and the degree of polymerisation. We ignore any spatial variation and presume the system is well mixed at all times. Our two nonlinear differential equations describing the temporal evolution of the two order parameters depend in essence on only two dimensionless groups. One, the mass-action variable, describes the strength of the thermodynamic driving force towards polymerisation, and the other, the pathway-control variable, selects the relaxation pathway. In section III, we solve the linearised version of these equations and investigate the transient phenomena known as "overshooting" and "undershooting". The former has been observed in nucleated self-assembling polymer solutions of tobacco mosaic virus (TMV) coat protein [42] and of actin. [34] We analyse the delayed response before assembly takes off, an observable studied extensively in different types of system, in section IV. There, we solve our nonlinear evolution equation using the method of matched asymptotic expansions (MAE) and obtain, in analytical form, the lag time as a function of the relevant mass-action variable and the initial conditions. In section V, we provide an interpretation of our pathway-control variable. We also discuss the phenomenon of temporal hysteresis and compare our analytical results with experimental data for β-amyloid fibril assembly, obtaining excellent agreement if we allow for an offset. We end this paper with a summary and conclusion in section VI.
II. LANDAU FREE ENERGY FUNCTION AND DYNAMICAL EQUATIONS
Our problem of interest is activated polymerisation in a dilute yet well-mixed solution. This implies that we consider two basic components: assembly-active polymers and assembly-inactive monomers. Inactive monomers have to acquire a high-energy, activated state to be able to polymerise. Activated monomers convert inactive monomers into active ones upon binding. If the latter does not require any free-energy input, this is known as "autosteric" or "auto-catalytic" binding. [4,29] If it does require a free-energy input, we have conventional activated polymerisation. [33,50] Both types of polymerisation turn out to obey the same statistics, i.e., the mass-action models that describe these statistics are equivalent. [4] We arbitrarily choose the autosteric model. In it, the free-energy difference between active and inactive states is $\Delta f_a \geq 0$, and the free-energy gain upon bonding is $\Delta f_e \leq 0$. Let $\phi$ be the overall concentration (mass fraction) of monomers in the solution. If we now invoke the law of mass action and assume that the free monomers, the dimers, the trimers, etc., do not mutually interact, we obtain for the active polymers an exponential distribution with an average degree of polymerisation that we denote $\bar{N}_a$. A fraction $f$ of the material is in the polymerised, i.e., active state. The overall mean aggregation number, including active and inactive species, we denote $\bar{N}$. Under conditions of thermodynamic equilibrium, one can show that $f$ and $\bar{N}_a$ are related through Eqn. (1), [29] where $\bar{N}_a$ obeys an equation of state given by Eqn. (2). Here, $X \equiv \phi \exp(-\Delta f_e/k_B T)$ is our mass-action variable and $K_a \equiv \exp(-\Delta f_a/k_B T)$ the activation constant, where $k_B T$ denotes the thermal energy, with $k_B$ Boltzmann's constant and $T$ the absolute temperature. Note that the mean degree of polymerisation averaged over active and inactive species obeys $\bar{N} = (1 + K_a \bar{N}_a^2)/(1 + K_a \bar{N}_a)$. From Eqns. (1) and (2), we deduce that $f = (X - 1 + \bar{N}_a^{-1})/X$. If we demand that $K_a \to 0$ and $X \geq 1 + K_a$, this expression simplifies. So, indeed, the polymerisation transition becomes infinitely sharp in the limit $K_a \to 0$, with $f$ given by Eqn. (3). It is straightforward to show that in this limit the heat capacity exhibits a jump at the polymerisation point $X = 1$, as is to be expected from mean-field arguments. [1,30] The limit $K_a \to 0$ is sensible because experimental values are typically $10^{-2}-10^{-5}$. [10,11,32] Now that we have convinced ourselves that the polymerisation transition resembles a phase transition, [8,30] we may attempt to describe it by constructing a Landau free energy that will become the starting point of a dynamical theory. We recall that there are two types of distribution: an exponential size distribution of active polymers and a distribution over active and inactive material. This implies that we should be able to describe the thermodynamics of activated polymerisation with only two order parameters, one representing the fraction of polymerised material $f$ and the other describing the mean aggregation number of active material $\bar{N}_a$. From Eqs. (1) and (2) we conclude that whilst $f$ is critical in the limit $K_a \to 0$, $\bar{N}_a$ is not. In this limit $f$ is non-zero only if $X > 1$, yet $\bar{N}_a > 1$ for all $X > 0$. Indeed, from Eqn. (2) we find that at $X = 1$ the value of $\bar{N}_a$ is set by the activation constant $K_a$ alone. Hence, only $f$ exhibits a sharp transition at the polymerisation point. [4] Parenthetically we note that for $\bar{N}_a \gg 1$, Eq. (1) suggests that $fX \sim K_a \bar{N}_a^2$, and it seems therefore that as $f$ becomes critical, so does $\bar{N}_a$.
However, because we are interested in the limit $K_a \to 0$, the product $K_a \bar{N}_a^2$ going to zero does not mean that $\bar{N}_a$ also goes to zero, or even to unity, as we have just seen. Hence, it makes sense not to use $\bar{N}_a$ as an order parameter but instead a quantity proportional to $K_a \bar{N}_a^2$. In this case both order parameters are zero below the critical point $X = 1$, and non-zero and finite above it, even in the limit $K_a \to 0$.
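The following Python sketch illustrates this point numerically. It assumes that Eqn. (1) takes the mass-action form implied by the definition of the order parameter $S_2$ below, $fX = K_a\bar{N}_a(\bar{N}_a-1)$, and combines it with the deduced relation $f = (X-1+\bar{N}_a^{-1})/X$; eliminating $f$ gives a cubic in $\bar{N}_a$ that can be solved for any $X$ and $K_a$. It is only a consistency check of the limiting behaviour described in the text, not a reproduction of the authors' Eqns. (1)-(3).

```python
import numpy as np

def equilibrium(X, Ka):
    """Mean active aggregation number Na and polymer fraction f at given X, Ka.

    Assumes f*X = Ka*Na*(Na - 1) (the form implied by the order parameter S2)
    together with f = (X - 1 + 1/Na)/X; eliminating f gives
    Ka*Na^3 - Ka*Na^2 + (1 - X)*Na - 1 = 0.
    """
    roots = np.roots([Ka, -Ka, 1.0 - X, -1.0])
    Na = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 1.0)
    f = (X - 1.0 + 1.0 / Na) / X
    return Na, f

for Ka in (1e-2, 1e-4, 1e-6):
    for X in (0.8, 1.5):
        Na, f = equilibrium(X, Ka)
        print(f"Ka={Ka:.0e}  X={X}:  f={f:.4f}  Ka*Na^2={Ka * Na**2:.4f}")
# As Ka -> 0, both f and Ka*Na^2 vanish for X < 1, while for X > 1
# f approaches 1 - 1/X and Ka*Na^2 remains finite, as stated above.
```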
It is important to point out in this context that our statistical mechanical model is identical to the Tobolsky-Eisenberg model for (spontaneous) sulfur polymerization, which is an example of activated equilibrium polymerization. [7] Wheeler and Pfeuty [8] have shown that a magnetic spin lattice model, the so-called n-vector model, becomes equivalent to the Tobolsky-Eisenberg theory in the limit n → 0. In that model, the spin-spin coupling constant corresponds to our elongation constant K_e ≡ exp(−∆f_e/k_B T), whereas the magnetic field strength maps onto the activation constant K_a. Wheeler and Pfeuty show that in their prescription the number concentration of polymers depends linearly on the external field. Hence, in the limit of zero magnetic field, the number concentration of polymers tends to zero. A vanishing number concentration of polymer chains has to accommodate a finite polymerized mass, giving rise to a diverging mean degree of polymerization above the critical point. This confirms that it is not sensible to use N̄_a as an order parameter in the limit K_a → 0.
It follows that a sensible Landau free energy describing nucleated polymerisation must involve two coupled order parameters. In a state of thermodynamic equilibrium, this free energy should retrieve Eqn. (1) and (2) or (1) and (3), which are equivalent in the appropriate limit K a → 0. To stay within the philosophy of Landau theory we opt for the latter, because it allows us to invoke the relevant control variable, X, and let it play a role similar to temperature. The mass action variable X = φ exp (−∆f e /k B T ) depends on the solution conditions, that is, the concentration φ, the temperature T and, depending on the type of system, on the solvent, the acidity and the salinity that (apart from temperature) control the binding free energy ∆f e .
As we have seen, the polymer fraction 0 ≤ f ≤ 1 shows a sharp continuous transition, which is a signature of a second-order phase transition, requiring a Landau free energy consisting of even powers of the corresponding order parameter. To satisfy two essential properties of the theory, i.e., a second-order-like transition and non-negativity of the polymer fraction f, we select this order parameter to be √f instead of f. Hence, we denote our first order parameter as S_1 = √f, giving us the first two terms in our free energy (per particle) as −(X_p^{−1} − X^{−1}) S_1² + S_1⁴, in accordance with the requirements of the free energy for a second-order phase transition, and showing a transition at X = X_p ≡ 1, where X_p is the transition point of the control variable X. Minimizing this free energy does not quite produce Eqn. (3) yet, as it is off by a factor of two. This we fix below.
Apart from the critical order parameter S_1 that is related to the fraction of active material f, we have to introduce another order parameter, S_2, that somehow describes the degree of polymerisation of the active material, N̄_a, and that is a proper order parameter. As already suggested, a natural order parameter in this context would be S_2 = N̄_a (N̄_a − 1) K_a / X, which is also critical but which in our model is enslaved by S_1. See Eq. (1).
This suggests adding to the free energy density a term proportional to S_2², and a coupling term that establishes the enslavement of S_2 to S_1. We have to keep in mind that in equilibrium we have to obey Eqn. (1). This we manage by adding to our free energy the terms (1/2) S_2² − S_1² S_2. The introduction of the coupling term S_1² S_2 ensures that we automatically obey Eqn. (1) and (3) in equilibrium, that is, if we minimize the free energy with respect to S_1 and S_2.
In conclusion, we obtain the following free energy per solute molecule F, which obeys

F / k_B T = −(1 − X^{−1}) S_1² + S_1⁴ + (1/2) S_2² − S_1² S_2,    (4)

and that meets all requirements. Note that as usual k_B T denotes the thermal energy. As another thermodynamic consistency check, we calculate the specific heat using this free energy. It turns out to be consistent with that obtained by invoking the law of mass action (or the equivalent microscopic thermodynamic theory). [30] We refer to Appendix A for details.

Now that we have defined our free energy landscape, we can build our dynamical theory on it. Using the appropriate description for non-conserved order parameters (model A), [41] we set

∂S_i/∂t = −Γ_i ∂(F/k_B T)/∂S_i,  i = 1, 2,    (5)

with Γ_1 and Γ_2 phenomenological relaxation rates for our order parameters S_1 and S_2. In line with common practice, and in the absence of a clear microscopic interpretation, we presume these relaxation rates to be independent of the order parameters. As we shall see, this simplification will prove sufficient to produce all known dynamical behaviours of supramolecular assembly, including a lag phase, overshoots and hysteresis. The resulting dynamical equations, Eqn. (6) and (7), are obtained after introducing a dimensionless time, τ ≡ Γ_1 t, and the ratio of relaxation rates γ ≡ Γ_1/Γ_2, which will prove to be our kinetic pathway controller. In principle, this could point towards a microscopic interpretation of the kinetic parameters and a possible evaluation of whether, and if so how, they should depend on the order parameters. We leave this for future work.

Superficially, this set of coupled dynamical equations looks very simple. However, their highly non-linear character heralds not only complex dynamical behavior but also difficulty in dealing with them analytically. Their non-linear character can be reduced somewhat by a simple transformation: f ≡ S_1² and Ñ_a ≡ K_a N̄_a (N̄_a − 1) ≡ S_2 X. Here, f is as before the fraction of polymerised material, and Ñ_a is a measure for the degree of polymerisation of the active material that from now on we call the renormalised active degree of polymerisation or renormalised mean polymer length. Inserting this into Eqn. (6) and (7), we obtain the governing equations (8) and (9) for f and Ñ_a. Note that if we solve this set of dynamical equations with the initial condition f(0) = 0, they will not evolve towards the correct equilibrium point, i.e., the minimum of the free energy. This is to be expected: our theory is a dynamical mean-field theory, so if the starting point is a maximum of the free energy, the order parameters will not evolve in the absence of fluctuations. In the present study, we choose not to include noise and hence focus on "seeded polymerization". Also notice that the highest-order non-linear term in Eqn. (8) is now quadratic rather than cubic as is the case in Eqn. (6), and the quadratic term in Eqn. (7) becomes linear in Eqn. (9). This makes a linear analysis more accurate. As we shall show in the next section, even at the linearised level our governing equations give rise to the interesting transient phenomena of overshooting and undershooting.
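The structure of the governing equations for f and Ñ_a is most easily appreciated numerically. The minimal Python sketch below integrates a reconstructed form of Eqn. (8) and (9): the right-hand sides are an assumption, inferred from the local equilibrium conditions discussed in the next section and from the zeroth-order equations of Appendix B, and the placement of γ is chosen such that γ ≫ 1 enslaves Ñ_a to Xf, as in Appendix B; the published equations may differ in detail.

# Minimal sketch: forward-Euler integration of a reconstructed form of Eqs. (8)-(9),
#   df/dtau  = 4(1 - 1/X) f + (4/X) f Na - 8 f^2
#   dNa/dtau = gamma (X f - Na)
# where Na stands for the renormalised active degree of polymerisation.
# (Reconstruction inferred from Appendix B and the local equilibrium conditions;
#  the exact prefactors in the published equations may differ.)

def rhs(f, Na, X, gamma):
    dfdt = 4.0 * (1.0 - 1.0 / X) * f + (4.0 / X) * f * Na - 8.0 * f**2
    dNadt = gamma * (X * f - Na)
    return dfdt, dNadt

def integrate(f0, Na0, X, gamma, tau_max=20.0, dt=1e-4):
    f, Na, t = f0, Na0, 0.0
    traj = [(t, f, Na)]
    while t < tau_max:
        dfdt, dNadt = rhs(f, Na, X, gamma)
        f, Na, t = f + dt * dfdt, Na + dt * dNadt, t + dt
        traj.append((t, f, Na))
    return traj

# Example: seeded assembly at X = 2 should relax to f = 1 - 1/X = 0.5 and Na = X f = 1.
traj = integrate(f0=1e-4, Na0=2e-4, X=2.0, gamma=10.0)
print(traj[-1])   # approximately (20.0, 0.5, 1.0)

For a seeded quench to X = 2 the trajectory indeed relaxes to f = 1 − X^{−1} = 1/2 and Ñ_a = Xf = 1, consistent with Eqn. (3), which is the main consistency check this sketch is meant to illustrate.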
III. OVERSHOOT AND UNDERSHOOT
An exact analytical solution of our set of governing dynamical equations, Eqn. (8) and (9), has evaded us. A numerical evaluation of Eqn. (8) and (9) in Fig. (1) shows that if we perturb the fraction of polymerised material and/or the mean aggregation number away from their equilibrium values, these quantities do not necessarily relax in a simple exponential fashion, but can exhibit a transient response of non-monotonic growth or decay before approaching equilibrium. There are two types of non-monotonic relaxation, which we call overshooting and undershooting. [42] What this means precisely will become clear below. Note that overshooting of the polymerised fraction has been observed in actin assembly, [34] and that of the active degree of polymerisation in TMV coat protein assembly. [42] We have not been able to find examples of undershooting; most experimental studies focus on assembly rather than disassembly.
Interestingly, overshooting and undershooting present themselves even in a linearised version of the governing equations, which we can solve exactly. The analytical solution allows us to demarcate different kinetic regimes dominated by overshooting, undershooting and simple exponential relaxation for both quantities f and Ñ_a. To linearize Eqns. (8) and (9) we write f(τ) = f(∞) + δf(τ) and Ñ_a(τ) = Ñ_a(∞) + δÑ_a(τ). Here, f(∞) and Ñ_a(∞) are the respective equilibrium values for time τ going to infinity, and δf(τ) and δÑ_a(τ) are the perturbations away from equilibrium, starting at zero time, τ = 0. The solutions to the linearised version of our dynamical equations, Eqn. (10) and (11), are linear combinations of two eigenmodes with principal relaxation rates (the eigenvalues of the dynamical matrix) and amplitudes A_1, B_1 and A_2, B_2 of the corresponding eigenmodes; here, κ ≡ γ² + 64(X − 1)². Notice that these solutions are meaningful only for X > 1, because we cannot perform a perturbation theory for X ≤ 1, as the equilibrium solution in this case is f(∞) = 0. Also note that the other limit, X → ∞, is not meaningful either, as in the linearised version of our dynamical equations, (8) and (9), the equilibrium conditions are δf(∞) = δÑ_a(∞)/2X and δf(∞) = δÑ_a(∞)/X. Both these equilibrium conditions hold if and only if δf(∞) = δÑ_a(∞) = 0, for any finite value of X. But in the limit X → ∞, the equilibrium solution, (δf(∞), δÑ_a(∞)) = (0, 0), becomes unstable.

Fig. (1) shows that both numerical and linear solutions point at the existence of overshooting, undershooting and exponential monotonic relaxation. We never find overshooting and undershooting in both observables simultaneously. What kind of response, and in which variable we find it, depends entirely on the initial conditions, i.e., on the values that our two order parameters have at time zero. The demarcation between the different types of behavior we can evaluate using the linearised theory. For this purpose, we define the (dimensionless) time of transient response, τ_tr, as the time at which the quantity showing a non-monotonic transient response is at its extremum. From Eqn. (10) and (11) we deduce expressions for the transient response times τ_tr^f and τ_tr^Ña for the quantities f and Ñ_a, respectively. For a non-monotonic transient response to exist, these transient times must be real and positive, which can be evaluated using Eqn. (12) through (16). The transient times cannot be zero for the solutions obtained from a linear analysis because of the absence of any sigmoidal response at that level of approximation. As overshooting and undershooting can occur in both f and Ñ_a, but never in both at the same time, this gives rise to four sets of initial conditions for each type of transient response. We tabulate these in Table I, for positive and negative perturbations around the equilibrium values, and notice that the demarcation of the various regimes depends on both X and γ, so on thermodynamics and kinetics.
Table I only tells us whether or not there is a transient response in f or Ñ_a, depending on the initial conditions. It does not tell us the nature of the transient response. In principle, we can deduce this by calculating the curvature of the quantity showing the transient response at the point of the extremum, that is, by calculating the second derivative of f and Ñ_a with respect to time. The conditions that we derive from the second-derivative test turn out to be extremely complicated expressions, and hence we use a simpler method to find what kind of transient response our order parameters show. It hinges on evaluating the slope of f and Ñ_a at zero time and connecting that to the expected response from Table I. For instance, if conditions are such that f shows transient behavior and if the slope at zero time is negative (positive), then we know that we are dealing with undershooting (overshooting).
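As an illustration of the slope-based classification just described, the short sketch below reuses rhs() from the sketch at the end of Section II, and therefore the same reconstructed equations rather than the published ones. Provided Table I predicts a transient in f for the chosen initial conditions (assumed here), a positive initial slope signals an overshoot and a negative one an undershoot.

# Sketch of the slope-at-zero-time classification for the polymerised fraction f,
# reusing rhs() from the previous sketch (reconstructed equations, not the published ones).
def initial_f_slope(f0, Na0, X, gamma):
    dfdt0, _ = rhs(f0, Na0, X, gamma)
    return dfdt0

# Few, very long filaments: f starts far below equilibrium while Na starts large.
# Assuming Table I predicts a transient in f here, the sign of the slope gives its type.
slope = initial_f_slope(f0=0.01, Na0=10.0, X=2.0, gamma=1.0)
print("overshoot in f" if slope > 0 else "undershoot in f")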
Fig. (2) shows how the initial conditions dictate whether we have overshooting, undershooting or monotonic relaxation. Although this "phase diagram" is inferred from a linear analysis, it gives us information as to when overshooting or undershooting will occur for a specified set of parameters. The nonlinear terms in our dynamical equations modify the extent of the overshoot or undershoot and the magnitude of the transient time τ_tr, and also slightly deform the boundaries separating the various transient responses in Fig. (2) away from the equilibrium point. The phase diagram shows that one cannot, as already advertised, have a non-monotonic response in both quantities f and Ñ_a, a fact that can be inferred from Table I also. Indeed, Table I is exhaustive in describing all regimes of initial conditions, which are mutually exclusive among the three different kinds of response. We can qualitatively understand the transient response of our order parameters from the dynamical equations, (8) and (9), and from the equalities obtained by minimizing the free energy, namely 1) Ñ_a = 1 − X + 2Xf and 2) Ñ_a = Xf, which we may call local equilibrium conditions. As shown in Fig. 2, if the initial conditions are such that the system starts in the region between the local equilibrium conditions, then no transient response is observed. If we start with initial conditions away from the equilibrium point, the system first transiently evolves towards either condition 1 or 2, depending on the initial conditions, and once that has been accomplished, both order parameters simultaneously decay to satisfy Eqn. (3). In the process of f following Ñ_a, or vice versa, depending on the initial conditions and the value of the pathway controller, our order parameters show overshoot, undershoot or monotonic response. This implies that the only transition that is sensitive to the value of γ is that between undershoot and overshoot.
Obviously, it would be very useful to have a physical interpretation for the existence of the various regimes shown in Table I and in Fig. (2). For instance, the question arises what plausible mechanism causes overshooting to occur in the polymerised fraction. This occurs, as per Table I, under conditions where the initial mean degree of polymerisation of active material is large and the polymerised fraction is much smaller than its equilibrium value. In that case, assembly starts with very few but very long filaments. These filaments can efficiently tend to equilibrium by breaking, and in the process create new nucleation centers. The number of segments a filament breaks into depends on two factors: (1) the length of the polymer and (2) the probability of breaking of bonds (which in principle is also a function of the length of the polymer). If a long polymer has broken into a number of nucleation centers greater than the equilibrium number of polymers, then each newly created nucleation center extends towards the equilibrium length. This in turn causes the amount of polymerised material to exceed the equilibrium value and hence leads to the disintegration of some of the polymers in the end to satisfy the law of mass action. This is indeed what we observe as an overshoot in the polymerised fraction, which first increases to a value greater than the equilibrium value and then decays towards equilibrium. We conclude that the overshoot in f is directly connected with polymers being able to fragment by random scission. Fig. (2a) and (2b) show that the conditions under which overshoot happens depend quite strongly on the value of γ, our kinetic parameter. This is shown more clearly in Fig. (3a), where we show results obtained by numerically integrating our dynamical equations for different values of the parameter γ. For large enough γ, any transients in f disappear completely. This suggests that γ regulates how prevalent scission is in the kinetic pathways. To substantiate this interpretation, we calculate the polymer density, ρ_a = f φ/N̄_a, where the active degree of polymerisation, N̄_a, is related to the renormalised active degree of polymerisation via the relation Ñ_a = K_a N̄_a². As the activation constant, K_a, and the total monomer concentration, φ, are not explicit parameters in our theory, we rewrite ρ_a = K f/√Ñ_a, where K ≡ φ √K_a. In Fig. (3b) we show the scaled polymer density, ρ_a/K. From Fig. (3b), we see that ρ_a also overshoots along with f, supporting our interpretation of the scission-dominated overshoot of the polymer fraction. Also notice the sharp increase of the polymer density for γ ≫ 1 at short times. We do not fully understand the physical mechanism giving rise to this sharp increase in the polymer density at short times.
IV. LAG TIME ANALYSIS
Sigmoidal response is a key characteristic of activated self-assembly and hence it should follow from our theory. Indeed, if we start off self-assembly with a very small initial fraction of polymerised material, we do find sigmoidal behavior. This kind of response is characterized by a lag phase due to the time required to overcome the activation barrier, ∆f_a > 0, and hence should become more pronounced as the height of this activation barrier increases or the initial polymer fraction, f(0), decreases. To analyse the lag time within our model, we use the conventional definition, that is, we find the time at which we have the maximum growth rate, determine the tangent at that point and identify its time intercept as the lag time. [51] Of course, to be able to do this we have to supply the dynamical equations with initial conditions. The equilibrium relation between the polymerised fraction and the mean polymer length is f = K_a N̄_a (N̄_a − 1)/X, but we arbitrarily choose f(0) = K_a N̄_a²(0) for X = 2, and hence we start off with out-of-equilibrium initial conditions. It turns out that our results are qualitatively insensitive to the precise choice of initial conditions as long as we confine ourselves to the lower left part of the phase diagram of Fig. (2).
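The conventional lag-time construction described above is easy to automate. The sketch below, again building on the integrate() helper from Section II (and hence on the reconstructed rather than the published equations), locates the point of maximum growth rate on a simulated f(τ) trace, draws the tangent there and returns its intercept with f = 0.

# Sketch: extract the lag time from a simulated f(tau) trace by the tangent construction.
def lag_time_from_trace(traj):
    taus = [p[0] for p in traj]
    fs = [p[1] for p in traj]
    # finite-difference growth rate df/dtau along the trace
    rates = [(fs[i + 1] - fs[i]) / (taus[i + 1] - taus[i]) for i in range(len(fs) - 1)]
    i_max = max(range(len(rates)), key=lambda i: rates[i])   # point of maximum growth rate
    # tangent there: f ~ fs[i_max] + rates[i_max] (tau - taus[i_max]); lag time is its
    # intercept with f = 0
    return taus[i_max] - fs[i_max] / rates[i_max]

traj = integrate(f0=1e-4, Na0=2e-4, X=2.0, gamma=10.0)
print(lag_time_from_trace(traj))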
In Fig. (4) we show by numerical solution that for increasing values of the initial polymerised fraction, f(0), the lag time for assembly decreases, and above a certain value the lag phase disappears completely. In fact, above a critical value we retrieve the transient behavior discussed in the previous section, see again Fig. (2). The origin of the lag time is the time required to create nucleation centers, which then grow into equilibrium polymers. Indeed, increasing the initial polymerised fraction is equivalent to seeding the system, [52] and hence these pre-nucleation seeds start elongating straight away, reducing the overall time required for nucleation. From Fig. (4), which we obtained by means of numerical solution, we see that a lag time only occurs for initial polymerised fractions f(0) close to zero, i.e., far away from the equilibrium value. If we want to obtain an analytical expression for the lag time, which will help us to compare with experimental data, we cannot rely on a linear analysis. Instead, we resort to perturbation theory. [53] Perturbation theory demands an expansion parameter around which an exact solution can be Taylor expanded. In principle, our phenomenological equations have two parameters, the mass action variable, X, and the pathway controller, γ. However, γ seems to be the only sensible candidate to play the role of expansion parameter, as it is the only kinetic parameter. It turns out that the lag time is very weakly dependent on the value of γ. To illustrate this, we solve our nonlinear equations numerically for fixed initial conditions and mass action variable, X, for different values of this parameter γ, and present the results in Fig. (5). As Fig. (5) confirms, the lag time is relatively insensitive to γ, giving us a free choice of large or small γ whilst still obtaining a reliable estimate for the lag time. Notice that in Fig. (5) we have plotted the fraction of polymerised material as a function of the logarithm of time to see the behavior at large times, that is, to highlight the pseudo plateau that appears at intermediate times. The pseudo plateau eventually equilibrates to the true equilibrium, f = 1 − X^{−1}, at large times. It appears that for small values of γ, our system of self-assembling monomers experiences a second lag phase, the origin of which eludes us. On the other hand, it points at the two-stage nucleation seen in models where a disordered aggregate has to be nucleated before this in turn can nucleate into an ordered assembly which then can polymerize. [55] It shows again how rich the temporal behaviour of our model is.
After identifying γ as our expansion parameter, Eqn. (8) and (9) give us the two options of γ ≪ 1 and γ ≫ 1. We notice that for the limiting case γ ≪ 1 our system of differential equations is regular, whilst for the opposite limiting case γ ≫ 1 it is singular. This means that for the former we can put γ = 0 and hopefully obtain a convergent solution in powers of γ by straightforward perturbation expansion. Because of its singular nature, for the latter we cannot put γ → ∞ and calculate corrections perturbatively in powers of 1/γ. It turns out that, in spite of this, solving the singular version of our dynamical equations, i.e., for the case γ ≫ 1, is much simpler. As the numerical solutions show, the short-time behavior of the system differs from the long-time behavior. A regular perturbation expansion breaks down for this kind of behavior, whereas the technique known as Matched Asymptotic Expansion takes care of it. [35] Matched Asymptotic Expansion has proven a useful scheme to provide reliable asymptotic solutions, but it only applies to singular problems. Hence, we choose to do our perturbation theory in the limit γ ≫ 1, yet we expect accurate results for our lag time for all values of γ. Of course, our analysis of the behaviour at late times is only accurate for large γ and completely misses the pseudo plateau shown in Fig. (5). We return to this issue below. Invoking the method of Matched Asymptotic Expansion described in the appendix, we find the leading-order solutions for f(τ) and Ñ_a(τ), Eqn. (18) and (19), valid for X > 1 and f(0) ≪ 1 − X^{−1}. Eqn. (18) is evidently a sigmoidal (logistic) function, which in fact we expect from Eqn. (8) and (9) in the limit γ → ∞. As the functional form that we obtain for f(τ) is sigmoidal, it does not exhibit the transient phenomena of overshoot or undershoot. Notice that Eqn. (18) is indeed independent of γ, as we found from the numerical results shown in Fig. (5). In the limit γ → ∞ we obtain the equality f(τ) = Ñ_a/X = K_a N̄_a²(τ)/X, valid for all times, which in fact is the equilibrium condition relating the fraction of polymerised material and the degree of polymerisation of the active material. The other equilibrium condition, f = 1 − X^{−1} for X > 1, is only reached at infinite time. Also, taking the limit X → ∞ in Eqn. (18) and (19), we find that the sigmoidal response is preserved. This will turn out to have important consequences that we return to below.
As is generally done to analyse experimental assembly data, we can cast our sigmoidal relation into the form f(τ) = A / (1 + exp[−k_app(τ − τ_1/2)]) and define an effective growth rate and lag time, [51] where from Eqn. (18) we read off that A = 1 − X^{−1} is the equilibrium value, i.e., the saturation value, k_app = 4(1 − X^{−1}) is the effective growth rate, and τ_1/2 is the time at which the maximum growth rate occurs, which turns out to be at the halfway point of assembly. Note that for X → 1 the growth rate goes to zero and the half time, τ_1/2, diverges, signifying what one may call critical slowing down, by analogy to what happens in phase transitions near the critical point. [54] From Eqn. (17) and (19) we can now simply calculate the lag time, τ_lag, by the method advertised above, i.e., by i) finding the time at which we have the maximum growth rate, ii) determining the tangent at that point, and iii) identifying the time intercept as the lag time. We find that Eqn. (21), the lag time calculated from the leading-order solution for the polymerised fraction, f, is independent of γ. However, Ñ_a does depend weakly on our pathway controller γ. As a consequence, this quantity can exhibit overshooting and undershooting in the large-γ limit. Note that, strictly speaking, our asymptotic solution is accurate in the limit γ ≫ 1, implying that Ñ_a is enslaved by f and hence has the same lag time. The conditions for obtaining a transient response in the renormalised mean polymer length, Ñ_a, have been evaluated in the previous section.
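For completeness, the tangent construction can be written out explicitly for the logistic form quoted above. The derivation below is a reconstruction: it assumes that Eqn. (18) is the logistic solution with saturation value A = 1 − X^{−1}, apparent growth rate k_app = 4(1 − X^{−1}) and initial value f(0), consistent with Eqn. (B6) of Appendix B; reassuringly, its X → ∞ limit reproduces the off time quoted below.

f(\tau) = \frac{A}{1 + \dfrac{A - f(0)}{f(0)}\, e^{-k_{\mathrm{app}}\tau}},
\qquad
\tau_{1/2} = \frac{1}{k_{\mathrm{app}}}\ln\frac{A - f(0)}{f(0)}.

The maximum growth rate, A k_{\mathrm{app}}/4, is attained at \tau = \tau_{1/2}, so the tangent there intersects f = 0 at

\tau_{\mathrm{lag}} = \tau_{1/2} - \frac{2}{k_{\mathrm{app}}}
 = \frac{1}{4(1 - X^{-1})}\left[\ln\frac{1 - X^{-1} - f(0)}{f(0)} - 2\right]
 \;\xrightarrow{X \to \infty}\; \frac{1}{4}\ln\frac{1 - f(0)}{f(0)} - \frac{1}{2}.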
To verify that the lag time is weakly dependent on the pathway controller, as we deduced from Fig. (5), we compare in Fig. (6) τ lag for the polymerised fraction, f , from our analytical solution, Eqn. (20), and the results from a numerical solution of the governing equations. As the figure confirms, this turns out to be justified, indicating that our asymptotic solutions for the analysis of the lag time are correct. This in turn implies that Eqn. (21) is a good estimate for any value of the pathway controller and determines under what conditions the lag phase exists. Eqn. (21) is real and positive only provided f (0) < 1 − X −1 = f (∞), otherwise we lose the lag phase. This is particularly highlighted if we let X → 1 + , where the lag phase vanishes unless f (0) → 0 more quickly than X goes to unity.
Eqn. (21) and Fig. (6) point at a remarkable property of our lag time, which is that it does not vanish in the limit of an infinitely deep quench corresponding to taking the limit X → ∞. This is remarkable, because naively one would expect that if the thermodynamic driving force becomes very large, this suppresses any phenomenon associated with nucleation. In our phenomenological theory the lag phase is the only response associated with nucleation and hence we expect it to vanish. Instead, in the limit X → ∞, we find τ lag → ln ((1 − f (0))/f (0)) /4 − 1/2 ≡ τ off , i.e., the lag time tends to a finite non-zero value that we call the "off time", τ off .
V. DISCUSSION
We have seen in the previous section that, depending, e.g., on the initial conditions, we can have a lag phase before assembly takes off. What we have not shown, however, is that in disassembly a lag phase is always absent. Hence, there is an inherent asymmetry between the polymerisation and depolymerisation kinetics. We call this asymmetry in the kinetics temporal hysteresis. It is of interest to study in more detail in what way assembly differs from disassembly and what parameters affect this. We confine ourselves to hysteresis under conditions characterized by a lag phase in the assembly, and use the analytical solution for the polymerised fraction, Eqn. (18), for that purpose.
To be able to compare the assembly and disassembly kinetics in a quantitative fashion, we perform a theoretical cyclic quench experiment, using our model. For this, we quench the system from one equilibrium state X 1 to another equilibrium state X 2 > X 1 , and after equilibration has taken place we quench back from X 2 to X 1 < X 2 . The former is an assembly process that potentially experiences a lag phase and the latter a disassembly process that does not have a lag phase. For a meaningful comparison between assembly and disassembly, we define the quantity ∆f (τ ) =| f (∞) − f (τ ) |, where f (∞) is the long-time (equilibrium) value of the polymerised fraction, f , after the quenching. If assembly and disassembly do not exhibit hysteresis, the function ∆f (τ ) is identical for the two processes in our cyclic quench experiment. If there is hysteresis, this is no longer the case. We find that the magnitude of temporal hysteresis, to be specified in more detail below, depends on two factors: i) the width of the quench interval, ∆X = X 2 − X 1 , and ii) the distance of the quench interval from the critical point X = 1. As to be expected, in the limit ∆X → 0, we do not find any hysteresis as the quench experiment is perturbative and hence the problem becomes a linear one. Also, as X 1 moves away from the critical point at X = 1, the level of hysteresis decreases and, indeed, in the limit X 1 → ∞ hysteresis is absent. The reason is that, as we have seen in the preceding sections, the lag phase for the assembly ceases to exist for the large initial polymerised fraction. In this case ∆X need not be very small for the hysteresis to vanish.
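The cyclic quench protocol just described is easy to reproduce numerically. The sketch below (a rough illustration only, built on the integrate() helper of Section II and hence on the reconstructed equations) records ∆f(τ) for the up-quench X_1 → X_2 and the down-quench X_2 → X_1, and accumulates the area between the two curves up to a simple numerical proxy for their intersection; this area is the hysteretic quantity A_h defined in the next paragraph.

# Sketch: hysteretic area between assembly and disassembly relaxation curves,
# using the reconstructed dynamics (integrate() from the sketch in Section II).
def delta_f_curve(f0, Na0, X, gamma, tau_max, dt):
    f_inf = 1.0 - 1.0 / X
    return [(t, abs(f_inf - f)) for t, f, _ in integrate(f0, Na0, X, gamma, tau_max, dt)]

def hysteretic_area(X1, X2, gamma, tau_max=40.0, dt=1e-4):
    fa, fb = 1.0 - 1.0 / X1, 1.0 - 1.0 / X2                     # equilibria at X1 and X2
    up = delta_f_curve(fa, X1 * fa, X2, gamma, tau_max, dt)     # assembly:    X1 -> X2
    down = delta_f_curve(fb, X2 * fb, X1, gamma, tau_max, dt)   # disassembly: X2 -> X1
    diff = [du - dd for (_, du), (_, dd) in zip(up, down)]
    # crude proxy for the intersection of the two curves (cf. Fig. (7)): the last instant
    # at which the assembly curve still lies above the disassembly curve
    i_cross = max(i for i, d in enumerate(diff) if d >= 0.0)
    return dt * sum(diff[: i_cross + 1])

# One quench close to the critical point; vary X1 and X2 - X1 to probe the trends in the text.
print(hysteretic_area(X1=1.05, X2=1.6, gamma=1.0))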
To support these two statements, we define a quantity that we call the hysteretic area, A_h, defined as the area between the kinetic curves, ∆f(τ), for assembly and disassembly in a cyclic quench experiment, up to the point of intersection of the two, see also Fig. (7). To understand the reason for the existence of the hysteretic area, we revisit Eqn. (18) and note that the polymerised fraction, f, has two time scales: one associated with the apparent growth rate k_app, independent of the initial conditions, and the half time τ_1/2 that does depend on the initial polymerised fraction. The latter quantity becomes negative if the initial conditions are such that the system evolves to equilibrium via disassembly. As a consequence, disassembly evolves with only one of these two relaxation times, whereas assembly has both of them. This difference in time scales between assembly and disassembly gives rise to non-overlapping kinetic curves, and hence to a non-zero hysteretic area and also to an intersection point. For this reason, it makes sense to focus on the area between the two curves up to the point of intersection, which gives us a quantitative measure for the level of hysteresis. This is illustrated in Fig. (8), showing the hysteretic area, A_h, for various quenches between X_1 and X_2. We find that if we keep X_1 at some constant value but increase the quench interval, ∆X = X_2 − X_1, by increasing X_2, the hysteretic area increases as a function of ∆X. The figure also shows that the hysteretic area decreases as X_1 moves away from the critical point, X = 1. This confirms that the hysteretic response of polymerisation versus depolymerisation disappears as we move away from the critical point X = 1 and hence is a characteristic of nucleated or activated self-assembly.

We used our results of section (IV) for the lag phase to study the potential kinetic asymmetry between assembly and disassembly. Our analysis of the lag phase, along with that of the transient response discussed in section (III), turns out to allow us to provide a physical or molecular interpretation of the phenomenological parameters in our theory. Our phenomenological model has two important parameters: the thermodynamic mass action variable, X, and the phenomenological kinetic parameter, γ. X describes the final thermodynamic state of the solution, as we have shown in section (II), whereas γ controls the temporal evolution of the polymerised fraction, f, and the mean polymer length of active material, Ñ_a, towards that final thermodynamic state, which potentially takes place via a plethora of pathways. [22] The relative weight of each of the pathways governs in the end the precise kinetics of our linearly polymerizing system, and this suggests that there must be a connection between the parameter γ and the relative weight of these pathways. This is why we referred to it as the pathway controller. To justify that γ must indeed be a pathway controller, we revisit Fig. (5), showing how the parameter γ influences the way the polymerised fraction evolves as a function of time. Initial conditions are such that at zero time the fraction of polymerised material is vanishingly small and (eventually) equilibrates to the value of one half. In the limit γ → 0, however, we obtain a pseudo plateau that arguably hints at how the pathways and the value of γ are related. This pseudo plateau emerges because of a time scale that diverges after the lag phase, in the late stages of the assembly process.
We also find a similar pseudo plateau for the mean polymer length, N̄_a.
The reason that we associate this pseudo plateau with a specific reaction pathway is that Semenov and Nyrkova find, within the rate equation approach, a similar late-stage diverging time scale for the so-called scission-recombination pathway. [46] They conclude that in the limit K_a → 0, the mean polymer length of the active material displays a signature of critical slowing down, in particular for X → 1+. However, the critical region vanishes in that same limit K_a → 0 and hence should not be observable in the limit where our theory is valid. What remains in the Semenov-Nyrkova theory is a time scale that scales as K_a^{−2/3}, and that time scale diverges irrespective of the value of the mass action variable, X, which is also true for our model. [46] So, this suggests that our γ → 0 limit should represent the reversible scission-recombination pathway. Other aspects of the kinetics point in the same direction, in particular the transients. Indeed, we observe in Fig. 3 that the magnitude of the overshoot increases with decreasing value of the pathway controller, γ. In section (III), we rationalized the existence of an overshoot in terms of almost instantaneous scission of very few, very long polymers combined with a somewhat slower, unspecified growth mechanism. As γ increases, overshooting becomes less prominent and disappears completely for γ ≫ 1: this ties in with a decreasing ability of the polymers to break and create nucleation centers that then can grow. As γ → ∞, scission must be absent within our interpretation. The polymers will in that case be nucleated from inactive monomers, and hence we see a lag phase emerging instead of an overshoot.
In conclusion, we have a kinetic parameter in our phenomenological theory that plausibly acts as a pathway controller. A natural question that now arises is: if γ is indeed the pathway controller, then why is the lag time independent of it? To answer this question, we need to understand that the lag time arises as a consequence of the existence of a nucleation barrier and effectively takes place at the monomer level, i.e., does not involve any polymers. The reason is that an assembly inactive monomer needs to transition to an assembly active state before it can partake in a polymerisation reaction. Naively, the lag phase is controlled on the one hand by the free energy difference between the active and inactive monomer states, and on the other by the thermodynamic driving force towards the polymerised state. This explains why, as X → 1, the lag time diverges as the thermodynamic driving force becomes zero. It does not explain, however, why in our model, even for infinite thermodynamic driving force, so for X → ∞, the lag time does not vanish.
Let us now confront our predictions for the lag time with actual experimental data on Amyloid-β assembly. (Because of a lack of data we cannot do the same for the transient response and hysteresis.) Hellstrand et al. obtained lag times as a function of the concentration of Amyloid-β(M1-42) protein in aqueous solution at pH = 8, containing 20 mM sodium phosphate, 200 µM EDTA, 0.02% NaN3, and 20 µM ThT. [51] The experimental data together with our theoretical fits are shown in Fig. (9). To obtain our theoretical fits, we need to fix our mass action variable, X, the initial value for the fraction of polymerised material, f(0), and the relaxation rate Γ_1 as defined in section (II). The mass action variable, X = φ exp(−∆f_e/k_B T), we determine by realizing that the experiments are done at constant physico-chemical conditions, and hence that the elongation constant, equal to exp(−∆f_e/k_B T), is fixed. Because the critical point is at X = 1, we deduce that the elongation constant must be equal to the reciprocal of the critical concentration, φ_c. Hence, X = φ/φ_c = C/C_c, where C and C_c are now the dimension-bearing molar concentrations. The critical concentration, C_c ∼ 0.18 µM, we determine from the experimentally measured concentration below which no polymers are detected. [51] The corresponding value of the dimension-bearing elongation constant turns out to be 5.6 × 10^6 M^{−1}, which corresponds to a binding free energy ∆f_e of about −20 k_B T.
Our lag time expression, Eqn. (21), depends logarithmically on the initial polymerised fraction, f(0). From the experiments of Hellstrand et al. we do not know its value. Hence, we choose f(0), as well as our phenomenological kinetic parameter Γ_1, in such a way as to get the best agreement at low and high concentrations. We recall that Γ_1 is the fundamental relaxation rate for the polymerised fraction in our model, see Eqn. (5). As we have discussed in the previous section, our lag time remains non-zero even at infinite concentration, whereas in the experiments the lag time does tend to zero with increasing concentration. Hence, to fit our theoretical results to the experimental data we need to subtract the off time τ_off = ln((1 − f(0))/f(0))/4 − 1/2 that we defined in the preceding section and that depends only on the initial condition. We get reasonable agreement with the experiments if we set f(0) = 10^{−6} and Γ_1 = 5.211/hour, telling us that the theory captures in essence the concentration dependence of the lag time of this particular system.
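The comparison just described can be set up numerically along the following lines: each protein concentration is converted into the mass-action variable X = C/C_c, the lag time is evaluated from the reconstructed expression for τ_lag given earlier (an assumption on the exact form of Eqn. (21)), the off time is subtracted, and dimensionless time is converted to hours with Γ_1. The parameter values C_c = 0.18 µM, f(0) = 10^{−6} and Γ_1 = 5.211/hour are those quoted above; the concentrations in the example loop are merely illustrative.

import math

# Sketch: theoretical lag time (in hours) vs. Amyloid-beta concentration, using the
# reconstructed expression tau_lag = [ln((1 - 1/X - f0)/f0) - 2] / [4 (1 - 1/X)]
# together with the parameter values quoted in the text.
C_c = 0.18          # critical concentration, micromolar (from the experiments)
f0 = 1.0e-6         # initial polymerised fraction (fit value quoted in the text)
Gamma1 = 5.211      # fundamental relaxation rate, 1/hour (fit value quoted in the text)

def lag_time_hours(C_micromolar):
    X = C_micromolar / C_c
    k_app = 4.0 * (1.0 - 1.0 / X)
    tau_lag = (math.log((1.0 - 1.0 / X - f0) / f0) - 2.0) / k_app
    tau_off = 0.25 * math.log((1.0 - f0) / f0) - 0.5
    return (tau_lag - tau_off) / Gamma1      # dimensionless time -> hours via Gamma_1

for C in (1.0, 3.0, 10.0):                   # concentrations in micromolar (illustrative)
    print(C, round(lag_time_hours(C), 3))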
Clearly, the fact that we have to subtract a known off time is unsatisfactory, although this does not preclude the possibility of systems that do show such a response. Mathematically, the off time in our theory is caused by the specific form of the free energy that we constructed. This leads to kinetic equations that remain of a logistic form even in the limit X → ∞. We have not been able to reformulate the theory in such a way that it suppresses the existence of an off time. The advantage of the model is that it is simple, that it exhibits a very rich kinetic behavior, and that it can relatively straightforwardly be extended to include other macroscopic phase transitions. In future work, we will connect this theory to the Landau-de Gennes theory for the isotropic-nematic phase transition, allowing us to model phase-ordering kinetics in solutions of chromonics or surfactants.
VI. CONCLUSION
In this work we present a phenomenological Landau theory for nucleated linear self-assembly. Our model is consistent with the thermodynamics of linear polymerisation in the limit K_a → 0, i.e., for small values of the activation constant, and is able to produce all transient behaviours observed experimentally, i.e., a lag phase to assembly as well as overshoot and undershoot. We show that the transient response of overshoot and undershoot is not caused by the nonlinearity of the problem at hand but is instead a consequence of the competition between the polymerised fraction, f, and the mean polymer length, N̄_a. Both types of transient behaviour can be obtained from the linearised version of our theory. We pinpoint the initial conditions required to observe overshoot or undershoot in the polymerised fraction, f, and in the mean polymer length, N̄_a.
Solving our nonlinear dynamical equations in the limit γ → ∞ using the method of Matched Asymptotic Expansion, we obtain an analytical expression for the lag time, τ_lag, which presents itself only when at time zero very little of the material is polymerised. The lag phase can only be found as a non-linear response, so it does not follow from the linearised theory. By comparing numerical and analytical solutions we show that the lag time is, for all intents and purposes, independent of our phenomenological kinetic parameter γ that we identified as the pathway controller. We find different kinetic profiles for polymerisation and depolymerisation, which we call temporal hysteresis, and show that this temporal hysteresis is due to the inherent asymmetry between polymerisation and depolymerisation. The reason, of course, is that a nucleation barrier needs to be crossed only upon polymerisation. When we move deeply into the polymerised regime, hysteresis becomes less prominent.
By comparing the temporal behavior of the polymerised fraction in the limit γ → 0 with the microscopic theory for the scission and recombination kinetic pathway, [46] we argue that in this limit our model is dominated by scission and recombination. We find that the magnitude of the transient overshoot and undershoot increases with decreasing value of γ. This finding explains why γ indeed must regulate the predominance of scission kinetics.
If we compare our theoretical lag time with experimental data on Amyloid-β, our theory agrees well with the experiments, apart from an additive constant that we need to remove. This additive constant, which we call the off time, τ_off, turns out to be the asymptotic value of the lag time at very large supersaturation. We do not fully understand the origin of this off time in our phenomenological theory. Finally, our phenomenological Landau theory, whilst much simpler than more conventional rate-equation approaches to self-assembly, mimics many, if not all, of the generic features seen in theory and experiments, including hysteresis.
VII. ACKNOWLEDGEMENT
Appendix A: Heat capacity

We arbitrarily assume that h_p < 0, and the free-energy density can then be computed for T ≤ T_p, with F = 0 for T > T_p. Hence, the isochoric heat capacity per particle, c_v, can be computed for T ≤ T_p, with c_v = 0 for T ≥ T_p. Eqn. (A6) agrees, after some algebra, with the specific heat at the critical temperature, T_p, calculated by van Jaarsveld et al. for so-called thermally activated polymerizing systems. [27]

Appendix B: γ ≫ 1 solution using Matched Asymptotic Expansion

In this appendix we outline the method of Matched Asymptotic Expansion, [35] by virtue of which we obtained analytical solutions of our dynamical equations (8) and (9). Starting with the outer solution, we first assume that the solution has a regular Taylor expansion in ǫ,

f^out(τ, ǫ) = f_0^out(τ) + ǫ f_1^out(τ) + O(ǫ²),
Ñ_a^out(τ, ǫ) = Ñ_a0^out(τ) + ǫ Ñ_a1^out(τ) + O(ǫ²).
Substituting this into Eqn. (8) and (9), we obtain the zeroth-order equations

df_0^out(τ)/dτ = 4(1 − X^{−1}) f_0^out(τ) + 4X^{−1} f_0^out(τ) Ñ_a0^out(τ) − 8 f_0^out(τ)²,    (B3)

0 = X f_0^out(τ) − Ñ_a0^out(τ),    (B4)

where the subscript "0" refers to the order of the solutions. We cannot impose both initial conditions on the leading-order outer solutions. We therefore take the most general solution of these equations. As we shall see, when we come to matching that to the inner solution, the natural choice of imposing the initial condition f_0^out(0) = f(0) is in fact correct. From equation (B4), we conclude that Ñ_a0^out(τ) = X f_0^out(τ) for all τ ≥ 0. The zeroth-order degree of polymerisation Ñ_a0^out corresponds to quasi-equilibrium with the fraction of active material f_0^out, in which the degree of polymerisation increases because of an increase in active material. The degree of polymerisation also decreases because of the breaking of long filaments, which increases the number of polymers.
Substituting this result into equation (B3), we get a first-order differential equation for f_0^out(τ). The solution of this equation is given by

f_0^out(τ) = 4(1 − X^{−1}) α e^{4(1−X^{−1})τ} / [4(1 − X^{−1}) − 4α + 4α e^{4(1−X^{−1})τ}],    (B6)

and hence

Ñ_a0^out(τ) = 4(X − 1) α e^{4(1−X^{−1})τ} / [4(1 − X^{−1}) − 4α + 4α e^{4(1−X^{−1})τ}].    (B7)

Here α is a constant of integration. This solution is invalid near τ = 0, because no choice of α can satisfy the initial conditions for both f_0^out and Ñ_a0^out. To solve the problem at short times, we surmise that there is a short initial layer, for times t = O(ǫ), in which f and Ñ_a adjust from their initial values to values that are compatible with the outer solution found above. We introduce the inner variables T = τ/ǫ, f^in(T, ǫ) = f^out(τ, ǫ) and Ñ_a^in(T, ǫ) = Ñ_a^out(τ, ǫ), and obtain the corresponding inner equations.

Now that we have obtained expressions for the inner and outer solutions, we assume both to be valid at intermediate times of the order ǫ ≪ τ ≪ 1. We require that the expansions agree asymptotically in this regime, where T → ∞ and τ → 0 as ǫ → 0. Hence, the matching conditions must read lim_{T→∞} f_0^in(T) = lim_{τ→0+} f_0^out(τ) = f(0) and lim_{T→∞} Ñ_a0^in(T) = lim_{τ→0+} Ñ_a0^out(τ) = f(0) X. This condition implies that f_0^out(0) = α = f(0) and Ñ_a0^out(0) = f(0) X, which fixes the outer solutions. Having now obtained expressions for the first terms of both the inner and the outer expansions, they must be matched together to obtain one composite expansion that approximates the solution over the whole time domain. To get the composite expansion, the inner and outer expansions are simply added together and the common limit found in (B16) is subtracted, for otherwise it would be included twice in the overlapping region. So our composite solution finally reads f ∼ f_0^out(τ) + f_0^in(τ/ǫ) − f(0) for the fraction of active material, and Ñ_a ∼ Ñ_a0^out(τ) + Ñ_a0^in(τ/ǫ) − f(0) X for the renormalised degree of polymerisation of the active material. Keeping in mind that ǫ = 1/γ, this gives us the full zeroth-order solution given in Eqn. (18) and (19).
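As a consistency check on the asymptotic analysis, the sketch below compares the γ → ∞ logistic solution read off from Eqn. (B6) with α = f(0) against a numerical solution of the reconstructed dynamical equations (the integrate() helper from the sketch in Section II); the maximum deviation should be small and shrink further as γ grows.

import math

# Sketch: compare the numerical solution of the reconstructed Eqs. (8)-(9) with the
# gamma -> infinity logistic solution read off from Eq. (B6) with alpha = f(0).
def f_logistic(tau, f0, X):
    A = 1.0 - 1.0 / X
    k = 4.0 * (1.0 - 1.0 / X)
    return A * f0 * math.exp(k * tau) / (A - f0 + f0 * math.exp(k * tau))

X, f0, gamma = 2.0, 1e-3, 50.0
traj = integrate(f0, X * f0, X, gamma, tau_max=10.0)
max_dev = max(abs(f - f_logistic(t, f0, X)) for t, f, _ in traj)
print(max_dev)    # should be small, and decrease further as gamma increases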
Meaning-Making of Motherhood Among Mothers With Substance Abuse Problems
Previous literature has documented the unique challenges encountered by mothers with substance abuse problems, which may hinder the ability to fulfill parenting responsibilities. Since there is evidence suggesting the engagement in meaning-making processes can help individuals reinterpret their transitions into parenthood and cope with parental stress, this study examined the meaning-making processes of motherhood among mothers with substance abuse problems. Sixteen Hong Kong Chinese mothers with a history of substance abuse were purposively selected and invited to narrate their life and maternal experiences in individual interviews. Based on the meaning-making model in the context of stress and coping, whereby global meaning refers to orienting system of an individual and situational meaning refers to the meaning one attributes to a particular situation, the global and situational meanings of participants related to motherhood and substance use, and their reappraised meanings in response to the discrepancies between global and situational meanings were analyzed. Using thematic analysis, the results showed that when faced with an internal conflict between global and situational meanings induced by substance abuse, most participants engaged in the meaning-making process of assimilation. Rather than changing their inherent parental beliefs and values, most participants adjusted their appraisals toward the situation, and hence made changes in their cognitions or behaviors such as making efforts to quit substance use or reprioritizing their parenting responsibilities. The analysis further revealed that being a mother provided a significant source of meaning to the participants in confronting highly stressful mothering experiences induced by substance abuse. Altogether, the findings suggest that a meaning-making approach may have benefits and implications for helping this population reorganize their self-perceptions, gain a clearer sense of future direction in motherhood, and achieve more positive life and parenting outcomes.
INTRODUCTION
Extensive studies have addressed challenges faced by women who have substance abuse problems during pregnancy and motherhood (e.g., Fergusson et al., 2012). Research has mostly focused on the psychosocial and emotional distress of women (Punamäki et al., 2013), their perceived or actual inadequacy in parenting (Brown, 2006;Massey et al., 2012), or the adverse effects of substance abuse on childcare and child development (Barnard and McKeganey, 2004). Simultaneously, there is a growing interest in research that studies the lived experiences and firsthand perspectives of mothers with substance abuse problems. Although most research in this area has been conducted from a third-person perspective, a first-person approach can promote the development of psychological interventions while maintaining ecological validity (Gaj, 2021), which are particularly important for incorporating the personalistic accounts of those with substance abuse histories. Among available qualitative studies examining personal accounts of motherhood and substance abuse (e.g., Virokannas, 2011;Silva et al., 2012;Torchalla et al., 2014), there is little research examining the meaning-making process in the context of stressful mothering experiences related to substance abuse.
Meaning can be understood as the significance individuals assign to different events or areas of their lives, and is a self-constructed, cognitive orienting system influencing the sense of life purpose, interpretation of lived experiences, feelings of fulfillment, and behaviors of an individual (Wong, 1998). Likewise, meaning-making can include the restoration of meaning in the context of highly stressful situations (Park, 2010;To et al., 2018). Because meaning-making can facilitate personal growth and provide new life and maternal perspectives, it is possible that enhancing the meaning-making processes of mothers with substance abuse problems can support these individuals in reinterpreting their transitions into motherhood and in coping with maternal stress induced by substance abuse (Tedeschi and Calhoun, 1995;Sawyer and Ayers, 2009). For instance, studies have found that becoming a parent is linked to increased motivation for stopping substance use (e.g., Crozier et al., 2009;Radcliffe, 2011). Other studies have shown that mothers with substance abuse problems tend to struggle with their perception of motherhood, which affects the fulfillment of their parental roles (e.g., Virokannas, 2011;Silva et al., 2012). In contrast, if mothers with substance abuse problems can reinterpret personal challenges and make meaning from stressful life events, it is possible that the women can overcome struggles and carry out their mothering responsibilities.
Moreover, the meaning of motherhood and the identification with being a mother may become increasingly salient during pregnancy or after childbirth as the mother becomes tasked with providing for the needs of her child, tending to other maternal responsibilities, and changing her perception of herself in relation to others, including to her child (Smith, 1999). Some studies suggest that mothers with substance abuse problems often regard their child as the most important person in their life and as a primary source of meaning in their lives (Brownstein-Evans, 2001). Such findings may suggest that the meaning derived from motherhood may help mothers struggling with substance abuse to combat the label of being a "bad mother" by allowing them to reconsider their views on fulfilling the needs of their children or to change their behaviors and practices to be more available for their children (Brown, 2006).
For example, mothers who engage with or enhance their existential wellbeing tend to report fewer experiences of substance abuse and lower levels of parenting stress (Lamis et al., 2014). Additionally, women who discontinue substance abuse during pregnancy are found to report greater feelings of self-worth and decreased symptoms of anxiety and depression (Massey et al., 2012). Taken together, there may be a relationship between the meaning derived from being a mother and the ability of a mother to reconstruct her personal and maternal identities, improve parenting and parent-child relationships, and enhance motivation to change her substance abuse behavior. However, because prior studies concerning mothers with substance abuse problems are focused on the results of meaning-making of motherhood, the psychological mechanisms underlying such meaning-making processes remain unclear.
Overall, exploring meaning-making from the perspective of mothers with substance abuse problems is still relatively understudied. Given the unique challenges and circumstances often faced by these women, it is important to develop a deeper understanding of the interplay between motherhood and substance abuse, as well as the complex meaning-making processes underlying these interactions, to provide better support and services to these individuals. Furthermore, meaning is not only personally created, but is also interpersonally constructed and broadly constituted by the ideologies of the wider sociocultural environment, whereby the differing experiences of socialization may result in differences in the cognitions or sense of identity of an individual (Ugazio and Castiglioni, 1998). In the process of meaning-making, negative self-perceptions derived from social comparisons and societal stigma may cause mothers to negate their own lived experiences and devalue their personal lessons and learnings (Brownstein-Evans, 2001). For mothers with substance abuse problems, the internal conflict between wanting to be seen as a "good mother" and being identified as a "bad mother" because of their substance abuse may lead to feelings of parental guilt and dysfunctional parenting practices (Baker and Carson, 1999;Brown, 2006;Silva et al., 2012). Furthermore, perceptions of societal disapproval toward mothers with substance abuse problems may interfere with their ability to draw meaning from their own experiences. Consequently, there is a need to understand how mothers with substance abuse problems can cope with social comparisons and societal stigma through creating meaning from their motherhood experiences.
In Hong Kong, the dominant cultural conceptions of motherhood tend to hold mothers morally responsible for the outcomes of their children. For instance, a previous qualitative study reported that Chinese mothers often blamed themselves for the perceived "imperfections" of their children (Pun et al., 2004). Another study found that in comparison with fathers, Hong Kong mothers are usually expected to take more responsibility in childrearing activities. As a result, these mothers were found to suffer from a higher level of parental stress and a lower level of parental satisfaction compared to the fathers. Likewise, another prior study found that single mothers from divorced or poor families are more likely to regard themselves as inadequate mothers due to their perceived inability to contribute to a stable future for their children (Leung, 2016). Taken together, these studies demonstrate how Hong Kong Chinese mothers may have manifested the surrounding cultural discourses in their personal construction of maternal meanings. For example, in contemporary Hong Kong society, there are said to be expectations for mothers to be highly involved in the lives of their children, and there is also a mother-blaming culture (Shek and Sun, 2014), both of which may adversely affect the meaning-making and personal growth processes of mothers.
Analytical Framework
The meaning-making model in the context of stress and coping (Park, 2010) was used to inform the analytical framework of this study (Figure 1). According to Park and Folkman (1997), while there are different theoretical perspectives on meaning-making processes in the context of stress and coping, most of such theories examine the functions of meaning-making in relation to how people appraise and cope with stressful events and circumstances. Among the theoretical perspectives, a few common tenets regarding global meaning and situational meaning have been identified (Park, 2010). First, global meaning involves using the orienting systems of an individual to provide the cognitive frameworks to assist with interpreting life experiences and setting life goals (Pargament, 1997). Individuals who feel their lives are aligned with such orienting systems tend to feel more fulfillment and purpose in their lives. Second, situational meaning refers to the meaning associated with a particular context or event (Park and Folkman, 1997). When stressful life events challenge such global meaning systems, people may encounter discrepancies between their global and situational meanings and thus experience psychological distress.
In the context of motherhood, global parental meaning can be described as the parental values and beliefs informing the goals of mothers and meanings related to motherhood, while situational parental meaning is related to the interaction between the global meaning of a mother and a surrounding event or circumstance (To et al., 2018). Given that substance abuse problems are generally described to violate core global beliefs about motherhood due to the societal stigma and judgment against maternal substance abuse (Stegnel, 2014;Stone, 2015), this may influence the extent to which a mother perceives her substance abuse problems as inconsistent from her global beliefs regarding motherhood.
To create meaning from such discrepancy, the individual reconstructs the appraised meaning of a situation to reduce the discrepancy between the situational meaning and the personal global meaning of the individual. This is a process referred to as assimilation (Park, 2010). Conversely, transforming global meaning into one more congruent with the situational meaning is known as accommodation (Park, 2010). Ultimately, when individuals can create meaning through assimilation or accommodation and hence reduce discrepancies between their global and situational meaning systems, they are likely to reduce the sense of internal conflict regarding the stressful event (Park and Folkman, 1997). In the context of motherhood, a mother may change her global parental meaning to meet the needs of the situation (e.g., altering her globally situated parental goals to cater to the needs of her children), or change her situationally appraised meanings to become more congruent with her globally held parental meanings (e.g., perceiving the situational challenge in a more positive light and undergoing internal growth as a parent).
Adopting the meaning-making model to examine the meaning-making processes of motherhood among mothers with substance abuse problems, this analytical framework suggests that a mother may experience psychological distress when perceiving a discrepancy between her global and situational parental meanings, where meanings are influenced by how she perceives stressful situations induced by her substance abuse. When she attempts to reconcile discrepancies between her global and situational meanings through meaning-making processes, any resultant changes in her appraisals of global parental meanings or situational parental meanings may constitute meanings made.
This framework was used to guide the development of the qualitative interview guides and the data analysis procedures. In brief, this study explored how mothers with substance abuse problems appraise meaning when adjusting to stressful life situations. It also examined how meaning-making processes may allow mothers who are confronted with stressful life events to reduce the perceived discrepancies between their global meanings and situational meanings, particularly when such stressful events stem from their substance abuse problems.
Study Context
In Hong Kong, an increase in the number of females with substance abuse problems who were of childbearing age was observed throughout the 2000s (Central Registry of Drug Abuse, 2020). However, the specific difficulties faced by mothers with substance abuse problems, combined with the lack of services for these mothers and their children, have raised concerns among both practitioners and the public. Although neither an official definition of mothers involved in problematic substance use nor statistical records of these mothers exist in Hong Kong, approximately 975 women aged 21 or older in Hong Kong reported engaging in substance abuse in 2019. Among this group, 348 were married or cohabiting and 273 were divorced, separated, or widowed (Central Registry of Drug Abuse, 2020).
Various local rehabilitation and supportive services are available in Hong Kong to mothers with substance abuse problems. Moreover, with the support of the Comprehensive Child Development Service (CCDS), organized by the Department of Health and Hospital Authority, and the sponsorship of the Beat Drugs Fund Association, local community-based non-governmental organizations (NGOs) providing substance abuse treatment and rehabilitation services can obtain additional resources to provide counseling and support programs for parents with substance abuse problems. Therefore, research participants were recruited from these community-based NGOs.
Participants
Purposive sampling is commonly used in qualitative studies where specific settings or cases that can provide rich information are deliberately selected for data collection and analysis (Maxwell, 2012). Using purposive sampling, a sample with varying sociodemographic characteristics and diverse lived experiences and perspectives on motherhood and substance abuse was recruited. Specifically, a total of 16 Hong Kong Chinese mothers were recruited for this study, ranging from 20 to 40 years old. All participants had at least one child who was no more than 6 years of age and presented with a history of psychotropic substance abuse before, after, or both before and after the birth of the child. The participants were receiving casework services provided by two community-based Counseling Centers for Psychotropic Substance Abusers (CCPSAs) in Hong Kong which provide counseling services for people with habitual, occasional, or potential psychotropic substance abuse problems. Such inclusion criteria were imposed in light of research showing that substance abuse is a condition prone to chronic relapse, particularly for mothers with young children, who might feel a strong sense of strain and dissatisfaction with their relationships with partners or their role performance in early motherhood, which can prompt them to relapse (Niccols et al., 2012). Furthermore, mothers with a formal medical diagnosis related to unstable or severe emotional or mental conditions were excluded from participation because the narration of life stories during the intervention may cause further emotional disturbance or mental distress.
As mothers with substance abuse problems have been observed, both locally and overseas, to be a difficult group to reach, the small sample size (n = 16) was justified. Moreover, according to guidelines for designing samples for qualitative research (Onwuegbuzie and Leech, 2007), this sample size was sufficient for in-depth, qualitative data analysis. All participants were openly recruited through these two CCPSAs. No potential participants were excluded from this study, and no potential participants had declined participation.
The profile of all 16 participants is shown in Table 1. The majority of participants (n = 11) were in their 20s, four were in their 30s, and one was 40 years old. Ten participants had completed junior secondary education, while six had completed senior secondary education. Among the participants, seven were married, three were cohabiting, five were separated or divorced, and one was single. The majority (n = 10) had one child, five had two children, and one had three children. Eleven participants reported a monthly family income below HK$20,000. Most (n = 12) reported they had their first experiences with drugs during adolescence or late adolescence. At the time of data collection, nine of them claimed they had abstained from substance use and seven were still partaking in drug use.
Data Collection Procedures
The study was conducted according to the ethical protocol approved by the Survey and Behavioral Research Ethics Committee of The Chinese University of Hong Kong. Permission was also sought from the institution in charge of the counseling centers. Written informed consent to join this study was obtained from each participant before data collection. The study used in-depth interviews for data collection to develop a comprehensive understanding of the lived experiences of mothers with substance abuse.
All interviews were conducted within the two Counseling Centers for Psychotropic Substance Abusers in Hong Kong, and each interview lasted between 1 and 2 h. The interviews were conducted in Cantonese and audio-taped with the written informed consent of each participant. A set of guiding questions was predetermined to develop an interview guide, and these questions were posed during the interview in a more open-ended manner.
Examples of guiding questions include:
• Please describe your substance abuse history and current situation.
• Please describe your experiences and feelings when you found out you were pregnant.
• At the time of pregnancy, what was your personal meaning of being a mother? What were your goals as a mother?
• At present, how would you describe your meaning of motherhood? What are your current goals as a mother after giving birth?
• How would you describe yourself as a mother with a history of substance abuse?
• Please describe your experiences and feelings when it comes to taking care of your child. How do you respond to or manage such feelings?
• How do you cope with the difficulties and challenges in motherhood?
• How do you describe your mother-child relationship?
• How do you interact with other family members involved in the childrearing responsibilities for your child?
Data Analysis Procedures
Using Chinese word processors, student research assistants first transcribed the entire conversational content of the audiotapes. To ensure anonymity, pseudonyms were assigned to all participants. After checking the accuracy of the transcripts, this information was imported into the computer-assisted data analysis software NVivo 10. The software was used for reading, coding, and identifying emergent themes embedded in the interviews. A seven-phase thematic analysis (Braun and Clarke, 2006), guided by the analytical framework (i.e., the meaning-making model), was then performed to analyze the qualitative data. The seven phases of the thematic analysis involved the following: (1) reading and rereading the transcripts to understand the narration of each participant and to identify important raw responses, (2) developing a preliminary coding scheme based on the analytical framework, (3) assigning codes to the narratives of participants by focusing on the meaning-making processes and outcomes related to motherhood, (4) categorizing the codes and identifying broader themes at thematic levels, (5) refining the themes, (6) naming the themes and making sense of the connections between them, and (7) summarizing the major themes and identifying representative quotations that best illustrated each theme.

Several cautionary measures were taken to ensure the trustworthiness of the study. First, following the approach of negotiated agreement (Campbell et al., 2013), an intercoder agreement was established during data analysis. During this process, two researchers randomly selected five transcripts at the initial stage of data analysis and coded them separately based on the preliminary coding scheme. These codes were then cross-evaluated, which yielded an intercoder agreement rate of 94.65%. Second, whenever disagreements or discrepancies between understandings of the data occurred, meetings were held to discuss the codes and themes and to exchange opinions, so that possible interpretations of the data were thoroughly considered. Third, although member checking was not used, as the relatively low education levels of the participants would have made it difficult for them to read and give feedback on the transcripts and data analysis products, field observations were conducted of parenting groups and programs organized by the centers and joined by the research participants. Field notes were also written after each interview and observation. Such triangulation of data provided another way of understanding the meaning-making processes of the mothers and their mothering behaviors (Maxwell, 2012). Lastly, an audit trail was created by properly maintaining all consent forms, audiotapes, transcripts, procedural notes, and products of data analysis.
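The study reports the negotiated-agreement rate (94.65%) without specifying how it was computed. As a purely illustrative aid, the short sketch below (in Python) shows one simple way such a percent-agreement rate between two coders could be calculated; the coding categories and values are hypothetical and are not taken from the study data.

def percent_agreement(codes_a, codes_b):
    # Share of excerpts to which both coders assigned the same code.
    if len(codes_a) != len(codes_b) or not codes_a:
        raise ValueError("Code lists must be non-empty and of equal length.")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes assigned by two researchers to ten transcript excerpts.
coder_1 = ["global", "global", "situational", "discrepancy", "meaning_made",
           "global", "situational", "discrepancy", "meaning_made", "global"]
coder_2 = ["global", "situational", "situational", "discrepancy", "meaning_made",
           "global", "situational", "discrepancy", "meaning_made", "global"]

print(f"Agreement rate: {percent_agreement(coder_1, coder_2):.2%}")  # Agreement rate: 90.00%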
RESULTS
The narratives of the participants revealed that although each mother presenting with substance abuse had previously adopted her own way of making sense of her parental experiences, the recurrence of certain experiences revealed common themes embedded within their meaning-making processes. This section will thus articulate the themes elicited from the narratives of participants, with particular emphasis on the interrelationships between global parental meaning, situational parental meaning, discrepancies between such situational and global meanings, and the meanings made to reduce such discrepancies. The themes and sub-themes of this study are shown in Table 2.
In the past, I was so full of myself. Substance abuse, hair dyed gold, purple, multi-colored, and smoking. Now, I've even quit smoking. I haven't smoked for 2 years, because I'm worried my son would cough. . . I had never thought I'd be like this. Even my friends said it's a miracle I could quit drugs, not to say smoking as well. I myself am happy; makes me see my son as my treasure even more. . . He has motivated me and accompanied me through so many things. (Judith)

In the above narrative, Judith vividly described the drastic changes she made in her way of life, motivated by the birth of her son. As the meaning-making model of Park (2010) would predict, when Judith sensed it was impossible to attain her global parental meaning of raising her son while maintaining substance abuse, she was determined to completely change her lifestyle and quit substance use to resolve the discrepancy between her global and situational parental meanings.
Like Judith, the mothers with substance abuse problems in this study might not have cared for their own health or life development, instead enjoying a risk-taking lifestyle before they became mothers. Judging from the substance abuse records of the women, onlookers might expect that they would not function well as mothers, nor take full responsibility for childrearing. Contrary to these assumptions, it was found that the stressful situations the women dealt with after childbirth often became powerful catalysts to fight against their addictive behavior and to reconstruct the meanings of their lives, as exemplified by the above narrative from Judith. In this study, the experiences of mothers were investigated to see how parenthood became a turning point in their lives.
Global Parental Meaning
In the interviews with the sample of mothers with substance abuse problems, all of the participants mentioned their parental values and beliefs. For instance, all of them discussed the importance of providing for their children, which therefore emerged as the first theme in terms of the global parental meaning of mothers. Regardless of whether they felt they were able to meet the needs of their children, these beliefs appeared to serve as the guiding system they used to appraise themselves and adjust their daily routines and parenting practices. Their narratives showed that the participants all held themselves responsible for their children.
Four sub-themes were also identified. The first sub-theme is that the mothers made an effort to ensure that the physiological needs of their children were met, as indicated by narratives from nine participants. This is reflected in the following excerpt from Doris: Raising the child, feeding him, and looking after him. . . Taking care of him even when I am not feeling well is really harsh. But as a mother, I know this is how to be a mother at that moment. (Doris)

A second sub-theme is that the mothers felt they should provide their children with an intact family (n = 7), illustrated by the following: Actually, I've always wanted to give my child a family. I also really want a family that belongs to myself. With a father and a mother... I want to give my child a family intact. (Ivy)

Some participants also believed mothers ought to provide the best circumstances for their children (n = 7), which is the third sub-theme: Giving the child the best things. Teaching him the best stuff. (Katrina)

The fourth sub-theme is that the mothers believed they needed to be sensitive to the emotional needs of their children (n = 4), as indicated by the quote from Carol below: Caring about how the child feels. For example, he really wants to share with Mummy about the happy and unhappy events at school. (Carol)

The next prominent theme regarding the global parental meaning of the mothers is their belief in nurturing their own children. Within this theme, two sub-themes were identified. First, some participants specifically pointed out that mothers should be the ones responsible for, and suited to, nurturing their own children (n = 7). For the second sub-theme, some participants reiterated the necessity for mothers to be role models for their children and to demonstrate good behaviors (n = 6): I think the key point is to be a role model. You need to set a good example for the child to follow. (Ivy)

Overall, the participants appeared to believe in the importance of taking the initiative to cultivate the well-being of their children. The above themes and sub-themes illustrate how these mothers with histories of substance abuse value the well-being of their children, whether it be physiological, developmental, or emotional well-being. Caring for the survival and development of their children appears to be a key component of their general orienting systems as parents. Although the public may hold preconceived notions or assumptions that mothers with a history of substance abuse are "bad mothers" (Baker and Carson, 1999; Brown, 2006; Silva et al., 2012), these narratives demonstrate how their parental values and beliefs may be similar to those of mothers without a history of substance abuse.
Situational Parental Meaning: Assignment of Meanings to Stressful Situations
As a result of their motherhood, the participants had to reappraise their life situations as substance users after assuming a new identity as mothers. Circumstances or behaviors they had previously tolerated or viewed as acceptable came to be seen as problematic after they became mothers. The narratives extracted from the interviews reveal that entering motherhood magnified previous issues, particularly those stemming from substance use, and introduced new dilemmas into the lives of the mothers.
Among the 16 participants, some appeared to have begun their own meaning-making processes and started to make behavioral changes in their lives, such as quitting substance use soon after the confirmation of their pregnancy (n = 4). During the interviews, some (n = 5) shared they were able to quit substance use entirely after childbirth and reported to have more stable mothering practices. Other mothers (n = 7) shared their goal of quitting the use of substances but were struggling to do so due to stressful circumstances.
In the following sections, the narratives of three selected mothers (Flora, Judith, and Nicole) are used to reflect how the participants made meaning from their mothering experiences. These three individuals were selected because each was at a different stage of substance use at the time of data collection, and they also differed in their life circumstances (refer to participants 6, 10, and 14 in Table 1). Thus, the following sections mainly articulate the common themes elicited from these three narratives, in relation to their assignment of meaning to stressful situations, how they resolved the discrepancies between situational and global meanings, and their meaning-making processes.
The narratives revealed both internally and externally attributed stressful situations faced by the participants of this study: internally attributed situations are those the individual perceives as within her own control (or as stemming from her own loss of control), while externally attributed situations are those perceived to be driven by factors outside her control.
Perceived Consequences Resulting From Internally Attributed Situations
Several participants mentioned that their use of substances had caused them to become irritable and to feel a lack of control over their emotions or behaviors when caring for their children. For instance, Nicole recalled an incident in which she threw her baby off the bed while she was under the influence of substances. Judith also commented that she was often emotionally unstable from her use of substances, which contributed to her reduced tolerance in response to parenting difficulties. Moreover, both Nicole and Judith had previously left their children under the supervision of other family members for an extended period of time due to their substance abuse or its consequences. This separation from their children appeared to create greater difficulties in bonding with them, as illustrated by the narrative from Judith below: I either looked really mental, or was very unresponsive and still. I could suddenly go crazy. . . But at that time, I didn't stay at home and didn't like living at home. . . so I abandoned my daughter for a period of time. I did not see my daughter. . . for at least 1 or 2 years. My daughter did not like me then. (Judith)
Perceived Consequences Resulting From Externally Attributed Situations
Some participants revealed that challenges they had previously faced were often complicated by external factors, which they perceived as beyond their control. In light of such narratives, three sub-themes regarding such externally attributed situations were extracted, as highlighted below.
Issues from Being Involved with Other Substance Users
The first sub-theme is difficulty stemming from a significant other who also has a history of substance abuse. A common concern raised by the participants is that they felt tempted to resume their substance abuse when they were surrounded by others who use substances. Some also expressed worries about potential relationship issues when their significant other was a frequent user of substances. For example, Judith's network of others with substance abuse histories was enlarged when she worked in a night club, where she felt pressure to use substances every day after work with this group of peers. Similarly, Nicole described her increased substance use during the period when she was involved in an intimate relationship with a partner who used substances heavily.
Issues from Relationship Conflicts
The second sub-theme emerging from the narratives of participants centers on the difficulties caused by being in an intimate relationship involving betrayal, abandonment, and violence. The physical and emotional consequences of such relationships were found to compound the challenges these mothers already faced, making it more difficult for them to care for their children. Some participants found it especially difficult to overcome issues with a partner who was also the biological father of their child. Judith shared her experiences of violence imposed by the father of her child, and further expressed how she was both physically and emotionally hurt by the violence of her husband but would still choose to endure this because the person involved is the father of her third child, not simply a romantic partner without any connection to her child.
Issues from Disadvantaged Life Circumstances
The third sub-theme of external circumstances relates to life circumstances that the mothers perceived as putting their children in a disadvantaged position, at both the family and community levels.
Because of their own childhood experiences and current situations, many participants perceived themselves as having limited capacity to provide adequate care for their children and as lacking the family or household conditions their children would want. For instance, about half of the participants were not engaged in a stable relationship with the father of their children. The narrative of Judith revealed that each of her three children has a different biological father, which has brought about complications: I've always thought since young conceiving a baby should, of course, be with the husband. I thought this even during the times when I liked going out to play and go raving. . . My heart aches when friends ask, "Hey, why do your children have different surnames?" (Judith) Often, because of an unplanned pregnancy, the participants felt unprepared when becoming mothers and found it difficult to adjust to sudden changes in their lifestyles and living arrangements. Some mothers also felt their neighborhood would pose negative influences on their children; Nicole, for example, described how the neighborhood she grew up in, where she used substances, might pose certain social disadvantages to her child.
Discrepancies Between Global and Situational Meanings
According to the meaning-making model, appraisals of meaning are influenced by the extent to which an individual perceives certain events as violating their values, beliefs, and goals. In the model of Park (2010), after a person initially assigns meaning to a stressful event, they then determine whether there is a disagreement between the appraised meaning and their global meaning, and the extent of such discrepancy. Together, these appraisals shape how an individual perceives and responds to the event at hand.
In this study, although the participants were found to have constructed situational meanings to better make sense of their life situations as individuals with a history of substance abuse, they also continued to hold onto their original global meanings, in particular the global meanings positioning mothers as the primary caregivers of their children. Moreover, these mothers shared that their life experiences or current situations as individuals with a history of substance abuse were in opposition to their personal values, beliefs, and goals as mothers, which created a sense of discrepancy between their global and situational meanings of motherhood and thus gave rise to negative internal conflicts.
Self-Blame
In hindsight, nine participants discussed feelings of regret regarding their past and expressed that they wished they could have provided a better example for their children, especially because they are now aware their past may have consequences for their present status and family. They also described feelings of self-blame and guilt over the potential consequences their past behaviors may inflict on the present or future of their children. For example, because of the unknown impact of her past substance abuse on the future of her child, Flora described her sense of guilt: I have gone out and was involved with that type of thing, so perhaps I'm afraid this will in time affect her [child] greatly. . . If this will later affect her brain development, then I would have really harmed her. I'm really worried. (Flora) Nicole also blamed herself for having deprived her child of a complete family: My source of guilt is seeing her not being in a complete family as soon as when she was born having no father. That was already very sad. She even had so many things intubated in her at birth [when she was in hospital]. I wonder if I've harmed her. (Nicole)
Worry
Eight mothers were worried that their perceived inadequacy as a positive role model, or their perceived inability to provide adequate parenting support, would eventually lead their children to copy their behaviors or their parenting in the future.
She might think, "I see Mum was like this before, so why shouldn't I go wild too?" I feel she would learn from what I did in the past and get back at me in that same way. . . How will I teach her then, how can I ever teach her by role modelling? No matter how much I improve myself, that will not help. (Nicole) The participants were also worried the problems stemming from their family of origin would be passed down to their children: In the past, when my mother was not in a good mood, she would say, "Don't talk to me" and then immediately explode and keep scolding. . . Actually, this could cause other people to have mental illness. . . So I don't want my kids to be like this when they are older. I am worried things would turn out exactly like that, repeating again [the fate]. (Judith)
Powerlessness
As many participants unexpectedly became mothers while they were still using substances, some of them (n = 9) described that the sudden need to provide for their children amidst their own stressful life situations had created additional stressors, which complicated their ability to fulfill their new parental responsibilities. For example, Nicole emphasized the importance of taking care of her child on her own, despite having to be away from home for work to financially support her child: Actually, what I want most is to take care of my child with my own hands, to take care of my daughter, be a good mother. . . But even though I want to do this, I feel torn. Where do I make money for her?... Only one person, not two, to raise her. . . Other people have a father and mother. I only raise her by myself. (Nicole) At the family level, although Nicole knew her partner has a severe history of substance abuse and viewed him as having negative influences on her child, she hesitated to break up with him because she wanted to maintain a whole family structure and provide her child with a father figure: As we both often took drugs, do you think our tempers would not be hot? I knew we would be in a bad mood and quarreled. But there was no need to let my daughter know why we argued again and again. . . My struggle was, breaking up with him is not the way, and not breaking up with him is also not the way. I felt trapped in a dilemma. If we broke up, my daughter would lose her father. (Nicole)
Meanings Made to Reduce the Discrepancies Between Global and Situational Meanings
The meaning-making model suggests that the emotional reactions induced by discrepancies between global and situational parental meanings may prompt one to cope by making meaning to reduce such discrepancies (Park, 2010). For instance, in the present study, the distress caused by the discrepancy between the maternal concerns of mothers and the actual situation at hand appeared to have brought forth a newfound determination to make new meanings to cope with the situation. Based on the narratives of participants, these newly created meanings appear to manifest in three different categories: reconsidering the use of substances; experiencing personal growth; and reflecting on the growth as a mother.
The narratives revealed that all participants strove to fulfill their maternal roles and, ultimately, did not abandon their children. This suggests that the mothers opted to preserve their global parental meanings, particularly those relating to what they believe a "good mother" should be, and instead adjusted their interpretations of situational events. Specifically, the mothers demonstrated the use of assimilation during the meaning-making process. The narratives also showed that the mothers used different ways to experience personal growth and to enhance their personal meanings of motherhood so as to reduce the perceived conflicts between their appraised situational and global parental meanings.
Reconsidering the Use of Substances
Flora, Judith, and Nicole have all changed their substance use behaviors at their own pace and to different extents. For example, Flora had quit substance abuse as soon as her pregnancy was confirmed: The craving is not as strong as before. Before having the baby, I could go out to play. I had to go out, and would not stay at home. In the past, I really couldn't live without it [the drug]. Yes, the difference was quite huge. . . A big difference. (Flora) On the other hand, Judith and Nicole shared they sometimes found themselves using substances in a more controlled manner compared to before. For instance, Judith shared she found the strength to quit substance use after the birth of her third child: I quit because of him [husband] and this son. I haven't been taking drugs for 2 years, I haven't touched it. I'm most healthy with this son. I really won't take it [drugs]. I won't go out, won't go out late at night. I will raise him with my whole heart. (Judith) Nicole, despite repeated relapses, kept trying to abstain from substance abuse for her daughter: I still had taken drugs every day, then I had stopped, had taken it again, and had stopped again. . . Till my daughter said "Mama, " I decided to stop [because] I could not bear this. I did quit, actually for almost a month. I tried hard. I really tried very hard. . . I still keep trying to force myself to quit it completely. (Nicole)
Experiencing Personal Growth
The current findings demonstrate that the meaning-making process can lead to personal growth by heightening the participants' awareness of their life roles and of how they can become authors of their own lives. The three sub-themes related to the perceived personal growth of the mothers are presented below.
Reconsidering Their Career or Personal Development
For instance, Nicole started to work when she realized she had to earn money to provide for her daughter: I really needed the job. The job, actually was from 6 a.m. till 3 p.m. I could work in the kitchen. I'm okay for morning shifts. . . If I need to, then I'd work and won't care about anything else. I need to earn money for my daughter. . . My daughter is my biggest motivation. (Nicole) Another participant, Eleanor, also shared her return to the workforce after years of not working. Her narratives revealed it was not the need to provide for her child that motivated her to resume work, but rather, she felt she needed to secure her own livelihood. This suggests the mothers may have been concerned with their ability to take care of themselves, in addition to providing for their children. She narrated: I haven't been working for many years. I've just started to go to work again, so would need to adapt, and really would feel exhausted. But if you say I'm doing this for my daughter. . . actually, at the beginning, it was for my own livelihood. If I can't even take care of my own livelihood, when she's over 10 years old and studies in primary school, she'd need lots of personal spending. If I only start working then, it'd be too late. (Eleanor)
Learning to Be an Author of Their Own Lives
After becoming mothers, several participants shared that despite certain difficulties or perceived limitations, they could still appreciate their capacity for growth and create their own futures. For example, I like to have fun. But apart from having fun, I've still done my part. I don't need to depend on someone else. Since I can gradually support myself, I don't feel like a failure. (Eleanor) The mothers appeared to have come to realize the significance of what they chose to do with their lives and hence began to form plans for the future. In particular, some participants articulated that their motivation was not due to the need to comply with external expectations. Rather, they began to plan for the future and reflect on their roles in life: Now I've quieted down, I think maybe I don't need to be with a man. Maybe I need to think quietly by myself about what I'll be able to do for my daughter. (Nicole) As the mothers began to focus on the roles that they hold in relational contexts, their relational self also developed. Many mothers not only thought about themselves, but also took their family members into consideration when contemplating their own future.
Reflecting on the Growth as a Mother
The analysis of their narratives revealed that the participants made efforts to improve their mothering practices and be "better mothers." Three sub-themes were further identified within the theme of perceived growth as a mother.
Striving to Be Better Mothers for Their Children
Despite their negative self-perceptions, the narratives showed how these mothers continued to make efforts to become "better" mothers, even in the face of the challenges caused by their substance abuse. For example, Flora described her attempts to enhance her capacity to care for her children: There were many things I was afraid I couldn't handle, and there were many things I got to learn, but I was also afraid I was not able to do them. But now I've already changed my point of view. Actually, for many things, you've got to learn them by yourself. Just learn bit by bit and absorb it bit by bit. Because actually, at this moment, you're a blank sheet of paper, since it's your first child. (Flora) Judith, on the other hand, has taught herself to stop losing her temper as a way of controlling her child. Instead, she has begun to adjust her parenting approach and expectations: Now I would immediately try to talk to her [child] softly, "What happened? Mum promises not to scold you anymore and will give you all my love again." Then she would stop the temper tantrum immediately. So, this method seems to be working. Now, I would know to respond like this. (Judith)
Prioritizing Their Children During Life Struggles
Even though the mothers with substance abuse problems reported conflicting priorities, such as catering to their addictions, personal relationships, and tending to their child, many participants were found to prioritize the needs and wellbeing of their children before their own. This is illustrated by a description of Flora on changing her life focus and prioritizing her child over her own enjoyment: Everything changed, meaning after my child appeared, my future became different. The road will be walked because of her. I really consider her first for everything. So, she will be prioritized in everything. (Flora) Furthermore, the narratives demonstrated how some of the mothers were overwhelmed by their personal struggles and yet, still opted to prioritize their children. For example, Judith contemplated suicide to end her emotional turmoil, but was also worried her son would have to suffer as a result: I have thought of committing suicide, for a period of time. . . The baby is so cute. He is very fine and wants to grow up. I cannot take his right away from him. . . I can't take away the right of the child. I would also think, "Hey, if I died, what would happen to my son?" (Judith)
Striving to Provide a Healthier Environment and Life for Their Children
Many participants tried to improve their circumstances by providing a positive family atmosphere for their children. This included adjusting their relationships with the father figure, other members of the in-law family, and their family of origin. Flora described adjusting her perceptions of her family relationships: Sometimes, after my in-laws and I are back from work. . . They go to work, so in the afternoon, after work, they can play with the girl. . . The relationship among everyone has deepened. (Flora) Another participant, Polly, mentioned the reason why she had to express gratitude to her mother-in-law: Need to do something to express gratitude to her [mother-in-law]. If not, my child will be pathetic, because when I go to work I wouldn't know how my mother-in-law would treat him, right? Need to think for my child, not just for myself. (Polly) Overall, although the participants reported undergoing similar challenges and difficulties due to their substance use, there were variations in the way the mothers appraised the meanings of their situations. This suggests different meaning-making processes may have taken place. In particular, the discrepancies between the global parental meanings of the mothers and their situational parental meanings may have prompted them to engage in their own meaning-making attempts. The following discussion expands on the different meanings made by the participants and on how they appraised their situations in an attempt to reduce the discrepancies between their global and situational meanings.
DISCUSSION
Previous literature often portrays mothers with substance abuse problems as victims of their situations (Brownstein-Evans, 2001), with emphasis on the challenges faced by these individuals, including mental health problems (Punamäki et al., 2013), social isolation and socioeconomic disadvantages (McClelland and Newell, 2008), family dysfunction and violence (Fare et al., 2008), and other traumatic life experiences (Torchalla et al., 2014). Consistent with the literature, the findings of this study also indicate that many of the participants had encountered stressful life situations at different levels. However, they had sought to view their pregnancies or their children as the impetus for creating change in their lives, despite facing various difficulties as a result of their substance use (Radcliffe, 2011). The findings of this study indicate that, through the process of meaning-making, many participants who initially struggled with their personally and socially created meanings of parenthood were able to reconstruct such parental meanings and hence experience personal and parental growth. Personal and social meanings are intertwined, and the maternal meaning of an individual is both created on a personal level and constructed based on her surrounding social systems and the socio-cultural environment at large.
Regarding personal meanings, the findings add to the current understanding of personal growth among mothers with substance abuse problems from a meaning-making perspective. These findings appear to be consistent with other studies on personal growth among women following childbirth, which found that changes in the self-perception of the mother toward her life direction are linked to increased resilience, higher levels of maturity, greater reexamination of life philosophy, and the setting of new priorities (Taubman-Ben-Ari et al., 2012; Taubman-Ben-Ari, 2014). Expanding upon the findings of Taubman-Ben-Ari (2014) and Sawyer and Ayers (2009) regarding the outcomes of personal growth, the narratives from this study suggest that the personal growth of a mother not only has a positive impact on her psychosocial functioning, but also serves as an ongoing reflective process regarding her own maternal meanings, life choices, and other parenting challenges. Consequently, both the personal growth resulting from the meaning-making processes of a mother and the meaning-making process itself appear to have positive influences on the life and parenting outcomes of these mothers. In that sense, these findings echo the concept of posttraumatic growth developed by Tedeschi and Calhoun (1995), which holds that posttraumatic growth is both a positive outcome of effective coping and adaptation and an evolving process of meaning-making.
Furthermore, the findings indicate that the participants who reported fewer difficulties when quitting substance abuse (n = 9) were also the primary caregivers of their children, suggesting these mothers may have connected their substance abuse recovery with their roles as mothers (Gunn and Samuels, 2020). For these participants, their desire to fulfill their role as mothers and their desire to engage in substance abuse created a discrepancy between their global and situational parental meanings, which needed to be resolved. The narratives from this study also revealed that most participants opted to reduce such discrepancies through the process of assimilation rather than accommodation; that is, they reassigned situational meanings rather than altering their inherent parental values and beliefs.
The narratives showed that all participants have reconsidered their substance use behaviors (n = 16). However, some participants (n = 7) were still working on changing their substance abuse behaviors and were experiencing difficulties with assigning meaning to stressful situations at the time of data collection, even though these participants still expressed their intention to deliver on their maternal responsibilities. For this group of participants, while substance abuse had caused stressful situations, some had decided to appraise their use of substances as a quick solution to reduce their stress temporarily or to maintain an intimate relationship with their substance-using partners (Virokannas, 2011). The struggles of these mothers were found to be neither well understood nor well accepted by those around them, particularly because others viewed these problems as "self-initiated" and "socially unacceptable" (Baker and Carson, 1999; Virokannas, 2011). Thus, these unresolved substance-related and other relationship issues, compounded by the demands of motherhood (Baker and Carson, 1999), appeared to have caused additional stress to these mothers. Consequently, these mothers may continue to experience negative emotions because of such discrepancies, which may, in turn, hinder their ability to create new meanings or form a more positive maternal identity.
With regard to social meanings, the meaning-making of mothers with substance use problems is also said to relate to their daily interactions with others and to their surrounding social contexts (Klee et al., 2002; To et al., 2018). The review of Park (2010) on meaning-making processes in the context of stress and coping has argued that meaning-making should be considered in conjunction with the social constraints inhibiting meaning-making. The findings of this study demonstrate that social environments may also influence how these mothers assign meanings to maternal and child-rearing situations. For example, some participants highlighted that they were originally negatively affected by their lack of family or social support and by the stereotypes imposed on them by others, which affected their ability to make meaning out of their situations.
In line with previous research examining the family and social environments of mothers with substance abuse problems (Nair et al., 2003), the mothers in this study had to navigate different contextual difficulties such as being involved with a partner who has a substance abuse history (n = 9), dysfunctional couple relationships (n = 10), and other disadvantaged life circumstances (n = 14). All these environmental factors may adversely affect how the mothers assign meanings to their situations. For example, their concerns over family relationships and life circumstances may conflict with their parental concerns (Gruber and Taylor, 2006). Although some mothers (like Judith and Nicole) reappraised meanings to reduce the discrepancies between global and situational meanings, they kept their relationships with their partners even in the face of domestic violence and negative influences on their substance use. As such, the relational elements of meaning-making were uncovered by the narratives of participants. Specifically, the mothers often referenced their relationships with their partners or family while renegotiating their maternal meanings. As argued by the meaning-making model of Park (2010), positive relational dynamics facilitate the meaning-making process. In this light, how parental meanings can be co-constructed with family members and partners in the context of motherhood and substance abuse should be thoroughly investigated in order to facilitate the meaning-making of mothers.
Given that social, cultural, and gender stereotypes or expectations may negatively affect the self-perception of these mothers, the findings of the study suggest the significance of investigating how mothers with substance abuse problems may use meaning-making to enhance their motherhood experiences and to better fulfill their maternal roles, especially when the surrounding discourses or norms usually position these women as "bad mothers." The current study suggests these mothers have the potential to navigate the socio-cultural claims that they are "bad mothers" by appreciating their own commitment to their maternal roles (Baker and Carson, 1999). Although some mothers may feel affected by the surrounding discourses and social standards of ideal motherhood, particularly when engaging in self-evaluations, the current findings suggest that it is possible for some to recognize their resilience in mothering and the effort they put into taking care of their children (Baker and Carson, 1999). Doing so may allow these mothers to have a higher level of self-efficacy and to become more future-oriented when performing their childrearing activities. Furthermore, it is possible that their direct encounter with and care for their children are more important than cultural or societal pressures in enhancing their motivation to create meaning out of their situations and hence change their behavioral, emotional, or parenting outcomes. These findings are consistent with the findings of another qualitative study on disadvantaged Chinese mothers, which found that maternal identity was affected by the mother's personal perception of mother-child relationships and interactions. Therefore, future research or services for these mothers may seek to emphasize the mother-child relationship when helping these individuals create meaning out of their situations and hence enhance their personal growth and parenting outcomes. Attention should also be paid to the socio-cultural contexts that these mothers face, which may exert considerable influence on how they construct meanings of motherhood.
Implications for Practice and Research
The current findings can provide insights into future initiatives aimed at enhancing the well-being and livelihood of mothers involved in problematic substance use and their children (Niccols et al., 2012). Meaning-making may be particularly relevant for these mothers as the rehabilitation process is often closely tied to other areas of the life of the recoveree, such as their maternal experiences, family relationships, and self-identity (Gunn and Samuels, 2020). Thus, future interventions should place greater attention on the narratives of these mothers and their reflection on their personal global and situational parental meanings, how these individuals cope with negative emotions stemming from the discrepancies between their global and situational meanings, and ways to enhance their meaning-making processes.
For clinical assessments, social service providers can broaden their conceptualization of the needs and challenges encountered by these mothers through understanding their meaning-making process. For interventions, the knowledge produced by this study can assist social workers and counselors in strengthening the meaning-focused coping of these mothers, as well as in designing and implementing meaning-oriented parenting interventions with tailored content and goals (To et al., 2018). In addition to meaning-oriented interventions, practitioners can develop holistic and integrated programs in combination with other empirically supported parenting interventions to support these mothers and their families (Niccols et al., 2012). Furthermore, researchers and practitioners can also apply the meaning-making model to fathers with substance abuse problems to help address the unique needs of fathers.
While the current study marks a pioneering attempt to apply the meaning-making model in the context of mothers with substance use problems, several limitations should be noted. First, given the small sample size of the study, it remains unclear whether different demographic characteristics and backgrounds would influence the perceptions and experiences of mothers involved in substance abuse, especially in the context of meaning-making. Future studies involving a broader cross-section and greater diversity of participants can assist in expanding the current understanding of their needs. Second, longitudinal qualitative studies following mothers with substance abuse problems over time may be beneficial for analyzing the dynamic and evolving process of meaning-making throughout motherhood. Third, different data sources can be used to further supplement the understanding of parental meanings between mothers and fathers, and between parents and their children. Fourth, there may be self-selection bias or social desirability processes at play, given that an open recruitment process was used and that there were some common characteristics among the mothers (e.g., none of the participants had abandoned their parenting responsibilities or relinquished care of their children). Future research should consider mothers who have chosen other options, such as giving up their child for adoption. Likewise, although both field observations and in-depth interviews were conducted, it is difficult to fully examine the extent to which social desirability influenced the responses of participants. Finally, because this study recruited and interviewed participants with relatively positive parenting outcomes, more consideration should be given to the factors contributing to the emergence of negative outcomes in future research. Taken together, these limitations can provide the basis for future research directions.
Notwithstanding the limitations, this study provides a detailed description and analysis of the meaning-making process of mothers with substance abuse problems following childbirth, contributing to the theoretical and practical advancement of knowledge and services concerning the maternal experiences, meaning-making and change-making processes, and life and motherhood outcomes of this target group. Social service providers can also incorporate the findings from this study when supporting the needs of these mothers or when providing clinical assessments and interventions. Finally, the current findings can help to generate new service strategies to promote the well-being and outcomes of these mothers and their children.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because they consist of interview data and confidentiality cannot be safeguarded. The data will therefore not be made available. Requests to access the datasets should be directed to siumingto@cuhk.edu.hk.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Survey and Behavioral Research Ethics Committee of The Chinese University of Hong Kong (ref. number: 140021). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
S-mT initiated the project. He has been active in all phases of the project, including design, data collection, data analysis, and writing. Mw-Y has been active in data collection, data analysis, and writing. CL has been active in data analysis and writing. All authors contributed to the article and approved the submitted version.
FUNDING
This study was funded by the Beat Drugs Fund Association of Hong Kong SAR Government (Ref. No. 140021).
Police Reform and Community Policing in Kenya: The Bumpy Road from Policy to Practice
A reform is underway in Kenya, aimed at transforming the police organization into a people-centred police service. Among other things, this involves enhancing police-public trust and partnerships through community policing (COP). Two state-initiated COP models have been implemented: the National Police Service’s Community Policing Structure, and the Nyumba Kumi model of the President’s Office. On paper, police reform and the two COP models would appear to have the potential to improve police-public cooperation. In practice, however, implementation has proven difficult. Interviews and meetings with local community organizations, community representatives and police officers in urban and rural parts of Kenya indicate that scepticism towards the two COP models is common, as is refusal to engage in them. But why is this so? Why are these two COP models unsuccessful in enhancing police-public trust and cooperation? This article analyses how various contextual factors, such as conflicting socio-economic and political interests at the community and national levels, institutional challenges within the police, the overall role and mandate of the police in Kenya, and a top-down approach to COP, impede the intended police paradigm shift.
Introduction and Context
The police are highly dependent on collaboration and information from citizens in order to provide and maintain security within a state. 'Without at least partial collaboration of citizens, police work is impossible' ([1], p. 23). If the police do not have the support of the public, core police tasks such as investigation, evidence gathering and prevention become more difficult. However, effective cooperation between the police and the public requires that the police enjoy a certain minimum level of trust among the population [2]. In Kenya, public trust and confidence in the police are generally low [3][4][5][6], for various reasons. Firstly, in many African states, the police forces were developed in order to secure European colonial regimes by coercive means ([1], p. 21). Much has changed since independence, but traces of the colonial and post-colonial state are still evident in many African police systems today, as the nature and purpose of policing have remained the same ([7], p. 69). The Kenyan police system was established by and for the British colonialists, mainly to protect colonial interests [4,8]. After Kenyan independence in 1963, the mandate of the police remained largely unchanged: to secure the interests of those in power [4,9,10]. A second factor that contributes to the lack of trust in the police is the widespread practice of individual police officers taking advantage of their position for personal gain. Public opinion polls rank the police the most corrupt state institution in the country [9,11,12]. Moreover, Kenya ranked 144th out of 180 countries on Transparency International's Corruption Perceptions Index for 2018 [13]. Many members of the public see the police as a hazard, not a protecting force or a service to the population. In a study conducted in ten urban and rural communities in Kenya in 2011, the police were identified as a major source of insecurity [14]. Another report, based on data collected from 500 households among the urban poor in South Eastleigh in Nairobi in 2015, found that 26% of the violence experienced by these households had been carried out by the police [15]. Further, some crimes are also directly or indirectly attributed to the police [4], with police officers contributing to crime rather than to its prevention and detection [3]. The post-election violence in 2007 and 2017 also affected police-public relations. The violence in the wake of the 2017 national elections was particularly bloody; according to the official inquiry, 'the Kenyan police conducted themselves unprofessionally, used excessive force, and were woefully ineffective in protecting life and property' ([16], p. 11).
In addition to high levels of crime and violence in general, Kenya is struggling to counter violent extremism. In particular, the Somali jihadi group al-Shabaab regularly carries out terrorist attacks in Kenya, and several Kenyans have been recruited to al-Shabaab [17]. As is the case also in other countries, the Kenyan state has responded to terrorism with top-down, militarized measures. This harsh anti-terrorism line has damaged police-public relations even further [18]. The measures employed by the state have been heavily criticized: the police have been accused of extrajudicial killings, disappearances, harassment, ill treatment and unlawful detentions, as well as misusing the anti-terror law to collect bribes and loot houses [19][20][21][22][23][24].

The Kenyan police system has long been characterized by crime control through reactive policing practices, widely condemned by Kenyan society [4,25]. The reform currently underway seeks to address this by improving police-public relations through initiatives such as community policing (COP), a concept which has become popular among donors, governments, police and policymakers worldwide. 'So popular is the concept with politicians, city managers, and the general public, that few police chiefs want to be caught without some program they can call community policing' ( [26], p. 27). However, COP as such is a vague concept, interpreted and practised differently, with no consensus on what it entails [27]. Despite the many COP projects initiated around the world, there is no agreed definition of what community policing is and what it is not [28]. The concept is 'bedevilled by definitional problems' ( [29], p. 167) and 'can be transformed chameleon-like into whatever its practitioners want it to be' ( [30], p. 71). On the one hand, it can be used as a tool for the state to gather intelligence to protect itself from its own population and for elites to maintain their power. On the other hand, COP can serve as a way for the police to improve relations with the public so as to provide better services, security and safety for and with the populace. The latter understanding of COP is dominant in the academic literature today. In practice, however, approaches and implementation vary widely. Some COP models aim for better effectiveness in terms of quality, responsiveness and accountability for police services; others focus on greater engagement with local communities for community-based solutions to local challenges ( [27], p. 4). Each model comes with its own set of activities, goals and expectations [27], with differing content, impact and implementation challenges. Although COP is primarily a Western concept, it is increasingly implemented in countries of the Global South, often encouraged by international actors mostly from the Global North [29]. Despite the positive perceptions of COP, there is little evidence of its actual impact and effectiveness, especially as regards its export to non-Western societies [29]. This is reflected in the literature: empirically based studies of COP approaches and police-public relations tend to focus on Western contexts [27,[31][32][33][34]. There is a clear need for better understanding of the practices and meanings of COP across a broader range of contexts.
Policing in Africa is still understudied and 'ethnographic work on public police bureaucracies in Africa is just beginning' ( [35], p. 1). This article seeks to contribute to the literature on policing in Africa as well as the evolving debates on the role of COP in enhancing police-public relations. In Kenya, two state-initiated COP initiatives for improving police-public relations have been developed: the National Police Service's 'Community Policing Structure' and the 'Nyumba Kumi' model led by the President's Office. However, moving from policy to practice has proven difficult. Why have these two COP models proven unsuccessful? To answer this, empirical material was gathered during field trips to urban areas in Nairobi and Mombasa, and rural areas in Kisumu and Siaya counties, between January 2016 and May 2018. These areas were selected in order to capture experiences across urban and rural settings as well as regions (Eastern, Mid- and Western Kenya) with different local contexts. The findings cannot claim to be representative of the implementation of the two COP models throughout the whole country. Kenya is a vast and diverse country, each county and sub-county having its own characteristics in terms of demographics, culture, livelihood, crime and political alliances. However, the study has identified some major structural obstacles to proper implementation, closely linked to overall national and programme-related challenges not solely dependent on local conditions. A total of 35 individual and group interviews were conducted with representatives from community-based organizations (CBOs), non-governmental organizations (NGOs) and community members in Mombasa town and in the Majengo and Eastleigh areas of Nairobi. These included representatives from women's, youth, human rights and religious organizations, and members of the two COP forums. To gain insights and experiences from rural areas, observation was conducted in COP sensitization meetings held by the Directorate of Community Policing, Gender and Child Protection for local communities and police in seven villages in Kisumu and Siaya counties. In order to capture police perspectives and experiences, individual and group interviews were conducted with sixteen police officers stationed in Mombasa, Nairobi, Kisumu, Siaya, Garissa and Nakuru counties. Some of the officers were interviewed individually; others took part in a group interview held in connection with a police training session in Nairobi. Due to the sensitivity of the topic and the possible implications of interviewee claims and statements in the current security context, individuals and organizations who contributed to the study have been anonymized.
Police Reform and Community Policing in Kenya
Despite some positive developments within the police system over the last decades, the need to establish better police-public relations and trust in Kenya remains dire. Work on the current police reform began in the early 2000s, and has now become the country's largest and most complex public-sector restructuring attempt since independence from Britain in 1963 ( [6], p. 2). The post-election violence in 2007 pushed the police reform higher up on the agenda, and the National Task Force on Police Reforms was established. The Task Force produced a report, 'the Ransley Report', with a roadmap for the reform [36,37]. The following years brought about several structural changes within the police. In connection with the redrafting of the 2010 Constitution, the Kenya Police Service and the Administration Police Service were joined together under one umbrella; the National Police Service (NPS). The NPS Commission and other oversight bodies such as the Internal Affairs Unit (IAU) and the Independent Policing Oversight Authority (IPOA) were also established. In 2015, when Joseph Boinett was sworn in as new Inspector General (IG) of the NPS, he stated that his goal was a people-centred police [38], adding that the aim of the reformation process was to 'transform the Kenya police and administration police into efficient effective and professional and accountable security agencies that Kenyans can trust for their safety and security' [39]. The establishment of the NPS represented a strategic shift towards a more service-oriented institution; among its main functions is to 'foster and promote relationships with the broader society' [40]. Today the mission of the NPS is, according to its website, 'To provide professional and people centred police service through community partnership and upholding rule of law for a safe and secure society' [41]. Clearly, enhancing police-public relationships and trust is central to Kenya's current police reform, and community policing is to play a vital role.
The NPS defines COP as follows: Community Policing is the approach to policing that recognizes voluntary participation of the local community in the maintenance of peace and which acknowledges that the police need to be responsive to the communities and their needs, its key element being joint problem identification and problem-solving, while respecting the different responsibilities the police and the public have in the field of crime prevention and maintaining order ( [25], p. 1).
By using this definition, NPS has chosen to focus especially on the 'soft' aspects of policing, such as police responses to community needs, and collaborative problem-identification and problem-solving. It also highlights that both the public and the police play a role in crime prevention and maintaining peace and order-in line with the vision of moving from a police-centric to a more people-centric police.
COP is not a new concept in Kenya. Several state and non-state actors have introduced various types of COP programmes aimed at bringing the police and the populace closer together [4,42]. Today, the most widely known COP initiatives in Kenya are the two above-mentioned state-initiated COP models: the NPS Community Policing Structure, and the Nyumba Kumi model led by the President's Office. COP was originally launched by former President Mwai Kibaki in 2005, but the concept did not catch on at the time [25]. In connection with the redrafting of the Constitution in 2010 and the shift from police force to police service, the concept of COP was revived. Today, the function and objectives of COP are emphasized and grounded in central documents such as the National Police Service Act [43] and the National Police Service Standing Orders [44].
Community Policing Committees
The NPS COP model is structured through Community Policing Committees (CPC), from sub-locations to county level under the County Policing Authority (CPA). These committees consist of civilians and police who are to meet and report up to the next level/committee in the chain [25]. The chairperson of the CPCs is a civilian; the vice-chair is a member of the police [25]. According to the NPS COP guidelines, membership in CPCs is to be on a voluntary basis, with no reimbursement for participation. Civilian members can participate in a CPC for two years, with one renewal possible [25]. The idea behind the committees is for individuals representing different segments of the community (youth/children, women/men, schools, business community, religious groups, etc.) and the police to meet regularly, in order to identify and solve problems at the community level and coordinate activities, programmes and trainings to promote security. The main pillars of the NPS COP are problem solving, partnership and police transformation [25]. Further, 'problem solving' entails a 'joint process of addressing recurring security problems within a community'; 'partnership' is defined as a collaborative effort with the primary objective of determining security needs and policing priorities; and 'police transformation' refers to 'a fundamental shift from police-centric to people-centric policing' ( [25], pp. 10-11).
Nyumba Kumi
After the al-Shabaab attack on Westgate shopping mall in Nairobi in 2013, President Uhuru Kenyatta introduced a COP model called Nyumba Kumi (NK). Similar models are found in several other African countries, including Tanzania, Rwanda, Uganda and the Democratic Republic of Congo, with varying results [45][46][47][48]. In Kenya, the NK model was introduced as a strategy for fighting terrorism and insecurity by improving communication and cooperation between communities and their police [49]. Nyumba Kumi translates as 'ten households'; however, in Kenya it is not limited to a fixed number of households, but represents a cluster of people and organizations with shared aspirations and locality [50]. According to the guidelines for implementation: 'Nyumba Kumi is a strategy of anchoring community policing at the household level or any other generic cluster. These households can be in a residential court, in an estate, a block of houses, a manyatta, a street, a market centre, a gated community, a village or a bulla. The concept is aimed at bringing Kenyans together in clusters defined by physical locations, felt needs and pursuit of common ideals: a safe, sustainable and prosperous neighbourhood' [50]. The aim is for local residents to get to know each other better and to have a structure for communication among themselves as well as with the local police. Local community chiefs (civilians) lead the clusters and report to the local police on matters of community security; the police are not formally to be members of these clusters. According to the NK implementation guidelines, members are to be democratically elected for a two-year period and are eligible for re-election [50]. As with the CPC, participation in an NK cluster is to be voluntary and the members are not reimbursed. NKs are not formally part of the NPS structure, but lie under the auspices of the President's Office.
The existence of two COP models implemented by different actors at the same time has created some confusion and tension as the two models are partly conflicting [51]. Efforts are underway to merge the two concepts and place NK clusters under sub-locations in the NPS COP structure, to give COP a grounding at the household level ( [25], p. 32).
Local Experiences and Perceptions of NK and CPCs
Kenyans have long been calling for greater inclusion of local communities in security strategies and improved cooperation between police and communities [24,52]. As stated by one NGO representative interviewed: 'The police need to work with the communities to be effective in security. Security must be owned and driven by the community. Without the community it will never succeed.' He further emphasized the need for an approach where various segments of the police and community members can come together to discuss the challenges facing communities, and together find out how to tackle them. He believed that such a problem-solving approach could be one way of bringing the police and the community closer together, or at least a way for the local police to get more accurate information about what was going on in the area. Further, it would make the police more accountable to the communities. Another NGO representative noted that many people were willing to cooperate with the state in, for example, anti-terrorism strategies, but 'what we are lacking is the how and the style'.
These views are in line with the stated aim of the CPCs and NK policies: to create a platform where communities and police work together to improve local security. However, many Kenyans are sceptical towards the two initiatives. A local resident in an urban area said he had been invited to become an NK member but had refused; he added that he did not believe in the NPS CPC model either. One member of a team that organized community meetings to raise awareness and conduct trainings on the NK structure found the job difficult: 'People refused to come to the meetings. Even the (already established) Nyumba Kumi members didn't turn up'. Why is it that, despite voicing the need for community-friendly policing strategies, many citizens are sceptical and reluctant to engage in the two state-initiated COP models?
One explanation may be that the two COP models have not been properly grounded locally. There has been little training or follow-up with the communities, their leaders and the local police on the aim, function, objective and structure of CPCs and NK clusters. A study by the Kenyan NGO MUHURI on youth radicalization in coastal regions found that NK was well received in some coastal communities, but 'some respondents claimed that the initiative is not well understood by the community. For it to work there must be proper sensitization about it' ( [24], p. 22). And a study of NK in Kayole in Nairobi concluded: 'Due to lack of proper structures and guidelines on the Nyumba Kumi, there is a lot of confusion regarding the membership, roles and responsibility of the community.' Several interviews and my own observation of COP sensitization meetings showed that CPCs and NK clusters had been established without proper introduction to the set-up, the aims and purpose of COP. There was misunderstanding and confusion, among civilians as well as police, concerning the criteria and process of selecting members, what membership entails, how to organize the COP structures, and the main principles behind the CPC and NK. The guidelines and written policies on CPC and the NK model were either not made available or were not followed at the local level. In another study of NK in Nakuru county, the majority of the respondents answered that they did not have the information on the policy meant to guide the NK model [53].
In response to this problem, the Directorate of Community Policing, Gender and Child Protection began holding sensitization meetings and trainings on the CPC structure with local communities and their police. The Police Reform Working Group at the Independent Medico Legal Unit (IMLU), a Kenyan NGO working to deepen people's understanding of police reform, also conducts sensitization trainings on the CPC structure in some areas. One interviewee had conducted trainings on the NK structure in some areas in Nairobi. However, these efforts are small-scale and backed by few resources. Moreover, there is little or no monitoring or evaluation of the implementation of the two structures. In addition, local stakeholders and communities seemed to have had little influence on the development of the structures. As a result, many people see COP as a set of top-down models imposed on them by a state apparatus they do not always trust.
Nyumba Kumi came about two years ago. They created a new structure which was very foreign to people. (...) It should be a community-driven initiative, owned by the communities. It also needs to be based on the need of the community, on community demand [54].
Lack of proper implementation, misunderstandings and confusion may lead to unfortunate practices and misuse of the COP models. In 2017, the NPS published the Community Policing Information Booklet, aimed at informing the public and police officers about the CPC structure and clearing up some of the misunderstandings as to the purpose and aim of COP. The booklet emphasizes that COP should not be identified with vigilantism, coercion or extortion, as a replacement for village elders, or as spy rings, a parallel security system, political forums, employment or other activities that contravene the law ( [25], pp. 14-16)-all of these being problems that had arisen in several places.
A common view in the areas visited for this study, in relation to the CPCs as well as NK, is that COP serves primarily as a method of surveillance and intelligence gathering for the state and the police. Many people consider those actively involved in the COP models to be police informants, a role with decidedly negative connotations in Kenya. A CBO representative put it this way: 'When you are a Nyumba Kumi member you are a spy. They (the community) call it a spy for the government'. An NK member found that people in her community were sceptical of her participation in the cluster: 'The community sees you as a snitch (...). People think you are a police informant'. A police officer further explained: 'Here, being an informant is despised. (...) People working with the police are seen as un-socialized'. One NK member explained that if people see you reporting to the police, 'it spoils the relationship between you and your community'. That COP initiatives are seen as systems for the state and the police to conduct surveillance is a clear indication of the widespread mistrust and gap between the state and the public. Many people have negative experiences from previous encounters with the state, with the police in particular. Here it is important to note that the end of the reporting line for NK is the President's Office; in the NPS CPC structure, it is the Police IG. Indeed, in practice the two COP initiatives may serve as a means of gathering intelligence for the police and the state; moreover, the focus on communities and police identifying and solving problems together appears to receive less emphasis in implementation. The NK trainer interviewed acknowledged that cluster members do not necessarily know whether and what actions are taken after the meetings: 'The problem with Nyumba Kumi is that you don't know if something will be done with issues, whether there is a follow-up. The purpose is (only) to notify'. How the police understand, treat and use the information emerging from the COP forums is then of great importance for police-public trust. At the community level, many people are generally unwilling to provide information to the police, fearing that police officers may violate the principle of confidentiality. Several interviewees mentioned instances where police officers had leaked information; moreover, this issue was taken up by community members in almost all of the seven COP sensitization meetings observed in connection with this study. For example, in return for bribes, police officers have revealed the identity of individuals who reported cases, putting these persons at risk of retaliation and reprisals from the perpetrators or others with a stake in the case. One NK member said there had been incidents where 'people who share information are at risk of being targeted. There is no confidentiality. The people who make a complaint, for example regarding criminals, might end up having descriptions of themselves given out and being targeted.'
A survey conducted in Nairobi and Kisumu counties by Transparency International Kenya in 2015-2016 found that nine out of ten respondents answered that they did not proactively share information with the police about an issue or a concern in their community ( [52], p. 45). Group discussions conducted in the same study revealed that respondents were reluctant to share vital information with the police due to instances where information had been relayed back to the culprits, to the detriment of the informant [52]. When individual police officers violate the principle of confidentiality, that naturally has an impact on police-public trust, in turn negatively affecting the implementation of COP efforts. Further, filing complaints against the police, or advocating for police accountability in COP forums, may prove equally risky.
You are surrounded by people who can kill you the next minute. (...) You aren't allowed to say anything against the police. They are 'little gods'. So that is not an area where you can complain. If you do complain, they circulate your details, and things get risky for you [55].
Low confidentiality means a weak foundation for building partnerships. Due to negative views about individuals engaging in COP forums, local COP members risk accusations and retaliation from their own communities. Confidentiality breaches by police officers, as well as by civilian COP members, make the security situation even more difficult for COP members. An NK member explained that it is hard to recruit members to the clusters, because volunteers cannot be expected to put their lives at risk. At a sensitization meeting in one village, insecurity for COP members was identified as a main problem, and the CPC members themselves stressed the need for a stronger security apparatus and protective measures around them. However, if the COP forums were less focused on intelligence gathering and providing information on specific criminal cases, and more steered towards collectively mapping and creating local solutions to overall insecurities in the communities, being part of a COP structure would not entail the same degree of risk.
Another factor identified during the fieldwork was that civilians were misusing their COP membership to increase or maintain their own power. As mentioned, there was misunderstanding and confusion at the local level concerning structural and organizational aspects of the implementation of the two models. This created opportunities for misuse, not only for the police, but also for civilian members as well as community leaders. Nepotism and tribalism are common in many societal, economic and political processes in Kenya, and came into play in the selection of COP forum members. According to NK policies, cluster members are to be democratically elected in biannual elections ( [50], pp. 10-11). And the CPC guidelines specifically state that members are to represent different segments and groups in the community. However, in several case areas, the local COP members were appointed by the community chief, rather than the community electing their own representatives. A community member raised this issue in a sensitization meeting, saying that CPC members had been 'selected through nepotism'. Several interviewees in another area claimed that the local chief had appointed the local NK members, all of whom were people in his inner circle. This supports the findings from a study in Nakuru, where more than half of the respondents answered that democracy in electing NK-initiative leaders was low or very low [53].
Moreover, a former NK member interviewed for this study said that the chief used the NK structure to promote his own agenda: 'issues get pressure whenever the chief is willing to give them pressure'. He described the NK meetings as merely playing to the gallery; further, that the chief would bribe members to report that the cluster was working well and that it had reduced the level of crime in that area.
These are examples of how political or personal agendas overshadow the pillars of community partnerships and a collective approach to problem identification and solving. As many community members are in no position to hold local powerholders and leaders of COP forums accountable, and there are no proper oversight mechanisms for the two COP structures, local powerholders can appoint members, organize and run the COP forums according to their own personal agendas, which are not necessarily in the interest of the community as a whole. Another study of NK in Kayole, in Nairobi county, also showed that NK was seen as a way for village elders and leaders to promote their own interests rather than those of the community, and that the lack of effective accountability channels made it easier for people to take advantage of the system [56]. In several areas, there were also reports of civilian CPC and NK members misusing the COP models to extort, harass and 'police' other local residents, and even to legitimize 'arrests'. When this was mentioned in COP sensitization meetings observed in connection with this study, the trainer from the Directorate of Community Policing explained to the COP members and the police that only the police have the authority to make arrests, and discouraged civilian members from engaging in such practices, which could also entail considerable risks to personal security. In the early phases of CPC and NK implementation, civilian members were provided with COP identity cards for documentation. However, these were misused to legitimize patrols, arrests and harassment of co-citizens, and therefore such ID cards are no longer issued.
In-house Challenges in NPS
In addition to breaches of confidentiality, interviewees identified several other in-house problems that had a negative impact on police-public relations. Factors such as working conditions, resources and the training of Kenyan police officers hinder the implementation of the police reform and the two COP models. The public often evaluate the police forces in terms of their response time and effectiveness, and this influences both relations and trust ( [33], p. 66). In Kenya, as elsewhere, it is hard for the police to meet all the expectations of the public. For instance, in one rural village, local police officers explained that they did not have a functional vehicle at their disposal. They were often dependent on the public to pay the transport fares involved in conducting their work. In another village, the local police officers complained that they had to pay for transport out of their own pockets. Other police stations were unable to pay for fuel [52]. In Kenya, many police stations, police posts and patrol bases lack the equipment and resources needed to be able to fulfil their duties properly.
The working and living conditions of Kenyan police officers are generally harsh. Although the state provides housing, availability is a challenge. Often, several officers and their families must live together in cramped spaces in low-standard housing [52]. Many police stations are poorly equipped and in bad shape. ICT usage is low; the station events log for registration of cases is a physical book, not yet a digitalized system. Effectiveness is also affected by the low density of police officers, who work long hours [52]. Recent years have seen an upscaling in the recruitment of new police officers in Kenya in order to meet the UN police-civilian ratio of 1:450 [57]; however, the actual number of officers working at local police stations varies from area to area. During one of the COP sensitization meetings, local police officers claimed that there were only 26 police officers in total, from both the Kenya Police Service and the Administration Police Service, serving a population of 500,000. In 2012, the Usalama Reforms Forum published the report 'Communities and their Police Stations', which studied 21 police stations across Kenya. The report revealed large variation in standards in terms of facilities, equipment, organizational structure and staff among the police stations, police posts and patrol bases. It recommended that the government review all police stations, in order to determine whether they were fit for their purpose, and to ensure compliance with the minimum standards required to perform their core functions [58].
Lack of proper training of police officers is another issue. In 2014, basic police training in Kenya was cut from 15 to 9 months, and the concept of community policing has not been properly integrated into this training. A central element in mainstreaming police reforms involves reviewing and updating the police curriculum and training. An NGO representative spoke in favour of police training in 'how to connect with communities and how to build trust with the communities'. Further, a police officer emphasised that the basic training curriculum is outdated: 'The training is still based on the colonial model of training. The training was and still is to protect the ruling class'. Bringing the police training and curriculum in line with the police reform is an essential step towards ensuring a better understanding of what people-centred policing entails and how policies and models, including community policing, can be translated into everyday police practices and methods.
However, the gap between what is taught at police colleges, on the one hand, and, on the other hand, existing police culture and practices on the ground may also prove a challenge. Longstanding internal 'unofficial rules' that police officers follow in the field are not necessarily in line with the police training. Training on good practices may have little effect in changing police behaviour unless followed up by the leadership in police districts, or if it conflicts with other incentives such as possibilities for promotions [59]. Another issue is the gap between senior and junior officers. Some police officers interviewed explained that the younger officers often have a higher level of general education than their seniors, and often have different views on policing methods. As police organizations are highly hierarchical and rank-based, it is difficult for junior and newly recruited personnel to bring forward new ideas, unless supported by their leaders. As two young police officers explained: 'We (junior officers) have a very different view. It was difficult for us to take education on the side. Our senior leaders said either you work or you take your education; we would have to choose. They don't understand the benefits from it (education). I think in 5 to 10 years, things will be different. The old generation is still there today, we'll have to wait until they are gone.'
However, the junior-senior gap may take other forms as well. In a sensitization meeting in one village, a community representative spoke of how junior police officers exhibited violent behaviour towards community members. He suggested that junior officers should patrol along with senior officers who are more experienced in relating to and communicating with the community, to learn from them.
Closeness versus Distance: A Balancing Act
The two COP models are, at least on paper, aimed at bringing the police and the public closer together. Achieving the right degree of closeness versus distance to the population is a balancing act: 'There are endless stories about the tension between this need for a close relationships to the societal space which is the object of policing-and the need for distance that shall ensure 'discipline', the ultimate orientation of police work towards the supposed ends of the state' ([1], p. 23). During the colonial period in Kenya, the British recruited to the police forces individuals from communities seen as less hostile towards the regime; this recruitment was highly ethnicized [4]. Moreover, '(...) police officers were not allowed to serve in their home areas. This only served to cement the view of many people that the police was an alien unit' ( [4], p. 591). There are still traces of this system in the Kenyan police today. Police officers are deployed on a rotation system, preferably changing duty station every five years, one reason being to prevent them from becoming too embedded in local communities. The rotation system can be seen as a way of trying to ensure impartiality and neutrality among deployed police officers. However, tribalism is a central element in the social and political construction of Kenyan communities, and tribe and politics are closely interconnected. In the rural villages visited in Kisumu and Siaya counties, most police officers did not belong to the same tribe as the majority of the local population. In such cases, tribalism and political tensions between tribes can fuel community reluctance to trust the police and cooperate with them. In that sense, the police rotation deployment system may stand in the way of COP initiatives encouraging partnerships and collaboration. The rotation system also represents an opportunity for political leaders to govern and control areas of opposing tribes, political parties, views and alliances, and to steer elections [60]. In addition, not all police officers are necessarily interested in getting to know a new area where they will spend only a few years. A senior police officer in one rural village also pointed out that it takes time to get to understand a new local context.
Lastly, low police salaries impede the building of police-public trust. In fact, the Ransley Report recommended higher salaries and a greater police management focus on salaries [37]. Many police officers must depend on bribes to provide for their families and make ends meet. Low pay increases police vulnerability to corruption and makes officers susceptible to manipulation by more powerful segments of society. One police interviewee conducting trainings on COP found it difficult to encourage police officers to engage with local communities: 'A challenge is that the police don't want to connect with the communities, because then it is harder to get bribes. It is harder to ask for money, bribes and harass someone you know and have a relationship with.'
Hence, local police officers see models aimed at building stronger police-public relationships as economically counter-productive. Mark Leting touches on this aspect in his study of the NK in Kenya: 'Police who are poorly paid and have low morale as a result of serious management problems and corruption are not likely to be motivated to cooperate with the community and there may be a general lack of respect for community policing strategy' ( [61], p. 32). Malpractice in the police in Kenya, such as corruption, lack of accountability, poor leadership and use of violent and undemocratic methods, collectively constitutes an institutional problem. Deniz Kocak argues that basic bureaucratic police professionalism and capacities are necessary conditions to establish community policing ( [62], p. 2). Maurice Punch applies the metaphors of 'rotten apples', 'rotten barrels' and 'rotten orchards' to police institutions [63], noting that the police themselves often employ the 'rotten apple' metaphor for a 'deviant cop who slips into bad ways and contaminates the other essentially good officers-which is an individualistic, human failure model of deviance' ( [63], p. 173). In a 'rotten barrel', the deviance spreads to a certain unit or segment of the police. A solution to rotten apples and barrels is to remove and replace them. With 'rotten orchards', however, malpractice has become systemic, perhaps even encouraged or protected by certain elements in the system: 'at certain times and in certain contexts, police deviance becomes virtually institutionalized, is affected by and affects other parts of the criminal justice system, may be related to wider influences in the broader environment and leads to what I call "system failure"' ( [63], p. 172). Plagued by endemic corruption, structural and management challenges, the police institution in Kenya may be characterized as such a 'system failure'. That does not mean that all police officers are 'bad'; on the contrary, some civilian interviewees said that they do trust and cooperate with certain local police officers. The challenge lies rather in the system, which facilitates, encourages, protects and rewards certain deviant behaviours and practices.
This problem concerns not only the police, but the justice system as a whole. Several interviewees, both civilian and police, emphasized that the justice and court system is seen as weak and corrupt. In turn, low confidence in the justice system and rule of law contributes to reluctance to report cases to the police or share information with them in the first place. As a result, the police have a hard time performing their duties, for example getting witnesses to testify in court. One CBO representative explained that sometimes local police officers even deem the justice system so corrupt and insufficient that they decide to take justice into their own hands. Punch argues that to be able to explain how system failures occur and are maintained, a range of mechanisms within the organization as well as its wider environment must be taken into account.
In an organization designed to uphold the law, the law can be broken because control, supervision, checks and balances, monitoring, audits and leadership may all fail to function adequately while cultural and institutional pressures promote and support deviance (...). This requires our explanations to be posited on an analysis that ties individual and group behavior to complex, causal strands of formal and informal mechanisms of social interaction within the organization and with the external environment ( [63], p. 174).
As noted, the challenges of implementing COP are closely connected to the wider socio-economic and political context. For instance, civilians as well as police officers may use the COP models for their own or tribal gain, and political tensions stand in the way. Tribalism, nepotism and corruption are not specific to COP initiatives, but embedded in the overall social, political and economic structures at the local and national levels. The police represent only one piece in a larger political game for power and resources, so police reform or community policing models alone cannot be expected to challenge or change such dynamics and systems. Reform must be a part of a larger state-building process involving wider public and political reforms and developments.
Whose Police?
Finally, perhaps the most significant challenge to proper implementation of the two COP models concerns the mandate of the police in Kenya and the role they are set to fulfil in society. The main police 'clientele' steers how policing is carried out. As Jackson and colleagues emphasize, the degree of trust in the police hinges not only on the effectiveness and competence of the police, but also involves aspects such as police commitment, the extent to which the police care about the people they serve, their capacity to understand the needs of the community, and their willingness to address these needs ( [33], p. 66). Research on procedural justice shows that how citizens assess the fairness of the police and justice system is highly dependent on how the police interact with the population and how police officers execute their power [64]. In turn, the way police officers behave and view the public can be understood in light of two different 'police paradigms' of the function and role of the police, especially to whom the police are held accountable [65]. On the one hand, the main mandate of the police may be to protect the state: as a result, members of the public are often excluded from partnerships [65]. This may also be an ideological cover for more repressive functions of social control [66]. In the second paradigm, the main task of the police is defined as being to protect and serve the public ( [65], pp. 78-79). In practice, there is not necessarily a strict division between the two paradigms ( [65], p. 79) and police systems encompass both functions to varying degrees [2,67]. This difference between the two functions is reflected in how policing is carried out ( [65], p. 79). To return to the case of Kenya, one police officer interviewed said that some of his police colleagues 'have a superiority complex. They do not engage or mingle with civilians. The communities feel they are looked down upon.' One reason for this behaviour, he argued, was the focus on 'tackling the enemy' which is taught during police training. Some community members interviewed for this study confirmed this view: they feel that the police regard them as enemies and treat them as such. Such police behaviours and the role the police are trained to perform indicate a heavy state-protection mandate, with less emphasis on providing a service to the public. Grasping the role and function of the police today also requires understanding the social and political responsibilities of the police in their various historical manifestations ( [65], p. 78). As noted, the police institution in Kenya has its origins in the colonial period, when the police were to protect the powerholders from rebellion against the regime. The current police reform, endorsing a people-centred police, represents a push for a paradigm shift from this traditional system. Such transitions require political will.
Alice Hills argues that the police in many African countries are in fact accountable solely to their presidents ( [68], p. 403); further, that police commissioners in these countries are the president's point of access to the police institution ( [68], p. 407). Citing examples from several presidents in Africa, including Kenya's former President Kibaki, she holds that presidents do not want a police force answerable to parliamentary committees or judicial enquiries: they 'value the police as a tool for enforcing political decisions, maintaining order, regulating activities and regime representation' ( [68], p. 407). In such contexts, Hills argues, security sector reform is unrealistic, as state leaders have little to gain from democratizing the police: on the contrary, that would reduce their personal power ( [68], p. 406). Hence, proper implementation of police reforms and changes in police practices depends on whether or not the president or other powerholders stand to benefit.
The stated intention behind the Kenyan police reform and the two COP models, to transform the police institution into a people-centred police with greater public accountability, may not serve the interests of the president or other powerholders. On the other hand, closer police-public coordination for gathering information may yield valuable intelligence that can protect the regime. This is in line with what many interviewees for this study have said: that COP is just another platform for powerholders to collect intelligence and conduct surveillance of the population, in order to maintain or increase their power. For many Kenyans, the main pillars of the NPS COP (problem-solving, partnership and police transformation) are simply words on paper. As long as the main function of the police is to protect the regime from its own citizenry, political and economic interests will continue to steer the police reform and the two COP models. As Tyler has put it, 'Consequently, legitimacy is the most promising framework for discussing changing the goals of policing and moving from a police force model to a police service model' ( [69], p. 29). As long as tribal politics (including the president's personal power) and the police are two sides of the same coin, transforming the Kenyan police into a legitimate, democratic and people-centred police service seems unlikely. As Kocak points out, policing cannot be separated from its political context, and in order to promote and establish a community-oriented approach to policing that is in line with good governance, a transformative context of democratization is necessary ( [62], p. 35). He adds that the necessary conditions for establishing community policing are 'a police organization with, at least, basic professional bureaucratic capacities, a genuine commitment and political will on behalf of local authorities to promote and push for its implementation, and a concept or approach to community policing that actually matters to the respective local context and its realities' ( [62], p. 35). These conditions are not yet in place in Kenya.
Conclusions
On paper, the police reform underway in Kenya represents a paradigm shift towards a people-centred police. Enhancing police-public relations and trust is at the core of the reform. Two community policing (COP) models, the National Police Service's Community Policing Structure and the President's Office's Nyumba Kumi (NK) model, have been developed in order to bridge the gap between the police and local communities. The NK model involves anchoring community policing at the household level in clusters, in order to improve relations and communications among community members themselves as well as with the police. The NPS model is structured through Community Policing Committees (CPC), consisting of members representing different segments of the community and local police. In their meetings, CPC members are, collectively, to identify and solve problems at the community level, and coordinate activities, programmes and trainings to promote security.
In practice, however, proper implementation of NK and the CPCs has proven difficult. Observations and interviews in local communities in urban and rural parts in Western, Mid-and Eastern Kenya between 2016 and 2018, with community organizations, community representatives and police officers, show considerable scepticism towards the two COP models. This article has identified and analysed some main reasons why so many people are reluctant to engage in COP, in order to explain the failure of the two COP initiatives to improve police-public trust and cooperation.
Firstly, in the communities visited, the two initiatives had not been properly anchored at the local level among community members, their leaders and the police. As a result, confusion and misunderstandings arose as to the set-up, goal and purpose of the two COP structures. Moreover, local communities saw the models as top-down approaches imposed on them from above, not as initiatives driven and organized by the communities themselves, which would have given the communities greater agency and ownership in the processes. Owing to uncertainties regarding the policies and guidelines for implementation, as well as the lack of oversight and accountability mechanisms, civilian and police COP members have in some cases misused the two structures for personal gain and/or to accumulate and secure power.
Secondly, police-public trust and implementation of the COP models have been impeded by in-house problems within the police system, such as a lack of human and economic resources, poor working conditions, and weaknesses in training, management and leadership at various levels in the police. Moreover, the police reform, including people-centred policing and COP policies, has neither been translated into everyday police practices on the ground nor properly integrated into the police training and curriculum.
That is connected to the third point: the role and function of the police, and the question of who the police are to serve. Ever since Kenya's colonial period, the mandate of the police has predominantly been to serve and protect the powerholders. Also today, politics and the police are two sides of the same coin; the president and other political figures have great personal power, also over the police. Powerholders have little to gain if the police forces are transformed into a truly people-centred, democratic and accountable system. In fact, the two COP models represent a way for powerholders to monitor and maintain surveillance of local communities as a strategy for securing the regime and their own personal power.
Interviewees reiterated the common perception in the areas visited: COP merely represents a method for the state to 'spy' on its population. Without political will and power to ensure proper implementation, the policies will remain merely words on paper. Moreover, societal dynamics such as tribalism, nepotism and corruption lie at the root of many political, economic and social processes in Kenya, influencing rule of law and the justice system as a whole. Police reform and community policing models alone can hardly be expected to challenge this. What is needed is a wider state-building process with deep-going public and political reforms, developments and democratization. As of now, the basic conditions for implementation of the two COP models for the benefit of the general population in Kenya are simply not there. People-centric policing and greater police-public trust remain a long-term vision rather than a realistic goal for the coming years. For the police in Kenya to become the 'people's police', the road is indeed long and bumpy, with many hazards and detours ahead.
Isolation, Characterization and Immunomodulatory Activity Evaluation of Chrysolaminarin from the Filamentous Microalga Tribonema aequale
The aim of this study is to investigate the differences in the accumulation capacity of chrysolaminarin among six Tribonema species and to isolate this polysaccharide for immunomodulatory activity evaluation. The results showed that T. aequale was the most productive strain, with the highest chrysolaminarin content and productivity of 17.20% of dry weight and 50.91 mg/L/d, respectively. Chrysolaminarin was then extracted and isolated from this alga; its monosaccharide composition was dominated by glucose (61.39%), linked by β-D-(1→3) (main chain) and β-D-(1→6) (branch chain) glycosidic bonds, with a molecular weight of less than 6 kDa. In vitro immunomodulatory assays showed that it could activate RAW264.7 cells at a certain concentration (1000 μg/mL), as evidenced by increased phagocytic activity and upregulated mRNA expression levels of IL-1β, IL-6, TNF-α and Nos2. Moreover, Western blotting revealed that this polysaccharide stimulated the phosphorylation of p65, p38 and JNK in the NF-κB and MAPK signaling pathways. Overall, these findings provide a reference for the further development and utilization of algae-based chrysolaminarin, while also offering an in-depth understanding of its immunoregulatory mechanism.
Introduction
Microalgae-based biodiesel has attracted much attention because it is renewable, sustainable and environmentally friendly, but its production cost is too high to compete with fossil fuels [1,2]. To improve its economic feasibility, substantial efforts are underway to establish an integrated biorefinery process aimed at maximizing the value obtained from algal biomass by co-producing biodiesel and high-value bioproducts such as carotenoids, polyunsaturated fatty acids and active polysaccharides [3,4]. These could be employed for various practical uses due to their distinctive properties.
Microalgae contain abundant quantities of natural polysaccharides owing to their enormous biodiversity [5], and many studies have reported that microalgal polysaccharides have a wide range of biological activities, including antioxidant, antitumor, anti-inflammatory and immunostimulatory properties [6][7][8]. In particular, some algal polysaccharides containing sulfate esters exhibit unique pharmacological activities due to their novel and complex structures, including the sulfated polysaccharide p-KG03 extracted from Gyrodinium impudium [9], which shows specific antiviral activity by simultaneously inhibiting virus-cell interaction and virus-cell fusion. Therefore, microalgal polysaccharides are regarded as valuable new bioactive compounds with many downstream applications in the food, cosmetics, nutraceutical and pharmaceutical industries [10].
Polysaccharides have various important biological functions in algal cells, including storage, protection and structural roles [11]. Among them, chrysolaminarin is a class of principal energy-storage polysaccharides that is widely distributed in diatoms and chrysophytes [12]. In many diatoms, chrysolaminarin is a soluble and low-molecular-weight β-glucan (1-40 kDa) consisting of glucose monomers linked by β-1,3 bonds with limited β-1,6 branches [13], but its molecular weight, monosaccharide composition and number of branches are species-specific. Chrysolaminarin isolated from microalgae has shown some interesting bioactivities, such as the scavenging of hydroxyl radicals and antitumor activity [14,15]. Thus, interest in the selection and cultivation of chrysolaminarin-rich microalgal strains has increased, especially with regard to those that can accumulate lipids and chrysolaminarin simultaneously.
Tribonema spp. are filamentous oleaginous microalgae belonging to the class Xanthophyceae [16]. In the past, most studies have focused on culturing Tribonema microalgae for biodiesel and palmitoleic acid production, owing to their high lipid and palmitoleic acid content, high resistance to grazer predation and ease of harvest, so they have been considered an emerging potential biorefinery biomass feedstock for the co-production of bioenergy and valuable bioproducts to improve economic efficiency [17,18]. However, little attention has been paid to the fact that Tribonema spp. are also rich in active polysaccharides [6,19], such as a sulfated polysaccharide isolated from Tribonema spp. that shows anticancer and immunomodulatory activities. In addition, chrysolaminarin is an important intracellular polysaccharide in Tribonema species that is responsible for energy storage [20]. Nevertheless, little is known about the structure of chrysolaminarin isolated from Tribonema microalgae and its immunoregulatory activity. Moreover, the ability to accumulate chrysolaminarin differs among these species and is species-specific, but has been poorly researched. In this context, we first compared the differences in chrysolaminarin production ability of six Tribonema species under photoautotrophic conditions and then extracted, isolated and characterized this polysaccharide from the most productive microalga. The immunoregulatory activity of the isolated chrysolaminarin was also evaluated in vitro. The key aim of the present study is to provide new insights into the accumulation, isolation and immunoregulatory activity of chrysolaminarin in Tribonema and to broaden the potential applications of microalgae-based products.
Comparison of Chrysolaminarin Production in Six Tribonema Species
Chrysolaminarin is known as the primary assimilative product of the genus Tribonema [21], but few studies have investigated differences in the ability to accumulate this compound. Therefore, six Tribonema species were cultured in mBG-11 medium to screen for a chrysolaminarin-producing algal strain.
As shown in Figure 1A, the algal strains grew well in mBG-11 medium with little difference in biomass, with the exception of T. ulotrichoides, which had the highest biomass at 4.83 ± 0.13 g/L. Nevertheless, T. aequale showed the highest content of chrysolaminarin (17.20% ± 0.51% of dry weight), followed by T. vulgare (11.86% ± 0.45% of dry weight), T. ulotrichoides (8.13% ± 0.14% of dry weight) and T. viride (7.34% ± 0.37% of dry weight) (Figure 1B). The amount of chrysolaminarin in T. aequale was approximately six times higher than that in Tribonema sp.2172 and T. minus, which revealed that the ability of the tested algal strains to accumulate chrysolaminarin was species-specific. Based on the observed biomass and chrysolaminarin accumulation in the six Tribonema species, the maximum chrysolaminarin productivity was found in T. aequale, reaching 50.91 mg/L/d. Thus, this strain was selected and used as biomass feedstock for the further extraction and isolation of chrysolaminarin in the subsequent studies. Values in Figure 1 are expressed as the means ± SD of three replicates.
Preparation and Characterization of Chrysolaminarin from T. aequale
Crude polysaccharide was extracted from T. aequale biomass with a yield of 0.17 g/g algal powder, which was separated by DEAE-52 column and eluted with 0.1 mol/L NaCl, and the target elution peak was obtained as shown in Figure 2A. After further isolation by a Sephadex G-200 column with 0.1 mol/L NaCl eluting, a single main fraction (the isolated chrysolaminarin, Figure 2B) was then obtained at a yield of 0.11 g/g algal powder and used for further analysis.
To understand the monosaccharide composition and molecular weight of chrysolaminarin isolated from T. aequale, GC-MS and HPGPC were used in this study. As shown in Figure 2D, its monosaccharide composition was mainly composed of ribose (1.32%), rhamnose (1.24%), arabinose (4.63%), xylose (1.62%), mannose (28.74%), glucose (61.39%) and galactose (1.06%) according to the GC-MS analysis. Glucose accounted for the largest proportion in total sugars, followed by mannose, which was similar to the monosaccharide composition of chrysolaminarin isolated from T. utriculosum [20]. However, Xia et al. [15] reported that the content of glucose in chrysolaminarin isolated from Odontella aurita reached 82.23%. This difference might be due to species specificity. Meanwhile, the isolated chrysolaminarin had a number-average molecular weight (Mn) of 5.24 kDa and a weight-average molecular weight (Mw) of 5.99 kDa based on calibration with standard dextrans, and the degree of dispersion (Mw/Mn) was 1.14. The results showed that chrysolaminarin isolated from T. aequale was a heteropolysaccharide with low molecular weight, similar to that isolated from Phaeodactylum tricornutum [22].
As shown in Figure 2C, the FT-IR spectra of the isolated chrysolaminarin displayed absorption peaks at 3406, 2921 and 1400-1200 cm−1, which are typical absorption peaks of polysaccharides [22,23]. The stretching peaks at 1637.7 cm−1 and 1374.7 cm−1 represent the carboxyl groups [15]. Most importantly, a characteristic absorption peak at 889.24 cm−1 was attributed to the presence of β-type glycosidic linkages [15,22,24]. In addition, in the 1H NMR spectrum of the isolated chrysolaminarin (Figure 3A), two anomeric proton signals at δ 4.58 and 4.24 ppm confirmed the presence of β-type glycosidic linkages and were assigned to H-1 of the β-1,3-linkage and β-1,6-linkage, respectively [22], in accordance with the aforementioned FT-IR results. The 13C NMR spectrum (Figure 3B) further supported this assignment, especially the anomeric carbon signal at δ 102.5 ppm, which indicated that the glucosyl linkage was β-type. These results were consistent with those for chrysolaminarin isolated from Odontella aurita [15] and P. tricornutum [22]. Thus, the isolated polysaccharide was unambiguously identified as a chrysolaminarin with a β-D-(1→3) (main chain) and β-D-(1→6) (branch chain)-linked glucopyranan structure.
Figure 3. NMR spectra of chrysolaminarin isolated from T. aequale. A, 1H NMR spectrum; B, 13C NMR spectrum; NA, not assigned.
Effects of Isolated Chrysolaminarin on Macrophage Viability and Phagocytic Activity
The cytotoxic effect of isolated chrysolaminarin on RAW264.7 cells was investigated using the MTT assay. As shown in Figure 4, the isolated chrysolaminarin was nontoxic to macrophages within the tested concentration range (10-2000 μg/mL), and it also exerted growth-promoting effects at concentrations ranging from 500 to 2000 μg/mL.
Phagocytosis is a basic cellular process that plays a crucial role in immunity, and the phagocytosis ability of macrophages can indirectly reflect the level of their immune activity [25,26]. Therefore, the effect of isolated chrysolaminarin on the phagocytic activity of RAW 264.7 cells was determined based on the uptake of fluorescent latex beads and analyzed using immunofluorescence microscopy and flow cytometry. As shown in Figure 5A, it was obvious that the isolated chrysolaminarin treatment leads to an increase in phagocytic activity in RAW 264.7 cells, as evidenced by the red fluorescent dots observed inside the chrysolaminarin-treated cells by microscopic analysis. This effect is further demonstrated by flow cytometry analysis, in which more phagocytic fluorescent latex beads were detected on the histogram after treatment with isolated chrysolaminarin compared to the control (Figure 5B). In addition, with the increase in concentration from 500 to 1000 µg/mL, the percentage of phagocytic cells increased from 15.8 to 21.5% (Figure 5C), while the latter was significantly different from the control group (p < 0.05). Although the phagocytic activity of isolated chrysolaminarin was obviously lower than that of the positive control (LPS), these results indicated that the appropriate concentration of isolated chrysolaminarin could activate macrophages to enhance its phagocytic activity.
Figure 5. Effect of isolated chrysolaminarin on phagocytic activity of RAW264.7 cells. A, the pictures of macrophages engulfing fluorescent latex beads under a fluorescence microscope; B, phagocytosis assay was examined by flow cytometry; C, the percentage of phagocytic cells is determined by the number of macrophages that ingest at least one fluorescent latex bead divided by 1000 macrophages. Significant differences from control group are indicated by * p < 0.05. The data are presented as mean ± SD (n = 3).
Effects of Isolated Chrysolaminarin on mRNA Expression of Selected Cytokines
As a typical β-glucan, chrysolaminarin and its extracts are considered to be an immune booster through the activation of macrophages and the production of proinflammatory cytokines [12]. Among them, TNF-α can kill tumor cells or suppress their growth and enhance the phagocytosis, proliferation and differentiation of neutrophils; IL-1β can activate immune cells, assist in T-cell proliferation, participate in the production of antibodies and promote inflammation; IL-6 is the major factor that mediates inflammation, as it can activate immune cells to exert an immunoregulatory effect [27]. Therefore, the effects of isolated chrysolaminarin (1000 µg/mL) on stimulating RAW264.7 cells to secrete cytokines were determined at the mRNA expression level through RT-PCR and are shown in Figure 6. As expected, compared to the control group, the isolated chrysolaminarin had the same effect as LPS (positive control), significantly upregulating mRNA expression levels of IL-1β, IL-6, TNF-α and Nos2, with the exception of IL-10. Moreover, it even induced higher mRNA expression of IL-1β than LPS. Zou et al. [28] reported that cytokine IL-1β acts as a mediator of β-glucan actions and triggers a generalized downstream response through the NF-κB and MAPK signaling pathways to produce cytokines and activate the migration and phagocytic activities of macrophages. In addition, Nos2 is responsible for controlling NO synthesis, which is an indispensable immunoregulator involved in multiple physiological and pathological processes related to immune response [29]. Figure 6 shows that the mRNA expression level of Nos2 in RAW 264.7 cells treated with 1000 µg/mL of isolated chrysolaminarin was comparable to that of the positive group (LPS, 500 ng/mL), indicating that more NO may be produced to improve the activation state of macrophages. In summary, the isolated chrysolaminarin at a certain concentration effectively upregulated the mRNA expression of TNF-α, IL-6, Nos2 and IL-1β in RAW 264.7 cells, promoted the release of cytokines and NO, and thus exerted an immunoregulatory effect.
Figure 6. Effects of isolated chrysolaminarin on mRNA expression of selected cytokines (IL-1β, IL-6, TNF-α, Nos2 and IL-10) in RAW264.7 cells. Data are expressed as the mean fold change (mean ± SE, n = 3) from the calibrator group (Control). The concentration of chrysolaminarin used to stimulate RAW264.7 cells is 1000 µg/mL. IC, isolated chrysolaminarin. Significant differences from control group are indicated by * p < 0.05.
Effects of Isolated Chrysolaminarin on the MAPK and NF-κB Signaling Pathways
MAPK and NF-κB signaling pathways are known to be involved in the production of cytokines in response to various stimuli [25]. In this study, high mRNA expression levels of IL-1β, IL-6, TNF-α and Nos2 were observed in RAW 264.7 cells treated with isolated chrysolaminarin, but the mechanism of action remains unclear. Many studies have reported that plant polysaccharides could activate macrophages through triggering phosphorylation within MAPK (including p38, ERK and JNK) and NF-κB (p65) signaling pathways, thereby promoting the secretion of TNF-α, IL-6 and NO in RAW264.7 macrophages [25,30]. Therefore, the effects of isolated chrysolaminarin on the phosphorylation of key proteins in these signaling pathways, such as p38, JNK and p65, were assessed through Western blot in order to investigate the mechanism underlying the activation of macrophages. As shown in Figure 7A,B, the isolated chrysolaminarin stimulated RAW 264.7 cells with a similar effect to LPS (positive control), both of which increased the levels of P-p38 and P-JNK proteins in the MAPK signaling pathway. In particular, when the concentration increased from 500 µg/mL to 1000 µg/mL, the induced phosphorylation of JNK was concentration-dependent, being 2.27 and 3.85 times higher than that of the control, respectively (Figure 7C). In addition, the phosphorylation of p65 is a vital step for activating the NF-κB signaling cascade [31], and the isolated chrysolaminarin was found to induce the phosphorylation of p65 (Figure 7B,C), suggesting that the NF-κB signaling pathway was also involved in the immune enhancement effect. Above all, the results showed that the phosphorylation levels of p38, JNK and p65 could be increased by a certain concentration of isolated chrysolaminarin, indicating that the MAPK and NF-κB signaling pathways were closely associated with macrophage activation.
Figure 7. Effects of isolated chrysolaminarin on phosphorylation of key proteins in MAPK and NF-κB signaling pathways in RAW 264.7 cells. A,B, representative blot images of phosphorylation levels of p38, p65 and JNK; C, the quantified expression of P-p38, P-p65 and P-JNK compared with controls, which were corrected to protein β-actin (internal control). IC, isolated chrysolaminarin. * p < 0.05 versus the control group; # p < 0.05, the groups treated with different concentrations of isolated chrysolaminarin only. The data are presented as mean ± SD (n = 3).
Microalgae Strains and Culture Conditions
The six Tribonema species (Tribonema sp.2172, T. ulotrichoides, T. viride, T. minus, T. aequale and T. vulgare) used in this study were purchased from the Culture Collection of Algae at the University of Göttingen (SAG). All stock cultures were maintained in modified BG-11 (mBG-11) medium [32] and deposited in our laboratory.
The starter cultures of six Tribonema species were separately prepared in a Ø6 × 60 cm (inner diameter × length) glass column photobioreactor containing 1.2 L of mBG-11 medium and grown under low-light conditions at 70-80 µmol/m2/s with aeration of 1% CO2 (v/v). After 7 days of cultivation, the algal cells were collected by filtration, washed twice with sterile water and then used as seed cultures to inoculate Ø6 × 60 cm glass column photobioreactors for cultivation at the same initial cell density of 0.35-0.40 g/L. These microalgae were cultured in mBG-11 medium with continuous unilateral light illumination of 300 ± 15 µmol/m2/s and bubbled with 1% CO2 (v/v) from the bottom of the column for 15 days. At the end of cultivation, the culture samples were harvested by filtration using bolting silk (300 mesh) and then lyophilized in a vacuum freeze drier (Christ, Germany). The algal powder was stored at 4 °C prior to analysis.
Determination of Biomass Dry Weight and Chrysolaminarin Content
Biomass dry weight was measured on day 15 according to Wang et al. [33]. Briefly, 10 mL of culture was filtered through 0.45 µm pre-weighed GF/B filter paper (DW0) and dried at 105 °C overnight to a constant weight (DW1). The biomass dry weight (g/L) was calculated as (DW1 − DW0) × 100. For the determination of chrysolaminarin content, it was extracted from the algal powder (50 mg) with diluted sulfuric acid (50 mmol/L) and then quantitatively assayed using the phenol-sulfuric acid method, as detailed by Xia et al. [15].
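For concreteness, the two quantities above reduce to short calculations; the helper functions below are only a sketch, and the productivity formula (biomass × content / cultivation time) is an assumption, since the paper reports only the resulting value (50.91 mg/L/d for T. aequale).

```python
def biomass_dry_weight_g_per_l(dw0_g, dw1_g, sample_volume_ml=10):
    """Biomass dry weight (g/L) from a filtered culture sample:
    (DW1 - DW0) scaled from the sample volume (10 mL here) to one litre."""
    return (dw1_g - dw0_g) * (1000 / sample_volume_ml)


def chrysolaminarin_productivity_mg_l_d(biomass_g_per_l, content_fraction, days=15):
    """Volumetric chrysolaminarin productivity (mg/L/d), assumed here as
    biomass x content / cultivation time over the 15-day culture."""
    return biomass_g_per_l * content_fraction * 1000 / days


# Hypothetical example: 4.4 g/L biomass at 17.2% chrysolaminarin over 15 days
print(chrysolaminarin_productivity_mg_l_d(4.4, 0.172))  # ~50.5 mg/L/d
```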
Extraction and Isolation of Chrysolaminarin
The lyophilized algal sample (40 g) was first mixed with 1 L of diluted sulfuric acid (50 mmol/L), extracted by acidolysis in a water bath at 80 °C and pretreated with ultrasound. The supernatant was collected after centrifugation, and the crude polysaccharide was then obtained by the processes of alcohol precipitation, deproteinization and dialysis, which have been described in detail by Xia et al. [15] and Zhang et al. [22]. Subsequently, the treated crude polysaccharide was redissolved in deionized water and loaded on a pre-equilibrated DEAE-cellulose-52 (3 × 30 cm) column. The column was gradient-eluted with 0.1 and 0.3 mol/L NaCl solution at a flow rate of 2 mL/min. Each 5 mL of eluate was collected as a tube to detect the variation in polysaccharide concentration using the phenol-sulfuric acid method [22]. Afterwards, chrysolaminarin was further purified by 0.1 mol/L NaCl solution on a Sephadex G-200 column (2 × 50 cm) with a flow rate of 2 mL/min. The corresponding polysaccharide-rich fraction (2 mL of each tube) was pooled, dialyzed and lyophilized for further analysis.
Characterization of Chrysolaminarin
For Fourier transform infrared spectroscopy (FT-IR) (Thermo Fisher Nicolet iS50, Waltham, MA, USA) analysis, the sample (2 mg) was first mixed with the dried KBr (w/w = 1:100) and then pressed into a flake for measurement. The spectrum was recorded in the range of 500-4000 cm−1 at a resolution of 2 cm−1. The molecular weight of the isolated chrysolaminarin was measured by high-performance gel permeation chromatography (HPGPC) (Agilent 1260, Santa Clara, CA, USA) equipped with a TSK-G3000 PWXL column (7.5 mm × 300 mm) and a refractive index detector (Agilent RID-10A Series) [20]. The molecular weight was estimated by reference to a calibration curve made with T-series Dextran standards (1-670 kDa). The monosaccharide composition was determined using gas chromatography-mass spectrometry (GC-MS) as described by Xia et al. [15]. Identification of the derivatized monosaccharides was carried out according to the retention times and mass fragmentation patterns of the standards. For nuclear magnetic resonance (NMR) analysis (Bruker Advance III 500, Karlsruhe, Germany), the sample (10 mg) was dissolved in D2O (deuterium oxide) in an NMR tube, and the 1H (500 MHz) and 13C (125 MHz) spectra were recorded at 30 °C.
3.6. Immunomodulatory Activity In Vitro
3.6.1. Cell Culture
Murine RAW 264.7 (ID: TIB-71) macrophage cells were purchased from the Cell Bank of the Chinese Academy of Science (Shanghai, China) and cultured in DMEM supplemented with 10% FBS and 1% penicillin-streptomycin in an incubator with 5% CO2. Treatments included a normal group (control), an LPS group (positive control, LPS) and different concentrations of isolated chrysolaminarin. In addition, the endotoxin content was estimated to be less than 0.015 EU/mg using an endotoxin-specific kit (Chinese Horseshoe Crab Reagent Manufactory, Co., Xiamen, China). For most experiments, cells were allowed to adhere for 24 h before treatment.
RAW 264.7 Cell Viability Assay
Cell viability was evaluated using the MTT method according to Palanisamy et al. [34]. In brief, cells were seeded in a 96-well plate at a density of 5 × 10 4 cells/mL and exposed to different concentrations of isolated chrysolaminarin (10, 50, 100, 500, 1000 and 2000 µg/mL) for 24 h, then MTT solution was added to each well and incubated for another 4 h. After removal of the supernatant, 100 µL of DMSO was added to promote crystal dissolution. The absorbance at 490 nm was measured using a microplate reader (ELX800, BioTek, Winooski, VT, USA), and the percentage of cell viability was calculated as follows: Cell viability (%) = (OD 490 value of treated cells/OD 490 value of control cells) × 100.
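A minimal sketch of the viability calculation defined above; the absorbance readings are hypothetical, and averaging the control replicates before taking the ratio is an assumption about the exact bookkeeping.

```python
import numpy as np

def cell_viability_percent(od490_treated, od490_control):
    """Cell viability (%) as defined in the text:
    (OD490 of treated cells / OD490 of control cells) x 100."""
    return np.asarray(od490_treated, dtype=float) / np.mean(od490_control) * 100

# Hypothetical absorbance readings for one chrysolaminarin concentration
print(cell_viability_percent([0.52, 0.55, 0.50], [0.48, 0.47, 0.49]))
```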
Phagocytosis Assay
Phagocytic activity was determined by the amount of fluorescent-labeled latex beads engulfed by RAW 264.7 cells as described previously [35]. In brief, cells were seeded on 6-well plates at a density of 5 × 10 5 cells/mL, stimulated with isolated chrysolaminarin at the concentrations of 500 and 1000 µg/mL for 24 h and then incubated with fluorescent latex beads (Sigma, L-3030, carboxylate-modified, average size 2 µm, 10 µL beads in 2 mL DMEM medium) for another 4 h. Complete DMEM medium was treated as control group and lipopolysaccharides (LPS, 500 ng/mL) solution was used as the positive control. For flow cytometry analysis, cells were detached by pipetting after washing in PBS, and the fluorescence of 1000 cells per sample was measured by BD FACS Canto flow cytometer (Becton-Dickenson, San Jose, CA, USA). The phagocytic activity was determined by the percentage of phagocytic cells, which meant the number of macrophages that ingest at least one fluorescent latex bead divided by the total number of macrophages. To acquire the pictures of cell-phagocytized fluorescent latex beads, immunofluorescence microscopy (Eclipse Ti-E, Nikon, Tokyo, Japan) was used with TEXAS RED Filter cube.
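The percentage of phagocytic cells described above reduces to a one-line calculation; the example value corresponds to the 21.5% reported for 1000 µg/mL of isolated chrysolaminarin.

```python
def phagocytic_cell_percentage(n_bead_positive, n_counted=1000):
    """Percentage of phagocytic cells: macrophages that ingested at least one
    fluorescent latex bead divided by the total macrophages counted
    (1000 per sample in this study)."""
    return 100.0 * n_bead_positive / n_counted

print(phagocytic_cell_percentage(215))  # 21.5%
```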
3.6.4. Quantitative Real-Time PCR for Cytokine Gene Expression in RAW264.7 Cells
RAW264.7 cells were seeded at a density of 5 × 10 5 cells/mL onto 12-well plates and preincubated in DMEM medium at 37 °C under 5% CO2 for 24 h, followed by stimulation with isolated chrysolaminarin (1000 µg/mL) for 12 h. Cells incubated with LPS (500 ng/mL) or DMEM medium were used as the positive control and the blank control, respectively. Next, total RNA was extracted from the treated cells using TRIzol reagent according to the manufacturer's recommendations, and its quality was evaluated using a spectrophotometer (P330, Implen, Munich, Germany). The cDNAs were synthesized with a PCR instrument (T100 thermal cycler, Bio-Rad, Hercules, CA, USA) using a PrimeScript™ RT Master Mix kit (RR036A, Takara, China) according to the manufacturer's protocol.
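The paper reports cytokine expression as mean fold change relative to the control (calibrator) group; the exact computation is not spelled out in this excerpt, so the widely used 2^-ΔΔCt method is assumed in the sketch below, with a housekeeping reference gene for normalisation.

```python
import numpy as np

def fold_change_2_ddct(ct_target_treated, ct_ref_treated,
                       ct_target_control, ct_ref_control):
    """Relative mRNA expression as fold change over the calibrator (control)
    group using the 2^-ddCt method (an assumption here), normalised to a
    housekeeping reference gene. Inputs are lists of Ct values."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# Hypothetical Ct values for a target cytokine vs a reference gene
print(fold_change_2_ddct([22.1, 22.3], [17.0, 17.1], [25.0, 24.8], [17.1, 17.0]))
```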
3.6.5. Western Blot Analysis
RAW 264.7 cells were incubated in the presence of isolated chrysolaminarin or LPS (positive control, 500 ng/mL), as described in Section 3.6.3. To investigate the effect of isolated chrysolaminarin on the phosphorylation of p38, JNK and p65 proteins related to the MAPK and NF-κB signaling pathways, whole-cell protein of RAW264.7 cells was extracted with the lysis reagent (P0013G, Beyotime) containing phosphatase inhibitors (P1081, Beyotime), separated on SDS-PAGE (P0012A, Beyotime) and transferred to a PVDF membrane (ISEQ00010, Merck Millipore, Darmstadt, Germany). Membranes were treated with 5% skim milk (232100, GBCBIO) to block nonspecific binding sites, followed by incubation with each primary antibody at 4 °C overnight. After washing three times with TBST buffer, the strips were incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit antibody (1:10,000 dilution, #7074s, Cell Signaling Technology) for 2 h at room temperature. The protein blots were visualized using ECL reagents (4AW011-1000, 4A Biotech, Beijing, China). The band intensities were quantified using ImageJ software (version 1.48, National Institutes of Health, Bethesda, MD, USA). All immunoblot bands were normalized to the intensities of the corresponding bands of the internal control (β-actin).
Statistical Analysis
All experiments were performed in triplicate. Numerical data are expressed as the mean ± standard deviation (SD), while the data generated by RT-PCR are presented as the means of three biological replicates ± standard error (SE). One-way analysis of variance (ANOVA) was followed by Student–Newman–Keuls tests using a statistical analysis software package (GraphPad Prism ver. 8, La Jolla, CA, USA), and statistically significant differences were defined as p < 0.05.
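A rough open-source equivalent of the group comparison might look as follows; the numbers are placeholders, and the Student–Newman–Keuls post-hoc step performed in GraphPad Prism has no direct SciPy counterpart, so only the ANOVA stage is reproduced here.

```python
from scipy import stats

# Placeholder measurements for three groups (e.g. control, LPS, 1000 ug/mL IC)
control = [100.2, 98.7, 101.5]
lps = [132.4, 128.9, 135.1]
ic_1000 = [118.3, 121.7, 116.9]

f_stat, p_value = stats.f_oneway(control, lps, ic_1000)  # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```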
Conclusions
This study showed that the chrysolaminarin accumulation ability of different Tribonema strains was species-specific, and T. aequale had the highest chrysolaminarin production. Chrysolaminarin was then isolated from this alga, and structural analysis indicated that it was a low-molecular-weight heteropolysaccharide with a high proportion of glucose, mainly linked by β-D-(1→3) (main chain) and β-D-(1→6) (branch chain) glycosidic bonds. In vitro immunoregulatory assays showed that it could activate RAW 264.7 cells, increase their phagocytic activity, upregulate mRNA expression levels of IL-1β, IL-6, TNF-α and Nos2 and induce the phosphorylation of target proteins in the MAPK and NF-κB signaling pathways. This study represents the first report on the isolation, characterization and immunomodulatory activity of chrysolaminarin from T. aequale, which provides a reference for further development and an in-depth understanding of its immunoregulatory mechanism.
Author Contributions: C.Z. and F.W. conceived and designed the experiments; F.W., R.Y. and Y.G. performed the experiments and analyzed the data; F.W. and R.Y. wrote and revised the paper. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because they also form part of an ongoing study.
Cerebellar cognitive affective syndrome after acute cerebellar stroke
Introduction The cerebellum modulates both motor and cognitive behaviors, and a cerebellar cognitive affective syndrome (CCAS) was described after a cerebellar stroke in 1998. Yet, a CCAS is seldom sought for, due to a lack of practical screening scales. Therefore, we aimed at assessing both the prevalence of CCAS after cerebellar acute vascular lesion and the yield of the CCAS-Scale (CCAS-S) in an acute stroke setting. Materials and methods All patients admitted between January 2020 and January 2022 with acute onset of a cerebellar ischemic or haemorrhagic first stroke at the CUB-Hôpital Erasme and who could be evaluated by the CCAS-S within a week of symptom onset were included. Results Cerebellar acute vascular lesion occurred in 25/1,580 patients. All patients could complete the CCAS-S. A definite CCAS was evidenced in 21/25 patients. Patients failed 5.2 ± 2.12 items out of 8 and had a mean raw score of 68.2 ± 21.3 (normal values 82–120). Most failed items of the CCAS-S were related to verbal fluency, attention, and working memory. Conclusion A definite CCAS is present in almost all patients with acute cerebellar vascular lesions. CCAS is efficiently assessed with the CCAS-S at the bedside in acute stroke settings. The magnitude of CCAS likely reflects a cerebello-cortical diaschisis.
Introduction
Acute vascular cerebellar lesions are proportionally rare and account for 2-3% of all strokes (1,2). Most cerebellar strokes are considered relatively mild due to low initial National Institutes of Health Stroke Scale (NIHSS) scores (3) and mostly favorable outcomes: 70% of patients are deemed fully independent after a cerebellar stroke when consciousness is not impaired at presentation (2,4,5). Still, the functional consequences of cerebellar vascular impairment might be underestimated. In the human central nervous system (CNS), the cerebellum hosts four times more neurons than the neocortex and, together with the prefrontal cortex, displays the main relative increase in brain neurons in sapiens compared with other mammals (6,7). The ratio of cerebellar to neocortical neurons and the cerebellar cortical surface are, furthermore, substantially
increased from big apes to humans (6,8,9). Historically associated with movement control, the cerebellum is now recognized to play an important role in perceptual and cognitive processes (10,11). This expansion is one of the explanations for the human brain's higher cognitive performance. The interplay between neocortical areas and the cerebellum occurs thanks to dense reciprocal connections, with efferent cerebellar dentato-thalamo-cortical tracts that modulate a wide array of neocortical areas, which in turn are connected back to the cerebellum through cortico-ponto-cerebellar tracts (12-16). The function of the cerebello-cortical loops (CCLs) is to increase the accuracy of both motor and cognitive behaviors (17). Clinically, impairment of the CCL through acute or chronic cerebellar disorders leads to both motor and thought dysmetria (18-20). While motor dysmetria after cerebellar stroke was already well described by Holmes (21), the impact of cerebellar dysfunction on cognitive processes was only outlined two decades ago by Jeremy Schmahmann. In his seminal series, Schmahmann described a cohort of twenty patients with cerebellar disorders, thirteen of whom had an acute vascular lesion (22), and reported various cognitive impairments involving language, emotional regulation, memory, attention, visuospatial, and executive functions (22,23). He coined the term cerebellar cognitive affective syndrome (CCAS) for the cognitive profile associated with cerebellar lesions. This association between cognitive dysfunction and cerebellar disorders was confirmed in several studies [for a meta-analysis, see Ahmadian et al. (24)]. However, the cognitive disorders associated with cerebellar pathology are seldom systematically studied due to the extensive and lengthy neuropsychological test batteries (over 90 min in many of the studies) that were, until recently, required to highlight a CCAS. In 2018, a CCAS screening and follow-up scale (CCAS-S) was developed, based on the paper-and-pencil neuropsychological tests that could most efficiently single out individuals with cerebellar cognitive disorders from healthy individuals (23). The CCAS-S allows evidence for a CCAS to be obtained in <10 min in patients with cerebellar disorders (23).
To date, the CCAS-S has not been used in patients with acute vascular cerebellar disorders. The aim of this study was therefore to (i) determine the prevalence of a CCAS in a cohort of patients with acute vascular cerebellar lesion and (ii) assess the practicability of the CCAS-S in the context of acute stroke.
Subjects and methods

Population
The studied population is derived from the stroke registry of the Erasmus Hospital in Brussels (Belgium), where all cases of acute stroke have been recorded since January 2015 (25,26). Our analysis included patients admitted between January 2020 and January 2022 who had an acute-onset cerebellar ischaemic or haemorrhagic stroke and for whom a CCAS-S was performed within 1 week of symptom onset.
Acute stroke care and clinical evaluation
Acute stroke management and care followed the European Stroke Organization guidelines and are detailed in Elands et al. (25) and Jodaitis et al. (27). Initial stroke severity was evaluated by the National Institutes of Health Stroke Scale (NIHSS) score at admission. CCAS was evaluated using the CCAS-S. The CCAS-S is composed of 10 items: a semantic fluency task, a phonemic fluency task, a verbal category switching task, a forward digit span, a backward digit span, a cube drawing task, a verbal registration task, a verbal similarities task, a Go No-Go task, and an affect evaluation (23). A raw score is obtained for each task, with a minimum passing score. The number of failed tests determines the likelihood that the subject has CCAS: three or more failed tasks indicate a definite CCAS, two a probable CCAS, and one a possible CCAS. The raw score, which ranges from 82 (sum of minimum passing scores for each item on the scale) to 120 (sum of maximum scores for each item), is not diagnostic but provides quantitative values in each task that can be used for longitudinal follow-up, as patients can have definite CCAS (three failed test items) with a total raw score that falls in the 82-120 range. Subjects without CCAS are not expected to fail any task (23). The French translation of the "A" version of the CCAS-S (23) was used in this study.
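A minimal sketch of how the scoring rules described above translate into a screening decision; the item names and minimum passing scores supplied by the caller are illustrative, not the official scale values.

```python
def screen_ccas(item_scores, min_passing_scores):
    """Screen for a CCAS from the 10 CCAS-S items (dicts keyed by item name).
    An item is failed when its raw score is below its minimum passing score;
    >= 3 failed items -> definite, 2 -> probable, 1 -> possible CCAS, as
    described in the text. The 82-120 range runs from the sum of per-item
    minimum passing scores to the sum of per-item maximum scores."""
    failed = [item for item, score in item_scores.items()
              if score < min_passing_scores[item]]
    raw_total = sum(item_scores.values())
    if len(failed) >= 3:
        category = "definite CCAS"
    elif len(failed) == 2:
        category = "probable CCAS"
    elif len(failed) == 1:
        category = "possible CCAS"
    else:
        category = "no CCAS"
    return raw_total, failed, category
```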
Population
During the study period, 1,508 patients were admitted to the stroke unit, and 25 of them presented with an acute vascular cerebellar lesion (1.7%). Patients' characteristics are summarized in Table 1. The mean age was sixty-four, and lesions were ischemic in 18/25 patients and bilateral in 5/25. In ischemic lesions, the posterior inferior cerebellar artery (PICA) territory was most commonly affected, followed by the superior cerebellar artery (SCA) territory. The lesion was located in the posterior cerebellar lobe in all but one patient and predominantly in the right cerebellar hemisphere. Figure 1 illustrates schematically the lesion sizes and locations. The median admission NIHSS score was 1.
Cerebellar cognitive affective syndrome

The CCAS-S could be completed in all patients. All patients failed at least one CCAS-S item. Twenty-one patients had a definite CCAS (Table 2).
Discussion
The main findings of this study are that a definite cerebellar cognitive affective syndrome is present in most patients with acute cerebellar vascular lesions and that the CCAS-S can be easily completed at the bedside in acute stroke settings.
The findings of this study, albeit limited by its monocentric nature and the size of the sample, are likely to be generalizable to other populations of acute cerebellar vascular lesions. In fact, our cohort matches the usual proportion of acute vascular cerebellar lesions in stroke units, ranging between 1.5% (2) and 2.3% (1). Similarly, the clinical characteristics of the reported population of acute vascular cerebellar lesions are in line with previous reports in terms of age (1,2,14), sex (1,2,14), admission NIHSS (3,4,28,29), vascular territory involved (30-32), rate of bilateral lesions (1,31), and predominant involvement of cerebellar posterior lobes (14). However, a selection bias toward less severe cases in our cohort is possible due to the fact that patients with acute cerebellar vascular lesions who need surgery for acute hydrocephalus or acute brainstem compression are usually hospitalized not in the stroke unit but in neurosurgery and intensive care units. Such complications occur in 10 to 20% of acute vascular cerebellar lesions, and these patients may thus have missed inclusion in our cohort (29,33).
Since the description of the CCAS in 1998 (22), several studies confirmed that almost all patients with acute cerebellar vascular lesion displayed significant cognitive impairments in a wide range of cognitive domains, corresponding to a CCAS (14, 34-36) that mirrored the characteristics of Schmahmann's seminal report (22). However, the identification of a CCAS in those studies required the use of a full neuropsychological test battery, an assessment that requires over an hour in trained hands and is exhausting for acutely ill patients. Those facts limit the application of a full neuropsychological test battery in acute stroke settings. In contrast, in our cohort, the CCAS-S could be performed in all patients and allowed screening for a CCAS within 10 min, highlighting its yield in the acute cerebellar stroke context. This report, therefore, brings evidence for the validity of the CCAS-S in acute cerebellar disorders and supports previous findings that relied on the CCAS-S to describe cognitive disorders in degenerative cerebellar diseases such as Friedreich ataxia (20) and SCA3 (37), in a mixed cohort of degenerative cerebellar ataxia (38), as well as in patients with chronic cerebellar stroke (39). The high rate of CCAS in our cohort, with 24/25 subjects displaying lesions in cerebellar posterior lobes, is consistent with lesion-symptom mapping and functional neuroimaging studies that associate the CCAS and the cerebellar role in cognition with the cerebellar posterior lobes (14,16,40). The higher rate of cerebellar right-sided posterior lesions in our cohort may also contribute to a more severe cognitive clinical pattern due to the loss of cross-connections between the dominant hemisphere and the right cerebellum (14,41). Yet, this association between the right cerebellar lesion and worse cognitive outcomes is inconstant and needs further investigation in larger groups of subjects (35). Compared with patients with cerebellar degenerative diseases or chronic cerebellar stroke, patients with acute cerebellar vascular lesions failed more items. Such poorer performances in patients with acute cerebellar injury are probably related to the CCAS pathophysiology. In fact, the CCAS is thought to build up from the disconnection of neocortical areas involved in cognitive processes and the cerebellum, corresponding to a cerebello-cortical diaschisis (CCD) (20,24). In degenerative disorders, this diaschisis is gradual and allows compensatory mechanisms, as highlighted in degenerative cerebellar ataxias (43-46). At the acute vascular cerebellar lesion stage, both cognitive impairments and CCD on functional brain imaging are maximal (35, 47-49), while compensatory strategies through plasticity or recovery have not yet developed. Over time, cognitive impairments related to cerebellar vascular impairment partially improve, suggesting that the acute disconnection from the cerebellum might recover or be compensated (32,39). Our patients mostly failed the CCAS-S items relating to verbal fluency, attention, and working memory. Neuroanatomically, verbal fluency is considered to rely more on executive than language functions (50, 51) and is dependent on the prefrontal cortex integrity, similarly to attention (52) and working memory (53).
Patients with acute cerebellar vascular lesions failed the items that depend on frontal cortex integrity, which parallels the metabolic brain functional imaging studies on CCD showing that the frontal cortex was the most metabolically impaired cortical area after a cerebellar lesion (48,49) and supports CCD as the main pathophysiology for CCAS. Only one-third of patients failed the "affect" item of the CCAS, whereas patients with cerebellar disorders display a much higher rate of non-cognitive psychiatric symptoms and social cognition disorders when formally tested (54-57). The self-reported nature of this item may explain its lack of sensitivity in the CCAS-S. This observation is also made in degenerative cerebellar disorders (20) and may warrant the evolution of the "affect" item in further CCAS-S versions. The role of the CCD in CCAS related to cerebellar stroke was also further demonstrated by the long-term consequences of cerebellar vascular lesions (58). In fact, a study from 2021 described an atrophy of the neocortical areas functionally connected to the cerebellum in proportion to the acute cerebellar lesion volume (59).
In summary, this study shows that a CCAS is highly prevalent after acute vascular cerebellar injury and likely reflects acute CCD. This study also positions the CCAS-S as a highly sensitive and practical tool to screen for cerebellar cognitive disorders in the stroke context. Further studies are required to assess the relation between CCAS-S scores at acute and chronic stages and the magnitude of the CCD through both functional and structural brain imaging longitudinal studies.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by CUB-Hopital Erasme Ethics Committee. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Identifying Bridges and Catalysts for Persistent Cooperation Using Network-Based Approach
The framework of iterated Prisoner's Dilemma (IPD) is commonly used to study direct reciprocity and cooperation, with a focus on the assessment of the generosity and reciprocal fairness of an IPD strategy in one-on-one settings. In order to understand the persistence and resilience of reciprocal cooperation, here we study long-term population dynamics of IPD strategies using the Moran process where stochastic dynamics of strategy competition can lead to the rise and fall of cooperation. Although prior work has included a handful of typical IPD strategies in the consideration, it remains largely unclear which type of IPD strategies is pivotal in steering the population away from defection and providing an escape hatch for establishing cooperation. We use a network-based approach to analyze and characterize networks of evolutionary pathways that bridge transient episodes of evolution dominated by depressing defection and ultimately catalyze the evolution of reciprocal cooperation in the long run. We group IPD strategies into three types according to their stationary cooperativity with an unconditional cooperator: the good (fully cooperative), the bad (fully exploitive), and the ugly (in between the former two types). We consider the mutation-selection equilibrium with rare mutations and quantify the impact of the presence versus absence of any given IPD strategy on the resulting population equilibrium. We identify catalysts (certain IPD strategies) as well as bridges (particular evolutionary pathways) that are most crucial for boosting the abundance of good types and suppressing that of bad types or having the highest betweenness centrality. Our work has practical implications and broad applicability to real-world cooperation problems by leveraging catalysts and bridges that are capable of strengthening persistence and resilience.
I. INTRODUCTION
Understanding how cooperation evolves and is sustained is a prominent problem of broad interest and primary significance [1]. Among others, direct reciprocity has been extensively studied using the Iterated Prisoner's Dilemma (IPD) games, where individual behavior is categorized as different strategies [2], [3]. In particular, the so-called Zero-Determinant (ZD) strategies are a set of rather simple memory-one strategies that can unilaterally set a linear relation between their own payoffs and that of their opponent [4], [5]. The finding of such a powerful control over payoffs has greatly spurred new waves of work from diverse fields including network science, computer science, and social science, aiming to shed light on the robustness and resilience of cooperation by means of the natural selection of IPD strategies [6]- [9].
Prior work on IPD strategies focuses on their ability to foster fairness and cooperation among pairwise interactions. Because of the uncertainty in opponent types, IPD strategies can be optimized in terms of their tolerance, retaliation, and reconciliation of defective moves and also their level of self-recognition and discerning co-players. For example, Grim Trigger (also known as Grudger) retaliates against the opponent's defection by turning to defection forever [10], and Tit for Tat (TFT) always replicates the opponent's previous move [11]. In contrast, Tit for Two Tats (TF2T) can tolerate the co-player's defection not more than twice in a row before taking revenge [12]. Contrite TFT can also reconcile errors and mistakes in moves with cooperation onwards [13]. Suspicious TFT uses defection as an initial trial of the opponent's type because of the low trust of others [6]. Collective strategies further take a particular sequence of initial moves to distinguish "us vs them" and only cooperate with themselves [14]. Even though this field has been extensively studied, with more than 100 common IPD strategies discovered, the framework of IPD games remains, and is increasingly becoming, an important testbed for combining ideas from artificial intelligence and game theory [15]- [20].
A most striking discovery of IPD strategies is that ZD strategies are able to enforce a unilateral linear relationship between their own payoff and their co-player's [4], [5]. By prescribing their particular move conditional on each outcome, ZD players can control the payoff results and even demand an unfair share of the payoffs accrued from interactions. Inspired by this fact, previous studies classify IPD strategies using a dichotomy by their intention of cooperation and ability to reciprocate: partner vs rival strategies [21]. These strategies themselves are powerful and form a refined subset of IPD strategies. In this work, we extend prior studies and focus on the morality of strategies based on a simple yet intuitive definition: good strategies that are fully cooperative with an unconditional cooperator (ALLC), bad strategies that are fully defective with an unconditional cooperator, and ugly ones that fall in between. The present classification can lead to a mesoscopic description of cyclic population dynamics in a manner similar to Rock-Paper-Scissors games [22].
Although prior work has included a handful of strategies in consideration, it remains largely unclear which types of strategies are pivotal in steering the population away from defection and providing an escape hatch for establishing cooperation. To characterize such transitions between strategies, we consider a network of strategies where evolutionary pathways between them can be evaluated as a directed graph, depending on their ability to disfavor defection and foster the evolution of reciprocal cooperation over the long term.
In our model, individuals play against each other with prescribed IPD strategies and obtain their respective average payoffs from the interactions. Their payoffs, subsequently, will determine their reproductive fitness. To study evolutionary competition, we use a Moran process in a population of finite size and with a significantly low mutation rate [23]. The long-term equilibrium of the corresponding evolutionary dynamics can be analytically derived using the approximation method of an embedded Markov chain [24], [25]. Moreover, we can define a directed network where (i) the nodes are the strategies, (ii) the direction of an edge indicates the fixation of the "target strategy" in a homogeneous population of the "source strategy", and (iii) the weight of an edge is the ratio of the two fixation probabilities between the pair of strategies. The creation and manipulation of this network allow us to incorporate standard graph measures and algorithms to analyze the functionality of any strategy in the population.
The network-based method helps describe and compare competing IPD strategies. Above all, it can be used to identify essential IPD strategies, like Win-Shift, Lose-Stay [26] (the reverse of Win-Stay, Lose-Shift (WSLS) [27]), that act as catalysts, and transitions between strategies, such as from ALLD to Win-Shift, Lose-Stay, that act as bridges for recovering cooperation from defection. Their presence plays an important role in the robustness and persistence of cooperation. Our findings share strong similarities with previous studies on the ecological stability and resilience of food webs [28]- [30].
A. Payoff structure
The Prisoner's Dilemma (PD) game is a symmetric game involving two players X and Y, and two actions: to cooperate or to defect. In a one-shot PD game, the four possible outcomes correspond to different payoffs from the focal player's perspective: if both are cooperators, one gets the reward R, if a cooperator is against a defector, the sucker's payoff S, if a defector is against a cooperator, the temptation T , and if both are defectors, the punishment P . The game is considered a paradigm for understanding the conflict between self-interest and collective interest as the payoff structure satisfies T > R > P > S.
The iterated Prisoner's Dilemma (IPD) games further assume repeated encounters between the same two individuals and shed insights into the idea of direct reciprocity [2]. For any pair of two IPD strategies, denoted by A and B without loss of generality, we can compute the average payoff matrix for their game interactions, written as

    ( a  b )
    ( c  d )        (1)

where a is the average payoff of A against A, b that of A against B, c that of B against A, and d that of B against B. The pairwise competition dynamics between A and B typically fall into four types [31]: (a) dominance, if a > c and b > d or a < c and b < d; (b) bistability, if a > c and b < d; (c) coexistence, if a < c and b > d; and (d) neutrality, if a = c and b = d.
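The classification of pairwise dynamics can be expressed directly as conditions on the payoff entries. The sketch below follows the dominance condition stated above; the conditions used for bistability, coexistence and neutrality are the standard ones and are an assumption about the paper's exact criteria.

```python
def classify_pairwise_dynamics(a, b, c, d):
    """Classify the competition between strategies A and B from the average
    payoff matrix (1): a = A vs A, b = A vs B, c = B vs A, d = B vs B."""
    if (a > c and b > d) or (a < c and b < d):
        return "dominance"
    if a > c and b < d:
        return "bistability"      # both monomorphic states are stable
    if a < c and b > d:
        return "coexistence"      # stable mixture of A and B
    if a == c and b == d:
        return "neutrality"
    return "other"                # remaining boundary cases

# Example with the classical PD values (T, R, P, S) = (5, 3, 1, 0):
# ALLC vs ALLD gives (a, b, c, d) = (3, 0, 5, 1) -> dominance (ALLD dominates).
print(classify_pairwise_dynamics(3, 0, 5, 1))
```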
B. Classification of IPD strategies
Although hundreds of strategies for the IPD games have been published in the literature, this set is almost negligible compared with the inexhaustible strategy space. To further evaluate these strategies, it is natural to ask: how can they be divided into groups?
The strategies can be labeled by their memory length, which indicates the amount of information they can hold in mind. There are memory-zero strategies, whose current actions do not depend on the history of the match. Next come memory-one strategies, which remember only the outcome of the previous round, and so on. A strategy can even have infinite memory, for instance, a Looker-up strategy, which always remembers the result of the very first round [18].
Although the classification by memory length is plain and simple, it cannot reflect the competitiveness and dominance of strategies in general. Later on, another taxonomy was put forward, further exploring the evolutionary relevance of strategies [21].
Definition 1 (Nice Strategies). A strategy is nice if it is never the first to defect ⇔ if it always cooperates with a cooperator ⇔ if it always cooperates with itself.
Definition 2 (Cautious Strategies). A strategy is cautious if it is never the first to cooperate ⇔ if it always defects against a defector ⇔ if it always defects against itself.
The equivalence relations are straightforward to show and we omit the proof here.
These two groups of strategies have no intersection. A nice strategy aims for the payoff R while a cautious one refuses to be extorted by others. It is worth noting that both nice and cautious strategies take up only a measure-zero and hence negligible portion of the strategy space. There exist other strategies that do not belong to the two groups, for example, the extortionate ZD strategies.
Remark.
A finer classification based on Definitions 1 and 2 gives partners and rivals as subsets of nice and cautious strategies, respectively. If a partner's payoff is less than R, then the opponent's payoff should also be less than R (that is, πX < R implies πY < R). On the other hand, a rival's payoff is always greater than or equal to the opponent's payoff (πX ≥ πY). The "opposite" of partners and rivals are referred to as requiting strategies and submissive strategies.
This classification touches on the performance of IPD strategies in the process of evolution. We now introduce our own taxonomy where strategies are divided into three different classes based on the expected payoff of a cooperator as their opponent in the IPD games (see Figure 1a). The method is simple and works as well as, if not better than, the existing one for identifying the competitiveness of strategies.
Definition 3 (Good Strategies). A strategy is good if a cooperator gets an expected payoff π Y = R as its opponent.
Definition 4 (Bad Strategies). A strategy is bad if a cooperator gets an expected payoff π Y = S as its opponent.
Definition 5 (Ugly Strategies). A strategy is ugly if a cooperator gets an expected payoff S < π Y < R as its opponent.
Intuitively, good strategies tend to be friendly to an innocent opponent, while bad strategies are determined to exploit the same opponent. It is straightforward to compare the different groups of strategies arising from the nice-cautious and good-bad-ugly classifications. For instance, a nice strategy, and hence a partner, is always a good strategy. According to [21], successful strategies from the perspective of evolution are either partners or rivals. We can draw a similar conclusion given Definitions 3, 4, and 5: successful individuals are either good or bad.
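For memory-one strategies, the good/bad/ugly label can be computed directly from how the strategy behaves against an unconditional cooperator. The sketch below is a minimal illustration under assumptions of our own: strategies are encoded as cooperation probabilities (p_CC, p_CD, p_DC, p_DD), the payoffs R = 3 and S = 0 are conventional values, the first-round cooperation probability must be supplied, and the Win-Shift, Lose-Stay encoding shown is our reading of that strategy; the paper's exact strategy encodings are not reproduced here.

```python
def payoff_vs_allc(p, first_move_coop=1.0, R=3.0, S=0.0, horizon=10_000):
    """Long-run average payoff of ALLC against a memory-one strategy
    p = (p_CC, p_CD, p_DC, p_DD), where p_XY is the probability that the
    focal player cooperates after it played X and the opponent played Y.
    Against ALLC only p_CC and p_DC matter."""
    p_cc, _, p_dc, _ = p
    x, total = first_move_coop, 0.0
    for _ in range(horizon):
        total += x
        x = x * p_cc + (1.0 - x) * p_dc   # cooperation probability next round
    coop = total / horizon                # time-averaged cooperation rate
    return coop * R + (1.0 - coop) * S    # cooperator's expected payoff pi_Y

def classify(p, first_move_coop=1.0, R=3.0, S=0.0, tol=1e-3):
    pi_y = payoff_vs_allc(p, first_move_coop, R, S)
    if pi_y >= R - tol:
        return "good"
    if pi_y <= S + tol:
        return "bad"
    return "ugly"

print(classify((1, 0, 0, 1)))                       # WSLS -> good
print(classify((0, 0, 0, 0), first_move_coop=0.0))  # ALLD -> bad
print(classify((0, 1, 1, 0)))                       # Win-Shift, Lose-Stay -> ugly
```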
C. Evolutionary dynamics
We use a network-based approach to analyze and characterize networks of evolutionary pathways that bridge transient episodes of evolution dominated by depressing defection and ultimately catalyze the evolution of reciprocal cooperation in the long run. In our model, individuals play against each other using their prescribed IPD strategies (there are m of them) and obtain an average payoff π_i from their interactions. These payoffs further determine reproductive fitness, an increasing function of payoff governed by the selection strength β. To study evolutionary competition, we use a Moran process in a population of finite size N: an individual is chosen to reproduce an offspring with probability proportional to its fitness. With probability µ, a mutation occurs and the offspring randomly chooses one of the m available strategies. With probability 1 − µ, the offspring is identical to the parent. This newly produced offspring replaces another individual chosen at random from the entire population.
In the limit of rare mutations where µ → 0, the fate of a mutant is determined, either reaching fixation or going extinct, before the next new mutant arises. Under this assumption, the evolutionary dynamics in finite populations dwell on a homogeneous population state most of the time, followed by stochastic transitions from one state to another. The transition between any two population states is determined by the pairwise invasion dynamics (Figure 2). The transition rate can be calculated using the fixation probability ρ_ij, the probability that a population of strategy i-players is invaded and taken over by a single strategy j-player (Figure 2b). Assuming the payoff matrix for i versus j is as in (1), the ratio of fixation probabilities ρ_ij/ρ_ji admits a closed-form expression in terms of the payoff entries, the population size N, and the selection strength β. Moreover, the long-term equilibrium of the full evolutionary dynamics for m strategies can be studied analytically using the approximation method of an embedded Markov chain.
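The numbers in Figure 2b can be reproduced, up to the unspecified payoff-to-fitness mapping, with the standard birth–death formula for a frequency-dependent Moran process. The sketch below assumes an exponential mapping f = exp(βπ), a common convention when a selection strength β is quoted; the paper's exact choice is not shown, so this mapping is an illustrative assumption.

```python
import math

def fixation_probability(a, b, c, d, N=100, beta=0.1):
    """Probability that a single mutant of strategy j fixes in a resident
    population of strategy i, given the average payoff matrix
    i vs i = a, i vs j = b, j vs i = c, j vs j = d.
    Assumes a Moran process with exponential fitness f = exp(beta * payoff)."""
    total, prod = 1.0, 1.0
    for k in range(1, N):                                  # k mutants present
        pi_mut = (c * (N - k) + d * (k - 1)) / (N - 1)     # mutant's average payoff
        pi_res = (a * (N - k - 1) + b * k) / (N - 1)       # resident's average payoff
        prod *= math.exp(-beta * (pi_mut - pi_res))        # ratio T_k^- / T_k^+
        total += prod
    return 1.0 / total

# Sanity check: neutral drift gives 1/N
print(fixation_probability(1, 1, 1, 1, N=100, beta=0.1))   # -> 0.01
```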
Theorem 1 (Imitation Processes with Small Mutations). The m × m Markov matrix over the m homogeneous population states is determined by the transition rates between pairs of strategies: for i ≠ j, the ijth component is proportional to the fixation probability ρ_ij of a single j-mutant in a resident i-population, with the diagonal entries fixed so that each row sums to one. In addition, this Markov chain admits a unique stationary distribution v, whose ith component v_i is the abundance of strategy i after the system becomes stable.
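A minimal numerical reading of Theorem 1 is sketched below, under the standard embedded-chain construction with uniform mutations (our assumption for the off-diagonal rates); the ρ values in the example are made up purely for illustration.

```python
import numpy as np

def stationary_abundances(rho):
    """Long-run abundances of m strategies in the small-mutation limit.
    rho[i][j] is the fixation probability of a single j-mutant in a resident
    i-population (diagonal entries are ignored). Off-diagonal transition
    rates are taken as rho_ij / (m - 1), i.e. mutants choose uniformly among
    the other strategies -- an illustrative reading of Theorem 1."""
    rho = np.asarray(rho, dtype=float)
    m = rho.shape[0]
    L = rho / (m - 1)
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, 1.0 - L.sum(axis=1))   # rows of the Markov matrix sum to one
    eigvals, eigvecs = np.linalg.eig(L.T)      # left eigenvector with eigenvalue 1
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return v / v.sum()

# Toy example with three strategies (fixation probabilities are made up):
rho_demo = [[0.0, 0.002, 0.020],
            [0.030, 0.0, 0.002],
            [0.002, 0.030, 0.0]]
print(stationary_abundances(rho_demo))
```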
We use Theorem 1 to compute the stationary distribution of IPD strategies in the limit of a small mutation rate, as shown in the Results section.
Fig. 2. Pairwise competition dynamics between common IPD strategies. We consider a representative set of 110 strategies that have been studied in the literature. We first classify the nature of the overall game dynamics between any pair of two IPD strategies into five typical scenarios: dominance, bistability, coexistence, neutrality, and others that do not fit into the former four types. We then calculate the fixation probability ρ_ij of one single individual with strategy j taking over a resident population playing strategy i, the so-called pairwise invasion dynamics. These calculations help us understand the performance of strategies from an evolutionary perspective. Model parameters: population size N = 100, and selection strength β = 0.1.
We can now consider the m strategies as network nodes and define a directed edge from strategy i to strategy j if the latter has the greater fixation probability, with the edge weight given by ρ_ij/ρ_ji, and vice versa. In this way, we obtain a weighted network consisting of different IPD strategies that allows further identification of catalysts and bridges. We search exhaustively for ugly strategies, and for transitions from or to ugly strategies, that boost the abundance of good strategies and suppress that of bad strategies. There can also exist cycles in this directed network: closed directed loops of different IPD strategies that give rise to cyclic population dynamics of persistent cooperation in a Rock-Paper-Scissors manner.
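One way to materialize this construction is sketched below with networkx; the node names are placeholders rather than real IPD strategies, the fixation probabilities are invented, and the paper's exact tie-breaking and centrality weighting are not specified, so those details are simplified here.

```python
import networkx as nx

def build_pathway_network(rho, names):
    """Directed, weighted network of evolutionary pathways: an edge i -> j is
    added when a j-mutant is more likely to take over i than vice versa
    (rho[i][j] > rho[j][i]), with weight rho_ij / rho_ji. Tie handling and
    zero-probability cases are simplified in this sketch."""
    G = nx.DiGraph()
    G.add_nodes_from(names)
    m = len(names)
    for i in range(m):
        for j in range(m):
            if i != j and rho[i][j] > rho[j][i] > 0:
                G.add_edge(names[i], names[j], weight=rho[i][j] / rho[j][i])
    return G

# Toy fixation probabilities for three placeholder strategies A, B, C
rho = [[0.0, 0.002, 0.020],
       [0.030, 0.0, 0.002],
       [0.002, 0.030, 0.0]]
G = build_pathway_network(rho, ["A", "B", "C"])

print(list(nx.simple_cycles(G)))        # closed loops, e.g. Rock-Paper-Scissors cycles
print(nx.betweenness_centrality(G))     # candidate "bridge" nodes (unweighted here)
```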
III. RESULTS
Due to the symmetry in which good strategies and ALLC each get an average payoff R, whether a good strategy can be favored over ALLC in the pairwise competition dynamics is determined by their self-cooperation levels. As such, the presence of good strategies is pivotal to determining the fate of ALLC. Good strategies can thus be viewed as allies of ALLC, bad strategies as suppressors of ALLC, and ugly strategies as lying between the two extremes.
To characterize and quantify the long-term collective success of good strategies as compared to their bad and ugly counterparts, we study the evolutionary dynamics with a pool of prescribed IPD strategies under rare mutations. In this limit, the population spends most of the time in homogeneous states, and the fate of any new mutant, either fixation or extinction, is determined before the next mutant arises in the population. To gain analytical insights, we first identify the type of game interaction between each pair of IPD strategies, as shown in Figure 2a. We distinguish dominance (43% of pairs), bistability (23%), coexistence (9%), and neutrality (24%), along with a few others that do not fall into these four types. Remarkably, neutrality takes up a substantial proportion. The pairwise fixation probabilities are shown in Figure 2b (noting the dependence on selection strength β and population size N). Under rare mutations, we are able to analytically calculate the stationary distribution for any given set of IPD strategies under consideration (see Methods and Model section). We confirm a good agreement between analytical results and agent-based simulations.
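The agent-based side of this comparison can be sketched as a straightforward Moran simulation with mutation; the population size, selection strength, mutation rate, and exponential fitness mapping below are illustrative choices rather than the paper's exact settings.

```python
import math, random

def moran_simulation(payoff, N=100, beta=0.1, mu=0.001, steps=50_000, seed=1):
    """Time-averaged strategy abundances in a Moran process with mutation.
    payoff[i][j] is strategy i's average IPD payoff against strategy j.
    Uses fitness f = exp(beta * payoff) (an illustrative assumption)."""
    rng = random.Random(seed)
    m = len(payoff)
    pop = [rng.randrange(m) for _ in range(N)]
    counts = [0] * m
    for _ in range(steps):
        n = [pop.count(s) for s in range(m)]           # current strategy counts
        fitness = [
            math.exp(beta * sum(payoff[s][t] * (n[t] - (t == s)) for t in range(m)) / (N - 1))
            for s in pop
        ]
        parent = rng.choices(range(N), weights=fitness)[0]   # birth proportional to fitness
        child = rng.randrange(m) if rng.random() < mu else pop[parent]
        pop[rng.randrange(N)] = child                        # uniform random death
        for s in range(m):
            counts[s] += n[s]
    total = sum(counts)
    return [c / total for c in counts]
```

Feeding in the same average payoff matrices used for the analytical calculation and comparing the returned abundances with the stationary distribution provides the kind of cross-check mentioned above.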
We use a directed weighted network to capture all possible evolutionary pathways between any pair of IPD strategies. Each edge can be distinguished by the type of game interaction, as shown in Figure 3, and its weight is further given by the ratio of fixation probabilities. The neutrality games, especially between good IPD strategies (Figure 3d), provide an escape hatch for sustaining cooperation and also increase resilience against perturbations such as invasion attempts by bad strategies.
Fig. 3. Network of evolutionary pathways between IPD strategies; edge types correspond to the game types in Figure 2a. This directed and weighted network is constructed based on the pairwise fixation probabilities in Figure 2b. Node color denotes the type of individual node. To visualize with clarity, directionality is not displayed with arrows but using curved edges, which can be read clockwise pointing from a source node to a target node. See main text for details.
Our network-based approach works for scenarios involving arbitrarily many IPD strategies. In order to get a clear-cut view with an intuitive understanding, we focus on a small sample of 25 IPD strategies, which are either memory-zero (3) or memory-one (22) yet contain the most prominent ones such as TFT, WSLS, and ZD strategies. We use pairwise fixation probabilities (Figure 4a) to compute the stationary abundances of strategies (Figure 4b), and the aggregate abundances of good, bad, and ugly strategies are given in the inset (Figure 4b). We see that good strategies are collectively more successful than the other two types. The network containing all possible pairwise evolutionary pathways is visualized in Figure 4c. The weight of each edge is given by the ratio of fixation probabilities, pointing from the disfavored strategy to the favored one. In the case of two strategies with equal fixation probabilities, the associated edge is bidirectional with weight one. This network will help us further identify catalysts and bridges pivotal to the evolutionary success of good strategies.
Fig. 4. Evolutionary dynamics for the sample of 25 IPD strategies. The network snapshot is shown in (c): node color corresponds to the type, node size is proportional to the abundance of the individual node, and edge color denotes the competition type, the same as in Figure 3a. To visualize with clarity, directionality is not displayed with arrows but using curved edges, which can be read clockwise pointing from a source node to a target node. Model parameters: population size N = 100, and selection strength β = 0.1.
In Figure 5a, we find that Win-Shift, Lose-Stay, an ugly strategy, is the most important catalyst for the evolution of good strategies for varying selection strengths and across different measures. The extortionate ZD strategy (with an extortion factor χ = 2) is the second most important catalyst, followed by the Alternator (an IPD strategy alternating between cooperating and defecting). The most crucial evolutionary pathway, a particular edge in the network, depends also on the specific selection strength β and the measure used. For most scenarios (Figure 5b), the directed edge from ALLC to Win-Shift, Lose-Stay is the critical bridge, the presence of which can significantly boost the abundance of good strategies and suppress that of bad strategies while having the highest betweenness centrality among ugly strategies.
IV. DISCUSSION AND CONCLUSION
Our results demonstrate the importance of the exact composition of the strategy set when studying natural selection (including but not limited to evolutionary stability and resilience) of particular IPD strategies in population dynamics. Our network-based approach can be used to evaluate and assess the impact of including or excluding an individual IPD strategy on the evolution of good strategies and on the overall cooperation level. Moreover, our method can be applied to steer population dynamics toward desired states by adding one or more additional catalysts (and forming bridges that act as evolutionary "ramps") that are amenable to external control. This potential extension is an important insight arising from the present study.
In this work, we consider error-free IPD games where players perfectly implement their intended moves and perfectly observe others' moves. However, noisy games where trembling hands or fuzzy minds are at play are worthy of further investigation [32]. It remains an open problem how intelligent players discern intentional deception from innocent errors and mistakes. In particular, the mechanism by which they find common ground notwithstanding different goals and perceptions of fairness is an important and promising area for future work [33].
The present study focuses on fixation probabilities in stochastic dynamics. We emphasize that the evolutionary time scale is another important quantity that should be taken into account. Some evolutionary pathways from one type to another are only possible in theory: the probability of taking over (the fixation probability) is non-zero, but the expected conditional time for such fixation to occur can be exponentially long under certain circumstances. For example, the fixation time can tend to infinity when the interaction is a snowdrift game, a particular type of coexistence, and the selection strength β is non-weak [34]. In light of this, the identification of bridges and catalysts needs to take the requirement of reasonable fixation time scales into account as well. The prior observation of stochastic tunneling, in which a third type swiftly takes over a protracted mixture of two competing types, could be quite useful in refining the search for the optimal presence of catalysts [35].
In the well-studied evolutionary dynamics between ALLC and ALLD, where ALLC can never be favored, adding a third type such as TFT or Loners can fundamentally alter the underlying evolutionary dynamics [36], possibly leading natural selection to favor ALLC over ALLD. Our work generalizes this prior insight to multiple IPD strategies and offers a novel perspective of intervention and control. A targeted suppression and/or promotion of certain subgroups of strategies can be achieved by adding or removing certain types of strategies from the pool. Our work highlights that the dependence of natural selection on the presence or absence of certain strategies is nontrivial and has applications to steering and control problems in a broader context [37].
In conclusion, we have characterized and compared competing IPD strategies using a network-based approach. Our method can be used to identify essential IPD strategies (e.g., Win-Shift, Lose-Stay) and transitions between strategies (e.g., ALLD to Win-Shift, Lose-Stay) that act as catalysts and bridges for recovering cooperation from defection; their presence thus plays an important role in the robustness and persistence of cooperation. Our findings bear interesting similarities to previous studies on the ecological stability and resilience of food webs [28]-[30].
ACKNOWLEDGMENT
X.C. gratefully acknowledges the generous faculty startup fund provided by BUPT (No. 505022023).
|
2023-07-03T06:42:51.759Z
|
2023-06-30T00:00:00.000
|
{
"year": 2023,
"sha1": "0f5daf323c62192ad97215966908431fe06a6395",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0f5daf323c62192ad97215966908431fe06a6395",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Physics",
"Biology"
]
}
|
56479733
|
pes2o/s2orc
|
v3-fos-license
|
Sepsis in Intensive Care Unit Patients: Worldwide Data From the Intensive Care over Nations Audit
Abstract Background There is a need to better define the epidemiology of sepsis in intensive care units (ICUs) around the globe. Methods The Intensive Care over Nations (ICON) audit prospectively collected data on all adult (>16 years) patients admitted to the ICU between May 8 and May 18, 2012, except those admitted for less than 24 hours for routine postoperative surveillance. Data were collected daily for a maximum of 28 days in the ICU, and patients were followed up for outcome data until death, hospital discharge, or for 60 days. Participation was entirely voluntary. Results The audit included 10069 patients from Europe (54.1%), Asia (19.2%), America (17.1%), and other continents (9.6%). Sepsis, defined as infection with associated organ failure, was identified during the ICU stay in 2973 (29.5%) patients, including in 1808 (18.0%) already at ICU admission. Occurrence rates of sepsis varied from 13.6% to 39.3% across regions. Overall ICU and hospital mortality rates were 25.8% and 35.3%, respectively, in patients with sepsis, but these rates varied from 11.9% and 19.3% (Oceania) to 39.5% and 47.2% (Africa), respectively. After adjustment for possible confounders in a multilevel analysis, independent risk factors for in-hospital death included older age, higher simplified acute physiology II score, comorbid cancer, chronic heart failure (New York Heart Association Classification III/IV), cirrhosis, use of mechanical ventilation or renal replacement therapy, and infection with Acinetobacter spp. Conclusions Sepsis remains a major health problem in ICU patients worldwide and is associated with high mortality rates. However, there is wide variability in the sepsis rate and outcomes in ICU patients around the globe.
The audit was observational and anonymous. Of the 730 ICUs contributing to the study from 84 countries (see the participants list in Appendix 1), 419 (57.4%) were in university/academic hospitals. The organizational characteristics of these centers have been described previously [10].
Each ICU was asked to prospectively collect data on all adult (>16 years) patients admitted to their ICU between May 8 and May 18, 2012, except those who stayed in the ICU for <24 hours for routine postoperative surveillance. Readmissions of previously included patients were not included. Data were collected daily for a maximum of 28 days in the ICU. Outcome data were collected at the time of ICU and hospital discharge or at 60 days. Data were entered anonymously using electronic case report forms via a secured internet-based website. Data collection on admission included demographic data and comorbidities. Clinical and laboratory data for simplified acute physiology (SAPS) II [11] and Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II [12] scores were reported as the worst values within the first 24 hours after admission. A daily evaluation of organ function was performed according to the sequential organ failure assessment (SOFA) score [13]; organ failure was defined as a SOFA subscore >2 for the organ in question. Clinical and microbiologic infections were reported daily as well as antimicrobial therapy.
Infection was defined according to the criteria of the International Sepsis Forum [14]. Sepsis was defined as the presence of infection with associated organ failure [15]. Septic shock was defined as sepsis associated with cardiovascular failure requiring vasopressor support (SOFA cardiovascular of 3 or 4). Intensive care unit-acquired infection was defined as infection identified at least 48 hours after ICU admission. Non-ICU acquired infection was defined as infection present on admission or within the first 48 hours after ICU admission. Only the first episode of infection was considered in the analysis.
Detailed instructions and definitions were available through a secured website for all participants before starting data collection and throughout the study period. Any additional queries were answered on a per case basis. Validity checks were made at the time of electronic data entry, including plausibility checks within each variable and between variables. Data were further reviewed by the coordinating center for completeness and plausibility, and any doubts were clarified with the participating center. There was no on-site monitoring. We did not attempt to verify the pathogenicity of the microorganisms, including the relevance of Staphylococcus epidermidis or the distinction between colonization and infection.
For the purposes of this audit, we divided the world into 8 geographic regions: North America, South America, Western Europe, Eastern Europe, South Asia, East and Southeast Asia, Oceania, and Africa. Individual countries were also classified into 3 income groups according to the 2011 gross national income (GNI) per capita, calculated using the World Bank Atlas method [16]: GNI <$4035 = low and lower middle income; GNI $4036-12 475 = upper middle income; and GNI >$12 476 = high income.
Statistical Analysis
Data are shown as means with standard deviation (SD), mean and 95% confidence intervals (CIs), medians and interquartile ranges (IQRs), numbers, and percentages. Differences between groups in distribution of variables were assessed using analysis of variance, Kruskal-Wallis test, Student's t test, Mann-Whitney test, χ 2 test, or Fisher's exact test as appropriate.
To identify the risk factors associated with in-hospital mortality in septic patients, we used a 3-level multilevel technique with the structure of an individual patient (level 1) admitted to a hospital (level 2) within a country (level 3). The explanatory variables considered in the model were as follows:
• Individual-level factors: age, sex, SAPS II score, type of admission, source of admission, mechanical ventilation or renal replacement therapy at any time during the ICU stay, comorbidities, onset of infection, site of infection, and the most common microorganisms
• Hospital-level factors: type of hospital; ICU specialty; total number of ICU patients in 2011; number of staffed ICU beds
• Country-level factors: GNI
Individual-level variables to be included in the final model were selected on the basis of a multilevel model including country-level and hospital-level factors and each of the individual-level factors; variables with P < .2 were considered in the final model. Collinearity between variables was checked by inspection of the correlation between them, by looking at the correlation matrix of the estimated parameters, and by looking at the change of parameter estimates and at their estimated standard errors (SEs) [17]. Q-Q plots were drawn to check for normality in the residuals. The results of fixed effects (measures of association) are given as odds ratios (ORs) with their 95% CIs and the 80% interval OR. Random effects (measures of variation) included the variance (var) and its SE and the median OR. The statistical significance of covariates was calculated using the Wald test.
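As a rough illustration of how such adjusted odds ratios are obtained, the sketch below fits a plain single-level logistic regression on simulated data with statsmodels; it deliberately ignores the hospital- and country-level random effects of the actual 3-level model, and all variable names, coefficients, and data are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated patient-level data (purely illustrative; not the ICON dataset)
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age":       rng.normal(62, 15, n),
    "saps2":     rng.normal(45, 15, n),
    "cirrhosis": rng.binomial(1, 0.07, n),
    "mech_vent": rng.binomial(1, 0.60, n),
})
logit = -6 + 0.02 * df["age"] + 0.06 * df["saps2"] + 0.9 * df["cirrhosis"] + 0.8 * df["mech_vent"]
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Single-level logistic regression (the study itself used a 3-level multilevel model)
X = sm.add_constant(df[["age", "saps2", "cirrhosis", "mech_vent"]])
fit = sm.Logit(df["died"], X).fit(disp=0)

summary = pd.DataFrame({
    "OR": np.exp(fit.params),               # adjusted odds ratios
    "CI 2.5%": np.exp(fit.conf_int()[0]),   # lower bound of the 95% CI
    "CI 97.5%": np.exp(fit.conf_int()[1]),  # upper bound of the 95% CI
})
print(summary.round(3))
```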
Data were analyzed using IBM SPSS Statistics software, version 22 for Windows and R software, version 2.0.1 (CRAN project). All reported P values are 2-sided, and P < .05 was considered to indicate statistical significance. The results of fixed effects are given as OR with 95% CIs.
Characteristics of the Study Group
A total of 10 069 patients were included in the audit; 2973 patients (29.5%) had sepsis, including 1808 (18.0%) with sepsis at admission to the ICU (Figure 1). In the whole cohort, antimicrobials were given to 5975 (59.3%) patients during their ICU stay. Patients with sepsis were older, had higher severity scores on admission to the ICU, had more comorbidities, and were more commonly receiving mechanical ventilation and renal replacement therapy on admission to the ICU than patients without sepsis (Table 1). Patients with sepsis also had more organ failures than the other patients (median [IQR]: 3 [1-4] vs 1 [0-2] organs; P < .001).
Patterns of Infections
The most common source of sepsis was the respiratory tract (67.4%) followed by the abdomen (21.8%) (Supplementary Table E1). Positive isolates were retrieved in 69.6% (n = 2069) of patients with sepsis; two thirds of these patients had Gram-negative microorganisms isolated and half had Gram-positive microorganisms; 1068 (51.6%) of the sepsis patients with positive isolates had more than 1 microorganism isolated. Patients with ICU-acquired infections (n = 764) were younger, more likely to be surgical admissions, and had lower severity scores on admission (Table 3 and Supplementary Table E2). Respiratory and catheter-associated infections were more frequent and abdominal infections less frequent in patients with ICU-acquired than in those with non-ICU-acquired infections (Supplementary Table E2). Patients with ICU-acquired infections were more likely to have positive isolates than patients with non-ICU-acquired infections (79.5% vs 66.2%, P < .001) (Supplementary Table E3).
Although patients with ICU-acquired sepsis had longer ICU stays than those who had sepsis within 48 hours of admission to the ICU, they did not have higher mortality rates ( Table 3). The crude risk of in-hospital death was higher in patients with infections caused by Pseudomonas spp, Acinetobacter spp, and fungi (Table 4). In the multilevel analysis, independent risk factors for in-hospital death in patients with sepsis were older age, higher SAPS II score, cirrhosis, metastatic cancer, chronic heart failure (NYHA III/IV), use of mechanical ventilation or renal replacement therapy at any time during the ICU stay, and infection with Acinetobacter spp (Supplementary Table E4). The use of mechanical ventilation and presence of comorbid cirrhosis more than doubled the risk of death. The relative risk of death was higher in patients admitted to ICUs in countries with upper middle GNI than in those with high GNI (1.77 [1.31-2.39], P < .001). However, although the model suggested significant between-hospital variation (var = 0.28, P = .001) in the individual risk of in-hospital death, the between-country variation was not significant.
DISCUSSION
The present audit confirms the considerable burden that sepsis presents in modern ICUs. This large study, including more than 10 000 patients from 730 ICUs, indicates that approximately 30% of all ICU patients have sepsis, as defined by the presence of infection and organ dysfunction. This percentage is identical to that (29.5%) reported in the earlier Sepsis Occurrence in Acutely Ill Patients (SOAP) study [1], a large European study that used the same methodology, and in a recent analysis of a large United Kingdom database [18], but somewhat higher than in some other large studies [4,5,19,20]. In addition to possible differences associated with different definitions of sepsis used in the various studies, 2 other major elements may account for these apparent inconsistencies. First, we did not include all patients admitted to the ICU, but only critically ill patients, excluding patients admitted to the ICU for postoperative surveillance without complications. Second, some studies focused on admission data [20]; if we consider only the patients who had sepsis on admission in our study, the rate of sepsis was 18%. More importantly, the percentage of ICU patients with sepsis varied around the globe, with particularly high rates in East and Southeast Asia, confirming the high disease burden in this area [21,22]. Although these data were collected in 2012, we believe they are still relevant, especially given the general lack of global data in this regard.
A strength of the present study compared with studies assessing only sepsis on admission or prevalence studies (eg, EPIC II [2]) is that patients were followed throughout the ICU course, enabling evaluation of sepsis that developed during the ICU stay as well as sepsis present on admission. It is interesting to note that patients with ICU-acquired sepsis had similar outcomes to those of patients with sepsis on admission, and ICU-acquired sepsis was not independently associated with a higher risk of mortality after adjusting for confounders in the multilevel analysis. Although we were unable to assess this specifically, van Vught et al [23] recently reported a low attributable mortality of ICU-acquired infections. Shankar-Hari et al [24] reported that the inferred causal link between sepsis and long-term mortality was significantly confounded by age, comorbidity, and preacute illness trajectory. More importantly, in our multivariable regression analysis, all the above-mentioned factors were found to be significant determinants of mortality, suggesting that ICU-acquired sepsis may not on its own be a causative factor for mortality. Nevertheless, nosocomial infections are responsible for prolonged stays in the ICU and increased costs [25,26].
Positive isolates were obtained in 70% of the patients with sepsis, a similar finding to that reported in other studies [1,19,27,28]. Two thirds of these patients had Gram-negative organisms isolated and one half had Gram-positive organisms isolated. The most common Gram-negative microorganisms recovered were E coli, Klebsiella spp, Pseudomonas spp, and Acinetobacter spp, as in previous studies [1,27,28]. It is interesting to note that Gram-positive organisms were more common in North America than in other parts of the world; MRSA was also more common in North America than in other parts of the world except the Middle East. These findings are important when using guidelines for management of infection and sepsis, because guidelines developed in one part of the world, for example North America, may not be relevant to other areas. The results also underline the ongoing importance of fungal infections, which were involved in 13% of cases of sepsis overall, although the frequency was lower in the United States (5%), perhaps because more stringent criteria are used to characterize fungal infections in the United States. Finally it is noteworthy that approximately 42% of patients without sepsis received antimicrobial agents. The reasons for this are unclear, but antimicrobials may still be prescribed despite sepsis resolution or exclusion. In a retrospective analysis of 269 patients who were diagnosed with suspected sepsis in the emergency department and started on antibiotic therapy, 29% of the patients were found not to have bacterial disease, but the median duration of antibiotics in these patients was still 7 days (IQR, 4-10) [29].
Intensive care unit mortality rates in patients with sepsis were approximately 26% and were twice as high as those in nonseptic patients. This percentage is lower than the 32% observed in the SOAP study (using their "severe sepsis" definition that is equivalent to our current definition of sepsis) [1] and in other studies [1,19,27,28]. Intensive care unit mortality rates in patients with septic shock were approximately 35%, a percentage that is also lower than that reported in earlier studies [1,5]. Increased awareness of sepsis diagnosis and improved early management may have contributed to improved outcomes over time. Mortality rates varied around the globe, but in multivariable analysis, the between-country variation was not significant. These findings are in contrast to those from the International Multicenter Prevalence Study on Sepsis (IMPreSS) study of 1794 patients with sepsis from 62 countries, in which mortality rates were higher in East Europe and Central/South America compared with North America after adjustment for ICU admission, sepsis status, location of diagnosis, origin of sepsis, APACHE II score, and country [30].
As expected, nonsurvivors were older and had more comorbidities. As in previous ICU studies [1,2], Pseudomonas and fungal infections were associated with worse outcomes, although only Acinetobacter infection was an independent predictor of hospital death in the multilevel analysis. More importantly, our data do not imply a cause-effect relationship, and the presence of Acinetobacter may simply be a marker of severity. In a systematic review of 6 matched case-control and cohort studies, Falagas et al [31] reported that Acinetobacter infection was associated with increased attributable mortality, although others have suggested no independent link between Acinetobacter infection and increased risk of death [32].
Mechanical ventilation at any time during the ICU stay and pre-existing liver cirrhosis were also important prognostic factors, more than doubling the risk of death. Use of renal replacement therapy at any time during the ICU stay was also associated with increased mortality. We also identified significant between-center variation, suggesting that differences in local ICU organization may have an impact on outcomes of patients with sepsis. Some of the potential factors associated with between-center outcomes differences have been identified in the literature. In an international cohort of 13 796 ICU patients, Sakr et al [33] reported that a high nurse/patient ratio was independently associated with a lower risk of in-hospital death. Gaieski et al [34] reported that sepsis outcomes were improved in centers with higher sepsis case volumes. In a multicenter study in Canada, Yergens et al [35] reported that ICU occupancy >90% was associated with an increase in hospital mortality in patients with sepsis admitted from the emergency department. We are unable to identify which particular organizational factors may have influenced outcomes from our data, and this is an area that needs further study.
Our database was very large, including considerable data on demographics, organ function, and outcomes. Nevertheless, to successfully collect a large amount of data in many ICUs requires some limitations in the level of detail of the collected data; therefore, we did not collect precise information on all subtypes of microorganisms or their resistance patterns or on the appropriateness of antimicrobial coverage. Moreover, data were collected by ICU doctors or research nurses who may not have specific expertise in infectious diseases, although the significance of this is uncertain. Our study has other limitations. First, although the audit included a large number of ICUs, the purely voluntary nature of the participation may have an impact on the representativeness of the data. Second, data collection was not monitored so small errors could not be corrected; only obvious incongruous data were verified. Third, in some countries, identification of microorganisms may have been incomplete because of the limited availability of microbiological testing. Moreover, the quality of the antimicrobials used in the treatment of infection has also been questioned in low-resource countries [36]. Fourth, there was no means of differentiating between colonization and infection for some organisms, including Acinetobacter and coagulase-negative staphylococci. Therefore, microorganisms were weighted equally in the multilevel analysis. The absence of comparative large epidemiologic data that address this issue makes it difficult to judge whether the estimates of microorganisms provided in our study overestimate the frequency of these infections. Fifth, data were collected for the same period in all regions and therefore do not take into account any possible influence of seasonal variation. Sixth, we did not use the exact recent Sepsis-3 definitions [37], which were published after our study, partly because we had no data on the evolution of SOFA scores before ICU admission and blood lactate levels were not available in all patients. Nevertheless, we used a definition based on the presence of organ dysfunction, a key feature of Sepsis-3. Finally, despite adjusting for a large number of variables that may influence outcome, the results of the multilevel analysis could not take into account other unmeasured variables that may have been of potential significance.
CONCLUSIONS
Sepsis, as defined by infection with organ dysfunction, remains a major health problem in ICU patients worldwide, associated with high mortality. There is wide variation in sepsis rates, causative microorganisms, and outcome in ICU patients around the world. A history of liver cirrhosis or metastatic cancer, use of mechanical ventilation or renal replacement therapy, and Acinetobacter infection were independently associated with an increased risk of in-hospital death. Global epidemiological data such as these help increase awareness of sepsis and provide crucial information for future healthcare planning. Further studies in this field should be done on a regular basis with standardized methodology to ensure the comparability of the results.
Supplementary Data
Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
|
2018-12-18T14:04:07.366Z
|
2018-11-19T00:00:00.000
|
{
"year": 2018,
"sha1": "1e6432611c303224938da895ee5439fd5fb44d2c",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/article-pdf/5/12/ofy313/33577612/ofy313.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e6432611c303224938da895ee5439fd5fb44d2c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
58649402
|
pes2o/s2orc
|
v3-fos-license
|
ENHANCING STUDENT’S VOCABULARY BY USING JUMBLED-LETTER GAME IN ENGLISH LANGUAGE TEACHING
This paper aims at discussing ways to enhance students' vocabulary by using the jumbled-letter game. For some language teachers, teaching vocabulary is challenging, especially in the English Language Teaching classroom. Nowadays, the teacher should provide vocabulary teaching that avoids vocabulary list memorization or vocabulary translation. Besides, the teacher should also consider the students' differing abilities to master vocabulary. Some language students may master new vocabulary faster than others, and some may find many difficulties in mastering new vocabulary. On the other side, some students may master or memorize some vocabulary but cannot spell the words correctly.
A. Background
For some language teachers, teaching vocabulary is challenging, especially in the English Language Teaching classroom. Nowadays, the teacher should provide vocabulary teaching that avoids vocabulary list memorization or vocabulary translation. Besides, the teacher should also consider the students' differing abilities to master vocabulary. Some language students may master new vocabulary faster than others, and some may find many difficulties in mastering new vocabulary. On the other side, some students may master or memorize some vocabulary but cannot spell the words correctly.
Although teaching vocabulary is now less emphasized due to the communicative approach in language teaching, it cannot be neglected that without a large vocabulary, the language learner would face obstacles in mastering the language being learned. Vocabulary is one of the components of language that may help language learners acquire the target language. Due to this condition, the teacher should find appropriate methods or techniques to teach vocabulary to the students.
Apparently, there are many ways to teach vocabulary. In conventional teaching, some teachers usually give a list of vocabulary to be memorized or a list of vocabulary in both the native and the target language (L2). However, this method of vocabulary teaching is assumed to burden language students with memorizing a list of unused vocabulary, because the students only focus on memorizing the vocabulary without knowing how and when to use it appropriately in daily speaking. Another way to teach vocabulary that has been proposed by some language teachers is using language games. Language games are games used to help both teacher and learner to teach and learn the target language. It is believed that applying language games in language teaching, especially in vocabulary teaching, may improve the students' ability to acquire the language.
It has been shown that language games have advantages and effectiveness in learning vocabulary in various ways. First, language games bring relaxation and fun for students, thus helping them learn and retain new words more easily. Second, language games usually involve friendly competition and keep students interested. These create the motivation for students to get involved and participate actively in the learning activities. Third, vocabulary games bring real-world context into the classroom and enhance students' use of English in a flexible, communicative way (Huyen and Nga, 2003).
One of the language games that can be used to teach vocabulary to language learners is the Jumbled-Letter Game. The Jumbled-Letter Game is a language game that requires the learner to arrange alphabet letters into a specific target-language word given by the teacher. By using this kind of language game, the teacher can avoid memorization of vocabulary lists, and the students may master vocabulary faster. One of the advantages of the Jumbled-Letter Game is that it can be applied at any language level as long as the language teaching requires the students to master vocabulary. In addition, the use of the Jumbled-Letter Game in teaching vocabulary may overcome the students' difficulty in spelling words correctly.
The purpose of this article is to discuss the use of the Jumbled-Letter Game in vocabulary teaching in the English Language Teaching classroom. The underlying theories about language games and the application of this game in the classroom are provided below.
B. Discussion
1. Language Games
a. The Concept of Language Games in Language Teaching
There is a common perception that all learning should be serious and solemn in nature, and if one is having fun and there is hilarity and laughter, then it is not really learning. This is a misconception. It is possible to learn a language as well as enjoy oneself at the same time. One of the best ways of doing this is through language games.
According to Wittgenstein (as cited by Xanthos, 2006), language games refer to games which enable language learners to learn the language. It means that the learner may use certain games as media to learn the language. Cross (1992) and Martin (2000) said that language games are effective teaching tools and have many positive aspects, such as creating a relaxed, friendly, and cooperative environment. McCabe (1992) defined a language game as an activity to be repeated by two or more players in language teaching. The repetition enables students to communicate effectively, since playing language games helps children develop their language learning and thought.
Ersoz (2000) said that language games are highly motivating because they are amusing and interesting. They can be used to give practice in all language skills and to practice many types of communication. Games also help the teacher to create contexts in which the language is useful and meaningful. The students want to take part, and in order to do so they must understand what others are saying or have written, and they must speak or write in order to express their own point of view or information.
Then, Uberman (2008) stated that language games can be used to recall and revise language materials in a pleasant, entertaining way. Even if games result only in noise and entertainment, they are still worth paying attention to and implementing in the classroom, since they motivate students, promote communicative competence, and generate fluency.
Based on the general concepts about language games above, it can be said that using language games in language teaching is appropriate. They can be used for all language skills and components, including vocabulary teaching.
b. How and When to Use Language Games in Language Classroom
There is one thing that should be considered by language teachers whenever they want to apply language games in the classroom: they should understand how and when language games are appropriate to be used. It should be remembered that not all classroom situations are appropriate for language games, since using language games too frequently may lead the students to indiscipline and "addiction", or to boredom.
If a teacher plans a language game, it is crucial to explain the rationale of the game to the students in the class, no matter what. For example, if a teacher wants to employ a short, simple hangman or hot-seat game, the teacher should swiftly, but very clearly, inform the students that this game will help them with spelling, get their brains focused on recognizing the shape and structure of new words, and facilitate their learning of new vocabulary.
Yearin (2012) added that preparing such an explanation, to make sure the students are aware of the learning benefits of the activity, will also help teachers make sure that they know precisely why they are spending time on the game in the lesson in the first place. Such explanations are absolutely vital, because they satisfy the more serious learners who can feel pressured by game time, they make sure the weaker students understand that this is not a waste of time, and they enable all of the students to comprehend that the teacher is playing for an explicit reason, has planned the game to enhance their learning, and is not just wasting time by adding a fun element to the lesson.
According to Cervantes (2009), there are some suggestions that should be considered wisely by a language teacher before planning a language game:
1. The teacher needs to consider the appropriate game for specific class needs and objectives.
2. The teacher needs to select a game that will include many students and make them work continuously.
3. The teacher needs to consider the size and number of students in the classroom, and whether they will work individually, in pairs, or in groups.
4. The teacher should determine the time allocation of the game. It needs to be remembered that the language game is not the only activity during the teaching and learning process.
5. The teacher needs to take notes on whether the language game is appropriate to be applied in the classroom, whether the students enjoy it or not, whether this language game is likely to be played again in the classroom, etc.
2. Jumbled-Letter Game in Vocabulary Teaching in ELT
a. Jumbled-Letter Game
The Jumbled-Letter Game is a kind of language game that is used to teach or to learn vocabulary in language learning, especially in English language teaching. The main activity of this game is that the students arrange new words or vocabulary in the target language from jumbled alphabet letters. The teacher mentions a specific word in the native language, and then the students arrange the mentioned word in the target language using alphabet letters. The students are given only limited time to arrange the word, so they should arrange it as quickly as possible.
The purpose of this game is to improve students' vocabulary mastery. Besides, it may improve students' memory of vocabulary. The students will easily recall the new vocabulary in the target language and spell it in the right order and letters.
b. Jumbled-Letter Game Preparation
There are several things that a teacher needs to prepare before using the Jumbled-Letter Game in teaching vocabulary. This preparation will help the teacher to design and use the language game smoothly in the classroom.
1. The teacher should consider how and when it is appropriate to use a language game, as mentioned above.
2. The teacher should choose or select the vocabulary material to be taught to the students and how much new vocabulary will be given. For instance, a teacher will teach elementary students in grade five, and one of the materials related to vocabulary teaching is "Animals". The teacher then selects the names of the animals and how many animals will be taught; for example, the teacher may decide to give only ten animal names, such as snake, bird, crocodile, rabbit, turtle, buffalo, horse, rooster, butterfly, and monkey.
3. The teacher prepares small pieces of paper, 1.5 cm x 1.5 cm in size, as many as the number of alphabet letters, that is, 26 pieces. (The teacher may prepare more pieces of paper in case a consonant or vowel appears more than once in a specific word, e.g., crocodile.)
4. The teacher writes down one alphabet letter on each small paper (Picture 1: a letter on each paper).
5. The teacher counts the students in the classroom and then decides whether the game is done individually, in pairs, or in groups. If there are 20 students in the classroom and the teacher wants the students to work individually, the teacher should prepare 20 sets of alphabet letter papers (step 4). If the teacher wants the students to work in pairs, the teacher should prepare 10 sets; if in groups of 4 students, 5 sets, and so on. (In case of limited time and preparation, the teacher may ask the students to prepare their own alphabet letter papers after explaining how to prepare them or showing an example set.)
6. The teacher prepares a note to record students' achievement after using the Jumbled-Letter Game.
After doing the preparation above, the teacher is ready to use the game in the classroom.
c. Steps and Application of Jumbled-Letter Game in Vocabulary Teaching
A teacher who wants to use the Jumbled-Letter Game should know that the steps, application, and techniques of this game are flexible and can be modified, depending on the situation in the classroom and the teacher's method of teaching. The following is one set of steps to apply the Jumbled-Letter Game in teaching vocabulary about animals to elementary students:
1. The teacher greets the students and introduces the new material, which is about "Animals".
2. The teacher first explains the material using his/her own method. He/she may use media such as pictures or other visual aids to teach the students about the animals and their names; for instance, the teacher teaches 10 new vocabulary items about animal names. (The method of teaching the new material may vary depending on the teacher.)
3. The teacher mentions and may write down the names of the animals on the board. The teacher may use the native-language name of the animal first and then mention the name of the animal in the target language, e.g., as in Picture 2. (The method of teaching the new material may vary depending on the teacher.)
Picture 2. Example And so on.The teacher has to make sure that the students have known about the name of the animal in target language (English).4. Teacher asks the students to write down the name of animal in their notebook.The purpose is to make the students know about the word spelling, so that they just recall their memory about word spelling in Jumbled-Letter game. 5. Teacher explains that they are going to have a game named "Jumble-Letter Game" to increase their ability to master new vocabulary.Teacher explain the purpose of the game and then distributes the alphabet letter paper set to the students after deciding whether they should work individually, in pair or in group.(students may have their own alphabet letter set) 6. Teacher asks the students to arrange the small paper alphabetically on their desk in order to help them to select the letter they are going to choose later easily.Teacher may repeat the step 7 until all of the students can arrange all the words correctly.9. Teacher may develop or modify the steps above based on class situation.For instance, the game begins with group work.If all of the group can arrange the word correctly, then the teacher ask the students to do the game with their pairs.If they successfully arrange the words in pair, it is time for them to work individually.So, all of the students have a chance to master vocabulary, memorizing vocabulary and knowing the correct spelling of the vocabulary learned.10.The Jumbled-Letter Game will be more interesting if it is competitive.It means that the teacher may ask the students to compete between them about who will be the first one that can arrange the words in target language correctly.For instance, the teacher divides the students into group consist of 3 students.Then, the teacher writes down the name of each group in the board.For group that can arrange the word faster and correctly in the allocated time, they will be given mark "100".But for groups who cannot arrange the word correctly or have not finished their arrangement in allocated time, they will get "-50".1. Reducing students' anxiety, obstacles and difficulties during learning new vocabulary 2. Memorizing the new vocabulary and the correct spelling in a fun and entertaining ways.3. Can be applied for all language levels related to vocabulary teaching 4. Overcoming the students' difficulty to arrange the correct word spelling 5. Can be applied for all language materials related to vocabulary teaching 6. Easy, cheap and challenging 7. Can be applied at home (students' independent learning) 8. Can be combined with other teaching methods and techniques 9. Flexible steps and applications 10.Meaningful, amusing and interesting 11.Increasing and encouraging students' cooperation and friendly competition 12. Encouraging shy students to participate actively 13.Providing students' self evaluation 14.Students centered.
C. Conclusion
The Jumbled-Letter Game is a kind of language game that is appropriate for vocabulary teaching. It emphasizes students' vocabulary learning in a fun and entertaining way. The students are able to master the vocabulary faster and may even spell the words correctly. In conclusion, learning vocabulary through games is an effective and interesting way that can be applied in any classroom. It is suggested that games be used not only for mere fun but, more importantly, for the useful practice and review of language lessons, thus leading toward the goal of improving learners' communicative competence.
|
2019-01-23T23:20:43.798Z
|
2017-04-10T00:00:00.000
|
{
"year": 2017,
"sha1": "3794c6a6454520448c3b1aa18a30a3eafd9cff0d",
"oa_license": "CCBYNC",
"oa_url": "http://ejournal.unp.ac.id/index.php/linguadidaktika/article/download/7405/5825",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3794c6a6454520448c3b1aa18a30a3eafd9cff0d",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
9895864
|
pes2o/s2orc
|
v3-fos-license
|
Cholangiocarcinoma Derived from Remnant Intrapancreatic Bile Duct Arising 32 Years after Congenital Choledochal Cyst Excision: A Case Report
We report a rare case of a 46-year-old woman with cholangiocarcinoma derived from the remnant intrapancreatic bile duct arising 32 years after the excision of a congenital choledochal cyst. She had undergone anastomosis of the choledochal cyst and duodenum at birth, excision of the choledochal cyst and hepaticoduodenostomy with jejunal interposition at 14 years of age, as well as the excision of an infectious cyst around the anastomosis site at 21 years of age. At 29 years of age, she was diagnosed with a chronic hepatitis C virus (HCV) infection and was referred to our hospital for treatment. She did not consent to interferon-based therapy against the HCV infection. At 46 years of age, she experienced epigastric discomfort. A dynamic CT revealed multiple tumors in the liver, a tumor in the head of the pancreas, as well as lymph node metastases in the mediastinum and abdominal cavity. A liver tumor biopsy revealed adenocarcinoma, and she was clinically diagnosed with cholangiocarcinoma derived from the remnant intrapancreatic bile duct, with multiple liver metastases and lymph node metastases. She requested palliative therapy and eventually died during the treatment course. The autopsy specimen revealed a tumor in the head of the pancreas, and on the basis of its location and the pattern of metastasis, it was confirmed as cholangiocarcinoma derived from the remnant intrapancreatic bile duct. Microscopic examination revealed a poorly differentiated adenocarcinoma. This report provides information on a pathologically assessed case of cholangiocarcinoma derived from the remnant intrapancreatic bile duct arising after the excision of a congenital choledochal cyst.
Introduction
Congenital choledochal cyst is a local or diffuse dilation of the bile duct. It may cause severe complications such as lithiasis, cholangitis, pancreatitis, and carcinoma. In particular, it is well known that the congenital choledochal cyst has a significant association with cholangiocarcinoma [1]. According to the literature, 2.5-28% of congenital choledochal cyst cases are associated with malignant biliary duct tumors at initial presentation [2]. Most congenital choledochal cysts are complicated by pancreaticobiliary maljunction [3]; cholestasis, infection, and the countercurrent of pancreatic and bile juice induce changes in the biliary mucous membrane epithelium, which cause gene mutations and eventually lead to the development of cholangiocarcinoma at a high rate [4]. Therefore, the recommended standard surgical intervention is the excision of the entire extrahepatic bile duct, followed by hepaticoenterostomy to separate the streams of bile and pancreatic juice; this procedure is known as a separation-operation. However, in some patients, biliary cancer develops long after a separation-operation. In order to prevent the development of cancer, careful long-term follow-up is required even after a separation-operation.
Here we report a case of cholangiocarcinoma derived from remnant intrapancreatic bile duct arising 32 years after the excision of a congenital choledochal cyst, with a review of the literature.
Case Report
A 46-year-old woman was referred to our hospital for the investigation of a liver tumor that was previously detected by ultrasonography at a local hospital after she complained of epigastric discomfort. She had previously undergone anastomosis of the choledochal cyst and duodenum for congenital choledochal cysts (Todani type I-a or I-c) at birth, the excision of choledochal cysts and hepaticoduodenostomy with jejunal interposition at 14 years of age, and the excision of an infectious cyst around the anastomosis site at 21 years of age, during which she had also received a blood transfusion. At 29 years of age, she was diagnosed with a chronic hepatitis C virus infection but did not consent to interferon-based therapy and instead, was monitored while taking ursodeoxycholic acid. At 39 years of age, she was transferred to Chiba University Hospital.
Her laboratory data were as follows: elevated carcinoembryonic antigen 730 ng/ml, carbohydrate antigen 19-9: 5,170 U/ml, alpha-fetoprotein 8.2 ng/ml, alpha-fetoprotein-L3 <0.5%, and des-γ-carboxy prothrombin 41 mAU/ml (table 1). An abdominal ultrasonography showed a hypoechoic lesion of 32 mm in diameter in the S6 region of the liver. A dynamic CT of the abdomen revealed a tumor in the head of the pancreas (fig. 1a, b), a nondilated main pancreatic duct, and constricted vessels from the celiac artery to the left hepatic artery. Lymph nodes along the abdominal aorta, celiac artery, and superior mesenteric artery were swollen (fig. 1c). Many low-density tumors were noted in the liver and several nodes were noted on the pelvic surface of the sacrum; thus, we considered the presence of multiple liver and peritoneal metastases (fig. 1d). On MRI with gadolinium ethoxybenzyl diethylenetriaminepentaacetic acid, the tumors in the liver were hyperintense on diffusion-weighted imaging and enhanced marginally in the early phase of a dynamic study. A dynamic CT of the chest revealed lymph node swelling in the left supraclavicular fossa, mediastinum, and right cardiophrenic angle. Magnetic resonance cholangiopancreatography of the abdomen did not confirm a dilation of the bile and main pancreatic ducts. An upper gastrointestinal endoscopy revealed a reconstruction and bile in the duodenum. Biopsy specimens of the S6 liver tumor indicated adenocarcinoma. Based on these findings and the clinical history, she was diagnosed with cholangiocarcinoma derived from remnant intrapancreatic bile duct with metastasis to the mediastinum and abdominal cavity. She refused aggressive treatment with chemotherapy and chose best supportive care instead. Ultimately, she died of obstructive jaundice and renal failure 2 months after the diagnosis.
A macroscopic examination of the specimen revealed that the tumor in the head of the pancreas was white, accompanied by bleeding and necrosis ( fig. 2a-c). The size of the tumor was 65 × 55 × 55 mm. A pathological examination revealed that it was a poorly differentiated adenocarcinoma with partial well-differentiation; it exhibited mucus production and pancreatic invasion ( fig. 2d). A dilation of the main pancreatic duct was not observed. The bile duct anastomosis with interposed jejunum at the portal hilum had no malignant findings. Multiple metastases occupied about 80% of the liver. The tumor had also metastasized to the lung, pleura, peritoneum, and spleen. Lymph node metastasis was located in the pulmonary hilum, splenic hilum, stomach, and duodenum. Based on the location and pattern of metastasis of the tumor, the diagnosis of cholangiocarcinoma derived from remnant intrapancreatic bile duct was confirmed, and this was also considered the cause of death.
Discussion
Excision of the cyst and hepaticojejunostomy, rather than cyst-duodenum anastomosis, is now the first choice of treatment for a congenital choledochal cyst because of the high occurrence rate of carcinogenesis [5]. On the other hand, in patients without pancreaticobiliary maljunction and mutual countercurrent of pancreatic juice, surgery is performed to prevent bile accumulation. Although there are reports of cholangiocarcinoma derived from dilated intrahepatic bile duct [6], reports on cholangiocarcinoma derived from remnant intrapancreatic bile duct are rare. To the best of our knowledge, there are only 6 other reported cases, comprising 1 man and 5 women (average age: 42.7 years, range: 27-68; table 2) [4, 7-11], and all of these patients underwent surgery for intrapancreatic cholangiocarcinoma. Four of the 6 patients died within a short period of 9-16 months after surgery (average: 12.3 months). The formation of an infectious cyst around the anastomosis site after the operation and its surgical resection, as well as the progression of distant metastasis that made surgical excision unfeasible, are characteristics of the present case.
In the hyperplasia-dysplasia-carcinoma sequence, carcinogenesis in a congenital choledochal cyst complicated with pancreaticobiliary maljunction is caused by changes in the biliary mucous membrane epithelium, mainly hyperplastic changes, and by gene mutations arising from chronic inflammation due to cholestasis, infection, and the countercurrent of pancreatic and bile juice, which induce repeated damage and restoration of the biliary mucous membrane epithelium [4]. Furthermore, carcinogenesis can occur even after a separation-operation. A pathological assessment of the patient in the present report revealed that most of the main tumor consisted of poorly differentiated adenocarcinoma with partial well-differentiation. We could not recognize a pathological structure of the common bile duct. The main tumor protruded from the head of the pancreas and was located in the position of the intrapancreatic bile duct; thus, we diagnosed the patient with cholangiocarcinoma derived from remnant intrapancreatic bile duct. One of the characteristics of this case is that the infectious cyst, which was not completely resected, formed after the separation-operation; we consider this cyst to be the source of the carcinoma.
To prevent the development of malignant biliary duct tumors, a complete excision of the extrahepatic bile duct is recommended for patients with congenital choledochal cyst. During bile duct excision from the pancreas side, in principle, it is desirable to completely dissect and not leave the intrapancreatic bile duct above its junction and the main pancreatic duct [12]. And from the liver side, the dilated bile duct should be removed completely. However, there is no clear evidence to determine the excision range for Todani type IV-a intrahepatic bile duct dilatation; thus, further studies are warranted.
In conclusion, the present case indicates that cholangiocarcinoma may develop 32 years after a separation-operation. Thus, careful long-term follow-up is required even after a separation-operation.
Dynamic Modeling of Cascading Failure in Power Systems
The modeling of cascading failure in power systems is difficult because of the many different mechanisms involved; no single model captures all of these mechanisms. Understanding the relative importance of these different mechanisms is an important step in choosing which mechanisms need to be modeled for particular types of cascading failure analysis. This work presents a dynamic simulation model of both power networks and protection systems, which can simulate a wider variety of cascading outage mechanisms, relative to existing quasi-steady state (QSS) models. The model allows one to test the impact of different load models and protections on cascading outage sizes. This paper describes each module of the developed dynamic model and demonstrates how different mechanisms interact. In order to test the model we simulated a batch of randomly selected N − 2 contingencies for several different static load configurations, and found that the distribution of blackout sizes and event lengths from the proposed dynamic simulator correlates well with historical trends. The results also show that load models have significant impacts on the cascading risks. This dynamic model was also compared against a QSS model based on the dc power flow approximations; we find that the two models largely agree, but produce substantially different results for later stages of cascading.
I. INTRODUCTION
The vital significance of studying cascading outages has been recognized in both the power industry and academia [1]-[3]. However, since electrical power networks are very large and complex systems [4], understanding the many mechanisms by which cascading outages propagate is challenging. This paper presents the design of and results from a new non-linear dynamic model of cascading failure in power systems (the Cascading Outage Simulator with Multiprocess Integration Capabilities, or COSMIC), which can be used to study a wide variety of different mechanisms of cascading outages.
A variety of cascading failure modeling approaches have been reported in the research literature, many of which are reviewed in [1]-[3]. Several studies use quasi-steady state (QSS) dc power flow models [5]-[7], which are numerically robust and can describe cascading overloads; however, they do not capture non-linear mechanisms like voltage collapse or dynamic instability. QSS ac power flow models have been used to model cascading failures in [8]-[11], but these models still require difficult assumptions to model machine dynamics and to deal with non-convergent power flows. Some have proposed models that combine the dc approximations and dynamic models [12], allowing for more accurate modeling of under-voltage and under-frequency load shedding. These methods increase the modeling fidelity over pure dc models but still neglect voltage collapse. Others developed statistical models that use data from simulations [13], [14] or historical cascades [15] to represent the general features of cascading. Statistical models are useful, but cannot replace detailed simulations when particular cascading mechanisms need to be understood in detail. There are also topological models [16]-[19], which have been applied to the identification of vulnerable/critical elements; however, without detailed power grid information, the results that they yield differ greatly and could lead to misleading conclusions about grid vulnerability [20]. Some dynamic models [21], [22] and numerical techniques [23], [24] study the mid/long-term dynamics of power system behavior, and show that mid/long-term stability is an important part of cascading outage mechanisms. However, concurrent modeling of power system dynamics and discrete protection events (such as line tripping by over-current, distance, and temperature relays, and under-voltage and under-frequency load shedding) is challenging and not considered in most existing models. In [25] the authors describe an initial approach using a system of differential-algebraic equations with an additional set of discrete equations to dynamically model cascading failures.
The paper describes the details of and results from a new non-linear dynamic model of cascading failure in power systems, which we call the "Cascading Outage Simulator with Multiprocess Integration Capabilities" (COSMIC). In COSMIC, dynamic components, such as rotating machines, exciters, and governors, are modeled using differential equations. The associated power flows are represented using non-linear power flow equations. Load voltage responses are explicitly represented, and discrete changes (e.g., component failures, load shedding) are described by a set of equations that indicate the proximity to thresholds that trigger discrete changes. Given dynamic data for a power system and a set of exogenous disturbances that may trigger a cascade, COSMIC uses a recursive process to compute the impact of the triggering event by solving the differential-algebraic equations (DAEs) while monitoring for discrete events, including events that subdivide the network into islands.
The remainder of this paper proceeds as follows: Section II introduces the components of the model mathematically and describes how different modules interact. In Section III, we present results from several experimental validation studies. Finally, Section IV presents our conclusions from this study.
II. HYBRID SYSTEM MODELING IN COSMIC
A. Hybrid differential-algebraic formulation
Dynamic power networks are typically modeled as sets of DAEs [26]. If one also considers the dynamics resulting from discrete changes such as those caused by protective relays, an additional set of discrete equations is added, which results in a hybrid DAE system [27].
Let us assume that the state of the power system at time t can be defined by three vectors, x(t), y(t), and z(t), where:
x is a vector of continuous state variables that change with time according to a set of differential equations, dx/dt = f(t, x(t), y(t), z(t)) (1);
y is a vector of continuous state variables that have purely algebraic relationships to other variables in the system, 0 = g(t, x(t), y(t), z(t)) (2);
z is a vector of state variables that can only take integer states (z_i ∈ {0, 1}) and that change when the trigger conditions h(t, x(t), y(t), z(t)) < 0 (3) are violated.
When a constraint h_i(...) < 0 fails, an associated counter function d_i (see II-B) activates, and each z_i changes state if d_i reaches its limit. The set of differential equations (1) represents the machine dynamics (and/or load dynamics if dynamic load models are included). In COSMIC the differential equations include a third-order machine model and somewhat simplified governor and exciter models in order to improve computational efficiency without compromising the fundamental functions of those components. In particular, the governor is rate and rail limited to model the practical constraints of generator power control systems. The governor model incorporates both droop control and integral control, which is important to mid/long-term stability modeling, especially in isolated systems [26].
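To make the structure above concrete, the following is a minimal Python sketch of the three building blocks for a toy single-machine, single-line example. The function names, parameter values, and the particular swing and power-balance equations are illustrative assumptions, not COSMIC's actual code.

```python
import numpy as np

# Illustrative sketch of the hybrid DAE pieces (names are assumptions, not COSMIC's API).
# x: continuous differential states, y: continuous algebraic states, z: discrete states in {0, 1}.

def f(t, x, y, z):
    """Eq. (1): differential equations; here a toy second-order swing model."""
    delta, omega = x
    p_e = y[0]                      # electrical power, supplied by the algebraic solution
    p_m, H, D = 0.8, 3.0, 1.0       # assumed mechanical power, inertia, damping
    return np.array([omega, (p_m - p_e - D * omega) / (2.0 * H)])

def g(t, x, y, z):
    """Eq. (2): algebraic residuals; here a single toy power-balance equation."""
    delta, _ = x
    p_e = y[0]
    b_line = 5.0 * z[0]             # line susceptance, zeroed if the line's relay state z[0] opens it
    return np.array([p_e - b_line * np.sin(delta)])

def h(t, x, y, z):
    """Eq. (3): trigger conditions; h_i < 0 means the monitored quantity is within limits."""
    flow_limit = 1.2                # assumed relay threshold
    return np.array([abs(y[0]) - flow_limit])
```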
The algebraic constraints (2) encapsulate the standard ac power flow equations. In this study we implemented both polar and rectangular power flow formulations. Load models are an important part of the algebraic equations, which are particularly critical components of cascading failure simulation because i) they need to represent the aggregated dynamics of many complicated devices and ii) they can dramatically change system dynamics. The baseline load model in COSMIC is a static model, which can be configured as constant power (P), constant current (I), constant impedance (Z), exponential (E), or any combination thereof (ZIPE) [28].
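A static ZIPE load can be sketched as a simple voltage-dependent scaling of the nominal demand. The function below is an illustrative reading of the model described above; the exponent 0.08 echoes the value quoted later in the paper, and all other names and defaults are assumptions.

```python
def zipe_load(v, p0, q0, fz=0.25, fi=0.25, fp=0.25, fe=0.25, e_exp=0.08, v0=1.0):
    """Static ZIPE load: fz/fi/fp/fe weight the constant-impedance, constant-current,
    constant-power and exponential parts; v is the bus voltage in per unit."""
    vr = v / v0
    scale = fz * vr**2 + fi * vr + fp + fe * vr**e_exp
    return p0 * scale, q0 * scale

# Example: a 100 MW / 30 MVAr nominal load evaluated at 0.95 pu voltage
p, q = zipe_load(0.95, 100.0, 30.0)
```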
As Fig. 1 illustrates, load models can have a dramatic impact on algebraic convergence. Constant power loads are particularly difficult to model for the off-nominal condition. Numerical failures are much less common with constant I or Z loads, but are not accurate representations of many loads. This motivated us to include the exponential component in COSMIC. During cascading failures, power systems undergo many discrete changes that are caused by exogenous events (e.g., manual operations, weather) and endogenous events (e.g., automatic protective relay actions). The discrete event(s) will consequently change algebraic equations and the systems dynamic response, which may result in cascading failures, system islanding, and large blackouts. In COSMIC, the endogenous responses of a power network to stresses are represented by (3). These discrete responses are described in detail in II-B and II-C.
B. Relay modeling
Major disturbances cause system oscillations as the system seeks a new equilibrium. These oscillations may naturally die out due to the interactions of system inertia, damping, and exciter and governor controls. In order to ensure that relays do not trip due to brief transient state changes, time delays are added to each protective relay in COSMIC.
We implement in this model two types of time-delayed triggering algorithms: fixed-time delay and time-inverse delay. These two delay algorithms are modeled by a counter function, d, which is triggered by (3). The fixed-time delay triggering activates its counter/timer as soon as the monitored signal exceeds its threshold. If the signal remains beyond the threshold, this timer continues to count down from a preset value until it runs out, at which point the associated relay takes action. Similarly, the timer recovers if the signal is within the threshold, and it maxes out at the preset value. For the time-inverse delay algorithm, instead of counting the increment of time beyond (or within) the threshold, we evaluate the area over (or under) the threshold by integration (based on Euler's rule).
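The two delay algorithms can be expressed as simple counter updates applied at every time step. The sketch below is an illustrative reading of the description above, not COSMIC's implementation; the function names and the 0.5 s and unit-area defaults are assumptions.

```python
def fixed_time_counter(d, exceeds_threshold, dt, t_preset=0.5):
    """Fixed-time delay: count down while the monitored signal exceeds its threshold,
    recover (up to t_preset) while it is back within limits. Returns (new counter, trip?)."""
    d = d - dt if exceeds_threshold else min(d + dt, t_preset)
    return d, d <= 0.0

def time_inverse_counter(area, excess, dt, area_limit=1.0):
    """Time-inverse delay: accumulate the area by which the signal exceeds the threshold
    (Euler's rule); a negative 'excess' lets the accumulated area recover toward zero."""
    area = max(area + excess * dt, 0.0)
    return area, area >= area_limit
```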
Five types of protective relays are modeled in COSMIC: over-current (OC) relays, distance (DIST) relays, and temperature (TEMP) relays for transmission line protection, as well as under-voltage load shedding (UVLS) and under-frequency load shedding (UFLS) relays for stress mitigation. OC relays monitor the instantaneous current flow along each branch. DIST relays represent a Zone 1 relay that monitors the apparent admittance of the transmission line. TEMP relays monitor the line temperature, which is obtained from a first-order differential equation of the form dT_i/dt = r_i F_i^2 − k_i T_i, where T_i is the temperature difference relative to the ambient temperature (20 °C) for line i, F_i is the current flow of line i, and r_i and k_i are the heating and time constants for line i [12]. r_i and k_i are chosen so that line i's temperature reaches 75 °C (ACSR conductors) if the current flow hits the rate-A limit, and its TEMP relay triggers in 60 seconds when the current flow jumps from rate-A to rate-C. The threshold for each TEMP relay is obtained from the rate-B limit. While it would have been possible to integrate the temperature relays into the trapezoidal integration used for the other differential equations, these variables are much slower than the other differential variables in x; instead, we computed T_i outside of the primary integration system using Euler's rule. When the voltage magnitude or frequency signal at load bus i falls below the specified threshold, the UVLS or UFLS relay sheds 25% (default setting) of the initial P_d,i to avoid the onset of voltage instability and reduce system stress. In order to monitor frequency at each load bus, Dijkstra's algorithm [29] and electrical distances [30] were used to find the generator (and thus the frequency from x) that is most proximate to each load bus. Both the UVLS and UFLS relays used a fixed-time delay of 0.5 seconds.
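Since the line temperatures are integrated outside the main solver with Euler's rule, that step is easy to sketch. The equation form follows the reconstruction above and should be treated as an assumption rather than the paper's exact formula.

```python
def step_line_temperature(T, F, r, k, dt):
    """One Euler step of the line-temperature model, using the reconstructed form
    dT/dt = r*F**2 - k*T (an assumption based on the constants named in the text)."""
    return T + dt * (r * F**2 - k * T)

def temp_relay_trips(T, T_rate_b):
    """TEMP relay trips once the temperature exceeds the threshold derived from the rate-B limit."""
    return T > T_rate_b
```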
C. Solving the hybrid DAE
Because of its numerical stability advantages, COSMIC uses the trapezoidal rule [31] to simultaneously integrate and solve the differential and algebraic equations.
Whereas many of the common tools in the literature [32], [33] use a fixed time step-size, COSMIC implements a variable time step-size in order to trade off between the diverse timescales of the dynamics that we implement. We select small step sizes during transition periods that have high deviation or oscillations in order to keep the numerical error within tolerance; the step sizes increase as the oscillations dampen toward steady-state values.
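A generic sketch of a single trapezoidal step for the hybrid DAE is shown below, reusing the f and g signatures sketched earlier. The use of a general-purpose root finder here is an illustrative choice, not a description of COSMIC's solver.

```python
import numpy as np
from scipy.optimize import fsolve

def trapezoidal_step(f, g, t, x, y, z, dt):
    """One trapezoidal step for the hybrid DAE: solve simultaneously for x(t+dt) and y(t+dt)."""
    fx0 = f(t, x, y, z)
    nx = len(x)

    def residual(w):
        x1, y1 = w[:nx], w[nx:]
        r_diff = x1 - x - 0.5 * dt * (fx0 + f(t + dt, x1, y1, z))   # trapezoidal rule
        r_alg = g(t + dt, x1, y1, z)                                # algebraic constraints
        return np.concatenate([r_diff, r_alg])

    w = fsolve(residual, np.concatenate([x, y]))
    return w[:nx], w[nx:]
```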
When a discrete event occurs at t_d, the system over the interval from the previous time point t to t_d is still described by the hybrid DAE (1)-(3) together with the counter functions d introduced previously. Because of the adaptive step-size, the solver can shorten its steps near t_d so that the event time is located accurately. Every time a discrete event happens (a constraint h_i < 0 is violated and its counter d_i has run out), COSMIC stops solving the DAE for the previous network configuration, processes the discrete event(s), then resumes the DAE solver using the updated initial condition. COSMIC deals with the separation of a power network into sub-networks (unintentional islanding) using a recursive process, which is illustrated in Fig. 2. If islanding results from a discrete event, the present hybrid DAE separates into two sets of DAEs, which can be represented as f_1, g_1, h_1, d_1 and f_2, g_2, h_2, d_2. COSMIC treats the two sub-networks the same way as the original one, integrates and solves the two DAE systems in parallel, and synchronizes the two result sets at the end.
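Detecting whether a discrete event has split the network can be done with a connected-components check on the in-service topology. The following is a small illustrative sketch (using networkx), not COSMIC's own data structures.

```python
import networkx as nx

def split_islands(buses, in_service_branches):
    """Find electrical islands after branch outages via connected components."""
    g = nx.Graph()
    g.add_nodes_from(buses)
    g.add_edges_from(in_service_branches)          # (from_bus, to_bus) pairs still in service
    return [sorted(island) for island in nx.connected_components(g)]

# Example: with branch (2, 3) tripped, buses {1, 2} and {3, 4} form separate islands,
# and each island's DAE would then be solved independently.
islands = split_islands([1, 2, 3, 4], [(1, 2), (3, 4)])   # -> [[1, 2], [3, 4]]
```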
D. Validation
To validate COSMIC, we compared the dynamic response in COSMIC against commercial software -PowerWorld [33] -using the classic 9-bus test case [34]. From a random contingency simulation, the Mean Absolute Error (MAE) between the results produced by COSMIC and PowerWorld was within 0.11%. Since COSMIC adopted simplified exciter and governor models that are not included in many commercial packages, several of the time constants were set to zero or very close to zero in order to obtain agreement between the two models.
III. EXPERIMENTS AND RESULTS
In this section we present experimental results from validation tests on three test systems: the 9-bus system [34], the 39-bus system, and the 2383-bus system [35], which is an equivalenced system based on the year 2000 winter snapshot for the Polish network.
Notes for Tables I and II: (1) the non-zero rates of the Jacobian matrices for the polar and rectangular forms are 0.0376% and 0.0383%, respectively; (2) the number of tests; (3) the percentage decrease in the number of linear solves, rectangular vs. polar (-0.009% and -0.04%); (4) the non-zero rates of the Jacobian matrices for the polar and rectangular formulations are 0.000856% and 0.000872%, respectively.
A. Polar formulation vs. rectangular formulation in computational efficiency
COSMIC includes both polar and rectangular power flow formulations. In order to compare the computational efficiency of the two formulations, we conducted a number of N − 1 and N − 2 experiments using two different cases, the 39-bus and 2383-bus systems. The amount of time that a simulation requires and the number of linear solves (Ax = b) are two measures commonly used to evaluate the computational efficiency of a model. Compared to the first metric, the number of linear solves describes computational speed independently of the specific computing hardware, and it is adopted in this study.
Tables I and II compare the two formulations with respect to the number of linear solves. For the 39-bus case, 45 N − 1 experiments and 222 randomly selected N − 2 experiments were conducted; each simulation was run to 50 seconds and lost the same amount of power demand in both formulations. As shown in Table II, the performance of the two methods was similar in terms of the number of linear solves; however, the rectangular formulation required fewer linear solves and showed improvements of varying size (i.e., a positive decrease, rectangular vs. polar) for different demand losses.
For the 2383-bus test case, we simulated 2494 N −1 and 556 N − 2 contingencies; Table II shows the results. There was no significant improvement for the rectangular formulation over the polar formulation, and the number of linear solves that resulted from both forms were almost identical.
One can also notice that solving the 2383-bus case required fewer linear solves than for the 39-bus case. This suggests that some branch outages have a higher impact on a smaller network and cause more dynamic oscillations than on a larger network such as the 2383-bus case.
B. Relay event illustration
To depict how protective relays integrate with COSMIC's time-delay features, we implemented the following example using the 9-bus system. The initial event was a single-line outage from Bus 6 to Bus 9 at t = 10 seconds. The count-down timer of the DIST relay for branch 5-7 was activated with t_preset-delay = 0.5 seconds. As shown in Fig. 3, the system underwent a transient swing following the one-line outage. Right after 0.5 seconds, t_delay ran out (P1) and branch 5-7 was tripped by its DIST relay, which resulted in two isolated islands. Meanwhile, the thresholds for the UVLS relays were set to 0.92 pu. Note that the magenta voltage trace violated this voltage limit at about t = 10.2 seconds (P2). Because this trace remained under the limit after that, its UVLS timer counted down from t_preset-delay = 0.5 seconds until t = 10.7 seconds (P3), where its UVLS relay took action and shed 25% of the initial load at this bus. The adjacent yellow trace illustrates that its UVLS relay timer was activated as well, but with a small lag. This UVLS relay never triggered because, before its t_delay emptied out, the load shedding at P3 put the yellow trace back above the threshold, and its t_delay was restored to 0.5 seconds.
C. Cascading outage examples using the 39-bus and the 2383-bus power systems
The following experiment demonstrates a cascading outage example using the IEEE 39-bus case (see Table III for a summary of the sequential events). The system suffered a strong dynamic oscillation after the initial two exogenous events (branches 2-25 and 5-6). After approximately 55 seconds, the first OC relay, at branch 4-5, triggered. Because the monitored current kept crossing above and below its limit, the relay triggering was delayed according to the time-delay algorithms of the protection devices. Load shedding at two buses (Bus 7 and Bus 8) occurred around t = 55.06 seconds, then another two branches (10-13 and 13-14) shut down after OC relay trips at t = 55.28 seconds. These events separated the system into two islands. At t = 55.78 seconds, two branches (3-4 and 17-18) were taken off the grid, which resulted in another island. The system eventually ended up with three isolated networks. However, one of them was not algebraically solvable due to a dramatic power imbalance, and it was declared a blackout area. Fig. 4 illustrates a sequence of branch outage events using the 2383-bus network. This cascade was initiated by two exogenous branch outages (branches 31-32 and 388-518, marked as number 0 in Fig. 4) and resulted in a total of 92 discrete events. Of these, 24 were branch outage events, and they are labeled in order in Fig. 4. These events consequently caused a small island (light-blue dots). The dots with additional red square highlighting indicate buses where load shedding occurred. From this figure we can see that cascading sequences do not follow an easily predictable pattern, and the affected buses with low voltage or frequency may be far from the initiating events.
The top panel in Fig. 5 shows the timeline of all branch outage events for the above cascading scenario, and the lower panel zooms in on the load-shedding events. In the early phase of this cascading outage, component failures occurred relatively slowly; however, the cascade sped up as the number of failures increased. Eventually the system condition was substantially compromised, which caused a fast collapse and the majority of the branch outages as well as the load-shedding events (see the lower panel in Fig. 5).
D. N − 2 contingency analysis using the 2383-bus case
Power systems are operated to ensure the N − 1 security criterion so that any single component failure will not cause subsequent contingencies [36]. The modified 2383-bus system that we are studying in this paper satisfies this criterion for transmission line outages. Thus, we assume here that branch outages capture a wide variety of exogenous contingencies that initiate cascades, for example a transformer tripping due to a generator failure.
The experiment implemented here included four groups of 1200 randomly selected N − 2 contingencies for the 2383-bus system. We measured the size of the resulting cascades using the number of relay events and the amount of demand lost. Each group had a different static load configuration: the first group used 100% constant Z load; the second group used 100% E load; the third had 100% constant P load; and the fourth included 25% of each portion in the ZIPE model. We set the TEMP, DIST, UVLS, and UFLS relays active in this experiment and deactivated the OC relay because its function overlaps with that of the TEMP relay. Fig. 6 shows the Complementary Cumulative Distribution Function (CCDF) of demand losses for these four groups of simulations. The CCDF plots of demand losses exhibit a heavy-tailed blackout size distribution, which is typically found in both historical blackout data and cascading failure models [37]. The magenta trace indicates constant Z load and shows the best performance (in terms of average power loss and the probability of a large blackout) within this set of 1200 random N − 2 contingencies (listed in Table IV). In contrast, the blue trace (constant E load) reveals the highest risk of large blackouts (> 1000 MW). The constant P load follows a similar trend as the constant E load, due to their similarly stiff characteristics; however, the constant E load with this particular exponent, 0.08, demonstrates a negative effect on the loss of load. The configuration with 25% Z, 25% I, 25% P, and 25% E performs between the constant P load and the constant Z load.
As can be seen in Table IV, the probability of large demand losses varies from 2.5% to 3.5% across the four load configurations. These results show that load models play an important role in dynamic simulation and may increase the frequency of non-convergence if they are not properly modeled.
Fig. 6. CCDF of demand losses for 1200 randomly selected N − 2 contingencies using the 2383-bus case. Z100I0P0E0 indicates 100% constant Z load and 0% of the other load portions; similarly, Z0I0P0E100, Z0I0P100E0, and Z25I25P25E25 represent constant E load, constant P load, and an equal combination of the four load types, respectively.
Fig. 7 shows the CCDF plots of the total event length, including all event types, such as branch outages caused by TEMP and DIST relays and load shedding events by UVLS and UFLS relays. Fig. 8 shows the CCDF of the branch outage lengths only. We can see from these two figures that the distributions for constant P and constant E loads have a comparable pattern, and they are in general less likely to have the same number of branch outages, relative to the other two configurations.
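The empirical CCDF underlying these plots is straightforward to compute. The sketch below is a generic illustration; the variable names and example losses are made up, and this is not the authors' plotting code.

```python
import numpy as np

def empirical_ccdf(demand_losses_mw):
    """Empirical complementary CDF P(loss >= x), the quantity plotted in the
    blackout-size figures discussed above."""
    x = np.sort(np.asarray(demand_losses_mw, dtype=float))
    p = 1.0 - np.arange(len(x)) / len(x)    # fraction of samples >= each sorted value
    return x, p

# Example with made-up losses (MW); real inputs would be the per-contingency demand losses.
x, p = empirical_ccdf([0, 0, 12, 40, 250, 800, 2400])
```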
E. Comparison with a dc cascading outage simulator
A number of authors have implemented quasi-steady state (QSS) models using the dc power flow equations to investigate cascading outages [5]-[7]. We conducted two experiments to compare COSMIC with the dc QSS model described in [5] with respect to the overall probabilities of demand losses and the extent to which the patterns of cascading from the two models agree.
1) The probabilities of demand losses: For the first experiment, we computed the CCDF of demand losses in both COSMIC (with the constant impedance load model) and the dc model using the same 1200 branch outage pairs from III-D. From Fig. 9 one can see that the probability of a given amount of demand loss in the dc simulator is lower than in COSMIC. In particular, the largest demand loss in the dc simulator is much smaller than in COSMIC (2639 MW vs. 24602 MW, with probabilities 0.08% vs. 2.5%). This large difference between them is not surprising because the dc model is much more stable and does not run into problems of numerical non-convergence. Also, the protection algorithms differ somewhat between the two models. In addition, some of the contingencies do produce large blackouts in the dc simulator, which causes the fat tail that can be seen in Fig. 9.
Numerical failures in solving the DAE system greatly contributed to larger blackout sizes observed in COSMIC, because COSMIC assumes that the network or sub-network in which the numerical failure occurred experienced a complete blackout. This illustrates a tradeoff that comes with using detailed non-linear dynamic models: while the component models are more accurate, the many assumptions that are needed substantially impact the outcomes, potentially in ways that are not fully accurate. One thing one needs to be aware of is the size of the sample set. In this case, the 1200 randomly selected contingency pairs represent 0.0278% of the total N − 2 branch outage pairs. Extensive investigation of how different sampling approaches might impact the observed statistics remains for future work.
2) Path Agreement Measurement: The second comparison experiment was to study the patterns of cascading between these two models. This comparison provides additional insight into the impact of dynamics on cascade propagation patterns. In order to do so, we compared the sets of transmission lines that failed using the "Path Agreement Measure" introduced in (9) [12]. The relative agreement of cascade paths, R(m_1, m_2), is defined as follows: if models m_1 and m_2 are both subjected to the same set of exogenous contingencies C, and A_{1,c} and A_{2,c} denote the sets of lines that subsequently outage in each model for contingency c, then R(m_1, m_2) = (1/|C|) Σ_{c∈C} |A_{1,c} ∩ A_{2,c}| / |A_{1,c} ∪ A_{2,c}| (9). The experiment measured R(m_1, m_2) between COSMIC (with a Z25I25P25E25 load configuration) and the dc simulator for 336 critical branch outage pairs. These branch outage pairs were selected using the random chemistry algorithm proposed in [5] because they are more likely to cause cascading failures. Table V shows that the average R between the two models for the whole set of sequences is 0.1948, which is relatively low. This indicates that there are substantial differences between cascade paths in the two models. Part of the reason is that the dc model tends to produce longer cascades and consequently increases the denominator in (9). In order to control for this, we computed R only for the first 10 branch outage events. The average R increases to 0.3487, and some of the cascading paths show a perfect match (R = 1). This shows that the cascading paths resulting from COSMIC and the dc simulator are in better agreement in the early stages, when non-linear dynamics are less pronounced.
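Under that reconstruction, the measure is simply an average Jaccard overlap and can be sketched in a few lines. The function below is illustrative and may differ in detail from the exact form of eq. (9).

```python
def path_agreement(outage_sets_m1, outage_sets_m2):
    """Average Jaccard overlap of the dependent line-outage sets produced by two models
    for the same exogenous contingencies (an assumed reading of R(m1, m2))."""
    scores = []
    for a1, a2 in zip(outage_sets_m1, outage_sets_m2):
        union = set(a1) | set(a2)
        scores.append(len(set(a1) & set(a2)) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Example: two contingencies, each model listing the branches it subsequently tripped
r = path_agreement([{"l1", "l2"}, {"l5"}], [{"l1"}, {"l5", "l7"}])   # -> (0.5 + 0.5) / 2 = 0.5
```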
IV. CONCLUSIONS
This paper describes a method for and results from simulating cascading failures in power systems using full non-linear dynamic models. The new model, COSMIC, represents a power system as a set of hybrid discrete/continuous differential-algebraic equations, simultaneously simulating protection systems and machine dynamics. Several experiments illustrated the various components of COSMIC and provided important general insights regarding the modeling of cascading failure in power systems. By simulating 1200 randomly chosen N − 2 contingencies for a 2383-bus test case, we found that COSMIC produces heavy-tailed blackout size distributions, which are typically found in both historical blackout data and cascading failure models [37]. However, the relative frequency of very large events may be exaggerated in the dynamic model due to numerical non-convergence (about 3% of cases). More importantly, the blackout size results show that load models can substantially impact cascade sizes: cases that used constant impedance loads showed consistently smaller blackouts, relative to constant current, constant power, or exponential models. In addition, the contingency simulation results from COSMIC were compared to corresponding simulations from a dc power flow based quasi-steady-state cascading failure simulator, using a new metric. The two models largely agreed for the initial periods of cascading (about 10 events), then diverged in later stages where dynamic phenomena drive the sequence of events.
Together these results illustrated that detailed dynamic models of cascading failure can be useful in understanding the relative importance of various features of these models. The particular model used in this paper, COSMIC, is likely too slow for many large-scale statistical analyses, but comparing detailed models to simpler ones can be helpful in understanding the relative importance of various modeling assumptions that are necessary to understand complicated phenomena such as cascading.
THE IMPACT OF TECHNOLOGY TRANSFER ON INNOVATION
The aim of this paper is to examine two types of relationships among businesses in Azerbaijan: the first between firms' R&D activities and innovation, and the second between technology transfer and innovation. Data collection was conducted through surveys among 300 small and medium businesses operating in different sectors of the economy in Azerbaijan. The novelty of the research lies in 1) surveying the SME sector, which has less intensive innovation activities than large, capital-intensive firms, and 2) finding that SMEs owned entirely by foreign investors are more innovative than firms owned by local investors. Developed and developing economies have attached significant importance to technology transfer as a catalyst of innovation. Transfer of knowledge and technology from its generators, including universities and research institutions, to industry has shown results in countries where there is a strong bridge between universities and industry. In other economies, where there is no such strong link between industry and research institutions, innovation can be promoted by adopting ready technology developed by universities and businesses abroad. The results of the econometric analysis indicate that while a strong relationship exists between R&D investment and innovation, there is no strong empirical support that obtaining licenses will increase the innovation potential of firms. Partnership between firms and research centers as well as universities, on the other hand, leads to increased innovativeness of the businesses under study.
Introduction
Businesses can increase their competitive edge by building know-how and innovation potential through technology transfer, which refers to the procurement of technology developed by universities or businesses within or across national boundaries. The OECD defines innovation as the implementation of a new or significantly improved product, process, marketing method, or organisational method; innovation activity among businesses in Azerbaijan, however, has remained limited in recent years (Kuriakose, 2013). This shows a need for a system which encourages more innovative activity among small and medium businesses. Traditional entrepreneurialism lacks the innovation dimension, and in the past it has threatened technological progress and economic growth in the long run (Block et al., 2013). Thus, the exploitation of new knowledge within the knowledge-driven entrepreneurial economy is a main issue for contemporary organizations (Audretsch and Link, 2018), and individual entrepreneurs are also placed at the center of the innovation system (Acs et al., 2017). Not only large companies and innovation-driven enterprises should be concerned with innovation, but also traditional small and medium enterprises (Acs et al., 2014). In general, the Industry 4.0 process may contribute to expectations of future performance by implementing new technologies, which provide the background for development (Dalenogarea et al., 2018). This paper presents arguments proposed by several researchers that theoretically highlight how technology transfer can create capability and impact business performance. It builds on previous research on the importance of technology transfer for competitiveness at the firm level. The authors conducted an analysis of the level of innovation among Azerbaijani businesses. The rest of the paper proceeds as follows: section two reviews the literature and the third section discusses the data analysis; results of the analysis are presented in section four, and conclusions are given at the end of the paper.
Literature Review
Technology transfer is an important force behind developing technological capabilities, as it is now recognized as having played an important part in the industrial development of most developing economies in the 21st century. Some researchers have defined technology transfer from a broader perspective as the movement of knowledge, skills, organization, values, and capital from the point of generation to the site of adaptation and application. Mansfield (1975) classified technology transfer into vertical and horizontal, explaining that vertical transfer relates to the movement of technology from basic research to applied research, to development, and then to production, while horizontal transfer deals with the movement and use of technology applied in one place, organization, or context to another place, organization, or context. Technology transfer from developed to underdeveloped countries (often referred to as North-South technology transfer) started to attract researchers in the 1990s. Acs and Preston (1997) highlighted the role of exports and foreign direct investment as the main channels, through multinational firms, for technology transfer and globalization. Currently, the focus of technology transfer is on the exploitation of comparative advantages within global competition rather than the acceleration of economic development in underdeveloped nations (Audretsch et al., 2014).
Another driver of entrepreneurial discovery and exploitation is access to information. Firms with access to information show better innovation performance and employ appropriate technology. Companies get the information necessary for innovation-related activities from their R&D departments, customers, and particularly universities and public research organizations (Fiet and Patel, 2008; Link et al., 2008). Lizinska et al. (2014) highlighted the importance of the institutional background (i.e., different business environment institutions) in relation to foreign direct investment. According to their results, regional and local development agencies, chambers, and training and advisory institutions are the organizations that might help raise the growth potential of local businesses by increasing their knowledge and skills and providing proper incentives to attract new investors. Some researchers have also studied the role of informal university technology transfer (Grimpe and Fier, 2010), and the role of incubators, coaches, and financial intermediaries within the technology transfer process (Colombo et al., 2010). Informal technology transfer may be especially significant for developing countries, whose underdeveloped infrastructures limit the development of capital-intensive technology requiring expensive laboratories and equipment (Audretsch et al., 2014). Technology transfer helps developing countries improve their terms of trade and may result in a decrease in the welfare of workers in developed nations (Krugman, 1979). Technology transfer in the operations of firms in developing countries has been shown to result in improved knowledge, value-added processes through technology adoption, and enhanced competitive advantage for business performance (Liao and Hu, 2007). Evidence from early and late industrializers shows that the innovation performance of a firm has various drivers. After analyzing Chinese innovative firms, Zhang and Tang (2017) concluded that the collaboration breadth of employees positively affects the innovation performance of firms, while technological heterogeneity among employees positively moderates the relationship between collaboration breadth and innovation performance. In another study, among Japanese firms, Kwon and Park (2018) revealed that firms that are mainly owned by another firm are not as active in R&D as independent firms, and that R&D activities are significantly and positively related to foreign ownership if the parent firm is not from a G7 country. Enterprises acting entrepreneurially (within the context of a network of competitors, suppliers, and customers) have been the major players in developing technological capabilities and competitiveness. Sophisticated consumer demands and rapid change in technology require firms to have inimitable structures and practices that protect them from unpredictable macro- and micro-environmental factors. In the short term, technology transfer offers a company, especially a local firm, lower costs or better and improved products; in the longer term it develops capability which feeds into a sustainable competitive advantage, depending on the effectiveness of the application of the acquired technology. Thus, the role, impact, and contribution of technology transfer cannot be underestimated. According to the World Bank report, the level of technology transfer among businesses in Azerbaijan needs to be improved (Kuriakose, 2013).
This report analyzed four types of innovative activities in Azerbaijan: introducing new products and services (product innovation), upgrading an existing product or service (process innovation), investing in research and development (R&D), and licensing technology from a foreign-owned company. According to the findings of the report, while nearly 70 percent of the businesses analyzed had undertaken process innovation, the rate of product innovation was significantly lower, and fewer than 10 percent of the businesses had invested in R&D. The report suggests that organizations can facilitate knowledge transfer from research institutions to SMEs through collaborative research and technology programs as well as through staff exchanges (researchers and engineers placed in firms). According to the report, industry-research collaboration is weak in Azerbaijan, with limited R&D even among high-growth firms.
Methodology
To analyze the impact of R&D investment by companies on their innovative potential, 300 companies operating in different industries in Azerbaijan formed the target sample for the study. The cross-sectional survey approach was used to collect data, with questionnaires administered in person by trained research assistants during March-June 2016.
Confidentiality and anonymity were guaranteed in order to encourage cooperation and information sharing by all the companies. Respondents were asked to respond to 12 multiple-choice questions. Possible respondents were selected randomly from different sectors (manufacturing, retail, other) in Baku and other regions of the country. A total of 260 usable questionnaires were received, representing a response rate of 87%. All received responses were used in the analysis, with missing values and outliers deemed to be within acceptable ranges. Although there are no exact statistics about the size of the target population of companies, the number of usable questionnaires received is considered large enough given the difficulties encountered during data collection. The innovation activeness of firms was assessed by adopting a multidimensional approach, including measures relating to product innovation (introduction of new products/services), process innovation (improvement of existing products), and international expansion (export of products). The econometric analyses were conducted using EViews. To conduct the regression analysis, the Ordinary Least Squares (OLS) method was chosen.
Explanation of variables
The variables (dependent and independent) listed below were selected specifically for the purpose of our analysis to test the impact of TT (technology transfer) on the innovation activities. The questions included general information about the companies, such as the number of years the companies have been operating, size of the companies, field of operation, and form of ownership. Additionally, the respondents were asked the number of new products they introduced to the market (product innovation) or improved any products/services (process innovation), R&D investment in the last 3 years, and the number of licenses to new technology they purchased as well as partnership with research centers.
The list of the variables used in the regression analysis is as follows:
• NP (new products) measures product innovation and indicates the number of new products introduced to the market by the company over the past 3 years. Respondents were invited to select one of the answer choices: none, 1-10, 11-20, more than 20. The categorical information was then transformed to quantitative form by assigning numerical values from 0 (if the answer is none) to a maximum of 3 (if the answer is more than 20);
• IP (improved products) is a measure of process innovation and of whether the company has improved products over the past 3 years. Based on yes/no answers, a dummy variable was created that equals 1 if the company improved products during the period and 0 otherwise;
• Lis (licenses) refers to the number of licenses from foreign companies owned by the respondent companies that participated in the survey, measured in number units;
• PRC (partnership with research centers) indicates the number of research centers that engage in partnership with the company, measured in number units as provided by respondents;
• RD (research and development) shows the amount of R&D investment during the last 3 years. Answer choices included none, 1000-25000 AZN, 25001-50000 AZN, 50001-100000 AZN, and more than 100000 AZN. The responses were transformed to quantitative form, from 0 (if the answer is none) to 4 (if the answer is more than 100000 AZN);
• F (foreign ownership) is a dummy variable that equals 1 if the company is completely foreign owned and 0 otherwise;
• L (local ownership) is a dummy variable that equals 1 if the company is completely locally owned and 0 otherwise;
• MF (mainly foreign) is a dummy variable that equals 1 if the company is owned by foreign and local investors and foreign investors hold the higher proportion of ownership, and 0 otherwise;
• ML (mainly local) is a dummy variable that equals 1 if the company is owned by foreign and local investors and local investors hold the higher proportion of ownership, and 0 otherwise;
• P (production) is a dummy variable that equals 1 if the company operates in the production sector and 0 otherwise;
• R (retail) is a dummy variable that equals 1 if the company operates in the retail service sector and 0 otherwise;
• O (other) is a dummy variable that equals 1 if the company operates in other sectors and 0 otherwise.
Models
To conduct the econometric analysis, the Ordinary Least Squares (OLS) method was used.
The following equation was built to conduct the analysis:
Y_i = β0 + β1·Lis_i + β2·PRC_i + β3·RD_i + β4·F_i + β5·MF_i + β6·ML_i + β7·P_i + β8·R_i + ε_i,
where Y_i is the innovation measure (NP or IP) for firm i. The sector dummies P and R indicate the difference in NP and IP according to the sector (production or retail) in which the firm operates; other sectors (O) is used as the comparison or base group in estimation. At the same time, to test for the correlation between foreign equity in a firm and innovation, the ownership dummies F, MF, and ML are used, with fully locally owned enterprises (L) as the comparison group. As can be seen from the model, two dependent variables are used to capture the innovation potential of companies: NP (new products) and IP (improved products). In other words, the model is estimated separately for each dependent variable using the same independent variables; therefore, two different regression equations are built.
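A minimal sketch of how these two regressions could be estimated is given below. The column names follow the variable definitions above, while the data file and the use of statsmodels are illustrative assumptions rather than the authors' actual workflow.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative estimation of the two regressions described above with OLS.
df = pd.read_csv("survey_responses.csv")   # assumed columns: NP, IP, Lis, PRC, RD, F, MF, ML, P, R

rhs = "Lis + PRC + RD + F + MF + ML + P + R"       # O and L are the omitted base groups
model_np = smf.ols("NP ~ " + rhs, data=df).fit()   # product innovation (model 1)
model_ip = smf.ols("IP ~ " + rhs, data=df).fit()   # process innovation (model 2)

print(model_np.summary())
print(model_ip.summary())
```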
Data analysis and descriptive statistics
Among the 260 companies whose responses were accepted for analysis, some did not disclose all the information requested in the survey. Overall, the distribution of the respondents across sectors is as follows: 24.2% from production, 30.4% from retail, and 45.4% from other sectors. Approximately 9% of the respondent companies are owned entirely by foreign entities, while 51.7% belong entirely to local owners. In 16.2% of the participating companies the majority of shares is owned by foreign entities, while local entities hold the majority in 23.2% of cases. Data features are given in Table 1. The standard deviation is quite high for two of the variables, «licenses» and «partnership with research centers», at 7.8 and 4.5 respectively. The number of new products and the amount of R&D investment have standard deviations higher than 1; all other variables have standard deviations equal to or lower than 0.5. The outlier for «licenses» is the largest local mobile operator, which is a subsidiary of international companies and therefore possesses a higher number of licenses than other companies in the local market. Outliers for the «partnership with research institutions» variable can also be explained by the nature of the industry these firms operate in.
Interpretation of analysis
The results of the econometric analysis are given in Table 2. As indicated in the previous sub-section, the two dependent variables NP (model 1) and IP (model 2) are estimated using two different regression equations. For each model, the relevant standard deviations are shown next to the regression coefficients. The results of the analysis show that the variable Lis (number of licenses) does not have a statistically significant impact on the variable NP (number of new products introduced to the market). However, the variables PRC (number of partner research centers) and RD (R&D investment) do have a significant positive impact on NP, at significance levels of 10% and 5%, respectively. As expected, this shows that firms that partner with more research centers, as well as those that invest more in R&D, are more innovative than other firms.
The analysis showed that the innovation potential of firms depends on the presence of foreign investment in the firm as well as on the sector of the economy. A higher level of foreign capital increases the innovation potential of firms. Firms partially owned by foreign investors (mainly foreign and mainly local) have higher innovation potential than firms owned completely by local investors; however, this difference is not statistically significant. The difference in the innovation potential of firms completely owned by foreign investors, in comparison with firms completely owned by local investors, is statistically significant. When the innovation potential of firms with regard to the sector of the economy in which they operate was analyzed, it was determined that, compared to firms operating in the production sector (P), firms operating in other sectors (O) had higher innovation potential, although the difference was not statistically significant. On the other hand, compared to companies operating in the retail sector (R), firms operating in other sectors (O) had higher innovation potential, and the difference was statistically significant. The econometric analysis regarding the number of improved products, which is one of the main indicators of the innovation potential of companies, reveals a similar result. Although the number of licenses has a positive relationship with IP, both economic and statistical significance tests show that the relationship is weak. The empirical results indicate that partnership with research centers (PRC) does not have a significant impact on improved products (IP). One of the main hypotheses in this research, that R&D expenses in a company strongly encourage innovation potential, has been proven, which is reflected in the results of the empirical studies: R&D has a positive impact on IP, and this relationship is both economically and statistically significant. The econometric analysis also reveals that as the share of foreign capital in firms increases, the innovation potential of those firms increases, as evidenced by the introduction of new and improved products to the market. The innovation potential of firms owned partially or mainly by foreign investors is higher than that of firms fully owned by local investors, but the difference is not statistically significant. However, firms fully owned by foreign investors have significantly higher innovation potential than firms with other forms of ownership. The coefficient of the difference (0.574) is economically significant, with a statistical significance level of 5%. This clearly proves the hypothesis that foreign investment participation is positively correlated with product innovation (NP) and process innovation (IP). Analysis of the innovation potential across sectors indicates that the production sector (P) leads the other sectors; compared to other sectors, companies in the production sector have significantly higher innovation potential as measured by improved products. The retail sector follows the manufacturing sector; however, the statistical significance of this difference is weak.
Discussion of the results
The current study focused on how technology transfer can increase the innovation potential of local firms. An econometric analysis of survey responses from 260 firms in different sectors of the Azerbaijani economy was conducted to test the relationship between independent factors, such as the sector in which a firm operates, the number of licenses obtained, the presence of foreign capital and the level of partnership with research institutions, and the dependent variables of product and process innovation. The results strongly supported the hypothesis that partnership with research institutions increases the potential of firms to introduce new products to the market. This emphasizes the importance of establishing closer ties between the business sector and universities as well as research institutions. Universities and other higher education institutions (HEIs) are an important source of new scientific knowledge. The potential is even higher at universities that offer degrees in engineering and applied sciences (Pazos et al., 2012). Industry can gain access to this knowledge by developing formal and informal links with higher education institutions (OECD, 1993). Another important finding of the analysis was the impact of R&D investment on the innovation potential of businesses: investment in R&D is strongly correlated with the potential of firms to innovate. In addition to commercializing the results of R&D activities conducted by businesses themselves, effective commercialization of research conducted at academic and research institutions is also important for increasing the competitiveness of businesses and national economies (Abdurazzakov, 2015; Abdurazzakov, 2016; Abdurazzakov, 2013). It is worth noting the importance of managing the intellectual property derived from R&D activities; the literature on technology transfer and innovation emphasizes that properly managed intellectual property spurs innovation at the micro level (Van Norman and Eisenkot, 2017). The problems explored by our study are in line with the findings of Cygler and Wyka (2019), who concluded that the main barriers to international cooperation in R&D are mostly related to the fear of losing companies' independence, the fear of ineffective activities, and the fear of difficulties in estimating the potential costs and benefits of cooperation. These fears are mostly due to insufficient knowledge about R&D projects and poor infrastructure and business networks. It was interesting to see, however, that the analysis did not support the notion that the number of licenses obtained by a firm leads to product innovation. Finally, the analysis in the case of Azerbaijan revealed that participation of foreign investors in the ownership of local businesses is an important factor in increasing innovation potential. Firms with foreign ownership (fully, mainly or partially owned by foreign investors) were more innovative than firms fully owned by local investors. This shows the need to attract more foreign investors in order to increase the potential of firms to innovate, a notion also supported by Ghebrihiwet and Motchenkova (2017). The role of leaders and managers should also be underlined. Bilan et al. (2020) highlighted that managers play an important role in organizational learning, which significantly enhances a firm's sustainability and competitiveness. An innovative culture plays a vital role in enhancing a firm's sustainability, and an open-minded leadership style may create the flexible, innovative environment that contributes to building a viable entrepreneurial ecosystem.
Conclusion
As emerging economies like Azerbaijan prioritize moving away from a natural-resource-based economy to an innovation-based economy, it is essential to consider the impact of different mechanisms to stimulate innovation. The variables discussed and analyzed here are relevant in other countries as well, so the findings of this research can be applicable beyond Azerbaijan, in countries searching for an effective path to innovation-based growth. For SMEs, among which internationalization is not frequent, attracting foreign investors is not the easiest route to innovation. Instead, establishing a strong entrepreneurial ecosystem, built on cooperation between the business sphere, higher education institutions and research institutions, is a more appropriate way of achieving technology transfer and innovation. In this case, knowledge and technology transfer can be the starting point of product development and human capital improvement. The novelty of the research lies in (1) surveying the SME sector, which has less intensive innovation activity than large, capital-intensive firms, and (2) showing that SMEs owned entirely by foreign investors are more innovative than firms owned by local investors. The analysis indicated that SMEs completely owned by foreign investors are more innovative than those owned entirely by local investors; overall, attracting foreign capital significantly increases the innovation potential of local firms. Thus, the government of Azerbaijan could stimulate innovation by encouraging the establishment of fully foreign-owned companies in the country. As with all research, this paper has some limitations. Although it contributes to the literature on the relationship between innovation, R&D activities of firms and technology transfer in Azerbaijan, more research should be conducted to analyze the drivers of innovation and technology transfer. Additionally, less is known about how effective R&D is for the innovation performance of developing countries, or about how partnership with research institutions may increase the innovativeness of firms, and Azerbaijan is no exception. Further research should conduct a similar analysis in the context of other emerging countries to see how the factors included in this study affect innovation potential in a different setting. Future work can also examine the impact of factors not included in this research (such as firm age, location, etc.) on innovation.
Total organic carbon estimation in seagrass beds in Tauranga Harbour, New Zealand using multi-sensors imagery and grey wolf optimization
Abstract Estimation of carbon stock in seagrass meadows is challenged by a paucity of assessments and the low accuracy of existing estimates. In this study, we used a fusion of synthetic aperture radar (SAR) Sentinel-1 (S-1) and multi-spectral Sentinel-2 (S-2) imagery, coupled with advanced machine learning (ML) models and meta-heuristic optimization, to improve the estimation of total organic carbon (TOC) stock in the Zostera muelleri meadows in Tauranga Harbour, New Zealand. Five scenarios containing combinations of data, ML models (Random Forest, Extreme Gradient Boost, Rotation Forest, CatBoost) and optimization were developed and evaluated for TOC retrieval. Results indicate that a fusion of S-1 and S-2 images, the novel ML model CatBoost and the grey wolf optimization algorithm (the CB-GWO model) yielded the best prediction of seagrass TOC (R2 and RMSE were 0.738 and 10.64 Mg C ha−1, respectively). Our results point toward low-cost, scalable and reliable estimates of seagrass TOC globally.
Introduction
Seagrass meadows are hotspots of biodiversity (Mari et al. 2021; McHenry et al. 2021) and are widely distributed in different climatic regions. In recent years, diverse research approaches have unveiled valuable seagrass ecosystem services of nursery and breeding grounds (Jiang et al. 2020), water filtration (Bulmer et al. 2018; Lincoln et al. 2021), food supplies (Jankowska et al. 2019), coastal stabilization (James et al. 2020) and sediment trapping (Nordlund et al. 2016; Potouroglou et al. 2017; Orth et al. 2020).
Since 2012, the blue carbon initiative (Fourqurean et al. 2012) has emerged in response to the climate change emergency and the essential requirement of greenhouse gas (GHG) reduction (Hilmi et al. 2021). According to the Intergovernmental Panel on Climate Change (IPCC), a reduction of global GHG emissions of 4-10% by the year 2030 will help to alleviate the negative impact of global warming (Grubb et al. 2022). Aside from a direct reduction of GHG from production sectors in countries worldwide, absorption and sequestration of CO2 emissions from the atmosphere is crucial to reach the 13th Sustainable Development Goal (SDG). Among coastal ecosystems, seagrass has been identified as playing a role in the sequestration of carbon (Duarte et al. 2013; Macreadie et al. 2019; Bedulli et al. 2020; Stankovic et al. 2021) through its potential to absorb and store CO2 in deep soil layers. It has been argued that restoration and enhancement of seagrass-based ecosystems is a potential candidate for a long-term, nature-based solution to climate change mitigation and adaptation (Stankovic et al. 2021), though the extent to which this is possible is contested, with the IPCC suggesting that coastal blue carbon systems can sequester only about 0.5% of present day emissions (IPCC 2019). A major gap in the evaluation of the potential significance of seagrass meadow blue carbon is the limited number of assessments of carbon stock, which hampers understanding of the relationship between seagrass and climate change mitigation (Thorhaug et al. 2017; Salinas et al. 2020).
To measure carbon stock in a seagrass meadow, field sampling coupled with laboratory analysis is the classical approach (Howard et al. 2014; Susi et al. 2019). This traditional sampling and soil analysis can provide an accurate measurement of total carbon stock; however, a number of drawbacks remain, associated with the high person-time requirement that normally results in small-scale observations at a few, readily accessible locations. Remote approaches using satellite-borne sensors and the development of retrieval models are emerging as the fastest route to large-scale, long-term and cost-effective mapping and monitoring of carbon stock (Pham et al. 2019; Sani et al. 2019). Unlike the mapping of seagrass biomass or above-ground parameters, which can potentially be derived directly from remotely sensed reflectance (Ha et al. 2021a), a carbon stock retrieval that includes the sediment contribution often requires an indirect estimation approach through the use of spectral indices, soil indices, and a variety of remote sensing bands (Yang et al. 2015; Sani et al. 2019; Pham et al. 2021). As a result, the fusion of multi-spectral data, returning visible and infra-red optical information, with synthetic aperture radar (SAR) data, returning information on texture and roughness, has recently been preferred (Yang and Guo 2019; Le et al. 2021; Nguyen et al. 2022). This has motivated a wide application of the Sentinel family of remote sensing satellites, with the most popular sensors being the SAR-equipped Sentinel-1 (S-1) and the multi-spectral Sentinel-2 (S-2). S-1 acquires images at a spatial resolution of roughly 23.5 m, with a 12-day revisit cycle, and collects data at two polarizations, vertical transmission-horizontal receive (VH) and vertical transmission-vertical receive (VV) (ESA-S1 2020). S-2 provides 12 spectral bands (visible-near infrared (VNIR) to short wavelength infrared (SWIR)) at spatial resolutions of 10-60 m, on a 5-day revisit cycle (Naud et al. 2021). The integration of S-1 and S-2 sensors has been employed for the retrieval of various biophysical parameters (Mahdianpari et al. 2018; Navarro et al. 2019; Le et al. 2021; Ha et al. 2021a); however, their potential use for seagrass carbon stock estimation is still in its infancy and requires further experiments to validate the sensor performance (Macreadie et al. 2019; Sani et al. 2019).
In order to accurately estimate carbon stock from satellite imagery, a retrieval model that quantifies the relationship between remote sensing data and field measurements is a central component. Currently, machine learning (ML) models are widely used in developing these relationships. ML is capable of dealing with non-linear relationships and of learning from a variety of data types, and has recently contributed to notable successes in biophysical appraisal (Muttil and Chau 2007; Lary et al. 2016; Ahmad, 2019). More recently, metaheuristic optimization for feature selection is emerging as the tool of choice for feature reduction to improve the quality of retrieval modelling (Agrawal et al. 2021; Ezenkwu et al. 2021; Ha et al. 2021a). To the best of our knowledge, a fusion of satellite products, ML modelling and metaheuristic optimization has yet to be evaluated for the prediction of any seagrass ecosystem attribute.
In this study, we develop such an approach, using the fusion of S-1 and S-2 datasets, the state-of-the-art ML model CatBoost (CB) and the grey wolf optimization (GWO), to estimate, for the first time, the seagrass total organic soil carbon stock at an accuracy of R2 = 0.74 in Tauranga Harbour, New Zealand. The method provides a reliable, low-cost and accurate estimator of seagrass carbon stock that, with appropriate validation and calibration, could be applied worldwide.
Study site
Our study site is Tauranga Harbour, located in the western part of the Bay of Plenty, North Island, New Zealand (Ellis and Cawthron 2013), with approximately 14,000 ha of water surface area at high tide. The harbour (Figure 1) comprises north and south basins that drain in different directions to the Bay of Plenty marine environment. There are six sub-estuaries in the northern and southern basins, and the two basins are connected through an intertidal flat in the centre of the harbour (Tay et al. 2013). Seagrass meadows in Tauranga Harbour are in long-term decline (Ha et al. 2021b) but persist despite anthropogenic pressures from New Zealand's busiest port, intensive catchment agriculture, and ongoing urban development (Tay et al. 2013; Cussioli et al. 2019; Figure 1). Only one small-leaved seagrass species has been recorded in the study area, Zostera muelleri (Tara et al. 2019), and it mostly occupies the intertidal region (Park 1999; Ha et al. 2020). The tide is semi-diurnal with a tidal range of 0.5-2 m (Reeve et al. 2018). At low tide, seagrass is mostly exposed to the air and hence is available for S-1 and S-2 imaging. The soil around the harbour is classified as saline grey raw soil, with loam covering the sand and 0-2% clay in the topsoil (Environment Bay of Plenty 2010). The depth of root penetration ranges between 20 and 60 cm from the soil surface (Environment Bay of Plenty 2010), and therefore there is potential for seagrass meadows to contribute to carbon sequestration.
Satellite image acquisition
S-1 and S-2 scenes were retrieved from the European Space Agency (ESA) Copernicus hub (https://scihub.copernicus.eu/dhus/#/home) and the United States Geological Survey (USGS) global visualization viewer (USGS GloVIS, https://glovis.usgs.gov/) (Table 1). The S-1 image was originally processed to level 1 and projected to the World Geodetic System (WGS-84), while the S-2 image was at Level-1C in the WGS-84 Universal Transverse Mercator (UTM) zone 60 south (60S) projection. The images acquired on 31 March (S-1) and 5 April (S-2) were selected as closest to the field surveys of 7-25 March 2020 and coincident with low tide, to maximize the chance of exposing most seagrass meadows in the harbour (Ha et al. 2020, 2021a).
Field data collection
The field survey was conducted in the austral summer (between 7 and 25 March 2020). The locations of the soil sampling plots were randomly selected from ground truth point (GTP) data collected in 2019 (Ha et al. 2020), with the additional requirement of accessibility with sampling equipment. At low tide at each site, we designated a 10 m × 10 m plot inside the seagrass meadow, which fitted a pixel size of the satellite image, and collected one soil core at the centre of the plot using a rigid plastic soil corer with a metal cutting tip. A total of 57 sediment samples to a depth of 50 cm were collected (Howard et al. 2014). Soil subsamples (a maximum of 20 cm3 in volume) were extracted using a plastic syringe from each of three sampling depths (10, 30, 50 cm), kept in labelled plastic bags, and maintained at −10 °C until further sample processing in the laboratory (Saiz and Albrecht 2016; Smith et al. 2020).
Methodology
To estimate the seagrass meadow TOC from remotely sensed data, we developed our method in a series of steps (Figure 2): preprocessing and creation of S-1 and S-2 stacked images was followed by testing the performance of four different ML models, described below, for TOC retrieval through four scenarios, using the field data for training and validation of the models. Scenario 1 used only S-1 data, Scenario 2 used only S-2 data, Scenario 3 used both S-1 and S-2 data, and Scenario 4 was based on Scenario 3 with the addition of correlation-based feature selection, described below. After the ML model best predicting TOC was selected, Scenario 5 was run, which used both S-1 and S-2 data with feature selection by the GWO algorithm.
2.3.1. Soil sample analysis and total organic carbon measurement
Soil sample analysis. In the laboratory, the soil subsample for each depth interval was dried at 60 °C for 72 h in an oven in a pre-weighed aluminium cup. The dried soil subsample was then taken out of the oven, cooled in a desiccator and weighed using an electronic balance (accuracy ±0.01 g). The dry bulk density (DBD) was calculated following Howard et al. (2014) as the dry sample mass divided by the original sample volume (g cm−3).
Measurement of loss on ignition (LOI). Organic content was estimated by the loss on ignition method. A subsample of the dried soil (approximately 15 mg) was homogenized by grinding with a mortar and pestle and weighed into a porcelain crucible (to ±0.01 g). The sample was transferred to a muffle furnace and combusted at 450 °C for 6 h. The ashed soil was taken out of the furnace, cooled in a desiccator for 1 h, and reweighed. The weight loss was calculated and the percentage loss on ignition (Howard et al. 2014; Githaiga et al. 2017) was estimated as %LOI = (mass before combustion − mass after combustion) / mass before combustion × 100. Using the %LOI, the percentage of organic carbon (%OC) was estimated from the empirical relationship given by Howard et al. (2014).
Total organic carbon stock (TOC) measurement. The TOC of each soil core was calculated using the proposed protocol for seagrass meadows (Howard et al. 2014), from the DBD, %OC and core length, and converted to Mg C ha−1.
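To make the bookkeeping of the core calculation concrete, the following is a minimal sketch in Python. The depth intervals, sample values and the %LOI-to-%OC coefficients are placeholders; the published empirical coefficients for seagrass soils in Howard et al. (2014) should be substituted.

```python
# Minimal sketch of the per-core TOC bookkeeping (Howard et al. 2014 protocol).
# Numeric inputs below are hypothetical; the %LOI-to-%OC slope/intercept must be
# taken from the published empirical relationship for seagrass soils.

def dry_bulk_density(dry_mass_g, sample_volume_cm3):
    """Dry bulk density (g cm^-3)."""
    return dry_mass_g / sample_volume_cm3

def percent_loi(mass_before_g, mass_after_g):
    """Loss on ignition (%) after combustion."""
    return (mass_before_g - mass_after_g) / mass_before_g * 100.0

def percent_oc(loi_percent, slope, intercept):
    """Organic carbon (%) from %LOI via an empirical linear relationship."""
    return slope * loi_percent + intercept

def core_toc_mg_c_per_ha(layers):
    """
    layers: list of (dbd_g_cm3, oc_percent, thickness_cm) tuples, one per depth interval.
    Returns total organic carbon stock in Mg C ha^-1.
    """
    total_g_c_per_cm2 = sum(dbd * (oc / 100.0) * thickness
                            for dbd, oc, thickness in layers)
    # 1 g C cm^-2 corresponds to 100 Mg C ha^-1.
    return total_g_c_per_cm2 * 100.0

# Hypothetical core with three depth intervals (0-10, 10-30, 30-50 cm).
example_core = [(0.9, 1.2, 10.0), (1.0, 1.5, 20.0), (1.1, 1.1, 20.0)]
print(core_toc_mg_c_per_ha(example_core))
```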
Sentinel-1 image processing and band transformation
The raw intensity values of the S-1 image were converted to the backscattering coefficient (σ0) following the seven steps described by Filipponi (2019): (1) apply the orbit file; (2) thermal noise removal; (3) border noise removal; (4) radiometric calibration; (5) speckle filtering; (6) range Doppler terrain correction; and (7) conversion of pixel values to the normalized radar backscatter (Equation 5) of the VH and VV bands, where σ0 is the normalized backscattering coefficient and DN is the digital number of the raw intensity.
In addition, we applied the commonly used SAR band transformations (Pham et al. 2021; Ha et al. 2021a) to increase the number of input features for the ML models. There were a total of 27 transformed bands, including five band ratios, twenty bands derived from the grey level co-occurrence matrix (GLCM), and two first principal components (PCA1): one derived from the seven bands (the VH and VV bands and the five band ratios) and one from the 20 GLCM bands, denoted PCA1_7b and PCA1_GLCM, respectively (Table 3S). The steps of image preprocessing and band transformation were implemented in the Sentinel Application Platform (SNAP) software (ESA 2020). All the bands were converted to the WGS-84 UTM-60S projection and resampled to a ground sampling distance (GSD) of 10 m.
Sentinel-2 image processing and band transformation
The S-2 Level-1C image was atmospherically corrected to convert the pixel values from top-of-atmosphere (TOA) reflectance to surface reflectance (SR). We used the atmospheric correction for operational land imager (OLI) 'lite' toolbox (ACOLITE) (Vanhellemont 2016) with the dark spectrum algorithm (Vanhellemont 2019) and predefined input parameters (Table 1S) in the Python environment, resulting in an 11-band SR image.
To improve the capability of soil carbon retrieval, various vegetation and soil radiometric indices (VIs and SIs) have been suggested (Pham, Le, et al. 2020; Ha et al. 2021a) for extracting the carbon content in soil layers. Eight VIs were included here: ratio vegetation index (RVI), normalized difference vegetation index (NDVI), green normalized difference vegetation index (GNDVI), enhanced vegetation index 2 (EVI2), normalized difference index using bands 4 and 5 (NDI45), soil-adjusted vegetation index (SAVI), inverted red-edge chlorophyll index (IRECI) and modified chlorophyll absorption in reflectance index (MCARI), together with the three SIs of brightness index (BI), redness index (RI) and colour index (CI); all were created from the original SR bands (Table 2S), resulting in 22 bands for the S-2 image.
The last step was to stack the 29 S-1 and 22 S-2 derived image bands to create a new 51 feature image at a 10 m spatial resolution and in the projection of the WGS-84 UTM-60S.
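As an illustration of this band-math step, the following is a minimal sketch computing two of the listed indices (NDVI and SAVI) from surface-reflectance arrays; the band pairing (band 8 as NIR, band 4 as red) and the SAVI adjustment factor L = 0.5 are assumptions made for the example, not settings stated in the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index with soil adjustment factor L (0.5 assumed here)."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Hypothetical surface-reflectance arrays for S-2 band 8 (NIR) and band 4 (Red).
nir = np.array([[0.35, 0.40], [0.28, 0.31]])
red = np.array([[0.08, 0.07], [0.12, 0.10]])

# Stack the derived indices as extra feature layers alongside the original bands.
stacked_indices = np.stack([ndvi(nir, red), savi(nir, red)])
```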
Machine learning model
Random forest (RF). The RF algorithm, developed by Breiman (2001), is one of the best-known machine learning techniques and can be applied effectively to both classification and regression tasks (Breiman 2001). In the regression setting, a large number of regression trees is grown, each built from a unique bootstrap sample of the original data, which helps reduce overfitting. The original data are split, with about two-thirds of the samples (in-bag data) used for training and the remaining samples (out-of-bag [OOB] data) used for testing. In the RF model, the number of regression trees and the number of predictor variables must be tuned beforehand.
Rotation forest (RoF). The RoF algorithm belongs to the ensemble decision tree family and is widely used for both classification and regression problems (Rodriguez et al. 2006). In the RoF model, the original feature set is divided into subsets, and principal component analysis (PCA) is applied to each subset with bootstrapped training samples to generate a transformation matrix. A new training set, produced by rotating the original training set with this transformation matrix, is used to train each decision tree. Finally, a majority voting rule is applied over the individual decision tree outputs to produce the final result.
Extreme gradient boost (XGB). The XGB algorithm, which belongs to the gradient boosting decision tree family, was first developed by Chen and Guestrin (2016). The XGB model is one of the most accurate ML techniques and has been widely used to handle both classification and regression problems (Pham, Yokoya, et al. 2020; Ha et al. 2021a, 2021b). A novelty of the XGB model is its scalability in all scenarios, including sparse data. Other advantages of the XGB algorithm are parallelization, cache optimization and out-of-core computation, which allow it to train relatively more quickly than earlier gradient boosted regression tree techniques (Chen and Guestrin 2016). The algorithm is able to deal with the complexity of an ML model, particularly for large datasets. Moreover, the XGB technique can be combined with different optimization algorithms to tune hyper-parameters to best suit a specific dataset.
CatBoost (CB). The CB algorithm is a novel gradient boosting algorithm recently introduced by Dorogush et al. (2018). This technique is able to handle datasets with categorical features and to minimize over-fitting by choosing the best tree structure for calculating leaf values (Dorogush et al. 2018; Prokhorenkova et al. 2019). The CB model is one of the most powerful ML techniques recently implemented and released as an open-source library. It obtains excellent results in both classification and regression tasks by implementing ordered boosting, a modification of the gradient boosting algorithm (Dorogush et al. 2018), and has outperformed other members of the decision tree-based ensemble learning family, such as the XGB, RF and RoF algorithms, in various domains (Pham, Yokoya, et al. 2020; Le et al. 2021). Random permutations of the training set and of the gradients are generated for choosing the best tree structure, which enhances robustness and prevents overfitting.
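As a concrete illustration of how one of these regressors can be fitted to the stacked band features, the following is a minimal sketch. The feature matrix and TOC values are synthetic placeholders, and the hyper-parameter values are illustrative, not the tuned settings reported in Table 4S.

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: 57 plots x 51 stacked S-1/S-2 band features and TOC (Mg C ha^-1).
rng = np.random.default_rng(0)
X = rng.random((57, 51))
y = rng.uniform(15, 200, size=57)

# 60/40 random split, mirroring the split used for scenarios 1-4.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.6, random_state=42)

model = CatBoostRegressor(
    iterations=500,        # placeholder hyper-parameters
    depth=6,
    learning_rate=0.05,
    loss_function="RMSE",
    verbose=False,
)
model.fit(X_train, y_train)
pred = model.predict(X_test)
```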
Machine learning hyper-parameters optimization
Machine learning models have various hyper-parameters and require optimization (hyper-parameter tuning) to achieve the best performance for a given task. We used GridSearchCV in the Scikit-learn library (Pedregosa et al. 2011) with five-fold cross-validation (CV) to find the best combination of hyper-parameters for each ML model (Table 4S(a-d)).
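A minimal sketch of this tuning step is shown below; the search grid and data are placeholders (the actual grids used are listed in Table 4S).

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import GridSearchCV

# Placeholder data standing in for the extracted band features and measured TOC.
rng = np.random.default_rng(1)
X_train = rng.random((34, 51))
y_train = rng.uniform(15, 200, size=34)

# Hypothetical search grid.
param_grid = {
    "depth": [4, 6, 8],
    "learning_rate": [0.01, 0.05, 0.1],
    "iterations": [300, 500],
}
search = GridSearchCV(
    estimator=CatBoostRegressor(loss_function="RMSE", verbose=False),
    param_grid=param_grid,
    cv=5,                                  # five-fold cross-validation
    scoring="neg_mean_squared_error",
)
search.fit(X_train, y_train)
best_model = search.best_estimator_
```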
2.3.6. Metaheuristic optimization using GWO
Introduction of GWO. The GWO algorithm (Mirjalili et al. 2014) is a powerful member (Faris et al. 2018; Maddio et al. 2019) of the group of metaheuristic optimizers, works well with incomplete data to find a sufficient solution, and is inspired by the population structure, social hierarchy and hunting mechanism of the grey wolf (Canis lupus). The GWO model comprises four components, the alpha (head of the pack), beta, delta and omega 'wolves', which interact through three phases of hunting: tracking and chasing, pursuing and encircling, and attacking the prey. Like other metaheuristic optimizers, the GWO technique is capable of finding optimized solutions both for ML model hyper-parameters and for ML feature selection (Faris et al. 2018). The implementation of the GWO is simple, with only a few parameters to set: the number of iterations, the population size, and the objective function with a metric to minimize or maximize.
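For orientation, the following is a compact sketch of how a binary GWO wrapper can select a feature subset by minimizing cross-validated error. The population size, iteration count and the 0.5 binarization threshold are illustrative choices, not the settings used in the study (which used 200 wolves and 200 iterations with a CatBoost-based objective).

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def gwo_feature_selection(X, y, estimator, n_wolves=20, n_iter=50, seed=0):
    """Simplified binary GWO wrapper: returns a boolean mask of selected features."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    positions = rng.random((n_wolves, n_features))      # continuous positions in [0, 1]

    def fitness(pos):
        mask = pos > 0.5                                 # threshold to a binary feature mask
        if not mask.any():
            return np.inf
        return -cross_val_score(estimator, X[:, mask], y, cv=5,
                                scoring="neg_mean_squared_error").mean()

    scores = np.array([fitness(p) for p in positions])
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                       # linearly decreasing control parameter
        order = np.argsort(scores)
        alpha, beta, delta = positions[order[:3]]        # three best wolves guide the search
        for i in range(n_wolves):
            new_pos = np.zeros(n_features)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_features), rng.random(n_features)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                new_pos += leader - A * np.abs(C * leader - positions[i])
            positions[i] = np.clip(new_pos / 3.0, 0.0, 1.0)
            scores[i] = fitness(positions[i])
    return positions[np.argmin(scores)] > 0.5

# Example use (the estimator must be scikit-learn compatible, e.g. CatBoostRegressor):
# selected_mask = gwo_feature_selection(X, y, CatBoostRegressor(verbose=False))
```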
2.3.7. Total organic carbon (TOC) retrieval using the selected machine learning model
Design of retrieval scenarios. Various combinations of S-1 and S-2 (transformed) bands were analyzed to explore the retrieval accuracy of seagrass TOC (Table 2). Five scenarios were designed: scenario 1 used only the S-1 image (29 bands), scenario 2 only the S-2 image (22 bands), scenario 3 a combination of S-1 and S-2 images (51 bands), and scenarios 4 and 5 used optimal features derived from the S-1 and S-2 datasets by correlation-based and optimization-based feature selection, respectively. A threshold of 0.1 on the Spearman correlation coefficient was used to select the subset of input features in scenario 4, while the GWO algorithm, using the best model obtained from the first four scenarios as the base model, selected the optimal features for TOC estimation in scenario 5.
Seagrass TOC estimation and mapping. Recent studies show that random splitting in machine learning regression is the most common technique for estimating total organic carbon in mangrove ecosystems (Pham, Yokoya, et al. 2020; Le et al. 2021) and in agricultural lands (Nguyen et al. 2022). We employed this technique in the current work for seagrass TOC estimation.
We randomly split the TOC dataset (57 observations) into 60% (34) for the training and 40% (23) for the testing sets in the various scenarios (scenario 1-scenario 4). The best model derived from the designed scenarios was used to predict the seagrass TOC for the entire study site, to create the spatial distribution map of TOC. A pre-existing binary seagrass map (Ha et al. 2021b) was used to mask the non-seagrass parts in Tauranga Harbour.
Evaluation metrics
The skill of the ML models for seagrass TOC estimation was evaluated using a suite of standard metrics: the coefficient of determination (R2), root mean squared error (RMSE), and RMSE as a percentage of the mean (RMSE%) (Equations 6-8). In addition, the Akaike information criterion (AIC) and Bayesian information criterion (BIC) (Vrieze 2012) were applied to assess the statistical difference among the ML models (Equations 9 and 10), and the Taylor plot (Taylor 2001) was used to visualize the overall performance of each model in the space of the standard deviation (SD), correlation coefficient, and root mean squared deviation (RMSD). The position of a model in the plot is located by its SD, correlation coefficient and RMSD. A higher value of R2 and lower values of RMSE, RMSE%, AIC and BIC all indicate a better-performing model. The prediction interval was computed using the Python library doubt (Nielsen 2022), which uses the bootstrap technique (Sricharan and Srivistava 2012) to quantify the variation in a machine learning model prediction.
The metrics were computed as
R2 = 1 − Σi (yi − ŷi)2 / Σi (yi − ȳ)2,
RMSE = sqrt((1/n) Σi ei2),
RMSE% = 100 × RMSE / ȳ,
AIC = n ln(RSS/n) + 2K,
BIC = n ln(RSS/n) + K ln(n),
where ŷi is the predicted value of sample i, yi the corresponding true value, ȳ the mean of the observed values, ei the error term of sample i, n the total number of validation samples (observations), RSS the residual sum of squares, and K the number of parameters (including the intercept).
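A small helper that computes these metrics is sketched below; the parameter count K used for AIC and BIC must be supplied by the caller.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred, n_params):
    """R^2, RMSE, RMSE% of mean, AIC and BIC for a regression model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    n = y_true.size
    residuals = y_true - y_pred
    rss = np.sum(residuals ** 2)
    r2 = 1.0 - rss / np.sum((y_true - y_true.mean()) ** 2)
    rmse = np.sqrt(rss / n)
    rmse_pct = 100.0 * rmse / y_true.mean()
    aic = n * np.log(rss / n) + 2 * n_params
    bic = n * np.log(rss / n) + n_params * np.log(n)
    return {"R2": r2, "RMSE": rmse, "RMSE%": rmse_pct, "AIC": aic, "BIC": bic}
```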
Results
In Tauranga Harbour, organic carbon (OC) varied among the depth layers, with a trend of higher accumulation of OC at depths of 30 cm and 50 cm. The OC contents range from 2.73 to 27.1 Mg C ha−1 in the top 0-10 cm layer, 5.4-79.7 Mg C ha−1 in the 10-30 cm layer and 6.8-92.4 Mg C ha−1 in the 30-50 cm layer (Figure 3). Lower OC content in the 30-50 cm layer was observed in a few sampling plots due to the contribution of clay at that depth, while measured OC to a depth of 30 cm agreed with the OC content observed for Z. muelleri in Australia (Ewers Lewis et al. 2020). The total measured seagrass meadow TOC, integrated across depths, ranged from 15.5 to 199.3 Mg C ha−1 (Figure 1S), with a mean of 60.4 Mg C ha−1 and a standard deviation (SD) of 27.5 Mg C ha−1. Approximately 80% of the field data ranged between 15.5 and 70 Mg C ha−1, and up to 99% varied from 15.5 to 100 Mg C ha−1.
Seagrass total organic carbon estimation from remotely sensed data
The four selected ML models were first evaluated for each of the three scenarios that did not include feature optimization (Table 3). In all three scenarios, the CB outperformed the RF, XGB and RoF models, with the highest R2 value of 0.46. Surprisingly, for most models there was little difference in performance between scenarios 1 and 2, where only S-1 or only S-2 bands were involved. A fusion of S-1 and S-2 sensors (scenario 3) improved the model skill, with a slight increase of R2 for the XGB and RoF models, while the CB retrieved the TOC with higher confidence (R2 = 0.46, RMSE = 16.22 Mg C ha−1, Table 3).
Improving seagrass TOC estimation using feature selection methods
To retrieve the TOC accurately, feature selection is often required to define a subset of the most significant input bands. Scenario 4 applied a Spearman correlation analysis to select these, while scenario 5 used the advanced metaheuristic GWO to select the most informative bands. All ML models in scenario 4 showed an improvement in TOC retrieval over scenarios 1-3 (Table 4 and Figure 4), with the RF, XGB and RoF converging on R2 values from 0.42 to 0.45, while the CB continued to yield the best performance with the highest R2 (0.53) and lowest RMSE (13.09 Mg C ha−1). As the most accurate candidate, the CB was used as the base model in the GWO for feature selection, which resulted in a further improvement of the TOC retrieval accuracy (Table 4 and Figure 4).
As the best candidate for TOC estimation across the designed scenarios, we visualized the CB performance in scenarios 3, 4 and 5 using the Taylor plot (Figure 2S). In the space of SD, correlation and RMSD, the CB model in scenario 5 showed the highest confidence for seagrass TOC estimation in the study site (the highest correlation coefficient and the lowest RMSD of CBR_E3 observed in the Taylor plot, Figure 2S).
Of the 22 variables identified as informative by the GWO algorithm (Figure 5), S-2 contributed 8 bands (accounting for roughly 23% of the contribution). The green, near-infrared (NIR, band 8) and short wavelength infrared (SWIR, band 12) bands contributed 11.4%, while the transformed bands CI, RI and RVI together supported a similar share of 11.39% of the estimation of seagrass TOC. The S-1 sensor provided 14 meaningful bands (approximately 77% of the contribution). The single band VV constituted 2.65%, while 61.05% was imparted by the derived GLCM (44.30%), PCA (12.68%) and band ratio (17.42%) features. The CB model indicated the S-2 NIR (band 8) and CI, and the S-1 VV-VH, VH_diss, VH_variance and VV_asm, as the input variables with the most impact on the accurate mapping of the seagrass TOC in Tauranga Harbour (Figure 5).
Using the CB-GWO model, the spatial variation of seagrass TOC in Tauranga Harbour (Figure 6) ranges from 30 to 104 Mg C ha−1, with areas of TOC from 35 to 60 Mg C ha−1 most frequent in the far north and the middle basin of the harbour (Figure 6(a,b)), while the middle and southern basins mostly stored TOC > 60 Mg C ha−1 (Figure 6(c,d)).
Discussion
To successfully estimate the seagrass TOC from satellite sensing, we integrated a wide range of approaches: multi-sensor satellite images, ML models, and advanced feature selection using metaheuristic optimization. The results are promising, with clear evidence that using both S-1 and S-2 sensors, modern ML models and feature selection all improved the overall prediction skill. Feature selection using the GWO algorithm improved the confidence of TOC estimation (R2 = 0.74) by 60% and 39.6% compared to the accuracy of scenario 3 (R2 = 0.46), without feature selection, and scenario 4 (R2 = 0.53), with traditional Spearman feature selection, respectively.
Of the bagging and boosting ML families, the boosting group (the XGB and CB models) performed much better, with higher confidence (Tables 3 and 4), and the CB model was consistently the best in scenarios 1, 2, 3 and 4. The RoF model produced a relatively higher R2 score than the XGB, but at a higher RMSE%, and was therefore less reliable than the XGB model (Table 4, scenario 4). The performance of the CB model was consistent with reports for mangrove and other ecosystems (Pham, Yokoya, et al. 2020; Ha et al. 2021a; Luo et al. 2021; Pham et al. 2021; Tran et al. 2021) in retrieving biophysical parameters and carbon stock. The outperformance of the CB model for seagrass TOC estimation might be attributed in part to its novel ordered boosting mechanism, which mitigates overfitting, and its oblivious decision tree structure, which allows it to regularize the tree parameters to optimize different solutions and hence reduces modelling uncertainty. In addition, the CB is recognized as working well with heterogeneous data types (i.e. mixtures of different types of input data), and therefore may be expected to learn effectively from a fusion of both S-1 and S-2 data and derived variables.
Our study supports the use of feature selection and highlights differences in the contribution of input variables to the final CB model. The Spearman analysis introduced a new subset of input variables; however, it was evident that correlation-based selection was not sufficient to find the variables contributing most to the retrieval of TOC in this study. In contrast, the metaheuristic GWO performed an effective search of the feature space, which resulted in the best combination of 22 input variables for the seagrass TOC retrieval in Tauranga Harbour. Metaheuristic optimization, and the GWO algorithm in particular, is an increasingly well-known approach to optimization across a wide range of problems (Abdel-Basset et al. 2018; Wong and Ming 2019; Agrawal et al. 2021). The potential application of this group of methods in ecology and environmental science, however, has not been explored extensively for the specific problem of accurate TOC estimation in seagrass ecosystems. The GWO mathematically simulates the hunting behaviour of grey wolf packs, a well-known hunting strategy in nature, with the capability of relocating solutions (i.e. updating the positions of the alpha, beta and delta wolves towards the solutions), and is therefore very powerful in discovering the combination of input variables that achieves the best solution under a given metric (the mean squared error in this study). In an n-dimensional search space, the GWO is designed to self-adjust the balance between exploitation (the A parameter) and exploration (the C parameter), and hence converges quickly with low memory usage (the model structure uses only one position vector and the three best saved solutions) and computation time (Faris et al. 2018). For the seagrass TOC, we designed the objective function using the CB model with a medium size of both the iterations (200) and the population (200), which appears to suit the estimation of TOC in this complex coastal ecosystem. The maps indicate that TOC was sequestered to a lesser degree in the northern part (larger area with light green colour, Figure 6(a)), while more TOC was sequestered in the middle and southern parts of the harbour (Figure 6(b-d)). The variation of TOC fits well with the dense and stable seagrass meadows (Ha et al. 2020), which support better trapping and storage of carbon in the soil layers. The map indicates a spatial gradient of TOC in different sub-spaces of the harbour, which can be assumed to reflect the different impacts of healthy seagrass, other biophysical parameters and the topography of the catchment on carbon sequestration mechanisms (Samper-Villarreal et al. 2016; Alemu I et al. 2022). Further targeted data collection and analysis would be required to evaluate, with confidence, the specific impacts of agriculture, urbanization and port business on seagrass carbon sequestration in Tauranga Harbour.
Previous studies targeting carbon stock estimation using remote sensing across a variety of ecosystem types (though not seagrass ecosystems) reported accuracies broadly similar to those shown here (R2 ranging 0.6-0.87) (Wicaksono et al. 2016; Sanderman et al. 2018; Pham et al. 2021; Sun et al. 2021; Nguyen et al. 2022). Seagrass meadows can be expected to be difficult for such estimates, as the above-ground biomass collapses at low tide, when sensing is possible, and a high proportion of carbon is below ground. Unlike above-ground parameters (above-ground biomass, spatial distribution, canopy height, for instance), the below-ground carbon stock requires either an indirect approach using soil/vegetation indices and input band transformations, or longer wavelength SAR sensors (ALOS-2 PALSAR-2 with L band, for instance), whose bands are capable of penetrating deeper below the soil surface. Since the S-1 C-band penetrates only a few centimetres below the surface, our estimates are necessarily indirect, which is consistent with the inputs coming from a variety of image transformations (band ratio, GLCM, PCA). The use of SAR only, multi-spectral only, or a fusion of both resulted in low retrieval accuracy in scenarios 1-3. High accuracy was only obtained in scenario 5 with the GWO, and the CB feature importance function determined a larger contribution from the S-1 sensor (nearly 77%) than from S-2 (approximately 23%) to the final model. This indicates the advantage of satellite data fusion for high-confidence mapping of seagrass meadow TOC. The variation in carbon stock therefore likely correlates with both the colour and texture of the soil surface (Nguyen et al. 2022) and the above-ground vegetation structure (Xue and Su 2017; Morcillo-Pallarés et al. 2019). As expected, the green, red and NIR spectra (band 8 and the CI index of the S-2 sensor), which strongly reflect variation in soil and vegetation, contributed significantly to the successful retrieval of TOC. The contributions from the S-1 (transformed) bands were similar to those reported for carbon stock in other ecosystems (Byrd et al. 2018; Nguyen et al. 2022) and for seagrass above-ground biomass estimation (Ha et al. 2021a), and may reflect a correlation between surface features and the presence of dense, species-diverse and carbon-rich seagrass meadows. Our case study determined the VV band and the transformations VV-VH and GLCM (VH_diss, VH_variance and VV_asm) to carry the most useful information on vegetation structure or soil texture for accurate estimation of seagrass TOC in Tauranga Harbour.
The methods developed and results obtained in this study provide novel insights into TOC retrieval using a suite of state-of-the-art remote sensing, modelling and feature selection tools. The proposed framework is solid and reliable, using advanced models and high quality remotely sensed data, and it is applicable at very low cost, since we use only open-access remote sensing data and open-source code for image processing, modelling and optimization. The processing codes adapted in this research are all available on GitHub (https://github.com/), SourceForge (https://sourceforge.net/) or in well-developed, independent open-source projects (Sentinel Toolbox (https://sentinel.esa.int/web/sentinel/toolboxes), QGIS (https://www.qgis.org/en/site/)), which allows a wide exchange of methods, extension to other blue carbon ecosystems, and improved certainty in global blue carbon estimation in the future. The high temporal resolution (5-12 days) of both S-1 and S-2 sensors helps users find the scenes that best fit their study site. In addition, the band transformations provide more options for model fitting, beyond the two original bands of the S-1 and the 12 original bands of the S-2 sensor, in deriving critical information for the retrieval model.
Our research, however, was limited to the C-band of the S-1 sensor and constrained by the need for low tide to sense the intertidal zones. In addition, we acknowledge that the TOC mapping was only produced for the vegetated areas (i.e. seagrass meadows) in Tauranga Harbour. The proposed CB-GWO model is able to explain 74% of the spatial TOC distribution, with a degree of prediction variation (Figure 3S), in the study site. The model performance indicates remaining challenges and leaves a degree of uncertainty in the quantification of seagrass TOC. Besides the complexity of seagrass meadow structure in the intertidal zone, which might increase noise in the satellite signal, the necessarily indirect approach to estimating soil properties under dense vegetation cover also contributes to the uncertainty of retrieval models for seagrass TOC quantification in coastal zones. Future field surveys will need to cover a wider range of sediment samples, including both vegetated and unvegetated areas in the intertidal zones, to provide more intensive data for the proposed methods. New research is underway, with experiments for a variety of seagrass species in different bio-ecological regions, using the same approach with novel ML models or better feature selection algorithms.
Conclusion
Seagrass TOC is an important biophysical parameter whose accurate estimation remains an unsolved challenge worldwide. We have developed a novel and effective approach using the fusion of multi-spectral and SAR satellite images, integrated with the CatBoost machine learning technique and the GWO algorithm (CB-GWO), to derive and map TOC distribution across seagrass meadows in a large New Zealand estuary. Five performance scenarios were designed, which indicated that neither S-1, S-2, nor a fusion of all S-1 and S-2 bands alone retrieved the variation of TOC with high precision, and that feature selection was required to extract the most informative input variables. In addition to the well-known RF and XGB models, this study highlights the better performance of the CatBoost model combined with GWO, the CB-GWO (R2 = 0.74, RMSE = 10.64 Mg C ha−1, RMSE% = 19.58%), and the potential use of the RoF model for further improvement of TOC estimation. Feature selection with the metaheuristic GWO algorithm improved the accuracy by up to 60% in comparison to the same models run without feature selection.
Our proposed method is robust and applicable, and easy to implement with open-source code for the machine learning algorithms, image processing and metaheuristic optimization. The workflows presented in this study are not limited to seagrass ecosystems; the image analysis and modelling methods are also applicable to various biophysical parameters in other blue carbon ecosystems (i.e. mangrove forests, salt marshes). In addition, we suggest using the GWO for accurate selection of input variables in further multi-modal data analysis for inventory and conservation of blue carbon ecosystems. This approach, however, also comes with an unavoidable drawback of SAR imagery: owing to the attenuation of the SAR signal in water, we suggest applying the proposed methods to intertidal regions where the habitats are exposed at low tide, so that the SAR image can be used effectively to derive the variation of TOC more accurately.
Future field survey campaigns will validate and expand our proposed method for seagrass TOC retrieval in different climate regions worldwide. In addition, a comparison of various metaheuristic algorithms for feature selection is proposed to discover and better understand the contribution of multi-spectral and SAR image bands to the estimation of seagrass TOC.
Figure 1. Study site, Tauranga Harbour, in the illustration of (a) Sentinel-1 (date of acquisition 31 March 2020, pseudo colour using the combination VH-VV-VH); (b) Sentinel-2 (date of acquisition 5 April 2020, pseudo colour using the combination ρRed-ρGreen-ρBlue) and the ground truth points during the field survey.
Figure 2. Flowchart of the research, with codes of red (image processing), blue (retrieval of the TOC using scenarios 1-4) and orange (retrieval of the TOC using scenario 5).
Figure 3. Variation of organic carbon content in the study site.
Figure 5. Variable contribution derived from the CB-GWO model for seagrass TOC estimation.
Figure 6. Seagrass TOC derived from the CB-GWO model in (a) the northern, (b) the upper middle, (c) the lower middle and (d) the southern parts of Tauranga Harbour.
Table 2. Designed scenarios for seagrass TOC estimation.
Table 4. Model performance of TOC estimation in scenario 4 and scenario 5. AIC and BIC are used to compare model performance among scenarios, while the other metrics are used to evaluate the ML models within each scenario, owing to the random splitting of the samples.
Epidemiology of ocular trauma in children requiring hospital admission: a 16–year retrospective cohort study
Background To study the epidemiology of ocular trauma requiring hospital admission in children under 18 years of age. Methods This retrospective cohort study included pediatric patients with ocular injuries at the Ophthalmology Department of the Clinical Hospital Centre, Split, Croatia, from 2000 to 2015, classified according to the Birmingham Eye Trauma Terminology. Results There were 353 children hospitalized: 82% boys (mean age 11 years) and 18% girls (mean age 10 years). The majority of traumas occurred in the outside environment (70%, n = 249), followed by occurrences at home (17%, n = 60) and at a school/nursery (8%, n = 28). Final visual acuity was 6/18 or better in 286 (96%) patients with closed globe injury and in 26 (49%) patients with open globe injury. Severe impairment of vision was found in 12 (4.4%) patients in the closed globe injury group and 26 (49%) patients in the open globe injury group. A statistically significant difference was found between final visual acuity and initial visual acuity in all patients (χ2 = 12.8; P < 0.001). Conclusion The majority of pediatric eye injuries occur in the outside environment and are preventable. Implementation of well-established safety precautions would greatly reduce this source of visual disability in children.
Ocular trauma is a significant problem throughout the world and, in addition to resultant ocular disability, it also has psychological and social effects on the patient. Approximately 1.6 million people worldwide are blind due to ocular trauma, 2.3 million people have bilateral low vision due to trauma and 19 million have unilateral vision loss [1,2]. Eye trauma constitutes 7% of all bodily injuries and 10-15% of all eye diseases [3].
In the United States, eye trauma is the leading cause of noncongenital unilateral blindness in individuals younger than 20 years of age. The American Academy of Pediatrics (AAP) reported that 66% of all ocular injuries occur in individuals 16 years of age or younger, with the highest frequency occurring between 9 and 11 years of age [4][5][6]. Most ocular injuries occur in boys, as due to their more aggressive nature, they tend to spend more time playing outdoors and tend to play risky games more frequently than girls. The male-to-female ratio in published studies varies from 3:1 to 5.5:1 [5][6][7]. Most studies have shown no statistically significant difference between affected eyes [8].
Various studies have reported that children account for 12.5-33.7% of all admissions for eye injury. Trauma is clearly one of the most important preventable causes of childhood blindness [9]. The frequency of hospitalization due to ocular trauma differs between developed and developing countries; for example, the rate is 8 per 100 000 people in Scotland and 33 per 100 000 in Guiana [10].
The standardized classification of eye trauma is useful for ophthalmologists and provides the means for simple and enhanced communication regarding particular patient features [11].
Kuhn et al. [12] developed a prognostic model, the ocular trauma score (OTS), to predict the visual outcome of patients in all age groups after open globe and closed globe ocular injuries. They analyzed more than 2500 eye injuries from the United States Eye Injury Registry and the Hungarian Eye Injury Registry and evaluated more than 100 variables with the goal of identifying specific predictors. In the calculation of OTS, a numerical value is assigned to the following six variables: initial visual acuity (VA), globe rupture, endophthalmitis, perforating injury, retinal detachment, and relative afferent pupil defect (RAPD). The scores are then divided into five categories that provide the probabilities of attaining a range of VAs after injury.
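As an illustration of how such a score can be computed in practice, the following is a minimal sketch; the point values are placeholders only, and the actual raw points and category cut-offs must be taken from Kuhn et al. [12].

```python
# Illustrative sketch of an ocular trauma score (OTS) style calculation.
# All point values below are placeholders, NOT the published OTS weights;
# consult Kuhn et al. for the actual raw points and category cut-offs.

VA_POINTS = {          # raw points by initial visual acuity band (placeholder values)
    "NLP": 60, "LP/HM": 70, "1/200-19/200": 80, "20/200-20/50": 90, ">=20/40": 100,
}
DEDUCTIONS = {         # deductions for injury findings (placeholder values)
    "globe_rupture": 23, "endophthalmitis": 17, "perforating_injury": 14,
    "retinal_detachment": 11, "rapd": 10,
}

def ots_raw_score(initial_va_band, findings):
    """Sum the raw visual acuity points and subtract a deduction for each finding present."""
    score = VA_POINTS[initial_va_band]
    for finding, present in findings.items():
        if present:
            score -= DEDUCTIONS[finding]
    return score

# Example: an open globe injury with rupture and a relative afferent pupil defect.
score = ots_raw_score("1/200-19/200",
                      {"globe_rupture": True, "endophthalmitis": False,
                       "perforating_injury": False, "retinal_detachment": False,
                       "rapd": True})
```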
Numerous studies have evaluated various aspects of ocular trauma. The purpose of this study was to analyze epidemiology of all eye injuries in children who required admission to the Ophthalmology Department of the University of Split Hospital Centre, Croatia, from 2000 to 2015.
METHODS
Medical records of all patients aged 18 years or younger who sustained serious eye injuries requiring admission to the Ophthalmology Department at University of Split Hospital Centre between 2000 and 2015 were reviewed. Ethics committee of University of Split Hospital Centre, Split, Croatia, approved the study to be reported. All study procedures adhered to the recommendations of the Declaration of Helsinki.
University of Split Hospital Centre is the only referral hospital for the population of the Split-Dalmatia County (south Croatia). The population of the province as determined in the 2011 census was 455 242. The number of children in the province was 107 316, which consisted of 54 768 boys and 52 548 girls. The distributions of age, gender and socioeconomic status of children in this county were comparable to those of the entire Croatian population.
The study included 353 children treated acutely in the hospital. The following data were recorded for each patient: age, sex, date of injury, site of incident, cause of injury (accidental self-inflicted injury vs injury by another person), visual acuity, diagnosis, associated injuries and treatment.
The injuries were classified according to Birmingham Eye Trauma Terminology (BETT) [13] as the following: closed eye globe injuries and open eye globe injuries (penetration, perforation, intraocular foreign body injury and rupture).
The data were collected, entered and processed using the statistical package SPSS version 15 (SPSS Inc., Chicago, IL, USA). The results were interpreted using a significance level of P < 0.05. The χ 2 test, McNemar test, Kruskal-Wallis test and Mann-Whitney U test were used in the analysis.
RESULTS
A total of 353 children with eye injuries were admitted to the clinic during the 16-year study period: 290 (82%) boys and 63 (18%) girls, yielding a male-to-female ratio of 5:1. The assessment and treatment of eye injuries and the indications for hospital admission, following the algorithms we developed for managing eye trauma in our Department, are presented in Figure 1 and Figure 2.
The mean age at admission was 11 years among boys and 10 years among girls. The right eye was involved in 174 (49%) cases and the left eye in 177 (50%) cases. Binocular injury was found in 1 child (0.2%). There was no statistically significant difference between injuries of the right eye and the left eye according to age (χ2 = 2.33; P = 0.506). The average duration of hospitalization was 9.8 days, with a median of 5 to 13 days (Table 1).
The cumulative incidence of eye injuries among boys was 530/100 000, and among girls 120/100 000 ( Table 3). The cumulative incidence of eye injuries among boys was 4.4 times higher than that among girls.
The majority of injuries occurred during spring and summer ( Table 4). Compared to autumn, there were 1.6 times more eye injuries during spring and 1.8 times more eye injuries during summer (χ 2 = 13.6; P = 0.035).
Children were 9 times more likely to be injured in the outside environment compared to school and nursery, and they were 4 times more likely to be injured in the outside environment than in the home (χ 2 = 412; P < 0.001).
Our study showed a statistically significant difference in the age of children according to the site of injury (χ 2 = 25.1; P < 0.001); the median age of children injured at home was 4 years lower than that of children injured in school (Z = 3.15, P = 0.02), 3 years lower than that of children injured outside the home (Z = 4.4, P < 0.001), and 4 years lower than that of children who were injured during sports (Z = 3.8, P < 0.001) ( Table 5).
No significant difference was observed between the ages of children with accidental self-inflicted injuries and those injured by another person (χ2 = 1.02; P = 0.307) (Table 6).
With regard to the type of injury, there were 299 (85%) closed eye injuries and 54 (15%) open eye injuries. The median age of children with closed injuries was 2 years higher than the median age of children with open eye injuries (Z = 2.98, P = 0.03) ( Table 7).
Initial visual acuity was normal or mildly impaired (better than 0.3) in 208 (70%) patients with closed globe and 13 (25%) patients with open globe injury ( Table 8).
Final visual acuity was greater than or equal to 0.3 in 286 (96%) patients with closed globe injuries and 26 (49%) patients with open globe injuries. Severe vision impairment (worse than 0.3) was found in 12 (4.4%) patients with closed globe injuries and 26 (49%) patients with open globe injuries (Table 9).
Overall improvement of visual acuity of all patients at the end of the treatment was significantly higher compared to initial visual acuity (χ 2 = 12.8; P < 0.001). Compared to initial visual acuity, visual acuity improved in 86% of patients and remained the same in 14% of patients; no patient experienced deteriorated visual acuity (Table 10).
DISCUSSION
Ocular injuries are the most common cause of acquired uniocular blindness in children. Pediatric ocular injuries differ from those of adults in many ways. Ocular trauma in children is mainly accidental and has an age-specific pattern [14].
In this study, pediatric ocular trauma occurred 4.5 times more often in boys than in girls. Boys are usually more susceptible to ocular damage due to the nature of their activities and presumed less supervision by their families, similar to results from other studies [1,4,6,10,14]. In our study, the highest incidence of eye injuries occurred among children age 10 to 14 years, which is also similar to studies from other settings [1,4,[15][16][17].
In contrast to the above findings, Al-Bdour and Azab reported the highest incidence of injuries among children aged 6 to 10 years. Children in this age group are relatively immature and exposed to varying surroundings that make them more vulnerable to injuries [6,14].
Both eyes were affected equally. Bilateral ocular injuries were observed only in 1 patient. This is in accordance with most other studies, where ocular trauma plays a minor role in bilateral blindness compared to its major role in unilateral blindness [6,14].
The majority of injuries occurred during spring and summer, which is similar to results reported elsewhere [18]. The summer vacation months accounted for a disproportionate number of eye injuries received throughout the year. The summer offers children the opportunity to spend more time outside and to have more freedom to play with potentially dangerous objects. Furthermore, the lack of school during the summer months may adversely affect the time children are supervised by adults.
The present study showed that ocular injury occurred most commonly in the outside environment, with the home as the second most common site; this is consistent with results reported elsewhere [16]. It speaks in favor of a possible lack of adult supervision while children play outside. A study conducted by Aghadoost et al. showed that most injuries happened at home [10]. A study in North Jordan showed that eye injuries occurring during sports and play were the most common [6]. In Canada, eye injuries occurred at a number of locations, with the majority occurring at homes, followed by schools and other residences [18].
Our study showed a statistically significant difference between the age of children according to the site of injury. Children injured at home were approximately 4 years younger than children injured at school and during sports and approximately 3 years younger than those injured outside the home. These results were expected because younger children spend more time at home.
In our study, more than two-thirds of patients were injured by another person, and this did not differ by age group. In other studies, most eye injuries were reported as being unintentional, though there were instances in which the injury happened during a physical altercation [18].
With respect to the BETT classification in our study, closed globe injury occurred five and a half times more frequently than open globe injury, and this is similar to results reported elsewhere [19]. We showed that children with open eye injuries were 2 years younger than children with closed eye injuries. Among those with open globe injuries, penetrating injuries were the most common. Penetrating injuries, in general, carry a poorer prognosis, and they are more likely to require surgery and result in long-term visual impairment.
We found a statistically significant difference between final and initial visual acuity. Good visual acuity at presentation and early primary repair were important factors for better final visual outcome. Compared to blunt injuries, penetrating injuries generally resulted in poorer visual outcomes. Posterior segment involvement adversely affects visual results [14,20,21].
Although our study covers a relatively long time period of 16 years, the retrospective nature is an acknowledged weakness of this study.
In conclusion, severe ocular trauma in children that requires hospitalization is mainly accidental and has an age-specific pattern. In general, children are more susceptible to eye injuries due to their immature motor skills, limited common sense and natural curiosity. A safe environment should be maintained for children. The majority of eye injuries in children are preventable, which reflects the importance of health education, adult supervision and application of appropriate measures to reduce the incidence and severity of trauma.
No nonlocal advantage of quantum coherence beyond quantum instrumentality
Recently, it was shown that quantum steerability is stronger than the bound set by the instrumental causal network. This implies that quantum instrumentality cannot simulate EPR-nonlocal correlations completely. In contrast, here we show that quantum instrumentality can indeed simulate EPR correlations completely and uniquely if viewed from the perspective of the nonlocal advantage of quantum coherence (NAQC). The implication of our result is that the entire set of EPR correlations can be explained by the LHS model in the instrumental causal background if viewed from the perspective of NAQC.
Our fundamental interest in science is to unravel the mystery of our universe by revealing its underlying laws. Quantum mechanics has so far been the most successful theory of our universe. However, the basis of why quantum mechanics works still puzzles us. There have been several attempts to simulate quantum mechanical results using the principles of locality, non-contextuality, determinism or realism, and free will, to name a few. In an attempt to provide an ontological model for quantum mechanics, a number of no-go theorems, including the Bell theorem, the Bell-Kochen-Specker theorem and the EPR theorem [1][2][3], were proposed using such theory-independent, physically motivated principles, with the hope of singling out quantum theory as the primary model to explain nature among the plethora of generalized probabilistic models.
Failure of these theory-independent no-go theorems led to an important question as to why nature does not allow stronger correlations than what quantum mechanics permits [4][5][6][7]. A number of theory-dependent no-go theorems were also discovered, including the no-cloning theorem, the no-deleting theorem, etc. In [4], it was shown that any post-quantum theory exhibiting stronger non-local correlations may lead to a 'computational free lunch', which enables all distributed computations with a trivial amount of communication, i.e. with one bit. So far, there has been a lack of any physical principle which could prevent such a 'computational free lunch' and, at the same time, also identify quantum theory uniquely. Many partially successful attempts have already been made in this direction, proposing several new principles like information causality [8], local orthogonality [9,10], exclusivity [11], no-causal order [12,13], non-trivial communication complexity [4,7] and macroscopic locality [14]. Even though there has been significant progress, a no-go theorem based on a set of physically motivated laws or principles, which could simulate quantum mechanical results uniquely, is still unknown.
Recently, ontological models based on the principle of causality have garnered interest and led to the development of the field of quantum causal modeling [15][16][17][18][19][20][21][22], which brought forth a number of fascinating results including the idea of no definite global causal order [12,13] and its applications [23]. Surprisingly, it has also been experimentally verified recently [24]. Therefore, it is now natural to ask whether the violation of various no-go theorems is because of this stricter notion of ordered causal relation.
To that end, several attempts have already been made to understand quantum nonlocality [25][26][27][28][29][30][31], contextuality [32] and EPR-nonlocality [33] by relaxing the stricter notion of an ordered causal relation. In this regard, the quantum instrumental causal network is one of the most promising structures among causal models. Quantum instrumental processes are a generalization of their classical counterparts, with quantized communication-receiving nodes, an underlying local hidden variable (LHV) or local hidden state (LHS) model, and outcome communications (see Fig. 1).
Instead of the traditional ordered causal network, the search for various new quantum causal models has drawn quite a bit of attention [33][34][35][36] in the last few years, and the 'instrumental causal network' model turns out to be the most prominent candidate in this regard. It may be considered a relaxed version of our day-to-day observation of the cause-and-effect relationship. In the derivation of all the no-go theorems, we assume that we live in a world with an ordered causal background. This provides us the opportunity to relax the idea of the causal relationship in the existing no-go theorems and to move closer to singling out the entire set of quantum correlations.
[Figure 1 caption: The graphical representation of the LHS model and the 1SQI is shown using DAGs. Each node encodes either a classical random variable or a quantum system, and each directed edge a causal influence. Circular nodes denote unobservable variables, while square-shaped nodes and the atomic-structure icons denote observable variables; the former represent classical observables and the latter quantum observables. The presence of the directed edge between Alice (A) and Bob (B) in the second DAG (b) makes it clear that the second DAG represents the 1SQI model, while the one on the left represents the LHS model.]

So far, relaxing the prevalent idea of the causal network has not been beneficial in this regard. In fact, in a recent work, a device-independent instrumental inequality was shown to admit a quantum violation [37,38]. On the other hand, EPR-nonlocality was also shown to be stronger than the bound set by the instrumental causal network [33], or the one-sided quantum instrumental network (1SQI).
In this paper, in an attempt to find a theory-dependent principle or no-go theorem of quantum mechanics, we study steerability of a state from the perspective of nonlocal advantage of quantum coherence (NAQC) [39][40][41]. We derive a set of new tighter steering inequalities based on various coherence measures in the ordered causal background. Violation of these inequalities implies nonlocal advantage of quantum coherence (NAQC) beyond what a single system can achieve. We then derive a similar set of inequalities under 1SQI model [33] or in an instrumental causal background. It turns out that unlike quantum steering viewed from the perspective of entropy or uncertainty, NAQC in the ordered causal background is upper bounded by the 1SQI bound.
Implication of our result is that the entire set of EPRcorrelations can be described by the LHS model in the instrumental causal background if viewed from the perspective of NAQC.
II. ONE-SIDED QUANTUM INSTRUMENTALITY (1SQI)
We consider a steering scenario in which Alice prepares a bipartite state and sends part of the system to Bob, who does not trust her. Alice tries to convince Bob that his state is entangled with hers. To that end, Bob asks Alice to perform certain tasks. Bob believes that there exists an unobservable shared source Λ influencing both of them. In the local hidden state (LHS) model, Bob assumes that there is no direct causal influence from Alice to Bob; here we relax that assumption and consider that there is indeed a direct causal influence from Alice to Bob, as shown through directed acyclic graphs (DAGs) in Fig. 1(b). The conditional states of Bob ρ_{a|x} (unnormalized) under the 1SQI model are then represented by

\rho_{a|x} = \sum_{\lambda} P_{\lambda}\, p(a|x,\lambda)\, \rho_{\lambda,a},  (1)

where P_λ is a probability distribution over the hidden variables λ assigned to the node Λ, p(a|x, λ) is the conditional probability of obtaining outcome a for the measurement setting x and hidden variable λ at node A, and ρ_{λ,a} is the LHS with Tr(ρ_{λ,a}) = 1 assigned to node B by the model. In contrast, the conditional states under the usual LHS model have the representation

\rho_{a|x} = \sum_{\lambda} P_{\lambda}\, p(a|x,\lambda)\, \rho_{\lambda}.  (2)

This section is dedicated to the derivation of the nonlocal advantage of local quantum coherence under the usual ordered causal relation as well as the instrumental causal relation. To start with, we first derive the coherence complementarity inequalities based on various measures of quantum coherence for a single-qubit state. We consider a general qubit state ρ = (1/2)(I_2 + r·σ), where |r| ≤ 1 and σ ≡ (σ_1, σ_2, σ_3) are the Pauli matrices. The coherence of the state, when expressed in the eigenbasis of σ_i, can be quantified by the l_1-norm of coherence (C_{l_1}) as

C_{l_1}^{i}(\rho) = \sqrt{r_j^2 + r_k^2},  (3)

where i ≠ j ≠ k. For the remainder of the article we adopt the notation i, j, k ∈ {1, 2, 3}. Similarly, the relative entropy of coherence (C_r) with respect to the i-th basis is given by

C_{r}^{i}(\rho) = S(\rho_{\mathrm{diag}}^{i}) - S(\rho),  (4)

where ρ_{diag}^{i} is the state obtained from ρ by removing the off-diagonal elements in the eigenbasis of σ_i and S(·) denotes the von Neumann entropy. Taking α ∈ {l_1, r}, the expressions in Eq. (3) and Eq. (4) lead to the coherence complementarity relation

\sum_{i=1}^{3} \big[C_{\alpha}^{i}(\rho)\big]^2 \le \Omega_{\alpha},  (5)

where Ω_{l_1} = Ω_r = 2 for an arbitrary qubit state. Using a similar approach it is possible to extend the inequality to arbitrary dimension (see the supplemental material [47]).
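The single-qubit complementarity relation of Eq. (5) can be checked numerically. The sketch below samples random (generally mixed) qubit states, computes the l_1-norm coherence in the three Pauli eigenbases, and verifies that the sum of squares never exceeds Ω_{l_1} = 2; it is an illustrative check, not part of the paper's derivation.

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def l1_coherence(rho, basis):
    """l1-norm coherence of rho in the given orthonormal basis (columns = basis vectors)."""
    rho_b = basis.conj().T @ rho @ basis
    off_diag = rho_b - np.diag(np.diag(rho_b))
    return float(np.abs(off_diag).sum())

def random_qubit():
    """Random single-qubit state from a random Bloch vector with |r| <= 1."""
    r = np.random.randn(3)
    r *= np.random.rand() / np.linalg.norm(r)
    return 0.5 * (np.eye(2, dtype=complex) + sum(ri * s for ri, s in zip(r, SIGMA)))

np.random.seed(0)
worst = 0.0
for _ in range(10000):
    rho = random_qubit()
    total = 0.0
    for s in SIGMA:
        _, vecs = np.linalg.eigh(s)          # eigenbasis of sigma_i
        total += l1_coherence(rho, vecs) ** 2
    worst = max(worst, total)

print(f"max over samples of sum_i (C_l1^i)^2 = {worst:.4f}  (bound Omega_l1 = 2)")
```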
IV. NON-LOCAL ADVANTAGE OF QUANTUM COHERENCE.
We now derive a new NAQC inequality under both ordered and instrumental causal networks. Without loss of generality and for simplicity, we limit our analysis to the regime of two-qubit states, while the results can be easily extended to general bipartite states (see the supplemental material [47] for the general proof). We consider a two-qubit bipartite state ρ_ab prepared by Alice and shared with Bob. We also assume that the conditional states of Bob admit an LHS or 1SQI model as given by Eq. (2) and Eq. (1), respectively. If Bob measures certain properties of his states, such as entropy or uncertainty, the resulting quantities have been shown to violate steering inequalities [42][43][44][45]. They have also been shown to violate the 1SQI inequality based on semi-definite programming (SDP) [33]. The quantities or properties considered so far (uncertainty and entropy) have classical counterparts. Both the uncertainty and the entropy of Bob's state receive contributions from the classical mixedness of the state. Classical mixing and thermal noise directly contribute to such quantities and thus affect the non-locality. On the other hand, quantum coherence reflects the absence of classical mixing and thermal noise [46], and thus noise plays no role in the non-locality measured via a coherence-dependent quantity (which is less robust under noise). A steering inequality based on quantum coherence thus naturally provides a different view of the situation. It also depicts how a purely quantum resource (quantum correlation) affects another quantum resource (coherence). In the following, we show that unlike the traditional inequalities based on uncertainty or entropy, a 1SQI inequality based on quantum coherence can indeed single out the EPR correlations viewed via NAQC. We start with the sum of squares of the average local quantum coherence of Bob's conditional states in the mutually unbiased bases, which we denote by S; here, for the two-qubit scenario, the relevant states are the normalized conditional states of Bob, and p(c|x) = Tr(ρ_{c|x}) = Σ_λ P_λ p(c|x, λ) is the probability of obtaining them. As we show next, the quantity S has a nontrivial bound under both the ordered and the instrumental causal networks. We derive bounds for both cases below.
Proposition 1. Under the LHS model and the ordered causal network, the quantity S that Bob measures on his particle is bounded as S ≤ 2Ω_α.

Proof. A proof of the above under an LHS model is outlined below.
where K^k_n = Mod(k − 1 + n, 3) + 1. The first inequality in Eq. (8) comes from the fact that the conditional states have representations as given by Eq. (2) and the fact that coherence does not increase under classical mixing of states, i.e., for a state ρ = Σ_i p_i ρ_i with Σ_i p_i = 1, C_α(ρ) ≤ Σ_i p_i C_α(ρ_i). We have also used the fact that for any real numbers x and y, xy ≤ (x² + y²)/2. Violation of the above inequality for any quantum state not only implies that the state is steerable but also shows that Bob can achieve a nonlocal advantage of quantum coherence beyond what would have been possible without Alice's nonlocal intervention. In [40], a set of steering complementarity relations was derived. Here we show a set of similar complementarity relations in the supplemental material [47]. Moreover, we also show that the inequality in Eq. (14) is tight, i.e., there exists a state with an LHS model (ρ_ab = |0⟩⟨0| ⊗ |+⟩⟨+|, for example) which can achieve the bound.
[Figure 2 caption: The quantity S for the Werner state (13), computed using the l_1-norm measure of quantum coherence. We use Pauli measurements in arbitrary directions and optimize over the directions (θ and φ) for the maximum value of S, as detailed in [47]. S violates the bound (dotted, blue) given by the inequality in Eq. (14) but does not violate the bound (dashed, black) given by the inequality in Eq. (9).]

In the next section, we focus on deriving a similar bound on the quantity S under the 1SQI model with outcome communications.
Proposition 2. If Bob assumes that his conditional states admit descriptions as given by the 1SQI model in Eq. (1) and measures the quantity S on his states, it must be bounded by S ≤ 3Ω_α.

Proof. The proof of this inequality follows along the same lines as before.
where F^k(a|λ) = p(a|K^k_1, λ)² + p(a|K^k_2, λ)². As before, it can be shown [47] that Eq. (11) holds for an arbitrary qubit state; plugging Eq. (11) into Eq. (10) then yields Eq. (12). As before, in the first inequality in Eq. (10), we use the 1SQI model as given in Eq. (1) and the fact that coherence does not increase under classical mixing. The second inequality is a consequence of the fact that for any two real numbers x and y, xy ≤ (x² + y²)/2 (see the generalization of the bound in section F of [47]). In the last inequality in Eq. (12), we use the coherence complementarity relation as given in Eq. (5). A generalized form of the coherence complementarity relation for states in arbitrary dimensions can be used to generalize the above proof to two qudits. Furthermore, in the supplementary material [47], we show that the above inequality is also tight.
As an example, for both the ordered and the instrumental causal networks we consider Werner states, given by

\rho_w = p_w |\psi\rangle\langle\psi|_{ab} + \frac{1 - p_w}{4}\, I_4,  (13)

where 0 ≤ p_w ≤ 1 and |ψ⟩_ab is the Bell singlet state. We plot the behavior of S with respect to p_w in Fig. 2 and Fig. 3, using the l_1-norm and the relative entropy of coherence, respectively, as measures of coherence. From the plots, we find that the quantity S violates the bound set by the ordered causal network for p_w > 0.816 for the l_1-norm of coherence and for p_w > 0.944 for the relative entropy of coherence. On the other hand, the inequality in Eq. (9) is not violated by the Werner state for any value of p_w, neither for the l_1-norm measure of quantum coherence (Fig. 2) nor for the relative entropy measure of quantum coherence (Fig. 3). One can in fact show the following.

Proposition 3. No two-qubit state can violate the bound set by the 1SQI inequality as given in Eq. (9), i.e., S ≤ 3Ω_α for every two-qubit state.

Proof.
where the quantity in Eq. (15) can be evaluated term-wise; the last inequality is due to the fact that the maximum value of coherence is one, together with a standard bound valid for any probability distribution p(x) and any positive function f(x). After a similar analysis for the remaining term with i ≠ j ≠ k, we conclude that no two-qubit state can violate the bound 3Ω_α. This can again be generalized to arbitrary two-qudit states by appropriately choosing the generalized coherence complementarity relation (5) (see the supplemental material [47]). We come to the same conclusion even after studying the general two-qudit state, as shown in the supplemental material [47].
V. CONCLUSIONS AND DISCUSSIONS.
It was shown that there exist quantum states which exhibit stronger EPR-nonlocality than the bound set by 1SQI [33], i.e., quantum steering cannot be explained by the 1SQI model. However, in this article, we start with a new and stronger steering inequality based on local quantum coherence [39,40] under the ordered causal network. We observe violation of the inequality, as shown in Figs. 2 and 3, and term the phenomenon NAQC. We derive a similar bound on the quantity under the instrumental causal network with outcome communication. Like [33], a violation of the inequality in Eq. (9) would certify that NAQC beyond quantum instrumentality is possible, just like quantum steerability. However, we observe that although quantum steering in general can be stronger than what quantum instrumentality allows, NAQC is in turn upper bounded by the 1SQI bound. The new inequality for the instrumental causal network is never violated by the NAQC for any state. This implies that the entire set of EPR correlations can be explained by the LHS model in the instrumental causal background if viewed from the perspective of NAQC.

There have been several attempts to single out quantum, or more precisely physical, correlations based on different physical principles or no-go theorems. In this paper, we show that although the 1SQI bound cannot single out quantum correlations when viewed from the perspective of violations of local quantum analogues of classical properties, it can in principle identify the correlations when viewed from the perspective of NAQC. While the identification of physical correlations using different principles or no-go theorems is considered to be a non-trivial task, we believe our efforts advance us significantly in this particular direction.

Since we have shown that an inequality based on quantum coherence fits perfectly for arbitrary dimensions, such that the 1SQI bound uniquely singles out NAQC correlations, one immediate question naturally arises: why does quantum coherence play such an important role? For the sake of argument, even if we consider that we stumbled upon a pair of two perfectly matched quantities, namely EPR correlation and coherence, by a mere serendipitous coincidence, we cannot ignore that they are a perfect match for each other, which leads to another immediate question: what makes them the perfect match? We believe that quantum coherence itself does not play the essential role; rather, it is the transition probabilities of a state that make a perfect pair with the quantum correlations, whereas quantum coherence is just a function of these probabilities. Thus, in the future, it will be an important task to investigate such pairs more closely.
A. Pauli operators in arbitrary directions
Pauli operators are well-known matrices in physics, generally denoted by σ_x, σ_y, σ_z or σ_1, σ_2 and σ_3, whose eigenbases turn out to be mutually unbiased bases (MUBs). One can, in principle, rotate the operators in arbitrary directions. For our purpose, without loss of generality, we rotate the matrices such that the eigenvectors of the rotated σ_3 in the (θ, φ) direction, σ_3(θ, φ), turn out to be a set of arbitrary orthonormal states, namely |0(θ, φ)⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩ and |1(θ, φ)⟩ = sin(θ/2)|0⟩ − e^{iφ} cos(θ/2)|1⟩. From the basis of σ_3(θ, φ), we can then form the other mutually unbiased bases. In this article, we use these MUBs to perform coherence measurements and optimize over (θ, φ) to get the maximum possible violation of the inequalities.
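The following sketch makes the above construction concrete: it builds the rotated σ_3(θ, φ) eigenbasis and two further bases derived from it, and numerically confirms that all pairs are mutually unbiased (all cross overlaps equal to 1/2). It is only an illustration of the construction described here, not the optimization code used for the plots.

```python
import numpy as np
from itertools import combinations

def rotated_mubs(theta, phi):
    """Three mutually unbiased qubit bases built from the rotated sigma_3(theta, phi) eigenbasis."""
    ket0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    ket1 = np.array([np.sin(theta / 2), -np.exp(1j * phi) * np.cos(theta / 2)])
    b3 = [ket0, ket1]                                                   # sigma_3(theta, phi) eigenbasis
    b1 = [(ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]       # sigma_1-like basis
    b2 = [(ket0 + 1j * ket1) / np.sqrt(2), (ket0 - 1j * ket1) / np.sqrt(2)]  # sigma_2-like basis
    return [b1, b2, b3]

bases = rotated_mubs(theta=0.7, phi=1.3)
for (i, bi), (j, bj) in combinations(enumerate(bases), 2):
    overlaps = [abs(np.vdot(u, v)) ** 2 for u in bi for v in bj]
    print(f"bases {i + 1},{j + 1}: |<u|v>|^2 =", np.round(overlaps, 3))  # all should be 0.5
```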
B. Steering complementarity relationships
We now derive a set of steering complementarity relations for various steering inequalities for the case of definite causal order. These relations are complementary in the sense that if one of the steering inequalities is violated by a state, its complementary part in the complementarity relation does not violate the corresponding steering inequality.
We consider the quantity S (i,j,k) , defined as follows, and show that for any arbitrary quantum state, it is bounded. Since the analysis holds for arbitrary quantum states, the bound cannot be violated by quantum theory.
To arrive at the bound in Eq. (B1) above, we use the following facts in the first and second inequalities, respectively: i) for any two real numbers x and y, xy ≤ (x² + y²)/2, and ii) the coherence complementarity relation of Eq. (5).
The quantity S_(i,j,k) can be decomposed into several parts, for each of which a steering inequality can be derived. For example, we consider the following decomposition, where the subscript of each term denotes the choice of bases i, j and k. Below, we show that each term represents a steering inequality and that for each term a steering inequality can be derived. A new steering inequality can be obtained by adding two or more terms of the decomposition, but not all of them together. In the paper, we have explicitly derived the bound on the quantity S_(i≠j≠k), while the bounds for the rest of the quantities can be derived following the same method.
Here we elucidate the various decompositions of S_(i,j,k) and derive their bounds following a new method. These bounds are relatively weaker than those found following the method given in the paper. However, one advantage of these new bounds is that the sum of the bounds of all the terms in the decomposition gives the bound of Eq. (B1). We explicitly calculate the new bound for the quantity S_(i=j,k) below, while the bounds for the rest of the quantities can be derived following the same method; in a similar manner, we obtain the corresponding bounds for the other terms.

[Figure caption: S_(i,j,k) and S_(i≠j≠k) (red, solid curve) from Eq. (B6) and Eq. (B1), respectively, as functions of p_w for the Werner state (15) and the relative entropy measure of quantum coherence. Bounds for both of the quantities turn out to be Ω_α.]

We plot the behaviour of S_(i,j,k) and S_(i≠j≠k) for varying p_w using the l_1-norm and the relative entropy of coherence in Fig. (4) and Fig. (5), respectively. It is observed that while S_(i≠j≠k) shows the violation for both measures, S_(i,j,k) is always satisfied.
If a steering inequality corresponding to a term or a particular group of terms appearing in the decomposition in Eq. (B2) is violated for a particular state, the rest of the terms together cannot violate the corresponding steering inequality and in fact compensate for the violation of the former inequality such that S (i,j,k) is always satisfied.
D. Tightness of the bound under 1SQI
We would like to find at least one example of a (set of) quantum state(s) under the paradigm of the 1SQI model which saturates the bound of 3Ω_α. This will ensure that the bound is tight and no further improvement can be made. Without loss of generality, we adopt the following strategy to determine the conditional states ρ′_{a|k}, which depend on the outcome a, the hidden variable λ and the choice of basis k.
For each choice of basis k in which coherence will be measured, choose ρ ′ a,λ and ρ ′ b,λ ′ to be a pure state which is diagonal in the i = k basis.
Let us evaluate the quantity S defined in the main text for the term k = 1, choosing ρ′_{a,λ} = ρ′_{b,λ′} = |0⟩⟨0| for all (a, b). In the resulting expression, for the last equality we have expanded the summation over a and b and used the fact that the cases i = 2, j = 3 and i = 3, j = 2 are symmetric and yield the same results. A similar analysis for k = 2 and k = 3 under the aforementioned strategy for selecting the 1SQI states also yields the same value. Therefore, for at least one set of states we have shown that S = 3Ω_α, which completes the required proof. It should be noted that a similar prescription for ordered causal models does not yield a value of S higher than 2Ω_α.
E. Generalization of LHS bound
In this section, we generalize the LHS bound on the average local quantum coherence to dimension d, where d is a prime power. We know that the number of MUBs in prime power dimension d is d + 1. The coherence complementarity relation (considering the definition of coherence to be normalized by the factor 1/(d − 1)) for the l_1-norm turns out to be [1] where P(ρ) = Tr(ρ²) ≤ 1 is the purity of the state. This inequality can now be used to derive the LHS bound on the quantity for bipartite states with subsystems of prime-power dimension d. To find the bound, we show a,b=0 λ,λ ′ d+1 k=1 d m,n=1, m>n P λ P λ ′ p(a|K k m , λ)p(b|K k n , λ ′ ) + p(b|K k m , λ ′ )p(a|K k n , λ) C where K^k_n = Modulo(k + n − 1, d + 1) + 1. Here, if the a's and b's in the second line are summed, each term in the square bracket gives the value one. Similarly, if the m's and n's are summed, there are d(d − 1)/2 such pairs of terms inside the square bracket, giving rise to the d(d − 1) term in the third line. In the last inequality, we use the coherence complementarity relation as given in Eq. (E9).
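The counting of d + 1 mutually unbiased bases invoked above can be checked numerically for the simplest prime-power case, an odd prime d, using the standard Gauss-sum construction (computational basis plus d phase-twisted Fourier bases). The sketch below is only an illustrative verification of that counting and is not the construction used in the supplemental material.

```python
import numpy as np
from itertools import combinations

d = 3                                   # an odd prime dimension
omega = np.exp(2j * np.pi / d)

bases = [np.eye(d, dtype=complex)]      # computational basis
for k in range(d):
    # column j of B is the j-th vector of the k-th Fourier-like basis
    B = np.array([[omega ** ((k * l * l + j * l) % d) for j in range(d)]
                  for l in range(d)]) / np.sqrt(d)
    bases.append(B)

for (i, Bi), (j, Bj) in combinations(enumerate(bases), 2):
    overlaps = np.abs(Bi.conj().T @ Bj) ** 2
    assert np.allclose(overlaps, 1.0 / d), (i, j)   # mutual unbiasedness: all overlaps 1/d
print(f"verified {len(bases)} = d + 1 mutually unbiased bases for d = {d}")
```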
F. Generalization of 1SQI bound
One can similarly find the bound on the sum of the squares of the local quantum coherences in the MUBs for the 1SQI model, for bipartite states with subsystems of prime-power dimension d. As before, we start with the quantity S as p(b|j, λ)C α k (ρ a,λ ) where we take the first term S_1 to be the first term of the second line in the above equation and S_2 to be the second. In the above inequality, we use the fact that for any positive real numbers p i , q i , r i and s i , d+1 i =j =k i,j,k=1 (F12) Now, we can easily expand over the i-index and show that where F (a|k, λ) =
Effects of Powder, Extracts, and Components of Ganoderma Lucidum in Treatment of Diabetes
Ganoderma lucidum is known in China and Japan as Ling-Zhi and Reishi. Due to its medicinal properties and varied nutritional composition, ganoderma lucidum is currently used in food products. It contains essential fatty acids, essential amino acids, and a wide range of polysaccharides, all of which seem to be effective in lowering blood sugar levels. This study aims to review the anti-diabetic and hypoglycemic effects of various powders, extracts, and components of ganoderma lucidum, by searching articles in Persian and English published from 2001 to 2020 in the SID, MagIran, Scopus, PubMed, Web of Science and Google Scholar databases using the keywords: active compounds, ganoderma lucidum, diabetes mellitus, hyperglycemia. The results showed that ganoderma lucidum intake in most cases reduced fasting blood sugar, glycosylated hemoglobin, and insulin resistance in diabetic patients due to its active compounds, including the extracted polysaccharides, proteins and triterpenoids. Moreover, its antioxidant and anti-inflammatory properties seem to reduce the complications of diabetes. In conclusion, the consumption of ganoderma lucidum by diabetic patients can be effective in controlling and preventing the disease, although more studies are needed on its effective dose, side effects and toxicity in human samples.
Introduction
Because of the widespread prevalence of diabetes worldwide, especially in developing countries, its management is of particular importance. In addition to glucose-lowering medicines, special dietary regimes and the use of functional foods are recommended. One group of foods that has long been considered, due to its beneficial compounds and low glycemic index, is the mushroom family [1]. Among them, ganoderma lucidum is well known for its specific bioactive compounds and varied nutritional composition, and is currently used in food products. It contains essential fatty acids, essential amino acids and a wide range of polysaccharides, all of which seem to be effective in lowering blood sugar.
This study aims to review the effective compounds of ganoderma lucidum and summarize the possible mechanisms of its powder and extracts in controlling hyperglycemia and diabetes. The anti-oxidation and anti-inflammatory properties of this mushroom are also investigated.
Polysaccharides extracted from ganoderma lucidum improved glucose metabolism and enhanced serum insulin levels through various mechanisms, leading to lowered blood glucose levels. Moreover, it was shown that proteins of ganoderma played an important role in reducing blood sugar by stimulating the immune system via activation of Forkhead box P3 (FOXP3+) regulatory cells, CD-18 production, and down-regulation of CD-45 [7]. Furthermore, the triterpenoids of ganoderma, including different ganoderic acids, could ameliorate diabetes complications through inhibition of sorbitol accumulation and reduction of intestinal glucose uptake by inhibiting aldose reductase and α-glucosidase secretion [8,9]. The effectiveness of ganoderma lucidum in controlling blood glucose and diabetes complications may be due to its antioxidant and anti-inflammatory properties. Ganoderma proteoglycan extract increased enzymatic and non-enzymatic antioxidants in diabetic rats [10,11]. Similarly, ganoderma lucidum spores could modulate B lymphocytes and reduce the CD4+/CD8+ T-cell ratio in diabetic mice, leading to regulation of the immune system and control of the inflammation caused by hyperglycemia [12]. There have been only a few studies on human models. Although ganoderma intake improved blood sugar in some studies [13,14], it was not effective at all dosages in other studies [15,16]. The possible mechanisms were not investigated in these clinical trials. In comparing the effect of ganoderma lucidum with hypoglycemic drugs, the results of some studies indicated that polysaccharides, proteoglycans, ganoderic acids [17][18][19], and extract of ganoderma [20] had effects similar to metformin.
Discussion and Conclusion
The beneficial effects of ganoderma lucidum on regulating blood glucose and glycosylated hemoglobin have been demonstrated according to the mechanisms described in animal studies. Since only limited human studies have examined its effect on hyperglycemia and no mechanism has been suggested for it, more clinical trials are needed before claiming that ganoderma has anti-diabetic effects in humans. Overall, it can be concluded that consumption of ganoderma lucidum by both diabetic patients and the general population may have a regulatory effect on blood glucose, although the effective dosage, side effects, and drug interactions still need to be clarified. The practical use of this mushroom in the food industry can be considered for the production of functional foods, which may promote the health of the general population and help prevent diabetes.
Compliance with ethical guidelines
No data have been fabricated or manipulated to support our conclusions in the current work. All available studies in this area are included for review.
Funding
This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.
Conflicts of interest
The authors declared no conflict of interest.
Minijet transverse energy production in the next-to-leading order in hadron and nuclear collisions
The transverse energy flow generated by minijets in hadron and nuclear collisions into a given rapidity window in the central region is calculated at next-to-leading order (NLO) in QCD at RHIC and LHC energies. The NLO cross sections for transverse energy production in pp collisions are larger than the LO ones by factors of K_{RHIC} ~ 1.9 and K_{LHC} ~ 2.1 at RHIC and LHC energies, respectively. These results are then used to calculate the transverse energy spectrum in nuclear collisions in a Glauber geometrical model. We show that accounting for NLO corrections in the elementary pp collisions leads to a substantial broadening of the E_{perp} distribution for the nuclear ones, while its form remains practically unchanged.
Introduction
Minijet physics is one of the most promising applications of perturbative QCD to the analysis of processes with multiparticle production. The minijet approach is based on the fact that some portion of transverse energy is produced in the semihard form, i.e. is perturbatively calculable because of the relatively large transverse momenta involved in the scattering but is not observed in the form of customary hard jets well separated from the soft background. A notable feature of this approach is a predicted rapid growth of the integrated perturbative cross section of parton-parton scattering, responsible for perturbative transverse energy production, with energy, and at RHIC (200 GeV/A in CMS) and especially LHC (5500 GeV/A) energies the perturbative cross section becomes quite large and in fact even exceeds the inelastic and total cross sections for large enough rapidity intervals. The crossover from the regime described by conventional leading twist QCD and the one where multiple hard interactions are important is one of the most important problems of the minijet approach [1]. The field is actively developing, recent reviews on the subject containing a large number of references are e.g. [2] and [3].
A special importance of minijet physics for ultrarelativistic heavy ion collisions is due to the fact that minijets with large enough transverse momenta are produced at a very early stage of the collision thus forming an initial parton system that can further evolve kinetically or even hydrodynamically, so that the minijet physics describes the initial conditions for subsequent collective evolution of parton matter [4], [5], [6], [7], see also a recent review [8].
Among the recent developments let us mention a new approach to minijet production based on the quasiclassical treatment of nuclear gluon distributions, [9], [10], [11], [12], [13] and a description based on the parton cascade approach [14].
The prospect of having a perturbatively controllable description of a substantial part of the inelastic cross section is certainly very exciting. However, to determine the accuracy of the predictions given by minijet physics, the existing calculations have to be extended in several directions.
In this paper we discuss a conceptually simplest extension of the leading order (LO) calculation of the transverse energy spectrum produced in heavy ion collisions presented in [6], [16] and [17] by including the next-to-leading order (NLO) contributions to this cross section. The NLO corrections to conventional high p ⊥ jet production cross section were computed in [18], [19] and [20] for Tevatron energies. Later the code of [19] was used for calculating this cross section for RHIC and LHC energies in [21]. The necessity of doing this computation in the minijet region was, of course, clearly understood and emphasized in the literature on minijet physics [3], [22]. A clear goal here is to establish an accuracy of the LO prediction by explicitly computing the NLO corrections to it.
The outline of the paper is as follows.
In the second section we describe a calculation of the next-to-leading order (NLO) cross section of transverse energy production in proton-proton collisions using the Monte Carlo code developed by Z. Kunszt and D. Soper [19] and study the deviation from the LO result.
In the third section the computed NLO cross section is used in the calculation of transverse energy production in heavy ion collisions, where the nuclear collision is described as a superposition of the nucleon-nucleon ones in a Glauber geometrical approach of [6]. We show that NLO corrections lead to a substantial broadening of the transverse energy spectrum.
In the last section we discuss the results and formulate the conclusions.
NLO minijet transverse energy production in hadron collisions
The basic difference of minijet physics from that of the usual high-p_t jets is that minijets cannot be observed as jets as such. Experimentally they manifest themselves in more inclusive quantities such as the energy flow into a given rapidity window. The NLO calculation of a jet cross section contains a so-called jet defining algorithm specifying the resolution for the jet to be observed, e.g. the angular size of the jet-defining cone, see e.g. [23]. As a minijet cannot be observed as an energy flow into a cone separated from the rest of the particles produced in the collision, some of the restrictions employed in the standard definition of a jet should be relaxed. A natural idea is to define the minijet-produced "jet" as the transverse energy deposited in some (central) rapidity window and the full azimuth. This corresponds to a standard detector setup for studying the central rapidity region in nuclear collisions at RHIC and LHC. The inclusive cross section is obtained by integrating the differential one over the phase space on the surface corresponding to the variable in question. Schematically, the NLO distribution of the transverse energy produced into a given rapidity interval y_a < y < y_b is equal to

\frac{d\sigma}{dE_\perp} = \sum_n \int d\sigma^{(n)}\, \delta\!\Big(E_\perp - \sum_i |p_{\perp i}|\, \theta(y_a < y_i < y_b)\Big),  (1)

where the outer sum runs over the parton multiplicities of the final states contributing at this order and the inner sum runs over the final-state partons i. As in all NLO calculations, the separation between the real emission and virtual exchange requires regularization, in addition to the usual ultraviolet renormalization, of the infrared and collinear divergences. Physically this means that a distinction between a two-particle and a three-particle final state becomes purely conventional.
In perturbative QCD one can meaningfully compute only infrared stable quantities [26], in which the divergences originating from real and virtual gluon emission cancel each other, so that adding very soft gluons does not change the answer. It is easy to convince oneself, that the transverse energy distribution into a given rapidity interval Eq. (1) satisfies the above requirement.
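A minimal sketch of the observable defined in Eq. (1) is given below: the transverse energy deposited in the rapidity window by a list of final-state partons, and a check that adding an almost-soft extra gluon barely changes it, which is the infrared-safety property discussed above. The parton momenta are illustrative numbers, not output of the NLO Monte Carlo.

```python
def et_in_window(partons, y_a=-0.5, y_b=0.5):
    """E_T = sum of |p_T| over partons falling in the rapidity window; partons = list of (pT, y)."""
    return sum(pt for pt, y in partons if y_a < y < y_b)

two_parton = [(6.0, 0.2), (6.0, -1.4)]              # illustrative momenta in GeV
three_parton = two_parton + [(1e-6, 0.3)]           # plus an arbitrarily soft gluon

print(et_in_window(two_parton))    # 6.0
print(et_in_window(three_parton))  # 6.000001 -> the observable is infrared stable
```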
In more physical terms the main difference between the LO and NLO calculations is that unlike the LO case the number of produced (mini)jets is no longer an infrared-stable quantity in the NLO computation, i.e. it can not be predicted. For example, we can no longer calculate a probability of hitting the acceptance window by a given number of minijets, which is one of the important issues in an event-by-event analysis of the initially produced gluon system (for details see [24]). In view of the applications of the NLO results for nucleon-nucleon collisions to the nuclear ones this means in turn that the elementary block in the geometric approach is no longer describing the production of a fixed number of jets (2 jets in the LO case), but the production of transverse energy into a kinematical domain specified by the jet defining algorithm.
The calculation of the transverse energy spectrum was performed using the Monte Carlo code developed by Kunszt and Soper [19], with a jet definition appropriate for transverse energy production as specified in Eq. (1). The results for the cross section of transverse energy production into a central rapidity window −0.5 ≤ y ≤ 0.5 are presented in Fig. 1 for the RHIC energy (√s = 200 GeV) and Fig. 2 for the LHC one (√s = 5500 GeV), where for LHC we have chosen the energy of proton-proton collisions available for protons in the lead nuclei in PbPb collisions. The calculations were performed using the MRSG(-) structure functions [25]. The parameters of the fits to the spectra, which have the functional form a E_⊥^{−α}, are given in Table 1. We see that taking into account the NLO corrections to transverse energy production can roughly be described by introducing effective K-factors K_RHIC ∼ 1.9 and K_LHC ∼ 2.1. This agrees well with the "expected" K-factor used in [2], [3]. Let us note that while at RHIC energies the slopes of the LO and NLO curves are practically the same, at LHC energies the NLO slope is about 2 percent larger than the LO one.
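The power-law fit quoted above amounts to a straight-line fit in log-log space. The sketch below illustrates how such a fit of a E_⊥^{−α} can be extracted; the sample spectrum points are purely illustrative placeholders, not the Monte Carlo output behind Table 1.

```python
import numpy as np

# Illustrative spectrum points: E_T (GeV) and d(sigma)/dE_T (arbitrary units)
et = np.array([4.0, 6.0, 8.0, 12.0, 16.0])
dsigma = np.array([2.1e0, 4.0e-1, 1.2e-1, 2.4e-2, 7.5e-3])

# fit log(dsigma) = log(a) - alpha * log(E_T)
slope, intercept = np.polyfit(np.log(et), np.log(dsigma), 1)
alpha, a = -slope, np.exp(intercept)
print(f"alpha = {alpha:.2f}, a = {a:.2f}")
```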
In the third column we give the values of the integrated minijet production cross section for the cutoff value E_0 = 4 GeV. The range of validity of the leading twist approximation for transverse energy production in any given rapidity window is determined by the inequality

\sigma_{\rm minijet}(E_0) \le \sigma_{\rm inel}.  (2)

The equality in the above formula fixes the limiting value of the cutoff E*_0. The values of the inelastic cross section are shown in the fourth column of Table 1 and the limiting cutoff values E*_0 are shown in the fifth one. We see that the limiting values E*_0 are quite small. Let us stress that the values of E*_0 depend rather strongly on the rapidity window under consideration. The limiting cutoff values shown in Table 1 refer to the central unit rapidity interval and thus present a lower bound with respect to those corresponding to larger rapidity intervals.
Let us also note that, as mentioned before, we had to fix a scale for the NLO logarithmic corrections, which for the above calculations was chosen to be µ = E ⊥ . We have checked that variations of this scale in the range 0.5 E ⊥ ≤ µ ≤ 1.5 E ⊥ does not produce significant variations of the result, so that the NLO calculation is stable and thus produces a reliable prediction for the difference between the LO and NLO cases.
NLO transverse energy production in nuclear collisions
In this section we turn to an estimate of the transverse energy production in nuclear collisions in a Glauber-type approach, in which they are considered to be an impact-parameter-averaged superposition of hadron-hadron collisions. We follow a geometrical approach similar to that of [6] and consider the Gaussian approximation to the transverse energy distribution at a given impact parameter in the collision of two nuclei with atomic numbers A and B; the distribution is characterized by the mean transverse energy E_⊥AB(b) produced at a given impact parameter b, the dispersion D_AB(b) of the E_⊥ distribution, and the probability P_AB(b) of nuclear scattering at a given impact parameter, whose normalization is fixed by the inelastic cross section for minijet production in pp collisions, Eq. (2). Let us note that the Glauber model we use is similar to that of [6] in that the transverse energy distribution at a given value of the impact parameter is approximated by a Gaussian, see Eq. (4), but differs from it in using a Bernoulli process instead of the Poissonian one of [6] and in the overall normalization to the integrated minijet production cross section in the rapidity window under consideration. The final expression for the cross section of transverse energy production in nuclear collisions is obtained from Eq. (4) by integrating over the impact parameter. The resulting transverse energy distributions are plotted in Fig. 3. We see that the main effect of taking into account the NLO corrections is a considerable broadening of the shoulder of the distribution, which basically follows from the increase in the magnitude of the transverse energy production cross section from LO to NLO. At the same time the height of the transverse energy distribution basically does not change. The above results show that, to this order of the perturbative QCD computation, the NLO prediction is an increased event-by-event produced transverse energy, providing more favorable conditions for collective behavior of the produced gluon system, its thermalization, etc., as compared to the leading order calculations.
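To illustrate the impact-parameter averaging described above, the following toy sketch integrates a Gaussian E_⊥ distribution over impact parameter. Every input in it (the hard-sphere overlap function, the per-collision cross section, and the per-collision mean and variance of E_⊥) is an illustrative assumption chosen for readability; these are not the quantities or values used in the paper.

```python
import numpy as np

A = 197                          # nucleon number (e.g. Au), illustrative
R = 1.2 * A ** (1.0 / 3.0)       # toy nuclear radius in fm
sigma_in_fm2 = 30.0 * 0.1        # assumed inelastic minijet cross section: 30 mb -> 3 fm^2
mean_et_pp, var_et_pp = 2.0, 4.0 # assumed per-NN-collision E_T moments (GeV, GeV^2)

def t_ab(b):
    """Toy hard-sphere nuclear overlap T_AB(b) in fm^-2 (illustrative only)."""
    if b >= 2 * R:
        return 0.0
    return (A * A / (np.pi * R ** 2)) * (1.0 - (b / (2 * R)) ** 2)

def dsigma_det(et, n_b=400):
    """d(sigma)/dE_T from the b-integrated Gaussian superposition (fm^2 / GeV)."""
    bs = np.linspace(0.0, 2 * R, n_b)
    db = bs[1] - bs[0]
    total = 0.0
    for b in bs:
        n_coll = t_ab(b) * sigma_in_fm2              # mean number of NN collisions at this b
        if n_coll <= 0:
            continue
        mean, var = n_coll * mean_et_pp, n_coll * var_et_pp
        gauss = np.exp(-(et - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        total += 2 * np.pi * b * gauss * db          # d^2b = 2*pi*b*db
    return total

print(dsigma_det(500.0))
```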
Discussion
The results of the above-presented calculation of the next-to-leading order corrections to the transverse energy flow into a unit rapidity interval in the central region show that the NLO corrections to the Born calculation of the transverse energy distribution in pp collisions based on the lowest order 2 → 2 scattering are substantial, so that the effective K-factors are K RHIC ∼ 1.9 and K LHC ∼ 2.1, in accordance with the "expected" ones previously used in the literature [2], [3].
The cross section of transverse energy production in pp collisions serves as a basic building block in the geometrical Glauber model of nuclear collisions. Switching from LO to NLO cross section of E ⊥ production results in a substantial broadening of the minijet transverse energy distribution in nuclear collision providing more favorable conditions for subsequent collective effects characteristic for dense parton systems such as quark-gluon plasma to manifest themselves. Let us note, that the form of this distribution does not change a lot.
The effective K-factors being substantial (although not overwhelmingly large) certainly make it necessary to develop a more accurate treatment of minijet production. To achieve this goal one has to solve two interrelated problems. Both are linked to the large value of the integrated minijet transverse energy production cross section in the leading twist approximation discussed in the second section and the resulting conflict with unitarity at low minijet transverse energies at LHC, at least when considering large enough rapidity windows.
First of all, one has to get a reliable estimate of the higher order corrections to the hard blob responsible for E ⊥ production. Most probably it will require resumming the perturbative contributions to all orders, because even if the hypothetical next-to-next-to leading order calculation reduces the K-factor, the arising large cancellations would require further improvement of the accuracy of the calculation, etc. Such resummation program has been successfully performed in the case of jet pair production at the kinematical threshold [29]. Unfortunately at present it is not clear how to extend this program to the minijet production kinematics.
The second problem is even more difficult and is related to the necessity of a reliable computation of the nonlinear contributions to (mini)jet production, which are quite important at high energies both in hadron and nuclear collisions [2] and in photoproduction [28]. The current approach to describing nonlinear effects is based on the ad hoc eikonal unitarization scheme, see e.g. [2]. This scheme does not take into account the processes in which several nucleons are involved in transverse energy production. This problem was recently reanalyzed in [30] for pA collisions, showing in particular an interesting connection between the nonlinear effects in the structure functions and those in the spectrum of emitted gluons. An advanced analysis of the nonlinear effects, using the example of nuclear effects in jet photoproduction [31], has revealed a number of interesting features possibly relevant also for the computation of nonlinear effects in the hadroproduction of jets. One of the most striking aspects of this treatment is that although diagrammatically the nonlinear effects initially look like a superposition of the leading twist contributions, the final result appears to be a next-to-leading twist one due to a delicate cancellation between various diagrams [31], [32].
Various theoretical schemes possibly responsible for taming the growth of the minijet production cross sections were discussed. One of them is based on accounting for nonlinear effects in quasiclassical approach, [33], [12]. Here at tree level the nonperturbative lattice calculations of minijet production in nuclear collisions were performed in [13]. A more traditional treatment based on accounting for nonlinear effects in QCD evolution equations is described in the review [34].
Another related problem is a necessity of switching from the collinear to high energy factorization in describing the minijet production at high energies, see e.g. [35], [12].
In summary, the computed NLO corrections to the minijet transverse energy production in hadron and nuclear collisions turned out to be substantial both for RHIC and LHC energies. Qualitatively this enhances the energy production in the central region and significantly broadens the transverse energy spectrum in nuclear collisions. However much work is still ahead in order to improve the accuracy of these predictions.
A Pilot Study on Integrating Videography and Environmental Microbial Sampling to Model Fecal Bacterial Exposures in Peri-Urban Tanzania
Diarrheal diseases are a leading cause of under-five mortality and morbidity in sub-Saharan Africa. Quantitative exposure modeling provides opportunities to investigate the relative importance of fecal-oral transmission routes (e.g. hands, water, food) responsible for diarrheal disease. Modeling, however, requires accurate descriptions of individuals’ interactions with the environment (i.e., activity data). Such activity data are largely lacking for people in low-income settings. In the present study, we collected activity data and microbiological sampling data to develop a quantitative microbial exposure model for two female caretakers in peri-urban Tanzania. Activity data were combined with microbiological data of contacted surfaces and fomites (e.g. broom handle, soil, clothing) to develop example exposure profiles describing second-by-second estimates of fecal indicator bacteria (E. coli and enterococci) concentrations on the caretaker’s hands. The study demonstrates the application and utility of video activity data to quantify exposure factors for people in low-income countries and apply these factors to understand fecal contamination exposure pathways. This study provides both a methodological approach for the design and implementation of larger studies, and preliminary data suggesting contacts with dirt and sand may be important mechanisms of hand contamination. Increasing the scale of activity data collection and modeling to investigate individual-level exposure profiles within target populations for specific exposure scenarios would provide opportunities to identify the relative importance of fecal-oral disease transmission routes.
Introduction
Diarrheal diseases caused by exposure to pathogenic agents are a leading cause of under-five mortality and morbidity in sub-Saharan Africa [1]. In addition, exposure to non-pathogenic fecal bacteria may contribute to child morbidity associated with malnutrition and child stunting [2]. Despite the known and potential impacts of fecal exposures on child health, there is little understanding of the relative importance of fecal-oral disease transmission routes in developing countries [3]. Contemporary research largely focuses on transmission through drinking water and food. However, recent evidence has highlighted the role of non-dietary ingestion pathways (e.g., ingestion of soil and contaminants via hand-to-mouth contacts) in children's microbial exposures in low-income countries [3,4]. In one study, infants in peri-urban areas of Zimbabwe were observed consuming large quantities of fecal bacteria through ingestion of soil and chicken feces [5]. Other studies have demonstrated high concentrations of fecal bacteria on surfaces and soils in developing countries, suggesting these matrices may influence child exposure to fecal contamination [4,6].
Quantitative exposure and risk modeling provides opportunities to identify routes responsible for exposure to feces. Quantitative microbial risk assessment (QMRA) models identify risks associated with infectious disease exposures [7]. Examples of QMRA models investigating non-dietary ingestion routes include quantifying risks from nosocomial surfaces, food preparation surfaces, and cleaning laundry [8][9][10]. Risk assessment models have been used to quantify risks associated with specific activities, understand relative contributions of various exposure pathways, highlight the need for quantification of exposure factors, and identify effective interventions [11,12]. For example, Mattioli et al. (2015) applied this framework to estimate that 97-98% of the total fecal matter ingested by a Tanzanian child is due to hand-to-mouth contact events as compared to consumption of contaminated drinking water [13]. One limitation to the accuracy and applicability of risk assessment models is the reliance on accurate descriptions of individuals' interactions with the environment. Mattioli et al. (2015), for example, modeled Tanzanian children's interactions with the environment based on data collected about children from the United States [13]. Traditionally, human-environment interaction data are collected by one or more of the following methods: activity recall, activity diaries, structured observation, and third person videography [14]. Videography is considered superior to other activity data collection methods because it is more accurate, eliminates recall bias, and provides an opportunity to record difficult-to-remember events, events of short duration (e.g., hand-to-mouth contacts), and specific sequences of events [14,15]. Third person videography, combined with video translation, has been used to quantify activity data in order to estimate child exposure to chemical contaminants [16,17]. The activity data provided by videography and video translation are known as micro-level activity time series data (MLATS). MLATS data are second-by-second sequences of discrete contact events. The high-resolution data on sequential contacts can be applied to understanding dermal and non-dietary exposures. In this study, first person perspective videography (FPV) is employed. FPV shifts the position of the camera from a third person perspective to a first person perspective by using a small, portable, head-mounted videocamera. The videocamera is mounted with a forward and downward angle to capture the full range of motion of the participant's hands. When correctly framed, the angle of the videocamera is able to capture all hand-to-mouth and object-to-mouth contacts. Although FPV has been previously employed in sociological research, the present study provides the first known instance of using FPV to collect contact data [18].
The study objective was to demonstrate the application and utility of second-by-second activity data collection to understand fecal contamination exposures on hands in the developing world. The objective is accomplished by piloting FPV to capture activity data to parameterize a quantitative model of fecal bacteria exposures. The study provides a framework for increasing the scale of the described methods (activity data collection and microbial exposure profile modeling) to identify the relative importance of fecal-oral disease transmission routes in low-income countries. Microbial exposure profiles are time series of microbial contamination concentrations on surfaces (for example, hands). The profiles are then used to identify factors influencing contamination and estimate subsequent risks of infection from interactions with the surface. By providing a framework for modeling individual-level exposure profiles, we provide insight into methods needed to better understand inter-individual variability in fecal-oral disease transmission.
Participant Recruitment
Two women in a low-income urban community within Dar Es Salaam, Tanzania were recruited coincident with an ongoing field trial of hand hygiene among female caretakers for children under five years old.
Ethics Statement
The following protocol and consent procedures were approved by both the Stanford University Research Compliance Office for Human Subjects Research and the Muhimbili University's Institutional Review Board. Verbal informed consent was obtained from both participants following the communication of an approved script in Swahili (the participants' native language) by a trained field enumerator. Written consent was not necessary because the research fulfilled the Stanford University Human Research Protection Program policy that verbal consent is sufficient when the research presents no more than minimal risk of harm and does not involve any procedures which would otherwise require written consent if conducted outside of the research context. Research proceeded only following affirmation of both understanding and consent, according to the approved protocol.
Videography
Video cameras (GoPro Digital Hero 5, Woodman Labs, San Mateo, CA) were attached to head mounts such that the camera angled forward and downward to capture the majority of the hands' range of motion (Fig 1). The two participants were instructed to complete their normal activities for two consecutive thirty-minute intervals from approximately 8:30 AM until 9:30 AM. In total, two hours of video were collected.
Video-Translation
A modified version of Virtual Timing Device (VTD) Software (SamaSama Consulting, Sunnyvale, CA) was used to translate the videotape footage into MLATS for the caretakers' right and left hands [14]. The VTD Software provides an interface consisting of two grids; each grid contains cells that represent various options for 1) the location of the female caretaker, and 2) the objects contacted by the caretaker's hand. Each location and object is represented by a button on the software interface and can be specified by the user. The two locations in the first grid included: 1) inside and 2) outside. The second grid specified objects, which included surfaces that a hand contacted, water that a hand was immersed in (e.g., when washing clothing), and other special designations (e.g., the hand is not touching anything).
The objects that were specified for this study were chosen during an initial screening of the videos based on which objects were most frequently contacted. The eighteen surfaces in the objects grid included: 1) wooden broom, 2) metal bucket, 3) burlap sack, 4) charcoal, 5) clothing, 6) dirt or sand, 7) wooden door, 8) participant's own face (face), 9) food scraps, 10) participant's own hands, 11) money, 12) plastic objects, 13) metal cooking utensils including cooking pots and handles, 14) rubber, 15) bar soap, 16) stone, 17) wooden objects, and 18) paper towel used for hand sampling. The four water immersion objects included: 1) washing clothing, 2) washing hands, 3) water for drinking, and 4) water for hand sampling. The two special designations included: 1) nothing, and 2) not in view. To use VTD Software, a researcher views the video of the caretaker and in real time selects buttons for the location and the object the participant is contacting. VTD records the sequence of the objects contacted along with the duration that each object is contacted. Contacts are recorded for all events where fingers, palm, and/or back of the hand come into direct contact with an object. Only one object can be contacted at a time, and when no object is contacted, the object category "nothing" is selected. The category "not in view" is selected when the hand is out of view and it is unclear what object the hand is contacting. Output from VTD is a computer text file with each line providing the location of the caretaker, the object contacted, and the duration of the contact. The present study provides data on the objects contacted by the left and right hands of the female caretakers. Data from VTD were imported into Microsoft Excel 2007 for analysis. As is common with video translation using VTD, quality control measures are required (Beamer 2012). One researcher translated the two hours of videotape. The same researcher then reviewed the tapes second-by-second to ensure the translations were accurate. When errors were identified, short segments (15 seconds to 2 minutes) were translated a second time and integrated into the data file to replace the erroneous segment, and the file was reviewed again.
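As a rough illustration of how such an output file can be turned into an analyzable MLATS, the sketch below parses a tab-delimited file with one contact per line. The column order, delimiter, and file name are assumptions based on the description above, not the actual VTD export format.

```python
# Sketch: parse a VTD-style export into a micro-level activity time series (MLATS).
# Assumes one contact per line as "location <TAB> object <TAB> duration in seconds";
# the real VTD export format may differ.
import csv

def read_mlats(path):
    contacts = []
    with open(path, newline="") as f:
        for location, obj, duration_s in csv.reader(f, delimiter="\t"):
            contacts.append({"location": location, "object": obj, "duration_s": float(duration_s)})
    return contacts

# Example (hypothetical file name):
# mlats = read_mlats("participant_A_right_hand.tsv")
# not_in_view_s = sum(c["duration_s"] for c in mlats if c["object"] == "not in view")
```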
Microbial Sampling
Hand samples were obtained from participants prior to collection of video. The hand samples were obtained using the hand rinse sampling method, which includes contacts with both water (characterized in the exposure model as the object 'water used for hand sampling') and a paper towel (characterized as the object 'paper towel used for hand sampling') [19]. In brief, the hand rinse method includes placing hands consecutively in a Whirl-pak bag (Nasco, Fort Atkinson, WI, USA) containing 350 ml of Uhai bottled drinking water pre-screened for the absence of target bacteria (i.e., E. coli and enterococci) and dosed with sodium thiosulfate. E. coli and enterococci were chosen as target bacteria because they are indicative of fecal contamination and co-occur with fecal pathogens on hands [19,20]. Additionally, both E. coli and enterococci are more abundant and easier to quantify than fecal pathogens. Both hands were placed into one bag, as opposed to measuring contamination on each hand individually, to decrease the lower limit of detection for bacterial contamination.
In addition to hand rinses, five fomites were sampled (see Table 1). The samples were chosen as a representative subset of the fomites that participants had contacted during the video recording, as observed by the data collectors, in order to provide data on bacterial contamination of surfaces contacted by the participants. Fomite samples were obtained by swabbing a surface of roughly 10 cm x 10 cm with polyester-tipped swabs pre-wet in 10 ml of ¼-strength Ringer's solution (Oxoid Limited, Hampshire, UK) [4]. Hand rinse and fomite samples were transported on ice to the laboratory for analysis within 4 hours of sample collection.
All samples were processed via membrane filtration to enumerate enterococci using mEI media per EPA Method 1600 [21] and E. coli using MI media per EPA Method 1604 [22], as described elsewhere [3,19]. Lower and upper limits of detection for E. coli and enterococci on fomites are estimated at approximately 2.5 and 500 CFU/100 cm². Limits of detection for bacteria on fomites were calculated using a countable range of 1-200 colonies following filtration of 2/5ths (2 ml out of 5 ml) of the Ringer's solution used for fomite sampling. Lower and upper limits of detection for bacteria on hands are estimated at 3.5 and 700 CFU/2 hands. Limits of detection on hands were calculated using a countable range of 1-200 colonies following filtration of 2/7ths (100 ml out of 350 ml) of the water used for hand sampling.
Exposure Assessment Model
The exposure model is a modified version of the conceptual model for estimating microbial exposures from fomites previously described in Julian et al. (2009) [23]. Two exposure models were developed separately, one model for E. coli and one for enterococci contamination of hands. In brief, every object identified using video translation was assigned a quantitative estimate of surface E. coli and enterococci concentration and a quantitative estimate of the fraction of bacteria transferred on contact (Tables 1 and 2). Using the micro-level activity time series data, we assume the sequential contact of each object transferred bacteria to or from the hands based on the surface concentrations of the object and hand and the object-specific transfer efficiency, consistent with the work of Julian et al. (2009) [23]. Similarly, we assumed parameters besides surface type, transfer direction, and magnitude did not influence transfer (e.g., wetness) [23]. We assumed every contact impacted the concentration of bacteria on the hand: bacteria transferred from hand-to-object when bacterial concentrations were greater on hands than objects and from object-to-hand when concentrations were greater on objects than hands [23]. The amount of bacteria transferred to or from the hand was determined by multiplying the concentration gradient between the hand and object by the transfer efficiency [23]:
C_Hf = C_Hi + T × (C_O − C_Hi)
where C_Hf is the final concentration on the hand (CFU/100 cm²), C_Hi is the initial concentration on the hand (CFU/100 cm²), C_O is the concentration on the object (CFU/100 cm²), and T is the transfer efficiency for the object (expressed as a fraction). The assumption of transfer from the surface with greater contamination to that with lesser contamination is based on previous modeling work, as there are limited data on the impact of existing surface contamination on the magnitude or direction of transfer between two surfaces composed of different materials. The time series of E. coli and enterococci concentrations on the surfaces of each hand, as determined by both sequential contact events and bacterial inactivation, is the output from the exposure model [23]. Bacteria concentrations on hands are then converted, using an estimate of sampling efficiency, to concentrations measured via the hand sampling method (see Surface Concentration).
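A minimal sketch of this sequential-contact bookkeeping is shown below. It is illustrative only: the object names, surface concentrations, and transfer efficiencies are placeholders rather than the study's measured values, and the contact-area fraction and inactivation between contacts (described in later sections) are omitted for brevity.

```python
# Sketch of the sequential-contact exposure model (hypothetical values).
# Each contact moves bacteria between hand and object in proportion to the
# concentration gradient and an object-specific transfer efficiency.

def update_hand_concentration(c_hand, c_object, transfer_eff):
    """Hand concentration (CFU/100 cm^2) after one contact: C_Hf = C_Hi + T*(C_O - C_Hi)."""
    return c_hand + transfer_eff * (c_object - c_hand)

# Hypothetical MLATS fragment: (object, surface concentration CFU/100 cm^2, transfer efficiency)
contacts = [
    ("metal bucket", 50.0, 0.30),
    ("dirt or sand", 2500.0, 0.20),
    ("clothing", 10.0, 0.30),
    ("nothing", None, 0.0),          # no contact: concentration unchanged
]

c_hand = 400.0                        # assumed initial hand concentration, CFU/100 cm^2
profile = [c_hand]
for obj, c_obj, te in contacts:
    if c_obj is not None:
        c_hand = update_hand_concentration(c_hand, c_obj, te)
    profile.append(c_hand)

print(profile)                        # modeled hand contamination after each contact
```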
Surface Concentration
Surface bacteria concentrations for each object category were estimated. The ten fomites that were sampled (Table 1, "Measured") were used to estimate surface concentrations for eleven object categories (wooden broom, metal bucket, burlap sack, clothing, dirt or sand, wooden door, food scraps, plastic objects, metal cooking utensils, rubber, and wooden objects). Because E. coli recovery from surfaces using cotton-tipped swabs removes only an estimated 20% of bacteria [24], the measured concentrations were divided by a sampling efficiency to estimate the actual surface contamination (Table 1, "Modeled"), using:
F_I = F_M / f_F
where F_I is the initial concentration on the fomite, F_M is the concentration of bacteria measured on the fomite, and f_F is the sampling efficiency for E. coli using swabs (Table 3). Concentrations of bacteria for five categories (not in view, nothing, paper towel used for hand sampling, bar soap, and water used for hand sampling) were set equal to 0 CFU/100 cm² or 0 CFU/100 ml under the assumption the surfaces were clean. Notably, even when bar soap is contaminated with bacteria, there is no detectable transfer of bacteria to hands on contact [25,26]. Concentrations of bacteria for three categories (washing clothes, washing hands, and water used for drinking) were set equal to 25 CFU/100 ml and 200 CFU/100 ml for E. coli and enterococci, respectively, based on mean concentrations of E. coli and fecal streptococci, a group which includes enterococci, in stored water in Tanzania [19]. The object category for 'their own hands' was assigned a concentration equal to the concentration predicted by the exposure model for the other hand at the time of contact. Concentrations for the remaining four categories (charcoal, their own face, money, and stone), for which we have no data, were assumed to be contaminated at the lower detection limit (2.5 CFU/100 cm²), corresponding to contamination of 12.5 CFU/100 cm² when accounting for sampling efficiency (Table 1).
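The correction for swab recovery is a simple division by the sampling efficiency; a sketch using the 20% efficiency and the lower detection limit cited above:

```python
# Sketch: convert a measured fomite concentration to the modeled surface concentration
# by correcting for swab recovery (20%, per the text).

def modeled_surface_concentration(measured_cfu_per_100cm2, sampling_efficiency=0.20):
    return measured_cfu_per_100cm2 / sampling_efficiency

print(modeled_surface_concentration(2.5))  # 2.5 CFU/100 cm^2 measured -> 12.5 CFU/100 cm^2 modeled
```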
Transfer Efficiency
Transfer efficiencies of bacteria between hands and objects were based on a literature review of transfer efficiencies at high relative humidity (relative humidity in Dar es Salaam, Tanzania in July 2009 was between 60% and 90%) [19,20]. Although transfer efficiency of bacteria between hands and surfaces has been shown to be influenced by the type of surfaces, inoculum size, and characteristics of the contact event (pressure, friction, wetness), we simplified the model by neglecting these characteristics, in line with our previous work which suggested that model output is relatively insensitive to the magnitude of transfer efficiency when multiple contacts are modeled [23,30].
Hand Concentration
The model assumed that bacterial contamination of hands is impacted by contact, with bacteria transferring from the surface (object or hand) with the higher concentration of bacteria to the surface (object or hand) with the lower concentration. However, only the concentrations of bacteria on hands were assumed to be impacted; the concentrations on objects were assumed to remain at the measured concentration. For hands, the initial concentrations were calculated based on the results of the hand sampling at time 0, and were adjusted based on hand sampling removal efficiencies of 52% for E. coli and 44% for enterococci [19]. Based on estimated removal efficiencies, initial hand concentrations were calculated using:
C_I = C_M / f_H
where C_I is the initial concentration on hands (CFU/2 hands), C_M is the measured concentration on hands using the hand sampling method (CFU/2 hands), and f_H is the hand sampling removal efficiency. Concentrations are then adjusted to units of colony forming units (CFU) per cm² based on an average hand surface area of 440 cm² [31].
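The analogous adjustment for hands, followed by conversion to an areal concentration, might be sketched as follows; the removal efficiency, hand area, and 700 CFU/2 hands reading are the values cited in the text.

```python
# Sketch: initial hand concentration for the model from a hand-rinse measurement.
HAND_AREA_CM2 = 440.0   # average surface area of one hand, per the text

def initial_hand_concentration(measured_cfu_2hands, removal_efficiency):
    cfu_2hands = measured_cfu_2hands / removal_efficiency   # correct for incomplete removal
    cfu_per_hand = cfu_2hands / 2.0
    return 100.0 * cfu_per_hand / HAND_AREA_CM2             # CFU per 100 cm^2

print(initial_hand_concentration(700.0, 0.52))  # E. coli reading at the upper detection limit
```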
Bacterial Inactivation
We also accounted for inactivation on hands, consistent with the model previously reported by Julian et al. (2009) [23]. The log10 inactivation rate (k_EC) for E. coli was assumed to be 3 × 10⁻³ s⁻¹, based on the findings of Pinfold (1990), where 99% of E. coli were inactivated on hands in 10 minutes [32]. A lower log10 inactivation rate (k_ENT) for enterococci of 1.7 × 10⁻⁴ s⁻¹ was assumed, based on a 50% decrease in fecal streptococci in 30 minutes also reported in Pinfold (1990) [32].
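Between contacts, concentrations decay according to these log10-linear rates; a small sketch (the starting concentration is hypothetical):

```python
# Sketch: log10-linear inactivation of bacteria on hands between contacts,
# using the rates assumed in the text.
K_EC = 3e-3      # log10 units per second, E. coli
K_ENT = 1.7e-4   # log10 units per second, enterococci

def decay(c0, k_log10, seconds):
    """Concentration remaining after `seconds` of log10-linear inactivation."""
    return c0 * 10 ** (-k_log10 * seconds)

print(decay(1000.0, K_EC, 600))    # ~16 CFU left after 10 min (roughly 99% inactivated)
print(decay(1000.0, K_ENT, 1800))  # ~494 CFU left after 30 min (about 50% inactivated)
```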
Surface Area
To account for surface area during contact events, we assumed that a contact event transferred only the portion of bacterial contamination on the object in direct contact with the hand. For object contacts, we assumed an average 44 cm² of contact area (10% of 440 cm², the total hand surface area). For water contacts, we assumed total hand submersion (440 cm², or 100% of total hand surface area) [33,34]. We assumed duration of contact did not influence the fraction of bacteria transferred, consistent with the work of Cohen-Hubal et al., who found duration did not impact chemical residue transfer on contact [35]. This assumption was extended to include immersion of the hand in water: we assumed duration of immersion did not impact the fraction of bacteria transferred. However, immediately after the transfer event, we assumed that the bacteria transferred were uniformly distributed over the hand, in line with the previous model [23].
Non-dietary Ingestion
Non-dietary ingestion of fecal bacteria was modeled assuming that all hand-to-own face contact events resulted in ingestion of bacteria from hands. The amount of bacteria ingested was modeled based on previous work in Julian et al. (2009) [23]:
D = S_H × TE_HF × C_H
where D is the dose of bacteria ingested, S_H is the surface area of the hand in contact with the mouth, TE_HF is the percentage of bacteria transferred from the hand to the mouth, and C_H is the concentration of bacteria on the hands. Similar to transfer of bacteria between surfaces, we again assume the surface area of hand-to-mouth contact events equals 10% of the total hand surface area. Other sources of fecal bacteria ingestion (such as mouth contacts with contaminated objects besides hands, ingestion of contaminated water, and ingestion of contaminated food) were not included in the model because these events did not occur during the observation.
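A sketch of this dose calculation is given below; the 10% contact-area fraction and 440 cm² hand area are from the text, while the hand-to-mouth transfer efficiency and hand concentration used here are hypothetical placeholders.

```python
# Sketch: dose from one hand-to-mouth contact, D = S_H * TE_HF * C_H.
HAND_AREA_CM2 = 440.0

def ingested_dose(c_hand_per_100cm2, te_hand_to_mouth, area_fraction=0.10):
    s_h = area_fraction * HAND_AREA_CM2            # cm^2 of hand contacting the mouth
    c_h = c_hand_per_100cm2 / 100.0                # CFU per cm^2
    return s_h * te_hand_to_mouth * c_h            # CFU ingested

print(ingested_dose(150.0, 0.34))  # hypothetical: 150 CFU/100 cm^2 on hand, 34% transfer to mouth
```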
Micro-level Activity Time Series
Participant A was recorded for 54:16 min and spent her time sweeping and washing laundry (see Table 4). Both hands of both participants spent 10-20% of the total observed time not in contact with any objects or surfaces (see Table 4). Specifically, the left and right hands of Participant A were observed to spend 19% (10:23) and 12% (6:44), respectively, of the total observed time not contacting any object. Similarly, the left and right hands of Participant B were observed to spend 16% (10:08) and 11% (7:02), respectively, not contacting any object.
For the majority of time, both hands for both participants were in full view of the camera (see Table 4). However, full extension of the arm and misplacement of the camera on the head during the second half of the video for one participant (Participant B) resulted in the hands frequently falling outside of the camera's field-of-view. Participant A's left and right hands were not in view 3 times each for a total duration of 0:13 (0.4%) and 0:20 (0.6%), respectively. Conversely, Participant B's left and right hands were not in view 52 and 31 times for a total duration of 8:26 (13%) and 4:30 (7.1%), respectively.
Enterococci and E. coli on Hands and Fomites
The initial enterococci concentration on Participant B's hands was 340 CFU/2 hands. The corresponding E. coli concentration was above the limit of detection (>700 CFU/2 hands) and therefore assumed to be 700 CFU/2 hands. For Participant A, both initial enterococci and E. coli concentrations were above the detection limit (>700 CFU/2 hands) and so were also assumed to be 700 CFU/2 hands. Both E. coli and enterococci concentrations on fomites ranged from <2 to >500 CFU/100 cm² (see Table 1).
Measured concentrations on both hands and fomites were adjusted for the exposure model to account for sampling efficiency (Table 3). Initial concentrations for bacteria on hands ranged from 773 to 1591 CFU per 2 hands, or 387 to 796 CFU per hand. Fomite concentrations used for the exposure model, after accounting for sampling efficiency, ranged between 0 and 2500 CFU/100 cm².
Exposure Model
The modeled concentrations on the hands of each participant for E. coli and enterococci follow similar temporal trends. The concentration of bacteria increases substantially after 10 to 15 minutes, coincident with the participants' contacts with dirt/sand (Fig 2). The bacteria concentrations then decrease. The decrease is more rapid for Participant B than for Participant A. The decreases are either due to contacts with surfaces with lower bacterial concentrations than on the hands (Participant A) or contacts with water (Participant B). The bacteria concentrations eventually reach levels near the detection limit and then remain low for both participants for the remainder of the study.
Discussion
Exposure to fecal contamination in the developing world is linked to poor child and maternal health [1,2]. In the present study, first person perspective videography is used to collect micro-level activity time series (MLATS) data, and the MLATS data are used to develop exposure models. The study highlights evidence that exposure to dirt or sand may be most responsible for contamination of caretakers' hands and that water-related activities (e.g., washing clothing, washing hands) may not be effective at dramatically reducing E. coli contamination on hands. The impact of washing clothing, in particular, on E. coli concentrations may be due to E. coli contamination of laundry as observed in the United States and/or by use of soil or sand as a cleaning aid, as observed in Peru [36,37]. These findings further support the work of Pickering et al.
First person videography is capable of capturing MLATS for hands of study participants. When the camera is appropriately placed on the participant's forehead, as was the case for Participant A, first person videography is able to capture more than 99% of all hand contacts. However, FPV is subject to more data loss than third person perspective videography when the camera is misplaced, as was the case for Participant B. Misalignment of the camera for Participant B impacted the entire hour duration resulting in substantial data loss: 13% of the total time for the left hand and 7% for the right hand. As a comparison, the time out of view for third person perspective videography for child exposure factors is approximately 9% (range of 2 to 18%) for the left hand and 7% (2-13%) for the right hand [38]. Two modifications would improve first person videography as a tool for collecting MLATS: a camera with a wider field of view (wide angle lens) and a headset that adjusts to the individual participant and reduces risk of misalignment. Two other concerns for FPV are distractions associated with wearing a camera on one's head and invasions of personal privacy. Although neither participant appeared, nor described themselves as, distracted or concerned about privacy, a larger sample size of participants would be needed to understand these concerns. Future studies should include adaptation of protocols to allow for temporary removal of video cameras as well as survey-based data collection from participants on issues concerning FPV surveillance compliance.
Given the small sample size of two participants, generalizable conclusions drawn from the collected MLATS are limited. The potential range of activities for female caretakers in low-income countries is large, incorporating not only the activities observed but also other domestic (e.g., shopping, cooking, water collection) and/or economically productive (e.g., agricultural, commercial) activities. Nevertheless, similarities between the participants were observed, including high levels of activity characterized by frequent repeat contacts with common objects (e.g., metal bucket objects, metal cooking utensils, brooms, clothes) and infrequent contacts with rare objects (e.g., charcoal, dirt/sand, money, stone). This is noteworthy as the exposure models suggest that infrequent contacts (e.g., contact with dirt and sand in Fig 2, Participant B) were responsible for dramatic increases in bacterial contamination on hands. The reliance on a single video translation observer may have influenced categorization of objects contacted, which was not validated via inter-observer comparison, but accuracy of the frequency and duration of contact events was ensured via the described intra-observer validation protocol.
Estimates of parameter values strongly influence model outputs. The data used in the model were drawn from a combination of literature review, microbiological sampling, and videography. Of these sources, the microbiological sampling data are likely both the most influential and least certain. First, we did not collect estimates of microbial surface contamination for all of the fomites that the female caretakers contacted (e.g., charcoal, money, stone, faces). In the future, a wider array of surfaces should be sampled to ensure all surfaces contacted have an accurate estimate of microbial contamination. Second, bacteria concentrations on hands exceeded the upper limit of detection for the assays. We therefore had to assume that bacteria concentration on hands was equal to the upper limit of detection for the exposure models. This assumption directly influenced the performance of the model as our assumption that E. coli hand contamination was at the upper detection limit may have been an underestimate.
Underestimating initial E. coli concentrations likely resulted in a model that systematically underestimated hand contamination for the remaining time series. In the future, assays should be designed to increase the upper limit of detection of hand contamination so that an accurate estimate of initial hand contamination can be used to initialize and investigate the accuracy of the model.
There are notable limitations to our work. One limitation is that we measured bacterial contamination on hands from both hands simultaneously instead of measuring bacterial contamination of each hand separately. Because the left and right hands are not necessarily equally contaminated, the initial concentrations for each hand may be biased. Having separate data points for each hand individually would have enabled comparison of modeled to estimated hand contamination for each hand. A second limitation is that our model is not validated. Although we attempted to validate the model by collecting hand samples at the middle and end of the videography (data not shown), the data suffered from limited interpretability. This was because only two additional samples were collected for each exposure profile, the samples represented contamination on both hands, and half of the measured concentrations were outside of the countable range. Future studies looking to validate exposure models should collect hand samples at a timescale relevant to expected changes in microbial contamination. Our model, for example, suggests dirt/sand contacts dramatically increase bacterial contamination.
Exposure models provide insight into the relative importance of contact events on microbial contamination of hands. The work presented here highlights the disproportional role on infectious disease exposures that infrequent contact events may have. Given the small sample size of this study, it is important to consider increasing the scale of activity data and microbial data collection. Applying the model to develop individual-level exposure profiles within target populations would provide opportunities to identify human-environment interactions that most influence fecal-oral disease transmission routes.
|
2016-05-12T22:15:10.714Z
|
2015-08-21T00:00:00.000
|
{
"year": 2015,
"sha1": "1eaff2880780e0eac1d3df837293df09f727c473",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0136158&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4331eea6eb8d00b26182e01abd6b6148ca8c30c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
17076755
|
pes2o/s2orc
|
v3-fos-license
|
An Epidemiological Study of Concomitant Use of Chinese Medicine and Antipsychotics in Schizophrenic Patients: Implication for Herb-Drug Interaction
Background Herb-drug interactions are an important issue in drug safety and clinical practice. The aim of this epidemiological study was to characterize associations of clinical outcomes with concomitant herbal and antipsychotic use in patients with schizophrenia. Methods and Findings In this retrospective, cross-sectional study, 1795 patients with schizophrenia who were randomly selected from 17 psychiatric hospitals in China were interviewed face-to-face using a structured questionnaire. Association analyses were conducted to examine correlates between Chinese medicine (CM) use and demographic and clinical variables, antipsychotic medication mode, and clinical outcomes. The prevalence of concomitant CM and antipsychotic treatment was 36.4% [95% confidence interval (95% CI) 34.2%–38.6%]. Patients using concomitant CM had a significantly greater chance of improved outcomes than non-CM users (61.1% vs. 34.3%, OR = 3.44, 95% CI 2.80–4.24). However, a small but significant number of patients treated concomitantly with CM had a greater risk of developing worse outcomes (7.2% vs. 4.4%, OR = 2.06, 95% CI 2.06–4.83). Significant predictors for concomitant CM treatment-associated outcomes were residence in urban areas, paranoid psychosis, and exceeding 3 months of CM use. Herbal medicine regimens containing Radix Bupleuri, Fructus Gardenia, Fructus Schisandrae, Radix Rehmanniae, Akebia Caulis, and Semen Plantaginis in concomitant use with quetiapine, clozapine, and olanzapine were associated with nearly 60% of the risk of adverse outcomes. Conclusions Concomitant herbal and antipsychotic treatment could produce either beneficial or adverse clinical effects in the schizophrenic population. Potential herb-drug pharmacokinetic interactions need to be further evaluated.
Introduction
With the widespread use of various herbal products, which are often concomitantly used with pharmaceutical drugs, herb-drug interactions have become an important issue in drug safety and clinical practice [1,2]. Concern initially arose from several case studies reporting nephrotoxic and hepatotoxic effects associated with herbal medicine use [3][4][5]. It is well documented that concomitant use of herbal medicine with conventional drug treatment can alter the pharmacokinetic profiles of many classes of pharmaceutical drugs, including psychotropic agents, anticoagulants, oral contraceptives, immunosuppressants, cardiovascular drugs, anti-HIV agents, anticancer agents, and antiepileptics [1,2,6]. However, whether such concomitant treatment with herbal and conventional medicine is associated with clinical outcomes in a defined population of patients remains to be determined.
Most patients with schizophrenia develop a chronic course and require long-term maintenance treatment [7]. Although antipsychotic therapy is a mainstay in the maintenance treatment of schizophrenia, patients still often experience relapse and various adverse events caused by antipsychotic treatment [7]. In order to improve the therapeutic efficacy and reduce adverse side effects related to antipsychotic therapy, herbal medicine and other alternative therapies have been increasingly introduced into the treatment of schizophrenia [8]. This is particularly apparent in Chinese patients, who have distinctive perceptions of Chinese medicine (CM), of which herbal materials account for about 85% of preparations and products.
Numerous studies have shown therapeutic benefits of herbal medicine for persistent negative symptoms, cognitive impairment, and adverse side effects in schizophrenic patients [8]. Our recent study revealed that herbal medicine could alleviate hyperprolactinemia in schizophrenic patients [9]. Nevertheless, there are also case studies reporting acute and persistent psychosis caused by herbal supplement use [10,11]. These data suggest that herbal medicine in combination with antipsychotic drugs may produce either positive or negative clinical effects, making it important to examine potential relationships between clinical outcomes and concomitant treatment with herbal and antipsychotic agents in schizophrenia.
The primary objective of this epidemiological survey was to determine the prevalence of CM concomitant use and its associations with demographic, clinical characteristics and treatment outcomes in a random sample of patients with schizophrenia through face-to-face interview using a structured questionnaire.
Study design and setting
This retrospective, cross-sectional epidemiological study was conducted in 17 psychiatric hospitals and mental health centers in China. The selection of the study sites was based on geographic and sociodemographic variations as previously described [12], with particular consideration of regions, local economic development levels, and overall educational levels, as these variables are heavily related to perceptions and beliefs for CM. The study was approved by Institutional Review Board (IRB) of the University of Hong Kong/Hospital Authority Hong Kong West Cluster and registered at www.HKClinicalTrials.com (HKCTR-874). All participants or their guardians were required to give written informed consent for participating in the survey. The survey was conducted between April 2009 and September 2009.
Population and sample
The study population was confined to patients with schizophrenia who visited psychiatric clinics or were hospitalized during the survey period. Patients who met the following criteria were eligible for the study: (1) aged 15 years or above; (2) had a primary diagnosis of schizophrenia or schizoaffective disorder based on International Classification of Diseases (ICD), 10th Edition [13]; (3) had been taking conventional antipsychotic treatment for at least 6 weeks; and (4) patients, their caregivers and/or doctors could provide necessary information about CM use if CM was used.
One key question that the present study attempted to answer was whether concomitant use of CM could affect the clinical outcomes of patients with schizophrenia, particularly adverse outcomes related to concomitant CM and antipsychotic use. Estimation of sample size was therefore based on the prevalence of CM use and the proportion of CM users with worse outcomes (see below). These two indices had been obtained from a pilot survey of 297 patients with schizophrenia [8], showing that nearly 36% of patients concomitantly used CM and 5.7% of CM users experienced worse outcomes, while only 2.8% of non-CM users had worse outcomes. In order to detect a 2.9% difference in the rate of worse outcomes between CM users and non-CM users, with a power (1 − β) of 80% and a two-tailed α = 0.05, 1750 participants were required to detect a statistical difference in terms of worse outcomes associated with concomitant CM use. The number of surveyed subjects allocated to each study site was determined based on the volume of visits and annual admission numbers. The selection of eligible patients at each survey site was determined using random number tables.
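For reference, a sketch of a standard two-proportion sample size calculation with the pilot-study proportions is shown below; the study's exact calculation may have differed (for example, in allocation ratio or continuity correction), which would account for the somewhat larger figure of 1750.

```python
# Sketch: sample size per group for detecting a difference between two proportions
# (normal approximation, equal allocation). Proportions are the pilot-survey values cited above.
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group(0.057, 0.028))  # roughly 760 per group, ~1500 in total under equal allocation
```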
Study instruments
A specifically designed structured questionnaire was administered in the survey. The questionnaire covered: (1) sociodemographic and clinical characteristics; (2) purpose, advice source, attitude, and awareness of CM use; (3) the concomitant use pattern, including individual CMs, conventional medication modes, and duration of the concomitant use; and (4) clinical outcomes. Chinese medicine (CM) is defined as preparations and products in powder, tablet, capsule, soft-gel, or liquid form prepared from single or mixed herbal, mineral, and animal materials or extracts [8]. CM users were defined as those who had been using CM consecutively for at least one month or cumulatively for at least 3 months, with no more than 45 days of absence in total and no more than 7 consecutive days of absence, when the survey was conducted. Those who never or only occasionally used CM were defined as non-CM users. This definition of the length of CM use was based on CM clinical practice, which indicates that CM therapy for most chronic mental-emotional conditions requires a considerable period before observable improvements are achieved [14].
Clinical outcomes were classified as improved, worse, and unchanged condition. Improved outcomes were defined as clinically meaningful improvements occurring in the preceding one month on one or more conditions as follows: (1) psychosis; (2) comorbid psychiatric symptoms, mainly anxiety, depression, cognitive impairment, and sleep disorders; (3) adverse side effects associated with antipsychotic therapy, frequently body weight gain, constipation, enuresis, hyperprolactinemia, hypersalivation, leukopenia, and tardive dyskinesia; and (4) comorbid nonpsychiatric conditions, such as hypertension and diabetes. Worse outcomes were defined as hospitalization, emergency room visits, or changes in medication modes due to the worsening of psychosis, comorbid symptoms, intolerable side effects, or the occurrence of new comorbid symptoms in the preceding one month. Patients who did not experience either clinically meaningful improvement or worsening in the preceding one month were defined as subjects with unchanged conditions. The assessment of clinical outcomes was conducted by psychiatrists based on changes in the severity and frequency of episodes of related symptoms, physical examination, and laboratory tests as well as reports from patients and their guardians.
Survey procedures
The survey was performed by trained psychiatrists on site via face-to-face interviews. To ensure consistency of the survey across sites and over time, two sessions of training workshops were conducted for the interviewers, who were practicing psychiatrists. Upon completion of the training workshops, inter-rater reliability was assessed by calculating inter-rater agreement coefficients (κ values) for the designed questionnaire. All interviewers had achieved a κ value of at least 0.8 after the training sessions. In addition, post-survey interviews were further conducted to verify missing and illogical data.
Data analysis
The prevalence of CM use was calculated using maximum likelihood estimation of logistic regression. The Chi-square (χ²) test was used to determine bivariate associations between CM use and demographic and clinical variables. A binary logistic regression model was further used for multivariate analyses to identify independent factors associated with CM use from the same variables tested in the bivariate analysis. The association between clinical outcomes and CM use was also examined using the Chi-square (χ²) test and binary logistic regression analysis, with adjustment for demographic and clinical variables that were found to be significantly associated with CM use.
Subgroup analyses were conducted in CM-using subjects to further determine associations between clinical outcomes and CM and antipsychotic concomitant use modes. Chi-square test and multinomial logistic regression model were respectively utilized to examine bivariate and multivariate associations of clinical outcomes with demographic and clinical variables. Multinomial logistic regression model was also applied to evaluate associations of clinical outcomes with individual CMs and antipsychotic regimens that were used in at least 5% of CM-using respondents with either improved or worsened conditions, with adjustment for demographic and clinical variables that were shown to be significantly associated with clinical outcomes.
Odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from binary and multinomial logistic regression analyses. In association analyses of clinical outcomes, the unchanged outcome served as the reference for improved and worse outcomes in the calculation of OR values. All analyses were performed with SPSS version 16 software (Chicago, IL, USA). Statistical significance was defined as p < 0.05 and all tests were two-sided.
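As a rough illustration of the odds ratios reported below, the sketch computes a crude OR and Wald 95% CI from a 2x2 table. The counts are approximate reconstructions from the reported percentages and group sizes, so the crude value differs from the covariate-adjusted ORs produced by the regression models.

```python
# Sketch: crude odds ratio and Wald 95% CI from a 2x2 table.
# Counts are approximate reconstructions (improved vs. not improved, CM users vs. non-users);
# the study's reported ORs come from adjusted logistic regression models.
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: outcome yes/no in exposed group; c, d: outcome yes/no in unexposed group."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, (exp(log(or_) - z * se), exp(log(or_) + z * se))

print(odds_ratio_ci(410, 261, 383, 734))       # crude OR ~3.0 for improved outcomes with CM use
```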
Results
The characteristics of the sample
Of the 1795 eligible subjects surveyed, seven were excluded from the analyses due to missing basic demographic and clinical data (gender, age, and illness duration). In the remaining 1788 subjects who were included in the final analyses, 51% were males and the mean (±SD) age was 32 ± 12 years. Fifty-three percent were diagnosed with paranoid schizophrenia. These demographic and clinical characteristics were similar to those reported in previous epidemiological surveys of the schizophrenic population in China [15].
The characteristics of antipsychotic medication modes
Thirty-five psychotropic drugs were identified. The medication regimens of 99.3% of subjects included antipsychotics, and 43% had two or more antipsychotics. The remaining 0.7% of subjects were medicated with mood stabilizers occasionally combined with antipsychotics. The most commonly used antipsychotics in all subjects were risperidone (50.8%), quetiapine (21.0%), clozapine (17.2%), olanzapine (8.2%), and phenothiazines (7.0%). This medication mode was similar to that previously reported in China [16]. No significant differences were observed in the frequency distribution of these antipsychotic regimens between CM- and non-CM-using subjects (see below), except for risperidone monotherapy, for which the proportion of CM-using respondents treated was significantly higher than that of non-CM-using respondents (30.8% vs. 23.4%, p < 0.001, Chi-square test).
The prevalence and characteristics of CM use
Direct observation showed a prevalence of 37.5% (671/1788) of concomitant use. Re-calculation using maximum likelihood estimation yielded a similar prevalence of 36.4% (95% CI: 34.2%-38.6%). One hundred and twenty different CM materials were identified: 92 herbal materials, 12 mineral materials, and 16 animal materials. However, 33.7% of CM-using subjects were unable to provide full information about their CM formulae or prescriptions for identifying individual CMs. Only a small portion (6.4%, 43/671) of CM-using patients used single-herbal preparations.
Eighty-six percent of CM users initially used CM therapy in order to enhance antipsychotic efficacy or to reduce antipsychotic-induced adverse side effects and comorbid psychiatric symptoms (mainly anxiety, depression, cognitive and sleep problems). Sixty-six percent of CM users reported that CM use was recommended by their psychiatrists. Most CMs were prescribed by CM practitioners. Nearly 47% of CM users were entirely unaware of potential risks of concomitant use of herbal and antipsychotic agents; only 16.4% realized such potential risks. Among non-CM-using patients, 35.1% did not know much about CM and 58.1% did not think CM was helpful for their conditions, while only 5.1% were aware of the potential risks of the concomitant treatment.
Demographic and clinical correlates of CM use
Bivariate analysis displayed significant associations of CM use with gender (p = 0.002), household income (p = 0.001), illness duration (p < 0.001), number of episodes (p < 0.001), and number of hospitalizations (p < 0.001).
Antipsychotic medication correlates of clinical outcomes in CM users
Five different antipsychotic agents and five antipsychotic treatment regimens that were used in at least 5% of CM-using patients with either improved or worse outcomes were identified (Table 4). Multinomial logistic regression analyses, with adjustment for residence area, diagnostic type, and duration of CM use (variables that were significantly associated with clinical outcomes), revealed no significant associations of any antipsychotic treatment regimen with improved outcomes; rather, significantly lower odds of improved outcomes were observed in patients whose antipsychotic regimens included olanzapine (OR = 0.48, 95% CI 0.27-0.85, p = 0.035). There were significantly higher odds of worse outcomes in subjects whose antipsychotic regimens included quetiapine (OR = 1.90, 95% CI 0.88-4.08, p = 0.013), quetiapine alone (OR = 2.16, 95% CI 0.95-4.95, p = 0.031), or clozapine alone (OR = 3.02, 95% CI 0.94-9.70, p = 0.025) (Table 4).
Discussion
In our survey of a representative sample of patients with schizophrenia, nearly 36% had concomitant CM and antipsychotic treatment. This prevalence of CM use is somewhat lower than that observed in other commonly occurring chronic conditions in Chinese communities [17][18][19]. Among non-CM-using patients, nearly 58% did not believe CM could help their condition, suggesting that the lower prevalence of CM use in the schizophrenic population is mainly related to a negative attitude towards this traditional remedy. Unlike patients in Western societies, where only a minority inform their doctors of their use of alternative medicine [20,21], CM use in most patients in this study was recommended by their psychiatrists. However, only about one-third of patients were aware of the potential risks of concomitant treatment with CM. These findings may reflect an underestimation of both the potential benefits and risks of CM use among patients and their psychiatrists.
We found that concomitant CM use was significantly associated with male gender, residence in rural areas, relatively higher household income, longer duration of illness, and more episodes and hospitalizations. These associations suggest that men living in rural areas may have more positive beliefs about CM for their illnesses. Previous studies also showed that people who had persistent and recurrent mental-emotional problems more often sought alternative therapies than populations with other chronic diseases [21,22]. While most antipsychotic treatment regimens were similar between CM- and non-CM-using patients, there were significant bivariate and multivariate associations between CM use and clinical outcomes. Patients under CM concomitant treatment had a significantly greater chance of improved clinical outcomes compared to non-CM use (61.1% vs. 34.3%, OR = 3.44, 95% CI 2.80-4.24). However, a small but significant number of patients treated concomitantly with CM had a greater risk of developing worse outcomes (7.2% vs. 4.4%, OR = 2.06, 95% CI 2.06-4.83). These data clearly indicate that CM concomitant treatment could produce either beneficial or adverse effects on clinical outcomes, probably depending on different combinations of CM and antipsychotics.
Furthermore, bivariate and multivariate analyses of CM-using subgroup revealed that both improved and worse outcomes were significantly associated with residence in urban areas, suggesting that patients living in urban areas may have greater impacts with CM therapy than those in rural areas. This is likely due in part to the fact that urban patients generally have more unconventional and conventional treatment options compared to rural patients [16]. This perhaps results in an increase in unpredictable positive and negative clinical effects, as the therapeutic properties of most CM preparations are not yet well identified. Meanwhile, the addition of CM significantly increased the chances of improved outcomes in paranoid patients compared to non-paranoid subtype. Several studies have demonstrated differences in neuropsychological character and clinical response to antipsychotic treatment between paranoid and non-paranoid subtypes [23,24] as well as subtype specificity of genetic profile [25,26]. Thus, the greater chance of alteration in treatment outcomes observed in paranoid patients may reflect a similar subtype difference in clinical effects of CM treatment.
Our results demonstrated that exceeding 3 months of CM use is a significant predictor for both improved and worse clinical outcomes. This finding confirms empirical evidence, suggesting that CM therapy of most chronic conditions requires a considerable duration in order to achieve observable improvement [14]. However, the finding is also consistent with those of case studies, revealing that most nephrotoxic and hepatotoxic effects associated with herbal medicine use are observed after 1-5 months of intake [4,5,27]. Considering difficulties in monitoring herbal toxicity and potential herb-drug interactions due to complex mixtures of unknown and unidentified ingredients in CM, the determination of an optimal length of the treatment might be a feasible strategy in minimizing adverse and toxic effects while maximizing beneficial effects. For this purpose, correlations between different length of CM use and changes in pharmacokinetic profile of conventional drugs may deserve to be further determined.
We found that CM-using patients whose antipsychotic regimens included olanzapine had significantly lower chance of improved outcomes. Meanwhile, quetiapine and clozapine monotherapy significantly heightened the risk of developing worse outcomes, suggesting associations of these three atypical antipsychotics with adverse clinical outcomes when used concomitantly with CM. On the other hand, among seven individual CM materials identified to be significantly associated with clinical outcomes, six were found to be significantly associated with adverse outcomes. They were Radix Bupleuri, Fructus Gardenia, Fructus Schisandrae, Radix Rehmanniae, Akebia Caulis, and Semen Plantaginis. Moreover, the concomitant treatment regimens including these herbal materials and antipsychotics associated with adverse outcomes accounted for nearly 60% of total identified treatment regimens in patients with worse outcomes. These data suggest that the heightened risk of adverse outcomes observed is closely associated with these herbal agents in combination with antipsychotic regimens.
As the identified herbal medicines have been well demonstrated to have high safety profiles [28], the adverse outcomes observed seem to be attributable to herb-drug pharmacokinetic interactions in which the pattern of drug metabolism is altered. Despite the lack of information about interactions between herbal and antipsychotic agents, early case studies reported that ginseng combined with phenelzine and betel nut with fluphenazine caused broad adverse effects in schizophrenic patients [29][30][31]. Our recent study of bipolar patients also found that combination treatment with the mood stabilizer carbamazepine and an herbal preparation for 26 weeks resulted in a significantly lower level of serum carbamazepine compared to carbamazepine alone, suggesting that the addition of herbal medicine accelerates carbamazepine metabolism and lowers its blood concentration [32]. Like carbamazepine, most antipsychotic drugs, including clozapine, olanzapine, and quetiapine, are metabolized as substrates of cytochrome P450s (CYPs) [33,34]. Therefore, pharmacokinetic interactions may play an important role in influencing the clinical outcomes observed in the present study in patients with schizophrenia under concomitant treatment with herbal and antipsychotic agents.
There are several limitations in the present study. First, as patients who sought CM treatment may have distinctive perceptions about herbal medicine, psychological or 'placebo' factors could not be excluded. The use of structured assessment instruments, including symptom scales and laboratory tests, should be helpful in further clarifying the treatment effects of CM therapy. Second, since a considerable portion of CM users were unable to provide full information about their CM formulae, individual CMs associated with clinical outcomes may have been underestimated. Third, due to difficulties in collecting information about dosages of herbal and antipsychotic agents as well as the quality of herbal preparations, these factors were not considered in the present study. However, it should be noted that there have been extensive reports of severe adverse events caused by overdosing, heavy metal contamination, and adulteration of herbal supplements with conventional drugs [5,35,36], all of which might account for the presumed adverse effects of herbal and antipsychotic combinations. Fourth, as the majority of CM treatments were recommended by psychiatrists to their patients, obtaining information about psychiatrists' attitudes and knowledge of CM would be helpful in devising safe and effective strategies for concomitant CM and antipsychotic use in the treatment of schizophrenia. Finally, although many statistically significant results were found in the study, 'chance significances' cannot be excluded. Additional caution should be exercised when the results are used as a reference in future studies.
In conclusion, a relatively small proportion of patients with schizophrenia have concomitant CM and antipsychotic treatment. Such concomitant treatment may heighten the risk of developing worse clinical outcomes in a small number, but increase the chance of improving treatment outcomes in a much greater number of patients. Better identification of the concomitant herbal and antipsychotic treatment regimens that are associated with clinical outcomes provides useful hints for further clarifying herbdrug pharmacokinetic interactions.
|
2018-04-03T05:05:43.953Z
|
2011-02-16T00:00:00.000
|
{
"year": 2011,
"sha1": "a3224bd980d5c5e98d06819ac9f181303fe6a93c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017239&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e578753b176c03df7ee4b31aab207835272e6b67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3051845
|
pes2o/s2orc
|
v3-fos-license
|
Characterization of Activity of a Potential Food-Grade Leucine Aminopeptidase from Kiwifruit
Aminopeptidase (AP) activity in ripe but firm fruit of Actinidia deliciosa was characterized using L-leucine-p-nitroanilide as a substrate. The enzyme activity was highest under alkaline conditions and was thermolabile. EDTA, 1,10-phenanthroline, iodoacetamide, and Zn2+ had inhibitory effects, while a low concentration of dithiothreitol (DTT) had a stimulatory effect on kiwifruit AP activity. However, DTT was not essential for the enzyme activity. The results obtained indicated that the kiwifruit AP was a thiol-dependent metalloprotease. Its activity was highest in the seeds, followed by the core and pericarp tissues of the fruit. The elution profile of the AP activity from a DEAE-cellulose column suggested that there were at least two AP isozymes in kiwifruit: one unadsorbed and one adsorbed fraction. It is concluded that useful food-grade aminopeptidases from kiwifruit could be revealed using more specific substrates.
Introduction
Kiwifruit (Actinidia spp.) is an important commercial crop in New Zealand. The fruit contains a high level of a cysteine endopeptidase called actinidin (E.C. 3.4.22.14) found in the cortex of the fruit [1]. Due to this proteolytic activity of kiwifruit, it has been used to tenderize meat and prevent gelatin-based jelly from setting.
Aminopeptidases (APs), particularly those from microbial sources, are important food processing enzymes and are widely used to modify proteins in food [2][3][4]. Animal waste products have also been investigated as a potential source of useful APs [5]. It is also possible that APs from plants could be of use in the food processing industry [6]. Recently, it has been demonstrated that APs of cabbage leaves or chickpea cotyledons can be used to catalyze the hydrolysis of peptide bonds, including those of hydrophobic bitter peptides in soy protein hydrolysates, resulting in less bitter or bland-tasting products which have food processing applications [7,8]. However, for debittering protein hydrolysates and other food processing needs, an attractive alternative would be to use APs from commercially grown fruits that are normally consumed fresh, such as kiwifruit, which have the merit of already being generally regarded as safe for the food processing industry.
Generally, there are many studies on seed aminopeptidases [9][10][11], but there is a paucity of information on the occurrence and characteristics of AP activities in fruits. Importantly, since there is no prior study on APs from kiwifruit, a prerequisite towards evaluating the use of APs from this fruit for food processing applications is an investigation into the occurrence and biochemical characteristics of aminopeptidase (AP) activity in kiwifruit. Here, using L-leucine-p-nitroanilide (L-Leu-p-NA) as a substrate, we report the localization and some basic biochemical characteristics of AP activity within the fruit of Actinidia deliciosa, together with an attempt to partially purify the enzyme to the degree normally sufficient for food-grade enzymes.
Enzyme Extraction.
Ripe but firm kiwifruit (Actinidia deliciosa cv. Hayward) was obtained from a local supermarket in Christchurch, New Zealand. Unless indicated otherwise, the whole kiwifruit was peeled and cut into small pieces before enzyme extraction. Kiwifruit tissues were ground with a mortar and pestle while adding 0.1 M potassium phosphate buffer pH 8.0 supplemented with 1% (w/v) insoluble polyvinylpolypyrrolidone (PVPP), 5% (v/v) glycerol, and 3 mM DTT. The ratio of weight of tissue (g) to volume of extraction buffer (ml) was 2:1. The homogenate was filtered through 2 layers of synthetic cloth and centrifuged at 10,000 × g for 20 min at 4 °C. The supernatant was carefully removed and used as the crude extract of the whole fruit. The extraction process was carried out in a cold room or on an ice bath.
Determination of Total Protein Concentration.
The protein concentration in extracts was determined based on the Coomassie brilliant blue dye-protein binding principle [12]. A protein standard curve was prepared using serial dilutions of BSA (bovine serum albumin; BDH, England).
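As an illustration of how such a standard curve is used, a minimal sketch is given below; the BSA concentrations, absorbance readings, and dilution factor are hypothetical and serve only to show the interpolation step, not values measured in this study.

```python
import numpy as np

# Illustrative BSA standard curve: concentrations (mg/ml) vs. dye-binding absorbance.
# All values below are hypothetical placeholders, not measured data.
bsa_conc = np.array([0.0, 0.125, 0.25, 0.5, 1.0])   # serial dilutions of BSA
bsa_abs = np.array([0.00, 0.09, 0.17, 0.33, 0.62])  # hypothetical absorbance readings

# Fit a linear standard curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(bsa_conc, bsa_abs, 1)

def protein_conc(sample_abs, dilution_factor=1.0):
    """Interpolate the protein concentration (mg/ml) of an extract from its absorbance."""
    return dilution_factor * (sample_abs - intercept) / slope

# e.g., a crude extract read after a hypothetical 1:10 dilution.
print(round(protein_conc(0.28, dilution_factor=10), 2))
```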
Determination of Aminopeptidase (AP) Activity.
Aminopeptidase activity was determined as described below, unless indicated otherwise, using L-leucine-p-nitroanilide (L-Leu-p-NA) as a substrate. The substrate solution was prepared by dissolving 20 mg of L-Leu-p-NA (Sigma, St. Louis, USA) in one ml of dimethyl sulfoxide (Sigma, St. Louis, USA) and adjusting the volume to 20 ml with 0.01 M potassium phosphate buffer (pH 8.0). The substrate solution was found to store better at −20 °C for later use when prepared at pH 8.0 rather than at a higher pH. The reaction mixture contained 0.45 ml of 0.1 M potassium phosphate buffer at pH 8.0, 0.45 ml of substrate solution, and 150 μl of enzyme extract in Eppendorf tubes kept on ice. The control tube contained the same reaction mixture except that the enzyme extract had previously been boiled for 5 min in a water bath at 100 °C and centrifuged afterwards. All the tubes were vortexed and incubated for 1 h in a water bath at 37 °C. After the incubation period, they were placed in a water bath at 100 °C for 5 min to stop the enzyme reaction. After this, 0.45 ml of distilled water was added to all the tubes, which were vortexed and then centrifuged for 10 min at 10,000 × g at room temperature. The supernatants were carefully transferred to cuvettes and the absorbance was measured at 410 nm. One unit of enzyme activity is defined as a change of one unit of absorbance per h at 37 °C.
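Given this unit definition, activity and specific activity (units per mg of soluble protein, as reported in the Results) follow directly from the absorbance readings; a minimal sketch of the calculation, with hypothetical readings, is shown below.

```python
def ap_activity_units(a410_sample: float, a410_boiled_control: float,
                      incubation_h: float = 1.0) -> float:
    """AP activity in units, where 1 unit = a change of 1.0 in A410 per hour at 37 °C."""
    return (a410_sample - a410_boiled_control) / incubation_h

def specific_activity(units: float, protein_mg: float) -> float:
    """Specific activity in units per mg of soluble protein in the assayed extract."""
    return units / protein_mg

# Hypothetical readings from one assay tube and its boiled-enzyme control.
units = ap_activity_units(a410_sample=0.42, a410_boiled_control=0.05)
print(units, specific_activity(units, protein_mg=0.8))
```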
Effect of Temperature on AP Activity.
The effect of temperature on AP activity was determined in three different experiments. To find the optimum temperature for the enzyme activity, AP activity in crude extracts of the whole fruit was determined at different incubation temperatures ranging from 25 °C to 70 °C for 1 h. In another experiment, to investigate thermal stability, 150 μl of the enzyme extracts were pre-incubated with 0.45 ml of potassium phosphate buffer (pH 8.0) for 30 min at the above test temperatures. After pre-incubation, the substrate was added to initiate the enzyme reaction and AP activity was determined at 37 °C for 1 h.
Effect of pH on AP Activity.
The effect of pH on AP activity in the crude extracts of fruit was determined by replacing the pH 8.0 potassium phosphate buffer in the assay mixture with a three-component buffer mixture (25.0 mM acetic acid, 25.0 mM MES, and 50.0 mM Tris) at different pH values ranging from 6 to 10, as described in [13]. AP activity was then determined.
Effect of Different Classes of Proteolytic Enzyme Inhibitors and Promoters on AP Activity.
Crude enzyme extracts were pre-incubated with 0.45 ml of 0.1 M potassium phosphate buffer (pH 8) in the presence of different inhibitors or activators for 30 min at 37 °C. After pre-incubation, the enzyme reaction was initiated by the addition of the substrate solution (L-leu-p-NA) and AP activity was determined. The concentration of inhibitors or activators in the reaction mixture during pre-incubation was 1.0 or 10.0 mM. The chemicals tested were EDTA, 1,10-phenanthroline, PMSF, DTT, iodoacetamide, and NEM.
Effect of Divalent Cations on AP Activity.
The crude enzyme extracts were pre-incubated at 37 °C for 30 min with 0.45 ml of 0.1 M potassium phosphate buffer in the presence of the chlorides of Mn2+, Co2+, Ni2+, Mg2+, Ca2+, or Zn2+. The concentration of divalent cations in the reaction mixture during pre-incubation was 1.0 or 10.0 mM. After pre-incubation, the substrate solution (L-leu-p-NA) was added to start the enzyme reaction and AP activity was determined.
Partial Purification of Aminopeptidase.
The whole kiwifruit (550 g) was cut into small pieces and homogenized in 225 ml of 0.1 M potassium phosphate buffer (pH 8.0) supplemented with 1% (w/v) insoluble PVPP, 5% (v/v) glycerol, and 3 mM DTT (extraction buffer). The homogenate was filtered through 2 layers of synthetic cloth. The filtrate was centrifuged at 10,000 × g at 4 °C for 20 min, and the supernatant was removed and used as crude extract. Solid ammonium sulphate ((NH4)2SO4) was added to the crude extract, and the resulting 25-75% precipitate was dissolved in 7.5 ml of 0.01 M potassium phosphate buffer containing 10% (v/v) glycerol and 0.2 mM DTT (buffer A). After dialysis of the 25-70% ammonium sulphate fraction against buffer A, a DEAE cellulose column (10 × 2 cm) was used to separate the fractions. Unbound proteins were eluted with buffer A, and then bound proteins were eluted with 100 ml of buffer A containing a linear gradient of 0-1.0 M KCl.
Statistical Analysis.
Statistical analysis of the data was performed using STATISTIX 8.0 software. The comparison between treatments was analysed using one-way analysis of variance (ANOVA). Where a statistically significant difference was observed, Tukey's Honestly Significant Difference (HSD) test was performed to determine which values differed significantly from the appropriate zero (control). Standard errors were calculated and are represented graphically as symmetrical error bars.
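The same comparison can be scripted with standard statistical libraries; the sketch below performs a one-way ANOVA followed by Tukey's HSD on hypothetical activity values and is offered only as an illustration of the procedure, not the STATISTIX workflow actually used.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical AP activity readings (units) for three treatments.
groups = {
    "control": [0.05, 0.06, 0.04],
    "treatment_A": [0.42, 0.45, 0.40],
    "treatment_B": [0.21, 0.19, 0.23],
}

# One-way ANOVA across the treatments.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# If the ANOVA is significant, Tukey's HSD shows which pairs of treatments differ.
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```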
Aminopeptidase (AP) Activity in Different Parts of Actinidia deliciosa Fruit.
In preliminary experiments, when crude extracts from the whole fruit had been prepared with sodium phosphate or potassium phosphate buffer (pH 7.0), AP activity was not detectable. Kiwifruit contains more than 80 volatile aroma and flavour compounds, including terpenes, esters, aldehydes and alcohols, with varying levels of monoterpenes and phenolic compounds [14,15]. These compounds could have interfered with aminopeptidase isolation and activity. Here, a reliable protocol (as described in Section 2) for extraction of AP from kiwifruit and for determination of its activity using L-leucine-p-nitroanilide (L-leu-p-NA) as a substrate has been established. The present study establishes for the first time that kiwifruit has AP activity and provides some useful parameters with respect to its extraction, assay, stability, localization and purification.
AP activity was found at different levels in all parts of the fruit of A. deliciosa. The highest specific (units/mg soluble protein) and total (units/g fresh weight) AP activities were localized in the seed, followed by the core, inner pericarp and outer pericarp (Table 1). In contrast, in fully ripe grape berries higher enzyme activities were found in the hypodermis than in the seed or flesh [13].
Effects of pH and Temperature on Kiwifruit AP Activity.
AP activity in crude extracts of the whole kiwifruit was highest at alkaline pH (Figure 1; ANOVA, P < .05). Similarly, hydrolysis of L-leu-p-NA by crude extracts from many other plants, including potato [16], Arabidopsis thaliana [17], tomato [18], and daylily flowers [19], is most active over a range of alkaline pH values.
Kiwifruit AP was most active at 37 °C and 50 °C, suggesting the presence of two aminopeptidase isozymes. At 55 °C its activity was reduced to 63% (with the activity at 37 °C designated as 100%) and then to about 20% at 60-70 °C (Figure 2). It was most stable at 37-40 °C (Figure 3) but became unstable at higher temperatures, with less than 15% of its activity remaining above 55 °C (ANOVA, P < .05).
The observed inhibition of kiwifruit AP activity by metal chelators such as 1,10-phenanthroline and EDTA suggested the involvement of a metal ion in the active site of the enzyme. Similar effects were also reported for the leucine aminopeptidases of potato [16], tomato, E. coli PepA, and porcine LAPs [18]. Furthermore, DTT (a thiol-reducing agent) had a stimulatory effect on kiwifruit AP activity at a lower concentration (1 mM) but an inhibitory effect at a higher concentration, suggesting that the enzyme is a thiol-dependent metalloprotease rather than a cysteine protease [20]. On the other hand, iodoacetamide (1 mM) and NEM (10 mM), specific inhibitors of cysteine proteases, inhibited kiwifruit AP activity by 60% and 40%, respectively, suggesting that cysteine residues are likely involved in the enzyme's conformation rather than in catalysis. A serine-type protease is unlikely to be a significant contributor to the kiwifruit AP activity, as PMSF, a serine protease inhibitor, did not have any significant effect on it.
The effects on kiwifruit AP activity of Ca2+, Mg2+, Co2+, Ni2+, Mn2+, and Zn2+, with chloride as the counter ion, were studied (Figure 5). At a metal ion concentration of 1 mM, only Zn2+ significantly inhibited kiwifruit AP activity (ANOVA, P < .05), whereas the other metal cations tested had no significant effect. When the concentration of metal ions was increased to 10 mM, the enzyme activity was strongly inhibited by Zn2+ (ANOVA, P < .05), and inhibited to a lesser extent by Ni2+, Co2+, and Mn2+. At this concentration Ca2+ and Mg2+ did not have any significant effects. This suggests that the AP activity might be distinct from that of a previously studied kiwifruit protease that was inhibited by calcium ions [21]. Furthermore, kiwifruit AP activity differed from that of potato, Arabidopsis, tomato, porcine and E. coli PepA enzymes, which were highly activated by Mn2+ and Mg2+ ions but were also inhibited by Zn2+ ions [16][17][18]. The kiwifruit AP activity was also different from that of grape berries, which was not inhibited by EDTA, 1,10-phenanthroline, or metal ions [13].
Partial Purification of Kiwifruit Aminopeptidase.
Two major peaks of AP activity were separated using DEAE cellulose column chromatography: the unadsorbed and adsorbed fractions (Figure 6), suggesting that there are at least two isoforms of AP activity in A. deliciosa fruit. In these fractions, only a few low-molecular-weight polypeptides were found to be present following SDS-PAGE (data not shown). This might offer a facile route to obtain relatively pure food-grade aminopeptidases from kiwifruit. Further studies using more specific substrates could lead to some useful food-grade aminopeptidases from kiwifruit. Recombinant DNA techniques could also be applied to mass-produce kiwifruit-originated APs.
Architectural design methods for mountainous environments
ABSTRACT As an important part of urban space, mountains play an extremely important role in the overall landscape and ecology of cities. However, with the continuous development of urbanization and building construction, mountains have been occupied and destroyed in unreasonable ways. Therefore, with the aim of reducing the damage to mountains from construction, this paper explores the design methods to deal with the relationship between buildings and mountainous environments. The historic district of Signal Hill in Qingdao is taken as the research area, since buildings here coexist with the mountainous environment harmoniously. Through the combination of modeling and field research, the slope of the mountain can be divided into three grades: “0–5.8°”, “5.8–11.7°” and “11.7–17.3°”. Therefore, the design characteristics and distribution patterns of 20 design methods that dealt with the mountainous environment in different slope grades can be obtained. Furthermore, by analyzing the design characteristics and distribution patterns, the relationship between the design methods and the slope of mountains can be found, providing more suitable design strategies for buildings located on different slopes of mountains. For other mountain cities worldwide, these rational strategies can provide some helpful design suggestions to better use terrain. It can also reduce the amount of construction volume and damage to the mountainous environment, and further better achieve sustainable development goals.
Introduction
Mountainous environments, as an important component of hilly cities, are an indispensable natural resource. Mountains provide spatial diversity for human production, living and leisure. This kind of diversity creates a richness of urban transport networks and landscape interfaces that form the unique landscape of hilly cities (Daniel and Laughlin 2005). However, with urbanization, the role of mountainous environments is gradually changing. Sociologist Kristol argues that urbanization is not only a geographical accumulation of people but also a process of changing cultural values. Applying this concept, Healy examines the impact of urbanization on mountain forestlands in the USA. The result is that mountainous environments are no longer limited in function to timber production but are becoming more responsive to human recreation needs and providing more varied urban spaces (Robert G 1984).
Although the mountainous environment plays an important role in hilly cities, it has been long neglected and misunderstood. In recent years, mountains have been disappearing at an alarming rate. As urbanization has rapidly degenerated large areas of mountainous terrain, some Western biologists and horticulturists are considering recreating inner-city landscapes that mimic mountainous environments (Arendt 1994;K 1976;O W E. Peter Francism 1998). Despite the efforts of some scholars, mountain areas are still disappearing much faster than other urban areas, especially in developing countries. China is a mountainous country, with more than two-thirds of the land area covered by mountains and only 10% covered by plains (Guangyu 2006). Furthermore, as the process of urbanization is continuously accelerating in China, the scale of cities is gradually expanding. Thus, this leads to growing conflict between urban construction and nature conservation. Particularly in hilly cities, the potential buildable areas are fully exploited, leading to serious damage to mountainous environments. Some of the hills and valleys have been isolated, and extreme deterioration has occurred in the quality of the urban environment, such as the urban heat-island effect and landslides. However, with the development of theoretical knowledge and the construction of urban forests, parks and gardens, the important functions and values of mountainous environments are receiving increasing attention. Therefore, this has pushed social organizations, planning experts and urban managers to look for ways to enhance protection and utilization (Chao Yang et al. 2021).
This paper explores the design methods of architecture located in mountainous areas to identify ways to integrate the architecture effectively and naturally with the mountainous environment. More specifically, it analyses the effect of slope on the distribution of building types and design methods. The major hypothesis is that different design methods and distributions of building types can be found in areas with different slopes, and that the higher the slope of an area, the fewer and the more complicated the design methods become. The Historic District of Signal Hill in Qingdao is the research area, since the buildings here show many methods for dealing with the mountainous environment. During the survey, more than 770 buildings were researched, accounting for 77% of the total number of buildings in the historic district. Through the combination of modeling and field research, the slope of the mountain can be divided into three grades: "0-5.8°", "5.8-11.7°", and "11.7-17.3°". Furthermore, this paper also summarizes the design characteristics and distribution patterns of 20 design methods that dealt with the mountainous environment in different slope grades.
By analyzing the distribution of building types and design methods, this paper explores the interrelationship between architecture and the mountainous environment and the ways in which architecture can exist in harmony with it. Moreover, it provides more reasonable design methods for buildings in different mountainous environments around the world. Based on these design methods, construction volume and cost can be effectively reduced. Furthermore, they can effectively reduce construction energy consumption and carbon emissions, which will promote the sustainable development of buildings located in mountainous environments.
Literature review
The persistent effects of human activities have long led to changes in regional environmental patterns and ecological functions, which in turn have had an impact on the quality of human existence and the ability to develop sustainably (Mander 1998). With the rapid development of urban construction, cities continue to extend in all directions. At present, with the progress of human society, the development of science and technology, the growth of the human population and the expansion of human living space, the impact of human construction activities on mountainous environments is becoming increasingly significant. Furthermore, this has led to a huge crisis in mountainous environments (Sarmiento 2000). Human activities are the main cause of changes in mountainous environmental structures and functions (Inoue 2001;Ivesjd 1990;Koff and Yli-Halla 1988), an issue that has received worldwide academic and social attention. Many hills and valleys have been swallowed up and submerged by construction, while they have only partially evolved into urban parks and green spaces. As a result of construction, many hills within mountainous cities are constantly being leveled or eroded. This destroys the mountainous environment and causes mountain cities to lose their inherent environmental character (Zoltán Kovács et al. 2019). Although each city has a different impact on its mountainous environment as it expands, in general, such effects are manifested in two main ways. First, the spatial expansion of a city has integrated mountains into it. Mountains have become part of the city's construction areas and are subject to being damaged by urban construction. Second, the rapid growth of the urban scale and population has led to a demand for the exploitation of mountain resources, resulting in serious environmental problems in mountainous areas through deforestation and quarrying, land razing, soil erosion, ecological and environmental degradation, mountain fragmentation and destruction of mountain patterns. All of these factors have had a huge impact on mountainous environments and affected the development of mountainous cities (Ai 2015).
However, as an important part of the urban environment, mountains have an inestimable economic, social and ecological value (Alberti et al., 2003). The complex topography of mountainous cities and their unique natural landscape patterns create diverse environments. As a special urban ecosystem, the mountainous environment plays an active role in purifying water systems, regulating water balance, protecting biodiversity, enhancing the urban ecological environment, shaping the urban landscape, and enriching the recreational activities of citizens (Junlu and Ya-Fenggao 2006). In addition, it is also important in biodiversity and urban landscapes (Falcucci, Maiorano, and Boitani 2007;Grimm et al. 2008). On the one hand, as part of urban green space, mountains can provide living space for a variety of organisms (Christopher et al. 2017) On the other hand, mountain and building contours shape a city's skyline. When the location, height, shape and public space of the mountain can be integrated and coordinated with buildings harmoniously, buildings will be able to follow the contours of the mountain and each complements the other. At the same time, mountainous environments can optimize the ecological environment of cities, forming a continuous ecological network around and through cities, which can create composite walking systems and urban ecosystems with mountain characteristics (Hermy 2006). Therefore, it is essential to find ways of harmoniously integrating building construction with mountainous environments when developing areas with mountains.
At present, the research on architecture and mountainous environment mostly focuses on mountain science and ecology. In the famous academic book Site planning and Design Handbook, (Russ 2009) established the basic principles and design basis of site planning with a sustainable site planning model. (Norberg-Schulz 1983), as the author of Thinking on Architecture, established the theories of architectural phenomenology and emphasized the harmony between architecture and place. These two works have certain guiding significance for exploring the relationship between architecture and mountainous environment. In addition, the research of some scholars has also promoted the development of related fields. For example, (Bosia 2004) studied the natural buildings in the Alpine valley from the perspective of building material, type, orientation, and detail, and proposed maintenance and protection measures as a "dialogue between architecture and environment". (Mutani and Berto 2018) conducted an inductive analysis of building energy consumption and provided energy-saving design for buildings located in mountainous areas. (Qing-Shun and Hongyang 2011) studied the buildings in the mountain city of Chongqing and proposed a three-dimensional disaster prevention system by analysing planning regulation and fire protection regulation. It can provide guidance for how to protect the spatial and morphological characteristics of the local buildings. (José, García-Ruiz, and Ruiz-Flano et al. 1996) aimed to achieve a stable land structure by proposing the use of abandoned cultivated land and improving shrub coverage to control land erosion. It explored a new direction in site selection and mountain structure stabilization in the design process. (Wei 2015) studied the flow rate, pressure, and dynamics of mudslide disasters through data simulation. It found some solutions for mudslide protection and evaluation that can be used in the design process. (Diao and Zhou et al. 2019) found that slope instability is the main reason that buildings in mountainous areas get damaged. This study provides certain guidance to improve the slope treatment of buildings in mountainous areas. (Zhao and Xu 2019) summarized the residential design methods in the Qinba mountain area, and provided a valuable reference for the design of residential buildings in mountainous areas. (Andrew and Sauber 2000) studied the impact of glacial erosion on mountain buildings and illustrated the importance of building design in mountainous areas in terms of natural environment protection. Brian (Horton 2018) reconstructed the mountain architecture on the western edge of South America from the perspective of geology. It explored methods of integrating geology in building design. Michael (Heads 2019) studied the large-scale passive uplift of animal and plant populations under the influence of construction, and emphasized the importance of maintaining the stability of the animal and plant environment. (Fei et al. 2018) proposed an urban morphological method to detect the wind paths of mountain cities by analyzing the wind environment of Dalian. It also provided corresponding strategies to alleviate the heat island effect of coastal mountain cities. In addition, the issue of sustainable development of mountainous environments within urban areas is increasingly becoming a focus in various countries since it is of importance to regional and global ecological security and sustainable development (Brown 2000;Jodhans 2001;Jodhins 2000).
The idea of sustainable development involving mountainous environment conservation has received much attention, and a number of environmental protection organizations have widely accepted this idea and explored its use to guide practical activities in mountainous environment conservation. As Foreman summarizes the study of environmental ecology as structure, function, and dynamics (Godron 1990), the matrix, corridors and patches are considered to be the three elements that make up the environmental structure (Nancai et al. 2019). Bai proposed the establishment of ecological corridors to link isolated areas of important mountain and species habitats (Bai and Wang et al. 2018). Elsen takes a large mountain range as the object of study and proposes protecting its vertical spatial ecology (Elsen and Merenlender 2018). Moreover, some laws and regulations are gradually being proposed. The French Landscape Protection and Regeneration Act of 1993, the US Environmental Policy Act (SEPA) on landscape impact assessment of development activities, and the landscape regulations of Germany and Japan show a growing concern for the protection of mountainous environments (Solomon Benti and Callo-Concha 2021). As a result, this has led to a number of practical activities, such as San Francisco's post disaster reconstruction. This includes preserving the original green space structure of the city, enhancing the continuity of green space within the city, protecting the natural topographic and mountainous features of the current urban space, and protecting good vantage points and viewpoints. By linking the height of existing buildings and new buildings to the urban skyline and the landscape space of the hills, it can provide protection for the mountainous environment in the course of urban development (Betal 1992).
In summary, as cities grow larger and taller, the desire to live in a city with clean air and a beautiful environment is becoming increasingly pressing. Therefore, it is important to explore how to rationally address the relationship between buildings and the mountainous environment while expanding the building areas of cities. To respect and protect the natural urban environment and to make use of mountain resources rationally, a range of relevant building design methods should be found, which will allow for the rapid and sustainable development of mountain cities and the harmonious integration of buildings into the mountainous environment. Furthermore, it can reduce the damage to the mountainous environment during construction and promote the development of mountain architecture.
Method
This paper mainly focuses on building design methods for dealing with mountainous environments of various slopes. The historic district of Signal Hill in Qingdao is taken as the research area since it has many buildings and complex terrain. Through the combination of terrain modeling and field research, the number and distribution of design methods are counted, yielding a series of regional design methods for mountainous environments. The process structure is shown in Figure 1.
Introduction to research area
Qingdao is located in a hilly seaside area, and the mountains account for approximately 15.5% of the city area. The unique mountainous terrain of Qingdao can be characterized by a high northeast, a low southwest coast, and an undulating central hillside. The Signal Hill Historic District is built around Signal Hill, which is 98 meters above sea level and covers an area of 63,936 m2. Due to its mountainous environment, the Historic District forms a complex slope and a typical residential space where the buildings are loosely arranged. Its landscape also has the characteristics of mountainous space. The buildings in different slope areas have their own special methods of dealing with the mountainous environment; thus, buildings with different slopes in the whole Historic District are investigated and the distribution of the different design methods used is analyzed. The scenarios of the research areas are shown in Figure 2.
Modeling the research area
A 3D model is built to analyze the slopes and the design methods used by buildings on different slopes. The process can be summarized in three main steps as follows: First, the topographic data of the Historic District are obtained from an open-source data site named the "Geospatial Data Cloud". From this site, the contour vector data files of the Historic District are obtained and imported into the Global Mapper software. As a result, elevation dwg files (a file format containing elevation data) are obtained through analysis of the contour data.
Second, a topographic model of the Historic District is formed by importing the dwg file with elevation data into Rhino software (software for making 3D models).
Finally, the model is divided into several quadrangles in the Grasshopper interface by cutting in the X-axis and Y-axis directions. Projecting the highest point of each quadrangle onto the lowest plane forms a series of spatial triangles that are used to analyze the range of slopes in the Historic District. Through this analysis, the range is divided into three slope levels so that field research and distribution statistics on building design methods at different slopes can be conducted. The modeling process can be seen in Figure 3.
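A simplified numerical sketch of this slope calculation and grading is given below; it assumes a regular square grid, takes the cell diagonal as the horizontal run, and uses hypothetical corner elevations, so it approximates rather than reproduces the Grasshopper definition actually used.

```python
import math
import numpy as np

def slope_grade(corner_elevations: np.ndarray, cell_size_m: float) -> tuple:
    """Slope angle of one grid quadrangle, taken as the rise between its highest
    and lowest corners over the cell diagonal, classified into the three grades."""
    rise = float(corner_elevations.max() - corner_elevations.min())
    run = cell_size_m * math.sqrt(2.0)  # horizontal run assumed to be the cell diagonal
    angle = math.degrees(math.atan2(rise, run))
    if angle <= 5.8:
        grade = "Grade I (0-5.8°)"
    elif angle <= 11.7:
        grade = "Grade II (5.8-11.7°)"
    else:
        grade = "Grade III (11.7-17.3°)"
    return angle, grade

# Hypothetical corner elevations (m) of one 20 m x 20 m quadrangle.
print(slope_grade(np.array([[12.0, 13.5], [12.8, 16.1]]), cell_size_m=20.0))
```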
Field research and statistics
Based on the model, the GH (the abbreviation of Grasshopper, which is software for parametric design) analyzes the slopes of the Historic District. It divides the overall slope into three grades: 0°-5.8°, 5.8°-11.7°, and 11.7°-17.3°. By counting the total number of buildings in different slope grades and conducting field research on them, this paper summarizes the building design methods for the mountainous environment. The statistical data can be seen in Table 1.
Furthermore, the Historic District was divided into three research areas based on the division of the slope grades, and field research was carried out in each of them. The overall field research routes included 8 streets, and it took 6 days to complete the research. The total walking distance of the research is 7.1 kilometers. The total number of buildings researched is 669. The first-grade research area is approximately 376,039 m2, mainly surrounded by 9 primary roads. In this area, 361 buildings, which account for 60% of the total number of buildings, were analyzed. The second-grade research area is approximately 352,857 m2, and 253 buildings were analyzed, which account for 72% of the total number of buildings. The third-grade research area is approximately 35,601 m2. In this area, 55 buildings, accounting for 100% of the total number of buildings, were analyzed. The overall research mapping and classification of slope grades can be seen in Figure 4. The distribution of each design method and building type in the different slope areas was recorded. Furthermore, the proportions of different design methods and building types in the different slope areas were also obtained. Analysis of the collected data can test whether the proposed hypothesis is valid.
Results
Through the field research work, the numbers and distribution of various building design methods dealing with the mountainous environment are counted and summarized. Nine design methods are used to deal with the variation of ground levels inside and outside the building site, 4 design methods to deal with the variation of ground levels inside the building site, 4 design methods to deal with the site fence, and 4 design methods to deal with the interrelationship between architecture and the mountainous landscape. Meanwhile, some distribution patterns for building types are shown. Finally, by comparing the distribution of design methods and building types in different research areas, the distribution pattern is analyzed and obtained.
Analysis of building type distribution
From Figure 5, we can see that different building types show different distribution characteristics on different slopes. Slope Grade I includes all building types, including single housing, group housing, religious buildings, educational buildings, commercial buildings and other public buildings. Group housing accounts for 50%, which is the largest proportion. Single housing accounts for 39%, while the smallest proportion is for commercial buildings, which is 1%. Slope Grade II includes five types of buildings, including all types except religious buildings. The largest proportion is still group housing, which accounts for 50%. Single housing accounted for 45%. The smallest proportion is educational buildings, which account for 1%. In slope grade III, there are only three building types. The largest proportion is single housing, which accounts for 85%. Group housing accounts for 13%, while the lowest proportion is found for other public buildings, which account for only 2%.
In summary, educational buildings, commercial buildings, religious buildings and public buildings that have large volumes tend to be located in Grade II or lower slope areas. Because of the small change in slope, the complexity of the structure can be reduced for building types with large volumes, reducing the difficulties of construction and capital investment. Conversely, in terms of structure, buildings with small volumes are better able to accommodate the complex changes in slope than buildings with large volumes. Therefore, it is found that there are large numbers of residential buildings located in all three slope areas. Among them, group housing is mostly located in the grade II or lower slope areas, while single housing tends to exist in the Grade III area.
Design methods to deal with the height difference
Due to the complexity and variability of mountainous terrain, there are a variety of height differences. There are two main approaches for the design methods to deal with this: one is how to handle height differences inside building sites; and the other is how to deal with height differences between building sites and external roads. The type of design methods can be divided into three categories: "flatten", "steps" and "ramps". The "flatten" design method refers to flattening the different ground levels to the same level.
Through field research, this kind of design method is mainly used in Grade I. When this design method is used inside a building site, the variation in the height differences always needs to be simple, which makes it easy to find a base ground level. More details of the design methods can be seen in Figure 6(a). When dealing with the height differences between building sites and external roads, height differences are generally less than 0.5 m. Based on these qualifications, the difficulty and cost of construction are minimized. More details of the design methods can be seen in Figure 6(b). For the "steps" design method, as the difference in height increases, the number of steps increases. Meanwhile, the type and position of steps will change depending on the size of the space available to address the height difference. To deal with the height difference between building sites and external roads, based on different height differences and available space sizes, four design methods are used: the "few steps", "single-run staircases", "folded stairs" and "long single-run staircases". When the height difference is below 1 m, the "few steps", which have no more than 3 steps, are generally used. Because it requires less space, the "few steps" are always set inside the site. As the height difference reaches 2-3 m, the "single-run staircase" and "folded stairs", which have 10-20 steps, are more commonly used. As the height difference between the building site and external road becomes even greater, reaching 3-9 m, a "long single-run staircase" is used to solve the problem. Because more space is needed, this kind of staircase, which always uses more than 30 steps, is always set outside the site. This will not only solve the traffic problem of the height differences between sites and external roads but also connect streets with different ground levels. More details of the design methods can be seen in Figure 7. When dealing with height differences within a site, the design methods of "steps" are always used where the building site is divided into several sections with different ground levels. When the height difference between each section is below 1.5 m, "multistep" with 3 to 10 steps is always used to solve the problem. With the increase in height difference, the number of steps increases. When there is enough space inside the building site, the steps are integrated to form an "internal long single-run staircase". As a result, this design method simplifies construction and reduces the use of space. More details on the design methods can be seen in Figure 8.
For the "ramps" design method, the slope is generally less than 10°, which is more suitable for human walking. In dealing with the height differences between building sites and external roads, the type of design method can be divided into three categories: "external ramps", "internal ramps" and "fan ramps". The "external ramp" is always used in situations where the building site is small and does not have enough space to deal with the height differences inside and outside the site. In this situation, the ramp encroaches on a certain space on the external road. In contrast, the "internal ramp" is used in situations where the site has enough space. It can maintain the integrity of the external road rather than encroaching it. A "fan ramp" is always used in situations where the external road has a certain tilt angle. Therefore, there is a fan-shaped area between the flat building site and the external road, which can be handled by a "fan ramp". When dealing with height differences inside a site, a "long ramp" is generally used where the building site has sufficient space. For the use of occupants, the slope will normally not exceed 10°. More details on the design methods can be seen in Figure 9.
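The space demands behind these choices can be estimated with simple geometry; the sketch below computes the plan length required by a ramp at the 10° limit stated above and the number of steps for an assumed riser height of 0.15 m, using a hypothetical 2.5 m height difference (the riser value is an illustrative assumption, not a figure from the survey).

```python
import math

def ramp_length(height_diff_m: float, max_slope_deg: float = 10.0) -> float:
    """Minimum plan length of a ramp that resolves a given height difference
    without exceeding the stated walkable slope limit."""
    return height_diff_m / math.tan(math.radians(max_slope_deg))

def step_count(height_diff_m: float, riser_m: float = 0.15) -> int:
    """Number of steps needed for a height difference at an assumed riser height."""
    return math.ceil(height_diff_m / riser_m)

# A hypothetical 2.5 m height difference between a building site and the external road:
print(f"ramp: {ramp_length(2.5):.1f} m of plan length at a 10° slope")
print(f"stairs: {step_count(2.5)} steps at a 0.15 m riser")
```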
Fence design methods
In situations where the external roads around a building site have different ground levels, the height differences between the building site and the external roads vary. Although this problem can be solved by the design methods introduced above, those methods only deal with a single height difference rather than multiple height differences. Therefore, fences are always used to hide the variations in height differences and maintain harmony. The fence design methods are analyzed in terms of two aspects: "fence height" and "fence style".
The height of the fence increases as the height difference of the external road becomes greater. In Grade I or Grade II areas, the height of the fence is generally below 3 m, since the inclination angle of the external road is small and the height difference is only up to 3 m. In the area of Grade III, the height of the fence is generally above 3 m. As the length of the external road becomes longer, resulting in a greater height difference, the height can reach 4 m. More details of the design methods can be seen in Figures 10(a-b). The style of the fence shows different forms with the change in height difference. In the area of Grade I, because of less variation in the height difference, the height is low. As a result, the construction of the fence is less difficult, which makes the form of the fence more flexible. Therefore, a fence that shows a stepped shape followed by an external road is always used. In the area of Grade II, because of the large change in height difference, the height is above 3 m. To reduce the difficulty of construction, the height of the fence generally maintains a consistent rather than a stepped shape. In the area of Grade III, with greater variation in height difference, the height of the fence can reach up to 4 m. Therefore, the fence is always used in combination with the garage. More details of the design methods can be seen in Figures 10(c-d)
Design methods for interrelationship between architecture and landscape
The design methods introduced above mainly deal with variations in ground level, while the next method focuses on the interrelationship between buildings and landscapes. Since this design method is most evident in residential buildings, the paper focuses on the design methods aimed at two types of buildings: "single buildings" and "group buildings".
The design methods that address the interrelationship between single buildings and landscapes show different forms due to the variation in height difference. In the area of Grade I, the building site has enough space to be utilized since there is no need to deal with complex height differences. Therefore, most of the landscape is artificially planted inside the site. In Grade II and Grade III areas, because more space is needed to handle complicated variations in the ground level inside the building site, it is not suitable for creating artificial landscapes inside the site. Therefore, buildings mostly use the plants planted outside the building site as the main landscape. More details of the design methods can be seen in Figures 11(a-b). For the design methods that deal with the interrelationship between group buildings and landscape, because more buildings are built inside the site, there is less available open space. Therefore, buildings all tend to take advantage of the landscapes outside the site. Nevertheless, the type of landscape varies with the height difference. In areas with low slopes, buildings mostly use streetscapes as the main landscape. In areas with a high slope, buildings mostly use mountainous plants as the main landscape. More details of the design methods can be seen in Figures 11(c-d). Through the research work on the entire Historic District, a total of 20 design methods are obtained in four categories: "Design Methods to Deal with Height Differences Inside and Outside the Building Site", "Design Methods to Deal with Height Differences Inside the Building Site", "Fence Design Methods", and "Design Methods for the Interrelationship between Architecture and Landscape". Furthermore, according to the data counted, the distribution of different design methods in different research areas can be obtained. More data can be seen in Figure 12. At the same time, by counting the number of different design methods, the percentage distribution in different research areas was obtained. Additional data can be seen in Figure 13.
Discussion
Tables 2 and 3 show that the distribution trend of building types and the tendency to choose particular architectural design methods have some patterns that are influenced by different slopes.
First, compared with other complex building types, the number of residential buildings is the largest in the three research areas. As the slope increases, the number of mid-rise residential buildings gradually decreases, and the number of low-rise residential buildings increases significantly, especially in the "11.7-17.3°" research area. Low-rise residential buildings are mainly distributed in the "11.7-17.3°" research area, while there is a small number of mid-rise residential buildings. This distribution characteristic shows that residential buildings can better adapt to the undulating terrain in the mountainous environment because of their wide choice of structures. The larger the slope, the simpler the structure of the building form and the greater its adaptive ability. Therefore, the choice of building type in a mountainous environment should be based on a simple structure, which reduces the difficulties in construction. Meanwhile, simple structures also make the building more flexible in the mountainous environment and weaken the damage caused by construction to the natural environment of the mountain.
Second, there are also some tendencies regarding building design methods to cope with the mountainous environment. In areas with gentle slopes, because the variation in ground level is slight, there is no need to use many areas to address the problem of height difference, leading to more available land resources. As a result, there are a variety of design methods that mainly include steps and ramps. Even the methods involve flattening without destroying the original mountain environment. With the gradual increase in the slope, the variation in ground level becomes more complicated, and the available land resources are reduced. Therefore, the design methods of "folding stairs" and "long single-run staircases", which take up fewer land resources, will be more suitable. For the choice of building fences, as the slope becomes greater, the height of the fences will also increase, and some can even be used for the function of garages. For the interrelationship between the buildings and the surrounding landscape, in areas of gentle slopes, sufficient space inside the site can be used for artificial landscaping. However, when the slope increases, to reduce the excavation and filling of the mountain, which will destroy the mountain environment, the space inside the building site is generally small. As a result, buildings usually adopt the stepped arrangement and the method of elevating buildings following the mountain topography to take advantage of the external landscape. In summary, it can be seen that under different environmental conditions, buildings will also have different design methods. Therefore, exploring the harmonious relationship between architecture and the environment and finding the most suitable design methods for different environments is of great significance for the sustainable development of architecture.
At present, some scholars have conducted meaningful research on the harmonious coexistence between architecture and environment. For example, (Hermawan and Švajlenka 2022) analyzed the thermal performance of buildings in terms of temperature and humidity to determine the proper types of cladding, materials, shapes, and load-bearing elements that can provide a comfortable and energy-efficient building. (Lin, He, and Zhao et al. 2021) analyzed the ecological sensitivity and site suitability of buildings in mountainous areas, and proposed an optimal development and construction plan. Jozef (Švajlenka and Kozlovská 2020) evaluated the efficiency and sustainability of buildings in the mountain region, and proposed a method to determine material efficiency. Taking Akedala Station as the study case, (Zhao, Lu, and He et al. 2022) explored the characteristics of the development and growth of greenhouse gas emissions in building construction. Compared with these studies, this paper explores the harmonious coexistence of architecture and the mountainous environment, focuses on specific design methods, and further explores the distribution patterns of various design methods and building types in different slope areas. For Qingdao and other mountain cities around the world, this research can provide more ideas for mountain building design, which will effectively improve the utilization of the mountain terrain for buildings. As a result, this will effectively reduce the volume of building construction, and further reduce the energy consumption of building construction, reduce carbon emissions, and promote the sustainable development of buildings located in mountainous environments.
Finally, through the field research of more than 600 buildings in the Signal Hill historic district, it is found that single housing and group housing are fully integrated into the mountainous environment. They also have many different design methods to deal with height differences. However, these design methods are mostly applied to single residential buildings of 1-2 stories and group residential buildings with fewer than 6 stories. All of this shows that there are no reasonable design methods for high-rise residential buildings. With the continuous development of modern society in China, urban populations continue to increase. Therefore, people have more requirements for housing. In view of the lack of land in cities, the relevant departments have been considering the feasibility of high-rise residential buildings in mountainous areas. Therefore, it is of significance to consider how to translate and modify the various design methods to apply to high-rise residential buildings to cope with mountainous environments. Thus, high-rise residential buildings in different slope areas can fit into the mountainous environment. Moreover, it can reduce the destruction of the original mountainous environment, make the buildings suitable for this particular environment and reflect the real local conditions.
Conclusion
In conclusion, this study discusses the relationship between buildings and the mountainous environment, taking Qingdao as an example. This paper analyzes the design methods of buildings located in mountainous areas and summarizes the characteristics and distribution rules of 20 design methods. It also explores the relationship between various building types and slope angles. The research hypothesis is valid according to the data analysis: the distribution of building types was affected by the slope. Small and medium-sized buildings with strong structural tolerance can be found in areas with higher slope angles, whereas large and structurally complicated buildings are more often found on flat terrain. Moreover, the number of design methods and building types decreased as the slope increased. The findings provide more reasonable strategies for the construction of buildings in mountain areas, which will effectively reduce the volume of construction, reduce energy consumption and carbon emissions, and promote the sustainable development of buildings in mountain areas. The limitation of this research is that the study mainly focuses on the investigation of low-rise buildings rather than high-rise buildings, and that, given the large number of buildings, the number of cases analyzed for each design method is limited. Future studies can explore the relationship between high-rise buildings and slopes, and how the two can coexist with nature.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Guangzhao Zeng is a postgraduate of Qingdao University of Technology. His research interest lies in regional design methods of architecture.
Abundance of Non-Polarized Lung Macrophages with Poor Phagocytic Function in Chronic Obstructive Pulmonary Disease (COPD)
Lung macrophages are the key immune effector cells in the pathogenesis of Chronic Obstructive Pulmonary Disease (COPD). Several studies have shown an increase in their numbers in bronchoalveolar lavage fluid (BAL) of subjects with COPD compared to controls, suggesting a pathogenic role in disease initiation and progression. Although reduced lung macrophage phagocytic ability has been previously shown in COPD, the relationship between lung macrophages’ phenotypic characteristics and functional properties in COPD is still unclear. (1) Methods: Macrophages harvested from bronchoalveolar lavage (BAL) fluid of subjects with and without COPD (GOLD grades, I–III) were immuno-phenotyped, and their function and gene expression profiles were assessed using targeted assays. (2) Results: BAL macrophages from 18 COPD and 10 (non-COPD) control subjects were evaluated. The majority of macrophages from COPD subjects were non-polarized (negative for both M1 and M2 markers; 77.9%) in contrast to controls (23.9%; p < 0.001). The percentages of these non-polarized macrophages strongly correlated with the severity of COPD (p = 0.006) and current smoking status (p = 0.008). Non-polarized macrophages demonstrated poor phagocytic function in both the control (p = 0.02) and COPD (p < 0.001) subjects. Non-polarized macrophages demonstrated impaired ability to phagocytose Staphylococcus aureus (p < 0.001). They also demonstrated reduced gene expression for CD163, CD40, CCL13 and C1QA&B, which are involved in pathogen recognition and processing and showed an increased gene expression for CXCR4, RAF1, amphiregulin and MAP3K5, which are all involved in promoting the inflammatory response. (3) Conclusions: COPD is associated with an abundance of non-polarized airway macrophages that is related to the severity of COPD. These non-polarized macrophages are predominantly responsible for the poor phagocytic capacity of lung macrophages in COPD, having reduced capacity for pathogen recognition and processing. This could be a key risk factor for COPD exacerbation and could contribute to disease progression.
Introduction
Macrophages are key effector cells in orchestrating both the innate and adaptive immune responses [1,2]. Alveolar macrophages identify, engulf, and process inhaled pathogens, cigarette smoke
Subjects
We recruited stable COPD patients who had been free of exacerbations for at least 4 weeks prior to sample collection, and a control group, which consisted of individuals who did not have COPD but underwent bronchoscopies for a clinical indication (i.e., for lung nodules or masses, chronic cough, or mediastinal lymphadenopathy) between 2017 and 2019 at St. Paul's Hospital in Vancouver, Canada (ClinicalTrials.gov identifier: NCT02833480, 19/02/2015). All subjects provided written informed consent and the research protocol was approved by the University of British Columbia/Providence Health Care Research Ethics Committee (certificate numbers: H14-02277, 19/02/2015). The diagnosis of COPD and severity score were based on the Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria [18].
Broncho-Alveolar Lavage (BAL) Collection
Using a fiberoptic Olympus ® bronchoscope (Olympus Corporation, Tokyo, Japan), the patients' 4-6th generation airways were cannulated and then wedged. The target segment(s) had to be free of any significant disease based on chest computed tomography (CT) imaging and not contralateral to any significant lung pathology including pulmonary nodules or mass, or bronchiectasis. BAL was performed by instilling 40 mL of sterile normal saline into an occluded segment with subsequent 20 mL aliquots (to a maximum of 200 mL) until a total of 50 mL of BAL fluid was retrieved. The first 20 mL of the collection fluid was discarded.
Alveolar Macrophages
Macrophages were purified from the BAL fluid on the day of collection, as previously described in detail [19]. Briefly, BAL fluid was filtered through a 70 µm cell strainer (VWR, Radnor, PA, USA) and centrifuged at 500 relative centrifugal force (rcf) for 10 min at 4 °C. The total cell number was counted with a hemocytometer. BAL samples were processed under sterile conditions within 1 h after collection and maintained on ice until processed.
Immunofluorescence Staining and Flow Cytometric Analysis
We used 10% human serum for 20 min at 4 °C to block nonspecific antibody binding. The cells were stained with an anti-human HLA-DR antibody (APC/Cy7, clone L243, Biolegend, San Diego, CA, USA) to identify macrophages, an anti-human CD40 antibody (Brilliant Violet 421™, clone 5C3, Biolegend, San Diego, CA, USA) to identify M1 polarization, and an anti-human CD163 antibody (Alexa Fluor 647, clone GHI/61, Biolegend, San Diego, CA, USA) to identify M2 polarization. Appropriate isotype controls were used for each antibody. These M1 (CD40) and M2 (CD163) markers gave us the clearest and strongest signals and were selected after testing a variety of other M1 (CD80, CD86, iNOS) and M2 (CD206) markers (Figure S1). We incubated the cells with the antibodies for 60 min on ice in the dark and, after washing, evaluated the cells using a Gallios flow cytometer and a MoFlo Astrios EQ cell sorter. An example of the gating strategy is shown in Figure S1.
Measurement of Macrophage Phagocytosis
We used pHrodo™ Red S. aureus bioparticle conjugates (Invitrogen, Carlsbad, CA, USA) to evaluate the phagocytic activity of macrophages [20]. The cells were washed in phosphate-buffered saline (PBS). Macrophages (1 × 10^6/mL) were incubated with the pHrodo-labeled bioparticles in a 37 °C water bath for 2 h and then collected for flow cytometry analysis according to the manufacturer's recommendations. This assay is based on the principle that the fluorescence signal increases dramatically in response to the lower pH of the phago-lysosome, which occurs when macrophages engulf the bioparticles [20]. The fraction of pHrodo-positive cells (indicating phagocytic activity) was determined for each macrophage phenotype.
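As an illustration of how the pHrodo-positive fraction can be derived from gated flow cytometry events, a minimal Python sketch is given below; the fluorescence intensities and the positivity threshold are hypothetical and do not reflect the gating actually applied in this study.

```python
import numpy as np

def phrodo_positive_fraction(fluorescence, threshold):
    """Fraction of gated macrophage events whose pHrodo signal exceeds the
    positivity threshold (here hypothetical; in practice set from control samples)."""
    events = np.asarray(fluorescence, dtype=float)
    return float((events > threshold).mean())

# Hypothetical fluorescence intensities for one macrophage phenotype.
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(200, 40, 600),     # cells without acidified bioparticles
    rng.normal(1500, 300, 400),   # cells that have engulfed bioparticles
])
print(f"{phrodo_positive_fraction(signal, threshold=600):.1%} pHrodo-positive")
```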
Gene Expression Measurement/Analysis
After macrophages were sorted into the four subtype groups, they were transferred to RLT buffer (Qiagen, Hilden, Germany) and stored at −80 °C. The nCounter® GX Human Inflammation Kit (NanoString Technologies, Seattle, WA, USA) was used to analyze gene expression. We had 48 RNA samples for this part of the experiment (four macrophage subtypes from each of seven non-COPD and five COPD subjects). The panel profiles 255 targeted genes (249 inflammation-related genes and 6 internal reference controls). Total RNA (extracted from >5000 cells) was assayed on an nCounter Digital Analyzer (NanoString Technologies, Seattle, WA, USA) according to the manufacturer's protocol. Gene expression was analyzed with the accompanying nSolver software (NanoString Technologies, Seattle, WA, USA). Raw count data were processed according to NanoString's recommendations. Data were normalized to the average of the 6 housekeeping genes (CLTC, GAPDH, GUSB, HPRT1, PGK1, and TUBB) in each experimental sample and log2-transformed for further differential expression (DE) analysis.
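The sketch below illustrates one common way to carry out housekeeping-gene normalization and log2 transformation of a genes × samples count table; it is not the exact computation performed by nSolver, and the toy counts are hypothetical.

```python
import numpy as np
import pandas as pd

HOUSEKEEPING = ["CLTC", "GAPDH", "GUSB", "HPRT1", "PGK1", "TUBB"]

def normalize_counts(raw, pseudocount=1.0):
    """Scale each sample so that its housekeeping-gene average matches the
    overall housekeeping average across samples, then log2-transform.
    `raw` is a genes x samples DataFrame of NanoString counts."""
    hk_mean = raw.loc[HOUSEKEEPING].mean(axis=0)   # per-sample housekeeping average
    scale = hk_mean.mean() / hk_mean                # per-sample scaling factor
    normalized = raw.mul(scale, axis=1)             # rescale every sample (column)
    return np.log2(normalized + pseudocount)

# Hypothetical toy data: 8 genes x 3 samples of raw counts
raw = pd.DataFrame(
    np.random.poisson(lam=200, size=(8, 3)),
    index=HOUSEKEEPING + ["CD40", "CD163"],
    columns=["s1", "s2", "s3"],
)
log_expr = normalize_counts(raw)
```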
Statistical Analysis
The statistical software PRISM 5 (GraphPad Software, Inc., San Diego, CA, USA) was used for Fisher's exact test for 2 × 2 tables, the Mann-Whitney U test, the Kruskal-Wallis test with post hoc Dunn's test with Bonferroni adjustment for multiple comparisons, and the Spearman rank correlation test, as appropriate. p < 0.05 was considered significant.
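For readers reproducing this workflow outside PRISM, the following sketch shows the same style of analysis in Python with scipy; pairwise Mann-Whitney U tests with Bonferroni correction are used here as a stand-in for Dunn's post hoc test, and the group values are hypothetical.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu, spearmanr

def compare_groups(groups):
    """Kruskal-Wallis omnibus test followed by Bonferroni-adjusted pairwise
    Mann-Whitney U tests (a stand-in for Dunn's post hoc test)."""
    names = list(groups)
    _, p_omnibus = kruskal(*groups.values())
    pairs = list(combinations(names, 2))
    adjusted = {}
    for a, b in pairs:
        _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        adjusted[(a, b)] = min(p * len(pairs), 1.0)   # Bonferroni correction
    return p_omnibus, adjusted

# Hypothetical per-subject percentages of each macrophage subtype
groups = {
    "M1": [6.5, 5.8, 7.0, 4.9],
    "M2": [7.4, 6.1, 5.5, 8.0],
    "double": [30.0, 22.5, 28.4, 19.9],
    "non_polarized": [40.2, 55.1, 48.7, 60.3],
}
p_omnibus, pairwise = compare_groups(groups)

# Spearman rank correlation, e.g. % non-polarized macrophages vs. phagocytic activity
rho, p_corr = spearmanr([10, 25, 40, 60], [80, 60, 45, 20])
```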
Gene Expression Analysis
For differential gene expression analysis, the raw count data were quality controlled and normalized using the positive control genes and housekeeping genes, according to NanoString's recommendation. Principal component analysis was used to assess batch effect and outliers ( Figure S2). Since we had observed a batch effect between experiments, normalization was performed on each experiment separately and then combined using ComBat (R package "sva") batch correction. The processed data were log2-transformed, and genes with log2 expression < 4 in at least 12 samples (1/4 of the total sample size) were filtered out prior to the downstream analysis.
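A minimal sketch of the expression filter described above (genes with log2 expression < 4 in at least 12 samples removed) is given below; it assumes the data are already normalized, batch-corrected (e.g., ComBat-adjusted) and log2-transformed, and the default thresholds mirror the values stated in the text.

```python
import pandas as pd

def filter_low_expression(log_expr: pd.DataFrame, threshold=4.0, min_low_samples=12):
    """Drop genes whose log2 expression is below `threshold` in at least
    `min_low_samples` samples. `log_expr` is a genes x samples DataFrame."""
    n_low = (log_expr < threshold).sum(axis=1)   # number of low-expression samples per gene
    return log_expr.loc[n_low < min_low_samples]
```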
Following the classical definition of cell markers, differential expression analysis was performed using R package limma's moderated linear model comparing one cell type versus the others. To account for the intra-subject correlation between samples, we used the mixed effect version of the model. The Benjamini-Hochberg procedure was used to correct for multiple hypothesis testing and a false discovery rate < 0.1 was used as the significance threshold. Linear regression was used to examine the association between phenotypes (i.e., sex, age, smoking status and disease status) and the first five principal components of the expression data to determine the potential covariates. R package "clusterProfiler" was used for pathway enrichment analysis. All analyses were performed using R (version 3.5.0).
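The study itself fit limma's moderated mixed-effect model in R; the sketch below only re-implements the Benjamini-Hochberg adjustment used to call significance at a false discovery rate < 0.1, with hypothetical p-values.

```python
import numpy as np

def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values for a 1-D collection of raw p-values."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)                   # p_(i) * m / rank
    adjusted_sorted = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(m)
    adjusted[order] = np.clip(adjusted_sorted, 0.0, 1.0)
    return adjusted

raw_p = [0.0004, 0.03, 0.002, 0.2, 0.8]
is_significant = benjamini_hochberg(raw_p) < 0.1   # FDR threshold used in the study
```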
Patient Characteristics
The characteristics of the 28 study subjects (18 COPD and 10 control) are shown in Table 1. COPD subjects had significantly lower lung function (p < 0.001) than controls. The GOLD stages I/II/III/IV were 4/10/4/0. Both groups were similar in terms of age and FVC % predicted.
Data are presented as median (interquartile range). Definition of abbreviations: COPD chronic obstructive pulmonary disease, FVC forced vital capacity, FEV1 forced expiratory volume in one second, GOLD Global Initiative for Chronic Obstructive Lung Disease, BAL bronchoalveolar lavage.
Macrophage Phenotype Distribution
In control subjects, there was a near-equal distribution of the four subtypes of macrophages, whereas in COPD subjects, non-polarized (CD40-CD163-) macrophages were the most abundant subtype (Figure 1; Table S1).
COPD Severity and Distribution of Macrophage Subtypes
The percentage of double-polarized macrophages decreased and non-polarized macrophages increased with increasing COPD severity (Figure 2C,D). The percentage of M1 macrophages was 6.5% [3.5-14.5%] in GOLD I, 5.8% [2.9-8.7%] in GOLD II, and 5.8% [4.6-7.0%] in GOLD III (p = 0.063; Figure 2A), and the percentage of M2 macrophages was 7.4% [1.5-13.5%] in GOLD I (Figure 2B).
Figure 1. The percentage of each macrophage subtype was compared between control (A) and COPD (B) subjects. There was no significant difference in the percentage of macrophage subtypes in control subjects (A). The percentage of non-polarized (CD40-CD163-) macrophages was the highest in COPD subjects (B). Bottom and top of each box represent the 25th and 75th percentiles, respectively; the solid line indicates the median; brackets represent the 10th and 90th percentiles. p values were determined using a Kruskal-Wallis test and a post hoc Dunn's test with Bonferroni adjustment for multiple comparisons. * p < 0.05. Definition of abbreviations: COPD chronic obstructive pulmonary disease.
Phagocytosis of Macrophage Subtypes
Figure 5. The percentage of pHrodo Red Staphylococcus aureus bioparticle conjugate-positive macrophages (phagocytic activity) in each macrophage subtype was compared between control (A) and COPD (B) subjects. The phagocytic activity of non-polarized (CD40-CD163-) macrophages was lower than that of double-polarized (CD40+CD163+) macrophages in control subjects (A). The phagocytic activity of double-polarized (CD40+CD163+) macrophages was the highest in COPD subjects (B). Bottom and top of each box plot represent the 25th and 75th percentiles, respectively; the solid line indicates the median; and the brackets denote the 10th and 90th percentiles. p values were determined using a Kruskal-Wallis test and a post hoc Dunn's test with Bonferroni adjustment for multiple comparisons. * p < 0.05.
The phagocytic activity of M1, M2, and non-polarized macrophages in COPD subjects in general were lower than those in control subjects (p = 0.020, p = 0.006, and p = 0.004, respectively) (Table S2). There was a trend towards reduced phagocytic activity of double-polarized macrophages in COPD subjects compared to control subjects (p = 0.068; Table S2). There was an inverse relationship between the percentage of non-polarized macrophages in BAL fluid and phagocytic activity in all participants (Spearman rank correlation r = −0.785 and p < 0.001; Figure 6).
Gene Expression in Macrophages
We used a multiplex gene expression panel, consisting of 249 relevant genes, which are known to be involved in the initiation, propagation and resolution of inflammation to evaluate the molecular profile of BAL macrophage subtypes. By checking the association between the variables (age, sex, disease conditions and smoking status) and the first five principal components, no significant covariates were included in the differential expression analysis (all p > 0.05). The differential gene expression analysis showed that 90 genes were up-regulated and 14 genes were down-regulated in non-polarized macrophages ( Figure S3A,B); whereas in double polarized macrophages, 8 genes were up-regulated and 73 genes were down-regulated ( Figure S3C,D). In contrast, only 2 genes were upregulated and no genes were down-regulated in M1 macrophages ( Figure S3E) and only 1 gene was up-regulated and 7 genes were down-regulated in M2 macrophages ( Figure S3F,G). Heat maps of the top up-regulated and top down-regulated DE genes in each macrophage phenotype are shown in Figure 7A and Figure 7B respectively. Volcano plots graphically show gene expression in the four different macrophage populations (Figure 8). No enriched Gene Ontology and KEGG pathways were identified at false discovery rate < 0.1.
Discussion
Macrophages play a pivotal role in the chronic inflammatory response in COPD lungs. Here we showed that among COPD patients, the majority of macrophages in BAL fluid are non-polarized, whereas in non-COPD subjects, these macrophages constitute ~25% of the total pool of macrophages. Several studies have shown an increase in the number of macrophages in bronchoalveolar lavage fluid in smokers and COPD subjects compared with non-smoking controls [4,5]. We also showed that the proportion of these non-polarized macrophages increases with increasing COPD severity, suggesting that these macrophages could contribute to the progression of disease, though additional studies will be required to establish causality. Previous studies have shown that lung macrophages in COPD have reduced phagocytic ability [17,21,22], and here we showed that these non-polarized macrophages predominantly contribute to this impaired phagocytic activity. This reduced phagocytic activity of non-polarized macrophages was also related to the severity of the underlying COPD. Together, these findings support the notion that these non-polarized macrophages contribute to the inflammatory milieu in the lung tissues and also could contribute to enhanced risk of exacerbations and progression of COPD.
Although we found an increased abundance of non-polarized macrophages in COPD subjects compared to controls, there were no fractional differences among the four macrophage subtypes in the control group. We also found that the percentage of double-polarized macrophages was significantly lower in the COPD patients compared with that in control subjects (p = 0.007; Table S1).
The clinical relevance of this observation, however, is unclear, although this phenotypic shift of macrophages in COPD supports the notion that these non-polarized macrophages contribute to the persistent inflammatory responses in lung tissues of COPD [6,7]. A few studies have previously addressed the issue of macrophage phenotype shifts in COPD [11][12][13][14][15][16][17]. Kunz and coworkers, for example, used induced sputum samples (which represent mostly macrophages from the larger airways) to evaluate M1 and M2 phenotypes [15], while Eapen and co-workers [12], using macrophages harvested from BAL fluids, evaluated different phenotypes of macrophages and found that the percentage of non-polarized macrophages increased in COPD subjects; results that were similar to the findings of the present study. We extend these findings by showing a severity-dependent relationship between COPD GOLD grades and the percentage of non-polarized macrophages in BAL fluid of COPD subjects. The M1 and M2 macrophage distribution in control subjects (Figure 1A) is similar to that reported by Eapen [12] and Shaykhiev [14]. However, it should be noted that there are slight differences in the percentages of M1 and M2 macrophages that have been reported in the literature, which may be related to differences in the cell surface markers employed by each of the studies to identify these subtypes. For example, Hodge and coworkers used MHC class I and class II to capture M1 macrophages [13], whereas Löfdahl and coworkers [23] used CR-3 as the cell surface marker for M1 macrophages.
Here, we also showed that current smokers have an increased percentage of non-polarized macrophages in BAL (Figure 4). Taken together with previous studies, our current findings support the concept that lung macrophages change their phenotype depending on disease status and environmental stimuli (such as smoking). We suspect that this macrophage plasticity is predominantly influenced by the local microenvironment [24]. The chronic and persistent inflammatory milieu in COPD lung tissue (or that induced by inhalation of cigarette smoke or particulate matter) may recruit monocytes from the peripheral circulation into the lung tissue and/or airspaces [8,25], where they differentiate and become macrophages; these monocyte-derived macrophages may be initially non-polarized [17]. One potential source of non-polarized macrophages could be these "newly" recruited cells. Alternatively, these non-polarized macrophages could be macrophages that have lost their M1 or M2 characteristics (markers) over time and are destined for apoptosis and removal [17,25]. Leukocyte kinetic studies are necessary to address this issue.
Our study showed that non-polarized macrophages have poor receptor-mediated phagocytic capacity, which was observed in both control and COPD subjects (Figures 5 and 6). Previous studies, using a mixture of all lung macrophages in COPD subjects and in cigarette smoke-treated animals, have shown decreased phagocytic function of macrophages in response to microorganisms such as Streptococcus pneumoniae, Haemophilus influenzae, Escherichia coli, Moraxella catarrhalis, and Candida albicans [21,22,[26][27][28][29][30][31]. To our knowledge, our study is the first to explore the phagocytic function of distinct populations of broncho-alveolar macrophages in humans. We suspect that the low phagocytic function in these previous reports is due to the abundance of non-polarized macrophages in these samples. In contrast, we showed that the double positive macrophages demonstrated excellent phagocytic function (Figure 5). We posit that the increased presence of non-polarized macrophages (at the expense of double positive macrophages) in COPD airways may increase these patients' risk of repeated infections and exacerbations [32].
Previous studies have shown differences in gene expression between M1 and M2 macrophages derived from human peripheral blood mononuclear cells (PBMC) [33,34] or cultured human monocytes [35,36]. Differential gene expression of subtypes or phenotypes of macrophages retrieved from the airspaces or lung tissues of subjects with COPD is still unclear. Using heat maps of differentially expressed genes, we also showed that CD40 was up-regulated in double-polarized and M1 macrophages and down-regulated in non-polarized macrophages while CD163 was up-regulated in double-polarized and M2 macrophages and down-regulated in non-polarized macrophages ( Figure S3 and Figure 7A,B). These findings support our flow cytometric data. CD40 is a costimulatory receptor related to antigen presenting cells [37] and CD163 is a scavenger receptor [38,39] related to phagocytosis of a variety of particulate matter. The phagocytic function of polarized M1, M2 or double positive macrophage was higher than that of non-polarized macrophages, suggesting that these two receptors CD40 and CD163 contribute to these macrophages' ability to recognize and process foreign materials including pathogens. In the non-polarized macrophages, 14 genes were down-regulated, including CD40 and CD163, which may impair their capacity for pathogen recognition and processing. In addition, the complement components C1QA and C1QB were also down-regulated in non-polarized macrophages (Figure 8). C1Q is a pattern recognition protein that binds to antibody-antigen complexes, bacteria, and viruses and stimulates phagocytic function of human monocytes and macrophages [40,41]. Furthermore, C1Q also directs macrophage polarization and limits inflammasome activity during the uptake of apoptotic cells [42] and promotes M2 polarization [43], both key functions to assist in inflammation resolution and tissue repair. Attenuation of these key macrophage functions could promote bacterial colonization and subsequent infectious exacerbations in COPD and also contribute to dysregulated resolution of the downstream inflammatory responses and impaired tissue repair.
RAF1 is part of a signaling pathway called the RAS/MAPK pathway, which transmits signals from outside the cell to the cell's nucleus to promote cell division (proliferation), cell maturation and differentiation, cell recruitment and eventually apoptosis [44,45]. AREG, or amphiregulin, is an Epidermal Growth Factor (EGF)-like molecule that has recently been shown to play a central role in orchestrating both host resistance to pathogens and immune tolerance mechanisms [46][47][48]. This cross-talk between immune cells (such as macrophages) and epithelial cells is programmed to promote inflammation resolution and tissue regeneration, leading to homeostasis after injury [49]. However, with a chronic persistent inflammatory stimulus such as cigarette smoking, it could promote proliferation of structural cells such as fibroblasts and their production of pro-inflammatory cytokines such as IL-8, vascular endothelial growth factor (VEGF), and transforming growth factor alpha (TGF-α), with augmented expression in chronic inflammatory conditions such as rheumatoid arthritis [50]. Mitogen-activated protein kinase kinase kinase 5 (MAP3K5) is a member of the MAP kinase family that activates c-Jun N-terminal kinase (JNK) and p38 mitogen-activated protein kinases in response to an array of stresses such as oxidative stress, endoplasmic reticulum stress and calcium influx [51][52][53]. It has been implicated in the pathogenesis of chronic inflammatory conditions such as rheumatoid arthritis, cardiopulmonary diseases and diabetes [54,55]. Lastly, CXCR4 is a chemokine receptor for pro-inflammatory mediators such as macrophage migration inhibitory factor (MIF), which is a pleiotropic cytokine that antagonizes both apoptosis and premature senescence [56][57][58]. It has been shown to be elevated and implicated in the pathogenesis of COPD [59]. Together, the upregulation of these genes suggests that these non-polarized macrophages are also pro-inflammatory in nature and suppress inflammation resolution and tissue repair. Therefore, the abundance of non-polarized macrophages could significantly contribute to the chronic persistent lung inflammatory response in COPD. Taken together, the gene expression profile of these non-polarized macrophages suggests a perturbed pathogen recognition and processing function and a reduced ability to resolve inflammation and promote tissue repair (Figure S3, Figures 7A,B and 8).
There were limitations in this study. Firstly, we did not have a group of smokers with normal lung function to determine the independent effects of cigarette smoking on macrophage phenotypes. However, we found that the percentage of non-polarized macrophages in current smokers with COPD was higher than that of ex-smokers with COPD, suggesting that there is likely to be an effect of current smoking on macrophage plasticity. Given the relatively small number of current smokers in the COPD group and that there were no current smokers in the control group, the independent effect of active cigarette smoking on lung macrophage phenotype and function needs to be explored in a larger dedicated study. Secondly, we performed targeted profiling of 249 human genes known to be involved in different pathways of inflammation rather than unbiased sequencing. Thus, we may have missed other important processes or pathways related to macrophage subtypes in both health and disease. A priori, we chose the NanoString targeted gene array owing to its high sensitivity, specificity and accuracy for the targeted inflammatory genes. Lastly, the origin of these non-polarized macrophages is still unclear. We postulate two possible pathways: firstly, that they represent newly recruited blood monocytes (in contrast to resident macrophages) that have not fully matured with their full complement of functions, including their phagocytic function, or, alternatively, that these dual negative macrophages are senescent macrophages that have lost both their M1 and M2 markers, including their phagocytic abilities, and are destined for removal by apoptosis. Due to the lack of specific markers that can identify recently recruited macrophages in lung tissues, monocyte kinetic studies [60,61] are needed to clarify this important issue. A general classification of macrophages has been proposed [62]; however, whether this will be applicable to a unique population such as alveolar macrophages still requires clarification. Single cell genetic analysis may provide useful information in future studies, not just to better classify lung macrophage phenotypes [63], but also to give insight into the functional properties of these subpopulations of macrophages.
Conclusions
In this study, we identified a unique population of macrophages in human BAL samples of subjects with COPD that lack two important M1 (CD40) and M2 (CD163) markers and are poorly or non-polarized. These non-polarized macrophages constitute the majority of macrophages in COPD, and their abundance increases with increasing COPD severity. These non-polarized macrophages are also pro-inflammatory in nature, with poor phagocytic function. The reduced capability of these non-polarized macrophages to phagocytose bacteria could increase vulnerability to COPD exacerbations. Further studies are needed to determine whether these non-polarized macrophages can be therapeutically targeted, either to convert them to functional M1 or M2 macrophages or to remove them. Therapeutically targeting these macrophages may reduce COPD exacerbations and slow the progression of COPD.
Author Contributions: All authors have significantly contributed to the design or execution of these studies, analyzing the results, statistical analysis, and writing and/or editing of this manuscript. K.A. contributed significantly to executing the studies, writing the first draft of the paper and editing the manuscript; K.Y. contributed significantly to executing the studies and editing the manuscript; F.S.L.F. contributed significantly to designing and executing the studies, and editing the final manuscript; C.X.Y. contributed significantly to statistical analysis and editing the final manuscript; H.T. contributed significantly to executing the studies and editing the manuscript; B.S. contributed significantly to executing the flow cytometry and sorting studies and assisting in analyzing the results; B.A.W. contributed significantly to executing the flow cytometry and sorting studies and analyzing the results; C.W.T.Y. contributed significantly to study design and analysis as well as editing the final manuscript; J.M.L. contributed significantly to study design, execution and editing of the manuscript as well as financially supporting the study; D.D.S. contributed significantly to study design, supervising the execution of the study, financially supporting the study and editing the final manuscript; S.F.v.E. contributed significantly to study design, day-to-day supervision, analysis of results, writing and editing the manuscript, and financially supporting the study. All authors have read and agreed to the published version of the manuscript.
Antimicrobial Activity Evaluation and Phytochemical Screening of Silene macrosolen and Solanum incanum: Common Medicinal Plants in Eritrea.
Medicinal plants play great roles in the treatment of various infectious diseases. S.macrosolen and S.incanum are both important medicinal plants used traditionally for the treatment of infectious diseases in many places around Eritrea. The periodically emerging new and old infectious microorganisms greatly magnify the global burden of infectious diseases. The majority of emerging infectious events are caused by bacteria, which can be associated with the evolution of drug-resistant strains and overwhelming of the natural host defenses. Therefore, the search for new or alternative mechanisms to effectively treat and prevent infectious diseases, particularly bacterial diseases, has to be encouraged to effectively reduce this global burden. The objective of the study is to evaluate the in vitro antibacterial activities of the aqueous and solvent crude extracts of the stem and root of S.macrosolen, and the leaf and root of S.incanum, against standard strains of selected bacterial species, which can in turn provide a clue for the identification of the active constituents responsible for the antibacterial activity. The antibacterial activity of the aqueous (cold and hot water) and solvent extracts (ethanol, methanol, and chloroform) was evaluated on different selected bacterial strains (E.coli, S.aureus, and P.aeruginosa) using the agar well diffusion method on Mueller-Hinton agar at different concentrations, with positive controls (chloramphenicol and ciprofloxacin) and negative controls (sterile distilled water and 5% DMSO). The highest inhibition zones were observed for methanol-extracted S.macrosolen stem and chloroform-extracted S.incanum root against S.aureus at 400mg/ml, with 23mm and 24.5mm respectively. Methanol and cold aqueous extracted S.macrosolen stem also showed the highest inhibition of 26mm and 23mm diameter against P.aeruginosa and E.coli, respectively. The reason for the high inhibition zones could be the presence of secondary metabolites such as saponins, tannins, flavonoids, phenols and glycosides. The least activity was seen with the hot aqueous extract of each plant, with no inhibition of any of the bacteria. MIC and MBC were determined using the tube dilution and plating method for those plant extracts which showed the highest and most consistent inhibition zones at the different concentrations. The MIC and MBC of the cold aqueous extract of S.macrosolen stem were found at 25mg/ml and 50mg/ml respectively against both E.coli and P.aeruginosa, while the MIC of the chloroform-extracted S.incanum root was found at 50mg/ml; however, the MBC was not determined at the concentrations tested against S.aureus. The findings of this investigation are anticipated to contribute to reducing the burden of drug-resistant bacterial species.
Introduction
Humankind has been exposed to infection by microorganisms since before the dawn of recorded history [1]. In treating such infections, mainly bacterial, human beings have identified the use of different herbs since ancient times [2]. The knowledge on plant use is the result of many years of man's interaction and selection of the most desirable, the most vigorous and the most successful plants present in the immediate environment at a given time [3][4][5]. The continuous and urgent need to discover new antimicrobial compounds with diverse chemical structures and novel mechanisms of action has greatly increased due to the incidence of new and re-emerging infectious diseases. Another big concern is the development of resistance to the antibiotics in current clinical use [6]. Plants containing medicinal properties have been known and used in some form or other, even by primitive people. Owing to the realization of the toxicity associated with the use of antibiotics and synthetic drugs, which are too costly to be practical for the majority of diseases caused by microorganisms [7], developed countries are increasingly becoming aware of the fact that drugs from natural sources are safer and more affordable. Therefore, an upsurge in the use of products based on plants is evident, especially in the field of health care products [7]. Developing countries are rich in medicinal and aromatic plants (MAPs) but, due to difficulty in accessing efficient extraction technologies, value addition to this rich bioresource is difficult. In most cases, and particularly in very poor countries, the technologies used are inappropriate and not economical. The crucial problem is related to the quality of the product: primitive extraction technologies do not guarantee a stable and high-quality product and, in some cases, inappropriate technologies and procedures result in a contaminated product which has low market value. In order to assist developing countries to achieve the objective of using the rich MAP resource for producing value-added products, dissemination of knowledge of existing extraction technologies and of the latest developments in these technologies is essential [8]. Plant-based traditional medicine systems continue to play an essential role in health care, with about 80% of the world's inhabitants relying mainly on traditional medicines for their primary health care [9]. Studies on natural products are therefore aimed at determining the medicinal values of plants by exploration of existing scientific knowledge, traditional uses, and discovery of potential chemotherapeutic agents [10]. With the increasing demand for herbal medicinal products, nutraceuticals, and natural products for health care all over the world, medicinal plant extract manufacturers and essential oil producers have started using the most appropriate extraction technologies in order to produce extracts and essential oils of defined quality with the least variation from batch to batch. Such an approach has to be adopted by MAP-rich developing countries in order to meet the increasing requirement for good quality extracts and essential oils for better revenue generation within the country, as well as for capturing this market in developed countries. The basic parameters influencing the quality of an extract are the plant parts used as starting material, the solvent used for extraction, the manufacturing process (extraction technology) used with the type of equipment employed, and the crude-drug:extract ratio.
The use of appropriate extraction technology, plant material, manufacturing equipment, extraction method and solvent, and the adherence to good manufacturing practices certainly help to produce a good quality extract. From laboratory scale to pilot scale, all the conditions and parameters can be modeled using process simulation for successful industrial-scale production. With the advances in extraction technologies and better knowledge for maintaining quality parameters, it has become absolutely necessary to disseminate such information to emerging and developing countries with a rich MAP biodiversity for the best industrial utilization of MAP resources [8]. In Eritrea the use of herbs to treat different types of diseases is a widespread practice. However, little has been done to study the antimicrobial activity of Eritrean vegetation.
S.macrosolen is a glabrous, pale or somewhat glaucous perennial, branching from the stock; flowering stems erect, simple or forking, apparently a little viscid above, 2-3 ft high, often somewhat woody at the base, growing 60-90 cm tall. The plant is gathered from the wild for local medicinal use. Its range extends from Northeast Africa - Sudan to Ethiopia and Eritrea - south to Tanzania. It mainly grows in well-watered grassland, rocky places, and volcanic soils, at elevations of 1,800-3,300 meters. The root, known as 'Radix ogkert' or 'Sarsari', is used in the treatment of tapeworms; crushed and pounded root, about half an index finger in size, is drunk by the tea glass, and if a risk arises, a powder of Linum usitatissimum infused in water is taken to reduce the pain. The dried root of Silene macrosolen is smoked to keep snakes away and to treat the "Evil eye" [11]. In Eritrea it is traditionally used for the treatment of segri, gerefta (viral infection), and gonfi. The stem of Silene macrosolen is also used for fumigation of the house [12].
Solanum incanum is a densely stellate-tomentose shrub, 3-5 feet high. Leaves sinuate, ovate or ovate-elliptic, obtuse at the apex, unequal at the base, up to 7 inches (in) long and 6 in broad, dark green above, paler beneath, densely stellate-hairy on both surfaces. Although likely to be native to the three countries, Solanum incanum is invasive in parts of Kenya, Uganda and Tanzania [13]. It occurs mostly as a weed of disturbed and overgrazed areas and roadsides, but is also found in various types of woodland and along the margins of riverine and evergreen forest [14]. S.incanum is effective for the control of cattle ticks when used as a water extract (World Agro-forestry Center, 2014). It is one of the important traditional medicinal plants, valued largely for its analgesic properties. The fruits of S.incanum are used in Kenya for the treatment of skin mycotic infections. The leaves are used in composites and to lower the risk of high blood pressure, stroke and heart disease. Throughout tropical Africa, a sore throat, angina, stomach pain, colic, headache, painful menstruation and liver pain are treated with S.incanum.
In this study, aqueous (hot and cold), ethanol, methanol, and chloroform leaf and root extracts of S.incanum, as well as stem and root extracts of S.macrosolen, were tested in vitro for their antibacterial activity. The bacteria used were standard strains of E.coli, S.aureus, and P.aeruginosa.
Study Design
The study was an in vitro experimental study done in Asmara College of Health Sciences (ACHS) on certain Eritrean local plants to assess their antibacterial activity. Upon running the experiment, there were both control and experimental groups to maintain the reliability of the results. The extraction procedures were performed in the Clinical Chemistry laboratory of ACHS; media preparation and AST procedures were conducted mainly in the Microbiology department of the National Health Laboratory (NHL) and partly in the National Drug Quality Control (NDQC); and MIC and MBC determination were carried out in NDQC.
Collection of Plant Materials and Extraction
Information regarding the ethnobotanical uses of S.incanum and S.macrosolen was collected after interviewing traditional users and local people in the South-west sub-zone of Zoba Maekel around Villago and Bet-mekae, and in the Northern Red Sea region around Mai-Habar, respectively. The plant S.incanum (root and leaf) was collected from the fields of Villago and Bet-mekae, whereas S.macrosolen (stem and root) was collected from the Northern Red Sea region around Mai-Habar. The collection was made in February to March 2018 and the plants were identified at the Department of Plant Biology herbarium, EIT, Mai-nefhi, Eritrea by three senior botanists. The collected roots, stems and leaves were washed with water thoroughly to free them from debris. The roots of S.incanum and S.macrosolen were sliced and shade-dried for three weeks, and the leaf of S.incanum and the stem of S.macrosolen were shade-dried for two weeks. Then they were ground finely using a dry grinder, passed through a sieve and stored for further use in a tightly closed container.
Different extraction solvents, namely aqueous (cold and hot), ethanol, chloroform and methanol, were used for the preparation of the plant extracts to be used against bacteria. 100 grams of each powdered plant material was mixed with 2000 ml of the respective extraction solvent. The mixture was then kept in an agitator for 3 days with occasional shaking for the cold extract, whereas the hot aqueous extract was kept in a water bath at 80 °C for 4 hours with continuous stirring. The extracts were then filtered using Whatman filter paper and concentrated using a rotary evaporator so as to obtain the crude extract, and then they were kept in sterile bottles under refrigerated conditions until use.
The crude extract, which was concentrated using a rotary evaporator, was reconstituted with distilled water for the aqueous extract, 5% DMSO for the methanol and ethanol extracts, and 100% DMSO for the chloroform extract. This was done by weighing one gram of each dried powdered crude extract of leaves, stems and roots on an analytical balance and mixing it with the respective reconstituent to make a total solution of 2.5 ml, thereby preparing a stock solution of 400mg/ml, which is a standard concentration. Different concentrations were then prepared from the stock solution, i.e. 200mg/ml and 100mg/ml. The crude extract solutions from S.incanum and S.macrosolen were stored inside sterilized bottles and kept in the refrigerator at 4°C until used for the antibacterial test. The extracts were tested for sterility by plating them on Mueller-Hinton agar and incubating for 24 hrs at 37°C.
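As a quick check of the dilution arithmetic, the sketch below reproduces how 1 g of crude extract in a 2.5 ml total volume gives the 400 mg/ml stock and how the 200 mg/ml and 100 mg/ml working concentrations follow from two-fold dilution; the function name is illustrative only.

```python
def concentration_mg_per_ml(mass_mg, volume_ml):
    """Concentration of a crude-extract stock in mg/ml."""
    return mass_mg / volume_ml

stock = concentration_mg_per_ml(mass_mg=1000, volume_ml=2.5)   # 1 g in 2.5 ml -> 400 mg/ml
working = [stock / 2 ** i for i in range(3)]                   # [400.0, 200.0, 100.0] mg/ml
print(stock, working)
```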
Phytochemical Analyses
Phytochemical screening was done in order to detect the presence of plant constituents such as alkaloids, flavonoids, saponins, and tannins in the plant extracts. A portion of the extract was used to test for the following plant constituents: flavonoids, saponins, tannins, phenols, glycosides and proteins, using the methods described by Kokate (2001) and Harborne (1998).
Tests Procedures
Flavonoids
To 1 mL of the extract, a few drops of dilute sodium hydroxide were added. An intense yellow color was observed, which became colorless on the addition of a few drops of dilute HCl, indicating the presence of flavonoids. The presence of flavonoids was also confirmed by another test, i.e. a few drops of 10% ferric chloride solution were added to 1 mL of plant extract. A green or blue color indicated the presence of a phenolic nucleus.
Saponins
In a test tube containing about 5 ml of plant extract, a drop of sodium bicarbonate was added. The mixture was shaken vigorously and kept for 3 minutes. A honeycomb-like froth formation confirms the presence of saponins.
Tannins
Five ml of the plant extract and a few drops of 1% lead acetate were mixed. A yellow precipitate was formed, which indicates the presence of tannins.
Phenols
(a) Two ml of distilled water followed by a few drops of 10% aqueous FeCl3 solution were added to 1 ml of the extract. Formation of a blue or green color indicates the presence of phenols.
(b) One ml of the plant extract was diluted to a 5 ml solution with distilled water and to this a few drops of 1% aqueous lead acetate solution were added. A yellow precipitate was formed, which indicates the presence of phenols.
Glycosides
A small amount of the extract was dissolved in 1 ml of water and aqueous sodium hydroxide solution was added. Formation of a yellow color indicates the presence of glycosides.
Acquisition of bacterial strains
The study focused on three standard strains, comprising a gram-positive bacterium (S.aureus ATCC 25923) and gram-negative bacteria (E.coli ATCC 25922 and P.aeruginosa ATCC 27853), with their specific American Type Culture Collection (ATCC) numbers. They were collected from the Department of Microbiology in NHL and Fred Hollows. The bacterial species were cultured on selective media for confirmation of their identity. Then they were maintained on nutrient agar slopes and stored in the refrigerator at a temperature of 4°C.
Preparation and standardization of Bacterial Inoculums
Standardization of bacterial inoculums was done by picking five colonies of each organism into normal saline to form the bacterial suspension, which should be used within 15 min. The microbial inoculum was standardized to 0.5 McFarland. In microbiology, McFarland standards are used as a reference to adjust the turbidity of bacterial suspensions so that the number of bacteria will be within a given range, i.e. 1.5 × 10^8 CFU/ml. Original McFarland standards were made by mixing specified amounts of barium chloride and sulphuric acid together. Mixing the two compounds forms a barium sulfate precipitate, which causes turbidity in the solution. A 0.5 McFarland standard is prepared by mixing 0.05 ml of 1.175% barium chloride dihydrate (BaCl2.2H2O) with 9.95 ml of 1% sulfuric acid (H2SO4). The standard can be compared visually to a suspension of bacteria in sterile saline or nutrient broth [15].
Preparation of Nutrient Media
Mueller-Hinton agar, nutrient agar and broth, MacConkey agar, and mannitol salt agar were used according to Kumar et al. 2011. The appropriate amounts of the different media were mixed with distilled water according to the instructions written on the media bottles and then sterilized in an autoclave at 121 °C and 15 psi pressure for 15 minutes. The sterilized media were allowed to cool to a temperature of about 50 °C and dispensed into Petri dishes inside the safety cabinet to make media of 4 mm thickness. The solidified plates were kept in the refrigerator at about 2-8 °C. The plates were then ready to be used for the antibacterial studies.
Test for Antibacterial Activity of the Extracts in agar well diffusion assay
Sterile Mueller-Hinton agar plates were prepared and wells of about 6.0 mm diameter were aseptically punched with a sterile cork borer on each agar plate to make 5 wells per plate. A sterile cotton swab was used to spread the inoculum evenly on the surface of the agar, with the excess drained off while spreading the bacterial inoculum. The plates were left on the bench for 1 hour so that the inoculum would diffuse into the agar; then a 100 µl volume of varying concentrations of the extracts (100mg/ml, 200mg/ml, and 400mg/ml) was dropped into each of the appropriately labeled wells. A negative control, i.e. 5% and 100% DMSO as well as distilled water, was set up for each plate by adding the same volume as that of the extract, and positive controls of ciprofloxacin (for Gram-positive) and chloramphenicol (for Gram-negative) were used. After incubation for 24 hr, the zone of clearance around each well was measured using a vernier caliper. The zone of clearance measurement was taken from the edge of the zone to the point where the extract was applied. Hence the diameter of the zone of inhibition, which represents antibacterial activity, was measured in millimeters.
Determination of Minimum inhibitory concentration (MIC) and Minimum bactericidal concentration (MBC)
The MIC was determined using the tube dilution broth method. This was done when the plant extract showed strong antibacterial activity in the agar well diffusion method consistently at the three different concentrations. The tubes were filled with 0.5 ml of nutrient broth. The extract was prepared by taking 1 g of the plant extract and mixing it with 2.5 ml of 5% DMSO for complete dissolution of the extract to prepare a concentration of 400mg/ml [16]. Then 0.5 ml of the plant extract suspension was dispensed into the first tube, reducing the concentration by half, before serial dilutions were done by transferring 0.5 ml of the nutrient broth containing the extract from the first tube to the second tube; the procedure was repeated until the last tube (eighth tube). 0.5 ml of the bacterial suspension (0.5 McFarland) was then dispensed into each tube. One tube (without extract or drug) was used as a negative control, whereas tubes with antibiotics, chloramphenicol (for Gram-negative) and ciprofloxacin (for Gram-positive), were used as positive controls.
The tubes were incubated aerobically for 24 hours at 37°C. The MIC values were determined as the lowest concentrations of the extract capable of inhibiting bacterial growth, judged by the turbidity of the tubes, and were then confirmed by plating the tubes. The MBC was determined by plating the tubes which did not show turbidity in nutrient broth. The lowest concentration of the plant extract that did not yield any colony on the solid media after subculturing and incubating for 24 hours was taken as the MBC [17].
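The bookkeeping for the two-fold tube dilution can be summarized as below; the concentrations shown are the nominal per-tube values before the equal volume of inoculum is added (which halves them again), and the turbidity readings in the example are hypothetical.

```python
def tube_concentrations(stock_mg_per_ml=400.0, n_tubes=8):
    """Nominal extract concentration in each tube of a two-fold serial dilution
    (tube 1 = stock diluted 1:2 into broth, tube 2 = tube 1 diluted 1:2, ...)."""
    return [stock_mg_per_ml / 2 ** i for i in range(1, n_tubes + 1)]

def read_mic(concentrations, growth_observed):
    """MIC = lowest concentration whose tube shows no visible growth (no turbidity)."""
    clear = [c for c, grew in zip(concentrations, growth_observed) if not grew]
    return min(clear) if clear else None

concs = tube_concentrations()                                   # [200.0, 100.0, 50.0, 25.0, ...]
growth = [False, False, False, False, True, True, True, True]   # hypothetical turbidity readings
print(read_mic(concs, growth))                                  # -> 25.0 mg/ml
```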
Statistical Analysis
All analyses were undertaken in duplicate and each experiment was repeated two times. Quantitative values are presented as means ± standard deviation (SD). Mixed-design ANOVA was used to evaluate the statistical differences between concentrations of plant extracts, followed by the Tukey HSD post-hoc test. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 26.0 software, and Microsoft Excel 2007. Differences at P < 0.05 were considered significant.
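For a post hoc comparison of mean inhibition zones between extract concentrations, a one-way Tukey HSD sketch using statsmodels is shown below; it ignores the repeated-measures structure handled by the mixed-design ANOVA in SPSS, and the zone readings are hypothetical.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical duplicate inhibition-zone readings (mm) at each extract concentration
zones = np.array([23.0, 23.0, 15.0, 15.2, 8.0, 7.8])
concentration = ["400 mg/ml", "400 mg/ml", "200 mg/ml", "200 mg/ml", "100 mg/ml", "100 mg/ml"]

# Tukey HSD post hoc comparison of mean inhibition zones between concentrations
print(pairwise_tukeyhsd(endog=zones, groups=concentration, alpha=0.05).summary())
```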
Phytochemical Analysis
Phytochemical screening of Solanum incanum was performed and all solvent extracts were found to be positive for saponins, whereas all extracts were negative for flavonoids except the methanol extract of Solanum incanum root and the chloroform extract of Solanum incanum leaf; likewise, tannins, phenols, and glycosides were positive in most of the extracts and negative in some of the extracts, as shown in Table 2. The phytochemical screening of Silene macrosolen was also performed; glycosides were positive for all the extracts except the hot aqueous extract of Silene macrosolen root, while others such as flavonoids, phenols, tannins and saponins showed various results with the different extracts, as can be seen in the table below. Phytochemical analysis was not performed for the chloroform extract of Silene macrosolen root. See Table 3. (In the tables, c is 400mg/ml, b is 200mg/ml, a is 100mg/ml; M±SD is mean plus or minus standard deviation.) Against E.coli, the cold aqueous extract of S.incanum root showed the highest zone of inhibition (p-value<0.05), with 16mm at 400mg/ml and no inhibition zone at 100mg/ml, followed by the chloroform extract (p-value<0.05) with zones of inhibition of 12.5mm, 11.5mm and 10mm at 400mg/ml, 200mg/ml and 100mg/ml respectively; however, the ethanol, methanol and hot aqueous extracts of S.incanum root did not show any inhibition against E.coli at the concentrations tested. Methanol-extracted S.macrosolen stem showed the highest zone of inhibition (p-value<0.05) of 22.5mm and 8mm at concentrations of 400mg/ml and 200mg/ml respectively, but it did not show any inhibition zone at 100mg/ml; this was followed by the cold aqueous extract (p-value<0.05) with a zone of inhibition of 18.5mm at 400mg/ml, 7mm at 200mg/ml and no inhibition at 100mg/ml. The ethanol extract of S.macrosolen stem showed an inhibition zone of 8mm at 400mg/ml only, whereas the chloroform and hot aqueous extracts of S.macrosolen stem showed no inhibition at any of the concentrations tested. Cold aqueous extracted S.macrosolen root showed the greatest activity (p-value<0.05), with inhibition zones of 11mm, 9mm, and 0mm at 400mg/ml, 200mg/ml, and 100mg/ml respectively, followed by the methanol extract (p-value<0.05) with a 12mm inhibition zone at 400mg/ml but no inhibition at 200mg/ml and 100mg/ml, while the ethanol and hot aqueous extracts of Silene macrosolen root did not exhibit any inhibition zone at the different concentrations tested. The chloroform extract was not tested against E.coli. See Tables 8-11. The plant extracts which showed strong antibacterial activity in the agar well diffusion method consistently at the three different concentrations were tested for their MIC and MBC against selected bacterial strains. Cold aqueous extracted Silene macrosolen stem showed a MIC at 25mg/ml and MBC at 50 mg/ml against both E.coli and P.aeruginosa. The chloroform extract of Solanum incanum root showed a MIC at 50mg/ml; however, the MBC was not determined against S.aureus. Negative controls were also run to avoid contamination-associated errors. See Table 16.
Discussion
In the present study, two plants were subjected to cold extraction to prepare different crude extracts, and subsequently the percentage yield was calculated. Each crude extract was then dissolved in its respective reconstituent and applied against standard bacterial strains to find out their antibacterial activity by measuring the zone of clearance. To evaluate the MIC and MBC, a broth dilution assay was used to discover the lowest concentration at which the extract shows a bacteriostatic and bactericidal effect, respectively, on the bacterial strains. Phytochemical tests were also performed to find out the bioactive compounds of the plant extracts.
The antibacterial activities of the ethanol, chloroform, methanol and aqueous extracts of S.incanum and S.macrosolen were examined against standard bacterial strains of S.aureus, E.coli and P.aeruginosa, and their activity was assessed by measuring inhibition zones. The effect of concentration on the antibacterial activity of the plants was assessed, and their antibacterial activity increased with increasing concentrations of the crude extracts. In this study three different concentrations of the extracts were used (400mg/ml, 200mg/ml and 100mg/ml). Of these concentrations, the highest effect was seen at 400mg/ml, followed by 200mg/ml, and the least effect was seen at 100mg/ml, with a statistically significant p-value (p<0.05). For example, the inhibition zone of the 400mg/ml cold aqueous extract of Silene macrosolen against S.aureus was 23±0.00mm. However, it was significantly reduced to 15±0.00mm and 8±0.00mm at 200mg/ml and 100mg/ml respectively. Similar results have been reported by previous researchers in Eritrea, indicating that as the concentration of the plant powder is reduced to half, the sensitivity is also reduced to half [18]. A comparison of the antibacterial activity of the different solvent extracts was done for each plant material against the respective bacteria. Regarding the antibacterial activities against S.aureus, the methanol and cold aqueous extracts of S.incanum leaf (16mm and 15mm respectively), S.macrosolen root (17.5mm and 16.5mm respectively) and S.macrosolen stem (23mm for both) showed the highest inhibition zones (p<0.05) at 400mg/ml. However, for S.incanum root, the chloroform extract showed the highest inhibition (p<0.05), with 24.5mm at 400mg/ml. Regarding the antibacterial activities against E.coli, most solvent extracts of S.incanum leaf did not show any inhibition zone except the cold aqueous extract, with a 10mm diameter at 400mg/ml. The cold aqueous extract showed the highest (p<0.05) inhibition zone for S.incanum root, with 16mm. The cold aqueous and methanol extracts showed the highest (p<0.05) inhibition for both S.macrosolen stem (18.5mm and 22.5mm respectively) and root (11mm and 12mm respectively). This result is consistent with the study by Mamta K. et al. (2011), which concluded that the aqueous root extract of W.somnifera holds excellent potential as an antibacterial agent against E. coli. Regarding the antibacterial activities against P.aeruginosa, the cold aqueous extract showed the highest effect (p<0.05), with 12mm at 400mg/ml, for S.incanum leaf. For S.incanum root, the chloroform extract showed the highest effect (p<0.05), with 13.5mm. For S.macrosolen stem, the methanol extract showed the highest antibacterial activity (p<0.05), with 26mm at 400mg/ml. Similar to the previous results, the methanol and cold aqueous extracts exhibited the highest (p<0.05) antibacterial effect for S.macrosolen root, with a zone of inhibition of 15mm each at 400mg/ml. From the overall results, almost all the methanol and cold aqueous extracts of each plant material were found to be the most active against all the experimental bacteria. This is supported by the results of the phytochemical tests, in which the methanol and cold aqueous extracts were positive for almost all the selected phytochemical tests. The yield percentages of these solvent extracts were also found to be the highest. These findings may reflect the availability of the active ingredients in these plant extracts, making them the most active antibacterial plant extracts.
These results are supported by a study conducted in India, which concluded that aqueous and methanol root extracts of Withania somnifera might be exploited as a natural drug for the treatment of several infectious diseases caused by these organisms and could be useful in understanding the relations between traditional cures and current medications [15]. An overall comparison of the potency of the plant materials was also made. S.aureus was highly sensitive to the chloroform extract of S. incanum root and the methanol and cold aqueous extracts of S. macrosolen stem at 400mg/ml. In addition, E.coli and P.aeruginosa were highly sensitive to the methanol extract of S. macrosolen stem at 400mg/ml. Therefore, S. macrosolen stem was found to be the most potent plant material. Almost all crude extracts showed good antibacterial activity against S. aureus, which is in agreement with a study done on extracts of O.limbata [19]. Generally, S.aureus, E.coli and P.aeruginosa showed good sensitivity toward the cold aqueous and methanol extracts, but the gram-negative strains E. coli and P.aeruginosa were not sensitive to most of the other plant extracts. This could be due to several possible reasons, one being that a distinctive feature of gram-negative bacteria is the presence of a double membrane surrounding each bacterial cell. Although all bacteria have an inner cell membrane, gram-negative bacteria have a unique outer membrane. This outer membrane excludes certain drugs and antibiotics from penetrating the cell, partially accounting for why gram-negative bacteria are generally more resistant to antibiotics than gram-positive bacteria. This may explain why the extracts showed no effect on the resistant gram-negative bacteria. Overall, the hot aqueous extract of each plant did not show any activity against any of the bacteria. This could be because the bioactive compounds of the plants may have been destroyed by the high temperature used during extraction.
MIC and MBC are very essential while evaluating the antimicrobial activity of plant extracts as a guide towards predicting the efficacy of a promising product. If pharmacokinetic and pharmacodynamic (PKPD) principles are met by careful selection of a specific promising antimicrobial extract given at an appropriate dosage, this will relate to clinical cure, eradication of the carrier status of a specific microorganism, as well as prevention of selection of resistance. Interpretation of the MIC gives an understanding of the mode of activity of a given plant extract [20,21].
Phytochemical properties of Solanum incanum
The results of our study indicate that the majority of secondary metabolites, such as flavonoids, alkaloids, saponins, tannins, phenols and glycosides, are contained in S.incanum when extracted with different solvents, which is similar to the study conducted by Tewelde and Ghebriel (2017), where the extract of a member of the Solanaceae family showed the presence of the majority of the bioactive compounds. So this medicinal plant holds promise as a source of pharmaceutically important phytochemicals. S.incanum possesses numerous biologically active compounds which could serve as a potential source of natural drugs in herbal medicine [22]. It was reported that most plants of the Solanaceae contain alkaloids, tannins, steroids, saponins, as well as phenols [23]. A study held in Kenya demonstrated that different parts of the S. incanum plant have almost the same phytochemicals present, which are attributed to its antimicrobial activity, and hence there is a need for more studies to be done on the root and leaf extracts of S. incanum to evaluate their antimicrobial and antifungal properties, following work on the antimicrobial effect of this plant's fruit [24]. The present study is also supported by Tewelde and Ghebriel (2017), which indicates that the extract of Solanum incanum showed the presence of saponins, which have healing properties as a natural blood cleanser and expectorant. This implies that the presence of certain compounds in the plants, like saponins and glycosides, might be responsible for their antibacterial activity. Flavonoids were absent in the majority of the plant extracts but present in the methanolic root extract and the chloroform leaf extract. The flavonoid compounds in plants have been reported to exert multiple biological effects, including antioxidant and free radical scavenging abilities, anti-inflammatory and anti-carcinogenic effects, etc. [22]. Our study revealed the presence of tannins in almost all the extracts of S.incanum, which may be responsible for its antibacterial activity; similar results were obtained by Tewelde and Ghebriel (2017). The results of this research also show the presence of phenols in the methanol, ethanol and chloroform extracts but their absence in both the hot and cold aqueous extracts. This goes in parallel with the study by Tewelde and Ghebriel (2017), which regarded the phenols found in fruit extracts of Solanum incanum as functional food components that make a significant contribution to the health effects of plant-derived products. The results of this study show that glycosides were absent in almost all the extracts except the cold aqueous extracts and the methanol leaf extract, which can be attributed to their higher antibacterial activity. This agrees with the previous study by Kemei E. and Ndukui J. (2017), in which flavonoids, glycosides and terpenoids were weakly present, although terpenoids were not tested in the present study.
Flavonoids were absent in all the extracts of Silene macrosolen except the ethanol and methanol root extracts. This study reveals the presence of saponins in all the extracts of this plant, indicating that it too has healing properties as a natural blood cleanser and expectorant. Phenols were absent in all the extracts except the ethanol and methanol extracts, which suggests that both the stem and root methanol extracts of this plant contribute significantly to its antibacterial activity.
Conclusion
On the basis of the antibacterial assay of this study, S. aureus was found to be more susceptible to the employed plant extracts than E. coli and P. aeruginosa. The methanolic and cold aqueous extracts of S. macrosolen stem revealed a strong presence of saponins and glycosides, which may contribute to the antimicrobial activities of these extracts against the tested organisms. It was also evident that the methanolic extract of S. macrosolen stem showed greater activity than the other extracts. This supports the continued use of S. macrosolen stem in Eritrean communities for the management of wound infections caused by S. aureus. Moreover, this study provides scientific support for the traditionally used medicinal plants as a potential source of new drugs for the treatment of bacterial infections.
The future of plant extracts and plant products is promising, because they are less expensive and less hazardous to the environment. Research that strengthens the documentation of indigenous knowledge, which contributes to drug development and to future self-reliance, is also strongly recommended. There is consequently a dire need to scientifically validate the claimed medical value of plants commonly used in local communities. This study should therefore lead to further research on the isolation and identification of the active compounds from these plants using chromatographic and spectroscopic techniques, so that they can be developed and standardized in a recommendable dosage form.
Limitations
Even though the present study was completed successfully, certain additional tests could not be performed that would have added to its significance. Because blood agar medium was not available, antimicrobial susceptibility testing (AST) could not be performed on the standard strain of Streptococcus pyogenes. The plant extracts could not be tested for the presence of alkaloids due to a shortage of reagents. AST and phytochemical testing were not performed on the chloroform extract of Silene macrosolen root because it could not be filtered, as the chloroform completely degraded the plant material.
Competing interests
The authors declare that they have no competing interests.
On vanishing class sizes in finite groups
Let $G$ be a finite group. An element $g$ of $G$ is called a vanishing element if there exists an irreducible character $\chi$ of $G$ such that $\chi(g) = 0$; in this case, we say that the conjugacy class of $g$ is a vanishing conjugacy class. In this paper, we discuss some arithmetical properties concerning the sizes of the vanishing conjugacy classes in a finite group.
Introduction
Many authors have investigated the relationship between the structure of a finite group G and arithmetical data connected to G. The arithmetical data can take various forms: for example, authors have considered the set of conjugacy class sizes, or the set of character degrees. The link between these different sets is also of interest, as demonstrated by the following result by C. Casolo and S. Dolfi. Suppose p and q are distinct primes and pq divides the degree of some irreducible complex character of G; then pq also divides the size of some conjugacy class of G [5, Theorem A]. As an important step in the proof of this result, the authors consider groups for which p and q both divide a conjugacy class size but pq does not, and they show that such groups are {p, q}-solvable [5,Theorem B(i)].
Recently, instead of considering all conjugacy class sizes, authors have been considering a subset of conjugacy class sizes "filtered" by the irreducible characters, namely, the set of vanishing conjugacy class sizes (see [3], [4], [6] and also [7] for related properties of vanishing elements). An element g ∈ G is called a vanishing element if there exists an irreducible character χ of G such that χ(g) = 0, and the conjugacy class of such an element is called a vanishing conjugacy class of G. Motivated by Casolo and Dolfi's results, we investigate some arithmetical properties of the set of vanishing conjugacy class sizes.
This context is neatly portrayed by the prime graph of G for class sizes. Recall that, given a finite nonempty set of positive integers X, the prime graph on X has vertex set defined as the set of all prime numbers that are divisors of some element in X, and edge set consisting of pairs {p, q} such that pq divides some element of X. When X = {|x G | : x ∈ G} is the set of conjugacy class sizes of a finite group G, we denote by Γ(G) the prime graph on X, and by V(G) the set of vertices of Γ(G). Also, we write Γ v (G) and V v (G) for the corresponding objects in the case when X is the set of vanishing conjugacy class sizes of G.
In [6] the authors investigate when, for a finite group G, a prime p is not an element of V v (G); they prove that such a group G is p-nilpotent, with abelian Sylow p-subgroups. Note that V v (G) can be strictly smaller than V(G), as shown for instance by the symmetric group on three objects. This can actually occur also for nonsolvable groups (see [6,Example 4.1]). However, our first result gives a condition to ensure this does not happen.
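As a concrete illustration of this remark (our own elementary check, not part of the original paper), the symmetric group on three objects can be worked out directly:

```latex
% Elementary check (ours): G = Sym(3).
% Conjugacy class sizes: 1 (identity), 3 (transpositions), 2 (3-cycles),
% so V(G) = {2,3} and Gamma(G) has no edge, since no class size is divisible by 6.
% The only irreducible character taking the value 0 is the degree-2 character chi,
% with chi(1) = 2, chi((12)) = 0, chi((123)) = -1;
% hence the transpositions form the unique vanishing class, of size 3, and
\[
\mathrm{V_v}\bigl(\mathrm{Sym}(3)\bigr)=\{3\}\ \subsetneq\ \mathrm{V}\bigl(\mathrm{Sym}(3)\bigr)=\{2,3\}.
\]
```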
Proposition. Let G be a finite group, and suppose G has a nonabelian minimal normal subgroup. Then V(G) = V v (G).
Thus, for the two vertex sets to be the same in a nonsolvable group, it seems important "where" the nonsolvability of the group lies.
Still in the spirit of the work by Casolo and Dolfi, we carry out an investigation concerning the edges of the vanishing graph. In particular, is the "vanishing version" of [5, Theorem B(i)] true? That is, if the edge {p, q} is missing in the vanishing graph, but both p and q are vertices, is the group {p, q}-solvable?
As our main result shows, the answer is affirmative under the same assumptions as in the above Proposition.
Theorem A. Let G be a finite group, and suppose G has a nonabelian minimal normal subgroup. If p and q are in V(G), but there is no vanishing conjugacy class of G whose size is divisible by pq, then G is {p, q}-solvable.
As shown by [6,Example 4.1], Theorem A fails in general if G does not have a nonabelian minimal normal subgroup. An important step in the proof of the above result is the following Theorem B, that is the vanishing version of [5,Theorem 9].
Theorem B. Let G be a finite group with trivial Fitting subgroup. Then every prime divisor of |G| is in V v (G), and Γ v (G) is a complete graph.
We point out that another key ingredient in the proof of Theorem A is Corollary 4.4, that turns out to be a useful tool in locating vanishing elements, and may be of interest in its own right.
As an immediate consequence of Theorem A, we get the following p-solvability criterion. Recall that a vertex of a graph is called complete if it is adjacent to all the other vertices.
Corollary. Let G be a finite group and p a prime. Suppose G has a nonabelian minimal normal subgroup. If p is not a complete vertex of Γ v (G), then G is p-solvable.
Throughout this paper, every group is assumed to be a finite group.
Preliminaries
In this section we gather together previously known results that will be of use. We denote the set of all vanishing elements of the group G by Van(G), whereas, as customary, π(G) denotes the set of prime divisors of |G|.
Proof. We combine two results. Firstly note that (Here, for i ∈ {1, 2}, G Γi denotes the setwise stabilizer of Γ i in G.) Recall, an irreducible character χ of G is said to have q-defect zero for some prime q, if q does not divide |G|/χ(1). If χ is such a character and g is an element of G with order divisible by q then χ(g) = 0, i.e. g is a vanishing element [15,Theorem 8.17]. Thus the following is useful.
Theorem 2.4. [12, Corollary 2] Let G be a nonabelian simple group and q a prime divisor of |G|. Then G has an irreducible character of q-defect zero unless one of the following holds.
(a) The prime q is 2 and G is isomorphic to one of M 12 , M 22 , M 24 , J 2 , HS, Suz, Ru, Co 1 , Co 3 , BM, or Alt(n) for various values of n ≥ 7.
(b) The prime q is 3 and G is isomorphic to one of Suz, Co 3 , or Alt(n) for various values of n ≥ 7.
The above result will often be used in conjunction with the following.
Lemma 2.5. [7, Lemma 2.7] Let G be a group, N a normal subgroup of G and q a prime divisor of |N |. If N has an irreducible character of q-defect zero, then every element of N of order divisible by q is a vanishing element of G.
As an immediate consequence of the two previous statements, we get the following result.
Proposition 2.6. [7, Corollary 2.9] Let M be a nonabelian minimal normal subgroup of a group G and suppose p is a prime divisor of |M |. If p ≥ 5 then every element of M with order divisible by p is a vanishing element of G.
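For instance (an illustration of ours, not an example taken from the paper), Theorem 2.4 and Lemma 2.5 combine as follows for the alternating group of degree 5:

```latex
% Illustration (ours): a character of q-defect zero forcing vanishing elements.
% Alt(5) has order 60 = 2^2 * 3 * 5 and an irreducible character chi of degree 5.
\[
\frac{|\mathrm{Alt}(5)|}{\chi(1)}=\frac{60}{5}=12,\qquad 5\nmid 12,
\]
% so chi has 5-defect zero and chi(g) = 0 for every g of order divisible by 5;
% thus every 5-cycle is a vanishing element of Alt(5), and of any group
% containing Alt(5) as a normal subgroup (Lemma 2.5).
```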
Finally, we will freely use without references some basic facts of Character Theory such as Clifford Correspondence, properties of coprime actions, and elementary properties of conjugacy class sizes; for instance, recall that a prime p does not divide the size of any conjugacy class of a group G if and only if G has a central Sylow p-subgroup.
Theorem B
We start with a simple lemma.
Lemma 3.1. Let G be a group, M a normal subgroup of G with trivial centre and C = C G (M ). Then for any g ∈ C and h ∈ M the conjugacy class size |(gh) G | is divisible by both |g C | and |h M |. Furthermore if h or g ∈ Van(G) then gh ∈ Van(G).
The following lemma helps us to identify vanishing elements.
Lemma 3.2. Suppose G has a unique minimal normal subgroup M which is nonabelian. Then for all p ∈ π(G) there exists g ∈ Van(G) such that p divides |g G |. Thus π(G) = V v (G). Furthermore, g can be chosen to lie in M .
Proof. As M is nonabelian, we have M = S 1 ×· · ·×S n where every S i is isomorphic to a nonabelian simple group S. We will denote by N the kernel of the action of G by conjugation on {S 1 , . . . , S n }.
Let p be a prime divisor of |G|; our aim is to find an element g ∈ M ∩ Van(G) such that p divides |g G |. We will treat separately the cases p | |M | and p ∤ |M |.
Let us start from the latter case. In this situation, p divides |G/M |, and we first suppose that p actually divides |G/N |. Using Proposition 2.3, we can choose two nonempty subsets Γ 1 , Γ 2 of Ω = {S 1 , . . . , S n }, such that Γ 1 ∩ Γ 2 = ∅ and p ∈ π(|G : G Γ1 ∩ G Γ2 |). Certainly we can find nontrivial elements u and v in S of different orders and such that the order of v is divisible by some prime r greater than 3. For S α ∈ Γ 1 and S β ∈ Γ 2 let u α and v β correspond, respectively, to u and v (via the isomorphisms S i ≃ S), and set g to be the element in M given by g = Πu α Πv β , where the products run over S α ∈ Γ 1 and S β ∈ Γ 2 . As r divides the order of v, it also divides the order of g and hence g is vanishing in G by Proposition 2.6. Let x be an element in C G (g), and consider S α ∈ Γ 1 . Let x act on M by conjugation; since M is nonabelian, the factors of g are permuted.
As the orders of u and v are different, x can only map factors of g corresponding to u to factors corresponding to u; hence x stabilizes both Γ 1 and Γ 2 setwise, so C G (g) ≤ G Γ1 ∩ G Γ2 and p divides |g G |, as wanted. Suppose now that p does not divide |G/N |, so p divides |N |. Take i in {1, ..., n}, and adopt the bar convention for the factor group N/C N (S i ); then N is an almost simple group with socle S i ≃ S. Moreover, since ∩ i C N (S i ) = C N (M ) = 1, the subgroup N can be embedded in the direct product of the factor groups N/C N (S i ) and, as these factor groups all have the same order (because G transitively permutes the S i ), it is easy to see that p divides |N |. However, p does not divide |M |, and thus it does not divide |S i |; as a consequence, S is a simple group of Lie type (see for instance [11]). Now, by [5, Lemma 6(a)], there exists an element g ∈ S i such that p divides |g N |, which is in turn a divisor of |g G | (here g can be chosen in S i , hence in M ). As g is a vanishing element of G by Theorem 2.4 and Lemma 2.5, we are done in this case as well.
Finally, let us assume p | |M |. If M has an irreducible character of q-defect zero for every prime divisor q of |M |, then every nontrivial element of M lies in Van(G) by Lemma 2.5. Now, just take an element g ∈ S 1 such that p | |g S1 | = |g M |, and we are done. On the other hand, if there exists a prime q ∈ π(M ) such that M does not have an irreducible character of q-defect zero (so the same holds for S 1 ), then we apply Lemma 2.2 in [6]: there exists an element g of S 1 whose conjugacy class in M has size divisible by all primes in π(M ), and there exists an irreducible character θ of S 1 such that θ(g) = 0 and θ extends irreducibly to Aut(S 1 ). Moreover, Lemma 5 of [1] yields that θ × θ × · · · × θ ∈ Irr(M ) extends irreducibly to G, and thus g is vanishing in G.
We are now ready to prove the proposition mentioned in the Introduction, that we state again.
Proposition. Let G be a group, and suppose G has a nonabelian minimal normal subgroup. Then V(G) = V v (G).
Proof. Let M be a nonabelian minimal normal subgroup of G, and set C = C G (M ). Then G = G/C has a unique minimal normal subgroup, which is isomorphic to M . Suppose p ∈ V(G). If p ∈ π(G) then p ∈ V v (G) by Lemma 3.2, and hence p ∈ V v (G) by Lemma 2.1. Thus we can assume that p does not divide |G|. If C does not have a central Sylow p-subgroup, then there exists g ∈ C with p dividing |g C |. As M is nonabelian, there exists h ∈ M ∩ Van(G) by Proposition 2.6. Now apply Lemma 3.1 to get that gh is vanishing in G with conjugacy class size divisible by p, thus p ∈ V v (G) as required. Finally, suppose that C has a central Sylow p-subgroup P . Then P , which is a Sylow p-subgroup of G as well, is (abelian and) normal in G. If p ∈ V v (G) then, by [6, Theorem A], G has a normal p-complement, thus P is central in G. But this would contradict p ∈ V(G), and the proof is complete.
Thus, when F(G) = 1 we have established that V v (G) = π(G). We now turn our attention to edges in the vanishing graph and, after the following proposition, we will prove Theorem B. Proposition 3.3. Let G be an almost simple group with socle S, and let p, q be distinct primes in π(G). Then p and q are adjacent vertices of Γ v (G). Moreover, there exists an element g ∈ S such that pq divides |g G | and g is vanishing in G.
Proof. If p, q ∈ π(S), then p and q are adjacent vertices of Γ(S) by [5,Theorem 9]. So there exists g ∈ S with pq dividing |g S | and thus |g G |. If S has an irreducible character of q-defect zero for all primes q, then g is vanishing in G by Lemma 2.5 and we are done. Thus, we can assume S does not have an irreducible character of q-defect zero for some prime q, and the same argument as in the last paragraph of Lemma 3.2 yields the conclusion.
If p or q does not divide |S|, then S is a simple group of Lie type. We note that in the proof of [5, Proposition 7] the authors produce, in each case, an element of S with conjugacy class size divisible by p and q in G. Moreover, since S is of Lie type, this element will be vanishing in G by Lemma 2.5.
Theorem B. Let G be a group with trivial Fitting subgroup. Then every prime divisor of |G| is in V v (G), and Γ v (G) is a complete graph.

Proof. Let G be a counterexample of minimal order to our statement; thus, F(G) = 1 and there exist two distinct prime divisors p and q of |G| such that {p, q} is not an edge of Γ v (G). Let M = S 1 × · · · × S n be a minimal normal subgroup of G (where the S i are all isomorphic to a nonabelian simple group S) and let N be the kernel of the action of G on {S 1 , . . . , S n }. Also, denote C G (M ) by C and let G = G/C. We proceed through various steps.
(i) C is trivial.
Observe that G = G/C has a unique minimal normal subgroup M ∼ = M , and therefore F(G) = 1. For a proof by contradiction, let us assume C ≠ 1.
If {p, q} ⊆ π(G) then, by the minimality of G, we get that {p, q} is an edge of Γ v (G); thus Lemma 2.1 yields that {p, q} is an edge of Γ v (G) as well, against our assumptions.
Suppose now p, q ∈ π(C). As F(C) ≤ F(G) = 1, there exists g ∈ C with pq dividing |g C | by [5,Theorem 9]. Choose h ∈ M with order divisible by some prime greater than 3; then h is vanishing in G by Proposition 2.6. An application of Lemma 3.1 yields that gh is vanishing in G with conjugacy class size divisible by pq, again a contradiction.
Thus, we can assume that p divides |G| and q does not. As G has a unique minimal normal subgroup M , by Lemma 3.2, there exists an element g ∈ M with p dividing |g G | and g vanishing in G. As C does not have a central Sylow q-subgroup (because F(C) = 1), there exists an element h ∈ C with q dividing |h C |. For g a preimage of g in M , by Lemma 3.1, we have that gh is vanishing in G with conjugacy class size divisible by pq. This contradiction proves the claim.
Note that, for every i ∈ {1, ..., n}, the factor group N/C N (S i ) is an almost simple group with socle isomorphic to S. Moreover, as C = 1, N can be embedded in the direct product of the factor groups N/C N (S i ), and since these all have the same order, we get π(N ) = π(N/C N (S i )).
(ii) N is a proper subgroup of G.
In fact, if N = G, then M is a simple group and G is an almost simple group with socle M . The desired conclusion now follows from Proposition 3.3.
(iii) p and q cannot both divide |G/N |.

In fact, assuming the contrary, choose nonempty Γ 1 , Γ 2 ⊆ Ω = {S 1 , . . . , S n } with Γ 1 ∩ Γ 2 = ∅ and {p, q} ⊆ π(|G : G Γ1 ∩ G Γ2 |) (see Proposition 2.3). Also, let u and v be nontrivial elements of different orders in S ≃ S i , such that the order of v is divisible by some prime r greater than 3. Now, for S α ∈ Γ 1 and S β ∈ Γ 2 let u α and v β correspond, respectively, to u and v (via the isomorphisms S i ≃ S). As in the proof of Lemma 3.2, we see that g = Πu α Πv β is vanishing in G and its conjugacy class size is divisible by pq, contradicting the hypotheses.
(iv) p and q cannot both divide |N |.

In view of the last paragraph of Claim (i), if p and q both divide |N |, then they both divide the order of N/C N (S 1 ) as well. By Proposition 3.3, there exists an element of S 1 whose conjugacy class in N/C N (S 1 ) has size divisible by pq; let g ∈ S 1 be a preimage of such an element, and set h ∈ S 2 to be an element with order divisible by a prime greater than 3. Then gh is vanishing in G by Proposition 2.6; moreover, as gh and g have the same image in N/C N (S 1 ), the class size of that image is divisible by pq as well, and it divides |(gh) N |, which in turn divides |(gh) G |, against the hypotheses.
Our conclusion so far is that we can assume p ∈ π(G/N ) and q ∈ π(N ). Then q divides |N/C N (S 1 )| and, as in the previous claim, there exists an element u ∈ S 1 such that q divides |u N | (note that u is a vanishing element of G unless possibly when it is a {2, 3}-element). Now choose nonempty subsets Γ 1 and Γ 2 of Ω = {S 1 , . . . , S n } such that p divides |G : G Γ1 ∩ G Γ2 |. Take v ∈ S with o(v) = o(u) and such that, in the case when u is a {2, 3}-element, the order of v is divisible by a prime greater than 3. As in the proof of Lemma 3.2, define g = Πu α Πv β where u α and v β correspond to u and v respectively. As already seen, g is vanishing in G and |g G | is divisible by p. Moreover, as we can assume S 1 ∈ Γ 1 , the image of g in N/C N (S 1 ) is u, and therefore q divides |g N |, which in turn divides |g G |. Thus g is a vanishing element of G with conjugacy class size divisible by pq, and this contradiction completes the proof of the theorem.
Theorem A
In this section we prove the main result of this paper. Our key tool will be Lemma 4.3 and its consequence Corollary 4.4. We start with a lemma concerning vanishing elements of alternating groups. Lemma 4.1. For n ≥ 7 and t ≥ 2, let x be a permutation in Alt(n) whose type (including fixed points) is (t, ..., t) or (t, ..., t, 1). Then there exists an irreducible character θ of Alt(n) which has an extension to Sym(n), and such that θ(x) = 0.
Proof. Assume that x is of type (t, ..., t), whence n = kt for a suitable integer k. If k ≥ 3, consider the partition µ = (n − t − 1, t, 1) of n (which is not self-associate), and let χ µ ∈ Irr(Sym(n)) be the corresponding character. Then the Murnaghan-Nakayama formula (see for instance [18,Theorem 4.10.2]) yields χ µ (x) = 0, and θ can be chosen to be the (irreducible) restriction of χ µ to Alt(n). In the case when k ≤ 2, we can argue as above using the partition (n − 3, 2, 1).
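For readers unfamiliar with the Murnaghan-Nakayama rule invoked here, the following toy computation (ours, not part of the argument) shows how the rule produces character values by stripping rim hooks.

```latex
% Toy illustration of the Murnaghan--Nakayama rule (ours, not part of the proof):
% compute chi_{(2,1)}(x) in Sym(3) for x a 3-cycle.
\[
\chi_{(2,1)}\bigl((123)\bigr)
 \;=\; \sum_{\substack{\text{rim hooks }\xi\subseteq(2,1)\\ |\xi|=3}}
        (-1)^{\mathrm{ht}(\xi)}\,\chi_{(2,1)\setminus\xi}(1)
 \;=\; (-1)^{1}\cdot\chi_{\varnothing}(1)
 \;=\; -1,
\]
% since the whole diagram of (2,1) is the unique rim hook of size 3 and has height 1;
% this matches the value of the degree-2 character of Sym(3) on a 3-cycle.
```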
Let V be an n-dimensional vector space over a field E, and let D be the set consisting of the elements (α 1 , ..., α n ) ∈ V such that α 1 + · · · + α n = 0. The (n − 1)-dimensional vector space D has the natural structure of an E[Alt(n)]-module, where Alt(n) acts by permuting coordinates, and we will be interested in the case when the characteristic q of E is coprime with |Alt(n)|, i.e., q > n; in this case, D turns out to be an irreducible E[Alt(n)]-module, which is called the deleted permutation module over E (see [9]).
We will deal with the module D regarded as an (irreducible) E × × Alt(n) module over E, where E × denotes the multiplicative group of E, acting on D by scalar multiplication.
Lemma 4.2. Let D be the deleted permutation module for Alt(n) over the field E, where the characteristic of E is larger than n. Let d = (α 1 , α 2 , ..., α n−1 , β) ∈ D be such that the α i are pairwise distinct elements of E × . Regarding D as a module for G = E × × Alt(n), assume that the element (λ, x) ∈ G centralizes d. Then, if t is the order of x, the type of x (including fixed points) is either (t, ..., t) or (t, ..., t, 1).
Proof. Let d be an element of D as in the statement, and assume (λ, x) ∈ C G (d).
We first observe that, if λ = 1, then x is also 1. In fact, if x maps the symbol j ∈ {1, ..., n − 1} to jx ≠ j then, as (1, x) ∈ C G (d), the corresponding entries of d must coincide; but, the α i being pairwise distinct, we get jx = n and α j = β. As a consequence, the only nontrivial cycle in x is possibly (j, n), but this is a contradiction since x is an even permutation.
Next, let j ∈ {1, ..., n − 1} be a symbol lying in an x -orbit of length s, so s is a divisor of t = o(x). Since (λ s , x s ) centralizes d, the jth entries of d and of d (λ s ,x s ) must coincide; therefore we get λ s α j = α j , whence λ s = 1. This yields that (1, x s ) = (λ s , x s ) centralizes d, therefore x s = 1 by the paragraph above, and s = t.
We conclude that every cycle of x involving at least one symbol in {1, ..., n − 1} has in fact length t, and the proof is complete. Lemma 4.3. Let S be a nonabelian simple group that is not of Lie type, and let M = S 1 ×· · ·×S k be the direct product of k copies of S. Let q be a prime not dividing |S|, and A a faithful M -module over the field F with q elements. Assume that there are no regular orbits for the action of M on A. Then S is isomorphic to Alt(n) for some n ≥ 7; moreover, there exists a ∈ A such that, for x = s 1 · · · s k ∈ C M (a), each s i is a permutation of type (o(s i ), ..., o(s i )) or (o(s i ), ..., o(s i ), 1).
Proof. Since q is coprime with the order of M , the F[M ]-module A is semisimple. Assuming that there is no regular orbit for the action of M on A, our aim will be to construct an element a of A yielding the desired conclusions; to this end, we will choose suitable vectors from each simple constituent of A.
Let V be such a constituent. Up to renumbering, we can assume that the kernel of the action of M on V is either trivial (and in this case we set h = k) or S h+1 ×· · ·×S k for a suitable h ∈ {1, ..., k − 1}, so N = S 1 × · · · × S h acts faithfully on V . Let E be a finite field extension of F that is a splitting field for S, and consider the faithful E[N ]-module V E obtained from V by extending the field of scalars to E; if W is an irreducible constituent of V E and χ is the corresponding character, we denote by K the field extension of F obtained by adjoining the set of values {χ(n) | n ∈ N } to F (so, K is a subfield of E). By Theorem 1.16 in [14,VII], that we will freely use with no further reference throughout this proof, we get the corresponding description of V E in terms of the Galois conjugates of W . Observe that, for w ∈ W , the vector v = Σ ξ∈Gal(K|F) w ξ lies in V , and we have C N (v) ≤ C N (w) (see [8,Lemma 3.1]). As W is a simple S 1 × · · · × S h -module over a splitting field for each of the S i , Theorem 3.7.1 in [10] yields that the module W decomposes as a tensor product W 1 ⊗ · · · ⊗ W h , where each W i is a simple E[S i ]-module. In this situation, given an element of W \ {0} of the form w = w 1 ⊗ · · ·⊗ w h , it can be checked that x = s 1 · · · s h ∈ N centralizes w if and only if w si i = λ −1 i w i for all i ∈ {1, ..., h}, where the λ i are in E and Π i λ i = 1; in other words, if x centralizes w then, regarding W i as an E × × S i -module (E × acting by scalar multiplication) the element (λ i , s i ) centralizes w i for every i ∈ {1, ..., h}. Now, assume that E × × S i does not have any regular orbit on W i , and let K i be the field extension of F obtained by adjoining the values of the character of W i (as a module for S i ). Denoting by Z an irreducible constituent of W i regarded as a K i [S i ]-module, we get W i ≃ Z E , and so the field of values of the K i [S i ]-module Z is K i as well. We claim that the group K × i × S i does not have any regular orbit on Z: assuming, for a proof by contradiction, that z lies in such an orbit, it is easy to check that the vector z ⊗ 1 ∈ W i lies in a regular orbit for the action of E × × S i , which is not possible. Now, if Z 0 denotes the K i [S i ]-module Z viewed as an F[S i ]-module, Z 0 turns out to be irreducible, and K × i × S i does not have any regular orbit on it: we are in a position to apply a theorem by D. P. M. Goodwin ([9, Theorem 1]; see also [17, Theorem 2.1 and Theorem 2.2]), getting that S i ≃ Alt(n) for some n ≥ 7 and Z 0 is the deleted permutation module for S i over F. Since this module is absolutely irreducible, we get K i = F, and W i ≃ (Z 0 ) E is the deleted permutation module for S i over E. Whenever we are in this situation (which does occur for some constituent V , as otherwise M would have regular orbits on A against our assumptions), we choose d i ∈ W i as in Lemma 4.2; this can be applied because, q being coprime with |S|, we certainly have q > n. In all other cases, choose d i lying in a regular orbit for the action of E × × S i on W i . Then set w = d 1 ⊗ · · · ⊗ d h ∈ W , and finally define v = Σ ξ∈Gal(K|F) w ξ ∈ V .
To sum up, let A = V 1 ⊕ · · · ⊕ V n be a decomposition of A into simple F[M ]-constituents. For each V j , let v j be a vector as defined in the paragraph above, and let a = Σ j v j . In view of Lemma 4.2, it can be checked that such an element a satisfies the conclusions of our statement.
The following consequence of Lemma 4.3 is helpful in locating vanishing elements. Proof. Set G = G/N and, adopting the bar convention, assume that there exists an element a of A lying in a regular orbit for the action of M ; then, by coprimality, the same happens for the action of M on Irr(A). In other words, there exists an irreducible character θ of A such that I G (θ) ∩ M = N , and every element x of M \ N clearly does not lie in g∈G I G (θ g ). Now, if ψ is an irreducible character of I G (θ) lying over θ, then ψ G is an irreducible character of G which vanishes on G \ g∈G I G (θ g ), thus in particular ψ G (x) = 0, as required.
Therefore, we may assume that there are no regular orbits for the action of M on A. Since, by a well-known consequence of Brodkey's Theorem ( [2]), regular orbits do exist if M is abelian, we may focus on the case when M = S 1 × · · · × S k , where the S i are pairwise isomorphic nonabelian simple groups.
Observe that every nontrivial element of M is a vanishing element of G provided S 1 is a group of Lie type (Theorem 2.4 and Lemma 2.5), so we easily get the desired conclusion in this case. Therefore we can assume that S 1 is not of Lie type, and we can apply Lemma 4.3 with respect to the action of M on A: we get S 1 ≃ Alt(n) for some n ≥ 7, and we can choose an element a ∈ A satisfying the conclusions of Lemma 4.3. Now, by coprimality, we can consider a character θ ∈ Irr(A) whose inertia subgroup I M (θ) coincides with C M (a).
Let x be an element in M \ N . If x lies in I M (θ), then each factor of x (in its decomposition into a product of elements of the S i ) is a permutation with the cyclic structure prescribed by Lemma 4.3, and the same of course holds also if x centralizes a G-conjugate of θ. In other words, either x does not centralize any G-conjugate of θ (and in that case x is vanishing in G, by the argument in the first paragraph of this proof), or it is as in the conclusions of Lemma 4.3. In the latter case, an application of Lemma 4.1 (together with Lemma 5 of [1]) yields that x is a vanishing element of G, and the proof is complete.
We are ready to prove Theorem A, that we state again.
Theorem A. Let G be a group, and suppose G has a nonabelian minimal normal subgroup. If p and q are in V(G), but there is no vanishing conjugacy class of G whose size is divisible by pq, then G is {p, q}-solvable.
Proof. Let G be a counterexample to the statement, having the smallest possible order. Since p and q are nonadjacent vertices of Γ v (G), Theorem B yields F(G) = 1. As a consequence, there exists an abelian minimal normal subgroup A of G. We will proceed through a number of steps.
(i) A is the unique abelian minimal normal subgroup of G. Moreover, we can assume p ∈ V(G/A) and q ∉ V(G/A).
The factor group G/A clearly has a nonabelian minimal normal subgroup. If p and q are vertices of Γ(G/A), then they cannot be adjacent in Γ v (G/A), as otherwise they would be adjacent in Γ v (G) as well; therefore, by our minimality assumption, G/A is {p, q}-solvable. But then G is so, and this is a contradiction. On the other hand, if both p and q are not in V(G/A), then G/A is both p-nilpotent and q-nilpotent and G is not a counterexample. Hence {p, q} ∩ V(G/A) cannot be empty, and we will assume p ∈ V(G/A), q ∉ V(G/A) (thus G is q-solvable). Now, let B ≠ A be an abelian minimal normal subgroup of G. The above discussion applies to B as well, hence {p, q} ∩ V(G/B) contains precisely one element; if this element is q, then G/B is p-solvable and so is G, a contradiction. But if q ∉ V(G/B), then both G/A and G/B have a central Sylow q-subgroup, and the same holds for G, which embeds into (G/A) × (G/B). This would imply q ∉ V(G), which is not the case. Therefore, assuming the existence of a minimal normal subgroup of G other than A we get a contradiction, and the claim is proved.
Assume Φ(G) ≠ 1, so A ≤ Φ(G). Let Q be a Sylow q-subgroup of G. Then, as we are assuming q ∉ V(G/A), we have QA ⊴ G and hence Q ⊴ G by the Frattini argument. As A is the unique abelian minimal normal subgroup of G, this clearly implies that A is a q-group. Moreover, G/A has a normal q-complement K/A and, again by the Frattini argument, a Hall q ′ -subgroup K 0 of K is actually a normal q-complement of G, thus G = K 0 × Q. Observe that Q cannot be abelian, as otherwise it would lie in Z(G), thus we can choose x ∈ Q \ Z(Q); note that |x G | is divisible by q, and that x is a vanishing element of Q (see [16,Theorem B]), whence clearly a vanishing element of G as well. Consider now y ∈ K 0 such that |y G | is divisible by p. Then xy is a vanishing element of G whose conjugacy class in G has size divisible by pq, and G is not a counterexample. We conclude that Φ(G) = 1; in particular, F(G) is a direct product of abelian minimal normal subgroups of G (see [13,III.4.5]), whence F(G) = A.
For a proof by contradiction, we assume that q ∤ |A|. Hence, if Q is a Sylow q-subgroup of G, we get A = [A, Q] × C A (Q). Note that [A, Q] cannot be trivial, as otherwise the abelian Sylow subgroup Q of G would be normal in QA ⊴ G and hence central in G. Moreover, every G-conjugate of Q is of the form Q a for a suitable a ∈ A, whence [A, Q] is normal in G. As a consequence, we get [A, Q] = A and C A (Q) = 1; in particular, as this holds for any Sylow q-subgroup of G, for every x ∈ A \ {1} we have that q is a divisor of |x G |. Now, let M be a nonabelian minimal normal subgroup of G, and set C = C G (M ). If p divides |G/C| then, by Lemma 3.2, there exists y ∈ M which is a vanishing element of G such that p | |y G |. But then, for any x ∈ A \ {1}, xy is a vanishing element of G with pq | |(xy) G |, a contradiction. We conclude that C contains a Sylow p-subgroup P of G, and clearly P ≰ Z(C) (as otherwise P would be normal in G, so G would be p-solvable). Moreover, any Sylow q-subgroup Q of G lies in C because [Q, G] ≤ A, and therefore [Q, M ] ≤ A ∩ M = 1; but Q is not centralized by A, so Q ≰ Z(C) as well. As a consequence, p and q are vertices of Γ(C). Observe that {p, q} must be an edge of Γ(C), as otherwise C would be {p, q}-solvable by [5, Theorem B(i)], and so would be G, because G/C is a {p, q} ′ -group. Hence, there exists an element c ∈ C such that |c G | is divisible by pq. Take any y ∈ M which is vanishing in G: we get that cy is vanishing in G and |(cy) G | is divisible by pq.
By the previous step, we deduce that A is a Sylow q-subgroup of G; in fact, if Q ∈ Syl q (G), then Q/A is normal in G/A, and so Q ≤ F(G) = A. In particular, for every x ∈ G \ C G (A), we have q | |x G |. Observe also that G = G/C G (A) does not have any vanishing element whose G-class size is divisible by p (otherwise, if x is such an element, then x would be a vanishing element of G such that pq | |x G |); whence, by Theorem A in [6], G is p-nilpotent.
Let now K = K/C G (A) be a minimal normal subgroup of G. Clearly K does not have a central Sylow q-subgroup, because K > C G (A); but K does not have a central Sylow p-subgroup as well, since otherwise K would be p-solvable and so would be G (recall that G is p-nilpotent). As a consequence, p and q are vertices of Γ(K). Moreover, K must have conjugacy classes of size divisible by pq; otherwise K would be p-solvable by [5, Theorem B(i)] and, as above, G would be p-solvable. Note that such conjugacy classes of K are contained in K \ C G (A). But an application of Corollary 4.4 yields that every element in K \ C G (A) is actually a vanishing element of G, and this is the final contradiction that completes the proof.
As for the Corollary stated in the Introduction, observe that if G is a group having a nonabelian minimal normal subgroup, and p is a prime that is not a complete vertex of Γ v (G), then either p ∈ V v (G) = V(G) (and so G is p-nilpotent) or p ∈ V v (G) is not adjacent in Γ v (G) to some other vertex q. In the latter case, Theorem A yields the p-solvability of G.
Source brightness fluctuation correction of solar absorption fourier transform mid infrared spectra
The precision and accuracy of trace gas observations using solar absorption Fourier Transform infrared spectrometry depend on the stability of the light source. Fluctuations in the source brightness, however, cannot always be avoided. Current correction schemes, which calculate a corrected interferogram as the ratio of the raw DC interferogram and a smoothed DC interferogram, are applicable only to near infrared measurements. Spectra in the mid infrared spectral region below 2000 cm −1 are generally considered uncorrectable, if they are measured with a MCT detector. Such measurements introduce an unknown offset to MCT interferograms, which prevents the established source brightness fluctuation correction. This problem can be overcome by a determination of the offset using the modulation efficiency of the instrument. With known modulation efficiency the offset can be calculated, and the source brightness correction can be performed on the basis of offset-corrected interferograms. We present a source brightness fluctuation correction method which performs the smoothing of the raw DC interferogram in the interferogram domain by an application of a running mean instead of high-pass filtering the corresponding spectrum after Fourier transformation of the raw DC interferogram. This smoothing can be performed with the onboard software of commercial instruments. The improvement of MCT spectra and subsequent ozone profile and total column retrievals is demonstrated. Application to InSb interferograms in the near infrared spectral region proves the equivalence with the established correction scheme. Correspondence to: T. Ridder (tridder@iup.physik.uni-bremen.de)
Source brightness fluctuations degrade the precision and accuracy of retrieved trace gas concentrations, but cannot always be avoided. Thus, a strong effort is made within the community to reduce the impact of source brightness fluctuations by applying a correction to the spectra following the measurements. So far, it could be shown that the precision and accuracy of CO 2 total column concentrations could be improved by applying a source brightness fluctuation correction to spectra in the near infrared spectral region.
The analysis of trace gas concentrations obtained from spectra in the mid infrared spectral region is fundamental. However, spectra below 2000 cm −1 are generally considered uncorrectable if they are measured with a MCT detector. Such measurements introduce an unknown offset to MCT interferograms, which prevents a source brightness fluctuation correction.
Here, we show a method of source brightness fluctuation correction which can be applied to spectra in the whole infrared spectral region, including spectra measured with a MCT detector. We present a solution to remove the unknown offset in MCT interferograms, making MCT spectra amenable to source brightness fluctuation correction. This improves the quality of MCT spectra, and we demonstrate an improvement in the retrieval of O 3 profiles and total column concentrations.
For a comparison with previous studies, we apply our source brightness fluctuation correction method to spectra in the near infrared spectral region and show an improvement in the retrieval of CO 2 total column concentrations.
Introduction
Ground-based solar absorption Fourier Transform infrared (FTIR) spectrometry (Zander et al., 1983; Goldman et al., 1988; Rinsland et al., 1991) has been established as an accurate and precise method for the detection of trace gases in the atmosphere (Notholt et al., 2003; Velazco et al., 2005). FTIR spectrometry is used by the Network for the Detection of Atmospheric Composition Change (NDACC, 1991) and the Total Carbon Column Observing Network (TCCON, 2005) for the worldwide observation of trace gases.
The accuracy and precision of trace gas concentrations retrieved from FTIR spectra depend on the stability of the light source during the measurement (Beer, 1992; Notholt et al., 1997), since intensity fluctuations can distort the fractional line depth in FTIR spectra (Keppel-Aleks et al., 2007). Variations in the source brightness caused e.g. by clouds decrease the accuracy and precision, but cannot always be avoided. Thus, one aim of the NDACC and TCCON networks is to reduce the impact of source brightness fluctuations (SBFs) on FTIR spectra by applying a correction to the spectra following the measurements.
The idea of SBF correction was originally published by Brault (1985) and was recently picked up by Keppel-Aleks et al. (2007). In general, a SBF correction can be applied to Fourier Transform DC interferograms. The measured raw interferogram I raw is reweighted by the corresponding smoothed interferogram I smooth in terms of I corrected = I raw /I smooth (Eq. 1). The reweighting compensates the intensity fluctuations during the measurement and adjusts the modulation height in the interferogram (Fig. 1). Keppel-Aleks et al.'s (2007) analysis is focused on the near infrared spectral region measured simultaneously with an InGaAs diode and a Si diode detector. Their full SBF correction is integrated into the software slice-ipp, and the smoothed interferogram I smooth is generated in three steps: (1) taking the Fourier Transform of the raw DC interferogram, (2) applying a spectral filter to the Fourier Transform, removing all interferometric modulation, and (3) taking a second FFT of the filtered Fourier Transform. In their analysis, they show that the precision and accuracy of CO 2 total column concentrations can be improved by the application of a SBF correction to spectra in the near infrared spectral region (3800-15 750 cm −1 ).
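The following sketch illustrates this spectral-filtering way of generating I smooth; it is our own simplified rendering with NumPy, not the slice-ipp code, and the cutoff value is an arbitrary placeholder.

```python
import numpy as np

def smooth_by_spectral_filter(i_raw, cutoff_fraction=0.001):
    """Generate a smoothed DC interferogram by low-pass filtering in the
    spectral domain: FFT, suppress all but the lowest-frequency content
    (removing the interferometric modulation), then transform back.
    Simplified illustration only; the cutoff is an arbitrary placeholder."""
    spectrum = np.fft.rfft(i_raw)
    n_keep = max(1, int(cutoff_fraction * spectrum.size))
    spectrum[n_keep:] = 0.0            # spectral filter: keep only slow variations
    return np.fft.irfft(spectrum, n=i_raw.size)

def sbf_correct(i_raw, i_smooth):
    """Reweight the raw DC interferogram by the smoothed one (Eq. 1)."""
    return i_raw / i_smooth
```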
However, in the mid infrared spectral region (700-3800 cm −1 ) the analysis of many important trace gases, such as O 3 , is fundamental and the application of SBF correction is reasonable. In this spectral region, measurements are typically performed with a MCT detector (Griffith and de Haseth, 1986; Rao, 1992); during the measurement process, the MCT is applied with a constant voltage, which adds an unknown offset O to the measured DC interferogram (Fig. 2). The unknown offset perturbs the SBF correction method (Eq. 1), since the reweighting of the raw interferogram with the smoothed interferogram now amplifies the modulation by an incorrect ratio. Thus, the unknown offset has to be removed prior to a SBF correction.
Removing the unknown offset is essential to allow for a SBF correction of MCT interferograms. However, a procedure to determine the unknown offset has not been published so far. Thus, MCT interferograms have not been corrected for SBFs in previous studies.
In Sect. 2, we present a solution to remove the unknown offset in MCT, DC interferograms, making MCT spectra amenable to source brightness fluctuation correction. We present a method of source brightness fluctuation correction which is independent of the measured wavelength and can thus be applied to spectra in the whole infrared spectral region.
Method
Solar absorption FTIR spectra are obtained with a Bruker IFS 120/5M and a Bruker IFS 125HR Fourier Transform Spectrometer. The IFS 120/5M was operated during a field campaign aboard the research vessel (RV) Sonne in 2009. Here, the problem of SBF had to be addressed in particular, since the campaign was planned as a north-south transit from Japan to New Zealand crossing the tropics, where an increased occurrence of clouds was expected. By default, the 120/5M measures in AC mode, but it was adjusted to measure in DC mode with all detectors (InGaAs, Si, InSb, MCT) for the whole infrared spectral region. Measurements with the MCT detector were performed with low signal intensity, in order to avoid non-linearity effects. The IFS 125HR is based in Bialystok, Poland, and measures in DC mode only for the near infrared range (3800-15 750 cm −1 ) with an InGaAs diode and a Si diode detector.
The offset in MCT interferograms can be removed in two ways. In the first case, we measure the modulation efficiency of the instrument (Fig. 2), using an InSb detector and a small optical filter. In Eq. (2), A is the AC signal intensity and B is the DC signal intensity plus the offset. The measurement is repeated with the MCT detector, using the same filter and optical settings. Thus, we can assume that the modulation efficiency is the same for both detectors, and the offset follows from the measured MCT intensities (Eq. 4). The second way to remove the offset in MCT interferograms can be applied to a pair of interferograms measured directly in series, if the two centerbursts differ in their intensity due to SBF (Fig. 2). The modulation efficiency of the two interferograms must be equal, and the offset is calculated from the two centerburst intensities (Eq. 6). This method does not require an additional detector, in contrast to the first method.
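Since Eqs. (2)-(6) are only referenced and not reproduced above, the following sketch spells out one plausible reading of the two offset-determination methods. The formulas are our reconstruction from the verbal description (modulation efficiency taken as AC amplitude divided by the true DC level), not equations copied from the paper.

```python
def modulation_efficiency(ac_amplitude, dc_level):
    """Modulation efficiency of the instrument, measured e.g. with an InSb
    detector that has no DC offset (our reading of Eq. 2)."""
    return ac_amplitude / dc_level

def offset_from_efficiency(ac_mct, b_mct, m_insb):
    """Method 1 (our reconstruction of Eq. 4): with the MCT detector,
    b_mct = true DC level + offset, while the modulation efficiency is
    assumed equal to the InSb value, so  m = ac / (b - O)."""
    return b_mct - ac_mct / m_insb

def offset_from_two_scans(ac1, b1, ac2, b2):
    """Method 2 (our reconstruction of Eq. 6): two interferograms recorded in
    series with different centerburst intensities share the same modulation
    efficiency, ac1/(b1 - O) = ac2/(b2 - O); solve for the common offset O."""
    return (ac1 * b2 - ac2 * b1) / (ac1 - ac2)
```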
For the SBF correction (Eq. 1) all operations are performed within OPUS (version 6.5), the standard Bruker spectroscopy software. I raw is measured with the OPUS measurement procedure, I smooth is generated by a direct smoothing of I raw with the OPUS running mean function (mean over n datapoints), and the reweighting is performed by the OPUS calculator. The procedure has the advantage that the correction is independent of the measured spectral region, and it can be applied automatically and instantaneously together with the measurement process.
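The same interferogram-domain procedure can be sketched outside OPUS as follows; this is our own minimal NumPy illustration of a running-mean smoothing followed by the reweighting of Eq. (1), not the OPUS implementation, and the window length n is a free parameter.

```python
import numpy as np

def running_mean(i_dc, n):
    """Smooth the DC interferogram with a centred running mean over n points,
    which suppresses the interferometric modulation but keeps the slow
    brightness variations (our stand-in for the OPUS running mean function)."""
    kernel = np.ones(n) / n
    return np.convolve(i_dc, kernel, mode="same")

def sbf_correct_dc(i_raw, n=501, offset=0.0):
    """Offset-corrected SBF correction: remove the (MCT) offset first,
    then reweight the raw interferogram by its smoothed version (Eq. 1)."""
    i_dc = i_raw - offset
    i_smooth = running_mean(i_dc, n)
    return i_dc / i_smooth
```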
Admittedly, the OPUS running mean function still ignores datapoints at the edges of I raw , and, thus, the reweighting is inaccurate there. However, this problem is of no great consequence for the procedure; in forward-backward scans the edges of the interferogram are only of importance for the phase correction (Brault, 1987; Chase, 1982), and the phase correction can be adjusted by slightly reducing the phase resolution. In single scans the problem can be avoided by minimally reducing the resolution and phase resolution of the measurement.

MCT

Figure 3 shows the SBF correction of a solar absorption FTIR interferogram measured with the 120/5M spectrometer using a MCT detector. Three interferograms are shown in Fig. 3 (AC, DC, SBF-corrected), as well as two spectra created from the AC and the SBF-corrected interferogram. The AC, forward-backward interferogram reveals that the measurement is influenced by SBF, causing different heights in both interferograms (Fig. 3, AC). More obviously, the influence of SBF is visible in the DC interferogram (Fig. 3, DC). The DC interferogram indicates an intensity loss towards the end of the measurement and shows the same modulation loss as in AC mode. Furthermore, the DC interferogram features the typical MCT offset.

For the SBF correction, the offset in the MCT, DC interferogram has to be removed. Following the first method (Eq. 4), the offset is calculated as O = 0.546519, using a modulation efficiency of M InSb = 87% measured with the InSb detector. Following the second method (Eq. 6), the offset is calculated as O = 0.548764, in excellent agreement with the first method (deviation of 0.41%).
Following the removal of the offset, the SBF correction with OPUS can be applied. Thereby, the phase resolution was reduced from 4 cm −1 to 4.7 cm −1 according to Sect. 2. The correction compensates the intensity fluctuations and reweights the interferometric modulation, visible in the equalized heights of both interferograms (Fig. 3, SBF-corr). The corresponding spectra of the AC interferogram and the SBF-corrected interferogram are shown in Fig. 3 (right). The SBF correction slightly improves the signal to noise ratio (SNR) of the spectrum from SNR AC = 156 to SNR SBF-corr = 164. In addition, the procedure corrects spectral errors, visible e.g. in the spectral range between 990 cm −1 and 1070 cm −1 . Here, a variety of spectral lines are generally saturated.
However, the AC spectrum shows a strong oversaturation in this range, whereas this effect is corrected in the SBF-corrected spectrum. The influence of SBF correction on the retrieval of trace gas concentrations is demonstrated in Fig. 4, based on the example of O 3 retrieved from MCT, DC spectra. Three interferograms are shown, which were measured in series within one hour. The first interferogram was measured under clear sky conditions, the second and third interferograms were measured under the influence of SBF (Fig. 4, left). All three interferograms show the typical MCT offset.
The retrieval of O 3 profiles and total column concentrations was accomplished in each case for the AC and the SBF-corrected spectrum, using the retrieval software SFIT2 (Rinsland et al., 1998). For the retrieval of O 3 , the standard NDACC microwindow (1000-1005 cm −1 ) was used. Since the measurements were performed within one hour, it can be assumed that the O 3 concentrations have not significantly changed during this time period.
In Fig. 4 (center) three measurements of O 3 profiles are shown. The first one was performed under clear sky conditions, whereas the second and the third one were performed under the influence of SBF. In the first case, the O 3 profiles of the AC spectrum (AC) and the SBF-corrected spectrum (SBF-corr) are identical, showing that the SBF correction has no influence on undisturbed spectra. In the second case, under the small influence of SBF, the O 3 profile from the AC spectrum differs from the first case. In contrast, the O 3 profile from the SBF-corrected spectrum equals the undisturbed profiles in case one. In the third case, the influence of SBF is more obvious. The AC, O 3 profile strongly differs from the original profile. However, the O 3 profile from the SBF-corrected spectrum equals the undisturbed cases.
InGaAs
Here, we compare our SBF correction method to the method presented by Keppel-Aleks et al. (2007) for CO 2 total column concentration measurements in Bialystok, Poland, in April 2009. The CO 2 measurements were performed according to the TCCON standard with a 125HR spectrometer in single-scan mode. Likewise, the CO 2 retrieval was accomplished following the TCCON approach (microwindows: 6220 cm −1 , 6339 cm −1 ) using the retrieval software GFIT (Wunch et al., 2010; Toon et al., 1992).
The comparison contains three kinds of spectra (Fig. 5): uncorrected spectra, SBF-corrected spectra according to Keppel-Aleks et al. (slice-ipp SBF-corr), and SBF-corrected spectra according to the method presented here (OPUS SBF-corr). For the OPUS SBF correction the spectra were modified as discussed in Sect. 2; the resolution was reduced from 0.014 cm −1 to 0.0141 cm −1 and the phase resolution was diminished from 4 cm −1 to 4.7 cm −1 . Figure 5 shows the xCO 2 total column-averaged dry air mole fraction (Messerschmidt et al., 2010; Washenfelder et al., 2006) for Bialystok, Poland, in April 2009 for all three cases. All three cases agree well within the intervals where the measurements are not influenced by SBF (Fig. 5, white background). In the intervals where the spectra are influenced by SBF (Fig. 5, grey background), the precision of the CO 2 total column concentrations decreases for uncorrected spectra. SBF-corrected spectra, following Keppel-Aleks et al. (2007), show an improvement in the precision, resulting in a similar precision as for undisturbed spectra. The CO 2 total column concentrations from the spectra corrected with the SBF correction method presented here show the same improvement in precision.
Conclusions
We showed a source brightness fluctuation correction method for solar absorption Fourier Transform infrared spectra which is independent of the detector and wavelength and can be used for a correction of the whole infrared spectral region. We tested our source brightness fluctuation correction method by comparing it to the method used within the TCCON network (Keppel-Aleks et al., 2007), based on the example of CO 2 total column concentration measurements. We found an improvement in the precision of CO 2 total column concentrations identical to the TCCON approach. Spectra in the spectral region below 2000 cm −1 measured with a MCT detector were previously considered uncorrectable due to an unknown offset in the interferogram.
We presented a solution to remove the unknown offset in MCT interferograms, making MCT spectra amenable to source brightness fluctuation correction.
The Role of Visual Information in Numerosity Estimation
Mainstream theory suggests that the approximate number system supports our non-symbolic number abilities (e.g. estimating or comparing different sets of items). It is argued that this system can extract number independently of the visual cues present in the stimulus (diameter, aggregate surface, etc.). However, in a recent report we argue that this might not be the case. We showed that participants combined information from different visual cues to derive their answers. While numerosity comparison requires a rough comparison of two sets of items (smaller versus larger), numerosity estimation requires a more precise mechanism. It could therefore be that numerosity estimation, in contrast to numerosity comparison, might rely on the approximate number system. To test this hypothesis, we conducted a numerosity estimation experiment. We controlled for the visual cues according to current standards: each single visual property was not informative about numerosity. Nevertheless, the results reveal that participants were influenced by the visual properties of the dot arrays. They gave a larger estimate when the dot arrays consisted of dots with, on average, a smaller diameter, aggregate surface or density but a larger convex hull. The reliance on visual cues to estimate numerosity suggests that the existence of an approximate number system that can extract numerosity independently of the visual cues is unlikely. Instead, we propose that humans estimate numerosity by weighing the different visual cues present in the stimuli.
Introduction
The predominant theory in numerical cognition states that we are equipped with an approximate number system that supports our non-symbolic number processes such as estimating or comparing different sets of items. This approximate number system would enable us to extract numerosity from a visual scene (e.g. an array of dots) independently of the visual cues present in that scene (aggregate surface, diameter of the dots, etc.) [1,2,3,4,5]. This notion is supported by studies that show that humans can perform numerosity comparisons while controlling for visual cues [2,6,7]. Visual cues for sets of dots are manipulated and made uninformative of numerosity across trials. Controlling for information other than numerosity is logical if you want to study 'pure numerosity processes'. But what if numerosity judgments are based on the combination of different visual cues? Two sets of items can differ in numerosity only if their visual characteristics differ, otherwise both would represent the same numerosity. In other words, the only information that allows us to dissociate different numbers of objects is the visual cues present in the stimuli.
Numerosity and visual cues are also highly correlated in real life. For example, when more apples are added to a pile of apples, the size of the pile increases; or when more people enter a room, the density increases. We argue it would therefore be inefficient not to rely on this visual information for numerosity comparison or estimation. In a previous study, we indeed showed that subjects combine information from different visual cues when they have to decide which dot-array contains more dots [8,9,10,11,12,13,14,15]. This result suggests that current methods control for the visual cues insufficiently as they only control a single visual variable at a time [16,17]. It also suggests that the existence of a system that can extract numerosity independently of the visual cues is unlikely. For numerosity comparison you only have to make smaller-larger judgments, which can easily be made on the basis of the visual cues present in the stimulus. This visual comparison process is more difficult when visual cues are controlled for. However, despite a decrease in performance [18,19] participants are still able to perform the task since not all visual cues are controlled for at the same time.
While numerosity comparison processes only require a rough estimate of numerosity, more precise numerosity processes are necessary to estimate the number of items in a set. Izard & Dehaene [3] showed that participants perform poorly when asked to estimate numerosity. In their study, participants highly underestimated the number of dots presented on a screen. The visual cues of dot-arrays were controlled for and post-hoc analyses showed that the participants did not base their judgments on the visual cues present in the stimuli. However, as the authors themselves also suggested, their method for controlling the visual cues of the dot arrays is valid only when a single visual variable is used, not when participants combine multiple visual cues. The authors did not test whether reliance on multiple visual cues could explain their data. We can therefore not yet conclude that numerosity estimates are conducted independently of the visual cues present in the dot arrays. Our recent results from numerosity comparison suggest that numerosity judgments cannot be performed independently of the visual properties of the stimuli [8]. It is the aim of the present study to test whether this is also true for numerosity estimation.
To test the influence of visual cues on numerosity estimation, we presented participants with arrays of dots (12, 20, 28, 36 or 44 dots). The participants were asked to estimate the number of dots presented on a screen. Importantly, we controlled the visual cues of the dot arrays: the size of each visual cue did not systematically increase or decrease with increasing numerosity. For example, the aggregate surface of 28 dots was on average smaller than the aggregate surface of 20 dots but on average larger than the aggregate surface of 36 dots (see Figure 1). Thus, the visual cues were not informative about numerosity across trials. The fact that we controlled for the visual properties is an important difference between this and previous studies investigating the effect of visual cues on numerosity estimation [20,21]. In this study, there was no incentive for the participants to take the different visual cues into account. In contrast, in previous studies, visual cues correlated with numerosity across trials. If we are equipped with an approximate number system, the participants' numerosity estimates should not be affected by the visual cues present in the stimuli. However, if humans cannot extract numerosity independently of visual cues but instead rely on the sensory input to judge numerosity, participants' estimates will show biases induced by the size of the different visual cues.
Participants
Twenty-nine participants (aged between 19 and 30 years) participated in the experiment. No participant was excluded from the analyses. They all had normal or corrected-to-normal vision.
Ethics statement
Written informed consent was obtained according to the Declaration of Helsinki and as approved by the Ethical Committee of the University of Leuven.
Materials
The stimuli were arrays of grey dots presented on a dark background (dot size ranged between 0.11 and 0.79 degrees visual angle). The stimuli were generated using a modified version of the program developed by Gebuis & Reynvoet [17]. Each trial consisted of a dot array representing 12, 20, 28, 36 or 44 dots. We used 5 different numerosity values to create a large enough diversity in the stimuli while still being able to control for the visual cues.
We controlled the visual cues to account for the strong correlation between numerosity and its visual properties (when numerosity increases, its visual properties also increase). To this end, we manipulated the different visual properties of the numerosity stimuli in such a manner that each single visual property did not consistently increase or decrease with increasing numerosity (see Figure 1). As a consequence of this manipulation, numerosity did not significantly correlate with the size of each single visual cue across all trials. This was confirmed using regression analyses, which showed that for each participant no relation between a visual cue and numerosity was present (R² < 0.01 and p > 0.08).
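For illustration only, the check described above can be sketched as a per-participant regression of numerosity on each visual cue; the snippet below is a minimal sketch and not the authors' code, and the key names ("numerosity", "convex_hull", "aggregate_surface", "density") as well as the demo data are assumptions.

```python
# Minimal sketch of the per-participant informativeness check: regress numerosity
# on each visual cue and confirm that R^2 stays near zero across trials.
import numpy as np
from scipy import stats

def cue_informativeness(trials):
    """trials: mapping of per-trial arrays for one participant."""
    out = {}
    for cue in ("convex_hull", "aggregate_surface", "density"):
        res = stats.linregress(trials[cue], trials["numerosity"])
        out[cue] = {"r_squared": res.rvalue ** 2, "p_value": res.pvalue}
    return out

# Demo with cues drawn independently of numerosity (assumed values): R^2 ~ 0.
rng = np.random.default_rng(0)
n = 200
demo = {
    "numerosity": rng.choice([12, 20, 28, 36, 44], size=n).astype(float),
    "convex_hull": rng.normal(100.0, 15.0, n),
    "aggregate_surface": rng.normal(50.0, 10.0, n),
    "density": rng.normal(0.5, 0.1, n),
}
print(cue_informativeness(demo))
```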
The visual properties that were manipulated are: (1) the convex hull (the smallest contour around the dot array), (2) the aggregate surface of the dots (or the average diameter of the dots) and (3) density (aggregate surface/convex hull). It was not possible to disentangle average diameter and aggregate surface: when the average diameter increased, the aggregate surface also increased. Consequently, the results described below are identical for both visual cues. From here onwards we will therefore only refer to aggregate surface, but the same effects hold for average diameter. A priori analyses showed that the different visual cues were strongly correlated. As they were not informative about numerosity across trials, this correlation between different visual cues is not problematic for the task at hand.
Procedure
First a green fixation cross was shown for 500 ms. Next the first dot-array was presented for 300 ms followed by a blank screen for 1000 ms and a question mark which remained on the screen until the participant responded. Participants had to estimate the number of dots by typing their answer on the numerical keyboard. After the response a blank screen appeared for 1250 to 1500 ms. The stimuli were fully randomized.
Analyses
Outliers (responses more than 2 SD above or below a participant's average estimate) were removed from the data and the average response for each numerosity was calculated (see Figure 2). For reasons of clarity, we explain our analysis for convex hull, but the same analysis was also conducted for density and aggregate surface. First, we divided the stimuli of each target numerosity (12, 20, 28, 36, 44) into two categories: stimuli with a convex hull smaller or larger than the average convex hull. Second, we calculated the participants' mean estimate for each category (i.e. small vs. large convex hull). Third, a repeated measures analysis was conducted with target numerosity (12, 20, 28, 36, 44) and visual cue size (small or large convex hull) as within-participant variables and mean estimate as the dependent variable. A main effect of visual cue size would indicate that participants' estimates are influenced by the size of the convex hull.
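As an illustration of these three steps, the sketch below (not the authors' code) labels each trial as having a relatively small or large convex hull, averages the estimates per participant and cell, and fits the 5 × 2 repeated-measures ANOVA; the column names and the within-numerosity split are assumptions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def convex_hull_anova(df: pd.DataFrame):
    df = df.copy()
    # Discard outliers: responses more than 2 SD from a participant's mean estimate.
    mean = df.groupby("subject")["estimate"].transform("mean")
    sd = df.groupby("subject")["estimate"].transform("std")
    df = df[(df["estimate"] - mean).abs() <= 2 * sd].copy()
    # Small vs. large convex hull, relative to the mean for each target numerosity.
    df["cue_size"] = df.groupby("numerosity")["convex_hull"].transform(
        lambda x: x > x.mean()
    ).map({True: "large", False: "small"})
    # Mean estimate per participant and cell, then the 5 x 2 repeated-measures ANOVA.
    cells = df.groupby(["subject", "numerosity", "cue_size"],
                       as_index=False)["estimate"].mean()
    return AnovaRM(cells, depvar="estimate", subject="subject",
                   within=["numerosity", "cue_size"]).fit()
```

The same helper would be called again with "density" or "aggregate_surface" in place of "convex_hull" to reproduce the other two analyses.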
Results
As in previous studies, there was large variation in the individual estimates. About half of the subjects overestimated numerosity while the other half underestimated numerosity (see Figure 2).
For convex hull, the repeated measures analysis showed a significant main effect of target numerosity [F(4,112) = 273.91, p < 0.001]: subjects gave a larger mean estimate for large compared to small numerosities (see Figure 2). A significant main effect of visual cue size was also present [F(1,28) = 9.39, p = 0.005], indicating that participants gave a larger estimate for the arrays that were characterized by a relatively large convex hull (see Figure 3). The interaction between target numerosity and visual cue size approached significance [F(4,112) = 2.39, p = 0.055], implying that the estimation bias induced by convex hull differed in size between target numerosities. The bias did not systematically increase or decrease with numerosity (the difference in mean estimate was 0.42, −0.08, 1.08, 1.34 and 1.21 dots for target numerosities 12, 20, 28, 36 and 44, respectively). Post hoc paired samples T-tests showed that this bias was significant for numerosity 12.

For density, the results showed a significant main effect of target numerosity [F(4,112) = 279.3, p < 0.001] and of visual cue size [F(1,28) = 44.6, p < 0.001]. These results suggest that the participants gave a larger mean estimate for larger numerosities (see Figure 2) and that this estimate was dependent on density: participants estimated that the number of dots was larger in the arrays that were relatively less dense (see Figure 3). The interaction between target numerosity and visual cue size also reached significance [F(4,112) = 4.56, p = 0.002]. Again, the estimation bias induced by density differed in size between target numerosities. The bias did not increase or decrease with increasing numerosity (the difference in estimate was 0.94, 2.03, 2.44, 1.24 and 1.55 dots for target numerosities 12, 20, 28, 36 and 44, respectively). Post hoc paired samples T-tests showed that the estimation bias induced by density was significant for each target numerosity (all p's < 0.019).
For aggregate surface and average diameter, the main effect of target numerosity was significant [F(4,112) = 273.36, p < 0.001]. Participants gave a larger mean estimate when the number of dots was larger (see Figure 2). The main effect of visual cue size also reached significance [F(1,28) = 41.35, p < 0.001], indicating that participants estimated the number of dots as larger when the dot array consisted of a relatively small aggregate surface (see Figure 3). A significant interaction between target numerosity and visual cue size was also obtained [F(4,112) = 4.77, p = 0.001], suggesting that estimates were biased by aggregate surface but to a different extent for each numerosity (the difference in numerosity estimate was −1.02, −2.06, −2.8, −1.16 and −1.29 dots for target numerosities 12, 20, 28, 36 and 44, respectively). Post hoc paired samples T-tests confirmed that the estimation bias induced by aggregate surface (or the average diameter of the dots) was significant for each target numerosity (all p's < 0.03).
Discussion
In this study we investigated the role of visual cues in numerosity estimation. Participants were presented with dot arrays representing 12, 20, 28, 36 or 44 dots and had to estimate the number of dots shown. To investigate the effects of the visual properties of the stimuli on numerosity estimation, we divided the stimuli for each target numerosity into two categories: stimuli with a relatively small convex hull (or density or aggregate surface) and stimuli with a relatively large convex hull (or density or aggregate surface). The results showed that participants' estimates were influenced by the size of the visual cues comprising the dot arrays: participants estimated that the number of dots was larger in the arrays that were characterized by a relatively large convex hull, small density and small aggregate surface (or average diameter). The direction of the bias was comparable to those obtained in a recent numerosity comparison study [8].
The influence of visual cue size on numerosity estimation is remarkable given that the different visual cues separately were not informative about numerosity. No single visual cue increased or decreased consistently with numerosity. The fact that the numerosity estimates were nevertheless influenced by the size of the visual cues suggests that the brain is not equipped with a mechanism that enables humans to estimate numerosity independently of its visual cues. It can also be concluded that participants did not rely on a single visual cue but on multiple visual cues when estimating numerosity. Reliance on a single visual cue would not have resulted in numerosity estimates that increased with increasing numerosity [3]. The current results and our previous findings on numerosity comparison suggest that humans integrate multiple visual cues to estimate numerosity [8]. The very poor estimation abilities of humans might therefore not be the result of a poor mapping of approximate number to symbolic number, but of a poor mapping between the mechanism that supports the visual analyses of non-symbolic number images and the symbolic number system. Such an explanation can also account for the finding that participants' estimates improve when they receive feedback [3]. Feedback allows participants to improve their mapping of visual features of the stimuli to symbolic number.
The hypothesis that humans rely on multiple visual cues to judge numerosity has major implications for how researchers currently control the visual cues in numerosity research. The methods to control for the visual cues are grounded in the idea that participants can only rely on a single visual cue throughout the experiment and do not integrate or switch between cues. Instead of designing other, more complicated paradigms (if possible), researchers should question whether controlling visual cues in numerosity studies makes sense. The manipulations of the visual cues are insufficient to control the visual cues and therefore only add noise to the data. More specifically, if participants integrate multiple visual cues to judge number, manipulating the visual cues will not prevent the participants from relying on the visual cues to judge numerosity but instead will increase task difficulty. This is clearly demonstrated in studies that show a decrease in performance when researchers manipulate the visual cues present in the stimuli: human adults can differentiate numerosities that differ with a ratio of 6:7 when visual cues are controlled for [18] but with a ratio of 7:8 [19] when visual cues are not controlled for. In contradistinction to our experiments, in daily life the strong relation between numerosity and the majority of visual cues is unlikely to be violated. Consequently, it appears unnecessary to have a brain mechanism that can extract numerosity independently of the visual properties of the stimuli. Researchers might therefore question whether they should use more ecologically valid stimuli that do not control for visual cues to get a true notion of our numerosity estimation or comparison abilities.
Taken together, we show that we rely on visual cues when estimating numerosity. Participants gave a larger estimate for a set of items when it consisted of smaller items, a smaller aggregate surface, a larger convex hull or a lower density. This happened even though the visual cues were uninformative; no relation existed between the size of the visual cues and numerosity. These results therefore allowed us to exclude the existence of a mechanism that processes numerosity independently of its visual cues. Consistent with earlier studies on numerosity processing, we suggest that humans integrate multiple visual properties present in the stimulus before a number label is attached.
Impact of a Structured Report Template on the Quality of Multislice Spiral Computed Tomography Scan Reports for Small Bowel Diseases: A Preliminary Study
Background: The application of multislice spiral computed tomography (MSCT) scan has improved the diagnosis of small bowel diseases (SBDs). Objectives: This study aimed to develop a structured report (SR) template for SBDs based on MSCT scans and to compare its value with free-text reports (FTRs) by radiologists with different levels of seniority in radiology. Patients and Methods: A total of 120 SBD cases were confirmed based on the clinical manifestations, surgery, colonoscopy, and pathology. An SR template for small bowel imaging was developed, and six radiologists were divided into inexperienced and experienced groups. Sixty cases with small intestinal MSCT data were available for FTRs and another 60 cases for SRs after training. The report accuracy, satisfaction, and completion time were compared between the two reporting methods. Results: The writing time of SRs was significantly shorter than that of FTRs. By using FTRs, the experienced group showed higher levels of sensitivity for all diseases (i.e., intestinal wall, intestinal peripheral artery, blood vessel, bone, and other abdominal organ diseases) (P < 0.05). The experienced group showed a low misdiagnosis rate for all diseases (P < 0.05), except for bone disease (P = 0.161). By using SRs, the experienced group only showed a low misdiagnosis rate for the intestinal wall disease (P < 0.05). High sensitivity for the intestinal wall disease (P < 0.05) and intestinal peripheral artery disease (P = 0.024), along with improved sensitivity for bone lesions (P < 0.05), was reported in this group. In the inexperienced group, SRs improved sensitivity for all diseases (P < 0.05), except for intestinal wall disease (P > 0.05). The satisfaction scores for both inexperienced and experienced groups improved by using SRs (4 vs. 2.6 for the inexperienced group and 4.1 vs. 3.2 for the experienced group; P < 0.05 for both). Conclusion: The SRs were superior to FTRs in terms of writing efficiency, accuracy, and satisfaction. They could improve the accuracy of inexperienced radiologists in diagnosis and help detect small bowel diseases (SBDs).
clinicians, and the majority of radiology reports for MSCT enterography are free-text reports (FTRs) (10). Traditional FTRs are characterized by excessive variability in length, language, and style, which can minimize the clarity of reports and make it difficult for referring clinicians to identify key information for patient care (11). In contrast, a structured report (SR) typically uses standardized phrases with consistent formatting. So far, several studies have demonstrated the advantages of SRs over conventional FTRs, which include improved report clarity and consistency, higher radiologist satisfaction, and greater efficiency (12,13). However, the value of SRs in small bowel MSCT imaging remains unknown.
Objectives
The present study aimed to develop an SR template for small bowel MSCT enterography and to evaluate its convenience and practicability by improvement of writing efficiency, satisfaction scores, and sensitivity and specificity for SBDs. This study also aimed to compare the application of FTRs and SRs and to further investigate the convenience and practicability of SRs based on evaluations by physicians with different years of experience.
Patients
Patients who underwent MSCT enterography in our hospital were included in this study. The inclusion criteria were as follows: (1) patients with pathologically confirmed or clinically diagnosed small intestinal tumors, inflammation, or mesenteric vascular lesions; (2) patients with complete clinical and imaging data; and (3) patients aged above 18 years. A total of 120 cases were included in this study and randomly assigned to two groups of FTR and SR. For neoplastic and non-neoplastic lesions requiring surgery (e.g., intestinal necrosis and strangulated intestinal obstruction), surgical pathology was performed and used as the gold standard.
Development of an SR Template for Small Bowel Imaging
In March 2017, a team was formed to develop an SR template for small bowel imaging. The SR was developed for SBDs, and the SR team composed of experienced physicians from the departments of radiology, gastroenterology, gastrointestinal surgery, and oncology to ensure a comprehensive evaluation. Our SR team included eight radiologists, one imaging technician, two gastroenterologists, two gastrointestinal surgeons, and two oncologists. The retrospective analyses of MSCT enterography findings were performed by radiologists after 2014, and the symptoms reported in the literature and the previous study by our research team were collected (14,15).
Next, the SRs for MSCT enterography were formulated and discussed by multidisciplinary experts. Finally, the SR template for MSCT enterography was generated (Appendix 1 -3). It included clinical diagnosis, details of CT examination, imaging findings (e.g., intestinal filling scores, location, intestinal wall thickening, degree and symmetry of intestinal wall thickening, strengthening of intestinal wall, intestinal obstruction, and continuity of the small bowel mucosa), and imaging diagnosis.
This study was performed according to the Declaration of Helsinki (http://www.cirp.org/library/ethics/helsinki/) and was approved by the ethics committee and animal management committee of our organization.
Preparation Before Inspection
Bowel hypotonicity and filling were assessed as previously described (14). Patients without any contraindications were allowed to have a low-residue diet the night before scanning. Polyethylene glycol electrolyte powder (HeShuang, Shenzhen Wanhe Pharmaceutical Co. LTD, China) was used for bowel preparation and administered orally according to the following procedure. Initially, the first box of medication was prepared by adding polyethylene glycol electrolyte powder to 1000 mL of warm boiled water. At 8:00 pm and 8:15 pm, 750 and 250 mL of medication were taken, respectively. The second box also contained 1000 mL of medication, and a dose of 250 mL was taken at 8:30, 8:45, 9:00, and 9:15 pm, respectively. After these two boxes were consumed, the patients were allowed to drink approximately 2000 - 3000 mL of warm boiled water until their stool ran clear.
Imaging examinations were performed between 8:00 am and 10:00 am on the second day. To avoid hypoglycemia, the patients were allowed to drink 150 - 200 mL of sugar water around 8:00 am on the same day. Orally, 2000 mL of 20% isotonic mannitol (Baxter Healthcare, Shanghai, China) was administered in four doses at 15-minute intervals before scanning. In patients with poor tolerance, small amounts of isotonic contrast agent were taken orally as many times as possible. Also, water deprivation was necessary for patients diagnosed with obvious intestinal obstruction. The fluid and gas within the obstructed lumen of these patients were observed via imaging. An adequate amount of isotonic contrast agent was administered when satisfactory filling was not achieved. Patients without any contraindications were injected with 20 mg of anisodamine intramuscularly at 10 minutes before scanning. Gastric tube decompression was performed immediately in case of unsuitable filling.
MSCT Enterography Procedures and Post-processing of Images
MSCT enterography was performed using a 64-slice spiral CT scanner (Somatom Sensation 64, Siemens, Forchheim, Germany) or a dual-source CT scanner (Definition Flash, Siemens, Forchheim, Germany or Somatom Force, Siemens, Forchheim, Germany). Unenhanced CT scan was performed from the diaphragmatic dome to the symphysis pubis. ULTRAVIST (370 mgI/mL, 1.5 mL/kg body weight; Bayer Schering, Germany) was injected intravenously into the right elbow at a rate of 3.5 mL/s with high pressure, and then, a three-phase contrast-enhanced scan was performed. The CT threshold-triggered scanning technique was applied for the arterial phase scanning of the small intestine. The abdominal aorta and the 11th thoracic vertebra were selected for scanning by positioning the film. The region of interest (ROI) was delineated in the image of abdominal aorta. When the CT value reached 100 HU, arterial phase scanning was performed. Data were collected at 40 seconds after injection in the small bowel phase. A venous phase scan was also performed at 75 seconds postinjection.
All CT images were transferred to a post-processing workstation for analysis, using the picture archiving and communication system (PACS). The multiplanar reconstruction (MPR), maximum intensity projection (MIP), and volume rendering technique (VRT) were used to indicate the overall morphology of the small intestine and its lesions, the mesenteric artery, and abdominal aorta and its major branches, respectively.
Evaluation of SR Accuracy and Satisfaction in MSCT Enterography
Six radiologists were selected and divided into an inexperienced group (group A, three radiologists with less than five years of experience in imaging diagnosis; A1, A2, and A3) and an experienced group (group B, three radiologists with more than five years of experience in imaging diagnosis; B1, B2, and B3). Overall, 60 MSCT scans of SBDs were analyzed by the two groups using FTRs, and the completion time was recorded. All physicians participated in SR training until they passed it. Another set of 60 MSCT scans of SBDs were provided for each trainee to write SRs. Radiologists in both groups wrote reports using SRs, and the completion time was then recorded. The six radiologists were blind to the clinical data and pathological findings of all cases. All 120 cases, including both negative and positive cases, were confirmed by surgery, pathology, and clinical examination. The disease types are presented in Table 1.
The accuracy and satisfaction of FTRs and SRs were assessed by the SR team, based on the surgical pathology, colonoscopy, and biopsy results, clinical diagnosis, clinical treatment, and follow-up as composite endpoints. Accuracy was evaluated via surgical pathology for patients who underwent surgery, while the clinical data and follow-up results were considered for patients who did not undergo surgery. The SR team was blind to the study design and the radiologists' information. The report accuracy was evaluated for diseases in five regions, including the intestinal wall, intestinal periphery, blood vessels, bones, and other abdominal organs. The positive and negative accuracy, misdiagnosis rate, and total misdiagnosis rate were calculated for these diseases. Besides, sensitivity was defined as the positive coincidence rate, while specificity was defined as the negative coincidence rate. The misdiagnosis rate for each item was calculated using the following equation:

Misdiagnosis rate = (False negatives + False positives) / Number of cases

The total misdiagnosis rate was the sum of the misdiagnosis rates for all the items, including the intestinal wall, intestinal peripheral artery, blood vessels, bones, and other abdominal organs. The satisfaction scores, ranging from one to five, were determined based on previous studies (16 - 19) and by discussion among the researchers in this study. The satisfaction scores were assigned by the radiology department and physicians from relevant clinical departments, according to the scoring standards in Table 2.
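For illustration, the sketch below (an assumed helper, not taken from the paper) shows how sensitivity, specificity, and the misdiagnosis rate defined above can be computed for one disease region from the reported findings and the reference standard.

```python
def region_accuracy(reported, reference):
    """reported, reference: equal-length sequences of booleans (True = disease present)."""
    pairs = list(zip(reported, reference))
    tp = sum(r and g for r, g in pairs)
    tn = sum(not r and not g for r, g in pairs)
    fp = sum(r and not g for r, g in pairs)
    fn = sum(not r and g for r, g in pairs)
    n = len(pairs)
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,   # positive coincidence rate
        "specificity": tn / (tn + fp) if tn + fp else None,   # negative coincidence rate
        "misdiagnosis_rate": (fn + fp) / n,                   # as defined in the text
    }

# The total misdiagnosis rate is then the sum of the per-region rates, e.g.:
# total = sum(region_accuracy(rep[r], ref[r])["misdiagnosis_rate"] for r in regions)
```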
Statistical Analysis
All statistical analyses were performed in SPSS version 20.0 (IBM Corp., Armonk, NY, USA). The normal distribution of continuous data was examined before comparisons using Kolmogorov-Smirnov test. The writing time was compared between the two reporting methods using paired t-test. A Chi-square test was also used to analyze the report accuracy. Besides, Wilcoxon rank-sum test was used to compare satisfaction between the two reporting methods. Statistical significance was considered to be P < 0.05.
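As a rough illustration of this analysis plan, the sketch below (not the authors' SPSS syntax) maps the named tests onto their SciPy equivalents; the input variables are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def compare_reporting_methods(ftr_time, sr_time, ftr_sat, sr_sat, accuracy_table):
    # Kolmogorov-Smirnov check of normality for the writing times.
    ks = stats.kstest(ftr_time, "norm",
                      args=(np.mean(ftr_time), np.std(ftr_time, ddof=1)))
    # Paired t-test on writing time for the same radiologists under both methods.
    t_time = stats.ttest_rel(ftr_time, sr_time)
    # Chi-square test on a contingency table of correct/incorrect reports per method.
    chi2 = stats.chi2_contingency(accuracy_table)
    # Wilcoxon rank-sum test on satisfaction scores.
    ranksum = stats.ranksums(ftr_sat, sr_sat)
    return ks, t_time, chi2, ranksum
```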
Patients' Characteristics
The baseline characteristics of the 120 patients examined in this study are presented in Table 1. The mean age of cases in the FTR and SR groups was 55 ± 17 and 53 ± 13 years, respectively. Comorbidities, such as heart disease, diabetes, and hypertension, were reported in 22 cases in the FTR group and 25 cases in the SR group. There was no significant difference in the baseline characteristics and SBDs between the FTR and SR groups, suggesting that all variables were comparable between the two groups (from Table 1, for the FTR and SR groups respectively: mesenteric warp with small bowel obstruction, 1 vs. 2 cases; lumbar clamping with small bowel obstruction, 2 vs. 2 cases; inguinal hernia with small bowel obstruction, 1 vs. 3 cases).
Report Time
The FTR completion time for radiologists in the inexperienced and experienced groups was 14.0 ± 3.1 and 11.7 ± 2.4 seconds, respectively. Also, the SR completion time was 11.1 ± 1.7 and 9.8 ± 1.1 seconds in the inexperienced and experienced groups, respectively. In both inexperienced and experienced groups, the completion time of SRs was significantly shorter than that of FTRs (z = 6.152, P < 0.001 and z
Comparison of FTR Accuracy Between Experienced and Inexperienced Groups
The accuracy of FTRs (i.e., sensitivity, specificity, misdiagnosis rate, and total misdiagnosis rate) by radiologists was evaluated in this study. As shown in Table 4, radiologists in the same group showed no significant differences regarding sensitivity, specificity, and misdiagnosis rate for all diseases (P > 0.05 for all). Next, the accuracy of reports by radiologists was compared between the inexperienced and experienced groups. The radiologists in the experienced group had a lower total misdiagnosis rate compared to the inexperienced group (P < 0.05). Additionally, radiologists in the experienced group showed higher sensitivity for all diseases (P < 0.05), as well as higher specificity for blood vessel and other abdominal organ diseases compared to the inexperienced group (P < 0.05). Moreover, the experienced group had a low misdiagnosis rate for all diseases (P < 0.05), except bone disease (P = 0.161).
Comparison of SR Accuracy Between the Experienced and Inexperienced Groups
The reporting accuracy (i.e., sensitivity, specificity, misdiagnosis rate, and total misdiagnosis rate) of radiologists using SRs was evaluated in this study. As shown in Table 5, the three radiologists in the same group showed no significant differences regarding sensitivity, specificity, and misdiagnosis rate for all diseases (P > 0.05 for all). Compared to the inexperienced group, the experienced radiologists had a lower misdiagnosis rate only for the intestinal wall disease (P < 0.05) and higher sensitivity for the intestinal wall disease (P < 0.05) and intestinal peripheral artery disease (P = 0.024). There was no significant difference between the inexperienced and experienced groups regarding the reporting accuracy for blood vessel, bone, and other abdominal organ diseases.
Sensitivity Comparison of FTRs and SRs
The sensitivity of FTRs and SRs was compared in this study (Table 6). In the inexperienced group, the sensitivity of radiologists for intestinal peripheral artery, blood vessel, bone, and other abdominal organ diseases significantly improved after using SRs (P < 0.05). In contrast, no significant difference was found in the radiologists' sensitivity for detecting intestinal wall disease using FTRs and SRs (P > 0.05). Also, there was no significant difference in their sensitivity for identifying intestinal wall disease, intestinal peripheral artery, blood vessel, bone, and other abdominal organ diseases between the FTRs and SRs. Notably, the positive accuracy for bone lesions improved in the experienced group using SRs (P < 0.05) ( Table 6).
Satisfaction with the Reporting Method
The satisfaction scores of the experienced group (3.2 points) were significantly higher than those of inexperienced radiologists (2.6 points) when using FTRs (z = -2.767, P = 0.034). The satisfaction scores of SRs in the experienced (4.1 points) and inexperienced (4.0 points) groups were superior to those of FTRs (z = -3.789, P < 0.001 and z = -4.116, P < 0.001, respectively), although no significant difference was observed between the two groups regarding the satisfaction scores when using SRs (z = -0.624, P = 0.533) ( Table 7).
Discussion
Accurate radiological assessment and diagnosis of SBDs using MSCT enterography is of great importance. It is essential to use standardized templates to determine the correct treatment approach and improve the clinical outcomes of patients. It has been shown that CT diagnostic reports for SBDs are mainly FTRs, causing significant differences in report quality within a region and even within a single department (20,21). As recommended by the Intersociety Conference, SRs have been shown to improve the intrinsic report quality by reducing variability and certain error types in radiological reports. Moreover, they have been shown to have higher clinical practicability than FTRs (11,22).
In the present study, we developed an SR template of small bowel imaging based on the MSCT technique, which was divided into three categories according to the disease type: (1) small intestinal neoplastic lesions; (2) inflammatory bowel disease; and (3) vascular disease. The report template contained complete information and key clinical diagnostic points for clinical needs. The effect of SRs on report quality was investigated, and the SRs and FTRs by radiologists with different levels of seniority in radiology were evaluated. It was found that SRs were superior to FTRs and could improve the reporting quality of physicians, especially inexperienced radiologists.
Some professional radiological institutions have attempted to introduce standardized SR templates (11). The SRs are gradually becoming accepted by radiologists and clinicians because of their high content integrity, clear organization, high report quality (19), and few spelling and grammatical errors (22). The misdiagnosis rate of imaging signs in SRs has been shown to be significantly lower than that of FTRs, whereas their diagnosis accuracy was higher (23). Meanwhile, several studies have indicated that SRs do not have obvious advantages over FTRs in terms of efficiency or quality of radiological reports (18). Although SRs have been gradually applied for CT scans of many organs (24,25), their efficiency in the diagnosis of SBDs remains unclear.
In the current study, the diagnostic accuracy of FTR and SR methods for the key report elements was examined. It was found that the accuracy of these methods for intestinal peripheral artery, blood vessel, bone, and other abdominal organ diseases improved in the inexperienced group using the SR method. Similarly, the radiologists' accuracy in detecting all the mentioned diseases increased numerically in the experienced group; particularly, comparison of diagnostic accuracy for bone diseases showed a significant difference between the FTR and SR methods (Table 6). These findings are inconsistent with the results of a study by Johnson et al., suggesting that SRs did not have an advantage in terms of accuracy over FTRs (18). This may be related to the fact that both reporting methods focused on cerebrovascular and cerebral parenchymal findings, and factors affecting the report accuracy might have been related to the physicians' diagnostic experience (18).
In this regard, Nguyen et al. suggested that SRs of interventional radiology could improve compliance with radiation dose and contrast reporting, improve satisfaction, and decrease the writing time (26). Moreover, Persigehl et al. showed that SR templates contributed to optimization of radiological reporting, including completeness, repeatability, and differential diagnosis of solid and cystic pancreatic tumors in CT scans and magnetic resonance imaging (MRI) (27). In the clinical examinations of MSCT enterography in the present study, we often focused on lesions in the intestinal wall and ignored extraintestinal diseases, leading to misdiagnoses. The present study showed no significant difference in the accuracy of the two reporting methods for the intestinal wall disease, which is consistent with the results reported by Johnson and colleagues. Besides, our findings revealed that SRs reduced the misdiagnosis rate and total missed diagnosis rate of blood vessel, bone, and other abdominal organ diseases, especially for low-seniority radiologists, which is consistent with the results of a study by Lin et al. (23). Therefore, application of MSCT SRs to SBDs can help improve the positive accuracy of detecting extraintestinal diseases and reduce the misdiagnosis and total missed diagnosis rates.
For the adoption of SR in clinical practice, several issues need to be considered. First, SRs are more suitable for diseases with clear indications, and definite diagnostic criteria, such as breast and prostate cancers. Overall, there are many diseases of the small intestine with varying imaging characteristics. Therefore, it is necessary to construct SRs of various modes, including tumors, inflammation, and vascular disease. Before the application of SRs, it is important to determine which SRs are suitable for these conditions.
Second, today, there is no SR method for small intestinal CT imaging. Therefore, in the design of report content, it is necessary to collect the opinions of all related personnel as much as possible, including radiologists and clinicians, and integrate their diagnostic opinions and experience, as well as terminologies previously reported. The basic elements and clinical requirements of a report should be formulated via discussions. Subsequently, the report template should be improved according to the feedback of different physicians, which is a long and difficult process. Third, in clinical practice, the developed SR template cannot be used for all intestinal diseases, and some parts of it need to be combined with FTRs. The data of the groups are the same or very close, and the P-value is one or close to one under this condition, indicating no significant difference. There were several limitations to this study. First, the number of SBD cases and the number of radiologists participating in this study were insufficient, leading to statistical bias in the results. Besides, there may be some bias in the selection of physicians and patients. We made efforts to reduce such bias. For example, the physicians included in this study were selected based on the similar number of working years and passing the training program. Also, for case selection, the disease types for the two methods (FTRs and SRs) were basically similar. Also, we included patients who had no surgical indications (e.g., mesenteric vascular disease, non-strangulated intestinal obstruction, and inflammatory bowel disease), and surgical pathology as the gold standard was not performed for these patients, although previous studies have demonstrated that clinical data, biochemical indicators, and imaging data can be used as diagnostic standards (28)(29)(30)(31); to ensure a more reliable accuracy evaluation, these patients should be excluded. Second, pretreatment preparation, scanning techniques, and individual differences between patients could partially influence the manifestations of SBDs, and perfect intestinal filling and excellent image quality were not guaranteed for all patients in this study. Finally, this was a retrospective single-center study; therefore, further prospective multicenter studies are warranted.
In conclusion, the use of SRs in MSCT enterography by radiologists could improve the writing efficiency and reporting satisfaction compared to FTRs and increase the diagnostic accuracy of SBDs. Besides, the SR method could reduce the misdiagnosis rate of extraintestinal diseases and the overall misdiagnosis, especially for inexperienced radiologists. Overall, SRs may help increase the homogeneity of radiology diagnosis reports.
Supplementary Material
Supplementary material(s) is available here [To read supplementary materials, please refer to the journal website and open PDF/HTML].
Low self-esteem and psychiatric patients: Part I – The relationship between low self-esteem and psychiatric diagnosis
Background The objective of the current study was to determine the prevalence and the degree of lowered self-esteem across the spectrum of psychiatric disorders. Method The present study was carried out on a consecutive sample of 1,190 individuals attending an open-access psychiatric outpatient clinic. There were 957 psychiatric patients, 182 cases with conditions not attributable to a mental disorder, and 51 control subjects. Patients were diagnosed according to DSM III-R diagnostic criteria following detailed assessments. At screening, individuals completed two questionnaires to measure self-esteem, the Rosenberg self-esteem scale and the Janis and Field Social Adequacy scale. Statistical analyses were performed on the scores of the two self-esteem scales. Results The results of the present study demonstrate that all psychiatric patients suffer some degree of lowered self-esteem. Furthermore, the degree to which self-esteem was lowered differed among various diagnostic groups. Self-esteem was lowest in patients with major depressive disorder, eating disorders, and substance abuse. Also, there is evidence of cumulative effects of psychiatric disorders on self-esteem. Patients who had comorbid diagnoses, particularly when one of the diagnoses was depressive disorders, tended to show lower self-esteem. Conclusions Based on both the previous literature, and the results from the current study, we propose that there is a vicious cycle between low self-esteem and onset of psychiatric disorders. Thus, low self-esteem increases the susceptibility for development of psychiatric disorders, and the presence of a psychiatric disorder, in turn, lowers self-esteem. Our findings suggest that this effect is more pronounced with certain psychiatric disorders, such as major depression and eating disorders.
Background
Self-esteem is an important component of psychological health. Much previous research indicates that lowered self-esteem frequently accompanies psychiatric disorders [1][2][3][4][5]. It has been suggested that low self-esteem is an etiological factor in many psychiatric conditions as well as in suicidal individuals [6]. Self-esteem also plays some role in quality of life for psychiatric patients [7]. However, the nature of the relationship between lowered self-esteem and psychiatric disorders remains uncertain. It is not yet clear if lowered self-esteem occurs in a few psychiatric conditions, being relatively specific to them, or if it is simply representative of poor psychological health regardless of the diagnosis.
One of the major problems in the area of self-esteem research is the lack of a clear consensus definition. Self-esteem has been given a number of different definitions, each emphasising different aspects [4]. Hence, measurement instruments based on different definitions sometimes have poor correlation. An appropriate approach to better evaluate self-esteem may therefore be to use more than one measure of self-esteem.
Lowered self-esteem has been consistently found to occur in several psychiatric disorders. These include major depressive disorder, eating disorders, anxiety disorders, and alcohol and drug abuse. For example, there are multiple studies demonstrating that patients with major depressive disorder have lowered self-esteem [2,8,9]. Lowered self-esteem has also been considered a psychological hallmark of most patients with eating disorders [10][11][12]. Indeed, lowered self-esteem has been suggested to be the final common pathway leading to eating disorders [13,14]. Studies have shown that with increasing anxiety self-esteem decreases [15,16,5]. However, in a study comparing the self-esteem of patients with different psychiatric diagnoses, patients with anxiety disorders had the highest self-esteem [17]. The relationship between alcohol dependence and lowered self-esteem has also long been recognised [1,18,19]. A relationship between the use of drugs and low self-esteem has been demonstrated in a number of studies [20][21][22][23].
Despite these studies, it remains unclear whether lowered self-esteem occurs in a few discrete psychiatric conditions, or in all psychiatric conditions, and also whether self-esteem is equally lowered in different psychiatric conditions. The aim of the present study is to address these issues.
Population Sample
The current study was carried out on data collected on a consecutive sample of 1,190 cases attending the Walk-In clinic at the University of Alberta Hospital, Edmonton, Canada. The sample consisted of 957 psychiatric patients, 182 cases with conditions not attributable to a mental disorder but due to psychosocial stressors ("V-codes" in DSM-III R), and 51 controls who accompanied patients and were themselves assessed but did not receive a psychiatric diagnosis (controls). The Walk-In clinic refers to a psychiatric open access clinic where patients can refer themselves or be referred through a family doctor. A therapist, who is a psychologist, a social worker or a psychiatric nurse, sees each patient. Any diagnoses made are then confirmed during a subsequent interview with a psychia-trist, with a final consensus diagnosis being made according to DSM III-R criteria. It is common practice in the Walk-In clinic that frequently the individuals who accompany the patient, particularly the family members, will also be assessed. As part of the assessment, all subjects complete a questionnaire containing two self-esteem scales.
Self-Esteem Scales
Two well-recognized patient-completed questionnaires were used to measure self-esteem. These were the Janis and Field Social Adequacy Scale (JF Scale) [24] and the Rosenberg Self-Esteem Scale (Rosenberg Scale) [25]. The JF Scale is available in Appendix 1 (see additional file 1) and the Rosenberg Self-Esteem Scale is available in Appendix 2 (see additional file 2). The JF Scale consists of 23 self-rating items, which measure anxiety in social situations, self-consciousness, and feelings of personal worthlessness. The maximum score is 115, and a higher score reflects increased self-esteem. Reliability estimates based on the Spearman-Brown formula and split-half reliability estimates for this scale are 0.91 and 0.83, respectively. The Rosenberg Scale measures global self-esteem and personal worthlessness. It includes 10 general statements assessing the degree to which respondents are satisfied with their lives and feel good about themselves. In contrast to the JF Scale, a lower score reflects higher self-esteem. In the original report, Rosenberg quoted a reproducibility of 0.9 and a scalability of 0.7. The Rosenberg Scale has previously been validated in other studies [25][26][27]. It is the most widely used scale to measure global self-esteem in research studies.
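For readers unfamiliar with the quoted reliability figures, the sketch below (illustrative only, not taken from the original scale development work) shows how a split-half reliability estimate and its Spearman-Brown correction are typically computed from an item-by-respondent score matrix.

```python
import numpy as np

def split_half_reliability(item_scores: np.ndarray):
    """item_scores: rows = respondents, columns = scale items (assumed layout)."""
    odd = item_scores[:, 0::2].sum(axis=1)    # total score on odd-numbered items
    even = item_scores[:, 1::2].sum(axis=1)   # total score on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]     # split-half correlation
    r_sb = 2 * r_half / (1 + r_half)          # Spearman-Brown correction to full length
    return r_half, r_sb
```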
Grouping of patients
Individuals were categorized as being in one of the 19 groups, including two groups of controls (the "psychosocial stressor" group and the healthy "control" group). Eleven of these groups, namely psychotic disorders; major depression; dysthymia; bipolar disorder; anxiety disorders; alcohol use disorders; drug use disorders; eating disorders; adjustment disorder; conduct disorder; and impulse control disorder were according to the DSM-III-R classification with two modifications: the group named "psychotic disorders" consisted of schizophrenia, and psychotic disorders not elsewhere classified; the psychoactive substance use disorders was divided into two groups, namely "alcohol use disorders" (consisting of alcohol abuse and alcohol dependence) and "drug use disorders" (consisting of drug abuse and drug dependence). Five groups consisted of patients who had comorbid diagnoses. These groups were major depression and anxiety disorders; major depression and dysthymia; major depression and alcohol use disorders; major depression and drug use disorders; and alcohol and drug use disorders.
The final group consisted of patients with any other psychiatric diagnoses.
Statistical analysis
Analysis of variance (ANOVA) was used to examine the data. The two measures of self-esteem were considered as dependent variables, and all psychiatric diagnoses were considered as independent variables or factors. In cases where the result of ANOVA showed statistically significant differences between the means, post-hoc Student-Newman-Keuls test for multiple comparisons was applied. The Levene test was used to examine the homogeneity of variances, a main assumption in ANOVA.
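As an illustration of this analysis, the sketch below (not the authors' code) runs Levene's test followed by a one-way ANOVA across the diagnostic groups; the input format is an assumption, and the Student-Newman-Keuls post-hoc comparisons are not shown.

```python
from scipy import stats

def self_esteem_anova(scores_by_group):
    """scores_by_group: assumed mapping from group name to a list of self-esteem scores."""
    groups = list(scores_by_group.values())
    levene = stats.levene(*groups)      # checks the equal-variance assumption
    anova = stats.f_oneway(*groups)     # one-way ANOVA over the 19 groups
    return levene, anova
```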
Correlation between Self-Esteem Scales
In the current study the correlation coefficient between the two scales of self-esteem was -0.72, showing a high correlation.
Janis and Field Self-esteem Scores
Comparing the self-esteem of the 19 independent groups by ANOVA indicated that they were significantly different (F(18, 1064) = 10.63, P < 0.0001). Further probing with the use of the Newman-Keuls test for multiple comparisons demonstrated the following findings (Table 1 and Figure 1).
3) Patients with a diagnosis of adjustment disorders had statistically significantly higher self-esteem compared to patients with eating disorders (P < 0.001), major depression (P < 0.001), dysthymia (P < 0.001), and comorbidity of major depression and dysthymia (P < 0.001), major depression and alcohol abuse disorders (P < 0.01), and major depression and anxiety disorders (P < 0.05). 4) Patients with comorbidity of major depression and dysthymia had significantly lower self-esteem compared to patients with diagnosis of conduct disorder (P < 0.001), adjustment disorder (P < 0.001), impulse control disorders (P < 0.01), psychotic disorders (P < 0.05), alcohol use disorders (P < 0.05), and anxiety disorders (P < 0.05). 5) Patients with eating disorders had the lowest scores on the JF scale and thus the lowest level of self-esteem. They had statistically significantly lower self-esteem compared to patients with diagnosis of adjustment disorder (P < 0.001), conduct disorder (P < 0.01), and impulse control disorders (P < 0.05).
6) In addition to the groups with eating disorders and with comorbidity of major depression and dysthymia, patients with the diagnosis of dysthymia (P < 0.05), and patients with the comorbidity of major depression and alcohol abuse (P < 0.05) had statistically significantly lower self-esteem compared to patients with conduct disorders. 7) There was a trend for patients with comorbid diagnoses to have lower self-esteem compared to patients with a single diagnosis. For example, the self-esteem of patients with comorbid major depression and dysthymia was lower compared to patients with either major depression or dysthymia alone. However, none of the differences reached statistical significance.
Rosenberg Self-Esteem Scale scores
The results of ANOVA on the Rosenberg Scale scores indicated statistically significant differences between the self-esteem of the 19 different groups (F(18, 997) = 10.61, P < 0.0001). Further probing, using the Student-Newman-Keuls test, demonstrated the following findings (Table 1 and Figure 2): 1) The normal group had the highest level of self-esteem and patients with comorbidity of depression and dysthymia had the lowest level of self-esteem. As with the JF scale, the range of scores differed widely between different patient groups. The control group had significantly higher self-esteem compared to 11 of the psychiatric patient groups, namely: "eating disorders (P < 0.001)", "dysthymia (P < 0.001)", "major depression (P < 0.001)", "drug use disorders (P < 0.001)", "alcohol use disorders (P < 0.001)", "adjustment disorders (P < 0.001)", "major depression and dysthymia (P < 0.001)", "major depression and drug use disorders (P < 0.01)", "major depression and anxiety disorders (P < 0.001)", "major depression and alcohol use disorders (P < 0.001)", and "others (P < 0.01)".
Figure 1
Effect of diagnosis on the mean score on the Janis and Field Social Adequacy scale. This figure shows that feelings of social adequacy vary widely between different diagnostic groups. Control patients had the highest scores, and the highest self-esteem, with this measure. Dual diagnoses patients with Major Depressive Disorder ("MDD") had significantly lower scores, as did patients with a single diagnosis of Eating Disorders, Dysthymia, and MDD. The differences between groups that reached statistical significance are given in the text.
2) The psychosocial stressor group had significantly higher self-esteem compared to patients with the following diagnoses: "eating disorders (P < 0.05)", "dysthymia (P < 0.001)", "major depression (P < 0.001)", "drug use disorders (P < 0.01)", "alcohol use disorders (P < 0.01)", "major depression and dysthymia (P < 0.001)", and "major depression and anxiety disorders (P < 0.05)". This finding suggests that the presence of a psychiatric disorder has a more important role in the decrease of self-esteem levels compared to the presence of stressful life circumstances.
3) Patients with either major depression or dysthymia had significantly lower self-esteem compared to patients with anxiety disorders (P < 0.01) or adjustment disorder (P < 0.01). Dysthymic patients also had significantly lower self-esteem than bipolar disorder patients (P < 0.05).
Discussion
Self-esteem is an abstract concept, which has a composite nature. Available measurements of self-esteem usually measure different components of this global entity. For example, the Janis and Field Self-Esteem Inventory primarily measures anxiety in social situations, self-consciousness and feelings of personal worthlessness; three components of what its inventors called feelings of social adequacy. Some investigators, like Rosenberg, tried to devise a scale that can capture primarily the global entity of self-esteem. In the present study both scales of self-esteem have been used to better capture different aspects of self-esteem. Nonetheless, the high correlation between the two scales in the present study suggests that they measure overlapping aspects of self-esteem.
Before examining the results of the present study two potential concerns need to be addressed. Firstly, the control group was not randomly selected, and its size was small in comparison to the patient population. Furthermore, they were not the primary focus of the psychiatric assessment, and hence, the presence of a psychiatric condition may have been overlooked. Nonetheless, the mean score on the Rosenberg scale for the normal control group in the present study was 1.71, which is very similar to the findings in larger studies using the Rosenberg scale in normal controls [25][26][27][28]. Thus, the control group in the present study appears to be similar to results reported from previous normal control groups.
Secondly, a semi-standardised interview, such as the SCID (Structured Clinical Interview for DSM-III-R) [29], was not used in the diagnostic process. However, in our study, a patient first completed a detailed questionnaire, and then had an extensive interview with an experienced nonphysician therapist, followed by an interview with one of a small group of experienced psychiatrists in the presence of the therapist. A consensus diagnosis was then reached. Such a diagnostic method leads to a high level of diagnostic consistency. Therefore, we do not believe that the absence of a standardised interview process adversely affected the results.

Figure 2
Effect of diagnosis on the mean score on the Rosenberg global self-esteem scale. This figure shows that feelings of low self-esteem vary widely between different diagnostic groups. Control patients had the lowest scores (and the highest self-esteem) using this scale. Dual-diagnosis patients with Major Depressive Disorder ("MDD") had significantly lower self-esteem, as did patients with a single diagnosis of Eating Disorders, Dysthymia, Drug abuse, and MDD. The differences between groups that reached statistical significance are given in the text.
A number of previous studies have reported lower self-esteem in psychiatric patients compared to normal controls. Our findings confirm these previous studies and extend them based on the following key findings. The present study shows that all psychiatric patients had lower self-esteem compared to the control group. However, the degree of lowering of self-esteem in psychiatric patients varied with their diagnostic groups. Also, most psychiatric patients had a lower level of self-esteem compared to the Psychosocial stressor group. Furthermore, patients with comorbidity of psychiatric disorders, particularly when one of the diagnoses was major depressive disorder, tended to have lower self-esteem compared to patients who suffered from only one of those disorders.
The lower level of self-esteem in psychiatric patients compared to normal and Psychosocial stressor groups, and the tendency towards lower self-esteem in patients with comorbidity, suggest that the presence of any psychiatric disorder lowers self-esteem. In other words, when patients develop a psychiatric disorder, self-esteem is affected. The presence of more than one disorder can lower self-esteem further. The considerable difference between the self-esteem of patients with different psychiatric diagnoses suggests that the type of psychiatric disorder is linked to the degree by which self-esteem is lowered.
Since the present study is not a longitudinal study, we were not able to determine if the self-esteem of these patients was lowered before they became ill or if it was improved as their illness improved. This requires further longitudinal research.
Self-Esteem and Depressive Disorders
The link between low self-esteem and depressive disorders is well known and documented [8,9,6], and is further demonstrated in the current study. There is convincing evidence of a reciprocal link between depressive mood states and self-esteem, but the causal direction of this association is not obvious. It is certainly true that low self-esteem arises during major depression [30][31][32][33][34][35] and depressive subtypes such as seasonal affective disorder [36]. It has also been proposed that low self-esteem also acts as a vulnerability factor for the development of major depression [37][38][39]. Low self-esteem also adversely affects prognosis, at least in women, and may be a very useful factor for prognosis [40].
There is certainly evidence that changes in either depressive state or self-esteem can affect the other. It has been shown that, as the mood of depressed patients improves, their level of self-esteem also increases [41]. Also, following the onset of a depressive illness, self-esteem levels decrease [33]. Furthermore, with enhancement of self-esteem, the condition of depressed patients improved [42], whilst a lowering of self-esteem has been shown to produce depression [43]. There is also some uncertainty about the trait vs. state nature of this interaction. The previously quoted studies show evidence of a state-dependent effect. Nonetheless, there is also some evidence for a trait effect. It has been reported that self-esteem lability is a better index of depression proneness than low self-esteem as a trait [44]. It has also been suggested that low self-esteem may be a final common pathway to the development of depression [37,42]. The present study further confirms the close relationship between lowered self-esteem and the presence of depression but is not able to clarify whether it is a trait or state relationship.
Interestingly, lowered self-esteem has also been shown to occur in other depressive disorders such as dysthymia [45]. In fact, lowered self-esteem is one of the diagnostic criteria for dysthymic disorder. In view of the findings from the present study that lowered self-esteem occurs across the psychiatric spectrum, it may not be appropriate to use lowered self-esteem in the diagnostic criteria for any individual psychiatric condition. The present study confirms that patients with dysthymia have lowered self-esteem. Interestingly, this is the first study to report that patients with comorbidity of major depression and dysthymia had lower self-esteem compared to patients with either major depression or dysthymia alone. This suggests that the lowering of self-esteem can be cumulative with different depressive disorders.
Self-Esteem and Eating Disorders
The presence of lowered self-esteem among patients with eating disorders has been widely shown in previous studies [13,10,11]. It has been suggested that low self-esteem may be an epidemiological risk factor for eating disorders [46,47,14], and we have previously suggested that low self-esteem is the final common pathway in the etiology of eating disorders [48]. The present study further confirms the finding that eating-disordered patients have lowered self-esteem, and extends these findings by showing that these changes in self-esteem are among the most severe for any patient group. Indeed, in one measure of self-esteem, the eating-disordered patients had lower self-esteem than any other group, including those with comorbid diagnoses.
Significant comorbidity has been found between eating disorders and major depression, anxiety disorders, and alcoholism [10,11]. Because of the high comorbidity between eating disorders and major depression, it might be suggested that the association between low self-esteem and eating disorders is secondary to the association of eating disorders and major depression. However, we and others have shown that low self-esteem occurs in patients with eating disorders in the absence of depression [10,13]. The overall finding from the studies to date, including the present study, is that patients with eating disorders have very significant lowering of self-esteem that may predate the onset of the disorder and contribute to its development. Further research in this area is required to clarify this relationship.
Self-Esteem and Substance Abuse
Previous studies have shown that patients with alcohol use disorders [1,18,49,19] or drug use disorders [20,22,23] have lowered self-esteem compared to controls. The results from the present study confirmed this finding, with both these patient groups having significantly lower self-esteem than the controls. The results also showed that these patients had a moderate level of self-esteem compared with other psychiatric patient groups. Interestingly, in those patients where there was a comorbid major depressive disorder, the self-esteem was lower than that for either condition alone, although this did not reach statistical significance. The relevance of this is emphasized by the finding that low self-esteem in alcohol-use disorders can increase suicidal risk [50].
Self-Esteem and Other Psychiatric Disorders
As well as the findings with the depressed, eating-disordered, alcohol-abusing, and drug-abusing patients, the present study examined self-esteem in a number of patient groups that have not been much studied previously. We observed that patients with bipolar disorder in the manic phase had high self-esteem levels compared to other patients, but still a lowered self-esteem compared to controls. One other study also suggested that bipolar patients may have altered self-esteem [51]. In the present study, patients with anxiety disorders had significantly lower self-esteem than the control group but a significantly higher self-esteem compared to some of the patient groups. This finding is in keeping with a previous study in which global self-esteem (measured with the Rosenberg self-esteem scale) was higher in patients with anxiety disorders compared to five different psychiatric conditions including depression, psychosis, personality disorder, and alcohol dependence [19]. Also, previous studies have found lower levels of self-esteem in anxiety-disordered patients compared to controls [52,53,17]. There are a limited number of previous studies regarding self-esteem of psychotic patients, although one recent large study has suggested low self-esteem may be a risk factor for the development of psychosis [54]. In our study, patients with psychotic disorders had intermediate levels of self-esteem compared to other psychiatric conditions. However, psychotic patients had significantly lower self-esteem levels than controls, which is consistent with the findings of a previous study [55]. In our study, patients with impulse control disorders were not significantly different from controls. One group [56] has found similar results, although in patients with both attention-deficit/hyperactivity disorder (ADHD) and a comorbid condition, self-esteem was significantly lowered.
Conclusion
Based on both the previous literature, and the results from the current study, we propose that there is a vicious cycle between low self-esteem and psychiatric disorders. Low self-esteem makes individuals susceptible to develop psychiatric conditions, particularly depressive disorders, eating disorders, and substance use disorders. The occurrence of these disorders subsequently lowers self-esteem even further. When more than one psychiatric disorder is present then the effects on self-esteem are additive.
Activin/Nodal/TGF-β Pathway Inhibitor Accelerates BMP4-Induced Cochlear Gap Junction Formation During in vitro Differentiation of Embryonic Stem Cells
Mutations in gap junction beta-2 (GJB2), the gene that encodes connexin 26 (CX26), are the most frequent cause of hereditary deafness worldwide. We recently developed an in vitro model of GJB2-related deafness (induced CX26 gap junction-forming cells; iCX26GJCs) from mouse induced pluripotent stem cells (iPSCs) by using Bone morphogenetic protein 4 (BMP4) signaling-based floating cultures (serum-free culture of embryoid body-like aggregates with quick aggregation cultures; hereafter, SFEBq cultures) and adherent cultures. However, to use these cells as a disease model platform for high-throughput drug screening or regenerative therapy, cell yields must be substantially increased. In addition to BMP4, other factors may also induce CX26 gap junction formation. In the SFEBq cultures, the combination of BMP4 and the Activin/Nodal/TGF-β pathway inhibitor SB431542 (SB) resulted in greater production of isolatable CX26-expressing cell mass (CX26+ vesicles) and higher Gjb2 mRNA levels than BMP4 treatment alone, suggesting that SB may promote BMP4-mediated production of CX26+ vesicles in a dose-dependent manner, thereby increasing the yield of highly purified iCX26GJCs. This is the first study to demonstrate that SB accelerates BMP4-induced iCX26GJC differentiation during stem cell floating culture. By controlling the concentration of SB supplementation in combination with CX26+ vesicle purification, large-scale production of highly purified iCX26GJCs suitable for high-throughput drug screening or regenerative therapy for GJB2-related deafness may be possible.
INTRODUCTION
Hearing loss is the most common congenital sensory impairment worldwide (Chan et al., 2010). Approximately 1 child in 1,000 is born with severe or profound hearing loss or will develop hearing loss during early childhood (Morton, 1991;Petersen and Willems, 2006), and about half of such cases are attributable to genetic causes (Birkenhager et al., 2010). To date, there are >120 known forms of non-syndromic deafness associated with identified genetic loci (see http://hereditaryhearingloss.org), and the types of cells associated with the disease are diverse. In particular, the gene gap junction beta-2 (GJB2), which encodes connexin (CX)26 protein, is the most common causative gene for non-syndromic sensorineural hearing loss (Rabionet et al., 2000;Morton and Nance, 2006). CX26 is expressed in non-sensory cochlear supporting cells and in such cochlear structures as the spiral limbus, stria vascularis, and spiral ligament (Kikuchi et al., 1995;Ahmad et al., 2003;Forge et al., 2003;Zhao and Yu, 2006;Liu and Zhao, 2008;Wingard and Zhao, 2015). CX26 and CX30 (encoded by GJB6) form functional heteromeric and heterotypic gap junction (GJ) channels in the cochlea (Sun et al., 2005). At the plasma membrane, GJs further assemble into semi-crystalline arrays known as gap junction plaques (GJPs) containing tens to thousands of GJs (Koval, 2006). GJs facilitate the rapid removal of K + from the base of cochlear hair cells, resulting in cycling of K + back into the endolymph of the cochlea to maintain cochlear homeostasis (Kikuchi et al., 2000). We previously showed that disruption of CX26-GJPs is associated with Gjb2-related hearing-loss pathogenesis and that assembly of cochlear GJPs is dependent on CX26 (Kamiya et al., 2014). Furthermore, we recently described the generation of mouse induced pluripotent stem cell (iPSC)-derived functional CX26 GJ-forming cells (induced CX26 GJ-forming cells, iCX26GJCs), as are found among the cochlear supporting cells, based on floating culture (serum-free floating culture of embryoid body-like aggregates with quick reaggregation, SFEBq culture) and adherent culture (Fukunaga et al., 2016) systems. The inner ear (cochlea) is an organ surrounded by bone and is difficult to access from the outside. In addition, the inside of the cochlea is filled with lymph, and invasive procedures such as biopsy can lead to irreversible hearing loss. Accordingly, treatment using human cells and tissues is much more difficult in the inner ear than in other sensory organs (eye, nose, tongue), and research on the pathophysiology and the development of treatment methods has been delayed. For this reason, rodents (mainly mice) are a powerful tool for researching hearing loss. Of course, it is difficult to directly translate the results of drug screening of mouse iCX26GJCs into human therapies. However, using mouse iCX26GJCs for drug screening and conducting mouse experiments based on the results will be an important discovery opportunity for future applications to human deafness. Nevertheless, before these cells can be used as a disease model for drug screening or for other large-scale assays, the cell culture system must be improved to increase the number of cells available at a single time. Our previous research suggested that the CX26-expressing cell masses (CX26 + vesicles) observed in day 7 aggregates that form as a result of BMP4 signaling in SFEBq cultures represent the origin of iCX26GJCs in the adherent culture (Fukunaga et al., 2016).
If CX26 + vesicles in SFEBq cultures from embryonic stem cells (ESCs) or iPSCs could be obtained in substantial quantity, we could obtain an adequate number of iCX26GJCs in adherent cultures. The inner ear, which is our target, is derived from the otic placode, which is part of the non-neural ectoderm (Barald and Kelley, 2004;Freter et al., 2008;Groves and Fekete, 2012). Several strategies to induce the differentiation of inner ear cells have been based on the generation of non-neural ectoderm from ESCs/iPSCs, which is promoted by the addition of BMP4, a TGF-β inhibitor, and a Wnt inhibitor (Koehler et al., 2013;Ronaghi et al., 2014;Ealy et al., 2016). BMP4 is a strong neuronal inhibitor (Schuldiner et al., 2000;Tropepe et al., 2001;Munoz-Sanjuan and Brivanlou, 2002) and acts as a potent mesoderm induction factor (Wiles and Johansson, 1999;Czyz and Wobus, 2001) in stem cell differentiation. It has been reported that BMP promotes the differentiation of CX43-expressing cells such as astrocytes (Bani-Yaghoub et al., 2000) and cardiomyocytes (Takei et al., 2009). Similarly, the Activin/Nodal/TGF-β pathway inhibitor SB431542 (SB) has been implicated in efficient neural conversion of ESCs and iPSCs via inhibition of SMAD signaling (Chambers et al., 2009;Chambers et al., 2012) and by blocking the progression of stem cell differentiation toward trophectoderm, mesoderm, and endoderm lineages (Li et al., 2013). However, we did not find any reports that SB promotes the differentiation of stem cells into CX-expressing cells. Given this background, we hypothesized that SB may affect the differentiation of iCX26GJCs. At the beginning of the experiment, we compared the drug responsiveness of ESCs and iPSCs using CX26 + vesicles as an indicator. ESCs were more responsive to these drugs than iPSCs (Supplementary Figures 1A-D). Therefore, in the present study, we evaluated SFEBq culture conditions incorporating BMP4 and/or SB with the aim of generating iCX26GJCs from mouse ESCs at a greater efficiency than those generated from iPSCs.
Differentiation of ESCs
Induction of iCX26GJCs was performed as shown in Figure 1A. Briefly, ESCs were dissociated with Accutase (Innovative Cell Technologies, Inc.); suspended in differentiation medium (G-MEM, Gibco) supplemented with 1.5% (v/v) knockout serum replacement (Gibco), 0.1 mM nonessential amino acids (Gibco), 1 mM sodium pyruvate (Gibco), and 0.1 mM 2-mercaptoethanol; and then plated at 100 µl/well (3,000 cells) in 96-well low-cell-attachment V-bottom plates (Sumitomo Bakelite). Recombinant BMP4 (obtained from Miltenyi Biotec) was diluted with DW at 100 µg/ml, and SB431542 (obtained from Tocris Bioscience) was diluted with DMSO at 10 mM. On day 1, half of the medium (50 µl) in each well was replaced with fresh differentiation medium containing 4% (v/v) Matrigel (BD Bioscience). On day 3, one of three types of media was added to the culture: medium containing BMP4 (10 ng/ml, final concentration), SB (1, 5, or 10 µM, final concentration), or both factors at the aforementioned concentrations. BMP4 and SB stock solutions were prepared at a 5× concentration in fresh medium. Stock solutions were stored for up to 6 months at -20°C. On days 7-11, the aggregates were partially dissected, and the CX26 + vesicles (20-60 µm) were mechanically isolated under a stereo microscope and collected using forceps. The CX26 + vesicles were transferred to adherent cultures containing trypsin-resistant inner-ear cells (TRICs) in the growth medium, which consisted of Dulbecco's modified Eagle's medium (DMEM) GlutaMAX (Gibco) and 10% (w/v) fetal bovine serum (FBS).
FIGURE 1 | Culture conditions for cells that expressed high levels of Gjb2 (CX26) and Gjb6 (CX30) mRNA. (A) A schematic procedure for differentiating iCX26GJCs from mouse ESCs. SFEBq, serum-free floating culture of embryoid body-like aggregates with quick reaggregation; KSR, knockout serum replacement; BMP4, Bone morphogenetic protein 4; SB431542, Activin/Nodal/TGF-β pathway inhibitor. (B) Relative expression of mRNA at day 0 (for undifferentiated ESCs) and at day 7 for untreated, BMP4-treated, SB-treated, and BMP4/SB-treated aggregates. mRNA expression levels were normalized to those of BMP4 cultures on day 7. The data are expressed as the mean ± SE from five independently generated cell cultures per treatment; for each replicate, expression was assessed in eight aggregates per treatment. Differences among samples were assessed by one-way ANOVA and Scheffe's multiple comparison test; **p < 0.01.
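As a quick sanity check on the concentrations stated above, the following short calculation (a sketch, not part of the published protocol) solves the usual dilution relation C1·V1 = C2·V2 for the volume of each stock needed per millilitre of the 5× working medium; the stock and final concentrations are taken from the methods, while the 1 ml working-medium batch size is our own assumption.

```python
# Minimal sketch of the dilution arithmetic implied by the protocol above.
# Stock and final concentrations come from the methods; the 1 ml batch size is an assumption.
def stock_volume_ul(target_conc, stock_conc, final_volume_ul):
    """Solve C1*V1 = C2*V2 for V1, i.e. the stock volume reaching target_conc in final_volume_ul."""
    return target_conc * final_volume_ul / stock_conc

# BMP4: stock 100 ug/ml = 100,000 ng/ml; final 10 ng/ml, so the 5x medium needs 50 ng/ml.
print("BMP4 stock per ml of 5x medium: %.2f ul" % stock_volume_ul(5 * 10.0, 100_000.0, 1000.0))    # 0.50 ul

# SB431542: stock 10 mM = 10,000 uM; final 5 uM, so the 5x medium needs 25 uM.
print("SB431542 stock per ml of 5x medium: %.2f ul" % stock_volume_ul(5 * 5.0, 10_000.0, 1000.0))  # 2.50 ul
```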
TRICs were isolated by exposing cochlear tissue to trypsin and screening for trypsin-resistant cells. The cochlear tissue (from 10-week-old mice, obtained from CLEA Japan, Inc.) used for the preparation of TRICs included the organ of Corti, basilar membrane, and lateral wall and mainly comprised supporting cells, hair cells, cochlear fibrocytes, and other cells in the basilar membrane. This cell line was used as inner ear-derived feeder cells on which to proliferate the otic progenitor cells. For the feeder cell layer preparation, 3 × 10^5 TRICs/cm^2 were seeded onto gelatin-coated wells of 24-well culture plates and treated with mitomycin C (10 mg/ml) for 3 h.
Analysis of Gjb2 and Gjb6 mRNA Expression
Total RNA was isolated from day 7 aggregates using reagents from an RNeasy Plus Mini kit (Qiagen) and reverse transcribed into cDNA using reagents from a Prime Script II first strand cDNA synthesis kit (Takara). Real-time PCR was performed with the reverse transcription products, TaqMan Fast Advanced Master Mix reagents (Applied Biosystems), and a genespecific TaqMan Probe (see below; Applied Biosystems) on a StepOne Real-Time PCR system (Applied Biosystems). Each sample was run in triplicate. Applied Biosystems StepOne software was used to analyze the Ct values of the different mRNAs normalized to expression of the endogenous control, Actb mRNA. TaqMan Probes (Assay ID; Applied Biosystems) were used to detect the expression of mouse Gjb2 (Mm00433643_s1), Gjb6 (Mm00433661_s1), and Actb mRNAs (Mm02619580_g1).
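Although the normalization itself was performed in the StepOne software, the underlying arithmetic is the standard comparative Ct (2^-ΔΔCt) calculation. The snippet below is only an illustrative sketch with hypothetical Ct values (none are taken from the paper), showing how a Gjb2 fold change relative to a chosen calibrator condition would be computed from Actb-normalized Ct values.

```python
# Illustrative comparative-Ct (2^-ddCt) calculation; all Ct values below are hypothetical.
import numpy as np

def fold_change(ct_target, ct_control, ct_target_cal, ct_control_cal):
    """Fold change of a target mRNA vs. a calibrator sample, normalized to an endogenous control."""
    d_ct_sample = ct_target - ct_control               # normalize to Actb within the sample
    d_ct_calibrator = ct_target_cal - ct_control_cal   # normalize to Actb within the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical triplicate Ct values for a BMP4/SB-treated aggregate pool
ct_gjb2 = np.mean([24.1, 24.3, 24.0])
ct_actb = np.mean([17.2, 17.1, 17.3])
# Hypothetical calibrator (BMP4-alone) Ct values
ct_gjb2_cal, ct_actb_cal = 25.0, 17.2

print(f"Gjb2 fold change vs. BMP4 alone: {fold_change(ct_gjb2, ct_actb, ct_gjb2_cal, ct_actb_cal):.2f}")
```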
FACS Analysis
Cells were counted by FACSCalibur (BD Biosciences) and the data analyzed with FlowJo software (BD Biosciences). For cell preparation, cells were dissociated to single cells by 0.25% trypsin-EDTA treatment, fixed in 4% paraformaldehyde in DPBS at 4°C, and permeabilized in 0.1% Triton in DPBS at 4°C. The primary antibody (CX26, 1:150, mouse IgG, Invitrogen) was incubated at RT for 1 h. The secondary antibody (Alexa Fluor 488-conjugated anti-mouse, 1:1,000, Invitrogen) was incubated at RT for 1 h. Cells were washed with DPBS and counted using FACS.
Statistical Analyses
The data were analyzed using Microsoft Excel software and are presented as the mean ± SE. A two-tailed Student's t-test, with a significance criterion of p < 0.05, was used to compare the GJP lengths. One-way ANOVA and Scheffe's multiple comparison test, with a significance criterion of p < 0.05, were used to compare Gjb2 and Gjb6 mRNA levels and the number of CX26 + vesicles.
Figure legend (panel D): Average diameter of CX26 + vesicles in day 7 aggregates (n = 12-24 CX26 + vesicles from three independent experiments). The data are expressed as the mean ± SE. Statistical differences among treatments were assessed by one-way ANOVA and Scheffe's multiple comparison test, or Student's t-test. Different letters (a-c) represent significant differences, p < 0.01.
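For readers who want to reproduce comparisons of this kind outside Excel, the sketch below runs the two core tests on hypothetical data (none of the numbers are from the paper). It covers the one-way ANOVA and the two-tailed t-test; the Scheffe post hoc comparisons used in the paper are not provided by scipy and would need a separate implementation or a dedicated statistics package.

```python
# Minimal sketch of the statistical comparisons described above, on hypothetical data.
import numpy as np
from scipy import stats

# Hypothetical GJP lengths (um): day 7 aggregates vs. day 15 adherent cultures
gjp_day7 = np.array([1.8, 2.0, 1.7, 2.1, 1.9])
gjp_day15 = np.array([5.1, 5.6, 5.3, 5.5, 5.4])
t_stat, p_t = stats.ttest_ind(gjp_day7, gjp_day15)   # two-tailed by default
print(f"Student's t-test: t = {t_stat:.2f}, p = {p_t:.3g}")

# Hypothetical relative Gjb2 expression for the four culture conditions
control = [0.10, 0.12, 0.09, 0.11, 0.10]
sb_only = [0.11, 0.13, 0.10, 0.12, 0.11]
bmp4 = [1.00, 0.90, 1.10, 1.05, 0.95]
bmp4_sb = [1.80, 1.70, 1.90, 1.75, 1.85]
f_stat, p_f = stats.f_oneway(control, sb_only, bmp4, bmp4_sb)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_f:.3g}")
```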
RESULTS
SB Promoted BMP4-Induced Gjb2/Gjb6 mRNA Expression in SFEBq Cultures
iCX26GJCs were induced from mouse ESCs as described (Fukunaga et al., 2016), and the conditions required for differentiation were then assessed. ESCs were cultured in SFEBq medium containing BMP4, SB, or BMP4 plus SB. Aggregates were collected on day 7, and mRNA (Gjb2 and Gjb6) levels under different culture treatments were measured. Cultures treated with BMP4 and BMP4/SB produced more Gjb2/Gjb6 mRNA than cultures treated with SB alone or control cultures (Figure 1B). BMP4 induces Gjb2/Gjb6 mRNA expression during iPSC differentiation (Fukunaga et al., 2016). In addition, ESCs cultured in differentiation medium supplemented with BMP4 and SB showed greater expression levels of mRNA (Gjb2, 1.8-fold greater; Gjb6, 1.7-fold greater) as compared with those cultured with BMP4 alone.
SB Promoted Formation of CX26-Expressing Small Vesicles in SFEBq Cultures
By day 7 of differentiation, the aggregates showed differentiated outer regions with a morphology similar to that reported previously (Fukunaga et al., 2016). Clear outer epithelia and small vesicles were observed beneath the outer epithelium in cells treated with BMP4 alone or with BMP4/SB. By contrast, no small vesicles were observed for the control or SB-treated cells (Figure 2A, left column). To determine the location of CX26 in the cell aggregates, immunohistochemistry was performed. In BMP4- or BMP4/SB-treated aggregates, CX26 + vesicles were observed (Figure 2A, right column).
In the confocal analysis of the day 7 aggregates from BMP4/SB-treated cells, CX26-expressing cells were dispersed throughout the numerous CX26 + vesicles (Figure 3A and Supplementary Video 1). These cells formed CX26 + GJs at their cell-cell borders (Figure 3B). In the three-dimensional reconstruction of the confocal images, we observed large planar CX26-containing GJPs (Figure 3C and Supplementary Video 2), which, as we reported previously (Kamiya et al., 2014;Fukunaga et al., 2016), are characteristic of the mouse cochlea. On the other hand, when counting the number of CX26 + cells that composed each CX26 + vesicle, the aggregates treated with BMP4/SB consisted of more CX26 + cells (mean ± SE, 51.6 ± 9.0 cells per CX26 + vesicle) than the aggregates cultured only with BMP4 (mean ± SE, 20.5 ± 2.9 cells per CX26 + vesicle; Figure 3D). In addition, the positive rate of CX26 + cells in the day 7 aggregates treated with BMP4/SB was 3.73% (Figures 3E,F). CX26 + vesicles were found to exist separately from the core region in aggregates treated with BMP4 alone or with BMP4/SB (Figures 4A,B), suggesting that they could be easily isolated. Numerous CX26 + vesicles were mechanically collected as a purified iCX26GJC population (Figure 4C).
ESC-Derived iCX26GJCs That Co-expressed CX30 in Adherent Cultures Formed Gap Junctions
Between days 7 and 9, BMP4/SB-treated aggregates were transferred onto cochlear-derived feeder cells, namely TRICs, as follows. The differentiated regions with CX26 + vesicles were separated from the day 7 aggregates and subcultured in DMEM GlutaMAX with 10% (v/v) FBS on TRIC feeder cells. The subcultured regions containing CX26 + vesicles colonized the TRIC feeder cells. In the adherent cultures at day 10 (3 days after transfer onto TRIC feeder cells), CX26 + vesicle-derived colonies co-expressed CX26, Pax2, PAX8, and E-cadherin (Supplementary Figure 1). In the adherent cultures at day 15 (8 days after transfer), CX26-containing GJPs were observed (Figures 5A-C), as found in cochlear supporting cells (Kamiya et al., 2014;Fukunaga et al., 2016). The mean length of the longest dimension of the GJPs along a single cell border was 1.91 ± 0.11 µm for BMP4/SB-treated day 7 aggregates, which increased significantly to 5.39 ± 0.25 µm in the adherent cultures at day 15 on TRIC feeder cells (Figure 5D), similar to observations when iPSCs were used (Fukunaga et al., 2016). To assess the similarities between these cells and cochlear cells, we characterized the expression of CX30, which is frequently absent in hereditary deafness. CX30 co-localized with CX26 in most CX26-GJPs in the differentiated cells (Figures 5E-K and Supplementary Video 3), suggesting that CX26 and CX30 were the two main components of these GJPs, as was found for cochlear cells (Kamiya et al., 2014;Fukunaga et al., 2016). In addition, the scrape loading-dye transfer assay revealed that mouse ESC-derived iCX26GJCs form functional GJs (Supplementary Figure 3), as in our previous report (Fukunaga et al., 2016).
SB Addition Increased the Number of CX26 + Vesicles in a Dose-Dependent Manner
Finally, to produce a large number of iCX26GJCs in SFEBq cultures, we examined whether the differentiation from ES cells to iCX26GJCs depended on the concentration of SB. Based on a quantitative reverse transcription-PCR (qRT-PCR) analysis, aggregates treated with BMP4 and either 5 or 10 µM SB showed higher expression of Gjb2 mRNA (BMP4/5 µM SB, 1.7-fold greater; BMP4/10 µM SB, 1.7-fold greater) as compared with treatment with BMP4/1 µM SB. Expression of Gjb6 was not affected by the concentration of SB (Figure 6A). In the SB alone group, expression of Gjb2 and Gjb6 was consistent across all three concentrations of SB (Supplementary Figure 3A). We next determined the number of CX26 + vesicles in day 7 aggregates based on immunostaining. Again, aggregates treated with BMP4 and either 5 or 10 µM SB showed a greater number of CX26 + vesicles (BMP4/5 µM SB, 1.5-fold greater; BMP4/10 µM SB, 1.3-fold greater) compared with BMP4/1 µM SB (Figure 6B). Conversely, in the SB alone group, there was no difference in the number of small vesicles across all three concentrations of SB (Supplementary Figure 1B).
FIGURE 6 | In SFEBq cultures, SB had a dose-dependent effect on Gjb2/Gjb6 mRNA expression and the number of CX26 + vesicles. (A) Relative expression of Gjb2 and Gjb6 mRNA in day 7 aggregates from SFEBq cultures after treatment with BMP4 (10 ng/ml) alone or with SB (1-10 µM) as indicated. mRNA expression was normalized to that of cultures treated with BMP4/1 µM SB. The data are expressed as the mean ± SE from five independently generated cell cultures per treatment; for each replicate, expression was assessed in eight aggregates per treatment. (B) The average number of CX26 + vesicles per aggregate from aggregates treated as described in (A). The data are expressed as the mean ± SE from three independently generated cell cultures per treatment; for each replicate, vesicles were quantified for 2-3 aggregates per treatment (n = 8 aggregates in total). Statistical differences among samples were assessed by a one-way ANOVA and Scheffe's multiple comparison test; *p < 0.05; **p < 0.01.
DISCUSSION
ESC/iPSC-derived in vitro models can be a powerful platform for understanding pathological mechanisms and developing therapeutic methods. Previously, we produced CX26 gap junction-forming cells (iCX26GJCs), which have characteristics of cochlear supporting cells, from mouse iPSCs by using BMP4 signaling in combination with SFEBq cultures and subsequent adherent cultures (Fukunaga et al., 2016). In this study, we evaluated the necessary conditions for the differentiation of pluripotent stem cells using SFEBq cultures containing BMP4 and/or SB for large-scale production of iCX26GJCs. SB accelerated BMP4-induced iCX26GJC differentiation in the SFEBq cultures. The SFEBq culture system is currently the most suitable method for inducing neural ectoderm from ESCs/iPSCs, and it leads to the differentiation of these cells into various ectoderm-derived tissues, for example, forebrain, midbrain, hindbrain, optic cup, and otic cup, depending on the culture conditions (Wataya et al., 2008;Eiraku and Sasai, 2012;Koehler et al., 2017;Takata et al., 2017). In SFEBq cultures, BMP4 upregulates a non-neural ectoderm marker (Dlx3) and downregulates a neuroectoderm marker (Sox1; Koehler et al., 2013). In contrast, SB induces suppression of brachyury and induces expression of transcription factor activator protein 2 (AP2, also known as TFAP2), and it is thought to promote proper non-neural induction after BMP4 treatment (Koehler et al., 2013;Koehler and Hashino, 2014). The AP2 transcription factor regulates the expression of a variety of genes during development (Saffer et al., 1991;Nottoli et al., 1998), and, based on bioinformatic predictions, it is inferred that several
transcription factors (including AP2) are likely to have a role in regulating the expression of Gjb2 and Gjb6 (Common et al., 2005;Jayalakshmi et al., 2019). Furthermore, Gjb2 is upregulated by AP2 in normal tissue (Tu et al., 2001;Adam and Cyr, 2016). In SFEBq cultures, AP2 is used as a marker of non-neural ectoderm, and its expression is observed in epithelium-like structures formed outside of aggregates (Koehler et al., 2013, 2017). From these reports and our results, we speculate that the addition of SB to BMP-based SFEBq cultures increased the non-neural ectoderm region expressing AP2 in the aggregate (Koehler et al., 2013), resulting in increased Gjb2/Gjb6 mRNA expression and production of CX26 + vesicles. That is, SB strongly promoted BMP4-mediated differentiation into iCX26GJCs in SFEBq cultures (Figure 7). On the other hand, in the modified SFEBq culture in this study, we confirmed that the Activin/Nodal/TGF-β pathway was inhibited by the addition of SB431542 (Supplementary Figure 5), as in the previous study (Osakada et al., 2009).
FIGURE 7 | A schematic illustration of the effect of SB431542 on CX26 GJ formation in BMP4-induced ESC differentiation. In the BMP4-based inner-ear three-dimensional differentiation from ESCs, addition of SB431542 was associated with significantly higher mRNA levels of Gjb2 (CX26) and Gjb6 (CX30) and CX26 + vesicle counts relative to cultures without the addition of SB431542. SB431542 was demonstrated to be an accelerator of GJ formation.
In many drug-screening reports, 0.5-1.0 × 10^4 HeLa cells per well are seeded in 96-well plates for use the next day. Assuming instead that the cells are used after culturing for a certain period (7 days), about 2 × 10^3 HeLa cells per well would be required (Ke et al., 2004). In this study, about 200-300 CX26 + cells per aggregate were confirmed. Since it takes at least 7 days to complete the induction to iCX26GJCs, we expect that 7-10 aggregates/well (10 × 200 = 2,000; 7 × 300 = 2,100) will be required when the cells are used for drug screening.
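The aggregate-per-well estimate above follows directly from the cell counts; the short check below (a sketch, not part of the paper's methods) simply recomputes it for the two ends of the reported 200-300 cells-per-aggregate range.

```python
# Quick check of the aggregates-per-well estimate; the target of 2,000 cells/well
# and the 200-300 CX26+ cells/aggregate range are the figures quoted in the text.
import math

cells_needed_per_well = 2_000
for cx26_cells_per_aggregate in (200, 300):
    n_aggregates = math.ceil(cells_needed_per_well / cx26_cells_per_aggregate)
    total = n_aggregates * cx26_cells_per_aggregate
    print(f"{cx26_cells_per_aggregate} CX26+ cells/aggregate -> {n_aggregates} aggregates/well ({total} cells)")
# -> 200 cells/aggregate gives 10 aggregates (2,000 cells); 300 gives 7 aggregates (2,100 cells),
#    matching the 7-10 aggregates/well estimate above.
```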
On the other hand, in this study, we separated the CX26+ vesicles from the aggregates by using forceps under a microscope. This step may be a drawback for large-scale cell production in the future. Cell separation techniques include physical methods (manual pipetting, density gradient centrifugation, cell adhesion) and affinity-based methods (FACS, MACS; Diogo et al., 2012). Recently, several methods have been reported for purifying target cells induced from ES/iPS cells. For example, pure retinal pigment epithelial cell sheets are produced by a combination of manual picking and subculture of target cells (Iwasaki et al., 2016).
In addition, it has been reported that corneal epithelial cells can be purified by a combination of MACS and differences in cell adhesion (Shibata et al., 2020). A method for purifying cardiomyocytes by density gradient centrifugation using Percoll has also been reported (Xu et al., 2002), as have purification methods based on selective culture media, which require essentially no manual intervention (Tohyama et al., 2016). Based on these reports, a future task will be to establish purification conditions that do not rely on manual picking, such as density gradient centrifugation, differences in cell adhesion, or antibody-based FACS.
Increases in mRNA expression and in the number of CX26 + small vesicles were found to depend on the concentration of SB. However, with respect to the expression of Gjb2 and the number of CX26 + vesicles, there was no significant difference between treatment with 5 or 10 µM SB (Figure 6). These results indicated that iCX26GJCs could be most efficiently induced with the BMP4/5 µM SB combination. These data suggest that SB promotes BMP4-mediated production of CX26 + vesicles in a dose-dependent manner, thereby increasing the yield of highly purified iCX26GJCs. This is the first study to show that SB accelerates BMP4-induced Gjb2/Gjb6 expression and CX26 + vesicle production during in vitro differentiation of ESCs (Figure 7). By controlling the concentration of SB in combination with CX26 + vesicle purification, large-scale production of highly purified iCX26GJCs for high-throughput screening of drugs that target GJB2-related deafness may be possible.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at Juntendo University School of Medicine.
AUTHOR CONTRIBUTIONS
KK: project administration and supervision. KK and KI: conceptualization. KK and IF: data curation, formal analysis, funding acquisition, investigation, methodology, visualization, writing-original draft preparation, and writing-review and editing. KK, IF, CC, YO, KD, SO, AK, and KI: resources. KK, IF, CC, YO, KD, SO, and AK: validation. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by Grants from the JSPS KAKENHI (Nos. 17H04348 and 16K15725 to KK and Nos. 19K09914, 17K16948, and 15K20229 to IF), Subsidies to Private Schools (to KK and IF), Japan Agency for Medical Research and Development (AMED, Nos. 15ek0109125h0001, 19ae0101050h0002, and 19ek0109401h0002 to KK), and the Takeda Science Foundation (to KK).
ACKNOWLEDGMENTS
This manuscript has been released as a pre-print at bioRxiv (Fukunaga et al., 2020).
Higher homotopies and Maurer-Cartan algebras: Quasi-Lie-Rinehart, Gerstenhaber, and Batalin-Vilkovisky algebras
Higher homotopy generalizations of Lie-Rinehart algebras, Gerstenhaber-, and Batalin-Vilkovisky algebras are explored. These are defined in terms of various antisymmetric bilinear operations satisfying weakened versions of the Jacobi identity, as well as in terms of operations involving more than two variables of the Lie triple systems kind. A basic tool is the Maurer-Cartan algebra-the algebra of alternating forms on a vector space so that Lie brackets correspond to square zero derivations of this algebra-and multialgebra generalizations thereof. The higher homotopies are phrased in terms of these multialgebras. Applications to foliations are discussed: objects which serve as replacements for the Lie algebra of vector fields on the "space of leaves" and for the algebra of multivector fields are developed, and the spectral sequence of a foliation is shown to arise as a special case of a more general spectral sequence including as well the Hodge-de Rham spectral sequence.
Introduction
In this paper we will explore, in the framework of Lie-Rinehart algebras and suitable higher homotopy generalizations thereof, various antisymmetric bilinear operations satisfying weakened versions of the Jacobi identity, as well as similar operations involving more than two variables; such operations have recently arisen in algebra, differential geometry, and mathematical physics but are lurking already behind a number of classical developments. Our aim is to somewhat unify these structures by means of the relationship between Lie-Rinehart, Gerstenhaber, and Batalin-Vilkovisky algebras which we first observed in our paper [19]. This will be, perhaps, a first step towards taming the bracket zoo that arose recently in topological field theory, cf. what we wrote in the introduction to [19]. The notion of Lie-Rinehart algebra and its generalization are likely to provide a good conceptual framework for that purpose. It will also relate new notions like those of Gerstenhaber and Batalin-Vilkovisky algebra, and generalizations thereof, with classical ones like those of connection, curvature, and torsion, as well as with less classical ones like Yamaguti's triple product [62] and operations of the kind introduced in [35]; it will connect new developments with old results due to E. Cartan [9] and Nomizu [49] describing the geometry of Lie groups and of reductive homogeneous spaces and, more generally, with more recent results in the geometry of Lie loops [34,55]. We will see that the new structures have incarnations in mathematical nature, e. g. in the theory of foliations. The higher homotopies which are exploited below are of a special kind, though, where only the first of an (in general) infinite family is non-zero.
Let R be a commutative ring with 1. A Lie-Rinehart algebra (A, L) consists of a commutative R-algebra A, an R-Lie algebra L, an A-module structure on L, and an action L ⊗ R A → A of L on A by derivations. These are required to satisfy suitable compatibility conditions which arise by abstraction from the pair (A, L) = (C ∞ (M ), Vect(M )) consisting of the smooth functions C ∞ (M ) and smooth vector fields Vect(M ) on a smooth manifold M . In a series of papers [16][17][18][19][20][21], we studied these objects and variants thereof and used them to solve various problems in algebra and geometry. See [23] for a survey and leisurely introduction. In differential geometry, a special case of a Lie-Rinehart algebra arises from the space of sections of a Lie algebroid.
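For orientation, the compatibility conditions just alluded to are the usual ones, recalled here merely as a reminder and in the notation of the present paper: for $a, b \in A$ and $\alpha, \beta \in L$,

$$(a\alpha)(b) = a\,(\alpha(b)), \qquad [\alpha, a\beta] = a[\alpha, \beta] + \alpha(a)\,\beta,$$

that is, the action of $L$ on $A$ is $A$-linear in the $L$-variable, and the bracket on $L$ interacts with the $A$-module structure through a Leibniz-type rule.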
In [19,21,22] we have shown that certain Gerstenhaber and Batalin-Vilkovisky algebras admit natural interpretations in terms of Lie-Rinehart algebras. The starting point was the following observation: It is nowadays well understood that a skew-symmetric bracket on a vector space g is a Lie-bracket (i. e. satisfies the Jacobi identity) if and only if the coderivation ∂ on the graded exterior coalgebra Λ ′ [sg] corresponding to the bracket on g has square zero, i. e. is a differential; this coderivation is then the ordinary Lie algebra homology operator. This kind of characterization is not available for a general Lie-Rinehart algebra: Given a commutative algebra A and an A-module L, a Lie-Rinehart structure on (A, L) cannot be characterized in terms of a coderivation on Λ A [sL] with reference to a suitable coalgebra structure on Λ A [sL] (unless the L-action on A is trivial); in fact, in the Lie-Rinehart context, a certain dichotomy between A-modules and chain complexes which are merely defined over R persists throughout; cf. e. g. the Remark 2.5.2 below. On the other hand, Lie-Rinehart algebra structures on (A, L) correspond to Gerstenhaber algebra structures on the exterior A-algebra Λ A [sL]; cf. e. g. [38]. In particular, when A is the ground ring and L just an ordinary Lie algebra g, under the obvious identification of Λ[sg] and Λ ′ [sg] as graded R-modules, the (uniquely determined) generator of the Gerstenhaber bracket on Λ[sg] is exactly the Lie algebra homology operator on Λ ′ [sg]. Given a general commutative algebra A and an A-module L, the interpretation of Lie-Rinehart algebra structures on (A, L) in terms of Gerstenhaber algebra structures on Λ A [sL] provides, among other things, a link between Gerstenhaber's and Rinehart's papers [13] and [52] (which seems to have been completely missed in the literature). In the present paper, we will extend this link to suitable higher homotopy notions which we refer to by the attribute "quasi"; we will introduce Lie-Rinehart triples, quasi-Lie-Rinehart algebras, and certain quasi-Gerstenhaber algebras and quasi-Batalin-Vilkovisky algebras, and we will explore the various relationships between these notions. Below we will comment on the relationship with notions of quasi-Gerstenhaber and quasi-Batalin-Vilkovisky algebras already in the literature.
When an algebraic structure (e. g. a commutative algebra, Lie algebra, etc.) is "resolved" by an object, which we here somewhat vaguely refer to as a "resolution" (free, or projective, or variants thereof) having the given structure as its zero-th homology, on the resolution, the algebraic structure is in general defined only up to higher homotopies; likewise, an A ∞ structure is defined in terms of a bar construction or variants thereof, cf. e. g. [31], [32] and the references there. Exploiting higher homotopies of this kind, in a series of articles [27][28][29][30] we constructed small free resolutions for certain classes of groups from which we then were able to do explicit calculations in group cohomology which until today still cannot be done by other methods. A historical overview related with A ∞ -structures may be found in the Addendum to [33]; cf. also [24] and [31] for more historical comments.
In the present paper, we will explore a certain higher homotopy related with Lie-Rinehart algebras and variants thereof. A Lie algebra up to higher homotopies (equivalently: L ∞ -algebra) on an R-chain complex h may be defined in terms of a coalgebra perturbation of the differential on the graded symmetric coalgebra on the suspension of h; alternatively, it may be defined in terms of a suitable Maurer-Cartan algebra (see below). Since a genuine Lie-Rinehart structure on (A, L) cannot be characterized in terms of a coderivation on Λ A [sL], the first alternative breaks down for a general Lie-Rinehart algebra. The higher homotopies we will explore in the present paper do not live on an object close to a resolution of the above kind or close to a symmetric coalgebra; they may conveniently be phrased in terms of an object of a rather different nature which, extending terminology introduced by van Est [60], we refer to as a Maurer-Cartan algebra. A special case thereof arises in the following fashion: Given a finite dimensional vector space g over a field k, skew symmetric brackets on g correspond bijectively to degree −1 derivations of the graded algebra of alternating forms on g (with reference to multiplication of forms), and those brackets which satisfy the Jacobi identity correspond to square zero derivations, i. e. differentials. This observation generalizes to Lie-Rinehart algebras of the kind (A, L) under the assumption that L be a finitely generated projective A-module; see Theorem 2.2.16 below. For an ordinary Lie algebra g over a field k, in [60], the resulting differential graded algebra Alt(g, k) (which calculates the cohomology of g) has been called Maurer-Cartan algebra. The main point of this paper is that higher homotopy variants of the notion of Maurer-Cartan algebra provide the correct framework to phrase certain higher homotopy versions of Lie-Rinehart-, Gerstenhaber, and Batalin-Vilkovisky algebras to which we will refer as quasi-Lie-Rinehart-, quasi-Gerstenhaber, and quasi-Batalin-Vilkovisky algebras.
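To make the correspondence just described concrete: when $L$ is a finitely generated projective $A$-module, the derivation of the algebra of alternating forms associated with a bracket and an action is, up to sign conventions, the familiar Cartan-Chevalley-Eilenberg-Rinehart operator; for $\omega \in \mathrm{Alt}_A^n(L, A)$ and $\alpha_1, \dots, \alpha_{n+1} \in L$,

$$(d\omega)(\alpha_1, \dots, \alpha_{n+1}) = \sum_{i=1}^{n+1} (-1)^{i-1}\, \alpha_i\bigl(\omega(\alpha_1, \dots, \widehat{\alpha_i}, \dots, \alpha_{n+1})\bigr) + \sum_{i < j} (-1)^{i+j}\, \omega\bigl([\alpha_i, \alpha_j], \alpha_1, \dots, \widehat{\alpha_i}, \dots, \widehat{\alpha_j}, \dots, \alpha_{n+1}\bigr),$$

and the requirement $d^2 = 0$ then encodes the Jacobi identity for the bracket together with the requirement that the action be a Lie algebra action by derivations.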
The differential graded algebra of alternating forms on a Lie algebra occurs, at least implicitly, in [10] and has a long history of use since then, cf. [41], and once I learnt in a talk by van Est that this algebra has been used by E. Cartan in the 1930's to characterize the structure of Lie groups and Lie algebras.
For the reader's convenience, we will explain briefly and somewhat informally a special case of a quasi-Lie-Rinehart algebra at the present stage: Let (M, F ) be a foliated manifold, the foliation being written as F , let τ F be the tangent bundle of the foliation F , and choose a complement ζ of τ F so that the tangent bundle τ M of M may be written as τ M = τ F ⊕ ζ. Let L F ⊆ Vect(M ) be the Lie algebra of smooth vector fields tangent to the foliation F , and let Q be the C ∞ (M )-module Γ(ζ) of smooth sections of ζ. The Lie bracket in Vect(M ) induces a left L F -module structure on Q-the Bott connection-and the space Q L F of invariants, that is, of vector fields on M which are horizontal (with respect to the decomposition τ M = τ F ⊕ ζ) and constant on the leaves inherits a Lie bracket. The standard complex A arising from a fine resolution of the sheaf of germs of functions on M which are constant on the leaves acquires a differential graded algebra structure and has H 0 (A) equal to the algebra of functions on M which are constant on the leaves, and the Lie algebra Q L F of invariants arises as H 0 (Q) where Q is the complex coming from a fine resolution of the sheaf V Q of germs of vector fields on M which are horizontal (with respect to the decomposition Γ(τ M ) = L F ⊕ Q) and constant on the leaves. In a sense, Q L F is the Lie algebra of vector fields on the "space of leaves", that is, the space of sections of a certain geometric object which may be seen as a replacement for the in general non-existant tangent bundle of the "space of leaves". Within our approach, this philosophy is pushed further in the following fashion: The pair (A, Q) acquires what we will call a quasi-Lie-Rinehart structure in an obvious fashion; see (4.12) and (4.15) below for the details. We view A as the algebra of generalized functions and Q as the generalized Lie algebra of vector fields for the foliation. The pair (H 0 (A), H 0 (Q)) is necessarily a Lie-Rinehart algebra, and the entire cohomology (H * (A), H * (Q)) acquires a graded Lie-Rinehart algebra structure. As a side remark, we note that here the resolution of the sheaf V Q is by no means a projective one; indeed, it is a fine resolution of that sheaf, the bracket on Q is not an ordinary Lie(-Rinehart) bracket, in particular, does not satisfy the Jacobi identity, and the entire additional structure is encapsulated in certain homotopies which may conveniently be phrased in terms of a suitable Maurer-Cartan algebra which here arises from the de Rham algebra of M . When the foliation does not come from a fiber bundle, the structure of the graded Lie-Rinehart algebra (H * (A), H * (Q)) will in general be more complicated than that for the case when the foliation comes from a fiber bundle. Thus the cohomology of a quasi-Lie-Rinehart algebra involves an ordinary Lie-Rinehart algebra in degree zero but in general contains considerably more information. In particular, in the case of a foliation it contains more than just "functions and vector fields on the space of leaves"; the additional information partly includes the history of the "space of leaves", that is, it includes information as to how this space arises from the foliation, how the leaves sit inside the ambient space, about singularities, etc. 
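Explicitly (a standard formula, recalled here only for orientation, in the notation above), the Bott connection referred to in the preceding paragraph is given, for $X \in L_F$ and $\xi \in Q = \Gamma(\zeta)$, by

$$X \cdot \xi = \mathrm{pr}_{\zeta}[X, \xi],$$

where $\xi$ is regarded as a vector field on $M$ via the splitting $\tau_M = \tau_F \oplus \zeta$, the bracket is the ordinary bracket of vector fields, and $\mathrm{pr}_{\zeta}$ denotes the projection onto $\Gamma(\zeta)$ determined by that splitting. The invariants $Q^{L_F}$ mentioned above are then exactly the sections $\xi$ with $X \cdot \xi = 0$ for all $X \in L_F$, that is, the horizontal vector fields which are constant on the leaves.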
In Section 6 we will show that, when the foliation is transversely orientable with a basic transverse volume form ω, a corresponding quasi-Batalin-Vilkovisky algebra isolated in Theorem 6.10 below has an underlying quasi-Gerstenhaber algebra which, in turn, yields a kind of generalized Schouten algebra (generalized algebra of multivector fields) for the foliation; the cohomology of this quasi-Gerstenhaber algebra may then be viewed as the Schouten algebra for the "space of leaves". See (6.15) below for details.
Thus our approach will provide new insight, for example, into the geometry of foliations; see in particular (1.12), (2.10), (4.15), (6.15) below. The formal structure behind foliations which we will phrase in terms of quasi-Lie-Rinehart algebras and its offspring does not seem to have been noticed in the literature before-indeed, it involves, among a number of other things, a suitable grading which seems unfamiliar in the literature on quasi-Gerstenhaber and quasi-Batalin-Vilkovisky algebras, cf. (6.17) below-, nor the formal connections with Yamaguti's triple product and with Lie loops.
A simplified version of the question we will examine is this: Given a Lie algebra g with a decomposition g = h ⊕ q where h is a Lie subalgebra, what kind of structure does q then inherit? Variants of this question and possible answers may be found at a number of places in the literature, cf. e. g. [9,49] where, in particular, in a global situation, an answer is given for reductive homogeneous spaces. In the framework of Lie-Rinehart algebras, this issue does not seem to have been raised yet, not even for the special case of Lie algebroids.
As a byproduct, we find a certain formal relationship between Yamaguti's triple product and certain forms Φ * * which may be found in [41]. In particular, the failure of a quasi-Gerstenhaber bracket to satisfy the Jacobi identity is measured by an additional piece of structure which we refer to as an h-Jacobiator ; an h-Jacobiator, in turn, is defined in terms of Koszul's forms Φ 3 * . Likewise the quadruple and quintuple products studied in Section 3 below are related with Koszul's forms, and these, in turn, are related with certain higher order operations which may be found e. g. in [55]. We do not pursue this here; we hope to eventually come back to it in another article.
A Courant algebroid has been shown in [54] to acquire an L ∞ -structure, that is, a Lie algebra structure up to higher homotopies. The present paper paves, perhaps, the way towards finding a higher homotopy Lie-Rinehart or higher homotopy Lie algebroid structure on a Courant algebroid incorporating the Courant algebroid structure.
Graded quasi-Batalin-Vilkovisky algebras have been explored already in [14]. Our notions of quasi-Gerstenhaber and quasi-Batalin-Vilkovisky algebra, while closely related, do not coincide with those in [4], [5], [14], [40], [53]. In particular, our algebras are bigraded while those in the quoted references are ordinary graded algebras; the appropriate totalization (forced, as noted above, by our application of the newly developed algebraic structure to foliations and written in Section 6 below as the functor Tot) of our bigraded objects leads to differential graded objects which are not equivalent to those in the quoted references. See Remark 6.17 below for more details on the relationship between the various notions. Also the approaches differ in motivation; the guiding idea behind [14] and [40] seems to be Drinfeld's quasi-Hopf algebras. Our motivation, as indicated above, comes from foliations and the search for appropriate algebraic notions encapsulating the infinitesimal structure of the "space of leaves" and its history, as well as the search for a corresponding Lie-Rinehart generalization of the operations on a reductive homogenous space isolated by Nomizu and elaborated upon by Yamaguti (mentioned earlier) and taken up again by M. Kinyon and A. Weinstein in [35]. Indeed, the present paper was prompted by the preprint versions of [35] and [61]. It is a pleasure to dedicate it to Alan Weinstein. Throughout this work I have been stimulated by M. Kinyon via some e-mail correspondence at an early stage of the project as well as by M. Bangoura, P. Michor, D. Roytenberg and Y. Kosmann-Schwarzbach. I am indebted to J. Stasheff and to the referees for a number of comments on a draft of the manuscript which helped improve the exposition.
This work was partly carried out and presented during two stays at the Erwin Schrödinger Institute at Vienna. I wish to express my gratitude for hospitality and support.
Lie-Rinehart triples
Let R be a commutative ring with 1, not necessarily a field; R could be, for example, the algebra of smooth functions on a smooth manifold, cf. [20]. The problem we wish to explore is this: Question 1.1. Given a Lie-Rinehart algebra (A, L) and an A-module direct sum decomposition L = H ⊕ Q inducing an (R, A)-Lie algebra structure on H, what kind of structure does then Q inherit, and by what additional structure are H and Q related? Question 1.2. Given an (R, A)-Lie algebra structure on H and the (new) structure (which we will isolate below) on Q, what kind of additional structure turns the A-module direct sum L = H ⊕ Q into an (R, A)-Lie algebra in such a way that the latter induces the given structure on H and Q? Example 1.3.1. Let g be an ordinary R-Lie algebra with a decomposition g = h ⊕ q where h is a Lie subalgebra. Recall that the decomposition of g is said to be reductive [49] provided [h, q] ⊆ q. Such a reductive decomposition arises from a reductive homogeneous space [9,34,49,55,62]. For example, every homogeneous space of a compact Lie group or, more generally, of a reductive Lie group, is reductive. Nomizu has shown that, on such a reductive homogeneous space, the torsion and curvature of the "canonical affine connection of the second kind" (affine connection with parallel torsion and curvature) yield a bilinear and a ternary operation which, at the identity, come down to a certain bilinear and ternary operation on the constituent q [49], and Yamaguti gave an algebraic characterization of pairs of such operations [62]. Example 1.3.2. A quasi-Lie bialgebra (h, q), cf. [37], consists of a (real or complex) Lie algebra h and a (real or complex) vector space q with suitable additional structure where q = h * , so that g = h ⊕ h * is an ordinary Lie algebra; the pair (g, h) is occasionally referred to in the literature as a Manin pair . Quasi-Lie bialgebras arise as classical limits of quasi-Hopf algebras; these, in turn, were introduced by Drinfeld [11]. Example 1.4.1. Let R be the field R of real numbers, let (M, F ) be a foliated manifold, let τ F be the tangent bundle of the foliation F , and choose a complement ζ of τ F so that the tangent bundle τ M of M may be written as τ M = τ F ⊕ ζ.
Thus, as a vector bundle, ζ is canonically isomorphic to the normal bundle of the foliation. Let (A, L) be the Lie-Rinehart algebra (C ∞ (M ), Vect(M )), let L F ⊆ L be the (R, A)-Lie algebra of smooth vector fields tangent to the foliation F , and let Q be the A-module Γ(ζ) of smooth sections of ζ. Then L = L F ⊕ Q is a A-module direct sum decomposition of the (R, A)-Lie algebra L, and the question arises what kind of Lie structure Q carries. This question, in turn, may be subsumed under the more general question to what extent the "space of leaves" can be viewed as a smooth manifold. This more general question is not only of academic interest since, for example, in interesting physical situations, the true classical state space of a constrained system is the "space of leaves" of a foliation which is in general not fibrating, and the Noether theorems are conveniently phrased in the framework of foliations.
Example 1.4.2. Let R be the field C of complex numbers, M a smooth complex manifold M , A the algebra of smooth complex functions on M , L the (C, A)-Lie algebra of smooth complexified vector fields, and let L ′ and L ′′ be the spaces of smooth sections of the holomorphic and antiholomorphic tangent bundle of M , respectively. Then L ′ and L ′′ are (C, A)-Lie algebras, and (A, L ′ , L ′′ ) is a twilled Lie-Rinehart algebra in the sense of [21,22]. Adjusting the notation to that in (1.4.1), let H = L ′ and Q = L ′′ . Thus, in this particular case, Q = L ′′ is in fact an ordinary (R, A)-Lie algebra, and the additional structure relating H and Q is encapsulated in the notion of twilled Lie-Rinehart algebra. The integrability condition for an almost complex structure may be phrased in term of the twilled Lie-Rinehart axioms; see [21,22] for details.
The situation of Example 1.4.1 is somewhat more general than that of Example 1.4.2 since in Example 1.4.1 the constituent Q carries a structure which is more general than that of an ordinary (R, A)-Lie algebra. Another example for a decomposition of the kind spelled out in Questions 1.1 and 1.2 above arises from combining the situations of Example 1.4.1 and of Example 1.4.2, that is, from a smooth manifold foliated by holomorphic manifolds, and yet another example arises from a holomorphic foliation. Abstracting from these examples, we isolate the notion of Lie-Rinehart triple. For ease of exposition, we also introduce the weaker concepts of almost pre-Lie-Rinehart triple and pre-Lie-Rinehart triple. Distinguishing between these three notions may appear pedantic but will clarify the statement of Theorem 2.7 below. See also Remark 2.8.4 below. As for the terminology we note that our notion of triple is not consistent with the usage of Manin triple in the literature. However, a Lie-Rinehart algebra involves a pair consisting of an algebra and a Lie algebra, and in this context, it is also common in the literature to refer to this structure as a pair which, in turn, is not consistent with the notion of Manin pair . We therefore prefer to use our terminology Lie-Rinehart triple etc.
Let A be a commutative R-algebra. Consider two A-modules H and Q, together with skew-symmetric R-bilinear brackets and operations of the kind (1.5). We will say that the data (A, H, Q) constitute an almost pre-Lie-Rinehart triple provided they satisfy (i), (ii), and (iii) below.
and, furthermore, an operation in the obvious way, that is, by means of the association By construction, the values of the adjoint of (1.6.3) then lie in Der R (A), that is, this adjoint is then of the form at first it is only R-bilinear but is readily seen to be A-bilinear. The formula (1.6.2) is then merely a decomposition of the initially given bracket on L into components according to the direct sum decomposition of L into H and Q, and (1.6.3) is accordingly a decomposition of the L-action on A. Furthermore, given x, y ∈ H and ξ ∈ Q, in L we have the identity where a ∈ A, x, y ∈ H, ξ, η, ϑ ∈ Q.
Recall [18] that, given a commutative algebra A and Lie-Rinehart algebras (A, L ′ ), (A, L) and (A, L ′′ ) where L ′ is an ordinary A-Lie algebra, an extension of Lie-Rinehart algebras is an extension of A-modules which is also an extension of ordinary Lie algebras so that the projection from L to L ′′ is a morphism of Lie-Rinehart algebras. Theorem 1.9 entails at once the following.
Proof of Theorem 1.9. The bracket (1.6.1) is plainly skew-symmetric. Hence the proof comes down to relating the Jacobi identity in L and the Lie-Rinehart compatibility properties with (1.9.1)-(1.9.7).
Thus, suppose that the bracket [ · , · ] on L = H ⊕ Q given by (1.6.1) and the operation L ⊗ R A → A given by (1.6.3) turn (A, L) into a Lie-Rinehart algebra. Given ξ ∈ Q and x ∈ H, we have [ξ, x] = ξ · x − x · ξ; since L acts on A by derivations, for a ∈ A, we conclude that is, (1.9.4) holds. Next, since L is a Lie algebra, its bracket satisfies the Jacobi identity. Hence, given x ∈ H and ξ, η ∈ Q, whence, comparing components in H and Q, we conclude that is, (1.9.2) and (1.9.5) hold.
Likewise given ξ ∈ Q and x, y ∈ H, whence, comparing components in Q and H, we conclude that is, (1.9.3) and (1.5.12) hold; notice that (1.5.12) holds already by assumption.
Next, given ξ, η, ϑ ∈ Q, Thus the Jacobi identity implies that is, (1.9.6) and (1.9.7) are satisfied. Conversely, suppose that the brackets [ · , · ] H and [ · , · ] Q on H and Q, respectively, and the operations (1.5.3), (1.5.4), and (1.5.5), are related by (1.9.1)-(1.9.7). We can then read the above calculations backwards and conclude that the bracket (1.6.1) on L satisfies the Jacobi identity and that the operation (1.6.3) yields a Lie algebra action of L on A by derivations. The remaining Lie-Rinehart algebra axioms hold by assumption. Thus (A, L) is then a Lie-Rinehart algebra. Given an (R, A) Lie algebra L and an (R, A) Lie subalgebra H, the invariants A H ⊆ A constitute a subalgebra of A; we will then denote the normalizer of H in L in the sense of Lie algebras by L H , that is, L H consists of all α ∈ L having the property that [α, β] ∈ H whenever β ∈ H. Notice that H is here viewed as an ordinary A H -Lie algebra, the H-action on A H being trivial by construction.
Proof. Indeed, given α ∈ Q, the bracket [α, β] lies in H for every β ∈ H if and only if β · α = 0 ∈ Q for every β ∈ H, that is, if and only if α is invariant under the H-action on Q. The rest of the claim is an immediate consequence of Theorem 1.9.
1.12. Illustration. Under the circumstances of Example 1.4.1, Corollary 1.11 obtains, with H = L F . Now A H = A L F ⊆ A is the algebra of smooth functions which are constant on the leaves, that is, the algebra of functions on the "space of leaves", and L H consists of the vector fields which "project" to the "space of leaves". Indeed, given a function f which is constant on the leaves and vector fields X ∈ L H and Y ∈ L F , necessarily Y (Xf ) = [Y, X]f + X(Y f ) = 0 whence Xf is constant on the leaves as well. Thus we may view Q H as the Lie algebra of vector fields on the "space of leaves", that is, as the space of sections of a certain geometric object which serves as a replacement for the in general non-existent tangent bundle of the "space of leaves".
Remark 1.13. In analogy to the deformation theory of complex manifolds, given a Lie-Rinehart triple (A, H, Q), we may view H and Q as what corresponds to the antiholomorphic and holomorphic tangent bundle, respectively, and accordingly study deformations of the Lie-Rinehart triple via morphisms ϑ: H → Q and spell out the resulting infinitesimal obstructions. This will include a theory of deformations of foliations. Details will be given elsewhere.
2. Lie-Rinehart triples and Maurer-Cartan algebras
In this section we will explore the relationship between Lie-Rinehart triples and suitably defined Maurer-Cartan algebras. In particular, we will show that, under an additional assumption, the two notions are equivalent; see Theorem 2.8.3 below for details. As an application we will explain how the spectral sequence of a foliation and the Hodge-de Rham spectral sequence arise as special cases of a single conceptually simple construction. More applications will be given in subsequent sections.
2.1. Maurer-Cartan algebras.
Given an A-module L and an R-derivation d of degree −1 on the graded A-algebra Alt A (L, A), we will refer to (Alt A (L, A), d) as a Maurer-Cartan algebra (over L) provided d has square zero, i. e. is a differential.
Recall that a multicomplex (over R) is a bigraded R-module {M p,q } p,q together with an operator d j : M p,q → M p+j,q−j+1 for every j ≥ 0 such that the sum d = d 0 +d 1 +. . . is a differential, i. e. dd = 0, cf. [43], [44]. The idea of multicomplex occurs already in [15] and was exploited at various places in the literature including [29], [30]. We note that an infinite sequence of the kind (d 2 , d 3 , ...) is a system of higher homotopies. We will refer to a multicomplex (M ; d 0 , d 1 , d 2 , . . . ) whose underlying bigraded object M is endowed with a bigraded algebra structure such that the operators d j are derivations with respect to this algebra structure as a multi R-algebra.
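Spelled out in components, the single requirement dd = 0 amounts to the sequence of identities
\[
\sum_{i+j=n} d_i\, d_j = 0 \qquad (n \geq 0),
\]
that is, d 0 d 0 = 0, d 0 d 1 + d 1 d 0 = 0, d 1 d 1 + d 0 d 2 + d 2 d 0 = 0, and so on; in particular, the operator d 0 is itself a differential.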
Given A-modules H and Q, consider the bigraded A-algebra Alt A (Q, Alt A (H, A)); we will refer to a multi R-algebra structure (beware: not multi A-algebra structure) on this bigraded A-algebra having at most d 0 , d 1 , d 2 non-zero as a Maurer-Cartan algebra structure. The resulting multi R-algebra will then be written as in (2.1.1) and referred to as a (multi) Maurer-Cartan algebra (over (Q, H)). Usually we will discard "multi" and more simply refer to a Maurer-Cartan algebra. We note that, for degree reasons, when (2.1.1) is a Maurer-Cartan algebra, the operator d 2 is necessarily an A-derivation (since d 2 (a) = 0 for every a ∈ A ∼ = Alt 0 A (Q, Alt 0 A (H, A))).
Remark 2.1.2. In this definition, we could allow for non-zero derivations of the kind d j for j ≥ 3 as well. This would lead to a more general notion of multi Maurer-Cartan algebra not studied here. The presence of a non-zero operator at most of the kind d 2 is an instance of a higher homotopy of a special kind which suffices to explain the "quasi" structures explored later in the paper.
into a Maurer-Cartan algebra. However, not every Maurer-Cartan structure on Alt A (Q ⊕ H, A) arises in this fashion, that is, a multi Maurer-Cartan algebra structure captures additional structure of interaction between A, Q, and H, indeed, it captures essentially a Lie-Rinehart triple structure. The purpose of the present section is to make this precise.
For later reference, we spell out the following, the proof of which is immediate.
is a (multi) Maurer-Cartan algebra if and only if the following identities are satisfied.
2.2. Lie-Rinehart and Maurer-Cartan algebras. Let A be a commutative R-algebra and L an A-module, together with a skew-symmetric R-bilinear bracket and an operation Let M be a graded A-module, together with an operation subject to the following requirement: We refer to an operation of the kind (2.
where α 1 , . . . , α n ∈ L and where as usual 'ˆ' indicates omission of the corresponding term. We note that, when the values of the homogeneous alternating function f on L of n − 1 variables lie in M q , |f | = q − n + 1. Here and below our convention is that, given graded objects N and M , a homogeneous morphism h: N p → M q has degree |h| = q − p. This is the standard grading on the Hom-functor for graded objects.
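For orientation we note that, when M is concentrated in degree zero so that no signs coming from the grading of M intervene, the operator just described reduces, up to the precise sign conventions adopted in the text, to the familiar Chevalley-Eilenberg-Rinehart expression
\[
(d f)(\alpha_1, \dots, \alpha_n)
 = \sum_{i} (-1)^{i-1}\, \alpha_i \cdot f(\alpha_1, \dots, \widehat{\alpha_i}, \dots, \alpha_n)
 + \sum_{i < j} (-1)^{i+j}\, f([\alpha_i, \alpha_j], \alpha_1, \dots, \widehat{\alpha_i}, \dots, \widehat{\alpha_j}, \dots, \alpha_n).
\]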
This pairing induces a (bi)graded pairing which is compatible with the generalized CCE operators. An A-module M will be said to have property P provided for x ∈ M , φ(x) = 0 for every φ: M → A implies that x is zero. For example, a projective A-module has property P, or a reflexive A-module has this property as well or, more generally, any A-module M such that the canonical map from M into its double A-dual is injective. On the other hand, for example, for a smooth manifold X, the C ∞ (X)module D of formal (= Kähler) differentials does not have property P: On the real line, with coordinate x, consider the functions f (x) = sin x and g(x) = cos x. The formal differential df − gdx is non-zero in D; however, the C ∞ (X)-linear maps from D to C ∞ (X) are the smooth vector fields, whence every such C ∞ (X)-linear map annihilates the formal differential df − gdx. Proof. A familiar calculation shows that d is a differential if and only if the bracket [x, y] L satisfies the Jacobi identity and if the adjoint of (2.2.2) is a morphism of R-Lie algebras. Cf. also 2.8.5(i) below.
Example 2.2.12. The Lie algebra L of derivations of a polynomial algebra A in infinitely many indeterminates (over a field) has property P as an A-module but is not a projective A-module. To include this kind of example and others, it is necessary to build up the theory for modules having property P rather than just projective ones or even finitely generated projective modules.
Let now (A, L) be an (ungraded) Lie-Rinehart algebra, and let (Alt A (L, A), d) be the corresponding Maurer-Cartan algebra; notice that the operator d is not A-linear unless L acts trivially on A. For reasons explained in [23] we will refer to this operator as Lie-Rinehart differential. We will say that the graded A-module M , endowed with the operation (2.2.5), is a graded (left) (A, L)-module provided this operation is an ordinary Lie algebra action on M . When M is concentrated in degree zero, we simply refer to M as a (left) (A, L)-module. In particular, with the obvious L-module structure, the algebra A itself is a (left) (A, L)-module. The proof of the following is straightforward and left to the reader.
Given a graded (A, L)-module M , we will refer to the resulting (co)chain complex as the Rinehart complex of M -valued forms on L; often we write this complex more simply in the form Alt A (L, M ). It inherits a differential graded Alt A (L, A)-module structure via (2.2.10).
We now spell out the passage from Maurer-Cartan algebras to Lie-Rinehart algebras. Proof. The operator induces, for q = 0, an operation L ⊗ R A − → A of the kind (2.2.2) and, for q = 1, a skew-symmetric R-bilinear bracket [ · , · ] L on L of the kind (2.2.1). More precisely: Given x ∈ L and a ∈ A, let This yields an operation of the kind (2.2.2). Given x, y ∈ L, using the hypothesis that L is a finitely generated projective A-module, identify x and y with their images in the double A-dual L * * and define the value [x, y] L by Henceforth we spell out a particular homogeneous constituent of bidegree (p, q) (according to the conventions used below, such a homogenous constituent will be of bidegree (−p, −q) but for the moment this usage of negative degrees is of no account) in the form The operations (1.5.3) and (1.5.4) induce degree zero operations A little thought reveals that, in view of (1.5.6.H), (1.5.6.Q), (1.5.7.H), (1.5.7.Q), (1.5.8)-(1.5.11), these operators, which are at first defined only on the R-multilinear alternating functions, in fact pass to operators on A-multilinear alternating functions. Furthermore, the skew-symmetric A-bilinear pairing δ, cf. (1.5.5), induces an operator The operator d 0 : The last term involving the double summation necessarily appears since, for 1 ≤ j ≤ p and 1 ≤ k ≤ q + 1, the bracket [x k , ξ j ] in Q ⊕ H, cf. (1.6.2), is given by Remark 2.5.2. A crucial observation is this: The operator d 0 may be written as the sum However, even when (A, H, Q) is a (pre-)Lie-Rinehart triple, the individual operators d H and d Q are well defined merely on Alt R (Q, Alt R (H, A)); only their sum is well defined on Alt A (Q, Alt A (H, A)).
The operator d 1 : The last term involving the double summation necessarily appears in view of (1.6.4).
With the generalized operation of Lie-derivative which, for x 1 , . . . , x q ∈ H, is given by the identity (2.5.3) may be written as The operator d 2 : We will use this observation in (5.8.7) and (5.8.8) below.
Remark 2.6. Given an almost pre-Lie-Rinehart triple (A, H, Q), the vanishing of d 2 d 2 is automatic, for the following reason: View H and Q as abelian A-Lie algebras and H as being endowed with the trivial Q-module structure. Since δ is a skew-symmetric A-bilinear pairing, we may use it to endow the A-module direct sum L = H ⊕ Q with a nilpotent A-Lie algebra structure (of class two) by setting [x + ξ, y + η] = δ(ξ, η) ∈ H for x, y ∈ H and ξ, η ∈ Q. We write L nil for this nilpotent A-Lie algebra. The ordinary CCE complex for calculating the Lie algebra cohomology
Proof. This is a consequence of Lemmata 2.
Since H has property P, we conclude that the bracket on H satisfies the Jacobi identity, that is, H is an R-Lie algebra. Likewise, for j = 0, given x, y ∈ H and a ∈ A, we find Consequently the adjoint H → Der R (A) of ( We note that Alt 1 Since H is assumed to have property P, we conclude that, for every ξ ∈ Q, x, y ∈ H, (iii) Pursuing the same kind of reasoning, consider the operator Let a ∈ A, ξ ∈ Q, x ∈ H. Again a calculation shows that whence the vanishing of d 0 d 1 +d 1 d 0 in bidegree (0, 0) entails the compatibility property (1.9.1). Likewise consider the operator (H, A)).
In the same vein: (iv) The vanishing of the operator entails the compatibility property (1.9.4).
(vii) The vanishing of the operator entails the compatibility property (1.9.7). Indeed, given ξ, η, ϑ ∈ Q and α: H → A, ϑ)) . There is a slight conflict of notation here but it will always be clear from the context whether d j (j ≥ 0) refers to the differentials of a spectral sequence or to a system of multicomplex operators. The spectral sequence (2.9.1) is an invariant of the Lie-Rinehart triple structure. H * (H, A)), d) is defined.
2.10. Illustration. The spectral sequence (2.9.1) includes as special cases that of a foliation and the Hodge-de Rham spectral sequence. This provides a conceptually simple approach to these spectral sequences and subsumes them under a single more general construction. We will now make this precise. (i) Consider a foliated manifold M , the foliation being written as F . Recall that a p-form ω on M is called horizontal (with reference to the foliation F ) provided ω(X 1 , . . . , X p ) = 0 if some X j is vertical, i. e. tangent to the foliation, or, equivalently, i X ω = 0 whenever X is vertical; a horizontal p-form ω is said to be basic provided it is constant on the leaves (i. e. λ X ω = 0 whenever X is vertical). The sheaf of germs of basic p-forms is in general not fine and hence gives rise to in general non-trivial cohomology in non-zero degrees, cf. [51,56,57]; this spectral sequence is an invariant of the foliation. The cohomology E p,0 2 is sometimes called "basic cohomology", since it may be viewed as the cohomology of the "space of leaves".
(ii) Suppose that the foliation F arises from a fiber bundle with fiber F , and write ξ: P → B for an associated principal bundle, the structure group being written as G. In this case, the spectral sequence (2.9.1) comes down to that of the fibration. (iii) Returning to (i) above, suppose in particular that the foliation is transversely complete [2]. Then the closures of the leaves constitute a smooth fiber bundle M → W , the algebra A H is isomorphic to that of smooth functions on W in an obvious fashion, and the obvious map from Q H to Vect(W ) ∼ = Der(A H ) which is part of the Lie-Rinehart structure of (A H , Q H ) is surjective [47] and hence fits into an extension of (R, A H )-Lie algebras of the kind Here L ′ is the space of sections of a Lie algebra bundle on W , and the underlying extension of Lie algebroids on W is referred to as the Atiyah sequence of the (transversely complete) foliation F [47]. Thus we see that the interpretation of Q H as the space of vector fields on the "space of leaves" requires, perhaps, some care, since L ′ will then consist of the "vector fields on the "space of leaves" which act trivially on every function".
To get a concrete example, let M = SU(2)×SU(2), and let F be the foliation defined by a dense one-parameter subgroup in a maximal torus S 1 × S 1 in SU(2) × SU (2). Then the space W is S 2 × S 2 , and L ′ is the space of sections of a real line bundle on S 2 × S 2 , necessarily trivial. One easily chooses a vector bundle ζ on SU(2) × SU (2) which is complementary to τ F , and the Lie-Rinehart triple structure is defined on (C ∞ (M ), L F , Γ(ζ)). In particular, the operation δ is non-zero. We note that the Chern-Weil construction in [18] yields a characteristic class in H 2 deRham (S 2 × S 2 , R) for the extension (2.10.1), and this class may be viewed as an irrational Chern class [18] (Section 4). The non-triviality of this class entails that the differential d 2 of the spectral sequence (2.9.1) is non-trivial. We also note that, in view of a result of Almeida and Molino [2], the transitive Lie algebroid corresponding to (2.10.1) does not integrate to a principal bundle; in fact, Mackenzie's integrability obstruction [45] is non-zero. (v) Under the circumstances of Corollary 1.9.8, so that (A, Q, H) is a Lie-Rinehart triple with trivial (A, H)-module structures on A and Q, the spectral sequence (2.9.1) is the ordinary spectral sequence for the corresponding extension of Lie-Rinehart algebras. If, furthermore, A is the ground ring so that Q and H are ordinary Lie algebras, this comes down to the Hochschild-Serre spectral sequence of the Lie algebra extension.
3. The additional structure on Q Let (A, H, Q) be a Lie-Rinehart triple. Theorem 1.9 gives a possible answer to Question 1.2 as well as to Question 1.1. What is missing is an intrinsic description of the structure induced on the constituent (A, Q) which, in turn, should then in particular encapsulate the Lie-Rinehart triple structure on (A, H, Q).We now proceed towards finding such an intrinsic description. To this end, we will introduce, on the constituent Q, certain operations similar to those introduced by Nomizu on the constituent q of a reductive decomposition g = h ⊕ q of a Lie algebra [49]; the operations in [49] come from the curvature and torsion of an affine connection of the second kind. We note that the naive generalization to Lie-Rinehart algebras of the notion of reductive decomposition of a Lie algebra is not consistent with the Lie-Rinehart axioms. Given a Lie-Rinehart algebra L and an A-module decomposition L = H ⊕ Q where (A, H) inherits a Lie-Rinehart structure, since for x ∈ H, ξ ∈ Q, and a ∈ A, necessarily Let (A, H, Q) be an almost pre-Lie-Rinehart triple. We will now define triple-, quadruple-, and quintuple products of the kind To this end, pick α, β, γ, ξ, η, ϑ, κ ∈ Q and a ∈ A. For 1 ≤ j ≤ 6, we will spell out an explicit description of each of the operations (3.j) and label it as (3.j ′ ), as follows.
We note that ( = (ξ · δ(η, ϑ)) · κ + (η · δ(ϑ, ξ)) · κ + (ϑ · δ(ξ, η)) · κ Moreover, with the notation x = δ(ξ, η), (3.8.2) comes down to which is just (1.9.2), and (3.8.5) reads which is (1.9.5). To complete the construction, we must require that the ordinary commutator bracket on End R (Q) descend to a bracket [·, ·] H on H in such a way that (A, H), with this bracket and the pairing (1.5.2.H) (which we reconstructed from the triple product (3.4)), be a Lie-Rinehart algebra in such a way that (3.8.3) and (3.8.8) are satisfied. The remaining compatibility properties in order for (A, H, Q) to be a Lie-Rinehart triple will then be implied by the structure isolated in (3.7) and (3.8). A (H, A), d) inherits a differential graded R-algebra structure and Q is, in particular, an (A, H)-module whence the Rinehart complex Q = (Alt A (H, Q), d) is a differential graded A-module in an obvious fashion. For the special case where (A, H, Q) is a twilled Lie-Rinehart algebra (i. e. the operation δ: Q ⊗ A Q → H, cf. (1.5.5), is zero), we have shown in [21] (3.2) that the pair (A, Q) acquires a differential graded Lie-Rinehart structure and that the twilled Lie-Rinehart algebra compatibility conditions can be characterized in terms of this differential graded Lie-Rinehart structure. We will now show that, for a general Lie-Rinehart triple (A, H, Q) (i. e. with in general non-zero δ), the pair (A, Q) inherits a higher homotopy version of a differential graded Lie-Rinehart algebra structure; abstracting from the structure which thus emerges, we isolate the notion of quasi-Lie-Rinehart algebra. This structure provides a complete solution of the problem of describing the structure on the constituent of a Lie-Rinehart triple written as Q and hence yields a complete answer to Question 1.1.
A pairing is graded skew-symmetric provided it is graded alternating as a graded bilinear function.
With these preparations out of the way suppose that, in addition, (A, Q) carries
- a graded skew-symmetric R-bilinear pairing of degree zero
- an R-bilinear pairing of degree zero
- an A-trilinear operation of degree −1 which is graded skew-symmetric in the first two variables (i. e. in the Q-variables).
Given a pre-quasi-Lie-Rinehart algebra (A, Q), consider the bigraded algebra where ξ 1 , . . . , ξ p+2 ∈ Q. The graded Lie-Rinehart axioms (4.4) and (4.5) imply that the operator d 1 is well defined on Alt A (Q, A) as an R-linear (beware, not A-linear) operator. The usual argument shows that d 1 is a derivation on the bigraded Aalgebra Alt A (Q, A). Since the operation ·, · ; · Q , cf. (4.3.Q), is A-trilinear, the operator d 2 is well defined on A-valued A-multilinear functions on Q. Since (4.3.Q) is skew-symmetric in the first two variables, the operator d 2 automatically has square zero, i. e. is a differential. Proof. Since, as a graded A-module, Q is an induced graded A-module, the bigraded algebra Alt A (Q, A) may be written as the bigraded tensor product Alt A (Q, A) ∼ = Alt A (Q, A) ⊗ A, and it suffices to consider forms which may be written as βα where β ∈ Alt A (Q, A) and α ∈ A; the formula (4.8.2) yields d 2 (β) = 0, d 2 (βα) = (−1) |β| βd 2 (α) and, since for ξ, η ∈ Q, the operation ξ, η; · Q is a derivation of A, we conclude that the operator d 2 is an R-linear derivation on Alt A (Q, A). Furthermore, since for a ∈ A = A 0 , for degree reasons, d 2 (a) is necessarily zero the operator d 2 is plainly well defined on A-valued A-multilinear functions on Q and in fact an A-linear derivation on Alt A (Q, A) as asserted.
Remark 4.8.4. On the formal level, the notion of quasi-Lie-Rinehart algebra isolated above is somewhat unsatisfactory since the definition involves the structure of Q as an induced A-module. The operator d 1 may be written out as an operator on the bigraded A-module Alt A (Q, A) of A-graded multilinear alternating forms on Q directly in terms of the operations (4.1) and (4.2), that is, in terms of the arguments of these operations, without explicit reference to the induced A-module structure. Indeed, given an n-tuple η = (η 1 , . . . , η n ) of homogeneous elements of Q, write |η| = |η 1 | + . . . + |η n | and |η| (j) = |η 1 | + . . . + |η j |, for 1 ≤ j ≤ n, and define the operators by means of where η 1 , . . . , η p+1 are homogeneous elements of Q. Then the sum d (·,·) +d [·,·] descends to an operator on Alt A (Q, A) which, in turn, coincides with d 1 . In this fashion, d 1 appears as being given by the CCE formula (2.2.8) with respect to (4.1) and (4.2). We were so far unable to give a similar description of the operator d 2 , though, in terms of a suitable extension of (4.3.Q) to an operation of the kind Q⊗ A Q⊗ A A − → A.
4.9. Definition. Let (A, Q) be a pre-quasi-Lie-Rinehart algebra so that, in particular, A is a differential graded commutative algebra and Q a differential graded A-module. Consider the bigraded A-algebra of (4.6) above, where Mult R (Q, A) refers to the bigraded algebra of A-valued R-multilinear forms on Q. The differentials on Q and A (both written as d, with an abuse of notation) induce a differential D on Mult R (Q, A) in the usual way, that is, given an R-multilinear A-valued form f on Q, where, with a further abuse of notation, the "d" in the constituent f d signifies the induced operator on any of the tensor powers Q ⊗ R ℓ (ℓ ≥ 1). We will say that the pre-quasi-Lie-Rinehart algebra (A, Q) is a quasi-Lie-Rinehart algebra provided it satisfies the requirements (4.9.1)-(4.9.6) below where d 1 and d 2 are the operators (4.7.1) and (4.8.1), respectively.
(4.9.1) The differential D descends to an operator on Alt A (Q, A), necessarily a differential, which we then write as d 0 .
(4.9.2) The differential on Q is a derivation for the bracket (4.1).
In (4.9.6), it suffices to require the vanishing of the operator d 1 d 2 +d 2 d 1 on Alt 0 A (Q, A 1 ). We leave it to the reader to spell out a description of this requirement directly in terms of the structure (4.1)-(4.3); this description would be less concise than the requirement given as (4.9.6).
Under the circumstances of Theorem 4.10, we will refer to the multialgebra as the Maurer-Cartan algebra for the quasi-Lie-Rinehart algebra structure on (A, Q). 4.11. Relationhip with almost pre-Lie-Rinehart triples. Our goal is to show how a Lie-Rinehart triple determines a quasi-Lie-Rinehart algebra. Here we explain the first step, that is, how a structure of the kind (4.1.Q)-(4.3.Q) that underlies a pre-quasi-Lie-Rinehart algebra arises: Let (A, Q, H) be an almost pre-Lie-Rinehart triple, and let A = Alt A (H, A) and Q = Alt A (H, Q). Then A = Alt A (H, A) is a graded commutative algebra (beware, not necessarily a differential graded commutative algebra) and Q = Alt A (H, Q) is a graded A-module (not necessarily a differential graded module). The pairings (1.5.2.Q) and (1.5.4) induce a pairing Q ⊗ R A → A of the kind (4.2.Q) by means of the association The corresponding induced pairing of the kind (4.2) has the form Furthermore, the bracket [ · , · ] Q is exactly of the kind (4.1.Q). It extends to a graded skew-symmetric bracket (4.11.4) [ · , · ] Q : Q ⊗ R Q − → Q of the kind (4.1). To get an explicit formula for this bracket we suppose, for simplicity, that the canonical map from A ⊗ A Q to Q = Alt A (H, Q) is an isomorphism of graded A-modules so that Q is indeed an induced graded A-module of the kind considered above. This will be the case, for example, when H is finitely generated and projective as an A-module or when Q is projective as an A-module. Under these circumstances, given homogeneous elements α, β ∈ A and ξ, η ∈ Q, the value [α ⊗ ξ, β ⊗ η] Q of the bracket (4.11.4) is given by Furthermore, setting where, for x ∈ H, i x refers to the operation of contraction, that is, Consider the operators d 1 and d 2 on Alt A (Q, A) given as (4.7.1) and (4.8.1) above, respectively. These operators now come down to the operators (2.4.6) and (2.4.7), respectively. By Theorem 2.7, when (A, Q, H) is a genuine Lie-Rinehart triple, is a Maurer-Cartan algebra, that is, d = d 0 +d 1 +d 2 turns Alt A (Q, A) into a differential graded algebra. Furthermore, still by Theorem 2.7, under the assumption that H and Q both have property P, the converse holds, i. e. when Under the circumstances of (2.10(ii)), so that the foliation F comes from a fiber bundle and the space of leaves coincides with the base B of the corresponding fibration, from the graded commutative R-algebra structure of H * (F, R), the space Γ(ζ * ) of sections of the induced graded vector bundle inherits a graded C ∞ (B)-algebra structure and, as a graded C ∞ (B)-algebra, H * (A) coincides with the graded commutative algebra Γ(ζ * ) of sections of ζ * ; in particular, H 0 (A) = C ∞ (B). Furthermore, H 0 (Q) is the (R, C ∞ (B))-Lie algebra Vect(B) of smooth vector fields on the base B and, as a graded (R, H * (A))-Lie algebra, H * (Q) is the graded crossed product (cf. [21] for the notion of graded crossed product Lie-Rinehart algebra). Under the circumstances of (2.10(i)), when the foliation does not come from a fiber bundle, the structure of the graded Lie-Rinehart algebra (H * (A), H * (Q)) will in general be more complicated than that for the case when the foliation comes from a fiber bundle. The significance of this more complicated structure has been commented on already in the introduction. Remark 4.16. We are indebted to P. Michor for having pointed out to us a possible connection of the notion of quasi-Lie-Rinehart bracket with that of Frölicher-Nijenhuis bracket [12], [48].
5. Quasi-Gerstenhaber algebras
The notion of Gerstenhaber algebra has recently been isolated in the literature but implicitly occurs already in Gerstenhaber's paper [13]; see [19] for details and more references. In this section we will introduce a notion of quasi-Gerstenhaber algebra which generalizes that of strict differential bigraded Gerstenhaber algebra isolated in [21,22] (where the attribute "strict" refers to the requirement that the differential be a derivation for the Gerstenhaber bracket). The generalization consists in admitting a bracket which does not necessarily satisfy the graded Jacobi identity and incorporating an additional piece of structure which measures the deviation from the graded Jacobi identity.
For intelligibility, we recall the notion of graded Lie algebra, tailored to our purposes. As before, R denotes a commutative ring with 1. A graded R-module g, endowed with a graded skew-symmetric degree zero bracket [ · , · ]: g ⊗ g − → g, is called a graded Lie algebra provided the bracket satisfies the graded Jacobi identity for every triple (a, b, c) of homogeneous elements of g.
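Written out, in one standard convention for the signs, the graded Jacobi identity reads
\[
[a, [b, c]] = [[a, b], c] + (-1)^{|a|\,|b|}\, [b, [a, c]]
\]
for homogeneous a, b, c ∈ g; the precise placement of signs depends on the conventions adopted.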
We will consider bigraded R-algebras. Such a bigraded algebra is said to be bigraded commutative provided it is commutative in the bigraded sense, that is, graded commutative with respect to the total degree. Given such a bigraded commutative algebra G, for bookkeeping purposes, we will write its homogeneous components in the form G q p , the superscript being viewed as a cohomology degree and the subscript as a homology degree; the total degree |α| of an element α of G q p is, then, |α| = p − q.
We will explore differential operators in the bigraded context. We recall the requisite notions from [41] (Section 1), cf. also [1]. Let G be a bigraded commutative R-algebra with 1, and let r ≥ 1. A (homogeneous) differential operator on G of order ≤ r is a homogeneous R-endomorphism D of G such that a certain G-valued (r + 1)-form Φ r+1 D on G (the definition of which for general r we do not reproduce here) vanishes. For our purposes, it suffices to recall explicit descriptions of these forms in low degrees. Thus, given the homogeneous R-endomorphism D of G, for homogeneous ξ, η, ϑ, In the literature, a (homogeneous) differential operator D of order ≤ r with D(1) = 0 is also referred to as a (homogeneous) derivation of order ≤ r. In particular, a homogeneous derivation d of (total) degree 1 and order 1 is precisely a differential turning G into a differential graded R-algebra.
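For instance, for a homogeneous R-endomorphism D of G with D(1) = 0, the lowest of these forms is, up to the overall sign conventions of [41],
\[
\Phi^2_D(\xi, \eta) = D(\xi \eta) - D(\xi)\, \eta - (-1)^{|D|\,|\xi|}\, \xi\, D(\eta),
\]
so that D is of order ≤ 1, that is, a derivation, precisely when Φ^2_D vanishes.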
5.8. Relationship with Lie-Rinehart triples.
We will now explain how quasi-Gerstenhaber algebras arise from Lie-Rinehart triples. To this end, we recall that, given an ordinary Lie-Rinehart algebra (A, L), the Lie bracket on L and the L-action on A determine a Gerstenhaber bracket on the exterior A-algebra Λ A L on L; for α 1 , . . . , α n ∈ L, the bracket [u, v] in Λ A L of u = α 1 ∧ . . . ∧ α ℓ and v = α ℓ+1 ∧ . . . ∧ α n is given by the expression where ℓ = |u| is the degree of u, cf. [19] (1.1). In fact, given the R-algebra A and the A-module L, a bracket of the kind (5.8.1) yields a bijective correspondence between Lie-Rinehart structures on (A, L) and Gerstenhaber algebra structures on Λ A L. Our goal, which will be achieved in the next section, is now to extend this observation to a relationship between Lie-Rinehart triples, quasi-Lie-Rinehart algebras, and quasi-Gerstenhaber algebras. Thus, let (A, Q, H) be a pre-Lie-Rinehart triple. Consider the graded exterior A-algebra Λ A Q, and let G = Alt A (H, Λ A Q), with the bigrading G q p = Alt q A (H, Λ p A Q) (p, q ≥ 0). Suppose for the moment that (A, Q, H) is merely an almost pre-Lie-Rinehart triple. Recall that the almost pre-Lie-Rinehart triple structure induces operations of the kind (4.11.3), (4.11.4), and (4.11.6) on the pair (A, Q) = (Alt A (H, A), Alt A (H, Q)) but, at the present stage, this pair is not necessarily a quasi-Lie-Rinehart algebra. Consider the bigraded algebra Alt A (H, Λ A Q); at times we will view it as the exterior A-algebra on Q, and we will accordingly write The graded skew-symmetric bracket (4.11.4) on Q (= Alt A (H, Q)) extends to a (bigraded) bracket where α, β, γ are homogeneous elements of Λ A Q = Alt A (H, Λ A Q), and where ξ ∈ Q and a ∈ A.
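For orientation we recall that the bracket (5.8.1) referred to above is characterized, up to the sign conventions of [19], by the requirements that it extend the Lie bracket on L and the L-action on A (so that [α, a] = α(a) for α ∈ L and a ∈ A) and that, for homogeneous u, v, w ∈ Λ A L,
\[
[u, v \wedge w] = [u, v] \wedge w + (-1)^{(|u|-1)\,|v|}\, v \wedge [u, w],
\]
that is, [u, · ] is a derivation of degree |u| − 1 of the exterior A-algebra Λ A L.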
We now construct an operation Ψ of the kind (5.1) from the operation ·, ·; · Q , that is, one which formally looks like an h-Jacobiator for (5.8.3). To this end we suppose that, as an A-module, at least one of H or Q is finitely generated and projective; then the canonical A-linear morphism from Alt A (H, A) ⊗ Λ A Q to Alt A (H, Λ A Q) is an isomorphism of bigraded A-algebras. Let ξ 1 , . . . , ξ p ∈ Q. Now, given a homogeneous element β of Alt A (H, A), with reference to the operation ·, ·; · Q induced by δ, cf. (4.11.6), let we will write Ψ δ rather than just Ψ whenever appropriate. As an operator on the graded A-algebra Alt A (H, Λ A Q), Ψ may be written as a finite sum of operators which are three consecutive contractions each; since an operator which consists of three consecutive contractions is a differential operator of order ≤ 3, the operator Ψ is a differential operator of order ≤ 3. Furthermore, since for ξ, η ∈ Q, the operation ξ, η; · Q is a derivation of the graded A-algebra Alt A (H, A), given homogeneous elements β 1 and β 2 of Alt A (H, A), A somewhat more intrinsic description of Ψ results from the observation that the operation Ψ: is simply given by the assignment to χ: H → Λ 2 A Q of the trace of the A-module endomorphism δ • χ of H when H is finitely generated and projective as an A-module, and of the trace of the A-module endomorphism χ • δ of Λ 2 A Q when Q is finitely generated and projective as an A-module.
We now give another description of Ψ, cf. (5.8.11) below, under an additional hypothesis: Suppose that, as an A-module, Q is finitely generated and projective of constant rank n. Then the canonical A-module isomorphism extends to an isomorphism of graded A-modules.
In this fashion, Alt * A (H, Λ * A Q) acquires a bigraded Alt * A (H, Alt * A (Q, A))-module structure, induced from the graded A-module Λ n A Q. Further, the skew-symmetric A-bilinear pairing (1.5.5) induces an operator This is just the operator (2.4.7 ′ ), suitably rewritten, with M = Λ n A Q, where the degree of the latter A-module forces the correct sign: The A-module Λ n A Q is concentrated in degree n, and a form in Alt n−p A (Q, Λ n A Q) has degree p. In bidegree (q, p), given ) of the operator (5.8.8) is given by the formula where x 1 , . . . , x q−1 ∈ H and ξ p−1 , . . . , ξ n ∈ Q and, with |ψ| = q + p (the correct degree would be |ψ| = p − q but modulo 2 this makes no difference), this simplifies to (5.8.9) cf. (2.5.4).
Lemma 5.8.10. The operator Ψ makes the diagram commutative.
Thus, under the isomorphism (5.8.7), the operator Ψ is induced by the operator d 2 (on the right-hand side of (5.8.7)).
In view of Remark 2.6, the operator Ψ thus calculates essentially the Lie algebra cohomology H * (L nil , Λ n A Q) of the (nilpotent) A-Lie algebra L nil (= H ⊕ Q as an A-module) with values in the A-module Λ n A Q, viewed as a trivial L nil -module. In particular, Ψ is A-linear.
Suppose finally that (A, Q, H) is a genuine Lie-Rinehart triple, not just an almost pre-Lie-Rinehart triple. By Proposition 4.13, (A, Q) then acquires a quasi-Lie-Rinehart structure. Our ultimate goal is now to prove that, likewise, Λ A Q endowed with the bigraded bracket (5.8.3) and the operation Ψ, cf. (5.8.5), (which formally looks like an h-Jacobiator) acquires a quasi-Gerstenhaber structure. The verification of the requirements (5.2)-(5.4) does not present any difficulty at this stage, and the vanishing of ΨΨ is immediate. However we were so far unable to establish (5.5) and (5.6) without an additional piece of structure, that of a generator of a (quasi-Gerstenhaber) bracket. The next section is devoted to the notion of generator and the consequences it entails. A precise statement is given as Corollary 6.10.4 below.
6. Quasi-Batalin-Vilkovisky algebras and quasi-Gerstenhaber algebras
Let G = G * * be a bigraded commutative R-algebra, endowed with a bigraded bracket [ · , · ]: G ⊗ R G → G of bidegree (0, −1) which is graded skew-symmetric when the total degree is regraded down by 1. Extending terminology due to Koszul, cf. the definition of [ · , · ] D on p. 260 of [41], we will say that an R-linear operator ∆ on G of bidegree (0, −1) generates the bracket [ · , · ] provided, for every homogeneous a, b ∈ G, we then refer to the operator ∆ as a generator . In particular, let (G; d, [ · , · ], Ψ) be a quasi-Gerstenhaber algebra over R. In view of the identity (1.4) on p. 260 of [41], a generator ∆ is then necessarily a differential operator on G of order ≤ 2. Indeed, given a differential operator D, this identity reads Hence, when a differential operator ∆ generates a quasi-Gerstenhaber bracket [ · , · ], However, by virtue of (5.3), the right-hand side of this identity is zero, whence ∆ is necessarily of order ≤ 2.
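For reference, up to an overall sign convention, the generating identity (6.1) may be written, for homogeneous a, b ∈ G, as
\[
[a, b] = (-1)^{|a|} \bigl( \Delta(ab) - \Delta(a)\, b - (-1)^{|a|}\, a\, \Delta(b) \bigr),
\]
so that the bracket generated by ∆ measures the deviation of ∆ from being a derivation.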
We note the identity (6.6.4) is formally the same as (5.6), but the circumstances are now more general. We also note that the hypotheses of the Lemma imply that ∆(1) = 0 and Ψ(1) = 0. Remark 6.6.5. For a graded commutative (not bigraded) algebra, multiplicatively generated by its homogeneous degree 1 constituent and endowed with a suitable Batalin-Vilkovisky structure, formally the same identity as (5.6) has been derived in Theorem 3.2 of [5]. Our totalization Tot yields a notion of Batalin-Vilkovisky algebra not equivalent to that explored in [5], though; see Remark 6.17 below for details. The distinction between the ground ring R and the R-algebra A, crucial for our approach (involving in particular Lie-Rinehart algebras and variants thereof), complicates the situation further. We therefore give a complete proof of the Lemma.
Proof of Lemma 6.6.2. We start by exploring the operator ∆Ψ + Ψ∆: Let α ∈ G 1 0 and ξ 1 , ξ 2 , ξ 3 ∈ G 0 1 ; then αξ 1 ξ 2 ξ 3 ∈ G 1 3 . Since Ψ is of order ≤ 3, and, for degree reasons, this identity boils down to In view of the definition (6.1) of the bracket [·, ·], On the other hand, Hence that is, Thus the graded commutator ∆Ψ + Ψ∆ vanishes on G 1 3 if and only if, for every α ∈ G 1 0 and ξ 1 , ξ 2 , ξ 3 ∈ G 0 1 , since Ψ(ξ 1 ξ 2 ξ 3 ) = 0, the latter identity is equivalent to With the more neutral notation (a 1 , a 2 , a 3 , a 4 ) = (α, ξ 1 , ξ 2 , ξ 3 ) since, for degree reasons, This is the identity (6.6.4) for the special case where the elements a 1 , a 2 , a 3 , a 4 are from G 1 0 ∪ G 0 1 . The operator ∆ being of order ≤ 2 means precisely that the bracket [·, ·] = ±Φ 2 ∆ (generated by it) behaves as a derivation in each argument and, accordingly, the operator Ψ being of order ≤ 3 means that the operation Φ 3 Ψ is a derivation in each of its three arguments. The equivalence between the identities (6.6.3) and (5.7.1) for arbitrary arguments is now etablished by induction on the degrees of the arguments.
Proof of Theorem 6.6.1. The quasi-Gerstenhaber bracket [ · , · ] on G is that generated by ∆ = ∂ via (6.1). This bracket is plainly graded skew-symmetric in the correct sense, and the reasoning in Section 1 of [41] shows that this bracket satisfies the identities (5.3)-(5.5). In particular, the identity (5.5) is a consequence of the identity (6.3): This identity may be rewritten as Hence, given homogeneous elements ξ, η, ϑ of G, the identity (5.5) takes the form . and the defining properties (6.2)-(6.5) say that the sum (6.7.4) is a square zero operator on TotG, i. e. a differential. Consider the ascending filtration {F r } r≥0 of TotG given by This filtration gives rise to a spectral sequence which is the bigraded homology Batalin-Vilkovisky algebra spelled out in Proposition 6.7 above. This spectral sequence is an invariant for the quasi-Batalin-Vilkovisky algebra G which is finer than just the bigraded homology Batalin-Vilkovisky algebra (H * * (G) d , ∂).
We will now take up and extend the discussion in (5.8) and describe how quasi-Gerstenhaber and quasi-Batalin-Vilkovisky algebras arise from Lie-Rinehart triples. To this end, let (A, H, Q) be a pre-Lie-Rinehart triple and suppose that, as an A-module, Q is finitely generated and projective, of constant rank n. Consider the graded exterior A-algebra Λ A Q, and let G = Alt A (H, Λ A Q), with G q p = Alt q A (H, Λ p A Q); this is a bigraded commutative A-algebra. The Lie-Rinehart differential d, with respect to the canonical graded (A, H)-module structure on Λ A Q, turns G into a differential graded R-algebra. Our aim is to determine when (A, Q, H) is a genuine Lie-Rinehart triple in terms of conditions on G.
The graded A-module Alt * A (Q, Λ n A Q) acquires a canonical graded (A, H)-module structure. Further, since (A, H, Q) is a pre-Lie-Rinehart triple (not just an almost pre-Lie-Rinehart triple), the canonical bigraded A-module isomorphism (5.8.7) is now an isomorphism of Rinehart complexes, with reference to the graded (A, H)-module structures on Λ * A Q and Alt n− * A (Q, Λ n A Q). We will say that (A, H, Q) is weakly orientable if Λ n A Q is a free A-module, that is, if there is an A-module isomorphism ω: Λ n A Q → A, and ω will then be referred to as a weak orientation form. Under the circumstances of Example 1.4.1, this notion of weak orientability means that the foliation F is transversely orientable, with transverse volume form ω. For a general pre-Lie-Rinehart triple (A, H, Q), we will say that a weak orientation form ω is invariant provided it is invariant under the H-action; we will then refer to ω as an orientation form, and we will say that (A, H, Q) is orientable. In the situation of Example 1.4.1, with a grain of salt, an orientation form in this sense amounts to an orientation for the "space of leaves", that is, with reference to the spectral sequence (2.9.1), the class in the top basic cohomology group E n,0 2 (cf. 2.10(i)) of such a form is non-zero and generates this cohomology group. Likewise, in the situation of Example 1.4.2, an orientation form is a holomorphic volume form, and the requirement that an (invariant) orientation form exist is precisely the Calabi-Yau condition.
Let (A, H, Q) be a general orientable pre-Lie-Rinehart triple, and let ω be an invariant orientation form. Then ω induces an isomorphism Alt * A (Q, Λ n A Q) → Alt * A (Q, A) of graded (A, H)-modules and hence an isomorphism of Rinehart complexes. Here, on the right-hand side of (6.9), the operator d 0 is that given earlier as (2.4.5), with the orders of H and Q interchanged. On the right-hand side of (6.9), we have as well the operator d 1 given as (2.4.6) and the operator d 2 given as (2.4.7) (the order of H and Q being interchanged), cf. also (5.8.8). The operator d 1 induces an operator (6.10.1) on the left-hand side of (6.9) by means of the the relationship By Lemma 5.8.10, the operator d 2 on the right-hand side of (6.9) corresponds to the operator Ψ δ on the left-hand side of (6.9) given as (5.8.5) above. Notice that ∆ ω is an R-linear operator on G * * = Alt * A (H, Λ * A Q) of bidegree (0, −1) which looks like a generator for the corresponding bracket (5.8.3). We will now describe the circumstances where ∆ ω is a generator. Proof. We note first that, when (A, d 0 , d 1 , d 2 ) is a multialgebra, so is (A, d 0 , −d 1 , d 2 ). Furthermore, when (Alt * A (H, Alt * A (Q, A)), d 0 , −d 1 , d 2 ) is a multialgebra, is a multicomplex, the operators d j (0 ≤ j ≤ 2) (where the notation d j is abused somewhat) being the induced ones, with the correct sign, that is, where ω * is the induced bigraded morphism of degree n. Hence the equivalence between the Lie-Rinehart triple and quasi-Batalin-Vilkovisky properties is straightforward, in view of Theorem 2.7 and Theorem 6.6.1. In particular, the identities (2.1.4.2)-(2. 1.4.5) correspond to the identities (6.2)-(6.5) which characterize (Alt * A (H, Λ * A Q); d, ∆ ω , Ψ δ ) being a quasi-Batalin-Vilkovisky algebra. It remains to show that, when (A, H, Q) is a genuine Lie-Rinehart triple, the operator ∆ ω (given by (6.10.1)) is indeed a strict generator for the bigraded bracket (5.8.3) to which the rest of the proof is devoted.
6.10.2. Verification of the generating property. We note first that, in view of the derivation properties of a quasi-Gerstenhaber bracket, it suffices to establish the generating property (6.1) on Λ A Q, viewed as the bidegree (0, * )-constituent of Alt A (H, Λ A Q) = Λ A Q. To make the operator ∆ ω somewhat more explicit, we note that the pairing (1.5.2.Q) and the choice of ω determine a generalized Q-connection be commutative. This operator coincides with the operator ∆ ω but we prefer to use a neutral notation. In view of the derivation properties of a quasi-Gerstenhaber bracket, to establish the generating property, it will suffice to study the restriction Given α ∈ Λ p A Q, we will write φ α ∈ Alt n−p A (Q, Λ n A Q) for the image under φ so that, for ξ p+1 , . . . , ξ n , φ α (ξ p+1 , . . . , ξ n ) = α ∧ ξ p+1 ∧ . . . ∧ ξ n .
This establishes the generating property (6.10.3) for α 1 and α 2 homogeneous of degree 1 since, as an A-module, Q is finitely generated and projective of constant rank n.
Since, as an A-algebra, Λ A Q is generated by its elements of degree 1, a straightforward induction completes the proof of Theorem 6.10.
For the special case where δ and hence Ψ δ is zero, the statement of the theorem is a consequence of Theorem 5.4.4 in [21]. Indeed, the identity (5.6) then corresponds to (1.9.7); cf. also (4.9.6) and (6.11)(vii) below.
Remark 6.11. It is instructive to spell out the relationship between the quasi-Batalin-Vilkovisky compatibility conditions (6.2)-(6.4) and the Lie-Rinehart triple axioms (1.9.1)-(1.9.7); cf. (2.8.5) above. As before, write G = Alt A (H, Λ A Q), and recall that n is the rank of Q as a projective A-module.
fields for the foliation. This interpretation relies crucially on the totalization spelled out as (6.7.2) above; with the more familiar totalization Tot ′ G given by such an interpretation is not visible. Thus, consider the bigraded algebra G * * = Alt * A (H, Λ * A Q) = Λ A Q, where as before A = Alt A (H, A) and Q = Alt * A (H, Q). Suppose that the foliation is transversely orientable with a basic transverse volume form ω, and consider the resulting quasi-Batalin-Vilkovisky algebra (Alt * A (H, Λ * A Q); d, ∆ ω , Ψ δ ), cf. Theorem 6.10. In particular, G * * is then a quasi-Gerstenhaber algebra. This quasi-Gerstenhaber algebra yields a kind of generalized Schouten algebra (algebra of multivector fields) for the foliation; the cohomology H 0 * (G) may be viewed as the Schouten algebra for the "space of leaves". However the entire cohomology contains more information about the foliation than just H 0 * (G). Under the circumstances of (2.10(ii)), where the foliation comes from a fiber bundle, cf. also (4.15), let B denote the "space of leaves" or, equivalently, the base of the corresponding bundle; an orientation ω in our sense is now essentially equivalent to a volume form ω B for the base B. Let L B = Vect(B). The volume form ω B induces an exact generator ∂ ω B for the ordinary Gerstenhaber algebra G * = Λ C ∞ (B) L B , and the corresponding bigraded homology Batalin-Vilkovisky algebra (H * * (Alt A (H, Λ * A Q)) d , ∂ ω ) coming into play in Theorem 6.10 may then be written as the bigraded crossed product Under the circumstances of (2.10(i)), when the foliation does not come from a fiber bundle, the structure of the bigraded homology Batalin-Vilkovisky algebra H * * (Alt A (H, Λ * A Q)) d may be more intricate. Illustration 6.16. For a (finite dimensional) quasi-Lie bialgebra (h, h * ) [37], with Manin pair (g, h), where g = h ⊕ h * , the resulting quasi-Batalin-Vilkovisky algebra has the form Alt(h, Λh * ) ∼ = Λh * ⊗ Λh * ∼ = Λ(h * ⊕ h * ).
From medical imaging data to 3D printed anatomical models
Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
Introduction
Anatomical models have applications in clinical training and surgical planning as well as in medical imaging research. In the clinic, the physical interaction with models facilitates learning anatomy and how different structures interact spatially in the body. Simulation-based training with anatomical models reduces the risks of surgical interventions [1], which are directly linked to patient experience and healthcare costs. For example, improvement of central venous catheter insertions has been achieved by the use of anatomically and ultrasonically accurate teaching phantoms [2]. In addition, the phantoms can be used for pre-operative surgical planning, which has been shown to be beneficial in craniofacial surgery [3] and is being explored in a number of other surgical fields [4,5]. Lastly, anatomical phantoms can be designed to mimic tissue when imaged with the modality of interest; most commonly ultrasound, Computed Tomography (CT), or Magnetic Resonance Imaging (MRI). Imaging phantoms are also important for the development of novel imaging modalities such as photoacoustics [6], or for validation of image-based biomarkers such as pore size estimation using nuclear magnetic resonance [7], where they provide controlled experimental environments.
Anatomically accurate models can be computer-generated from medical image data. CT and MRI are widely used to image biological features, ranging from whole-body imaging to particular areas of interest such as tumours or specific parts of the brain. Depending on the imaging modality, different features can be observed and different image segmentation algorithms will be appropriate. CT pixel intensities directly correlate to tissue density. The modality thus lends itself well to segmenting structures such as bones (high density) or lungs (low density). MRI offers excellent soft tissue contrast, which, for example, enables differentiation between white and grey matter in the brain [8].
Recent advances in segmentation software have made it increasingly easy to automatically or semi-automatically extract the surface of structures of interest from three-dimensional (3D) medical imaging data. This has made it possible to generate anatomical models using a standard personal computer with little prior anatomical knowledge. At the same time 3D printers, traditionally used in industrial applications, are now available for home use thanks to low-cost desktop alternatives. This technology enables fast creation of 3D models without the need for classical manufacturing expertise.
Accessibility to 3D printers and advanced segmentation algorithms have led to an increase in use of 3D printing in medicine, which has received interest due to a multitude of potential medical applications [4,9]. Models can be made patient-specific, and rapidly redesigned and prototyped, providing an inexpensive alternative to generic commercially available anatomical models. 3D printing thus found applications in teaching the structure of kidney [5], heart [10], and liver [11]. A number of studies have also investigated the potential of using 3D printing techniques to produce tissue-mimicking phantoms for research and teaching, with example applications in producing models of vessels [2,12], parts of the skull [13], optic nerves [14], and renal system [15]. The process of going from medical imaging data to 3D printed models has been described for the brain [16,17], the human sinus [18], as well as from a general point of view [19], but challenges remain to make the process widely available to novice users.
In this work, we present a practical guide to creating a broad range of anatomical models from medical imaging data. In the next section, we provide an overview of the general workflow and include a table listing the relevant 3D printing technologies. We have implemented a streamlined processing pipeline on various examples to illustrate the different approaches that can be followed. We have developed 3D printed models of ribs, a liver, and a lung. Ribs and lung were chosen as they have complex structure, while the liver illustrates the potential of segmenting and printing soft-tissue organs which have lower contrast with the surrounding tissue in CT images. Finally, we introduce and discuss the currently freely available segmentation tools, which can be applied to any organ or region of interest.
The general workflow
In this section, we describe the process of going from medical imaging data (CT scan) to a finished 3D printed model from a general point of view. We have implemented a streamlined pipeline on different regions of interest (ribs, liver, and lung), which will be described in the sections below. While the pipeline is illustrated using CT images, it is also applicable to other volumetric medical imaging modalities, like MRI. The workflow is broken down into three steps (Fig 1): Image segmentation. After acquiring a medical image, structures of interest need to be segmented. Image segmentation is the process of partitioning an image into multiple labelled regions locating objects and boundaries in images. It can be used to create patient-specific, highly accurate computer models of organs and tissue. There are a number of image segmentation techniques, which each have advantages and disadvantages, but there is no single segmentation technique which is suitable for all images and applications. Basic segmentation approaches rely on the principle that each tissue type has a characteristic range of pixel intensities. Hence, it is possible to distinguish between tissues and identify boundaries.
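As a minimal illustration of this principle, the following Python sketch (relying on the open-source NumPy, SciPy and scikit-image packages; the input array would typically be a CT volume loaded, for example, with SimpleITK or pydicom, and the Hounsfield unit thresholds are placeholder values chosen for bone) performs the basic thresholding, connected-component and hole-filling steps that also underlie the GUI-based segmentations described below:

import numpy as np
from scipy import ndimage
from skimage import measure

def segment_by_threshold(volume_hu, lower=200, upper=3000):
    """Binary segmentation of a CT volume (in Hounsfield units) by thresholding.

    The default thresholds are illustrative values for bone; other tissues
    require different ranges (for example, roughly -1000 to -500 HU for
    aerated lung parenchyma).
    """
    mask = (volume_hu >= lower) & (volume_hu <= upper)

    # Keep only the largest connected component to suppress unrelated voxels.
    labels = measure.label(mask)
    if labels.max() > 0:
        sizes = np.bincount(labels.ravel())
        sizes[0] = 0  # ignore the background label
        mask = labels == sizes.argmax()

    # Fill internal cavities so that the segmented region is solid.
    return ndimage.binary_fill_holes(mask)

For organs with weaker contrast against the surrounding tissue, such as the liver, simple thresholding is usually insufficient, and semi-automatic tools (for example the level tracing used below) or manual editing are needed.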
There is a wide range of software that is capable of performing image segmentation, ranging from multi-purpose commercial platforms with integrated physics simulations (e.g. Mimics [20], or Simpleware [21]), to open-source tools targeted to specific organs (e.g. FreeSurfer [22] for the brain). In this work, we have used the freeware software packages called Seg3D [23] and 3D Slicer [24], as they are capable of processing a range of medical imaging data. Furthermore, we provide a summary of comparable freeware software available at the time of writing in the discussion.
Mesh refinement. Following image segmentation, the 3D model can be further refined into a printable 3D mesh. There are a number of computer-aided design tools that can be used for this purpose and allow almost limitless mesh manipulation and refinement. However, the main reasons for such mesh post-processing of the segmentation are as follows: Repairing: Errors and discontinuities that sometimes arise in the image segmentation and exporting process need to be repaired before printing.
Smoothing: Staircasing errors resulting from the resolution of the original medical image can be mitigated by smoothing the surface of the mesh model.
Appending: The segmentation will often only be one component of a final model. To convert the model into a useable form, it is often necessary to combine it with other structures or remove unneeded parts from the segmentation.
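The three refinement steps above can also be scripted. The sketch below uses the open-source trimesh library as one possible tool (the file name and holder dimensions are hypothetical placeholders, and this is not the exact workflow used with the CAD tools described later in this paper).

```python
import trimesh

# Illustrative repair/smooth/append sequence on an exported segmentation mesh.
mesh = trimesh.load("ribs.stl")  # hypothetical input file

# Repairing: remove degenerate geometry and close small holes from segmentation/export.
mesh.remove_degenerate_faces()
trimesh.repair.fill_holes(mesh)

# Smoothing: Laplacian smoothing mitigates voxel "staircasing" artefacts.
trimesh.smoothing.filter_laplacian(mesh, iterations=10)

# Appending: combine the segmentation with an extra structure, e.g. a simple holder plate.
holder = trimesh.creation.box(extents=[120.0, 80.0, 5.0])
holder.apply_translation([0.0, 0.0, mesh.bounds[0][2] - 2.5])
combined = trimesh.util.concatenate([mesh, holder])
combined.export("ribs_with_holder.stl")
```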
3D printing. There are many different 3D printing technologies available, each with their own characteristics. Here we provide an overview of the 3D printing methods, which are suitable for the creation of anatomical models and highlight their respective advantages. The relevant 3D printing technologies can be classified into three groups: extrusion printing, photopolymerisation and powder-based printing. The most common example of extrusion printing is known as Fused Deposition Modelling (FDM), which is based on melting and depositing a material via a nozzle, building the desired shape layer by layer. In photopolymerisation, liquid polymers are selectively cured, typically using UV light. Important examples are Stereolithography (SLA) and Digital Light Processing (DLP), which selectively cure a plastic in a bath. Moreover, the photopolymer can be sprayed onto the print in thin layers, where it is subsequently cured. This technique is known as Material Jetting (MJ). Lastly, in powder-based techniques, a powdered material is bound together. This can either be done using a liquid binding agent (Binder Jetting, BJ), or by fusing the particles together using heat (Selective Laser Sintering, SLS). The characteristics of these techniques are summarised in Table 1.
Preparation of example anatomical models
The models of the ribs and liver were segmented using Seg3D (v.2.2.1) while the lung was segmented using 3D Slicer (v.4.6). We smoothed the models using MeshMixer [27] (v.3.0) and printed all of them using Filament Deposition Modelling (FDM). All processing was done using Windows 10 as operating system. Image segmentation. Seg3D v.2.2.1 was used to generate the rib model from the CT MECANIX dataset (Siemens Sensation 64, 3 mm slice thickness, 0.56 mm by 0.56 mm pixel size, 120 kV peak kilo-voltage, 100 mAs exposure) available on the OSIRIX website [28]. The ribs were segmented using thresholding, manual modification (cropping), a connected-component-filter as well as a fill-holes-filter. The segmentation was exported as a stereolithography (STL) file using the "Export Isosurface" command. 3D Slicer v.4.6 was used to create a model of the liver and the right lung from the CT ARTI-FIX dataset (Siemens Sensation 64, 1.5 mm slice thickness, 0.59 mm by 0.59 mm pixel size, 120 kV peak kilo-voltage, 300 mAs exposure) from the OSIRIX website [28]. This was done using the level tracing algorithm as well as manual modification. After segmentation, the "make model" tool was used to export the volume as a STL file.
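For readers who prefer a scripted route, the filter chain described above (thresholding, connected-component selection, hole filling, STL export) can be approximated in Python with SimpleITK, scikit-image and trimesh. This is a hedged sketch of comparable operations, not the exact settings used in Seg3D or 3D Slicer; the input file name and HU thresholds are placeholders.

```python
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

image = sitk.ReadImage("ct_volume.nii.gz")  # hypothetical CT volume

# Threshold, keep the largest connected object, and fill internal holes.
binary = sitk.BinaryThreshold(image, lowerThreshold=200, upperThreshold=3000,
                              insideValue=1, outsideValue=0)
components = sitk.ConnectedComponent(binary)
largest = sitk.RelabelComponent(components, sortByObjectSize=True) == 1
filled = sitk.BinaryFillhole(largest)

# Convert the label map into a surface mesh and write it out as STL.
mask = sitk.GetArrayFromImage(filled).astype(np.uint8)   # (z, y, x) order
spacing = image.GetSpacing()[::-1]                        # match the array axis order
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
trimesh.Trimesh(vertices=verts, faces=faces).export("segmentation.stl")
```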
Mesh refinement. The ribs model was refined using Meshmixer to improve its topology. This was done by adjusting the mesh density of the surface and by applying a global smoothing filter that removes step artefacts due to the finite voxel size. Furthermore, we have used Free-CAD (v.0.16) [29] to design a holder for a tissue phantom. As FreeCAD has limitations working with large mesh files, the holder was attached to the ribs model structure in another software package called Blender (v.2.6) [30].
The lung model was also smoothed using MeshMixer. The different segmentation method demanded a local smoothing approach utilising the "RobustSmooth" brush provided by the software. Furthermore, the "Flatten" and "Inflate" brushes were used to remove unphysiological holes in the model.
The smoothing of the liver was also done utilizing both a global smoothing filter as well as the "RobustSmooth" brush tool.
3D printing. We have used an Ultimaker 2 (Ultimaker, Chorley, England) FDM printer to create our models. They were prepared for the printer using the open-source slicing software Cura (v.15.04.6), which is provided for free by Ultimaker. All models were printed with a layer height of 0.12 mm and a shell, bottom, and top thickness of 0.8 mm with a nozzle size of 0.4 mm. The prints were created at 20% infill, except for the rib model, which was printed with 100% infill to be functionally similar to bone when viewed under a medical ultrasound scanner (Siemens Acuson S1000 with a 16 MHz ultrasound probe). The material used for printing was "enhanced Polymax" polylactic acid (PLA) (PolyMax; Polymakr, Changshu, China). To estimate the print accuracy, the dimensions of the models were quantified at different sites in silico using Meshmixer and compared to the dimensions of the 3D prints, which were measured using calipers and a micrometer.
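The accuracy comparison amounts to simple per-site arithmetic between designed and measured dimensions. The snippet below illustrates that calculation; the numbers are made-up placeholders, not values from Table 2.

```python
# Designed dimensions taken from the mesh (in silico) vs. caliper/micrometer
# measurements of the print, both in millimetres (placeholder values).
designed_mm = [152.4, 38.1, 12.7]
measured_mm = [152.1, 38.4, 12.6]

for d, m in zip(designed_mm, measured_mm):
    deviation = m - d
    print(f"designed {d:.1f} mm, printed {m:.1f} mm, "
          f"deviation {deviation:+.2f} mm ({100 * deviation / d:+.2f}%)")
```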
Results and discussion
The 3D printed models of the ribs, liver and lung can be seen in Fig 2. On the ribs phantom the holder can be seen, which can be used to place tissue mimicking phantoms underneath the ribs in order to perform realistic ultrasound imaging experiments and training. The liver phantom was printed in coloured PLA, while the lung was painted using acrylic colour to be used as a teaching model.
The print duration for lung, ribs, and liver were 43h, 87.5h, and 27.5h respectively. The cost of the PLA for each of the models was approximately £16, £25, and £10 respectively. The print accuracy of the models is summarised in Table 2.
The ribs phantom was used as an ultrasound imaging phantom for clinical training. By embedding the ribs into a mineral-oil based material (Mindsets, Waltham Cross, United Kingdom) to simulate surrounding musculature and soft tissues, and combining these with a chicken breast and an 18 gauge puncture needle, it was possible to perform a low-cost mock kidney fine needle aspiration (FNA) procedure (Fig 3) [31]. In Fig 3B, the reflection of part of the ribs phantom can be seen in the top right corner. The artificial ribs are creating a shadowing effect which can also be observed in real ultrasound imaging procedures. When going from medical imaging data to 3D printed anatomical models, the choice of an appropriate image segmentation algorithm is arguably the most important step. An overview of relevant freeware segmentation tools can be seen in Table 3. We have illustrated the use of two different open-source tools, which offer a multitude of ways to achieve accurate segmentation.
Seg3D has both manual and automatic segmentation tools and provides a library of addons with additional algorithms and applications optimised for particular segmentation applications. A key feature is that the interface makes it possible to visualise images in 3D with multiple volumes managed as layers. This facilitates the manipulation of several segmentations, which is particularly useful when it is necessary to use a combination of segmentation techniques to obtain the final surface. For example, the images may be cropped prior to more advanced segmentation processes in order to isolate the volume of interest. Furthermore, it provides Boolean transforms to combine multiple segmentations into a single surface. Seg3D also provides the option of exporting the final segmentation to the STL file format.
3D Slicer has a multitude of other image manipulation options and can be used to register different scans to each other. Because of the large range of tools provided by the software, the interface is more difficult to master. However, it provides a range of powerful segmentation algorithms and has a unique selection of extensions available, which can be utilised for more specific tasks.
The segmentations formed the basis of the 3D printed models, which we created using FDM, allowing easy, low-cost creation of anatomical models. It was possible to create the segmented structures with high detail, allowing them to be used as teaching models. Furthermore, the printed model of the ribs was found to be functionally close to a real rib cage when imaged by an ultrasound scanner [31].
Using FDM for the creation of anatomical models has limitations inherent to the printing technique: The surface of the models was rough and rigid, which is not a realistic representation of the real tissue, and support material has to be carefully removed without damaging the finished print. However, there is flexible PLA available and the model surface can be smoothed using 3D print coatings (e.g. XTC-3D by Smooth-On, Macungie, USA). An alternative approach is to use a material jetting technique, which can combine different polymers seamlessly in one print, offering the possibility of creating a gradient of flexibility.
Conclusions and future work
We have introduced a general workflow that can be used to generate 3D printed anatomical models from medical imaging data. This streamlined pipeline is applicable for volumetric medical imaging data and works for a wide variety of organs and other anatomical regions of interest. We have demonstrated its use in the creation of models of ribs, a liver and a lung from CT datasets. Recent developments in image segmentation algorithms have enabled the use of a multitude of tools and strategies in delineating anatomical structures of interest. We have provided an overview of the most relevant open-source tools that can be used for anatomical structure segmentation by end-users who are not medical or image processing specialists.
Future work will focus on creating flexible phantoms and exploring different materials with regard to their tissue mimicking characteristics in US and MRI systems.
Nonminimally-coupled warm Higgs inflation: Metric vs. Palatini Formulations
In this work, we study the non-minimally-coupled Higgs model in the context of warm inflation scenario on both metric and Palatini approaches. We particularly consider a dissipation parameter of the form $\Gamma=C_{T}T$ with $C_{T}$ being a coupling parameter and focus only on the strong regime of the interaction between inflaton and radiation fluid. We compute all relevant cosmological parameters and constrain the models using the observational Planck 2018 data. We discover that the $n_s$ and $r$ values are consistent with the observational bounds. Having used the observational data, we constrain a relation between $\xi$ and $\lambda$ for the non-minimally-coupled warm Higgs inflation in both metric and Palatini cases. To produce $n_s$ and $r$ in agreement with observation, we find that their values are two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation.
inflation. This is a combination of the exponential accelerating expansion phase and the reheating.
Warm inflation has become a growing area of research, with the potential to provide new insights into the physics of the early universe.
Warm inflation is an alternative version of standard inflation that takes into account the effects of dissipation and thermal fluctuations on the inflationary process. In the warm inflation scenario, the scalar field responsible for driving inflation is coupled to a thermal bath and transfers energy to radiation during inflation, thus maintaining a non-zero temperature. Warm inflation was first proposed by Berera and Fang [6]. Since then, numerous studies have been carried out to study the dynamics and predictions of warm inflation. One of the main advantages of warm inflation is that it provides a natural solution to the graceful exit problem, as the inflaton can gradually decay into the thermal bath, leading to a smooth transition from inflation to the hot big bang era.
The predictions of warm inflation have been studied both analytically and numerically. Some of the most notable works in this field include Berera et. al. [7][8][9][10][11][12], Graham and I. G. Moss [13], Bastero-Gil et. al. [50] and Zhang [15]. These studies have shown that warm inflation can produce a sufficient number of e-folds, consistent with the observed CMB temperature fluctuations, and that it can lead to a broad spectrum of curvature perturbations. There have also been several studies comparing the predictions of warm inflation with those of the standard inflationary model and other alternative models of inflation. For example, Kamali [16] compared warm inflation with the Higgs inflation model and found that warm inflation can produce a smaller tensor-to-scalar ratio, which is more in line with the current observations. Similarly, the authors of [17] showed that even when dissipative effects are still small compared to Hubble damping, the amplitude of scalar curvature fluctuations can be significantly enhanced, whereas tensor perturbations are generically unaffected due to their weak coupling to matter fields.
Warm Higgs inflation has recently been investigated in several publications, each emphasizing different aspects of the theory. In [49], a Galileon scalar field dissipation formalism is proposed via its kinetic energy. A radiation fluid emerges throughout inflation, and it has been shown that in this scenario the universe can smoothly enter a radiation-dominated era without a reheating phase. Different temperature regimes are investigated and the backreaction of radiation on the power spectrum is calculated. In [50], the warm Little Inflation scenario is proposed as a quantum field theoretical realization of warm inflation. Different potential terms are used, including the Higgs model and a chaotic potential. Requiring 50-60 e-folds of inflation and introducing a viable thermal correction to the inflaton potential term, the primordial spectrum of the different perturbation modes and the tensor-to-scalar ratio are calculated in light of the Planck data. A further motivation of our study is that we provide a constraint between the two parameters of the potential term.
Warm inflation has also been studied recently in many different theoretical settings. For instance, the authors of Ref. [18] proposed a possible realization of warm inflation owing to an inflaton field self-interaction. Additionally, models of minimal and non-minimal coupling to gravity were investigated in Refs. [16,19,20,22,23,49]. Recently, a warm scenario of the Higgs-Starobinsky (HS) model was studied [24]. The model includes a non-minimally coupled scenario with a quantum-corrected self-interacting potential in the context of warm inflation [25].
An investigation of warm inflationary models in the context of a general scalar-tensor theory of gravity has been made in Ref. [26]. A recent review of warm inflation can be found in Ref. [27].
The physical motivation of the present work is a comparative analysis of the dynamics of non-minimally coupled scalar fields in the metric and Palatini formulations. It will be shown that the two formalisms generally yield different answers for both the metric tensor and the scalar field, provided that the non-minimal coupling of the scalar field to the curvature scalar does not vanish. Investigating both formalisms helps ensure the theoretical consistency of the gravitational theories. By investigating different formalisms, one can uncover potential limitations or inconsistencies in the assumptions made in each formalism, which may even lead to distinct predictions. This aids in refining and extending our understanding of gravity and its mathematical framework. Investigating both formalisms also enables us to identify potential observational or experimental tests that could distinguish between their predictions and is crucial for testing the validity of different theories.
The work is structured as follows: In Sec. II, we review a formulation of nonminimally-coupled Higgs inflation considering both metric and Palatini approaches. In Sec. III, we provide the basic evolution equations for the inflaton and the radiation fields and define the slow-roll parameters and conditions. We also describe the primordial power spectrum for warm inflation and the form of the dissipation coefficient. In Sec. IV, we present the models of nonminimally-coupled Higgs inflation and compute all relevant cosmological parameters. We then constrain our models using the observational (Planck 2018) data in Sec. V. Finally, we summarize the present work and outline the conclusion.
NONMINIMALLY-COUPLED HIGGS INFLATION
Models in which the Higgs field is non-minimally coupled to gravity lead to successful inflation and produce the spectrum of primordial fluctuations in good agreement with the observational data. Here we consider the theory composed of the Standard Model Higgs doublet H J with the non-minimal coupling to gravity in the Jordan (J) frame: where M p is the Planck mass, ξ is a coupling constant, R J is the Ricci scalar, and H is the Higgs field with λ being the self-coupling of the Higgs doublet. Note that the mass term of the Higgs doublet is neglected throughout this paper because it is irrelevant during inflation.
As was known the metric formalism is considered as a standard gravitational method, however, one can study gravity adopting the Palatini approach leading to different phenomenological consequences in a theory with a non-minimal coupling to gravity. The differences of them are explicit and easily understandable in the so-called Einstein (E) frame where the non-minimal coupling is removed from the theory by taking a conformal redefinition of the metric g µν → Ω 2 g J,µν , Using the metric redefinition, the connection is also transformed in the metric formalism since it is given by the Levi-Civita connection: It is noticed that the connection is left unaffected in the Palatini formalism because it is treated as an independent variable as well as the metric. Thus, the Ricci scalar transforms differently depending on the underlying gravitational formulations as [28] where κ = 1 and κ = 0 correspond to the metric and the Palatini formalism, respectively. The Einstein frame expression can then be obtained after the rescaling of the metric: In the Einstein frame, the connection is not directly coupled to the Higgs field H J and the gravity sector is just the Einstein-Hilbert form. In this case, the Euler-Lagrange constraint in the Palatini formalism restricts the connection to the Levi-Civita one, and the two approaches become equivalent, up to the explicit difference in the κ term [28].
Let us next review phenomenological aspects of the metric-Higgs inflation [29] and the Palatini-Higgs inflation [30][31][32][33]. In this subsection, we neglect the gauge sector for simplicity. In the inflationary context, we usually consider the unitary gauge, in which the Higgs doublet is described by a single real scalar field φ. Therefore, the action in Eq. (5) becomes an action for φ, where κ = 1 for the metric-Higgs and κ = 0 for the Palatini-Higgs inflation. The non-trivial kinetic term can be canonically normalized by introducing the field ψ, defined through its relation to φ. In terms of ψ, the action can be rewritten with the potential expressed in the Einstein frame. The change of variable can be easily integrated in the Palatini case, while an asymptotic form in the large-field limit ξφ²/M_p² ≫ 1 is useful in the metric case. The potentials in both scenarios approach asymptotically a constant value U ≃ λM_p⁴/(4ξ²) in the large-field region, which is suitable for slow-roll inflation. The observed amplitude P_ζ ≃ 2.2 × 10⁻⁹ [34] fixes the relation between ξ and λ in the metric and Palatini approaches, ξ_met ∼ 5 × 10⁴ √λ and ξ_Pal ∼ 10¹⁰ λ, respectively. The CMB normalization thus requires the coupling to gravity ξ to be quite large unless the quartic coupling λ is extremely small, both in the metric and Palatini formalisms; see also models with non-minimal coupling in metric and Palatini formalisms [35].
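For orientation, the standard Einstein-frame relations quoted in the Higgs-inflation literature (see, e.g., Refs. [29-33]) can be summarized as follows; the notation here is schematic and conventions may differ slightly between references.

```latex
\begin{align}
\Omega^2 &= 1 + \frac{\xi \phi^2}{M_p^2}, \qquad
U(\phi) = \frac{\lambda \phi^4}{4\,\Omega^4}, \\
\left(\frac{d\psi}{d\phi}\right)^2 &=
\frac{\Omega^2 + 6\kappa\, \xi^2 \phi^2/M_p^2}{\Omega^4},
\qquad \kappa = 1~(\text{metric}),\ \kappa = 0~(\text{Palatini}), \\
\text{Palatini:}\quad
\phi &= \frac{M_p}{\sqrt{\xi}} \sinh\!\left(\frac{\sqrt{\xi}\,\psi}{M_p}\right),
\qquad
U(\psi) = \frac{\lambda M_p^4}{4\xi^2}\tanh^4\!\left(\frac{\sqrt{\xi}\,\psi}{M_p}\right), \\
\text{Metric } (\xi\phi^2/M_p^2 \gg 1):\quad
U(\psi) &\simeq \frac{\lambda M_p^4}{4\xi^2}
\left[1 - \exp\!\left(-\frac{2\psi}{\sqrt{6}\,M_p}\right)\right]^2 .
\end{align}
```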
III. THEORY OF WARM INFLATION REVISITED
The warm inflation dynamics is characterized by the coupled system of the background equation of motion for the inflaton field, ψ(t), the evolution equation for the radiation energy density, ρ r (t).
Considering the Einstein frame action with the flat FLRW line element, the Friedmann equation for warm inflation takes its standard form, with ψ̇ = dψ/dt and ρ_r being the energy density of the radiation fluid, whose equation of state is w_r = 1/3. The Planck 2018 baseline plus BK15 constraint on r is equivalent to an upper bound on the Hubble parameter during inflation of H_*/M_p < 2.5 × 10⁻⁵ (95% CL) [34]. The equation of motion of the homogeneous inflaton field during warm inflation is that of a damped scalar field with an extra friction term, where U′(ψ) = dU(ψ)/dψ. This relation is equivalent to an evolution equation for the inflaton energy density ρ_ψ, with pressure p_ψ = ψ̇²/2 − U(ψ) and ρ_ψ + p_ψ = ψ̇²; here the dissipative term on the RHS of Eq. (16) acts as the source term. The radiation energy density obeys the corresponding sourced continuity equation. A condition for warm inflation is ρ_r^{1/4} > H, in which case the dissipation potentially affects both the background inflaton dynamics and the primordial spectrum of the field fluctuations. Following Refs. [15,42], we consider a general form of the dissipative coefficient, parametrized by an integer power m of the temperature, where the coefficient C_m is associated with the dissipative microscopic dynamics and is a measure of inflaton dissipation into radiation. Different choices of m yield different physical descriptions, e.g., Refs. [15,42,43]. For m = 1, the authors of Refs. [10,19,36] have discussed the high temperature regime. For m = 3, a supersymmetric scenario has been implemented [10,42,44].
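For reference, the warm-inflation background system used in the literature cited above can be summarized schematically as follows (our notation; prefactors follow the usual conventions and should be checked against Refs. [15,42]).

```latex
\begin{align}
H^2 &= \frac{1}{3 M_p^2}\left(\tfrac{1}{2}\dot{\psi}^2 + U(\psi) + \rho_r\right), \\
\ddot{\psi} + (3H + \Gamma)\,\dot{\psi} + U'(\psi) &= 0, \\
\dot{\rho}_r + 4 H \rho_r &= \Gamma \dot{\psi}^2, \\
\Gamma &= C_m \frac{T^m}{\phi^{m-1}}, \qquad Q \equiv \frac{\Gamma}{3H}.
\end{align}
```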
A minimal warm inflation scenario was also proposed [45][46][47]. In particular, it was found that thermal effects suppress the tensor-to-scalar ratio r significantly and predict unique non-gaussianities. Apart from the Hubble term, the presence of the extra friction term, Γ, is relevant in the warm scenario. In the slow-roll regime, the equations of motion simplify, with the dissipative ratio Q defined as Q = Γ/(3H); Q is not necessarily constant. Since the coefficient Γ depends on φ and T, the dissipative ratio Q may increase or decrease during inflation. The flatness of the potential U(ψ) in warm inflation is measured in terms of the slow-roll parameters defined in Ref. [37]. Note that the β parameter depends on Γ and hence is absent in standard cold inflation. The inflationary phase of the universe in warm inflation takes place when the slow-roll parameters satisfy conditions relaxed by a factor (1 + Q) relative to the cold case [7,37,38], where the condition on β ensures that the variation of Γ with respect to φ is slow enough. Compared to the cold scenario, the power spectrum of warm inflation gets modified; it is given in Refs. [7,12,13,37,[39][40][41]50], where the subscript "k" signifies the time when the mode of cosmological perturbations with wavenumber k leaves the horizon during inflation and n = 1/(exp(H/T) − 1) is the Bose-Einstein distribution function. Additionally, the function G(Q_k) encodes the coupling between the inflaton and the radiation in the heat bath, leading to a growing mode in the fluctuations of the inflaton field.
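For completeness, the slow-roll system, slow-roll parameters and curvature power spectrum commonly quoted in the warm-inflation literature cited above take the following schematic form; the exact prefactors and the numerically determined growth function G(Q_k) should be taken from the original references.

```latex
\begin{align}
3H(1+Q)\,\dot{\psi} &\simeq -\,U'(\psi), \qquad
\rho_r \simeq \tfrac{3}{4}\, Q\, \dot{\psi}^2, \\
\epsilon &= \frac{M_p^2}{2}\left(\frac{U'}{U}\right)^2, \qquad
\eta = M_p^2 \frac{U''}{U}, \qquad
\beta = M_p^2 \frac{U'\,\Gamma'}{U\,\Gamma}, \\
\epsilon &< 1+Q, \qquad |\eta| < 1+Q, \qquad |\beta| < 1+Q, \\
P_{\mathcal{R}}(k) &= \left(\frac{H_k^2}{2\pi \dot{\psi}_k}\right)^2
\left[1 + 2 n_k + \left(\frac{T_k}{H_k}\right)
\frac{2\sqrt{3}\,\pi Q_k}{\sqrt{3 + 4\pi Q_k}}\right] G(Q_k).
\end{align}
```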
It is originally proposed in Ref. [13] and its consequent implications can be found in Refs. [12,48].
This growth factor G(Q_k) depends on the form of Γ and is obtained numerically; its form for Γ ∝ T is given in Refs. [20,50]. In this work, we consider a linear form of G(Q_k) with Q ≫ 1. Clearly, for small Q, i.e., Q ≪ 1, the growth factor does not enhance the power spectrum. This is called the weak dissipation regime.
However, for large Q, i.e., Q ≫ 1, the growth factor significantly enhances the power spectrum.
The latter is called the strong dissipation regime. The primordial tensor fluctuations of the metric give rise to a tensor power spectrum, which has the same form as in cold inflation, given in Ref. [17]. The ratio of the tensor to the scalar power spectrum is expressed in terms of the parameter r. With the primordial power spectrum for all the models written in terms of Q, λ, and C_1, we can demonstrate how the power spectrum depends on the scale. The spectral index of the primordial power spectrum is defined through the logarithmic derivative with respect to ln(k/k_p), where k_p corresponds to the pivot scale. From the definition of N, it is rather straightforward to derive the corresponding relation [23]. We now compute r and n_s using Eq. (27) and Eq. (28) for a linear form of the growing mode function G(Q) given in Eq. (25). Note that r and n_s are approximately given in Refs. [12,20,50].
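The two observables used below follow the standard definitions, evaluated at the pivot scale k_p:

```latex
\begin{equation}
r = \frac{P_T(k)}{P_{\mathcal{R}}(k)}, \qquad
n_s - 1 = \left.\frac{d \ln P_{\mathcal{R}}(k)}{d \ln k}\right|_{k = k_p}.
\end{equation}
```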
A. Metric Formalism
The energy density during inflation is predominated by the potential of the inflaton field. Therefore, we can write Using this we can express Eq. (14) for this model aṡ Using Eq.(30) and Eq. (31), we come up with the following expression: On substituting Q = Γ/3H = C T T /3H in the energy density of radiation given in Eq. (20), we obtain the temperature of the thermal bath as Dividing the above relation with H, we find The dissipation parameter is defined as Q = Γ/3H = C T T /3H. In this model of warm inflation, we have 3H considered Γ = C T T . On substituting this form of Γ we get T = 3HQ/C T . We equate this with Eq. (33) to obtain On substituting Eq. (35) in Eqs. (33) and (32), we can express P R (k) in terms of variables ξ, λ, Q and C T . Also, from its definition in Eq. (22), the slow roll parameters can be written Using Eq.(30), the tensor power spectrum for this model is evaluated and we can use Eq.(35) and express P T (k) in terms of model parameters In this subsection, we will evaluate how the dissipation parameter, Q, evolves with the number of efolds, N . We differentiate Eq.(35) w.r.t N and then again write dψ/dN = −ψ/H. By using Eqs.
(30), (31) and (35), we obtain the evolution of Q with N, where we have assumed the large-field approximation ξφ²/M_p² ≫ 1. We show the behaviour of ψ (in units of M_p) and of the temperature T during warm inflation in the metric case in Fig. (1). The dissipation parameter Q, depending on both ψ and T, is not a constant but rather evolves during inflation; this behaviour can also be seen in Fig. (1). Additionally, as shown in Fig. (1), we find that the energy density of radiation does not change appreciably when the modes of cosmological interest cross the horizon.
B. Palatini Formalism
We follow the proceeding subsection. Since the energy density during inflation is predominated by the potential of the inflaton field, for the Palatini case, this allows us to write Using the above relation, we can express Eq. (14) for this model aṡ Using Eq.(40) and Eq.(42), we come up with the following expression: On substituting Q = Γ/3H = C T T /3H in the energy density of radiation given in Eq. (20), we obtain the temperature of the thermal bath as We divide the above relation with H to obtain The dissipation parameter is defined as Q = Γ/3H. In this model of warm inflation, we have considered Γ = C T T . On substituting this form of Γ we get T = 3HQ/C T . We equate this with Eq.(43) to obtain On substituting Eq. (45) in Eqs. (43) and (42), we can express P R (k) in terms of variables ξ, λ, Q and C T . Also, from its definition in Eq. (22), the slow roll parameters can be written Using Eq.(30), the tensor power spectrum for this model is evaluated and we can use Eq.(45) and express P T (k) in terms of model parameters the energy density in radiation, and temperature T of the Universe is shown as a function of the number of efolds N with the dissipation coefficient Γ = C T T in the Palatini case. To generate this plot, we take ξ = 10 12.8 λ, C r = 70, C T = 0.045.
In this subsection, we will evaluate how the dissipation parameter, Q, evolves with the number of efolds, N . We differentiate Eq.(45) w.r.t N and then again write dψ/dN = −ψ/H. By using Eqs.
The behaviour of the evolution of ψ (in units of M p ) and the temperature T during warm inflation of the Palatini case is displayed in Fig.(2). Similarly, the dissipation parameter, Q, depending on both ψ and T , is also not a constant but rather evolves during inflation. This behaviour can also be seen in Fig.(2). We also find that the energy density of radiation does not change appreciably when the modes of cosmological interest cross the horizon shown in Fig.(2).
V. CONFRONTATION WITH THE PLANCK 2018 DATA
We constrain our results using the amplitude of the primordial power spectrum. Considering Eq. (24), we find that our predictions can reproduce the preferred value P_R ∼ A_s = 2.2 × 10⁻⁹, as shown in Fig. (3). We notice for the metric case that, in order to produce the correct value of P_R, decreasing C_T increases the magnitude of ψ. However, in the Palatini case, decreasing C_T decreases the number of e-folds. We compute the inflationary observables and then compare with the Planck 2018 data. We plot the derived n_s and r for our models along with the observational constraints from the Planck 2018 data in Fig. (4), for the metric (left panel) and Palatini (right panel) approaches, considering a linear form of the growing mode function G(Q_N) and N = 50, 60. For the plots, we have used C_r = 70 and ξ = 1.26 × 10⁶ √λ (≃ 10^6.1 √λ) for the metric case, and C_r = 70 and ξ = 6.31 × 10¹² λ for the Palatini case, and we show the theoretical predictions of (r, n_s) for different values of C_T together with the Planck'18 results for TT, TE, EE, +lowE+lensing+BK15+BAO.
VI. CONCLUSION
In this work, we studied the non-minimally-coupled Higgs model in the context of the warm inflation scenario using both metric and Palatini approaches. We particularly considered a dissipation parameter of the form Γ = C_T T, with C_T being a coupling parameter, and focused only on the strong regime of the interaction between the inflaton and the radiation fluid. We computed all relevant cosmological parameters and constrained the models using the observational Planck 2018 data.
We discovered that the n s and r values are consistent with the observational bounds. Having used the observational data, we obtained a relation between ξ and λ for the non-minimally-coupled warm Higgs inflation in both metric and Palatini cases. Our constraints on the parameters are compatible with Planck data. Furthermore in comparison to other literature on the topic, we proposed that the computed (r, n s ) parameters values are two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation.
Comparing the two approaches, the energy density and the temperature of the thermal bath in the metric case (see Fig. 1) are many orders of magnitude larger than those found in the Palatini case (see Fig. 2). To produce n_s and r in agreement with observation, we found that their values are two orders of magnitude higher than those of the usual (cold) non-minimally-coupled Higgs inflation [30,31]. However, we noticed that the ratio ξ²/λ for the metric case in this work is four orders of magnitude higher than that of the model presented in Ref. [16]. This may quantify the difference in the amount of primordial gravitational waves produced during the inflationary epoch between cold and warm Higgs inflation. Since the value of r depends on the specific inflationary model, different models predict different amounts of gravitational waves generated during inflation. A lower value of r implies weaker gravitational waves, while a higher value indicates stronger gravitational waves.
It is worth mentioning that in standard inflationary models, the inflaton field is minimally coupled to gravity, meaning its dynamics are governed solely by the Einstein equations. However, in warm inflation, a non-minimal coupling term of the form ξH²R is introduced, where ξ is the coupling constant, H is the inflaton field, and R is the scalar curvature. In the context of warm inflation, where there is dissipative particle production and energy transfer between the inflaton and other fields, the non-minimal coupling can influence the dissipation mechanism. The coupling term introduces additional interactions between the inflaton and the thermal bath of particles, affecting the dissipation coefficient and the energy transfer rate. In summary, the effects of the non-minimal coupling on the dissipative term in warm inflation can influence the energy transfer, particle production, backreaction effects, and stability of the inflationary dynamics. These effects play a significant role in determining the observational predictions and the viability of warm inflation models. We will leave these interesting issues for our future investigation.
Erythrosin B as a New Photoswitchable Spin Label for Light-Induced Pulsed EPR Dipolar Spectroscopy
We present a new photoswitchable spin label for light-induced pulsed electron paramagnetic resonance dipolar spectroscopy (LiPDS), the photoexcited triplet state of erythrosin B (EB), which is ideal for biological applications. With this label, we perform an in-depth study of the orientational effects in dipolar traces acquired using the refocused laser-induced magnetic dipole technique to obtain information on the distance and relative orientation between the EB and nitroxide labels in a rigid model peptide, in good agreement with density functional theory predictions. Additionally, we show that these orientational effects can be averaged to enable an orientation-independent analysis to determine the distance distribution. Furthermore, we demonstrate the feasibility of these experiments above liquid nitrogen temperatures, removing the need for expensive liquid helium or cryogen-free cryostats. The variety of choices in photoswitchable spin labels and the affordability of the experiments are critical for LiPDS to become a widespread methodology in structural biology.
Introduction
Electron paramagnetic resonance (EPR) pulsed dipolar spectroscopy (PDS) is a crucial tool for the study of the structure and dynamics of biomacromolecules [1][2][3][4][5]. Using microwave pulses to measure the electron-electron dipolar interaction between paramagnetic moieties, PDS techniques can be used to determine the relative distance and, for rigid systems, orientation distributions of the paramagnetic moieties, yielding information about the conformation of the biomacromolecule to which the moieties are attached [6][7][8][9][10]. Typical PDS methods, such as double electron-electron resonance (DEER) using nitroxide spin labels introduced by site-directed mutagenesis, can access the distance range of 1.5 to ca. 8 nm [11], with extension up to 16 nm possible if the biomacromolecule and solvent are fully deuterated [12]. Other stable organic radicals [13,14] and metal centers [15][16][17] are emerging as alternatives to nitroxide spin labels, offering different spectroscopic properties and improved stability for biological studies.
The photogenerated triplet state of organic chromophores has recently been introduced as a photoswitchable spin label, sparking a paradigm shift in PDS and starting the field of light-induced PDS (LiPDS) [18][19][20][21][22][23]. The formation of the EPR-active triplet states of these chromophores following photoexcitation by pulsed laser irradiation allows the study of biomacromolecules without relying exclusively on permanent paramagnetic moieties. In addition to being photoswitchable, these labels frequently lead to stronger EPR signals compared to stable spin centers, owing to the strong spin polarization arising from the initial non-Boltzmann population of the triplet state sublevels after intersystem crossing [24,25].
Several light-induced versions of PDS experiments have recently been developed based on the photogenerated triplet state of a 5(4 -carboxyphenyl)-10,15,20-triphenylporphyrin (TPP) moiety incorporated into model peptides, showing an accessible distance range similar to that of conventional PDS [18,19,[26][27][28][29][30]. Light-induced DEER (LiDEER) [18] uses the photogenerated triplet formed by an initial laser flash as the detection spin while the dipolar modulation arises from the flip of a stable radical spin induced by a time-dependent microwave pulse at a second frequency. The single-frequency laser-induced magnetic dipole spectroscopy (LaserIMD) [27], on the other hand, optically switches on the dipolar interaction by forming the triplet state using a time-dependent laser pulse while detecting on a permanent radical spin. The refocused version of this technique (ReLaserIMD) offers a more accurate determination of the zero time in the dipolar time traces and is preferable to study short spin-spin distances [19]. The performances of these techniques at different microwave frequencies, X-band and Q-band, have been compared [28,31], and they have been successfully used to study the structure of different chromophore-containing proteins. A mutant of the heme protein human neuroglobin, containing a single cysteine labeled with nitroxide and reconstituted with Zn(II) proto-porphyrin IX, was studied by ReLaserIMD and a light-induced version of the relaxation-induced dipolar modulation enhancement (LiRIDME) technique, providing distance distributions in perfect agreement with highresolution X-ray structural data [19]. The light-harvesting peridinin chlorophyll protein, containing both porphyrin and carotenoid chromophores, was bis-labeled with nitroxides and studied by LiDEER [32]. In this case, the carotenoid triplet state populated by triplettriplet energy transfer from a nearby chlorophyl was used for detection, and triangulation with the two nitroxide radicals allowed identification of the carotene pigment involved in photoprotection. LiDEER was also employed to identify the preferential binding sites of different functionalized porphyrins to the human serum albumin protein singly labeled with nitroxide [21].
The best choice of technique to use for a particular system depends on the relative relaxation times of the different spin centers and the length of time trace that is required to measure the inter-spin interaction. The principles of LiDEER and LaserIMD are combined in light-induced triplet-triplet electron resonance spectroscopy (LITTER) [29], where photogenerated triplets are used as both detection and pump spin centers, and no permanent radical is required.
The anisotropy of EPR spectra can be exploited to obtain information on the relative orientation between the spin centers used for detection and the dipolar vector connecting the two spin centers [8]. Using TPP triplet states, the orientational effects in the dipolar traces from LiDEER, ReLaserIMD, and LITTER have been used to obtain additional information on the conformation of the molecules of study in frozen solutions [29,30]. This can be simulated and analyzed in a similar way to conventional orientationally selective DEER results [8], as all experiments involve a change in the spin state of the pumped spin of ∆m s = ±1.
The search for new photoswitchable spin labels for LiPDS, which could be attached to proteins that lack intrinsic photoexcitable groups, is an active area of research and a priority for the development of the field. In addition to having the right spectroscopic properties, good candidate labels for structural biology studies must be small, biocompatible, and commercially available in functionalized forms suitable for biolabeling [33]. Halogenated derivatives of fluorescein, such as eosin Y (EY) and rose Bengal (RB), and thioxanthene-based chromophores, such as ATTO Thio12 (AT), have been proposed as photoswitchable triplet labels based on spectroscopic characterization and DFT calculations [34] and were subsequently used in LaserIMD studies of the protein oxidoreductase thioredoxin in conjunction with nitroxide labels [22]. Orthogonal labeling strategies were exploited, combining the conventional maleimide-cysteine conjugation chemistry with the copper-catalyzed azide-alkyne cycloaddition between azide-functionalized labels and alkyne-bearing non-canonical amino acids. Promising results were obtained at Q-band and liquid helium temperatures. However, these studies did not take into consideration the orientational effects arising from the anisotropy of the nitroxide EPR spectrum at Q-band.
Here, we report the first utilization of erythrosin B (EB) as a photoswitchable triplet spin label for LiPDS, as previously proposed [34]. By performing multiple ReLaserIMD experiments resonant with different parts of the nitroxide spectrum, we exploit the orientational effects arising from the anisotropy of the nitroxide spectrum at Q-band to extract information both on the inter-spin distance and on the conformation of a model peptide in frozen solution. In addition, we show the feasibility of these experiments above liquid nitrogen temperatures, removing the need for expensive liquid helium or cryogen-free cryostats in LiPDS.
Results and Discussion
The bis-labeled peptide 1 (Figure 1b) was chosen as a model compound for this study because of the rigid α-helical structure expected from its alternating Leu-Aib sequence. In vacuo DFT optimization of 1 supports the α-helical structure of the peptide backbone and predicts a distance of 1.9 nm between the two labels. Details on the synthesis and purification of 1 are given in the Materials and Methods section. A sarcosine linker was used instead of the more rigid direct attachment of the EB label to the N-terminus of the peptide via its carboxylate group in order to avoid the formation of the colorless spirolactam form [33]. This method is advantageous to previously reported strategies with similar chromophores, which involved the use of more expensive para-substituted derivatives of the chromophores as starting materials or introduced longer 5-atom linkers with additional unwanted conformational flexibility [22]. Our approach uses the more affordable nonderivatized form of EB and only introduces a 3-atom linker between the chromophore and the labeled molecule. Such an approach could also be expanded to other dyes, including EY and RB.
The ReLaserIMD technique (Figure 1a) was applied at different field positions spanning the full width of the nitroxide EPR spectrum at Q-band (Figure 2a) to obtain an orientation-resolved set of dipolar traces. The orientation selection of the microwave detection pulses is evident from the appearance of a faster dipolar frequency component in the oscillating traces as the external magnetic field is increased (Figure 2b, colored lines). ReLaserIMD was chosen over Hahn-echo LaserIMD because of its more accurate determination of the experimental zero time [19], which is critical for the correct interpretation of the fast frequency components arising from orientational effects.
(Figure 2c caption: DFT-optimized structure of 1 (side and projection views) showing the different positions of the EB center determined by the fitting procedure as red spheres, relative to the nitroxide g-tensor frame (arrows: red = g_x, green = g_y, blue = g_z). The diameter of the spheres is proportional to the number of times a single EB position contributes to the complete fit shown in panel b. The green structure on the right of the panel corresponds to another local energy minimum conformation identified by DFT.)
Orientation-dependent simulations were carried out using a previously published algorithm [8] and were used to fit the experimental dataset in an iterative least-squares global fitting process [35]. An initial fit using a molecular model generated around the DFT-optimized geometry of 1 was performed considering the photoexcited triplet spin density to be concentrated at the center of the EB moiety ( Figure S9). This is a reasonable approximation as the light used here is not polarized and, consequently, there is no experimental photoselection for different orientations of the EB moiety with respect to the external magnetic field. The spread of the spin density over the EB moiety is included implicitly as the relative separation between EB spin density positions on consecutive simulated conformers was smaller than the size of the EB moiety and therefore several fitted points of the spin density could originate from the same molecular conformation of EB. Modulation depths were normalized to 1 for ease of analysis.
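To illustrate the idea of an iterative least-squares global fit in which several field-dependent traces share one structural parameter set, a minimal Python sketch is given below. The stand-in model is a single damped cosine at the dipolar frequency and deliberately ignores orientation selection and powder averaging, which the actual simulations of Refs. [8,35] include; all names and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def model_trace(params, t_us):
    # Simplified dipolar trace: damped cosine at the electron-electron dipolar frequency.
    r_nm, depth, decay_us = params
    nu_mhz = 52.04 / r_nm**3
    osc = np.cos(2 * np.pi * nu_mhz * t_us) * np.exp(-t_us / decay_us)
    return 1.0 - depth * (1.0 - osc)

def global_residuals(params, datasets):
    # Concatenate residuals from every trace so one parameter set is fitted to all of them.
    return np.concatenate([model_trace(params, t) - y for t, y in datasets])

# Synthetic data standing in for traces measured at different field positions.
t = np.linspace(0.0, 2.0, 200)  # evolution time in microseconds
datasets = [(t, model_trace([1.8, 0.3, 3.0], t) + 0.01 * np.random.randn(t.size))
            for _ in range(3)]
fit = least_squares(global_residuals, x0=[2.0, 0.2, 2.0], args=(datasets,))
print("fitted distance (nm):", fit.x[0])
```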
The results of this initial fit, in which the delocalization of the EB triplet spin density was not explicitly considered ( Figure S9), were used to refine the model and a second fit was carried out including the delocalized EB triplet spin density calculated by DFT ( Figure S8). In this case, the 20 best-fitting dipolar vectors linking the nitroxide spin density to the center of the EB moiety were used. The relative orientation between the g-frame of the nitroxide radical, which was fixed with respect to the dipolar vector for each chosen dipolar vector, and the zero-field splitting (ZFS) frame of the EB triplet were varied with three Euler angles. The simulation results show small variations in the calculated traces ( Figure S9), which mainly arise due to the changing distances of each point of the spin density relative to the nitroxide as the EB orientation is varied for each dipolar vector. The results of this fit (Figure 2b, black lines) are in excellent agreement with the experimental data and the corresponding spin-spin distance distribution has the maximum at 1.8 nm (Figure 2d).
The conformational distribution of the molecule derived from the orientational analysis is plotted as spheres positioned at the center of the EB moiety with respect to the nitroxide g-frame, where the diameter of each sphere is proportional to the population of this particular conformation (Figure 2c). These results suggest the chromophore folds back onto the peptide backbone as predicted by DFT. However, the direction of this fold is different from the DFT structure with the minimum calculated energy (Figure 2c, beige structure). The flexibility in the sarcosine linker and at the N-terminus of the alpha helix means that other DFT-optimized structures with similar but slightly higher energies can be converged, leading to other local energy minima. Specifically, rotation around the C-N bond of the sarcosine linker yields a structure where the EB chromophore is closer to the results of the orientational analysis ( Figure 2c, green structure). The fit results also show reasonable agreement of the z-axis of the EB ZFS-frame (concurrent with the long axis of the EB moiety) with the green structure ( Figure S13). This provides further evidence that this structure may be the dominant conformation in frozen solution. In addition, it should be considered that all DFT calculations in this work were performed in vacuo. Consequently, the presence of solvent could stabilize alternative conformations of the peptide that involve further rotation around the sarcosine C-N bond and/or change the hydrogen bonding network at the N-terminus.
A model-free analysis, involving a more time-consuming exploration of the full conformational space, rendered very similar results, with only a slightly wider conformational spread due to the unrestricted nature of this approach (Figures S14 and S16). These results validate the model-based analysis and confirm that the use of the minimum-energy DFT structure as a guide to the conformation space in which the model is constructed does not skew the conformational information obtained.
If only the spin-spin distance distribution is of interest, the dipolar traces measured across the nitroxide spectrum can be added together to average and thus remove the orientational effects, resulting in an orientation-independent form factor. This can be analyzed by Tikhonov regularization using standard software such as DeerAnalysis [36] (Figure 3a). For this system, the resulting distance distribution agrees well with that obtained from the orientation-dependent analysis and with the DFT prediction (Figure 3b), and the small deviations are likely a result of incomplete orientation averaging caused by the discrete nature of the traces measured. Comparison to an orientation-independent analysis of each individual dipolar trace by Tikhonov regularization shows that the traces acquired around the nitroxide spectral maximum are not subjected to strong orientational effects in the studied molecule and render very similar distance distributions (Figure S17). It is therefore possible, for this particular molecule, to measure a single dipolar trace at the spectral maximum and to analyze it without taking orientational effects into consideration. However, this is not a general result, and it might not be true for other bis-labeled molecules, as the orientational effects on the dipolar traces depend on the relative orientation between the nitroxide g-frame and the dipolar vector. If this relative orientation is not known a priori, orientation-dependent analysis or orientational averaging, involving the measurement of several dipolar traces at different parts of the spectrum of the detection spin center, are the recommended approaches. The ReLaserIMD measurements were repeated at temperatures up to 100 K (Figures 4 and S19), demonstrating that LiPDS experiments can be carried out in the same conditions as conventional nitroxide-nitroxide PDS, without the need for expensive liquid helium or cryogen-free cryostats. This is a big step towards the widespread use of LiPDS in structural biology.
The modulation-to-noise ratio (MNR) of the traces, calculated as the modulation depth relative to the noise level at the end of the trace, is significantly reduced at 100 K; however, the values for the data measured at 60 and 80 K are very similar: 70 and 73, respectively (Figures 4 and S19). This can be rationalized as the Tm times of the nitroxide measured at 60 and 80 K are very similar, whereas that measured at 100 K is approximately 3 times shorter ( Figure S18 and Table S1).
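As a concrete illustration of how such a modulation-to-noise ratio can be estimated from a dipolar trace, a minimal Python sketch is given below; the normalization, tail length and noise estimate are illustrative assumptions rather than the exact procedure used here.

```python
import numpy as np

def modulation_to_noise_ratio(trace: np.ndarray, tail_points: int = 50) -> float:
    """Estimate MNR as the modulation depth divided by the noise level of the trace tail.

    Assumes 'trace' is a background-corrected dipolar trace normalized to 1 at time
    zero; the modulation depth is taken as 1 minus the mean of the tail, and the
    noise as the standard deviation of the tail about its mean.
    """
    tail = trace[-tail_points:]
    modulation_depth = 1.0 - tail.mean()
    noise = (tail - tail.mean()).std()
    return modulation_depth / noise
```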
Comparison of the data measured at 60 and 80 K shows that fewer oscillations are resolved in the trace recorded at 80 K compared to that recorded at 60 K (Figures 4 and S19). One possible explanation for this could be an increase in the longitudinal spin relaxation and thermalization of the non-Boltzmann sublevel populations of the EB triplet at higher temperatures such that the spin state of EB generated by the laser pulse changes during the precession of the nitroxide spin, causing additional fluctuations in the nitroxide electron spin-echo intensity. It was not possible to observe a spin echo from the EB triplet at any of the temperatures studied, consequently it was not possible to measure an inversion-recovery or echo-detected delay after the flash experiment to determine the relaxation of the EB center and investigate this hypothesis. An alternative explanation is that, as the temperature increases, molecular motions also increase, and these motions may broaden the distribution of the dipolar frequencies observed at higher temperatures, leading to faster damping of the oscillations [37,38].
Sample Preparation
The peptide sequence in 1 was obtained by solid-phase peptide synthesis on a 2-chlorotrityl resin preloaded with Lol. The sequence contains several Aib residues known to induce helical conformations. The synthetic procedure followed for this new TOAC-containing peptide resembles those previously described [39,40]. All reagents and solvents were purchased from either Merck KGaA (Darmstadt, Germany) or Iris Biotech GmbH (Marktredwitz, Germany). Since Aib is a poorly reactive residue, all coupling steps were based on Oxyma Pure and N,N'-diisopropylcarbodiimide activation and were performed twice. EB was linked to the N-terminal Sar residue under the same experimental conditions, but the coupling reaction was repeated three times. The peptide was cleaved from the resin by repeated treatments with 30% 1,1,1,3,3,3-hexafluoroisopropanol in dichloromethane and purified by preparative reversed-phase (RP)-HPLC on a Phenomenex C4 column (40 × 250 mm, 10 µm, 300 Å) using a Pharmacia (GE Healthcare, US) system (flow rate 10 mL min−1, λ = 206 nm; eluant A, H2O/CH3CN 9:1 v/v; eluant B, CH3CN/H2O 9:1 v/v; gradient 75-100-100% B in 12 + 10 min). The purified fractions were characterized by analytical HPLC and MS, which indicate that the peptide was obtained at good purity, with a yield after purification of 9% (Figures S1 and S2).
Samples for EPR were prepared to 50 µM of 1 for Q-band and 100 µM of EB (95 %, Aldrich) for X-band in ethanol-d 6 (anhydrous, >99.5 atom %, Merck, Gillingham, UK). Samples were degassed by several freeze-pump-thaw cycles, sealed inside quartz tubes (3 mm outer diameter for Q-band, 4 mm for X-band), and flash-frozen in liquid nitrogen prior to insertion into the spectrometer.
Spectroscopy
UV-Vis absorption spectra were acquired in 10 mm quartz cuvettes using a UV-Vis spectrophotometer (Cary 60, Agilent, Santa Clara, CA, USA). Transient absorption spectra were acquired in sealed 2 mm glass cells using a nanosecond transient absorption spectrometer (EOS, Ultrafast systems, Sarasota, FL, USA) after photoexcitation by a Nd:YAG-pumped optical parametric generator (PL2210 and PG403, Ekspla, Vilnius, Lithuania).
Pulsed EPR experiments were carried out in an ElexSys E580 spectrometer (Bruker, Billerica, MA, USA), using an ER 5106 QT2 resonator (Bruker, Billerica, MA, USA) at Q-band (34 GHz). The temperature was maintained at 60 K using liquid helium or at 80, 100, and 120 K using liquid nitrogen, in a CF935 cryostat (Oxford Instruments, Abingdon, UK) with an ITC103 temperature controller (Oxford Instruments, Abingdon, UK). Laser excitation was provided by an OPO (Opolette355, Opotek, Carlsbad, CA, USA) operated at a repetition rate of 20 Hz (5 ns pulses) at a wavelength of 532 nm, with an energy of 2 mJ per pulse. The wavelength was chosen to be close to the absorption maximum of EB (Figure S3). The beam was passed through a depolarizer (Thorlabs, Newton, NJ, USA) before being directed into the spectrometer.
Rectangular microwave pulses of π = 40 ns and π/2 = 20 ns were used for all pulsed EPR experiments. Field-sweep and phase-memory-time experiments were performed using a standard Hahn echo sequence (π/2 - τ - π - τ - echo). For the inversion-recovery experiments, this sequence was preceded by an inversion pulse (π - T - π/2 - τ - π - τ - echo). ReLaserIMD used the refocused-echo three-pulse sequence shown in Figure 1a: π/2 - τ1 - π - τ' - laser - τ'' - π - τ2 - echo, with τ1 = 600 ns, τ2 = 200 ns, and τ1 + τ2 = τ' + τ''. The experiment was carried out at 5 different values of the external magnetic field (ca. 1209.1, 1210.0, 1211.5, 1213.3, and 1216.8 mT), resonant with different parts of the nitroxide spectrum, in order to probe orientation selection. The raw dipolar traces were phase- and background-corrected to obtain orientation-dependent ReLaserIMD form factors. For the orientation-independent analysis, these form factors were averaged, weighted by the corresponding spectral intensities of the nitroxide, to obtain an orientation-independent form factor, which was then analyzed via Fourier transform and Tikhonov regularization using the MATLAB® DeerAnalysis2019 [36] routine to extract a distance distribution. MNRs were calculated using the modulation depth of the ReLaserIMD traces and noise intensities estimated as the root-mean-square deviation (RMSD) of the form factors after complete damping of the dipolar oscillations.
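As a rough illustration of the last two steps (the weighted orientation averaging and the MNR estimate), the Python/NumPy sketch below mirrors the description above. The function names, the tail fraction used for the noise estimate, and the use of the tail mean as the damped level are our own illustrative choices, not the actual analysis code, which relied on the DeerAnalysis2019 routine.

```python
import numpy as np

def orientation_independent_form_factor(form_factors, spectral_weights):
    """Weighted average of orientation-selective form factors.

    form_factors: array (n_fields, n_times) of background-corrected traces;
    spectral_weights: array (n_fields,) of nitroxide spectral intensities at
    the corresponding field positions.
    """
    w = np.asarray(spectral_weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    return w @ np.asarray(form_factors, dtype=float)  # shape (n_times,)

def modulation_to_noise_ratio(form_factor, tail_fraction=0.2):
    """MNR: modulation depth divided by the RMSD of the fully damped tail."""
    f = np.asarray(form_factor, dtype=float)
    n_tail = max(int(len(f) * tail_fraction), 2)
    tail = f[-n_tail:]
    depth = 1.0 - tail.mean()                         # crude modulation depth
    noise = np.sqrt(np.mean((tail - tail.mean()) ** 2))
    return depth / noise
```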
The EPR characterization of free EB was carried out in an ElexSys E680 spectrometer (Bruker, Billerica, MA, USA) using an EN 4118X-MD5 resonator (Bruker, Billerica, MA, USA) at X-band (9.7 GHz) and 20 K, with the same laser settings reported above. For the variable delay-after-flash (DAF) measurement (laser - DAF - π/2 - τ - π - τ - echo), the resonator was over-coupled. For the time-resolved EPR (trEPR) measurement, the resonator was critically coupled, and no field modulation or phase-sensitive detection was used. The signal was averaged between 0.5 and 2.0 ns after the laser flash, around the signal maximum of the time trace.
DFT Calculations
Initial geometries for molecule 1 were built in UCSF Chimera [41]. All DFT calculations were performed in vacuo. Geometry optimizations of 1 in the ground state were carried out using Gaussian® 16 (revision A.03) [42], with the PBE1PBE functional, the 6-31G(d) basis set, and a spin multiplicity of 2. Iodine atoms in EB were replaced by hydrogens to speed up the calculation. The spin density in the nitroxide radical was obtained by a single-point calculation using the BP86 functional and the Def2-SVP basis set.
The geometry optimization and spin density calculation of EB in the triplet state (spin multiplicity of 3) was carried out in Orca (release version 4.2.0) [43], using the functional BP86 and the basis set Def2-TZVP. Only the EB-Sar-Leu segment of the peptide was included. The resolution of identity (RI) approximation, with the auxiliary basis set def2/J, was used.
Orientation-Dependent Simulations
The protocol for orientation-dependent simulations was first described for conventional PDS by Lovett et al. [8] and adapted for LiPDS by Bowen et al. [30]. ReLaserIMD simulations were carried out in the g-tensor frame of the nitroxide radical (Figure 1b). The 'pump pulse' used in the simulation was considered to excite all orientations of EB with respect to the external magnetic field, reflecting the fact that the laser light used in this experiment was depolarized. The spin Hamiltonian parameters used to simulate the EPR spectra of the nitroxide radical and EB triplet were obtained from fitting the echo-detected field-swept (Figure 2a) and trEPR (Figure S4) spectra, respectively, using the MATLAB®-based EasySpin routine (pepper function) [44]. A library of pre-simulated traces was fitted to the experimental data following a protocol similar to that reported by Marko et al. [35]. For the model-based fit, an initial model consisting of a cone of 1200 dipolar vectors around the DFT-optimized geometry was generated (∆θ = 30°, ∆r = 0.2 nm), and the corresponding dipolar traces at the 5 different field positions were simulated considering the spin density of the EB triplet to be concentrated at a single point in space in the center of the chromophore. The 20 dipolar vectors found to give the largest contributions to this first fit were then selected to generate a second model, in which the spin delocalization of the EB and the orientation of the EB moiety with respect to the nitroxide were considered. The orientation of the EB moiety with respect to the nitroxide was varied by ∆α = ∆β = ∆γ = 45° in steps of 22.5°, where α, β, γ are the Euler angles defining the orientation of the ZFS-tensor frame of the EB triplet with respect to the g-tensor frame of the nitroxide radical. The electron spin delocalization in the EB triplet state was included as calculated by DFT. For the model-free fit, one quarter of a spherical shell of ∆r = 0.5 nm around r = 1.8 nm, containing 5115 dipolar vectors, was used to simulate the library of dipolar traces, again considering the spin density of the EB triplet to be concentrated at the center of the chromophore. A total of 50 least-squares fitting iterations was sufficient to reach convergence of the RMSD from the experimental data in all cases (Figures S10, S12 and S15).
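For orientation, the kind of single-orientation dipolar trace that enters such a simulation library can be sketched as below (Python/NumPy). This assumes point dipoles with g ≈ 2 and neglects the orientation-selective excitation weights and the triplet spin delocalization that the actual simulations include; the function name and constant are only illustrative.

```python
import numpy as np

D_DIP_MHZ_NM3 = 52.04  # point-dipole coupling constant for two g ~ 2 spins

def single_orientation_trace(t_us, r_nm, theta_rad):
    """Dipolar oscillation for one fixed orientation of the spin-spin vector.

    t_us: time axis in microseconds; r_nm: inter-spin distance in nm;
    theta_rad: angle between the dipolar vector and the external field.
    """
    nu_dd = D_DIP_MHZ_NM3 / r_nm ** 3 * (1.0 - 3.0 * np.cos(theta_rad) ** 2)
    return np.cos(2.0 * np.pi * nu_dd * t_us)

# A trace library is built by evaluating such traces over a grid of candidate
# dipolar vectors (a cone or spherical shell) before least-squares fitting.
```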
Conclusions
The variety of choice in photoswitchable spin labels and the affordability of the experiments are critical for the expansion of light-induced pulsed electron paramagnetic resonance dipolar spectroscopy (LiPDS) to become a widespread and complementary methodology to conventional PDS. With this work, we have taken an important step in this direction by introducing a new photoswitchable spin label and by removing the need for expensive cryogenics in these experiments.
We reported, for the first time, the use of the photoexcited triplet state of erythrosin B (EB) as a photoswitchable spin label for LiPDS. Its strong light absorption in the green and its high triplet quantum yield allow for low-demanding photoexcitation using the second harmonic of a simple Nd:YAG laser. In addition, its small size, biocompatibility, and commercial availability in functionalized forms for biolabeling make it an ideal candidate for biological applications.
Employing the refocused laser-induced magnetic dipole (ReLaserIMD) technique to measure the dipolar interaction between the EB triplet and a permanent nitroxide radical in a rigid model peptide, we exploited the orientational effects in the dipolar traces to obtain information on both the distance and relative orientation between the two spin-bearing moieties. The agreement between different data analysis approaches and density-functional theory calculations demonstrates the robustness of this methodology. With this, we showed the importance of capturing the orientational effects underlying these experiments, which can then be correctly averaged to perform an orientation-independent analysis free of orientational artefacts if only the distance distribution is of interest.
In addition, we demonstrated the feasibility of these experiments above liquid nitrogen temperatures, removing the need for expensive liquid helium or cryogen-free cryostats, which often restricts the accessibility of LiPDS to researchers.

Supplementary Materials: Figure S3: room-temperature UV-Vis absorption spectrum of 1 in ethanol; Figure S4: X-band trEPR spectrum of EB measured after photoexcitation at 532 nm at 20 K, with an EasySpin [44] simulation of the triplet-state spin Hamiltonian; Figure S5: characterization of the relaxation (inversion recovery and phase memory time) of the nitroxide spectral maximum in the dark at 60 K; Figure S6: transient absorption spectroscopy of free EB in ethanol at 100 K; Figure S7: variable delay-after-flash (DAF) spin-echo experiment with free EB in d6-ethanol at 20 K; Figure S8: calculated electronic spin densities of the EB triplet and the nitroxide radical used in the orientation-dependent analysis; Figure S9: results of the model-based fit with the triplet spin density at the center of the EB moiety; Figure S10: RMSD from the fit in Figure S9; Figure S11: simulated ReLaserIMD traces for the best-fitting dipolar vector with the delocalized EB spin density and varied Euler angles of the EB chromophore; Figure S12: RMSD from the fit in Figure 2; Figure S13: orientations of the D-tensor z-axis of the EB triplet for the dipolar vector contributing most to the best fit in Figure 2; Figure S14: results of the model-free fit with the triplet spin density at the center of the EB moiety; Figure S15: RMSD from the fit in Figure S13; Figure S16: comparison between the model-based (Figure S9) and model-free (Figure S15) fits, both with single-point spin density; Figure S17: orientation-independent analysis of the individual dipolar traces by Tikhonov regularization using DeerAnalysis2019 [36]; Figure S18: characterization of the relaxation of the nitroxide spectral maximum in the dark at 60, 80, 100, and 120 K; Figure S19: background-corrected and modulation-depth-normalized ReLaserIMD traces acquired at the nitroxide signal maximum at 60, 80, 100, and 120 K; Table S1: lifetimes extracted from the fits in Figure S18.

Data Availability Statement: Data is available from the authors on request and will be deposited in a data repository.
|
2022-11-06T16:09:55.489Z
|
2022-11-01T00:00:00.000
|
{
"year": 2022,
"sha1": "38bb4dadf3b3637d8e3f652620def6fa974ea082",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/21/7526/pdf?version=1667472148",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a560e82e54e26847773d325075c72340f88e50d",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
227053714
|
pes2o/s2orc
|
v3-fos-license
|
Scene text removal via cascaded text stroke detection and erasing
Recent learning-based approaches show promising performance improvement for the scene text removal task. However, these methods usually leave some remnants of text and obtain visually unpleasant results. In this work, we propose a novel "end-to-end" framework based on accurate text stroke detection. Specifically, we decouple the text removal problem into text stroke detection and stroke removal. We design a text stroke detection network and a text removal generation network to solve these two sub-problems separately. Then, we combine these two networks as a processing unit, and cascade this unit to obtain the final model for text removal. Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art approaches for locating and erasing scene text. Since current publicly available datasets are all synthetic and cannot properly measure the performance of different methods, we construct a new real-world dataset, which will be released to facilitate the relevant research.
I. INTRODUCTION
Scene text is an important information carrier, and it often appears in various scenarios. The problem of scene text removal can be stated as follows: given an image containing some text [e.g., Fig. 1(a)], the goal is to remove the text in this image [e.g., Fig. 1(d)]. This task has many applications in daily life, such as personal privacy protection (hiding telephone numbers or home addresses in public photos), text translation (removing the original text and pasting new translated results), and so on.
Several approaches have been proposed to erase graphical text (e.g., subtitles) from color images [1]- [3].For the challenging scenario of scene text removal, which usually has complex background and text with various fonts and sizes, etc, however, these methods often produce results with visual artifacts.Inspired by the notable success of deep learning in image transformation [4]- [6], recent works have introduced deep-learning-based approaches to solve this problem and have achieved promising results [7]- [9].The learning-based methods can be roughly classified into two main categories, i.e., text removal without/with using mask.The former simply takes the given image as input and removes all the texts from the whole input image.This kind of methods often left noticeable remnants of text or distort non-text area incorrectly, and cannot remove text locally.The latter usually uses a region mask, i.e., a rectangle or polygon mask roughly indicating the text region [e.g., Fig. 1(b)], as additional input to facilitate the text removal.
Recent MTRNet [9] achieved noticeable improvement compared to prior works for scene text removal, by focusing on text regions via auxiliary/binary mask.In fact, their pipeline is similar to the general image inpainting tasks [10], [11].However, there is an apparent difference between them: for the text removal, the pixel values of original input image in the regions indicated by auxiliary mask (i.e., text regions) are known; whereas the corresponding values are unknown (corrupted) for the general image inpainting, i.e., the so-called missing regions.Generally speaking, when the regions to be processed (indicated by mask) are larger, it becomes harder to fill or remove the corresponding regions not only for the image inpainting, but also for the text removal.In addition, for the scene text removal problem itself, there is no need to remove regions not covered by text strokes like MTRNet.In other words, the mask used by MTRNet covers some unnecessary/redundant regions (i.e., non-stroke areas), especially when text strokes are scattered sparsely.It is obvious that if we can extract the exact text stroke, which means that we can preserve original contents of input image as much as possible, and then we could achieve better result.However, such precise areas are difficult to obtain, to best of our knowledge, there is no related research to focus on distinguishing text strokes from non-stroke area in the pixel-wise level.
In this paper, we propose a novel "end-to-end" framework based on a generative adversarial network (GAN) to address this problem. The key idea of our approach is first to extract text strokes as accurately as possible, and then to improve the text removal process. These two processes can be further enhanced via a simple cascade. In addition, current public datasets for scene text removal are all synthetic, which to some extent affects the generalization ability of trained models. To facilitate this research and stay close to the real-world setting, we construct a new high-quality dataset. The main contributions of our work include:
• We design a text stroke detection network (TSDNet), which can effectively distinguish text strokes from non-text areas.
• We propose a text removal generation network and combine it with TSDNet to construct a processing unit, which is cascaded to obtain our final network. Our method demonstrates superior performance.
• We propose a weighted-patch-based discriminator to pay more attention to the text area of given images, making it easier for the generator to generate more realistic images.
• We construct a high-quality real-world dataset for the scene text removal task, which can be used to benchmark related text removal methods. It can also be used in other related tasks.
The remainder of this paper is organized as follows. Section II reviews relevant existing work. Section III introduces the motivation and network details of our method. Section IV presents the performance evaluation of our method and detailed comparisons with existing methods. Section V draws the conclusions and discusses future working directions.
A. Scene text detection
Scene text detection is a fundamental step of scene understanding and is widely studied in the field of computer vision [12].With the aid of deep learning, the performance of scene text detection framework has been significantly improved and surpassed traditional methods by large margins.Shi et al. [13] decompose text into two locally detectable elements of segments and links, which are simultaneously detected by a fully-convolutional network.Liu et al. [14] collect a curved text dataset called CTW1500 to facilitate the curved text detection task, and propose a method with the intergration of transverse and longitudinal sequence connection.Chen et al. [15] propose the concept of weighted text border and introduce attention module to boost the detection performance.To obtain better detection performance, multi-scale pyramid input is widely used with the consumption of much more running time.He et al. [16] achieve remarkable speedup via a novel two-stage framework including a scale-based region proposal network and a fully convolutional network.CRAFT [17] effectively detect arbitrary text area by exploring each character and affinity between characters.In this work, we adopt this method as the tool to measure the performance of scene text removal (more details in Section IV-B).
B. Text/non-text image classification
Another relevant research is text/non-text image classification, which identifies whether a image block contains text or not.Zhang et al. [18] first propose an effective method for text image discrimination, which is the suitable combination of maximally stable extremal region (MSER) [19], CNN, and bag of words (BoW) [20].Bai et al. [21] propose a mutli-scale spatial partition network to efficiently solve this task by predicting all image blocks simultaneously in a single forward propagation.Zhao et al. [22] investigate this task from two perspectives of the speed and the accuracy.They use a small and shallow CNN to accomplish high speed and then apply the knowledge distillation to improve its performance.Very recently, Gupta and Jalal [23] combine a text detector EAST [24] and classification subnetwork to achieve text/non-text image classification.Different from these previous works, our method mainly captures the exact position of text stroke, i.e., in the pixel-wise level, instead of image block/patch-wise level, to effectively facilitate the text removal network.
C. Scene text removal
Existing approaches of the scene text removal can be classified into two major categories: traditional non-learning methods and deep-learning-based methods.
Traditional approaches typically use color-histogram-based or threshold-based methods to extract text areas, and then propagate information from non-text regions to text regions depending on pixel/patch similarity [1]- [3].These methods are suitable for simple cases, e.g., clean and well focused text, whereas they have limited performance on complex scenarios, such as perspective distortion and complicated background, etc.
Recent learning-based approaches try to solve this problem with the powerful learning capacity of deep neural networks.Nakamura et al. [7] first propose a scene text erasing method (ST Eraser) based on convolutional neural network (CNN), and conducted text erasing patch by patch.This patch-based processing fails to localize text with complex shape and inevitably damaged the consistency and continuity of erased result.More recently, Zhang et al. [8] design an end-to-end trainable framework (EnsNet) with a conditional GAN to remove text from natural images.Different from [7] which erases text in an image patch by patch, EnsNet can erase the scene text on the whole image in an "end-to-end" manner.For these two works, they do not use mask, and thus need to localize and remove text simultaneously.Such kind of methods often suffer from inaccurate text localization and incomplete text removal.To solve this, Tursun et al. [9] develop a mask-based text removal network (MTRNet).Auxiliary mask is used to provide information on where the text is, and enables MTRNet to focus on text removal better.The additional information provided by mask is the main reason why MTRNet outperforms previous studies.MTRNet also supports partial/local text removal by providing mask purposefully.All of these existing approaches often leave some text strokes unchanged or generate unpleasant contents because they cannot appropriately and exactly pay attention to the text strokes.Another shortcoming of current methods is that their training datasets are all synthetic, because the collection of real-world datasets are difficult and time-consuming.
In addition, a closely related problem to scene text removal is the general image inpainting, which aims at synthesizing plausible contents to fill missing/hole regions of the corrupted input images.Image inpainting has been extensively studied with the aid of deep learning methods.More recently, several GAN-based approaches are proposed for this purpose [10], [25], [26] and show strong ability of generating reasonable contents for missing regions.We also use the GAN framework in our approach for text removal.
III. PROPOSED METHOD
Region mask is commonly used in general image inpainting studies to indicate which regions should be filled.MTRNet [9] first introduces region mask into text removal task, and achieves good performance.Yet such direct introduction and application is inappropriate because it ignores the difference between scene text removal and general image inpainting.Region mask is suitable for image inpainting where it properly specifies the to-be-filled region.When directly applying in the text removal mask like MTRNet, unfortunately, it cannot reach pixel-level accuracy in distinguishing which regions are text strokes.Regarding the above inappropriateness, we believe that a more accurate text stroke mask can help to improve the performance of text removal methods.We therefore propose to decouple the text removal task into two sub-tasks: text stroke detection and stroke removal, and solve them separately.In the following, we explain the details of our network architectures and the training losses used in our network.
A. Network architecture
We design and implement a text stroke detection network, and then combine it with our proposed text removal generation network to construct a processing unit.The final network is obtained by cascading this unit and combining with a weightedpatch-based discriminator.
1) Cascaded generator: The proposed generator is designed for the following two purposes, i.e., 1) to detect text strokes in the input image accurately; 2) to inpaint the detected text strokes with proper content.To achieve the first goal, we construct a text stroke detection network (TSDNet).For the second goal, we propose a text removal generation network (TRGNet).The whole generator is obtained by cascading the group of TSDNet and TRGNet, as shown in Fig. 2. Note that, the parameters in these four networks are not shared.Technically, both TSDNet and TRGNet employ a U-Net-like architecture [27], since by comparing with simple encoder-decoder framework, the U-Net architecture with skip connection helps to recover the structure and the texture details of unmasked area from input images, as well as to avoid over-smoothing and undesired artifacts to some extent.
The inputs of the TSDNet (denoted as G D) are a text image I and a binary mask M (indicating the text regions). The output is a float matrix M s with the same size as M, ranging from 0 to 1, in which a larger value indicates higher confidence that the corresponding position of image I is covered by a text stroke.
The ground truth of the text stroke distribution is a binary mask M gt, in which 1 means that the corresponding position of I is covered by a text stroke. Different from M, which only specifies the rough region of certain text, M gt is a pixel-level annotation of text strokes. In practice, M gt can be obtained by binarizing the difference between the paired text image I and text-free image I gt (see more details in Section IV-A), and this stroke annotation is only used in the training stage as the supervised information to train TSDNet. After obtaining the stroke mask M s, the TRGNet G R is then applied to erase text from the input image I. G R takes three items as input, namely, the text image I, the binary mask M, and the stroke mask M s obtained from G D, and outputs the text-erased image I te, i.e., I te = G R (I, M, M s). A TSDNet followed by a TRGNet (the first row of Fig. 2) can already detect and erase text effectively, but the resulting images (I te) sometimes contain awkward artifacts and slight remnants of text. We observed that a simple cascade can eliminate such artifacts and bring significant visual improvement. The designed architecture is as follows: the second TSDNet G D′ takes I te, M, and M s as input, and outputs a refined stroke mask M s′. Then, the second TRGNet G R′ takes I te, M, and M s′ as input, and outputs the final text-erased result I te′. By combining the previous outputs, G D′ obtains a more accurate text stroke distribution in an incremental manner, and thus G R′ can reduce artifacts and inconsistency effectively. Previous studies such as EnsNet and ST Eraser, which simply take an image/patch as input and try to erase text without any prior information, showed relatively limited performance. In this work, we use a binary mask (specifying the text region) as additional information to decrease the difficulty of detecting and erasing text at the same time, and design a TSDNet to provide more accurate instruction on which areas should be removed. By doing so, we successfully decouple text removal into text stroke detection and stroke removal, and propose an effective solution and framework to solve these decoupled problems.
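A minimal sketch of this cascaded data flow is given below. The original implementation uses TensorFlow 1.13; the sketch only illustrates the concatenation of inputs and the order in which the four sub-networks are applied, while the channel ordering, function names, and network internals are our assumptions.

```python
import tensorflow as tf

def cascaded_generator(image, mask, tsd1, trg1, tsd2, trg2):
    """Data flow of the cascaded generator (illustrative sketch only).

    image: input text image, shape (N, H, W, 3); mask: binary region mask,
    shape (N, H, W, 1); tsd*/trg* are the four U-Net-like sub-networks
    (callables, e.g. tf.keras Models) whose parameters are not shared.
    """
    stroke_1 = tsd1(tf.concat([image, mask], axis=-1))               # M_s
    erased_1 = trg1(tf.concat([image, mask, stroke_1], axis=-1))     # I_te
    stroke_2 = tsd2(tf.concat([erased_1, mask, stroke_1], axis=-1))  # M_s'
    erased_2 = trg2(tf.concat([erased_1, mask, stroke_2], axis=-1))  # I_te'
    return stroke_1, erased_1, stroke_2, erased_2
```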
2) Weighted-patch-based discriminator: As text removal only needs to alter part of the content of the input image, a patch-based discriminator (see Fig. 3) is more suitable for effectively concentrating on the altered areas. In this work, we use the discriminator proposed in SN-PatchGAN [26], [28] to discriminate the text-erased image patch by patch. We further improve the original discriminator by attaching an additional convolutional branch D M to the discriminator D for assigning different weights to different patches according to the mask M. D M has the same architecture as D, but each layer has only one channel and the weights in its convolutional kernels are fixed to 1. By doing so, the patches covered by more text are paid more attention.
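One way to realize such a weighting branch is sketched below: the region mask is down-sampled with fixed all-ones convolutions so that patches containing more text receive larger weights, and the result multiplies the discriminator's per-patch scores with broadcasting. The number of layers and the stride here are assumptions chosen to mirror a typical patch discriminator, not the paper's exact configuration.

```python
import tensorflow as tf

def patch_weights_from_mask(mask, n_layers=6, kernel_size=5, stride=2):
    """Down-sample the text-region mask with fixed all-ones convolutions.

    mask: binary text-region mask, shape (N, H, W, 1).
    """
    weights = tf.cast(mask, tf.float32)
    ones_kernel = tf.ones((kernel_size, kernel_size, 1, 1), dtype=tf.float32)
    for _ in range(n_layers):
        weights = tf.nn.conv2d(weights, ones_kernel,
                               strides=[1, stride, stride, 1], padding="SAME")
    return weights

def weighted_patch_scores(patch_scores, mask, **kwargs):
    """Element-wise product (broadcast over channels) of per-patch scores and
    the mask-derived weights; assumes matching spatial resolutions."""
    return patch_scores * patch_weights_from_mask(mask, **kwargs)
```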
B. Training loss
In this subsection, we present our loss functions for the generator and discriminator. To verify that our proposed method is valid, we use relatively simple loss functions when training our network. For TSDNet G D and G D′, we use a simple l 1 loss between the predicted stroke masks and the ground-truth stroke mask, where a weight λ t balances the l 1 loss terms of G D and G D′. We set λ t = 10 in all our experiments, as most text strokes have already been detected by G D.
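A small sketch of such a two-term stroke-detection loss is given below. The way λ t enters (weighting the second, refined prediction) is our assumption of a form consistent with the text, not a reproduction of the paper's equation.

```python
import tensorflow as tf

def stroke_detection_loss(stroke_1, stroke_2, stroke_gt, lambda_t=10.0):
    """l1 losses of the two TSDNet outputs against the ground-truth stroke mask."""
    loss_1 = tf.reduce_mean(tf.abs(stroke_1 - stroke_gt))
    loss_2 = tf.reduce_mean(tf.abs(stroke_2 - stroke_gt))
    return loss_1 + lambda_t * loss_2
```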
For the scene text removal task, our main goal is to remove the text and preserve the non-text regions; therefore, more attention has to be paid to the masked area (indicated by M), and especially to the detected stroke area (indicated by M s /M s′). More precisely, we define corresponding weight matrices M w and M w′ for G R and G R′, respectively, built from the all-ones matrix 1 (of the same shape as M), the region mask M, and the detected stroke masks, so that stroke pixels receive the largest weights. The total loss of G R and G R′ is then defined as the weighted l 1 distance between the text-erased outputs and the ground-truth text-free image, where the weights enter through an element-wise product and λ r balances the two terms. In all our experiments, we set λ m = 5, λ s = 5, and λ r = 10.
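The sketch below illustrates one such weighted L1 term. The specific weight matrix (1 + λ m·M + λ s·M s) is our guess at a form consistent with the description above (extra emphasis on the masked region and, above all, on the detected stroke area); it is not a verbatim reproduction of the paper's equation.

```python
import tensorflow as tf

def weighted_l1_loss(output, target, region_mask, stroke_mask,
                     lambda_m=5.0, lambda_s=5.0):
    """Weighted L1 reconstruction loss for one TRGNet output.

    The weight matrix form below is an assumption for illustration only.
    """
    weights = 1.0 + lambda_m * region_mask + lambda_s * stroke_mask
    return tf.reduce_mean(weights * tf.abs(output - target))
```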
For the objective function of the patch-based GAN, we use the hinge version of the adversarial loss [29], [30]. The corresponding loss functions for the generator and discriminator are defined as hinge losses over the weighted patch outputs, where the patch weights are applied through an element-wise product with broadcasting along the depth dimension.
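For reference, the standard hinge losses have the form sketched below; the per-patch weighting via D M is omitted here for simplicity, so this is only a generic illustration rather than the paper's exact objective.

```python
import tensorflow as tf

def discriminator_hinge_loss(real_scores, fake_scores):
    """Hinge loss over the per-patch discriminator scores."""
    return (tf.reduce_mean(tf.nn.relu(1.0 - real_scores)) +
            tf.reduce_mean(tf.nn.relu(1.0 + fake_scores)))

def generator_hinge_loss(fake_scores):
    """Adversarial hinge loss for the generator."""
    return -tf.reduce_mean(fake_scores)
```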
To summarize, the total loss for our cascaded generator is the summation of Eqns. 3, 5, and 6. Moreover, we observed that the perceptual loss [4] and the style loss [31] bring no noticeable improvement for our task. One reason is that scene text is usually located in relatively flat areas. The total variation loss [32] has no apparent effect on the erased result either, and thus is not used in our method.
IV. EXPERIMENTAL RESULTS
To evaluate our proposed method quantitatively and qualitatively, we compare our method with recent state-of-the-art text removal methods and general image inpainting methods, on synthetic dataset and our collected real-world dataset.Ablation study is also conducted to evaluate different components of our network.
A. Dataset
To train the deep model for text removal task, paired text image and text-free image are required.However, it is difficult to obtain such paired data for real-world scene images, this is why synthetic datasets are used for constructing text removal dataset in previous approaches.Currently, there are only two synthetic datasets, i.e., the Oxford synthetic scene text detection dataset [33] and the SCUT synthetic text removal dataset [8].These two datasets adopted the same synthetic technology proposed by [33], and shared the same drawback, i.e., given a text-free image as background, a few or more images with synthesized text are obtained.For instance, the Oxford dataset synthesized 800,000 images using only 8,000 text-free images, which means that each 100 text images are synthesized using the same background image, leading to insufficiency of the diversity of background.Such kind of repetition would cause negative affect to the generalization ability of models.
In addition, synthetic data is only an approximation of real-world data.Current text synthesis technologies cannot generate realistic enough text images, which would restrain the text removal ability of models trained on synthetic data.When existing text removal methods are trained on synthetic dataset, and then tested on real-world data, we found that there often exists obvious text remnants and unsatisfactory artifacts, which is quantitatively analyzed in Section IV-E.To this end, we propose to construct a real-world dataset for the text removal scenario.
To construct such a dataset, we first collect 5,070 images with text from the ICDAR2017 MLT dataset [34], and 1,970 images captured from supermarkets, streets, etc. Then, post-processing is applied to obtain the corresponding text-free images, region masks, and text stroke masks. We manually remove the text from these collected images using the inpainting tools in Photoshop© to obtain text-free images as ground truth. The region masks are annotated using the VGG Image Annotator tool [35]. For the ground-truth stroke mask, we first compute the difference between paired text images and text-free images, and then turn it into a binary image. To enrich the diversity of our dataset, we also use a synthesis method and then manually select 4,000 images with high realism. In total, we obtain 11,040 images as the training set (Train rw). Several samples of our dataset are shown in Fig. 4. To construct the testing set (Test rw), we additionally collect 1,080 real-world images and apply the same post-processing as above to obtain the text-free images, region masks, and text stroke masks.
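A minimal sketch of the difference-and-binarize step for the stroke masks is given below (Python/OpenCV). The threshold value is an illustrative assumption; in practice it would be tuned, and small artifacts could be cleaned up with morphological operations.

```python
import cv2

def stroke_mask_from_pair(text_image_path, clean_image_path, threshold=25):
    """Derive a pixel-level stroke mask from a paired text / text-free image."""
    text = cv2.imread(text_image_path)
    clean = cv2.imread(clean_image_path)
    diff = cv2.absdiff(text, clean)                    # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return mask  # 255 on stroke pixels, 0 elsewhere
```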
In this work, we mainly conduct the experiments and comparisons on our real-world dataset.In the meanwhile, we also conduct experiments on a public synthetic dataset, i.e., the Oxford dataset [33], which is much larger than the SCUT dataset [8].For the Oxford dataset, we randomly select 75% of the whole dataset as training set (Train ox), and then randomly select 2,000 images from remaining set as testing set (Test ox).Note that, for these two datasets, i.e., our real-world (RW) dataset and Oxford dataset, there is no overlapping between training set and testing set.
B. Evaluation metrics
We evaluate the performance from two different aspects: (1) can the method remove text from an image completely; (2) can the text area be replaced with appropriate content. An accurate text detector is often used for the former evaluation metric. For a text-erased image, the more cleanly the text is erased, the less text will be detected. In this work, we use the state-of-the-art text detector CRAFT [17] and the DetEval protocol [36] for evaluation (i.e., recall, precision, and f-measure). For the second evaluation metric, we adopt general image inpainting metrics, mainly the following three indicators: 1) mean absolute error (MAE); 2) peak signal-to-noise ratio (PSNR); and 3) the structural similarity index (SSIM).
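As a rough illustration of how the second group of metrics can be computed, the sketch below uses scikit-image; the exact evaluation scripts used by the authors are not specified in the paper, and the parameter names follow recent scikit-image releases.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def inpainting_metrics(result, ground_truth):
    """MAE, PSNR, and SSIM between a text-erased result and its ground truth.

    Both inputs are uint8 arrays of identical shape (H, W, 3).
    """
    diff = result.astype(np.float64) - ground_truth.astype(np.float64)
    mae = np.mean(np.abs(diff))
    psnr = peak_signal_noise_ratio(ground_truth, result, data_range=255)
    ssim = structural_similarity(ground_truth, result,
                                 channel_axis=-1, data_range=255)
    return mae, psnr, ssim
```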
C. Implementation details
We implement our network using TensorFlow 1.13 and run experiments on an NVIDIA® TITAN RTX GPU. Input images are resized to 256 × 256. The Adam optimizer [37] with a minibatch size of 16 is used to train our network, and its β 1 and β 2 are set to 0.5 and 0.9, respectively. The initial learning rate is set to 0.0001. The model is trained for 10 epochs on our dataset and 6 epochs on the Oxford dataset.
D. Dataset comparison
Fig. 5 shows the comparison between the Oxford synthetic dataset and our real-world dataset using two different networks (MTRNet and our proposed network). Each group of images contains the input (left), the result of the model trained on the Oxford dataset (middle), and the result of the model trained on our dataset (right). It is obvious that the text in the right image of each group is better removed, especially for our method (the second row in Fig. 5), which has no noticeable remnant and better preserves the original image details, such as lighting effects. This indicates that our dataset is more suitable for scene text removal, even though the number of images in the Oxford dataset is much larger than ours.
E. Comparison with state-of-the-art methods
We quantitatively and qualitatively compare our method with state-of-the-art text removal methods: ST Eraser [7], EnsNet [8], and MTRNet [9], as well as recent image inpainting method: GatedConv [26].We use the official implementation of EnsNet and GatedConv, and re-implemented ST Eraser and MTRNet.
Table I reports the quantitative comparison of the above five methods on the Oxford dataset and our dataset. We can find that our method is superior to other methods in MAE, PSNR, and SSIM by a large margin. When training on Train rw and testing on Test ox, our method achieves the best performance, and this phenomenon also exists when training on Train ox and testing on Test rw. This observation further indicates that our network has better generalization capability. For the cross-dataset validation, the results of training on Train ox and testing on Test ox are relatively similar to those of training on Train rw and testing on Test ox (see Table I, columns 9-14). However, the performance of training on Train rw and testing on Test rw is obviously better than that of training on Train ox and testing on Test rw (see Table I, columns 3-8), e.g., the PSNR and SSIM of EnsNet are improved by 7.37 (from 26.41 to 33.78) and 8.13% (from 87.30% to 95.43%), respectively. These two results imply that our dataset is more suitable for the scene text removal task, especially for real-world applications. Our method has the best performance among the four scene text removal methods when evaluated by recall, precision, and f-measure. These three metrics are often lowest for GatedConv, because GatedConv first fully removes the text area and then fills the so-called missing region. Such processing can avoid incomplete text removal, bringing the lowest recall, precision, and f-measure, but it can also bring obvious over-smoothing and boundary inconsistency to the inpainted area, as shown in the following qualitative comparison. Fig. 6 shows the text-erased images of all five methods. Compared with other text removal methods, our method is more effective in erasing text and inpainting the text area with proper content. In the first row of Fig. 6, our result preserves texture details more consistent with the original non-text areas, whereas those of other methods have obvious text remnants or visual inconsistency. Comparing the results in the fifth row, our result has no text remnant and well maintains the original structure, i.e., the light transition. Furthermore, the text-erased images of our method show more reasonable texture details than those of GatedConv (comparing the fifth and sixth columns in Fig. 6). The reason is that GatedConv first replaces the masked area with blank images, which loses the useful detail information, so filling in the texture itself becomes relatively more difficult. These results indicate the reasonability of distinguishing the text stroke area from the non-text area in the masked region, which can guide the network to focus on the text stroke area and preserve the useful information of the original input image.
As analyzed earlier, using text region masks instead of text stroke masks is an important reason why existing scene text removal methods have limited performance.Inspired by this, our proposed novel and generic framework decouples the text removal problem into text stroke detection and stroke removal, and achieves superior performance.Following the pipeline of our "end-to-end" framework, a two-stage method, i.e., first extracting text strokes with a semantic segmentation algorithm and then filling these holes with image inpainting approach (e.g., GatedConv [26]), may be possible to handle this task.Fig. 7 compares the results of our method, GatedConv, and this two-stage method.In this experiment, we simply use the detected stroke mask via our TSDNet as the output of the first stage for the two-stage method.In the third and sixth row of Fig. 7, we find that the results of this two-stage method [(c) and (d)] both are visually unpleasant, whereas the results of our method [(a)] are consistently good.Noticeably, the final result of this two-stage method is sensitive to the output of the segmentation stage, i.e., the detected text stroke mask, when directly using the stroke mask provided by our TSDNet, there is plenty of text remnants [see Fig. 7(c)], and this phenomenon is improved by dilating the corresponding stroke mask [see Fig. 7(d)].The possible reason is that GatedConv relies so much on the surroundings of the masked area that a tiny noise would effect the final text removal results noticeably.On the contrary, our "end-to-end" framework only takes the detected stroke mask of our TSDNet as the middle information, and thus is robust to stroke mask.
F. Multi-lingual text removal and selective text removal
In this subsection, we also illustrate more results about multi-lingual text removal and selective text removal.We train MTRNet [9] and our method on our real-world dataset, and the corresponding results are reported in Fig. 8. Compared with MTRNet, our method can successfully remove the text in various languages.The reason is that our text stroke detection network focuses on the text, even learns the difference between various languages, and thus provides more useful information for the consecutive text removal generation network than the region mask.
In addition, our method can also accomplish the selective text removal.Given an auxiliary mask, where the desired removal text is indicating by a polygonal mask, our method can purposefully remove desired texts and does not affect the other text.
G. Ablation study
Next, we study the effect of different components of our network.The corresponding results are reported in Table II and Fig. 9.
1) Baseline: For baseline model, we use a single TRGNet G R (shown in Fig. 2) as the generator, and use the discriminator proposed in SN-PatchGAN [26] as the discriminator (i.e., D in Fig. 3).The inputs of TRGNet here are a text image I and a binary mask M. Comparing Table I and II, we find that such designed baseline model already shows similar performance with previous text removal methods, including EnsNet which does not use auxiliary mask and MTRNet which uses region mask.The visual results are also good as shown in the second column of Fig. 9.
2) Weighted-patch-based discriminator (WD): Original discriminator proposed in SN-PatchGAN treats all patches equally.In our work, masked regions indeed are the focus of our attention.We propose a weighted-patch-based discriminator, which can pay more attention to masked area via assigning higher weight.Comparing the rows of "Baseline" and "WD" in Table II it shows that our proposed discriminator can significantly improve the performance of the baseline model with the help of this weighted design.For example, the first row in Fig. 9 shows that our proposed weighted-patch-based discriminator can help to maintain the structure consistency of given image.
3) Text stroke detection network (TSDNet): A TSDNet is added into baseline model to prove the effectiveness of accurate text stroke extraction.As shown in Table II, baseline model with TSDNet obtains much higher PSNR, SSIM, and much lower MAE (compare the rows of "Baseline" and "TSDNet").Our proposed TSDNet can effectively distinguish whether the given area is text stroke or not.And this useful information can help TRGNet to remove masked areas more purposefully.When combining the WD and TSDNet (see "WD+TSDNet" in Table II), the performance can be further improved.Comparing the results in the second row of Fig. 9, it can be seen that our TSDNet can help to completely remove text from image (see the character "T").
4) Cascaded TSDNet and TRGNet (Cascade): Cascading TSDNet and TRGNet helps to fix minor mistakes and slight text remnants of the first unit of TSDNet and TRGNet, such as completing partially detected text strokes, removing text residuals, and fixing visual artifacts. We also experimented with three cascaded units, and the text-erased results become slightly blurry. A possible reason is that part of the high-frequency information is lost during cascading.
5) The effect of stroke detection: The text stroke detection is an important ingredient of our generic framework, here, we further discuss the effect of stroke detection performance on the final text removal.When inserting TSDNet into the Baseline, the performance is obviously improved, which validates our design of the TSDNet.Furthermore, the text stroke detection performance is enhanced via the cascaded design (tMAE of "Cascade" is significantly smaller than that of "TSDNet" in Table II), in the meanwhile, the text removal performance is better.This further illustrates that the improvement of stroke detection can enhance the final text removal result.
V. CONCLUSION
In this work, we proposed a novel GAN-based framework to solve scene text removal task via decoupling text stroke detection and stroke removal.We designed and implemented a text stroke detection network and a text removal generation network, and constructed the final model by cascading the group of above two networks.Quantitative and qualitative results illustrate the superior performance of our proposed network.Our study implies that it is beneficial to know the position of text strokes for the scene text removal problem.To the best of our knowledge, our study is the first to reveal the importance of accurate text strokes to text removal task.In the meanwhile, we also constructed a versatile real-world dataset, including text images, ground-truth text-free images, and auxiliary masks, which can be used to benchmark text removal methods.Moreover, our approach can be used for quickly constructing the large scale text-free image dataset from images with text, and pixel-wise text stroke annotations can be obtained as well (i.e., binarizing the difference between paired text image and text-free image).This kind of dataset will provide more and fine-grained supervised information to further improve the performance of scene text detection and recognition tasks.
Our method might generate implausible result if the text area is too large.We believe that a larger dataset with more diverse data can help to mitigate existing shortcomings.In the future, we plan to collect more real-world text images and construct a larger and richer dataset that can be used for both text removal task and other related research, e.g., realistic text synthesis.In this work, we use the text region masks in an "off-line" manner, considering not very perfect performance of the current automatic text detectors and the requirement of partial text removal applications.We would like to design a more "complete" framework combining automatic text detection, which supports the refinement of possible detection errors and the selection of specific text region with simple user guidance.It would also be interesting to study text removal problem in the semi-supervised manner.We plan to share our source code and real-world dataset to the research community.
Fig. 1 .
Fig. 1.Sampled results of the proposed scene text removal method.From left to right: (a) input image, (b) the region mask, (c) text stroke mask obtained by our TSDNet, and (d) the final result.Each row shows a representative result of commonly appeared scenarios.
Fig. 2 .
Fig. 2. The overall structure of the proposed generator, which consists of cascaded Text Stroke Detection and Text Removal Generation.⊕ indicates the concatenation of image, region mask and stroke mask.The convolutional kernel size of the first layer of G R and G R is 5 × 5, and the remaining kernel size is 3 × 3 in our proposed generator.
Fig. 3 .
Fig. 3. Architecture of our proposed weighted-patch-based discriminator.* means element-wise multiplication between two branches with broadcasting.The convolutional kernel size is 5 × 5.
Fig. 4 .
Fig. 4. Samples of our dataset.From left to right: images with text, text-free images, region masks, and text stroke masks.
Fig. 5 .
Fig. 5. Oxford dataset vs.Our dataset.We use different datasets to train the same method and compare the generalization ability of obtained models.The first row corresponds to the results of MTRNet and the second row corresponds to that of our method.Every three consecutive images (in row) form a group.For each group, from left to right: (a) the input image, (b) text removed result by model trained on the Oxford dataset, and (c) the text removed result by model trained on our dataset.
Fig. 6 .
Fig. 6.Qualitative comparison of all methods.The top three rows are synthetic images, and the rest are real-world images.From left to right: input image, ST Eraser, EnsNet, MTRNet, GatedConv, Our method, ground-truth, and input mask.
Fig. 7 .Fig. 8 .
Fig. 7. Comparison of our method with a two-stage method.For each group (including three rows), the first and second rows are the inputs of network, and the third row is the final result.From left to right: (a) our method; (b) GatedConv with rectangle mask; (c) GatedConv with stroke mask, which is detected by our TSDNet; (d) GatedConv with dilated stroke mask.
Fig. 9 .
Fig. 9. Qualitative results of ablation study.The last column gives the best result.
TABLE I
QUANTITATIVE COMPARISON OF OUR METHOD AND STATE-OF-THE-ART METHODS. ALL METHODS ARE TRAINED AND TESTED ON THE OXFORD DATASET AND OUR DATASET SEPARATELY. FOR PSNR AND SSIM (IN %), HIGHER IS BETTER; FOR MAE, R (RECALL), P (PRECISION), AND F (F-MEASURE), LOWER IS BETTER.
TABLE II
ABLATION STUDY. MODELS ARE TRAINED ON TRAIN RW AND TESTED ON TEST RW. TMAE IS THE MEAN ABSOLUTE ERROR BETWEEN THE DETECTED STROKE MASK AND THE GROUND-TRUTH STROKE MASK.
|
2020-11-20T02:01:01.370Z
|
2020-11-19T00:00:00.000
|
{
"year": 2020,
"sha1": "fbe46dd6019dd25f690b8af7cb122d5a168242ad",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41095-021-0242-8.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "fbe46dd6019dd25f690b8af7cb122d5a168242ad",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
227496991
|
pes2o/s2orc
|
v3-fos-license
|
The Involvement of Metals in Alzheimer’s Disease Through Epigenetic Mechanisms
Alzheimer's disease (AD) is the most frequent cause of dementia among neurodegenerative diseases. Two factors have been hypothesized to be involved in the pathogenesis of AD, namely the beta-amyloid cascade and tauopathy. At present, accumulating evidence suggests that epigenetics may be the missing link between genes and environmental factors, providing possible clues to understanding the etiology of AD. In this article, we focus on DNA methylation and histone modification involved in AD and on the contribution of heavy metals, as environmental factors, to AD, especially through epigenetic mechanisms. Integrating this information may help identify new potential targets for treatment.
INTRODUCTION
Neurodegenerative disorders are characterized by the progressive accumulation of misfolded proteins, which trigger damage to synapses, disturb pathway networks, facilitate the death of specific neuronal populations, and finally initiate disease. Several factors have been hypothesized to be associated with the etiology of these diseases, including genetic and environmental factors. Alzheimer's disease (AD) is the most common neurodegenerative disease, and the hallmarks of AD pathology are an accumulation of Aβ to form amyloid plaques and aggregation of phosphorylated tau to constitute neurofibrillary tangles (NFTs). Aβ is viewed as the cornerstone and trigger of the disease, inducing the dysfunction of synapses, loss of neurons, and ultimately dementia, together with the presence of Aβ plaques and NFTs (Morris et al., 2014). Hyperphosphorylation changes the conformation of tau, which is believed to play a role in synaptic plasticity, and facilitates its misfolding in the pathological process (Zhang et al., 2016). Besides, the apolipoprotein E (ApoE) gene shows a strong association with the risk for AD, since ApoE binds directly to Aβ to promote its aggregation and facilitates tau phosphorylation, inducing NFTs (Brecht et al., 2004).
Epigenetics is the study of heritable and reversible changes in gene expression, including DNA methylation, the multiple modifications of histones, and microRNAs (Collotta et al., 2013), which occur without a change in the DNA sequence. This article reviews DNA methylation and histone modifications to present the latest understanding of the role epigenetics plays in AD.
DNA Methylation
The first report of epigenetic changes in AD found hypomethylation of the amyloid precursor protein (APP) gene in an AD patient (West et al., 1995). In a pair of monozygotic twins, levels of DNA methylation were significantly decreased in temporal neocortex neuronal nuclei of the AD twin (Mastroeni et al., 2009). Besides, DNA methyltransferase (DNMT) was decreased in entorhinal cortex layer II of AD patients (Mastroeni et al., 2010). In a recent study, the patient group showed a 25% reduction of DNA methylation levels in the mitochondrial DNA D-loop region (Stoccoro et al., 2017), suggesting an underlying role of mitochondrial DNA methylation in AD. Hypomethylation of BRCA1 was observed in AD patients, consistent with the higher expression of its mRNA (Mano et al., 2017). By comparing the brains of mouse models and AD patients, hypermethylation of three genes, namely TBXA2R, SPTBN4, and SORBS3, was found to result in silencing of these genes in the AD process (Sanchez-Mut et al., 2013).
Histone Modification
In the temporal cortex and hippocampus, the twin with AD showed a significantly higher level of H3K9me3, a sign of gene silencing, and of H3S10 phosphorylation, a regulator of chromatin structure (Wang et al., 2013). The brains of AD patients showed hyperacetylation of histones H3 and H4 (Narayan et al., 2015). Histone deacetylation, catalyzed by histone deacetylases (HDACs), results in a condensed state of chromatin and consequent transcriptional repression. HDAC2 was increased under AD-related neurotoxic insults in vitro, in two mouse models, and in patients with AD, which decreased the histone acetylation of genes related to memory and inhibited their expression (Graff et al., 2012). Tau interacts with HDAC6 to decrease its activity; in this way, tau promotes the acetylation of related genes (Perez et al., 2009). As feedback and compensation, the expression of HDAC6 was significantly increased. In a mouse model of AD, decreasing HDAC6 facilitated the recovery of learning and memory by relieving the mitochondrial trafficking dysfunction caused by Aβ (Govindarajan et al., 2013). Importantly, treatment of an AD mouse model with valproic acid (VPA), one of the HDAC inhibitors widely used in clinical research, has shown encouraging results. VPA significantly decreased Aβ production by inhibiting γ-secretase cleavage of APP and alleviated the memory deficits of the AD mice (Qing et al., 2008).
Plumbum
Plumbum (lead, Pb) increases the concentration of free radicals, which leads to the death of neurons. Pb exposure stimulated serine/threonine phosphatases, impairing memory formation (Rahman et al., 2011). Pb exposure led to DNA methylation changes in whole blood cells (Hanna et al., 2012). Early exposure to Pb increased Aβ production in old age: in aged monkeys exposed to Pb as infants, the expression of APP and BACE1 was elevated, and the activity of DNMT was decreased (Wu et al., 2008). In rodents exposed to lead, the expression of APP was increased 20 months later, implying that lead exposure poses a life-long risk of AD (Basha et al., 2005).
In a mouse model of AD exposed to Pb, the levels of DNMT1, H3K9ac, and H3K4me2 decreased and the level of H3K27me3 increased, while the level of DNMT3a did not change. Besides, Pb exposure altered the production of tau (Dash et al., 2016). In mice expressing human APP, Pb stimulated the production of Aβ (Gu et al., 2011). Pb also disturbed the clearance of Aβ plaques by suppressing the activity of neprilysin (Huang et al., 2011). In primates exposed to Pb early in life, the brain showed overexpression of APP and Aβ through hypomethylation of the related genes during aging. Yegambaram also reported that early exposure to Pb led to overexpression of APP, BACE1, and PS1, one of their regulators (Yegambaram et al., 2015). Both findings suggest that early exposure to Pb plays a role in the development of AD during aging.
Arsenic
S-adenosyl-methionine (SAM) is essential for the methylation of inorganic arsenic for detoxification, and it is also the methyl donor required by DNA methyltransferases. So, it is reasonable to speculate that arsenic exposure leads to hypomethylation of DNA and facilitates tumor-related gene expression (Zhao et al., 1997). Insufficiency of SAM led to hypomethylation of the PS1 and BACE genes. This hypomethylation increased the expression of PS1 and BACE, which facilitated the production of Aβ (Fuso et al., 2005). Besides, arsenic inhibited the expression of the DNA methyltransferase genes DNMT1 and DNMT3a (Reichard et al., 2007). Sodium arsenite exposure inhibited the histone acetyltransferase p300, attenuating H3K27ac at enhancers in mouse embryonic fibroblast cells (Zhu et al., 2018). Su reported a dose-response relationship between the environmental concentration of total arsenic in topsoils and the prevalence and mortality of AD in European countries (Yegambaram et al., 2015).
The environmental toxin arsenite induced a marked increase in the phosphorylation of tau at several sites, in coincidence with findings in AD (Giasson et al., 2002). Gong argued that arsenic stimulates the generation of free radicals, which leads to oxidative stress and neuronal death (Gong and O'Bryant, 2010). When mothers were exposed to arsenic during pregnancy, their children showed a higher activation of inflammation-related pathways involved in the development of AD (Fry et al., 2007).
Aluminum
Aluminum was reported to induce neurofibrillary degeneration in neurons of higher mammals as early as the 1970s (Crapper et al., 1973). McLachlan reported a dose-effect association between the risk of AD and residual aluminum in municipal drinking water: the estimated relative risk of AD for residents whose drinking water contained more than 100 µg/L of Al was 1.7 (McLachlan et al., 1996). Walton (2014) reported that long-term intake of Al is an etiologic factor in AD. A 15-year follow-up conducted by Rondeau et al. (2009) also showed a significant association between a high daily intake of aluminum and an increased risk of dementia. Al could selectively interact with Aβ to facilitate the formation of fibrillar aggregates, whereas copper, iron, or zinc could not (Bolognin et al., 2011). In transgenic mice overexpressing human APP (Tg2576), dietary Al stimulated the expression and aggregation of Aβ by increasing oxidative stress (Pratico et al., 2002). In embryonic rat hippocampal neurons, a high concentration of Al facilitated the production of ROS induced by Fe (Xie et al., 1996). Al facilitated the processing of APP and the aggregation of Aβ (Kawahara et al., 1994). Besides, the structure of the non-Aβ component of AD amyloid was changed by Al so that it resisted degradation and formed plaques (Paik et al., 1997).
CONCLUSION
Gene mutations alone cannot account for most cases of neurodegenerative disease, suggesting that, besides genetic risk factors, environmental exposure is also involved in the etiology of AD, and these two factors may be bridged through epigenetic alterations. Recently, integrated multiomics analyses identified molecular pathways associated with AD and revealed the H3 modifications H3K27ac and H3K9ac as potential epigenetic drivers linked to transcription, chromatin, and disease pathways in AD (Nativio et al., 2020). These findings provide mechanistic insights into AD and support epigenetic regulation as a therapeutic strategy. Building on them, the relationship between AD and epigenetics should be explored further, and effective diagnosis, treatment, and prevention methods for AD, as well as new intervention measures, should be developed from the field of epigenetics.
AUTHOR CONTRIBUTIONS
MC wrote the manuscript. XZ helped to edit the manuscript. WH and JZ revised the manuscript. All authors contributed to the article and approved the submitted version.
Peer review of the pesticide risk assessment of the active substance Bacillus thuringiensis subsp. kurstaki strain SA‐11
Abstract The conclusions of the European Food Safety Authority (EFSA) following the peer review of the initial risk assessments carried out by the competent authorities of the rapporteur Member State, Denmark, and co‐rapporteur Member State, the Netherlands, for the pesticide active substance Bacillus thuringiensis subsp. kurstaki strain SA‐11 and the considerations as regards the inclusion of the substance in Annex IV of Regulation (EC) No 396/2005 are reported. The context of the peer review was that required by Commission Implementing Regulation (EU) No 844/2012, as amended by Commission Implementing Regulation (EU) No 2018/1659. The conclusions were reached on the basis of the evaluation of the representative uses of Bacillus thuringiensis subsp. kurstaki strain SA‐11 as an insecticide on pome fruits (field use), protected tomato (including permanent greenhouses and walk‐in tunnels) and turf (field use). The reliable end points, appropriate for use in regulatory risk assessment, are presented. Missing information identified as being required by the regulatory framework is listed. Concerns are identified.
Identity of the Microbial or Viral Agent used in plant protection / Active substance SA-11
Identification / detection: Btk SA-11 is characterized by morphological and biochemical characterization, serotyping, plasmid profiling, activity spectrum, fatty acid analysis, DNA fingerprinting (AFLP), and Cry toxin analysis. For unequivocal identification of strain SA-11, a chromosomal primer in combination with restriction by BveI provided a specific marker.
Origin and natural occurrence, background level: Btk as a species occurs naturally in a range of environmental compartments such as soils, plant surfaces and infected insects. Strain SA-11 was isolated from an infested insect. Background populations of Btk in the environment were found in the range of 10^4 to 10^8 CFU/g in soil and 0–10^4 CFU/g on plants in areas not previously treated with Bt.
Target organism(s): Lepidopteran pests (GAP: Cydia pomonella, Spodoptera littoralis).
Mode of action: The crystal proteins of B. thuringiensis must be ingested to be effective against the target insect. Upon ingestion of B. thuringiensis by the larvae, the crystalline inclusions dissolve in the larval midgut, releasing insecticidal crystal proteins. The activated Cry toxins interact with the midgut epithelium cells of susceptible insects. After binding to the midgut receptors, they insert into the apical membrane to create ion channels, or pores, disturbing the osmotic balance and permeability. This can result in colloid-osmotic lysis of the cells. Spore germination and proliferation of vegetative cells into the haemocoel may result in septicaemia, contributing to mortality of the insect larvae.
Host specificity: It is generally agreed that Btk acts highly specifically against members of the insect order Lepidoptera. Some strains are also active against Diptera or Coleoptera. The activity spectrum of a given strain is defined by the Cry toxins it produces. Btk SA-11 was shown to be active against lepidopteran species only.
Life cycle: Bacillus thuringiensis is a ubiquitous micro-organism that colonizes a range of habitats and environments and can be found in two different stages. Under favourable conditions of moisture, temperature and nutrients, the basic metabolizing cell type is the vegetative cell, which is actively growing and dividing. When a population of vegetative cells passes out of the exponential phase of growth, usually as a result of nutrient depletion, the differentiation of endospores begins. Endospores are formed intracellularly and are liberated after lysis of the parent cells. The transformation of dormant spores into vegetative cells can be described in three stages: (i) activation, a reversible process that prepares the spore for germination and usually results from treatments such as heating or exposure to certain chemical stimuli; (ii) germination, the breaking of the spore stage, which involves swelling, rupture of the spore coat, loss of resistance to deleterious environmental factors and an increase of metabolic activity; (iii) outgrowth, the development of a vegetative cell emerging from the spore coat.
Infectivity, dispersal and colonisation ability: Spores are the form of Bt that ensures survival. They can survive in soil for months, and it has been shown that cells and spores of Bt can also survive for 10 days in water without their numbers changing. Applied as a spray on above-ground leaves and fruits, endospores are rapidly inactivated and endotoxins are rapidly degraded when exposed to UV radiation. Neither cells nor spores of Bt are motile, so their dispersal is limited. It is generally agreed that Bt is a poor competitor and does not germinate and grow extensively in the environment. Except for target insects, Btk SA-11 is not expected to colonize any non-target organism and is not infective in humans.
Relationships to known plant, animal or human pathogens: As a member of the B. cereus group, Btk is closely related to B. anthracis and B. cereus.
Btk strains are however distinguishable from B. cereus and B. anthracis.
Genetic stability:
Culture maintenance programs ensure that only genetically unchanged and pure cultures of Btk SA-11 are used for manufacturing the strain and the end-use product. After field or greenhouse application, genetic exchange is unlikely to occur and will not lead to any adverse effects. From the literature search for Btk SA-11 it can be concluded that transfer of genetic material cannot be completely ruled out when the strain is used as a pest control agent in agricultural settings, but the likelihood is rather low because the event requires germination and growth of the applied SA-11 spores at a high level and the presence of competent recipient vegetative cells at a high level. Even under these conditions, rates of genetic exchange were shown to be extremely low. In addition, Btk SA-11 is a wild-type strain, does not have the capacity to produce any compounds other than those of indigenous Btk already present in the environment, and is not multiresistant. Hence, in the unlikely case that genetic material were transferred from SA-11 to indigenous bacteria, there is no risk that any unwanted properties would spread in the environment.
Information on the production of relevant metabolites (especially toxins): Btk SA-11 produces Cry1A and Cry2A insecticidal proteins and two Cry-like proteins. Apart from the Cry proteins, several other insecticidal proteins are produced by Bt (vegetative insecticidal proteins VIP, cytolytic proteins Cyt, etc.). Absence of toxicity to humans and mammals of all metabolites involved in the mode of action was confirmed by a literature search. Beta-exotoxins are considered to have toxic properties but were shown not to be produced by commercial Btk strains.
Btk SA-11 has the potential to form a non-haemolytic (Nhe) and a haemolytic (Hbl) enterotoxin complex. The ability to produce B. cereus enterotoxins and the possible consequences for consumers have been discussed since the first evaluation of the strain. However, based on available knowledge on Btk, including Btk SA-11, there is no indication that the strain has the ability to cause foodborne disease, as it does not fulfil all prerequisites required for pathogenic action in humans.
Resistance/ sensitivity to antibiotics / anti-microbial agents used in human or veterinary medicine: Btk SA-11 has been shown to be sensitive to a broad range of antibiotics commonly used in human and veterinary medicine.
The strain is not multi-resistant.
Footnotes to the GAP table: (l) PHI – minimum pre-harvest interval; (g) method, e.g. high-volume spraying, low-volume spraying, spreading, dusting; (m) remarks may include extent of use/economic importance/restrictions; * based on minimum and maximum CFU content in Delfin WG; # 850 g/kg or 32,000 IU/mg (min. 8.5 × 10^12 CFU/kg, max. 6.4 × 10^13 CFU/kg).
Further information on efficacy: Effectiveness (Regulation (EU) N° 284/2013, Annex Part A, point 6.2)
According to the latest guidance on the preparation of dossiers for the renewal of active substances, information on efficacy is not required (SANCO/10181/2013 rev. 2.1, 13 May 2013). The representative products have all been authorised at Member State level for more than 10 years and have therefore been assessed in line with the Uniform Principles. The GAP for the representative uses is realistic.
Adverse effects on field crops (Regulation (EU) N° 284/2013, Annex Part A, point 6.4)
The representative products have all been authorised at Member State level for > 10 years and have therefore been assessed in line with Uniform Principles. No unacceptable adverse effects are known.
Observations on other undesirable or unintended side-effects (Regulation (EU) N° 284/2013, Annex Part A, point 6.5)
The representative products have all been authorised at Member State level for > 10 years and have therefore been assessed in line with Uniform Principles. No unacceptable side effects are known.
Classification and proposed labelling (Symbol, Indication of danger, Risk phrases, Safety phrases)
with regard to physical/chemical data: Keep away from food, drink and animal feeding stuffs.
with regard to fate and behaviour: Not required
with regard to ecotoxicological data: Not required
There are no confirmed case reports linking agricultural use of plant protection products based on Btk strains with human disease, although Btk products have been used worldwide for more than sixty years. No incidents of adverse health effects such as toxicological effects, allergic response, or irritation in employees, resulting from exposure to B. thuringiensis subsp. kurstaki SA-11 during development, manufacture, preparation or field application of the product, have been reported.
Sensitisation:
Methods of analysis
In a 3-year follow-up study on sensitisation and health effects of exposure to microbiological control agents used in Danish greenhouses, including Bacillus thuringiensis subsp. kurstaki, increased IgE levels were observed in 53% of the blood samples (the measurement was only qualitative; a positive IgE was defined as exceeding the detection limit of 0.025 OD units). Bacillus thuringiensis subsp. kurstaki strain SA-11 should be regarded as a potential sensitiser, and the following warning phrase is proposed: "Bacillus thuringiensis subsp. kurstaki strain SA-11. Micro-organisms may have the potential to provoke sensitising reactions."
Acute oral infectivity, toxicity and pathogenicity: Delfin WG (containing Bacillus thuringiensis subsp. kurstaki strain SA-11; by laboratory: 9.1 × 10^10 CFU/g): rabbit LD50 > 20 mg/kg bw (3.5 × 10^9 CFU/site).
Specific toxicity, pathogenicity and infectivity (MA 5.3): Dermal toxicity study with 4 mL Javelin WG SAN 415 WG 354 (corresponding to 40 mg test item): negative.
Eye irritation: Delfin WG/Javelin WG revealed conjunctival irritation that was completely reversible within 14 days. The hazard statement H319 (Causes serious eye irritation) is required.
Genotoxicity – in vivo studies in germ cells (MA 5.5): No studies conducted with Btk.
AOEL:
As no exposure models exist for microbials, setting an AOEL would be of low relevance to the risk assessment. The recommended use of RPE for both operators and workers is considered to cover the potential risk after repeated exposure by inhalation.
ADI: The threshold of 10^5 CFU/g food is applicable to cover the risk of food-borne poisonings caused by the B. cereus group of micro-organisms.
ARfD: The threshold of 10^5 CFU/g food is applicable to cover the risk of food-borne poisonings caused by the B. cereus group of micro-organisms. Based on uncertainties in the toxicology section related to potential production of enterotoxin in the human gut, a threshold level of 10^5 CFU/g at harvest was proposed. Therefore, quantification of viable counts linked to specific PHIs is requested (data gap).
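To make the role of the 10^5 CFU/g threshold concrete, the short Python sketch below compares hypothetical viable-count measurements at different pre-harvest intervals (PHIs) against that threshold. The threshold value is taken from the text above; the PHIs and measured counts are invented placeholders, not data from the dossier.

```python
# Hedged illustration: compare hypothetical viable counts at harvest against the
# 10^5 CFU/g threshold used for the B. cereus group risk assessment.
THRESHOLD_CFU_PER_G = 1e5  # threshold cited in the assessment

# Hypothetical residue measurements (CFU/g of food) at different PHIs (days);
# these numbers are placeholders, not measured data.
viable_counts = {0: 4.2e6, 3: 8.9e5, 7: 6.3e4, 14: 1.1e4}

for phi_days, cfu_per_g in sorted(viable_counts.items()):
    status = "above threshold" if cfu_per_g > THRESHOLD_CFU_PER_G else "below threshold"
    print(f"PHI {phi_days:>2} d: {cfu_per_g:.1e} CFU/g -> {status}")
```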
Non-viable residues:
Not relevant for dietary exposure (see Section 2).
Persistence and multiplication (competitiveness) in soil, water and air (Btk SA-11):
Soil: Bacillus thuringiensis, including Btk SA-11, occurs naturally and ubiquitously in the environment. It is a common component of the soil microbiota and has been isolated from most terrestrial habitats. Available information indicates that Bacillus thuringiensis spores may persist from days to years in soil under natural field conditions. The low potential for spore germination, growth and re-sporulation in bulk soils minimises multiplication. Germination in the rhizosphere may occur.
Water: Information on Btk SA-11 in water was not available. Bacillus thuringiensis, including Btk, is an inhabitant of aquatic environments. There is a data gap for information on proliferation in natural surface water systems.
Air: Re-aerosolisation of applied spores is possible, but spores rapidly lose viability following release into air. Fate and transport via air after application is unlikely to play a role in environmental exposure to B. thuringiensis subsp. kurstaki, including Btk SA-11, spores and endotoxins.
Predicted environmental concentrations of Cry proteins in the water body (surface water and sediment) – pome fruit (worst-case exposure; FOCUS surface water)
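As a rough illustration of how a surface-water exposure estimate of the kind referenced above can be derived, the sketch below computes a simple drift-based predicted environmental concentration (PEC) for a generic water body. It follows the common simplification of depositing a fixed drift percentage of the application rate onto a 30 cm deep ditch; the application rate and drift percentage are hypothetical placeholders and do not come from the dossier, and a real FOCUS calculation involves additional entry routes and scenarios.

```python
# Hedged sketch of a simple drift-only surface-water PEC calculation.
# Assumptions (placeholders, not values from the assessment):
application_rate_g_per_ha = 500.0   # hypothetical applied amount of Cry protein, g/ha
drift_percent = 2.77                # hypothetical spray-drift deposition, % of applied rate
water_depth_m = 0.3                 # ditch depth often assumed in screening steps

# Water volume under 1 ha of surface: 10,000 m^2 * depth, converted to litres.
water_volume_L = 10_000 * water_depth_m * 1_000

deposited_g_per_ha = application_rate_g_per_ha * drift_percent / 100.0
pec_sw_ug_per_L = deposited_g_per_ha / water_volume_L * 1e6  # g -> µg

print(f"Drift deposition: {deposited_g_per_ha:.2f} g/ha")
print(f"PEC_sw (screening, drift only): {pec_sw_ug_per_L:.2f} µg/L")
```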
The Current Recommended Drugs and Strategies for the Treatment of Coronavirus Disease (COVID-19)
Background: Coronavirus disease 2019 (COVID-19) has been declared a pandemic by the World Health Organization (WHO). The drugs currently used for the treatment of COVID-19 are often selected and tested based on their effectiveness in other diseases such as influenza and AIDS, and their major identified targets are the viral protease, host-cell-produced proteases, the viral RNA polymerase, and the interaction site of the viral protein with host cell receptors. Until now, there are no approved therapeutic drugs for the definitive treatment of this dangerous disease. Methods: In this article, documentary information such as clinical trials, original research and reviews, government databases, and treatment guidelines was reviewed critically and comprehensively. Moreover, the most common and effective drugs and strategies are presented in order to suggest possible treatment approaches for COVID-19, focusing on the body's defense mechanisms against pathogens. Results: Antiviral drugs and immune-modulatory agents, together with traditional medicines using natural compounds, are the usual accessible treatments. They may offer benefits owing to the large number of existing studies, long-term follow-up, proximity to the natural system, and the normal physiological routine of pathogen–host interactions. Besides, the serotonergic and dopaminergic pathways are considered attractive targets for treating human immune, infectious, and cancerous diseases. Fluoxetine, as a host-targeted small molecule with immunomodulatory action, may be an effective drug for the treatment and prevention of COVID-19, in combination with antiviral drugs and natural compounds. Conclusion: Co-administration of fluoxetine in the treatment of COVID-19 could be considered because of the possibility of its interaction with ACE2 receptors, its immune-modulatory function, and a proper immune response at the right time. Fluoxetine may also play a beneficial role in reducing stress due to fear of being infected by COVID-19 or of the disease worsening, and in providing psychological support for the affected patients.
Introduction
In January 2020, the World Health Organization (WHO) identified a new coronavirus disease that had emerged in Wuhan, Hubei Province. This disease, caused by the novel SARS-CoV-2, has been named the novel coronavirus disease, 2019-nCoV or COVID-19. 1 The WHO characterized COVID-19 as a pandemic, and it has already spread to many countries, resulting in numerous infections and deaths worldwide. 2 The COVID-19 epidemic is unique because of its high spread, incidence, and mortality rates all around the world. This virus is the third highly pathogenic coronavirus to infect human beings in the 20th and 21st centuries. 3,4 In this regard, several candidate treatments have been listed in reported clinical trials, research works, and government databases, such as remdesivir, favipiravir, lopinavir, ritonavir, oseltamivir, methylprednisolone, bevacizumab, human immunoglobulin, interferons, chloroquine, hydroxychloroquine, arbidol, etc. Besides, alternative medicine such as traditional Chinese medicine (TCM), or combinations of some of these options, has been used in the attempt to achieve successful treatment. 5 Even after several types of research on COVID-19, there are still no approved vaccines or therapeutic drugs for its treatment. So, there is a need to develop effective agents and vaccines for successful treatment and for the prevention of future epidemics of this dangerous and fatal disease. Comprehensive and intensive clinical trials using traditional Chinese medicine and Western medicine are ongoing in China; however, due to the low quality, small sample size, and long duration of these studies, no definitive treatment of COVID-19 is expected for a long time. 6 Moreover, for long-term clinical use, ethical approval by the WHO is required, and safety and high-quality clinical trial data are also needed. 7 The big family of coronaviruses is diverse in both phenotypic and genetic aspects. They are enveloped viruses containing single-stranded positive-sense RNA with the ability to cause infection in birds, mammals, and humans. The genome of the virus, which is 27–32 kb, encodes the structural and non-structural proteins. The membrane (M), envelope (E), nucleocapsid (N), and spike (S) proteins are the structural proteins, which play a key role in virus entry and replication in the host cell (Figures 1 and 2). 8,9 Human coronaviruses gain access to host cells via specific receptors present on the host cells. It has been indicated that the pathogenesis of a coronavirus strongly depends on the interaction of the S protein with its receptor. Several studies have shown that blocking the S1 subunit and inhibiting proteases can prevent SARS-CoV-2 entry into the target cell, since host receptor binding is mediated by the N-terminal domain (NTD) of the spike protein S1 subunit. Also, some researchers have suggested that the S1 subunit and host proteases are potential therapeutic targets for the treatment of COVID-19 (Figure 3). 10 In this regard, it has been shown that zoonotic β-coronaviruses have a high binding affinity towards angiotensin-converting enzyme 2 (ACE2), which serves as an operative receptor for viral entry into the host cell. ACE2 has also been reported as a cellular receptor used by other β-coronaviruses, such as SARS-CoV and certain bat coronaviruses. Accordingly, a high affinity between the ACE2 receptor and the S1 domain of the spike protein has been shown. 11
Due to the many human losses caused by this virus in a short time worldwide, many doctors and scientists are trying to find ways to prevent and fight this disease. In a short time, many studies and reports have been published, so one may face a great deal of scattered information. Moreover, there are some review studies that introduce drugs or methods and, more or less, their usage; but there is still no review of the definite and possible benefits and challenges of these drugs and methods. Therefore, in this review article, as far as the documentary information is concerned, we have tried to present the methods and drugs that have been proposed and used for the prevention and treatment of COVID-19. This comprehensive review discusses the mechanisms of action, benefits, and challenges of each drug or method. In this article, based on the available studies, a possible drug and treatment strategy that may be considered in clinical trials is proposed.
Search Methodology
In this article, documentary information such as clinical trials, original research, and reviews was obtained from international databases including Google Scholar, PubMed, Scopus, Web of Science, and Science Direct, searched with the following MeSH terms: COVID-19 combined with each of the following words: clinical trial, drug, treatment, therapy, alternative medicine, nano-medicine, antiviral, anti-inflammatory, stem cell, plasma, and TCM. Moreover, government databases and treatment guidelines were also reviewed critically and comprehensively. Correspondingly, the most common and effective drugs and strategies are presented, in order to suggest possible treatment approaches for COVID-19, focusing on the body's defense mechanisms against pathogens.
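A minimal sketch of how such a keyword combination can be turned into database query strings is given below; the boolean query format shown is a generic illustration and is not taken from the article, and real searches in each database would use that database's own field tags and syntax.

```python
# Hedged illustration: build simple boolean query strings from the search terms
# listed above. The query format is generic; actual database syntax differs.
base_term = "COVID-19"
keywords = [
    "clinical trial", "drug", "treatment", "therapy", "alternative medicine",
    "nano-medicine", "antiviral", "anti-inflammatory", "stem cell", "plasma", "TCM",
]

queries = [f'"{base_term}" AND "{kw}"' for kw in keywords]
for q in queries:
    print(q)
```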
Antiviral Drugs and Anti-Inflammatory Agents
Based on several research studies and clinical trials in different countries, lopinavir/ritonavir (LPV/r), which has shown in vitro and in vivo protease-inhibitory effects on SARS-CoV-2, can be recommended from the early stage of COVID-19. 12 However, more evidence from well-controlled clinical trials is needed to demonstrate the clinical efficacy of LPV/r. 13,14 The data obtained from the clinical trials registered for COVID-19 up to March 7, 2020, show that anti-viral and anti-inflammatory treatments are of great interest to investigators. In this regard, the first anti-viral clinical trial with LPV/r has already reached the targeted sample size of 160 cases. The results of a systematic review of lopinavir therapy for the SARS and MERS coronaviruses showed that lopinavir may be considered a potential treatment option for COVID-19. 19 Also, remdesivir, an adenosine analog, has been used against this virus, and the related clinical trials are ongoing. Remdesivir terminates viral replication when incorporated into nascent viral RNA chains. 20 The National Institute for Infectious Diseases "L. Spallanzani", IRCCS, in Italy recommended using LPV/r, darunavir, or tocilizumab, alternatively, depending on the symptoms and severity of the disease.
Besides the above-mentioned drugs, oseltamivir has been suggested for the treatment of children with suspected COVID-19, based on an algorithm that followed the standard diagnosis and treatment strategies for pediatric viral infections. 21 However, supportive therapies such as prompt availability of O2 should be provided in case of necessity. 22 Another antiviral drug is favipiravir, which targets the viral RNA polymerase; it thus differs from many influenza antivirals that target proteins on the surface of the virus, such as Roche's Tamiflu (oseltamivir phosphate). Since the SARS-CoV-2 RNA polymerase is sufficiently similar to the influenza polymerase, this makes favipiravir a potential drug for COVID-19. 23,24 Favipiravir, also known as T-705, Avigan, or favilavir, is an antiviral drug developed by Toyama Chemical of Japan with activity against many RNA viruses. In a clinical trial, it was reported that this drug led to faster viral clearance than in the LPV/r group, with a median of 4 days versus 11 days, respectively, as well as an improvement in chest imaging. However, further clinical studies with a larger sample size of participants are needed. 25
Figure 3. Blocking of the S1 subunit and protease inhibition prevent SARS-CoV-2 entry into the target cell; therefore, the S1 subunit and host proteases are potential therapeutic avenues for the treatment of COVID-19. (Reprinted from Archives of Medical Research, Vol 66, Arafah A, Ali S, Yatoo AM, Ali MN, Rehman MU, "S1 subunit and host proteases as potential therapeutic avenues for the treatment of COVID-19", in press, Copyright 2020, with permission from Elsevier.) 10
Deng et al 14 studied patients with COVID-19 and reported satisfactory clinical results with arbidol combined with LPV/r compared with LPV/r alone. In another study, Wang et al 26 evaluated the antiviral efficiency of five FDA-approved drugs, including ribavirin, penciclovir, nitazoxanide, nafamostat, and chloroquine, as well as two well-known broad-spectrum antiviral drugs, remdesivir and favipiravir, on a clinical isolate of SARS-CoV-2. Favipiravir showed a significant effect in vitro, and the authors suggested performing more in vivo studies to further evaluate its antiviral properties. Also, other researchers conducted randomized clinical trials among moderate COVID-19 patients who were previously untreated with antivirals. They reported that favipiravir could be considered a preferred treatment because of the higher clinical recovery rate at day 7 and a more effective reduction in the incidence of fever and cough, with manageable antiviral-associated adverse effects. Moreover, this drug acted better than arbidol, the drug recommended in the Chinese diagnosis and treatment protocol for novel coronavirus pneumonia. 27 Also, there is pre-clinical evidence of effectiveness resulting from the use of chloroquine in patients with COVID-19. Chloroquine and hydroxychloroquine have shown anti-SARS-CoV-2 activity in vitro and have been used in clinical trial research. 26 The in vitro data suggest that chloroquine has activity against many different viruses and inhibits SARS-CoV-2 replication. It is also hypothesized that chloroquine prevents SARS-CoV-2 binding to target cells because it interferes with a virus-cell surface receptor, namely glycosylation of the angiotensin-converting enzyme 2 (ACE2) receptor. 28 In addition, chloroquine, a broadly used antimalarial drug, has been proposed several times, without success, for the treatment of acute viral diseases in humans. 29 However, poisoning from toxic doses of chloroquine has been reported and is associated with cardiovascular disorders. 30 Another compound, thymoquinone, has anti-sepsis and immunomodulatory activities at specific doses. Thymoquinone has been shown to downregulate inflammatory cytokines, reduce NO levels, and improve organ function and survival in an animal model of sepsis. It has been demonstrated that thymoquinone, an emerging natural drug and the main constituent of Nigella sativa, is an anti-inflammatory, antioxidant, anti-tumor, and antimicrobial agent. 31 In the anti-inflammatory or immunomodulation approach, agents such as corticosteroids, immunoglobulins, or anti-interleukin-6 are being used. Most of the deaths from this infection result from dysfunction or failure of the lung or multiple organs, which is related to the host's immune dysfunction or related disorders. Huang et al, 32 using immune profiling analysis with single-cell resolution, demonstrated that the blood single-cell immune profile reveals that the interferon-MAPK pathway mediates the adaptive immune response in COVID-19. The interferon-MAPK pathway is known as the major defense mechanism in the COVID-19 immune response. Moreover, they noted that SARS-CoV-2 can cause a blood immune reaction after infecting the respiratory system; that is, it enters the blood through the circulatory system in patients with COVID-19, and its damage can involve multiple organs. 33
The expression of key transcription factors such as FOS, JUN, and JUNB, through downstream activation of MAPK, causes the production of several effectors such as IFI27, IFITM1, and IFITM3 to fight the virus. Therefore, after the infection, a wide range of antiviral responses appears in the blood, while the MAPK signal is inhibited under normal conditions during the patient's recovery. Accordingly, these results showed that immune deficiency or an imbalanced adaptive immune response may worsen the condition of COVID-19 patients. [33][34][35] So, efforts to maintain balanced and optimal immunity focusing on this pathway can help in the prevention, management, and treatment of this disease, as well as providing important information for drug development for COVID-19.
In agreement with the role of the interferon-MAPK pathway in the COVID-19 immune response, other researchers have reported that IFN-α, usually used to treat hepatitis, inhibits SARS-CoV reproduction in vitro. Also, in the fifth edition of the guidelines, it has been recommended that the specific method for the administration of IFN-α is vapour inhalation combined with antiviral therapy including LPV/r and ribavirin for the treatment of COVID-19. Other researchers have suggested that the regulation of interferon production can be considered a potential strategy for COVID-19 treatment. 25,34 Furthermore, another clinical trial approach applied to COVID-19 treatment has been the use of cytokine-directed antagonists such as adalimumab (against TNF-α) and CMAB806 (against IL-6). In addition, upstream regulation of cytokine production could be regarded as a promising strategy for the treatment of COVID-19. For example, approved drugs such as suramin and anaplastic lymphoma kinase inhibitors are worthy of clinical trials. 34
A valuable model correlating the Pathogen Infection Recovery Probability (PIRP) with Pro-inflammatory Anti-Pathogen Species (PIAPS) levels within a host unit was proposed by Shajing Sun. 35 Correspondingly, this researcher also reported that the maximum PIRP was exhibited when the PIAPS levels were either equal to or around the PIAPS equilibrium levels at the onset of pathogen elimination or clearance. For a successful treatment, it is very important to improve the host's PIRP and even more important to prevent the cytokine storm. So, such treatments are needed only for those hosts who have deficient or very weak anti-pathogen immune responses and should be administered within the primary stage of infection in a controlled manner. Immune-modulatory therapy, with or without the combination of antiviral agents, may improve the outcome because the homeostasis of the immune system plays a key role in the development of COVID-19 pneumonia. 36,37 In a multi-center study of 416 cases of COVID-19 with a definite outcome from 14 hospitals in Hubei province, the clinical characteristics and treatment regimens up to Feb 17, 2020, were extracted. The data showed that 91% (380/416) of the patients were given anti-viral therapy, while the rates of corticosteroid therapy, γ-globulin, and invasive ventilation were 84%, 67%, and 24%, respectively, among the deaths, and hospitalization was prolonged for survivors who used corticosteroids. Corticosteroid therapy, γ-globulin, and invasive ventilation were the most frequently used in the death group. 38
Monoclonal Antibodies
A major class of biotherapeutic methods and antiviral approaches is passive immunotherapy using monoclonal antibodies, whose therapeutic potential in the treatment of many diseases has been well recognized. Until now, there are no approved therapeutic drugs to treat coronavirus infection, so it is necessary to develop effective agents for successful therapy and to prevent future deaths caused by this dangerous virus. On the other hand, several genetic data, clinical signs, and epidemiological features of COVID-19 resemble those of SARS-CoV infection. Therefore, previous advanced research on SARS-CoV treatment could help scientists develop effective therapeutic strategies and novel drugs to treat this infection. 39 Tian et al 40 reported that CR3022, a SARS coronavirus-specific human monoclonal antibody, could be developed as a potential candidate for the treatment of SARS-CoV-2 infection, alone or in combination with other neutralizing antibodies. Moreover, CR3022 can effectively bind to the receptor-binding domain (RBD) of SARS-CoV-2, and its epitope does not overlap with the angiotensin-converting enzyme 2 (ACE2) binding site within the 2019-nCoV RBD. In agreement with this, other researchers have stated that other monoclonal antibodies neutralizing SARS-CoV, such as m396 and CR3014, which target the ACE2 binding site of SARS-CoV, could be considered as alternatives for the treatment of COVID-19. 8,41 Since alterations in the RBD between SARS-CoV and 2019-nCoV have a critical effect on the cross-reactivity of neutralizing antibodies, the development of new monoclonal antibodies that bind specifically to the 2019-nCoV RBD is needed.
Alternative Medicine
Herbal medicine, which was the only treatment available before the introduction of antibiotics, has been found to be effective in reducing infectious disorders. Nowadays, the use of natural products with anti-viral properties is increasingly growing among many populations worldwide, providing a rich resource for the production of novel antiviral drugs. 42 Several results obtained from research works and clinical trials have shown that traditional Chinese medicine (TCM) plays a significant role in the prevention, control, and treatment of COVID-19. 43 Interestingly, one-third of the registered clinical trials on COVID-19 patients used TCM therapy, usually in combination with modern medicine. A meta-analysis of TCM in the treatment of SARS suggested that this method could be effective in relieving fever, reducing the required corticosteroid dosage, and reducing pulmonary infiltration. Besides, the combination of TCM with modern medicine appears promising for treating diseases caused by influenza and coronaviruses. 44,45 In this regard, several doctors and researchers have suggested that chemical compounds useful against COVID-19 could be found in natural products like tea. Dr. Li Wenliang had documented some case files on the use of tea by patients who suffered from COVID-19. Previously performed studies show that tea is an interesting natural substance possessing anti-viral activities in plants, animals, and human beings. 46 Since tea is one of the most common and popular drinks worldwide and fits most nutritional habits, tea itself or its useful extracts could be used to help improve this disease. 48 In recent decades, the green tea catechins (GTCs) have been reported to provide various health benefits for numerous diseases. They are polyphenolic compounds obtained from the leaves of Camellia sinensis and have antiviral effects against diverse viruses. 49 Besides, an investigation of the anti-viral effect of theaflavins in tea showed that theaflavins exert inhibitory effects on infection and multiplication of the virus by binding to viral nucleic acids. [50][51][52] By quantitative RT-PCR of influenza virus-specific mRNA, researchers reported that GTCs can affect the transcription of viral genes in infected cells. They also suggested that GTCs inhibit viral attachment to host cells. Accordingly, the primary target of GTCs is the membrane and the physical integrity of the virus particles at the host membrane. Besides, GTCs act on the acidification of intracellular endosomal compartments, which is required for the fusion of viral and cellular membranes. 53 Further studies also showed that the 3-galloyl group of the GTC skeleton plays an important role in the antiviral activity, while the 5-OH of the trihydroxybenzyl moiety at the 2-position plays a minor role. 52 Runfeng et al 54 reported that Lianhuaqingwen (LH), a TCM formula, significantly inhibits SARS-CoV-2 replication in vitro, affects the virus morphology, and reduces the production of pro-inflammatory cytokines at the mRNA level. In a virtual screening study, some compounds were predicted to bind to the binding pocket of the 3CL protease (3CLpro).
3CLpro plays an important role in the replication of coronaviruses and can therefore be considered a potential target for the development of a drug against COVID-19. 55
Stem Cell Therapy and Plasma Containing Antibodies
Mesenchymal stem cells (MSCs) have shown a powerful immunomodulatory function. Researchers found that MSC transplantation improves clinical outcomes as well as markers of inflammatory and immune function. Moreover, they reported that transplantation of ACE2-negative MSCs improves the outcome of patients with COVID-19 pneumonia, and that intravenous transplantation of MSCs was safe and effective, especially for patients in critically severe condition. 56 Chen et al 57 conducted a clinical study of mesenchymal stem cell (MSC) therapy in patients with epidemic influenza A (H7N9) infection who had acute respiratory distress syndrome (ARDS), and they suggested that this protocol may be effective in the treatment of COVID-19. They found that mortality was significantly lower in the experimental group compared with the control group. During the five-year follow-up period, MSC transplantation had no harmful effects on the treated patients. Based on the similar complications (eg, ARDS and lung failure) of H7N9 and 2019-nCoV, as well as the corresponding multi-organ dysfunction, they suggested that MSC-based therapy could be considered a possible alternative for treating COVID-19. A group of scientists suggested that expanded umbilical cord MSCs (UC-MSCs) could be used compassionately as a therapeutic strategy in managing critically ill COVID-19 patients to reduce morbidity and mortality. Based on preclinical and preliminary clinical data, they reported that UC-MSCs, through their anti-inflammatory and immunomodulatory actions, could heal tissues and enhance recovery. Additionally, they claimed that UC-MSC treatment could be regarded as an antimicrobial strategy: MSCs affect the host immune response against pathogens by increasing the activity of phagocytes, the secretion of antimicrobial peptides and proteins, and the expression of molecules such as indoleamine 2,3-dioxygenase (IDO) and interleukin (IL)-17. 58 Convalescent plasma or immunoglobulins have been used to improve the survival rate of patients with severe acute respiratory syndrome (SARS), the pandemic 2009 influenza A (H1N1), avian influenza A (H5N1), several hemorrhagic fevers such as Ebola, and other viral infections. Some clinical trials, as well as a meta-analysis, have shown that plasma donated by patients who recovered from coronavirus diseases such as SARS-CoV and MERS-CoV has produced favorable results in the treatment of other patients and may reduce the mortality rate. So, in another clinical trial approach, plasma containing antibodies developed during the convalescent phase of infected patients is being used to treat COVID-19. The efficacy of these approaches is not yet clearly known, and historical experience suggests that convalescent sera may be more effective in preventing disease than in treating established disease. [59][60][61] Also, a shorter hospital stay and a lower mortality rate have been observed in patients treated with convalescent plasma compared with those who were not, in the related clinical trials. However, there are still some challenges, specifically for COVID-19 treatment. Donated blood products must be screened for infectious agents according to current blood banking practices.
Besides, individual sera should be properly tested for specific antibody content and neutralizing activity against SARS-CoV-2. However, repurposing old drugs for the treatment of COVID-19 is currently the only available option until an effective treatment is found. 62
Antiviral Drug Development Strategies
In multiple structural and biochemical interaction studies, ACE2 was found to be a key receptor for the spike glycoprotein of SARS-CoV-2. 63 The receptor-binding domain (RBD) of the SARS-CoV-2 S protein interacts with ACE2, and this interaction determines the host range and tropism. In this regard, the interaction between the RBD of SARS-CoV-2/SARS-CoV and mammalian ACE2 was predicted by a sequence alignment study of the ACE2 amino acids that bind the RBD. 64 Also, structural simulation showed that N82 of ACE2 makes closer contact with F486 of the SARS-CoV-2 S protein than M82 of ACE2 does (Figure 4).
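The kind of residue-level comparison described above can be illustrated with a small dependency-free sketch that tallies identities between the RBD-contacting positions of two ACE2 orthologues. The two sequences below are short hypothetical placeholders, not the real ACE2 contact residues from the cited studies; the point is only to show the form of such a per-position comparison.

```python
# Hedged sketch: per-position comparison of hypothetical ACE2 contact residues
# from two species. Sequences are invented placeholders of equal length.
human_like = "QTFKGDYQMN"   # hypothetical contact residues, species A
other_sp   = "QTFNGEYQLN"   # hypothetical contact residues, species B

assert len(human_like) == len(other_sp)

matches = sum(a == b for a, b in zip(human_like, other_sp))
identity = matches / len(human_like) * 100

for i, (a, b) in enumerate(zip(human_like, other_sp), start=1):
    mark = "|" if a == b else " "
    print(f"pos {i:2d}: {a} {mark} {b}")
print(f"Identity over contact positions: {identity:.0f}%")
```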
Based on the above-mentioned studies, multiple drug development projects focus on the ACE2–SARS-CoV-2 spike interaction. 65,66 In one study, researchers showed that human recombinant soluble ACE2 (hrsACE2) blocks the growth of SARS-CoV-2 in a dose-dependent manner and reduces SARS-CoV-2 recovery from Vero cells by a factor of 1000–5000. They acknowledged that their study had some limitations, because its design focused on the early stages of infection, indicating that hrsACE2 could block the early entry of SARS-CoV-2 into host cells. So, to clarify the effect of hrsACE2 at later stages of infection, further in vitro and in vivo studies are needed. Also, lung organoids should be carefully studied, because the lung is the major target organ of COVID-19. 67 The SARS-CoV-2 main protease (Mpro) is another target that has recently been suggested for the development of novel drug candidates against COVID-19. In a deep docking of 1.3 billion compounds obtained from the ZINC15 library, 1,000 potential inhibitors of SARS-CoV-2 Mpro were selected for further characterization and development. 68 Some other agents, such as statins, angiotensin receptor blockers (ARBs), and angiotensin-converting enzyme inhibitors (ACEIs), have been suggested to treat SARS-CoV-2 infection. Based on epidemiological data, over two-thirds of the patients who died from COVID-19 had diabetes or cardiovascular diseases and were treated with angiotensin-receptor blockers (ARBs) as first-line therapy. Moreover, there are several indications that ARBs can significantly increase ACE2 expression in the kidney and the heart (2- to 5-fold), whereas for the lungs there is no direct evidence yet. One study showed that ACE2 is expressed in the lower lungs on type I and II alveolar epithelial cells of normal human lungs. In addition, ACE2 is highly expressed in the mouth and tongue, which facilitates viral entry into the host. So, ARBs most probably increase ACE2 even in alveolar cells. One may therefore wonder whether the use of these drugs could predispose patients to more severe illness and increased SARS-CoV-2 infection. 69 On the other hand, preclinical studies conducted in different animal models of severe lung injury have demonstrated that ACE2 is substantially downregulated. Also, the use of losartan reduces severe acute lung injury in mice injected with the spike glycoprotein of SARS-CoV. Given these contradictory findings, the protective effect of ACE2 upregulation by ARBs during COVID-19 infection cannot be clearly explained.
Nano-Based Drug Delivery Systems
Nanotechnology, as an advanced therapeutic option, has been shown in recent studies to have great potential against COVID-19, for example in vaccine development and in the use of engineered nanocarriers in drug delivery systems. Therefore, it is important to select a suitable nanocarrier delivery system in order to find a safe and effective treatment. 70 All nanomedicine strategies for COVID-19 therapeutics and vaccine development are designed around target identification, in order to stop or block the pathogenesis of the viral infection. The major identified targets are the viral protease, host-cell-produced proteases, the viral RNA polymerase, and the interaction site of the viral S protein with the host receptor ACE2. [71][72][73][74] Another strategy proposed for COVID-19 treatment is targeting the SARS-CoV-2 surface S protein using neutralizing antibodies (nAbs). 75 Besides, targeting the SARS-CoV-2 viral RNA genome by the use of RNA interference (RNAi) or antisense oligonucleotides is another interesting approach towards a potential therapy for this disease. 76,77 In nano-based drug delivery systems, several lines of evidence indicate that multiple nanomaterials such as graphene, nanodiamonds, carbon nanotubes, and polystyrene particles have an intrinsic capacity to act as anti-COVID-19 agents, especially via activation of the immune system, which mostly depends on their functionalization. 78 Several other studies have also shown that extracellular vesicles (EVs), a family of natural carriers in the human body, play critical roles in cell-to-cell communication. Moreover, they can be used as unique drug carriers to deliver protease inhibitors for the treatment of COVID-19 with fewer systemic side effects. 79 Although several researchers have proposed that nanotechnology can provide biosensors, vaccines, and antiviral materials, all of these approaches need to be developed through expanded in vitro and in vivo investigations, along with proper formulation, before they can be confirmed for use in clinical trials.
Discussion About Suggested Strategy
Over 30% of treated patients do not achieve remission because of the body's inflammatory response, which is known as one of the causes of depression. On the other hand, the nervous and immune systems interact with each other. In patients suffering from depression, changes in the plasma concentrations of cytokines, as well as in the number and level of activation of immune cells, have been found (Figure 5). Moreover, neurotransmitters are signalling substances and mediators of signal states among the cells of the nervous system. Several pieces of evidence have elucidated the modulatory role of neurotransmitters in immune function and in the regulation of the migration of leukocytes and even tumor cells. Serotonergic and dopaminergic pathways are attractive targets for treating human immune, infectious, and cancerous diseases. 80,81 In several successful investigations, agonists or antagonists of dopamine or serotonin receptors have been used for the activation of autophagy or the induction of apoptosis. [82][83][84] In these pathways, serotonin has a stimulatory effect on cytotoxic T cells, plays a role in the activation of NK cells, and also increases the proliferative activity of B cells. Dopamine is a catecholamine neurotransmitter synthesized in the brain and widely known as an important factor in the regulation of the homeostasis of the immune, renal, hormonal, and central nervous systems. In the WHO interim guidance document for suspected novel coronavirus infection, it is mentioned that using vasopressors (norepinephrine, epinephrine, and dopamine, most safely given through a central venous catheter) helps clinicians manage patients with acute respiratory failure and septic shock as a consequence of severe infection. 57,85,86 Based on the described role of host immune responses in overcoming infectious disease, it has been recommended that serotonergic and dopaminergic pathways are attractive targets for treating human infectious disease. For example, fluoxetine, through its action on the serotonergic pathway in the central nervous system, regulates neuroendocrine signals and also modulates the immune response against infection. In addition, the administration of fluoxetine reduces lymphocyte activity when basal immune function is high and improves immune function when it is deficient, by direct and indirect mechanisms. The indirect mechanisms are either 5-HT-dependent (via 5-hydroxytryptamine receptors, 5-HT receptors or serotonin receptors, a group of G protein-coupled receptors) or 5-HT-independent (Figure 6). It has been shown that the drug concentration used can affect its mechanism of action. So, the results of these studies highlight the importance of the novel pharmacological action of fluoxetine as an immune-modulatory drug. Besides, fluoxetine can act directly on T lymphocytes and dually modulate their proliferation, depending on cellular activation. Fluoxetine, one of the selective 5-HT reuptake inhibitors (SSRIs), has been demonstrated to be very efficient and safe, with few side effects. Moreover, fluoxetine and all other SSRIs are 5-HT2B agonists, which is important for their therapeutic effects. In addition, fluoxetine is usually chosen for the treatment of depression symptoms, obsessive-compulsive disorder, panic attacks, and bulimia nervosa. The SSRIs are frequently used in antidepressant therapy as well as for immune function modification.
In this regard, fluoxetine shows a novel pharmacological action as an immune modulator, which may help in the treatment of several immune deficiencies and/or in the presence of immune dysregulation. 87 There are several reports on the advantages of host-directed therapies (HDTs) for infections such as MERS-CoV; these benefits include protection of immune mechanisms, modulation of destructive immune-mediated inflammatory responses, prevention of cytokine storm, and protection of tissues from inflammatory damage. 88 Supportive and successful care is thus the main goal of COVID-19 treatment, and prevention of complications, particularly organ failure, ARDS, and secondary bacterial infections, should also be considered. In addition, the modulation of immune signaling may significantly affect the outcome of this disease. Petersen et al 89 reported that commonly used drugs with good safety profiles can be used as HDTs to supplement host innate and adaptive immune mechanisms against MERS-CoV; nevertheless, they suggested that carefully controlled trials be performed to determine this relationship.
In a meta-analysis of 7 studies with a total of 662 hepatitis C patients, researchers reported that prophylactic SSRIs reduced the risk of depression. The incidence of IFN-induced major depression and depression severity were defined as the primary outcomes, and sustained virologic response as the secondary outcome; the authors found that SSRIs prevent interferon-α-induced depression in patients with hepatitis C. 90 An interesting report states that, for some COVID-19 patients, a reduction in the level of tryptophan (TRP), the precursor of serotonin, may either expose an underlying vulnerability to depression or trigger a de novo depressive episode. That commentary discussed the pathway involved, recommended in-hospital augmentation with foods or supplements that increase TRP levels for COVID-19 patients treated with IFNs, and suggested that SSRIs may also be tried. 91 Since fluoxetine is a specific and potent inhibitor of presynaptic serotonin reuptake and works by increasing the amount of available serotonin, it could be useful in the treatment of major depression in COVID-19 patients. 92 Molecular screening of small-molecule libraries identified fluoxetine as a potent inhibitor of coxsackievirus replication that reduced the synthesis of viral RNA and protein. 93 Immunomodulatory properties of fluoxetine have also been reported in animal and human models. Host-targeted small molecules modulate excessive inflammation and reduce lung tissue destruction, and researchers have reported that fluoxetine, as a host-targeted small molecule, restricts the growth of intracellular Mycobacterium tuberculosis and induces autophagy in infected macrophages. 94,95 Ulferts et al 96 demonstrated that the SSRI fluoxetine inhibited the replication of human enteroviruses B and D by targeting a viral protein.
Owing to the ability of fluoxetine to enter the cell, possibly even through these newly described receptors, it could compete with the virus for cell entry and prevent the binding of SARS-CoV-2 to target cells. Indeed, co-administration of fluoxetine in the treatment of COVID-19 could be considered because of its possible interaction with ACE2 receptors, its immune-modulatory function, and its potential to support a proper immune response at the right time. Moreover, it may compete with the virus for cell entry and help regulate the body's immune response against the virus. The Centers for Disease Control and Prevention (CDC) has reported that the outbreak of coronavirus disease 2019 (COVID-19) may be stressful for people and can cause strong emotions in adults and children. A document has also been developed by the WHO Department of Mental Health with a series of messages to support mental and psychosocial well-being in different target groups during the COVID-19 outbreak. 97 As the influence of antidepressant drugs on the immune system has previously been demonstrated, combination treatment with fluoxetine may not only reduce the stress of the outbreak but also decrease viral load, helping to prevent and cure the infection. The use of this drug may be particularly useful for people in close contact with patients through care or treatment, such as doctors, nurses, other treatment staff, and their families. However, rigorous laboratory and clinical research is needed to test these hypotheses, building on the strong previous studies.
Conclusion
The outbreak of the new coronavirus is a major concern for human life. The high mortality rate and the significant number of deaths caused by this virus are threats that call for international and global unity and cooperation; to provide access to drugs and to find a cure as soon as possible, the entire world must work together. As a general summary of the major studies, about 50% of clinical trials and treatment strategies are dedicated to the use of antiviral and anti-inflammatory drugs. Notably, about 35% involve traditional medicine, which indicates a tendency to use herbal medicines, perhaps because of greater confidence that they carry a low risk of side effects or long-term consequences.
The coronavirus pandemic slows the progression of clinical studies and stretches them over a long period. In this critical situation, attention must be paid to the most accessible facilities and the most feasible approaches with a safer practical track record. It is better to avoid costly, slow-yielding methods and external factors that interfere with the body's natural systems and for which there is not yet evidence on long-term effects. Given the challenges ahead for some of these methods, including cell therapy, recombinant factors, or even plasma transfusion, it is best to confront and overcome this emerging virus through biomimetics and the mechanisms of virus-host interaction.
Thus, approved antiviral drugs and immune-modulatory agents combined with natural compounds should be considered the first line of accessible and beneficial treatments. This approach is preferable because of the large body of existing studies, the long follow-up times, and its proximity to the natural system and the normal physiological routine of pathogen-host interactions.
What Your Username Says About You
Usernames are ubiquitous on the Internet, and they are often suggestive of user demographics. This work looks at the degree to which gender and language can be inferred from a username alone by making use of unsupervised morphology induction to decompose usernames into sub-units. Experimental results on the two tasks demonstrate the effectiveness of the proposed morphological features compared to a character n-gram baseline.
Introduction
There is much interest in automatic recognition of demographic information of Internet users to improve the quality of online interactions. Researchers have looked into identifying a variety of factors about users, including age, gender, language, religious beliefs and political views. Most work leverages multiple sources of information, such as search query history, Twitter feeds, Facebook likes, social network links, and user profiles. However, in many situations, little of this information is available. Conversely, usernames are almost always available.
In this work, we look specifically at classifying gender and language based only on the username. Prior work by sociologists has established a link between usernames and gender (Cornetto and Nowak, 2006), and studies have linked usernames to other attributes, such as individual beliefs (Crabill, 2007;Hassa, 2012) and shown how usernames shape perceptions of gender and ethnicity in the absence of common nonverbal cues (Pelletier, 2014). The connections to ethnicity motivate the exploration of language identification.
Gender identification based on given names is very effective for English (Liu and Ruths, 2013), since many names are strongly associated with a particular gender, like "Emily" or "Mark". Unfortunately, the requirement that each username be unique precludes use of given names alone. Instead, usernames are typically a combination of component words, names and numbers. For example, the Twitter name @taylorswift13 might decompose into "taylor", "swift" and "13". The sub-units carry meaning and, importantly, they are shared with many other individuals. Thus, our approach is to leverage automatic decomposition of usernames into sub-units for use in classification.
We use the Morfessor algorithm (Creutz and Lagus, 2006;Virpioja et al., 2013) for unsupervised morphology induction to learn the decomposition of the usernames into sub-units. Morfessor has been used successfully in a variety of language modeling frameworks applied to a number of languages, particularly for learning concatenative morphological structure. The usernames that we analyze are a good match to the Morfessor framework, which allows us to push the boundary of how much can be done with only a username.
The classifier design is described in the next section, followed by a description of experiments on gender and language recognition that demonstrate the utility of morph-based features compared to character n-gram features. The paper closes with a discussion of related work and a summary of key findings.
Unsupervised Morphology Learning
In linguistics, a morpheme is the "minimal linguistic unit with lexical or grammatical meaning" (Booij, 2012). Morphemes are combined in various ways to create longer words. Similarly, usernames are frequently made up of a concatenated sequence of smaller units. These sub-units will be referred to as u-morphs to highlight the fact that they play an analogous role to morphemes but for purposes of encoding usernames rather than standard words in a language. The u-morphs are subunits that are small enough to be shared across different usernames but retain some meaning.
Unsupervised morphology induction using Morfessor (Creutz and Lagus, 2006) is based on a minimum description length (MDL) objective, which balances two competing goals: maximizing both the likelihood of the data and of the model. The likelihood of the data is maximized by longer tokens and a bigger lexicon whereas the likelihood of the model is maximized by a smaller lexicon with shorter tokens. A parameter controls the trade-off between the two parts of the objective function, which alters the average u-morph length. We tune this parameter on held-out data to optimize the classification performance of the demographic tasks.
Maximizing the Morfessor objective exactly is computationally intractable. The Morfessor algorithm searches for the optimal lexicon using an iterative approach. First, the highest probability decomposition for each training token is found given the current model. Then, the model is updated with the counts of the u-morphs. A u-morph is added to the lexicon when it increases the weighted likelihood of the data by more than the cost of increasing the size of the lexicon.
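To make the search step concrete, the toy Python sketch below finds the highest-probability decomposition of a username given a current u-morph lexicon, i.e., the dynamic-programming search performed in each iteration. The lexicon, its probabilities, and the maximum u-morph length are illustrative assumptions; this is not the Morfessor implementation itself.

```python
import math

def viterbi_segment(word, morph_logprob, max_len=12):
    """Best decomposition of `word` into known u-morphs.

    morph_logprob: dict mapping u-morph -> log probability under the
    current model.  Returns (segments, total_log_prob)."""
    n = len(word)
    # best[i] = (log prob of best segmentation of word[:i], split point)
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            piece = word[j:i]
            if piece in morph_logprob and best[j][0] > -math.inf:
                score = best[j][0] + morph_logprob[piece]
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[n][0] == -math.inf:
        return [word], -math.inf          # back off to the whole string
    # Recover the segmentation by following the split points backwards.
    segments, i = [], n
    while i > 0:
        j = best[i][1]
        segments.append(word[j:i])
        i = j
    return segments[::-1], best[n][0]

# Example with a hypothetical lexicon (probabilities are made up):
lexicon = {"taylor": math.log(0.01), "swift": math.log(0.02), "13": math.log(0.05)}
print(viterbi_segment("taylorswift13", lexicon))
# -> (['taylor', 'swift', '13'], ...)
```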
Usernames can be mixed-case, e.g. "JohnDoe". The case change gives information about a likely u-morph boundary, but at the cost of doubling the size of the character set. To more effectively leverage this cue, all characters are made lowercase but each change from lower to uppercase is marked with a special token, e.g. "john$doe". Using this encoding reduces the u-morph inventory size, and we found it to give slightly better results in language identification.
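As a rough illustration, this encoding could be implemented as follows; the choice of "$" as the case-change marker follows the example in the text, while other details (e.g., how a leading capital is handled) are our assumptions.

```python
def encode_case(username):
    """Lowercase the name but mark each lower-to-upper transition with '$'.

    E.g. "JohnDoe" -> "john$doe" (the leading capital has no preceding
    lowercase letter, so it is not marked)."""
    out = []
    for prev, ch in zip(" " + username, username):
        if ch.isupper() and prev.islower():
            out.append("$")
        out.append(ch.lower())
    return "".join(out)

print(encode_case("JohnDoe"))        # john$doe
print(encode_case("taylorswift13"))  # taylorswift13
```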
Character 3-grams and 4-grams are used as baseline features. Before extracting the n-grams a "#" token is placed at the start and end of each username. The n-grams are overlapping to give them the best chance of finding a semantically meaningful sub-unit.
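For instance, boundary-padded overlapping n-grams can be extracted as in the following minimal sketch.

```python
def char_ngrams(username, n):
    """Overlapping character n-grams with '#' marking start and end."""
    padded = "#" + username + "#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("niceguy", 3))
# ['#ni', 'nic', 'ice', 'ceg', 'egu', 'guy', 'uy#']
```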
Classifier Design
Given a decomposition of the username into a sequence of u-morphs (or character n-grams), we represent the relationship between the observed features and each class with a unigram language model. If a username u has decomposition m_1, ..., m_n then it is assigned to the class c_i for which the unigram model gives it the highest posterior probability, or equivalently:

c(u) = argmax_{c_i} p_C(c_i) ∏_{k=1..n} p(m_k | c_i)

where p_C(c_i) is the class prior and p(m_k | c_i) is the class-dependent unigram. 1

For some demographics, the class prior can be very skewed, as in the case of language detection where English is the dominant language. The choice of smoothing algorithm can be important in such cases, since minority classes have much less training data for estimating the language model and benefit from having more probability mass assigned to unseen words. Here, we follow the approach proposed in (Frank and Bouckaert, 2006) that normalizes the token count vectors for each class to have the same L_1 norm, specifically:

p(m_k | c_i) = (1 + β · n(m_k, c_i) / Σ_m n(m, c_i)) / Z

where n(·) indicates counts and β controls the strength of the smoothing. Setting β equal to the number of training examples approximately matches the strength of the smoothing to the add-one smoothing algorithm. Z = β + |M| is a constant to make the probabilities sum to one.

Only a small portion of usernames on the Internet come with gender labels. In these situations, semi-supervised learning algorithms can use the unlabeled data to improve the performance of the classifier. We use a self-training expectation-maximization (EM) algorithm similar to that described in (Nigam et al., 2000). The algorithm first learns a classifier on the labeled data. In the E-step, the classifier assigns probabilistic labels to the unlabeled data. In the M-step, the labeled data and the probabilistic labels are combined to learn a new classifier. These steps are iterated until convergence, which usually requires three iterations for our tasks. Nigam et al. (2000) call their method EM-λ because it uses a parameter λ to reduce the weight of the unlabeled examples relative to the labeled data. This is important because the independence assumptions of the unigram model lead to overconfident predictions. We used another method that directly corrects the estimated posterior probabilities. Using a small validation set, we binned the probability estimates and calculated the true class probability for each bin. The EM algorithm used the corrected probabilities for each bin for the unlabeled data during the maximization step. Samples with a prediction confidence of less than 60% are not used for training.
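The following is a minimal, self-contained sketch of such a classifier, assuming our reading of the count-normalizing smoothing formula above; the handling of unseen u-morphs and of the class priors are our own assumptions, not the authors' code.

```python
import math
from collections import Counter

class UnigramClassifier:
    """Class prior + unigram model over u-morphs, with count-normalizing
    smoothing in the spirit of Frank and Bouckaert (2006).  `docs` are
    lists of u-morphs; segmentation is assumed to be done elsewhere."""

    def fit(self, docs, labels, beta):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        vocab = {m for d in docs for m in d}
        Z = beta + len(vocab)
        self.unseen = math.log(1.0 / Z)        # u-morphs never seen in training
        self.logp = {}
        for c in self.classes:
            counts = Counter(m for d, y in zip(docs, labels) if y == c for m in d)
            total = sum(counts.values()) or 1
            self.logp[c] = {m: math.log((1 + beta * counts[m] / total) / Z)
                            for m in vocab}
        return self

    def log_posterior(self, doc):
        return {c: math.log(self.prior[c]) +
                   sum(self.logp[c].get(m, self.unseen) for m in doc)
                for c in self.classes}

    def predict(self, doc):
        scores = self.log_posterior(doc)
        return max(scores, key=scores.get)

# Hypothetical toy usage:
docs = [["nice", "guy", "87"], ["little", "miss", "sunshine"]]
labels = ["male", "female"]
clf = UnigramClassifier().fit(docs, labels, beta=len(docs))
print(clf.predict(["guy", "22"]))   # -> 'male'
```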
Gender Identification
Data was collected from the OkCupid dating site by downloading up to 1,000 profiles from 27 cities in the United States, first for men seeking women and again for women seeking men to obtain a balanced set of 44,000 usernames. The data is partitioned into three sets with 80% assigned to training and 10% each to validation and test. We also use 3.5M usernames from the photo messaging app Snapchat (McCormick, 2014): 1.5M are used for u-morph learning and 2M are for self-training. All names in this task used only lower case, due to the nature of the available data.

The top features ranked by likelihood ratios are given in Table 1. The u-morphs clearly carry semantic meaning, and the trigram features appear to be substrings of the top u-morph features. The trigram features have an advantage when the u-morphs are under-segmented such as if the u-morph "niceguy" or "thatguy" is included in the lexicon. Conversely, the n-grams can suffer from over-segmentation. For example, the trigram "guy" is inside the surname "Nguyen" even though it is better to ignore that substring in this context. Many other tokens suffer from this problem, e.g. "miss" is in "mission".
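A ranking like the one in Table 1 could be produced from the classifier sketch above by comparing the class-conditional log-probabilities of each u-morph; the exact likelihood-ratio definition used by the authors is not specified, so the helper below is only an illustration for a two-class task.

```python
def likelihood_ratios(model, top_k=10):
    """Rank u-morphs by the difference in class-conditional log
    probability between the two classes of a fitted UnigramClassifier."""
    a, b = model.classes
    ratio = {m: model.logp[a][m] - model.logp[b][m] for m in model.logp[a]}
    top_a = sorted(ratio, key=ratio.get, reverse=True)[:top_k]
    top_b = sorted(ratio, key=ratio.get)[:top_k]
    return top_a, top_b
```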
The variable-length u-morphs are longer on average than the character n-grams (4.9 characters). The u-morph inventory size is similar to that for 3-grams but 5-10 times smaller than the 4-gram inventory, depending on the amount of data used since the inventory is expanded in semi-supervised training. By using the MDL criterion in unsupervised morphology learning, the u-morphs provide a more efficient representation of usernames than n-grams and make it easier to control the trade-off between vocabulary size and average segment length. The smaller inventory is less sensitive to sparse data in language model training.

The experimental results are presented in Table 2. For the supervised learning method, the character 3-gram and 4-gram features give equivalent performance, and the u-morph features give the lowest error rate by a small amount (3% relative). More significantly, the character n-gram systems do not benefit from semi-supervised learning, but the u-morph features do. The semi-supervised u-morph features obtain an error rate of 25.8%, which represents a 10% relative reduction over the baseline character n-gram results.
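The self-training procedure described above in the classifier design section can be sketched as follows; this is a simplified hard-label variant (the paper combines probabilistic labels), it reuses the UnigramClassifier sketch, and the posterior-calibration function is assumed rather than implemented.

```python
import math

def self_train(labeled_docs, labels, unlabeled_docs, beta, calibrate,
               threshold=0.6, iterations=3):
    """Hard-label sketch of the self-training EM procedure.  `calibrate`
    maps a raw posterior to a bin-corrected probability (estimated on a
    held-out validation set)."""
    model = UnigramClassifier().fit(labeled_docs, labels, beta)
    for _ in range(iterations):
        docs, targets = list(labeled_docs), list(labels)
        for d in unlabeled_docs:
            scores = model.log_posterior(d)                     # E-step
            best = max(scores, key=scores.get)
            norm = sum(math.exp(s - scores[best]) for s in scores.values())
            if calibrate(1.0 / norm) >= threshold:              # keep confident labels
                docs.append(d)
                targets.append(best)
        model = UnigramClassifier().fit(docs, targets, beta)    # M-step
    return model
```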
Language Identification on Twitter
This experiment takes usernames from the Twitter streaming API. Each username is associated with a tweet, for which the Twitter API identifies a language. The language labels are noisy, so we remove approximately 35% of the tweets where the Twitter API does not agree with the langid.py classifier (Lui and Baldwin, 2012). Both training and test sets are restricted to the nine languages that comprise at least 1% of the training set. These languages cover 96% of the observed tweets (see Table 4). About 110,000 usernames were reserved for testing and 430,000 were used for training both u-morphs and the classifier. Semi-supervised methods are not used because of the abundant labeled data. For each language, we train a one-vs.-all classifier. The mixed-case encoding technique (see sec. 2.1) gives a small increase (0.5%) in the accuracy of the model and reduces the u-morph model size by 5%.

Table 3: Precision, recall and F1 score for language identification using the 4-gram and u-morph representations and a combination system, averaging over all users.

The results in Tables 3 and 4 contrast systems using 4-grams, u-morphs, and a combination model, showing precision-recall trade-offs for all users together and F1 scores broken down by specific languages, respectively. The combination system simply uses the average of the posterior log-probabilities for each class, giving equal weight to each model. While the overall F1 scores are similar for the 4-gram and u-morph systems, their precision and recall trade-offs are quite different, making them effective in combination. The 4-gram system has higher recall, and the u-morph system has higher precision. With the combination, we obtain a substantial gain in precision over the 4-gram system with a modest loss in recall, resulting in a 3% absolute improvement in average F1 score.
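As a minimal sketch, the score combination could look like the following, assuming each system exposes per-class posterior log-probabilities.

```python
def combine_and_predict(scores_ngram, scores_umorph):
    """Average the two systems' posterior log-probabilities with equal
    weight and return the best-scoring class."""
    combined = {c: 0.5 * (scores_ngram[c] + scores_umorph[c])
                for c in scores_ngram}
    return max(combined, key=combined.get)
```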
Looking at performance on the different languages, we find that the F1 score for the combination model is higher than the 4-gram for every language, with precision always improving. For the dominant languages, the difference in recall is negligible. The infrequent languages have a 4-8% drop in recall, but the gains in precision are substantial for these languages, ranging from 50-100% relative. The greatest contrast between the 4-gram and the combination system can be seen for the least frequent languages, i.e. the languages with the least amount of training data. In particular, for French, the precision of the combination system (0.36) is double that of the 4-gram model (0.18) with only a 34% loss in recall (0.24 to 0.16).
Table 4: Language identification performance (F1 scores) and relative frequency in the corpus for the 4-gram (4-gr) and u-morph (u-m) representations and the combination system (Comb).

Looking at the most important features from the classifier highlights the ability of the morphemes to capture relevant meaning. The presence of the morpheme "juan", "jose" or "flor" increases the probability of a Spanish language tweet by five times. The same is true for Portuguese and the morpheme "bieber". The morpheme "q8" increases the odds of an Arabic language tweet by thirteen times due to its phonetic similarity to the name of the Arabic-speaking country Kuwait.
Other features may simply reflect cultural norms. For example, having an underscore in the username makes it five percent less likely to observe an English tweet. These highly discriminative morphemes are both long and short. It is hard for the fixed-length n-grams to capture this information as well as the morphemes do.
Related Work
Of the many studies on automatic classification of online user demographics, few have leveraged names or usernames at all, and the few that do mainly explore their use in combination with other features. The work presented here differs in its use of usernames alone, but more importantly in the introduction of morphological analysis to handle a large number of usernames. Two studies on gender recognition are particularly relevant. Burger et al. (2011) use the Twitter username (or screen name) in combination with other profile and text features to predict gender, but they also look at the use of username features alone. The results are not directly comparable to ours, because of differences in the data set used (150k Twitter users) and the classifier framework (Winnow), but the character n-gram performance is similar to ours (21-22% different from the majority baseline). The study uses over 400k character n-grams (n=1-5) for screen names alone; our study indicates that the u-morphs can reduce this number by a factor of 10. Burger et al. (2011) used the same strategy with the self-identified full name of the user as entered into their profile, obtaining 89% gender recognition (vs. 77% for screen names). Later, Liu and Ruths (2013) use the full first name from a user's profile for gender detection, finding that for the names that are highly predictive of gender, performance improves by relying on this feature alone. However, more than half of the users have a name that has an unknown gender association. Manual inspection of these cases indicated that the majority includes strings formed like usernames, nicknames or other types of word concatenations. These examples are precisely what the u-morph approach tries to address.
Language identification is an active area of research (Bergsma et al., 2012; Zubiaga et al., 2014), but the username has not been used as a feature. Again, results are difficult to compare due to the lack of a common test set, but it is notable that the average F1 score for the combination model approaches the scores obtained on a similar Twitter language identification task where the algorithm has access to the full text of the tweet (Lui and Baldwin, 2014): 73% vs. 77%.
A study that is potentially relevant to our work is automatic classification of ethnicity of Twitter users, specifically whether a user is African-American (Pennacchiotti and Popescu, 2011). Again, a variety of content, profile and behavioral features are used. Orthographic features of the username are used (e.g. length, number of numeric/alpha characters), and names of users that a person retweets or replies to. The profile name features do not appear to be useful, but examples of related usernames point to the utility of our approach for analysis of names in other fields.
Conclusions
In summary, this paper has introduced the use of unsupervised morphological analysis of usernames to extract features (u-morphs) for identifying user demographics, particularly gender and language. The experimental results demonstrate that usernames contain useful personal information, and that the u-morphs provide a more efficient and complementary representation than character n-grams. 2 The result for language identification is particularly remarkable because it comes close to matching the performance achieved by using the full text of a tweet. The work is complementary to other demographic studies in that the username prediction can be used together with other features, both for the user and members of his/her social network.
The methods proposed here could be extended in different directions. The unsupervised morphology learning algorithm could incorporate priors related to capitalization and non-alphabetic characters to better model these phenomena than our simple text normalization approach. More sophisticated classifiers could also be used, such as variable-length n-grams or neural-network-based n-gram language models, as opposed to the unigram model used here. Of course the sophistication of the classifier will be limited by the amount of training data available.
A large amount of data is not necessary to build a high precision username classifier. For example, less than 7,000 training examples were available for Turkish in the language identification experiment and the classifier had a precision of 76%. Since little data is required, there may be many more applications of this type of model.
Prior work on unsupervised morphological induction focused on applying the algorithm to natural language input. By using those techniques with a new type of input, this paper shows that there are other applications of morphology learning.
The 5-year overall survival of cervical cancer in stage IIIC-r was little different to stage I and II: a retrospective analysis from a single center
Background The 2018 International Federation of Gynecology and Obstetrics (FIGO) staging guideline for cervical cancer includes stage IIIC recognized by preoperative radiology (IIIC-r), which indicates that lymph node metastases (LNM) have been identified by imaging. We aim to explore the reasonability and limitations of stage IIIC-r and to explore the potential underlying reasons. Methods Electronic medical records were used to identify patients with cervical cancer. According to the new staging guideline, patients were reclassified and assigned into five cohorts: stage I, stage II, stage IIIC-r, LNM confirmed by pathology (IIIC-p), and LNM detected by radiology and confirmed by pathology (IIIC r + p). Five-year overall survival was estimated for each cohort. The diagnostic accuracy of computed tomography (CT) and magnetic resonance imaging (MRI) and the diameter of detected lymph nodes were also evaluated. Results A total of 619 patients were identified. The mean follow-up was 65 months (95% CI 64.43–65.77) for all patients. By comparison, the 5-year overall survival rates were not statistically different (p = 0.21) among stage IIIC-r, stage I and stage II, whereas the rates were statistically different (p<0.001) among stage IIIC-p, IIIC r + p, and stage I and stage II. The sensitivities of CT and MRI in detecting LNM preoperatively were 51.2 and 48.8%. The mean maximum diameter of pelvic lymph nodes detected by CT was 1.2 cm in the IIIC-r cohort and 1.3 cm in the IIIC r + p cohort, while the mean maximum diameter of pelvic lymph nodes detected by MRI was 1.2 cm in the IIIC-r cohort and 1.48 cm in the IIIC r + p cohort. When the diagnostic efficacy of the diameter of pelvic lymph nodes in detecting LNM was evaluated, the area under the receiver operating characteristic curve (ROC curve) was 0.58 (p = 0.05). Conclusions The FIGO 2018 staging guideline for cervical cancer appears to have certain limitations in the classification of patients with LNM. CT and MRI, moreover, have limitations in detecting LNM. It would be better to use more accurate imaging tools to identify LNM in clinical practice.
Background
There were 570,000 new cases of and 311,000 deaths from cervical cancer globally in 2018. In Global Cancer Statistics 2018, cervical cancer ranks as the fourth most common cancer worldwide and, in developing countries, second in incidence and mortality behind breast cancer [1]. The incidence of cervical cancer is decreasing in developed countries but is still increasing in developing countries [1].
Cervical cancer has been staged clinically [2]. The most widely used staging guideline for cervical cancer is that of the International Federation of Gynecology and Obstetrics (FIGO). Before 2018, gynecologists staged patients mainly by physical examination, with imaging as an auxiliary tool [3]. Unlike for some other types of cancer, clinical staging alone is not a strong predictor of prognosis in cervical cancer, and a growing number of studies have implied that lymph node status is a key prognostic factor [4][5][6]. Therefore, the 2018 revised FIGO cervical cancer staging guideline includes the use of imaging for staging and allows pathological results to modify the stage [7]. In this staging guideline, patients with involvement of pelvic and/or para-aortic lymph nodes, irrespective of tumor size and extent, are staged as IIIC, with r and p notations (r indicates that imaging showed nodal metastasis and p that pathology confirmed the metastasis). Patients with pelvic lymph node metastasis only are staged as IIIC1, and those with para-aortic node metastasis are staged as IIIC2.
A good staging guideline should define the extent of the cancer and differentiate survival outcomes [8]. However, there have been some controversial comments about the revised 2018 FIGO staging guideline. Some studies pointed out that survival rates were not consistent with the stages; for example, patients in stage IIIC1 had a higher survival rate than those in stages IIIA and IIIB [9].
In a corrigendum to the new FIGO staging guideline, lymph node micrometastases were included in stage IIIC [10]. This suggests that, in future clinical practice, more accurate approaches such as ultrastaging or sentinel lymph node biopsy (SLNB) should be used to clarify whether lymph nodes harbor micrometastases. Because we did not previously perform ultrastaging or SLNB, lymph node micrometastases were not included in this study. The primary objective of this study was to explore the validity of stage IIIC recognized by preoperative radiology (stage IIIC-r). Specifically, cervical cancer patients were restaged according to the new staging guideline, and we tried to determine whether the new stage IIIC-r improved the differentiation of 5-year survival rates and to explore the potential reasons.
Methods
Data from patients with cervical cancer confirmed by histology were identified from the Electronic Medical Record System of West China Second University Hospital (Chengdu, P.R. China) from January to December 31, 2016. Written informed consent was obtained from all patients, and the Institutional Review Board of West China Second University Hospital approved the study.
The hospital's medical record system holds complete medical records for all outpatients and inpatients, including patient socio-demographics, tumor characteristics, first course of treatment before disease progression or recurrence, follow-up, and partial survival information. For patients with incomplete follow-up, we collected detailed information on disease progression, recurrence, survival, or death through questionnaires or by telephoning the patients or their families. For deceased patients, we asked about the cause of death; we included patients who died of cervical cancer and excluded patients who died of other causes.
The inclusion criteria were: patients who underwent surgery as the first treatment; patients with demographic data, complete preoperative imaging results, and clinical and pathological data; patients with regular follow-up data; patients with survival data; and, among deceased patients, those who died of cervical cancer (progression, recurrence, or distant metastasis). The exclusion criteria were: patients without complete clinical and pathological data; patients who underwent adjuvant chemotherapy or concurrent radiotherapy and chemotherapy; patients lost to follow-up; and patients who died of other causes.
Patients' demographic data included age at diagnosis, and menstruation (menopause and pre-menopause). Imaging data included pelvic lymph nodes size, description and imaging methods (CT, MRI or PET-CT). Clinical and pathological data included stage of carcinoma (FIGO 2009), histological subtypes (squamous carcinoma, adenocarcinoma, adenosquamous carcinoma and others), degree of differentiation (poor, moderate and high), degree of stromal invasion (< 1/2 or ≥ 1/2), parametrial invasion (positive or negative), lymph node metastasis (positive or negative, described in the pathological reports) and lymph-vascular space invasion (positive or negative). The prognostic outcome assessed was overall survival. In order to ensure data authenticity and reliability, two investigators worked together: one collected and the other checked.
In the FIGO 2018 cervical cancer staging guideline, patients with positive lymph nodes, determined either pathologically or clinically, are classified as stage IIIC, irrespective of tumor size and extent. If imaging indicates LNM, the stage allocation is IIIC r, and if LNM is confirmed by pathological findings, it is IIIC p. Among patients with stage IIIC disease, those with positive pelvic lymph nodes are grouped as stage IIIC1 and those with positive para-aortic lymph nodes as stage IIIC2.
In our past medical records for cervical cancer patients, FIGO 2009 was the guideline in use and tumor size was categorized only as more than 4 cm or less than 4 cm; as a result, we lack detailed tumor size data for some patients. To guarantee the accuracy of our records and results, we did not reclassify all patients. The aim of this study was to determine the prognostic performance of stage IIIC recognized by preoperative radiology (IIIC-r); therefore, we only reclassified the patients with positive lymph nodes. All included patients had standard surgical procedures according to the NCCN and 2009 FIGO guidelines for cervical cancer: radical hysterectomy and pelvic lymphadenectomy. Lymph node status in all patients, including those with LNM identified by radiology, was therefore assessed by postoperative pathology. We then divided patients with LNM into three cohorts: the IIIC-r cohort (LNM detected by preoperative radiology), the IIIC-p cohort (LNM confirmed by postoperative pathology), and the IIIC r + p cohort (LNM detected by both radiology and pathology). The stage of the remaining patients stayed the same as under the FIGO 2009 staging guideline. Based on previous studies [11,12], pelvic lymph nodes with a diameter over 1 cm detected by CT or MRI on initial imaging were classified as positive.
Statistical analysis
Clinical and pathological characteristics were presented descriptively. Student's t-test or nonparametric tests were used for quantitative variables. Survival time was estimated from the date of diagnosis until death or last follow-up. Overall survival was estimated by the Kaplan-Meier method, and the log-rank test was used to compare differences among groups. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were used to describe the diagnostic accuracy of CT and MRI in detecting positive lymph nodes. A ROC curve was used to describe the diagnostic value of the maximum diameter of lymph nodes in indicating lymph node positivity. Data were analyzed with SPSS (version 25, IBM Corp.) and GraphPad Prism 7 (GraphPad Software, Inc.). P values less than 0.05 were considered statistically significant.
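For illustration only, the diagnostic metrics and the ROC analysis could be computed as in the following Python sketch; the numbers shown are made up, not the study's data, and scikit-learn is used only as an example toolchain (the study itself used SPSS and GraphPad Prism).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    imaging-positive/negative vs. pathology-positive/negative nodes."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

# Made-up counts, for illustration only:
print(diagnostic_metrics(tp=22, fp=90, fn=21, tn=350))

# Diagnostic value of node diameter (cm) for pathology-confirmed metastasis:
diameter = np.array([0.9, 1.0, 1.1, 1.2, 1.5, 2.0])   # made-up values
metastasis = np.array([0, 0, 1, 0, 1, 1])              # 1 = confirmed by pathology
print("AUC =", roc_auc_score(metastasis, diameter))
```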
Results
A total of 619 patients with cervical cancer who met the inclusion criteria were identified. The mean age of the cohort was 45 years, and 74.2% (459 of 619) of patients were premenopausal. The most common histological type was squamous cell carcinoma, which accounted for 80.5% of all included patients. All included patients underwent radical hysterectomy plus bilateral pelvic lymphadenectomy with pathological confirmation. According to preoperative imaging and postoperative pathology, 239 patients with LNM were restaged from stage I or II under the FIGO 2009 staging guideline to stage IIIC-r, IIIC-p, or IIIC r + p under the FIGO 2018 staging guideline. Since patients with positive para-aortic lymph nodes were rare, the stage IIIC groups included patients with pelvic and/or para-aortic lymph nodes positive clinically or pathologically. Among them, 128 patients were classified into the stage IIIC-r cohort because their pelvic lymph nodes were positive only on preoperative CT or MRI. Sixty-eight patients were classified into the stage IIIC-p cohort because their pelvic lymph nodes were positive only on postoperative pathology. Forty-three patients were classified into the IIIC r + p cohort because their pelvic lymph nodes were positive on both preoperative radiology and postoperative pathology (Fig. 1). The stage of the remaining 380 patients stayed the same as under the FIGO 2009 guideline. The mean age of patients was 44 years in the stage IA-IB cohort, 48 years in the stage IIA-IIB cohort, 46 years in the stage IIIC-p cohort, 46 years in the stage IIIC-r cohort, and 44 years in the IIIC r + p cohort. More than half of the patients were premenopausal, and the most common histologic subtype was squamous cell carcinoma in all groups. More detailed information is displayed in Table 1.
A summary table displays the follow-up time and 5-year overall survival rates. By comparison, the 5-year overall survival rates among the stage IIIC-r, IA-IB and IIA-IIB cohorts were not statistically different (p = 0.21) (Fig. 2c), suggesting that a higher FIGO 2018 stage was less likely to be consistently associated with worse 5-year overall survival rates. When stratified by LNM confirmed by pathology, the 5-year overall survival rates among stage I, stage II, stage IIIC-p and the IIIC r + p cohort were all significantly and statistically different (p<0.001) (Fig. 2a and b), suggesting that, when stratified by nodal status, survival decreased with increasing stage.

Fig. 1. The populations of included cases in the IIIC-r, IIIC-p and IIIC r + p cohorts.
We then tried to explore why the prognostic outcomes among stage I, stage II and stage IIIC-r in this study were not consistent with the widely held view that higher stage indicates worse survival. Firstly, the diagnostic accuracy of CT and MRI in detecting adenopathy was evaluated. The sensitivities of CT and MRI were 51.2 and 48.8%, respectively, but the positive predictive value (PPV) of MRI (35.6%) was higher than that of CT (19.6%) (Table 3).
Based on previous studies [11,12], pelvic lymph nodes with a diameter over 1 cm detected by CT or MRI on initial imaging were classified as positive. In this study, a total of 154 lymph nodes were detected in the stage IIIC cohort: 105 nodes were detected by CT and 49 by MRI. Sixty-two nodes were detected in the IIIC r + p cohort: 31 by CT and 31 by MRI. The overall mean diameter on initial imaging was 1.2 cm for the stage IIIC-r cohort and 1.38 cm for the IIIC r + p cohort, a statistically significant difference (p = 0.02). With MRI, the mean diameter was 1.2 cm for the stage IIIC-r cohort and 1.48 cm for the IIIC r + p cohort, also a statistically significant difference (p = 0.02). With CT, there was no statistically significant difference between the cohorts (p = 0.07) (Table 4). This suggests that the lymph nodes detected on initial MRI in the IIIC r + p cohort were larger than those in the stage IIIC-r cohort. When the diagnostic efficacy of pelvic lymph node diameter in detecting LNM was evaluated, the area under the ROC curve was 0.59 (p = 0.05) (Fig. 3). It therefore appears that using the diameter of lymph nodes on imaging to predict metastasis has little clinical value.
Discussion
Since this study was a retrospective analysis, the data we collected reflect past practice. Under the NCCN and 2009 FIGO guidelines for cervical cancer, LNM was not included in the staging system and did not influence the choice of treatment. The majority of patients with disease up to stage IIA could choose surgery or concurrent radiotherapy and chemotherapy, with the final decision depending on full communication between doctors and patients. If such patients had extensive LNM identified by preoperative imaging, the doctors would recommend concurrent radiotherapy and chemotherapy. In our study, all included patients underwent standard surgery and pelvic lymphadenectomy. During follow-up, the deceased patients included in this study had died after recurrence or after distant metastasis. The data in this study imply that the ability of stage IIIC-r in the new FIGO staging system to guide prognosis may be limited. Our analysis found that the false positive rate of LNM detected by preoperative imaging approaches such as CT or MRI was high, and the diagnostic accuracy of CT and MRI was relatively low. By comparison, the positive predictive value of MRI was higher than that of CT. Although the diameters of metastatic lymph nodes detected by MRI were larger than those detected by CT preoperatively, the accuracy of lymph node diameter for indicating metastasis was also comparatively low.
With the popularization of cervical cancer screening and vaccination, the epidemiology of the disease has changed in many ways. Firstly, cervical cancer is well controlled in developed countries [1], but owing to imbalanced regional development, its incidence and mortality remain high in developing countries [1]. Secondly, the incidence of early-stage cervical cancer has been gradually increasing [13]. Finally, the comparability of clinical stages among different countries and regions has decreased, because the clinical stage in some countries, developed countries in particular, is affected by imaging such as MRI [14]. Optimal staging guidelines should keep pace with the times: cancer staging guidelines should be updated based on developments in diagnostic technology and reliable treatment, new findings about prognostic factors, and outcome data [8]. Therefore, the revised 2018 FIGO cervical cancer staging guideline is an upgrade [15].
For a long time, many studies have indicated that LNM is a poor prognostic factor [5,16]. Yu Liu et al. reported that overall survival was 91% in a pelvic-LNM-negative cohort and 67% in a node-metastasis-positive cohort [16]. LNM is the key consideration for postoperative radiotherapy. Therefore, the revised 2018 FIGO staging guideline includes lymph node involvement, classified as stage IIIC, making it possible to establish lymph node status preoperatively using imaging examinations [15].
However, as shown in our results, the diagnostic accuracy of CT and MRI appears to be less than ideal. In addition, the LNM of the majority of patients were confirmed by postoperative histology and could not be detected by preoperative imaging examinations.
This seems consistent with other previous studies. A lymph node diameter over 1 cm is the current diagnostic basis for most cases diagnosed as lymph node metastasis by CT or MRI. In our results, the mean diameter of metastatic lymph nodes detected by MRI was 1.48 cm. However, the value of diameter for indicating metastasis seems to be limited: many metastatic lymph nodes are of normal size, while enlarged lymph nodes can be benign lesions such as inflammation or reactive hyperplasia [18]. Therefore, it is hard for simple morphological characteristics to differentiate whether lymph nodes are metastatic or not. Some studies have indicated that specific imaging features such as an irregular margin and central necrosis could improve the diagnostic accuracy of CT or MRI in detecting nodal metastases [19]. On the other hand, changing imaging methods could also be useful. Fluorine-18 fluorodeoxyglucose positron emission tomography/CT is a useful technique for detecting metastatic lymph nodes and can provide detailed information on the entire body [20], but owing to its high cost it has not been widely used in clinical practice. Diffusion-weighted imaging (DWI) is sensitive to the diffusion of water molecules in tissue, which can make subtle abnormalities more obvious and can provide better characterization of tissues and their pathological processes at the microscopic level [21,22]. With the development of technology, deep learning models could also play a role in detecting adenopathy. In the study by Qingxia Wu et al. (2020), a deep learning model was used to identify adenopathy on magnetic resonance imaging in patients with cervical cancer; the model that used both intratumoral and peritumoral regions on MRI had the best performance, with an AUC-ROC of 0.84 (96% CI 0.78-0.91) [23]. In addition, it is also important to improve the accuracy of intraoperative assessment, and sentinel lymph node biopsy (SLNB) [24] can address this problem. Sentinel lymph nodes (SLN) reflect the status of the related regional lymph nodes; they can be detected by lymphoscintigraphy using specific reagents such as blue dye, indocyanine green [25], or technetium-99, and confirmed by pathologists using frozen section, hematoxylin and eosin staining, or ultrastaging [26] to identify micrometastases. A meta-analysis of SLNB in early-stage cervical cancer showed that the pooled side-specific sensitivity of SLNB was 88% [27].
There are some limitations to this study. Firstly, it was a retrospective analysis from a single medical center, so selection bias may exist. Secondly, the false positive rate of stage IIIC-r was high owing to the low diagnostic accuracy of CT and MRI in detecting adenopathy. Pelvic LNM plays a key role in the prognosis of cervical cancer; therefore, detecting it preoperatively is important and promising.
Conclusion
In summary, stage IIIC in the revised FIGO 2018 staging guideline for cervical cancer is reasonable, since pelvic LNM plays a key role in the prognosis of cervical cancer. Routine imaging approaches such as CT or MRI, however, may lack detailed criteria for detecting LNM preoperatively. It would be better for gynecologists to use high-precision imaging such as PET/CT or DWI-MRI to detect metastases in future clinical practice.
Not When But Whether: Modality and Future Time Reference in English and Dutch
Abstract Previous research on linguistic relativity and economic decisions hypothesized that speakers of languages with obligatory tense marking of future time reference (FTR) should value future rewards less than speakers of languages which permit present tense FTR. This was hypothesized on the basis of obligatory linguistic marking (e.g., will) causing speakers to construe future events as more temporally distal and thereby to exhibit increased “temporal discounting”: the subjective devaluation of outcomes as the delay until they will occur increases. However, several aspects of this hypothesis are incomplete. First, it overlooks the role of “modal” FTR structures which encode notions about the likelihood of future outcomes (e.g., might). This may influence “probability discounting”: the subjective devaluation of outcomes as the probability of their occurrence decreases. Second, the extent to which linguistic structures are subjectively related to temporal or probability discounting differences is currently unknown. To address these, we elicited FTR language and subjective ratings of temporal distance and probability from speakers of English, which exhibits strongly grammaticized FTR, and Dutch, which does not. Several findings went against the predictions of the previous hypothesis: Framing an FTR statement in the present (“Ellie arrives later on”) versus the future tense (“…will arrive…”) did not affect ratings of temporal distance; English speakers rated future statements as relatively more temporally proximal than Dutch speakers; and English and Dutch speakers rated future tenses as encoding high certainty, which suggests that obligatory future tense marking might result in less discounting. Additionally, compared with Dutch speakers, English speakers used more low‐certainty terms in general (e.g., may) and as a function of various experimental factors. We conclude that the prior cross‐linguistic observations of the link between FTR and psychological discounting may be caused by the connection between low‐certainty modal structures and probability discounting, rather than future tense and temporality.
Introduction
Do differences between languages change the way people think, feel, and act? The linguistic relativity hypothesis suggests that they do (Whorf, 1956; also see Gumperz & Levinson, 1996; Leavitt, 2011; Lucy, 1992). The idea is that languages force speakers to notice different things in order to communicate and that the resultant differences in online attentional demands can grow through lifelong language use into entrenched offline cognitive differences (Wolff & Holmes, 2011). For instance, when choosing between the English demonstratives this and that, speakers need only pay attention to whether the referred-to object is located near or far from themselves. Spanish breaks this space into three degrees of distance: este 'this,' ese 'that,' and aquél 'that' (distant, i.e., 'yon' [archaic]). Malagasy breaks it into seven (Evans, Bergqvist, & San Roque, 2018). Might speakers of Spanish or Malagasy be faster or more precise at estimating distance from ego? A growing body of research attests to effects like this (see Casasanto, 2016; Everett, 2013; Lupyan, Rahman, Boroditsky, & Clark, 2020; Majid, 2018; Wolff & Holmes, 2011).
A typical way linguistic relativity research progresses is by identifying cross-linguistic differences and then investigating whether they give rise to corollary cognitive effects (Lucy, 1997, 2016). In this vein, economists have been exploring whether cross-linguistic differences in the grammatical rules that apply when forming linguistic utterances about future events (future time reference, or FTR) 1 affect speakers' subjective estimation of the value of delayed outcomes (for review, see …). This is referred to as "temporal discounting." However, prior research of this kind has been criticized for its superficial treatment of FTR (Dahl, 2013; McWhorter, 2014; Pereltsvaig, 2011; Pullum, 2012; Sedivy, 2012). In this paper, we aim to develop a more comprehensive understanding of the relation between various strategies for talking about the future and the cognitive biases behind psychological discounting in order to help develop the linguistic savings hypothesis.

Fig. 1. Mechanisms by which FTR grammaticization is hypothesized to affect temporal beliefs and therefore discounting. K. Chen (2013) hypothesized that speakers of weak-FTR languages would construe future events as more temporally proximal (a) or less temporally precise (b). In (a), distal representations lead to decreased relative subjective value in strong-FTR speakers; in (b), more precise temporal representations lead to relatively lower average subjective value in strong-FTR speakers. We have presented the mechanisms in simplified terms. The distance mechanism is presented as a point estimate (a), and the precision mechanism is presented as the mean of a two-item uniform distribution (b). In K. Chen's (2013) account, temporal beliefs are represented as normal distributions and subjective values are integrals. The discounting function plotted is a hyperboloid function, V = A/(1 + kD)^s, from Green and Myerson (2004), where V is subjective value, A is the objective amount, D is the delay, k is a parameter that governs discounting rate, and s is a non-linear scaling factor typically less than 1. This function has been found to accurately describe empirical discounting rates in humans (Du, Green, & Myerson, 2002; Green & Myerson, 2004; Green, Myerson, & Vanderveldt, 2014; Vanderveldt, Green, & Myerson, 2015). Plotted values for s and k approximate average human discounting rates for the given delay D (0-100 months) and value V ($200 in this case), that is, s = 0.7 and k = 0.4 (from Green & Myerson, 2004).

Second, he hypothesized weak-FTR languages might not mandate that speakers think as precisely about the temporal location of future events (Fig. 1b). The idea is that strong-FTR languages divide the "arrow of time" into three segments (past vs. present vs. future). Weak-FTR languages divide it into two (past vs. present + future). K. Chen (2013) hypothesized that this finer segmentation in strong-FTR languages causes more precise temporal representations of future events. If beliefs are affected in either of these ways, it would lead to relatively less discounting in weak-FTR speakers (see Fig. 1). This would cause speakers of weak-FTR languages to be more future oriented (K. Chen, 2013).
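For concreteness, the hyperboloid discounting function from the figure caption can be evaluated as follows, using the approximate parameter values given there (s = 0.7, k = 0.4, delays in months, amount $200); this is purely illustrative.

```python
def hyperboloid_value(amount, delay, k=0.4, s=0.7):
    """Subjective value V = A / (1 + k*D)**s (Green & Myerson, 2004)."""
    return amount / (1 + k * delay) ** s

# $200 discounted over delays of 0-100 months:
for d in (0, 6, 12, 24, 60, 100):
    print(f"{d:>3} months -> ${hyperboloid_value(200, d):.2f}")
```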
Such differences in future orientation reliably predict real-world "intertemporal decisions," in which individuals balance present versus future costs and rewards. For instance, time preferences have been found to predict real spending (Bickel et al., 2010) and financial outcomes such as income levels and financial mismanagement (Hamilton & Potenza, 2012; Xiao & Porto, 2019). Time preferences also predict substance abuse tendencies, which often incur long-term costs (professional, social) but confer short-term benefits (hedonistic pleasure). This includes alcohol abuse (Vuchinich & Simpson, 1998), opioid dependency (Garami & Moustafa, 2019), and substance abuse in general (Kirby, Petry, & Bickel, 1999; Mejía-Cruz, Green, Myerson, Morales-Chainé, & Nieto, 2016). Health behaviors are often impacted as well, because many health-critical decisions involve trade-offs between immediate (dis)comfort and future (ill)health. For instance, time preferences predicted the odds of smoking cigarettes (Bickel, Odum, & Madden, 1999) and the likelihood of exercising in older individuals (Tate, Tsai, Landes, Rettiganti, & Lefler, 2015). Therefore, compared with speakers of strong-FTR languages like English, speakers of weak-FTR languages like Dutch are predicted to save more for the future, exercise more, and make healthier lifestyle choices.
K. Chen (2013) tested these predictions by using FTR status in regression analyses to predict a range of behaviors. He found that speakers of weak-FTR languages were more likely to have saved each year, retired with more assets, were less likely to have smoked, and were more likely to practice safe sex. He also found they were healthier, as indexed by obesity, peak blood flow, grip strength, and physical exercise levels. Since then, numerous studies have extended this basic approach. For example, speakers of weak-FTR languages engaged less in present-oriented accounting practices (Fasan, Gotti, Kang, & Liu, 2016; J. Kim, Kim, & Zhou, 2017), had better educational outcomes (Figlio et al., 2016), made healthier lifestyle choices (Guin, 2017), had greater support for future-orientated environmental policies (Mavisakalyan, Tarverdi, & Weber, 2018; Pérez & Tavits, 2017), and had better macroeconomic performance (Hübner & Vannoorenberghe, 2015a, 2015b). A number of other studies attest to the conclusion that FTR status is a reliable predictor of intertemporal behavior (S. Chen, Cronqvist, Ni, & Zhang, 2017; Chi, Su, Tang, & Xu, 2018; Galor et al., 2016; Liang et al., 2018; Lien & Zhang, 2020; Sutter, Angerer, Glätzle-Rützler, & Lergetporer, 2015; Thoma & Tytus, 2018). Although there are various statistical concerns with the robustness of these associations (Gotti, Roberts, Fasan, & Robertson, 2021; Roberts, Winters, & Chen, 2015), practically all studies make simplified assumptions about FTR typology. We now turn to some criticisms of these assumptions.
Critical perspectives on the linguistic savings hypothesis
In this section, we outline three issues with the theory and evidence for the linguistic savings hypothesis. These are (a) probability may be a confounding factor in observed effects of FTR status, (b) modal FTR expressions are disregarded despite being an important way of talking about the future, and (c) temporal accounts of the future tense disregard the modal semantics of future tenses themselves.
Probability may confound observed findings
A serious issue is that (as far as we know) no work has directly tested the temporal mechanisms proposed by K. Chen (2013). Regression analyses which use FTR status to predict real-world intertemporal behavior cannot identify whether temporal or probability discounting is driving outcomes. Probability discounting is analogous to temporal discounting. It refers to the subjective devaluation of outcomes as their odds of occurring reduce (Green et al., 2014; Rachlin et al., 1991). For example, most people would prefer $100 over a 50% chance of receiving $100. However, offer a 50% chance of $200, and some will choose to gamble while others will choose the guaranteed $100. Differences like this are referred to in terms of "risk preferences." Recently, there has been an increasing interest in investigating outcomes which are both delayed and risky, for example, $100 or a 50% chance of $200 in a year (Luckman, Donkin, & Newell, 2018; Vanderveldt et al., 2015; Vanderveldt, Green, & Rachlin, 2017). These are referred to as "risky intertemporal decisions." Many (if not all) of the behaviors found to be predicted by FTR status involve risky intertemporal decision-making. Even nominally risk-free outcomes usually involve some degree of uncertainty. For instance, the pursuit of educational goals is fraught with uncertainty about their relative rate of return (Figlio et al., 2016). The discounting of future suffering in the context of support for euthanasia is permeated with uncertainty about the relative extent of future suffering (Lien & Zhang, 2020). And accountants undertaking earnings management must weigh the probability of being caught (Fasan et al., 2016; Kim et al., 2017). Even the main finding in K. Chen (2013) involves predicting whether survey respondents had saved in the past year, which could have involved investment in risky assets such as stocks and shares (World Values Survey Association, 2014).
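To illustrate how probability discounting plays out in the $100-versus-gamble example above, here is a small sketch of ours using the standard hyperbolic form V = A/(1 + hθ), with θ the odds against the outcome (Rachlin et al., 1991); the discount parameter h = 1 is an arbitrary illustrative value, not an empirical estimate.

```python
# Illustration of probability discounting with the hyperbolic form
# V = A / (1 + h*theta), where theta = (1 - p) / p is the odds against the
# outcome (Rachlin et al., 1991). h = 1 is an arbitrary illustrative value.

def odds_against(p):
    return (1 - p) / p

def probability_discounted_value(amount, p, h=1.0):
    return amount / (1 + h * odds_against(p))

sure_100 = 100.0
risky_200 = probability_discounted_value(200, p=0.5)
print(f"sure $100          -> value {sure_100:.2f}")
print(f"50% chance of $200 -> value {risky_200:.2f}")
# With h = 1 the gamble is worth exactly $100; individual differences in h
# (risk preferences) tip the choice toward gambling or toward the sure thing.
```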
Critically, probability and delay have been found to interactively predict subjective estimations of future value (Vanderveldt et al., 2015, 2017). Models which combine these factors fitted empirical results better than models which isolate them (Luckman et al., 2018). The probability of a reward had a greater impact on temporal discounting rates than delay had on probability discounting (Vanderveldt et al., 2015). These results support the conclusion that probability and delay interact to inform intertemporal decision-making. This is a critical issue for the linguistic savings hypothesis. FTR status has been found to predict a range of behaviors. However, the nature of the outcomes makes it unclear why this is the case. Is probability or temporal discounting driving results?
FTR-status and modal future time reference
K. Chen (2013) uses obligatory tense marking of prediction-based FTR as a proxy for FTR grammaticization. This may be reasonable (Dahl, 2000b), but what is it a proxy for? The expression of future time is very complex and often involves the expression of modal notions of ability, desire, (un)certainty, probability, volition, intention, and obligation (see Bybee & Dahl, 1989; Bybee et al., 1994; Fries, 1956; Palmer, 2001). Modality involves quantifying what is likely-unlikely, or possible-necessary, relative to various modal "bases" (Kratzer, 1977; Palmer, 2001). For instance, deontic modality involves expressing what is desirable or necessary relative to social norms, taboos, and institutions (Palmer, 2001), for example, One should always get up early. In epistemic modality, speakers express what is likely relative to what they know or believe (Palmer, 2001), for example, I really think he's got a chance! The grammaticization of FTR can involve multidimensional obligatorization processes, in which many of these domains simultaneously become more grammaticized (Hopper, 1996). Epistemic modality is of critical relevance to questions of psychological discounting. Risky intertemporal preferences are impacted by the perceived likelihood of a future outcome. The obligation to use low-certainty modal FTR constructions might cause strong-FTR speakers to construe future events as more risky. In suggesting this, we are sympathetic to accounts which treat modal expressions as scalar operators which map transparently onto notions of probability. Rather than traditional accounts which invoke Boolean quantification (Kratzer, 2012), modal semantics are seen as encoding the likelihood of events on a one-dimensional scale between high (p = 1) and low (p = .5) certainty. 3 Evidence suggests scalar accounts capture modal semantics better than notions of Boolean quantification, since the latter yields incorrect predictions in some linguistic contexts (Lassiter, 2015). With this in mind, it is uncontroversial that modal constructions encode weakened certainty relative to the future tense (Enç, 1996; Huddleston & Pullum, 2002; Palmer, 2001). If such operators map onto scalar notions of probability, the obligation to use "low-probability" modal constructions could cause strong-FTR speakers to construe risky future outcomes as having a lower probability of occurring and therefore as less valuable. This is problematic because FTR status affects the extent to which languages oblige the encoding of low-certainty epistemic modality. For instance, will is not actually obligatory for prediction-based FTR. Rather, English obliges speakers to use will or another modal verb:

(2) a. The Bears will win (tonight). …

Any of examples (2a-h) are perfectly acceptable. These modal verbs all encode futurity but express differing speaker commitment to the probability of the event occurring (Karawani & Waldon, 2017). If the English case generalizes, the salient difference between strong- and weak-FTR languages might be that strong-FTR languages oblige speakers to use a modal verb to encode whether they think an event will occur. If this results in more frequent net use of low-certainty linguistic structures, such linguistic spotlighting might cause increased probability discounting in strong-FTR speakers.
Critically, it is unclear whether the grammatical distinction noted above actually results in more frequent use of low-certainty language in English FTR as compared to Dutch. In Dutch, kunnen 'may' is the only modal verb for which epistemic use is possible, and it encodes possibility (Nuyts, 2000). Kunnen is not obligatory for prediction-based FTR, whereas the English modals are. It seems plausible that this results in higher encoding of low-certainty modality in English. However, Dutch speakers might be making up for language-level grammatical constraints by expressing low certainty in other ways. For instance, in English and Dutch, epistemic modality can be expressed using modal modifiers, for example, English possibly, probably, certainly; Dutch mogelijk(erwijze) 'possibly,' waarschijnlijk 'probably,' zeker 'certainly.' Mental state predicates might also facilitate the expression of complex modal notions about the future. These are psychological verbs which allow speakers to express modal notions by talking about their thoughts and beliefs (Nuyts, 2000). In English and Dutch, the mental state predicate prototypically used to express epistemic modality is think (Dutch denken 'think'), while believe (Dutch geloven 'believe') is also fairly common (Nuyts, 2000, p. 110) and know (Dutch weten 'know') has a minor role (Nuyts, 2000, p. 130). There may also be modal FTR differences which cut across the FTR status dichotomy. For instance, Dutch has a system of modal particles which can attenuate other modal structures, for example, wel eens 'well be' (approximate) (Nuyts, 2000). The flavor of this can be seen in well in English. For instance, That could well be the train arriving communicates strengthened modality compared with That could be the train arriving. However, English lacks this word class. English and Dutch both exhibit sophisticated systems for expressing modal notions (see Nuyts, 2000). However, the relevant question for linguistic relativity is not what may be said but what must be said (Jakobson, 1971). The English modal system is obligatory. If this causes English speakers to use more low-certainty modals in FTR, this could impact risky intertemporal preferences.
Do future tenses encode time or modality?
A further issue is that future tense markers tend to be characterized by a division of labor between temporal and modal semantics (Dahl, 2000b). Tenses are usually thought of as deictic expressions, which relate the time of a referenced event to the time of speech (Lyons, 1968; Mezhevich, 2008). In a typical ternary account of tense, Klein (1995) proposes that tense clarifies the temporal order between the utterance time and the reference time: the present tense indicates reference time and utterance time are the same, the past tense indicates reference time precedes utterance time, and the future tense indicates reference time follows utterance time. For instance, in English:

(3) a. Past: It rained.
b. Present: It is raining. 5
c. Future: It will/shall/is going to rain.
What is being expressed in examples (3a-c) is when, relative to the time of utterance, the event in question takes place. Other theoretical treatments of tense eschew ternary models inspired by properties ascribed to time by contemporary physics (Broekhuis & Verkuyl, 2014). For instance, Te Winkel (1866), and later Verkuyl (2008), combine elements of tense and aspect in positing that there are eight Dutch tense forms based on three binary oppositions: (1) present versus past, (2) synchronous versus posterior, and (3) imperfect versus perfect. In the present-synchronous category, an imperfective statement would be Elsa loopt 'Elsa walks' (i.e., the simple present), while a perfective statement would be Elsa heeft gelopen 'Elsa has walked' (the present perfect); whereas in the present-posterior category, an imperfective statement would be Elsa zal lopen 'Elsa will walk' (simple future), and a perfective would be Elsa zal hebben gelopen 'Elsa will have walked' (future perfect) (examples from Broekhuis & Verkuyl, 2014). Thus, past/present distinguishes between what most would consider past and present tense, synchronous/posterior distinguishes between past + present on the one hand and future on the other, and perfective/imperfective distinguishes between the English simple and perfect aspect (which express deictic time relations relative to the time of reference rather than the time of utterance). These are two accounts of tense. The salient point is that tenses are semantically defined as those linguistic structures which encode notions about when in time events occur relative to the time of speech (Lyons, 1968).
However, it is often difficult to account for future tense semantics entirely in the framework of deictic time relations. This is because future tenses tend to comprise a mixture of modal, temporal, and aspectual notions (Dahl, 2000b). To understand this discussion, it is necessary to understand what we refer to as "FTR mode." FTR mode is a set of notions which are essential to understanding FTR. They delineate the contexts in which it is possible to refer to future events. As we have mentioned, these are (a) intentions, (b) predictions, and (c) schedules. We follow Dahl's (2000b) useful schema by defining these categories as follows.
Intentions are statements about our own or other people's intentions for the future, for example, I shall see what's behind that door. Speakers can usually be fairly certain about their own intentions, because they have access to the internal contents of their own minds. Schedules are high-certainty statements about well-known scheduled events, for example, the game is at 6 pm. Predictions are statements about less well-known events about which the speaker cannot be sure. For instance, that coin will land on heads is a prediction.
The modal semantics of the English will: The case that will encodes modal weakening usually involves pointing out that it becomes increasingly obligatory as the implied certainty decreases from schedules, to intentions, to predictions:

(4) a. Sunrise is/?will be at 6am.
b. I set out/am setting out/will set out for the coast soon.
c. The bomb ?explodes/?is exploding/will explode soon. (Bouma, 1975)

These sentences are syntactically similar, and all refer to the future. However, will becomes obligatory as the FTR mode grows increasingly uncertain. In example (4a), will sounds out of place. It manages to convey an overly formal register, that is, as a maître d' might announce Dinner will be served at 7. While it is grammatical, it does not seem standard. In example (4b), will does not serve strictly as a marker of future time. Rather, the meaning changes as a matter of stress. In I will set out for the coast…, with stress on I, the speaker will go (as opposed to someone else); with stress on will, the speaker is in fact going (as opposed to not going at all). Apart from the use of will to express such notions, the present tense is likely more common. On the other hand, in example (4c), neither the present nor the present progressive is grammatical. On the basis of acceptability judgments like these, it is usually suggested that will marks prediction rather than FTR (Enç, 1996; Dahl, 2000b; Huddleston, 1995; Fries, 1956; Klecha, 2014).
Such a conclusion is supported by the fact that it is perfectly acceptable to use will to mark a prediction in present time contexts. For instance, on hearing a knock at the door, it is grammatical to say either of:

(5) a. That will be the postman.
b. That is the postman.
In example (5a), will marks a present time prediction. This suggests that the semantics of will are not strictly temporal. Rather, will tends to mark predictions regardless of the time frame (Enç, 1996; Giannakidou & Mari, 2018; Huddleston, 1995; Huddleston & Pullum, 2002; Klecha, 2014, inter alia). Some commentators have pointed out that will may also operate as a marker of modal necessity, similarly to must (Giannakidou, 2017; Giannakidou & Mari, 2018). For instance, in example (5a), will expresses something similar to that must be the postman. A relevant point here is that such statements actually also express modal weakening relative to statements of fact (Giannakidou & Mari, 2018). In other words, that must be the postman implies that the speaker is inferring this, perhaps on the basis of relevant knowledge. If they knew it were the postman, they would just use example (5b).
For these and other reasons, most scholars agree that a purely temporal interpretation of will is inadequate, though the precise modal semantics of will are debated (Broekhuis & Verkuyl, 2014;Cariani & Santorio, 2018;Dahl, 2000b;Enç, 1996;Fries, 1956;Huddleston, 1995;Huddleston & Pullum, 2002;Klecha, 2014;Sarkar, 1998;Salkie, 2010). For instance, obliging the use of will for predictions may spotlight the uncertainty associated with this FTR mode. The meaning of will may be associated with its use, that is, it may "mean" epistemic weakening. On the other hand, there do not appear to be any convincing demonstrations that the modal weakening of will in example (5a) "carries over" when will is used to mark future predictions. It seems unclear that it would, given it is not possible to use the present tense for prediction-based FTR in English. In fact, as we have pointed out, it is not actually obligatory to use will in example (5a). English speakers are rather obliged to use one of the English modals. A paradigmatic analysis of the options available indicates that will therefore encodes high certainty: It is among the highest certainty options available. This echoes suggestions that it is a marker of epistemic necessity (Giannakidou & Mari, 2018;Klecha, 2014).
The modal semantics of the Dutch zullen: Similar debates are had about the theoretical status of the Dutch future, zullen 'will.' Is it a modal or a tense? Broekhuis and Verkuyl (2014) make the case that its semantics are only modal. The authors point out that the Dutch present tense can be used to refer to a time span encompassing both before (using the present perfect) and after the time of speech. On this basis, it is concluded that the contribution of zullen must be purely modal (Fehringer, 2018; Giannakidou, 2014, 2017). They give the following examples, in which the uncontroversial modal auxiliaries of possibility, kunnen 'may,' and necessity, moeten 'must,' are contrasted with zullen 'will':

(6) Dutch
a. Dat huis op de hoek moet instorten.
   that house on the corner must collapse:PRS
   'That house on the corner must be collapsing.'
b. Dat huis op de hoek kan instorten.
   that house on the corner may collapse:PRS
   'That house on the corner may be collapsing.'
c. Dat huis op de hoek zal instorten.
   that house on the corner will collapse:PRS
   'That house on the corner will be collapsing.'

According to Broekhuis and Verkuyl (2014), examples (6a-c) are all compatible with a future reading. However, given concurrent evidence of a collapse actually occurring (i.e., rumbling, visible instability), they can also refer to a present time event (Broekhuis & Verkuyl, 2014). If both present and future time interpretations are possible for zullen, they suggest that its primary contribution cannot be temporal and must be purely modal. This is probably an extreme position, but the more modest assertion that zullen encodes modal semantics appears uncontroversial. For instance, the Algemene Nederlandse Spraakkunst, which is a standard reference for Dutch speakers (Fehringer, 2018), indicates that zullen tends to encode low certainty, while greater certainty is expressed by gaan 'be going to,' though these differences may be limited to interrogative contexts (Geerts, Haeseryn, Romijn, de Rooij, & van den Toorn, 1997). Like the English shall, zullen grammaticized from a Germanic word meaning "to owe" (Dahl, 2000b, p. 319), and it historically retained a deontic flavor, expressing obligations and necessities (Fehringer, 2018), as well as epistemic supposition (Fehringer, 2018) and simple FTR (Behydt, 2005). Fehringer (2018) points out that, both synchronically and diachronically, it is difficult to disentangle zullen's modal and temporal semantics, leading scholars to question whether a clear partition is even possible. As with English be going to future constructions, gaan emerged much later as a future marker and retains elements of its earlier "movement towards a goal" meaning. This may lend itself to the expression of intentions (Fehringer, 2018). At the same time, there may be differences in temporal semantics between gaan and zullen: Some scholars suggest the former may encode near, and the latter distal, future time (Behydt, 2005; Ten Cate, 1991) (the same observation has been made of English be going to versus will; Behydt, 2005; Royster & Steadman, 1923). There are also regional differences; for instance, gaan is more common and may be more grammaticized in West-Flemish Dutch as compared to (Northern) Dutch (Behydt, 2005; Fehringer, 2018). Like will, modern zullen seems characterized by an admixture of modal and temporal semantics (Kirsner, 1969; also see Janssen, 1989; Fehringer, 2018; Olmen, Mortelmans, & Auwera, 2009; Sluijs, 2011), a statement that applies to many future "tenses."

Comparing future tense semantics in English and Dutch: As with will, the exact nature of the semantic contribution of zullen is difficult to pin down. Broekhuis and Verkuyl (2014) suggest zullen constitutes marking of an expected or "projected" future. A paradigmatic analysis is useful. In Dutch, it is possible to use the present tense for prediction-based FTR (Behydt, 2005; Dahl, 2000b). The Dutch future-reference present tense may encode complete certainty (Behydt, 2005). This suggests that zullen encodes modal weakening relative to present tense FTR. On the other hand, relative to kunnen 'may,' zullen appears to encode higher certainty. In contrast, the English future is the highest certainty option available for prediction-based FTR. This means paradigmatic analyses of will and zullen lead to different conclusions contingent on FTR status. The English future tense is the highest certainty construction possible for future predictions.
On the other hand, zullen and gaan may be paradigmatically contrasted with present tense FTR. Relative to such unmarked statements of fact, any modalization is weaker. The paradigmatic oppositions of future tenses may therefore differ as a function of the cross-linguistic differences indexed by FTR status. Alternatively, if will and zullen are markers of futurity, serving to move reference time posterior to utterance time, then, by this account, their semantics are both high certainty.
Implications for linguistic relativity
Relativity accounts of how FTR grammaticization impacts (risky) intertemporal decisions need to confront these evident complexities. Critically, K. Chen's (2013) arguments ignore the division of labor between temporal and modal semantics which often characterizes future "tenses." We have outlined plausible arguments that will and zullen encode either modal strengthening or weakening. Which of these accounts is closer to reality has important implications. If modal weakening is encoded, obligatory future tenses should cause speakers to perceive the future as less certain. They would therefore discount more. If modal strengthening is encoded, future outcomes might be construed as more certain. Speakers would therefore discount less. Additionally, cross-linguistic differences in future "tense" semantics undermine K. Chen's (2013) argument that obligatory use of the future tense should impact speakers of different languages in the same way.
In other words, FTR tends to entangle the notional domains of time and probability, and both domains impact subjective estimations of value. Research which isolates only one of these factors (time) may be producing biased results due to unmeasured confounding variables (probability). Alternatively, the grammaticization of modality may actually be driving reported results. At the same time, the extent to which the encoding of future probability is obligatory in strong-FTR languages is not known (as far as we know). As we have pointed out, modal systems are flexible enough to permit lexical workarounds. Additionally, arguments among linguists have not resolved questions as to the modal semantics of future tenses, despite this having implications for the linguistic savings hypothesis. Therefore, these factors should be studied in a sample of both weak-and strong-FTR languages. This is what we undertook to do.
Study overview and hypotheses
To establish FTR language use, we created an FTR-elicitation task based on Dahl's (1985, 2000b) FTR questionnaires. K. Chen's (2013) FTR status dichotomy is largely based on work by Dahl and colleagues in the EUROTYP Working Group on Tense and Aspect (Dahl, 2000a), so this was an appropriate starting point. In this task, participants were given a context and a target sentence. The main verb in the target sentence was unconjugated, and participants were asked to render the target sentence given the context. All contexts referred to future events. We made several modifications to the original questionnaire. In addition to creating many new items, we modified the contexts to include information about the likelihood of the referenced event occurring. This change was made in order to elicit modal future-referring language. We refer to this as the "modality condition." In order to elicit language from a wide variety of contexts, we included a range of temporal distances from time of speech as well as examples from each FTR mode.
After completing the FTR-elicitation task, participants completed two additional measures which allowed us to establish whether future tenses encode temporal or modal notions. In the first instance, participants rated FTR structures in terms of whether they perceived them to be temporally distal or temporally proximal. In the second, they rated FTR structures in terms of whether they perceived them to encode high or low certainty. We made several predictions.
Predictions about FTR mode: A modal verb is obliged in prediction-based FTR in English but not Dutch, and most modals are low certainty (see Section 1.2.2). We, therefore, predicted that English-but not Dutch-speakers would be more likely to use low-certainty language for prediction-based FTR. We refer to this as the uncertain predictions hypothesis.
Predictions about modality condition: Relativity researchers have postulated that the grammatical obligation to mark some domain can, over time, cause speakers to become more attentive to that domain (Wolff & Holmes, 2011). If English obliges speakers to encode notions of low-certainty FTR, we reasoned that this might make speakers more attentive to the modal characteristics of the speech context. We, therefore, predicted that use of low-certainty language would be higher for English participants in the low-certainty condition. We refer to this as the low-certainty-sensitivity hypothesis, that is, because English speakers are predicted to be more sensitive to the low-certainty condition.
Predictions about effects of temporal distance in the FTR-elicitation task: We reasoned that if English speakers use more low-certainty FTR language, this could, over time, lead to stronger cross-modal mapping between temporal distance and notions of low certainty, that is, that English speakers might construe temporally distant events as inherently uncertain. We, therefore, predicted that English speakers would use more low-certainty language as a function of temporal distance, but this would not be true of Dutch speakers. We refer to this as the English cross-modal-mapping hypothesis.
Predictions about temporal-distance ratings: We made two predictions about temporal-distance ratings. On the basis of the linguistic savings hypothesis, we predicted (a) that future tenses would be rated as more distant than present tenses and (b) that Dutch participants would construe future events as more proximal (since higher future tense use in English should lead speakers to construe future events as distal). We refer to these predictions as the linguistic-savings-distance hypotheses.
Exploratory analyses: With regard to ratings of high versus low certainty, we did not make any hypotheses. Rather, we chose to conduct exploratory analyses.
Participants
A final sample of N = 651 participants completed the study (n = 330 in (British) English [n = 165 female, n = 162 male, n = 3 other], n = 321 in Dutch [n = 161 female, n = 159 male, n = 1 other]). This is after one participant was excluded because their age datum was missing. Data were collected between September and November 2019. English participants were recruited from Prolific Academic and Dutch participants were recruited from Qualtrics. Participants were native English and Dutch speakers currently residing in the United Kingdom and the Netherlands. The sample was matched to United Kingdom population norms for age and sex. Ethical approval for the study was granted by the University of Oxford Internal Review Board (ref. no. R39324/RE001). All participants were remunerated.
Materials
The study comprised three tasks: (1) an FTR-elicitation task designed to establish future-referring language, (2) a subjective-temporal-distance task designed to establish whether the tense of an FTR statement (future vs. present) impacted participants' construals of future temporal distance, and (3) a subjective-certainty task designed to establish whether participants construed FTR structures as encoding high or low certainty.
The FTR-elicitation task
Participants were given a context and a target sentence and were tasked with typing in the conjugated target sentence. Before starting, participants were advised that there "were no correct answers," and that they should complete the questionnaire sentences "as though they were speaking to a close friend." They were given two training items with example responses, and one trial item where they typed in a response. These were in the past tense in order to avoid biasing participants. There was one attention check: At a random point, participants were instructed to enter the word "dance" (Dutch "dans"). If they failed to do this, they were ejected from the survey immediately.
There were three within-subjects factors in the task: FTR mode (predictions, intentions, schedules); modality condition (high certainty, low certainty, neutral); and temporal distance (1 month, 2 months, 3 months, 6 months, 1 year, 5 years). FTR mode was operationalized by constructing contexts which matched the criteria given in Section 1.2.3. Temporal distance was operationalized using temporal adverbials in the contexts, for example, "1 month," "1 year," etc. Modality condition was operationalized by giving participants numerical "certainty information" above each target sentence, for example:

Context: Chris's brother {SEND} him some money next month. You never know with him… When he gets it…
Certainty: 50% certain.
Target: …he {SPEND} it at the bar.
A typical response might be "He'll likely spend it at the bar." Prior to starting, participants were told "there will be some 'certainty information' included in the context." They were informed that "this indicates how certain you are about what you are saying." They were then directed to "please imagine you are this certain and write down what you would say." For schedules and predictions, participants were told they were supposed to be "___% certain," and for intentions they were told they were supposed to be "___% decided" (this was because it was difficult to make "certain" agree with all intention contexts). In the low-certainty condition, certainty information varied between 40%, 50%, and 60%. This was implemented to try to maintain participant engagement. In the high-certainty condition, certainty information was invariably 100%. In the neutral condition, no certainty information was given. In creating FTR mode, we counted as intention any intention statement, whether it was first or third person. This was to try to isolate third-person intention (which can be difficult to differentiate from prediction, for example, John will go out later) from language usage in more prototypical prediction contexts.
The modality conditions were constructed by conserving syntactic structure while minimally altering semantic details between items at matched levels of temporal distance and FTR mode. This was done in order to address the possibility that idiosyncratic aspects of items were driving language usage. Semantic details (nouns, names, pronouns) were altered, but other linguistic details (e.g., sentence length and syntactical structure) were only minimally changed to ensure the certainty information did not clash with the certainty implied by the context of the item (see Table 1 and Supporting Information Figs. A.3 and A.4; see the Supporting Information for full questionnaire and example responses).
In each temporal distance by modality condition cell, there were five critical items: three prediction items, one intention item, and one scheduling item. This means there were 15 critical items per temporal distance (3 prediction × 3 mod. cond. + 1 intention × 3 mod. cond. + 1 schedule × 3 mod. cond. = 15). There were 90 critical items in total (6 temp. dist. × 5 items × 3 mod. cond. = 90). Because of time constraints, each participant completed 60 randomly selected trials. Trial order was randomized, and one trial was displayed per page.
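As a quick check on this arithmetic, the short sketch below (ours; condition and item labels are illustrative) enumerates the design and confirms the counts of 15 items per temporal distance and 90 items overall.

```python
# Enumerate the factorial design described above and verify the item counts:
# 5 critical items (3 prediction + 1 intention + 1 schedule) per temporal
# distance x modality condition cell, 6 temporal distances, 3 conditions.

from itertools import product

temporal_distances = ["1 month", "2 months", "3 months", "6 months", "1 year", "5 years"]
modality_conditions = ["high certainty", "low certainty", "neutral"]
items_per_cell = ["prediction 1", "prediction 2", "prediction 3", "intention", "schedule"]

design = list(product(temporal_distances, modality_conditions, items_per_cell))

per_distance = len(modality_conditions) * len(items_per_cell)
print(per_distance)   # 15 critical items per temporal distance
print(len(design))    # 90 critical items in total; each participant saw 60 at random
```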
Text classification: After an initial exclusion of n = 240 observations because of missing demographic data, there were N = 38,398 text responses. It was therefore necessary to automate the scoring of responses in terms of whether they used the present tense, future tense, or some kind of modal expression. To accomplish this, we wrote a keyword-based, deterministic, closed-vocabulary classification program written in Python (Python Software Foundation, 2017). We refer to it as the FTR-type classifier. It comprises a number of word lists which are used in combination with a set of rules to classify text items according to which tense and/or modality words they contain. The FTR-type classifier categorizes text data into four exclusive semantic categories: future tense, present tense, low certainty, and high certainty. The latter two are further divided into two non-exclusive categories based on whether a modal verb or some other construction type is used (see below). Each category is coded with (1) to indicate a response is a positive example, otherwise (0). These comprise the dependent variables for this task.

Note (Table 1). Minimal alterations between modality conditions were implemented to constrain possible idiosyncratic item effects (due to irreconcilable semantic differences across FTR modes and temporal distances, this was not possible across the levels of these conditions). For example, the intention item in each certainty condition follows a conserved structure.

In English and Dutch, modal words can be used in combination with the future and present tense. For example, It will probably rain and It will definitely rain are both future tense, but different epistemic commitments are expressed. Similarly, They could win tonight and The game definitely is at 7 are both present tense, but different modal notions are expressed (on present time modals, see Condoravdi, 2002). Since we were not interested in formal tense structure and were rather attempting to explore differences in marking of the notional domains involved, it was appropriate to have epistemic modal morphemes "dominate" tense morphemes. Specifically, responses which used both tense and modal words were classed as low certainty (or high certainty) and not also as future or present tense. We outline the FTR-type classifier's classification system below (see the Supporting Information).
PRESENT TENSE: Responses were classed as present tense if they conjugated the main verb in the target sentence using the present tense and also failed to be classed as any of the other categories.
FUTURE TENSE: Responses were classed as future tense if they used commonly accepted "future" auxiliaries or explicit temporal adverbials (English will, shall, be going to, about to; Dutch zullen 'will,' gaan 'be going to,' staat op 'about to'). Any response exhibiting these words, without additional modal epistemic words, was counted as future tense.
VERBAL-LOW-CERTAINTY: Responses which used low-certainty modal verbs were classed as verbal-low-certainty (English can, could, may, might, should; Dutch kunnen 'may'). A prototypical example is This team might/may/could/should win tonight.
VERBAL-HIGH-CERTAINTY: Responses which used modal verbs which encode high certainty were classed as verbal-high-certainty (English must; Dutch moeten 'must'). A prototypical example is I must remember to take in the laundry, although this suggests a deontic or bouletic base (i.e., having to do with obligations or desires, respectively). In fact, clearly epistemic contexts in which must sounds natural in English are difficult to find; for example, The test tonight must be difficult seems to again suggest a bouletic rather than epistemic base. We nonetheless include it as the only criterion for the verbal-high-certainty category.
OTHER-LOW-CERTAINTY: Responses which used modal expressions indicating low certainty (apart from modal verbs) were classed as other-low-certainty. This includes low-certainty modal modifiers (English possibly, probably, potentially, etc.; Dutch misschien 'perhaps,' mogelijk 'possibly,' waarschijnlijk 'probably,' wellicht 'maybe,' etc.). A prototypical example of a modal modifier encoding low-certainty FTR is It will possibly rain tonight. It also includes low-certainty mental state predicates (English think, believe, reckon, etc.; Dutch denken 'think,' aannemen 'assume,' veronderstellen 'suppose,' etc.). A prototypical example might be, I think it's going to be a hard win. Finally, it also includes low-certainty epistemic modal particles (Dutch wel eens, wel, approximately 'well be,' 'well,' as in There could well be rain later.).
OTHER-HIGH-CERTAINTY: Responses which used modal expressions which encode high certainty (apart from modal verbs) were classed as other-high-certainty. This includes modal modifiers (English certainly, definitely, absolutely, etc.; Dutch zeker 'certainly,' definitief 'definitely,' etc.). A prototypical example is The storm will definitely hit the east coast this week. It also includes high-certainty modal particles (Dutch toch, approximately 'fixed,' 'firm').
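A heavily simplified sketch of this kind of keyword-based classification is given below. The abbreviated, English-only keyword sets and the naive whole-word matching are illustrative stand-ins for the fuller English and Dutch vocabularies and rules used in the actual FTR-type classifier.

```python
# Simplified sketch of a keyword-based FTR-type classifier in the spirit of
# the category descriptions above. Keyword sets are abbreviated and English
# only; matching is naive whole-word matching.

FUTURE_WORDS = {"will", "shall"}
FUTURE_PHRASES = ("going to", "about to")
VERBAL_LOW = {"can", "could", "may", "might", "should"}
VERBAL_HIGH = {"must"}
OTHER_LOW = {"possibly", "probably", "potentially", "think", "believe", "reckon"}
OTHER_HIGH = {"certainly", "definitely", "absolutely"}

def classify(response: str) -> dict:
    text = response.lower()
    words = set(text.replace(".", " ").replace(",", " ").split())
    low = bool(words & (VERBAL_LOW | OTHER_LOW))
    high = bool(words & (VERBAL_HIGH | OTHER_HIGH))
    future = bool(words & FUTURE_WORDS) or any(p in text for p in FUTURE_PHRASES)
    return {
        # Epistemic modal keywords "dominate" tense keywords, as described above.
        "low_certainty": int(low),
        "high_certainty": int(high and not low),
        "future_tense": int(future and not (low or high)),
        # A real classifier would check the conjugation of the main verb here;
        # this sketch simply defaults to present tense when nothing else matches.
        "present_tense": int(not (future or low or high)),
    }

print(classify("He'll probably spend it at the bar."))  # low certainty
print(classify("He will spend it at the bar."))         # future tense
```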
Data exclusions:
The FTR-type classifier cannot accurately classify responses which use negations, or responses which use words from two conflicting class-criteria keyword lists. We refer to the latter as "mixed modal" responses. In the first instance, modal keywords switch polarity in the presence of negations. For instance, I'm not certain it will rain tomorrow expresses low certainty. However, because of the presence of the high-certainty class-criterion keyword certain, it would be classed as high certainty. Similar indeterminability characterizes mixed modal responses. For instance, Rain tomorrow is certainly possible expresses moderate certainty, but would be classed as both other-high-certainty and other-low-certainty because of the presence of the class-criterion keywords certainly and possible. Since such responses were in practice low frequency, our strategy was simply to exclude them from data analysis. We, therefore, detected the presence of negations using an averaged perceptron tagger following Collins (2002) but with Brown cluster features as described by Koo, Carreras, and Collins (2008) and using greedy decoding (implemented in spaCy; Explosion AI, 2020). Of the total responses, n = 471 were excluded (n = 191 mixed-modality responses, n = 229 negations, and n = 51 because they were in both of these categories). This left a final sample of n = 37,927 responses.
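The following sketch is a simplified stand-in of ours for this exclusion step: it flags negations with spaCy's small English pipeline (assumed to be installed) plus a small negation word list, and flags mixed-modal responses via conflicting keyword sets. It does not reproduce the original pipeline's tagger, Brown-cluster features, or Dutch handling.

```python
# Simplified stand-in for the exclusion step: flag negated and "mixed modal"
# responses. Assumes spaCy and its small English model are installed
# (python -m spacy download en_core_web_sm); keyword lists are illustrative.

import spacy

nlp = spacy.load("en_core_web_sm")

NEGATION_WORDS = {"not", "n't", "never", "no"}
LOW_KEYWORDS = {"could", "may", "might", "possibly", "probably", "possible"}
HIGH_KEYWORDS = {"must", "certainly", "definitely", "certain"}

def should_exclude(response: str) -> bool:
    doc = nlp(response)
    has_negation = any(
        tok.dep_ == "neg" or tok.lower_ in NEGATION_WORDS for tok in doc
    )
    words = {tok.lower_ for tok in doc}
    mixed_modal = bool(words & LOW_KEYWORDS) and bool(words & HIGH_KEYWORDS)
    return has_negation or mixed_modal

print(should_exclude("I'm not certain it will rain tomorrow."))  # True (negation)
print(should_exclude("Rain tomorrow is certainly possible."))    # True (mixed modal)
print(should_exclude("It will probably rain tomorrow."))         # False
```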
FTR-type classifier reliability testing:
To test the reliability of the FTR-type classifier, linguistically trained coders annotated N = 1006 responses (n = 504 in English, and n = 501 in Dutch). Where systematic errors were found, the FTR-type classifier was adjusted. After this process, all accuracy metrics were > 0.99 (see the Supporting Information).
The subjective-temporal-distance task
In this task, participants were given two phrases. One used the future tense (English "Ellie will arrive later on"; Dutch "Ellie zal later aankomen."), and the other used the present tense (English "John is arriving later on"; Dutch "John arriveert later"). We refer to this manipulation as "tense condition." Both used the temporal adverbial "later on" to ensure that participants construed the present tense frame as referring to the future. Participants rated subjective temporal distance using a slider between "close to now" (0) and "far from now" (10). Numbered slider intervals were not displayed. Prior to starting, participants were told "you will also be asked to indicate how far away from you a length of time feels." For each item, they were told to "Indicate with the slider how far away from NOW the given time feels to you." Before beginning, participants were given one example involving past time reference ("9 months ago"). As a distraction task, participants also rated 10 objective future distances (later today, 1 week, 1 month, 2 months, 3 months, 6 months, 9 months, 1 year, 2 years, and 5 years). Item order was randomized, and one item was displayed per page.
The subjective-certainty task
In this task, participants used a slider to rate between "uncertain" (0) and "certain" (100) how much certainty they construed a given FTR statement as expressing. FTR statements were created by inserting different common FTR construction types into the same "base" sentence: "It {RAIN} next week." We chose representative examples from each of the coding categories of the FTR-type classifier: future tense ("It will rain…"), present tense ("It is raining…"), 6 verbal-low-certainty ("It could rain"), other-low-certainty ("It will possibly rain…"), and other-high-certainty ("It will definitely rain…"). Verbal-high-certainty was excluded because must/moeten are used to express deontic notions rather than epistemic high certainty about the future (Nuyts, 2000). (For a complete set of the items, see Fig. 7.) Prior to beginning, participants were told "you will be asked to indicate how much certainty each statement expresses in YOUR eyes." For each item, they were told, "Indicate how much certainty YOU would be expressing in the following statement." Before starting, they were given one training example involving past time reference: "I think Pete picked up bread yesterday." Item order was randomized, and one item was displayed per page.
Procedure
The study was hosted on the Qualtrics survey platform and was conducted online. (Participants recruited on Prolific were linked through to the Qualtrics survey.) It had a mixed design. The within-subjects factors for each task are described above. There was one between-subjects factor: survey language. At the beginning of the surveys, participants confirmed their first language and current residence. English speakers confirmed they were native English speakers residing in the United Kingdom, and Dutch speakers confirmed they were native Dutch speakers residing in the Netherlands. If they did not, they were immediately ejected from the survey. Following this, they answered some demographic questions (age, sex, income, education, marital status, and employment status), which were recorded as control variables. To understand whether multilingualism was affecting elicited language, participants then completed a second-language proficiency measure, in which they self-rated their proficiency for up to three second languages. Ratings were between "can ask directions and answer simple questions" (1) and "very fluent, can use the language as well as a native language" (5) (see the Supporting Information). Following this, participants completed the FTR-elicitation task, the subjective-temporal-distance task, and then the subjective-certainty task.
Results
We present an overview of results in Fig. 2. English speakers used more future tense and fewer present tense constructions. This reflects well-known differences between English and Dutch, that is, FTR status. Additionally, English speakers appeared to use more low-certainty language than Dutch speakers. This was mostly driven by modal verb use, for example, It could/may/might rain. English speakers used more low-certainty language for predictions than for any other FTR mode, a pattern which did not characterize Dutch (Fig. 2c).
To test our hypotheses, we combined verbal-low-certainty and other-low-certainty into a single dichotomous variable ("low certainty"), which was (1) for any response which encoded low certainty and otherwise (0). For example, responses like I think/believe/guess it will rain, It will possibly/probably/potentially rain, and It could/might/may/should/can rain would all be classed as low certainty (1). Multilevel modeling was appropriate, as responses from a single participant were likely to be similar across different items, and responses to a single item were likely to be similar across different participants. We followed standard practice by building models sequentially and using log-likelihood ratio tests to ascertain whether adding variables improved model fit (Aguinis, Gottfredson, & Culpepper, 2013; Legler & Roback, 2019; Raudenbush & Bryk, 2002; Twisk, 2006). Using generalized linear regression with a logit link function (logistic regression), we regressed binary (0,1) low-certainty language over a fixed intercept and then allowed intercepts to randomly vary by item and participant. We added fixed effects for language, FTR mode, modality condition, and temporal distance. For temporal distance, we used the natural log of the number of days from time of speech. Since effects might be expected to vary interactively, we also included all two-way interactions between these variables. Finally, we allowed slopes for language to vary by item, which allowed us to statistically capture differences in the random effects of items in both languages. All of these steps were significant, p < .001. By modeling such variance, we were able to estimate parameters of fixed effects independently of item-by-language-level and participant-level idiosyncrasies. Inspection of random-effect plots over normal quantiles indicated estimate bias was within tolerable bounds (Maas & Hox, 2004) (see the Supporting Information). Some demographic variables significantly predicted low-certainty language use. We included these (see Table 2). We also included effects of item order, which was significant (see the Supporting Information).

Fig. 2. FTR-type proportions over modality condition, temporal distance, and FTR mode. Dutch speakers used more present tense and fewer future tense constructions. English speakers used more low-certainty modal verbs. Dutch speakers made up for this to some degree through the use of other low-certainty constructions.

Note (Table 2). Coefficients are exponentiated, so they represent changes in the odds ratio of using a low-certainty term. Age was mean centered at 0 and scaled such that SD = 1. Modality condition, FTR mode, and employment were sum-coded, so coefficients represent level-wise differences from the grand mean, and interactions can be interpreted as marginal effects with variables at their means. Only those demographics whose addition improved model fit were included, p < .05. We also tested whether multilingualism affected elicited language. We operationalized this as S_i = Σ p_j (j = 1, …, k), the sum, for participant i, of self-reported proficiency p (1-5) over up to k (0-3) second languages. In no case did adding this improve model fit, p > .1. Generally, English speakers used more future and low-certainty constructions as the task progressed, and Dutch speakers used more present constructions, suggesting speakers trended towards language-level norms as they progressed. For random components, see the Supporting Information. ***p < .001; **p < .01; *p < .05; ·p < .1.
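For concreteness, the final model just described can be written compactly as follows. This is our editorial summary in generic multilevel notation (not the authors' own formula), with i indexing observations, p participants, and j items:

\[
\operatorname{logit}\Pr(y_{ipj} = 1) = \beta_0 + \mathbf{x}_{ipj}^{\top}\boldsymbol{\beta} + u_p + v_{0j} + v_{1j}\,\mathrm{Lang}_p,
\]
\[
u_p \sim N(0, \sigma_u^2), \qquad (v_{0j}, v_{1j})^{\top} \sim N(\mathbf{0}, \boldsymbol{\Sigma}_v),
\]

where y is the binary low-certainty indicator, Lang codes survey language, and x collects the fixed effects of language, FTR mode, modality condition, log temporal distance, their two-way interactions, item order, and the retained demographic controls.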
Fig. 3. Low-certainty language use over modality condition by FTR mode. Confidence intervals here and for Figs. 4-7 are calculated using the R package ggpredict by matrix-multiplying a predictor X by the parameter vector B to get the predictions, then extracting the variance-covariance matrix V of the parameters and computing XVX' to get the variance-covariance matrix of the predictions. The square root of the diagonal of this matrix represents the standard errors of the predictions, which are then multiplied by ±1.96 for the confidence intervals (Lüdecke, 2019). English speakers used more low-certainty constructions, particularly when making predictions in the neutral condition and in the low-certainty condition overall.
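The confidence-interval recipe quoted in the Fig. 3 caption can be sketched directly in a few lines of numpy; the numbers below are invented purely to show the computation and are not taken from the fitted model.

```python
# Illustration of the CI recipe: predictions are X @ B, their
# variance-covariance matrix is X @ V @ X.T, and Wald intervals use
# +/- 1.96 standard errors. All values are illustrative.

import numpy as np

X = np.array([[1.0, 0.0],     # rows: conditions at which to predict
              [1.0, 1.0]])
B = np.array([-1.2, 0.8])     # fitted coefficients (illustrative)
V = np.array([[0.04, -0.01],  # variance-covariance matrix of B (illustrative)
              [-0.01, 0.09]])

pred = X @ B
se = np.sqrt(np.diag(X @ V @ X.T))
lower, upper = pred - 1.96 * se, pred + 1.96 * se

for p, lo, hi in zip(pred, lower, upper):
    print(f"prediction {p:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```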
The uncertain predictions hypothesis
We had predicted that relative to intentions and schedules, English speakers would use more low-certainty terms when making predictions. We predicted that this would not be the case for Dutch speakers. To test this, we used the emmeans package to conduct planned comparisons for the effect of FTR mode by language averaged across modality condition. We found that compared with intentions, English speakers used significantly more low-certainty constructions when making predictions, e^β = 2.86, SE = 0.13, z = 8.1, and p < .001. Contrary to our prediction, we found that Dutch speakers did as well, e^β = 1.75, SE = 0.19, z = 3, and p = .032. However, they did this to a much lesser extent, and inspection of Fig. 3 suggests significant effects were driven by high model confidence around low-frequency low-certainty language use in the certain and neutral conditions. Indeed, Dutch speakers making predictions used significantly fewer low-certainty constructions than English speakers, e^β = 0.17, SE = 0.14, z = −13.08, and p < .001. A particularly striking effect is that English speakers used low-certainty language when they made predictions in the neutral condition (Fig. 3). This pattern is not evident in the Dutch data. These results support the uncertain predictions hypothesis. They suggest that the grammaticization of FTR may involve increasing obligatorization of the encoding of low certainty when making predictions.
The low-certainty-sensitivity hypothesis
Next, we wanted to understand effects of modality condition. We had predicted that English speakers would be more sensitive to modality condition, using more low-certainty language in the low-certainty condition. As predicted, we found that English speakers were more sensitive to our certainty manipulation. Averaged across FTR mode, English speakers in the low-certainty condition used significantly more low-certainty language than Dutch speakers did, e^β = 5.87, SE = 0.15, z = 11.86, and p < .001 (see Fig. 3). This indicates that in addition to using more low-certainty language generally, English speakers used more low-certainty language as a function of the low-certainty condition. They were more sensitive to our manipulation of certainty.
The English cross-modal-mapping hypothesis
Next, we wanted to understand how temporal distance impacted low-certainty language use. We had predicted that English (but not Dutch) speakers would use more low-certainty language as a function of temporal distance.
To test this hypothesis, we estimated the slope for uncertain language use over temporal distance in the neutral modality condition (since the hypothesis posits that temporal distance will be cross-modally mapped onto notions of uncertainty in English, it would not make sense to test it in modality conditions which primed modal notions). As predicted, we found that English speakers used more uncertain language as a function of temporal distance in the neutral condition, e^β = 1.19, SE = 0.08, z = 2.29, and p = .022 (see Fig. 4).
Was the pattern in Dutch different? It was. In Dutch, the slope for low-certainty language over temporal distance in the neutral condition was not significant, e^β = 1.17, SE = 0.09, z = 1.68, and p = .093.
These results support the English cross-modal-mapping hypothesis. English speakers used more low-certainty language in the neutral and low-certainty conditions as temporal distance increased. Dutch speakers did not.
The linguistic-savings-distance hypotheses
On the basis of the linguistic savings hypothesis, we had predicted (a) that participants would rate the future tense frame, "Ellie will arrive later on," as more temporally distal than the present tense frame, "John is arriving later on"; and (b) that Dutch participants would rate the future as more temporally proximal than English participants. To test these predictions, we regressed subjective distance ratings over language, tense condition, and the interaction between them. We used a multilevel linear regression with random intercepts for participant (these were significant, χ²(1) = 388.44, p < .001).
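A minimal sketch of this kind of multilevel linear model, fit with statsmodels' MixedLM and random intercepts for participant, is shown below; the simulated data, column names, and effect sizes are illustrative stand-ins rather than the actual data set or the reported estimates.

```python
# Multilevel linear regression with random intercepts for participant,
# sketched on simulated data (illustrative values only).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants = 100

participant = np.repeat(np.arange(n_participants), 2)            # 2 ratings each
language = np.repeat(rng.choice(["English", "Dutch"], n_participants), 2)
tense = np.tile(["future", "present"], n_participants)
participant_intercept = np.repeat(rng.normal(0, 1, n_participants), 2)

rating = (5.0
          + 0.6 * (language == "Dutch")     # illustrative language effect
          + 0.2 * (tense == "future")       # illustrative tense-frame effect
          + participant_intercept
          + rng.normal(0, 1, 2 * n_participants))

df = pd.DataFrame({"participant": participant, "language": language,
                   "tense": tense, "rating": rating})

model = smf.mixedlm("rating ~ language * tense", data=df, groups=df["participant"])
print(model.fit().summary())
```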
Fig. 4. Low-certainty language use over temporal distance by modality condition. In the low-certainty and neutral conditions, English speakers used more low-certainty language as temporal distance increased. Dutch speakers did not. The x-axis is log-scaled.

Fig. 6. Subjective ratings of temporal distance by language and objective temporal distance.
Did Dutch speakers construe the future as more temporally proximal? They did not. In fact, relative to English speakers, Dutch speakers rated the future as more distal (Fig. 5), and significantly so, β = 0.66, SE = 0.13, t(905.25) = 5.21, and p < .001. This effect might have been limited to the temporal distance of the two "future/present" items (i.e., "later on"). To test whether it was, we re-estimated the model using the objective distances in the distractor task, ranging from "later today" to "5 years." We again found that Dutch speakers rated the future as more distal, β = 0.61, SE = 0.13, t(648) = 4.84, and p < .001. This was particularly marked at temporal distances between 1 week and 1 year (Fig. 6). This is the opposite of the direction predicted by the linguistic savings hypothesis.
Together, these results fail to support the hypothesis that tense framing impacts construals of temporal distance and that, therefore, Dutch speakers construe the future as closer in time (cf. K. Chen, 2013).
Exploratory analyses: The subjective probability task
To explore the results of the subjective probability task, we regressed certainty ratings over an unordered factor which indexed each item. Because the items were not strictly comparable between English and Dutch, we did this separately for each language. We included random intercepts for participant. This was significant in both languages, p < .001. We present the results in Fig. 7. We were particularly interested in the future tenses, given the conflicting accounts that they encode either modal strengthening or modal weakening.

Fig. 7. Subjective ratings of certainty by item and FTR type in English and Dutch. In both languages, future and present tense appear to encode high certainty, while modal and other-low-certainty constructions encode low certainty. English speakers appeared to break modal polarity into finer gradations than Dutch speakers, with clearer differences between low-certainty (could/may/might) and intermediate-certainty (should/probably/I think) modal expressions.
* We acknowledge that present tense prediction (It is raining…) is either low-frequency or unacceptable in English. We nonetheless included this item to maintain comparability with Dutch data. Certainty ratings for this item should be interpreted as speculative.
Interestingly, future tenses in both languages were rated as high certainty. This undermines accounts which suggest future tense marking encodes modal weakening. However, the languages differed in subtle ways. In discussing these results, we will use the term "modal polarity" to refer to the one-dimensional scale between high and low certainty. English appeared to break modal polarity into finer gradations, with clearer differences between low certainty (could/may/might) and intermediate certainty (should/probably/I think). This suggests that English may oblige speakers to express greater degrees of precision about the likelihood of future outcomes.
Discussion
The study supports the hypothesis that the encoding of modality is implicated in FTR grammaticization processes. We found that English speakers were more likely to mark their predictions using a low-certainty construction. English speakers also used more low-certainty language as a function of temporal distance. This suggests that English speakers construe temporally distant events as increasingly uncertain. Additionally, English speakers were more sensitive to modality condition. They used more low-certainty language in the low-certainty condition. All these results suggest that, relative to Dutch speakers, English speakers are more likely to encode low-certainty notions when they talk about the future. The fact that it was mostly modal verbs which drove this effect (Fig. 2) suggests grammatical constraints are responsible.
In exploratory analyses of the subjective probability task, we found that both the future and present tenses were rated as high certainty. This suggests obliging their use would cause less, not more, discounting (cf. K. Chen, 2013). Additionally, differences in the relative modal polarity of the present and future tenses in English and Dutch suggest that FTR status may be a relevant determinant of modal future tense semantics.
Finally, we found no support for the account that the future tense encodes temporal distance. There was no difference in subjective-temporal-distance ratings as a function of whether a future-referring statement was framed using the future or present tense. This suggests that the future tense does not encode temporal distance (cf. K. Chen, 2013). We also found that Dutch speakers rated future events as more distal than English speakers (cf. K. Chen, 2013). In combination, these findings suggest the temporal mechanisms hypothesized to underpin the relationship between FTR grammaticization and temporal discounting cannot be involved in producing observed effects-at least in English and Dutch.
FTR status: The weak/strong dichotomy

Do our results corroborate or undermine the FTR status dichotomy? English speakers used more future tense constructions (Fig. 2). However, they additionally used more low-certainty language. Low-certainty language use in English was also more sensitive to FTR mode, probability condition, and temporal distance. This was mostly driven by modal verbs, which means that grammatical features of English, that is, the obligatory modal verb system, may be involved in producing higher encoding of low certainty. This suggests that obligatory future tenses and stricter encoding of modality arise from a unified underlying process. Dahl (1985) delineated a "futureless area" comprising European languages which do not oblige the future tense for prediction-based FTR. Obligatory tense marking in prediction-based FTR is suggested to be a reasonable proxy for FTR grammaticization in general. Our results suggest this includes the obligatorization of modal verbs as well as future tenses. As such, in one sense, the FTR status dichotomy was supported. FTR appeared more grammaticized in English, with the noted caveat that modal FTR structures were implicated in this difference.
An important point is that our results have no implications for whether it is possible to form nuanced linguistic FTR utterances in these languages. In pointing out that English speakers use more low-certainty language, we do not imply that Dutch FTR is deficient, simple, or vague in what Dutch speakers may articulate. Our results rather suggest that English grammatical constraints nudge English speakers towards encoding more low-certainty modality.
Future tense semantics: FTR status impacts tense semantics
In the subjective-certainty task, we found that English speakers rated the future tense as highest certainty, while Dutch participants rated the present tense as highest. This finding supports our paradigmatic analysis. We pointed out above that in English the highest certainty FTR structure available for prediction-based FTR is the future tense. In Dutch, present tense statements are possible. Our results are compatible with the conclusion that this difference causes differences in relative encoded certainty between the future and present tense in these languages. Moreover, the result suggests there are cross-linguistic differences in the modal strength of future tenses and that FTR status is a determinant of these. This means obliging the use of the future tense is expected to affect psychological discounting differently in different languages.
Causal mechanisms: A modal account of observed findings
In a recent risky intertemporal-choice task, Vanderveldt et al. (2015) found that a function of the following form best described empirical valuations of risky future rewards:

V = A / [(1 + kD)^{s_d} (1 + hθ)^{s_p}]

In this instance, V is monetary value, A is the nominal amount of the reward, D is temporal distance, θ is odds against, k and h are parameters affecting the discounting rates, and s_d and s_p are scaling factors which have been found to best describe experimental evidence. This means that psychological discounting is better described by a discounting plane than a discounting curve. Subjective value is a function of both the odds against and the time until the receipt of a future reward.
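As a quick illustration of how such a discounting plane behaves, the sketch below evaluates the multiplicative form given above for a hypothetical reward; the parameter values are arbitrary placeholders rather than the estimates reported by Vanderveldt et al. (2015).

```python
# Illustrative evaluation of the multiplicative delay-and-probability
# discounting model described above. Parameter values are placeholders,
# not fitted estimates.

def subjective_value(amount, delay, odds_against, k=0.05, h=0.5, s_d=1.0, s_p=1.0):
    """V = A / [(1 + k*D)**s_d * (1 + h*theta)**s_p]"""
    return amount / (((1 + k * delay) ** s_d) * ((1 + h * odds_against) ** s_p))

# A reward of 100 units, 30 time units away, with odds of 3-to-1 against receipt:
print(round(subjective_value(100, delay=30, odds_against=3), 2))
# Holding the odds against fixed and varying delay (or vice versa) traces out
# one slice of the discounting plane.
```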
How might cross-linguistic differences in FTR grammaticization impact such psychological discounting processes? First, the mechanisms proposed by the linguistic savings hypothesis might still be in effect. However, they could just as easily apply to probabilistic discounting, that is, we might predict speakers of languages which more strictly grammaticize FTR to both have relatively more precise beliefs about, and relatively lower estimates of, the probability of future events. We would thereby predict them to probabilistically discount more heavily. However, as the probability of a future reward decreases, temporal distance has an increasingly negligible effect on subjective value; in contrast, probability discounting is relatively unaffected by temporal distance (Vanderveldt et al., 2015). We, therefore, suggest that differences in the grammaticization of probability (i.e., modality) may be the more important factor in driving observed cross-cultural differences in discounting-related behavior (cf. K. Chen, 2013). Real-world (risky) intertemporal decisions could be impacted by such probability discounting differences. If the English case generalizes, this suggests that a "modal" account could plausibly explain many reported results (K. Chen, 2013; Chen et al., 2017; Chi et al., 2018; Figlio et al., 2016; Galor et al., 2016; Guin, 2017; Hübner & Vannoorenberghe, 2015a, 2015b; Liang et al., 2018; Lien & Zhang, 2020; Pérez & Tavits, 2017; Roberts et al., 2015; Sutter et al., 2015; Thoma & Tytus, 2018).
Causal mechanisms: Temporal distance and precision
The results of the subjective-temporal-distance task did not support the linguistic savings hypothesis. English speakers rated future events as closer in time than Dutch speakers. This is the opposite to the direction expected if the English future tense encoded temporal distance. Additionally, we found that tense framing (future vs. present) had no effect on distance ratings. It is possible that this null result is an artifact of the single phrase we used: "___ is arriving/will arrive later on." A difference might emerge with more distant FTR statements or other phrases. Future research could take this up. However, our findings are consistent with findings that tense framing does not affect intertemporal decisions. Banerjee and Urminsky (2017) conducted a series of six experiments investigating this. They had participants make intertemporal choices, which were framed in either the present or future tense, that is, "you get $10 in a week" versus "you will get $10 in a week." In a series of several experiments which used a range of distances, such manipulations had no effect on participants' time preferences (a similar result is reported in Thoma & Tytus, 2018). This suggests that future tenses do not encode temporal distance, regardless of the temporal distances involved. Our findings corroborate this conclusion.
What do tenses encode? We found the present and future tenses were rated as high certainty in English and Dutch. This suggests obligatory future tense use would cause speakers to discount less, not more, the opposite of the observed results. In fact, the ratio of high-certainty (present + future + certain) versus low-certainty language is the only linguistic feature we identified which might plausibly affect psychological discounting in the observed direction. This lends support to our general argument that FTR grammaticization impacts psychological discounting because it affects speakers' beliefs about future risk rather than their construals of future temporal distance and/or precision.
Contributions to work on temporal-distance representations
Dutch speakers rated the future as farther away. This contributes to a nascent body of literature which has begun to investigate how subjective ratings of future distance impact discounting (see Bradford, Dolan, & Galizzi, 2019; Zauberman, Kim, Malkoc, & Bettman, 2009). For instance, Thorstad and Wolff (2018) found that people whose tweets reference increasingly distant future times were more likely to invest in the future and less likely to undertake risky behavior. Ireland, Schwartz, Chen, Ungar, and Albarracín (2015) found that U.S. counties with higher rates of FTR tweets had lower rates of Human Immunodeficiency Virus (HIV). In this context, HIV exposure is expected to be impacted by time preferences because risky behaviors (e.g., intravenous drug use, unprotected intercourse) incur long-term costs (risk of contracting HIV) but confer short-term benefits. Finally, using a measure similar to ours, Thorstad, Nie, and Wolff (2015) found that people who construed the future as farther away were more present-oriented. Together, these results support K. Chen's (2013) proposal that subjective representational distance is a significant predictor of time preferences. However, we found that Dutch speakers represented the future as farther away. As far as we can tell, this is the first study to use time-slider-type tasks to identify cross-cultural differences of this nature. If this is related to cross-linguistic differences in FTR grammaticization, it suggests that a higher obligation to mark future statements causes future events to be construed as more proximal by strong-FTR speakers. However, if this were the case, it would cause strong-FTR speakers to be more future-oriented rather than less future-oriented, as is hypothesized and observed. This entails that differences in construals of future distance are not likely to be causally implicated in the relationship between FTR grammaticization and psychological discounting.
Conclusions
In general, we found that FTR status indexes cross-linguistic differences in the encoding of future modality. English speakers encoded low-certainty modality more than Dutch speakers. This was mostly driven by a more highly grammaticized modal verb system. Moreover, we found that future tenses encode notions of high certainty, not temporal distance, or low certainty. This implies the effect of obligatory future tense marking would go in the opposite direction to that hypothesized by K. Chen (2013).
Together, these results undermine the notion that FTR grammaticization is primarily about time and call into question the validity of the causal mechanisms suggested in K. Chen (2013). If tense and modal FTR grammaticization are generally correlated, it may be the case that observed cross-cultural differences in discounting-related behavior actually involve probabilistic discounting driven by stricter encoding of modal notions in strong-FTR languages.
Economists continuing to work on this question might begin exploring the complex potential relationships between FTR grammaticization and discounting. These processes are worth understanding: Psychological discounting processes are an important determinant of a wide range of behaviors, including health outcomes (Ireland et al., 2015; Vuchinich & Simpson, 1998), drug use (McKerchar & Renda, 2012), climate change attitudes, educational performance (Figlio et al., 2016), pathological gambling (Hodgins & Engel, 2002), and investment in savings (Liu & Aaker, 2007). If the precise nature of the relationship between FTR grammaticization and discounting is better understood, researchers may be able to better understand how, or whether, cross-linguistic differences impact the discounting mechanisms which underpin intertemporal decisions. Detailed experimental work which combines behavioral economic techniques with usage-based typological linguistics should be employed to explore the precise relationships between cross-linguistic differences in FTR grammaticization and psychological discounting.
Notes
1. Typically, linguists use separate but related terms for notional categories and the linguistic structures which grammatically encode them (Bybee, Perkins, & Pagliuca, 1994). As such, we use "FTR" to refer to any statement about the future and "future tense" to refer to the linguistic structures which sometimes grammatically mark FTR, for example, in English will, shall, or be going to.
2. Dahl (2013) points out that these terms are problematic. FTR stands for "future time reference," and, of course, it is possible in all languages to refer to future time (whether or not a tense is obliged). Better might be "strong-FTR_G" to indicate that the difference is a matter of Grammatical marking. However, the terms are in widespread use. We will not deviate from them, though we wish to acknowledge Dahl's (2013) critique.
3. Negated high certainty (p = 0) might also be added, for cases where a speaker is highly certain something is NOT the case.
4. Context is given in [brackets].
5. English uses the present progressive to refer to present time events. The zero-form simple present tense, It rains, is used for gnomic statements, which have truth value independent of any deictic time reference, for example, It rains in Oxford (Broekhuis & Verkuyl, 2014).
6. We used the present progressive because the simple present tense it rains is not grammatical for English predictions.
Formal support services and (dis)empowerment of domestic violence victims: perspectives from women survivors in Ghana
Background: As part of efforts to prevent violence against women, several countries have institutionalized formal support services, including legislation to prevent violence, protect victims, and deter perpetrators of domestic violence (DV). Prior research on formal support service utilization shows that DV survivors do not get the necessary services they deserve. However, much remains to be known about the experiences of women survivors of DV who accessed a range of formal support services and how their experiences (dis)empowered them. Here, we assessed the experiences of Ghanaian women survivors of DV with formal support services vis-à-vis the provisions of the Ghana DV Act and the insights of subject experts.

Methods: From May to August 2018, we recruited a total of 28 participants: 21 women survivors of DV in the Weija-Gbawe Municipality of Ghana, and 7 experts from the police, human rights, and health professions. We used two sets of in-depth interview guides: one to collect data on survivors' experiences, and the second for the insights of experts. We performed summary descriptive statistics on survivors' sociodemographic characteristics and used thematic analysis to assess their experiences of DV and their access to, patronage of, and the response of formal support services.

Results: Of the 21 DV survivors, 19 (90.5%) were aware of the existence of the DV law; however, none was well informed of their entitlements. DV survivors have low formal education and are not economically empowered. Some DV survivors are revictimized in the process of accessing formal services. DV survivors expect the government to provide them with shelter, upkeep, medical aid, and legal aid. All 21 survivors had at least one contact with a women's rights organization and were knowledgeable of their supporting services, namely legal services, temporary shelter, and psychosocial support.

Conclusions: The experiences of DV survivors do not reflect the legal provisions of Ghana's DV Act. Government underfunding of formal services and negative gender norms are disempowering to survivors. NGOs are popular among women survivors of DV in Ghana for the education, legal, and material support they provide. A close collaboration between the government and NGOs could better mitigate DV in Ghana.
Ruth Minikuubu Kaburi and Basil Benduri Kaburi
Background
To curb the menace of domestic violence (DV), the international community over the years has made several efforts through world women conferences, human rights instruments, and United Nations declarations to eradicate violence against women.These concerted international campaigns have led to the creation of international gender rights norms such as the due diligence standard that prescribe how countries could protect their citizens against systemic gender-based violence and human rights abuses [1].Several African countries including Ghana, have ratified international human rights treaties on the prevention of DV and violence against women more broadly [2].Some of these treaties incorporate the due diligence standard that obliges ratifying countries to proactively prevent, investigate, and punish perpetrators of acts of violence against women in accordance with national laws [3,4].
Prior to the emergence of international norms prohibiting violence against women, several African countries considered DV a private matter, thus not an issue for governments' intervention [5][6][7].However, actions aimed at revealing and fighting violence against women by Africa's burgeoning women's movement increased in the 1980s [8,9].This pressured governments to see DV against women as an issue of public concern.By the close of the 20th century, campaigns to eliminate violence against women became widespread with global attention drawn to the phenomenon of DV in the region [10][11][12].
Studies on the causes of DV in most African contexts offer that a mix of individual, interpersonal, community, and structural level factors often predicts the incidence of violence [13]. At the individual level, several factors such as age (especially being young), alcohol abuse, depression and personality disorders, low income and educational levels, and having witnessed or experienced violence as a child often account for DV [14,15]. On the other hand, marital disputes, weak family ties, the enactment of traditional gender roles by couples, and economic constraints have been found to be contributors to DV within interpersonal relationships [16][17][18]. At the community and societal level, unaccountability for abusive behaviours, low social capital, traditional gender norms, and cultural norms supporting husbands' use of violence to correct wives are factors that engender DV [13,15,17,19,20].
In 2007, Ghana passed the Domestic Violence Act (DV Act 732), hereafter the DV Act, to create the legal basis to fight DV [5]. This DV Act specifically makes provision for filing complaints with the police and arresting perpetrators. It also makes provisions for criminal charges as well as civil procedures such as protection orders and the process for accessing these services. The DV Act also calls for the establishment of the Victims of Domestic Violence Support Fund to provide basic material and social support systems to bring relief to victims. Further, the DV Act explicitly gives a broad definition of DV, which embraces physical, sexual, economic, and emotional abuse. Compared to the DV legislation of other African countries such as Botswana, Mauritius, Rwanda, and South Africa, which provides only police service interventions, Ghana's DV Act has been described as progressive because it makes provisions for social and material support services in addition to the services of the police and justice system [2,5]. The DV Act notwithstanding, sociocultural, institutional, and economic factors that challenge the availability of and access to these services are rife at the national and local levels [21,22].
Studies on government-sponsored protective services or formal support services including legislation against DV have reported that many women victims mostly turn to their informal networks first, wait long to report abuse to formal institutions, do not get the necessary services they deserve from these institutions, and may be blamed for the violence against them in the process of accessing formal support services [7,21,[23][24][25][26][27][28][29].However, to date, much remains to be known about the experiences of women survivors of DV in Ghana who sought formal support services and how their contact with these services (dis)empowered them in the process of seeking justice.Our study assessed and described the experiences of Ghanaian women survivors of domestic violence who contacted formal support services vis-à-vis the provisions of the Ghana DV Act and insights of subject experts.
Study design
From May to August 2018, we conducted a descriptive cross-sectional survey among women survivors of DV.We used in-depth interview guides to collect data on their perceptions, experiences, and perspectives on formal support services, and the availability and patronage of designated services.We also conducted key informant interviews among subject experts of government and non-governmental institutions concerned with protecting women against violence.We reviewed archival sources of secondary data from publicly available documents pertaining to violence against women to inform the interview guides and situate the data collected in context.
Study setting
We conducted the study in Weija -an urban, and ethnically diverse community located about 27 km west of Accra -the national capital.According to the 2010 population and housing census of Ghana, Weija has a projected population of 15,892 -about 51.6% of which is female [30].The town has a divisional police command with a Domestic Violence and Victims Support Unit (DOVVSU), and a municipal hospital that provides primary healthcare services.The town is commercially brisk with many shopping centers.It also has primary, secondary, and tertiary educational institutions.
Sampling and recruitment of study participants
We recruited a total of 28 participants: 21 women survivors of DV in Weija and 7 experts with professions in DV, human rights, and health. There were three eligibility criteria for participating in the study. These were: having ever suffered DV, residence in the community for 12 months or longer prior to the start of our study, and a history of seeking formal DV support services on at least one occasion. To get access to our study sample, we contacted leaders of the three local non-governmental organizations that work with abused women and provide counselling services to survivors of DV in the community. We used convenience sampling to recruit women survivors who showed up, met the selection criteria, and gave consent to participate in the study. We stopped recruiting participants on reaching theoretical saturation.
Three of the seven respondents to the key informant interviews represented three departments, namely the Department of Social Welfare, the Domestic Violence and Victims Support Unit (DOVVSU), and the Commission on Human Rights and Administrative Justice (CHRAJ). The remaining four experts were a medical officer and three women's rights advocates. The women's rights advocates represented three non-governmental women's rights organizations, namely the Ark Foundation, the International Federation of Women Lawyers (FIDA-Ghana), and Women Alliance, Ghana. We purposively selected these key informants based on their professional expertise and experience in DV support services in Ghana.
Data collection
We used semi-structured interview guides to collect data.We adopted two separate interview guides from the Ghana Domestic Violence Report [31]: one for the women survivors of DV and the other for the experts.The interview guide for women survivors of DV contained variables on sociodemographic characteristics, their knowledge of the DV Act (732) 2007, awareness of the scope of formal support services, their perspectives on these support services, and how survivors exercised their rights of seeking support and redress from government agencies.We conducted interviews with the women survivors in Twi -the most widely spoken language in the community.We first wrote the interview guide in English language and translated it to Twi with assistance of two native speakers.We used back-translation reiteratively during transcription and analysis process to ensure that the accurate meaning of the original responses of participants were maintained.The durations of interviews with DV survivors averaged 35 min.We conducted the expert interviews in the English language; each lasting an average of 45 min.The in-depth interview guide for the experts included questions relating to challenges of service provision, awareness of domestic violence act, awareness of various support services available to survivors, and survivors' attitude towards formal support services.We reviewed records of relevant secondary data and described the disconnect between government advocated directives on DV and the experiences of DV survivors.
Data analysis
We described survivors' sociodemographic characteristics using summary descriptive statistics. We performed a descriptive thematic analysis on the qualitative data from participants' responses based on predetermined themes, namely perceptions, experiences, the government's responsibilities to DV survivors, the availability of expected services, and their experiences with access to formal services.
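Purely for illustration, the sketch below shows the kind of summary descriptive statistics reported for the survivor sample (e.g., median age and category percentages); the column names are hypothetical and the snippet is not part of the study's analysis.

```python
# Illustrative only: summary descriptive statistics of the kind reported for
# the survivor sample. Column names ('age', 'marital_status', 'occupation')
# are hypothetical.
import pandas as pd

def describe_sample(df: pd.DataFrame) -> dict:
    return {
        "n": len(df),
        "median_age": df["age"].median(),
        "marital_status_pct": df["marital_status"].value_counts(normalize=True).mul(100).round(1).to_dict(),
        "occupation_pct": df["occupation"].value_counts(normalize=True).mul(100).round(1).to_dict(),
    }
```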
Ethical considerations
The Ethics Committee for Humanities at the University of Ghana granted ethical approval for the study.We explained the background and purpose of the study to the participants.They were made aware that no form of compensation would be given for participation, but that their participation could possibly benefit the fight against DV.We also made the participants to understand that participation in the study was voluntary and that they could withdraw at any point or refuse to respond to any questions without the need to explain themselves or fear of any repercussion.All respondents signed a written informed consent for participation.
We protected the privacy and safety of the respondents by using an inner room at the offices of the Department of Social Welfare at the Weija-Gbawe Municipal Administration.We also made provision for a counsellor to be at hand to support any participant who may experience emotional breakdown.We stored the hard copies of completed interview guides under lock and key, and the recordings and transcripts of the data under password.We pseudonymized the data by keeping personal identifiers separate from the responses.Only the authors analysed the data.
Sociodemographic characteristics of women survivors of domestic violence
A total of 21 women survivors of domestic violence participated in our study. Their ages ranged from 23 to 54 years, with a median age of 42 years. Of these 21, 18 (85.7%) were married and 13 (61.9%) had 3 to 5 children (Table 1). The highest level of formal education among them was senior high school, with 3 (14.3%) attaining this. The majority, 14 (66.7%), engaged in petty retail trading. About half of them, 11 (52.4%), depended entirely on their abusers for upkeep (Table 1).
Perspectives of women survivors of DV on formal support services
Survivors' perceptions of legislation and formal support services fell into four categories, viz.: survivors' knowledge of the scope of services and their utilization; survivors' experiences of accessing formal support services; challenges associated with the utilization of formal support services; and the (dis)empowerment and agency of survivors accessing formal support services.
Survivors' knowledge of the scope of support services and utilization
There were three main categories of formal support services that were overseen by government agencies. These were: criminal justice services; social services and basic material support; and counselling and mediation services from quasi-judicial institutions. These services were complemented by a range of support services rendered by civil society organizations approved by the government.
The criminal justice services
Review of relevant secondary data showed that the criminal justice services comprised the arrest and detention of abusers by the police and the adjudication of DV cases by the law courts and family tribunals. In 2005, DOVVSU of the Ghana Police Service was reconstituted from the then Women and Juvenile Support Unit (WAJU) as a gender-neutral entity to tackle gender-based violence [5]. This unit is the main point of call for survivors when a case of abuse is reported to authorities. It also served as a point of referral to and from other services within the formal support system [32]. Specifically, DOVVSU received and responded to complaints, conducted investigations into cases, provided temporary shelter, and referred victims for medical, legal, and counselling services. It also referred dockets to the Justice Department for advice on prosecution [33,34]. The DV law in Ghana also made provisions for civil protection orders to protect survivors of violence.
The courts handled DV and other violence against women as specified by law. In order to respond swiftly to DV, the judiciary service operated human rights, gender-based, and sexual offences courts to speed up the adjudication of cases. The justice system in the country also made provision for the family and juvenile courts at the district court level to handle mild cases of DV and other non-violent family disputes bordering on economic neglect and child custody. These courts used alternative dispute resolution (ADR) methods to settle cases. The family courts also had jurisdiction to deal with criminal cases and civil protection orders under the DV Act. Of the 21 survivors, 13 (61.9%) were aware of the role of the police and courts in the provision of formal support for them.
Four (19.1%) of the survivors filed complaints at DOVVSU that led to arrests of their abusers (Table 2). They complained that they experienced difficulties in their attempts to access justice at the law courts for lack of money to file their cases, pay for medical reports, and fund legal representation. Of the 21 DV survivors, 19 (90.5%) were aware of the existence of the DV Act, but none of them knew about the protection and occupation orders it provided as an avenue of seeking protection.
Social services and basic material support
Support services under this category included the provision of reception shelters, medical care, skills training, rehabilitation, and reintegration with families. These services were to be provided free of charge under the domestic violence fund. The establishment of the DV fund under the law was equally challenged by low government budgetary allocation to the sector. Of the 21 survivors, 5 (23.8%) were aware of the scope of social and material support services, and 14 (66.7%) of them sought medical care from health facilities. Of the 14 who sought medical care, 1 (7.1%) had valid health insurance. The remaining had to make out-of-pocket payments, some with the help of friends and relatives. A medical report, which provided medical evidence of the abuse in court, cost a minimum of 100 Ghana cedis (about 20 USD in 2018); these survivors perceived the fee as expensive. On different occasions, police officers accompanied three of the survivors to the hospital. On one such occasion, a police officer had to pay the medical bill of one of the survivors. The law requires the police to assist victims to obtain medical care, but it is not explicit on the specifics of the assistance, such as accompanying victims to hospitals and paying their bills.
All the DV survivors and subject experts agreed that paying for medical care and medical reports before a case can be booked in the law courts is a disincentive for the pursuance of DV cases.The medical report is a mandatory requirement particularly for physical assault including sexual abuse without which the police cannot prosecute the case.There was no form of specialized or priority services for survivors of DV who reported at the health facilities.
The Department of Social Welfare (DSW) is a government agency mandated to provide psychosocial counselling and shelters for abused women. Of the 21 survivors, 5 (23.8%) had knowledge of the range of services provided by this agency and 10 (47.6%) utilized psychosocial counselling services. Nevertheless, the provision of shelters remains non-existent due to inadequate funding to operate them. Some survivors indicated that the threat of homelessness delayed their decision to seek help from formal support services. A survivor who got separated from her husband after reporting him to the police and is now lodging with her sister shared her frustration:
If my sister did not take me in, I don't know what would have happened to me. However if government provides a place where people in my situation can lodge for the meantime whilst they reorganize their lives, it will help. (Survivor 1, Interview, May 3, 2018)
In the absence of shelters for battered women, the DSW provided psychosocial counselling, conducted physical and medical needs assessments, and referred survivors to other support services, including the services of partner civil society or non-governmental organizations. The DSW also provided counselling and mediation services, particularly within the family tribunals. Specifically, the DSW mediated DV incidents that were considered civil cases, as against criminal ones. These cases bordered on economic abuse, including child maintenance and paternity disputes among couples.
Mediation and counselling services from quasi-judicial agencies
The Commission on Human Rights and Administrative Justice (CHRAJ) also provided mediation and counselling services. Of the 21 survivors, 1 (4.7%) reported knowledge of the role of CHRAJ in supporting abused persons, and none utilized any services from this agency. CHRAJ is a quasi-judicial institution created under the 1992 Constitution to help promote transparency and public accountability in the area of human rights protection.
The DV Act (Sect. 24) made provisions for the court, in a criminal trial of DV cases, to promote reconciliation by referring some cases to mediation under certain circumstances. These circumstances included when the violence is not aggravated or does not attract a prison sentence of more than two years, when the victim offers to have the case settled out of court, and when the court is of the opinion that the case can be resolved peacefully through alternative dispute resolution (ADR). Following a complaint of human rights abuses, officials of CHRAJ invited both the complainant and the respondent to talk out their issues. As a neutral facilitator, the officials helped the parties to negotiate a mutually beneficial agreement for an amicable settlement. Though these settlements or agreements are not legally binding and a party can violate them without any consequences, CHRAJ was still a point of call for some DV victims. When parties do not reach an agreement, they are referred to the judicial system for redress.
Government-approved support services provided by civil society organizations
Our study found that civil society organizations, including non-governmental women's rights organizations, also played an important role in the provision of services to DV survivors. The services provided by these entities were supplementary to the formal support services. The range of services rendered included training and advocacy on DV prevention, psychosocial and legal counselling, education on gender-based violence, skills training, and the provision of temporary shelter for survivors of DV.
The International Federation of Women Lawyers (FIDA-Ghana) provided free legal aid to women survivors, initiated research into socio-legal issues affecting the status of women and children, trained and sensitized stakeholders and the public on gender-based violence, and served as a referral center for DV survivors referred from the police, CHRAJ, and the DSW for legal assistance. Of the 21 survivors, 2 (9.5%) sought assistance for redress from FIDA. All 21 survivors indicated that, at one point, they had had contact with these women's rights organizations and were knowledgeable of the services they provided. At the time of this study, FIDA-Ghana also operated a flagship legal literacy and capacity building program that trained community members to serve as paralegals to DV survivors and other vulnerable groups at the community level. The project also offered legal training on handling gender-based violence to police officers in some selected police stations.
Survivors' experiences of accessing formal support services
The severity of abuse and social and economic obstacles informed preferences for specific services.
Of the 21 survivors, 16 (76.2%) experienced physical violence (Table 2). Four (25%) of these 16 survivors of physical violence experienced sexual violence in addition. Severe physical violence was a major trigger to seek formal support services. Other forms of violence included verbal/psychological abuse and economic abuse in the form of financial neglect. Nearly all, 20 (95.2%), were abused by their intimate partners. The remaining survivor was abused by her in-laws. Counselling was offered to 10 (47.6%) of the survivors (Table 2).
Survivors who received counselling services indicated that they felt better afterwards, even though counselling alone did not stop the abuse. Survivor 3, who reported her ex-partner to the Department of Social Welfare for economic abuse and the refusal to pay child maintenance, shared her experience:

At the social welfare department, I was advised to avoid situations that will lead to fights resulting in my child's father not giving money. The officer also told me to engage in some economic activity to reduce financial pressure on him (ex-partner). My ex was also advised to take up his responsibility until our child becomes 18 years. He agreed but did not change his behaviour thereafter. (Survivor 3, Interview, May 3, 2018)

Four of the 21 survivors (19%) confirmed that the perpetrators of violence against them were arrested and detained by the DOVVSU of the Ghana Police Service. In cases of severe physical and emotional abuse, survivors in our study obtained support from their friends or relatives and advice on the decision to report the abuse to the police. Survivor 4, who had endured physical and sexual violence from her husband for many years, had the support of family members when she decided to report him. Her decision to report her abuser led to a separation, and the case was pending adjudication at the time of our study. She recounted that:

Reporting the case has led to our separation. I am living with a friend for the meantime. I thank God that my children are now adults. (Survivor 4, Interview, May 3, 2018)

Aside from fear of loss of life, severe and prolonged violence was also a driver of the decision to report the abuse to authorities. In such cases, the informal networks of survivors (e.g., friends, family members, and pastors) supported them to report abuses to the police. The support from these agencies ended the violence for seven (33.3%) of the survivors.
Survivors' lack of economic empowerment
Most DV survivors in the study were not economically empowered. This hampered their efforts to seek medical care and obtain medical reports. Section 8 of the Act makes provision for free medical care for DV survivors, but this is not provided for and hence is not complied with at the point of service. Inadequate funds for this sector have not created an enabling environment for the implementation of this provision. Consequently, survivors expressed the view that free treatment and medical reports need to be urgently implemented or, at worst, that medical bills be charged to abusers. Survivor 9 also made this proposal to government as a way of deterring abusers in the following statement:

The government should educate the men and let them know that there is a law protecting women from domestic violence. If they do not comply with it…they would be arrested, pay medical bills, and punished severely. (Survivor 9, Interview, May 3, 2018)

DV survivors perceived hospitals as one of the places where they could have temporary shelter devoid of the risk of encountering the perpetrators. They were of the conviction that attending public hospitals in their time of distress is important and should not attract fees, whether for treatment or medical reports.
Survivors' lack of comprehensive knowledge of the scope of support services
Survivors were not able to seek the full range of services available to them under the DV Act because of their limited awareness of these services. Hence, in expressing their expectations, some survivors called for the police, in cases of arrest and subsequent release, to secure a bond of non-violent behaviour (a protection order) from their abusers, even though this service is already stipulated in the Act. Those who suffered financial neglect expected a bond on their partners to assume their financial responsibilities to the family.
Institutional constraints
The efforts of survivors to access justice were thwarted by institutional constraints. These often resulted in delays attributed to several factors, such as the lack of central government budgetary allocation for service provision, the lack of logistics for officials to carry out their mandate, and inadequately trained officials specialized in handling gender-based violence. A women's rights advocate at FIDA-Ghana, stressing these constraints, had this to say:

They (police) will help but the speed with which they do that is not fast. We partner with some police sta…

This state of affairs negatively impacted survivors' desire to see the police arrest and prosecute abusers (for severe cases of physical and sexual abuse), and fine and caution where necessary. However, prosecution in severe cases was hindered among survivors who sought this service. This was partly due to ineffective law enforcement in some cases and the unprofessional conduct of officials in the form of trivializing domestic violence cases, accepting inducements from abusers, and acting cold and unsupportive of survivors' cases. Similarly, an officer from the DSW stated that inadequate funding hinders the provision and maintenance of government-run shelters (Officer, Department of Social Welfare, Interview, June 19, 2018). Our records review also found that the government budgetary allocation for the Ministry of Gender, Children and Social Protection for its expanded role was under 1% of the total national budget as of 2015 [35]. One of the three women's rights advocates indicated that police trivialization of DV and the cold and sometimes harsh reception of victims at police stations instantly discouraged women reporting DV. Oftentimes, this discouraged survivors from pursuing the case even if they succeeded in registering it. She shared some experiences of survivors who reported at the police station to file a complaint and some of the responses they got were:
Survivor (dis)empowerment and agency in seeking formal support services
Survivors in our study found the positive attitude of officials and the support of their informal networks for their decision to seek formal support services empowering. Survivors also appeared to have internalized local and national campaigns against violence, with their emphasis on abuser accountability. This is reflected in the majority, 19 (90.5%), of our participants indicating knowledge of the existence of the DV Act to punish abusers if victims reported the abuse, even though they were not conversant with the specificities of the provisions of the law itself.
An officer of the Department of Social Welfare weighed in on DV survivors' awareness of formal support services, stating that:

Previously, fewer DV survivors reported abuse, but now more of them are reporting. This is because of education on radio, TV, and community outreaches by women's rights organizations. Even with this education, family and friends discourage some women from reporting abuse to us. It is only in a few extreme cases of violence that survivors' informal network supports their decision to report the perpetrator. (Social welfare officer, Interview, June 19, 2018)

This awareness notwithstanding, all survivors indicated that their initial point of call to end the abuse was their informal social network. Also, they all indicated that their first decision to access formal support services was informed by their contact with the media and community outreaches of DV activists, and backed by some form of support from at least one family member or friend.
Though some survivors in the study still faced resistance from their informal networks in reporting abuses, a majority of them, 12 (57.1%), reported instances in which they were supported financially and psychologically by relatives and even law enforcers to report their abuse. In all these instances, the abuse was severe and habitual. For instance, a survivor's sister encouraged her to report the abuse, and later supported her accommodation needs. Another survivor reported that she was motivated to continue the filing of a complaint against her partner after she received encouragement and support from a police officer. She narrated that:
When we got to the hospital, I did not have enough money to pay for the services. Had it not been the police officer who paid the bills, I would not have been attended to and as such couldn't have filed the case. (Survivor 2, Interview, May 3, 2018)
A significant finding is that psychosocial and financial support from victims' informal networks as well as law enforcers was a driver of the decision to report the abuse to the criminal justice system. Nevertheless, some survivors in the study still reported revictimization and pressures from their families to some extent. Of the 21 survivors, 6 (28.6%) felt disempowered by discouragement from their informal networks and the negative attitude of officials. These survivors reported exercising their agency by defying these setbacks and reporting their abuse, while 3 others (14.3%) issued threats to the abuser. Threats of reporting alone did not stop the abuse, even though they sent a signal to the abusers that survivors were aware of the avenues for redress. Defiance and the issuance of threats were ways that some of the survivors negotiated setbacks that were disempowering to them in seeking state support services.
Insights and perspectives from experts
The expert interviews revealed that the implementation of the DV Act had been problematic. This had to do with a disconnect between the DV law and its actual implementation. For instance, Sect. 29 of the DV Act makes provision for the establishment of a victims of domestic violence support fund intended to provide basic material needs and support for victims. This fund was not operational. Some of the identified impediments that created this disconnect included lack of awareness of the Act and its provisions, inadequate funds for the implementing agencies, and inadequate personnel with the appropriate specialized skills to handle domestic violence cases. A women's rights activist who bemoaned the lack of knowledge of the protection and occupation orders had this to say:

Even professionals including police officers and social workers are not conversant with the protection and occupation order and procedure for their application as protection strategies for survivors of DV. (Women's rights advocate 2, Interview, June 14, 2018)

A police officer at DOVVSU stated that inadequate logistics hinder the effectiveness of their services to the public. In explaining the challenges of DOVVSU further, the officer stated that:
Lack of financial resources to run the office negatively affect service delivery to the public. Other challenges we face include inadequate training of personnel on effective prosecution of domestic violence and sexual and gender-based violence cases, and also heavy workload. (Police officer, Interview, May 30, 2018)
A women's rights advocate pointed out that getting a medical report at the right time is challenging for most women, and that delay in getting the report because of financial difficulty weakens the evidence (Women's rights advocate 2, Interview, June 14, 2018).
The medical report fee commits a medical officer to the case and is meant to defray the costs of transportation should the officer have to appear in court in person to give testimony during the hearing of the case. A medical officer interviewed on the rationale for the fee charged explained that:

The issue is that, if the government takes up the costs incurred in terms of transportation and other related costs for medical officers to appear in court to testify, then the medical fee would be waived for DV survivors. (Medical officer, Interview, June 4, 2018)
All the women survivors in the study and experts agreed that paying for medical care and medical reports before a case can be booked is a disincentive for the pursuance of DV cases.
The interview with the CHRAJ official revealed that mediation and counselling by psychologists are often used to resolve milder cases of abuse, with the aim of promoting reconciliation. However, in cases of extreme physical violence, the law was applied (CHRAJ official, Interview, June 12, 2018). The process and conditions for the referral of mild cases are well prescribed in Sect. 24 of the DV Act.
In the case of providing protective services to complement formal support services, the Ark Foundation, a Christian-based non-governmental organization, had a shelter for battered women. However, due to financial constraints, the organization could no longer run the shelter at the time of this study. Hence, the organization resorted to shelter provision with support from other partners who sponsor the upkeep of persons they refer there (Women's rights advocate 1, Interview, June 12, 2018).
According to a women's rights advocate, some external family members of the survivors mostly prefer counselling or a verbal cautioning of the abuser to prosecution in less severe instances of abuse. The advocate explained that family members prefer this option because:
Arresting and prosecuting often lead to separation and divorce, which is detrimental to the family if they have children and especially if the abuser is the sole bread winner. (Women's right advocate 3, Interview, June 20, 2018)
These pressures often led to complainants withdrawing the case before adjudication even began. For example, Survivor 5 from our sample was pressured by family members to drop a complaint filed at DOVVSU. She revealed that:

Our family members advised me not to pursue the case because of our young children. I had to drop the case because I would have been tagged disrespectful to them if I did not. (Survivor 5, Interview, May 3, 2018)
Discussion
Our study sought to assess formal support services from the perspectives of women survivors of DV in the Weija-Gbawe Municipality of Ghana who accessed them, and how their experiences with these services (dis)empowered them in seeking justice. A majority of the DV survivors in our study were married, had attained basic formal education, were young, and were not economically empowered, sociodemographic profiles consistent with those of DV survivors in Malawi [36] and with earlier reports from Ghana [37,38]. Women survivors of DV who are not economically empowered in Ghana face financial constraints in accessing formal support services. Advocacy for such women to be economically empowered, either through the attainment of higher formal education or informal vocational skills, could positively influence the outcome of their contact with formal support agencies to a large extent [39,40].
Other studies also call for changes in societal norms to promote equality between men and women [41,42]. For instance, from a vulnerability theory standpoint, Fineman [43] and Kohn [44] offer that there is a need for governments to provide equal access to public institutions and laws that distribute social goods such as healthcare, employment, and security. This call is timely, as it has been well documented in Ghanaian society and similar contexts that gender gaps exist in access to education, income, access to wealth from inheritance, access to health services, and household decision-making [45][46][47].
Several studies of DOVVSU as a specialized police unit find that, due to severe institutional constraints such as heavy workload, inadequately trained personnel, and a lack of logistics to operate, DOVVSU officials are not able to provide satisfactory services to DV victims [5,6,48]. This corroborates our finding that government institutions such as DOVVSU, which have the mandate to provide support services, are beset with logistical constraints and inadequately trained personnel and hence are unable to function as stipulated by the DV Act. Generally, the literature on legislation against DV often shows a disconnect between the law and its implementation [2,49]. Governments more often than not do not follow up with adequate funding, infrastructure, human resources, and logistics to guarantee successful enforcement after passing DV laws. Ghana is a constitutional democracy, and its 1992 constitution mandates government officials to ensure that the fundamental human rights of all citizens are protected. Additionally, countries' ratification of various human rights treaties at the United Nations and the regional level makes it imperative for authorities to take active measures to protect the human rights of citizens. In this regard, Ghana has not been successful in protecting DV victims. In our study, the social and material support for medical care and shelters was not available even though the law makes provisions for such services. For example, other studies have shown that affordable and timely medical services not only provide verifiable evidence of abuse but also save lives in cases of severe physical injuries [50,51]. Similarly, Meyer [52], and Stylianou and Pich [53], underscore the critical role of shelters for abused persons fleeing their abuser. The absence of shelters and medical cover undermines efforts to protect DV survivors who sought help from formal support services in this study. Given the crucial role that medical services and shelters play in filing a case against perpetrators, several studies reiterate the need for authorities to make these services free or affordable [32,53,54].
Some civil society and non-governmental organizations, including faith-based institutions, have stepped in to provide basic social support services for survivors. Even though these benevolent institutions have also faced financial difficulties, their contribution to the fight against domestic violence in Ghana is notable. This underscores the need for governments to collaborate with non-governmental and civil society organizations in the fight against gender-based violence. Through the pooling of resources, the government and non-governmental actors could possibly enhance protective service delivery to DV survivors.
It is mandatory that once a DV victim files a complaint of physical and/or sexual abuse with the police, she is referred to a health facility to obtain a medical report detailing her health condition related to the violence. This report serves as proof of the abuse, without which the police may decline to proceed with the case. For example, Cantalupo et al. [55] report that prosecutors at the Attorney General's Office cannot effectively handle cases without the medical testimony. A prosecutor in their study underscored the importance of the medical testimony to this effect: "cases with the absence of medical testimony can result in less compelling, unarguable, or even damaging evidence…" [55]. Due to the inability to afford medical reports, the survivors of DV in the study were discouraged from pursuing criminal justice services. Depending on the peculiarity of the case, medical officers must necessarily testify at trials. This further increases the cost of obtaining the medical reports because the expenses anticipated in respect of court appearances by the medical officer are factored into the total cost of the medical report [49]. These challenges could partly explain why none of the survivors, at the time of participating in this study, had obtained redress for their cases at the law court.
Court-guaranteed protection or occupation orders to protect survivors are not utilized, for reasons including ignorance of their existence by both law enforcers and survivors. A study by FIDA-Ghana on why few women survivors of DV apply for the protection and occupation orders in Madina (a comparable community in Accra) corroborates our findings [56]. Similarly, Darkwah and Prah [49] also found that some police officers were not conversant with the DV Act and its provisions. Aside from the lack of knowledge of the protection orders, reports on their effectiveness to prevent re-abuse are mixed. Carlson et al.'s work on women who were issued with protection orders found that those with very low socioeconomic status (SES) and African American women were more likely to find protection orders ineffective in preventing re-abuse [57]. They argued that, out of economic necessity, women of low socioeconomic profiles could not afford to separate from their abusers, on whom they depend for sustenance. This draws attention to the fact that even if our study participants were aware of and sought protection under protective orders, their re-abuse would likely not end. In effect, even though these orders are civil and preferred by some survivors, they cannot adequately protect them due to sociocultural and economic factors.
As similar studies in Nigeria and South Africa show, severe forms of physical and emotional violence are predictors of help-seeking behaviors among women victims of DV in our study [58,59]. This finding shows that women victims reporting abuse to government officials are mostly at their wit's end and extremely expectant of the needed support services.
African countries such as Ghana need to go beyond legislation by instituting robust social welfare systems and actively scrutinizing the role sociocultural practices play in conferring privileges and power between men and women, practices which to a large extent are deliberately or inadvertently backed, or at least condoned, by the law and the justice system.
Financial constraints are disempowering to survivors following up on their cases. Again, the withdrawal of cases due to extended family interference, mostly on claims that the criminal justice regime is retributive, also worked against the agency of some survivors. There is a growing literature that advocates a policy shift by governments to popularize the restorative justice avenues already existing within the family courts system [60,61]. This could be useful to some survivors seeking redress for their abuse in low-income settings and could address gaps in current gender-based violence policies, making them inclusive of vulnerable populations. This notwithstanding, it can be argued that the mediation component of the DV Act, though it gives room for family reconciliation, has a flip side: it could perpetuate habitual, milder forms of violence. It can come across as not being a sufficient deterrent for abusers. It could also expose survivors to further violence and could be a disincentive to accessing formal support services.
Our findings suggest that the emergence of the due diligence standard, including DV legislation, in patriarchal societies such as Ghana has resulted in the empowerment of abused persons. Despite the inadequacies of formal support services for survivors in the study, the culture of silence and secrecy that shrouded DV seems to be eroding gradually, making women increasingly able to assert their rights to protection beyond their informal networks. This positive shift in agency corroborates evidence from South Africa which contends that a social policy shift in deconstructing DV within the private/public sphere could lead to rapid progress on DV prevention [62].
Unfortunately, our findings show that contacting the criminal justice system did not end the violence for the majority of women survivors of DV. Some survivors suffered strained relationships with their partners, homelessness, and revictimization due to the negative attitudes of service providers and their informal networks following reports of abuse. These findings underscore ways in which people reporting abuse end up being disempowered. Some researchers have argued that legislation alone does not prevent violence [63]. Such voices have called for coupling legislation with robust activism with people at the community level, in addition to the provision of basic social and material support for vulnerable populations seeking relief from abuse [9,63,64].
All survivors who reported for formal support services received counselling. Generally, counselling sessions for our study sample were able to ease distress from the depression, anxiety, posttraumatic stress symptoms, guilt, and shame that often accompany victimization, as reported by Bennett et al., Mcleod et al., and Sullivan and Rumptz [see 65,66,67]. This service was widely available even in low-income settings and helped maintain survivors' sense of self-esteem and well-being. Counselling as a form of psychosocial support was predominant because of the relative availability of professionals among service providers in the country. It is a relatively cheap form of support offered by the formal support services. Even though counselling alone was insufficient for addressing the needs of survivors, some psychotherapists note that counselling has the potential to empower survivors of DV. Counselling helps survivors to externalize their problems while at the same time harnessing their ability to internalize their personal agency to cope and to take measures to prevent their problems [68]. Defiance and the issuance of threats are ways in which DV survivors in Ghana have negotiated setbacks that are disempowering to them in terms of accessing formal support services.
Strengths and limitations of the study
The limitations of our study include recall bias, as participants may not have adequately remembered details of their experiences of abuse and of the services provided by formal support agencies. There was also the risk of social desirability bias, whereby survivors may have provided information in line with what is known to be socially acceptable. These limitations notwithstanding, the strength of our study rests in its contribution to evidence on the experiences of women survivors of domestic violence with formal support services in Ghana. Even though the study focused on urban Ghana, the findings may be useful for understanding issues of domestic violence against women in similar settings within and outside Ghana where formal support and protective services for survivors of domestic violence are poorly delivered.
Several policy implications and recommendations emerge from this study. The study findings demonstrate that even though Ghana's DV Act makes provision for justice and some social support interventions, only the police service and justice stipulations were operational. The main challenges that hindered the delivery of holistic services to DV survivors include inadequate training of court officials and police officers in the handling of gender-based violence, and the lack of infrastructure and operational logistics. Hence, there is a need for the government to increase its budgetary allocation to mandated institutions to facilitate in-service professional training programs that will equip personnel of the criminal justice system with skills to handle gender-based violence in accordance with international standards.
The root causes of DV, poverty and patriarchy, should be given close attention through the intensification of already running programmes on girl-child education, women's empowerment projects, and mass public education and sensitization exercises to increase awareness, targeting the wider community with special attention on children at an early age. The adoption of a policy of providing integrated services at one-stop centers will be effective in rendering protective services to survivors. These centers will ensure that victims of DV receive free medical care and accompanying medical reports, free legal assistance, and shelter. This will facilitate speedy recovery from their injuries and mitigate complications that might arise from delays in seeking medical treatment. Prompt medical attention would also ensure that good medical evidence is not lost to time, but adequately captured in reports to facilitate the adjudication of DV cases.
Conclusions
The experiences of DV survivors do not reflect the legal provisions of Ghana's DV Act. Government underfunding of formal services and negative gender norms are disempowering to survivors. NGOs are popular among women survivors of DV in Ghana for the education, legal, and material support they provide. A close collaboration between the government and NGOs could better mitigate DV in Ghana.
Table 1
Socio-demographic characteristics of women survivors of domestic violence, Weija-Gbawe Municipality, Ghana
Table 2
Experiences of women survivors with formal support services, Weija-Gbawe Municipality, Ghana
Challenges associated with utilization of formal support services
Delays and need for frequent follow-ups
Survivors in our study mentioned delays, especially within the criminal justice system, as the reason why they dropped their cases. Survivor 6 suspected that her abuser might have induced the police to dismiss the case following his arrest and cautioning. According to her, on his release he abandoned her and the family. She shared her experience: "The police made me go to the hospital and they arrested him. After he was released later, he packed his clothes and left without telling me anything. Subsequent follow-up with the case officer was futile, as his whereabouts were not known. Maybe he paid money to have the case dismissed." (Survivor 6, Interview, May 3, 2018) "…long time to be adjudicated… if you don't follow up repeatedly, the case will come to naught." (Women's rights advocate 1, Interview, June 12, 2018)
Data-driven prediction of mean wind turbulence from topographic data
This study presents a data-driven model to predict mean turbulence intensities at desired generic locations, for all wind directions. The model, a multilayer perceptron, requires only information about the local topography and a historical dataset of wind measurements and topography at other locations. Five years of data from six different wind measurement mast locations were used. A k-fold cross-validation evaluated the model at each location, where four locations were used for the training data, another location was used for validation, and the remaining one to test the model. The model outperformed the approach given in the European standard, for both performance metrics used. The results of different hyperparameter optimizations are presented, allowing for uncertainty estimates of the model performances.
Introduction
Wind turbulence, in the atmospheric boundary layer, is an important phenomenon in the design of civil structures for both static and dynamic wind loads, and for the safe operation of transport vehicles. It arises from both mechanical and thermal sources. Frictional forces between the moving air and the Earth's surface are the main drivers of atmospheric turbulence and are closely linked to the local topography. Thermal sources such as surface heating/cooling and downbursts can also cause turbulence in the atmosphere by convection.
Measuring the wind properties at some desired locations can be challenging, despite promising advances in remote sensing [1][2][3][4]. Cheynet et al. [5] showed a high heterogeneity of wind turbulence in a fjord with the wind direction, which can significantly impact the design of wind-sensitive bridges and other man-made structures. In these situations, wind measurements, when available, are often only found at nearby locations. If there is enough diversity in the topography of the available measurement locations and sufficient wind data is available, it is in principle possible to use machine learning to learn the complex effects that the upstream topography has on the wind turbulence.
Artificial neural networks (ANN) (see e.g. [6,7]) can be of different types. Among them, multilayer perceptrons [8,9] have been used in many problems in atmospheric sciences [10]. They have been used to e.g. predict wind speeds from ocean surface images [11], and effectively identify topographic features such as water bodies, hills and vegetation [12]. They are thus deemed adequate for the simplified vector inputs used in this study, despite a broader support for e.g. convolutional neural networks and transformers in more challenging computer vision tasks [13,14]. To the authors' knowledge, this study is the first attempt to use topographic information to predict mean wind turbulence intensities at new locations, without explicitly parametrizing the topography. Parametric models representing terrain effects are inherently imperfect and are based on numerous simplifications and difficult assessments in an attempt to systematically represent a complex terrain. They were previously proposed in e.g. the Eurocode [15], Engineering Science Data Unit [16] and Bitsuamlak et al. [17]. Other studies [18][19][20] model the dependencies between wind measurements at different locations and predict wind speeds, but are unable to predict mean wind characteristics at new locations where no measurements were available, given only information about the local topography. Bodini et al. [21] predict the turbulent kinetic energy dissipation rate while condensing the effects of the upstream topography into two variables, namely the standard deviation of the terrain elevation and the mean vegetation height, but also test their model at previously trained locations.
The model developed in this study is trained, validated and tested using measured along-wind turbulence intensities that are averaged within 1-degree-wide wind direction sectors, here denoted sectoral averages, and the topographic data associated with each sector, for each measurement mast location. The model hyperparameters were optimized after each iteration of a so-called k-fold cross-validation, and uncertainty estimates were provided for the model performance on each tested location.
Wind measurement data
Five years of wind data, between 2015 and 2020, from six measurement masts in the region around the Bjørnafjord, in Norway, are used. The locations and names of these masts are shown in Figure 1.
Each mast has 3 sonic anemometers (model: Gill WindMaster Pro) that measured the three components of the wind with a sampling frequency of 10 Hz. The anemometers are located at 13, 33 and 48 meters above ground. To avoid measurements affected by smaller nearby obstacles such as trees and buildings, which are not represented in the topographic data, only the data recorded at 48 m height was used, for simplicity. Thus, the turbulence intensities being predicted at the different locations also refer to a 48 m height above ground. The data is pre-processed to address faulty and missing data. An outlier detection is performed through a Z-score analysis, where the 99.99% most probable data is kept.
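The outlier screening can be illustrated with a short sketch. It is not the authors' code; the 99.99% retention criterion is interpreted here as a two-sided z-score cut (about |z| < 3.89 under a normal assumption), and the variable names are placeholders.

```python
import numpy as np
from scipy.stats import norm

def zscore_filter(x, keep_fraction=0.9999):
    """Return a boolean mask keeping only the most probable samples of x.

    For normally distributed data, keeping 99.99% of the samples corresponds
    to a two-sided cut of roughly |z| < 3.89.
    """
    z_max = norm.ppf(0.5 + keep_fraction / 2.0)   # ~3.89 for 99.99%
    z = (x - np.nanmean(x)) / np.nanstd(x)
    return np.abs(z) < z_max

# Hypothetical usage on the three raw 10 Hz wind components u, v, w:
# mask = zscore_filter(u) & zscore_filter(v) & zscore_filter(w)
# u, v, w = u[mask], v[mask], w[mask]
```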
For each 10-minute interval in the five-year period, the mean wind speed, the mean wind direction and the along-wind turbulence intensity are recorded for each anemometer at 48 m height, when available. A threshold of 5 m/s was adopted and observations with smaller mean wind speeds were discarded. High threshold values require more data but help to remove turbulence observations that are not likely governed by friction, but by e.g. local thermal effects.
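A minimal sketch of how the 10-minute statistics and the 5 m/s threshold might be computed is given below, assuming the usual definition of the along-wind turbulence intensity as the standard deviation of the along-wind component divided by the 10-minute mean wind speed. The column names, coordinate conventions and resampling details are assumptions, not the paper's implementation.

```python
import numpy as np
import pandas as pd

def ten_minute_statistics(df, u_min=5.0):
    """Compute 10-min mean wind speed, direction and along-wind turbulence intensity.

    df is assumed to have a DatetimeIndex at 10 Hz and columns 'u' (along-wind
    component after rotation into the mean wind direction) and 'vx', 'vy'
    (horizontal components in a fixed frame). Intervals with mean speed below
    u_min are discarded.
    """
    grouped = df.resample("10min")
    speed = np.sqrt(df["vx"] ** 2 + df["vy"] ** 2).resample("10min").mean()
    # Mean direction from the averaged horizontal components (convention is illustrative)
    direction = np.degrees(np.arctan2(grouped["vy"].mean(), grouped["vx"].mean())) % 360.0
    iu = grouped["u"].std() / speed            # along-wind turbulence intensity
    stats = pd.DataFrame({"U": speed, "dir": direction, "Iu": iu})
    return stats[stats["U"] >= u_min]          # apply the 5 m/s threshold
```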
Topographic data
The Norwegian mapping authority provides freely accessible Digital Terrain Models of Norway [22]. A 10 × 10 meter resolution model was used (DTM 10), consistently represented in the map projection system UTM 33. For each mast and for each 1-degree-wide wind sector, a 10 km long upstream terrain profile aligned with the wind was obtained. Note that a 10 km fetch is also suggested in NS-EN 1991-1-4:2005+NA:2009 NA.4.3.2(2) (901.1). The heights above sea level of 45 points along the profile, at the upstream distances [0, 10, 30, 60, 100, 150, …, 9900] meters, were collected into a normalized terrain profile vector, where for each single point a min-max normalization is performed using that point's extreme values (over all masts and directions), as exemplified in Figure 2. Note the linearly increasing distance between points. This decrease in resolution assumes that, far upstream, only larger topographic features still affect turbulence (see e.g. [15], NA.4.3.2(2) (901.2)). Different sizes of this profile vector, between 15 and 60 points, were also tested, with roughly similar results. To consider the effect of the different categories of terrain roughness, a terrain-roughness vector was added to the data used. Two categories were considered, sea and ground, normalized into a binary vector, but more terrain categories could be included.
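The assembly of one input sample could look as follows. The sampling distances reproduce the sequence quoted above (spacing increasing linearly from 10 m up to 9.9 km), and the per-point min-max normalization follows the description; the DTM lookup and land/sea mask callables are placeholders, since the actual data pipeline is not given in the paper.

```python
import numpy as np

# Upstream distances in meters: spacing grows linearly (10, 20, 30, ... m),
# i.e. d_k = 5*k*(k+1), which reproduces 0, 10, 30, 60, 100, 150, ..., 9900.
K = np.arange(45)
DISTANCES = 5.0 * K * (K + 1)

def terrain_profile(mast_xy, wind_dir_deg, elevation_at, is_sea_at):
    """Return raw elevations and a binary sea/ground vector for one wind sector.

    elevation_at(x, y) and is_sea_at(x, y) are placeholder callables querying the
    DTM and a land/sea mask in UTM 33 coordinates (assumed interfaces).
    Wind direction is taken as a bearing (degrees clockwise from north), so the
    upstream direction points towards where the wind comes from.
    """
    theta = np.radians(wind_dir_deg)
    dx, dy = np.sin(theta), np.cos(theta)
    xs = mast_xy[0] + DISTANCES * dx
    ys = mast_xy[1] + DISTANCES * dy
    z = np.array([elevation_at(x, y) for x, y in zip(xs, ys)])
    r = np.array([1.0 if is_sea_at(x, y) else 0.0 for x, y in zip(xs, ys)])
    return z, r

def minmax_per_point(profiles):
    """Min-max normalize each of the 45 points using its extremes over all samples.

    profiles: array of shape (n_samples, 45) with raw elevations.
    """
    zmin, zmax = profiles.min(axis=0), profiles.max(axis=0)
    return (profiles - zmin) / np.where(zmax > zmin, zmax - zmin, 1.0)
```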
Artificial neural network
An artificial neural network (ANN) was established using PyTorch (v.1.9.0), a Python library for deep learning. A multilayer perceptron arrangement was used, whose representation is shown in Figure 3. The ANN predicts the sectoral/directional averages of the along-wind turbulence intensities, i.e., the mean value of all turbulence intensity observations within each 1-degree-wide wind sector, at each wind mast, at 48 m above ground. A k-fold cross-validation method is used where the data is divided into six folds and where each fold corresponds to the data of one measurement mast location. This forces the model to predict turbulence intensities at locations it has never "seen" before. Each fold contains up to 360 data samples, one for each wind sector. The procedure for training, validating, optimizing and testing is further detailed in Figure 3. The domain of hyperparameters investigated is described in Table 1. A min-max normalization is applied to all inputs and target outputs to improve learning and stability. The target values are compared with the predicted values through a loss function, and learning is achieved by backpropagation. A batch gradient descent was found suitable due to the limited data size and the use of GPU-accelerated algorithms. The hyperparameters were optimized to maximize the R2 value (coefficient of determination) of the validation data predictions, using 500 iterations with a so-called "Tree-structured Parzen Estimator Approach". This is preferable to grid and random searches and has been shown to have a good balance between performance and computer efficiency when compared to other methods such as Gaussian processes and random forests [23,24]. Since the resultant "optimal" hyperparameters depend on the initial conditions, 20 initial sets of arbitrary hyperparameters, and thus 20 different models, were used to estimate the uncertainty of the R2 of the final testing data predictions. Lastly, when predicting the sectoral averages instead of each 10-min occurrence of the turbulence intensity, the topographic effects are better isolated and other time- and thermal-related effects can be disregarded.
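The core of such a model can be sketched in a few lines of PyTorch. The layer sizes, activation, optimizer and epoch count below are illustrative placeholders (the actual hyperparameters were tuned with the Tree-structured Parzen Estimator); the sketch only shows a 90-dimensional input (45 normalized elevations plus 45 binary roughness flags) mapped to one sectoral-average turbulence intensity, trained with full-batch gradient descent as described.

```python
import torch
import torch.nn as nn

class TurbulenceMLP(nn.Module):
    """Multilayer perceptron mapping a 90-dim topographic vector to one turbulence value."""

    def __init__(self, n_in=90, hidden=(64, 64), dropout=0.0):
        super().__init__()
        layers, d = [], n_in
        for h in hidden:                       # hidden sizes are illustrative, not the tuned values
            layers += [nn.Linear(d, h), nn.ReLU(), nn.Dropout(dropout)]
            d = h
        layers.append(nn.Linear(d, 1))         # normalized sectoral-average turbulence intensity
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_fold(model, x_train, y_train, epochs=2000, lr=1e-3):
    """Full-batch gradient descent on one training fold (all samples in a single batch)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()
    return model
```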
Norwegian Standard -Eurocode NS-EN 1991-1-4
For comparison purposes, the along-wind turbulence intensity is also estimated following the Norwegian Standard and Eurocode NS-EN 1991-1-4 (ref. [15]). The measurement masts presented in this study are in a region with strong contrasts of terrain roughness, namely sea water (terrain cat. 0) and forests in relatively small hills (terrain cat. III). This transition in the upstream terrain roughness is considered in the Eurocode NA.4.3.2(2) (901.2.2). Different orographic effects on turbulence could also be considered. Those described in NA.4.3.3 (901.2.1) and NA.4.3.3 (901.3.2) can be applicable to some of the studied locations. However, in NA.4.4 it is not clear how to combine these effects with those from the different terrain roughnesses upstream, so only the latter ones are considered. Also, the orography factor is intended to represent isolated hills and escarpments, not undulating and mountainous regions.
To consider the upstream roughness heterogeneity, the upstream terrain is divided into two continuous patches of either terrain category 0 or III. The lengths of the two patches and the location of the transition between them were found iteratively for each mast and wind direction, by minimizing the number of misclassifications when compared to the original roughness vector.
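One straightforward way to implement the iterative search described above is an exhaustive scan over all transition points of the 45-point roughness vector, keeping the split that minimizes the number of misclassified points. The sketch below assumes a binary sea/ground encoding and illustrates the idea, not the authors' exact procedure.

```python
import numpy as np

def best_two_patch_split(roughness):
    """Approximate a binary roughness vector (ordered from the mast outwards) by two
    homogeneous patches, choosing the transition index and patch categories that
    minimize the number of misclassified points."""
    r = np.asarray(roughness, dtype=int)
    n = len(r)
    best = (0, 0, 0, n + 1)   # (transition index, near-mast category, far category, errors)
    for split in range(n + 1):
        for cat_near in (0, 1):
            for cat_far in (0, 1):
                pred = np.concatenate([np.full(split, cat_near), np.full(n - split, cat_far)])
                errors = int(np.sum(pred != r))
                if errors < best[3]:
                    best = (split, cat_near, cat_far, errors)
    return best
```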
Results and discussion
The main results of the data-driven analysis are presented in Figure 4, Figure 5 and Figure 6.
In Figure 4, the predictions of one ANN model, per location, are plotted. The plotted models were those whose performance (R2) was closest to the average performance of all models for a given location (dark red dots in Figure 5). Displaying only the best performing ANN models would lead to bias, due to a regression to the mean of future dataset test performances, and is thus avoided. Contour and line plots are shown for each mast location. The contour plots show the upstream topography for each mean wind direction, with the same resolution as given in the input data for the terrain profile and roughness vectors (see Section 2.2). A blue color is superposed to represent the sea water, with lower surface roughness. The line plots show the measurements and ANN predictions of the sectoral-average turbulence intensities. The sectoral averages of the mean wind speeds, from the data described in Section 2.1, are also included for completeness.
Upstream hills close to the masts affect the results to a greater extent than hills further away. Long upstream fetches of water are characterized by low turbulence intensities. The ANN predictions are best at Ospøya 1 and Ospøya 2, as expected, due to the proximity (260 meters) and topographic similarity between them. Some predicted values at Ospøya 2, Landrøypynten and Nesøya seem slightly misaligned with the measurements. This can be due to local deflections of the wind direction around hills and/or due to discrepancies between reported and real anemometer orientations. At Svarvhelleholmen, the ANN underestimates turbulence for southern winds due to the inexistence of such high turbulence intensities in its training database. The Eurocode prediction also underestimates turbulence, but it could be argued that the alternative procedure in NA.4.3.3 (901.3.2) ("Lower lying construction site downstream of a hill or escarpment") would lead to slightly higher turbulence intensities for this particular site and direction. At Synnøytangen, the presence of nearby buildings and tall trees presumably affects the measurements to some extent for some directions. In Figure 5, the R2 values between the predictions of all ANN models and the tested measurements are shown as an indication of the model performances. Note that the hyperparameter optimization is a chaotic process that is dependent on the initial conditions, hence the 20 models per tested location and the associated R2 uncertainty estimates. A value of R2 = 1 indicates a perfect fit, whereas R2 = 0 indicates a fit that is as good as a simple average of all 360 sectoral values (which is unknown a priori). Another performance metric, accuracy, is also included, taken as 100% − (mean absolute percentage error).
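The two performance metrics used here can be computed directly, for instance as follows (scikit-learn is used only for convenience; variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import r2_score

def performance_metrics(y_measured, y_predicted):
    """Coefficient of determination and accuracy = 100% - mean absolute percentage error."""
    r2 = r2_score(y_measured, y_predicted)
    mape = 100.0 * np.mean(np.abs((y_measured - y_predicted) / y_measured))
    return r2, 100.0 - mape
```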
In Figure 6, seven histograms show the final choices of the hyperparameters for all the different ANN models tested, after all optimization iterations were complete. It took roughly 110 hours to compute the 6 mast locations × 20 ANN models × 500 optimization iterations, on a laptop PC (Intel Core i7-8850H, 64 GB 2666 MHz RAM, Nvidia Quadro P4200). For all masts, the ANN predictions were able to roughly capture the main trends of the mean turbulence intensities with the location and wind direction, showing overall better performances than the Eurocode predictions. Nonetheless, it remains a challenging task to accurately predict turbulence, regardless of the model adopted.
Conclusions
A data-driven model was developed to predict mean wind turbulence intensities for each mean wind direction in a complex terrain, where no wind measurements are available. The model consists of an artificial neural network, namely a multilayer perceptron, whose hyperparameters were systematically optimized to improve the predictions. First, a database of topographic data and measured turbulence intensities at 48 meters height above ground, at different locations, for each wind direction, was used to train the model. Each topographic data sample consisted of 45 terrain elevation points, associated with a location, a wind direction and an upstream terrain profile, plus 45 binary classifications of those points' roughnesses into "ground" or "sea". Then, the model required only the topographic data at the desired new location to predict the mean turbulence intensities at the same height above ground, for each mean wind direction.
For the six locations studied, prediction accuracies between 72% and 87% were obtained, despite the relatively small training databases with only four or five locations. The model outperformed the procedures given in the relevant standard (Eurocode NS-EN 1991-1-4), which inherently require numerous simplifications that are difficult to implement and systematize in a complex terrain. The model is simple to establish, and the suggested framework can be easily adapted to include other input features and/or to predict other wind properties.
These findings can be useful when estimating the design wind loads on structures in complex terrains as a function of the wind direction. The proof-of-concept presented could also encourage other stakeholders in establishing a comprehensive and global database, with a larger number of measurement locations and diversified topographies, which could lead to an increase in model accuracy and reliability. Such a database and model could significantly impact the design, safety and cost-effectiveness of wind sensitive structures.
Recommendations for further work
A few recommendations and ideas on how to expand the current work are as follows:
• Wind measurements at different heights above ground should be collected, to expand the scope of the model and capture the turbulence relationship with the height above ground.
• More terrain categories, or a continuous roughness parameter, could be directly estimated as in [25,26], using e.g. the finer point cloud models available in [22] (0.25 × 0.25 m resolution).
• The crosswind and vertical turbulence intensities, often assumed to have a linear relationship with the along-wind turbulence, could be included in the model.
• Expanding the inputs to "see" a wider upstream topography, such as a ±15° sector around the wind direction, could improve the predictions and capture effects such as wind deflection around hills and the horizontal diffusion of turbulence. All-around topographies could also be considered, to capture channeling and downstream blockage effects. In the present study and limited data, this resulted in no obvious gains in accuracy.
• Convolutional neural networks and other state-of-the-art computer vision models could be used to capture the spatial information of the expanded inputs mentioned above.
• A hybrid ANN + Eurocode model could be pursued, where the Eurocode predictions could be added to the ANN inputs.
• Predefined probability density functions of wind turbulence could be predicted instead of the sectoral mean turbulence intensity. Attempts in the present study have shown that functions with more parameters resulted in a better representation of the real data, but led to worse predictions, and vice-versa, presumably due to the lack of data in some wind sectors and the small number of mast locations in the database.
Lattice measurements of nonlocal quark condensates, vacuum correlation length, and pion distribution amplitude in QCD
Recent data of lattice measurements of the gauge-invariant nonlocal scalar quark condensates are analyzed to extract the short--distance correlation length, 1/\lambda_q, and to construct an admissible Ansatz for the condensate behaviour in a coordinate space. The correlation length values for both the quenched and full-QCD cases appear in a good agreement with the well-known QCD SR estimates of the mixed quark-gluon condensate, 2 \lambda_q^2 =<\bar{q}(ig\sigma^{\mu\nu}G_{\mu\nu})q>/<\bar{q}q>= 0.8 - 1.1 GeV^2. We test two different Ansatzes for a nonlocal quark condensate and trace their influence on the twist-2 pion distribution amplitude by means of QCD SRs. The main features of the pion distribution amplitude are confirmed by the CLEO experimental results.
Introduction
The results of long-awaited lattice measurements of gauge-invariant nonlocal quark condensates have been published recently in [1]. These data provide a possibility to examine directly the models of nonlocal-condensate coordinate behaviour. Different models have been suggested in the framework of QCD sum rules (SR) for dynamic properties of light mesons [2,3,4,5,6]. We used an extended QCD SR approach with nonlocal condensates [4,3,7,8] as a bridge to connect meson properties, form factors, and distribution amplitudes with the structure of the QCD vacuum. For light meson phenomenology, the value of the short-distance correlation length l in the QCD vacuum, l ≃ 1/λ_q, has a paramount significance, while the details of a particular condensate model are of next-to-leading importance. We demonstrate that the original lattice data in [1] allow one to extract a reasonable value of the correlation scale λ_q^2, in agreement with our vacuum condensate models. At the very end, the lattice data support our conclusion on the shape of the pion distribution amplitude [2,4,8], and vice versa, the CLEO experimental data [9] on pion photoproduction confirm our suggestion about the range of the correlation scale values in an independent way [10].
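For reference, the correlation scale discussed here is the quantity already defined in the abstract through the ratio of the mixed quark-gluon condensate to the quark condensate; written out, and using the numerical window quoted there,

```latex
\lambda_q^2 \,=\, \frac{\langle \bar{q}\, ig\,\sigma^{\mu\nu} G_{\mu\nu}\, q \rangle}{2\,\langle \bar{q} q \rangle}
\,\simeq\, 0.4 - 0.55~\mathrm{GeV}^2 .
```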
2 Models of nonlocal quark condensates Small x 2 behavior. Let us start with the general properties of gauge-invariant nonlocal quark condensates (NLCs) F S,V (x 2 ) following from their definitions where σ, ρ are spinor indices, the integral in the Fock-Schwinger string E(0, x) is taken along a straight-line path. First of all, the condensates F S,V (x 2 ), . . . should be analytic functions around the origin, and their derivatives at zero are related to condensates of the corresponding dimension. Expanding F S,V (x 2 ) in the Taylor series at the origin in the fixed-point gauge A µ (y)y µ = 0 (hence E(0, x) = 1), one can obtain [11]: F S (x 2 ) = q(0)q(x) qq = q 1 + 1 2! (xD) 2 + 1 4! (xD) 4 + · · · q qq = 1 + Here D µ = ∂ µ −igA a µ t a is a covariant derivative; m q , the quark mass; J a µ , quark vector current; and the expansion coefficients Q i appearing in (2)-(3) are vacuum expectation values (VEVs) of quarkgluon operators O i of dimension i, Q i = O i . These expansions with the explicit expressions for the condensates Q i have been derived in [11], (see Appendix A). Condensates of lowest dimensions form the basis of standard QCD SR [12] and have been estimated, while the higher-dimensional VEVs, Q 7 (0) , Q 7 (mq) , Q 8 (0) . . . are yet unknown. Note here that the matrix element of the 4-quark condensate Q 6 is not known independently. Instead, it is usually evaluated in the "factorization approximation", the accuracy is estimated to be about 20% [12], The "mixed" condensate Q 5 is expressed in the chiral limit as and the parameter λ 2 q 2 fixes the width of F S (x 2 ) around the origin. This important quantity has been estimated within the QCD sum-rule approach Estimates of λ 2 q from instanton approaches [15] are somewhat larger: λ 2 q ≥ 2/ρ 2 c ≈ 0.6 GeV 2 where ρ c ≈ 1.7 − 2 GeV −1 is an average characteristic size of the instanton fluctuation in the QCD vacuum. Taking into account all these estimates, in the following, we put a rather wide window 0.6 ≥ λ 2 q ≥ 0.4 GeV 2 for its value ("QCD Range" in Figs.2, 3). Large x 2 asymptotics from HQET. The large-|x| properties of the NLC F S (x 2 ) have been analyzed in detail in [16] in the framework of QCD SRs for heavy-quark effective theory (HQET) of heavy-light mesons. It was demonstrated that for a large Euclidean x, NLC is dominated by the contribution of the lowest state of a heavy-light meson with energy Λ = (M Q − m Q )| m Q →∞ , and F S (x 2 ) ∼ exp{−|x|Λ} (numerically, Λ is around 0.45 GeV ). In the following, we shall take this asymptotic behavior for NLC at large |x| Hints from QCD SRs, Gaussian Ansatz. To relate the NLC behaviour with the properties of mesons via QCD SR, it is convenient to parameterize the x 2 -dependence of (2)-(3) by the distribution functions f X (α) a'la α-representation for a propagator 1, S-case; 0, V-case, chiral limit.
Here we use the Euclidean interval x 2 = −x 2 E > 0, and the subscript E will be omitted below for simplicity. The representation (9) allows one (i) to involve smoothly NLCs into diagrammatic techniques, and (ii) to clarify the physical properties of NLCs. Indeed, functions f S,V,... (α) introduced in [2] describe the distribution of quarks over virtuality α in nonperturbative vacuum. The moments of f S,V,... coincide with Taylor expansion coefficients in (2)-(3). For example, we have in S case in the chiral limit :qD 2 q : :qq : with λ 2 q meaning the average virtuality of vacuum quarks. Higher moments of f S (α) are connected with higher-dimensional VEVs. The difference between the Ansatzes for F S (x 2 ) looks more pronounced just for its f S (α)-images. It is evident that distributions extremely concentrated at the origin f S (α) ∼ δ(α), δ (1) (α), . . . correspond to separate terms of the Taylor expansion (2)-(3). At the same time, f S (α) = const simulates a free propagation with zero mass: F (x 2 ) ∼ const/x 2 . The Gaussian Ansatz takes account of a single scale -"inverse width" λ q of the coordinate distribution and corresponds to the virtuality distribution f S = δ(α − λ 2 q /2). It fixes only one main property of the nonperturbative vacuum -quarks can flow through the vacuum with a nonzero momentum k, and the average virtuality of vacuum quarks is k 2 = λ 2 q /2. The Gaussian behavior at very large |x| does not correspond to the expected NLC asymptotics (8). But for the moment QCD SRs, that deal with the smearing quantities -moments of distribution amplitudes [2,4], form factors [17,3,18], the incorrect asymptotics of NLC as well as subtle details of the Ansatz shape are expected to be not so important. It is interesting that the Gaussian behaviour is supported by a model of the nonperturbative propagator, [19], based on simple "local-duality" arguments. Namely, Eq.(16) in [19] demonstrates a behaviour rather close to Eq.(11) within a physically important region 1 Fm. Unfortunately, corresponding F S (x 2 ) decays too quickly beyond this region, it becomes negative and oscillating, which is physically unclear.
Certainly, a more realistic model of f (α) should possess a finite width: we expect that it is a continuous function concentrated around a certain value λ 2 q /2 and rapidly decaying to zero as α goes to 0 or ∞. Moreover, the continuous distribution f S (α) over virtuality is directly related to the pion distribution amplitude ϕ π (x) (for details, see sect.4), as it was demonstrated with the help of the "nondiagonal" correlator in [5,7]. "Advanced Ansatz". To construct models of nonlocal condensates, one should satisfy some constraints. For instance, if we assume vacuum matrix elements :q(D 2 ) m q : to exist, then the function f (α) should decay faster than 1/α m+1 as α → ∞. If for all m, all such matrix elements exist, a possible choice could be a function f (α) ∼ α n exp(−ασ), etc. at large α. The opposite, small-α, limit of f (α) is determined by the large-|x| asymptotics (8) of the function F S (x 2 ). This means that f (α) ∼ exp(−Λ 2 /α) in the small-α region. By the simplest composition of both the asymptotics [5,7], we arrive at the class of Ansatzes [5,7] f S (α; n) ∼ α n−1 e −Λ 2 /α−σ 2 α that gives a coordinate behavior where K n (z) is the modified Bessel function. The distribution f A S (α) ≡ f S (α; n = 1) with the parameters Λ q ≃ 0.45 GeV and σ 2 q ≃ 10 GeV −2 is presented in Fig.1 in comparison with the distribution f S (α; n = 5). In this case, the behavior of F A S (x 2 ) ≡ F S (x 2 ; 1) is similar to that of the massive scalar propagator with a shifted argument: The short-distance correlation scale λ 2 q in (14) appears to be equal to 2/σ 2 at Λσ ≪ 1, reproducing the single instanton-like result [15] where the parameter σ imitates the instanton size ρ c . In the opposite case at Λσ ≫ 1, λ 2 q is proportional to 2Λ/σ. The asymptotics of F A S (x 2 ) at large |x| ≫ 2σ § is determined by the parameter Λ, where Λ plays the role of an effective mass. This interpretation is transparent in the momentum representation of the NLC:F A S (p 2 ) tends to the usual propagator form at small values of all the argumentsF "Advanced" Ansatz has been successfully applied in the nondiagonal QCD SR approach to the pion and its radial excitations [7], and the main features of the pion have been described: the mass spectrum of pion radial excitations π ′ and π ′′ in close agreement with experimental data.
Vacuum correlation length from lattice data
Here we consider the results of fitting the lattice simulation data for both kinds of Ansatz, F_S^G(x^2) and F_S^A(x^2), introduced previously for the nonperturbative part of a correlator.
Processing the Pisa lattice data.
The correlator was measured on a Euclidean lattice [1] at four points, x = 2a, 4a, 6a, 8a, inside 1 Fm, where a ≃ 0.1 Fm is the lattice spacing. The simulation was performed with four flavors (N_f = 4) of staggered fermions with the same mass m_q for every flavor and the SU(3) Wilson action for the pure gauge sector. Both the full QCD case (f-QCD), i.e., including the effects of "loop" fermions, with the flavour masses m_q·a = 0.01, 0.02, and the quenched case (q-QCD, where the effects coming from loops of dynamical fermions are neglected), with a set of flavour masses m_q·a = 0.01, 0.02, 0.05, 0.10, were considered; see Appendix B and [1] for details.
The correlator can be written as a sum of a perturbative-like term, B · F pt S (x 2 ), proportional to m q /x 2 at very short distances, and a nonperturbative part, A · F npt S (x 2 ). Parameters of the latter are the main goal of the fit. For the perturbative part, the authors of [1] have used just the shortdistance asymptotics; whereas for the nonperturbative one, an exponential Ansatz F npt S (x 2 ) = exp(−µ 0 |x|) that corresponds to the asymptotics of very large distances x 2 ≫ 1 Fm 2 . In other words, these two approximations, involved in the fit simultaneously, are adapted for different regions of x. It seems more dangerously that the exponential Ansatz is a non-analytic function of x 2 at the origin: it depends on |x| = √ x 2 and its derivatives with respect to x 2 do not exist at the origin. Therefore one loses any connection with local VEVs appearing in the OPE and with the corresponding interpretation of the correlation length, see section 2. Nevertheless, the fit of the § The asymptotics starts with x 2 ≫ 2 Fm 2 at the mentioned value of σ-parameter. lattice data has been performed [1] and demonstrated very small χ 2 . Note, the values of χ 2 /N d.o.f. in the Tables below should be considered as purely indicative of the best fit quality. One could not interpret these values in a standard statistical sense, because their true norm is unknown, see brief discussion in [1]. The extracted quantities, A, µ 0 , B ¶ are presented in Table 1 (for every run) for a comparison with our results. (5) 6.4 · 10 −3 6.00, q 0.01 1.6(5) 0.16(4) 0.9(1) 7.6 · 10 −2 5.91, q 0.02 2.3(7) 0.26 (3) 1.25 (7) 5.2 · 10 −2 6.00, q 0.05 1.8(4) 0.34 (2) 1.4(2) 0.2 6.00, q 0.10 5.6(5) 0.55(1) We test different NLC Ansatzes and extract the parameters of the correlator using three-step procedure. At the first step, we fit rearranged formulas for the correlator. Note that the masses of light flavors appear unnaturally large in the lattice simulation and cannot be considered small (for the values of masses in MeV, have a look at the axes of Fig.3). For this reason, one should take account of all possible mass terms in both parts of C 0 : Following [20], we fix the perturbative-like part in the form of free propagation F pt = mq x K 1 (m q x) with mass m q . For nonperturbative part, we keep all the known mass-terms from the expansion (2), see the expression for Q 6 (mq ) in Appendix A. Table 3: Fit for the "Advanced" Ansatz, In the original table [1], the dimensionless combination aB 0 has been shown. Due to the previous footnote we recalculate the corresponding B parameter for their fits.
As a result of the fit, we extract, from lattice data, an intermediate correlation scale and depends on lattice conditions, namely m q , a, and β. We expect that this quantity coincides in the chiral limit with λ 2 q , determined in (6) within the massless QCD, i.e. , λ 2 The results for A, B and λ 2 L (m q ), obtained in various cases * * are collected in Tables 2 and 3; let us outline the main features: (i) The χ 2 for the cases of both Ansatzes look higher than in Table 1, but still sufficiently small, especially for f-lattice. An exception is the last run for the q-lattice with the largest mass m q · a = 0.10 that corresponds to m q ≃ 200 MeV [1]. To process the data with so a huge mass, one should involve a lot of mass-terms into the fit formula (17). For this reason, we exclude this run results from subsequent analysis.
(ii) The extracted coefficient B that fixes the perturbative-like contribution should not change significantly from one run to another for both Ansatzes and for both kinds of lattices. This property can signal about a good quality of the fits, it confirms the reliability of the fit at least for the f-lattice case, see Tables 1 and 2. (iii) The fitted parameter λ 2 L (m q ) that fixes the behavior of the nonperturbative part has a strong and monotonic dependence on m q , in contrast with the parameter B, see item (ii).
Extrapolation to the chiral limit. At the second step, we extrapolate the intermediate λ 2 L (m q ) to the chiral limit, suggesting a simple linear dependence on m q → 0. The linear law seems to be rather naive, but the observed strong dependence of λ 2 L (m q ) on the quark mass is well supported by the data, see graphics in Fig.2, 3. Really, the linear extrapolation of the first three run results for the q-lattice is self-consistent for both Ansatzes (Fig.2(b), 3(b)), and the results are in a reasonable agreement with the corresponding f-lattice results (Fig.2(a), 3(a)). L -black points with error bars; for the chiral limit ( m q → 0) and the evolution to the scale 1 GeV 2 for the Gaussian Ansatz both in full (part (a)) and quenched (part (b)) LQCD. Dashed black lines on both parts correspond to chiral limit procedure with taking into account error-bars, black blobs -to the central points of the chiral limit, and red blobs -to the resulting values of λ 2 q on the QCD scale of 1 GeV 2 . Red arrows show the overall error-bars of the extracted λ 2 q (µ 2 = 1 GeV 2 ), whereas green thick lines bound the QCD preferred region.
The explicit results for λ 2 0 are presented in the first line of Table 4: note, λ 2 0 for the q-lattice is not smaller than for the f-one for both Ansatzes; the large error bars for λ 2 0 appear due to the roughness of the chiral extrapolation procedure. * * The Gaussian Ansatz has been tested in a lattice in [21] (after our suggestion in a private communication) without any mass corrections. The fit-result, λ 2 L (m q · a = 0.01) = 0.46(5), appears somewhat larger than our fit-result in Fig.2(a). To clarify the reliability of the approximation, we repeat the same extrapolation procedure with the dimensionless "lattice quantities", L 2 = (a · λ L ) 2 and M = a · m q . Lattice spacing a involved into this extrapolation is different for different runs (see Appendix B, Eqs.(B.1)-(B.2)). To return to "physical" quantity λ 0 at the very end, we adapt an average spacing a q corresponding to the q-lattice case, and a similar average one a f -to the f-lattice case (see Appendix B). It is clear, that this procedure is even more crude than the previous one. Nevertheless, the corresponding λ 0 well agree with the previous "three-point" q-lattice result, compare the first and third lines in Table 4. But the procedure falls down for f-lattice data, the results appear too small.
At the third step, we perform an evolution to continuum normalization scale. To compare the results for the shape of NLC F S (x 2 , µ 2 ) (or distribution f (α, µ 2 ) at scale µ 2 ) with the same quantity taken on another scale Q 2 , the corresponding evolution law in µ 2 is required. For a general case it looks as ambitious and rather complicated problem that is yet unsolved. But in our partial case if we fix how λ 2 q (µ 2 ) evolves with µ 2 , then the evolution law for both Ansatze is also fixed. Therefore, we shall consider the evolution of single characteristic of the shapeλ 2 q (µ 2 ), setting the equivalence λ 2 q (µ 2 L ) = λ 2 0 in a lattice. We live not in a lattice, but, instead, in a continuum. Therefore we need a procedure to relate lattice measurements with our observables in continuum, the corresponding one-loop evolution law for this kind of a lattice presented, e.g. , in [22], The continuum quantities enter into the l.h.s. of Eq.(18); while the lattice quantities, into the r.h.s.. Here Λ M S /Λ L = 76.44, b 0 = 11/3C A − 4/3T R N f -first coefficient of the β-function, and γ λ = (2C A − 8C F ) = −14/3 -one-loop anomalous dimension for λ 2 q (µ 2 ), calculated in [23]. So, using formula (18) to return to continuum λ 2 q (µ 2 = 1 GeV 2 ), we obtain the final results for λ 2 q appearing just in the "QCD range", see Figs.2, 3. Let us consider these evolved results, presented in Table 4, in more detail.
(i) The Gaussian Ansatz. The mean values of λ 2 q for f-and q-lattices become closer to one another after the evolution starting from the corresponding different values of λ 2 0 . This "focusing" effect is due to a difference of the evolution in both the cases: for the f-lattice (β = 5.35; b 0 = 25/3, N f = 4) the transition factor from Eq.(18) is 2.88, while for q-lattice (β ≈ 6; b 0 = 11, N f = 0 ) -2.11. Note here that if one attempts to exchange, by hand, these evolution laws (q for f and vice versa), then the final numbers diverge out of the QCD range. The observed focusing may demonstrate a complementarity of the chiral limit results to the evolution law.
(ii) The Advanced Ansatz. The value of λ 2 q for f-lattice, 0.5 GeV 2 , appears to be very close to an average λ 2 q for the Gaussian Ansatz in the left side of Table 4. The result for the q-lattice is less than this average estimate and located near the low boundary of the QCD range. But, in virtue of huge error bars, the latter result does not contradict the f-lattice result.
Finally we can conclude that Pisa lattice data reported in [1] really "feel" the short distance correlation scale in the QCD vacuum. The data processing explicitly demonstrates that the extracted mean values of λ 2 q are in agreement with the estimates from the QCD SR approach (7) for all the considered cases. Moreover, the results are in agreement with the old lattice result, λ 2 q ≈ 0.55 GeV 2 , obtained in [24] on a q-lattice. Huge errors bar are the problem of these estimates, and this does not allow one to confirm the agreement with the "QCD range" once and for all. The main uncertainty follows from the chiral limit procedure ("second step"). To reduce the uncertainty, one should improve the theoretical part in (17) as well as the "lattice" part of the fit. Namely, we need more numbers of the lattice runs with a moderate/small quark masses M = a · m q for a reliable extrapolation; to process all existing results, we should include the most important subset of mass-terms into the nonperturbative part of the fitted formula (17). Results of extrapolation to m q → 0 are evolved to the standard normalization scale µ 2 0 = 1 GeV 2 . Crosses in full QCD columns mean the breakdown of chiral limit procedure.
Another problem is to extract the quark condensate value | qq | from the lattice QCD data. In the fit it is just the coefficient A divided by the number of flavors N f = 4. But the three-step procedure fails for this data † † providing too small values for the quark condensate.
Another possibility is to construct the RG-invariant quantity m q (µ 2 ) qq (µ 2 ) in the lattice to avoid both the chiral limit and the renormalization effects. In this way, we obtained different estimates for every run in full the lattice QCD; the estimate region is that should be compared with the well-known value fixed by the current algebra (20) † † The authors of [1] also did not obtain reasonable estimates for qq using this kind of data, therefore they have performed an individual measurement of the quantity, see Eq.(3.3-3.5) in [1] The estimate (19) produces for real QCD case with the current masses m u,d ≃ 5.5 MeV qq (1 GeV 2 ) 1/3 = 265 − 358 MeV vs the standard value 250 MeV, [12].
The values are overestimated, although the lowest one corresponding to the run with m q · a = 0.01 looks reasonable.
Nonlocal quark condensates and pion distribution amplitude
The pion distribution amplitude (DA) of twist-2, ϕ π (x, µ 2 ), is a gauge-and process-independent characteristic of the pion that universally specifies the longitudinal momentum xP distribution of valence quarks in the pion with momentum P (see, e.g., [25] for a review), Due to factorization theorems [26,27], it enters as the central input of various QCD calculations of hard exclusive processes. Here we illustrate how a value of the correlation scale λ 2 q (∼ 1/l 2 ) can affect the shape of the pion DA. First, we consider NLC QCD SR for DA moments that provides the smearing quantities, moments ξ N π = 1 0 (2x − 1) N ϕ π (x)dx, to restore a profile ϕ π (x) of the pion DA. The NLC QCD SR are based on different kinds of Gaussian Ansatzes [2,4] for NLCs that naturally appear in the theoretical (r.h.s.) part of the SR. Pion DA from NLC QCD SR for pion DA moments. The Gaussian Ansatz. The SR involves 5 different kinds of nonlocal condensates in addition to the scalar condensate contribution, ∆Φ S (x; M 2 ), for details see [2,4,8]. The scalar NLC contribution results from the "factorization Figure 4: Graphical representation of ϕ π1 (x, µ 2 ) (part (a)) and ϕ π2 (x, µ 2 ) (part (b)) at the characteristic renormalization point µ 2 ≃ 1 GeV 2 . The thick solid lines in both plots denote ϕ opt π1,2 (x), i.e., the best fit to the determined values of the moments, whereas dashed lines illustrate admissible options approximately with χ 2 ≤ 1.
approximation" to the nonlocal four-quarks condensate, and its accuracy is yet unknown, compare with the approximation, Eq. (5). But in any case the contribution ∆Φ S (x; M 2 ) is numerically the largest one for not too high moments. Therefore, the main features of the shape of ϕ π (x) in Fig.4 is, roughly speaking, the net result of the interplay between the perturbative contribution and the nonperturbative term ∆Φ S (x; M 2 ) that dominates the r.h.s. of the SR.
The QCD SR predicts the values of moments ξ N π (µ 2 ≃ 1 GeV 2 ) within their error bars. For this reason, one obtains, after restoration a "bunch" of admissible DA profiles [10] corresponding to the moment error bars, rather than a single sample of profile. These profiles are shown in Fig.4 by dashed lines, in addition to the optimal one (thick solid line) that corresponds to the best fit at χ 2 opt ∼ 10 −3 . Comparing DA profiles at different values of λ 2 q in Fig.4(a) and (b), one can conclude that the larger is the correlation scale the smaller is the concavity in the middle of the profiles, and their shape becomes closer to the shape of "asymptotic" DA ϕ as (x) = 6x(1 − x). Therefore, a trial bunch (is not shown here) corresponding to the value λ 2 q = 0.6 GeV 2 at the boundary of the introduced QCD range contains mainly convex in the mid-point profiles that are close to the asymptotic one.
We have established in [10] that a two-parameter model ϕ π (x; a 2 , a 4 ), the parameters being the Gegenbauer coefficients a 2 and a 4 (as also used in [28]), enable one to fit all the moment constraints for ξ N π . For completeness we write explicit formulae for the optimal DA models ϕ opt 1,2 (x) = ϕ as (x) 1 + a opt1,2 at µ 2 ≃ 1 GeV 2 . In this way, the admissible bunches of profiles can be mapped into the a 2 (µ 2 ), a 4 (µ 2 ) plot and then can be evolved to a new normalization point [10,28], µ S&Y , see slanted rectangles in Fig.5.
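For clarity, the two-parameter model referred to above can be written in the standard Gegenbauer form, with the asymptotic profile ϕ_as(x) = 6x(1-x); this is a restatement of the generic parametrization (the optimal coefficients a_2, a_4 being those quoted in the text), not a reconstruction of the paper's exact display equation:

```latex
\varphi_\pi(x; a_2, a_4) \,=\, 6\,x\,(1-x)\left[\,1 + a_2\, C_2^{3/2}(2x-1) + a_4\, C_4^{3/2}(2x-1)\,\right],
```

where C_n^{3/2} denotes the Gegenbauer polynomials.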
Pion DA vs CLEO data. Recently, the CLEO collaboration [9] measured the γ * γ → π 0 form factor with a high accuracy. These data sets were processed by Schmedding and Yakovlev (S&Y) [28] using a NLO light-cone QCD SR analysis. They obtained the constraints to (a 2 , a 4 ) Gegenbauer coefficients: (i) calculated within the NLC-SR approach for three different values of λ 2 q and evolved to µ 2 S&Y = 5.6 GeV 2 ; (ii) confidence regions extracted by Schmedding and Yakovlev [28] from CLEO data [9]. Contour lines show 68% (solid line) and 95% (dashed line) confidential regions. It is shown how our estimate of the confidence region with λ 2 q = 0.4, 0.5, 0.6 GeV 2 overlaps with those displayed in Fig. 6 of Ref. [28]. Bold dot in the plot marks the parameter pairs for the asymptotic DA, full square -for Chernyak-Zhitnitsky DA.
The regions enclosed by the slanted rectangles bounded by the short-dashed line and solid line in Fig.5 correspond to the bunches of DA displayed in Fig.4(a) and (b). The central point a 2 = 0.142, a 4 = −0.087 corresponding to the optimum profile of the ϕ 1 (x)-bunch (λ 2 q = 0.4 GeV 2 Fig.3(a)) at µ 2 S&Y definitively belongs to the central S&Y region. The ϕ 2 (x)-bunch is, however, mostly outside the central 68% region though still within the 95% confidence region. Finally, the third slanted rectangle limited by the long-dashed contour and shifted to the upper left corner of the figure in Fig.5 corresponds to the trial bunch of NLC-DA with λ 2 q = 0.6 GeV 2 . This value falls actually outside the standard QCD NLC-SR bounds in Eq.(7) for λ 2 q . Remarkably, the image of this region in Fig.5 lies completely outside the central region as a whole. Therefore, we may conclude that the CLEO data prefer the values λ 2 q = 0.4, 0.5 GeV 2 and probably do not prefer the value λ 2 q = 0.6 GeV 2 , in full agreement with previous QCD SR estimates. Now the conclusion is supported by the lattice results presented in section 3. The quantitative details of the above qualitative discussion can be found in [10].
Pion DA from nondiagonal correlator. The "Advanced" Ansatz. An approach to obtain directly the forms of the pion and its first resonance DAs, by using the available smooth Ansatz for the correlation functions f (α) of the nonlocal condensates, was suggested in papers [5,7]. The sum rule for these DAs ϕ π ... (x) based on the nondiagonal correlator of the axial and pseudoscalar currents has the vanishing perturbative density, only quark and mixed condensates appear in the theoretical part of the SR [5]. The SR results in the elegant Eq.(24) from the approximations both in the theoretical (for a detailed discussion, see [5]) and the phenomenological parts (see [7]). In virtue of the approximations, a single correlation function f S (α) appears in the r.h.s. of Eq.(24) that determines the profile of ϕ π (x). Equation (24) demonstrates, in the most explicit manner, an important relation: the distribution ϕ π (x) of quarks inside the pion over the longitudinal momentum fraction x (on the l.h.s.) is directly related to the distribution f S (α) over the virtuality α of the vacuum quarks on the r.h.s.. Note that a similar relation was obtained in an instanton-induced model [29].
The approaches to extract ϕ π (x) with the help of SR (24) were discussed in detail in [5,7], here we concentrate on the final result for the profile in the case of the Advanced ansatz (12). Note only that these approaches do not provide the behaviour of the profile in the vicinity of the end points, the reliable predictions are expected around the mid-region. The shape of ϕ π (x) strongly depends on the spectrum of the resonances in the l.h.s. of Eq.(24) and weaker -on the value of λ 2 q . For the model of equidistant, infinite number of narrow excitations like the "Dirac comb" we have obtained the profile, very close to the asymptotic DA [7]. If we choose the spectral density in accordance with the current knowledge † , i.e. , containing only 3 radial π-excitations with the masses m 2 π ′ ≈ 1.3 2 GeV 2 , m 2 π ′′ ≈ 1.8 2 GeV 2 , m 2 π ′′′ ∼ 4.7 GeV 2 , then we obtain the set of admissible profiles presented in Fig.6. It is naturally to suggest that ϕ π (x) is situated between these two possibilities. This result qualitatively agrees with the profile behaviour for the bunches ϕ 1 (x) and ϕ 2 (x) in Fig.4.
Conclusion
We consider the admissible Ansatzes for the coordinate behaviour of the scalar quark nonlocal condensate, which is of importance in QCD SRs. These Ansatzes depend on the parameter λ_q^2, the short-distance correlation scale in the QCD vacuum that controls the corresponding coordinate behaviour. We analyse the lattice simulation data for the scalar NLC from [1] and test two different Ansatzes. The correlation scales are extracted following a procedure that includes a fit of the lattice data, an extrapolation to the chiral limit, and an evolution of the obtained lattice results to the characteristic scale µ^2 = 1 GeV^2 of continuum QCD.
The scale λ 2 q thus extracted does not depend visibly on the kind of lattice QCD (full or quenched), nor on the kind of the Ansatz. The value of the scale is in good agreement with the QCD range λ 2 q (1 GeV 2 ) = (0.4 − 0.55) GeV 2 , see Figs.2-3 and Table 4 in section 3. The agreement looks unexpectedly good in view of the roughness of the above procedure.
The scalar condensate and the scale λ 2 q substantially determine the shape of the pion distribution amplitude by means of QCD SRs. Both kinds of the considered Ansatzes lead to similar shapes of the pion DA. The pion DA following from the QCD SR for the moments [8,10] is rather sensitive to the value of λ 2 q . The bunches of DA profiles corresponding to λ 2 q values from the QCD range agree with the constraints that follow from the CLEO experimental data [10].
This and previous [4,5,7,8,10] considerations demonstrate a close link between the correlation scale in the QCD vacuum and the shape of the pion DA.

Here the quark condensate basis was chosen in the given form for the condensates of lowest dimensions, and

Q 7 1 = <q̄ G µν G µν q> ,   Q 7 2 = i <q̄ G µν G̃ µν γ 5 q> ,   (A.6)
Q 7 3 = <q̄ G µλ G λν σ µν q> ,   Q 7 4 = i <q̄ D µ J ν σ µν q> ,

for the condensates of dimension 7. The basis elements of dimension 8, Q 8 i and A, enter the expansion only of the vector NLC (3), which is not analyzed here. For this reason, we do not show them here and refer the reader to article [11], Eqs.(3.10-3.11).
B Details of Pisa lattice simulations
For the full QCD case (with dynamical fermions), the nonlocal condensates were measured on a 16 3 × 24 lattice at β = 5.35 (β = 6/g 2 , where g is the coupling constant) and at two different values of the dynamical quark mass, a · m q = 0.01 and a · m q = 0.02, with the following values of the lattice spacing:

a(β = 5.35) ≃ 0.101 fm for a · m q = 0.01 ,
a(β = 5.35) ≃ 0.120 fm for a · m q = 0.02 .   (B.1)

For the quenched QCD case, the measurement was performed on a 16 4 lattice at β = 6.00, with the quark mass a · m q = 0.01 for constructing the external-field quark propagator, and also at β = 5.91, with the quark mass a · m q = 0.02. In both cases, the value of β was chosen in order to have the same physical scale as in full QCD at the corresponding quark masses, thus allowing a direct comparison between the quenched and the full theory. In the quenched case, the lattice spacing is approximately [1]:
Analysis of Total Harmonic Distortion and implementation of Inverter Fault Diagnosis using Artificial Neural Network
As the dependability of power electronic devices is essential to the stable functioning of Multi-Level Inverter (MLI) systems, it is imperative to identify and locate faults as promptly as possible. When a fault occurs, the Total Harmonic Distortion (THD) of the system degrades. In this perspective, to improve the fault diagnosis accuracy and the efficient working of a Cascaded H-bridge Multilevel Inverter System (CHMLIS), a quick and accurate fault diagnosis strategy with an optimized training algorithm using an Artificial Neural Network (ANN) is presented. The Total Harmonic Distortion (THD) is also analyzed for each switch fault simulated using MATLAB/Simulink, and the results are presented. The results show the efficacy of the algorithm in identifying faults. The auxiliary cell replaces the main cell when a fault occurs in the main cell, allowing uninterrupted operation of the Multi-Level Inverter (MLI) in the Induction Motor Drive (IMD).
Introduction
Multilevel inverters have become prevalent in cutting-edge machinery for high-power and medium-voltage applications. As the dependability of power electronic devices is essential to the stable functioning of Multi Level Inverter (MLI) systems, it is imperative to identify and locate faults as promptly as possible.
Different engineering and non-engineering fields apply ANNs for approaches that involve classification using multilayer perceptron models [1][2][3][4]. Although the intrinsic behaviour of the back-propagation network is advantageous (inbuilt forecasting capability, tolerance to data errors, and independence from external data for convergence), it also has some disadvantages. The convergence of training in this gradient-descent-based method is slow, and there is a higher chance of converging to a local minimum.
Thus, to resolve the convergence problem, many algorithms have been considered [5][6][7][8][9][10][11]. PSO is a bio-inspired optimization technique emulating bird flocking or fish schooling. The potential solutions are spread throughout the hyperspace and are accelerated in order to find the best point in the hyperspace that attains the solution. The memory and computation requirements are inexpensive, and the implementation can be carried out using simple programming. PSO is an amalgamation of both GA and evolutionary programming. A common feature of GA and PSO is that both algorithms start with a random population. GA uses binary variables which are shifted and rotated to obtain the new population, while PSO uses real numbers as particles. A randomized velocity is used to move the particles in the search space, and the momentum term that modifies the velocities adds to the detailed exploration of the problem space. The particles update their positions in the search space, while GA updates its population using the crossover operation.
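A minimal sketch of the canonical PSO velocity and position update described above is given below. The inertia, cognitive, and social coefficients, the search bounds, and the sphere objective are illustrative choices, not values used in this work.

```python
import numpy as np

# Minimal PSO sketch: velocity = momentum + pull toward personal and global bests.
# Coefficients (w, c1, c2) and the sphere objective are illustrative only.
def pso_minimize(objective, dim=5, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # real-valued particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_minimize(lambda z: float(np.sum(z**2)))
print("best objective value:", best_val)
```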
A multi-probabilistic method like PSO is used to optimize the weight updates in ANN training. A PSO-based network has been shown to provide a prediction rate of around 80%. The network model optimized using PSO has proved superior to the ordinary BP-based perceptron both in precision and in rate of convergence [12]. ANN experiments need a short and efficient way of choosing the input parameters or features for a compact ANN.
It is observed that feature reduction techniques are used in stability assessment algorithms [13]. ANN applications are assumed to be advantageous if a proper feature reduction technique is used. An ANN optimized using GA [14,15,16] has been applied to estimate the voltage stability margin. The learning capability of the ANN and the search capability of optimization algorithms are combined for a better and more efficient classification algorithm. Fault diagnosis of MLIs using ANN has also been studied, and a prototype for fault diagnosis of a five-level MLI connected to an IMD has been validated with simulation results [17][18].
ANN Training
The ANN topology used in the current implementation is given in the following figure. As there is a single input for the classification of the fault location, there is one input node. The input is the THD of the output voltage of the CHMLIS, and the output is the switch number at which the fault occurs for the corresponding THD.
A neural network model with n hidden layers, excluding the single input and output layers, is developed; we take n = 100, as shown in Figure 1. The input for the ANN training is the THD obtained from the output voltage of the MLI, and the target is the switch position at which the switch failure occurs. The input and target pairs are tabulated and given to the ANN to be trained by the optimized ANN. The optimized back-propagation neural network, using the GA and MGA algorithms, is trained using the parameter estimation methodology. The procedure for applying the back-propagation NN algorithm to the FDA of the CHMLIS is given below in the form of pseudocode; the opening steps cover network construction and weight/bias initialization.
4. The observed THD and the corresponding switch failure are tabulated and split for training and testing.
5. The error is calculated between the forward-propagated input at the output node and the target (desired) output. If the target output does not match the actual output, the error has to be propagated back to update the weights of the previous nodes.
6. Before the error propagation, the error is compared against the tolerance to decide whether the training iterations should stop or continue. If the error value is higher than the tolerance value, the iterations continue.
7. The weight correction term is calculated for each node, and the resulting weight correction is applied to the previous layer. This updating continues up to the input layer, and the updated weights are modelled.
8. The weight updates reduce the error term in each iteration, and steps 5, 6, and 7 are repeated until the loss value falls well below the tolerance.
9. The entire process is repeated until all the input/output pairs are trained using the previous steps. An ANN model with all the weight and bias values updated is then ready to be tested with new data that is not in the training data set. This model is the trained ANN model.
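As a concrete illustration of this training loop, the sketch below fits a small back-propagation network that maps THD values to faulty-switch indices. It uses scikit-learn's MLPClassifier as a stand-in for the network described above (the library handles the error propagation and weight updates of steps 5-8 internally), and the THD/switch pairs are hypothetical placeholders for the tabulated simulation results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical (THD, faulty-switch index) pairs; the real table comes from the
# simulated CHMLIS fault cases.
thd = np.array([[36.2], [41.5], [38.9], [44.1], [39.7], [42.8], [37.4], [40.2]])
switch = np.array([1, 2, 3, 4, 5, 6, 7, 8])

X_train, X_test, y_train, y_test = train_test_split(
    thd, switch, test_size=0.25, random_state=1)

# One input node (THD), 100 hidden neurons, gradient-descent back-propagation.
model = MLPClassifier(hidden_layer_sizes=(100,), activation="logistic",
                      solver="sgd", learning_rate_init=0.01,
                      max_iter=5000, tol=1e-6, random_state=1)
model.fit(X_train, y_train)
print("predicted faulty switch:", model.predict([[41.0]]))
```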
Optimizing ANN using Parameter Optimization
The stochastic nature of the ANN training paradigm and the stochastic nature of the FDA applied in this implementation lead to a parameter optimization algorithm being applied to the FDA. The process of parameter optimization is dealt with extensively in the literature. The process of enhancing any control algorithm by improving the parameters involved in the control or decision is called parameter optimization.
In this implementation, the ANN parameters, including the weight and bias values, are optimized for training the FDA on the CHMLIS in the three-phase IMD. The learning accuracy and speed of the ANN in this FDA paradigm are targeted. The weight and bias values are searched in the search space in order to obtain the least MSE value during the FDA training of the ANN. The parameter estimation or optimization algorithm has the following advantages: a mathematical model of the plant under control is not needed, so no related calculations or prior knowledge of it are required; a comparison with other algorithms can be carried out; and complex problems can be solved without a plant model of the system.
GA Based ANN Parameter Optimization
The ANN constructed in the previous section, with initialized weight and bias values, is obtained. This ANN structure has to be trained by the parameter estimation method using the optimization algorithm instead of the gradient descent method built into the ANN. Quick and accurate training of the ANN is targeted.
GA and MGA are applied to the ANN optimization to obtain a quickly trained, accurate ANN. The procedure followed for the parameter estimation method on an ANN using the metaheuristic method is shown below. As there are two variables (weight and bias) to be optimized, this is a multi-variable metaheuristic algorithm. The GA-based ANN optimization algorithm is discussed below.
Pseudo Code
1. Initial weights and bias values are taken from the developed ANN model.
2. Two arrays containing the weight and bias values are obtained by unwrapping the matrices.
3. These two arrays are considered the initial chromosomes.
4. The chromosomes are populated to a chosen population size by random, intuitive selection.
5. The populated weight and bias values thus obtained are converted back into weight and bias matrices.
6. The updated weight and bias matrices are applied to the ANN to obtain the MSE in the training paradigm.
7. The MSE obtained from the ANN for each weight and bias candidate is tabulated in an array. To recombine potentially worthwhile solutions, a genetic algorithm with roulette-wheel selection as the genetic operator is preferred.
The selection probability of an individual is p i = f i / (f 1 + f 2 + · · · + f N ), where f i is the fitness of individual i and N is the number of individuals in the population.
The above process can be compared to a casino roulette wheel. A sector of the wheel is selected according to fitness. This can be achieved by considering the mean value of the fitness, followed by normalization. A random selection is then performed, analogous to the rotation of a roulette wheel.
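A small sketch of this fitness-proportionate (roulette-wheel) selection step is shown below; the chromosome size, population size, and the use of 1/MSE as the fitness are illustrative assumptions.

```python
import numpy as np

# Roulette-wheel (fitness-proportionate) selection: p_i = f_i / sum_j f_j.
def roulette_select(population, fitness, n_select, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    probs = fitness / fitness.sum()
    idx = rng.choice(len(population), size=n_select, p=probs)
    return [population[i] for i in idx]

# Example: chromosomes are flattened weight+bias vectors; fitness = 1 / MSE.
rng = np.random.default_rng(0)
chromosomes = [rng.standard_normal(10) for _ in range(6)]
mse = np.array([0.30, 0.12, 0.45, 0.08, 0.22, 0.15])
parents = roulette_select(chromosomes, 1.0 / mse, n_select=4, rng=rng)
print(len(parents), "parent chromosomes selected")
```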
Analysis of THD and Inverter switch Fault Diagnosis Implementation
To improve the fault diagnosis accuracy and the efficient working of a Cascaded H-bridge Multilevel Inverter System (CHMLIS), a quick and accurate fault diagnosis strategy with an optimized training algorithm is presented. The Total Harmonic Distortion (THD) is also analyzed for each switch failure and tabulated.
A fault diagnosis model developed using an Artificial Neural Network (ANN) is generated for a seven-level, three-phase CHMLIS. The ANN model is optimized using the GA through a parameter estimation algorithm. The parameters that control the ANN training, namely the weight and bias values, are optimized using the Genetic Algorithm (GA) in the current implementation. The three-phase IMD used is equipped with a 5 HP motor drive. The overall circuit of the IMD is shown in Figure 2. The speed of convergence during training is the major requirement of this implementation. The implementation uses the Mean Square Error (MSE) as the objective function to be minimized. Parameters such as the weight and bias values are optimized to obtain the minimum MSE while training the ANN for fault diagnosis.
A performance evaluation of the ANN training on the Fault Diagnosis Algorithm (FDA) is delivered in this paper, and the results are discussed in detail. For training, the input and the target are selected as the THD and the position of the faulty switch, respectively. The tabulated THD values and switch positions under different working conditions are supplied to the ANN for training. A different THD is obtained for each switch failure position. As the occurrence of faults is not uniform in nature, this is a randomized problem to be unravelled, which motivates the use of an ANN for fault prediction. This relation between the obtained THD and the fault at any switch is fitted by the ANN model.
Results and Discussion
A MATLAB model of the MLI with auxiliary legs, which take over when failures in the switches occur, is developed and the FDA implementation is carried out. The auxiliary cell replaces the main cell when a fault occurs, allowing uninterrupted operation of the MLI in the IMD. The fault is detected by the fitted ANN model, and reconfiguration is carried out using the auxiliary cell. A back-propagation network is implemented for the ANN. The multilevel inverter topology simulations are carried out on the MATLAB/Simulink platform, as shown in Figure 3. Without a fault, THD = 31.09. When there is a fault in switch S2 or S12, the corresponding THD values are the same, and for these two cases the distortion is more severe than for other switch faults, as shown in Figure 4.
Integrating Low-Power Wide-Area Networks for Enhanced Scalability and Extended Coverage
Mahbubur Rahman and Abusayeed Saifullah
Abstract-Low-Power Wide-Area Networks (LPWANs) are evolving as an enabling technology for Internet-of-Things (IoT) due to their capability of communicating over long distances at very low transmission power. Existing LPWAN technologies, however, face limitations in meeting scalability and covering very wide areas, which makes their adoption challenging for future IoT applications, especially in infrastructure-limited rural areas. To address this limitation, in this paper we consider achieving scalability and extended coverage by integrating multiple LPWANs. SNOW (Sensor Network Over White Spaces), a recently proposed LPWAN architecture over the TV white spaces, has demonstrated its advantages over existing LPWANs in performance and energy-efficiency. In this paper, we propose to scale up LPWANs through a seamless integration of multiple SNOWs which enables concurrent inter-SNOW and intra-SNOW communications. We then formulate the tradeoff between scalability and inter-SNOW interference as a constrained optimization problem whose objective is to maximize scalability by managing white space spectrum sharing across multiple SNOWs. We also prove the NP-hardness of this problem. We then propose an intuitive polynomial-time heuristic algorithm for solving the scalability optimization problem which is highly efficient in practice. For the sake of a theoretical bound, we also propose a simple polynomial-time 1/2-approximation algorithm for the scalability optimization problem. Hardware experiments through a deployment in an area of (25x15)km 2 as well as large-scale simulations demonstrate the effectiveness of our algorithms and the feasibility of achieving scalability through seamless integration of SNOWs with high reliability, low latency, and energy efficiency.
Index Terms-Low-power wide-area network, white spaces, sensor network.
I. INTRODUCTION
To overcome the range limit and scalability challenges in traditional wireless sensor networks (WSNs), Low-Power Wide-Area Networks (LPWANs) are emerging as an enabling technology for Internet-of-Things (IoT). Due to their escalating demand, LPWANs are gaining momentum, with multiple competing technologies being developed including LoRaWAN, SigFox, IQRF, RPMA (Ingenu), DASH7, Weightless-N/P in the ISM band; and EC-GSM-IoT, NB-IoT, LTE Cat M1 (LTE-Advanced Pro), and 5G in the licensed cellular band (see survey [1]). In parallel, to avoid the crowding of the limited ISM band and the cost of the licensed band, we developed SNOW (Sensor Network Over White Spaces), an LPWAN architecture to support wide-area WSN by exploiting the TV white spaces [2]-[4]. White spaces refer to the allocated but locally unused TV channels, and can be used by unlicensed devices as secondary users. Unlicensed devices need to either sense the medium or consult a cloud-hosted geo-location database before transmitting [5]. Thanks to their lower frequencies (54-862MHz in the US), white spaces have excellent propagation characteristics over long distances and obstacles. While their potential has been explored mostly for broadband access (see survey [5]), our design and experimentation demonstrated the potential of SNOW to enable asynchronous, low power, bidirectional, and massively concurrent communications between numerous sensors and a base station (BS) directly over long distances [2]-[4].
Despite their promise, existing LPWANs face challenges in very large-area (e.g., city-wide) deployment [6], [7]. Without line of sight, the communication range of LoRaWAN, a leading LPWAN technology that is commercially available, is short, especially indoors (<100m, while its specified urban range is 2-5km) [8]. Its performance drops sharply as the number of nodes grows, supporting only 120 nodes per 3.8 hectares [9], which is not sufficient to meet future IoT demand. Beyond these scenarios, applications like agricultural IoT, oil-field monitoring, and smart and connected rural communities would require much wider area coverage [1], [5]. In this paper, we address this challenge and propose to scale up LPWANs by integrating multiple LPWANs.
Most LPWANs are limited to a star topology, and rely on wired infrastructure (e.g., cellular LPWANs) or the Internet (e.g., LoRaWAN) to integrate multiple networks to cover large areas. Lack of infrastructure (also raised in a hearing before the US Senate [10]) hinders their adoption for rural and remote area applications such as agricultural IoT and industrial IoT (e.g., for oil/gas fields) that may cover hundreds of square kms. According to the Department of Agriculture, <20% of farmers can afford the cost of manual sensor data collection for smart farming [11]. Companies like Microsoft [12], Monsanto [10], and many others [1], [5] are now promoting agricultural IoT. Monitoring a large oil field (e.g., the 74x8km 2 East Texas Oil Field [13]) requires connecting tens of thousands of sensors [5]. Such agricultural IoT and industrial IoT can be enabled by integrating multiple LPWANs, especially SNOWs, due to the abundance of white spaces. Similar integration may also be needed in a smart city deployment for extended coverage or for running different applications on different LPWANs.
In this paper, we address the above scalability challenge by integrating multiple SNOWs that are under the same management/control. Such an integration raises several concerns. First, we have to design a protocol to enable inter-SNOW communication, especially peer-to-peer communication (when a node in one SNOW wants to communicate with a node in a different SNOW). Second, since multiple coexisting SNOWs can interfere with each other, thus affecting scalability, it is critical to handle the tradeoff between scalability and inter-SNOW interference. Specifically, we make the following novel contributions.
• We propose to scale up LPWAN through seamless integration of multiple SNOWs that enables concurrent inter- and intra-SNOW communications. This is done by exploiting the characteristics of the SNOW physical layer.
• We then formulate the tradeoff between scalability and inter-SNOW interference as a constrained optimization problem whose objective is to maximize scalability by managing white space spectrum sharing across multiple SNOWs, and prove its NP-hardness.
• We propose an intuitive polynomial-time heuristic for solving the scalability optimization problem which is highly efficient in practice.
• For the sake of analytical performance bound, we also propose a simple polynomial-time approximation algorithm with an approximation ratio of 1/2.
• We implement the proposed SNOW technologies in GNU Radio [14] using Universal Software Radio Peripheral (USRP) devices [15]. We perform experiments by deploying 9 USRP devices in an area of (25x15)km 2 in Detroit, Michigan. We also perform large scale simulations in NS-3 [16]. Both experiments and simulations demonstrate the feasibility of achieving scalability through seamless integration of SNOWs allowing concurrent intra- and inter-SNOW communications with high reliability, low latency, and energy efficiency while using our heuristic and approximation algorithms. Also, simulations show that SNOW cluster network can connect thousands of sensors over tens of kilometers of geographic area.

In the rest of the paper, Section II presents related work. Section III gives an overview of SNOW. Section IV explains the system model. Section V describes our inter-SNOW communication technique. Section VI formulates the scalability optimization problem for integration, proves its NP-hardness, and presents the heuristic and the approximation algorithm. Section VII explains the implementation of our network model. Section VIII presents our experimental and simulation results. Finally, Section IX concludes the paper.
II. RELATED WORK
The LPWAN technologies are still in their infancy, with some still being developed (e.g., 5G, NB-IoT, LTE Cat M1, Weightless-P), some having only uplink capability (e.g., SigFox, Weightless-N), while for some there is still no publicly available documentation (e.g., SigFox) [1], [5]. Thus, developing generalized techniques to address integration is not our focus. Instead, we propose an integration of multiple SNOWs in the white spaces for scaling up, the insights of which may also be extended to other LPWANs in the future. To cover a wide area, LoRaWAN integrates multiple gateways through the Internet [17]. Cellular networks do the same relying on wired infrastructure [18]. Rural and remote areas lack such infrastructure. Wireless integration, which we consider in this paper, can be a solution for both urban and rural areas. While our integration may look similar to channel allocation in traditional tiered/clustered and centralized/distributed multichannel networks [19]-[29], it is a conceptually different problem with new challenges. First, in traditional networks, the links operate on predefined fixed-bandwidth channels. In contrast, in integrating multiple SNOW networks we have to find proper bandwidths for all links, and they are interdependent and can be different. Second, SNOW integration involves assigning a large number of subcarriers to each BS while allowing some degree of overlap among interfering BSs for enhanced scalability. Finally, through integration, we have to retain massive parallel communication (between a SNOW BS and its numerous nodes) and concurrent inter- and intra-SNOW communications [30], [31]. Hence, traditional channel allocation for wireless networks [32], WSN [33], [34], or cognitive radio networks [35] cannot be used in SNOW integration. In regard to white space networking, the closest work to ours is [36].

III. OVERVIEW OF SNOW

Here we provide a brief overview of the design and architecture of a single SNOW that we developed in [2]-[4]. SNOW is an asynchronous, long range, low power WSN platform operating over the TV white spaces. A SNOW node has a single half-duplex narrowband radio. Due to the long transmission (Tx) range, the nodes are directly connected to the BS and vice versa (Figure 1). SNOW thus forms a star topology. The BS determines the white spaces in the area by accessing a cloud-hosted database through the Internet. Hence, it does not check on the incumbents or evaluate cross-technology interference. The nodes are power constrained and not directly connected to the Internet. They do not do spectrum sensing or cloud access. The BS uses a wide channel split into orthogonal subcarriers. As shown in Figure 1, the BS uses two radios, both operating on the same spectrum -one for only transmission (called the Tx radio), and the other for only reception (called the Rx radio). Such a dual radio at the BS allows concurrent bidirectional communications in SNOW. We implemented SNOW on USRP (universal software radio peripheral) devices [15] using GNU Radio [14]. The implementation has been made open-source [37], [38]. A short video demonstrating how SNOW works is also available on YouTube [39], [40]. In the following, we provide a brief overview of the SNOW physical layer (PHY) and the Media Access Control (MAC) layer. A full description of this design is available in [2].
A. SNOW PHY Layer
A key design goal of SNOW is to achieve high scalability by exploiting wide spectrum of white spaces. Hence, its PHY is designed based on a Distributed implementation of OFDM for multi-user access, called D-OFDM. D-OFDM splits a wide spectrum into numerous narrowband orthogonal subcarriers enabling parallel data streams to/from numerous distributed nodes from/to the BS. A subcarrier bandwidth is in kHz (e.g., 50kHz, 100kHz, 200kHz, or so depending on packet size and needed bit rate). Narrower bands have lower bit rate but longer range, and consume less power [3]. The nodes transmit/receive on orthogonal subcarriers, each using one. A subcarrier is modulated using Binary Phase Shift Keying (BPSK) or Amplitude Shift Keying (ASK). If the BS spectrum is split into n subcarriers, it can receive from n nodes simultaneously using a single antenna. Similarly, it can transmit different data on different subcarriers through a single transmission. The BS can also use fragmented spectrum. This design is different from MIMO radio adopted in various wireless domains including IEEE 802.11n [41] as they rely on multiple antennas to enable the same.
While OFDM has been adopted for multi-user access in the forms of OFDMA and SC-FDMA in various broadband (e.g., WiMAX [42]) and cellular (e.g., LTE) technologies [43]-[45], they rely on strong time synchronization, which is very costly for low-power nodes. We adopted OFDM for the first time in WSN design and without requiring time synchronization. D-OFDM enables multiple receptions of packets that are transmitted asynchronously from different nodes, which is possible because WSN needs low data rates and short packets. Time synchronization is avoided by extending the symbol duration (repeating a symbol multiple times) and sacrificing bit rate. The effect is similar to extending the cyclic prefix (CP) beyond what is required to control inter-symbol interference (ISI). CPs of adequate length have the effect of rendering asynchronous signals orthogonal at the receiver by increasing the guard interval. As this reduces the data rate, D-OFDM is suitable for LPWAN. The carrier frequency offset (CFO) is estimated using training symbols when a node joins the network on a subcarrier (the rightmost one) whose overlapping subcarriers are not used. Using this CFO, the offset on its assigned subcarrier is determined and compensated for using traditional methods to mitigate ICI.
B. SNOW MAC Layer
The BS spectrum is split into n overlapping orthogonal subcarriers -f 1 , f 2 , · · · , f n -each of equal width. Each node is assigned one subcarrier. When the number of nodes is no greater than the number of subcarriers, every node is assigned a unique subcarrier. Otherwise, a subcarrier is shared by more than one node. The nodes that share the same subcarrier will contend for and access it using a CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) policy. The subcarrier assignment by the BS minimizes the interference and contention between the nodes. As long as there is an option, the BS thus tries to assign different subcarriers to the nodes that are hidden to each other.
The subcarrier allocation is done by the BS. The nodes in SNOW use a lightweight CSMA/CA protocol for transmission that uses a static interval for random back-off, like the one used in TinyOS [46]. Specifically, when a node has data to send, it wakes up by turning its radio on. Then it performs a random back-off in a fixed initial back-off window. When the back-off timer expires, it runs CCA (Clear Channel Assessment) and, if the subcarrier is clear, it transmits the data. If the subcarrier is occupied, then the node makes a random back-off in a fixed congestion back-off window. After this back-off expires, if the subcarrier is clear the node transmits immediately. This process is repeated until the transmission is made. The node can then go to sleep again.
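The sketch below illustrates this lightweight CSMA/CA back-off procedure at a node; the cca() and transmit() callables stand in for the radio primitives, and the back-off window lengths are illustrative constants rather than values from the SNOW implementation.

```python
import random
import time

INITIAL_BACKOFF_MS = 10      # fixed initial back-off window (illustrative)
CONGESTION_BACKOFF_MS = 30   # fixed congestion back-off window (illustrative)

def csma_send(packet, cca, transmit):
    """Send `packet` on the node's assigned subcarrier using the lightweight
    CSMA/CA described above. `cca` returns True if the subcarrier is clear."""
    # initial random back-off in a fixed window
    time.sleep(random.uniform(0, INITIAL_BACKOFF_MS) / 1000.0)
    while True:
        if cca():                     # clear channel assessment
            transmit(packet)
            return
        # subcarrier busy: random back-off in the fixed congestion window,
        # then transmit immediately if the subcarrier is found clear
        time.sleep(random.uniform(0, CONGESTION_BACKOFF_MS) / 1000.0)
```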
The nodes can autonomously transmit, remain in receive (Rx) mode, or sleep. Since D-OFDM allows handling asynchronous Tx and Rx, the link layer can send an acknowledgment (ACK) for any transmission in either direction. As shown in Figure 1, both radios of the BS use the same spectrum and subcarriers -the subcarriers in the Rx radio are for receiving while those in the Tx radio are for transmitting. Since each node (non-BS) has just a single half-duplex radio, it can be either receiving or transmitting, but not doing both at the same time. Both experiments and large-scale simulations show high efficiency of SNOW in latency and energy with a linear increase in throughput with the number of nodes, demonstrating its superiority over existing designs [3], [4].

We consider many coexisting SNOWs that are under the same management/control and need to coordinate among themselves for extended coverage in a wide area or to host different applications. As such, we consider an inter-SNOW network as a SNOW-tree, in the spirit of a cluster tree used in the new IEEE 802.15.4m standard [47], each cluster representing a personal area network under a coordinator. The root of the tree is connected to the white space database. In a similar spirit, our inter-SNOW network of the coordinated SNOWs is shown in Figure 2 as a SNOW-tree. Each cluster is a star-topology SNOW. All BSs form a tree and are connected through white space. Each BS is powerful, or there can be multiple backup BSs for each cluster, so the chance of a BS failure is quite low in practice. Even if a BS fails, the root BS may reconstruct the tree.
IV. SYSTEM MODEL
Let there be a total of N BSs (and hence N SNOWs) in the SNOW-tree, denoted by BS 0 , BS 1 , · · · , BS N −1 , where BS i is the base station of SNOW i . BS 0 is the root BS and is connected to the white space database through the Internet. The remaining BSs are in remote places where an Internet connection may not be available. Those BSs thus depend on BS 0 for white space information. Every BS is assumed to know the location of its operating area (its own location and the locations of its nodes). Localization is not the focus of our work and can be achieved through manual configuration or some existing WSN localization technique such as those based on ultrasonic sensors or other sensing modalities [3]. BS 0 gets the location information of all BSs and finds the white space channels for all SNOWs. It also knows the topology of the tree and allocates the spectrum among all SNOWs. Each BS splits its assigned spectrum and assigns subcarriers to its nodes. For simplicity, we consider that all nodes in the tree transmit with the same transmission power and receive with the same receive sensitivity.
In an agricultural IoT, Internet connection is not available everywhere in the wide agricultural field. The farmer's home usually has the Internet connection and the root BS can be placed there. Microsoft's Farmbeats [12] project for agricultural IoT also exhibits such a scenario. Similarly, in a large oil field, the root BS can be in the office or control room. The considered SNOW-tree thus represents practical scenarios of wide area deployments in rural fields. The IEEE 802.15.4m standard also aims to utilize the white spaces under the exact same tree network model. We shall consider the scalability through a seamless integration and communication protocol among such coexisting SNOWs.
V. ENABLING CONCURRENT INTER-SNOW AND INTRA-SNOW COMMUNICATIONS
Here we describe our inter-SNOW communication technique to enable seamless integration of the SNOWs for scalability. Specifically, we explain how we can enable concurrent inter-SNOW and intra-SNOW communications by exploiting the PHY design of SNOW. To explain this we consider peerto-peer inter-cluster communication in the SNOW-tree. That is, one node in a SNOW wants or needs to communicate with a node in another SNOW.
For peer-to-peer communication across SNOWs, a node first sends its packet to its BS. Note that two nodes may not communicate directly even if they are in communication range of each other, as they may operate on different subcarriers. The BS will then route to the destination SNOW's BS along the path given by the tree, which in turn will forward to the destination node. Hence, the first question is: how do two neighboring BSs exchange packets without interrupting their communication with their own nodes? Let us consider SNOW 1 and SNOW 2 as two neighboring SNOWs in Figure 3 which will communicate with each other. We allocate a special subcarrier from both of their spectrum (i.e., a common subcarrier between the two BSs) that will be used for communication between these two BSs. For a tree link BS i → BS j , this subcarrier is denoted by f i,j . To each tree link BS i → BS j , we assign a distinct f i,j , eliminating interference among the BS transmissions made along the tree links. This is always feasible because the number (N ) of SNOWs, and hence the number of tree links (N − 1), is very small compared to the total number of subcarriers. Additionally, if the connecting subcarrier that forms a tree link for BS-BS communication fails, another subcarrier is assigned, since there is usually much overlap between two neighboring BSs.
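A minimal sketch of this tree-link subcarrier assignment is given below: each parent-child link gets one subcarrier drawn from the subcarriers the two BSs have in common, kept distinct across links. The data structures and the deterministic tie-breaking rule are illustrative assumptions.

```python
# Assign a distinct inter-BS subcarrier f_{i,j} to every tree link BS_i -> BS_j.
def assign_link_subcarriers(tree_edges, subcarriers):
    """tree_edges: list of (i, j) parent->child BS index pairs.
    subcarriers: dict mapping BS index -> set of available subcarrier indices."""
    used = set()          # tree-link subcarriers already taken, kept mutually distinct
    link_sc = {}
    for i, j in tree_edges:
        common = (subcarriers[i] & subcarriers[j]) - used
        if not common:
            raise ValueError(f"no unused common subcarrier for link {i}->{j}")
        f_ij = min(common)            # any deterministic pick works here
        link_sc[(i, j)] = f_ij
        used.add(f_ij)
    return link_sc

edges = [(0, 1), (0, 2), (1, 3)]
subs = {0: {1, 2, 3, 4}, 1: {2, 3, 4, 5}, 2: {1, 2, 6}, 3: {3, 4, 7}}
print(assign_link_subcarriers(edges, subs))   # e.g. {(0, 1): 2, (0, 2): 1, (1, 3): 3}
```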
As shown in Figure 3, f 1,2 is a special subcarrier that enables BS 1 -BS 2 communication as described above. D-OFDM allows us to encode any data on any subcarrier while the radio is transmitting. Thus the SNOW PHY allows us, at any time, to encode data on any number of subcarriers and transmit. Exploiting this important feature of the SNOW PHY, the Tx 1 radio will encode the packet on the subcarrier f 1,2 which is used for BS 1 -BS 2 communication in Figure 3. If there are pending ACKs for its own nodes, they can also be encoded on their respective subcarriers. Then the Tx 1 radio makes a single transmission. Rx 2 will receive it on subcarrier f 1,2 while the nodes of SNOW 1 will receive on their designated subcarriers. BS 2 can receive from BS 1 in the same way. They can similarly forward to the next neighboring SNOWs. Thus both inter-SNOW and intra-SNOW communications can happen in parallel. The following are the main issues, and our techniques to address them, in enabling such communication.
A. Handling Collision in BS-BS Communication
Using one subcarrier for BS 1 -BS 2 communication, BS 1 and BS 2 cannot simultaneously transmit to each other. When Tx 1 transmits on f 1,2 , there is high energy on f 1,2 at Rx 1 ; the case is similar when Tx 2 transmits. If they start transmitting simultaneously, both packets will be lost. A straightforward solution is to use two different subcarriers for the Tx 1 → Rx 2 and Tx 2 → Rx 1 transmissions. However, using two dedicated subcarriers for this may result in their underutilization and hinder scalability. Hence, we use a single subcarrier for BS 1 -BS 2 communication and adopt a random back-off within a fixed interval for this special subcarrier. That is, if the BS-BS communications collide, the BSs make a random back-off after which they retry the transmission.
B. Dealing with Sleep/Wake up
When a node u from SNOW 1 wants to send a packet to a node v in SNOW 2 , it first makes the transmission to BS 1 which then sends to BS 2 ( Figure 3). When BS 2 attempts to transmit to v, it can be sleeping and BS 2 may be unaware of that. To handle this, we adopt a periodic beacon that the BS of each SNOW sends to its nodes. The nodes are aware of the period of beacon. All nodes in a BS that are participating in peer-to-peer communication wake up for beacon. Thus, v will wake up for beacon as it participates in peer-to-peer communication. BS 2 will encode v's message on the subcarrier used by v in the beacon. Thus, v can receive the message from the beacon of BS 2 .
VI. HANDLING TRADEOFFS BETWEEN SCALABILITY AND INTER-SNOW INTERFERENCE
Our objective of integrating multiple SNOWs is scalability, which can be achieved if every SNOW can support a large number of nodes. The number of nodes supported by a SNOW increases if the number of subcarriers used in that SNOW increases. However, if each SNOW uses the entire spectrum available at its location, there will be much spectrum overlap with the neighboring SNOWs. This will ultimately increase inter-SNOW interference, resulting in many back-offs by the nodes during packet transmission. Like any other LPWAN, SNOW nodes are energy-constrained and cannot afford any sophisticated MAC protocol to avoid such interference, thereby wasting energy. On the other end, if all neighboring SNOWs use non-overlapping spectrum, inter-SNOW interference will be minimized, but each SNOW can then support only a handful of nodes, thus degrading scalability. This tradeoff between scalability and inter-SNOW interference due to integration raises a spectrum allocation problem which cannot be solved using traditional spectrum allocation approaches in wireless networks. We propose to accomplish such an allocation by formulating a Scalability Optimization Problem (SOP) whose objective is to optimize scalability while limiting the interference. To our knowledge, this problem is unique and has never arisen in other wireless domains. We now formulate SOP, prove its NP-hardness, and provide polynomial-time near-optimal solutions.
A. SOP Formulation
The root BS knows the topology of the BS connections, accesses the white space database for each BS, and allocates the spectrum among the BSs. The spectrum allocation has to balance between scalability and inter-SNOW interference as described above. For SOP, we consider a uniform bandwidth ω of a subcarrier across all SNOWs. Let Z i be the set of orthogonal subcarriers available at BS i , considering α as the fraction of overlap between two neighboring subcarriers, where 0 ≤ α ≤ 0.5 (as we found in our experiments [3], [4] that two orthogonal subcarriers can overlap at most up to half). Thus, if W i is the total available bandwidth at BS i , then its total number of orthogonal subcarriers is given by |Z i | = W i /(ωα) − 1. We consider that the values of ω and α are uniform across all BSs. Let the set of subcarriers to be assigned to BS i be X i ⊆ Z i , with |X i | being the number of subcarriers in X i . We consider the total number of subcarriers assigned to all SNOWs as the scalability metric, and we will maximize this metric. Every BS i (i.e., SNOW i ) requires a minimum number of subcarriers σ i to support its nodes. Hence, we define Constraint (1) to indicate the minimum and maximum number of subcarriers for each BS. If some communication in SNOW i is interfered by another communication in SNOW j , then SNOW j is its interferer; let I i denote the set of interferers of SNOW i . Since the root BS knows the locations of all BSs (all SNOWs) in the SNOW-tree, it can determine all interference relationships (which SNOW is an interferer of which SNOWs) among the SNOWs based on the nodes' communication range. Let p(i) be the parent of BS i and Ch i ⊂ {1, 2, · · · , N − 1} be such that each BS j with j ∈ Ch i is a child of BS i . The SNOWs associated with a BS's parent and children are its interferers already, i.e., ({p(i)} ∪ Ch i ) ⊆ I i . To limit inter-SNOW interference, let φ i,j be the maximum allowable number of subcarriers that can overlap between two interfering SNOWs, SNOW i and SNOW j . As explained in Section V, there must be at least one subcarrier common between a BS and its parent, which is expressed in Constraint (2). Note that we can also use Constraint (2) to set φ i,p(i) to indicate the number of on-demand subcarriers between BSs BS i and BS p(i) in a SNOW-tree. Sometimes the demand can change, and the root BS will re-run the SOP algorithm to take it into account. Constraint (3) indicates the minimum and maximum number of overlapping subcarriers between other interfering pairs. Thus, SOP is formulated as follows, where the root BS allocates the spectrum among all BSs (i.e., assigns the subcarrier sets X i ):

maximize |X 0 | + |X 1 | + · · · + |X N −1 |
subject to
σ i ≤ |X i | ≤ |Z i | for every BS i ;   (1)
1 ≤ |X i ∩ X p(i) | ≤ φ i,p(i) for every non-root BS i ;   (2)
0 ≤ |X i ∩ X j | ≤ φ i,j for every interfering pair with j ∈ I i outside {p(i)} ∪ Ch i .   (3)

SOP is a unique problem that we have observed first in integrating SNOWs. It is quite different from spectrum allocation in a cellular network, where towers are connected through a wired network and spectrum availability/dynamics [1] do not change. Due to technology-specific features and the unique communication primitive of SNOW, traditional channel allocation techniques for wireless networks (see survey [32]), WSN (see survey [33]), or cognitive radio networks (see survey [35]) are also not applicable, as SOP involves assigning a large number of subcarriers to each BS while allowing some degree of overlap among interfering BSs for enhanced scalability. In the following, we will first characterize SOP and then propose its solution strategy.
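Before turning to the hardness analysis, the sketch below shows how a candidate assignment {X i } would be scored and checked against Constraints (1)-(3); the data structures (dicts of sets keyed by BS index) are illustrative.

```python
# Scalability metric: total number of subcarriers assigned across all SNOWs.
def scalability(X):
    return sum(len(s) for s in X.values())

# Feasibility check of a candidate assignment X against Constraints (1)-(3).
# Z, sigma: per-BS available subcarriers and minimum demand; parent: child -> parent;
# other_interferers: BS -> interferers excluding its parent and children;
# phi: ordered pair (i, j) -> maximum allowed overlap.
def sop_feasible(X, Z, sigma, parent, other_interferers, phi):
    for i in X:                                              # Constraint (1)
        if not (sigma[i] <= len(X[i]) <= len(Z[i])):
            return False
    for i, p in parent.items():                              # Constraint (2)
        if not (1 <= len(X[i] & X[p]) <= phi[(i, p)]):
            return False
    for i, nbrs in other_interferers.items():                # Constraint (3)
        for j in nbrs:
            if len(X[i] & X[j]) > phi[(i, j)]:
                return False
    return True
```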
B. NP-Hardness of SOP
We now prove that SOP is NP-hard which can be proved through a reduction from the SAT (Boolean Satisfiability) problem. The SAT problem asks whether there exists a truth assignment that makes all clauses true [48]. Theorem 1 formally proves the NP-hardness of SOP by proving that its decision version is NP-complete.
Theorem 1: Given a SOP for SNOW-tree, it is NP-complete to decide whether it is feasible or not.
Proof: Given an instance of SOP in a SNOW-tree with an overlapping spectrum assignment for N BSs and m subcarriers, where BS i , 0 ≤ i < N , gets m i subcarriers, it is verifiable in O(N m) time whether the subcarrier assignment is feasible or not. Hence, the problem is in NP. To prove NP-hardness, we reduce an arbitrary instance I(SAT ) of SAT to an instance I(SOP ) of the SOP in a SNOW-tree and show that I(SAT ) has an interpretation that satisfies the boolean formula if and only if I(SOP ) is feasible.
Let I(SAT ) have m boolean variables y 0 , y 1 , y 2 , ..., y m−1 and N clauses C 0 , C 1 , · · · , C N −1 in conjunctive normal form. Now, for the set of variables in I(SAT ) we create a set of subcarriers Z = {x 0 , x 1 , · · · , x m−1 } in I(SOP ) that are available in the SNOW-tree. Then, we create one SNOW BS i in I(SOP ) for each clause C i in I(SAT ). Also, we create one subset Z i ⊆ Z for each BS i that corresponds to the subset of boolean variables in clause C i . As an example, consider a boolean formula (y 1 ∨ ¬y 2 ∨ y 4 ) ∧ (y 1 ∨ y 2 ∨ ¬y 3 ∨ y 6 ) ∧ (¬y 2 ∨ y 3 ∨ y 4 ∨ y 5 ) ∧ (y 2 ∨ ¬y 3 ∨ ¬y 5 ∨ y 6 ∨ y 7 ) ∧ (¬y 5 ∨ y 6 ∨ ¬y 7 ∨ y 8 ) ∧ (y 3 ∨ ¬y 4 ∨ y 8 ∨ y 9 ∨ y 10 ) of 10 variables and 6 clauses in I(SAT ); in I(SOP ) this gives 10 subcarriers and 6 SNOW BSs. If a boolean variable y k exists as a positive literal in clause C i and as a negative literal in C j , then the corresponding BS i (i.e., SNOW i ) and BS j (i.e., SNOW j ) interfere with each other and x k ∈ {Z i ∩ Z j } is the interfering subcarrier between them. Thus, setting y k to true in I(SAT ) will yield assigning subcarrier x k to BS i or BS j , and vice versa. In the previous example, if y 2 is set to true, then BS 1 and BS 3 get subcarrier x 2 and BS 0 and BS 2 do not.
To build the SNOW-tree, we consider BS 0 as the root BS, corresponding to clause C 0 . We draw an edge between BS i and BS j if the corresponding clauses C i and C j have at least one common positive or negative literal. The number of such literals in I(SAT ) represents φ i,j in I(SOP ). While creating the SNOW-tree, we do not draw an edge between BS j and BS k if BS j ∈ ({p(i)} ∪ Ch i ) and BS k ∈ ({p(i)} ∪ Ch i ), where i ≠ j ≠ k. Thus, no loops are created and the number of edges in the SNOW-tree becomes N − 1, as shown in Figure 4. The whole reduction process runs in O(m 2 lg N ) time.
Suppose that I(SAT ) has an interpretation that satisfies the boolean formula. Then each clause C i is also true. Also, a subset of variables in each clause C i is true, which corresponds to the subset of subcarriers X i that is assigned to BS i in I(SOP ). The number of variables in clause C i that are set to true represents the minimum number of subcarriers σ i in I(SOP ). Also, no two interfering BS i and BS j get more than φ i,j common subcarriers between them. We also include a common subcarrier between neighboring BS i and BS j if there is none already, thus considering the corresponding literal in I(SAT ) as true, which does not change the satisfiability of the boolean formula. Such inclusion also does not violate the right-hand-side conditions of Constraints (2) and (3). Thus, I(SOP ) has a feasible subcarrier assignment where the root BS assigns at least N subcarriers in total to all the BSs in the SNOW-tree, each having at least one. Now, let I(SOP ) have a feasible subcarrier assignment in the SNOW-tree. Then the root BS assigns at least N subcarriers to the N BSs, each having at least one. Since each BS i in I(SOP ) represents a clause C i in I(SAT ), two neighboring BS i and BS j in I(SOP ) have at least one common subcarrier, and the SNOW-tree is connected, each clause C i has at least one literal that is set to be true. Thus, we have an interpretation in I(SAT ) that satisfies the boolean formula.
C. Efficient Greedy Heuristic for SOP
Since an optimal solution of SOP cannot be achieved in polynomial time unless P=NP, we first propose an intuitive and efficient polynomial-time greedy heuristic. In the beginning, our greedy heuristic assigns to each BS the entire spectrum available at its location. Our heuristic then keeps removing subcarriers from the BSs until all the constraints of SOP are satisfied. Our goal here is to remove as few subcarriers as possible from each BS to maximize the scalability metric.
Algorithm 1: Greedy Heuristic Algorithm
Our greedy heuristic is described as follows. The root BS first greedily assigns to BS i all the subcarriers that are available at the location of BS i (i.e., the entire spectrum available at BS i 's location). Note that such an assignment maximizes the scalability metric |X 0 | + · · · + |X N −1 |, but violates the constraints of SOP. Specifically, it satisfies Constraint (1), but may violate Constraints (2) and (3), which are defined to keep the BSs connected as a tree and to limit the interference between neighboring or interfering BSs by limiting their common usable subcarriers. Now, with a view to satisfying those two constraints, the heuristic greedily removes some subcarriers that are common between interfering BSs. Such removal of subcarriers is done to cause the least decrease in scalability and to ensure that Constraint (1) is not violated. In other words, it tries to keep the subcarrier assignment balanced between BSs. Specifically, for every interfering BS pair, BS i and BS j , we do the following until they satisfy Constraints (2) and (3): find the next common subcarrier between them and remove it from the BS that currently holds more subcarriers, skipping the removal if it would violate Constraint (1). The pseudocode of our greedy heuristic is shown as Algorithm 1. As shown in the pseudocode, the heuristic may not find a feasible solution in some rare cases where some BS pairs, BS i and BS j , cannot satisfy the condition |X i ∩ X j | ≤ φ i,j . In such cases, we can either use the infeasible solution with the found subcarrier allocation, or relax the constraints for those BSs (violating the constraints) by changing their values of σ i or φ i,j in Constraints (1)-(3).
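A compact sketch of this greedy removal procedure is shown below; it follows the description above, with the tie-breaking (remove from the BS holding more subcarriers unless Constraint (1) forbids it) made explicit. The input data structures are illustrative, not the pseudocode of Algorithm 1.

```python
# Greedy heuristic sketch: start from the full spectrum at every BS, then trim
# common subcarriers between interfering pairs down to the overlap limits phi.
def greedy_sop(Z, sigma, interfering_pairs, phi):
    """Z: BS -> set of available subcarriers; sigma: BS -> minimum demand;
    interfering_pairs: list of (i, j); phi: (i, j) -> max allowed overlap."""
    X = {i: set(zi) for i, zi in Z.items()}          # start with everything
    for i, j in interfering_pairs:
        for x in list(X[i] & X[j]):
            if len(X[i] & X[j]) <= phi[(i, j)]:
                break                                 # overlap limit already met
            # prefer removing from the BS that currently holds more subcarriers,
            # but never push a BS below its minimum sigma (Constraint (1))
            first, second = (i, j) if len(X[i]) >= len(X[j]) else (j, i)
            if len(X[first]) - 1 >= sigma[first]:
                X[first].discard(x)
            elif len(X[second]) - 1 >= sigma[second]:
                X[second].discard(x)
            else:
                break                                 # cannot reduce overlap further
    return X

Z = {0: {1, 2, 3, 4, 5}, 1: {2, 3, 4, 5, 6}}
print(greedy_sop(Z, sigma={0: 3, 1: 3}, interfering_pairs=[(0, 1)], phi={(0, 1): 2}))
```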
D. Approximation Algorithm for SOP
While the heuristic (Algorithm 1) can be highly efficient in practice, we also propose an algorithm for which we can derive an analytical performance bound. Our reduction used in Theorem 1 provides the key insight for developing such an approximation algorithm. Our key observation from the reduction is that a solution approach for SOP can be developed by extending a solution for the MAX-SAT (Maximum Satisfiability) problem and by incorporating the constraints of the former. MAX-SAT, a generalized version of SAT, asks to determine the maximum number of clauses of a given boolean formula in conjunctive normal form that can be made true by an assignment of truth values to the variables of the formula [49]. This observation allows us to leverage the well-established results for MAX-SAT. Specifically, we leverage a very simple but analytically efficient approach adopted for MAX-SAT, and incorporate the SOP constraints to develop a constant-factor approximation algorithm for SOP.
Considering Ω as the total weight of all clauses, a simple approximation algorithm for MAX-SAT sets each variable to true with probability 1/2. By linearity of expectation, the expected weight of the satisfied clauses is at least Ω/2, thus making the approach a randomized 1/2-approximation algorithm. In solving the SOP in a similar spirit, we shall consider assigning a subcarrier to a SNOW in place of a variable to a clause. Choosing a probability other than 1/2 would require us to calculate different probabilities for different subcarriers based on the level of interference they contribute to different BSs, which involves a costly approach. Therefore, it is very difficult and impractical for us to develop a faster approximation algorithm based on our proposed approach. Since MAX-SAT does not have Constraints (1), (2), and (3), we modify this probabilistic assignment, whose pseudocode is shown as Algorithm 2, to take these constraints into account.
Algorithm 2 assigns subcarriers to the BSs in two steps. In step 1, it assigns each distinct subcarrier x l in the SNOW-tree uniformly and independently with probability 1/2 to each BS i such that x l ∈ Z i (i.e., each BS where the subcarrier is available). The set of subcarriers that BS i gets after this step is X ′ i . Thus, the expected number of subcarriers assigned to BS i after step 1 is |Z i |/2. Similarly, the expected number of common subcarriers between two interfering BSs, BS i and BS j , after step 1 is |Z i ∩ Z j |/4. Our experiments (Sec. VIII-A, VIII-B) show that two interfering BSs can use even up to 60% of their total available common subcarriers. That is, the values φ i,j in Constraints (2) and (3) can be up to 60% of |Z i ∩ Z j |. Thus, after step 1, the probability of satisfying Constraints (2) and (3) is very high. Hence, if some BS i violates Constraint (1), i.e., if |X ′ i | < σ i , we repeat the subcarrier assignment in the same way in step 2. Specifically, step 2 assigns each distinct subcarrier x k uniformly and independently with probability 1/2 to each such BS i with x k ∈ Z i − X ′ i . If X ′′ i is the set of subcarriers assigned to BS i in step 2, then BS i finally gets the subcarriers X i = X ′ i ∪ X ′′ i . While step 2 increases the probability of satisfying Constraint (1), it decreases that of satisfying the other two constraints, which was very high before this step. Hence, we do not adopt any further subcarrier addition.
1) Performance Analysis: As described above, Algorithm 2 can sometimes end up with an infeasible solution for SOP. However, as we describe below, such chances are quite low, and the probability of finding a feasible solution is quite high (≈ 1) (Lemma 1). Theorem 3 then proves that the algorithm has an approximation ratio of 1/2 for any solution it provides (feasible or infeasible).
Lemma 1: The probability of satisfying all the constraints of SOP is ≈ 1.
Proof: If both steps run, a subcarrier x l ∈ Z i is assigned to BS i with probability 1/2 + (1/2)(1/2) = 3/4, so the expected number of subcarriers assigned to BS i is 3|Z i |/4. Note that the value of σ i is usually set much smaller than this value, as a BS does not want to use all the available subcarriers, allowing other SNOWs to use them. Thus, the probability of satisfying Constraint (1) is ≈ 1.
If step 2 does not run, then the expected number of common subcarriers between each interfering BS pair, BS i and BS j , is |Z i ∩ Z j |/4. As we have discussed before, the value of φ i,j in Constraints (2) and (3) is usually above this. Thus, the probability of satisfying Constraints (2) and (3) is also ≈ 1.
If step 2 runs, then the expected number of common subcarriers between BS i and BS j is at most (3/4) 2 |Z i ∩ Z j | = 9|Z i ∩ Z j |/16 ≈ 0.56 |Z i ∩ Z j |, which is still within the typical range of φ i,j (up to 60% of |Z i ∩ Z j |), which means that the probability of satisfying Constraints (2) and (3) is ≈ 1 even if step 2 runs. Thus, the probability of satisfying all constraints is ≈ 1.
Theorem 3: Algorithm 2 has an approximation ratio of 1/2.
Proof: Since an optimal value OPT of the objective (scalability metric) is unknown, a conservative upper bound is given by OPT ≤ |Z 0 | + · · · + |Z N −1 |. If step 2 of the algorithm does not run, then according to the probabilistic assignment of subcarriers in step 1 of Algorithm 2, we have E[|X i |] = |Z i |/2 in step 1 for every BS i . If step 2 of Algorithm 2 runs, E[|X i |] = 3|Z i |/4 ≥ |Z i |/2. Now, using linearity of expectation, in either case E[|X 0 | + · · · + |X N −1 |] ≥ (|Z 0 | + · · · + |Z N −1 |)/2 ≥ OPT/2. Thus the approximation bound follows. As we shall describe in Sections VIII-A and VIII-B through evaluations, our heuristic (Algorithm 1) performs better in terms of scalability, energy consumption, and latency, while Algorithm 2 provides a theoretical performance guarantee.
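The two-step randomized assignment of Algorithm 2 can be summarized in a few lines of code; the sketch below follows the description above, with illustrative inputs (in practice the BSs obtain their Z i from the white space database).

```python
import random

# Randomized 1/2-approximation sketch: two rounds of independent coin flips per
# (subcarrier, BS) pair; the second round runs only for BSs still below sigma_i.
def randomized_sop(Z, sigma, seed=None):
    rng = random.Random(seed)
    X = {i: {x for x in Z[i] if rng.random() < 0.5} for i in Z}      # step 1
    for i in Z:                                                      # step 2 (if needed)
        if len(X[i]) < sigma[i]:
            X[i] |= {x for x in (Z[i] - X[i]) if rng.random() < 0.5}
    return X

Z = {0: set(range(0, 20)), 1: set(range(10, 30)), 2: set(range(25, 40))}
sigma = {0: 8, 1: 8, 2: 6}
X = randomized_sop(Z, sigma, seed=42)
print({i: len(X[i]) for i in X})
```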
VII. IMPLEMENTATION
We implement our proposed SNOW technologies on the GNU Radio [14] platform using USRP devices that can operate between 70MHz and 6GHz of spectrum [15]. We have 9 USRP devices (2 B210, 4 B200, and 3 USRP1). To demonstrate the effectiveness of SOP in intra-SNOW communication, we use 2x2 devices in 2 different SNOW BSs (each having one Tx-Radio and one Rx-Radio), where one BS is assigned 3 nodes (3 USRPs) and the other BS is assigned 2 nodes (2 USRPs). On the other hand, to demonstrate inter-SNOW communication, we use 2x3 devices in 3 different SNOW BSs (each having one Tx-Radio and one Rx-Radio). In this case, each BS is assigned one USRP device as a node.
We evaluate the performance of our design by experimenting at 15 different candidate locations covering approximately 25 km x 15 km of a large metropolitan area in the city of Detroit, Michigan (Figure 5). Due to our limited number of USRP devices (3 BSs, each having one node, to demonstrate inter-SNOW communication) in real experiments, we create 5 different SNOW-trees at different candidate locations and run the experiments separately. In the experiments, we choose to create 3 SNOWs to demonstrate the integration of as many SNOWs as we can with our limited number of devices and, most importantly, to cover more area using a SNOW-tree. In [3], [4], we have already performed extensive experiments considering multiple nodes in a single SNOW. Hence, here we show intra-SNOW communication using 2 SNOW BSs, one having 3 nodes and the other having 2 nodes. However, later in simulations, we create a single SNOW-tree of 15 SNOWs, each having 1000 nodes. We perform experiments on white space availability at different locations and determine the values of φ i,p(i) and φ i,j in Constraints (2) and (3), respectively. We compare the performance of our greedy heuristic and our approximation algorithm for SOP with a direct allocation scheme. A direct allocation scheme is unaware of scalability and inter-SNOW interference and hence assigns each BS all the subcarriers that are available at its location. Moreover, we perform exhaustive experiments on both intra- and inter-SNOW communications.
VIII. EVALUATION
In this section, we evaluate the performance of our SOP algorithms in inter-and intra-SNOW communications through experiments and simulations.
A. Experiments 1) Experimental Setup: Our testbed location has white spaces ranging between 518 and 686MHz (TV channels 21-51) for different BSs. We set each subcarrier bandwidth to 400kHz, which is the default subcarrier bandwidth in SNOW [3], [4]. We use 40-byte packets (including header, random payload, and CRC) with a spreading factor of 8, modulated and demodulated as BPSK (Binary Phase-Shift Keying). In a similar spirit to IEEE 802.15.4, we set the Tx power to 0dBm in the SNOW nodes for energy efficiency. Receive sensitivity is set to -94dBm in both the SNOW BSs and the nodes. Meanwhile, BSs transmit with a Tx power of 15dBm (≈40mW) to their nodes and neighboring BSs, which is the maximum allowable Tx-power limit in most of the white space channels at our testbed location. For energy calculations at the nodes, we use the energy profile of the TI CC1070 RF unit by Texas Instruments, which can operate in white spaces [50]. Unless stated otherwise, these are our default parameter settings. 2) Finding Allowable Overlap of Spectrum: We first determine how many subcarriers can be common between two interfering SNOWs without degrading their performance. We determine white spaces at 15 different locations from a cloud-hosted database [51]. Figure 6(a) shows the available white spaces at different locations, confirmed by both the database and sensing. Also, we conduct experiments on 5 different SNOW-trees to determine the maximum allowable number of common subcarriers between interfering BSs. The locations of the BSs in the 5 trees are (1) B, A, E; (2) D, C, F; (3) G, I, L; (4) J, H, K; (5) N, M, O; respectively, where the BS in the middle location in each SNOW-tree is the root BS. In this paper, we also identify the SNOW BSs by their location indices. In each tree, we allow BSs to operate with different magnitudes of white space overlap between them. To determine the maximum allowable number of common subcarriers between interfering BSs in a tree, each node hops randomly to all the subcarriers that are available at its BS location and sends 100 consecutive packets to its BS. Each node repeats this procedure 1000 times.
As shown in Figure 6(b), the BSs in each tree can overlap 60% of their white spaces and still yield an average Packet Reception Rate (PRR) of 85%. We consider an 85% PRR an acceptable rate. This figure also shows that the average PRR decreases as the magnitude of overlap increases. Finding the maximum allowable overlap needs to be done only once at the beginning of the network operation and may be recomputed if there is a significant change (e.g., some BS or a large number of nodes leave or join) in the network. A network deployment may choose its magnitude of overlap based on the target application's quality of service (QoS) requirements. We thus set the values of φ i,p(i) and φ i,j in Constraints (2) and (3), respectively, based on this experiment. Finding the optimal values of these variables is out of the scope of this paper. 3) Evaluating the Scalability Metric: To demonstrate the performance in maximizing the scalability metric under our approaches and the baseline approach, we set the value of σ i in Constraint (1) to 100 for all the BSs. We choose the same value for each BS since most (13 out of 15) of the BS locations have the same set of white space channels. Figure 7(a) shows the values of the scalability metric achieved in 5 different SNOW-trees using our greedy heuristic, our approximation algorithm, and the direct allocation approach. This figure shows that the direct allocation scheme assigns more subcarriers to all BSs. Our later experiments will show that such an assignment suffers in terms of reliability, latency, and energy consumption compared to our greedy heuristic and approximation algorithm due to its violation of Constraints (2) and (3) of SOP. Also, our greedy heuristic can offer higher scalability than our approximation approach, while the latter can be preferred when an analytical performance bound is a concern. Thus, our greedy heuristic can be more effective in practice (even though its performance bound was not derived). Figure 7(b) shows the time taken by our greedy heuristic, our approximation algorithm, and the direct allocation scheme to assign subcarriers to BSs. Our greedy heuristic takes 0.094ms compared to 0.068ms for our approximation algorithm in the worst case, in SNOW-tree 4. In the figure, the time taken by the direct allocation scheme is not visible as it is approximately 0ms (since it does not employ any intelligent technique). Nevertheless, the times taken by our greedy heuristic and our approximation algorithm are both very low and practical.
4) Experiments on Intra-SNOW Communication:
In this section, we demonstrate intra-SNOW communication performance when multiple interfering SNOWs are integrated together to coexist. Due to our limited number of USRP devices, we choose two interfering SNOWs in a SNOW-tree and run intra-SNOW communications independently at the same time. For example, in SNOW-tree 1, the SNOWs at locations A and B perform intra-SNOW communications.
Here, the SNOWs at locations A and B are assigned 3 and 2 nodes, respectively (as explained in Section VII). Similarly, we allow the SNOWs at locations B and C, and at C and A, to do the same, respectively. In the experiments, each node under a SNOW hops randomly over the different subcarriers assigned by our greedy heuristic algorithm and sends 100 consecutive packets to the BS. We repeat the same set of experiments when the subcarrier assignment is done by our approximation algorithm and by the direct allocation scheme. We allow the nodes in a SNOW to hop randomly across different subcarriers to emulate a situation in which all the subcarriers of that SNOW are assigned to different nodes. Figure 8 shows the reliability, latency, and energy consumption in intra-SNOW communication under different SOP algorithms. Figure 8(a) shows the average PRR at different SNOW BSs. In each SNOW-tree, the average PRR at each SNOW BS is calculated from all 3 pairs of intra-SNOW communication experiments. The highest average PRR is approximately 100% at the SNOW BSs located at E, I, M, N, and O, while the lowest average PRR is approximately 98.9% at the SNOW BS located at F when the subcarriers assigned by our greedy heuristic algorithm are used. For our approximation algorithm, the highest and lowest average PRR values are approximately 100% and 97.9%, respectively. For the direct allocation scheme, these values are 89% and 79%, respectively. Figure 8(b) shows that the average latency to successfully deliver an intra-SNOW packet to a SNOW BS is lower in all SNOW-trees when the subcarriers assigned by our greedy heuristic algorithm are used. For example, the average latency per packet is as low as 8.3ms in SNOW-tree 4 compared to 9.5ms and 22.1ms for our approximation algorithm and the direct allocation assignments, respectively. Figure 8(c) shows that the average energy consumption per packet is also lower in all SNOW-trees when our greedy subcarrier assignment is used. In SNOW-tree 4, the average energy consumption per packet is as low as 0.47mJ compared to 0.52mJ and 1.31mJ for the approximation and direct allocation assignments, respectively. Thus, all the experiments in Figure 8 confirm that both our greedy heuristic and our approximation algorithm are practical choices for SOP.
5) Experiments on Inter-SNOW Communications:
To demonstrate inter-SNOW communication performance, we perform parallel communications between two nodes under two sibling BSs in each SNOW-tree, using the sets of subcarriers assigned to the BSs by different SOP algorithms in our previous experiments. Since we have only one node under each BS in a tree (as explained in Section VII), we allow those nodes to use all the subcarriers of their respective BSs. Considering SNOW-tree 1, the node in the BS located at B (and E) sends inter-SNOW packets to the node in the BS located at E (and B) via the root BS located at A. Thus, this is level-three inter-SNOW communication. In the experiments, the node in the BS at B (and E) randomly hops onto different subcarriers of its BS and sends 100 consecutive packets destined for the node in the BS at E (and B). The BS at B (and E) first receives the packets (intra-SNOW) and then relays them to its parent BS at A (inter-SNOW). The root BS at A then relays (inter-SNOW) the packets to the BS at E (and B). Finally, the BS at E (and B) sends (intra-SNOW) the packets to its node. Considering a single inter-SNOW packet, since the node is randomly hopping to different subcarriers, the BS sends (intra-SNOW) the same packet via all subcarriers so that the node may receive it instantly. The whole process is repeated 1000 times in every SNOW-tree. Figure 9 shows the average PRR, latency, and energy consumption in inter-SNOW communications, where the sets of subcarriers used are those given by our greedy heuristic, our approximation algorithm, and the direct allocation scheme in our previous experiments in Section VIII-A3. Figure 9(a) shows that the average PRR values are high in all SNOW-trees when the subcarriers are assigned using our greedy heuristic. For example, the PRR is as high as 99.99% in SNOW-tree 5 compared to 97.2% and 74% for our approximation algorithm and the direct allocation scheme, respectively. Figure 9(b) shows that the per inter-SNOW packet latency is lower in all SNOW-trees in the case of our greedy subcarrier assignments. In SNOW-tree 5, it is 26.2ms on average compared to 32.8ms and 50ms in the cases of our approximation algorithm and the direct allocation scheme assignments, respectively. Figure 9(c) shows that the average energy consumed per inter-SNOW packet at the Tx and Rx nodes is lower in all SNOW-trees for our greedy assignments. In SNOW-tree 5, the Tx and Rx nodes consume on average 0.49mJ and 0.48mJ of energy, respectively. For our approximation algorithm, these values are 0.59mJ and 0.56mJ, while the direct allocation yields 1.2mJ and 1mJ. These experiments thus confirm that our greedy heuristic and approximation algorithms are practical choices for SOP.
B. Simulation
For evaluation under a large-scale network, we perform simulations in NS-3 [16].
1) Simulation Setup: We create a SNOW-tree of 15 SNOWs (BSs) as shown in Figure 10(a) and simulate the 25 km x 15 km area shown in Figure 5. The BS at location A is the root BS. Each SNOW has 1000 nodes, totaling 15,000 nodes in the SNOW-tree. We limit the maximum allowable number of common subcarriers between interfering BSs based on the white space availability at different BS locations (Figure 6(a)) and our experimental findings, as shown in Figure 10(b). σ i in Constraint (1) is chosen to be 100 for all the BSs. Thus, a subcarrier will be used by at most 10 nodes in the worst case in intra-SNOW communication. Figure 10(c) shows the subcarrier assignments for all BSs by the root BS at location A, using our greedy heuristic algorithm, our approximation algorithm, and the direct allocation scheme. Here, neither the greedy heuristic nor the approximation algorithm violates any of the constraints of SOP. However, the direct allocation scheme violates Constraints (2) and (3) of SOP. The values of various parameters such as packet size, spreading factor, modulation, and Tx power are set the same as described for our real experiments (Section VIII-A1).
2) Simulation Results: We evaluate the performance of our design using thousands of nodes by generating thousands of parallel multi-level inter-SNOW communications. In the simulation, each node in each SNOW sends 100 packets with a random sleep interval of 0-50ms, destined for another node in the second level (adjacent SNOWs) and up to its maximum reachable level inside the SNOW-tree. In each SNOW, we identify nodes from 1 to 1000. In our simulation, a node with ID i sends inter-SNOW packets to the nodes with ID i in all other SNOWs. Figure 11 demonstrates the performance in terms of reliability, latency, and energy consumption when the subcarriers assigned by our greedy heuristic, our approximation algorithm, and the direct allocation scheme are used. Figure 11(a) shows that by using the subcarriers assigned by our greedy heuristic algorithm, we can achieve an average PRR of 93% even in 10th-level inter-SNOW communications. On the other hand, our approximation algorithm and the direct allocation scheme provide approximately 73% and 40% average PRR, respectively. Figure 11(b) shows that by using the subcarriers assigned by our greedy heuristic algorithm, we observe an average total latency of 14 minutes to send all successful inter-SNOW packets to the second levels and up to the maximum achievable levels by all 15,000 nodes. Using the subcarriers given by our approximation algorithm and the direct allocation scheme, these values are approximately 60 minutes and 200 minutes, respectively. Figure 11(c) shows that by using the subcarriers assigned by our greedy heuristic algorithm, the per-node energy consumption to send all successful inter-SNOW packets to all possible levels is 389mJ. For our approximation algorithm and the direct allocation scheme, these values are 1728mJ and 5580mJ, respectively. Thus, the simulation results demonstrate that either the greedy heuristic or the approximation algorithm can be chosen to scale up LPWANs for future IoT applications.
C. Discussion
In Section VI-C, we have justified that our greedy heuristic approach is an intuitive and highly scalable polynomial-time solution. Additionally, we have discussed that deriving an analytical bound (in terms of scalability) for our greedy heuristic is not immediate. Hence, for the cases when an analytical performance bound is needed, we have proposed a probabilistic optimization approach and derived its theoretical performance bound (Section VI-D). Specifically, our probabilistic optimization approach is a 1/2-approximation algorithm. In terms of performance, both experiments and simulations demonstrate that our greedy heuristic algorithm provides higher reliability, lower latency, and lower energy consumption in both intra- and inter-SNOW communications compared to our approximation algorithm, which is due to its interference-aware subcarrier assignments to different SNOW BSs. Since our approximation algorithm assigns more subcarriers to most of the BSs (both in experiments and simulations), it assigns a greater number of interfering subcarriers between neighboring BSs. Such an assignment by our approximation algorithm causes frequent back-offs in transmissions by the nodes, resulting in increased latency and energy consumption in both intra- and inter-SNOW communications.
As described in Algorithms 1 and 2, our greedy heuristic and/or approximation algorithms may fail to provide a feasible subcarrier assignment for a few SOP problem instances. In practice, either our greedy heuristic or our approximation algorithm may be adopted to handle the subcarrier assignment failure of the other. In cases when both fail, the target application's requirements will dictate which solution should be adopted. For example, if the application requires bounded performance and high spectrum utilization, our approximation algorithm may be adopted. On the other hand, the greedy heuristic may be chosen when higher reliability is expected. In our experiments, we were unable to demonstrate such cases given the available TV white spaces and environments at our testbed location. Our realistic simulations, whose parameters are chosen based on our experiments, do not showcase any infeasible cases for our greedy heuristic algorithm either. In general, our experiments and simulations demonstrate that both the greedy heuristic and the approximation algorithm may be practically chosen to scale up LPWANs for future IoT applications.
IX. CONCLUSIONS
LPWANs represent a key enabling technology for the IoT, offering long communication range at low power. While many competing LPWAN technologies have been developed recently, they still face limitations in scalability and in covering much wider areas. Such limitations make the adoption of LPWANs challenging for future IoT applications, especially in infrastructure-limited rural areas. In this paper, we have addressed this challenge by integrating multiple LPWANs for enhanced scalability and extended coverage. Specifically, we have proposed to scale up LPWANs through a seamless integration of multiple SNOWs that enables concurrent inter-SNOW and intra-SNOW communications. We have then formulated the tradeoff between scalability and inter-SNOW interference as a scalability optimization problem and have proved its NP-hardness. Consequently, we have proposed a polynomial-time greedy heuristic that is highly effective in experiments, as well as a polynomial-time 1/2-approximation algorithm. Testbed experiments as well as large-scale simulations demonstrate the feasibility of achieving scalability through our proposed integration of SNOWs with high reliability, low latency, and energy efficiency.
ACKNOWLEDGMENT This work was supported by NSF through grants CNS-1742985 and CAREER-1846126.
Do Photobiont Switch and Cephalodia Emancipation Act as Evolutionary Drivers in the Lichen Symbiosis? A Case Study in the Pannariaceae (Peltigerales)
Lichen symbioses in the Pannariaceae associate an ascomycete and either cyanobacteria alone (usually Nostoc; bipartite thalli) or green algae and cyanobacteria (cyanobacteria being located in dedicated structures called cephalodia; tripartite thalli) as photosynthetic partners (photobionts). In bipartite thalli, cyanobacteria can either be restricted to a well-delimited layer within the thallus (‘pannarioid’ thalli) or spread over the thallus, which becomes gelatinous when wet (‘collematoid’ thalli). We studied the collematoid genera Kroswia and Physma and an undescribed tripartite species along with representatives of the pannarioid genera Fuscopannaria, Pannaria and Parmeliella. Molecular inferences from 4 loci for the fungus and 1 locus for the photobiont and statistical analyses within a phylogenetic framework support the following: (a) several switches from pannarioid to collematoid thalli occurred and are correlated with photobiont switches; the collematoid genus Kroswia is nested within the pannarioid genus Fuscopannaria and the collematoid genus Physma is sister to the pannarioid Parmeliella mariana group; (b) Nostoc associated with collematoid thalli in the Pannariaceae are related to those of the Collemataceae (which contains only collematoid thalli), and are never associated with pannarioid thalli; Nostoc associated with pannarioid thalli also associate in other families with similar morphology; (c) ancestors of several lineages in the Pannariaceae developed tripartite thalli, bipartite thalli probably resulting from cephalodia emancipation from tripartite thalli which eventually evolved and diverged, as suggested by the same Nostoc being present in the collematoid genus Physma and in the cephalodia of a closely related tripartite species. Photobiont switches and cephalodia emancipation followed by divergence are thus suspected to act as evolutionary drivers in the family Pannariaceae.
Introduction
Several spectacular aspects of the lichen symbiosis have come to light recently, the most surprising for the general public and the most promising for evolutionary studies being the multiple variations of the association between the mycobiont and photobiont partners. The lichen as the icon of a consensual and stable symbiosis between two very different partners ''for better and for worse'' is not the model that molecular studies have produced in recent years. Indeed, some mycobionts can incorporate several algal genotypes in their thallus [1][2][3], or even different algal species [4][5]. Several phylogenetic studies have demonstrated that photobiont switching is rather widespread [6], even in obligately sterile taxa where both partners are dispersed together, and may occur repeatedly over evolutionary timescales [7]. Studies of the genetic diversity of both partners within a geographical context revealed that mycobionts can recruit several lineages of photobionts, allowing for ecotypic differentiation and thus for colonization of different ecological niches and distributions [6,8]. Those multiple variations in the association between the partners involved in the lichen symbiosis may take part in their evolutionary trajectory, and we here address that matter for a lichen family (the Pannariaceae) in which several very different types of thalli occur together with variation in the number of photobionts involved in their construction.
The Peltigerales, a strongly supported lineage within the Lecanoromycetes, contains many well-known lichen genera, such as Lobaria, Peltigera and Sticta, within 10 families [9][10][11][12], including the Collemataceae and the Pannariaceae, two families that will be mentioned in this paper.
Within the Peltigerales, symbiosis includes two different lineages of photobionts [10]: (a) cyanobacteria mostly belonging to the genus Nostoc, or to Scytonema, Hyphomorpha and other taxa in the Scytonemataceae and Rivulariaceae; (b) green algae, mainly assigned to the genera Coccomyxa, Dictyochloropsis, Myrmecia, all belonging to the Trebouxiophyceae. The number of photobionts associated with the mycobiont provides the ground for the distinction of bi- and tripartite lichens, the latter case being much more diverse in the way of allocating space for the cyanobacteria [13][14][15]: (a) association with a single photobiont partner, either a cyanobacteria or a green algae; these thalli are bipartite and are referable to the cyanolichens or the chlorolichens, respectively [16]; (b) association with two partners, a cyanobacteria and a green algae, and corresponding thalli referred to as tripartite thalli [17]; the topological organization of the partners can vary: (b1) both photobionts can be present in a dedicated layer within the thallus (chloro-cyanolichen; see [16]); (b2) the green photobiont is present in a dedicated layer within the thallus whilst cyanobacteria are confined to dedicated and morphologically recognizable organs, named cephalodia [18]; (b3) production of two different thallus types, either living independently from one another or being closely associated, one with the cyanobacteria and the other one with the green algae; these structures are referred to as « photosymbiodemes », « photopairs » or « photomorphs » and can be morphologically rather similar or very much different one from the other; in the latter case the cyanomorph has a Dendriscocaulon-like morphology [14].
Furthermore, two different types of cyanobacterial bipartite thalli can be distinguished on the basis of their response to changes in water availability [19]. A first type is characterized by thalli that swell considerably and become very much gelatinous when wet, and return to a rather brittle and crumpled condition when dry, while the second type has thalli that do not radically change when water availability varies, albeit strong changes in color can occur. The first type is associated with a homoiomerous thallus anatomy, that is, the absence of a specialized photobiont layer, with chains of Nostoc with thick mucilaginous walls being easily recognized and present throughout the thallus thickness, an upper cortex being absent or present; it will be hereafter referred to as the collematoid thallus type. The second type of thallus is heteromerous, that is, with a usually very distinct photobiont layer present under the upper cortex (which is always present) and Nostoc (or other genera) or green algal cells compacted and assembled in clusters. Within the second group, several morphotypes can be distinguished, ranging from nearly crustose to large foliose and dendroid-fruticose; the pannarioid type refers to a squamulose to foliose thallus developed over a black prothallus. Within the Peltigerales, a thallus associated with cyanobacteria can either belong to the collematoid or to other types, incl. the pannarioid type; on the other hand, thalli associated with green algae never belong to the collematoid type.
In summary, the lichen family Pannariaceae includes genera with very different thalli, easily recognized by their morphology, anatomy and behavior towards water availability: the collematoid and pannarioid thalli. We here wish: (1) to examine the phylogenetic relationships of the collematoid genera Kroswia and Physma, and to examine the phylogenetic relationships of the photobiont of these two taxa (both being lichenized with Nostoc); (2) to examine the phylogenetic relationships of the collematoid, pannarioid and tripartite thalli all across the family Pannariaceae, and to establish whether a photobiont switch can be associated with the transition from pannarioid thalli to collematoid thalli and vice versa; (3) to examine the phylogenetic position of an undescribed species with a tripartite thallus, belonging to Pannaria s. l. (a foliose species with a green algae in the thallus and developing squamulose cephalodia with Nostoc over its surface), and to assess the evolutionary significance of a thallus combining a green algae and a cyanobacteria.
Taxon Sampling
We assembled material belonging to the Pannariaceae from recent field trips in Madagascar (2008)
Molecular Data
Well-preserved lichen specimens lacking any visible symptoms of fungal infection were selected for DNA isolation. Extraction of DNA followed the protocol of Cubero et al. [50]. We sequenced the ribosomal nuclear loci ITS, using primers ITS1F [51] and ITS4 [52], and LSU with primers LR0R [53] and either LR7 [53] or LIC2044 [54], the mitochondrial ribosomal locus mtSSU, using primers SSU1 and SSU3R [55], and part of the protein-coding gene RPB1 with RPB1AF [56] and RPB1CR [57]. We sequenced the 16S ribosomal region of the Nostoc symbiont of 25 of this set of Pannariaceae as well as 2 additional Fuscopannaria leucosticta, 2 additional Physma and 4 from two other genera (Leptogium and Pseudocyphellaria) belonging to the Peltigerales, using the two primer pairs fD1 [58]-revAL [17] and f712 [59]-rD1 [58]. Amplicons were sequenced by Macrogen® or by the GIGA technology platform of the University of Liège. We then concatenated the different loci. As several species are represented by sequences obtained from specimens collected in different parts of the world, mostly with ITS, we further assembled a 3 loci dataset excluding ITS. We thus produced three matrices, two for a large sampling of the Pannariaceae including our target taxa (Kroswia, Physma and the undescribed species with a tripartite thallus), including the four loci 5.8S, mtSSU, LSU and RPB1 or including only the latter three, and one with the Nostoc 16S data.
For the concatenated analysis of the four loci, we partitioned the data into different subsets to optimize the likelihood. We used PartitionFinder [65] to choose the best partition and determine the best models for the different subsets. We used BIC as the criterion to define the best partition, and compared all models implementable in MrBayes [66]. The partition tested for the analysis on the four loci was composed of 6 subsets: RPB1 1st codon position, RPB1 2nd codon position, RPB1 3rd codon position, mtSSU, LSU, and 5.8S. For the 16S analysis on Nostoc, we used MrModelTest version 2.3 [67] to determine the best model.
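For reference, the BIC criterion used here for model choice is a simple formula (BIC = -2 lnL + k ln(n)). The short Python sketch below only illustrates that formula; it is not part of the PartitionFinder or MrModelTest workflow, and the likelihoods and parameter counts in the example are invented.

```python
import math

def bic(log_likelihood, n_params, n_sites):
    """Bayesian Information Criterion: lower is better.

    log_likelihood: maximized ln-likelihood of the model,
    n_params:       number of free parameters of the model,
    n_sites:        alignment length, used as the sample size.
    """
    return -2.0 * log_likelihood + n_params * math.log(n_sites)

# Hypothetical comparison of two substitution models on a 1500-site alignment.
models = {"GTR+I+G": (-8123.4, 10), "K80+I+G": (-8210.9, 3)}
scores = {name: bic(lnL, k, 1500) for name, (lnL, k) in models.items()}
best = min(scores, key=scores.get)
print(scores, "best:", best)
```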
Maximum Likelihood and Bayesian Phylogenetical Analyses
For each matrix, we produced the best likelihood tree and bootstrapped for 1000 pseudoreplicates in the same run using RAxML version 7.4.2 [62][63] with the default settings and the GTRCAT model. We further ran a Bayesian analysis using MrBayes version 3.1.2 [66]. Each analysis consisted of 2 runs of 3 heated chains and 1 cold one. We assessed the convergence using Tracer version 1.5 [68] and stopped the runs after checking with AWTY [69] that convergence was reached for each run and that tree topologies had been sampled in proportion to their true posterior probability distribution. The analysis for the family Pannariaceae was stopped after 15 × 10^6 generations, the analysis on Nostoc 16S after 37 × 10^6 generations.
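The RAxML step can be reproduced along the following lines; this is a minimal sketch of a typical rapid-bootstrap-plus-ML-search invocation, wrapped in Python for consistency with the other sketches. The file names, run label and random seeds are illustrative, since the exact command line used by the authors is not given in the text.

```python
import subprocess

# Hypothetical input and run names; replace with the actual alignment file.
cmd = [
    "raxmlHPC",
    "-f", "a",                 # rapid bootstrapping + best-ML-tree search in one run
    "-m", "GTRCAT",            # model named in the text
    "-s", "concat_4loci.phy",  # concatenated alignment (hypothetical file name)
    "-n", "pannariaceae_4loci",
    "-p", "12345",             # parsimony starting-tree seed
    "-x", "12345",             # rapid-bootstrap seed
    "-N", "1000",              # 1000 bootstrap pseudoreplicates, as in the text
]
subprocess.run(cmd, check=True)
```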
Ancestral State Reconstruction
We reconstructed ancestral character states using SIMMAP version 1.5.2 [70], with default settings, on the consensus Bayesian tree produced by the MrBayes analysis on the Pannariaceae 4 loci concatenated dataset, as well as on a subset of 20 trees (10 from each run of the Bayesian analysis) and with Mesquite version 2.75 [71][72] using the likelihood parameters and the default settings, calculating the average probabilities of the ancestral states based on the same subset of 20 trees.
We also used BayesTraits version 1.0 [73] on a set of 2 trees, the best tree produced by the ML analysis on the Pannariaceae 4 loci concatenated dataset and the best tree of the concatenated analysis without 5.8S, as they were slightly different, to constrain some branches (ancestors) to be in a certain state. We compared the harmonic means of the iterations, which approximate the marginal likelihood of the model, by calculating the Bayes Factor, i.e., twice the difference in (log marginal) likelihood between the models with the ancestor constrained to each state, to see which ancestral state leads to the best likelihood of the model. A positive Bayes Factor suggests that the first character state tested has a better likelihood than the second one, and a Bayes Factor above 2 is considered significant (BayesTraits Manual, available at http://www.evolution.rdg.ac.uk/BayesTraits.html). We used reversible jump and a gamma hyperprior whose mean and variance vary between 0 and 10. We ran the program for 50 × 10^6 iterations for each constrained state. The character reconstructed was the type of thallus, and the character states considered were tripartite, pannarioid bipartite and collematoid bipartite.
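Once the harmonic-mean estimates are in hand, the Bayes Factor comparison described above reduces to a single arithmetic step. The sketch below illustrates it with invented log marginal likelihoods; it does not parse BayesTraits output.

```python
def bayes_factor(log_marglik_state_a, log_marglik_state_b):
    """Bayes Factor as used in the BayesTraits comparison: twice the
    difference of the log marginal likelihoods (here approximated by
    harmonic means) of the two constrained models."""
    return 2.0 * (log_marglik_state_a - log_marglik_state_b)

# Hypothetical harmonic-mean log likelihoods for an ancestor constrained
# to 'tripartite' vs 'pannarioid'.
bf = bayes_factor(-15230.7, -15233.9)
if bf > 2:
    print(f"BF = {bf:.1f}: support for the first state (tripartite)")
elif bf < -2:
    print(f"BF = {bf:.1f}: support for the second state (pannarioid)")
else:
    print(f"BF = {bf:.1f}: no significant support either way")
```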
Topological Tests
We tested different tree topologies on the concatenated dataset of 4 loci for the Pannariaceae. We generated 8 constrained best trees with RAxML, with the same settings as above, and using the following constraints: (1) the 3 accessions of Kroswia forming a monophyletic group; (2) Kroswia as a monophyletic group basal to a group formed by Fuscopannaria ahlneri, F. confusa, F. leucosticta and We computed the likelihood of 100 trees (the best constrained tree, the best unconstrained tree and a random sample of 98 bootstrap replicate trees from the unconstrained analysis), estimating parameters on a NJ tree, using an HKY model with a gamma rate of heterogeneity and 4 gamma categories (parameters choice and methodology suggested by [74]). We performed the 1sKH test [75][76][77], the SH test [75] and the ELW test [78] on the constrained tree using TreePuzzle v. 5.2. [79]. Due to its very low power (see for instance [74]), we did not consider the results of the SH test.
Molecular Data
We amplified ITS, mtSSU and RPB1 for all 36 selected specimens, except one for RPB1. We amplified LSU for 21 specimens, all 15 negative results being resolved in a single clade comprising all accessions of Physma, the Parmeliella mariana gr. (P. brisbanensis, P. mariana and P. stylophora), Parmeliella borbonica and the undescribed tripartite 'Pannaria' R969 (here annotated the tripartite R969). Wedin et al. [19] could amplify the LSU loci for three species of Physma, but, for unknown reasons, all our attempts to amplify LSU for this clade failed.
Matrix Assemblage and Concatenation
For the analysis on the Pannariaceae mycobiont, we could include the following newly sequenced specimens: 21 specimens with all 4 loci, 14 with 3 loci (lacking LSU) and 1 specimen with 2 loci (lacking LSU and RPB1). We added 46 taxa retrieved from GenBank to complete our sampling, 39 members of the Pannariaceae, and 7 outgroup taxa all belonging to the Peltigerales (3 Vahliellaceae, 1 Collemataceae, 1 Placynthiaceae, 1 Peltigeraceae). Those included either the 4 loci or a subset of them. Detailed information can be found in table 1. For the 16S dataset on Nostoc, we produced 36 new sequences; we added 93 Nostoc sequences retrieved from GenBank, chosen either on the phylogenetic position of their fungal partner or their nucleotide similarity to our sequences, based on megaBLAST searches [60], and 14 outgroup sequences, belonging to other genera, to complete our sampling.
Partitioning and Model Selection
For the analysis on the Pannariaceae mycobiont, PartitionFinder divided the partition in 4 subsets: one composed of RPB1 1 st and 2 nd codon positions with LSU, one with mtSSU only, one with 5.8S only and one with RPB1 3 rd codon position only. For the first subset, the model selected was GTR+I+G, as well as for mtSSU and RPB1 3 rd codon position; for 5.8S, the model selected was K80+I+G. For the analysis on the Nostoc 16S dataset, the model selected was GTR+I+G.
Phylogenetic Analyses
The 50% Bayesian consensus tree of the analysis of the Pannariaceae mycobiont dataset comprising 4 loci is presented in Figure 1, with the bootstrap values of the ML analysis and the Bayesian PP values written above the branches. The same consensus tree obtained with the 3 loci dataset is available in the Supplementary Material (Figure S1). The 50% Bayesian consensus tree of the analysis of the Nostoc 16S dataset is presented in Figure 2, with the bootstrap values of the ML analysis and the Bayesian PP values written above the branches.
Phylogeny of the Family Pannariaceae (Fig. 1) Topology of the family. The analysis of the 3 and 4 loci datasets yielded the same topology, albeit with less support for some branches for the former; as expected, the 5.8S locus provides interesting resolution power to discriminate branches at the generic and infrageneric level. We retrieved the Pannariaceae as a monophyletic group, divided into two strongly supported clades: the first one includes all Parmeliella accessions, incl. the genus type P. triptophylla, except for the P. mariana group and P. borbonica, which are resolved with strong support in the other clade. The so-called Parmeliella s. str. clade further includes Degelia (here resolved as polyphyletic, as already detected by Wedin et al. [19]), Erioderma, Leptogidium and the monotypic Joergensenia, which represents the only tripartite species in this clade. The second clade can be divided into three groups: (1) the first one is not supported in ML optimization but gets a PP = 0.95 in the Bayesian analysis; it is composed of Xanthopsoroma, Physma, the Parmeliella mariana group, Parmeliella borbonica and the tripartite species R969, and will be referred to as the Physma group; (2) a group not supported in ML optimization but getting a PP = 0.94 in Bayesian analysis, composed of Pannaria, Staurolemma, Ramalodium, Fuscoderma, Psoroma and Psorophorus, that will be referred to as the Pannaria group; and finally (3) a group composed of Fuscopannaria, Kroswia, Protopannaria, Leciophysma and Parmeliella parvula, that will be referred to as the Fuscopannaria group.
Wedin et al. [19] and Spribille and Muggia [10] retrieved the Parmeliella s. str. group, the Pannaria group and the Fuscopannaria group with a topology similar to ours. However, in their studies, their accessions of Physma are nested within the Pannaria group. With our dataset, which includes a larger sampling of Physma and representatives of the closely related Parmeliella mariana gr., P. borbonica and the tripartite R969, the hypothesis of the whole Physma group being nested in the Pannaria group, with the Fuscopannaria group as basal, is strongly rejected by two topological tests (ELW and 1sKH tests; see table 2).
Monophyly of Several Genera
Our accessions of Kroswia crystallifera (the type species of the genus; [27]) gathered in Madagascar and Reunion are not resolved as a monophyletic group: they are nested within Fuscopannaria, and closely related to its type species F. leucosticta [38]. Even with the exclusion of species now referred to Vahliella [12,80], the genus Fuscopannaria is not resolved as monophyletic, unless F. sampaiana is excluded and Kroswia crystallifera included. Two strongly supported clades can be distinguished if the genus is so recircumscribed: one with F. ignobilis and F. mediterranea and the other with the type species and Kroswia crystallifera.
Pannaria is resolved as a diverse but nevertheless well-supported genus, including several tripartite species formerly placed in the genus Psoroma which were transferred to Pannaria following the detailed studies by Elvebakk [81][82][83][84], Elvebakk & Bjerke [85], Elvebakk & Galloway [86] and Elvebakk et al. [17]. Interestingly, our single accession of the tripartite Pannaria-like R969 is not resolved amongst other tripartite Pannaria but within the Physma clade with strong support. It therefore appears that the tripartite Pannaria-like species are more diverse than expected and that the tripartite habit is widespread amongst the Pannariaceae, being absent only in the Fuscopannaria group. Two recently described tripartite genera, Xanthopsoroma and Psorophorus, segregated from Psoroma [87], are retrieved as a part of the Physma gr. with support only in the Bayesian analysis for the former, and as sister to Psoroma s. str. in the Pannaria group for the latter.
Parmeliella (type species: P. triptophylla) is a well-supported monophyletic group if the Parmeliella mariana gr., Parmeliella borbonica and P. parvula are excluded. The latter is resolved with strong support within the Fuscopannaria gr. whilst the others are resolved within the Physma group, on a long and strongly supported branch. Further, P. borbonica appears nested inside Physma, which is therefore paraphyletic.
Nostoc Phylogeny (Fig. 2) We defined phylotypes (A to G) on the Nostoc tree based on well-supported monophyletic groups containing sequences from our representatives of the Pannariaceae family. All our sequences are part of Nostoc clade 2 (sensu [59,88]) except phylotype G, which seems related to Nostoc clade 3 sensu Svenning et al. [59].
There is no evidence suggesting coevolution or cospeciation events between the mycobiont and the photobiont. The phylogeny of Nostoc involved in the lichen symbiosis does not match the phylogeny of the Pannariaceae.
Topological Uncertainties (Table 2) The tests reject neither the monophyly of Kroswia nor its position outside of the polytomy including i.a. Fuscopannaria leucosticta and F. praetermissa, although the difference in likelihood with the best unconstrained tree is relatively high (13.68). However, the position of Kroswia outside of Fuscopannaria s. str. (including F. mediterranea and F. ignobilis) is significantly rejected by the ELW and 1sKH tests. Therefore, Kroswia crystallifera should be considered as part of Fuscopannaria. Concerning the position of the tripartite R969, the topological tests do not reject its position at the base of the Physma group as a whole. However, its position at the base of the Parmeliella mariana gr., with Physma basal to both of them, is significantly rejected by the ELW and 1sKH tests.
Concerning the position of Parmeliella borbonica, the topological tests reject neither its position as basal to Physma nor its position as basal to the Parmeliella mariana gr. (with Physma basal to both of them), although the difference in likelihood for the latter case is relatively high (10.29). We consider that the weak resolution of the test regarding the position of Parmeliella borbonica might be due to a large amount of missing data, as only 2 loci are available for this accession, reducing its impact on the likelihood of the trees. More material should therefore be studied before the taxonomic status of P. borbonica can be revised.
As commented above, we also tested the topology proposed by Wedin et al. [19] and Spribille & Muggia [10] where their accessions of Physma are resolved within the Pannaria gr. Such a topology is rejected on our dataset by the ELW and 1sKH tests.
Reconstruction of Ancestral States (Fig. 1, Table 3)
Results of the SIMMAP reconstructions on the Bayesian consensus tree are shown in pie charts on Figure 1. Results of the BayesTraits and Mesquite reconstructions, as well as the SIMMAP reconstruction on 20 trees are shown in table 3.
Even though the probability values can vary quite widely from a reconstruction method to the other, the same ancestral character state is recovered for most branches.
For the Fuscopannaria group, a pannarioid ancestor is strongly supported, incl. for the Fuscopannaria s. str. clade (all Fuscopannaria except for F. sampaiana). Within the Pannaria group, two deep nodes are recovered with a tripartite ancestor (the unresolved clade with all accessions of Pannaria, and the clade including Fuscoderma, Psoroma and Psorophorus), as well as the node supporting the whole group. The node supporting both groups (the Fuscopannaria and the Pannaria gr.) also has a tripartite thallus as the most likely ancestral type. For the clade comprising Physma, the Parmeliella mariana gr., P. borbonica and the tripartite R969, reconstructions favor a pannarioid ancestor without much support, except for the Bayes Factor, which slightly favors a tripartite ancestor. However, for the whole group, and thus including both accessions of Xanthopsoroma, reconstructions recover a tripartite ancestor with strong support. The node supporting the three groups (Fuscopannaria-, Pannaria-, and Physma-group) most likely had a tripartite thallus, as recovered by all four methods. The Parmeliella s. str. group most probably had a pannarioid ancestor, as did the family Pannariaceae as a whole.
Discussion
Nostoc from Collematoid and Pannarioid Thalli (Fig. 2) Thalli belonging to the collematoid or pannarioid types never share the same Nostoc phylotype. Phylotypes A, E and F only contain symbionts from collematoid thalli. Moreover phylotype F also contains symbionts associated with the lichen genus Leptogium, a typical representative of the collematoid type, these accessions being resolved in a strongly supported clade together with the Kroswia symbionts. Phylotype E includes the photobiont of several Physma accessions together with that of the cephalodia of the tripartite R969, and these cephalodia have the same homoiomerous structure as the thallus of Physma byrsaeum (Fig. 3a, c).
Phylotypes B, C, D and G only contain symbionts from pannarioid thalli. Phylotype B, which contains the photobiont of our accession of the terricolous Fuscopannaria praetermissa, is closely related to sequences from terricolous-muscicolous Nephroma arcticum photobionts, whereas phylotypes C and D contain Nostoc sequences from epiphytic Lobaria, Nephroma and Pseudocyphellaria, along with our accessions of epiphytic Pannariaceae with pannarioid thalli. This confirms that Nostoc from epiphytic heteromerous thalli cluster together, although they group in a polyphyletic assemblage of different phylotypes [17,89,90]. These data strongly suggest that many pannarioid thalli share Nostoc strains between them and with other representatives of the Peltigerales that also have Nostoc in a well-defined thin layer. Furthermore, collematoid thalli can share Nostoc with representatives of the Collemataceae that also have Nostoc chains throughout their thallus. These results strongly suggest that the thallus type (collematoid versus pannarioid), and the organization of the Nostoc cells inside it, depend on the phylotype of the Nostoc with which the mycobiont associates. Therefore, it seems that in the family Pannariaceae, the Nostoc associated with the mycobiont would have more impact on the morphology of the thallus formed than the phylogenetic origin of the mycobiont. The corollary might be true as well: the Nostoc selection by the mycobiont is more affected by the morphological and ecophysiological characteristics of the association than by the phylogenetic position of the mycobiont. Extracellular polysaccharide substances (EPS) produced by many bacterial lineages, incl. cyanobacteria, are involved in the physiological and ecological characteristics of those organisms [91]; in Nostoc, the biochemistry and structure of the dense sheath of glycan strongly participate in the desiccation tolerance of Nostoc commune [92]. Although no clear evidence is available, we suspect that variations in the glycan sheath characteristics amongst the various strains of Nostoc involved in the lichenization events within the Pannariaceae drive the differences between the collematoid and the pannarioid thallus types.
Figure 2. Phylogenetic relationships in the genus Nostoc, based on the best ML tree of the analysis on the 16S dataset. Values above branches represent ML bootstrap and Bayesian PP values, respectively. Names in bold are those for which DNA sequences were produced for this study. Color boxes represent phylotypes containing our sequences and defined by well-supported monophyletic groups. Colors in the taxa names represent the type of the thallus containing the Nostoc: in green tripartite thalli, in red pannarioid thalli and in blue collematoid thalli. Taxa names refer to the host of the Nostoc symbionts, when available.
Occurrence of Collematoid Thalli All across the Pannariaceae (Fig. 1) We found collematoid thalli in the four main groups of the family. Kroswia and Leciophysma appear as part of the Fuscopannaria group, Kroswia being nested within Fuscopannaria s. str., excluding F. sampaiana; Staurolemma and Ramalodium are part of the Pannaria group and Pannaria santessonii was described as a collematoid thallus species; Physma is in the Physma group, along several taxa with pannarioid thalli; and finally Leptogidium is part of the Parmeliella s. str. group. These results suggest that thalli switched from pannarioid type to collematoid and possibly vice versa several times along the evolutionary history of the family.
These results also suggest that the thallus type organized by the association between a mycobiont and a photobiont is primarily driven by the identity of the latter, i.e., the Nostoc phylotype with which it associates, rather than by the phylogenetic identity of the mycobiont. Indeed, unlike the original assumption that all collematoid thalli were part of the Collemataceae and all pannarioid thalli were part of the Pannariaceae, many collematoid thalli are actually members of the Pannariaceae, as already detected by Wedin et al. [19] and Otálora et al. [35]. Moreover, they do not form a monophyletic group inside the Pannariaceae, but are present all across the family, suggesting the absence of a phylogenetic pattern of the mycobiont related to the collematoid morphological and anatomical thallus type.
Evidence for Coincidence between Photobiont Switch and Change of Thallus Type
The most spectacular and straightforward example lies with the type species of Kroswia, which is nested inside Fuscopannaria s. str.: it exhibits a drastic change in the morphology of the thallus (see figure 3d-e) (all representatives of this genus so far have typical pannarioid thalli), and it associates with a Nostoc phylotype (phylotype F) that is totally different from the one associating with the closely related Fuscopannaria leucosticta (phylotype D). Moreover, phylotype F has also been found associated with the typically collematoid Leptogium lichenoides. The duo Kroswia/Fuscopannaria thus provides the best example of the influence of the Nostoc on the shape of the thallus. Actually, K. crystallifera is a species of Fuscopannaria with little genetic divergence from its related species such as F. leucosticta and F. praetermissa; this divergence however precludes any assumption that it could be considered as a photomorph of one of them. Its thallus is dramatically different because it switched to a different Nostoc, one that triggers the collematoid format for the thallus. Jørgensen [24], when studying the apothecia characters of the other species assigned to that genus (K. gemmascens), concluded that ''the characters of the hymenium and the chemistry of the thallus certainly place it close to Fuscopannaria (…)''. Quite interestingly, another photobiont switch can be postulated in that group, as the phylogenetic position of Moelleropsis nebulosa as sister to F. leucosticta has been retrieved by Ekman & Jørgensen [93] and more recently announced as confirmed [94]. This species exhibits granulose thalli with clusters of Nostoc interwoven with and covered by short-celled hyphae, very much different from the pannarioid thallus type, and thus most probably associated with a different Nostoc phylotype.
Occurrence of Tripartite Thalli All across the Pannariaceae (Fig. 1) We could detect tripartite thalli in all main groups within the family, except in the Fuscopannaria group. This absence might be caused by incomplete sampling as the only tripartite species known in Fuscopannaria (F. viridescens, associated with a green algae and producing cephalodia; [95]) as well as both species of Degeliella (forming tripartite thalli; [42]) could not be included in our dataset. Psoroma, Psorophorus and the tripartite representatives of Pannaria are resolved in the Pannaria group, Xanthopsoroma and the tripartite R969 belong to the Physma group, and the characteristic Joergensenia is included in the Parmeliella group. Until the seminal papers by Elvebakk & Galloway [86] and Passo et al. [96], all tripartite Pannariaceae were assigned to a single genus (Psoroma) assumed to form a monophyletic group. Within the three main groups of the Pannariaceae where they are resolved, the species with tripartite thalli are mixed up with species with bipartite thalli, mainly of pannarioid type but also with collematoid type. These results suggest that several times through the history of the family, mycobionts switched from a tripartite to a bipartite thallus or vice versa.
Evidence for Cephalodia Emancipation
Switches from a tripartite to a bipartite thallus may involve the cephalodia and their emancipation from their green algaecontaining thalli. Although cephalodia are usually associated with rather small, firmly attached, or even included, structures, there are many examples of tripartite Pannaria and Psoroma in which cephalodia are large and easily detached, or proliferating and developing large squamules that can be easily detached from their ''host'' thalli (examples in [17,81,97,98]). The cephalodia of the tripartite R969 start their development as modest blue gray squamules over the thallus, but eventually grow up to 0.7 cm across and develop a foliose habit with denticulate to deeply lobulate margin (see figure 3a).
More interestingly, the Nostoc photobiont in several accessions of Physma byrsaeum (annotated R1, R2, R2846 and R2847; phylotype E) is very closely related to the one found in the cephalodia of the tripartite R969. As the latter is basal to the clade containing all accessions of Physma, it can be postulated that several species belonging to this genus arose from cephalodia emancipation from their common ancestor. Indeed, the common ancestor of the whole Physma clade is recovered as producing a tripartite thallus. Furthermore, the disposition of the Nostoc cells inside the cephalodia of R969 is similar to the one inside Physma thalli (see figure 3a-c): they are enclosed in ellipsoid chambers delimited by medulla hyphae, these structures being responsible for the maculate upper surface of thalli (Physma) or cephalodia (R969).
Besides the tripartite R969, the clade included both accessions of the recently described genus Xanthopsoroma [87], which also develops tripartite thalli, with a green algae as the main photobiont and Nostoc included in cephalodia. The three species recognized within the Parmeliella mariana gr. may have arisen from cephalodia emancipation of their common tripartite ancestor or from a photobiont switch from a Physma ancestor. Quite interestingly, the pannarioid Parmeliella borbonica, nested within Physma, is associated with phylotype D of Nostoc, shared by most accessions of the Pannaria and Parmeliella s. str. groups (as well as other distantly related species of the Peltigerales), and not phylotypes C or G, chosen by all our accessions of its closely related species of the Parmeliella mariana gr. When excluding both accessions of Xanthopsoroma, the Physma gr. is a well-supported clade on a long branch and includes a tripartite species, species with pannarioid as well as collematoid thalli. The long branch may indicate that our sampling is too scarce and geographically too restricted. However, as both Physma and the Parmeliella mariana gr. have a pantropical distribution, we can confidently assume it would not collapse in future studies.
In figure 4, we illustrate the different possible scenarios to switch from tripartite to bipartite, and from collematoid to pannarioid thalli and vice versa, and emphasize the possibility of obtaining, with switches and time, the three types of thalli from the same tripartite ancestor.
As a matter of fact, earlier workers came close to the conclusion that cephalodia can emancipate and start their own evolutionary trajectory. Ekman & Jørgensen [93] pointed to the « homology » between the cephalodia of the green algae-containing Psoroma hypnorum and the thallus of the cyanobacterial autonomous species Santessoniella polychidioides; Passo et al. [96] retrieved the latter as sister to Psoroma aphthosum, a green algal species with coralloid-subfruticose cephalodia, very much akin to the thallus of Santessoniella polychidioides. We strongly suspect this case represents a further case of cephalodia emancipation, and subsequent divergence. This scenario implies that emancipated cephalodia can reproduce sexually, as most species of Physma and Santessoniella polychidioides produce apothecia and well-developed ascospores. There is indeed no reason to believe that thalli newly formed by cephalodia emancipation and containing only Nostoc as photobiont would not be able to produce apothecia, as only the mycobiont is involved in such formation. An interesting alternative would be that, when expelled out of the ascus, the ascospore produced by the mycobiont involved in the ancestral tripartite thallus would collect or recapture the Nostoc of the cephalodia.
Several representatives of the Lobariaceae produce photomorphs, mainly within the genera Lobaria and Sticta [14,99]. These duos, involving the same fungus lichenized either with a green alga or with a Nostoc, comprise thalli that may be morphologically rather similar or not (see Introduction), and that live attached (thus forming tripartite thalli) or not. Although molecular studies on these duos have mainly sought to demonstrate the strict identity of the fungus involved in each part, the separation or "living apart" of one from the other has long been recognized in several taxa, such as Lobaria amplissima and its cyanomorph Dendriscocaulon umhausense and Sticta canariensis and its cyanomorph S. dufourii [100]. There is a priori no reason to exclude that the duos can separate on "a permanent basis" and thus emancipate; each morph would eventually run its own evolutionary trajectory, as recently suggested for divergence patterns in Sticta photomorphs [101]. Such a scenario can be interpreted as a variant of cephalodia emancipation as advocated here for the evolution of thallus types within the Pannariaceae.
The alternative scenario for the complex phylogenies including bi- and tripartite thalli implies that a cyanolichen would capture a green alga from the environment (or from another lichen), adopt it as its main photobiont and confine its Nostoc into cephalodia. This hypothesis has been suggested by Miadlikowska & Lutzoni [32] for the sect. Peltidea in the genus Peltigera but so far has not been confirmed. Our data and reconstruction of ancestral states do not support it in the Pannariaceae, with a possible exception for Joergensenia cephalodina, but a better sampling is needed in that group to reconstruct the ancestral states.
Conclusions and Perspectives
Field observations of the lichen species belonging to the widespread and well-known order Peltigerales on the tiny and remote island of Reunion in the Indian Ocean instigated our studies on the relationships between photomorphs in the Lobariaceae [14] and the present study on the Pannariaceae. Indeed, we were intrigued by the occurrence, several times at the same locality or even on the same tree, of representatives of that family with collematoid and pannarioid thalli, and more locally of tripartite thalli.
Collematoid and pannarioid thalli are represented throughout the Pannariaceae. Each thallus type mostly appears mingled within complex topologies. Switches between those thallus types are thus frequent throughout the family. We could demonstrate that both collematoid genera in the Pannariaceae we examined from Reunion material (Kroswia and Physma) are involved in photobiont switches. We suspect that such a scenario could be detected elsewhere in the Pannariaceae and may act as an important evolutionary driver within the whole family, and perhaps elsewhere within the fungi lineages containing lichenized taxa.
The tripartite thallus type is shown to be the ancestral state in the clade we could study (the Physma gr.). Although a larger sampling is needed before such a result can be confirmed, we can postulate that cephalodia emancipation and subsequent evolutionary divergence is the most likely scenario within that clade. The data available support the same scenario in other clades of the Pannariaceae, and it can be suspected in the Lobariaceae where it is represented by the separation and subsequent divergence of photomorphs.
The photomorph pattern in the Lobariaceae demonstrates that a single mycobiont can recognize and recruit phylogenetically unrelated photobiont partners and that these associations result in morphologically differentiated thalli. We show here that the use of different lineages of Nostoc, or the association with only one partner instead of two, might lead to the same consequences. Recognition of compatible photobiont cells is carried out by specific lectins produced by the mycobiont, characterized by their ligand binding specificity [102]. Peltigera species have served as models in the studies of lectins and their involvement in the recognition of symbiotic partners [103][104][105][106]. A lectin detects compatible Nostoc cells at the initiation of cephalodium formation in P. aphthosa and this process is highly specific [107], as further demonstrated by experiments of inoculation of several Nostoc strains into the cephalodia of the same species [108]. The biochemical process sustaining the recognition of both partners in two lichen species associated with green algae has been elucidated by Legaz et al. [109] and extended to cyanolichens with collematoid thalli by Vivas et al. [110]. The genes coding for two lectins assumed to be involved in photobiont recognition have recently been identified [111][112]. Evaluation of the variation of those genes is of tremendous interest in the context of photobiont switching and cephalodia emancipation, as lectins have been shown to be under selection pressure by the symbionts in corals [113][114], and a coevolutionary process could thus be highlighted and demonstrated in lichenized fungi. A preliminary study with Peltigera membranacea material from Iceland could demonstrate a significant positive selection in LEC-2, but not due to variation in the photobiont partner [112].
Further research should thus assemble larger datasets of tripartite taxa within the Pannariaceae and reconstruct their evolutionary history, especially as to the fate of their cephalodia. Numerous methods for detecting genes under positive selection are available [115] and could be applied to the Pannariaceae. Genomic studies of lectins associated with photobiont recognition in tripartite taxa, as well as in those involved in obvious photobiont switches (pannarioid to collematoid and vice versa), could therefore bring to light a nice model of coevolution [116].
The taxonomical consequences of these results are published in a companion paper, dedicated to new taxa and new combinations.
Data Accessibility
All newly produced sequences are deposited in GenBank. All matrices used in the analyses are deposited in Treebase.
|
2016-05-31T19:58:12.500Z
|
2014-02-24T00:00:00.000
|
{
"year": 2014,
"sha1": "488189c6550c85e9bb0825a12ac2709f6cb6052f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0089876&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "488189c6550c85e9bb0825a12ac2709f6cb6052f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
125947562
|
pes2o/s2orc
|
v3-fos-license
|
Wave activity in front of high-β Earth bow shocks
Abstract. Earth's bow shock in a high-β (ratio of thermal to magnetic pressure) solar wind environment is a relatively rare phenomenon. However, such a plasma object may be of interest for astrophysics. We survey statistics of high-β (β > 10) shock observations by near-Earth spacecraft since 1995. Typical solar wind parameters related with high β are: low speed, high density and very low IMF of 1–2 nT. These conditions are usually quite transient and need to be verified immediately upstream of the observed shock crossings. About a hundred crossings were initially identified, mostly with quasi-perpendicular geometry and high Mach numbers. In this report 22 Cluster project crossings are studied with spacecraft separation within 30–200 km. The observed shock front structure is different from that of quasi-perpendicular supercritical shocks with β ∼ 1. There is no well-defined ramp. Dominating magnetic waves have frequencies of 0.1–0.5 Hz (in some events 1–2 Hz). Polarization has no stable phase and is closer to linear. In some cases it is possible to determine a wavelength at 0.1–0.5 Hz of the order of 200–900 km.
Electromagnetic fields and waves in space shocks are of primary importance, since in the absence of collisions, kinetic mechanisms of field-particle interactions are responsible for dissipation and particle acceleration (Sagdeev, 1966; Krasnoselskikh et al., 2013). Of particular interest are relatively low frequency waves, which visually have maximal amplitudes, since they actually form the shock front structure, dissipating ions. A number of turbulence theories were also suggested for high-β shocks (Kennel and Sagdeev, 1967a, b; Coroniti, 1970). Due to the presence of a magnetic field, a wide variety of shock types exists with quite differing structure (Kennel et al., 1985).
For example, in a supercritical quasi-perpendicular shock, the oblique whistler waves near the lower-hybrid frequency (∼5 Hz) form the ramp (a sharp jump of magnetic field) via a non-linear steepening and decay cycle (Krasnoselskikh et al., 2002, and references therein). In several studies the wavelength of these waves and the scale of the shock ramp were determined to be around tens of km, and the oscillations were in fact identified as whistlers (Petrukovich et al., 1998; Walker et al., 2004; Hobara et al., 2010; Schwartz et al., 2011; Dimmock et al., 2013; Krasnoselskikh et al., 2013). Another issue of interest is electron heating, which requires sufficiently small-scale variations for non-adiabatic (transverse) acceleration and the following isotropisation (Balikhin et al., 1993; Vasko et al., 2018).
To approach the study of magnetic structures in high-β shocks with Earth's bow shock observations, we scanned the whole set of available spacecraft data. We start with the general occurrence statistics of high-β solar wind and then look into some cases of multipoint observations allowing us to estimate spatial wave characteristics. We use the β > 10 criterion, which is justified in the course of this presentation.
Solar wind and IMF data were taken from the OMNI-2 data set; the 1-hour variant was used for the initial survey, and the 1-min variant for the final categorization of crossings. β values are precalculated in OMNI-2, assuming constant electron temperature, He++ fraction and He++ temperature. To assess possible solar wind variability we also use ACE and Wind final Earth-shifted data from the OMNI archive.
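For reference, the conversion from the hourly solar wind parameters to β can be written down directly. The short Python sketch below assumes, for illustration only, a fixed electron temperature and He++ fraction in the spirit of the constant-value assumptions used in OMNI-2; the specific numbers are placeholders, not the values actually adopted by the OMNI processing.

import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability, H/m
KB = 1.380649e-23           # Boltzmann constant, J/K

def plasma_beta(n_p_cc, t_p_k, b_nt, t_e_k=1.4e5, alpha_frac=0.05):
    """Plasma beta from proton density [cm^-3], proton temperature [K]
    and magnetic field magnitude [nT]. The electron temperature and the
    He++ (alpha) fraction are illustrative assumed constants."""
    n_p = n_p_cc * 1e6                      # m^-3
    n_e = n_p * (1.0 + 2.0 * alpha_frac)    # quasi-neutrality with alphas
    # thermal pressure: protons (and, crudely, alphas) at T_p plus electrons at T_e
    p_th = n_p * KB * t_p_k * (1.0 + alpha_frac) + n_e * KB * t_e_k
    p_mag = (b_nt * 1e-9) ** 2 / (2.0 * MU0)
    return p_th / p_mag

# example: slow, dense, cold solar wind with a 1.5 nT field
print(plasma_beta(n_p_cc=15.0, t_p_k=4e4, b_nt=1.5))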
The period of analysis was 1995–2017, which has almost full coverage of interplanetary measurements and many spacecraft crossing the bow shock. We used Interball (1995–2000), Geotail (since 1995), Cluster (since 2000) and THEMIS (since 2007) orbital and spin-averaged magnetic field data from the CDAWeb archive. For the detailed analysis we used full-resolution Cluster FGM magnetic field (Balogh et al., 2001) and HIA/CODIF ion data (Rème et al., 2001) from the Cluster Final Archive. All vectors are in the GSE frame of reference.
Solar wind statistics and details of search procedure
We used 1-hour OMNI data for the period 1995–2017 to determine the occurrence of high-β solar wind. The average solar wind β is somewhat larger than unity. High-β conditions are unevenly distributed across solar cycles (Fig. 1), being more frequent at the solar minima of 1996–1997 and 2007–2009. For the threshold β > 10 there are 50–500 hours per year, while for β > 20 the number is about 3–5 times smaller.
Figure 2 shows distributions of magnetic field magnitude, solar wind speed, density and total static pressure for the full dataset of one-hour values during 1995–2017 and for the subset with β > 10. The high β corresponds to slow, cold and dense solar wind with low magnetic field (ion temperature not shown here). However, the total static (magnetic plus thermal) pressure distribution is similar (Fig. 2b). Thus the high-β events are mostly depressions of magnetic field, compensated (at least statistically) by an increase of plasma density. The only notable difference of the distributions for β > 20 (Fig. 2a, red line) is the more frequent presence of magnetic field ∼1 nT, with an average of 1.6 nT, while for β > 10 the average is ∼2.2 nT.
According to Fig. 3 more than 50% of events with β > 10 have one-hour duration (one point in the analyzed OMNI variant).
A sample event is shown in Fig. 4. There is a one-hour-long decrease of magnetic field and an increase of density, corresponding to a β rise to about 20. At an occasional depletion of the magnetic field below 2 nT, β jumps to about 40–80 for a few minutes.
Since the formation of high-β conditions mostly depends on subtle variations of the magnetic field magnitude around 1–2 nT (note that β has a square dependence on magnetic field), it should be quite sensitive to spatial inhomogeneity of the solar wind and IMF and, in particular, to differences between the conditions detected at L1 (in the OMNI dataset) and those actually hitting Earth. Fig. 5 shows a comparison of β calculated from Wind and ACE 1-hour data (only for times when Wind data were used in OMNI). The scatter is quite large. Thus actual β conditions need to be rechecked with local measurements. This issue is elaborated further in the Discussion.
A semi-automated algorithm was used to assemble the initial statistics of shock candidates. For each 1-hour point in OMNI with β > 10, we checked for a possible spacecraft location within 5 R_E of the model bow shock (Farris et al., 1991). In case any spacecraft was in place, the plots of solar wind, IMF, local magnetic field and plasma parameters were analyzed visually in a 5-hour window around the selected hour. Broad temporal and spatial spans were used to ensure that all possible crossings of a moving bow shock were captured for future analysis. Only events with clear shock traversals (jumps in magnetic field and a step-like front) were retained; this selection may miss some crossings, but it was considered acceptable for this particular study. Most of these initially selected intervals actually contained no shock crossings.
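The candidate-selection step can be summarized as a simple filter over the hourly OMNI records, sketched below in Python. The paraboloid used here is a generic stand-in for the Farris et al. (1991) bow-shock model (which also scales with upstream conditions), and sc_positions is a hypothetical accessor for the spacecraft ephemerides; both are illustrative, not the actual implementation.

def farris_like_shock_distance(pos_gse_re, standoff_re=13.5, flaring=23.3):
    """Signed x-distance (R_E) from a spacecraft position (GSE, R_E) to a
    simple paraboloid bow shock x = standoff - (y^2 + z^2) / flaring.
    The standoff and flaring values are illustrative placeholders."""
    x, y, z = pos_gse_re
    x_shock = standoff_re - (y**2 + z**2) / flaring
    return x - x_shock

def candidate_hours(omni_hours, sc_positions, beta_thr=10.0, dist_thr_re=5.0):
    """Return (time, spacecraft) pairs for which the hourly beta exceeds the
    threshold and at least one spacecraft sits within dist_thr_re of the
    model shock. `omni_hours` is a list of dicts with 'time' and 'beta'."""
    selected = []
    for rec in omni_hours:
        if rec["beta"] <= beta_thr:
            continue
        for sc, pos in sc_positions(rec["time"]).items():
            if abs(farris_like_shock_distance(pos)) < dist_thr_re:
                selected.append((rec["time"], sc))
                break
    return selected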
The actual β at the particular shock crossings was checked with 1-min OMNI data. It was often below 10, either because the registered shocks were just outside the initially selected hours, or because β varied on a time scale smaller than an hour. Since a change of β is usually related to a solar wind density change, it is associated also with a dynamic pressure change. The latter drives a large-scale shock motion, and the probability of shock registration by a spacecraft increases. In fact, many shock crossings were registered at a boundary of β change, and such events were also discarded, since it was impossible to attribute them to stable plasma conditions.
Finally the list contained about 100 individual crossings with an average β of about 20 (1-min value at the shock front crossing).
The choice of the initial threshold β > 10 (for 1-hour points) was finally justified at this stage, since a variant with initial β > 20 resulted in an almost empty final list. However, all these events still need a more detailed confirmation, in particular of the local high β, a stable enough crossing velocity, plasma data availability, etc.
For the specific analysis in this investigation we selected 22 Cluster project crossings with relatively small spacecraft separation. One event is from 2003, with a Cluster tetrahedron size of about 300 km, while the others are from the later years 2008–2016, when the separation between C3 and C4 was 30–200 km (Table S1 in Supplement 1). Some of these examples are presented below.
3 Shock examples
Event 1
The first example was registered by the Cluster C3 and C4 spacecraft on 18 December 2011 (1436–1440 UT) with a spacecraft separation of 36 km. The solar wind speed was small, ∼260 km/s, and the IMF magnitude was 2.5 nT (all characteristics are in Table S1). The model shock normal angle with respect to the IMF was 46° (using the Farris et al. (1991) model). The spacecraft orbit is almost parallel to the model shock front (Fig. 6), but the shock velocity is definitely much higher than the spacecraft velocity. The Alfvén Mach number is ≈18, the magnetosonic Mach number is ≈5, and the current β (according to 1-min OMNI) is 10.8. Thus this is a quasi-perpendicular supercritical bow shock, whose structure for standard β is well studied (Scudder et al., 1986; Krasnoselskikh et al., 2013). The solar wind magnetic field measured locally by Cluster is the same as in the OMNI data (compare the two lines in Fig. 7d), therefore the OMNI β value is confirmed.
The final value of the downstream magnetic field is around 10 nT, and the compression ratio is thus close to the maximum possible value of 4, in accordance with the high Mach number. However, the observed front structure is very different from that expected for a supercritical shock. First of all, there is no well-defined magnetic field jump (magnetic ramp). Thus it proved impossible to determine reliably the shock speed and the spatial scale of the shock transition. The magnetic field increase is wavy rather than step-like, and the magnetic magnitude often drops down to 5 nT. Second, there is a zone upstream of the largest magnetic bursts with slowly increasing magnetic field and density, almost up to the expected downstream values (14:37:45–14:38:20).
We highlight the interval 14:37:00–14:38:30 (Fig. 8) as an example of wave activity. Frequency spectra are shown in Fig. 9. The magnetic profile is dominated by a wave with a frequency around 0.3 Hz and an amplitude up to 20 nT, more pronounced in the B_y and B_x components. The interval 14:37:27–14:37:47 of the most intense oscillation is taken to estimate the wavelength. The main oscillation (0.3 Hz) is very similar at the two spacecraft, and visually the time shift between C3 and C4 is about a fraction of a second.
Parameters of the waves, filtered in the frequency range 0.1–0.77 Hz, are given in Table 1. The maximum variance vector is almost along the local magnetic field (the B_y component dominates), and the minimum variance vector is along Z. The ratios of eigenvalues are λ_min/λ_int = 0.34 and λ_int/λ_max = 0.58, and one may assume elliptic polarisation with a relatively well-defined propagation direction. The time shift between the magnetic measurements along the maximum variance component is 0.13 s (determined with correlation analysis), while the spacecraft separation along the minimum variance direction is 10 km. The resulting wavelength estimate is 250 km.
However, the hodograph of the magnetic field rotation (Fig. 10) shows that the polarization actually might be linear with a variable direction. In such a case the propagation direction is undefined. A maximum possible wavelength of ∼900 km can be obtained by taking the maximum possible separation of 36 km. The estimate of the Doppler shift can be obtained by taking either the full local proton velocity of 146 km/s, or its projection onto the minimum eigenvector, 41 km/s, and is 0.04–0.58 Hz, depending also on the variant of the wavelength estimate.
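The two-spacecraft wavelength and Doppler-shift estimates quoted above follow from elementary relations, reproduced here as a short Python sketch together with a minimum variance routine; the numerical example uses the values quoted for Event 1 and is meant only to illustrate the arithmetic, not the full processing chain.

import numpy as np

def minimum_variance(b_xyz):
    """Minimum variance analysis of an (N, 3) magnetic waveform: eigenvalues
    (ascending) and eigenvectors (columns) of the magnetic variance matrix."""
    b = b_xyz - b_xyz.mean(axis=0)
    cov = b.T @ b / len(b)
    return np.linalg.eigh(cov)

def two_point_wavelength(time_shift_s, separation_along_k_km, freq_hz):
    """Wavelength from the inter-spacecraft phase delay: the phase speed is
    separation / time shift, and the wavelength is phase speed / frequency."""
    return (separation_along_k_km / time_shift_s) / freq_hz

def doppler_shift(flow_speed_km_s, wavelength_km):
    """Doppler shift (Hz) for a flow speed projected onto the wave vector."""
    return flow_speed_km_s / wavelength_km

# numbers quoted for Event 1 (18 December 2011)
lam = two_point_wavelength(time_shift_s=0.13, separation_along_k_km=10.0, freq_hz=0.3)
print(round(lam))                                              # ~256 km, consistent with 250 km
print(doppler_shift(41.0, 900.0), doppler_shift(146.0, lam))   # ~0.046 Hz and ~0.57 Hz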
Finally, we note the oscillations with a higher frequency of about 1 Hz and a smaller amplitude of a couple of nT, which are best observable in the B_z component (Fig. 8c and Fig. 9). These oscillations are quite different at the two spacecraft, and the wavelength analysis proved to be impossible. The eigenvalue ratios (after filtering the frequency range 0.7–10 Hz) are λ_min/λ_int = 0.68 and λ_int/λ_max = 0.49, thus a reliable determination of any wave proper direction is definitely not possible.
Other events
In this subsection we briefly present two shock examples with substantially different wave activity. A shock from January 4th, 2008 (1600–1604 UT) was registered with a Cluster separation of about 40 km and very similar solar wind conditions (Table S1, Fig. S1 in the Supplement). The detailed wave activity at the front is presented in Fig. 11. The general frequency structure of waves in this event is similar to that in Event 1. There is a dominating oscillation with a frequency of about 0.4–0.5 Hz, as well as lower amplitude waves with frequencies above 1 Hz. The two spacecraft recorded different wave amplitudes during the first 20 s (16:01:15–16:01:35 UT) downstream of the front, despite the relatively small separation. This difference in amplitudes was typical for all shocks registered during this day (8 crossings within 2 hours in Table S1).
One more crossing is from January 3rd, 2008 (14:30–14:35 UT) with a Cluster separation of ∼100 km (Table S1, Fig. S2 in the Supplement). OMNI data showed very low IMF (1.1 nT) and β = 39. The local upstream magnetic field at C4 was this low only episodically, at 3 min before the front (Fig. S2), so the very high β cannot be fully confirmed. The detailed wave activity at the front is shown in Fig. 12. Only relatively high-frequency oscillations of about 1–2 Hz are present, strongly different at the two spacecraft, therefore no phase analysis is possible. There are no wave packets with a stable phase. For example, at 14:34:10–14:34:14 UT the X and Z components are in anticorrelation for C3 and C4, while immediately nearby, at 14:34:08–14:34:10 UT, these components are in phase. The amplitude of the oscillations is comparable in the components and in the magnitude of the magnetic field. Another event from our statistics with similar higher frequency variations is that of 31 December 2003 (Table S1).
Solar wind input
High-β solar wind is relatively rare at the Earth orbit. In our study we accepted a somewhat ad hoc threshold of high β equal to 10. Such interplanetary conditions tend to occur during solar minima, being created by slow, cold, dense solar wind with low IMF (1–2 nT). It is not always easy to confirm that an observed shock crossing actually occurred in a high-β solar wind interval identified in OMNI. The first set of problems is related to the association of particular shock front crossings with stable high-β intervals. It is reasonable to consider durations of high-β intervals of at least the order of tens of minutes, and to reject candidate events at the moments of β changes, since it is more convincing to study bow shock events under stable upstream conditions.
These problems are relatively straightforward to identify and solve.
A more substantial problem is due to the finite spatial scales of high-β solar wind. We measure the solar wind at the L1 halo orbit, while the shock crossings are observed near the magnetosphere. The most questionable aspect is the spatial persistence of the relatively small changes of IMF from 2 to 1 nT required for the creation of very high-β intervals.
Though the analysis of the scale of high-β areas in the solar wind was not performed with multisatellite data, according to the available variance of the magnetic field, magnetic features with scale widths of 20 R_E perpendicular to the IMF may occur (Crooker et al., 1982). Comparison of L1 Wind and near-Earth Interball data for 1996–1999 has shown (Petrukovich et al., 2001) that the large-scale IMF structures associated with geomagnetic storms (with the threshold of IMF B_z GSM below -10 nT during 3 hours) are practically the same at L1 and in the near-Earth orbit. However, about 20–80% of the smaller everyday IMF variations (depending on their amplitude), causing substorms (several nT in magnitude on a one-hour scale), differ noticeably between L1 and the near-Earth orbit. Assumptions on helium content and electron temperature, used in the β calculations, may also result in some errors. For the purpose of our study (properties of magnetic waves in the shock front) it is also important to keep in mind that the local β is more relevant than the far upstream one. Immediately upstream of the shock front the magnetic field increases due to the presence of rotating ions in the shock foot. In ordinary quasi-perpendicular shocks (Scudder et al., 1986) this increase is rather small. When the IMF is very low (in a high-β case), the foot increase may change (decrease) the local beta quite substantially.
In the zone of the strongest magnetic variations behind the shock front, β should be similar to that upstream due to the Rankine–Hugoniot jump conditions.
Shock properties
The observed shock crossings with high-amplitude magnetic variations are generally consistent with the earlier reports for high-β shocks. All selected shocks are quasi-perpendicular and have high Alfvénic and magnetosonic Mach numbers. The observed structure of high-β shocks is quite different from that of low-β supercritical quasi-perpendicular events (e.g., there is no well-defined ramp typical of a supercritical quasi-perpendicular shock), since the front actually is built of large-amplitude spikes. A study of ion kinetics, which may help in the identification of the detailed shock front structure, is left for future publications.
The high-β shock front is formed by strong magnetic variations with amplitudes of ∼20 nT at frequencies of 0.1–0.5 Hz. The analysis shows an irregular wave structure with no stable phase. The polarization is close to linear, so that there are substantial variations in the magnitude of the magnetic field as well. In several shock examples it was possible to determine the spatial scale of these variations, around 200–900 km. The Doppler shift determination is not reliable enough, since the wave vector direction is not known for linear polarization. On this background, much lower amplitude (1–2 nT) variations are often present at 1–2 Hz frequency.
In two events the wave activity is dominated by 1–2 Hz variations, also with irregular phase. In some events strong differences between the two Cluster spacecraft suggested spatial scales of the order of tens of km.
Properties of the magnetic variations suggest their compressional nature (linear polarization with strong changes in the total magnetic field) and strong spatial localization due to the absence of stable wave packets. Thus the variations are strongly different from those in low-β events, where clear whistler wave packets with elliptic polarization dominate. The observed polarization is also not consistent with the earlier suggested Alfvén mode (Kennel and Sagdeev, 1967a). Pokhotelov and Balikhin (2012) suggested a Weibel mode in the finite magnetic field, developing a mix of two opposite circular polarizations. Reliable wavelength determination would be a key to final wave-mode identification.
Conclusions
High-β (β > 10) shocks are a relatively rare and largely unexplored class of Earth bow shock. Formation of high-β interplanetary plasmas is mostly related to dense, slow solar wind and very low magnetic field down to 1–2 nT. The higher the β (in OMNI), the more difficult it is to confirm it locally.
Our shock analysis was limited to quasi-perpendicular cases and shows substantial differences from similar supercritical shocks with lower β. The dominating magnetic waves show a more irregular, linear polarization. An extended analysis with Magnetospheric Multiscale mission data, including electrons and ions, is necessary to conclude the wave mode analysis.
Author contributions. OMC and PIS performed the data processing and analysis. AAP is responsible for data analysis and interpretation.
AAP prepared the manuscript with contributions from all co-authors.
Competing interests. The authors declare that they have no conflict of interest.
Figure 1. Number of hours with high β with respect to calendar year.
Figure 4. Example of a high-β interval. From top to bottom: magnetic field magnitude, solar wind speed, proton density, proton temperature, plasma β. The 1-min OMNI data set is used.
Figure 5. Comparison of Wind and ACE β using 1-hour data. See text for details. The red line is the bisector.
Fig. 7 contains an overview of the magnetic field and plasma parameters. The shock front is somewhat arbitrarily placed at 14:37:45 UT (marked by the vertical line) at the first extended peak of magnetic field. The shock foot, the zone with reflected ions (Fig. 7f), a gradual increase of density and magnetic field, and a maximum of the parallel ion temperature, is at 14:37:45–14:40:40 UT. The detailed analysis of the ion dynamics will be performed elsewhere.
Figure 7. Overview of C4 magnetic and plasma measurements for the event of 18 December 2011. (a) proton velocity, (b) proton density and OMNI solar wind density, (c) proton parallel and perpendicular temperature, (d) magnetic field magnitude and OMNI IMF magnitude, (e, f) proton spectrograms for the sunward and dawnward looking sectors.
Figure 8. Full resolution magnetic waveform for the shock of 18 December 2011. Panels (a-d) show the components and the total value of the magnetic field.
Figure 11. Full resolution magnetic waveform for the shock of 04 January 2008. Panels (a-d) show the components and the total value of the magnetic field.
Confirmation of very high β values in OMNI is not automatic. It is not always possible to check the solar wind β value immediately before a shock crossing with a local spacecraft. A spacecraft needs to probe the pristine solar wind and then rapidly cross the shock, or there should be an additional near-Earth solar wind monitor. The magnetic field can be reliably measured with a magnetometer (still assuming an offset uncertainty of about 0.1 nT). The accuracy of solar wind density and ion temperature values is more problematic, since at L1 they are measured by specialized, thoroughly calibrated instruments, while with a magnetospheric spacecraft the calibration could be rougher for the specific case of solar wind flow. Additional (relative to OMNI-based ones) very-high-β intervals may actually form near the bow shock due to the local variability of the solar wind and IMF.
Figure 12. Full resolution magnetic waveform for the shock of 03 January 2008. Panels (a-d) show the components and the total value of the magnetic field.
|
2018-12-11T18:03:02.879Z
|
2018-09-28T00:00:00.000
|
{
"year": 2018,
"sha1": "d876c2852748a523d87bfd89102849b038eecbdf",
"oa_license": "CCBY",
"oa_url": "https://www.ann-geophys-discuss.net/angeo-2018-110/angeo-2018-110.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "d876c2852748a523d87bfd89102849b038eecbdf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
253918974
|
pes2o/s2orc
|
v3-fos-license
|
Correction of Sea Surface Wind Speed Based on SAR Rainfall Grade Classification Using Convolutional Neural Network
The technology for retrieving the sea surface wind field from spaceborne synthetic aperture radar (SAR) is increasingly mature. However, the retrieval of the sea surface wind field under precipitation still faces challenges; in particular, the strong precipitation associated with extreme weather such as tropical cyclones can cause wind speed retrieval errors exceeding 10 m/s. Semantic segmentation and weak supervision methods have been used for SAR rainfall recognition, but rainfall segmentation is not accurate enough to support the correction of wind field retrieval. In this article, we propose to use deep learning to classify the rainfall grades in SAR images and to combine this with a rainfall correction model to improve the retrieval accuracy of sea surface wind speed. To overcome the challenge of limited training samples, a fine-tuning transfer learning method is adopted. Preliminary results demonstrate the effectiveness of this deep learning methodology. The model classifies rain and no-rain images with an accuracy of 96.2%, and classifies rainfall intensity grades with an accuracy of 86.2%. The rainfall correction model, driven by the SAR rainfall grade identified by the convolutional neural network, reduces the root-mean-square error of the retrieved wind speed from 3.83 to 1.76 m/s. The combination of SAR rainfall grade recognition and the rainfall correction method improves the retrieval accuracy of SAR wind speed, which can further promote the operational application of SAR wind fields.
For example, the National Oceanic and Atmospheric Administration (NOAA) used Sentinel-1 to launch the Alaska coastal SAR program [1]. The Canadian Space Agency used Radarsat-2 to launch the Canadian National SAR wind program [2]. However, it is still a challenge to overcome the effect of rainfall on the retrieval of the sea surface wind field, especially for extreme weather such as tropical cyclones accompanied by heavy rainfall, which has a great effect on human life [3]. According to electromagnetic theory, the atmospheric attenuation and volume scattering caused by rainfall are more obvious in the Ku band and can be almost ignored in the C-band. However, it is more difficult to determine the scattering changes caused by the interaction of rainfall and the sea surface in the C-band [4]. Theoretical simulation studies show that when the rainfall intensity does not exceed 15 mm/h, the effect of rainfall on the normalized radar cross-section (NRCS) of vertical transmit/vertical receive (VV) polarization is mainly attenuation, which leads to an underestimation of wind speed. However, when the rainfall intensity exceeds 20 mm/h, the contribution of surface backscattering to the effective NRCS is far less than the volume scattering of rainfall. Therefore, it is almost impossible to use the NRCS to retrieve the surface wind vector under heavy rainfall conditions if the rainfall rate is not accurately known [5]. The attenuation of the scatterometer signal and the volume backscattering by rainfall, as well as the disturbance of raindrops on the sea surface, indicate that rainfall is an important factor in SAR wind speed retrieval [6]. In previous work, scatterometer data showed that when the wind speed exceeds 30 m/s and the rainfall intensity exceeds 15 mm/h, the error of wind speed retrieval may exceed 10 m/s [7]. In addition, Reppucci et al. estimated the impact of heavy rain on C-band ocean backscattering based on an existing radiative transfer model. The results show that when the rainfall is 30 mm/h, the NRCS attenuation may exceed -1 dB. When the rainfall intensity exceeds 50 mm/h, the attenuation of the NRCS will reach -2 dB [8].
Melsheimer et al. [9], [10] analyzed images of concurrent data from the European Remote Sensing Satellite and weather stations, indicating that the C-band radar features of rain cells with rainfall rates below 50 mm/h comprise two main parts: volume scattering and attenuation of the SAR signals caused by raindrops and snow particles in the atmosphere, and an increase or decrease in the sea-surface roughness owing to the comprehensive effects of splashing raindrops. Retrieval of the sea-surface wind field is based on the empirical relationship between the sea-surface roughness and sea-surface wind speed, and changes in the sea-surface roughness are an important factor that affects the NRCS measured by SAR [11]. The empirical relationship between the NRCS and sea-surface wind speed was used to establish geophysical model functions (GMFs) between the VV polarization NRCS and sea-surface wind speed at low wind speeds [12] and between the vertical transmit/horizontal receive (VH) polarization NRCS and sea-surface wind speed at high wind speeds [13]. In the absence of rainfall, these GMFs can accurately retrieve the sea-surface wind field. Rainfall, however, affects the retrieval accuracy of GMFs. Especially in the complex environment of hurricane winds, rainfall can cause the wind speed error to reach 100% [14]. However, almost all existing GMFs do not include rainfall parameters or fully consider the effect of rainfall.
Correcting the effect of rainfall on SAR signals is very important for wind field retrieval. However, there is no instrument available on the SAR satellite platform to monitor rainfall synchronously. The high-resolution rainfall measurement provided by ground weather radar is limited to coastal areas, and its range is only a few hundred kilometers. Its detection height changes with increasing distance from the station, so it cannot provide an accurate near-ground rainfall rate. In places far from the coast, rainfall measurement mainly depends on satellite remote sensing [15]. The microwave radiometer SSMI/S can provide continuous observation covering almost half the Earth, but its spatial resolution is low (8–14 km) [16].
Zhou et al. [17] used ASCAT scatterometer data and Tropical Rainfall Measuring Mission (TRMM) rainfall data to establish a C-band active microwave radiative transfer model under rainfall conditions, effectively improving the scatterometer wind field retrieval accuracy. The rainfall at the SAR observation time is then calculated using geostationary IR images and non-simultaneous passive microwave rainfall observations. Finally, the rainfall correction model is used to correct the SAR image; the corrected wind field is in good agreement with the NOAA Hurricane Research Division reanalysis data. Yu et al. [18] used Radarsat-2 data and quasi-synchronous TRMM PR rainfall data to establish a fitting model of the rain-induced damping of the sea surface backscattering coefficient as a function of rainfall intensity, incidence angle and other factors, which effectively improved the retrieval accuracy of the SAR wind field under rainfall conditions.
Although the spaceborne SAR platform does not carry the precipitation measurement load at the same time, SAR can also capture rainfall and many other atmospheric and marine phenomena with its special imaging mode. In 1978, after the first ocean satellite SEASAT-A was launched, its L-band SAR captured rainfall [19]. In 1994, the space shuttle Endeavour carried the Spaceborne Imaging Radar-C/X-Band SAR (SIR-C/X-SAR) that was also used to capture rain cells. In SAR images, rain cells comprise bright and dark patches of irregular shapes, and the patch structure is closely related to the working frequency band and polarization mode of the radar [20]. Further developments of SAR have allowed the capture of radar features of rainfall in the C-band and X-bands. Analysis of SAR images has shown that heavy rainfall at sea increases the NRCS of the C-band and X-band and reduces the NRCS of the L-band [9].
The rainfall rate is an important parameter for analyzing the effect of rainfall on the SAR NRCS. However, the sea-surface wind field under rainfall conditions involves many physical processes of sea-air interaction. At present, no systematic theory has been established, and retrieving rainfall based on physical methods is still a challenge [18], [22]. Wang et al. [23] used a convolutional neural network (CNN) to classify 10 geophysical phenomena, including rainfall, from SAR images. Colin et al. [24] realized semantic segmentation of ten oceanic processes in the context of a large quantity of image-level ground truths. Zhao et al. [25] used Sentinel-1 data and filters to realize automatic rainfall detection. Colin et al. [26] used Next Generation Weather Radar data and a CNN to segment the rainfall in SAR images with thresholds of 1, 3, and 10 mm/h. Incorporating ground reference data provides a reliable method for studying precipitation using SAR images. Lin et al. [27] estimated the rainfall rate according to the attenuation characteristics of the NRCS in SAR images, and the results are consistent with weather radar rainfall measurements.
Since SAR cannot directly retrieve the rainfall rate, the current method of correcting the impact of rainfall on SAR signals is to use rainfall data from other payloads, or to use non-rainfall areas in SAR images to comparatively analyze the impact of rainfall on SAR signals. In fact, however, SAR and other payloads can be matched in relatively few areas, while the coverage of meso- and small-scale phenomena with intensities reaching tropical cyclone level is very large, so the existing methods have limitations. The Sentinel-1 satellites of the European Space Agency (ESA) provide a large amount of reliable data for studying the interaction between the wind field and rainfall. The Global Precipitation Mission (GPM) dual-frequency precipitation radar (DPR) can provide the near-surface rainfall rate with a 5 km resolution as well as a vertical profile of the rainfall rate, and the minimum measurable rainfall rate is accurate to 0.2 mm/h, which is useful for rainfall research using SAR. In this article, we propose to use deep learning to identify rainfall levels in SAR images, and combine existing rainfall correction models to correct the wind speed retrieval of SAR images. The rest of this article is organized as follows. Section II introduces the data and products of the Sentinel-1 and GPM satellites as well as the data preprocessing. Section III introduces the CNN model and the rainfall correction model. Section IV presents the results of the experiment and validation. Finally, Section V concludes this article.
A. Sentinel-1 Data and Preprocessing
Sentinel-1 is part of ESA's Copernicus program, and it is a dual constellation system comprising polar orbiting satellites A and B, which were launched on April 3, 2014 and April 25, 2016, respectively. Sentinel-1 operates in a near-polar sun-synchronous orbit at an altitude of 679 km. Each of the two satellites carries a C-band SAR with a working frequency of 5.405 GHz. Each satellite has a revisit period of 12 days. Sentinel-1 was launched to provide observation data for research and applications in land, ocean, atmosphere, maritime search and rescue, and climate change [28]. Sentinel-1's SAR has four imaging modes: strip map, interferometric wide swath (IW), extra wide swath, and wave (WV). The core products of Sentinel-1 are provided at levels 0, 1, and 2. In this article, we used the VV polarization data of the level-1 ground range detected high-resolution product in IW mode. The resolution in range and azimuth is 20 m × 22 m, with a swath width of 250 km. This mode acquires three sub-swath images by progressive terrain scanning that are then synthesized by corresponding algorithms [29]. The data are multilooked and projected with the World Geodetic System 84 projection onto an ellipsoid model of the Earth. The pixel information indicates the detected amplitude, and the phase information is lost. The resolution of the generated product is almost the same in both directions, and speckle is reduced at the cost of reducing the spatial resolution.
The Sentinel Application Platform (SNAP) is software provided by ESA for preprocessing Sentinel-1 data [30]. SNAP processes the data by creating a preprocessing flow chart and setting the processing steps and parameters required for the data; directly entering the data then yields the processed output. The flow chart of Sentinel-1 IW mode processing is shown in Fig. 1 and includes 8 steps. The image drawn from the SAR VV NRCS is single channel, but the input of the Inception v3 model requires three-channel images. We therefore fill the other two channels of the single-channel image with the single-channel data.
B. Global Precipitation Mission Data and Products
The GPM satellite program is the successor to the TRMM. The core observation platform of GPM was launched on February 27, 2014, and it carries the GPM microwave imager and the DPR. The DPR includes a Ku precipitation radar with a working frequency of 13.6 GHz and a Ka precipitation radar with a working frequency of 35.5 GHz. Its latitudinal coverage is 65°N–65°S, and it can detect weak precipitation and snowfall at a minimum rate of 0.2 mm/h [31]. GPM data products are divided into four levels, which are distributed by the NASA Goddard Space Flight Center, and the level 1–3 products are public. In this article, we used data obtained in the normal scan mode of the version 6 DPR products. In this product, 49 footprints are scanned across the satellite track at a time with a resolution of 5 km, a swath width of 245 km, and a vertical resolution of 250 m.
C. Data Matching
To ensure that areas from the different datasets were matched in both time and space, we only retained data for which the difference between the GPM and SAR time tags did not exceed ±15 min. Fig. 2 shows a schematic diagram of the data matching. The grayscale image shows the Sentinel-1 VV polarization NRCS, and the color image shows the GPM near-ground rainfall rate. To comply with the input requirements of the CNN and to make the subimage resolution consistent with that of the GPM data, we cut the subimages to 224 pixels × 224 pixels. Each subimage covered an area of 5 km × 5 km in the longitude and latitude directions.
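A minimal sketch of the matching and cropping step is given below in Python. The sar object, with its time tag, latlon_to_rowcol lookup and calibrated NRCS array, is a hypothetical container standing in for the SNAP output, and the channel replication mirrors the single-channel filling described in Section II-A; this is an illustration, not the authors' actual processing chain.

import numpy as np
from datetime import timedelta

MAX_DT = timedelta(minutes=15)
PATCH = 224          # pixels, roughly a 5 km x 5 km box at the GRD posting

def match_and_crop(sar, gpm_pixels):
    """Yield (subimage, rain_rate) pairs for GPM near-surface rain pixels
    that fall inside the SAR scene within +/- 15 min of its time tag.
    `gpm_pixels` is an iterable of (lat, lon, rain_rate, time) tuples."""
    half = PATCH // 2
    for lat, lon, rain, t in gpm_pixels:
        if abs(sar.time - t) > MAX_DT:
            continue
        row, col = sar.latlon_to_rowcol(lat, lon)        # center of the 5 km DPR cell
        sub = sar.nrcs_db[row - half:row + half, col - half:col + half]
        if sub.shape != (PATCH, PATCH):
            continue                                     # skip cells touching the edge
        # replicate the single NRCS channel into the 3 channels the CNN expects
        yield np.stack([sub, sub, sub], axis=-1), rain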
D. Dataset
The GPM near-ground rainfall rate was used to obtain the initial labels of the subimages. As given in Table I, the data were divided into four rainfall intensity grades according to a meteorological standard [32]: light rain (LR), moderate rain (MR), heavy rain (HR), and torrential rain (TR). The standard considers the maximum rainfall rate for a rainstorm as 32 mm/h, but such instances were very scarce. Thus, we classified all rainfall rates greater than 16 mm/h as TR.
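A labeling helper corresponding to Table I might look as follows. Only the 2.5 mm/h boundary between LR and MR (quoted later in the text) and the 16 mm/h TR cut are stated explicitly; the no-rain cut (taken here as the 0.2 mm/h DPR sensitivity) and the 8 mm/h MR/HR boundary are assumptions for illustration.

def rain_grade(rate_mm_h, nr_thr=0.2, lr_mr=2.5, mr_hr=8.0, hr_tr=16.0):
    """Map a GPM near-surface rain rate (mm/h) to a rainfall grade label.
    The nr_thr and mr_hr boundaries are assumed values, not from Table I."""
    if rate_mm_h < nr_thr:
        return "NR"
    if rate_mm_h < lr_mr:
        return "LR"
    if rate_mm_h < mr_hr:
        return "MR"
    if rate_mm_h <= hr_tr:
        return "HR"
    return "TR"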
We matched Sentinel-1 and GPM data for the eastern Pacific and Atlantic Oceans from 2017 to 2018 and obtained 125 matches. Most of the matches were in the LR range, so few instances of data with rainfall intensities above LR were collected. We first built a dataset in which the subimages were divided into only two categories: rain and no rain (NR). This dataset was used to test whether the transfer-learning model could recognize rainfall in SAR images. Subsequently, we established a dataset that included NR and the four grades of rainfall intensity. For the first dataset, we set the number of subimages in the rain and NR categories to be equal to avoid errors caused by the sample size. For the second dataset, Fig. 3 shows the number of samples of each type. Examples of subimages of different rainfall grades are shown in Fig. 4. Both datasets were divided into a training set, validation set, and test set at a ratio of 7:2:1.
III. RAINFALL GRADE IDENTIFICATION AND WIND FIELD RETRIEVAL CORRECTION
The flow chart of retrieving sea surface wind speed based on deep learning and the rainfall correction model is shown in Fig. 5. First, the data are matched according to the time and space information of Sentinel-1 SAR and GPM DPR, and the matched SAR data are preprocessed using SNAP to obtain the calibrated NRCS (dB). Second, according to the input requirements of the deep learning model and the GPM DPR near-surface rainfall rate, the rainfall grade dataset is built. Then, combining the rainfall grade recognized by the trained model with the rainfall correction model, the NRCS of the SAR images in the rain area is corrected. Finally, the uncorrected/corrected SAR wind speed and the ECMWF wind speed are compared, and the correction model parameters are determined to obtain the corrected sea surface wind speed.
SAR images can capture many atmospheric and marine phenomena, and dozens of sea surface phenomena have been recognized based on CNNs. However, the number of collected sea surface phenomenon samples is limited, so new models are trained by means of transfer learning. In this article, the data that meet the time and space matching requirements of Sentinel-1 SAR and GPM DPR are even scarcer, and it is still not possible to establish a data-rich dataset, so transfer learning is also our best choice. Based on the radiative transfer model, several rainfall correction models related to the rainfall rate have been established for ASCAT, Radarsat-1, Radarsat-2 and Envisat SAR. After comparing these models, we chose the model with the best correction effect as the correction model for this article. Because the accurate rainfall rate cannot be obtained from SAR images, we input the intermediate value of the identified rainfall grade into the model for NRCS correction.
A. Convolutional Neural Network and Transfer Learning
The CNN concept was first proposed by LeCun. This type of neural network is good at extracting color, texture, and shape features of images, and it has obvious advantages for image processing. Inception Net is one of many CNN models [33]; it was first used by Google in the ImageNet Large Scale Visual Recognition Challenge in 2014 [34]. Inception Net includes four generations of models, and the Inception v3 model was adopted in this article. Inception v3 decomposes a large two-dimensional (2-D) convolution kernel into two smaller 1-D convolution kernels, building on the Inception v2 model, and optimizes the structure of the Inception module. This method effectively reduces the number of model parameters, which suppresses overfitting and reduces the top-5 error rate from 4.8% to 3.5% [35].
The optimized Inception v3 is still a very deep network, with a 48-layer structure and more than 23 million network parameters [36]. Therefore, training a new Inception v3 model from scratch would require large datasets, a long time, and high-performance hardware. However, we can use transfer learning to obtain the recognition capability while avoiding this arduous process: a model pretrained on a known dataset is retrained for application to small datasets. Wang et al. [23] used the Inception v3 model to classify geophysical phenomena in the Sentinel-1 WV data and achieved an overall accuracy of over 93% in 10 categories. Xia et al. [37] used Inception v3 and transfer learning to recognize nearly 20 kinds of flowers according to color, shape, and texture.
A CNN is a multilevel deep structure, similar to other deep learning models. In the structure, many convolution layers are alternately distributed with pooling or subsampling layers, and at the end of the structure there are one or more fully connected layers [38]. Its hierarchical structure helps to learn invariant features and capture the hierarchical representation of features from low layers to high layers. The input is fed forward through staged convolution and subsampling operations to obtain the feature representation, and then a classifier is used to generate the probability distribution [39]. A CNN usually contains three key components: the convolution layer, the pooling layer, and the fully connected layer.
The function of the convolution layer is to extract different features of the input by using convolution kernels; their distribution in the Inception v3 structure is shown in Fig. 6. Then, based on the nonlinear operation of the activation function, better feature values are retained and scattered features are discarded. One of the most effective activation functions in the nonlinear activation layer is the rectified linear unit (ReLU), a non-negative piecewise function that always returns the maximum of zero and its input, as described in [40]: f(x) = max(x, 0), i.e., f(x) = x for x > 0 and f(x) = 0 otherwise (1). The role of the pooling layer is to reduce the number of network parameters and the amount of computation without losing image features. There are two common pooling methods: average pooling and maximum pooling. The most used is maximum pooling, which is generally better than average pooling. The pooling layer is mainly used to reduce the feature space dimension of the CNN, but it does not reduce the depth. In addition, the pooling layer can effectively prevent overfitting of the model and improve its generalization ability.
The fully connected layer is at the network outlet, followed by the network output. Its function is mainly to summarize the feature values and to classify images according to these feature values. After fully extracting and compressing image features, the CNN inputs all feature values to the fully connected layer, which flattens the multidimensional feature map into one dimension and then carries out the activation operation. For object recognition, the softmax classifier is commonly utilized to normalize the label probabilities, as mathematically described in [40]: softmax(y_i) = e^{y_i} / Σ_{j=1}^{n} e^{y_j} (2). Research shows that the Inception v3 model can recognize and classify new images well by changing the structure of the fully connected layers and retaining the settings of all convolution layers. Transfer learning involves migrating the weights of a network pretrained on a large dataset to small datasets and then fine-tuning the network. The black connections in Fig. 6 correspond to the model trained on the ImageNet dataset, which has 1000 categories and more than 1 million images. We then remove the last three layers of the original model and feed the feature outputs of the original model to a new fully connected layer for classification. The pretrained Inception v3 model has been successfully applied to extracting general features such as curves and edges of SAR images. In this article, we applied it to grading rainfall intensities. The default input image size of Inception v3 is 299 pixels × 299 pixels; however, the image size in our dataset was 224 pixels × 224 pixels. We did not resize the images to 299 pixels × 299 pixels when training and testing Inception v3. This did not change the number of channels but only the size of the feature maps generated during the procedure, and the result was satisfactory.
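A fine-tuning sketch in the spirit of the description above is shown below, using the Keras implementation of Inception v3. The exact composition of the new head (global average pooling plus a single dense softmax layer) and the optimizer settings are assumptions for illustration; the article only states that the last three layers are replaced by a new fully connected classifier.

import tensorflow as tf

NUM_CLASSES = 5        # NR, LR, MR, HR, TR

def build_rain_classifier(input_shape=(224, 224, 3)):
    """ImageNet-pretrained Inception v3 backbone with its original top
    removed and a new fully connected head for the rainfall grades."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False                     # keep the pretrained convolution layers

    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_rain_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=30)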
B. Rainfall Correction Model
The effect of rainfall on SAR signals can be divided into three parts: the attenuation of the signals by rainfall in the atmosphere, the volume scattering of raindrops in the atmosphere, and the impact of raindrops on the sea surface, which changes the sea surface roughness [6]. The rain-modified measured backscatter can be written as σ_m = α_atm (σ_wind + σ_surf) + σ_atm (3), where σ_m is the SAR-measured NRCS, σ_wind is the wind-induced surface backscatter predicted by the ECMWF wind and CMOD5, σ_surf is the rain-induced surface perturbation backscatter, α_atm is the two-way rain-induced atmospheric attenuation, and σ_atm is the rain-induced atmospheric backscatter. The net two-way atmospheric attenuation factor α_atm is expressed in terms of the height H of the melting layer and the atmospheric attenuation coefficient k = 0.0031R (dB/km) for the 5.7 cm wavelength [17], where R is the rain rate (mm/h). σ_atm is expressed in terms of the SAR wavelength λ (cm), the coefficient K_w = (n² − 1)/(n² + 2) related to the absorption properties of water, which is assumed to be 0.93 for rain, and the effective reflectivity of the atmospheric rain, Z_e = 210R^1.6 (mm⁶/m³). σ_wind is derived from (3) as σ_wind = σ_m α_atm⁻¹ − σ_rain, where σ_rain = σ_atm α_atm⁻¹ + σ_surf. Both α_atm⁻¹ and σ_rain can be fitted with SAR data and the rainfall rate, where R is the rainfall rate (dB·mm/h), p and q are fitting coefficients, and θ is the angle of incidence. The observation range of spaceborne SAR is about 15° to 50°. Considering that the incidence angle ranges of ASCAT and Sentinel-1 are similar, we use the coefficients fitted from the rainfall rates of ASCAT and TRMM. Table II shows the fitting coefficients of (7) for different incidence angles. Table III shows the fitting coefficients of (8) for different incidence angles.
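The correction pipeline can be sketched as follows in Python. The mid-grade rain rates, the coeffs objects standing in for the fitted incidence-angle-dependent (p, q) coefficients of Tables II and III, and the cmod5_inverse routine are all hypothetical placeholders, since the explicit fitted forms of (7) and (8) are not reproduced here; the sketch only illustrates the order of operations.

import numpy as np

# Representative mid-grade rain rates (mm/h); the exact mid values used by
# the authors are not quoted, so these are assumed for illustration.
GRADE_MID_RATE = {"LR": 1.3, "MR": 5.0, "HR": 12.0, "TR": 24.0}

def alpha_atm_inv(rate_mm_h, incidence_deg, coeffs):
    """Placeholder for the fitted inverse two-way attenuation (eq. 7, Table II)."""
    return coeffs.evaluate(rate_mm_h, incidence_deg)

def sigma_rain_lin(rate_mm_h, incidence_deg, coeffs):
    """Placeholder for the fitted rain-induced term sigma_rain in linear units
    (eq. 8, Table III)."""
    return coeffs.evaluate(rate_mm_h, incidence_deg)

def corrected_wind_speed(nrcs_m_db, incidence_deg, grade,
                         coeffs7, coeffs8, cmod5_inverse):
    """Correct the measured NRCS for rain (sigma_wind = sigma_m * alpha_atm^-1
    - sigma_rain, in linear units) and invert it to wind speed with a
    user-supplied GMF inversion routine."""
    if grade == "NR":
        return cmod5_inverse(nrcs_m_db, incidence_deg)
    rate = GRADE_MID_RATE[grade]
    sigma_m = 10.0 ** (nrcs_m_db / 10.0)
    sigma_wind = (sigma_m * alpha_atm_inv(rate, incidence_deg, coeffs7)
                  - sigma_rain_lin(rate, incidence_deg, coeffs8))
    sigma_wind_db = 10.0 * np.log10(max(sigma_wind, 1e-6))
    return cmod5_inverse(sigma_wind_db, incidence_deg)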
IV. RESULTS AND VALIDATION
Fig. 7 shows the changes in the model accuracy over epochs for the training and validation sets. Fig. 7(a) shows the results with the rain and NR dataset. The accuracy tended to stabilize after 10 epochs with both the training and validation sets. The accuracy was close to 100% with the training set, about 98% with the validation set, and 96.2% with the test set. Fig. 7(b) shows the results with the dataset of different rainfall intensity grades. The accuracy increased rapidly with more epochs for both the training and validation sets. After 20 epochs, the trend slowed rapidly, and the accuracy fluctuated within a certain range. The model was trained for 30 epochs. The accuracy was mostly stable for the training set and fluctuated within a certain range for the validation set. The accuracy was about 99% for the training set, 85% for the validation set, and 86.2% for the test set. Fig. 8 shows the recognition results for the rainfall intensity. Fig. 8(a) shows the NR recognition results, and Fig. 8(b) shows the LR recognition results. For each subimage, the first line gives the classification result of the model, and the second to sixth lines give the probability that the model would classify the subimage into a certain rainfall intensity grade. Table IV presents the confusion matrix for the results with the test set. The retrained transfer-learning model was effective at recognizing NR and TR. Among LR subimages, 9.52% were wrongly classified as MR, and 4.76% were wrongly classified as NR. Among MR subimages, 6.66% were wrongly classified as LR, and 6.66% were wrongly classified as HR. Among HR subimages, 20% were wrongly classified as TR.
LR subimages may have been wrongly classified as NR if they lacked obvious features. The wrong classification of LR subimages as MR, and of MR subimages as LR and HR, may be because the rainfall rate was at the threshold between these grades, and the differences between two adjacent grades were not particularly obvious. Fig. 9 shows two example LR subimages recognized as MR. The model assigned very similar probabilities to MR and LR. In addition, the rainfall rates of these two subimages were 2.16 and 2.28 mm/h, respectively, which are very close to the threshold value of 2.5 mm/h between LR and MR. The wrong classification of HR subimages as TR was mainly because not enough HR data were collected, so the model was not sufficiently trained to extract HR features. The Inception v3 model can only obtain the rainfall grade of SAR subimages, not the accurate rainfall rate. Therefore, we input the intermediate values of all rainfall grades into the rainfall correction model. The comparison between the corrected SAR wind speed and the ECMWF wind speed is shown in Fig. 10. Fig. 10(a) contains the scatter plot of uncorrected SAR sea surface wind speed and ECMWF wind speed at all rainfall grades. The root-mean-square error of the uncorrected, rain-affected wind speed is 3.83 m/s. Fig. 10(b) shows the scatter plot of SAR sea surface wind speed and ECMWF wind speed after correction for the impact of rainfall. The root-mean-square error of the corrected wind speed is 1.76 m/s. It can be seen that the sea surface wind speed after rainfall correction is more consistent with the ECMWF wind speed.
V. CONCLUSION
Owing to the increasingly serious effects of climate change, monitoring extreme weather and meso- and small-scale phenomena in the oceans is of great importance for ensuring the safety of economic production and human lives. Conventional ground-based radar can only monitor land and offshore areas, so remote sensing is the main means of large-scale ocean monitoring. Although global wind-field and precipitation products are already available, there are still many challenges in improving the monitoring resolution and in simultaneously monitoring the wind field and precipitation. Advances in SAR-related technology have led to the gradual application of SAR products to wind field retrieval. Precipitation is an important factor that affects wind-field retrieval, but combining SAR with other payloads is not always practical. Therefore, we considered using deep learning to extract rainfall information from SAR images to eliminate its influence on wind-field retrieval.
In this article, we used the GPM near-surface rainfall rate as the reference data and labeled the data according to the standard rainfall intensity grades. We then used the established datasets to fine-tune a pretrained Inception v3 model. These results indicate that a CNN based on transfer learning can be used to recognize rainfall in SAR images. The model was then trained by using a dataset containing different rainfall intensities. The accuracy with the training and validation sets was lower than with the previous dataset containing only rain or NR but remained above 80%. These results are a preliminary confirmation that the rainfall intensity can be graded according to features captured by SAR images. However, at present, not many Sentinel-1 data and GPM near-surface precipitation data can be matched, so the recognition performance of the retrained model near some rainfall-grade thresholds still needs to be improved. Finally, we verified that inputting the median value of the rainfall grade identified by the CNN into the existing rainfall correction model can effectively reduce the impact of rainfall on wind-field retrieval. However, the modified correction model is more suitable for areas with higher wind speeds, and it needs to be improved, or a new model fitted, for areas with lower wind speeds. Our next step is to match more Sentinel-1 and GPM DPR data: on the one hand, to further improve the generalization and classification accuracy of the model; on the other hand, to fit a rainfall correction model based on Sentinel-1 and GPM DPR near-surface rainfall rates, so as to realize automatic recognition of rainfall in SAR images and automatic wind-field correction. Retrieval of the sea-surface wind field based on remote sensing is a relatively mature technology for conventional applications, but the interaction between the sea-surface wind field and precipitation has not yet been captured by a complete physical model. If more rainfall and wind information can be obtained from SAR images at the same time, it will help to close this gap.
Influence of speed of sample processing on placental energetics and signalling pathways: Implications for tissue collection
Introduction The placenta is metabolically highly active due to extensive endocrine and active transport functions. Hence, placental tissues soon become ischaemic after separation from the maternal blood supply. Ischaemia rapidly depletes intracellular ATP, and leads to activation of stress-response pathways aimed at reducing metabolic demands and conserving energy resources for vital functions. Therefore, this study aimed to elucidate the effects of ischaemia ex vivo as may occur during tissue collection on phosphorylation of placental proteins and kinases involved in growth and cell survival, and on mitochondrial complexes. Methods Eight term placentas obtained from normotensive non-laboured elective caesarean sections were kept at room-temperature and sampled at 10, 20, 30 and 45 min after delivery. Samples were analyzed by Western blotting. Results Between 10 and 45 min the survival signalling pathway intermediates, P-AKT, P-GSK3α and β, P-4E-BP1 and P-p70S6K were reduced by 30–65%. Stress signalling intermediates, P-eIF2α increased almost 3 fold after 45 min. However, other endoplasmic reticulum stress markers and the Heat Shock Proteins, HSP27, HSP70 and HSP90, did not change. Phosphorylation of AMPK, an energy sensor, was elevated 2 fold after 45 min. Contemporaneously, there was an ∼25% reduction in mitochondrial complex IV subunit I. Discussion and conclusions These results suggest that for placental signalling studies, samples should be taken and processed within 10 min of caesarean delivery to minimize the impact of ischaemia on protein phosphorylation.
Introduction
Placental dysfunction lies at the heart of the 'Great Obstetrical Syndromes', including growth restriction, pre-eclampsia, pre-term delivery and stillbirth. These syndromes are related to a varying degree with a deficiency in deep trophoblast invasion, and subsequent remodelling of the uterine spiral arteries [1]. Over the last few years, considerable progress has been made in elucidating the pathophysiological changes within the placenta at the molecular level. Studies have revealed increased oxidative stress, and activation of stress-response signalling pathways consistent with malperfusion [2–4]. Unlike histological changes, which are relatively slow processes, alterations in transcript abundance and activation of signalling pathways occur rapidly in response to external stimuli. The placenta inevitably undergoes a period of ischaemia after delivery, following separation from the maternal arterial supply. Therefore, the speed of tissue sampling and processing is likely to be crucial in molecular studies in order to avoid the introduction of ex vivo artefacts.

The placenta has a high rate of oxygen and glucose consumption, reflecting its high metabolic activity [5–7]. This can be accounted for by the existence of numerous active transport systems for maternal–fetal transfer of nutrients, and its endocrine function. The majority of nutrients including amino acids, vitamins, Ca2+ and other biomolecules, such as antibodies, are reliant on primary or secondary active transport systems, which utilize energy either directly linked to the hydrolysis of adenosine triphosphate (ATP) or provided by ion gradients such as sodium, chloride, and protons [8]. In the sheep placenta, approximately 25% of total oxygen uptake is used to generate ATP to support cation cotransport systems, such as Na+-K+ ATPase, which creates a Na+ gradient that is the basis for secondary active transport of amino acids and other substances [9].
Additionally, the placenta is one of the most active endocrine organs, synthesizing and secreting large quantities of polypeptide hormones, such as human chorionic gonadotrophin, human placental lactogen (hPL), and many growth factors, including insulin-like growth factor 2 and placental growth factor. As 4 ATP molecules are required for a single peptide bond formation, protein synthesis consumes a large fraction of cellular energy. Carter estimated that approximately 30% of the total placental oxygen consumption is used for protein synthesis [6].

This high metabolic activity suggests that the placenta is likely to undergo ischaemic changes rapidly following delivery, with depletion of its intracellular energy reserves. Indeed, the concentrations of high-energy phosphates and other cellular metabolites are reduced within 24 min of separation from the maternal blood supply [10]. This reduction could have a serious impact on energy-dependent cellular processes, including protein synthesis, active transport and ion transporters. As a result, stress-response signalling pathways aimed at restoring homoeostasis will be activated. A classic example is suppression of mRNA translation in an ATP-dependent manner that aids survival of cells under hypoxia [11,12].
In this study, our aim was to investigate the effects of ex vivo ischaemia, as would be experienced by delayed processing of placental samples, on activation of stress response pathways and suppression of growth and proliferation signalling, including the energy sensor AMP activated protein kinase (AMPK); cell growth, metabolism, and stress signalling pathways, such as AKT-mTOR and mitogen activated protein kinases (ERK1/2, p38 kinase and JNK); ER stress response pathways; and heat shock protein family members.
Tissue collection
The study was approved by the Cambridge Local Research Ethics Committee and all participants gave written informed consent. Eight human term (38–40 weeks) placentas were obtained from normal uncomplicated singleton first pregnancies after elective non-laboured caesarean section.

Separation of the placenta from the uterus was designated as t = 0 min. As it is standard procedure in our hospital for the midwife to check placental integrity, the earliest time the placenta was accessible was between 5 and 10 min after separation. Therefore, for consistency, the placentas were kept at room temperature and repetitively sampled at 10, 20, 30 and 45 min after separation. To minimize possible contamination with maternal decidual tissue, the surface of the basal plate was removed with scissors to a depth of approximately 1–2 mm. To avoid regional variations, all samples were taken from the same placental lobule. At each timepoint approximately a 1 cm3 piece of villous tissue was taken from the selected lobule. This was rinsed twice in ice-cold PBS, and several small pieces (~10 mg and ~50 mg) were quickly cut off and further washed in cold PBS. These were blotted dry and snap frozen in liquid nitrogen. This post-sampling procedure took approximately 2 min. All tissues were stored in a −80 °C freezer for further analysis.
Western blotting
Details of the procedures have been previously described [13]. Briefly, a tissue lysate was prepared using Matrix D and a FastPrep homogenizer (MP Biomedicals UK, Cambridge, UK), and the bicinchoninic acid (BCA) assay was used to determine protein concentration. Equal amounts of protein were resolved by SDS-PAGE and transferred to nitrocellulose membranes. After incubation with primary and secondary antibodies, enhanced chemiluminescence (ECL, GE Healthcare, Little Chalfont, UK) and X-ray film (Kodak, Hempstead, UK) were used to detect the bands. Multiple exposure times were employed when necessary. Unsaturated bands were scanned using an HP Scanjet G4050 (HP, UK) and band intensities quantified by ImageJ (freeware).
Statistical analysis
Given the number of placentas and time points, it was necessary to run 2 gels per analyte. In order to combine data across the gels, densitometric values were normalized to the mean of the 10 min sample values for all the samples run on the same gel. The distribution of the normalized values was assessed using the Shapiro–Wilk test. If the distribution was non-normal, seven arithmetic transformations were evaluated and the optimal method for generating a normal distribution selected. These tests were carried out using the statistical language R (version 3.0.1). For each analyte, differences between the means of the transformed data for each time point were tested using one-way ANOVA with repeated measures, with the Tukey correction for multiple comparisons. These analyses were carried out using Prism GraphPad version 6.0. We used the method of Benjamini and Hochberg to control the false discovery rate due to multiple testing when groups comprised 10 or more analytes [14]. Significance levels were set at p < 0.05 or adjusted p < 0.05.
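For readers who wish to reproduce a comparable pipeline outside R and Prism, a minimal Python approximation of the steps above (per-gel normalization, normality check, repeated-measures ANOVA, and Benjamini–Hochberg correction) might look like the following. The data values, the single-gel structure and the example p-values are hypothetical, and the Tukey post-hoc step is omitted for brevity.

```python
# Approximate analysis pipeline in Python; the authors used R and Prism.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# densitometry: rows = placentas (n = 8), columns = time points (one gel for simplicity)
times = [10, 20, 30, 45]
raw = pd.DataFrame(np.random.lognormal(0, 0.3, (8, 4)), columns=times)

# normalize to the mean of the 10 min samples on the same gel
norm = raw / raw[10].mean()

# assess normality; if non-normal, a transformation (here simply log) would be evaluated
w, p_shapiro = stats.shapiro(norm.values.ravel())
data = np.log(norm) if p_shapiro < 0.05 else norm

# one-way repeated-measures ANOVA across time points
long = data.reset_index().melt(id_vars="index", var_name="time", value_name="value")
long.columns = ["placenta", "time", "value"]
res = AnovaRM(long, depvar="value", subject="placenta", within=["time"]).fit()
print(res)

# Benjamini-Hochberg FDR correction across many analytes (hypothetical p-values)
pvals = [0.004, 0.03, 0.2, 0.01]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```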
Activation of AMPK is associated with a reduction in mitochondrial complex IV subunit I protein level
Activation of the energy sensor AMPK [15] by phosphorylation was investigated in the time series of placental samples. Although the levels of phosphorylated and total AMPKα did not change significantly (Suppl. Fig. 1A), the relative ratio of phosphorylated to total AMPKα (P-AMPKα/AMPKα) showed a significant 2-fold increase (p = 0.022) at the 45 min time point (Fig. 1A and C).

Mitochondria are the major source of intracellular ATP production in eukaryotic cells. Therefore, accumulation of AMP, which activates AMPK, suggests that mitochondrial activity in the placentas may be compromised by ex vivo ischaemia. Indeed, the electron acceptor complex IV subunit I in the electron transport chain (ETC) was reduced significantly (p = 0.016) at 45 min after separation, although the protein levels of other subunits in complexes I, II, III and V were not affected (Fig. 1B and D; Suppl. Fig. 1B). Loss of subunits of the ETC complexes could result in reduction of mitochondrial activity, which in turn compromises intracellular energy production.
Activation of ER stress signalling
In response to intracellular energy depletion, stress pathways are activated in an attempt to restore cellular homoeostasis. Therefore, we investigated changes in phosphorylation or protein level of key members of the most common stress signalling pathways, including the ER stress response, MAPK stress kinases, and heat shock proteins.
Protein synthesis is highly energy demanding. Therefore, we first looked at the ER stress response pathway. There was a significant (p = 0.0075) and incremental elevation of phosphorylation of eukaryotic initiation factor 2 subunit alpha (P-eIF2α): ~1.5-fold by 30 min, and almost 3-fold by 45 min, compared to 20 and 10 min respectively (Fig. 2A). However, other ER stress markers, including phosphorylation of IRE1α and the levels of ATF6, GRP78 and GRP94, were relatively constant (Fig. 2A; Suppl. Fig. 2A).
These data confirm that specific stress-response pathways are activated and change progressively in placental tissues as time elapses after separation.
Rapid loss of AKT-mTOR signalling
Signalling pathways regulating cell growth and metabolism are sensitive to cellular energy levels. We therefore examined the phosphorylation status of kinases and proteins in the AKT-mTOR pathway, a central regulatory pathway for cell growth and metabolism. Full activation of AKT is reliant on phosphorylation of residues at both Ser473 and Thr308 [16]. Interestingly, phosphorylation of AKT at Ser473 was reduced significantly throughout the period (p = 0.006), and by 60% and 65% at 30 min and 45 min respectively. In contrast, phosphorylation of the Thr308 residue, which is phosphorylated by upstream PI-3K-PDK1 signalling [17], was relatively constant compared to the 10 min control (Fig. 3; Suppl. Fig. 3).

Glycogen synthase kinase 3 (GSK-3) is a downstream target of the AKT pathway whose activity is inhibited by AKT-mediated phosphorylation at Ser21 of GSK-3α and Ser9 of GSK-3β. The loss of AKT phosphorylation was associated with a ~30% (p = 0.014) and ~40% (p = 0.006) decrease in phosphorylation of GSK-3α and β respectively (Fig. 3B).
These results indicate that the activity of the AKT/mTOR pathway was rapidly suppressed by the ischaemic insult.
Discussion
The majority of cellular processes, such as phosphorylation, protein translation and ion transporter activity, are sensitive to intracellular energy levels. Our results show a reduction in phosphorylation of proteins and kinases involved in AKT-mTOR signalling, including 4E-BP1, GSK-3, and AKT. There are also changes in the eIF2α and JNK stress signalling pathways with increasing duration of ischaemia (Figs. 2 and 3). The changes in phosphorylation of 4E-BP1 and eIF2α are two key molecular mechanisms involved in the inhibition of protein synthesis [21], and were observed at 30 min after separation. Coincidentally, the level of mitochondrial ETC complex IV subunit I was reduced by over 25% at 45 min. Further experiments, including measurement of the transcript encoding complex IV subunit I and its turnover rate, will be required to confirm any role of protein synthesis inhibition in the reduction of that subunit. However, we recently demonstrated that increased phosphorylation of eIF2α following treatment of JEG3 cells with salubrinal, a specific eIF2α phosphatase inhibitor, was sufficient to down-regulate complex IV subunit I, indicating a potential direct regulation of translation of those proteins by the eIF2α pathway [22]. Furthermore, swelling of mitochondria and other organelles has been reported at 10 min after placental separation [23], indicating loss of ionic homoeostasis most likely due to compromised ion transporter activity. Taken together, these mitochondrial impairments could serve as a positive feedback loop that further reduces cellular energy production. Therefore, prolonged ischaemia (over 45 min) will eventually induce necrotic cell death due to severe energy depletion, resulting in loss of cell integrity and tissue damage.

Fig. 1 (legend): In the dot plot graphs, the median of the group is shown, n = 8; "a" and "b" denote significance p < 0.05 compared to 10 and 20 min respectively. C) P-AMPKα/AMPKα. D) Mitochondrial complex IV subunit I.
To conclude, the phosphorylation status of specific placental kinases and proteins involved in cell growth, metabolism, and stress signalling is changed by 20 min after placental separation from the uterine wall. Our findings indicate that, to avoid ischaemia-induced artefacts in studies focussing on these pathways, placental samples must be collected and processed rapidly following delivery, preferably within 10 min. Furthermore, our previous work has shown that stress-response pathways are activated during a vaginal delivery [24], and so only non-laboured caesarean delivered placentas are appropriate for such studies.

Fig. 2 (legend): Densitometry was used to quantify band intensity; each data point was calculated relative to the mean of the 10 min values. In the dot plot graphs, the median of the group is shown, n = 8; "a" and "b" denote significance p < 0.05 compared to 10 min or 20 min respectively. A) ER stress response pathways; B) MAPK pathways.

Fig. 3 (legend): Phosphorylation of AKT, GSK-3, 4E-BP1 and p70S6K. A) Phosphorylation levels of kinases and proteins in the AKT-mTOR pathway were measured by Western blot; both β-actin and Ponceau S staining were used to show equal loading among samples. B) Densitometry was used to quantify band intensities; each data point was calculated relative to the mean of the 10 min values. In the dot plot graph, the median of the group is shown, n = 8; "a" and "b" denote significance adjusted p < 0.05 compared to 10 min or 20 min respectively.
Kinetic Modelling and Inference of Hyperpolarized 13C Molecules in Cancer Metabolism
Hyperpolarized 13C-MRI allows real-time observation of metabolism in vivo. Imaging sequences have been developed to follow the metabolism of [1-13C] pyruvate and extract reaction kinetics, which can show tumour treatment response. We applied the fitting model and algorithm to imaging data from mouse tumour models and determined error estimates for the parameters of interest. Data were least-squares fitted onto a two-site exchange model in MATLAB, followed by statistical computation to assess model performance. Inference through the application of MCMC was also performed. The modelling and inference process extracted quantitative information satisfactorily and reproducibly, demonstrating metabolic activity and intratumour heterogeneity. Finally, novel fitting methods were evaluated and further recommendations were made.
To Huiqiang Zhao and Zhanjiang Pu , brave father and grandfather who died of cancer yet achieved numerous miracles and have guided me along my life.
Introduction
Cancer is a complex disease that involves malignant cell growth. It consists of disorders in many distinct aspects, but a number of cancers do share some characteristic features 1 , among which, notably, is abnormal metabolism 1,2 . The Warburg effect of cancer cells is the mass production of lactate ('fermentation') via glycolysis even in the presence of oxygen 3 , which is a signature of mitochondria damage. Recent findings suggest that such alteration is not only an adaptation towards hypoxic microenvironment as a result of unlimited replication, but potentially the underlying cause of cancer 4 .
It is therefore important to visualise the metabolic activity of cancer. Pre-clinically, it helps to advance the understanding of cancer pathophysiology. In clinical practice it contributes to more precise diagnosis and better detection of treatment response 5: an example being FDG-PET, where 18F-FDG is injected into the patient and its uptake is analyzed and mapped onto a CT image of anatomy 6 (Fig. 1). However, the imaging technique yields static images and poor spatial resolution, and also brings ionizing radiation which may lead to disease progression.

Advances in molecular imaging have provided a novel non-invasive tool for oncology that shows high clinical promise 5,7. Among them is dynamic nuclear polarization (DNP), which increases the SNR of 13C magnetic resonance (13C-MR) by over 10,000-fold 8,9. This allows real-time observation of metabolism in vivo: not only can we image the distribution of metabolites down to tissue (voxel) level, we are also able to extract reaction kinetics (e.g. rate of conversion) through quantitative modelling. These observations can further demonstrate intratumour heterogeneity 10 and indicate early treatment response 5,9.
The goal of this project was to determine the best kinetic model and fitting algorithm for imaging data, and to determine error estimates for the parameters of interest. Example models were taken from the literature and applied to the spectroscopic/imaging datasets acquired, whilst examining fitting performance both qualitatively (by curve fitting) and quantitatively (by information criteria). Markov Chain Monte Carlo (MCMC) analysis was performed to estimate error and increase confidence in modelling results. Finally, suggestions were made on modelling and inference algorithms.

Animal Model and Data Acquisition

5 × 10^6 EL4 lymphoma cells were injected into the flank of C57BL/6J mice, which were imaged 8 days post-injection when tumour size was approximately 1.5 cm^3. 44-mg samples of 91% [1-13C] pyruvate solution were polarized, using a clinical hyperpolarizer (GE Healthcare, Chicago, IL, USA), to 20% ± 1%, as measured by an NMR polarimeter (Oxford Instruments, Abingdon, UK). Hyperpolarized substrates were made by rapid dissolution of these samples. The mice were anaesthetized and placed in a self-made surface coil, in a 7 T animal MR magnet. Substrates were injected intravenously through the tail vein, and data collection began shortly before or after injection (normally −8 s to 2 s). The free induction decay (FID) signals in the time domain (1 s in length) were collected by the coil and Fourier-transformed to a series of 13C-NMR spectra (Fig. 2a&b). For each spectrum, the area under each metabolite peak was calculated after phase correction (which sets the peaks perpendicular to the baseline), giving the signal intensities of the metabolites at that specific time point. A complete set of time course signals of the different metabolites was formed (Fig. 2c) and thus became the starting point of kinetic modelling and statistical inference.
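A minimal sketch of the spectral processing just described (Fourier transform of each 1 s FID, phase correction, and integration of a metabolite peak) is given below. The synthetic FID, the zero-order phase handling and the integration window are illustrative assumptions, not the actual acquisition or processing parameters used in the project.

```python
# Sketch: FID -> phased spectrum -> peak area for one time point.
import numpy as np

def fid_to_spectrum(fid, phase0=0.0):
    spec = np.fft.fftshift(np.fft.fft(fid))
    return (spec * np.exp(-1j * phase0)).real   # zero-order phased, real part

def peak_area(spectrum, freqs, lo, hi):
    sel = (freqs >= lo) & (freqs <= hi)
    return np.trapz(spectrum[sel], freqs[sel])

# Example with synthetic data: one FID per time point -> one intensity per metabolite.
n, dwell = 2048, 1.0 / 2048                     # 1 s acquisition
t = np.arange(n) * dwell
freqs = np.fft.fftshift(np.fft.fftfreq(n, dwell))
fid = np.exp(2j * np.pi * 120 * t) * np.exp(-t / 0.05)   # fake "lactate" line at 120 Hz
spec = fid_to_spectrum(fid)
lactate_signal = peak_area(spec, freqs, 100, 140)         # hypothetical window
```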
Two experiments were performed, in Spring 2015 and Summer 2016 respectively. The former was performed with a pulse sequence from the literature. It used a uniform 5° flip angle and was able to detect signals from pyruvate, lactate and alanine (see the next section), but the SNR was mixed. As for the 2016 experiment, mice were imaged twice with a 2-day gap during which a single etoposide treatment was given (n = 4). Imaging was based on a single-shot spiral pulse sequence developed in 2016 to optimize SNR (Fig. 3), with different flip angles for pyruvate (7°) and lactate (45°) so as to preserve fresh polarization and produce higher lactate signals 11. It yielded a spatial resolution of 1.25 × 1.25 × 2.5 mm and a time resolution of 1 s, but sacrificed lactate data in the first 10 s as well as signals from other metabolites.

Fig. 3 (caption): pyruvate is infused over 8 s (purple box) and excited every two seconds for 10 s. Alternate P-L acquisitions with different flip angles occur every 1 s from 10 s to 90 s. No lactate data is thus collected for the first ten seconds.
Kinetic Modelling
Biomarkers associated with pyruvate that demonstrate observable peaks in 13C-NMR spectra include (but are not limited to) lactate (via fermentation), alanine (via reversible transamination), urea (via the downstream urea cycle) and bicarbonate (via the TCA cycle). In this project, only the first three were considered, as signals from urea and bicarbonate were considerably lower and might be ignored. Pyruvate hydrate may also appear as a peak but it is not involved in any metabolic pathway 9.
In order to extract quantitative reaction kinetics, the time course signal intensities can be fitted to the following two-site exchange model based on modified Bloch equations 12, assuming all metabolites in vivo are able to have direct contact with the intracellular enzymes that facilitate exchange (P: pyruvate; L: lactate; A: alanine):

dP/dt = −(kPL + kPA + ρP)·P + kLP·L + kAP·A + AIF(t)
dL/dt = kPL·P − (kLP + ρL)·L
dA/dt = kPA·P − (kAP + ρA)·A

where the k terms are the forward and reverse exchange rate constants and ρX combines T1 relaxation and the flip-angle correction for metabolite X. In addition to the rate constants, the effective relaxation terms and the arterial input function (AIF) 14 would also need to be fitted. Later sections will demonstrate further simplification of this model; however, one should note that this model itself is already a simplified one, ignoring the difference between intravascular, extravascular (ECM) and intracellular environments. Nevertheless, being a non-linear ODE model with a number of parameters to be fitted, the robustness and reliability of the fitting results are in serious doubt, particularly when many of the parameters may not be measured directly (notably the AIF, since real-time information on circulation is unavailable).
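To make the fitting procedure concrete, the sketch below implements a reduced pyruvate–lactate version of such a two-site exchange model, fits it by least squares (the MATLAB analogue being lsqcurvefit), and computes an AIC value for model comparison. The gamma-variate AIF form, the parameter values, the bounds and the noise level are assumptions for illustration only, not the settings used in this project.

```python
# Reduced two-site (pyruvate-lactate) exchange model with a least-squares fit.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def aif(t, t0=5.0, alpha=2.0, beta=3.0):
    """Assumed gamma-variate arterial input function (placeholder shape)."""
    s = np.clip(t - t0, 0, None)
    return s**alpha * np.exp(-s / beta)

def model(t_eval, kPL, kLP, rP, rL, scale):
    def rhs(t, y):
        P, L = y
        dP = -(kPL + rP) * P + kLP * L + scale * aif(t)
        dL = kPL * P - (kLP + rL) * L
        return [dP, dL]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.0, 0.0], t_eval=t_eval, method="LSODA")
    return np.concatenate([sol.y[0], sol.y[1]])      # stacked P then L signals

t = np.arange(0, 90, 1.0)
y_obs = model(t, 0.05, 0.01, 1/30, 1/25, 1.0) + np.random.normal(0, 0.05, 2 * t.size)

p0 = [0.05, 0.01, 1/30, 1/25, 1.0]                   # initial guess (hypothetical)
bounds = ([0, 0, 0, 0, 0], [1, 1, 1, 1, 10])
popt, pcov = curve_fit(lambda tt, *p: model(tt, *p), t, y_obs, p0=p0, bounds=bounds)

# AIC for comparing fits: AIC = n*ln(RSS/n) + 2k
rss = np.sum((y_obs - model(t, *popt)) ** 2)
aic = y_obs.size * np.log(rss / y_obs.size) + 2 * len(popt)
```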
Markov Chain Monte Carlo (MCMC)
MCMC is widely applied in computational biomedicine 15,16 , as it significantly increases one's understanding and certainty towards parameter values via Bayesian statistical analysis 17 . Compared with least-squares fitting which offered a single result, MCMC algorithms randomly (hence 'Monte Carlo') sample over the entire parameter space, forming a Markov chain whose distribution at equilibrium is the target probability distribution of parameter vectors. Two major algorithms include Metropolis-Hastings (MH) and the Gibbs Sampler 17 . Since the latter requires a higher degree of independence between parameters, the MH algorithm was applied in this project, whose flow chart is shown in Fig. 4. This is a simplified description of the algorithm whose mathematical details are beyond the scope of this report, however there are two things one should pay particular attention to.
Firstly, the likelihood L(D|θ). Let y_i represent the i-th measured data value and M(θ)_i be the value predicted by the model. We can thus write the residual as ε_i = y_i − M(θ)_i for the current choice of parameter vector θ. The principle of maximum entropy leads to the assumption that the error (residual) ε_i follows a Gaussian distribution with standard deviation σ 16. A further assumption of datapoint independence brings us to the likelihood equation shown in the figure, namely L(D|θ) = ∏_i (2πσ²)^(−1/2) exp(−(y_i − M(θ)_i)² / (2σ²)).
More importantly, it is critical how to choose ('tune') the proposal covariance matrix Σ. Ideally it is proportional to that of the target, but since we had no such information whatsoever, it seemed the only way was to tune the matrix manually (aka 'trial and error'), so as to avoid the situation where a Σ that is too large rejects most updates or one that is too small makes the movement extremely slow (Fig. 5). Fortunately, an adaptive algorithm has been developed to automatically update Σ during the stochastic process 18,19. It can be shown that the sampling efficiency tends to its highest when the proposed Σ for the n-th iteration is given by Σ_n = (2.38²/d)·Σ_emp,n−1, where d is the dimension of the parameter vector and Σ_emp,n−1 is the empirical covariance matrix based on the vectors of the previous n−1 iterations. The optimal acceptance rate of updates is ∼44% for d = 1 and ∼24% for d > 5. Thus, the Adaptive-MH algorithm proposes, for the n-th iteration, a candidate drawn from the multivariate normal distribution N(θ_n−1, Σ_n). Since it is impossible to estimate the target covariance right at the beginning, the algorithm samples with a fixed covariance for the first few (∼10%) iterations, usually in the form Σ = σ0²·I_d (I_d: d-dimensional identity matrix), assuming no parameter interdependence. The adaptive process begins afterwards and ends at ∼50% of total iterations, in order to prevent the Markov chain from converging in the wrong direction. Finally, the algorithm runs with the last covariance obtained 19.
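A compact sketch of this adaptive Metropolis–Hastings scheme is shown below: a fixed diagonal proposal for roughly the first 10% of iterations, then a proposal covariance formed from the empirical covariance of the past samples scaled by 2.38²/d. The log-posterior here is a stand-in; in the actual application it would wrap the ODE model, the Gaussian log-likelihood described above, and any priors on the parameters.

```python
# Adaptive Metropolis-Hastings sketch with the tuning schedule described in the text.
import numpy as np

def log_posterior(theta):
    return -0.5 * np.sum(theta**2)      # placeholder target (standard normal)

def adaptive_mh(theta0, n_iter=40000, sigma0=0.1, adapt_start=0.1, adapt_end=0.5):
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    lp = log_posterior(theta)
    cov = sigma0**2 * np.eye(d)         # fixed diagonal proposal at the start
    chain = np.empty((n_iter, d))
    accepted = 0
    for i in range(n_iter):
        # adapt the proposal covariance only during the middle portion of the run
        if adapt_start * n_iter < i < adapt_end * n_iter:
            cov = (2.38**2 / d) * np.cov(chain[:i].T) + 1e-10 * np.eye(d)
        prop = np.random.multivariate_normal(theta, cov)
        lp_prop = log_posterior(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # MH acceptance rule
            theta, lp = prop, lp_prop
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_iter

# Example usage for a 6-parameter problem (cf. the 2016 model):
samples, acc_rate = adaptive_mh(np.zeros(6), n_iter=20000)
```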
Results
The project was performed in the MATLAB environment (MathWorks, Natick, MA, USA). The ODE model was coded directly into a script as it was difficult to solve by hand (because of the AIF). Datasets were least-squares fitted onto the model using the lsqcurvefit function, whose performance was assessed by the Akaike Information Criterion (AIC) quantitatively and by observing plots qualitatively. Surprisingly, lsqcurvefit depended highly on the initial parameter vector (guess), upper bound and lower bound, and these had to be tuned according to fitting performance so as to yield a near-best fit. The resulting parameter vector was passed onto the MCMC function to obtain a final fitting result as well as an error estimate (standard deviation, SD). To show that the Markov chain converged in the right direction, another chain starting from a distant parameter vector was produced and the two chains were compared against each other 17,20. Datasets were fitted onto the full model as in Section 3, but ignoring the correction factor for flip angle, essentially treating the ρ values uniformly as the reciprocal of T1. The resulting fits (Fig. 6) demonstrate a good match for many mice datasets, but for others the post-peak signals were not well fitted. For data with low SNRs, the model was able to produce a decent fit, showing robustness.
MCMC analysis gave the error estimate (SD) for the fitted values and the adaptive algorithm yielded an acceptance rate of ∼50%. Trace plots and histograms (Fig. 7a&b) indicated good mixing of the chain. Interestingly, however, the two chains from distinct starting points were unable to converge into a uniform range (Fig. 7c). The effect on AIC values differed greatly between datasets: some lowered the AIC values by less than 10%, compared with 45% as in Table 1, which implied no significant improvement over the lsqcurvefit results. Several points may be noted when simplifying the model for the latest experiment. First, all alanine terms and reverse rate constants were omitted. Second, the flip-angle correction factor was not negligible as the lactate flip angle was 45°, but it was hard-coded to reduce the number of fitting parameters, assuming accurate flip angles for each RF pulse. Finally, the first few data points for lactate were missing. Table 2 shows an example of whole-tumour fitting results (obtained by averaging data over each voxel within the tumour region), with higher kPL and lower T1.
MCMC analysis gave a stable acceptance rate of around 20%, with trace plots indicating good chain-mixing and successful convergence of two distinct chains (Fig. 8). Notwithstanding these encouraging signs, we had seen strong dependence of fitting results on AIF, as well as greater uncertainty (i.e. SD) towards parameter values. Surprisingly, AIC values stayed the same as that of least-squares fitting (within computer tolerance). Time course plots ( Fig. 9) showed failure in fitting pre-peak signals (possibly due to the missing lactate data), but the algorithm nevertheless produced decent fits indicating its robustness.
Voxel-by-voxel least-squares fitting 10 was performed and the fitted kPL values were mapped onto 1H anatomical MRI images (Fig. 10). The functional maps were compared with the lactate distribution (as pyruvate was only found in the aorta for a few seconds), showing that regions emitting higher lactate signals generally overlap with regions with higher kPL. One may thus interpret that metabolic activity was mostly found deep inside the tumour and occasionally on the rim, which implies substantial occurrence of cell death within the tumour. The distinct kPL values across the tumour show evidence of intratumour heterogeneity. From the post-treatment images one may conclude that etoposide lowered kPL but made intratumour metabolism more chaotic.
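The voxel-wise mapping described above can be sketched as follows; fit_voxel stands in for the per-voxel least-squares fit (returning kPL, with low-SNR voxels skipped), and the array sizes, threshold and anatomical image are all hypothetical.

```python
# Voxel-by-voxel kPL map overlaid on an anatomical slice (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

def fit_voxel(pyr_tc, lac_tc):
    """Placeholder: least-squares fit of one voxel's time courses, returning kPL."""
    return np.nan if pyr_tc.max() < 5.0 else 0.05    # skip low-SNR voxels

pyr = np.random.rand(16, 16, 80) * 10     # hypothetical x * y * time pyruvate signals
lac = np.random.rand(16, 16, 80) * 3
anat = np.random.rand(128, 128)           # hypothetical 1H anatomical slice

kpl_map = np.full(pyr.shape[:2], np.nan)
for ix in range(pyr.shape[0]):
    for iy in range(pyr.shape[1]):
        kpl_map[ix, iy] = fit_voxel(pyr[ix, iy], lac[ix, iy])

plt.imshow(anat, cmap="gray")
plt.imshow(np.kron(kpl_map, np.ones((8, 8))), cmap="jet", alpha=0.5)  # upsampled overlay
plt.colorbar(label="kPL (1/s)")
plt.show()
```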
Validity of Modelling
The strong dependence of fitting results on the least-squares settings (i.e. boundaries and initial guess) as well as on the AIF puts the robustness and even the validity of the modelling algorithm in doubt. In fact, when we assigned a boundary that seemed physiologically improbable (e.g. kPL > 5 s−1 with kLP > 0.5 s−1), we were still able to obtain a good fit while the other parameters had normal values. This brings us to the possibility of a multimodal distribution of the parameter vector, which can occur for high-dimensional ODE models 16. Such multimodal behaviour might also be the reason why the two parallel Markov chains could not converge together in the MCMC analysis of the 2015 datasets. The chain initiated from a vector distant from the target distribution might converge to a local mode 16 that was supposed to be rejected on physiological grounds. Therefore, it becomes critical to accumulate adequate prior knowledge and carefully select the mode that best fits the metabolic conditions in vivo.

Fig. 8 (caption, cf. Table 2): trace plots showing two-chain convergence (40,000 iterations, d = 6).
A number of articles 21,23 proposed a methodology of analysis in which fitting was performed only for the time course after the signals had reached their peak, discarding data prior to the peaks, possibly for the purpose of avoiding incorporation of the AIF into the modelling. Indeed, in many experiments researchers were unable to, or chose not to, measure the AIF directly or indirectly because of timing and costs. Adding the AIF into the fitting also increases its difficulty and uncertainty. Notwithstanding that, discarding data puts the validity of the modelling at risk, particularly when accurate quantitative metabolic information is needed (e.g. for decision-making in future clinical applications). We therefore suggest that such considerations be made when designing experiments and processing data.

Fig. 10 (caption): Four mice with EL4 tumours were imaged twice and treated with etoposide in between. Grey-scale proton images show the anatomy of the tumours (outlined in green), onto which fitted kPL maps were false-coloured and superimposed. NB: whilst trying to ensure that, for each animal, the same slice was compared between the two imaging sessions, discrepancies may occur.
There has also been discussion regarding the correct computation of the AIC. The value is directly related to the RSS between the acquired and fitted data; however, as the order of magnitude of the signal intensities differs vastly between experiments, it might be important to consider the necessity and feasibility of normalizing the RSS, so as to provide a fair comparison of fitting performance across different datasets.
Other Kinetic Models
The kinetic model presented in this report is shown to be efficient with satisfactory fitting performance, nevertheless the model itself has contributed to several imperfections. First, as suggested in Section 3, a number of assumptions and thus omissions were made, underlying systematic error. Second, the fact that the ODE could not be solved analytically has led to reliance upon built-in ODE solvers (ode15s in this project) which inevitably introduced error and compromised accuracy. Third, the sheer number of parameters in the model (12 in 2015 and 6 in 2016) had increased the difficulty of optimization and made MCMC inference fragile, as the chain could easily go into wrong direction and terminate the whole program subsequently. Such trade-off behaviour between fitting efficiency and accuracy, as witnessed in this project, is of central importance in kinetic modelling and statistical inference.
There have been controversies as to whether higher model complexity leads to better fitting results. Whilst some 8 suggested no significant bias caused by model variation, others concluded separately 21,24 that some models do outperform others, with lower AIC scores and higher likelihood. Model-free approaches 13,23, though seemingly appealing, showed only modest performance 21. We encoded one 'enhanced' model with separate intra- and extravascular environments as suggested in the Bankson et al. paper, but were nevertheless unable to obtain a fit for the 2016 datasets we acquired. Future work may be done to examine and improve the repeatability of different kinetic models.
Other Inference Methods
As suggested, the MH algorithm may fall short when sampling through high-dimensional or multimodal distributions that demonstrate strong parameter correlations 16. The chains may also converge into wrong regions due to their random-walk behaviour. Therefore, new MCMC methods have been emerging with the progress of machine learning and computational sciences, among which Hamiltonian Monte Carlo (HMC) can be a powerful approach targeting multidimensional problems 19,25. Inspired by molecular kinetics, it aims to overcome the inefficiency of MH by walking straight towards the target distribution with a 'momentum' vector. In this project the HMC algorithm was coded and trialled on the 2016 datasets. Performance was assessed against criteria suggested in the literature 25. It did show promise from histograms and trace plots (Fig. 11), as we were able to witness the vector walking along a path around a specific region, and then moving on to seek other possible modes. However, there are some critical problems that need to be solved before further applications. First, HMC requires first-order sensitivities 16,25 of the ODE model. In this project they were computed numerically rather than analytically (again, because of the AIF), consuming a substantial amount of time whilst losing accuracy. Second, it requires very precise tuning of parameters like the mass matrix 16, which, if not done automatically, would cost considerable manual effort. In light of the future clinical demand for both efficiency and accuracy, MH might still be the first choice for inference, as it is much faster, produces satisfactory convergence and requires less tuning.
Conclusion
In this project, kinetic modelling of hyperpolarized 13 C MR spectroscopy and imaging was performed, from which we were able to extract quantitative metabolic information of mice EL4 tumour models. Data was acquired and least-squares fitted onto a two-site exchange model, followed by statistical inference through the application of MCMC. Performance of the modelling process was assessed and compared with those suggested in literature, demonstrating reproducibility of the experiment. From the reaction kinetics, real-time metabolic activity was witnessed, showing Warburg effect and intratumour heterogeneity. Novel modelling and inference approaches were also introduced and examined. Further research may be performed in the following aspects: 1) Better pulse sequences (e.g. flip angles 26 ) that improve SNR whilst preserving data; 2) More sophisticated models that are robust, efficient and accurate; 3) Better mathematical description of AIF; 4) Improving MCMC algorithms to target multimodal, high-dimensional and highlycorrelated parameter distributions; and 5) Automation of modelling and inference, so as to prepare for clinical applications.
Finally, novel molecular imaging techniques like hyperpolarized 13C-MRI, glucoCEST 27 and photoacoustic imaging 28 have all shown potential to measure metabolic activity in human tumours. The first clinical trials of hyperpolarized 13C-MRI have yielded encouraging results. Nevertheless, one needs to bear in mind that these techniques are inherently qualitative and thus require careful interpretation when trying to extract quantitative information, so as to increase confidence in the data and contribute towards better decision-making for the benefit of cancer patients.
Vascular invasion does not discriminate between pancreatic tuberculosis and pancreatic malignancy: a case series.
BACKGROUND
Pancreatic tuberculosis is very rare and most commonly involves the head and uncinate process of the pancreas. It closely mimics pancreatic malignancy and is often diagnosed after pancreatico-duodenectomy. Vascular invasion is believed to be a hallmark of malignant lesions and described as a point of differentiating benign lesions from malignant lesions. We herein retrospectively evaluated the patients with pancreatic tuberculosis seen at our unit over the last 4 years for features of vascular invasion.
METHODS
We retrospectively analyzed the collected database of all patients diagnosed with pancreatic tuberculosis at our unit over the last four years and identified patients who had evidence of local vascular invasion and their clinical and imaging findings were retrieved.
RESULTS
Over the last four years, 16 patients (12 males) with pancreatic tuberculosis were seen and five of these 16 patients had imaging features of vascular invasion by the pancreatic head mass. Of these five patients, four were males and the mean age was 32.0±5.47 years. Of these five patients, three had involvement of portal vein and superior mesenteric vein and two had involvement of hepatic artery.
CONCLUSION
Presence of vascular invasion does not distinguish pancreatic tuberculosis and malignancy, and, therefore, cytopathological confirmation is mandatory to differentiate between the two.
Introduction
Pancreatic tuberculosis is very rare and most commonly involves the head and uncinate process of the pancreas [1]. The clinical and imaging features of pancreatic tuberculosis closely mimic a resectable pancreatic cancer and therefore many cases of pancreatic tuberculosis have been diagnosed after histopathological examination of the resected specimens obtained after Whipple's surgery for presumed pancreatic head malignancy [1,2]. Endoscopic ultrasound (EUS) is an excellent imaging modality for evaluation of pancreatic lesions because of the high resolution images obtained by a closely placed transducer. However, we have previously shown that none of the EUS features of a mass lesion caused by pancreatic tuberculosis are distinctive and therefore cytological examination is mandatory to differentiate it from resectable pancreatic head malignancy [1].

Local vascular invasion is often considered to be an imaging feature of malignant lesions and may indicate unresectability. Vascular invasion is not usually seen in benign lesions and is considered a feature of malignancy. Vascular invasion has also not been usually reported in patients with pancreatic tuberculosis. One study on 19 patients with pancreatic tuberculosis did not find vascular invasion in any of these patients and therefore suggested that absence of vascular invasion in a pancreatic head mass lesion could suggest a diagnosis of pancreatic tuberculosis [3]. However, we have previously reported two cases of pancreatic tuberculosis with local vascular invasion [4,5]. We retrospectively evaluated the patients with pancreatic tuberculosis seen at our unit over the last 4 years for features of vascular invasion and present a series of 5 cases of pancreatic tuberculosis with local vascular invasion.
Patients and methods
We retrospectively analyzed the collected database of all patients diagnosed with pancreatic tuberculosis at our unit over the last four years. The diagnosis of pancreatic tuberculosis was established on the basis of clinical features, radiologic findings, cytological findings and improvement in symptoms with anti-tubercular therapy (ATT). All patients had undergone contrast-enhanced computed tomography (CT) of the chest and abdomen. We identified patients who had evidence of local vascular invasion on CT and their clinical and imaging findings were retrieved. Since these pancreatic head lesions closely mimicked pancreatic malignancy, all the patients had also undergone positron emission tomography CT (PET-CT) for the purpose of staging.

EUS was performed after informed consent using a linear scanning echoendoscope (EG-3870 UTK linear echoendoscope, Pentax Inc, Tokyo, Japan or GF-UCT 180; Olympus, Tokyo, Japan). The examination sought details about the size, location and appearance of the lesion, with any lymphadenopathy, vascular invasion and calcifications. The diameters of the common bile duct and pancreatic duct were also noted. EUS-guided fine needle aspiration (EUS-FNA) was performed from the lesion and the material obtained was immediately sent for cytopathological examination. The extrapancreatic lesions, if present, were also sampled: celiac or mediastinal lymph nodes under EUS guidance and hepatic lesions under transabdominal ultrasound guidance.

The patients were treated with weight-based four-drug anti-tubercular therapy (isoniazid 5 mg/kg/day, rifampicin 10 mg/kg/day, pyrazinamide 25 mg/kg/day, and ethambutol 15 mg/kg/day) and were followed up for disappearance of symptoms and radiological improvement. As we had previously shown that pancreatic tuberculosis patients with cholestatic symptoms had resolution of their symptoms with ATT alone and had no need for biliary stenting, all these patients were also treated with ATT alone [1]. It was decided to place a biliary stent only if there was intractable pruritus, cholangitis, or worsening of cholestatic symptoms after starting ATT.
Results
Over the last four years, 16 patients (12 males) with pancreatic tuberculosis were treated at our unit and 5 of these 16 patients had imaging features of vascular invasion by the pancreatic head mass. Of these 5 patients, 4 were males and the mean age was 32.0±5.47 years (Table 1). The presentation in all patients was abdominal pain of varying duration without any associated fever or night sweats. Of these 5 patients, 4 had cholestatic jaundice but none had cholangitis. All patients had associated loss of appetite and 4 patients had loss of weight. All patients were negative for human immunodeficiency virus and none of the patients had any clinical or radiologic findings of extrapancreatic tuberculosis. Also, blood sugar was normal in all these 5 patients.

On evaluation, peripancreatic lymphadenopathy was present in 3 patients. Other lymph nodes involved were celiac, precaval, supraclavicular, portal, paraaortic, internal mammary and mediastinal lymph nodes (Fig. 1 and 2). Two patients had isolated pancreatic lesions without associated lymphadenopathy and the pancreatic duct was dilated in 2 patients. In one patient (case 4) the presence of hepatic lesions further caused diagnostic confusion with metastatic pancreatic malignancy. However, ultrasound-guided FNA from the hepatic lesions also revealed granulomatous inflammation, confirming the diagnosis of tuberculosis. The cytological analysis of the aspirated material revealed granulomatous inflammation in all the patients whereas caseous necrosis was seen in 3/5 (60%) patients. None of the patients had acid-fast bacilli seen on Ziehl-Neelsen staining. No complications of EUS-FNA were noted.

On PET-CT, the pancreatic mass lesions, liver lesions as well as the lymph nodes were intensely fluorodeoxyglucose (FDG) avid, with SUVmax values ranging from 6 to 22 (Fig. 3). Interestingly, all these patients had evidence of invasion of vascular structures, also causing diagnostic confusion with locally advanced or metastatic pancreatic malignancy. Of these 5 patients, 3 had involvement of the portal vein and superior mesenteric vein, and 2 had involvement of the hepatic artery (Table 1). The diagnosis of vascular invasion was confirmed both on CT and EUS (Fig. 4). Intraabdominal collaterals because of splenoportomesenteric vessel involvement were seen in two patients but none of the patients had esophagogastric varices.

All the patients were treated with standard 4-drug anti-tubercular therapy and showed response, with disappearance of pain and jaundice within 2 weeks and improvement in liver function tests. Moreover, the vessels that were seen on initial imaging to be infiltrated by the mass were found to be normal on follow-up ultrasound. However, the intraabdominal collaterals seen initially could also be seen on follow-up imaging. The hyperbilirubinemia improved with anti-tubercular therapy alone and no biliary interventions were needed. No toxicity of ATT was observed in any patient.
Discussion
Pancreatic tuberculosis is an uncommon condition usually seen in developing countries but has also been reported with increased frequency from the western world [6]. It usually afflicts the region of the head of the pancreas and is often misdiagnosed as pancreatic malignancy and may result in unwarranted pancreatic resections [1,2]. Even on EUS, pancreatic tuberculosis is not distinguishable from pancreatic malignancy and presents as hypoechoic lesions as in malignancy [1]. Even on FDG-PET, tuberculosis closely mimics pancreatic malignancy and the standardized uptake values can be as high as those for malignant lesions [1,7]. Vascular invasion of the abdominal vessels is often regarded as a feature of locally advanced malignancy. One of the studies has reported this as a point of distinction between pancreatic tuberculosis and pancreatic malignancy [3].

Previously, only a few case reports have recognized vascular involvement in patients with pancreatic tuberculosis [4,5,8]. In the present series, both arterial as well as venous involvement was observed in patients, and 2 of 5 patients with splenoportomesenteric vessel involvement had intraabdominal collaterals, thereby suggesting significant vessel involvement leading on to impairment of venous circulation. Also, all patients had an excellent response, with resolution of pain and jaundice with standard weight-based anti-tubercular therapy. Moreover, the involved vessels appeared normal on follow-up imaging.

As there are no distinctive clinical, laboratory or radiological features, including vascular invasion, for distinguishing pancreatic tuberculosis from pancreatic cancer, histopathological or cytological confirmation is necessary for establishing the diagnosis of pancreatic tuberculosis. Percutaneous imaging- or EUS-FNA-guided sampling for staining, cytology, bacteriology, culture and polymerase chain reaction assay is essential for establishing the diagnosis of pancreatic tuberculosis [1,9,10]. The microscopic features of tuberculosis observed on cytology are caseation necrosis, granuloma and presence of acid-fast bacilli. In our earlier study we found that the majority of patients with pancreatic tuberculosis (83.3%) had granulomas, with acid-fast bacilli being seen in only 1 of 6 (16.7%) and culture for Mycobacterium tuberculosis being positive in 1 of 2 patients (50.0%) tested [1].

Another important and controversial issue is the question of identifying patients with a pancreatic head mass who should undergo EUS-FNA. It is widely accepted that patients with an unresectable mass or patients who are poor surgical candidates should undergo FNA before deciding upon radiotherapy or chemotherapy [11]. However, the issue of doing FNA in patients with a resectable pancreatic head mass is more controversial. The proponents of not doing FNA argue that tissue diagnosis is not going to alter the management and therefore is not necessary and will also put the patients at risk of complications. Their argument is further supported by the fact that the sensitivity of EUS-FNA ranges from 85-90%, thus having up to 15% false negative results [11]. The supporters of doing FNA argue that histological diagnosis before surgery may alter management as certain disorders like lymphoma, small cell metastasis, and tuberculosis do not need surgery. They also suggest that EUS-FNA offers the opportunity to visualize the pancreatic mass, judge its relation with surrounding vessels, and also obtain a tissue diagnosis without the risk of tumor seeding along the needle tract as well as with very low complication rates [12]. Moreover, some surgeons and patients would like to have a definitive diagnosis of malignancy before undergoing major surgical resections. We feel that the practice of doing EUS-FNA in resectable pancreatic head masses should be based upon local experience and FNA may be avoided in patients having a clinical setting of pancreatic adenocarcinoma. But the centers with a high frequency of pancreatic tuberculosis, especially the ones in tropical countries like ours, should adopt the practice of doing FNA in all cases, as there are no distinctive clinical, laboratory or radiological features for distinguishing pancreatic tuberculosis from pancreatic cancer and a correct pre-operative histological diagnosis can avoid unnecessary surgery.
In conclusion, pancreatic tuberculosis is a potential mimic of invasive pancreatic malignancy and the presence of vascular invasion does not distinguish one condition from the other.
Towards Zero-Emission Refurbishment of Historic Buildings: A Literature Review
Nowadays, restoration interventions that aim for minimum environmental impact are conceived for recent buildings. Greenhouse gas emissions are reduced using criteria met within a life-cycle analysis, while energy saving is achieved with cost-effective retrofitting actions that secure higher benefits in terms of comfort. However, conservation, restoration and retrofitting interventions in historic buildings do not have the same objectives as in modern buildings. Additional requirements have to be followed, such as the use of materials compatible with the original and the preservation of authenticity to ensure historic, artistic, cultural and social values over time. The paper presents a systematic review—at the intersection between environmental sustainability and conservation—of the state of the art of current methodological approaches applied in the sustainable refurbishment of historic buildings. It identifies research gaps in the field and highlights the paradox seen in the Scandinavian countries that are models in applying environmentally sustainable policies but still poor in integrating preservation issues.
Introduction
The renovation potential of buildings in the European Union (EU) is huge. Up to 110 million buildings could be in need of renovation [1], as 35% of the EU's buildings are over 50 years old and, in Europe, there is a slow replacement rate [2].

In the existing built environment, a historic building (HB) is a single manifestation of immovable tangible cultural heritage that does not necessarily have to be a heritage-designated building [3,4]. The historic buildings (HBs) that are not listed or fully protected by countries' legislation may have a significant cultural value in identifying the form of cities, and play a significant role in providing a sense of identity to the community. However, existing materials, building structures and envelope design may limit the choice of interventions to be applied, while the restraints on thermal-performance upgrades may limit their cost-effectiveness. This means that, compared to recent buildings, these interventions are more demanding in terms of maintenance and adaptation and more challenging for energy-saving during the operational stage.

Nowadays, the preservation of historic buildings is at risk, not only from the natural weathering of their materials but also from the convenience of rebuilding instead of restoring, or of developing renovation methods tailored to modern buildings. The topic has recently gained a lot of attention, including the first achievements in planning and executing preservation, protection, maintenance and restoration of immovable cultural heritage in a standardised way [3].
In recent years, several databases (e.g., the Odyssee database used by the European Environment Agency (EEA) [5]), assessment methods (e.g., Building Sustainability Assessment (BSA) [6]) and modelling and evaluation tools (e.g., the SURE Indicator Tool [7]) applicable to different stages of the refurbishment process have been created. In addition, different sustainability certification systems to assess building performance have been developed. The most important at European level are:
• BREEAM (Building Research Establishment Environmental Assessment Methodology), leading in the EU market (80% of all the EU-certified sustainable buildings) but mostly used in the United Kingdom, where it was created in 1990 [8];
• LEED (Leadership in Energy and Environmental Design), developed in the USA in 1998 [9];
• HQE (High-Quality Environmental), developed in France in 1992 [10];
• Miljobyggnad (environmental buildings), created in Sweden in 2005 [11]; and
• the DGNB (German Sustainable Building Council) system, developed in Germany in 2007 [12].

These tools apply a rating method to compare different options in new, converted or renovated buildings; for example, to assess the improvements in energy and materials before and after refurbishment. However, their scoring methods are actually not applicable to the conservation of HBs, as they are not designed to highly rate: (i) the multiple values of immovable cultural heritage; (ii) the significant embodied energy savings within this building stock; and (iii) the energy performance targets achievable through refurbishment.

Decisions on conservation, restoration and retrofitting interventions in HBs need to take into account not only the aspects mentioned in the above paragraph but also a broader range of benefits accounting for historic, artistic, cultural and social values, or the preservation of authenticity and the use of materials compatible with the originals. In such cases, reversible techniques are preferable because, if proven to be inefficient or of low durability over time, they can be replaced without damaging the original material or decreasing artistic and historical value. However, reversible techniques (i.e., maintenance and preservation actions) do not always solve existing restoration problems that require higher levels of interventions of the irreversible type.

Is it possible to save HBs by implementing sustainable-refurbishment actions? What are the existing methods used by heritage scientists, environmental engineers and, generally, decision-makers to plan correct and effective sustainable interventions? Are the two main research communities working on these objectives? What are the gaps in knowledge?

This paper puts into the sustainability specialists' and conservators' debate the potential conflict between the need to meet environmental targets (particularly greenhouse gas emissions, e.g., the objective of a 20% energy-saving target by 2020 [13]) and the need to retain cultural heritage values and resources (Section 1, Introduction). The aim is to clarify such issues through a systematic literature review (Section 2, Methodology). The results indicate a need for knowing, characterizing and summarizing the existing methodological approaches to cultural heritage safeguarding and the CO2-saving potential linked to refurbishment (Section 3, Results). Finally, Section 4 (Discussion and Conclusions) identifies the gaps in the methodological approach that must be addressed in the future. It also highlights the current situation in the Scandinavian countries, which are meritorious and a model in applying sustainable policies but nonetheless poor when it comes to integrating preservation issues.
Methodology
In research studies, there is a variety of methods that can be applied during a literature review, and the choice of the appropriate one is a delicate process because the use of different methods in the same field may appear to have contradictory outcomes [14]. The topic of "sustainable refurbishment of historic buildings" involves different research communities and asks for a review of large bodies of information from different fields. For this reason, the systematic literature review method was selected and applied at the junction between the environmental sustainability and the heritage sectors, as this method guarantees a proper mapping of different areas of knowledge and of relevant research gaps and uncertainties, and highlights research needs properly [15].
Selection of Publications
Identification and counting of existing research publications in the field of sustainable refurbishment of historic buildings was done using the online Elsevier database, Scopus. This platform was selected because it is the world's largest abstract and citation database of peer-reviewed literature, i.e., scientific journals, books and conference proceedings, with over 22,000 titles from more than 5000 international publishers [16]. The interests of the two main research communities involved, sustainability and refurbishment specialists, drove the choice of the two initial sets of keywords in the search, using one set for each community. The first set was created to identify the publications related to sustainable methodologies applied to historic buildings by using the keywords "sustainab*" AND "method*" AND "histor* build*", while the second set targeted results related to interventions aiming at the preservation of historic buildings by using the keywords "preserv*" AND "interven*" AND "histor* build*". The two sets have in common only the category of analysed buildings, i.e., historic buildings, while they differ for the rest. The keywords were written keeping the root of the word and adding the asterisk symbol (*) after it to include all the grammatical forms of the word. As the research topic is quite new, the search was performed for scientific publications from the year 2000 until the present day (search performed in September 2017). The search returned a total number of 274 publications, of which 118 documents resulted from the first set of keywords (sustainability field) and 156 documents from the second set (preservation field). After a first scan, the total number was reduced to 246, removing 9 documents not written in English, 9 duplicate documents, and 10 lecturers' notes or conference proceedings' books. This final list was subject to a document analysis in terms of general characteristics, contents, gaps and needs. The list of the publications considered for the review is provided in the supplementary file.
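To make the screening procedure concrete, the sketch below applies the two wildcard keyword sets, the 2000-2017 time window and the exclusion criteria described above to a small list of invented bibliographic records; the record fields and example entries are assumptions for illustration, not the actual Scopus export format.

```python
import re

# Hypothetical bibliographic records; a real Scopus export would carry more fields.
records = [
    {"title": "Sustainable methods for historic building retrofit",
     "abstract": "A methodology for the sustainability assessment of historic buildings.",
     "year": 2015, "language": "English", "type": "Article"},
    {"title": "Preservation interventions in historic buildings",
     "abstract": "Interventions aimed at the preservation of a historic building.",
     "year": 1998, "language": "English", "type": "Article"},
]

# The two keyword sets, written as regular expressions that mimic the
# truncation wildcard (*) of the Scopus query.
SET_SUSTAINABILITY = [r"sustainab\w*", r"method\w*", r"histor\w*\s+build\w*"]
SET_PRESERVATION = [r"preserv\w*", r"interven\w*", r"histor\w*\s+build\w*"]

def matches(record, keyword_set):
    """True if every keyword pattern of the set occurs in the title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return all(re.search(pattern, text) for pattern in keyword_set)

def screen(records):
    """Apply the inclusion and exclusion criteria used in the review."""
    selected, seen_titles = [], set()
    for rec in records:
        if not 2000 <= rec["year"] <= 2017:          # search window
            continue
        if rec["language"] != "English":             # language criterion
            continue
        if rec["type"] in {"Lecture notes", "Proceedings book"}:
            continue
        if rec["title"].lower() in seen_titles:      # drop duplicates
            continue
        if matches(rec, SET_SUSTAINABILITY) or matches(rec, SET_PRESERVATION):
            seen_titles.add(rec["title"].lower())
            selected.append(rec)
    return selected

print(screen(records))  # only the 2015 record passes all criteria
```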
Analysis of Publications
The first level in the analysis, i.e., the general characteristics of each document, was retrieved by reading the abstract in order to identify information such as the geography, type, year and field of publication. The classification of the documents regarding their discipline served as an input for the second level of analysis, i.e., the content characteristics. Within this level, the documents were grouped using the scheme in Figure 1. They were categorised according to the intervention-driving factor, i.e., sustainability or the measures to improve the performance of the building. When one document was judged to belong to more than one category, it was assigned to the most relevant field by the authors. From these two main driving factors (orange colour in Figure 1), more precise categories of contents were recognized (green colour in Figure 1), and the classes of environmental (impact) and refurbishment (process), the focus of our paper, were selected for further review. This deep review was the third and last level of analysis, i.e., the content's characteristics. It consisted of full-text readings of the papers assigned to the environmental and refurbishment green boxes (Figure 1), in order to understand the objectives and the authors' judgement and to track future research needs. Specifically, research products focused on methodological approaches (blue cell in Figure 1) were the ultimate objective of this review, as the base on which to build new and effective tools for planning the sustainable refurbishment interventions of HBs in Scandinavian countries.
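A schematic of this three-level grouping is sketched below; the category labels are simplified stand-ins for those used in the review, and the record structure is invented for illustration.

```python
# Schematic of the three-level content analysis sketched in Figure 1.
# Level 1: driving factor; level 2: field; level 3: type of contribution.
DRIVING_FACTORS = {"sustainability", "refurbishment"}
FIELDS = {
    "sustainability": {"environmental", "social", "economic"},
    "refurbishment": {"energy efficiency", "reuse", "intervention process"},
}
CONTRIBUTIONS = {"literature review", "methodological approach",
                 "case study", "management process"}

def classify(record, driving_factor, field, contribution):
    """Attach the three-level classification to a record, validating each level."""
    assert driving_factor in DRIVING_FACTORS
    assert field in FIELDS[driving_factor]
    assert contribution in CONTRIBUTIONS
    record.update(driving_factor=driving_factor, field=field,
                  contribution=contribution)
    return record

doc = {"title": "Energy retrofit of a historic school"}
print(classify(doc, "refurbishment", "energy efficiency", "case study"))
```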
Geography of Publications
The geographical distribution of the documents is defined taking into account the continent and the country of the first author's affiliation. From a screening of the entire list, it can be seen that 79% (n = 193) of the documents are published by researchers from the European continent, 10% of the documents (n = 25) are published in Asia, and each of the other continents has produced less than 5%. This result reflects the effort and funding that the European Commission is investing in the Framework Programmes (FP) for Research and Technological Development in order to develop innovative and effective ways to preserve its cultural heritage. In fact, over the last few decades, the largest EU-funded research initiatives, such as the Noah's Ark [17], Climate for Culture [18], EFFESUS [19], 3EnCult [20] and MOVE [21] projects, have demonstrated valuable methodological approaches in the cultural heritage (CH) protection field.
It is interesting to examine the results within Europe. Almost half of the relevant European literature (45%) is published in Italy (n = 86), followed by the United Kingdom with 11% (n = 22), Spain and Turkey with 6% (n = 11) each, the Czech Republic with 5% (n = 10), and other countries with less than 10 publications. Regarding northern Europe, the number of publications is very low, with two documents published in the Scandinavian countries (both of them part of the European project EFFESUS [19]) and two documents published by researchers affiliated with the Baltic countries. The results show that the topic is still unexploited and more research should be conducted on the green refurbishment of historic buildings in northern Europe. The geographical distribution is given in Table 1.
Type of Publication
The search has shown that documents were written in all forms of scientific literature, with the journal article being the most common genre (128 documents, 52%). As journal articles are expected to have top-level quality due to rigorous peer-review processes before publication and a larger impact in the research community, they received most attention during the literature-review process. The percentage of publications related to conferences is also considerable, with 43% (n = 109) of the documents categorised as conference papers. The other types of publications, such as books or book chapters, account for less than 5%.
Year of Publication
The sustainable refurbishment of historic buildings is a multi-disciplinary topic that has received a lot of attention among researchers in recent years. In fact, while the number of publications within this field was quite low (n = 2) in 2000, over the last few years it has increased significantly, reaching a maximum in 2015 with 38 publications, followed by 2016 with 35. Figure 2 shows the number and the categories of publications per year. The graph highlights an increased number of publications in 2008 with regard to the set of search keywords related to interventions (i.e., "preserv*" AND "interven*" AND "histor* build*"). From the data analysis, the increase in this year mainly came from publications related to the International Conference on Structural Analysis of Historic Construction (SAHC08). In addition, regarding sustainability issues, the number of publications reflects three fruitful series of conferences: the Central Europe towards Sustainable Building (CESB) events held in Prague, Czech Republic in 2010, 2013 and 2016. The 2015-2016 maximum in the number of publications is not the result of a separate event but rather the effect of the EU framework programme FP7-Environment. This EU framework, over a 6-year period (2007-2013), produced a general increase in consciousness of environmental technologies to be used in CH protection and of the necessary knowledge, which resulted in a rise in the number of publications a few years later. Publications in 2017 are counted until early September, the date when the search was concluded.
Field of Publication
The sustainable refurbishment of historic buildings has embraced researchers from different fields and disciplines. The grouping of documents according to their field of publication is reported in Figure 3. In about 34% (n = 84) of the listed documents (see supplementary file), the main driver of the publication is the refurbishment process, from maintenance (preservation, conservation), i.e., low-level interventions, to renovation and/or restoration, i.e., high-level interventions. Within this group of documents (primary driver: refurbishment), 55% (n = 47) of the publications focus on energy efficiency and the energy retrofit of historic buildings as part of the global effort to reduce energy consumption [13,20]. A wide variety of passive and active interventions were used to achieve such energy goals, e.g., passive interventions directed to the building envelope, insulation of roofs and walls, introduction of high-performance windows, and active measures directed at energy-saving improvements linked to equipment maintenance, system controls, change in lighting, and heating, ventilation and air-conditioning (HVAC) systems. Ten documents (i.e., 12% within this driving factor) are, instead, related to the revitalization/reuse of abandoned buildings or their change of use.
The second large sub-group (Figure 3, yellow colours) of listed publications has sustainability issues as its main driver (n = 62, i.e., 25%), in accordance with the three main pillars of sustainability: environmental (n = 30, i.e., 12%), social (n = 23, i.e., 9%) and economic (n = 9, i.e., 4%). Although this sub-group is strictly connected with the first, this division was undertaken to maintain the focus of the paper, i.e., to analyse the union and intersection between the physical process of the intervention (sub-group 1, i.e., refurbishment) and the impact of the intervention (sub-group 2, i.e., sustainability). The environmentally sustainable-related documents mainly emphasise the reduction of greenhouse-gas emissions in the construction sector as part of worldwide action towards a decarbonised society [22]. Research in this sector is also devoted to the assessment of the impact of climate change on historic buildings, following the general increased awareness of the topic and the call for action by the EU community in this field [17,18]. In the review, 15 documents (6%) that treat climate change-related research were identified.
The third large sub-group (i.e., Engineering in Figure 3) includes research contributions dealing with the integrity of the structure and its ability to resist natural ageing and decay. This category of publications has a predominantly engineering and technical character and includes several disciplines, such as structural engineering, geological and geotechnical issues, material sciences, and computer technologies. The number of publications listed in this category is comparable with those regarding sustainability (n = 62, i.e., 25%). The result points to two aspects:
1. Conservation and, above all, restoration interventions are conducted when HBs are in a situation of "emergency", i.e., when the risk of partial or complete loss of the building is high due to instability, leaning, rising damp, damage of building materials through moisture, corrosion, salt crystallization, etc.;
2. The value of an HB is often perceived by stakeholders, owners and users as intimately connected with the use and technical performance of the building itself [23].
The last sub-group (i.e., Hazard in Figure 3) refers to publications that intend to preserve tangible CH under natural hazards and catastrophic events (38 publications, i.e., 16% in total). Among those, 33 publications discuss the integrity of historic buildings during and/or after earthquakes. This result reflects the location of the majority of case studies in the Mediterranean Basin, which has a high risk of seismic activity. Although there is a diversity of publications concerning this topic, the majority of them discuss strengthening interventions before the hazardous event, e.g., base isolation, fibre polymers and other non-invasive techniques, with the help of computer simulations and laboratory testing. Only a few of them (3) are focused on post-disaster interventions and efforts to restore as much as possible of the initial buildings. In the list, there are also research documents that aim at the stability of buildings during other hazardous events such as fire (2), erosion (1), floods (1) or wind (1).
Type of Contribution
Research outcomes dealing with refurbishment processes and the environmental sustainability pillar were identified with respect to four types of contributions, and are reported in Figure 4. The literature review is the least-used approach when working with sustainable interventions in heritage buildings (i.e., the smallest category, with nine listed documents). By contrast, it is quite common to present research results as descriptions of the methodological approaches to be applied during restorations (the largest category, with 48 documents). It is also common (n = 34) to use the analysis of data and information gathered on specific case studies, sometimes supported by computer simulation, to suggest generalized conservation and/or energy-retrofitting actions on similar buildings in comparable geographical conditions.
Finally, the last type of contribution is mostly focused on the management process, including communication methods and channels used to involve different types of stakeholders (n = 23). This proves the importance, both in the heritage and sustainable sectors, of keeping decision-makers, owners, and local communities involved in HB conservation projects. Concern about the social aspect from the beginning may positively influence the planning of the interventions (i.e., maintenance, preservation, and refurbishment/restoration), as well as guarantee the long-lasting and effective application of advice coming from the research community.
Methodological Contributions
Documents presenting methodological approaches (48 papers, marked with italic in the supplementary file) to apply during refurbishment processes were further screened to pinpoint achievements and gaps in the field (Figure 5a).The first document in this category was published in 2008.This shows how research into developing a methodological approach is still in its early phase and has recently gained increasing interest.About 54% of these documents (n = 26) describe methodological approaches that deal with intervention processes, while 31% of them (n = 15) focus on energy-retrofit measures and energy-efficiency evaluation after the refurbishment process (e.g., [24][25][26][27]).Four publications (8%) present conservation methods that take into account the effects of future climate-change scenarios [28,29] and the evaluation of microclimate conditions [30,31] in the building.Finally, two documents primarily focus on the carbon footprint calculation after intervention [32,33], and one publication discusses the methodology in the decision making process [34].
The 26 documents that describe a methodological approach in maintenance and refurbishment were further categorised according to the levels of intervention (Figure 5b). Three categories were used: low (preservation and conservation), middle (refurbishment and rehabilitation), and high (renovation and restoration). The actions of the first category refer to maintenance interventions, while the middle- and high-level interventions are performed during deeper adaptation processes. From the analysis, 14 documents (54%) describe methods referring to a low level of intervention, i.e., preservation (e.g., [35,36]) and conservation (e.g., [37,38]), using the rule of minimum intervention and, as much as possible, non-destructive techniques. Five publications (19%) have mid-level interventions (i.e., refurbishment, rehabilitation) as their primary driver (e.g., [39-41]), while seven documents (27%) present methodological approaches applied to deeper interventions and the full restoration of decayed or abandoned buildings (e.g., [42-45]). A further analysis was made regarding the type of methodological approach used to achieve the sustainable refurbishment of historic buildings. The results underline a huge variety of approaches used in the field in recent years. The most common approach was the multi-criteria assessment method, applied in buildings both for energy-efficiency improvement [46] and for interventions [35,44,47]. Using this assessment, decision-makers have the ability to rank different interventions in order to select the most effective and appropriate actions. Criteria that are potentially in conflict, and that create awareness about conservative interventions, can also be identified. Particular methodological approaches were: maturity matrix assessment [48], multi-attribute value theory (MAVT) [42], the methodology for energy-efficient building refurbishment (MEEBR) [25], the functionality index [39], or other methods that require the use of computer simulation or numerical methods. This diversity and heterogeneity of tools shows the importance of using cross-disciplinary, multi-criteria, multi-index, multi-level procedures to develop an effective method/tool able to plan and assess different levels of sustainable interventions depending on the conservation needs, type of building, and climate conditions.
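As an illustration of how a multi-criteria assessment can rank alternative interventions for decision-makers, the sketch below aggregates criterion scores with a simple weighted sum; the criteria, weights, options and scores are invented for demonstration and do not come from the reviewed studies.

```python
# Illustrative multi-criteria ranking of refurbishment options (weighted sum).
# Criteria, weights, options and scores are invented for demonstration only.
criteria_weights = {
    "energy_saving": 0.30,
    "heritage_compatibility": 0.35,  # reversibility, authenticity
    "cost": 0.20,                    # higher score = cheaper intervention
    "durability": 0.15,
}

# Scores on a 0-10 scale for three hypothetical interventions.
options = {
    "internal wall insulation": {"energy_saving": 8, "heritage_compatibility": 4,
                                 "cost": 5, "durability": 7},
    "secondary glazing":        {"energy_saving": 5, "heritage_compatibility": 8,
                                 "cost": 6, "durability": 6},
    "HVAC control upgrade":     {"energy_saving": 6, "heritage_compatibility": 9,
                                 "cost": 7, "durability": 5},
}

def weighted_score(scores, weights):
    """Aggregate the criterion scores of one option into a single ranking value."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(options,
                 key=lambda name: weighted_score(options[name], criteria_weights),
                 reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(options[name], criteria_weights):.2f}")
```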
Further Findings
Further analyses of the data gathered from the listed papers allowed the type of building and level of applied interventions to be determined, as well as the building materials subject to alterations.For example, no method was identified that can tailor sustainable interventions on buildings' façades, although in HBs the front walls are often representatives of much of the aesthetic and architectural value and constantly exposed to climate and anthropic-induced decay.The majority of the methods (60%, i.e., n = 29) (e.g., [44,47]) were applied to single (as a whole) buildings while the rest (40%, i.e., n = 19) to interventions at district level (e.g., [34,49]) (see Figure 6a).Regarding the occupancy of the building, about 33% focus on residential buildings (n = 16, e.g., [48,50]), 17% on religious buildings (n = 8, e.g., [45,51]), 10% on educational buildings (n = 5, e.g., [24,25]), 8% on museums (n = 4, e.g., [31,32,46]) etc. (see Figure 6b).It is interesting also to analyse the type of the materials that constitute the building subject to intervention.More than 40% (i.e., n = 19) are brick buildings that require interventions to improve mortar and plaster conditions and to reduce energy consumption through the addition of insulation.
Sixteen documents (i.e., 33%) focus on the refurbishment of stone buildings, with interventions directed towards thermal insulation of the walls and application of chemical agents against moisture, while less than 10% (i.e., n = 3) of documents propose suggestions for the refurbishment of timber buildings. The findings are summarised in Table 2, with some examples of the most common interventions performed.
Discussion and Conclusions
This review offers insights into the state of knowledge on sustainable refurbishment of HBs and reports how these topics are being explored globally.Its ultimate aim is to influence scholars belonging to the two communities of experts on sustainability and conservation of cultural heritage by further increasing science-based knowledge within the field and influencing decision-making in safeguarding heritage in a society that demands better energy management.This systematic review shows that such topics were incorporated in research agendas since 2006, demonstrating growing interest with an increasing production of research papers.However, current research is geographically limited to Europe and still has some significant gaps in knowledge, as recognized and analysed in the following sub-section.
Knowledge Gap and Research Needs
First, almost all the published methodological approaches evaluate the actual performance of the buildings and suggest the application of interventions to improve their energy performance and related environmental impact.Environmental sustainable improvements are always assessed during the operational phase i.e., after the conclusion of interventions.No methods are proposed to assess the environmental impact of the refurbishment process itself.
This identified gap is driving our future work on the assessment of the environmental footprint of different refurbishment scenarios by developing a methodological tool that will respect conservation principles i.e., the adoption of minimal technical interventions (avoiding unnecessary replacement of historic fabric), compatibility, and reversibility.The refurbishment scenarios, while ensuring the best preservation, have the potential to become a powerful tool in optimizing the re-use of original materials, planning the time of intervention, and reducing its cost.In fact, they can be developed to take advantage of embodied energy, to recognize areas most vulnerable to climate-induced decay, and to focus interventions on minimum waste production, and thereby on the whole to increase a building's lifetime.
Second, all the published methods for refurbishment processes are fragmentary with a focus on different stages or procedures and based on the partial needs of different stakeholders.In our perspective, there is a call for a multi-disciplinary, inclusive method able to confront and link different issues that can help stakeholders in:
• revealing and improving the protection of the historic, cultural, and socio-economic value of the building;
• using a life-cycle assessment (LCA) approach to find optimal combinations that maximize the reuse of materials and their lifetimes, thus reducing the carbon footprint of interventions.
Such inclusive and effective sustainable-refurbishment processes can take place given the close cooperation of professionals from different fields such as urban planners, architects, engineers, heritage scientists, conservation specialists, buildings owners, and decision-makers involved in heritage management.From the perspective of planning a long-term building management strategy, its use provides benefits for both the conservation of HBs and the reduction of environmental impact.
Due to the complexity of the field, the methodology will first be applied to regions with similar climatic conditions and to historic buildings with similar architectural attributes.Later, it will be further developed into a tool to be applied in different built environments and places.
Third, the research should be performed in a broader spatial context for monumental buildings, i.e., extending the method to the neighbourhood scale, as this would result in time and cost savings in adaptation processes.In a district perspective, it is more efficient and economical to categorise the buildings and give solutions for each category than to treat them one by one.Moreover, in towns and cities, buildings with no outstanding historic and architectural value by themselves may, taken as a whole, represent an important part of the country's heritage [52].This wider-scale approach of increasing the number of buildings subject to refurbishment would enhance the achievement of ambitious energy-efficiency targets and would significantly improve the living conditions of the inhabitants.Furthermore, it would upgrade the image of the cities and the incomes through leisure and tourism.
The Scandinavian Paradox
Finally, this review pointed to the Scandinavian paradox.In Norway, more than 300,000 buildings from before 1900 have been identified, and about 6000 buildings are protected under the Cultural Heritage Act [53].In Denmark, the number of protected buildings as of 2016 was about 7000 [54]; while in Sweden there are 1500 sites identified as protected (containing many more buildings) [55].However, the number of papers published in international peer-reviewed journals from researchers affiliated to Scandinavian institutions was very low and they all resulted from the EFFESUS EU project.It was in the interest of the authors to underline the contribution obtained by Scandinavian countries in the results depicted by this literature review.This accentuates the need for future research work and broader dissemination strategies to develop a methodological approach that targets zero-emission refurbishment of historic buildings.
The major publications from the Norwegian governmental institutions that deal with the preservation of cultural heritage, such as the Norwegian Institute for Cultural Heritage Research (NIKU) [56] and the Directorate for Cultural Heritage (Riksantikvaren) [57], are issued as reports and, therefore, cannot be traced in a Scopus database search. Moreover, some of them are written in Norwegian, which makes them not easily accessible to researchers of other countries. However, the database search has indicated that even Scandinavian research bodies have devoted very little attention to new methods to effectively maintain and refurbish historic buildings through conservative actions and/or to developing environmentally friendly, science-based tools to increase such practice. The existing publications are mainly national reports that, although they contain valuable results in the field [58], have limited dissemination potential due to the language and type of publication.
The literature review has shown that Norway is keeping to traditional, established refurbishment and maintenance methods without asking for innovative, science-based approaches. Conservators and researchers in this field want to build further knowledge about maintenance and restoration, collect information on what has been done over the last few years in the usage of traditional handicrafts, and develop "new" knowledge concerning the use of different traditional materials (e.g., results from the "Stave Church Preservation Programme" funded by Riksantikvaren over the 2001-2015 period [59]).
Research on such "new" knowledge concerning the use of traditional material is required in Scandinavia to preserve wooden historic buildings that have high maintenance demands.A detailed knowledge is required to understand the (i) properties of original, aged materials, restored materials and new/created composite materials (e.g., assembling new and aged materials); (ii) changes in building performances (e.g., air-exchange rates, thermal transmission) that include the aesthetic and physical impacts on the existing structure; and (iii) alterations in decay rates or duration of interventions.
An international research project that involves the Norwegian University of Science and Technology (NTNU), NIKU, Riksantikvaren, the Getty Conservation Institute, and the Polish Academy of Sciences focuses on the preservation of Stave Churches in Norway and historic wooden buildings in the Scandinavian countries. In the next few years (2018-2021), this will answer some of the questions about the sustainable management of heritage buildings with a long-term perspective [60]. On the other hand, Norway and the other Scandinavian countries are the most active countries aiming at zero emissions for new construction [61,62] or in developing energy-retrofitting measures for existing buildings, even at a large scale (e.g., district level) [19,63]. This means that:
• Sweden is one of the countries in the EU that, since 2005, has created an energy and sustainable certification scheme for commercial and residential buildings [11], while the large stock of residential buildings in Europe is not certified yet [64].
• In the Scandinavian countries, an increasing number of new constructions, residential or not, are targeted to be nearly zero-energy buildings before 2020, i.e., to balance any CO2 emission caused by the use of electricity (or other energy carriers) during the building's operation with onsite generation of renewable energy [65].
• In Norway, projects involving dozens of public and industrial partners as well as a large number of pilot projects have been funded since 2009 with industry and governmental support to enable the transition to a low-carbon society. These research centres are: the Research Centre on Zero-Emission Buildings (ZEB) 2009-2017 [61] and the Research Centre on Zero-Emission Neighbourhoods in Smart Cities (FME ZEN) 2016-2024 [62].

The energy-efficiency renovation rate in Norway is at the maximum level compared with that in the 13 countries of the European Union where data are available. It reaches 2.5% a year, while in other countries it varies in a range from 0.5% to 2.0% a year [66-68], with a typical figure being 1% (about 250 million m²) per year [69]. If retrofit actions are blindly applied to historic buildings without complete knowledge of the challenges involved, in a short time uncontrolled decay will increase the risk of losing valuable historic buildings and will require a huge economic effort to repair the damage caused.

Supplementary Materials: The following are available online at www.mdpi.com/2075-5309/8/2/22/s1.
Figure 1. Flow diagram for the content review of the documents. Step 1 groups the documents according to the focus of publication (orange cells), step 2 categorizes them according to the field of publication (green cells), and step 3 identifies the type of contribution (blue cells).
Figure 2. Distribution of documents by year of publication with indication of some major projects and conferences in the field that have influenced the growth of interest in this research topic.The Norwegian research centres for Zero-Emission Building (ZEB) and on Zero-Emission Neighbourhoods in Smart Cities (FME ZEN) are also highlighted.
Figure 3. Distribution of the documents by field of publication i.e., content characteristics.
Figure 4. Distribution of the environmental and refurbishment documents by type of publication.
Figure 5. Findings from the systematic literature review: (a) categorisation of the documents presenting methodological approaches by primary driver; (b) categorisation of the documents describing a methodological approach in maintenance and refurbishment by level of intervention.
Figure 6.Findings from the systematic literature review: (a) categorisation of the scale of intervention at building (blue) or district level (orange); (b) categorisation of the building by its function.
Table 1. Distribution of the publications by continent within the two main research communities involved, i.e., the sustainability and conservation specialists.
Table 2. Categorisation of publications by primary building constructive material, number of related publications, and most common performed interventions.
Structuring targeted surveillance for monitoring disease emergence by mapping observational data onto ecological process
An efficient surveillance system is a crucial factor in identifying, monitoring and tackling outbreaks of infectious diseases. Scarcity of data and limited amounts of economic resources require a targeted effort from public health authorities. In this paper, we propose a mathematical method to identify areas where surveillance is critical and low reporting rates might leave epidemics undetected. Our approach combines the use of reference-based susceptible–exposed–infectious models and observed reporting data; we propose two different specifications, for constant and time-varying surveillance, respectively. Our case study is centred around the spread of the raccoon rabies epidemic in the state of New York, using data collected between 1990 and 2007. Both methods offer a feasible solution to analyse and identify areas of intervention.
Introduction
As pointed out in Microbial threats to health: emergence, detection and response [1], the degree of success of global and national efforts to create public health infrastructure with effective systems of surveillance and response is a key variable influencing the future impact of infectious diseases. According to WHO, surveillance is an ongoing, systematic collection, analysis and interpretation of health-related data essential to planning, implementation and evaluation of public health practice (http://www.who.int/ immunization_monitoring/burden/routine_surveillance/en/index.html). Surveillance plays a major role in devising public health strategies to curtail the spread of infectious diseases and early detection remains the first line of defence in preventing the emergence of novel disease outbreaks. Often, surveillance is the decisive factor in triggering early intervention [2,3], in order to avoid the higher public health costs associated with a widespread infection in the case an outbreak has gone undetected.
The definition of an epidemic/epizootic or outbreak is varied and has a long history of confusion (see Rosenburg [4] for an account of the history of the concept of an epidemic). Contemporary discussions have assumed at least two definitions of epidemic or outbreak occurrence. Childs et al. [5], for example, consider a rabies outbreak as occurring when the observed number of cases falls above a baseline for a specified number of consecutive observation periods and where the average number of cases in a given location determines the base line. They suggest an above-average reported rate at the county level for three consecutive months. The other most common definition treats any occurrence of an infectious disease as an outbreak, where it is detected in a novel geographical location and poses a significant public health threat, because of its novel appearance in that location. Throughout this paper, we adhere to this latter definition since we are concerned with uncovering appropriate surveillance strategies for detecting novel occurrences of disease.
Resources for infectious disease surveillance are always in limited supply and any strategy that provides insights into the optimal guidance of surveillance programmes is a valued addition to our public health infrastructure [6]. Guidance strategies should include the identification of both areas and populations that are at increased risk of disease exposure. This is the key idea associated with the concept of targeted surveillance (also known as risk-based surveillance) defined as a surveillance strategy that focuses sampling on high-risk populations in which specific and commonly known risk factors exist [7]. The concept of targeted surveillance was first formally introduced following the emergence of bovine spongiform encephalopathy (BSE) in the UK during the 1996 epidemic [8]. This idea is also behind the recently emerging field of model-guided surveillance [9].
In the USA, the Council of State and Territorial Epidemiologists, in collaboration with the Center for Disease Control (CDC), maintains a list of notifiable diseases constituting the National Notifiable Diseases Surveillance System. For human diseases, healthcare providers are an essential component of any surveillance programme, but their impact is significantly reduced when confronted with an epidemic of zoonotic origin. Monitoring of wildlife reservoirs is an essential component of detection but rarely undertaken routinely. What we understand of zoonotic epidemics is largely constructed from passive reporting of occurrences gleaned from haphazard and incomplete surveillance of animal populations usually as the result of an animal-human interaction [10]. For the purposes of our analysis, the reporting rate (or equivalently the detection rate) is taken to constitute the fraction of reported cases over the total number of infections. Reporting rates vary significantly over both time and space and may deviate significantly from the true underlying distribution of infections due to a variety of sources (e.g. variation in the size and extent of infection clusters, heterogeneity in human and host population densities, etc. [11]). However, these factors explain only partially the spatial and temporal heterogeneity in reporting rate. Variation in the implementation and structure of surveillance programmes can themselves be a significant source of reporting rate variation and a mapping of different levels of reporting rate and surveillance efforts across space or time can help identify specific areas in need of intervention.
A variety of mathematical models are available in the literature to describe the dynamics of infectious diseases using the generalized susceptible-exposed-infectious-removed (SEIR) modelling structure (see, for instance, [12,13] or, for a spatially continuous model, [14]), and some work has been done at estimating the reporting rates for some human diseases confering lifelong immunity [15,16], but little effort has been directed at elucidating how to incorporate reporting data into models of surveillance [17], especially from an ecological viewpoint [18]. The goal of this paper is to show how to use reporting data (both reports of positive and negative occurrences) to identify geographical areas where surveillance levels are potentially insufficient to detect outbreaks.
Our approach is intended to provide a useful tool for public health agents, who monitor critical areas for surveillance and allocate funds for increased intervention. We introduce two different methods depending on whether agents have fixed or time-varying reporting rate data. The first method is based on a simple, constant reporting rate, intended to model a constant level of surveillance over time. Considering that surveillance levels usually change as a consequence of case detection and local public health concerns, we relax this assumption in our second method, where we formulate a reporting rate that changes over time and depends on the total number of reports (positive and negative) and the estimated host population. Provided that such an estimate is moderately accurate at any given time, it is possible to track disease dynamics through a model for infectious spread. The first approach identifies a surveillance risk, whereas the second one identifies a surveillance efficacy. The concepts are not mutually exclusive, and the observed correlation between our results from the two approaches supports their mutual consistency. As a consequence, either method can be used to identify areas where surveillance levels are critical, possibly underassessed and potentially leaving an outbreak unidentified. Such evaluation relies on comparing the values of computable parameters (risk or efficacy) across different counties. From the public health standpoint, the areas identified by the method as at risk are the ones where additional resources should be allocated for targeted monitoring. The proposed models provide input for explicit assessment of which counties need active intervention by public health decision-makers.
The approach we introduce combines process-driven and observational methods. It is quite general and suitable to a wide range of infectious disease systems and datasets. Moreover, it has great potential for application to human diseases. The approach relies on good estimates of the population size and a good knowledge of the epidemiology of the disease. Both aspects are crucial, and often poorly specified. In the case of human diseases, the knowledge of the susceptible population and mitigated uncertainty about the epidemiological parameters of the disease would significantly increase the accuracy of the method. The model then can serve as a basis to improve surveillance strategies, particularly in disadvantaged regions.
For illustrative purposes, we apply our method specifically to the spread of raccoon rabies virus among its raccoon (Procyon lotor) hosts in the state of New York. Rabies, a viral encephalomyelitis specific to mammals, has been a CDC notifiable disease since the mid-1970s. Rabies has the longest extant record of reports of any zoonotic disease in the USA. Rabies virus is transmitted from one animal to another usually by a bite [19,20]. Because its transmission modality is favourable to interspecies infection, including human beings, rabies is a major public health concern. Raccoons are the major terrestrial vector of the disease in the eastern USA, though many foxes, bats and skunks carry the disease as well [10,21]. The potential risks to humans, coupled with an extensive database with high geographical resolution, exact occurrence dates and knowledge of the species of host involved, render the application particularly relevant and amenable to testing our methods and approach.
Model
We consider the dynamics of a lethal disease, as described by a compartmentalized model of susceptible-exposed-infectious (SEI) type. The model subdivides the population into susceptible (S), exposed (E; hosts that have been exposed to the virus but are not yet infectious) and infectious (I; hosts with the capability of transmitting the pathogen). The spatial resolution of the model is set at regional level (from township to state). Consequently, the computational model consists of a system of ODEs completed by suitable initial conditions,
S′ = a(S + E) − bNS − βSI − vS, E′ = βSI − bNE − σE and I′ = σE − αI,
where N = S + E + I is the total population. In the above equations, we denote by β the transmission of pathogen by contact between a susceptible and an infectious individual, by v the vaccination rate, by σ the reciprocal of the latency period, and by α the reciprocal of the life expectancy of an infectious host.
We assume a density-dependent mortality rate in the absence of the disease, bN. We denote by a the reproduction rate, which represents a yearly average, to take into account the reduced fecundity of juveniles. Seasonality is not explicitly included here, but could easily be with a time-dependent reproduction rate [22]. Moreover, we assume that only susceptible and exposed individuals are able to reproduce. Such an assumption is reasonable for a very aggressive disease in wildlife, as the expected survival of an infectious host is much too short for it to give birth or care for offspring. To show the dynamics of the epidemic model, we ran a simulation of the SEI model within one single virtual region. We report in table 1 the model parameter values that are adapted to raccoon rabies for the eastern USA and have either been drawn from published values and US Department of Agriculture sources (http://www.usda.org) or estimated indirectly. In particular, the birth rate a, the transmission rate β, the latency period 1/σ and the infectious period 1/α are taken from the literature [25][26][27][28][29]. The rate of density-dependent mortality, b, is estimated indirectly to produce a disease-free equilibrium of 27 000 individuals, corresponding to a density of 11 animals per km² (average for raccoons in the eastern USA [23]) in a region of 2457 km² (average size of a New York county, outside the five boroughs of New York City). We simulate 922 weeks of epizootic. The plot of the temporal dynamics of the full SEI model (susceptible, exposed, infectious and total population) is available in the electronic supplementary material.
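For readers who want to experiment with the single-region dynamics, a minimal numerical sketch of the SEI system is given below in Python. The parameter values are placeholders chosen only to be consistent with the ranges quoted in the text (they are not the exact entries of table 1), and the transmission rate is back-calculated from an assumed R0 of 1.4.

import numpy as np
from scipy.integrate import solve_ivp

a = 1.5 / 52          # reproduction rate per week (yearly average) - assumed placeholder
alpha = 1.0 / 2.0     # 1/alpha = infectious period of ~2 weeks - assumed placeholder
sigma = 1.0 / 5.0     # 1/sigma = latency period of ~5 weeks - assumed placeholder
N_star = 27000.0      # disease-free equilibrium (11 animals/km^2 over 2457 km^2)
b = a / N_star        # density-dependent mortality giving that equilibrium
beta = 1.4 * alpha * (sigma + a) / (sigma * N_star)   # chosen so that R0 is roughly 1.4 - assumed

def sei(t, y):
    S, E, I = y
    N = S + E + I
    dS = a * (S + E) - b * N * S - beta * S * I
    dE = beta * S * I - b * N * E - sigma * E
    dI = sigma * E - alpha * I
    return [dS, dE, dI]

sol = solve_ivp(sei, (0.0, 922.0), [N_star - 10.0, 0.0, 10.0], max_step=1.0)
N_total = sol.y.sum(axis=0)
print(f"minimum total population: {N_total.min():.0f}")
print(f"population after 922 weeks: {N_total[-1]:.0f}")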
In order to simplify the dynamics of the SEI system, we aggregate the model to a planar system in terms of the infectious individuals I and the total population N. Since S + E = N − I, by summing up the first three equations in the model (with v = 0), we get N′ = aN − (a + α)I − bN(N − I) and I′ = σE − αI. A fourth class of removed hosts could be included in a more general model, consisting of hosts that recovered from the disease or have been vaccinated. Since there is no evidence for natural recovery in rabies, which is our case study in this paper, and we do not consider vaccination at this level, the removed class is not considered. However, the following results are based on an aggregated method, and the use of an SEIR model would not affect the conclusions.
Main features of the aggregated model
The aggregated model is not in closed form owing to the presence in the second equation of the term σE. However, knowledge of the temporal dynamics of the new infectious σE is sufficient to reproduce the dynamics of the full SEI model by means of the aggregated one. If the new infectious are known as a function of time, their dynamics can be considered a source term F in the second equation of the aggregated model, which can be written in the more general form N′ = aN − (a + α)I − bN(N − I) and I′ = −αI + F(t). To support our claim, we ran a simulation of the reduced model using as a source term in the second equation the temporal dynamics of the new infectious σE, obtained by simulating the full SEI. We compare in figure 1 its dynamics with those of the aggregated model. We plot the dynamics of both the total population (figure 1a) and the number of infectious (figure 1b). In both pictures, the dashed line represents the values obtained with the full SEI model, whereas the circles represent the values obtained with the aggregated model. The numerical results confirm that knowledge of the temporal dynamics of the new infectious σE is sufficient to reproduce the SEI dynamics with the aggregated model.
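The role of the source term F can be illustrated with a self-contained sketch: a prescribed pulse of new infectious cases drives the aggregated model, and the resulting population drop is read off the trajectory. The pulse shape and the rate constants below are purely illustrative assumptions, not values from table 1.

import numpy as np
from scipy.integrate import solve_ivp

a, alpha = 0.03, 0.5                 # assumed per-week reproduction and removal rates
N_star = 27000.0                     # assumed disease-free population
b = a / N_star                       # density-dependent mortality

def F(t):
    # prescribed source of new infectious: a single pulse centred at week 100 (illustrative)
    return 40.0 * np.exp(-((t - 100.0) / 20.0) ** 2)

def aggregated(t, y):
    N, I = y
    dN = a * N - (a + alpha) * I - b * N * (N - I)
    dI = -alpha * I + F(t)
    return [dN, dI]

sol = solve_ivp(aggregated, (0.0, 400.0), [N_star, 0.0], max_step=1.0)
drop = 1.0 - sol.y[0].min() / N_star
print(f"population drop caused by the pulse: {100.0 * drop:.1f}%")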
A direct stability analysis for the aggregated model is not feasible. However, we can identify the N-nullcline, namely the set of points in the plane (N, I) where N′ = 0, which is shown in figure 2a. If the number of infectious is constant, the upper branch of the nullcline is stable, whereas the lower branch is unstable. Moreover, as expected, the persistence of infectious hosts (i.e. an endemic state) reduces the carrying capacity of the host.
Different temporal dynamics of the new infectious F entail complex behaviours of the system in terms of epidemic outbreak, including persistence and possible extinction of the host population. We simulated different temporal dynamics by rescaling the new infectious from the full SEI, as F = z × (σE), with z = 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2. The resulting trajectories in the phase plane (N, I) are plotted in figure 2b. If the growth rate of the newly infectious hosts F is too large, the population goes extinct along the bisector of the phase plane N = I (note the different scales on the axes). Otherwise, the trajectories show different levels of population drops in epidemic outbreaks, followed by a recovery.
Table 1. Coefficients of the SEI model. The natural death rate is chosen to be density dependent to provide a carrying capacity compatible with published values in the literature of 5–17 animals per km² [23,24]. The birth rate a, the transmission rate β, the latency period 1/σ and the infectious period 1/α are taken from [25–29].
Modelling detection rate for surveillance
Effective surveillance within a region amounts to the ability to identify newly infectious individuals. In the SEI model, this amounts to the correct assessment of σE, and to estimate the surveillance levels in the different counties, we need an accurate evaluation of this value. However, this value is unknown. We propose to extrapolate the value σE from the available data in a given observational window, whose length we denote by τ. Specifically, we consider the reported positive and negative cases. We denote by r+(t) and r−(t) the reported positive and negative cases at time t, respectively, and the total amounts of reports (positive and negative) over the observation window I_t = [t − τ, t] are given by R+(t) = Σ_{s∈I_t} r+(s) and R−(t) = Σ_{s∈I_t} r−(s). Note that the instantaneous reports r+(t) and r−(t) are 0 for most times t, according to the reporting frequency of the public health departments. In what follows, the dependency on time will be left out.
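Accumulating the instantaneous reports over the observation window is a simple bookkeeping step; the sketch below shows one way to compute R+ and R− for a given t and τ, using hypothetical report times and counts rather than the actual New York data.

import numpy as np

def windowed_total(times, counts, t, tau):
    """Total reports in the window (t - tau, t]."""
    times = np.asarray(times)
    counts = np.asarray(counts)
    mask = (times > t - tau) & (times <= t)
    return counts[mask].sum()

# Hypothetical report streams (weeks, number of reports) - illustrative only.
pos_times, pos_counts = [2, 5, 9, 14, 30], [1, 3, 2, 4, 1]
neg_times, neg_counts = [1, 5, 9, 20, 31], [5, 2, 7, 3, 4]

R_plus = windowed_total(pos_times, pos_counts, t=15, tau=12)
R_minus = windowed_total(neg_times, neg_counts, t=15, tau=12)
print(R_plus, R_minus)   # totals of positive and negative reports over the window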
We introduce a suitable function of the available reports that we denote by F(R+, R−), whose role is to expand the actual number of reported cases to take into account the effectiveness of the surveillance procedure, leading to the extrapolation model N′ = aN − (a + α)I − bN(N − I) and I′ = −αI + F(R+, R−).
Compatibility of the extrapolation functions
The extrapolation function F(R+, R−) has to satisfy the compatibility requirements arising from the disease dynamics under consideration. Our case study in this paper concerns raccoon rabies, which is a lethal disease for the host, killing an infected animal within two weeks of the emergence of symptoms. For a lethal disease, the total population drop (namely the percentage of animals killed by the first outbreak) is known to be related to the basic reproductive rate R0 associated with the disease [30], and can be used as a compatibility constraint. We would like to observe that estimating the population drop with this method is not needed for most human diseases, as public health data regarding the number of deaths are usually available. For the SEI model introduced earlier, the basic reproductive rate is given by R0 = βσN̄/[α(σ + a)], where N̄ = a/b is the disease-free population, while the expected population drop [30] is 1 − 1/R0. The reported values in the literature for raccoon rabies R0 lie between 1.2 and 1.4 [26]. As a consequence, a population drop between 16% and 28% can be used as a reliable compatibility constraint for the system N′ = aN − (a + α)I − bN(N − I) and I′ = −αI + F(R+, R−).
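The numerical content of the compatibility constraint is easy to check: with the relation between drop and R0 given above, the published range of R0 (1.2 to 1.4) maps onto a drop of roughly 16-28%.

for R0 in (1.2, 1.4):
    drop = 1.0 - 1.0 / R0
    print(f"R0 = {R0}: expected drop of the total population = {100.0 * drop:.1f}%")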
Modelling extrapolation
We propose here two different extrapolation functions to model surveillance efficacy that depend upon a family of parameters. The first models a constant level of surveillance, whereas the second models dynamic surveillance over time.
We base our analysis on the assumption that an outbreak actually occurred in every area featuring positive reports.
Constant surveillance
Constant surveillance in time is modelled using only the positive reports R+, together with a linear extrapolation function F_const(R+) = R+/γ. In the above expression, γ is the reporting rate, namely the percentage of new rabid cases that are actually detected. Reporting activity varies in space and is also known to be correlated with the population density [10]. In order to identify the local surveillance efficacy for a given area, we express γ in terms of the human population density of the area (h) as γ = h/(h + K). This choice models an increase in the reporting rate with the human density: in particular, if h is zero then γ vanishes, and as h increases γ approaches 1 (that is, in the case where human population density is infinite, every new infectious case would be detected). The positive parameter K is a risk index: the larger its value, the lower the reporting rate for a given human population density.
Knowing the initial population in a given area, we can identify the parameter γ fulfilling the compatibility requirements on the extrapolation function. In order to assess the level of surveillance in the region, we choose the corresponding risk index K.
We iterate the procedure over all the areas of interest and identify the corresponding values for γ. This procedure clearly depends on the epidemic under study. To eliminate such dependence, we normalize the risk index to a scale from 1 to 10, where a small value indicates a high level of surveillance in the region, whereas a large value entails a significant risk of an outbreak going undetected in the area.
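A sketch of the per-county procedure is given below: candidate reporting rates γ are scanned, those whose simulated first-outbreak drop lies between 16% and 28% are retained, and the risk index K is recovered from the midpoint of the compatible interval. The simulate_drop function here is only a smooth stand-in for the actual integration of the extrapolation model, the human density is hypothetical, and the inversion uses the saturating form γ = h/(h + K) assumed above.

import numpy as np

def simulate_drop(gamma):
    # Toy stand-in: smaller gamma implies more inferred cases and a larger drop (assumption).
    return 0.6 * np.exp(-8.0 * gamma)

gammas = np.linspace(0.01, 0.5, 200)
compatible = [g for g in gammas if 0.16 <= simulate_drop(g) <= 0.28]
if compatible:
    g_min, g_max = min(compatible), max(compatible)
    g_mid = 0.5 * (g_min + g_max)
    h = 120.0                                # hypothetical human density (per km^2)
    K = h * (1.0 - g_mid) / g_mid            # risk index under the assumed gamma = h / (h + K)
    print(f"compatible gamma in ({g_min:.3f}, {g_max:.3f}); risk index K = {K:.0f}")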
Dynamic surveillance
Dynamic surveillance in time is modelled by using both positive R+ and negative R− reports, combined through a nonlinear extrapolation function F_dyn(R+, R−), where u > 1 is a parameter that represents the surveillance efficacy. The choice of the function F_dyn(R+, R−) relies on two assumptions. First, we want a change in a small number of total reports to be more significant than a change in a larger number (a concept similar to diminishing returns in economics). Then, we assume that the testing procedure has sensitivity 1 (that is, if we could test all individuals we would be able to identify all the new infectious cases) and specificity 1 (we have no false positives). As a consequence, the function depends also on the total population N. Also in this case, knowing the initial population, we can identify the parameter u fulfilling the compatibility requirements on the extrapolation function. We iterate the procedure over all the areas of interest and identify the corresponding values for u. In this case, a large value of u indicates a high level of surveillance in the area, whereas a small value of u highlights a significant risk that an outbreak can go undetected in the region.
New York State epidemiological data (1990–2007)
On 4 May 1990, the first case of a rabid raccoon was recorded in the state of New York, in Addison Township, Steuben County, on the New York/Pennsylvania border, as part of an advancing wavefront of rabies spread. By the end of 1994, the epizootic had propagated extensively across the state. The epizootic wave across NY was actually part of a larger epizootic that began at the boundary between Virginia and West Virginia in the mid-1970s and spread northeast through Pennsylvania and Connecticut and southeast to North Carolina [5], but it entered NY only in 1990.
At the time of the outbreak, rabies posed a particularly pressing public health problem with the number of postexposure prophylactic treatments increasing from around 70 before the outbreak to over 1200 by 1991 [31]. Consequently, intensive surveillance and monitoring of wildlife populations was undertaken by the state and continues today. An extensive database has been collected by the New York State Department of Health. Each entry was recorded at the township level (754 locations) from 1990 to the present. The data we use in our analysis are those positive and negative cases verified by the New York State Department of Health from 1990 to 2007.
We aggregated the data at the county level, at which surveillance and intervention policies are actually implemented. Table 2 collects the 56 counties that featured reported cases of rabid raccoons in the period 1990–2007, their human population density and the total positive cases. Figure 3 illustrates the progression of the epidemic across the state at four different times, in terms of total reported cases at the county level.
Estimate of the raccoon population
One of the major limitations in studying wildlife epidemics is the difficulty in establishing the actual size of the at-risk population under investigation. Best estimates from the literature suggest that raccoon density in the eastern USA falls in the range of 5–17 animals per km² [23,24].
We consider in this study all 56 counties (table 2) that featured reported cases of rabid raccoons in the period 1990–2007. We mitigate the uncertainty about the actual raccoon population size by drawing, for each county, 50 values from a normal distribution with mean 11 and s.d. 2 (in order to cover the variability among the different ranges in the literature, see [23,24] and references therein). We add a correction to this distribution by taking into account the human population density: according to the New York State Department of Environmental Conservation (http://www.dec.ny.gov/animals/9358.html), raccoons are more prone to establish in areas where the human presence is higher. Suburban/metropolitan areas are often associated with the highest recorded raccoon population densities. We thus added an extra term to the counties with human density above the average for the state (157.81 individuals per km²), by adding draws from a normal distribution with mean 0.3√h (h being the human density for the ith county) and s.d. 12. The concerned counties are Albany, Erie, Monroe, Nassau, Niagara, Rockland, Schenectady, Suffolk and Westchester. We plot in figure 4 the minimal (figure 4a) and maximal (figure 4b) initial populations stochastically generated by the procedure described earlier, and we report in table 3 the corresponding values.
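The stochastic generation of initial populations described above can be reproduced with a few lines of code; the county areas and human densities used here are hypothetical, and only the distributional choices follow the text.

import numpy as np

rng = np.random.default_rng(0)
counties = {                      # hypothetical (area in km^2, human density per km^2)
    "CountyA": (2500.0, 60.0),
    "CountyB": (2100.0, 480.0),
}
state_avg_human_density = 157.81

def initial_populations(area, h, n_draws=50):
    density = rng.normal(11.0, 2.0, size=n_draws)          # baseline raccoon density draws
    if h > state_avg_human_density:
        density = density + rng.normal(0.3 * np.sqrt(h), 12.0, size=n_draws)
    return np.clip(density, 0.0, None) * area               # densities cannot be negative

for name, (area, h) in counties.items():
    pops = initial_populations(area, h)
    print(f"{name}: min {pops.min():.0f}, max {pops.max():.0f} raccoons")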
Model simulation and risk identification
We ran simulations of the aggregated system with extrapolation from the data for all 56 counties, with the 50 values of the initial population described earlier. The behaviour of the reports over time suggests the presence of an epidemic in almost all counties featuring positive reports, with the exception of Clinton, Hamilton, Suffolk and Warren, where the scarcity of reports does not allow us to draw evidence. The results for these counties have thus to be considered with care. We assumed that at the beginning of the epizootic the host population is entirely susceptible and at equilibrium, and that an epidemic has actually taken place in the counties included in the study. As a consequence, a drop in the population occurred that was compatible with the epizootic of the disease. We tested both the static and the dynamic model to identify the ranges of the parameters that produce population drops between 16% and 28% during the first outbreak.
Knowing the initial population, we can then assess the level of risk for each county (labelled by i = 1, ..., 56) for the constant surveillance model. We choose the risk index K_m^i, obtained algebraically from the midpoint of the compatibility interval for γ. If the compatible values of γ for the ith county lie in the interval G_i = (γ_min^i, γ_max^i), the corresponding risk index K_m^i is obtained algebraically from γ_m^i, the midpoint of the interval G_i, and the human population h_i of the county. The procedure clearly depends on the epidemic under study. In order to eliminate such dependence, we normalize the risk index to a scale from 1 to 10. Hence, we introduce for the ith county a surveillance risk r_i, defined as the natural logarithm of K_m^i weighted by its maximum over all counties and rescaled to this range; a small value of r_i indicates a high level of surveillance in the county, whereas a large value of r_i entails a significant risk of an epidemic going undetected in the area.
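The exact normalisation used for the surveillance risk is not reproduced here; the sketch below shows one simple implementation of the idea (natural logarithm of the risk index weighted by its maximum over counties, rescaled to a 1-10 range), applied to hypothetical K values.

import numpy as np

K_m = {"CountyA": 150.0, "CountyB": 900.0, "CountyC": 4200.0}   # hypothetical risk indices

logs = {county: np.log(k) for county, k in K_m.items()}
log_max = max(logs.values())
risk = {county: 1.0 + 9.0 * v / log_max for county, v in logs.items()}   # one possible 1-10 rescaling

for county, r in sorted(risk.items(), key=lambda kv: kv[1]):
    print(f"{county}: surveillance risk = {r:.1f}")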
In a similar manner, we can assess the surveillance efficacy for the dynamic surveillance model. In this case, we consider, as an indicator of the surveillance efficacy in the ith county, the value of u_i corresponding to the midpoint of the interval associated with the initial population. A large value of u_i indicates a high level of surveillance in the area, whereas a small value of u_i highlights a significant risk that an epidemic will go undetected in the region.
Finally, the values of r i and u i can be plotted on a geographical map to get a comprehensive view of the global risk across the state.
Results
Detailed results are shown for Albany County. This county has a very high count of reports, probably associated with the presence of the rabies diagnostic laboratory of the Wadsworth Center (New York State Health Department). We would like to observe that the presence of this large facility might induce bias in the estimated surveillance risk for the neighbouring counties. However, the observed disease dynamics are not different from what was observed in the majority of other counties. Figure 5a,c shows, respectively, for constant and dynamic surveillance, the curves obtained by connecting the values of the parameters (γ and u) paired with the associated population drop. The dashed blue line corresponds to the lower bound for the initial raccoon population, whereas the red line corresponds to the upper bound. The intersections of the two curves with the horizontal lines at 16% and 28% drop locate the intervals where the compatibility constraints are satisfied. For static surveillance, we have γ ∈ (0.02, 0.05) in the case where we believe that the raccoon population is on the higher end of the estimate, and γ ∈ (0.12, 0.23) for the lower end. As we can see, the lack of overlap between the compatibility intervals associated with the minimal and maximal initial population implies that optimal surveillance levels can be potentially very different; an accurate estimate of the initial raccoon population is therefore crucial. A similar situation holds for dynamic surveillance in figure 5c. Figure 5b,d shows different time series for the total raccoon population associated with different surveillance scenarios. We believe that the outbreak that occurred in Albany County was typical and we expect disease dynamics consistent with the values of R0 in the literature. For a population of roughly 60 000 raccoons, we can observe the drop caused by the outbreak, some damped oscillations and a slow recovery to the endemic equilibrium carrying capacity.
In figure 6a,c, we show comprehensive plots of γ and u for the estimated intervals for all 56 counties, alphabetically ordered. The same level of surveillance can produce completely different interpretations of the disease dynamics: for instance, a value γ = 0.07 is associated with an outbreak so violent that it leads to extinction if the initial population is the minimal one, and at the same time with a complete absence of an outbreak in the case where the initial population is the maximal one. Such a feature is shared by almost all the counties when a constant level of surveillance is assumed (figure 6a), with the exception of Clinton and Suffolk. In the case of dynamic surveillance, in contrast, only 12 counties do not feature an overlap between the intervals of u corresponding to the minimal and maximal initial population (figure 6c). Moreover, among those 12, only two feature a significant gap comparable with the length of the smaller interval (Albany and Schenectady).
Since the actual raccoon population is not known with absolute certainty, we choose to geographically map (figure 6b,d) the values of r i and u i corresponding to the maximal estimated initial population. This is a conservative choice, justified by the consideration that the higher the population, the higher the risk (and relative consequences in terms of public health) of an undetected epidemic.
Finally, a somewhat expected duality between the intrinsic surveillance risk r associated with the constant extrapolation and the surveillance efficacy u associated with the dynamic extrapolation is apparent, and can be assessed directly from the risk and efficacy mappings: areas with low surveillance risk display higher levels of surveillance efficacy.
Discussion
Surveillance is a key element in detecting, monitoring and studying infectious disease outbreaks over time and space. In this paper, we present some methodological aspects that can be used to evaluate the impact of localized surveillance for infectious diseases and help devise public health strategies. Intervention is based on information, and the aim of this paper is to provide some of that information to decision-makers. As an illustration of the methodology, we showed an example based on a real dataset, consisting of positive and negative reported cases of rabid raccoons in the state of New York over a period spanning from 1990 to 2007. We introduce two methods, both based on the idea of combining process-driven models with an observational approach, to take advantage of the features of both. The first method is based on a simple, constant reporting/detection rate, intended to model a constant level of surveillance over time. Considering that surveillance levels usually change because of news effects and public health concerns over possible outbreaks [18], we relax this assumption in our second model, where we formulate a reporting/detection rate that changes over time and depends on the total number of reports (positive and negative) and the estimated host population. Provided that such an estimate is accurate at any given time, it is possible to track disease dynamics through a model for disease spread [12]. With each of the two methods, we are able to identify locations where surveillance levels are critical and can potentially leave an outbreak unidentified.
The first method identifies a surveillance risk, whereas the second one identifies a surveillance efficacy. An expected negative correlation between risk and efficacy emerged (−0.5652384). Besides being intuitive, such a correlation is actually a sign that the two approaches are consistent, and either one can be used to identify areas at greater risk to which resources should be allocated in priority. The dynamic surveillance method (which assesses surveillance efficacy) provides results that are less sensitive to the initial population size. This aspect is very promising in view of extending the approach presented here to human diseases, where accurate accounts of the total population, with high resolution in space, and more stable self-reporting rates are available. Two significant assumptions underlie our analyses. The first pertains to possible scenarios for the initial population size (before the first cases were recorded), and the second is that an epidemic actually occurred in each county where there was a positive reported case. We note that the first assumption is less limiting in the instance of human diseases. Since our study focuses on raccoon rabies, the epidemiology of the disease is well known and established [29]. This is not a limiting aspect as long as the methodology is applied to extant diseases, but could prove problematic when applied to a newly emerging pathogen for which the epidemiology is not yet available. In this case, the method should be adapted by introducing some stochasticity in the key model parameters, such as the transmission rate and the latency period.
Our work has the potential to be extended at both the methodological and the applied level. For instance, the raccoon rabies surveillance analysis can potentially benefit from the inclusion of information regarding vaccination programmes. Oral Rabies Vaccination (ORV) was initiated during the collection of our data (J. E. Childs 2000, personal communication) and may have affected, for instance, Essex and Clinton counties, as suggested by a slight decline in the number of reports in those counties after ORV establishment. Unfortunately, we do not know whether these modest declines are due to ORV or simply to the decline in cases as the epizootic moved through the county. Very little is known about the rate of transition of individuals from the susceptible to the immune class through artificial immunization, and we cannot, at this point, include such dynamics in our modelling. Investigating the efficacy of ORV programmes and verifying their eventual impact on disease dynamics might lead to a better understanding of targeted surveillance, although it is unclear whether the conclusions we reached in our work would be sensitive to this extension.
Although uncertainty in outbreak size is taken into account by estimating system trajectories for different levels of R0 [21] and of the initial host population [23], the model can be further generalized by including randomness in some of the parameters. A current work in progress involves estimation of parameters in a full Bayesian hierarchical setting.
Combining the information from previous studies (prior elicitation) with the evidence arising from observational data (likelihood), we are able to produce estimates and uncertainty assessments for all the model parameters. This form of modelling bears directly on our understanding of the underlying disease process. Nonetheless, the results can also be incorporated into the surveillance setting.
In future work, one could also estimate optimal levels of surveillance, by maximizing a utility function that depends on the social or environmental benefits of detecting an epidemic and on a penalty term for the costs associated with implementing surveillance policies. Furthermore, writing a stochastic model, possibly with the introduction of spatial dynamics not considered in the present work [14,[32][33][34]], will allow us to actually estimate parameters and optimal surveillance levels in a likelihood framework. Finally, we also envision applications to other types of diseases where accurate estimates for the host population are available (for instance, some infectious diseases in humans).
A NOVEL APPROACH OF USING BAMBOO ROOT CELLULOSE: AN ALTERNATIVE FOR IRON(II) REMOVAL FROM WASTEWATER
A novel approach was taken in which bamboo root was transformed into activated cellulose, employing dilute nitric acid as the activating agent. The activated bio-sorbent was applied for Iron(II) (Fe(II)) adsorption in batch mode. Bio-sorbent characterization was then established using infrared spectroscopy and a scanning electron microscope (SEM). The effect of acid variation, iodine number, contact time, particle size, temperature
INTRODUCTION
Iron is the most widely used metal in industry.1 Iron is needed in the body to form hemoglobin, and it is necessary for plant metabolism.2 Besides its benefits, iron also harms the environment if its concentration in water is excessive.3 Iron (Fe(II)) causes a yellow color in drinking water, deposition in pipes, growth of bacteria, and turbidity. For several decades, along with the development of industrial activities, contamination of the environment with wastewater containing heavy metals such as Fe(II) has become a major problem. Fe(II) also causes odor in drinking water and blocks pipes because of the precipitation of Fe(II) hydroxide.4 It can also accumulate in the alveoli and cause lung dysfunction.5,6 Therefore, Fe(II) contained in wastewater must be removed before the wastewater is discharged into rivers. Many efforts have been made to decrease metal concentrations in industrial wastewater. Conventional methods include extraction, ion exchange7, electrocoagulation8, membrane filtration9, adsorption, and electrochemical treatment.10 The advantage of the adsorption method compared to other methods is that the process is simple, does not produce sludge, and is relatively inexpensive. One common approach is adsorption using activated carbon, but this method is quite expensive. Various research has been conducted to remove Fe(II) using crab shell11, wooden charcoal12, and kapok banana peel13, but their adsorption capacities were still low. Bamboo root contains cellulose, is highly abundant in nature, and is relatively cheap. Previous studies used cellulose as a bio-sorbent for heavy metals using lemongrass leaves14 and coconut shells.15 The fiber content of bamboo root cellulose is about 62.67%.16 Therefore, bamboo root produced in Indonesia was chosen in this study. Cellulose, produced especially by plants and trees, is a natural polymer that biodegrades easily and is an eco-friendly resource. The cellulose molecules have many active hydroxyl groups; therefore, cellulose can be chemically modified by introducing various functional groups on the hydroxyls. The adsorption mechanism occurs through the exchange of ions between the hydroxyl groups of the cellulose molecules and the Fe2+ ions at the metal surface.17 Cellulose has been proven to be an effective and simple means of removing Fe(II) ions using ion exchange techniques.18 Previous studies have also used bamboo roots to adsorb copper and zinc metals.19 Based on the explanation above, bamboo roots have the potential to be applied as heavy metal bio-sorbents. Therefore, in this study, activated bamboo roots were developed to absorb Fe(II) metal ions in a batch process, and the approach was then applied to a wastewater sample. The effects of different parameters were studied, including particle size, contact time, initial Fe(II) concentration, pH, and temperature. The results were applied for removing Fe(II) from wastewater.
Material and Methods
The apparatus used in this study includes glassware commonly used in chemistry laboratories.
General Procedure and Detection Method
Bio-sorbent Activation
In the first stage, the bamboo roots were washed and dried in the oven at 70°C overnight. The dried roots were then ground. These materials were then activated by soaking for 24 hours in one of the following acids, each at the same 0.3 M concentration: HNO3, HCl, H3PO4, H2SO4, and HClO4. After the activation, distilled water was used to wash away the ash until pH 7, and the activated bio-sorbent was dried in the oven at 60°C for 24 hours. Particle sizes of 50, 75, 150, and 355 μm were obtained by sieving, and the material was kept in a desiccator for the subsequent study.
Adsorption Studies
The adsorption was studied by the batch method. A specific volume of Fe(II) solution with different initial concentrations was mixed with a certain amount of bamboo root bio-sorbent. The parameters studied include contact time and temperature. After equilibration, the mixture was centrifuged and analyzed using an Atomic Absorption Spectrometer (AAS). The data obtained from the batch method were used to determine the equilibrium amount of Fe(II) adsorbed with equation (1).
qt = (Co − Ct) V / m    (1)
where qt (mg/g) is the Fe(II) adsorbed per unit weight of bamboo root, V (L) is the solution volume, Ct (mg/L) is the Fe(II) concentration at time t, Co (mg/L) is the initial Fe(II) concentration, and m (g) is the mass of bamboo root used.
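Equation (1) is straightforward to apply; the following sketch computes qt for a hypothetical batch run (the volume, mass, and concentrations are illustrative, not measured values).

def adsorption_capacity(c0_mg_per_l, ct_mg_per_l, volume_l, mass_g):
    """qt in mg of Fe(II) adsorbed per gram of bamboo root (equation 1)."""
    return (c0_mg_per_l - ct_mg_per_l) * volume_l / mass_g

# Hypothetical batch run: 50 mL of a 150 ppm Fe(II) solution and 1.0 g of bio-sorbent.
qt = adsorption_capacity(150.0, 120.0, 0.050, 1.0)
print(f"qt = {qt:.2f} mg/g")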
Determination of Iodine Number
A quantity of bamboo root biomass was immersed in 0.1 N iodine solution and stirred for 15 minutes. After filtration, 10 mL of the filtrate was pipetted into an Erlenmeyer flask and titrated with sodium thiosulfate, which had first been standardized against potassium dichromate.
Activated Bio-sorbent Characterization
The activated cellulose of bamboo root (activated bio-sorbent) was heated at 60°C for 24 hours, then mixed with sufficient KBr and put under pressure to form pellets. The samples were analyzed using Fourier Transform Infra-Red (FTIR) spectroscopy, recording transmittance between 400 and 4000 cm−1. The peaks were acquired in a scanning time of less than 30 seconds. Surface morphological analysis was performed using SEM; samples did not need to be pre-prepared, and the magnification was about 2500×.
Application to the Environmental Samples
The first step in measuring the recovery% is the measurement of the calibration curve and then plotting the linearity. Fe(II) solutions with concentrations of 1, 2, 3, 4, and 5 ppm were measured using AAS. A calibration curve was made as concentration versus absorbance. For the sample, 0.5 mL of a 150 ppm Fe(II) standard solution was placed in a 25 mL volumetric flask and then diluted with a wastewater sample. Three samples were then analyzed using AAS at 248 nm. The absorbance resulting from the sample measurement was too high, so a dilution step was applied. The percent recovery was calculated using equation (6):
% recovery = (Cfound − Cinitial) / Cadded × 100%    (6)
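The calibration and spike-recovery calculation can be sketched as follows; all absorbance readings and concentrations in the example are hypothetical and serve only to illustrate the arithmetic of equation (6).

import numpy as np

conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])               # ppm standards
absorb = np.array([0.021, 0.043, 0.062, 0.085, 0.104])   # hypothetical AAS absorbances

slope, intercept = np.polyfit(conc, absorb, 1)            # linear calibration fit

def concentration(a):
    return (a - intercept) / slope

c_unspiked = concentration(0.010)      # hypothetical unspiked sample reading
c_spiked = concentration(0.072)        # hypothetical spiked sample reading
c_added = 3.0                          # ppm of Fe(II) added as the spike (hypothetical)
recovery = 100.0 * (c_spiked - c_unspiked) / c_added
print(f"recovery = {recovery:.1f}%")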
RESULTS AND DISCUSSION
The application of bamboo root to adsorb Fe(II) has been carried out. Its effectiveness and the adsorption mechanism were studied for several parameters, as explained in this section.
Acid Selection for Bamboo Root Cellulose
Bamboo root was used as a bio-sorbent in this research because it contains more cellulose than the other biosorbents mentioned in the introduction section. The higher amount of cellulose is expected to increase the adsorption capacity. The adsorption capacity of metal ions by activated bio-sorbents is greater than that of non-activated biosorbents. In this study, the activation was carried out by soaking the bamboo root biosorbent in several acids (HNO3, H2SO4, HCl, H3PO4, and HClO4). Acid can react with components in the bamboo to produce different reactions and active sites. As described in Fig.-1A, the highest absorption is obtained with the biosorbent activated using HNO3, as shown by an adsorption capacity of 1.8407 mg/g. Based on the level of acidity, the acids used can be ordered as follows: HClO4 < HCl < H2SO4 < H3PO4 < HNO3. The acids mentioned above are oxidizing acids, except for HCl.
Oxidizing acids can form oxides on the surface of the roots, which can increase their hydrophilicity.20 A previous study stated that HNO3 increased the concentration of oxygen groups on the surface, which allowed it to react readily with Fe(II). Determination of the iodine number was carried out to assess the level of activity of the biosorbent. The iodine number indicates the amount of iodine adsorbed on the biosorbent. The higher the activity level, the higher the iodine number.21 The analysis of the iodine value was carried out on biosorbent with particle size < 75 μm activated with the different acids, namely HNO3, H2SO4, HCl, H3PO4, and HClO4. The data obtained from this analysis are shown in Fig.-1B. From Fig.-1B, it was found that cellulose activated with HNO3 has the highest iodine number among the acids tested, amounting to 1200.66 g/kg. This shows that HNO3 is the best acid for activating the biosorbent. In a previous study, similar results were obtained22 on activated bamboo, with HNO3 giving the highest iodine number among H2SO4, HCl, ZnCl2, H3PO4, and NaOH, namely 1198 g/kg. It can be concluded that HNO3 was the best activator for the carbon from bamboo because it produced many active sites, and a chemisorption reaction occurred in the carbon pores during activation. The reaction of HNO3 with cellulose produces cellulose nitrate (nitrocellulose), which is formed during the activation of cellulose with nitric acid and provides active reaction sites for the metal ion adsorption process. The reaction mechanism can be seen in Fig.-2. Each subsequent reaction of an -OH group with the nitrating agent yields nitrocellulose. The interaction between the bound oxygen and nitrogen probably forms a coordination complex owing to the existence of a lone pair on the oxygen atom of the NO group. Metal complex compounds will be formed when a lone pair of electrons occupies an empty orbital of the Fe2+ metal ion.
Adsorption Characterization
Infra-red characterization was carried out on the biosorbent before activation, after activation, and after adsorption. The results of the characterization are described in Table-1. From Fig.-3, it can be seen that there was a shift in the wavenumber: between before and after activation, the -OH group showed a shift in wavenumber from 3435 cm−1 to 3374 cm−1, and there were no large changes in the wavenumber for the biosorbent after Fe(II) adsorption. SEM was used to examine differences in the bio-sorbent surface and to study its morphology. Figure-4 shows the pore differences of the bio-sorbent before activation, after activation, and after adsorption. It can be seen that before activation the cellulose surface looked like fiber, and after activation with HNO3, pores were formed. The pores in the cellulose were then closed again, presumably because Fe(II) had been adsorbed.
Adsorption Study
The analysis was carried out with several parameters, including bio-sorbent particle size, acid activation, contact time, concentration, temperature, and pH. These parameters were studied to determine the best conditions for the bio-sorbent adsorption process. The optimum conditions were determined based on the highest adsorption capacity (Qe). These optimum conditions were later applied to the environmental samples. In the first stage of this research, the effect of particle size on the adsorption capacity was studied, using particle sizes of 355, 150, 75, and 50 µm. Figure-5A shows a significant decrease in the adsorption capacity from 1.8477 mg/g to 0.5349 mg/g as the bio-sorbent particle size increases from 50 µm to 355 µm. The cellulose of bamboo root can optimally absorb Fe(II) at a particle size of 50 µm. The Fe(II) adsorption increased when the particle size was reduced: a smaller particle size allows a larger biosorbent surface area per unit volume, so more Fe(II) is adsorbed. In previous research22, a particle size of 150 µm showed the highest % adsorption. A particle size of 50 µm was used for the subsequent experiments. Adsorption capacity measurements were then carried out for various concentrations (50, 100, 150, and 250 ppm), aiming to determine the saturation point of the bio-sorbent in absorbing Fe(II) ions. The measurements were made at the optimum particle size. The optimum absorption versus Fe(II) concentration can be seen in Fig.-5B. Based on the data obtained, the optimum adsorption capacity of bamboo root cellulose was at 150 ppm Fe(II) with Qe = 1.5634 mg/g. Beyond the optimum concentration, the absorption capacity decreases considerably. This can be explained by the bio-sorbent having become saturated: any additional Fe(II) is adsorbed only slightly, and eventually a desorption process causes the Qe value to decrease. Next, the effect of contact time on the adsorption capacity was examined to obtain the optimum absorption capacity, with contact times of 60, 180, 300, and 420 minutes. From Fig.-5C, it can be seen that the optimum absorption was at 300 minutes, with an absorption capacity of 2.3768 mg/g; there was a very slight decrease at 420 minutes to 2.3764 mg/g, which can be regarded as essentially constant. This is due to the saturated state of the bio-sorbent adsorbing Fe(II), so that when the optimum time has passed, desorption occurs, resulting in a slight decrease of Fe(II) on the bio-sorbent. Therefore, 300 minutes was taken as the optimum contact time. A previous study23 showed that the adsorption capacity increased with time up to a certain point and then decreased after passing that point. Next, the adsorption capacity was determined for different temperatures of 30, 40, 50, and 60°C. In addition to contact time and particle size, the temperature can also affect the performance of the bio-sorbent in absorption. The data obtained are presented in Fig.-5D. The optimum absorption capacity is 3.56 mg/g at a temperature of 50°C. A supporting study24 states that the absorption capacity increases with increasing temperature. This is probably related to the more active movement of metal ions at higher temperature, so that the mass transfer of metal ions from the solution to the biosorbent surface becomes easier. However, beyond the optimum temperature, the absorption capacity decreases: as the adsorption temperature increases, the energy available for desorption increases, so more metal ions are released from the bio-sorbent than are adsorbed onto it. The effect of pH on Fe(II) absorption by the bamboo root bio-sorbent was also investigated. Figure-5E shows the response of the adsorption capacity to pH, which was examined from pH 2 to 7. The figure shows an increase in adsorption capacity from pH 2 to 5, with the optimum at pH 5; adsorption decreases starting from pH 6. This can be explained as follows: at acidic pH, iron remains in the form of Fe(II), while at alkaline pH it forms a precipitate of Fe(OH)2.
Adsorption Isotherm Study
The adsorption isotherm describes the amount of adsorbate adsorbed at a certain temperature. The adsorption isotherm is crucial for evaluating and understanding the use of the adsorbent; isotherm analysis helps to interpret the adsorption mechanism between the bio-sorbent (activated cellulose of bamboo root) and the adsorbate (Fe(II)). The equilibrium data were evaluated with the Freundlich model, log Qe = (1/n) log Ce + log KF (4), where Qe (mg/g) is the amount of adsorbate at equilibrium, Ce is the equilibrium Fe(II) concentration, KF is the Freundlich constant associated with adsorption capacity, and n is the constant associated with adsorption intensity, related to the heterogeneity factor. A plot of log Qe vs. log Ce gives a linear graph, from which n (from the slope, 1/n) and KF (from the intercept, log KF) can be obtained.
The equilibrium data were also evaluated with the linearized Langmuir model, Ce/Qe = Ce/qmax + 1/(KL·qmax) (5), where Ce (mg/L) is the Fe(II) concentration at equilibrium, qmax (mg/g) is the maximum adsorption capacity, and KL is the Langmuir constant related to adsorption capacity. KL reflects the suitability of the binding region and the porosity of the adsorbent; because of the large pore volume and surface area, the adsorption capacity becomes high. From the graphs shown in Fig.-6A,B and Table-2, the linear regression coefficients of the Langmuir and the Freundlich isotherms are 0.9919 and 0.9851, respectively. This linearity indicates that the Langmuir model is more suitable for describing Fe(II) ion adsorption on activated bamboo root. The qmax value obtained from the calculation is 5 mg/g. This is also in accordance with the shape of the adsorption capacity curve in Fig.-6: increasing the initial Fe(II) concentration beyond a certain value no longer increases the adsorption capacity because the NO3− active sites are already saturated. A previous study also reported adsorption following the Freundlich isotherm28; which model is followed depends on the physical and chemical composition of the material. Bamboo root has a relatively short equilibration time and good capacity compared with other bio-sorbents.
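The linearized fits of equations (4) and (5) can be reproduced with a short script; the equilibrium data below are hypothetical, and the comparison of R² values mirrors the model selection described in the text.

import numpy as np

Ce = np.array([5.0, 20.0, 60.0, 120.0, 200.0])     # mg/L at equilibrium (hypothetical)
Qe = np.array([0.8, 1.9, 3.2, 4.0, 4.4])           # mg/g adsorbed (hypothetical)

def linear_fit(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r2 = 1.0 - residuals.var() / y.var()
    return slope, intercept, r2

# Langmuir (eq. 5): Ce/Qe = Ce/qmax + 1/(KL*qmax)
slope_L, intercept_L, r2_L = linear_fit(Ce, Ce / Qe)
qmax = 1.0 / slope_L
KL = slope_L / intercept_L
# Freundlich (eq. 4): log Qe = (1/n) log Ce + log KF
slope_F, intercept_F, r2_F = linear_fit(np.log10(Ce), np.log10(Qe))
n = 1.0 / slope_F
KF = 10.0 ** intercept_F

print(f"Langmuir: qmax = {qmax:.2f} mg/g, KL = {KL:.3f}, R^2 = {r2_L:.4f}")
print(f"Freundlich: n = {n:.2f}, KF = {KF:.3f}, R^2 = {r2_F:.4f}")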
Removal of Fe(II) from Wastewater
The recovery% was studied by correlating the final measured concentration of the analyte with the initially added concentration (Eq. 6), by performing a spiked analysis of a sample whose concentration was known. A good % recovery value is in the range of 100 ± 5%. Based on the calculations, the recovery% for the removal of Fe(II) from wastewater was 100%; these results indicate that the method used is suitable for wastewater. Based on all the parameters examined above, it can be concluded that bamboo roots have potential as bio-sorbents of heavy metals. This alternative method can reduce Fe(II) contamination in wastewater.
CONCLUSION
Bamboo roots have been successfully used as a bio-sorbent for Fe(II). The optimum condition was achieved at a cellulose particle size of 50 µm. Bamboo root cellulose activated with HNO3 showed the maximum adsorption capacity compared with the other acids, as indicated by the highest iodine number. FTIR analysis showed that changes in the C-O group confirm that the activation and adsorption processes proceeded well, and SEM analysis provided an overview of the surface after the activation process and after the adsorption process. The optimum adsorption capacity occurred at 300 minutes, 50°C, 150 ppm, and pH 5. The recovery% of Fe(II) removal from wastewater was 100%.
In vivo probing of SECIS-dependent selenocysteine translation in Archaea
By turning a bacterial reporter into an archaeal selenoprotein, in vivo probing of structure–function relations during UGA recoding for selenoprotein synthesis in Archaea is greatly facilitated.
Introduction
The standard genetic code assigns 64 base triplets/codons to 20 canonical amino acids and three stop signals for the termination of protein biosynthesis (Nirenberg et al, 1966). The "21st" amino acid selenocysteine (Sec) is cotranslationally inserted into proteins by recoding UGA, which is normally a stop codon, that is, signaling termination of translation (Gesteland & Atkins, 1996). Sec-containing proteins (selenoproteins) are found in members of all three domains, Bacteria, Eukarya, and Archaea, but the trait of Sec synthesis and incorporation is not evenly distributed across the tree of life (Santesmasses et al, 2020). Although the general concept of tRNA-bound synthesis of Sec and its translational insertion via UGA recoding is conserved, significant differences exist in the details of the respective pathways in the three domains. For example, only the archaeal and eukaryotic Sec synthetic pathways involve a phosphorylated aminoacyl intermediate (Carlson et al, 2004; Kaiser et al, 2005). For translation of Sec at dedicated UGA codons, a secondary structure on the selenoprotein mRNA, the Sec insertion sequence (SECIS) element (Berry et al, 1991), is common, although SECIS elements are not similar to each other in sequence or structure across the domains. In Eukarya and Archaea, the SECIS is located in the 3′ untranslated region (3′-UTR) of the mRNA (Berry et al, 1991; Wilting et al, 1997; Rother et al, 2001); in Bacteria, it is directly 3′-adjacent to the UGA Sec codon (Zinoni et al, 1990). Also common to UGA recoding is a specialized translation elongation factor, SelB (designated EFSec in eukaryotes), that binds the correctly aminoacylated tRNA specific for Sec (Sec-tRNA Sec) (Forchhammer et al, 1989; Fagegaltier et al, 2000; Rother et al, 2000; Tujebajeva et al, 2000). The ternary complex of SelB, GTP, and Sec-tRNA Sec mediates communication between the recoding signal (the SECIS element) and the recoding site (UGA in the ribosomal A site) for Sec insertion. However, the mode of this communication differs between the domains. While bacterial SelB directly binds the SECIS (Baron et al, 1993), in eukaryotes auxiliary factors, like the SECIS binding protein 2, SECp43, and ribosomal protein L30, form a recoding complex with EFSec (see Bulteau and Chavatte [2015] and references therein). For Archaea, details of the recoding mechanism are still unknown.
So far, the only members of the Archaea experimentally shown to contain Sec are methanogenic archaea (methanogens). Genomic analyses suggest that a recently proposed taxon, the Asgard archaea, also harbors members encoding Sec (Mariotti et al, 2016). Because this group is closely related to eukaryotes (Spang et al, 2019), the similarity of the Sec synthesis and incorporation machinery between the two domains may be a result of vertical transfer from an archaeal ancestor (Mariotti et al, 2016). Thus, understanding the mechanism of Sec insertion in Archaea will help determine the relevance of this trait during the evolution of eukaryotes. Among the Archaea, Methanococcus maripaludis has become the prime model for studying selenoprotein synthesis. This is mainly due to its comparably fast growth, the comparably high sophistication (and number) of methods for genetic analysis (Sarmiento et al, 2011), and the non-essential nature of the Sec synthesis and incorporation machinery (Rother et al, 2003; Stock et al, 2011). Considering that most of the selenoproteins in methanogens are directly involved in their energy metabolism (methanogenesis), the latter was unexpected, but was later explained by the presence of a Sec-independent alternative set of enzymes (Rother et al, 2003).
From analyzing putative Sec-encoding genes in methanogens (Wilting et al, 1997), a hypothetical consensus structure for the archaeal SECIS element was deduced. A basal helix of ca. 10 bp, sometimes harboring unpaired bases, with a G/C-rich apical end is followed by a highly conserved bulge, consisting of GAA opposed by A on the other side (GAA/A; Fig 1). Interestingly, this structure is reminiscent of the kink-turn motif found in SECIS elements of eukaryotes (Walczak et al, 1996; Klein et al, 2001). This archaeal kink-turn-like motif is followed by two or three G-C pairs, which lead into a non-conserved apical loop region of four to eight nucleotides. In only one previous study was the principal nature of the archaeal SECIS element experimentally addressed. There, it was shown that a previously predicted secondary structure was indeed part of a selenoprotein mRNA, that it attained the predicted structure in vivo, and that it was required for heterologous expression of a selenoprotein gene from Methanocaldococcus jannaschii in M. maripaludis, as evidenced by the incorporation of radioactively labeled selenium (Rother et al, 2001).
The synthesis of selenoproteins can be assessed through the enzymatic activity of a natural selenoprotein, like formate dehydrogenase (Fdh) in Escherichia coli (Leinfelder et al, 1988) or deiodinase in eukaryotes (Berry et al, 1993), or through the direct detection of selenium (isotopes) in the protein (Heras et al, 2011). Development of an easily quantifiable reporter system, like the translational fdhF-lacZ fusion established in E. coli (Zinoni et al, 1987), was key for detailed structure-function analyses of the SECIS element and the UGA decoding event (Zinoni et al, 1990; Heider et al, 1992; Suppmann et al, 1999). In M. maripaludis, its formate-dependent growth behavior, its Fdh activity, and in vivo labeling of endogenous selenoproteins with [75Se] have been employed for analyzing Sec insertion (Rother et al, 2001). All these approaches are either laborious (due to the anaerobic nature of the organism), or "semi-quantitative," or both. Developing a reporter system for methanogenic archaea where Sec insertion directly corresponds to (easily) quantifiable enzymatic activity would not only greatly facilitate assessing phenotypic consequences of mutant strains but also allow quantitative probing of structure-function relationships of the factors involved in the process.
In the present study, we engineered the class A (TEM) β-lactamase (Bla) from E. coli (accession number J01749.1; 264 amino acid residues, signal peptide omitted, 29.03 kD calculated mass; Fig S1) into a selenoprotein to study SECIS-dependent Sec translation in M. maripaludis. The enzyme is monomeric, contains no cofactor or posttranslational modification, is naturally active outside of the cell (i.e., robust), and is easily quantifiable with the chromogenic substrate nitrocefin. The enzyme hydrolyzes its substrate via a serine residue in the active site serving as a nucleophile to attack the β-lactam carbonyl (Tooke et al, 2019). By adding a SECIS-encoding region to the bla gene, and by replacing three residues in Bla with Sec, fundamental conclusions about codon context requirements, about codon-SECIS distance limitations, and about selenium insertion efficiency could be drawn, thereby considerably extending our understanding of SECIS-dependent UGA recoding in Archaea.
Construction of a reporter for monitoring Sec insertion in Archaea
So far, only laborious and/or merely qualitative techniques for assessing Sec translation in Archaea are available. By establishing an easily quantifiable system, we sought to develop a method that allows probing of structure-function relations of the Sec translation apparatus, like the SECIS element. To this end, we engineered a translational reporter based on bla from E. coli (Fig 1A), shown to be actively produced in M. maripaludis. To achieve sufficient expression, the gene (lacking the signal peptide-encoding sequence) was placed under the control of a strong constitutive promoter (Psl) and a strong terminator (TmcrA). Three codons within the open reading frame of bla were individually changed to UGA in order to substitute the corresponding amino acids with Sec: the active site serine (S46U, Pos 1) and two cysteine residues (C53U, Pos 2; and C99U, Pos 3) (Fig 1A) shown to be involved in stability (through the formation of disulfide bonds) rather than required for enzymatic activity (Schultz et al, 1987).
(Figure 1 legend fragment: (Lorenz et al, 2011); dashes indicate Watson-Crick base pairing. The 3′-UTR of fruA from Methanococcus maripaludis JJ is shown from the TAA stop codon (underlined); the predicted minimal SECIS element of fruA (minifruA) (−9.5 kcal mol−1 minimal free energy, frequency 71.2%) is shown in orange; Mut is a completely base-paired stem-loop with the same length as the minimal SECIS element (−30.3 kcal mol−1 minimal free energy, frequency 95.4%); GAA/C is a variant of the minimal SECIS element with one base exchange of A to C (indicated by purple arrow) (−12.3 kcal mol−1 minimal free energy, frequency 84.1%).)
To direct Sec insertion during translation of these constructs, the 3′-UTR of fruA (MMJJ_14570) from M. maripaludis JJ, encoding the SECIS element of the gene for the Sec-containing large subunit of the F420-dependent hydrogenase, was inserted (overlapping with a restriction site used for cloning) immediately downstream of the bla coding sequence (Fig 1B). Thus, either abundance or specific activity, or both, of Bla in the cells should depend on the functioning of the Sec insertion machinery, including UGA recoding by the fruA SECIS element.
SECIS-dependent reporter activity
The three constructs, together with a wild-type (WT) variant of bla (i.e., without a Sec codon), were transferred to M. maripaludis JJ via a self-replicating shuttle vector (Lie & Leigh, 2003), which also constituted the vector control (VC) (i.e., lacking a bla reporter) (Table 1). In addition, reporter variants where the SECIS-encoding sequence had been removed (-S) were also included (Table 1). Heterologous production of Bla in the respective strains was assessed via cleavage of nitrocefin in cleared cell lysates (see the Materials and Methods section).
The Bla activity of the VC was considered the background noise (4.2 ± 4.5 mU mg−1; Table S1). That of the WT variant, irrespective of whether the SECIS element was present or not, was substantial (between more than 7,000 and 9,000 mU; Fig 2A and Table S1). When the codon for the active site serine was replaced by a Sec codon (Pos 1), Bla activity corresponded to that of the VC, again irrespective of whether the SECIS element was present or not (Fig 2A). In contrast, Bla activity of variants Pos 2 and Pos 3 clearly depended on whether the 3′-UTR (containing the predicted SECIS element) was present on the mRNA. When absent, activity ranged between ~65 (Pos 2) and 20 (Pos 3) mU mg−1 (Table S1), which is well discernible from the background noise (Fig 2A and Table S1). When the SECIS element was present, Bla activity was more than fivefold and 15-fold higher, respectively, in the range of 350 mU mg−1 (Fig 2A and Table S1). Thus, suppression of a UGA codon reduces Bla activity at least 20-fold compared with the WT allele (Table S1). The SECIS-independent Bla activity observed for Pos 2 and Pos 3 was less than 1% of that for WT, which is in the same range as that resulting from Sec-independent UGA suppression during the synthesis of Fdh in M. maripaludis (Seyhan et al, 2015). However, most of the translation of UGA-containing bla mRNA depends on the presence of a SECIS element, which strongly indicates that Sec is inserted at the respective positions.
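As a rough consistency check (not part of the original analysis), the fold differences quoted above can be reproduced from the approximate specific activities given in this paragraph; the exact values are in Table S1, so the numbers below are only indicative.

```python
# Approximate specific activities (mU per mg protein) quoted in the text;
# see Table S1 of the paper for the authoritative values.
activities = {
    "VC":     4.2,     # vector control (background)
    "WT":     8000.0,  # wild-type bla, roughly 7,000-9,000
    "Pos2-S": 65.0,    # C53U without SECIS
    "Pos3-S": 20.0,    # C99U without SECIS
    "Pos2+S": 350.0,   # C53U with SECIS
    "Pos3+S": 350.0,   # C99U with SECIS
}

# SECIS-dependent gain at each UGA position
for pos in ("Pos2", "Pos3"):
    gain = activities[f"{pos}+S"] / activities[f"{pos}-S"]
    print(f"{pos}: {gain:.1f}-fold higher with SECIS")
    # roughly matches the "more than fivefold" and "15-fold" figures in the text

# Cost of recoding a UGA relative to the WT allele
print(f"WT / Pos2+S: {activities['WT'] / activities['Pos2+S']:.0f}-fold")  # >20-fold
```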
Engineering an archaeal selenoprotein
To unambiguously demonstrate cotranslational Sec insertion into Bla via the archaeal selenoprotein synthesis machinery, the strains carrying the reporter constructs were metabolically labeled with radioactive selenium (75Se-selenite; see the Materials and Methods section). Besides the known selenoproteins (Fig 2B, VC), M. maripaludis synthesized another Sec-containing macromolecule electrophoretically migrating at ~30 kD, but only when bla contained UGA, and only when the mRNA contained the 3′-UTR SECIS region (Fig 2B, arrow). Notably, the Pos 1 variant also contained Sec, despite the fact that it was not active. In members of this class of Bla, the active site serine acts as the reaction nucleophile and hydrolyzes β-lactams via a covalent acyl-enzyme intermediate (Tooke et al, 2019). Replacing the active site serine with another nucleophile, cysteine, resulted in an active enzyme but with a >10-fold reduced affinity toward nitrocefin (Sigal et al, 1984). Despite the fact that the nucleophilicity of Sec is even higher than that of cysteine (Arner, 2010), or maybe precisely because of it, Sec apparently cannot functionally replace serine in this context. The electrophoretic behavior strongly suggested that the new selenoprotein of M. maripaludis is Bla. To confirm this notion, commercially available polyclonal antibodies against Bla from E. coli were used to probe cell extracts of the strains (see the Materials and Methods section). Only one specific signal was observed (Fig 2C) (except when markedly overexposed; Fig S2), again electrophoretically migrating at ~30 kD, which corresponds to the predicted mass of Bla (29.03 kD). The lack of detection for the Pos 2 and Pos 3 variants without the SECIS, despite the fact that they showed some residual activity, is probably due to their abundance being close to, or below, the detection limit of the antiserum (Fig S2). The fact that no Bla fragments with lower mass were detected indicates that the protein truncated at Pos 3 (~11.1 kD; the other variants would not be resolved by the gel system used here) is rapidly degraded. The WT variant (not containing Sec) was much more abundant in cell extract than any of the Sec-containing variants, which is consistent with the Bla activities in the corresponding strains (compare Fig 2A and C). Whether the reduced abundance of the Pos 2 and Pos 3 variants, compared with the WT, is due to exchanging a cysteine residue involved in Bla stability, or due to the inherently slow and inefficient insertion of Sec (Suppmann et al, 1999), remains to be demonstrated.
Expression of Sec-encoding bla
Expression of the engineered selenoprotein gene is governed by a strong constitutive promoter. Such a strong and constant expression signal bears the risk of degeneration, that is, of being lost over time through mutation in order to eliminate a genetic/metabolic load that confers no selective advantage (Glick, 1995). To demonstrate a reliable correlation between the SECIS-dependent Sec insertion during bla translation and the activity of the resulting enzyme, Bla activity was quantified as before (see Fig 2A, except for the VC and the Pos 1 variant), but only after the cultures had been transferred 10 times (2% from late exponential growth phase into fresh medium, corresponding to ~50 generations) without relief of antibiotic selection. In none of the strains did Bla activity markedly decrease, which confirmed the stability and durability of the reporter system (compare Figs 2A and 3A and Table S1).
As Bla activity could be linked to the SECIS-dependent Sec insertion in variants Pos 2 and Pos 3, the amount of selenium available to the Sec synthesis and insertion machinery of M. maripaludis should affect the readout of the reporter. To test this notion, both variants Pos 2 and Pos 3, each with and without the SECIS, were cultivated for three passages on medium to which no selenium had been added. Although nothing is known as to how M. maripaludis transports selenium into the cell, or whether the element can be intracellularly accumulated and/or stored, this measure is appropriate to establish steady-state conditions. In the absence of added selenium, the SECIS-dependent Bla activity dropped ~10-fold, to the levels of the respective reporter variants lacking the SECIS element (Fig 3B), again strongly suggesting that the remaining activity stems from Sec-independent UGA suppression. The drop in Bla activity when selenium was omitted from the growth medium of the strains carrying the reporter prompted us to investigate the selenium dependence further. To this end, variant Pos 3 was grown (to a steady state) in the presence of various concentrations of selenite. The resulting Bla activity correlated with the selenium status of the cells between 20 and 100 nM, and this correlation again depended on the presence of the SECIS element (Fig 3C). Beyond 100 nM selenite, Bla activity did not increase further (Fig 3C). We noted a minor but significant difference in Bla activity in the two WT constructs, which indicated that the SECIS element might stabilize the mRNA somewhat (Fig 2A). To confirm that the SECIS element exerts no effect other than directing Sec insertion (its feature under study here), the abundance of bla mRNA was quantified in strains carrying variants Pos 2 and Pos 3, with and without the SECIS element, respectively, and grown in the presence or absence of added selenium. The presence of a SECIS element did not increase the amount of the respective mRNA, that is, did not increase its half-life, regardless of whether selenium was present in the growth medium of the respective strain or not (Fig 3D and Table S2). Thus, the Bla activities observed in this study are not affected by differences in bla mRNA abundance. It might be worth noting that bacterial mRNAs containing "premature" stop codons are degraded rapidly (Morse & Yanofsky, 1969), which was not observed here. The basis for this phenomenon, and for the ca. twofold difference in mRNA abundance between variants Pos 2 and Pos 3, is not known.
Taken together, the data presented establish that bla mRNA containing UGA is translated into a selenoprotein and that Sec insertion depends on the presence of a SECIS element in the 3′-UTR and on the presence of selenium for the synthesis of Sec. Beyond providing a quantifiable and facile tool, this system allows fundamental insights into the physiology of M. maripaludis and the mechanism of selenoprotein synthesis in Archaea: Sec insertion into the engineered selenoprotein appears to be saturated between 0.1 and 1 µM selenite in the medium (Fig 3C), which is in the same range as that reported for selenium-dependent transcriptional regulation in M. maripaludis. Furthermore, the synthesis of Sec-containing Bla did not affect the abundance of the other selenoproteins of M. maripaludis visible through metabolic labeling (Fig 2B), which suggests that the Sec synthesis and incorporation machinery of the organism has sufficient capacity.
Structure-function relation of archaeal SECIS elements
After characterizing the synthesis of Sec-containing Bla, thereby confirming its usefulness as a proxy for directly monitoring Sec insertion in methanogenic archaea, the mechanism of SECIS-dependent UGA recoding was investigated further. The Pos 3 variant of bla was chosen as the reporter, as it showed the largest difference between "no activity" (i.e., without the SECIS element, without selenium) and "full activity" (i.e., with the SECIS element, at 1 µM selenite). To assess the functionality of SECIS variants, Bla activity, Bla synthesis, and 75Se incorporation were assessed (Figs 4 and S1). First, a SECIS variant constituting a symmetric, fully base-paired stem-loop lacking the GAA/A motif (Mut; Fig 1B) was analyzed. No Bla activity (Fig 4A), no Sec incorporation (Fig 4B), and no Bla synthesis (Figs 4C and S3) could be observed in the presence of the Mut-SECIS, which shows that a mere stem-loop is not sufficient, and that the motif removed is critical, for the element to act as a SECIS. The same principal result was obtained when Sec insertion into FruA of M. jannaschii was studied by metabolic labeling (Rother et al, 2001). Second, the fruA SECIS region was shortened to define it more rigorously. The 5′-region was shortened by 15 nucleotides (thereby eliminating a predicted small 5′-stem-loop; Fig 1B), and 33 nucleotides were removed from the 3′-region, generating a "minimal" SECIS element, "minifruA" (Fig 1B). Sec insertion into Bla mediated by minifruA was no less efficient than with the original SECIS-encoding region. Thus, the archaeal SECIS element could be experimentally confined, that is, defined by means other than sequence identity/similarity. Third, a SECIS variant was used in which a single nucleotide of the GAA/A motif, a presumed critical region, was exchanged ("GAA/C"). Indeed, this measure sufficed to completely eliminate the function of the element as a SECIS (Fig 4). Interestingly, when the same position of the M. jannaschii fruA-SECIS was changed to guanine (A→G), a seemingly milder impairment of Sec insertion (~75% reduction), assessed by metabolic labeling, was observed (Rother et al, 2001). A possible explanation is that when cytosine (A→C) is present at this position, the 5′-G of the GAA/A motif would base pair with it, essentially dissolving the kink-turn-like structure (Fig 1B), whereas the structure is retained when G is present opposite the GAA (Fig S4). Possibly, the flanking G-C base pairs are meant to stabilize the GAA/A motif, which is why exchanging the distal (of three) G-C to A-U had only a mild effect (Rother et al, 2001). Thus, unlike the situation for bacterial and eukaryal Sec insertion (Heider et al, 1992; Martin et al, 1998), structural rather than sequence identities are the functional determinants of archaeal SECIS elements.

[Fig 2 legend (partial): ... M. maripaludis containing the predicted SECIS element; WT, bla without a UGA codon; the empty vector pWLG40NZ-R without the reporter construct (VC) was used as control; error bars show a 95% confidence interval of at least four biological replicates; comparison was performed by an unpaired two-tailed t test, **P < 0.01 and ***P < 0.001; experiments were reproduced at least once. (B) Sec insertion into Bla: autoradiograph of a 10% PAGE gel after electrophoresis of 75Se-labeled lysates (see the Materials and Methods section) of M. maripaludis cells carrying the reporter constructs (see panel A); experiment was reproduced at least once. (C) Synthesis of Bla: cleared lysates of cells carrying the reporter constructs (see panel A) containing 10 μg of protein, except for WT and WT-S where 1/10 was applied, were immunoblotted with α-Bla antisera (see the Materials and Methods section); experiment was reproduced at least once.]
Here, we showed that the SECIS element in the 3′-UTR facilitates (with similar apparent efficiencies) Sec insertion at three different positions in Bla (Fig 2B), arguing against strict limitations of the distance between the recoding signal (the SECIS element) and the site of recoding (the UGA-translating ribosome). In fruA of M. maripaludis, the distance between the UGA and the SECIS element (5′-base of the stem) is 88 nucleotides; in Pos 1, 2, and 3 of Sec-containing Bla, it is 679, 658, and 520 nucleotides, respectively, which is well within the range deduced for archaeal selenoprotein genes. Furthermore, the SECIS element can be moved in the 5′-direction to directly follow the coding region of the selenoprotein gene without any apparent impairment of its function. Thus, Sec insertion resembles the system in eukaryotes not only in terms of the SECIS location but also in terms of lacking stringent constraints for the distance to the Sec codon (Berry et al, 1993). That Bla activity is proportional to the amount of available selenium highlights the use of this reporter for quantitative probing. Once the factor(s) binding the SECIS during UGA recoding is (are) identified, its (their) interaction(s) can be studied in vivo in detail.
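As a small arithmetic check on the distances quoted above (my own illustration, not taken from the paper), the differences between the three UGA-to-SECIS distances follow directly from the codon positions of the introduced UGA codons, since each codon accounts for three nucleotides:

```python
# UGA-to-SECIS distances (nt) quoted in the text for the three Sec positions in bla
distance = {"Pos 1": 679, "Pos 2": 658, "Pos 3": 520}
codon = {"Pos 1": 46, "Pos 2": 53, "Pos 3": 99}  # S46U, C53U, C99U

# Moving the UGA k codons further downstream shortens the distance by 3*k nt
for a, b in [("Pos 1", "Pos 2"), ("Pos 2", "Pos 3")]:
    expected = 3 * (codon[b] - codon[a])
    observed = distance[a] - distance[b]
    print(a, "->", b, "expected", expected, "nt, observed", observed, "nt")
# Pos 1 -> Pos 2: expected 21 nt, observed 21 nt
# Pos 2 -> Pos 3: expected 138 nt, observed 138 nt
```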
Lastly, three endogenous amino acids could be exchanged with Sec, arguing against stringent requirements for codon context in Archaea. Even the natural Sec codons in M. maripaludis have no apparent base context in their vicinity (Fig S5). In this feature, archaeal Sec insertion again resembles the eukaryal system more than the bacterial system, where the SECIS element itself represents a dramatic context constraint (Heider et al, 1992). Considering the degree of apparent flexibility, the selenoprotein gene expression system reported here will therefore not only aid in unraveling the mechanism of SECIS-dependent Sec insertion in methanogenic archaea, but may also allow the engineering of novel selenoproteins with novel properties (Boschi-Müller et al, 1998), particularly ones requiring reducing and/or anaerobic conditions.
Strains and growth conditions
Strains of Escherichia coli were grown under standard conditions and transformed with plasmid DNA by electroporation (Sambrook et al, 1989). Where appropriate, 100 µg ml−1 ampicillin was added to the medium for the selection of plasmids conferring the corresponding resistance. M. maripaludis strain JJ (DSMZ 2067) (Jones et al, 1983) was cultivated in McSe medium containing 10 mM sodium acetate (Rother et al, 2003). When selecting for pWLG40NZ-R (Lie & Leigh, 2003) and derivatives of it, 0.5 mg ml−1 (agar plates) or 1 mg ml−1 (liquid culture) neomycin was present in the medium, including the experiments addressing reporter stability. To generate selenium-adequate conditions, sodium selenite was added from a sterile anaerobic stock solution to a final concentration of 1 µM. For lower concentrations, the medium containing selenite was diluted with McSe medium. Cultures were pressurized with 2 × 10⁵ Pa of H2:CO2 (80:20), which served as the sole energy source, and incubated at 37°C with gentle agitation. Growth was monitored photometrically at 578 nm (OD578) using a Genesys 20 spectrophotometer (Thermo Fisher Scientific).

[Fig 3 legend (partial): ... (Fig 2A) grown (at steady state) in the absence of added selenium; an unpaired two-tailed t test was performed comparing the respective activity in the same constructs with selenite present (Fig 2A); error bars show a 95% confidence interval of at least four biological replicates; experiment was reproduced at least once. (C) Titration of Bla activity with selenite: specific activity was determined in cleared lysates of variants Pos 3 (red) and Pos 3-S (light red), grown (at steady state) in medium to which selenite had been added to the concentrations indicated; error bars show a 95% confidence interval of at least three biological replicates; experiment was reproduced at least once. (D) Abundance of bla mRNA: samples correspond to the cultures in Fig 2, panel A, and Fig 3, panel B; error bars show a 95% confidence interval of at least three biological replicates; experiment was reproduced at least once; comparison was performed by an unpaired two-tailed t test; ns, not significant; *P < 0.05; **P < 0.01; and ***P < 0.001.]
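For illustration only, the dilution step described above (lower selenite concentrations obtained by mixing the 1 µM selenite-containing medium with selenite-free McSe medium) is a simple C1V1 = C2V2 calculation; the culture volume and the list of target concentrations in the sketch below are assumptions chosen to span the reported 20 nM to 1 µM range.

```python
# Volumes of 1 µM selenite medium needed per 10 ml of culture to reach
# example selenite concentrations across the titration range.
stock_nM = 1000.0   # medium with 1 µM (= 1,000 nM) selenite
total_ml = 10.0     # final culture volume (assumed for illustration)

for target_nM in (20, 50, 100, 500, 1000):
    v_stock = total_ml * target_nM / stock_nM   # C1*V1 = C2*V2
    v_plain = total_ml - v_stock                # selenite-free McSe medium
    print(f"{target_nM:>5} nM: {v_stock:.1f} ml selenite medium + {v_plain:.1f} ml plain medium")
```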
Transformation and plating of M. maripaludis was conducted as described previously (Stock et al, 2010). In vivo labeling of M. maripaludis with [75Se]-selenite and analysis of the selenoproteome were basically conducted as described (Stock et al, 2010). Briefly, M. maripaludis was grown in the medium supplemented with Na-[75Se]-selenite (Eckert & Ziegler) to a final activity of 37 kBq ml−1 (specific activity of 37 GBq mmol−1). After harvesting and washing with McSe medium by centrifugation, cells were lysed in water containing 1 µg ml−1 DNase I and 1 µg ml−1 RNase A. Cell debris was sedimented by centrifugation. Proteins in the supernatant (cleared lysate) were separated by discontinuous denaturing PAGE (SDS-PAGE) (Laemmli, 1970). Autoradiography was conducted by phosphorimaging using a phosphor screen and the Typhoon Trio (GE Healthcare). Migration positions of labeled macromolecules were compared with those of reference proteins (Color Prestained Protein Standard, Broad Range; New England Biolabs).
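As a back-of-the-envelope note (my own check, not a statement from the paper), the labeling parameters given above determine how much selenite the tracer itself contributes: 37 kBq ml−1 at a specific activity of 37 GBq mmol−1 corresponds to roughly 1 µM selenite, i.e., in the range of the selenium-adequate conditions described above.

```python
# Selenite concentration implied by the 75Se labeling conditions (rough check).
activity_per_ml = 37e3     # Bq per ml of culture (37 kBq/ml)
specific_activity = 37e9   # Bq per mmol selenite (37 GBq/mmol)

mmol_per_ml = activity_per_ml / specific_activity   # 1e-6 mmol per ml
umol_per_l = mmol_per_ml * 1e3 * 1e3                # convert to µmol per litre
print(f"{umol_per_l:.1f} µM selenite added with the 75Se tracer")  # ~1.0 µM
```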
Molecular methods and cloning
Standard molecular methods were used for the manipulation of plasmid DNA from E. coli DH10B (Sambrook et al, 1989). Plasmids used in this study are listed in Table 1. All DNA fragments derived from PCR (oligonucleotides used are listed in Table S3) and used for cloning were sequenced by Microsynth Seqlab using the BigDye Terminator Cycle Sequencing protocol. Reporter cassettes in pACYC177 derivatives were amplified with PCR to increase the amount of DNA for cloning into pWLG40NZ-R. The principal reporter construct, flanked by XhoI and BglII restriction sites, respectively, consists of the 5′-region of the S-layer-encoding structural gene (sla) of Methanococcus voltae (Kansy et al, 1994) overlapping the start codon of the bla gene, codon-optimized for M. maripaludis, the sequence encoding the 3′-UTR of M. maripaludis JJ fruA (MMJJ_14570), and the transcription terminator of mcrA of M. voltae (Müller et al, 1985) (Fig 1A). Four variants of the principal reporter were synthesized (General Biosystems, Inc.): the WT and three variants where a different codon within bla was exchanged for TGA (Sec/stop) codons, resulting in the constructs Pos 1 (S46U), Pos 2 (C53U), and Pos 3 (C99U). The fragment for the 3′-UTR of fruA was exchanged after moving the reporter construct to pACYC177 via restriction cloning using HindIII and XhoI, which was done to eliminate interfering restriction sites in the vector backbone. To exchange regions for 3′-UTRs through restriction cloning (NcoI/PciI), double-stranded oligonucleotides were used. To this end, two complementary oligonucleotides (Table S3) containing appropriate overhangs suitable for restriction cloning were annealed, subsequently 5′-phosphorylated using T4 polynucleotide kinase (Thermo Fisher Scientific), and purified with the aid of illustra MicroSpin G-25 columns (GE Healthcare) before ligation. Reporter constructs were moved to pWLG40NZ-R via restriction cloning using XhoI and BglII, except for pEblaWT and pEblaGAA/C (Table 1), which were amplified via PCR (Table S3) before Gibson cloning (Gibson, 2011). The resulting episomal reporter plasmids were used to transform M. maripaludis JJ.
Quantification of Bla
Bla activity in M. maripaludis JJ carrying the reporter plasmids was quantified with nitrocefin (Biomol) as described, except that cells were harvested at an OD578 of ~0.3. Bla activity, determined at 486 nm using a molar extinction coefficient of 20,500 M−1 cm−1, is expressed as milliunits (mU) per mg protein (1 U = 1 μmol nitrocefin cleaved per min). To convert mU into the SI unit nkat, values are multiplied by 0.016. Protein in cell fractions was quantified with the method of Bradford (Bradford, 1976) using bovine serum albumin as standard.
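A minimal sketch of this activity calculation, assuming a 1 cm path length and Beer-Lambert behavior; the assay volume, absorbance rate, and protein amount below are hypothetical, and only the extinction coefficient, the unit definition, and the mU-to-nkat factor come from the text.

```python
def bla_specific_activity(dA486_per_min, assay_volume_ml, protein_mg, path_cm=1.0):
    """Specific Bla activity in mU per mg protein from the nitrocefin assay.

    1 U = 1 µmol nitrocefin cleaved per min; ε(486 nm) = 20,500 M^-1 cm^-1.
    """
    epsilon = 20500.0                                         # M^-1 cm^-1
    rate_M_per_min = dA486_per_min / (epsilon * path_cm)      # Beer-Lambert law
    umol_per_min = rate_M_per_min * (assay_volume_ml / 1000.0) * 1e6  # µmol/min = U
    return umol_per_min * 1000.0 / protein_mg                 # mU per mg protein

# Illustrative numbers (not from the paper):
mU_per_mg = bla_specific_activity(dA486_per_min=0.05, assay_volume_ml=1.0, protein_mg=0.02)
print(f"{mU_per_mg:.0f} mU/mg  =  {mU_per_mg * 0.016:.1f} nkat/mg")  # 0.016 = mU -> nkat
```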
Quantification of mRNA
Quantification of M. maripaludis mRNA was conducted via reverse transcription quantitative PCR (RT-qPCR). Cells were harvested by centrifugation from 2 ml of culture (OD578 ca. 0.3). If not used directly, cell pellets were snap-frozen in liquid nitrogen and stored at −80°C until use. RNA was isolated from the cells using the High Pure RNA Tissue Kit (Roche), lysing cells in lysis/binding buffer for 10 min on ice, and following the manufacturer's instructions including on-column DNase treatment. Eluted RNA preparations were either directly used for the synthesis of cDNA, with gene-specific oligonucleotides (Table S3), or stored at −80°C for later use. The absence of DNA was confirmed via qPCR. The synthesis of cDNA and qPCR, and analysis of the data, were conducted as described (Stock et al, 2011), except that for cDNA synthesis the SuperScript III Reverse Transcriptase (Invitrogen) was used, and that for qPCR the Luna Universal qPCR Master Mix (New England Biolabs) with the qTOWER³ (Analytik Jena) was used. Also, treatment with RNase H after cDNA synthesis was omitted. Specific oligonucleotides (Table S3) for bla were designed with the help of Primer3Plus (Untergasser et al, 2007). For cDNA synthesis of mcrB, encoding the β-subunit of methyl-coenzyme M reductase, the oligonucleotide used is specific for the allele from strain S2 (MMP1555) (Stock et al, 2011) and has one base difference from the corresponding sequence of strain JJ (MMJJ_12810; Table S3). The data were analyzed and normalized to the expression of mcrB as described (Stock et al, 2011). bla mRNA copy numbers are shown per mcrB mRNA copy number.
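The actual quantification and normalization follow Stock et al (2011) and are not reproduced here; as a hedged illustration only, copy numbers derived from gene-specific standard curves could be ratioed as in the sketch below (the Cq values and standard-curve parameters are hypothetical).

```python
def copies_from_cq(cq, slope, intercept):
    """Copy number from a qPCR standard curve: Cq = slope * log10(copies) + intercept.
    Slope and intercept would come from a dilution series of a known standard."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical Cq values and standard-curve parameters (for illustration only)
bla_copies = copies_from_cq(cq=24.3, slope=-3.4, intercept=38.0)
mcrB_copies = copies_from_cq(cq=18.9, slope=-3.3, intercept=37.5)

print(f"bla per mcrB: {bla_copies / mcrB_copies:.3f}")
```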
Immunoblot analysis
For electrophoretic separation of M. maripaludis proteins, 5 ml of culture was harvested by centrifugation and the cells were resuspended in lysis solution (1 µg ml −1 DNase I and 1 µg ml −1 RNase A in water). Separation of cell debris by centrifugation was conducted at 14,000g for 10 min. Separation of proteins in the cleared supernatant via SDS-PAGE and their immunodetection were carried out as described (Oelgeschläger & Rother, 2009), except that the transfer of proteins onto nitrocellulose membranes was achieved by tank blotting in transfer buffer (190 mM glycine, 25 mM Tris base, and 20% [vol/vol] methanol, pH 8.3) (Schmid & Böck, 1984), using a Mini Trans-Blot cell (Bio-Rad) for 1 h at 120 V. For immunodetection, a commercial anti-β-lactamase (α-Bla) polyclonal antibody (AB3738-I; Merck KGaA) was used at a 1:1,000 dilution with a protein A-horseradish peroxidase conjugate (Bio-Rad) as the secondary antibody. Detection was carried out as described (Oelgeschläger & Rother, 2009), except that instead of an X-ray film, the Fusion FX imager (Vilber Lourmat) was used. Migration signals were compared with those of reference proteins (Color Prestained Protein Standard, Broad Range; New England Biolabs).
Statistical analysis
Comparisons between two cohorts with an unpaired two-tailed t test, and other statistical analyses, were conducted using GraphPad Prism version 5.03 (GraphPad Software).
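For readers without Prism, the same kind of comparison can be reproduced with open-source tools; the sketch below uses SciPy with made-up replicate values (the study's actual numbers are in Table S1 and the figures).

```python
import numpy as np
from scipy import stats

# Hypothetical specific activities (mU/mg) for two cohorts of biological replicates
with_secis = np.array([342.0, 355.0, 361.0, 348.0])
without_secis = np.array([63.0, 68.0, 61.0, 66.0])

t, p = stats.ttest_ind(with_secis, without_secis)   # unpaired two-tailed t test
print(f"t = {t:.2f}, P = {p:.4f}")

# 95% confidence interval of the mean, as used for the error bars in the figures
mean = with_secis.mean()
ci = stats.t.interval(0.95, df=len(with_secis) - 1,
                      loc=mean, scale=stats.sem(with_secis))
print(f"mean = {mean:.1f} mU/mg, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```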
|
2022-11-02T07:15:14.427Z
|
2022-10-31T00:00:00.000
|
{
"year": 2022,
"sha1": "7433a7dc6d56d000cdd9659289baf3757daae6b2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a18b4120ed2330f87e423f9d53d628e7c818cb68",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
18038085
|
pes2o/s2orc
|
v3-fos-license
|
A comparative evaluation of two rotary Ni‐Ti instruments in the removal of gutta‐percha during retreatment
Aim: The purpose of this study is to identify an effective method to remove root canal filling material from the root canal system. The study therefore aims to evaluate the cleaning efficacy of two different rotary Ni‐Ti systems, ProTaper Retreatment files and the RaCe system, compared with hand instrumentation with Hedstrom files for the removal of gutta‐percha during retreatment. Materials and Methods: Thirty mandibular premolars with a single straight canal were decoronated, instrumented with ProTaper files, and filled with thermoplastic gutta‐percha. After 30 days, the samples were divided into three groups and gutta‐percha was removed with the test instruments. The postoperative radiographs were evaluated with known criteria by dividing the root into cervical, middle, and apical thirds. The results were tabulated and the Statistical Package for the Social Sciences software (IBM Corporation) was used for analysis. Results: The mean and standard deviation of the results were first calculated, and then the t‐test and analysis of variance test (two‐tailed P value) were used to establish significant differences. The rotary instruments were effective in removing the gutta‐percha from the canals; however, no significant difference was observed between the efficacies of the two rotary systems used. The rotary instruments showed effective gutta‐percha removal in the cervical and middle thirds (P > 0.05), whereas apical debridement was more effective with Hedstrom files. Conclusion: The study recommends the combined use of rotary and hand instrumentation for effective removal of gutta‐percha during retreatment.
INTRODUCTION
The success of a root canal filled tooth depends mainly on the extent of re-cleaning and re-shaping followed by the complete filling of the root canal system. [1] Gutta-percha is the most commonly used material for filling root canals, and it should be removed when retreatment is indicated. [2]

There are various methods used to remove gutta-percha from the canal system; these include hand files, rotary files, as well as ultrasonic instruments. [3] Studies have shown that none of the retreatment procedures is able to completely clean the root canal walls, [3] particularly in the apical third, where microorganisms generally persist. It is considered that the combined use of different techniques is more effective for the complete removal of gutta-percha. [4]

It has been reported in various studies that the use of Ni-Ti instruments for gutta-percha removal during retreatment is safe, fast, and efficient; Ni-Ti also maintains the shape of the root canal, and its use avoids the apical extrusion of debris. [5][6][7] A few similar previous studies have contradicted these findings, reporting that the manual use of Hedstrom files is more effective in the removal of gutta-percha than Ni-Ti rotary systems during retreatment procedures. [8][9][10][11] In addition, many studies have not established the superiority of any single rotary Ni-Ti system in the removal of root canal filling material; all the Ni-Ti rotary systems studied showed no significant difference among themselves in removing gutta-percha. [8][9][10][11][12][13] The purpose of the present study is to compare manual and automated instrumentation techniques for the removal of root canal filling material, as well as to compare the efficiency of automated systems, one of which is a Ni-Ti system especially designed for endodontic retreatment procedures.

Preparing the samples, shaping, and obturation

Thirty extracted single rooted teeth of similar length were selected. The sample size for the study was calculated using Microsoft Excel, and similar previous studies were considered. [3] Soft tissues and calculus were mechanically removed from the root surfaces immediately after extraction. Soft tissue remnants were cleaned by immersing the teeth in 3% sodium hypochlorite for 24 h. [7] The samples were decoronated at the cementoenamel junction with a diamond disc (D&Z, Berlin, Germany), leaving the roots approximately 18 mm in length [ Figure 1]. A working length of 17 mm was established [ Figure 2].

Cleaning and shaping of the canal system was done using the ProTaper system (Dentsply Maillefer) according to the manufacturer's instructions with an X-Smart motor (Dentsply). The preparation was performed in a crown-down technique. Canals were irrigated between instruments with 5.25% NaOCl and 17% EDTA alternately.

The canals were filled in increments using Obtura II, and a hand plugger was used to condense and plug the thermoplastic gutta-percha. Postoperative radiographs were obtained to determine the quality of the root fillings [ Figure 3]. The specimens were sealed with temporary filling material (Cavit, 3M ESPE Dental). The samples were stored at 37°C for 30 days, and then divided into three groups of 10 samples each; each group was treated using a different technique.
Retreatment procedure
The canal filling material in Group I was removed using Hedstrom files; file sizes from #45 to #30 were used. The filling material was removed in a crown-down technique by using the file sizes in a reverse sequence.
Canal filling material in Group II was removed using ProTaper Universal Retreatment files. ProTaper Retreatment files D1, D2, and D3 were used in a crown-down technique: D1 was used for cervical debridement, followed by D2 for the middle third, and D3 was worked to the working length of the canal.
Root canal filling material in Group III was removed using RaCe files; the sequence suggested by the manufacturer was used (9 instruments, with tapers ranging from 2% to 10%).
All the samples were digitally radiographed after the re-treatment procedure with their respective instruments.
Radiograph standardization was maintained: a standard exposure time of 0.08 seconds and a standard distance of 5 cm were used. [6] The digital radiography system used for the study was the Dr. Suni dental radiovisiograph (RVG). The digitized images were analyzed by dividing the canal into coronal, middle, and apical areas [ Figure 3].
RESULTS
The mean and standard deviation [ Table 1] of the three groups were analyzed first, followed by the t-test and the analysis of variance test (two-tailed P value) among the three groups to determine significant differences [ Table 2].
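As an illustration of this statistical workflow (the study used SPSS; the per-sample scores below are made up, and the real data are summarized in Tables 1 and 2), the same tests can be run as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical residual gutta-percha scores for the three groups (10 samples each)
hedstrom = np.array([1, 2, 1, 1, 2, 1, 1, 2, 1, 1])   # Group I
protaper = np.array([2, 2, 3, 2, 2, 3, 2, 2, 3, 2])   # Group II
race = np.array([2, 3, 2, 2, 3, 2, 2, 3, 2, 2])       # Group III

# Pairwise unpaired two-tailed t tests
t, p = stats.ttest_ind(hedstrom, protaper)
print(f"Hedstrom vs ProTaper Retreatment: t = {t:.2f}, P = {p:.4f}")
t, p = stats.ttest_ind(protaper, race)
print(f"ProTaper Retreatment vs RaCe:     t = {t:.2f}, P = {p:.4f}")

# One-way analysis of variance across all three groups
f, p = stats.f_oneway(hedstrom, protaper, race)
print(f"ANOVA: F = {f:.2f}, P = {p:.4f}")
```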
When the results were analyzed, it was noted that there was a significant difference (P < 0.05) between the effectiveness of the manual method using Hedstrom files and that of the rotary files. Both rotary systems were effective in removing the root canal filling material from the canal walls.
No significant difference (P > 0.05) was observed between the ProTaper Retreatment and the RaCe file system in removing gutta-percha from the canal walls. When the cervical, middle, and apical thirds were analyzed separately, it was observed that all the groups performed well in removing the filling material from the canal walls in the cervical region. However, a significant difference (P < 0.05) was recorded when instrumentation was performed in the middle and apical third regions.
The overall time taken to perform the procedure was also comparatively less with the rotary systems than with the manual method. The use of the ProTaper Retreatment files was faster than the RaCe files because of the smaller number of files (3) used for the procedure, compared with the 9 files of the RaCe system.
DISCUSSION
A growing demand to conserve teeth has been seen in recent times, including cases wherein root canal therapy has failed. [14] The choice of treatment for a failing root canal treated tooth is either a surgical procedure (apical surgery or extraction) or nonsurgical retreatment, [15] of which the latter is the most preferred. [16] Filling material left behind after a retreatment procedure may harbor necrotic tissue and bacteria, which could lead to persistent disease and reinfection of the root canal system. [10] The present study was undertaken to determine the effectiveness of gutta-percha removal techniques in root canal retreatment. Complete removal of the old root canal filling material, along with good debridement, is very important for a successful root canal retreatment procedure. Many methods are used during endodontic retreatment, which include endodontic hand files, endodontic rotary files, ultrasonic files, and chemicals such as chloroform, xylene, turpentine, and many others. [17][18][19][20] In this scenario, a number of endodontic rotary file systems have been introduced by different manufacturers promising effective filling material removal from the root canals. Some manufacturers have even introduced dedicated root canal filling material removal systems, such as the ProTaper Retreatment files. However, a few studies point out that the effectiveness of these dedicated retreatment systems is the same as that of standard rotary systems. A study compared the effectiveness of ProTaper and Mtwo with that of the retreatment systems from the same manufacturers, the ProTaper Retreatment and Mtwo Retreatment systems; it concluded that there was no significant difference among the study groups, and that ProTaper and Mtwo were as effective as the ProTaper Retreatment and Mtwo Retreatment file systems. [11] All these systems to some extent challenge the conventionally used hand Hedstrom files, which are used by many clinicians for gutta-percha removal during retreatment procedures. Hence, it becomes necessary for a clinician to assess and know the best technique to employ for the removal of gutta-percha with a rotary or hand file; sometimes the combined use of rotary and hand files might become necessary. This study helps in evaluating the efficiency of two such rotary systems, the ProTaper Retreatment files and RaCe rotary files, along with Hedstrom files.
In the present study, single rooted premolar teeth were selected to reduce the variation in the effectiveness of the technique among the different study groups. The samples were shaped with ProTaper Rotary system and obturated with thermoplastic gutta-percha so that they receive a relatively uniform quantity of filling material in their canals. The samples were randomly divided into three groups and preoperative radiographs were taken. The samples of Group I were re-treated with Hedstrom files, Group II were re-treated with ProTaper Retreatment files and Group III with RaCe rotary system.
The primary outcome of the study is that none of the systems or techniques used was effective in total (100%) removal of the gutta-percha from the root canals of the samples. [21] Hulsmann et al., [22] who also studied the cleaning ability of rotary instruments in retreatment, concluded that no system was 100% effective in gutta-percha removal. This outcome is also supported by other previous studies. [23,24] The present study used a scoring criterion that determined the effectiveness of gutta-percha removal not just in the root canal as a whole but also at different parts of the root canal system. The canals were divided into three parts, the apical, middle, and cervical thirds, and scoring was done depending on the presence of residual filling material. When the groups were compared at these different levels, the effectiveness of the techniques varied between the rotary and the hand methods.
Group II and Group III, which represented the rotary systems, did not show much difference in their efficiency in removing filling material from the middle and cervical thirds of the canals. The mean scores of Group II and Group III, when compared, showed no difference. Both rotary Ni-Ti files were very effective in the middle and cervical third regions. The effectiveness of the rotary files in these areas can be attributed to their greater tapers there; for example, the RaCe files used had tapers of 10% and 8% in the cervical area, thus engaging more of the filling material during the cleaning process. Files such as the RaCe sized 10.40 and 8.35 are made of stainless steel; a few studies have also concluded that stainless steel files have a higher cutting efficiency than Ni-Ti files. [25][26][27] A few recent studies have established the effectiveness of ProTaper Retreatment files in gutta-percha removal; a study that compared ProTaper Retreatment files with RaCe, K3, and Hedstrom files showed their efficacy in retreatment procedures. [12] It has also been estimated that the ProTaper Retreatment system worked faster than the Mtwo Retreatment files in removing root canal obturation material; both retreatment systems were considered to be effective, reliable, and fast. [28] There are a few studies, like those by Hulsmann et al. and Betty et al., that advocate the use of endodontic hand files for the removal of endodontic filling material. [17,29] Studies by Rodig et al. also support the effectiveness of hand files in the removal of gutta-percha. [8,9] In the present study, samples of Group I, where the filling material was removed with Hedstrom files, showed little or no residual gutta-percha in the apical third region. This might be because of the availability of a greater number of files with large tip sizes in the Hedstrom file system, allowing the use of one size larger than the size used in the initial cleaning of the canal and thus helping to engage the whole of the filling material in that area. Hand files also provide an unparalleled tactile feel to the operator, thus helping the operator to better engage the filling material in the apical region.
One limitation of this study is that an in-vitro study does not provide the same conditions as an in-vivo study, even though all steps were taken to reduce errors as much as possible. A standardized method of root canal preparation was employed to minimize the variation among the study groups in the quantity of gutta-percha to be removed. The obturation of the samples was performed with a thermoplastic obturation technique to attain a homogeneous mass of gutta-percha, which eliminates pools of entrapped sealer in the filling and also eliminates any loose filling at the apical third. [30] This limitation points toward the need for further research on the subject and technique of gutta-percha removal. Further research should be oriented toward reproducing in-vivo conditions more closely, to obtain more accurate results and to help the clinician implement the techniques for a more effective retreatment procedure.
Recent studies have compared not just rotary file systems for the removal of root canal obturation material but also the efficacy of reciprocating systems such as WaveOne and Reciproc. One study concluded that both WaveOne and Reciproc were effective but did not completely remove the obturating material from the root canals. [31] These reciprocating systems are no different from rotary systems in gutta-percha removal. [32,33] One big advantage of reciprocating systems is that they do not extrude apical debris as much as rotary systems do during retreatment procedures. [34] Most of the research concerning retreatment procedures is aimed at establishing which type of system is more effective, or which technique is faster, in gutta-percha removal. However, it is also important to guide the clinician in the type of movement or motion in which the systems are to be used for complete gutta-percha removal. Irrespective of the file system used, an adaptive motion that engages all sides of the root canal during the retreatment procedure is more effective than a purely rotational movement. [35]

CONCLUSION

It can be concluded that effective removal of root canal filling material might not be achieved by the use of one system or method alone. A more effective approach to endodontic retreatment would be the combined use of rotary and hand file systems. The rotary system helps achieve complete removal of filling material from the cervical and middle thirds and reaches the apical region faster than hand files; the apical region can then be debrided with hand files, thus completing the filling material removal without leaving behind any residual filling material. [29,[36][37][38]

Financial support and sponsorship

Nil.
Conflicts of interest
There are no conflicts of interest.
|
2018-04-03T02:11:30.344Z
|
2016-08-01T00:00:00.000
|
{
"year": 2016,
"sha1": "34eadf092235f933121e230ebb060f5ccf1e4725",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5022390",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "34eadf092235f933121e230ebb060f5ccf1e4725",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics"
]
}
|
15544318
|
pes2o/s2orc
|
v3-fos-license
|
Replacement of the phospholipid-anchor in the contact site A glycoprotein of D. discoideum by a transmembrane region does not impede cell adhesion but reduces residence time on the cell surface.
The contact site A (csA) glycoprotein of Dictyostelium discoideum, a cell adhesion molecule expressed in aggregating cells, is inserted into the plasma membrane by a ceramide-based phospholipid (PL) anchor. A carboxyterminal sequence of 25 amino acids of the primary csA translation product proved to contain the signal required for PL modification. CsA is known to be responsible for rapid, EDTA-resistant cohesion of cells in agitated suspensions. To investigate the role of the PL modification of this protein, the anchor was replaced by the transmembrane region and short cytoplasmic tail of another plasma membrane protein of D. discoideum. In cells transformed with appropriate vectors, PL-anchored or transmembrane csA was expressed under the control of an actin promoter during growth and development. The transmembrane form enabled the cells to agglutinate in the presence of shear forces, similar to the PL-anchored wild-type form. However, the transmembrane form was much more rapidly internalized and degraded. In comparison to other cell-surface glycoproteins of D. discoideum the internalization rate of the PL-anchored csA was extremely slow, most likely because of its exclusion from the clathrin-mediated pathway of pinocytosis. Thus, our results indicate that the phospholipid modification is not essential for the csA-mediated fast type of cell adhesion but guarantees long persistence of the protein on the cell surface.
The development of single cells of Dictyostelium discoideum into a multicellular organism is mediated by the release and recognition of cAMP as a chemoattractant and by cell-cell adhesion. The contact site A glycoprotein (csA) is the best characterized component of the adhesion systems that are active during the aggregation stage of Dictyostelium development (Gerisch, 1986; Siu et al., 1988). Transcription of the csA gene is induced during the preaggregation phase under the control of cAMP. CsA mediates an EDTA-stable ("Ca2+-independent") type of cell adhesion. In mutants defective in csA, the ability to form EDTA-stable aggregates in agitated cell suspensions is greatly reduced (Noegel et al., 1985; Harloff et al., 1989). However, cells of these mutants are able to aggregate on an agar surface where no shear is applied. From these studies it has been concluded that csA is responsible for a "fast" type of cell adhesion that is essential for the cells to aggregate when they are agitated in suspension, and that other adhesion systems are sufficient to mediate aggregation as long as the cells are not exposed to mechanical stress (Harloff et al., 1989).
The csA protein is modified by two types of oligosaccharide residues (Hohmann et al., 1987a,b). Moreover, the cDNA of csA encodes a hydrophobic COOH-terminal sequence that is characteristic of membrane proteins in which the primary COOH terminus is replaced by oligosaccharide-linked phosphatidylinositol (GPI), which anchors the protein in the plasma membrane (Ferguson and Williams, 1988; Cross, 1990; Thomas et al., 1990). The anchor of the csA protein consists, however, not of the typical GPI but of a ceramide-based phospholipid (Stadler et al., 1989), and is thus the prototype of a novel class of phospholipid (PL) anchors which are also found in yeast (Conzelmann et al., 1992). The anchor in D. discoideum is resistant to phospholipases C that cleave many of the GPI anchors (Stadler et al., 1989). For several GPI-anchored proteins, such as the Thy-1 antigen of lymphocytes, alkaline phosphatase, and the decay accelerating factor, fast lateral diffusion in the plasma membrane has been demonstrated (Woda and Gilman, 1983; Noda et al., 1987; Ishihara et al., 1987; Lisanti et al., 1990a). The csA glycoprotein in D. discoideum is a minor component of the plasma membrane which covers not more than 2 per cent of the cell-surface area (Beug et al., 1973). Therefore, it is reasonable to assume that high mobility of this protein in the membrane is important for rapid matching of interacting molecules on adjacent cell surfaces. The question is whether this high mobility is only guaranteed by the PL anchor.
In the present paper, a carboxyterminal sequence of 25 amino acids, which is sufficient to direct lipid attachment of the csA protein, has been exchanged for a transmembrane region in a fusion protein. We demonstrate that the transmembrane version of the csA protein is capable of mediating EDTA-stable cell adhesion in an agitated suspension, indicating that the phospholipid anchor is not essential for fast adhesion. However, rapid turnover of the transmembrane form indicates that the presence of the anchor guarantees long persistence of the csA glycoprotein on the cell surface.
Vector Constructions
CsA sequences were derived from plasmid p512-6, kindly provided by B. Leiting, carrying the full length 1.8 kb csA cDNA (Noegel et al., 1986). P29F8 sequences were taken from plasmid p56, carrying the full length 1.9 kb P29F8 cDNA (Müller-Taubenberger, 1989). For the construction of DNAs encoding the fusion proteins PC and CP, a unique PstI site was introduced into the coding unit of csA by site-directed mutagenesis with the pMa/c5-8 phasmid system (Stanssens et al., 1989). The PstI site was introduced by the exchange of one nucleotide at position 1466 of the csA cDNA, which converted Glu470 to Ala. The csA sequence 3' of this PstI site was used to replace the 3'-PstI/HindIII fragment in the P29F8 cDNA, resulting in a coding unit for the fusion protein PC. The csA sequence upstream of the introduced PstI site was fused in frame to the 3'-PstI/HindIII fragment of the P29F8 cDNA, resulting in the coding unit for the fusion protein CP. The Ala codon produced by mutation of the csA sequence replaced the Ala codon at the unique PstI site in the P29F8 sequence, so that PC contained the codons for 25 amino acids of csA, whereas in CP 26 amino acids of csA were replaced. The recombinant DNAs were verified by sequencing the fusion regions. For constitutive expression of the encoded proteins, these cDNAs were cloned into one of the following D. discoideum expression vectors in which expression is controlled by the actin 15 promoter (Knecht et al., 1986). (a) For the fusion protein PC the corresponding coding unit was cloned into pDEX RH, kindly provided by J. Faix, Max-Planck-Institut für Biochemie, Martinsried. This vector was derived from pDEX H (Faix et al., 1992) by introducing an EcoRI site into the cDNA expression cassette. (b) For the fusion protein CP the cDNA fragment downstream of the NcoI site in the csA expression vector pDCEV4 (Faix et al., 1990) was replaced by the 3'-NcoI fragment of the CP coding unit. (c) For the wild-type protein P29F8 the complete cDNA of P29F8 was cloned into pDEX RH. (d) For the csA protein the vector pDCEV4 (Faix et al., 1990) was used.
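As a toy illustration of the in-frame fusion logic described above (the sequences below are placeholders, not the real csA or P29F8 cDNAs; only the idea of joining the two reading frames at the shared PstI site reflects the text):

```python
# Toy illustration of joining two open reading frames at a shared PstI site
# (CTGCAG). The sequences are placeholders with the site positioned in frame.
PSTI = "CTGCAG"

csa_cds = "ATGAAA" + "GCT" * 150 + PSTI + "GCA" * 25 + "TAA"     # placeholder csA ORF
p29f8_cds = "ATGGGT" + "GGT" * 140 + PSTI + "GTT" * 60 + "TAA"   # placeholder P29F8 ORF

def fuse_at_site(five_prime_donor, three_prime_donor, site=PSTI):
    """Join the 5' part of one ORF to the 3' part of another at a shared site."""
    upstream = five_prime_donor[: five_prime_donor.index(site) + len(site)]
    downstream = three_prime_donor[three_prime_donor.index(site) + len(site):]
    return upstream + downstream

cp_fusion = fuse_at_site(csa_cds, p29f8_cds)   # csA ectodomain + P29F8 membrane anchor
pc_fusion = fuse_at_site(p29f8_cds, csa_cds)   # P29F8 ectodomain + csA anchor signal
print(len(cp_fusion) % 3 == 0, len(pc_fusion) % 3 == 0)  # reading frame preserved
```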
Transformation of D. discoideum Strains
The strains employed were AX2 clone 214 and HG1287. The latter strain is a parasexual recombinant obtained by E. Wallraff from HG693, a csA-defective mutant of AX2 (Faix et al., 1992). AX2 and HG1287 cells were transformed by the Ca-phosphate method according to Nellen et al. (1984) with pDCEV4 and pDEV/CP. The AX2 transformants were designated as AT-C1 and AT-CP1, respectively; the HG1287 transformants accordingly as HT-C1 and HT-CP7 and 8. HG1287 cells were also transformed with pDEX RH/P29F8 and with pDEX RH/PC, resulting in transformants HT-P3 and HT-PC, respectively. Transformants were selected for G418 resistance using 20 µg/ml of Geneticin (Sigma Chem. Co., St. Louis, MO), and subsequently cloned by spreading onto SM agar plates containing Klebsiella aerogenes. G418-resistant clones were screened for expression of the vector-encoded proteins in growth-phase cells by colony immunoblotting according to Wallraff and Gerisch (1991). The blots were incubated with 125I-labeled mAb 294, which recognizes the protein moiety of the csA glycoprotein, or with 125I-labeled mAb 210, which recognizes an epitope common to type 2 carbohydrate residues of csA and P29F8 (Bertholdt et al., 1985), and autoradiographed.
Culture Conditions for the Analysis of Transformants and Measurement of CeU Adhesion
Cells were cultivated at 23°C axenically in liquid nutrient medium containing 1.8% maltose (Watts and Ashworth, 1970) on a gyratory shaker at 150 revs/min and harvested from this medium during exponential growth at a density of not more than 5 × 10⁶ cells/ml. For starvation and initiation of development, the cells were washed and adjusted to 10⁷/ml in PB (17 mM Soerensen phosphate buffer, pH 6.0), and shaking was continued.
Cell-cell adhesion was determined in an agglutinometer as described by Beug and Gerisch (1972) and Bozzaro et al. (1987). Before the measurement, cells were washed and resuspended at a density of 10⁷/ml in PB, pH 6.0, with or without 10 mM EDTA.
Metabolic Labeling
For labeling with [3H] palmitic acid (Stadler et al., 1984), the concentration of growth-phase cells was adjusted to 10⁶/ml by dilution into fresh nutrient medium. 1 mCi of [3H] palmitic acid (New England Nuclear, Boston, MA; NET 043, 30 Ci/mmol) was added to 5 ml aliquots of cells and incubated for 16 h. Then the cells were washed twice with PB, pH 6.0, collected by centrifugation, and frozen at -80°C.
For pulse-chase labeling of developing cells with [35S] methionine (Amersham Corp., Arlington Heights, IL, 10 mCi/ml), growth-phase cells were washed twice and resuspended in PB, pH 6.0, at a density of 10⁸/ml. 0.5 mCi/ml of [35S] methionine was added and the cells were incubated for 30 min. For the chase, the cells were washed twice and resuspended at 10⁷/ml in PB, pH 6.0, containing 2 mM of unlabeled methionine. Aliquots of cells were collected by centrifugation after various times of chase and frozen at -80°C.
Cell-Surface Biotinylation and Assay for Internalization
For cell-surface biotinylation, growth-phase cells were washed twice and resuspended in ice-cold PB, pH 8.0, at a density of 10⁸/ml. Sulfosuccinimidyl 2-(biotinamido)ethyl-1,3-dithiopropionate (NHS-SS-biotin, a reducible analog of sulfo-NHS-biotin; Pierce, Rockford, IL) was added to a final concentration of 0.25 mg/ml, and the cells were shaken for 30 min at 0°C and 150 revs/min. After labeling, the cells were washed twice and resuspended at a density of 10⁷/ml in ice-cold PB, pH 7.2, containing 40 mM NH4Cl. NH4Cl was added to the buffer to inhibit lysosomal degradation of internalized proteins (Seglen and Reith, 1976). Aliquots of the suspensions were kept on ice or incubated in the same buffer at 23°C and 150 revs/min to allow endocytosis to occur. After various times, endocytosis was inhibited by chilling the suspensions in ice water and the cell-surface label was stripped by reduction with glutathione as follows: cells were collected by centrifugation, resuspended at a density of 10⁷/ml in ice-cold glutathione solution (50 mM glutathione, 75 mM NaCl, 75 mM NaOH, 1% BSA) and incubated for 30 min at 150 revs/min in ice water. After reduction, the cells were washed twice in ice-cold PB, pH 7.2, containing 40 mM NH4Cl, collected by centrifugation and frozen at -80°C.
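A hedged sketch of how the readout of such an internalization assay is commonly analyzed; the normalization to a non-stripped control and all band intensities below are assumptions for illustration, since only the densitometric scanning, not this calculation, is described in the text.

```python
def percent_internalized(signal_after_strip, signal_no_strip, background=0.0):
    """Fraction of surface-biotinylated protein that became protected from
    glutathione stripping (i.e., was internalized) at a given chase time."""
    return 100.0 * (signal_after_strip - background) / (signal_no_strip - background)

# Hypothetical band intensities (arbitrary densitometry units) for one protein
timepoints_min = [0, 15, 30, 60]
stripped = [2.0, 8.0, 15.0, 25.0]   # glutathione-resistant (internalized) signal
unstripped = 100.0                   # total surface label, control kept on ice

for t, s in zip(timepoints_min, stripped):
    print(f"{t:>3} min: {percent_internalized(s, unstripped, background=2.0):.1f}% internalized")
```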
Immunoprecipitation of Proteins from Cell Lysates
CsA and CP were precipitated with mAb 71, which recognizes the protein moiety of the native csA glycoprotein (Bertholdt et al., 1985). P29F8 and PC were precipitated with mAb 210. All procedures were carried out at 4°C. Cells were freeze-thawed at concentrations varying from 6 × 10⁶ to 10⁸/ml, and the lysates were subsequently incubated for 30 min with 1% (vol/vol) octyloligooxyethylene (Bachem, CH-4416 Bubendorf, Switzerland) in 10 mM Hepes-NaOH buffer, pH 7.5, containing 1 mM DTT (lysis buffer). Extracts were cleared by centrifugation, supplemented with 150 mM NaCl, and incubated for 2 h with 150 µg/ml of purified IgG. For precipitation, protein A-Sepharose CL-4B beads (Pharmacia LKB Biotechnology, Piscataway, NJ) were added in excess (300 µl swollen beads + 1.2 ml DTT-Hepes-NaCl per 150 µg of IgG). The samples were agitated to keep the beads suspended during the incubation with antibody. After 1 h the beads were collected by centrifugation at 600 g for 30 s, washed four times with sevenfold the volume of lysis buffer supplemented with 150 mM NaCl, and boiled for 3 min in SDS-sample buffer (Laemmli, 1970). Proteins from NHS-SS-biotin labeled cell lysates were immunoprecipitated without the addition of DTT to the lysis buffer, and the SDS-sample buffer contained no β-mercaptoethanol.
After immunoprecipitation of proteins from [3H] palmitic acid-labeled cell lysates, beads were boiled in SDS sample buffer without glycerol and bromophenol blue, and the supernatants were acidified to 0.1 N HCl (Stadler et al., 1989). Noncovalently bound lipids were removed by chloroform/methanol extraction of the proteins from the acidified solution (Wessel and Flügge, 1984).
Gel Electrophoresis, Fluorography, and Immunoblotting
For SDS-PAGE (Laemmli, 1970), either total cell homogenates or immunoprecipitates were used. If not stated otherwise, equivalents of los cells were applied per lane in 7.5% or 10% gels and proteins were transferred to Schleicher and Schuell (Keene, NH) HA 85 nitrocellulose (Vaessen et al., 1981). For fluorography of the [3H] or the [35S] label, the filters were dipped into 20% (wt/vol) 2,5-diphenyloxazole in toluene, dried, and exposed at -70°C on Kodak XAR-5 film. 2,5-Diphenyloxazole was removed by washing the filters with toluene for subsequent immunolabeling. Antibodies applied to immunoblots were mAb 294, which recognizes the polypeptide chain of denatured csA (Bertholdt et al., 1985), mAb 353 against N-linked oligosaccharides of csA (Faix et al., 1990), mAb 210 against O-linked oligosaccharides of both csA and P29F8 (Bertholdt et al., 1985), mAb 200 against severin as a control (André et al., 1989), and polyclonal pAb P against the carboxyterminal end of P29F8. The pAb P serum was obtained by immunizing a rabbit with a synthetic peptide, representing the sequence of the 17 carboxyterminal amino acids of the protein, bound via a K-G-G spacer to a palmitoyl residue, together with Freund's adjuvant (Müller-Taubenberger, 1989). Blotted proteins were directly or indirectly labeled with 125I-labeled antibodies. For quantitative comparisons, the glycoprotein bands were scanned on fluorograms or autoradiograms in two dimensions on an Elscript-400 electrophoresis scanner (Hirschmann Gerätebau, 82008 Unterhaching, Germany) using a round aperture of 0.6 mm diam.
Subcellular Fractionation and Enzyme Assays
3 × 10^8 cells were starved for 4 h in 30 ml PB, pH 7.2, in the presence of 40 mM NH4Cl to inhibit degradation of proteins by lysosomal enzymes. Subsequently, the cells were cooled to 0°C and lysed in fractionation buffer: PB, pH 7.2, with 40 mM NH4Cl, 0.25 M sucrose and protease inhibitors (10 mM benzamidine, 28 µg per ml of aprotinin, and a 100-fold dilution of a stock solution containing 50 µg bestatin, 100 µg pepstatin, 100 µg antipain, 100 µg leupeptin in 1 ml of methanol). Lysis was obtained by three passages of the cells through Nucleopore filters with pore diameters of 5 µm. Unbroken cells and nuclei were removed by centrifugation at 3,000 g for 5 min. Membranes and intracellular vesicles were collected by centrifugation of the supernatant at 15,000 g for 30 min. The 15,000 g pellet was resuspended in 1 ml fractionation buffer and loaded on top of 39 ml of the buffer containing 22% Percoll (Pharmacia LKB Biotechnology). After centrifugation at 23,000 rpm for 30 min in a Kontron VTi50 rotor, the gradient was fractionated from the bottom of the tube into 19 equal fractions. The fractions were assayed for enzyme activities or examined by SDS-PAGE and immunoblotting.
Triton X-100 was added to the fractions to a final concentration of 0.5%. Acid phosphatase (EC 3.1.3.2) and alkaline phosphatase (EC 3.1.3.1) were assayed according to Loomis and Kuspa (1984) and Loomis (1969). Percoll did not affect the enzyme activities. After removal of precipitated Percoll by centrifugation, the absorption of the supernatant was measured at 410 nm.
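The marker-enzyme readings are most easily compared across the gradient as relative activity profiles. A minimal sketch of that normalisation is given below; the function and the fraction readings are illustrative assumptions (hypothetical A410 values), not part of the original protocol.

```python
# Minimal sketch with hypothetical numbers: converting blank-corrected A410
# readings from the phosphatase assays into percent-of-total activity per
# Percoll gradient fraction, so that the lysosomal (acid phosphatase) and
# plasma-membrane (alkaline phosphatase) peaks can be located and compared
# with the csA/CP immunoblot distributions.
def activity_profile(a410, blank=0.02):
    corrected = [max(a - blank, 0.0) for a in a410]
    total = sum(corrected) or 1.0
    return [round(100.0 * a / total, 1) for a in corrected]

# Hypothetical readings for the 19 fractions, bottom (dense) to top (light):
acid_phosphatase = activity_profile(
    [0.10, 0.35, 0.62, 0.55, 0.30, 0.15, 0.08, 0.05, 0.04, 0.04,
     0.03, 0.03, 0.03, 0.03, 0.03, 0.02, 0.02, 0.02, 0.02])
alkaline_phosphatase = activity_profile(
    [0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.18, 0.30, 0.52, 0.60,
     0.45, 0.25, 0.12, 0.08, 0.06, 0.05, 0.04, 0.03, 0.03])
print(acid_phosphatase.index(max(acid_phosphatase)) + 1)          # fraction holding the lysosomal peak
print(alkaline_phosphatase.index(max(alkaline_phosphatase)) + 1)  # fraction holding the plasma-membrane peak
```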
Construction of a csA Fusion Protein That Is Endowed with a Transmembrane Sequence
To obtain a protein in which the phospholipid anchor of the csA protein is replaced by a transmembrane polypeptide sequence, fusions were designed between cDNA fragments that code for portions of the csA sequence and fragments of a D. discoideum cDNA clone that, according to its isolation number, has been referred to as P29F8 (Fig. 1). The P29F8 cDNA codes for a polypeptide of 58 kD with three domains: a large extracellular NH2-terminal region, one transmembrane stretch, and a small cytoplasmic domain. The protein is N- and O-glycosylated, giving rise to a product with an apparent molecular mass of 100 kD in 10% SDS gels. Under the same conditions, csA forms a band corresponding to a glycoprotein of 80 kD. Although the function of the protein encoded by this clone is unknown, P29F8 offers itself for fusion with csA for two reasons. The transcript is similarly regulated during development as the csA transcript; both messages are absent from growth-phase cells and are maximally expressed during the aggregation phase (for csA see Murray et al., 1981; Noegel et al., 1986; for P29F8 see Müller-Taubenberger, 1989). Elimination of P29F8 by gene disruption or overexpression of this protein in growth-phase cells has revealed no activity of the protein in cell adhesion (Müller-Taubenberger, 1989; Barth, 1992). Thus, P29F8 sequences will have no impact on the measurement of cell adhesion in transformants.
Our goal was to construct a csA-fusion protein with the phospholipid anchor replaced by a transmembrane sequence but missing only a minimal portion of the csA sequence. Ideally, this portion should comprise not more than the COOH-terminal end of the csA protein which is required for attachment of the anchor. As shown by construction of the fusion protein PC (Fig. 1), the very last 25 amino acids of the csA protein are sufficient for phospholipid anchorage. This result implies that a cleavage site for replacement of the COOH terminus by an anchor resides within this short COOH-terminal sequence of the primary translation product.
For use in cell adhesion studies the fusion protein CP was constructed (Fig. 1). This protein contained the entire csA sequence except for 26 amino acids of the COOH terminus (26 instead of 25 residues were removed for technical reasons as explained in Materials and Methods). These amino acids were replaced by 75 COOH-terminal residues of P29F8 which encompass the putative transmembrane and cytoplasmic domains of this protein.
No incorporation of [3H] palmitic acid into CP was detected, which indicates the lack of a phospholipid anchor (Fig. 2). In accord with these results, CP was recognized by rabbit antibodies pAb P that had been raised against a synthetic peptide corresponding to the COOH-terminal end of P29F8 (Fig. 3). This label confirms that no replacement of the COOH terminus of the fusion protein by a phospholipid anchor had occurred and provides evidence that CP is inserted into the membrane by the transmembrane sequence of P29F8.
The normal csA protein is modified by N-and O-linked oligosaccharide residues, referred to as carbohydrates 1 and 2. To prove that CP expressed in growth-phase cells is endowed with both types of carbohydrate residues, proteins from the CP-expressing cells were labeled with mAbs 353 and 210 (Fig. 3). mAb 353 is directed against the fucosylated, N-linked type 1 carbohydrate chains (Faix et al., 1990), mAb 210 against type 2 carbohydrate residues which are posttranslationally added during passage of the csA protein through the Golgi apparatus (Hohmann et al., 1987a,b). Both antibodies recognized CP. The band formed by CP in 7.5% SDS gels indicated a glycoprotein with an apparent molecular mass of 87 kD (Fig. 3).
Comparison of EDTA-Stable Cell Adhesion Mediated by Phospholipid Anchored csA and by the Transmembrane Fusion Protein CP
In wild-type cells, csA as well as P29F8 are synthesized only during development, so that fusion proteins can be examined in growth-phase cells of transformants without any interference by the endogenous protein. Wild-type AX2 cells were transformed in parallel with appropriate vectors so that they expressed either csA or CP during their growth phase. For comparing cell adhesion, a pair of transformants was chosen which expressed csA or CP in about equal amounts (Fig. 4 A). In contrast to untransformed growth-phase cells, which do not adhere to each other in the presence of EDTA, strong EDTA-stable adhesiveness was found in both transformants (Fig. 4 B). In fact, we found no difference in the strength of adhesiveness by an agglutinometer assay (Fig. 4 C). This result indicates that EDTA-stable cell adhesion, as measured in an agitated suspension, is not significantly reduced when the adhesion protein is inserted into the plasma membrane by a membrane-spanning polypeptide sequence.

Figure 1 (legend fragment). … (PC and CP). The abbreviations used throughout the paper are put into brackets. PT, proline/threonine rich domain of the csA protein; l, COOH-terminal hydrophobic amino acids of csA. Attachment or absence of the phospholipid anchor (PL) as determined by [3H] palmitic-acid incorporation is indicated on the right. (B) cDNA-derived sequence of the P29F8 protein and of the COOH-terminal sequences of the fusion proteins. The putative hydrophobic leader of the P29F8 translation product (residues 1-19) is boxed. The putative transmembrane domain of P29F8 (residues 488-510) and the hydrophobic COOH-terminal stretch in the csA sequence are indicated by single underlines. Proline/threonine rich sequences in both proteins are indicated by broken underlines. The amino-acid sequence of a peptide used to raise the polyclonal antibodies pAb P is indicated by double underlines. In the fusion proteins PC and CP, amino acids from P29F8 are shown in bold letters, those from csA in italics. *, fusion points of the P29F8 and csA sequences.
In an attempt to extend these studies to developing cells, which in the wild-type produce their own csA, we have transformed cells of the csA-negative mutant HG1287 with the same vectors as used for the assay of cell adhesion in wild-type growth-phase cells. Surprisingly, if transformants expressing csA or CP were compared at 9 h of development, EDTA-stable cell adhesiveness was found to be weaker in the latter (Fig. 5 A). This observation found its explanation when, in the same cultures, the amounts of proteins were determined. Immunoblotting showed a decrease in CP during the first 9 h of development which was not paralleled by a decrease in csA (Fig. 5 B).
To examine whether the decrease in the amount of CP during development is due to its rapid degradation, csA, CP and P29F8 were pulse labeled with [35S]methionine at the beginning of development. At intervals, aliquots of the labeled cells were lysed, the three proteins were immunoprecipitated in parallel, and the radioactivity was determined in each of them. Fig. 6 shows that csA had a half-life of more than 5 h in the starved, developing cells, whereas CP and P29F8 had half-lives of less than 2 h. From this result we conclude that the different amounts of csA and CP present in developing cells are steady-state levels which are primarily determined by strong differences in the turnover rates of these two proteins. CP may be released into the medium, either intact or in fragmented form, or may be internalized and degraded in lysosomes. The following experiments were performed to distinguish between these possibilities.
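Half-lives of this kind are usually read off a pulse-chase curve by fitting first-order decay to the relative label remaining at each chase time. The short sketch below illustrates that calculation; the time points and label values are hypothetical, not the measurements behind Fig. 6.

```python
import numpy as np

# Half-life from pulse-chase data, assuming first-order decay
#   L(t) = L(0) * exp(-k * t)   =>   t_1/2 = ln(2) / k.
# Times are hours of chase; labels are band intensities relative to the
# 1-h value (taken as 100%), as in the scanning procedure described for Fig. 6.
def half_life_hours(times_h, relative_label):
    t = np.asarray(times_h, dtype=float)
    y = np.log(np.asarray(relative_label, dtype=float))
    slope, _ = np.polyfit(t, y, 1)      # slope of ln(label) vs. time equals -k
    return np.log(2.0) / -slope

# Hypothetical curves: a fast-turnover protein (CP-like) and a stable one (csA-like).
print(half_life_hours([1, 2, 3, 5], [100, 70, 49, 24]))   # roughly 2 h
print(half_life_hours([1, 2, 3, 5], [100, 92, 85, 70]))   # roughly 7-8 h
```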
Rapid Internalization of the Fusion Protein CP Is the Critical Event Leading to Degradation
The extent of release from the cells was examined by probing for csA and CP in the extracellular medium. Blots were labeled with mAbs 294, 353, and 210, for both proteins, and with pAb P for the COOH-terminus of CP (see Fig. 3). All the antibodies revealed the same result: compared to the amounts of the two proteins that were associated with the cells, very little was detected in the extracellular medium in the form of intact proteins or large fragments. The amount of CP released was in the same range as the quantity of csA normally shed-off from the cells (data not shown).
To test whether CP is more rapidly internalized than csA, proteins located at the cell surface were pulse-labeled with NHS-SS-biotin, a reducible derivative of biotin which does not pass the plasma membrane (Lisanti et al., 1990b). The labeled cells were incubated with 40 mM NH4Cl to inhibit proteolytic degradation of internalized proteins by elevating the lysosomal pH (Seglen and Reith, 1976). At intervals of 0-60 min, aliquots of cells were treated with glutathione to remove cell-surface exposed biotin by reduction of the disulfide bond (Bretscher and Lutter, 1988). Inaccessibility of the biotin label is taken as evidence that the protein has been internalized.

Figure 3. Presence of the P29F8 COOH-terminal sequence and of carbohydrate modifications in the fusion protein CP. Total proteins from untransformed AX2 cells or from cells of the transformant AT-CP1 were separated by SDS-PAGE, blotted, and labeled with the antibodies indicated on bottom. The AX2 cells were harvested at 6 h of starvation (6) when csA and P29F8 were fully expressed; the AT-CP1 cells were harvested at the growth phase (0) when only the vector-encoded fusion protein was present. Equivalents of 10^6 cells were applied per lane. csA and CP were recognized by the csA-specific antibody mAb 294. CP but not csA was labeled by the anti-peptide antibody pAb P which recognizes the COOH-terminal sequence of P29F8 (Fig. 1 B). mAb 353 and mAb 210 label N- and O-linked carbohydrate chains, respectively. The 100-kD band recognized by pAb P and mAb 210 represents the endogenous glycoprotein P29F8. The band of a 29-kD glycopeptide in the transformant is obviously a COOH-terminal degradation product of CP which carries O-linked but no N-linked carbohydrate residues. This fragment accumulates during turnover of the protein because the carbohydrate-decorated portion of the polypeptide chain is protected against proteolysis (Hohmann et al., 1987b).
Scanning of the blots shown in Fig. 7 (upper panels) showed that within 1 h only 6 per cent of the biotin-labeled csA, but 56 per cent of the biotin-labeled CP was internalized. The total amounts of csA and CP in the cells were about the same. These amounts remained almost constant during the experiments, as indicated by labeling the proteins with mAb 294 (Fig. 7, lower panels). These results demonstrate that the fusion protein CP is more rapidly internalized than the phospholipid-anchored csA.

Figure 5 (legend fragment). (A) Cells were harvested immediately after removal of nutrient medium (0) or at 9 h of starvation (9) and incubated in the agglutinometer with 10 mM EDTA. Photographs were taken after 1 h in the agglutinometer. The bar indicates 100 µm. (B) Total proteins of cells from the same cultures as in A were subjected to SDS-PAGE and immunoblotting. Cells were harvested at 0, 3, 6, and 9 h of starvation, and equivalents of 10^6 cells were applied per lane. The blot was incubated with anti-csA mAb 294 and subsequently with severin-specific mAb 200. The constitutively expressed actin-binding protein severin served as a reference.
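In this protection assay, the internalised fraction is simply the biotin signal that survives glutathione stripping, corrected for the small stripping-resistant background of the zero-time control. A minimal sketch of that arithmetic follows; the densitometry values are hypothetical and only illustrate how figures such as the 6 and 56 per cent values above are obtained.

```python
# Percent of surface-labeled protein internalised after a chase, from the
# NHS-SS-biotin / glutathione-stripping assay.  `protected` is the band
# intensity remaining after stripping, `total` the intensity without stripping;
# the 0-min pair corrects for incomplete reduction at the cell surface.
def percent_internalized(protected_t, total_t, protected_0, total_0):
    background = protected_0 / total_0
    return 100.0 * (protected_t / total_t - background)

# Hypothetical scans after a 60-min chase:
print(percent_internalized(protected_t=58, total_t=100, protected_0=2, total_0=100))  # CP-like, ~56 %
print(percent_internalized(protected_t=8,  total_t=100, protected_0=2, total_0=100))  # csA-like, ~6 %
```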
To examine whether internalized CP is transported to dense endosomes and secondary lysosomes, the distributions of csA and CP were compared after 4 h of starvation in the presence of 40 mM NH4Cl in order to inhibit degradation of the glycoproteins (Bush and Cardelli, 1989). At this concentration the sorting of enzymes into lysosomes is not blocked by NH4Cl. The peak of csA coincided with the peak of the plasma-membrane marker alkaline phosphatase (Fig. 8, A and C). In contrast, the peak of CP was found in fractions of higher density where also high activity of acid phosphatase was recovered (Fig. 8, A and B). The relatively high csA content and alkaline phosphatase activity in high-density fractions (Fig. 8, A and C) was probably caused by contamination of these fractions with cytoskeleton-associated remnants of the plasma membrane.

Figure 6 (legend fragment). …mant HT-P3, and the two CP expressing transformants HT-CP7 and HT-CP8 were labeled with [35S]methionine immediately after removal of nutrient medium. After 30 min of incubation cells were chased with excess unlabeled methionine. From aliquots taken at the times indicated, proteins of detergent-solubilized cell lysates were precipitated with mAb 71 for csA and CP, or with mAb 210 for P29F8. The precipitates were subjected to SDS-PAGE and blotting. After fluorography for [35S], the films were scanned and label in the glycoprotein bands given as relative absorption (A), taking the density at 1 h of chase as 100%. In case of HT-C1, HT-P3, and HT-CP8 each value represents the mean of two pulse/chase experiments, and vertical bars denote the difference between the mean and the actual values. For HT-CP7 the data of one pulse/chase experiment are shown.

Figure 8 (legend fragment). To determine the distribution of csA and CP, aliquots of the fractions were subjected to SDS-PAGE and immunoblotting with mAb 294. (B) The same fractions were assayed for acid-phosphatase activity as a marker for lysosomes, and (C) for alkaline phosphatase activity as a plasma-membrane marker.
Cell Adhesion Mediated by a Transmembrane Derivative of the csA Protein
The fusion protein used in this study was based on the finding that the sequence of the 25 carboxyterminal amino acids of the primary translation product is responsible for modification of the csA glycoprotein by a PL anchor. Despite the differences in structure between the ceramide-based anchor of the Dictyostelium protein (Stadler et al., 1989) and the GPI anchors of protozoan and mammalian proteins, the carboxyterminal sequences of three phospholipid anchored Dictyostelium proteins, including csA, fit into a common scheme (Fig. 9). Serine or glycine at the predicted cleavage (ω) site to which the anchor is attached, and alanine or serine at the ω + 2 site agree with the requirements for GPI modification (Moran et al., 1991; Gerber et al., 1992; Kodukula et al., 1993; Micanovic et al., 1990).
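The ω/ω+2 criterion quoted above can be expressed as a simple sequence check. The sketch below is only an illustration of that rule as stated in the text; the candidate sequence and the position of the putative cleavage site are hypothetical, and this is not a general GPI-prediction algorithm.

```python
# Check a candidate COOH-terminal sequence against the simple rule cited in the
# text: Ser or Gly at the predicted cleavage (omega) site, and Ala or Ser at the
# omega + 2 position.  `omega_index` is the 0-based position of the putative
# cleavage site within the supplied sequence.
def fits_omega_rule(cterm_seq, omega_index):
    omega = cterm_seq[omega_index]
    omega_plus_2 = cterm_seq[omega_index + 2]
    return omega in ("S", "G") and omega_plus_2 in ("A", "S")

# Hypothetical C-terminal stretch: a proline/threonine-rich region followed by a
# hydrophobic tail, with cleavage assumed after the fourth residue.
print(fits_omega_rule("TTPSAATSAAALLVLLFF", omega_index=3))   # True (S at omega, A at omega+2)
```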
A variety of other cell-adhesion proteins are expressed on cell surfaces in a GPI-anchored form (Gennarini et al., 1989; Hortsch and Goodman, 1990; Ranscht and Dours-Zimmermann, 1991) or in both a transmembrane and a GPI-anchored version (Dustin et al., 1987; Hortsch and Goodman, 1991), suggesting that GPI anchors are relevant to the function of these proteins in mediating cell interactions. One possibility is that a PL anchor leads to clustering of the protein by hydrophobic interactions in the lipid phase of the plasma membrane, and thus to local increases in the strength of cell adhesion. Another possibility is that the mobility of a protein in the membrane is critical for its function in cell adhesion.
High lateral mobility has been established for some GPI-anchored proteins (Woda and Gilman, 1983; Ishihara et al., 1987; Noda et al., 1987). Differences in the strength of adhesion have been found for the GPI-anchored and the transmembrane isoforms of the cell adhesion molecule LFA-3 (Chan et al., 1991; Tözeren et al., 1992). Adhesion of cells to a planar bilayer containing the GPI isoform has been stronger than to a bilayer containing the transmembrane isoform. Lowering of the difference with higher LFA-3 density and also with prolonged time of contact has suggested an influence of the mobility of the molecule on the strength of adhesion (Chan et al., 1991).
To examine whether the PL-anchor has an impact on the csA-mediated "fast type" of cell adhesion as it is observed in agitated suspensions of Dictyostelium cells (Stadler et al., 1989;Harloff et al., 1989), the anchor of csA has been replaced by the carboxyterminal tail of another D. discoideum protein, which comprises a transmembrane region.
The strong agglutination of the transformed cells has provided evidence that the chimeric transmembrane protein is capable of mediating EDTA-stable cell adhesion, similar to the normal PL-anchored csA molecule. No substantial difference in the adhesion capacity of cells containing equal amounts of either the PL-anchored or the transmembrane form of csA has been measured under the influence of shear force. The vectors used for expression of the transmembrane and PL-anchored form of csA in the growth-phase cells of D. discoideum tend to integrate in multiple copies into the genome. To simulate conditions in wild-type cells as closely as possible, we have chosen transformants which express less than threefold the amount of csA that wild-type AX2 expresses during the aggregation stage. Using these cells we cannot exclude an effect of the anchor in case that csA is present at lower densities on the cell surface. Nevertheless, the finding showing that PL-anchorage is not necessary for rapid adhesion has prompted us to search for other functions of the PL-anchor.

Figure 9. Sequence comparison of three PL-anchored proteins from D. discoideum. csA, contact site A protein as studied in this paper. The horizontal arrow indicates the COOH-terminal sequence sufficient for PL-modification. gp130, cell surface glycoprotein putatively implicated in sexual cell fusion (Fang et al., 1993) and in cell adhesion (Gerisch et al., 1993). PsA, prespore-specific antigen of the slug stage (Early et al., 1988). The vertical arrow indicates the cleavage site established by sequencing of the PL-anchored protein PsA (Gooley et al., 1992). The hydrophobic COOH-terminal regions are underlined, the proline rich regions near the anchor attachment site are boxed.
Phospholipid Anchorage Prevents Rapid Internalization and Subsequent Degradation of the Protein
The PL-anchored csA protein has a half-life of more than 5 h as determined by pulse-chase labeling (Fig. 6). This unusually long lifetime is based on a slow rate of internalization: only 6 per cent of the protein labeled on the cell surface is endocytosed during the first hour of starvation (Fig. 7). These data are in accord with results of Hohmann et al. (1987b) who observed a 20 per cent reduction in the amount of csA within 4 h of cycloheximide treatment of starved D. discoideum cells. In contrast, the half-life of the transmembrane version of the csA protein has been less than 2 h, and about 50 per cent of the protein becomes internalized during the first hour of starvation. Thus the difference in stability of the PL-anchored and transmembrane forms of the csA protein is clearly due to their different susceptibilities to the endocytotic pathway. The question is whether exclusion of the PL-anchored csA protein from endocytotic vesicles is responsible for this difference, or accumulation of the transmembrane protein in clathrin-coated vesicles. The latter is unlikely since the cytoplasmic tail of the fusion protein has in its sequence of 34 amino acids none of the known endocytosis signals (Ktistakis et al., 1990; Trowbridge, 1991; Vaux, 1992). Furthermore, even higher rates of internalization than found for the transmembrane form of the csA protein have been measured for surface glycoproteins and total plasma membrane of D. discoideum cells (Thilo and Vogel, 1980; de Chastellier et al., 1983). These rates correspond to internalization of the total surface area once every 20-45 min. Almost all of the pinocytotic activity in these cells is due to clathrin-coated vesicles (O'Halloran and Anderson, 1992).
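For comparison, the measured internalisation rates can be converted into the time needed to turn over one cell-surface equivalent, assuming a roughly constant rate over the interval. The sketch below is only a back-of-the-envelope illustration using the percentages quoted above.

```python
# Time to internalise one surface equivalent, assuming a constant rate:
# if a fraction f of the surface label disappears in t minutes,
# one surface equivalent takes roughly t / f minutes.
def surface_turnover_min(fraction_internalized, interval_min=60):
    return interval_min / fraction_internalized

print(round(surface_turnover_min(0.56)))   # ~107 min for the transmembrane fusion CP
print(round(surface_turnover_min(0.06)))   # ~1000 min for the PL-anchored csA
# Published whole-membrane rates (one surface equivalent every 20-45 min) are
# faster still, so even CP is not internalised unusually fast for this cell type.
```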
In the light of these data the persistence time of the PL-anchored csA protein on the cell surface is exceptionally long, which suggests that this protein is excluded from the coated pits. Comparable results showing slow internalization rates have previously been obtained for the GPI-anchored decay accelerating factor in HeLa and MDCK cells (Lisanti et al., 1990b) as well as for T-cell activating protein and the Thy-1 antigen in lymphocytes (Bamezai et al., 1989; Lemansky et al., 1990). Even GPI-anchored proteins that play a key role in the endocytotic process which mediates uptake of small molecules by plasmalemmal invaginations called caveolae (Rothberg et al., 1990; for review see Hooper, 1992) are excluded from endocytosis by clathrin-coated vesicles (Bretscher et al., 1980; Keller et al., 1992).
Switching between a PL-anchored and a transmembrane isoform may provide a sort of regulation of cell adhesion. Expression of the different N-CAM isoforms, for example, is tightly regulated during embryogenesis (Hemperly et al., 1986; Covault et al., 1986; Gennarini et al., 1986). The PL-anchor guarantees a long half-life of the protein on the cell surface (Lemansky et al., 1990). By endocytosis of a transmembrane isoform, cell adhesion can be rapidly downregulated (Bailey et al., 1992). In D. discoideum, which lives in the soil, long persistence of the csA protein has the obvious advantage of keeping cells in an adhesive state when cells are too dispersed to complete aggregation within a few hours.
|
2014-10-01T00:00:00.000Z
|
1994-01-01T00:00:00.000
|
{
"year": 1994,
"sha1": "24f44dddbedb9d1534fbd8e1ddad7f398b87e1bf",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/124/1/205/1261098/205.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba07ade9f240c3c0e5dd5f629b19db6f21522888",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
250329362
|
pes2o/s2orc
|
v3-fos-license
|
Stressful Life Events and Chronic Fatigue Among Chinese Government Employees: A Population-Based Cohort Study
Background Currently, evidence on the role of stressful life events in fatigue among Chinese working adults is lacking. This study aimed at exploring the prospective associations between stressful life events and chronic fatigue among Chinese government employees. Methods From January 2018 to December 2019, a total of 16,206 government employees were included at baseline and were followed up until May 2021. A digital self-reported questionnaire platform was established to collect information on participants' health and covariates. Life events were assessed by the Life Events Scale (LES); fatigue was assessed using a single item measuring the frequency of its occurrence. Binary logistic regression analysis was used for the data analysis. Results Of the included 16,206 Chinese government employees at baseline, 60.45% reported that they had experienced negative stressful life events and 43.87% reported that they had experienced positive stressful life events over the past year. Fatigue was reported by 7.74% of the sample at baseline and 8.19% at follow-up. The cumulative number of life events at baseline and the cumulative life-event severity score at baseline were each positively associated with self-reported fatigue at follow-up. After adjusting for sociodemographic factors, occupational factors and health-behavior-related factors, negative life events at baseline (OR: 2.06, 95% CI: 1.69–2.51) were significantly associated with self-reported fatigue at follow-up. Some specific life events, including events related to work and events related to economic problems, were significantly associated with self-reported fatigue. Specifically, work stress (OR = 1.76, 95% CI: 1.45–2.13), dissatisfaction with the current job (OR = 1.95, 95% CI: 1.58–2.40) and being in debt (OR = 1.75, 95% CI: 1.40–2.17) were significantly associated with self-reported fatigue. A significantly improved economic situation at baseline (OR = 0.62, 95% CI: 0.46–0.85) was associated with a lower incidence of self-reported fatigue. Conclusion Negative stressful life events were associated with fatigue among Chinese government employees. Effective interventions should be provided to employees who have experienced negative stressful life events.
BACKGROUND
As a non-specific symptom often overlooked in primary health care, fatigue refers to self-reported physical and mental tiredness, weakness, and a lack of energy (1). Fatigue is a significant public health concern given its negative health consequences and high prevalence. The prevalence of fatigue in the general population ranges from 0.37% to 18.3% (2-4). People who perceive fatigue often report poor self-care activities, significant social and functional impairment, and greater use of general medical services (1,5,6). For the working population, fatigue not only affects work performance and productivity; severe fatigue may also lead to sick leave and work disability (7,8).
Stressful life events have been implicated in the onset and course of various illnesses (9). Long-term stress has harmful effects on an individual's mental and physical health (7). The association between stress and fatigue is well documented in the general population (10). Fatigue has become an issue of concern among employees as a result of prolonged life stressors (11). Previous research has demonstrated that work-related stressors contribute to an individual's stress level and can lead to unfavorable health consequences (8). Many people with fatigue report having experienced a very stressful situation prior to the manifestation of the disease, with some cases describing a concomitant infection or other determining factors (12). The role of psychosocial factors in the development of fatigue would be better demonstrated by examining the predictive effects of recent stressful events (13).
Nowadays, high demands and stressful work situations have become common in modern work (14,15). Working adults may experience diverse types of stressful life events, which are quite different from those of the general population. Previous studies have shown that family-related life events occur most frequently in the general population (16). Among working adults, in addition to family-related stressors, work-related stressors are also very common (17). Currently, evidence on the role of stressful life events in fatigue among Chinese working adults is lacking. There is a lack of studies comparing the impact of different stressors on fatigue, so it is unclear whether some specific types of life events (such as work-related stressors) affect fatigue more strongly than others. There are also no direct tests of the stressor-disease specificity hypothesis, that is, whether specific types of events are linked to specific diseases (such as an association between being in love or engaged and fatigue) (4,18,19). Previous work found that negative life events could produce short-term fluctuations, with an increase in symptoms of fatigue but recovery to pre-stress levels when the event is resolved or coping mechanisms are activated to reduce the negative impact (9). However, the effects of positive life events on fatigue are not clear enough.
As an Asian country deeply influenced by traditional culture, China is relatively partial to authoritarianism, collectivism, and kinship, which is different from the horizontal organization and rationalism that are popular in Western countries (20). The Chinese workers are hardworking and obedient due to the social culture of nationalism, stability, and harmony (21). In Western countries, such as the UK and USA, workers share a lower level of collectivism and power distance when compared with Asian countries, and the psychosocial work environment they experience is better than that of most Chinese workers (21). Although many Chinese companies or organizations have been working hard to improve the psychosocial work environment of workers, the workload of full-time workers has not decreased. Most industries face a major challenge in maintaining a healthy and productive workforce, so it is necessary to explore the associations between stressful life events and fatigue among Chinese working adults. The current study aimed at exploring the possible prospective associations between cumulative and specific stressful life events and self-reported fatigue among Chinese government employees.

Abbreviations: SMS, Short Messaging Service; PSQI, the Pittsburgh Sleep Quality Index; LES, the Life Events Scale; CI, confidence interval; OR, odds ratio.
Ethics Statement
This study was approved by the Human Research Ethics Committee of Central South University. Written informed consent was obtained before investigation was conducted.
Participants
Employees were consecutively recruited if they (a) were government employees working in Hunan Province and (b) were aged between 18 and 60 years.
Procedure
The present analysis is based on the Cohort Study on Chronic Diseases of Government Employees in Hunan Province, which was carried out in five cities of Hunan Province, China, between January 2018 and May 2021 (22). A total of 32 public sectors or organizations were recruited in the five selected cities. Once a public sector or organization agreed to participate in our project, all employees in that public sector or organization were invited to the cooperating general hospital at their site to complete the digital questionnaire. In this program, a digital self-reported questionnaire platform was established to collect information on employees. Recruited participants accessed the questionnaire with URLs sent by Short Messaging Service (SMS) and answered the questions via cellphone, tablet, or PC. Following written informed consent, employees were enrolled in the study and completed study questionnaires. All the government employees (n = 20,863) who worked in the 32 public sectors or organizations were recruited; 2,838 of them refused to participate in our project, and 18,025 government employees were enrolled in the project. In the current study, 1,819 of them were excluded due to a lack of demographic and fatigue-related data, so 16,206 participants were finally included at baseline, and 13,668 of them were followed up. See Supplementary Figure S1 for the details.
MEASURES
Stressful Life Events
Life events were assessed by the Life Events Scale (LES) (23), in which a total of 48 items are classified into several dimensions: family/marriage, work/study, health status, economic problems, and accidents and legal disputes. Life events include serious illness, housing, financial crises, relationship breakdowns, job satisfaction, social support in the workplace, etc. Participants could choose "yes" or "no" for each event based on their experience over the past year. The Chinese version of the LES was used in this study. This self-report scale has been widely used in China and has reported favorable validity and reliability (24,25). Based on previous research (16,26), we divided the number of life events into four groups: none, one, two, and three or more life events. In addition, the participants were asked to report the severity level of each event. The severity score was rated on four levels: 1 = mild impact, 2 = moderate impact, 3 = severe impact, 4 = extremely severe impact. To obtain a cumulative severity score of all events, the sum of the severity scores of each life event was calculated (25). In this study, the cumulative severity score was grouped into four groups: zero points, 1-3 points, 4-7 points and ≥8 points.
Besides, these 48 life events were classified into five groups: events related to family & marriage, events related to work, events related to economic problems, events related to accidents and legal disputes and events related to health. According to the nature of these events, they were also divided into positive life events and negative life events in the current study (27). These different types of life events were grouped into two groups: none and one or more events.
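As a concrete illustration of how the two cumulative exposures described above are derived from the 48 item responses, a short sketch is given below. The data structure and variable names are hypothetical placeholders chosen for illustration, not the study's actual scoring code.

```python
# Derive the cumulative number of life events and the cumulative severity score
# for one participant, then bin them into the groups used in the analysis.
# `events` maps an endorsed LES item to its self-rated severity (1-4);
# items answered "no" are simply absent.
def les_exposures(events):
    n_events = len(events)
    severity = sum(events.values())
    count_group = {0: "none", 1: "one", 2: "two"}.get(n_events, "three or more")
    if severity == 0:
        severity_group = "0"
    elif severity <= 3:
        severity_group = "1-3"
    elif severity <= 7:
        severity_group = "4-7"
    else:
        severity_group = ">=8"
    return n_events, count_group, severity, severity_group

print(les_exposures({"work stress": 2, "in debt": 3}))  # (2, 'two', 5, '4-7')
```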
Chronic Fatigue
We determined fatigue by using a single item measuring the frequency of its occurrence. Previous studies showed that correlation patterns and multivariate analyses revealed a strong and significant association between the single-item measure and other fatigue scales (28)(29)(30). In this study, the outcome variable "perceived fatigue status" was evaluated by self-report, with the response options "not at all," "sometimes" and "often" on a Likert-type three-point scale.
Participants were asked the question "With what frequency have you felt fatigue over the past 3 months?" Participants rated the item on a 3-point Likert scale with response options of not at all, sometimes and often, with scores ranging from 0 to 2. A higher score indicates greater perceived fatigue. A cut-off score of ≥2 denotes perceived fatigue in the current study (28,29,31).
Covariates
Covariates included gender, age, marital status (married, single/divorced/widowed), household income, educational achievement (primary school, secondary school, university or above), working hours per day, level of employment (low/intermediate/high), type of work (cognitive tasks/manual tasks), self-reported chronic disease (yes/no) and sleep quality. In the current study, sleep quality was assessed by the 19-item Pittsburgh Sleep Quality Index (PSQI). Clinically, poor sleep quality was defined by a PSQI global score >5 (32). The details of the sleep quality measurement were introduced in our previous research (22). Behavioral factors, including alcohol consumption (never/occasional drinking, former drinkers, current drinkers), smoking status (never/occasional, former smokers, current smokers) and frequently attending business dinners (yes/no), were also included. In this study, alcohol use was defined as drinking alcohol at least once a week for at least half a year. Smoking was defined as smoking at least one cigarette a day for at least half a year. Frequently attending business dinners was defined as attending business dinners more than once a week over the past year.
Analytic Plan
The mean and standard deviation (SD) or proportion (%) of covariate characteristics were presented for participants with or without self-reported fatigue. Chi-square tests were used for categorical variables. Logistic regression models were used to estimate ORs and corresponding 95% confidence intervals (CIs) of self-reported fatigue for different types of stressful life events. Each logistic regression model included known and potential confounders (multicollinearity diagnostics were conducted; no collinearity between life events and the covariates was found). Model 1 adjusted for gender and age. Model 2 further adjusted for some baseline factors (including marital status, education level, family income, grade of employment, type of work, working hours per day, sleep quality and whether the participant had a chronic disease), and Model 3 further included behavioral factors (including smoking status, drinking status and frequently attending business dinners or not). To examine associations of work-related stressors and self-reported fatigue in different participants, analyses were conducted across subgroups stratified by gender and age. Please see the Supplementary data for the details.
Data analyses were conducted by SPSS 26.0. All p-values refer to two-tailed tests. A p-value less than 0.05 was considered statistically significant.
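A minimal sketch of this nested modeling strategy is shown below. The column names, formula terms and the data frame `df` are hypothetical placeholders standing in for the study variables (the analysis itself was run in SPSS); only the structure of the three models and the OR/CI extraction is illustrated.

```python
import numpy as np
import statsmodels.formula.api as smf

# Nested logistic models for the OR (95% CI) of follow-up fatigue associated
# with baseline negative life events; fatigue_fu is the binary outcome
# (single-item score >= 2 coded as 1), neg_events is 1 if any negative
# life event was reported at baseline.
models = {
    "Model 1": "fatigue_fu ~ neg_events + C(gender) + age",
    "Model 2": ("fatigue_fu ~ neg_events + C(gender) + age + C(marital) + C(education)"
                " + C(income) + C(grade) + C(work_type) + work_hours"
                " + C(poor_sleep) + C(chronic_disease)"),
    "Model 3": ("fatigue_fu ~ neg_events + C(gender) + age + C(marital) + C(education)"
                " + C(income) + C(grade) + C(work_type) + work_hours"
                " + C(poor_sleep) + C(chronic_disease)"
                " + C(smoking) + C(drinking) + C(business_dinner)"),
}

def odds_ratio(fit, term="neg_events"):
    or_ = float(np.exp(fit.params[term]))
    lo, hi = np.exp(fit.conf_int().loc[term])
    return round(or_, 2), round(float(lo), 2), round(float(hi), 2)

# Assuming `df` holds one row per employee with the columns named above:
# for name, formula in models.items():
#     fit = smf.logit(formula, data=df).fit(disp=False)
#     print(name, odds_ratio(fit))
```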
Descriptive Statistics
Among the 16,206 participants who were finally included at baseline, 13,668 were followed up. After the exclusion of missing demographic and fatigue-related data, 16,206 government employees at baseline and 12,639 government employees at follow-up were available for the final analysis. Of those participants, 11,658 government employees (4,050 males and 7,608 females, with a mean age of 38.09 years) were eligible for the longitudinal analyses after excluding participants who suffered from fatigue at baseline. Please see Supplementary Figure S1 for the details.
The mean age of the sample was 37.11 years (SD = 9.53); most participants were younger than 40 years old (66.23%) and 61.52% of the sample were female. More than two thirds of the participants (68.78%) had a college education and most of them (78.03%) were currently married or cohabiting with someone. Also, 9.86% of the included government employees were current drinkers and 11.49% were current smokers. Of the included Chinese government employees, 7.63% reported that they perceived fatigue (fatigue score ≥2) over the past 3 months, and 70.47% reported that they had experienced stressful life events over the past year. Of the Chinese government employees who reported stressful life events, 60.45% reported that they had experienced negative stressful life events and 43.87% reported that they had experienced positive stressful life events over the past year. See Table 1 and the Supplementary data for the details.
Comparisons on the Key Variables by the Covariates
As presented in Table 1, being female, younger age, higher education level, lower family income, being divorced or widowed, lower grade of employment, physically oriented work, longer working hours per day, poor sleep quality, frequently attending business dinners, and having a chronic disease were significantly associated with self-reported fatigue (p < 0.05). No significant associations were found for smoking status or drinking status (p > 0.05).
Cross-Sectional Associations Between Cumulative Number of Life Events, Cumulative Life Events Severity Score and Self-Reported Fatigue
The results for associations between the cumulative number of life events and self-reported fatigue showed that experiencing two life events at baseline was associated with a statistically significant 1.31 (95% CI: 1.05-1.69) times greater likelihood of self-reported fatigue at baseline. Employees with three or more life events had a 2.58 (95% CI: 2.14-3.11) times greater likelihood of self-reported fatigue in the fully adjusted model. The association between experiencing one life event and self-reported fatigue was not statistically significant (OR = 1.14; 95% CI: 0.90-1.46).
The results for associations between the cumulative life-event severity score and self-reported fatigue showed that the 4-7 points and ≥8 points groups were positively associated with self-reported fatigue over the past 3 months. Chinese employees with a severity score of 4-7 points had a 1.85 (95% CI: 1.50-2.28) times greater likelihood of self-reported fatigue. Government employees with a severity score of ≥8 points had a 3.36 (95% CI: 2.76-4.10) times greater likelihood of self-reported fatigue over the past 3 months. For the 1-3 points group, the association was not statistically significant in the fully adjusted model (OR = 1.16; 95% CI: 0.94-1.42). See Table 2 for the details.
Longitudinal Associations Between Cumulative Number of Life Events, Cumulative Life Events Severity Score and Self-Reported Fatigue
The results for associations between the cumulative number of life events and self-reported fatigue showed that experiencing one life event at baseline was associated with a statistically significant 1.42 (95% CI: 1.06-1.92) times greater likelihood of self-reported fatigue at follow-up. Also, experiencing two life events at baseline was associated with a statistically significant 1.65 (95% CI: 1.22-2.21) times greater likelihood of self-reported fatigue at follow-up. Employees with three or more life events at baseline had a 2.57 (95% CI: 2.02-3.24) times greater likelihood of self-reported fatigue at follow-up in the fully adjusted model.
The results for associations between the cumulative life-event severity score at baseline and self-reported fatigue at follow-up showed that a higher severity score was associated with a higher incidence of self-reported fatigue. Chinese employees with a severity score of 1-3 points had a 1.43 (95% CI: 1.10-1.85) times greater likelihood of self-reported fatigue. Also, employees with a severity score of 4-7 points had a 2.56 (95% CI: 1.97-3.28) times greater likelihood of self-reported fatigue. Government employees with a severity score of ≥8 points had a 2.72 (95% CI: 2.08-3.51) times greater likelihood of self-reported fatigue at follow-up. See Table 3 for the details; see Table 4 and Supplementary Table S2 for further details.
Longitudinal Associations Between Specific Life Events at Baseline and Self-Reported Fatigue at Follow-Up
Among the 48 major life events, work stress, not being satisfied with the current job, self-pregnancy or wife's pregnancy, being in love or engaged, a new member being added to the family and the other events shown in Table 5 are the top 10 life events in terms of prevalence among our participants (see Supplementary Table S1). The relationships between these most common life events at baseline and self-reported fatigue at follow-up were tested in different models. Regarding the specific stressful life events, work stress (OR = 1.76, 95% CI: 1.45-2.13), not being satisfied with the current job (OR = 1.95, 95% CI: 1.58-2.40) and being in debt (OR: 1.75, 95% CI: 1.40-2.17) were significantly associated with a higher likelihood of self-reported fatigue. Self-pregnancy or wife's pregnancy (OR = 1.14, 95% CI: 0.86-1.50), being in love or engaged (OR = 1.02, 95% CI: 0.78-1.32), a new member being added to the family (OR: 1.04, 95% CI: 0.79-1.38), discord with spouse's parents (OR: 0.91, 95%CI:
Key Findings
Of the included 16,206 Chinese government employees at baseline, 60.68% reported that they experienced negative stressful life events and 44.02% reported that they experienced positive stressful life events. The results of this study showed that cumulative stressful life events experienced at baseline were positively associated with self-reported fatigue at follow-up. After adjusting for sociodemographic factors, occupational factors and health-behavior-related factors, negative life events were significantly associated with self-reported fatigue. Some specific life events, including events related to work and events related to economic problems, were significantly associated with self-reported fatigue. Specifically, work stress (OR = 1.76, 95% CI: 1.45-2.13), not being satisfied with the current job (OR = 1.95, 95% CI: 1.58-2.40) and being in debt (OR = 1.75, 95% CI: 1.40-2.17) were significantly associated with self-reported fatigue.
A significantly improved economic situation at baseline (OR = 0.62, 95% CI: 0.46-0.85) was a protective factor for self-reported fatigue, being significantly associated with a lower incidence of self-reported fatigue at follow-up.
Comparison With Previous Research
Previous studies have reported the adverse effects of stressful life events on fatigue (9,13,33,34). In the current study, we found that experiencing three or more life events over the past year was most strongly related to high fatigue levels among Chinese government employees. Also, a higher severity score of stressful life events was associated with a higher level of fatigue among Chinese government employees, which is consistent with previous studies (35). We sought to determine, in a population-based sample of Chinese working adults, whether subjective fatigue differed according to the type of life event that precipitated it. The results showed that negative stressful life events were associated with a greater likelihood of subjective fatigue, while the association between positive stressful life events and subjective fatigue was not statistically significant (19). The prolonged activation model suggests that, next to stressors themselves, cognitive anticipation of a stressor can lead to stress-related physiological activity, insufficient recovery, and ill health (37). It is said that there are three forms of anticipation: positive anticipation, negative anticipation and positive outcome expectancy (10). Employees being preoccupied with work in a negative way (i.e., negative anticipation of a stressful work situation) contributes to fatigue (38). For companies or organizations, it is important to identify working adults who have experienced negative life stressors and provide effective interventions (such as exercise or cognitive behavioral therapy) (31). Previous research demonstrated that the effects of life events consistently differed depending on the category of stressful life events (39). The analysis of the most common separate items in this study suggested that work-related life events, including perceived work stress and dissatisfaction with the current job, and economic-related life events, including being in debt, were the key components. Having experienced work-related stressors may mean that the psychosocial work environment of those employees was poor, which would contribute to high work demands and low social support and result in fatigue (8,11,14,15,38,40). In Gimeno et al.'s study (12), people who experienced economic-related life events usually reported a higher level of fatigue. Experiencing economic-related life events may influence the ability of employees to balance work and personal life (41). When people experience economic-related or family-related life events, their ability and willingness to lower the boundaries of personal life are reduced, the degree of penetration of work into personal life is higher, and personal life may easily interfere with work, resulting in fatigue (41). Previous studies showed that people who experienced health problems were more likely to perceive fatigue (6). The associations between health-related events and fatigue were not statistically significant in this study, which needs further exploration.
There are fewer studies addressing the extent to which gender differences in exposure to stressful life events are associated with differential vulnerability to illnesses (19). It is said that females are generally expected to carry the load of both work and family; working women work a "second shift" which starts when they reach home (42), which increases their burden and stress level (34). Additionally, it is believed that females are more sensitive to what happens in their social networks, so they are exposed to more interpersonal stressful life events than males (19,43). However, the results of the current study showed no significant difference between genders. It may be that, with the changes in society over the past few decades, gender inequality in housework and work among Chinese workers has decreased. We think further research is needed in the future. In addition, we found that work-related life events were more strongly associated with fatigue among younger employees than among older employees. The possible reason is that young employees generally have shorter job tenure and a lower occupational level. They usually have a heavier workload and are more likely to work long hours than older employees. Therefore, they are more likely to be exposed to work-related life events and subjective fatigue (44)(45)(46).
Future Implications
The current study demonstrated that work-related life events and economic-related life events were highly associated with fatigue among Chinese government employees. Given that many government employees have experienced multiple life events and that the negative effects of those life events were statistically significant, it would be important to explore work and personal life boundary management in Chinese working adults (41). Understanding the boundaries surrounding the work and personal life domains may contribute to better health outcomes for working adults and improve their work efficiency (47).
Although the situation has been changing in recent years, some countries in Asia (such as China and South Korea) are relatively partial to authoritarianism, collectivism, and kinship, which is different from the horizontal organization and rationalism that are popular in Western countries (20). Chinese workers are hardworking and obedient due to the social culture of nationalism, stability, and harmony. Thus, we believe the negative impact of work-related stressors among Chinese workers, especially those working in government, is particularly significant. We think it is necessary to study the influence of social culture on stress management and health among the working population in developing countries, which would help develop effective interventions with better cultural sensitivity.
Currently, Cohen et al. indicated in their study that some important questions for which we lack adequate evidence are whether previous stressors or ongoing chronic stressors moderate responses to current ones and whether the stress load accumulates with each additional new stressor (19). Understanding the nature of the cumulative effects of stressors may be key to obtaining sensitive assessments of the effects of stressful life events on different disease and for planning effective interventions to reduce the impact of stressful life events on individuals' health (19). Lastly, we believe that possible underlying physiological and psychosocial causal pathways leading to fatigue have rarely been advanced and need to be examined more comprehensively (45).
Limitations
The present study has some limitations. First, we classified the stressful life events into different types according to previous research (27), but participants may feel differently about these stressors. In the future, it would be useful to further ask participants to evaluate whether the event they experienced was a positive or a negative event, which may help researchers better explore the relationship between stressful life events and fatigue. Second, it is possible that the one-item measure of fatigue in the current study could not capture the multidimensional nature of the fatigue experience (48). Thus, we cannot completely exclude misclassification bias as a source of error that explains the observed associations. Finally, although we adjusted for a number of potential confounders, the results of this study might still be affected by unmeasured or residual confounding (especially genetic background as well as social support and coping style) (49).
CONCLUSION
This study showed that the cumulative number of stressful life events experienced at baseline was positively associated with self-reported fatigue at follow-up. After adjusting for background factors and health-behavior-related factors, negative life events were significantly associated with self-reported fatigue. Some specific life events, including work stress, job dissatisfaction and being in debt, were associated with self-reported fatigue. When stratified by gender and age, younger employees who experienced work-related stressors were more vulnerable to self-reported fatigue. Effective interventions should be provided to working adults who have experienced stressful life events, especially negative life events.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Human Research Ethics Committee of Central South University. The patients/participants provided their written informed consent to participate in this study.
|
2022-07-07T15:14:29.747Z
|
2022-07-07T00:00:00.000
|
{
"year": 2022,
"sha1": "39d28213cb50ec23e8df3838b42e0566b2ddbb81",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "39d28213cb50ec23e8df3838b42e0566b2ddbb81",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|