Nitric oxide is an upstream signal of vascular endothelial growth factor-induced extracellular signal-regulated kinase 1/2 activation in postcapillary endothelium.

We recently demonstrated that nitric oxide (NO) significantly contributes to the mitogenic effect of vascular endothelial growth factor (VEGF), suggesting a role for the NO pathway in the signaling cascade following flk-1/KDR receptor activation in vascular endothelium. The aim of this study was to investigate the intracellular pathways linked to VEGF/NO-induced endothelial cell proliferation. We assessed the activity of the mitogen-activated protein kinase (MAPK) that is specifically activated by growth factors, extracellular signal-regulated kinase 1/2 (ERK1/2), in cultured microvascular endothelium isolated from coronary postcapillary venules. ERK1/2 was immunoprecipitated, and its activity was assessed with an immunocomplex kinase assay. In endothelial cells exposed for 5 min to the NO donor drug sodium nitroprusside at a concentration of 100 μM, ERK1/2 activity significantly increased. VEGF produced a time- and concentration-dependent activation of ERK1/2. Maximal activity was obtained after 5 min of stimulation at a concentration of 10 ng/ml. The specific MAPK kinase inhibitor PD 98059 abolished ERK1/2 activation and endothelial cell proliferation in a concentration-dependent manner in response to VEGF and sodium nitroprusside. The NO synthase inhibitor Nω-monomethyl-L-arginine (L-NMMA), as well as the guanylate cyclase inhibitor 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one (ODQ), blocked the activation of ERK1/2 induced by VEGF, suggesting that NO and cGMP contributed to the VEGF-dependent ERK1/2 activation. These results demonstrate for the first time that flk-1/KDR receptor activation triggers the NO synthase/guanylate cyclase pathway to activate the MAPK cascade and substantiate the hypothesis that the activation of ERK1/2 is necessary for VEGF-induced endothelial cell proliferation.

Vascular endothelial growth factor (VEGF) is a secreted protein that is a specific growth factor for endothelial cells, and it has been shown to increase vascular permeability (1,2). It is angiogenic in in vivo and in vitro assays (3,4), and its physiological importance in vasculogenesis is well documented (5,6). The action of VEGF is regulated by two receptors belonging to the tyrosine kinase family, Flt-1 and KDR (or Flk-1) (7,8). Flt-1, which has higher affinity for VEGF than KDR, is required for endothelial cell morphogenesis, whereas KDR is involved primarily in mitogenesis (5,6,9,10). The postreceptor signaling pathways underlying VEGF actions on endothelial cells are still unclear. VEGF has been shown to elevate intracellular inositol 1,4,5-trisphosphate and calcium levels and to stimulate tyrosine phosphorylation and von Willebrand factor release in cultured human umbilical vein endothelial cells (11). VEGF effects on permeability (12) and vascular tone (13) are coupled to nitric oxide (NO) production. Consistent with this observation, we have recently demonstrated that NO production and cGMP elevation contribute to the angiogenic effect of VEGF (14,15). The activation of the mitogen-activated protein kinase (MAPK) cascade by VEGF has recently been demonstrated (16). MAPKs are important intermediates in signal transduction pathways that are stimulated by a variety of agents, such as growth factors, hormones, neurotransmitters, and physical and chemical stressors (17).
Many receptor tyrosine kinases and G protein-coupled receptors have been shown to activate the MAPKs. The 44- and 42-kDa MAPK (ERK1/2) isoforms are ubiquitously expressed and have been shown to be activated by dual-specificity MAPK kinases (MEK1/MEK2) in response to diverse stimuli (18,19). This study was designed to characterize the transducing pathways underlying VEGF-activated endothelial cell proliferation. Recently, we have shown that NO is a downstream signal in VEGF effects (15). Here, we have investigated the role of NO in the intracellular pathway linked to VEGF receptor activation in postcapillary endothelium. We assessed the activity of the MAPK specifically activated by growth factors, i.e. ERK1/2, in cultured endothelium isolated from coronary postcapillary venules.

MATERIALS AND METHODS

Cell Line Culture Conditions and Proliferation Assay-Coronary venular endothelial cells (CVECs) were obtained and maintained in culture as described previously and characterized for their endothelial morphology by immunofluorescent staining for factor VIII antigen and uptake of acetylated low density lipoproteins (20). Cells between passages 15 and 25 were used in these experiments. Cell proliferation was quantified by total cell number after 48 h of stimulation with test substances (14). To evaluate the effect of the MAPKK, NO synthase (NOS), and guanylate cyclase inhibitors, the drugs were added to the cells 30 min before the test substances. Proliferation is expressed as mean ± S.E. of total cells counted in each well.

Immunoprecipitation and Immunocomplex Kinase Assay of ERK1/2-CVECs were serum starved overnight. Following treatment, cells were washed twice with ice-cold Dulbecco's phosphate-buffered saline and lysed by adding 0.3 ml of buffer containing 50 mM Tris (pH 7.4), 1% Triton X-100, 1 mM EGTA, 100 mM NaCl, 1 mM Na3VO4, 0.2 mM phenylmethylsulfonyl fluoride, 25 μg/ml leupeptin, 10 μg/ml aprotinin, and 10 mM NaF. To assess the calcium dependence of ERK1/2 activation, CVECs were stimulated in the presence of 3 mM EGTA. Cell lysates containing 100 μg of protein in a total volume of 800 μl were precleared with nonimmune rabbit IgG and 30 μl of goat anti-rabbit IgG agarose beads on a rotating plate for 1 h at 4°C and then centrifuged at 10,000 × g for 10 min. 1 μg of anti-ERK1 polyclonal antibody, which is reactive with ERK1 and to a lesser extent with ERK2, and 25 μl of goat anti-rabbit IgG agarose beads were added to the supernatant, and the mixture was placed on a rotating plate overnight at 4°C. Following centrifugation at 10,000 × g for 5 min, the pellet was recovered and washed twice with the lysis buffer and once with the kinase buffer containing 20 mM Hepes (pH 7.6), 20 mM MgCl2, and 2 mM dithiothreitol. The kinase assay was carried out at 30°C for 10 min in 30 μl of assay buffer containing 5 μg of myelin basic protein (MBP) as specific substrate for ERK1/2 (21), 20 μM ATP, and 3 μCi of [γ-32P]ATP. The reaction was stopped by the addition of Laemmli's sample buffer and boiled for 5 min. The samples were resolved by 12% SDS-polyacrylamide gel electrophoresis, stained with Coomassie Brilliant Blue, and exhaustively destained. The gel was dried, and the incorporation of [γ-32P]ATP was visualized by autoradiography. Gel slices of the 20-kDa MBP bands were also cut out in most of the experiments, and their radioactivity was measured by liquid scintillation counting.
Inositol Phosphate Accumulation-CVECs seeded onto 6-well plates (3 × 10⁴ cells/well) were labeled after overnight incubation with [3H]myo-inositol (2 μCi/ml) in DMEM without cold inositol for 48 h. Excess tritiated myo-inositol was removed by three washes with cold DMEM followed by 4 h of incubation with cold DMEM at 37°C. After one wash, cells were incubated for 10 min with 20 mM LiCl to block myo-inositol-1-phosphatase and then with test compounds for the designated times. The reaction was stopped by the addition of ice-cold methanol for 30 min. Cells were scraped, and cell-associated inositols were extracted with chloroform-methanol (1:1). Water-soluble fractions were applied to anion-exchange columns (AG-X8 resin, 200-400 mesh, formate form), and water-soluble inositols were eluted by successive washes with formate buffers of increasing strength.

Calcium Measurements-Cytosolic calcium measurements were made using the calcium-sensitive fluorescent indicator indo-1 (Molecular Probes, Eugene, OR) (22). Cells were incubated with the acetoxymethyl ester of the dye in DMEM containing 1% Me2SO for 40 min. At that time, the cells were rinsed with Dulbecco's phosphate-buffered saline and reincubated in DMEM without Me2SO, dye, or phenol red for 20 min to allow for de-esterification. The experiments were performed on a stage-scanning photometric imaging system (ACAS 570, Meridian Instruments, Okemos, MI) at 37°C. The cells were illuminated with the 355-nm line of an argon laser that was attenuated with neutral density filters and an acousto-optic modulator to minimize bleaching. Fluorescence emission was monitored at 405 and 485 nm using narrow band-pass filters and dual photomultipliers. The fluorescent images of a suitable field of cells were captured before and after addition of VEGF, and the 405/485 nm ratio was analyzed. This ratio directly indexes the calcium concentration within the cell at the time of image capture.

Differential Reverse Transcription PCR Analysis-Subconfluent and serum-starved cells were stimulated for 4-24 h with VEGF in the presence of 1% serum. At the end of incubation, total RNA was isolated by the standard guanidine thiocyanate-phenol-chloroform extraction (23). cDNA was synthesized as described (24). Differential reverse transcription PCR for NOS isoforms was carried out by using 5 μl of cDNA and specific primers for bovine calcium/calmodulin-dependent NOS (ecNOS) and iNOS with the following sequences: ecNOS sense, 5′-GCTTGAGACCCTCAGTCAGG-3′; ecNOS antisense, 5′-GGTCTCCAGTCTTGAGCTGG-3′ (25); iNOS sense, 5′-TAGAGGAACATCTGGCCAGG-3′; iNOS antisense, 5′-TGGCAGGGTCCCCTCTGATG-3′ (26). Calibration was performed by co-amplification of the same cDNA samples with primers for glyceraldehyde-3-phosphate dehydrogenase as an internal standard, with sequences as described (24). For PCR amplification, a Perkin-Elmer GeneAmp PCR System 2400 was used. The PCR cycles were as follows: 30 s at 94°C, 1 min at 55°C, and 1 min at 72°C. After 30 cycles of amplification, aliquots of each sample product (20 μl) were electrophoresed on a 3% agarose gel and stained with ethidium bromide. The sizes of the amplification products were 296 bp for ecNOS and 372 bp for iNOS. Image processing and analysis of the intensity of the bands were performed as described (24). Results were evaluated as the ratio between the target genes (ecNOS and iNOS) and the glyceraldehyde-3-phosphate dehydrogenase amplification products.
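For the calcium imaging described above, the 405/485 nm indo-1 ratio can be converted to an absolute calcium concentration with the standard Grynkiewicz calibration for ratiometric dyes. The paper uses the ratio only as an index, so the following is a hedged sketch: the calibration constants (R_min, R_max, beta) are illustrative placeholders, not values from this study, and the K_d of ~250 nM is a commonly cited literature value for indo-1.

def calcium_from_indo1_ratio(R, R_min=0.2, R_max=2.5, K_d=250.0, beta=3.0):
    """Grynkiewicz equation for a ratiometric dye such as indo-1.

    R     : measured 405/485 nm fluorescence ratio
    R_min : ratio at zero calcium (assumed calibration constant)
    R_max : ratio at saturating calcium (assumed calibration constant)
    K_d   : indo-1 dissociation constant in nM (~250 nM in the literature)
    beta  : free/bound fluorescence ratio at the 485-nm wavelength (assumed)
    Returns the estimated cytosolic [Ca2+] in nM.
    """
    return K_d * beta * (R - R_min) / (R_max - R)

# Illustrative use: a ratio rising from 0.6 to 1.2 after an agonist such as VEGF
print(calcium_from_indo1_ratio(0.6))  # ~158 nM, a resting-like level
print(calcium_from_indo1_ratio(1.2))  # ~577 nM, a transient peak-like level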
Determination of NOS Activity-Subconfluent CVECs in 60-mm culture dishes were serum starved overnight and then washed and equilibrated for 20 min at 37°C with Hepes buffer containing 145 mM NaCl, 5 mM KCl, 1 mM MgSO4, 10 mM Hepes, 1 mM CaCl2, and 1 mM glucose, pH 7.4. Cells were incubated for 20 min with 1 μCi of [3H]arginine plus 10 μM L-arginine. To assess the calcium dependence of NOS activation, experiments were performed in calcium-free Hepes buffer containing 1 mM EGTA. Test substances were added for 5 min at 37°C. The reaction was stopped with cold Hepes buffer containing 4 mM EDTA, and the supernatant was removed. 0.5 ml of ethanol added to each monolayer was allowed to evaporate, and 2 ml of 10 mM Hepes-Na, pH 5.5, was added for 20 min. The supernatant was collected, applied to 0.8 ml of Dowex AG50W-X8 (Na+ form), and vigorously shaken for 45 min. Then, 0.5 ml was collected and added to 3 ml of liquid scintillation counting mixture. NOS activity is expressed as pmol/mg of protein.

Measurement of cGMP Levels-cGMP levels were measured on cell extracts from confluent cell monolayers by radioimmunoassay using an iodinated tracer (14). Cell monolayers were treated with 1 mM 3-isobutyl-1-methylxanthine for 15 min before stimulation. After stimulation, cells were rinsed with phosphate-buffered saline and removed by scraping in ice-cold 10% trichloroacetic acid. Following centrifugation, cGMP levels were assayed in the supernatant, and proteins were measured in the pellet by Bradford's procedure. Data are expressed as fmol/mg of protein.

RESULTS

ERK1/2 Activation Induced by VEGF Is Time-dependent, Concentration-dependent, and Sensitive to Inhibition of the MAPKK-The ability of VEGF to stimulate the MAPK cascade was assessed. After 24 h of starvation, CVECs were stimulated with 10 ng/ml VEGF over a range of times from 2 to 15 min, and the activity of the immunoprecipitated ERK1/2 was measured. The activation of ERK1/2 was significantly higher than the unstimulated control within 2 min and reached its maximum at 5 min (Fig. 1a). The activation of ERK1/2 by VEGF was concentration-dependent, with maximal activity at 10 ng/ml, which doubled the basal activity of ERK1/2 (Fig. 1, b and d). In the same experiments, bFGF (10 ng/ml) did not significantly induce ERK1/2 activation. When the endothelial cells were preincubated for 30 min with increasing concentrations of the MAPKK inhibitor PD 98059 (27), VEGF-induced ERK1/2 activation was inhibited (IC50 = 30 μM). At 100 μM, PD 98059 completely abolished the ERK1/2 activation produced by 10 ng/ml VEGF but did not modify the unstimulated control (Fig. 1, c and e).

NO Activates ERK1/2-In previous reports, we demonstrated that the NO pathway is necessary for the proliferative effects of VEGF on microvascular endothelial cells (14). We therefore investigated whether NO contributed to VEGF mitogenic activity by activating the MAPK cascade. For this purpose, starved and subconfluent CVECs were treated with the NO donor sodium nitroprusside (SNP), and the activity of ERK1/2 was measured. After 5 min of stimulation with 100 μM SNP, the MAPK activity was increased 2-fold (Fig. 2a). NO-induced ERK1/2 activation and the proliferative effect of NO were abolished by PD 98059 (Fig. 2, a and b), indicating that ERK1/2 was specifically and directly activated by NO and that this phosphorylation cascade was involved in signaling mitogenesis in postcapillary endothelial cells.
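As an illustration of how concentration-response parameters such as the IC50 of 30 μM for PD 98059 reported above are typically derived, a four-parameter logistic (Hill) curve can be fitted to inhibition data. This is a generic sketch: the data points below are illustrative placeholders, not measurements from the paper.

import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, top, bottom, ic50, n_h):
    # Four-parameter logistic inhibition curve
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n_h)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])          # μM inhibitor (illustrative grid)
erk_activity = np.array([98.0, 95.0, 75.0, 50.0, 5.0])  # % of VEGF response (illustrative)
popt, _ = curve_fit(hill_inhibition, conc, erk_activity, p0=[100.0, 0.0, 30.0, 1.0])
print(f"fitted IC50 ~ {popt[2]:.0f} μM")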
Calcium-dependent Activation of NOS and ERK1/2 by VEGF-We then characterized the NOS isoform mediating the VEGF effect in CVECs. The rapid activation of ERK1/2 in response to VEGF suggested that NO production in CVECs was acutely activated by ecNOS. Differential reverse transcription PCR of total RNA indicated that this isoform was predominantly expressed in CVECs (Fig. 3a). Four hours after VEGF administration, ecNOS expression was not modified, indicating the absence of a transcriptional event between VEGF administration and NO production. iNOS expression was not detected at any time point between 4 and 24 h of exposure to the growth factor. CVECs preloaded with the ratiometric fluorescent indicator indo-1 exhibited a rapid calcium transient upon exposure to VEGF (Fig. 3b). The upward stroke of the calcium transient began 3 min after addition of VEGF, the peak concentration of cytosolic calcium occurred at approximately 7 min, and recovery occurred over the next 20 min. After continued exposure of CVECs to VEGF for 70 min, cytosolic calcium had recovered only 65%, suggesting a continuing signal for calcium/calmodulin-dependent NO production beyond the rapid peak. Consistent with the rapid cytosolic calcium elevation, NOS activity increased within 5 min after VEGF exposure, and EGTA abolished its elevation (Fig. 3c). EGTA also abolished ERK1/2 activation by VEGF, suggesting that calcium was required to trigger the MAPK cascade as well as the NOS activity (Fig. 3d).

MAPKK and NOS/Guanylate Cyclase Inhibitors Block VEGF-induced Endothelial Cell Proliferation-The MAPKK inhibitor specifically reduced the proliferative effect of VEGF in a concentration-dependent manner, whereas it did not inhibit the growth-promoting effect of bFGF (Fig. 4a). The IC50 for growth inhibition (10 μM) was in the same range of concentration as that for ERK1/2 inhibition. At the highest concentration, PD 98059 slightly reduced the number of cells recovered under control conditions. This effect was independent of cytotoxicity, as indicated by trypan blue exclusion assays (data not shown). The addition of VEGF doubled cGMP levels in CVECs, an effect specifically blocked by NOS inhibitors (Table I), as previously reported (14). L-NMMA inhibited VEGF-induced growth in a concentration-dependent manner; maximal growth inhibition was obtained at 200 μM (IC50 = 10 μM) (Fig. 4b). Conversely, no inhibition of cell growth was produced when bFGF was used as a mitogen (Fig. 4b).

FIG. 1. Time- and dose-dependent activation of ERK1/2 exposed to VEGF. a, CVECs were exposed for different times (2-15 min) to 10 ng/ml VEGF. b, cells were stimulated for 5 min with increasing concentrations (5-20 ng/ml) of VEGF and 10 ng/ml bFGF. c, effect of the MAPKK inhibitor PD 98059 (10-100 μM) on 10 ng/ml VEGF-induced ERK1/2 activation. d and e are representative autoradiographs related to b and c, respectively. ERK1/2 was immunoprecipitated, and its activity was measured with an in vitro kinase assay by using [γ-32P]ATP and MBP as substrate. The samples were resolved by 12% SDS-polyacrylamide gel electrophoresis followed by autoradiography. Gel slices of the 20-kDa MBP bands were cut out, and the radioactivity was measured by liquid scintillation counting. n = 5; mean ± S.E. *, p < 0.05; ***, p < 0.001 versus unstimulated control; #, p < 0.001 versus VEGF alone (ANOVA followed by Fisher's test).
ODQ produced concentration-dependent inhibition of the guanylate cyclase activation and the cGMP elevation induced by VEGF as well as by the NO donor SNP (maximal effect at 10 μM; IC50 = 0.5 μM) (Table I). The minimal effective concentration of ODQ that inhibited cGMP formation was sufficient to block the proliferative effect of VEGF, and lower concentrations gave the same effect (Fig. 4c). Conversely, proliferation and cGMP elevation produced by the NO donor SNP were reduced by ODQ in a concentration-dependent manner (Fig. 4c and Table I). Maximal inhibition was obtained at 10 μM, and the IC50 was 0.5 μM for both effects. The guanylate cyclase inhibitor did not produce a significant reduction of bFGF-induced growth (Fig. 4c).

MAPKK Inhibitor Does Not Affect the Biochemical Cascade of NOS/Guanylate Cyclase Elicited by VEGF and NO-To determine the exact biochemical location of the MAPK in the NO/NOS pathway in our system, the MAPKK inhibitor was tested on guanylate cyclase activation. PD 98059 did not affect the NOS/cGMP pathway activation stimulated by either VEGF or SNP in CVECs at any of the concentrations tested (Table I). Similar results were obtained when PD 98059 was assessed on NOS activity. The VEGF-induced NOS activity (223 ± 11 pmol/mg of protein versus a basal value of 169 ± 15 pmol/mg of protein) could be selectively blocked by 3 μM L-NMMA (131 ± 10 pmol/mg of protein; IC50 = 50 μM) but not by 100 μM PD 98059 (210 ± 22 pmol/mg of protein; n = 3). The possibility that PD 98059 could affect other transducing pathways required for proliferation in our system was ruled out in parallel experiments in which inositol phosphate metabolism was assessed. PD 98059, at the concentration producing 100% reduction of its specific biochemical target (ERK1/2 activation), failed to affect the metabolism of inositol phosphate. VEGF induced inositol phosphate 1 accumulation (448 ± 37 cpm/well over basal control), which was not affected by 100 μM PD 98059 pretreatment (642 ± 98 cpm/well; n = 3).

NOS/Guanylate Cyclase Inhibitors Prevent the VEGF-induced ERK1/2 Activation-To investigate the exact location of the NOS/cGMP pathway in the phosphorylation cascade triggered by VEGF in our system, experiments were done to assess the involvement of the NO pathway in VEGF-induced ERK1/2 activation. CVECs were pretreated for 30 min with 200 μM L-NMMA and then stimulated for 5 min with 10 ng/ml VEGF. The data showed that pretreatment with L-NMMA abolished the increase in ERK1/2 activity elicited by VEGF (Fig. 5a). This effect was selective for VEGF because no inhibition was found for cells stimulated with 10% calf serum (1680 ± 70 cpm and 1725 ± 150 cpm, with and without L-NMMA, respectively; n = 3). Consistent with the observation that NO is the transducing molecule between the VEGF receptor and ERK1/2, ODQ significantly inhibited the VEGF- and SNP-induced increase in ERK1/2 activity (Fig. 5b).

DISCUSSION

The data presented here demonstrate that the mitogenic activity of VEGF on postcapillary endothelial cells requires the activation of the MAPK cascade and that NO/cGMP production mediates the MAPK activation following VEGF receptor interaction, ultimately leading to endothelial cell growth. These conclusions are based on the following observations: 1) VEGF stimulated the MAPK specifically linked to proliferation, i.e.
ERK1/2, as did the NO donor drug SNP; 2) blockade of the NO pathway by L-NMMA and by ODQ prevented the ERK1/2 activation by VEGF and SNP; and 3) inhibition of MAPK kinase activation, of NO synthase activity, and of cGMP production specifically blocked VEGF/NO-induced proliferation. In rat liver sinusoidal endothelial cells, it was reported that VEGF stimulated phosphorylation of the MAPK (16). Postcapillary venular endothelium has the ability to respond promptly to mitogenic peptides. Using cultured endothelium from coronary postcapillary venules, we demonstrated that ERK1/2 activation lies upstream of the proliferative effect of VEGF. The specificity of ERK1/2 activation is confirmed by the use of the MAPKK (or MEK) inhibitor PD 98059 (27). This compound has been demonstrated to be a selective and noncompetitive MEK inhibitor in in vitro assays (30,31) without any effect on ERK. PD 98059 at concentrations above 50 μM has been shown both to inhibit MEK1/2 by binding a regulatory site on the enzyme and to prevent activation by c-Raf and MEK1/2 kinase (30). Our data show that in this concentration range, PD 98059 prevented the ERK1/2 activation and the proliferative effect induced by VEGF, demonstrating that the activation of ERK1/2 is a necessary step for endothelial cell proliferation.

FIG. 2. Effect of NO on ERK1/2 activation and cell proliferation. a, ERK1/2 activation. CVECs were stimulated with 100 μM SNP. ERK1/2 was immunoprecipitated, and its activity was measured with an in vitro kinase assay by using [γ-32P]ATP and MBP as substrate. The samples were resolved by 12% SDS-polyacrylamide gel electrophoresis followed by autoradiography. Gel slices of the 20-kDa MBP bands were cut out, and the radioactivity was measured by liquid scintillation counting. b, cell proliferation: 1.5 × 10³ cells resuspended in 10% calf serum were seeded in each well of 96-well plates. After adherence (3-4 h), the medium was replaced with 1% calf serum DMEM containing test substances and incubated for 48 h. After fixation and staining of cells with hematoxylin-eosin (Diff-Quik), the number of cells was counted in seven random fields of each well at × 100 magnification with the aid of a 21-mm² ocular grid. Data are expressed as total number of cells counted/well. PD 98059 was given at 100 μM for 30 min before cell stimulation. n = 3; ***, p < 0.001 versus unstimulated control; #, p < 0.05 versus SNP alone (ANOVA followed by Fisher's test).

In previous work, we have shown that molecules able to increase NO levels induced endothelial cell proliferation and migration in vivo and in vitro (32,33) and also that the activation of the NO pathway following VEGF stimulation significantly contributed to the mitogenic effect of VEGF (14). Here, we demonstrate that under the same experimental conditions, NO directly triggers the activation of the MAPK cascade. ERK1/2 activation and endothelial cell proliferation promoted by NO are selectively blocked by the MAPKK inhibitor. ODQ, a selective and specific inhibitor of the soluble guanylate cyclase (29), blocked in a concentration-dependent manner the cGMP elevation in venular endothelial cells exposed to the NO donor and to VEGF. Consistent with cGMP being required to transduce the NO-dependent proliferation signal, neither VEGF nor SNP promoted ERK1/2 phosphorylation and growth in the presence of ODQ. The IC50 values for proliferation and cGMP formation overlapped when SNP was the mitogen.
Interestingly, a minimal reduction of cGMP levels was sufficient to completely block the proliferation signal produced by VEGF. Because in our experimental model production of cGMP is required for VEGF-induced cell adhesion (15), the effect of ODQ on VEGF-induced proliferation might be related to the specific requirement of cell adhesion to fulfill the growth program encoded by VEGF in postcapillary venular endothelium. The link between VEGF stimulation of CVECs, NO release, and the rapid activation of ERK appears to be ecNOS, the calcium/calmodulin-dependent enzyme found in endothelial cells. ecNOS is the predominant isoform expressed in CVECs, and its expression is not affected by the growth factor. We show that VEGF causes a rise in cytoplasmic calcium that peaks at 7 min and triggers NOS activity within 5-10 min. Consistently, ERK1/2 activity peaks between 5 and 10 min. Thus, the time frames for the increases in cytosolic calcium, NO production, and ERK1/2 activity support a KDR/calcium/ecNOS/NO/soluble guanylate cyclase/cGMP/ERK1/2 cascade activated by VEGF. The mechanism responsible for the calcium transient is not completely clear. However, as indicated by the elevation of inositol phosphate levels recovered in CVECs, release of calcium from the endoplasmic reticulum could occur by KDR-mediated activation of phospholipase Cγ1 (34). Alternatively, VEGF may activate processes that accelerate calcium entry via plasmalemmal ion channels (35). We recently demonstrated that NO synthase lies downstream in the angiogenesis induced by VEGF but not in that induced by bFGF (15). The present data provide new insight into the mechanism underlying the role of NO in mediating the VEGF effect by demonstrating that the NO pathway is upstream of the MAPK cascade activated by VEGF. In fact, ERK1/2 activation and the endothelial proliferation following VEGF/receptor activation are prevented in culture conditions in which NO production and cGMP elevation are impaired by the use of selective NOS/cGMP inhibitors. Conversely, blockade of the MAPKK does not affect the NOS/guanylate cyclase pathway. Thus, the NO pathway activation is intermediate between VEGF receptor activation and the MAPK phosphorylation in endothelial cells. Other observations support a link between NO and the MAPK cascade. Singh et al. (36) recently described that ERK activation is necessary for the induction of the inducible NOS by interleukin-1β in myocytes and cardiac microvascular endothelial cells.

FIG. 3. a, ecNOS and iNOS expression at 0, 4, 9, and 24 h of stimulation, respectively. b, effect of VEGF on cytosolic calcium mobilization in individual adherent CVECs. Addition of 10 ng/ml VEGF induced a synchronized rapid increase in cytosolic calcium followed by a long-lasting decline to levels above prestimulation values. Data are the means of tracings recorded from 22 individual cells. c, effect of calcium on NOS activity in CVECs. NOS activity (pmol/mg of protein) was evaluated by [3H]L-arginine conversion in cells exposed to 10 ng/ml VEGF for 5 min (open columns). EGTA was used at 1 mM in calcium-free buffer (hatched columns) (n = 3). d, effect of calcium on ERK1/2 activation. Unstimulated and VEGF-treated cells were lysed, ERK1/2 was immunoprecipitated, and its activity was detected as MBP phosphorylation. Cells were stimulated with 10 ng/ml VEGF in the absence (open columns) and presence (hatched columns) of 3 mM EGTA. ERK1/2 activity is expressed as the radioactivity of gel slices of phosphorylated MBP. n = 2; **, p < 0.01 versus unstimulated control; #, p < 0.05 versus VEGF (ANOVA followed by Fisher's test).
Elevated shear rate caused increased production of NO and activated the MAPK cascade in endothelial cells (37,38). Whereas in the above-mentioned reports MAPK activation anticipates and/or parallels NO production, our data indicate an upstream role for NO. The role of NO in promoting cell growth and differentiation is controversial. In angiogenesis, NO elevation has been shown to be positively correlated with neovascularization and tumor growth (15,39,40) in adult rodent models. Conversely, in the chorioallantoic membrane of the chick embryo and during the developmental maturation of Drosophila, NO acts as an antiproliferative agent (41,42). ERK1/2 is thought to be directly involved in transmitting signals from growth factor receptors to the nucleus to regulate gene transcription and protein synthesis, leading to proliferation or to differentiation and apoptosis (17,43,44,45). Recently, a difference in the actions of the ERK and p38/JNK pathways has been demonstrated in PC12 cells: the activation of the JNK/p38 cascade leads to apoptosis of PC12 cells, whereas the activation of ERK1/2 seems to be necessary for survival and/or anti-apoptosis of PC12 cells (45). The data reported here support the hypothesis of NO as a "prosurvival" or antiapoptotic effector for endothelial cells. Although it is presently difficult to speculate on whether the opposing effects of NO in controlling cell growth are due to species differences or to differentiation diversity, our results using venular endothelial cells nevertheless continue to emphasize the importance of NO as a balancing element in the molecular events between cell proliferation and differentiation.
Physical activity partly mediates the association between cognitive function and depressive symptoms

Cognitive function, physical activity, and depressive symptoms are intertwined in later life. Yet, the nature of the relationship between these three variables is unclear. Here, we aimed to determine whether physical activity or cognitive function mediates this relationship. We used large-scale longitudinal data from 51,191 adults 50 years of age or older (mean: 64.8 years, 54.7% women) from the Survey of Health, Ageing and Retirement in Europe (SHARE). Results of the longitudinal mediation analyses combined with autoregressive cross-lagged panel models showed that the model with physical activity as a mediator better fitted the data than the model with cognitive function as a mediator. Moreover, the mediating effect of physical activity was 8-9% of the total effect of cognitive function on depressive symptoms. Our findings suggest that higher cognitive resources favor the engagement in physical activity, which contributes to reduced depressive symptoms.

MAIN

Engaging in regular physical activity and maintaining high cognitive function are essential for health [1][2][3]. Thus, the age-related decline in physical activity and cognitive function [4][5][6] often affects mental health [3,7,8]. Yet, the nature of the relationship between physical activity, cognitive function, and mental health across aging remains unclear. Physical activity has been found to reduce the risk of developing depressive symptoms [9][10][11] through several biological and psychosocial pathways [12,13]. Likewise, a recent study drawing on large-scale genome-wide association studies revealed that a device-based measure of physical activity reduces the risk of major depression, while the association in the opposite direction was not significant [14]. In addition, longitudinal studies showed that cognitive decline precedes the emergence of depressive symptoms [15][16][17][18], while the opposite association was not observed. Thus, the previous literature suggests that both physical activity and cognitive function predict subsequent changes in depressive symptoms. However, the temporal precedence and directionality governing the effects of physical activity and cognitive function are unclear. Although observational results suggested that physical activity enhances cognitive function [19][20][21], more recent studies showed that higher levels of cognitive function can increase the engagement in physical activity [22][23][24][25]. The effect of physical activity on cognitive function can be explained by the effects of physical activity on angiogenesis, neurogenesis, cortical thickness, and growth factor production [26][27][28]. The effect in the opposite direction (cognitive function → physical activity) can be explained by experimental and theoretical work related to the theory of effort minimization [29][30][31][32][33][34]. According to this perspective, engaging in physical activity requires cognitive resources to override the automatic attraction toward effort minimization. Based on the aforementioned literature, two models can be hypothesized (Fig. 1). First, a decline in physical activity has a detrimental effect on cognitive function, which contributes to depressive symptoms. Second, a decline in cognitive function has a detrimental effect on physical activity, which contributes to depressive symptoms.
In other words, cognitive function may mediate the effect of physical activity on depressive symptoms (physical activity → cognitive function → depressive symptoms). Alternatively, physical activity may mediate the effect of cognitive function on depressive symptoms (cognitive function → physical activity → depressive symptoms). These hypotheses are not mutually exclusive, as a vicious cycle between cognitive decrement and physical inactivity could also occur [23]. The objective of the present study was to test these two hypothesized models. Understanding the nature of the relationship between physical activity, cognitive function, and depressive symptoms in adults 50 years of age or older can contribute to improving interventions aiming to promote mental health in this population.

RESULTS

We studied 51,191 individuals (55% women, mean age at baseline 65 years). Table 1 summarizes the baseline characteristics of the analyzed sample. Table 2 describes the data of each measure across waves. Table 3 presents the results of the longitudinal mediation models. In Model 1 (cognitive function → physical activity → depressive symptoms), results showed that higher cognitive function predicted higher physical activity 2 years later (a1: B = 0.022, p < 0.001; a2: B = 0.030, p < 0.001) and that higher physical activity predicted lower depressive symptoms after 2 more years (b1: B = −0.137, p < 0.001; b2: B = −0.137, p < 0.001). Results further demonstrated an indirect effect of cognitive function on depressive symptoms through physical activity (indirect1: B = −0.003, p < 0.001; indirect2: B = −0.004, p < 0.001). After adjusting for physical activity, results showed that lower cognitive function predicted higher depressive symptoms 4 years later (c'1: B = −0.036, p < 0.001; c'2: B = −0.040, p < 0.001), which suggested that the effect of cognitive function on depressive symptoms was not fully mediated by physical activity. Specifically, the proportion of the total effect that was mediated by physical activity was 8% between waves 4 and 6, and 9% between waves 5 and 7. Overall, results showed that the model including physical activity as a mediator of the association between cognitive function and depressive symptoms (Model 1) fitted the data more accurately than the model including cognitive function as a mediator of the association between physical activity and depressive symptoms (Model 2) when compared with the Akaike Information Criterion (AIC): AIC(Model 1) − AIC(Model 2) = −104.034 (Table 3). This finding was consistent with the observation that the proportion of the total effect mediated by physical activity was about two times larger than the one mediated by cognitive function (8% vs. 3%, and 9% vs. 5%, respectively; Table 3).

Sensitivity analyses

Results of the sensitivity analyses were comparable with those of the main analyses for both Models 1 and 2 (Supplementary Table 1). Specifically, in Model 1, when verbal fluency was used instead of delayed recall, results showed an indirect effect of cognitive function on depressive symptoms through physical activity with a comparable proportion of mediated effect (i.e., 8 and 11%). When vigorous physical activity replaced moderate physical activity, the mediating pattern was similar, but the proportion of mediated effect was smaller (i.e., 5 and 7%).
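As a cross-check of the arithmetic behind the main-analysis estimates above, a minimal sketch: the indirect effect is the product of the a and b paths, the total effect adds the direct path c', and the proportion mediated is their ratio. The evidence-ratio line applies standard Akaike-weight arithmetic, which the paper itself does not report, to the reported AIC difference.

import math

# Reported Model 1 path estimates (cognitive function -> physical activity -> depressive symptoms)
paths = {
    "waves 4-6": {"a": 0.022, "b": -0.137, "c_prime": -0.036},
    "waves 5-7": {"a": 0.030, "b": -0.137, "c_prime": -0.040},
}
for label, p in paths.items():
    indirect = p["a"] * p["b"]        # mediated (indirect) effect, a x b
    total = indirect + p["c_prime"]   # total effect of cognition on depression
    print(f"{label}: indirect = {indirect:.4f}, total = {total:.4f}, "
          f"proportion mediated = {indirect / total:.0%}")   # 8% and 9%, as reported

# AIC comparison: a negative difference favors Model 1 (physical activity as mediator)
delta_aic = -104.034                           # AIC(Model 1) - AIC(Model 2), main analysis
evidence_ratio = math.exp(abs(delta_aic) / 2)  # standard Akaike-weight arithmetic
print(f"dAIC = {delta_aic}, evidence ratio ~ {evidence_ratio:.3g} in favor of Model 1")

The same arithmetic applies to the sensitivity and moderation estimates reported below.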
In Model 2, proportions of mediated effect of 3 and 5% were observed when verbal fluency replaced delayed recall, and of 4 and 4% when vigorous physical activity replaced moderate physical activity. AIC(Model 1) − AIC(Model 2) = −620.719 when verbal fluency replaced delayed recall, but AIC(Model 1) − AIC(Model 2) = 50.983 when vigorous activity replaced moderate activity.

The moderating effect of sex and age

As the interaction term sex × age was a significant predictor of cognitive function, physical activity, and depressive symptoms, we replicated the cross-lagged panel models (CLPM) across sex and age categories. The results were comparable to the main models in the direction of the effects (Table 4). In Model 1, women above the age of 65 experienced the highest total effect of cognitive function on depressive symptoms (total effect1: B = −0.069, p < 0.001; total effect2: B = −0.087, p < 0.001), but the proportion of mediation was the highest in women younger than 65 years in the first mediation cycle (17%), with the second lowest total effect (total effect1: B = −0.024, p < 0.05). The proportion of mediation was the lowest in men younger than 65 years (3%). Moreover, as in the main analysis, the proportion of mediating effect was smaller in Model 2 (2-9%) than in Model 1 (3-17%). In Model 2, women younger than 65 showed the highest total effect of physical activity on depressive symptoms (total effect1: B = −0.186, p < 0.001), but with only a 2% mediating effect. The highest proportion of mediating effect was observed in men and women older than 65 years (both 9%).

Complementary analyses

In the random intercepts cross-lagged panel model (RI-CLPM), history-independent autoregressive latent trajectory model (HI-ALT), and autoregressive latent trajectory model with structured residuals (ALT-SR) approaches, the mediating effects of physical activity in Model 1 and of cognitive function in Model 2 were ~0% (Supplementary Table 1). The results of the fully saturated CLPM model were comparable with the CLPM model in that Model 1 had a higher proportion of mediating effects (2-4%) than Model 2 (1-2%; Supplementary Table 1). Finally, the XM interaction models obtained small and non-significant estimates of the exposure-mediator interaction.

Main findings

Based on a sample of 51,191 adults aged 50 years or older used to investigate the relationship between physical activity, cognitive function, and depressive symptoms, our results showed that the model with physical activity as a mediator better fitted the data than the model with cognitive function as a mediator. Moreover, the mediating effect of physical activity was 8-9% of the total effect of cognitive function on depressive symptoms, whereas the mediating role of cognitive function was 3-5% of the total effect of physical activity on depressive symptoms. Finally, age and sex moderated the observed effects, with the highest proportion of mediated effect observed in younger women (i.e., 17%) and the lowest proportion observed in younger men (i.e., 3%). Altogether, these findings suggest that higher cognitive resources favor the engagement in physical activity, which contributes to reduced depressive symptoms.

Comparison with other studies

Our results showed that higher engagement in physical activity predicts lower depressive symptoms, which is consistent with the literature that has robustly demonstrated a protective role of physical activity on mental health [9,10].
Our findings confirm this association and add to the mounting evidence showing that physical activity prospectively predicts the level of depressive symptoms. Several psychosocial and biological scenarios have been put forth to explain this protective role [26,28,35]. Likewise, our results showing that higher levels of cognitive function predicted lower depressive symptoms are consistent with previous evidence indicating that cognitive decline precedes depressive symptoms in later life [15][16][17][18]. Several mechanisms, including the detrimental effect of cognitive decline on the ability to be independent in daily life activities [36] and the awareness of one's own cognitive decline [37], can lead to an increase in depressive symptoms. In addition, a common neurodegenerative process and cerebrovascular diseases have been proposed to explain the age-related decline in cognitive function and the increase in depressive symptoms [38]. To the best of our knowledge, our large-scale longitudinal study is the first one to investigate the mediation mechanisms that underlie the relationship between cognitive function, physical activity, and depressive symptoms in adults 50 years of age or older. We found that physical activity partly mediated the effect of cognitive function on depressive symptoms, while the mediating role of cognitive function in the association between physical activity and depressive symptoms was less convincing. These findings suggest that the age-related cognitive decline precedes the decline in physical activity, which is consistent with the literature demonstrating that cognitive resources are required to engage in physical activity [22][23][24][25]. One plausible scenario for this observation can be found in the theory of effort minimization in physical activity (TEMPA) [39,40]. Specifically, anchored in an evolutionary perspective on physical activity [40,41], TEMPA argues that individuals hold an automatic tendency for effort minimization that may explain the difficulty of engaging in regular physical activity [39], a proposition that has been confirmed by a large number of studies [29, 31-34, 42, 43]. Crucially, because of this automatic attraction to physical inactivity, TEMPA proposes that cognitive function is essential to counteract it and thereby favor physical activity engagement. Altogether, though not directly assessed, the current findings fit well with TEMPA. It is worth noting that the aforementioned scenarios are not mutually exclusive, as several studies not only demonstrated the protective effect of physical activity on cognitive function [19][20][21] but also provided biological explanations for this effect [26][27][28]. Finally, the complementary analyses showed mixed, small, and non-significant within-person effects. This result suggests that within-person changes in cognitive function and physical activity did not predict within-person changes in depressive symptoms. This result can be explained in at least two ways. First, only the between-person approach (main analyses) can model the prospective effect from the start to the end of the follow-up period, while the within-person approach focuses only on the occasion-specific changes in the tested constructs (i.e., one path between two waves). Therefore, each within-person model only tests the respective temporary deviation that applies to that occasion and does not accumulate the prospective effect throughout the study period [44].
This may suggest that, when examining processes that deteriorate across aging (i.e., biological aging or senescence), statistical approaches that account for effects accumulating over time could be more suitable than those focusing on relatively short-term changes. Second, it should also be acknowledged that the lack of a significant association between individual changes in cognitive function and physical activity and changes in depressive symptoms may reflect that the between-person approaches can be affected by an unaccounted-for background measure. Yet, the previous literature robustly showed that both physical activity and cognitive function predict changes in depressive symptoms, which allows us to be rather confident in the veracity of the observed results. Note that the CLPM models had worse model fit than the other three models in these complementary analyses. However, because the CLPM often fits worse than models with structured residuals, especially with a high sample size, this should not be the basis for ruling out the results of the CLPM model [44][45][46]. Mainly, the low CFI and TLI can be attributed to the generally rather weak correlations between the measures (e.g., between depression and both cognitive function measures the correlations are between −0.072 and −0.211 across all waves; between depression and both physical activity measures the correlations are between −0.107 and −0.252; between physical activity and cognitive function the correlations are between 0.110 and 0.252).

(Table 3 note: the analysis was performed using a cross-lagged panel longitudinal mediation model. The observed depressive symptoms, physical activity, and cognitive function variables were regressed on sex, age, and sex × age in each wave. The subscript "1" describes the associations between wave 4 and wave 6, while the subscript "2" describes the associations between wave 5 and wave 7; see Fig. 1. AIC, Akaike Information Criterion; CFI, comparative fit index; CI, confidence interval; RMSEA, root-mean-square error of approximation; SRMR, standardized root-mean-squared residual; TLI, Tucker-Lewis index. ***p < 0.001. Table 4: longitudinal mediation models across sex and age.)

The model fit could probably be improved by latent variable modeling instead of estimating the CLPM model on observed variables. In the current data, however, there is no well-fitting, longitudinally invariant measurement model to indicate latent factors, as the two items measuring physical activity and cognitive function were not sufficient to specify well-fitting latent factor measurement models. However, the estimated single-level random-effects models obtained practically zero variance of the random effects (all < 0.001), while the XM interaction models obtained small and non-significant estimates of the exposure-mediator interaction, indicating that the basic CLPM and the fully saturated CLPM models are appropriate for our data without adjusting for these confounding effects. We also fitted Latent Growth Curve Models (LGCM) to the data to inspect the growth trajectories of the measures independently. Although the slopes were relatively small (the estimated mean latent slope factors were between −0.014 and 0.114, and between −0.058 and 0.003 for the quadratic factors), the significant variances of the latent intercept, slope, and quadratic factors in all measures (except for fluency, where only the intercept's variance was significant) suggested that modeling the latent growth factors could be a meaningful improvement over the CLPM model.
This would support the HI-ALT or the ALT-SR models over the CLPM model, but with two caveats. First, only the CLPM model can demonstrate between-person associations, while the HI-ALT and ALT-SR models focus on within-person associations. Second, the interpretation of the autoregressive latent trajectory models is less straightforward than that of the simpler models [45]. Still, relying on the LGCM results, if one is interested in the within-person longitudinal mediation effects, the indirect effect is practically zero in both tested models (i.e., Models 1 and 2). Replication studies, including experimental ones, along with the rapidly developing analytical tools, are needed to provide more robust evidence that would help us disentangle the associations between cognitive function, depressive symptoms, and physical activity.

Strengths and weaknesses

Our large-scale longitudinal study has several strengths. The large sample size from multiple European countries allows a stronger generalization of the current findings compared to studies with smaller and non-international samples. Likewise, the use of longitudinal data has enabled us to assess longitudinal mediation respecting the time lags and the temporal order between the studied factors. Though correlational, and thereby precluding definitive causal inference, this longitudinal approach still brings us closer to a test of the causal relationships between physical activity, cognitive function, and depressive symptoms. However, our study also has some limitations. First and foremost, the physical activity measure was self-reported. Although widely used in previous SHARE-based studies [47][48][49][50], self-reported physical activity is prone to inaccuracy and social-desirability biases, which reduces its validity relative to device-based measures [51]. Similarly, studies have observed that the association between physical activity, cognitive function, and depressive symptoms may differ depending on whether physical activity was self-reported or objectively measured [14,52]. Thus, future studies using a device-based measure of physical activity need to be conducted to test the replicability of the current findings. Second, cognitive function includes various cognitive domains, such as reasoning, processing speed, memory, and spatial ability [53,54]. Our study relied on delayed recall and verbal fluency, which are thought to reflect memory performance [55] and executive functions [56], respectively. Accordingly, because the associations between physical activity and cognitive functions are likely to depend on the specific cognitive domains assessed, future studies should include additional domains of cognition. Third, the 2-year timespan between the measures was not based on relevant theories of time and change but was dictated by the features of data collection in SHARE. Accordingly, because the features of the associations between our variables are certainly time-sensitive, both in terms of the time difference between measurements (i.e., frequency) and in terms of evolution over the long run (i.e., duration), investigating how the observed relations may have depended on the time frame used is warranted in future studies.

CONCLUSION

Our findings show that physical activity partly mediates the effect of cognitive function on depressive symptoms in adults 50 years of age or older. In other words, lower cognitive function may reduce the engagement in physical activity, which in turn can elicit higher depressive symptoms.
Importantly, only about one tenth of the total effect of cognitive function on depressive symptoms was explained by physical activity, which suggests that cognitive function and physical activity have largely independent effects on depressive symptoms. These findings highlight the need for developing interventions that promote physical activity in cognitively declining adults to limit the onset of depressive symptoms.

METHODS

Participants and study design

We studied individuals who took part in the Survey of Health, Ageing and Retirement in Europe (SHARE). SHARE is a population-based study of the health, social networks, and economic conditions of community-dwelling individuals, as described in detail elsewhere [57]. The study was initiated in 2004, and assessments have been performed at approximately 2-year intervals. Eligible participants were people 50 years of age or older and their partners, irrespective of age, and were sampled based on probability selection methods. Computer-assisted personal interviewing (CAPI) was used to collect the data in participants' homes. This study was carried out in accordance with the Declaration of Helsinki. SHARE has been approved by the Ethics Committee of the University of Mannheim (waves 1-4) and the Ethics Council of the Max Planck Society (waves 4-7). All participants provided written informed consent. Data were pseudonymized, and all participants were informed about the storage and use of the data and their right to withdraw consent. We restricted the sample to individuals who participated in at least two waves from wave 4 to wave 7, because wave 3 (SHARELIFE) was devoted to data collection related to childhood histories and did not include the measurements of interest to our study. Including waves 1 and 2 would have imbalanced the time gaps between measurements across the waves, thereby complicating the estimation of the paths and the comparison of the paths across the different time gaps. Using waves 4 to 7 allowed estimating two complete longitudinal mediation cycles with an equal time difference between the measures. Based on these criteria, 76,293 participants were included. Then, we sequentially excluded adults who had fewer than two measures of depressive symptoms (n = 14,738), who had fewer than two measures of cognitive function (n = 801), who had fewer than two measures of physical activity (n = 85), and who were younger than 50 years (n = 1,300). Finally, we excluded adults with clinically significant depressive symptoms (7 or more points on the EURO-D scale; n = 3,605) [58,59], adults who self-reported a diagnosis of dementia, Alzheimer's disease, or senility (n = 290), and adults with limitations in activities of daily living (n = 4,283). The final analytical sample included 51,191 individuals. This sample size was sufficient to perform path modeling with 37 degrees of freedom [60].

Measures

Physical activity. Physical activity was assessed using the following question: How often do you engage in activities that require a low or moderate level of energy such as gardening, cleaning the car, or doing a walk? [50,61]. Participants answered using a 4-point scale: 4 = more than once a week; 3 = once a week; 2 = one to three times a month; 1 = hardly ever or never. Although this measure cannot be used to accurately determine the prevalence of individuals meeting (or not) the recommended level of physical activity, it has been found to predict a wide range of physical and mental health variables [11,22,25,62].

Cognitive function.
Cognitive function was assessed with the validated test of delayed recall, which is regarded as a sensitive predictive measure of the development of dementia [55,63]. Delayed recall was extracted from an adapted 10-word delayed recall test [64]. First, participants listened to a list of 10 words that were read out loud by the interviewer. Then, they were immediately asked to recall as many words as possible. At the end of the cognitive testing session, the participants were asked to recall any of the words from the list a second time, which captured delayed recall. Delayed recall ranged from 0 to 10, with higher scores indicating better cognitive performance. Delayed recall has been shown to be linked to both physical activity and depressive symptoms [15,25,65], which makes it a relevant measure for our study. Note, however, that additional measures of cognitive function, especially those targeting fluid intelligence, are needed [15].

Depressive symptoms. Depressive symptoms were assessed with the EURO-D scale. The EURO-D scale was originally developed to compare symptoms of late-life depression across 11 European countries in the EURODEP Concerted Action Programme [59] and has been used in many epidemiological studies [16,[66][67][68][69]. The 12 items (depressed mood, pessimism, wishing death, guilt, sleep, interest, irritability, appetite, fatigue, concentration, enjoyment, and tearfulness) were scored 0 (symptom not present) or 1 (symptom present), generating a score with a maximum of 12, with higher scores indicating more severe depressive symptoms.

Statistical analyses

We applied a longitudinal mediation analysis to test the two hypothesized models (Fig. 1). The advantage of the longitudinal mediation analysis over the cross-sectional mediation analysis is that the exogenous variable (e.g., cognitive function) precedes the mediator variable (e.g., physical activity) and the outcome variable (i.e., depressive symptoms). Therefore, the longitudinal mediation model accounts for the time lags and the temporal order that are necessary for testing causal inference [70]. Here, we combined the longitudinal mediation analysis with the CLPM, which is regarded as the best method to test between-person effects [44]. Specifically, the first model (Model 1) longitudinally examined the mediating role of physical activity in the association between cognitive function and depressive symptoms, while adjusting for the autoregressive effects (Fig. 1). Subsequently, Model 2 was tested by creating a similar autoregressive longitudinal mediation model, but with cognitive function as the mediator between physical activity and depressive symptoms (Fig. 1). The mediating paths were defined longitudinally. Cognitive function at time x predicted physical activity at time x + 1 (path a) and depressive symptoms at time x + 2 (path c). Then, physical activity at time x + 1 predicted depressive symptoms at time x + 2 (path b). The respective a, b, and c paths within the two longitudinal mediations during the four-wave follow-up were freely estimated (i.e., not assuming stationarity) because the models with no equality constraints obtained a better AIC than the models assuming stationarity [70]. To estimate the full hypothesized model, depressive symptoms at wave 5 were regressed on physical activity at wave 4, and physical activity at wave 7 was regressed on cognitive function at wave 6. The autoregressive regression paths leading from time x to time x + 1 within each construct were freely estimated.
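In schematic form, and leaving out the covariates (sex, age, sex × age) and the wave-specific parameterization described above, the Model 1 paths can be written as follows, with CF = cognitive function, PA = physical activity, and DEP = depressive symptoms (a simplified sketch, not the exact Mplus specification):

$$
\begin{aligned}
\mathrm{PA}_{t+1} &= a\,\mathrm{CF}_{t} + s_{\mathrm{PA}}\,\mathrm{PA}_{t} + \varepsilon^{\mathrm{PA}}_{t+1},\\
\mathrm{DEP}_{t+2} &= b\,\mathrm{PA}_{t+1} + c'\,\mathrm{CF}_{t} + s_{\mathrm{DEP}}\,\mathrm{DEP}_{t+1} + \varepsilon^{\mathrm{DEP}}_{t+2},\\
\text{indirect} &= ab, \qquad \text{total} = ab + c', \qquad \text{proportion mediated} = \frac{ab}{ab + c'}.
\end{aligned}
$$

Because the sampling distribution of the product ab is skewed, the analyses rely on bootstrap replications for asymmetric confidence intervals (see below). A minimal numpy sketch of that percentile-bootstrap logic, on synthetic data whose variable names and generating values are assumptions for illustration only:

import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect from two OLS fits (a path, b path)."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0][1]
    return a * b

# Toy stand-ins: x = cognitive function, m = physical activity, y = depressive symptoms
n = 1_000
x = rng.normal(size=n)
m = 0.03 * x + rng.normal(size=n)
y = -0.14 * m - 0.04 * x + rng.normal(size=n)

boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(10_000))])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])   # asymmetric percentile CI
print(f"indirect effect: {indirect_effect(x, m, y):.4f}, "
      f"95% bootstrap CI [{ci_low:.4f}, {ci_high:.4f}]")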
Correlations between the measures obtained at the same time point were likewise freely estimated. A maximum likelihood estimator was used, with 10,000 bootstrap replications to estimate asymmetric confidence intervals for the indirect effects, and with full information maximum likelihood (FIML) estimation for missing values [71]. Model fit indices were considered acceptable if the root-mean-square error of approximation (RMSEA) and the standardized root-mean-squared residual (SRMR) were lower than 0.08, and the comparative fit index (CFI) and Tucker-Lewis index (TLI) were higher than 0.9 [72]. The fit of Model 1 and Model 2 was compared based on AIC. The analyses were performed with Mplus 8.7. Mplus syntaxes are in the supplement.

Moderation analyses
As both age and sex may influence the pattern of results observed [5,16], the observed variables were regressed on sex, age, and the sex × age interaction, thereby allowing us to assess potential differences between women and men regarding the evolution of the aforementioned variables across age. The sex × age interaction term was a significant predictor in each model. Each model was therefore re-run stratified by sex (i.e., men vs. women) and two age categories (i.e., <65 vs. ≥65 years) [73].

Sensitivity analyses
We performed two sensitivity analyses for each model. In the first sensitivity analysis, we used a measure of physical activity of vigorous intensity, derived from the following question: How often do you engage in vigorous physical activity, such as sports, heavy housework, or a job that involves physical labor? Participants answered using a 4-point scale. In the second sensitivity analysis, we used a measure of verbal fluency instead of delayed recall. Verbal fluency was derived from the verbal fluency test [74], in which participants were asked to name as many different animals as they could think of within 1 min. The score was the total number of correctly named animals, with a higher score indicating better verbal fluency. To help model convergence, verbal fluency scores were divided by 10 to keep them closer in absolute range to the physical activity and depression scores.

Complementary analyses
To test the robustness of the CLPM approach, we conducted several complementary analyses relying on different methods. Indeed, cross-lagged effects can be tested in various ways, with each method allowing specific interpretations of the research question [44]. The most important distinction between cross-lagged models is whether they test between- or within-person effects. For example, the between-person approach examines whether people with lower cognitive function have a higher risk for lower physical activity and for a greater number of depressive symptoms. In contrast, the within-person approach examines whether, for a particular individual, exhibiting lower cognitive function than usual is associated with a higher risk of physical inactivity and a greater number of depressive symptoms. Here, the CLPM was used to test between-person effects, and the complementary analyses were used to test within-person effects or to control for additional effects. In the first complementary analysis, the RI-CLPM was performed to test within-person effects with free trait levels. In the second complementary analysis, we used the HI-ALT model as an alternative method to test the within-person effects, also taking into account the latent growth of the constructs over time [75].
In the third complementary analysis, we used the ALT-SR model, which combines the RI-CLPM and HI-ALT models, to model both the latent growth and the structured residuals of the observed variables. The fourth complementary analysis was a fully saturated CLPM [76], which controls for all lag-2 and lag-3 effects. Lastly, we performed a single-level random-effects model [77], which yielded near-zero variances of the random effects (all < 0.001), and the XM model [78], which yielded small and nonsignificant XM interaction effects. The specifications of the CLPM, RI-CLPM, and ALT-SR models followed the Mplus syntax of Orth et al. [44], while the HI-ALT model was specified as in Ou et al. [75]. The latent growth factors of cognitive function, depressive symptoms, and physical activity in the HI-ALT and ALT-SR models, and the observed variables in the CLPM and RI-CLPM models, were regressed on sex, age, and sex × age. 10,000 bootstrap replications were used to obtain asymmetric confidence intervals.

DATA SHARING
The SHARE dataset is available at http://www.share-project.org/data-access.html.
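For reference, the model-fit acceptance rule applied in the analyses above can be stated compactly. This is a minimal sketch: the thresholds are the conventional cutoffs cited in the text [72], and the example values are hypothetical.

```python
def fit_acceptable(rmsea: float, srmr: float, cfi: float, tli: float) -> bool:
    """Cutoffs used above: RMSEA and SRMR below 0.08, CFI and TLI above 0.90."""
    return rmsea < 0.08 and srmr < 0.08 and cfi > 0.90 and tli > 0.90

# Hypothetical indices for an acceptable model:
assert fit_acceptable(rmsea=0.041, srmr=0.032, cfi=0.962, tli=0.951)
```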
Osteoarthritis Changes Hip Geometry and Biomechanics Regardless of Bone Mineral Density—A Quantitative Computed Tomography Study

We aimed to compare proximal femur geometry and biomechanics in postmenopausal women with osteoarthritis (OA) and/or osteoporosis (OP), using quantitative computed tomography (QCT). A retrospective analysis of QCT scans of the proximal femur of 175 postmenopausal women was performed. Morphometric and densitometric data of the proximal femur were used to evaluate its biomechanics. Of these women, 21 had a normal bone mineral density (BMD), 72 had osteopenia, and 81 were diagnosed with OP. Radiographic findings of hip OA were seen in 43.8%, 52.8%, and 39.5% of the normal BMD, osteopenic, and OP groups, respectively (p < 0.05). OA was significantly correlated with total hip volume (r = 0.21), intertrochanteric cortical volume (r = 0.25), and trochanteric trabecular volume (r = 0.20). In each densitometric group, significant differences in hip geometry and BMD were found between the OA and non-OA subgroups. Hip OA and OP often coexist. In postmenopausal women, these diseases coexist in 40% of cases. Both OA and OP affect hip geometry and biomechanics. OA does so regardless of densitometric status. Changes are mostly reflected in the cortical bone. OA leads to significant changes in the buckling ratio (BR) in both OP and non-OP women.

In adults, bone shape continues to be affected by periosteal apposition (modeling) and endosteal resorption and formation (remodeling), resulting in substantial alteration of bone shape and size. Pathological changes, i.e., OA and OP, might add to the dynamics of these processes [12]. Bone quality cannot be solely attributed to BMD (bone mineral density) [13-15]. Bone morphology and geometry considerably add to the strength model. Separate assessment of the cortical and trabecular bone is necessary to distinguish the differences in their age-related changes, biomechanics, and responses to pharmacological and non-pharmacological treatments. The trabecular bone is about eight times more metabolically active than the cortical bone and is subjected to early and rapid changes with advancing age [13,14]. Both OA [27] and OP [28,29] affect hip geometry and strength, yet no quantitative radiological data comparing the two exist in the literature. This study aimed to compare proximal femur geometry and biomechanics between postmenopausal women diagnosed with OA and/or OP, using QCT, and to evaluate the extent to which the two diseases coexist in this group of patients.

Materials and Methods
QCT scans of the proximal femur of 175 consecutive postmenopausal women presenting with low back pain (LBP) and groin pain to the emergency department or outpatient clinic were collected. A multi-detector-row CT (computed tomography) scanner (Aquilion 16, Toshiba Medical Systems Corporation, Tokyo, Japan) at the Radiology Department was used for evaluation of both the lumbar spine and the proximal femurs. Patients were scanned at 120 kV and 250 mA, with a reconstruction thickness of 0.5 mm and a spatial resolution of 0.625 × 0.625 mm. For the proximal femur analysis, patients were placed in a supine position with a solid calibration phantom (Mindways, Austin, TX, USA) placed beneath the patient between the hips. The scanned region extended from just above the femoral head to 3.5 cm below the lesser trochanter. The CT scanner table height was set at the level of the greater trochanter.
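Although the QCT software performs the calibration internally, the principle behind scanning with a solid phantom can be sketched as follows: rods of known density in the phantom are used to fit a linear map from CT numbers to volumetric BMD. The rod values below are hypothetical placeholders, not the certified densities of the Mindways phantom.

```python
import numpy as np

def hu_to_bmd(hu, phantom_hu, phantom_density):
    """Convert CT numbers (HU) to volumetric BMD (mg/cm^3) using a
    linear fit through the known densities of calibration phantom
    rods scanned together with the patient."""
    slope, intercept = np.polyfit(phantom_hu, phantom_density, 1)
    return slope * np.asarray(hu) + intercept

# Hypothetical rod HU readings and reference densities (mg/cm^3):
rods_hu = np.array([-50.0, 60.0, 180.0, 320.0])
rods_density = np.array([0.0, 75.0, 150.0, 300.0])
print(hu_to_bmd([100.0, 250.0], rods_hu, rods_density))
```

Because the phantom is scanned with every patient, this per-scan calibration makes vBMD values comparable across patients and sessions.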
Participants were categorized into subgroups according to OA (radiographic signs) and OP status, the latter based on BMD T-scores and/or the presence of an osteoporotic vertebral compression fracture. Patients with chronic endocrine diseases, those taking antiresorptive drugs, and those with previous total hip arthroplasty were excluded from the study. The institutional review board approved the study. Informed consent was obtained from the patients before study participation. Proximal femur analysis was performed using the QCT Pro Bone Investigational Toolkit (BIT, Mindways, Austin, TX, USA) [30]. All volumetric bone mineral density (vBMD) measurements and structural characteristics were extracted from our QCT Pro BIT database. The software separated the cortical bone based on a fixed threshold of 350 mg/cm³ for all CT scans. The narrow femoral neck (FN) region was found automatically as a plane perpendicular to the FN axis where the approximate diameter ratio (superior-inferior to anterior-posterior) was 1.4, producing the lowest cross-sectional area (CSA) of the FN. The produced cross-section was then divided automatically into 16 sectors (defined by equal angles of 22.5°) with the origin at the center of mass. For all sectors, vBMD was assessed separately for the trabecular and cortical compartments. FN angle, width, overall CSA, volume, and mass, hip axis length, cross-sectional moment of inertia (CSMI), section modulus (Z), and buckling ratio (BR) were measured using BIT. Z is a measure of the ability to withstand bending stress. QCT Pro evaluates the section modulus along the strongest axis (Zmax, from the geometric center to the periosteal surface) and the weakest axis (Zmin, the corresponding periosteal distance orthogonal to the Zmax axis), which combined reflect the ability to withstand torsion. The CSMI, from which the section modulus is derived, measures the mass distribution relative to the geometric center, reflecting how effective a cross-section is at resisting bending and torsion, depending on the axis chosen for calculation. Both Z and CSMI assume a homogeneous distribution of cortical bone; however, differences in porosity and mineralization lead to varied voxel density. To address this, each voxel's area is multiplied by the ratio of measured cortical density to physiologic bone density to produce density-weighted Z and CSMI (DW-Z, DW-CSMI). The BR reflects strength against compressive stress leading to sudden sideways deflection (buckling) of a structural member. BR is a measure of cortical instability resulting from excessive cortical thinning, and relates the cortical thickness to the width of the femoral neck. Apart from the structural characteristics, we also evaluated density measures of the proximal femurs, taking note of any signs of hip joint OA. The OA and non-OA subgroups were defined based on CT assessment of the hip joint for cartilage destruction and the presence of osteophytes. Lumbar spine scans were evaluated to identify vertebral compression fractures. Diagnosis of OP was based on QCT BMD T-score criteria, using the National Health and Nutrition Examination Survey (NHANES) DXA-equivalent reference for hip QCT [19]. All of the analyses were carried out using TIBCO Software Inc. (2017) Statistica (data analysis software system), version 13.1. Descriptive statistics of all variables were calculated. Normally distributed quantitative variables were compared using the Student t-test; non-normally distributed and categorical variables were compared using the Mann-Whitney U test and the Kruskal-Wallis test.
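As a rough illustration of the biomechanical measures defined above, the following sketch computes a buckling ratio and a density-weighted CSMI and section modulus from cortical voxel data. It mirrors the verbal definitions in the text (density weighting by the ratio of measured to physiologic density; Z obtained from the CSMI and the maximal distance from the bending axis); the assumed physiologic density of 1200 mg/cm³ and all variable names are illustrative, not the BIT implementation.

```python
import numpy as np

def buckling_ratio(d_max_mm, cortical_thickness_mm):
    """BR: distance from the section's center of mass to the periosteal
    surface divided by cortical thickness; larger values indicate a
    thinner, more buckling-prone cortex."""
    return d_max_mm / cortical_thickness_mm

def dw_csmi_and_z(y_mm, density_mg_cm3, voxel_area_mm2, rho_phys=1200.0):
    """Density-weighted CSMI and section modulus about a bending axis
    through the centroid. `y_mm` holds each cortical voxel's distance
    from that axis; each voxel area is scaled by measured/physiologic
    density (rho_phys is an assumed fully mineralized density) to
    account for porosity and incomplete mineralization."""
    w = np.asarray(density_mg_cm3) / rho_phys
    csmi = np.sum(w * voxel_area_mm2 * np.asarray(y_mm) ** 2)  # ~mm^4
    z = csmi / np.max(np.abs(y_mm))                            # ~mm^3
    return csmi, z
```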
Correlations between variables were calculated using Spearman's rank correlation coefficient. The statistical level of significance was set at p = 0.05.

Participants' Baseline Characteristics
In the 175 postmenopausal women included in this study, the mean age was 68.8 years (standard deviation (SD) 11.26 years, standard error (SE) 0.85 years), mean weight was 64.4 kg (SD 14.9 kg, SE 4.87 kg), and mean height was 159 cm (SD 6 cm, SE 0.45 cm). Among the women, 12% had a normal BMD, 41.1% met the densitometric criteria of osteopenia, and 46.9% were diagnosed with OP. Data on patients' baseline characteristics are summarized in Table S1. Nearly one-half of the women (79 women, 45%) had radiographic signs of at least unilateral hip OA, whereas 45 (25.7%) had sustained at least one vertebral compression fracture in the lumbar spine, with 20 women (25.3%) also having OA as a coexisting disease. Overall, 70 (40%) women had decreased BMD (either osteopenia or OP) combined with radiographic hip OA, with 32 women (18.3% overall, 45.7% of the OA group) being osteoporotic. Spearman's rank correlation coefficient showed a weak yet significant association between hip OA and history of hip fracture (not site-matched), and a moderate association between densitometric status and history of vertebral fractures (r = 0.16, p < 0.05 and r = 0.28, p < 0.05, respectively).

Morphological and Densitometric Findings
A number of morphological and densitometric measures proved to be significantly different between the OA and non-OA subgroups (Table S2). FN characteristics (angle, width, and height) were not different between the OA and non-OA subgroups, regardless of densitometric status. However, the FN volume was significantly different in the patients with a decrease in overall BMD. Patients' physical characteristics were mostly correlated with hip axis length and total hip volume (r = 0.36 and r = 0.46, respectively; p < 0.05), mostly reflected in the volume of the intertrochanteric region (r = 0.43, p < 0.05). Weight was overall less correlated with hip characteristics but was mostly correlated with cortical indices, such as total hip and intertrochanteric cortical volumes (r = 0.36 and r = 0.38, respectively; p < 0.05). Hip BMD measurements did not reach those levels of association. FN morphology, expressed in angle, width, and length, was mostly correlated with total hip and trochanteric cortical volumes (r = 0.55 and r = 0.57, respectively; p < 0.05). Morphological indices of the FN were mostly insignificantly correlated with BMD measurements; those that reached statistical significance showed weak correlations.

Biomechanical Findings
Collective biomechanical data of the patients' subgroups (OA vs. non-OA) are presented in Figure 1. A gradual increase in BR with decreasing BMD can be noted from our data. There was a decrease in Z, CSMI, and CSA across the groups (separately for the cortical and whole bone). In the normal BMD group, all indices except cortical CSMI were significantly different between the OA and non-OA subgroups.
In the osteopenia and OP groups, the differences reached statistical significance in all measurements (Figure 2).

Sectors
Each sector was characterized by its perimeter, average cortical BMD, average trabecular BMD, average cortical thickness, BMD-normalized cortical thickness, the average distance from the center of mass to the cortex, and the average distance from the geometric center to the cortex (Figure 3). In the supero-posterior region (13th-16th sectors), there were no significant differences between the normal BMD and osteopenic groups in the average trabecular BMD. In the posterior overlapping region (10th-14th sectors), there were no significant differences between the osteopenic and OP groups in the average cortical BMD. All other measurements of the abovementioned characteristics were statistically significant among the normal BMD, osteopenic, and OP groups. Spearman's rank correlation coefficient showed no significant association between age and cortical/trabecular BMD in the 10th-12th sectors of the femoral neck (infero-posterior part). At the same time, significant correlations with average cortical thickness were found across all sectors. The highest correlations were produced by adjacent sectors of the supero-posterior/superior region [i.e., the 15th, 16th, 1st, and 2nd sectors (r = −0.20 to r = −0.22, p < 0.05)]. Intersector correlations were strongest between adjacent sectors (of the same region), in terms of average cortical thickness (up to r = 0.84-0.85 for the 8th, 9th, and 10th sectors), average trabecular BMD (up to r = 0.7-0.76 for the 4th, 5th, and 6th sectors), and average cortical BMD (up to r = 0.85 for the 15th and 16th sectors). Among the individual sectors, the average trabecular BMD of the 10th sector was least significantly correlated with measurements of other sectors. Most commonly, inverse intersector correlations between average trabecular and cortical BMD were noted across all sectors.

Discussion
In this study, we retrospectively investigated the coexistence of OP and hip OA in postmenopausal women, and their quantitative effects on proximal femur geometry and biomechanics. Associations between proximal femur three-dimensional architecture, cortical bone geometry, and strength were presented in previous studies [15,31]. Yet, there are no reports comparing OA and OP. In the studied cohort, we showed the prevalence of OP in the group with radiographically proven hip OA (45.7%); of these, >25% had sustained a vertebral compression fracture. If we include osteopenia, the conditions coexist in 70 of 79 women with OA, which is higher than that reported in large prospective population-based cohorts (20.7-28%). Our research focused on postmenopausal Caucasian women. Borggrefe et al. [12] investigated a large cohort of older men, who were categorized according to history of hip fracture.
In their cohort, QCT-derived measures of the femoral neck region showed the following correlations between vBMD and Z or BR (Z: r = 0.47 vs. r = 0.13 [normal], 0.39 [osteopenia], and 0.64 [osteoporosis] in our groups, where the normal-group r was not statistically significant; BR: r = −0.79 vs. r = −0.81 [normal], −0.72 [osteopenia], and −0.64 [osteoporosis]). The men in both of their groups were older than the women in our study, regardless of densitometric status (mean 73.3-77.1 years vs. 63.1-71.8 years). This discrepancy implies that gender-dependent proximal femur geometry contributes significantly to the ability to withstand stress. Indeed, Yates et al. found significant gender differences in hip structural geometry in their hip structural analysis (HSA)-based study [32]. Furthermore, the differences increased with age. The differences were seen in CSA (cross-sectional area), outer diameter, cortical thickness, Z, and BR in both the femoral neck and intertrochanteric regions. The findings were subsequently confirmed in a large QCT-based prospective population study [33]. There is a recognized tendency toward femoral neck expansion [8,12,32]. Periosteal apposition leads to an increasing CSA of the femoral neck with age. The endosteal expansion resulting in widening of the endosteal cavity can impact the stability of a femoral implant. In our group, the cortical CSA of the femoral neck decreased with decreasing BMD. Total hip volume (cortical and trabecular bone volumes combined) differed between non-OP and OP women (266.1 cm³ vs. 198.7 cm³), but the difference was non-significant. When comparing their hip compartments, however, both groups showed significant differences. Overall, we observed, contrary to previous reports [8,12,32], that the OP group was characterized by smaller volumes of the different hip regions (total hip, femoral neck, greater trochanter, intertrochanteric zone, and Ward's triangle). This could be partially attributed to less robust physical characteristics (both weight and height). Indeed, these were mostly correlated with hip axis length and total hip volume, and mostly reflected in the volume of the intertrochanteric region [15]. Weight was overall weakly but significantly correlated with hip characteristics, mostly with cortical indices such as total hip and intertrochanteric cortical volumes. Within each densitometric group, women with OA showed significantly higher volumes in the cortical compartments of most regions. The differences were not conspicuous in the trabecular compartments or overall. The presence of osteophytes contributing to the increased cortical volume might explain these observations. The volumetric data were reflected in the biomechanical measurements for all densitometric groups: the OA-affected hips showed better mechanical strength. The angle, width, and height of the femoral neck did not show any differences between the OA and non-OA subgroups, regardless of densitometric status. The femoral neck is an area that is usually spared from osteophytosis. Although the supero-posterior region undergoes gradual cortical thinning, the infero-posterior region of the femoral neck cross-section is least likely to be affected by age-related changes. This is in accordance with previous observations in femoral neck fractures, in which the decline in cortical thickness and density of the superior half of the femoral neck averaged 3.3% per year and 1.2% per year, respectively, in contrast to losses of 0.9% per year and 0.4% per year, respectively, in the inferior femoral neck [23,34].
When comparing the normal BMD and osteopenic groups, there were significant differences in the average trabecular BMD in the supero-posterior region, a tendency not seen between the osteopenic and OP groups. Thus, the bone stock initially seemed to deplete significantly in the trabecular compartment, whereas cortical thinning became more noticeable with decreasing BMD. This observation was mentioned only once recently, by Khoo et al. [8], who reported a quadratic vs. linear loss of volumetric BMD in the cortical and trabecular compartments, respectively. QCT, relative to DXA (dual-energy X-ray absorptiometry), allows analysis of all bone compartments. It is more sensitive in detecting diminished BMD, since the measurement is not affected by obesity, degenerative changes, joint space narrowing, calcifications, and osteophytes [35-38]. It facilitates the understanding of the three-dimensional bone structure, which can be helpful in preoperative planning [39-41] and in therapy monitoring, either medical or implant-focused. A diagnosis of OA does not exclude a diagnosis of OP, especially in the elderly. OP in OA patients requires medical attention. A proximal femur or vertebral compression fracture could be the eventual consequence of low bone mineral density, adding to overall morbidity and mortality in these patients. Untreated OP patients undergoing THA (total hip arthroplasty) have higher intra- and post-operative risks, such as those of intraoperative fracture, periprosthetic osteolysis with implant migration, and postoperative periprosthetic fracture [42]. Postoperative antiresorptive medication reduces the risk of revision surgery by almost 60% [42]. A novel local osteo-enhancement procedure could serve as a preventive measure against proximal femur fracture [43]. The present study has several limitations. First, our analysis was based on QCT findings, which have recognized technical limitations, particularly partial-volume effects in the cortical regions caused by the limited spatial resolution, and beam-hardening artifacts, which can influence the measurements [44]. Second, the study included only Caucasian women; thus, the results cannot be fully extrapolated to other populations. Despite its limitations, the study has its strengths. First, it was conducted on a cohort of considerable size. Second, it raises an issue generally neglected in studies on OA and OP, wherein patients presenting with specific complaints (in our case LBP and groin pain) should prompt the treating physician to consider that the symptoms could be caused by osteoporotic vertebral fractures (a consequence of OP), which require prompt medical attention, both general and, if OA patients are to be referred for joint arthroplasty, implant-oriented.

Conclusions
Hip OA and OP often coexist. In postmenopausal women, these diseases may coexist in 40% of cases. Both OA and OP affect hip geometry and biomechanics, but differently. OA does so regardless of densitometric status, yet the discrepancy increased with a decline in bone stock. The changes are mostly reflected in the cortical bone: total hip cortical BMD and volume, intertrochanteric cortical BMD and volume, and CSA, especially. Sectoral analysis showed cortical thinning in the supero-posterior region of the FN in women with OP, while osteopenia initially leads to trabecular loss in the same region. In terms of biomechanics, OA leads to a significant decrease in BR in both OP and non-OP women, and a significant increase in Z and CSMI.
QCT clearly shows the density and architecture of the proximal femur, offering a broader perspective to researchers as well as orthopedic surgeons and practicing clinicians.

Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/8/5/669/s1. Table S1: Patients' baseline characteristics; mean values of selected morphometric and densitometric characteristics in BMD-based groups. Table S2: Results (p-values) of the Mann-Whitney U test comparing different densitometric groups with regard to the presence of radiographic signs of hip osteoarthritis.
Strong entanglement criteria for mixed states, based on uncertainty relations

We propose an entanglement criterion, specially designed for mixed states, based on an uncertainty relation and the Wigner-Yanase skew information. The variances in this uncertainty relation do not involve any classical mixing uncertainty, and thus turn out to be purely quantum mechanical in nature. We show that any mixed entangled state can be characterized by our criterion. We demonstrate its utility for several generalized mixed entangled states, including Werner states, and it turns out to be stronger than any other known criterion in identifying the correct domain of the relevant parameters for entanglement. The proposed criterion reduces to the Schrödinger-Robertson inequality for pure states.

I. INTRODUCTION
Entanglement between two or more subsystems is considered a resource for quantum information. Several applications, like quantum teleportation, quantum metrology, quantum cryptography, and super-dense coding, require that the participating subsystems be entangled. Identifying whether these subsystems are entangled or not is therefore an essential step toward quantum information processing. In the past two decades, several criteria for the detection of entanglement have been developed. The positive partial transpose (PPT) criterion by Peres and Horodecki [1] has been one of the most important ones, providing a necessary and sufficient condition in certain cases. Peres proposed that the density matrix of a bipartite entangled state after partial transpose (PT) in the basis of one of the parties exhibits negative eigenvalues. Other criteria include those based on reduction [2] and the computable cross norm [3]. However, to test these criteria in experiments, one would ideally need to reproduce the density matrix using quantum state tomography. As an alternative approach, more suitable for experimental detection of entanglement, criteria based on measurement outcomes of the relevant observables have been derived. For example, the PT criterion has been mapped into uncertainty relations of the relevant quadratures, violation of which would indicate the existence of entangled states [4,5]. There also exist methods based on Bell-type inequalities [6], local uncertainty relations [7], and the SRPT inequality [5]. These measurement-dependent criteria are often expressed in terms of inequalities that are satisfied by separable states, and any state violating these inequalities must be entangled. These criteria are useful for detecting entanglement in pure as well as in mixed states. But unfortunately, they cannot reveal the correct domain of the relevant parameters, as prescribed by the PPT criterion, for detecting entanglement in mixed states. This can be attributed to the fact that mixed states involve both classical and quantum probability distributions, and the above criteria do not differentiate between these two when evaluating the expectation values. As entanglement is a property of purely quantum nature, to detect it we need a criterion that considers only the quantum uncertainties of the relevant variables. In addition to the criteria based on partial transposition, the quantum uncertainty of local observables has also been used to characterize non-classical correlations like quantum discord [8]. Quantum discord can distinguish between the classical and quantum probability distributions inherent in the system.
However, it is quite cumbersome to calculate the discord, as it requires optimization over many measurements; for systems with more than two qubits, it becomes even more intractable. In this work, we use an alternative strategy. In this paper, we consider a Schrödinger-Robertson-type uncertainty relation proposed by Furuichi [9], which includes the Wigner-Yanase skew information. This skew information is known to give a measure of the quantum uncertainty of an operator X with respect to a given state ρ. As mentioned above, for a joint state of two subsystems, the quantum uncertainty of a local variable provides an alternative estimate of the discord. Here, we consider both local and non-local variables of the two subsystems and apply the partial transposition criterion to the uncertainty relation to detect entanglement between them. When using the nonlocal variables, we essentially consider the nonlocal correlation of the subsystems. Moreover, the uncertainty relation employed for the entanglement criterion involves only terms that do not contain the classical mixing uncertainty. This implies that the uncertainty relation is particularly suitable for mixed states. We emphasize that the skew information represents quantum (rather than total) uncertainty, which considers both the incompatibility and the correlation between the relevant observables. Thus the uncertainty relation is of purely quantum nature and is stronger than the conventional ones when one deals with mixed states. Note that this uncertainty relation reduces to the Schrödinger-Robertson inequality for pure states. We will apply the inequality to several generalized mixed states, including the Werner states. We show that our inequality reveals the ideal domain of the relevant parameter for entanglement, unlike the other criteria based on the Bell inequalities [10,11], uncertainty relations [12], and the Schrödinger-Robertson inequality [5]. It must be borne in mind that the proposed inseparability inequality requires full knowledge of the density matrix to be evaluated, so we need quantum state tomography to reproduce the density matrix. The paper is organized as follows. In Sec. II, we review some basic properties of the skew information and highlight its relation with the variance. In Sec. III, we present the entanglement criterion in the form of a Schrödinger-Robertson-type inequality in terms of the skew information. We also demonstrate how the violation of this inequality can detect entanglement for a large class of pure and mixed states, including the Werner states. In Sec. IV, we conclude the paper.

II. WIGNER-YANASE SKEW INFORMATION
In their seminal paper on quantum measurement, Wigner and Yanase introduced the skew information [13] as
$$I(\rho, X) = -\tfrac{1}{2}\,\mathrm{Tr}\big([\sqrt{\rho},\, X]^2\big). \quad (1)$$
This corresponds to a measure of the amount of information on the values of observables that are skew to the operator X. Here X is a conserved quantity, like the Hamiltonian, momentum, etc., of the relevant quantum system, which is in a state described by the density matrix ρ. Note that I(ρ, X) accounts for the non-commutativity between ρ and X. The skew information satisfies several criteria suitable for a valid information-theoretic measure, which are as follows:
1. Non-negativity: I(ρ, X) ≥ 0.
2. Convexity: It is convex with respect to ρ, in the sense that $I(p_1\rho_1 + p_2\rho_2, X) \le p_1 I(\rho_1, X) + p_2 I(\rho_2, X)$, where $p_1 + p_2 = 1$ and $p_1, p_2 \ge 0$. This suggests that the skew information decreases when two density matrices are mixed with each other.
3. Additivity: This is represented by
$$I(\rho_1 \otimes \rho_2,\; X_1 \otimes I_2 + I_1 \otimes X_2) = I(\rho_1, X_1) + I(\rho_2, X_2),$$
where ρ1 and ρ2 are the density operators describing the two systems, $I_i$ are the identity operators in the respective Hilbert spaces (i ∈ {1, 2}), and X1 and X2 are the corresponding conserved quantities.
4. Unitary invariance: Let U be a unitary operator; then $I(U\rho U^\dagger, X) = I(\rho, X)$, where $U = e^{-i\theta X}$ commutes with X. That is, when the state evolves according to the Landau-von Neumann equation, the skew information remains constant for isolated systems.
The skew information has been used to construct measures of quantum correlations [14] and quantum coherence [15], to detect entanglement [16], to study phase transitions [17] and uncertainty relations [18-20], and so on. The skew information is related to the conventional variance through the relation
$$I(\rho, X) = \mathrm{Tr}(\rho X^2) - \mathrm{Tr}(\sqrt{\rho}\, X \sqrt{\rho}\, X).$$
This is equal to the variance only if the state ρ is a pure state, i.e., if ρ = |ψ⟩⟨ψ|:
$$I(|\psi\rangle\langle\psi|, X) = V(|\psi\rangle\langle\psi|, X),$$
where $V(\rho, X) = \mathrm{Tr}(\rho X^2) - (\mathrm{Tr}\,\rho X)^2$. On the other hand, for any mixed state ρ, the skew information is always dominated by the variance:
$$I(\rho, X) \le V(\rho, X). \quad (7)$$
A mixed state can be considered as a classical mixture of quantum states. The variance does not differentiate between the quantum uncertainty (arising out of the purely quantum probability distribution) and the classical uncertainty (associated with the classical mixing) in the mixed state. On the contrary, the skew information can be interpreted as equivalent to the quantum uncertainty and does not account for the classical mixing. In fact, it vanishes if ρ and X commute with each other. Also, the convexity property (item 2 above) suggests that classical mixing cannot increase quantum uncertainty. The interpretation of the skew information as a kind of quantum uncertainty and the relation (7) above were used to construct an uncertainty relation that is stronger than the usual Heisenberg uncertainty relation for detecting entanglement in a mixed state. We will discuss this in the next Section.

III. ENTANGLEMENT CRITERIA BASED ON THE UNCERTAINTY RELATIONS
In this Section, we will first discuss the modified uncertainty relations and then propose how they can be used as an entanglement criterion.

A. Modified uncertainty relations
The usual uncertainty relation, due to Heisenberg, sets a fundamental limit on the simultaneous measurement of two noncommuting observables [21]. For the measurement of any two observables A and B in a quantum state ρ, this is given by
$$V(\rho, A)\, V(\rho, B) \ge \tfrac{1}{4}\,\big|\mathrm{Tr}(\rho\,[A, B])\big|^2, \quad (8)$$
where V(ρ, A) and V(ρ, B) are the variances of A and B, as defined above, and Tr(ρ[A, B]) is the average of the commutator [A, B] = AB − BA in the state ρ. It is noticeable that the commutator, which is so fundamental in quantum mechanics, makes its appearance in Heisenberg's relation. In addition to this commutator, one also considers the correlation between the observables, which is usually expressed in terms of the anti-commutator in quantum mechanics. This was included by Schrödinger [22] in the canonical form of the uncertainty relation, which now takes the following form:
$$V(\rho, A)\, V(\rho, B) \ge \tfrac{1}{4}\,\big|\mathrm{Tr}(\rho\,[A, B])\big|^2 + \tfrac{1}{4}\,\big|\mathrm{Tr}(\rho\,\{\delta A, \delta B\})\big|^2, \quad (9)$$
where $\delta A = A - \mathrm{Tr}(\rho A)$ and $\delta B = B - \mathrm{Tr}(\rho B)$ are the fluctuation operators about their respective expectation values, calculated for the state ρ. As discussed in Sec. II, the skew information can be considered as quantum uncertainty. Luo therefore proposed [18] that Heisenberg's uncertainty relation might be recast in terms of the skew information: for any two observables A, B and a quantum state ρ,
$$I(\rho, A)\, I(\rho, B) \ge \tfrac{1}{4}\,\big|\mathrm{Tr}(\rho\,[A, B])\big|^2. \quad (10)$$
This relation is defined in the spirit of the relation 0 ≤ I(ρ, A) ≤ V(ρ, A).
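The quantities above are easy to evaluate numerically. A minimal sketch (illustrative only, not code from the paper) computes I(ρ, X) via the eigendecomposition of ρ and checks the pure-state equality I = V together with the mixed-state bound (7):

```python
import numpy as np

def sqrtm_psd(rho):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def skew_information(rho, X):
    """I(rho, X) = -1/2 Tr([sqrt(rho), X]^2)."""
    s = sqrtm_psd(rho)
    c = s @ X - X @ s
    return -0.5 * np.trace(c @ c).real

def variance(rho, X):
    m = np.trace(rho @ X).real
    return np.trace(rho @ X @ X).real - m ** 2

sz = np.diag([1.0, -1.0]).astype(complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # pure state |+><+|
mixed = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)  # generic mixed state

print(skew_information(plus, sz), variance(plus, sz))      # equal (both 1.0)
print(skew_information(mixed, sz) <= variance(mixed, sz))  # True, bound (7)
```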
However, this does not distill the right essence of the uncertainty relation: when the quantum uncertainties I vanish for two non-commuting operators A and B, the above inequality (10) gets violated, even if the state ρ has non-classical correlations. It was later observed that the Heisenberg uncertainty relation is of purely quantum nature for pure states and of a "mixed" flavor for mixed states, because V(ρ, A) is a hybrid of classical and quantum uncertainty for such states. Motivated by this simple observation, Luo introduced [20] the quantity U(ρ, A), by decomposing the variance into classical and quantum parts, i.e., V(ρ, A) = C(ρ, A) + I(ρ, A):
$$U(\rho, A) := \sqrt{V(\rho, A)^2 - \big(V(\rho, A) - I(\rho, A)\big)^2}. \quad (11)$$
Luo then successfully introduced a new Heisenberg-type uncertainty relation based on U(ρ, A) (which suitably excludes the classical mixing, especially for mixed states), as follows:
$$U(\rho, A)\, U(\rho, B) \ge \tfrac{1}{4}\,\big|\mathrm{Tr}(\rho\,[A, B])\big|^2. \quad (12)$$
The three quantities V(ρ, A), I(ρ, A), and U(ρ, A) have the following ordering:
$$I(\rho, A) \le U(\rho, A) \le V(\rho, A). \quad (13)$$
Clearly, for pure states, we have the classical correlation C = 0 and thus U = V, and the above relation (12) becomes the same as the original uncertainty relation (8). The above uncertainty relation (12) was improved by Furuichi [9], who proposed a stronger Schrödinger-type uncertainty relation, by improving the bound for the quantity U, as
$$U(\rho, A)\, U(\rho, B) \ge \big|C_\rho(A_0, B_0)\big|^2, \quad (14)$$
where $A_0 = A - \mathrm{Tr}(\rho A)$ and $B_0 = B - \mathrm{Tr}(\rho B)$, and $C_\rho(A, B)$ is called the Wigner-Yanase correlation between two observables and can be written as
$$C_\rho(A, B) = \mathrm{Tr}(\rho\, A^* B) - \mathrm{Tr}(\sqrt{\rho}\, A^* \sqrt{\rho}\, B), \quad (15)$$
where A* is the complex conjugate of the operator A. Note that, if A = B is self-adjoint, this simplifies to
$$C_\rho(A, A) = \mathrm{Tr}(\rho A^2) - \mathrm{Tr}(\sqrt{\rho}\, A \sqrt{\rho}\, A), \quad (16)$$
which becomes the same as the skew information. It can be shown that, for self-adjoint A and B, the mean-value terms cancel, i.e.,
$$C_\rho(A_0, B_0) = C_\rho(A, B). \quad (17)$$
So the inequality (14) can finally be written as
$$U(\rho, A)\, U(\rho, B) \ge \big|C_\rho(A, B)\big|^2. \quad (18)$$

B. Relation to the entanglement criteria
As mentioned in the Introduction, entanglement criteria based on partial transposition in uncertainty relations do not differentiate between the quantum and classical probabilities, nor between the corresponding correlations of the observables. Thus they fail to attain the correct domain, as prescribed by the PPT criterion, of the relevant parameter for mixed states. As the uncertainty relation (18) includes both the quantum uncertainty and the quantum correlations, while excluding the classical uncertainty, it is expected that an entanglement criterion based on (18) will prove much stronger than the older versions of such criteria when mixed states are involved. In the following, we therefore propose a new criterion, particularly useful for detecting entanglement in bipartite mixed states: every separable state ρ satisfies
$$U(\rho^{PT}, A)\, U(\rho^{PT}, B) \ge \big|C_{\rho^{PT}}(A, B)\big|^2, \quad (19)$$
where A and B are operators in the joint Hilbert space and $\rho^{PT}$ represents the partial transpose of the joint density matrix ρ with respect to one of the subsystems. Violation of the above inequality is a sufficient condition for entanglement, because the Peres criterion is sufficient to detect entanglement in a bipartite system.

Werner state
To illustrate the utility of the criterion (19), we first consider a Werner state, which is a mixture of a maximally entangled state and the maximally mixed state. The Werner state for a two-qubit system is given by
$$\rho_W = p\,|\psi^-\rangle\langle\psi^-| + \frac{1-p}{4}\, I, \quad (20)$$
where $|\psi^-\rangle = \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$ is a maximally entangled state (one of the four celebrated Bell states) and 0 ≤ p ≤ 1. In the computational basis (|00⟩, |01⟩, |10⟩, |11⟩) of two qubits, we can write ρ as
$$\rho_W = \frac{1}{4}\begin{pmatrix} 1-p & 0 & 0 & 0\\ 0 & 1+p & -2p & 0\\ 0 & -2p & 1+p & 0\\ 0 & 0 & 0 & 1-p \end{pmatrix}.$$
With the partial transpose with respect to the second qubit, this transforms into
$$\rho_W^{PT} = \frac{1}{4}\begin{pmatrix} 1-p & 0 & 0 & -2p\\ 0 & 1+p & 0 & 0\\ 0 & 0 & 1+p & 0\\ -2p & 0 & 0 & 1-p \end{pmatrix},$$
the eigenvalues of which are (1 + p)/4 (triply degenerate) and (1 − 3p)/4. Clearly, the Werner state is entangled (inseparable) for p > 1/3 (as one of the eigenvalues becomes negative), according to the PT criterion, and maximally entangled when p = 1.
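The quoted spectrum of $\rho_W^{PT}$ can be confirmed numerically. In the following sketch (illustrative only, with the partial transpose implemented by reshaping the 4×4 matrix), the minimum eigenvalue reproduces (1 − 3p)/4:

```python
import numpy as np

def werner(p):
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # |01> - |10>
    return p * np.outer(psi, psi.conj()) + (1 - p) * np.eye(4) / 4

def pt2(rho):
    """Partial transpose on the second qubit: swap indices j and l
    of rho[(i j), (k l)]."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for p in (0.2, 0.4, 1.0):
    print(p, np.linalg.eigvalsh(pt2(werner(p))).min(), (1 - 3 * p) / 4)
# The minimum eigenvalue equals (1 - 3p)/4 and turns negative for p > 1/3.
```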
But if we use the uncertainty relation (9) with $\rho^{PT}$, we find the condition for separability to be p ≤ 1, which is always satisfied. Thus, the violation of this inequality cannot be reliably used to detect entanglement in this state. If one were instead to perform the partial transposition on the relevant observables A and B, rather than on ρ, the uncertainty relation (9) would become the SRPT inequality. However, not all observables are suitable for demonstrating the violation of the SRPT inequality; they have to satisfy a general condition to be eligible. The SRPT inequality detects the entanglement of the Werner state for p > 1/2 for a particular choice of observables A and B. This lower bound is, however, larger than p = 1/3. We show below that the present criterion reveals the entanglement even in the domain (1/3, 1/2). As proposed in this paper, we now explore the suitability of the uncertainty relation (19) for entanglement detection, starting from $\rho^{PT}$ obtained above. It is important to suitably choose the observables A and B such that they do not commute with $\rho^{PT}$.

Case I: Following [8], we first choose a set of local observables $A = \sigma_z \otimes I_2$ and $B = I_1 \otimes \sigma_z$. For these operators, we find that $V(\rho^{PT}, A) = V(\rho^{PT}, B) = 1$, with the corresponding expressions for I and U given in Eq. (24). For 0 ≤ p ≤ 1/3, the state $\rho^{PT}$ is positive, which implies that it describes some physical state and therefore satisfies the inequality (19). For p > 1/3, however, the term $\sqrt{1-3p}$ is complex. Therefore, we can rewrite $U(\rho^{PT}, A)$ and $U(\rho^{PT}, B)$ in terms of the several branches of the square root in Eq. (25). Writing the result in terms of $a = (1 + p^2)/4$ and $b = p\sqrt{p^2 + 2p + 2}/2$, and using (24), we obtain from (19) a condition for separability [Eq. (26)]. It should be remembered that this condition is obtained in the domain p > 1/3 and is obviously violated for all p ∈ (1/3, 1].

Case II: Usually, the measurement of local observables bypasses the nonlocal correlations that exist between two entangled subsystems. Accordingly, if we choose a set of global observables $A = \sigma_z \otimes \sigma_z$ and $B = \sigma_x \otimes \sigma_x$, we find that they commute with $\rho^{PT}$, and thus the skew information vanishes, i.e., $I(\rho^{PT}, A) = I(\rho^{PT}, B) = 0$. The correlation between these operators also vanishes: $C_{\rho^{PT}}(A, B) = 0$. For such choices of global observables, we therefore cannot say anything about the inseparability of the Werner state using (19). But if we choose a different set of operators, say $A = \sigma_x \otimes \sigma_y$ and $B = \sigma_y \otimes \sigma_x$, which do not commute with $\rho^{PT}$, we have the same expressions for V, I, and U as in (24), and the correlation term becomes nonzero [Eq. (27)]. Using these expressions, we find that, for p > 1/3, the condition (19) for separability is violated, i.e., the Werner state is entangled for p > 1/3. This result indicates that the criterion (19) is the strongest among all the other known criteria for entanglement. For example, the Bell inequalities [10,11] lead to $p > 1/\sqrt{2}$ for entanglement, while the uncertainty relation in [12] sets the lower limit at $p > 1/\sqrt{3}$, and the Schrödinger-Robertson inequality based on local variables [5] suggests p > 1/2. As is clear from the two cases discussed above, the separability criterion (19) affirms the correct limit for entanglement, as obtained from the Peres criterion. Interestingly, both local and global sets of operators can reveal this limit; one only needs to choose a set of operators that does not commute with $\rho^{PT}$.
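For completeness, the quantity U and the relation (12) can also be checked numerically on any physical state, as in the sketch below (again illustrative; the helper functions repeat the definitions above). Note that the criterion (19) itself cannot be evaluated this way for p > 1/3, since $\rho^{PT}$ then has a negative eigenvalue and no positive square root, which is why the analysis in the text proceeds with the complex branches of $\sqrt{1-3p}$.

```python
import numpy as np

def sqrtm_psd(rho):
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def skew(rho, X):
    s = sqrtm_psd(rho)
    c = s @ X - X @ s
    return -0.5 * np.trace(c @ c).real

def var(rho, X):
    m = np.trace(rho @ X).real
    return np.trace(rho @ X @ X).real - m ** 2

def U(rho, X):
    """Luo's quantity U = sqrt(V^2 - (V - I)^2) from Eq. (11)."""
    V, I = var(rho, X), skew(rho, X)
    return np.sqrt(max(V ** 2 - (V - I) ** 2, 0.0))

# Random two-qubit mixed state and two noncommuting observables.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
A, B = np.kron(sz, np.eye(2)), np.kron(sx, np.eye(2))

lhs = U(rho, A) * U(rho, B)
rhs = 0.25 * abs(np.trace(rho @ (A @ B - B @ A))) ** 2
print(lhs >= rhs - 1e-12)                      # relation (12) holds
print(skew(rho, A) <= U(rho, A) <= var(rho, A))  # ordering (13) holds
```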
Werner derivative
An important generalized class of Werner states is the Werner derivative [23], which is a mixture of a nonmaximally entangled pure state and the maximally mixed state. This can be written in the form
$$\rho_{WD} = p\,|\psi\rangle\langle\psi| + \frac{1-p}{4}\, I, \quad (28)$$
where $|\psi\rangle = \sqrt{a}\,|00\rangle + \sqrt{1-a}\,|11\rangle$ is the Schmidt decomposition of the state obtained by a nonlocal unitary rotation of the Bell state $|\psi^-\rangle$, and 1/2 ≤ a ≤ 1. It is worth noting the difference between the states (28) and (20). In the computational basis of two qubits, $\rho_{WD}$ takes the form
$$\rho_{WD} = \begin{pmatrix} pa + \frac{1-p}{4} & 0 & 0 & p\sqrt{a(1-a)}\\ 0 & \frac{1-p}{4} & 0 & 0\\ 0 & 0 & \frac{1-p}{4} & 0\\ p\sqrt{a(1-a)} & 0 & 0 & p(1-a) + \frac{1-p}{4} \end{pmatrix}. \quad (29)$$
According to the PT criterion, the state described by Eq. (28) is entangled if
$$\sqrt{a(1-a)} > \frac{1-p}{4p}, \quad (30)$$
which further restricts p to 1/3 ≤ p ≤ 1. Clearly, for different values of p, the parameter a has an upper and a lower bound, such that the state $\rho_{WD}$ parameterized by a becomes entangled. But when using the standard uncertainty relation (9), one finds that the state is separable for all p ∈ [0, 1]. This can be seen by using $\rho_{WD}^{PT}$ along with the observables $A = \sigma_z \otimes I_2$ and $B = I_1 \otimes \sigma_z$ in the inequality (9), which leads to p ≤ 1. This would mean that, according to (9), the state $\rho_{WD}$ is always separable, which is not the case. We show below how the inequality (19) can successfully detect entanglement in this state. To employ the criterion (19), we choose the same set of local operators as above and obtain an inequality of the form (31), involving a quantity D defined in Eq. (32). We find that the inequality is violated precisely when D is imaginary, which happens in the following range of a:
$$\frac{1}{2}\left[1 - \sqrt{1 - \left(\frac{1-p}{2p}\right)^2}\,\right] < a < \frac{1}{2}\left[1 + \sqrt{1 - \left(\frac{1-p}{2p}\right)^2}\,\right]. \quad (33)$$
The upper limit of a thus matches the one obtained by directly applying the PT criterion [see Eq. (30)]. By definition of the Schmidt decomposition, one further requires a to be real and positive, and therefore p ≥ 1/3 [otherwise a would be complex; see (33)]. Note that p cannot be greater than unity, as it defines the probability of the state |ψ⟩ in the mixture $\rho_{WD}$. Interestingly, for p = 1/3, the state $\rho_{WD}$ is entangled only for a = 1/2. For higher values of p, the Werner derivative is entangled for a range of values of a, including a = 1/2 (corresponding to the maximally entangled Bell state) and a ≠ 1/2 (corresponding to a non-maximally entangled state |ψ⟩).

An example of a pure nonmaximally entangled state
To further check the criterion (19), we next consider a non-maximally entangled pure state |ψ⟩ [Eq. (34)], with complex coefficients $c_0$ and $c_1$ that are unequal in magnitude. Note that when $c_0 = c_1$, the state becomes one of the Bell states, which are maximally entangled. For $A = \sigma_z \otimes \sigma_z$ and $B = \sigma_x \otimes \sigma_x$, the inequality (19) leads to a condition [Eq. (35)] that is always violated for any nonzero $c_0$ and $c_1$, and maximally violated when $c_0 = c_1$.

An example of a mixed nonmaximally entangled state
Finally, we consider a non-maximally entangled mixed state $\rho_{\mathrm{new}}$ [24], which is a convex combination of a separable density matrix $\rho^G_{12} = \mathrm{Tr}_3(|GHZ\rangle_{123}\langle GHZ|)$ and an inseparable density matrix $\rho^W_{12} = \mathrm{Tr}_3(|W\rangle_{123}\langle W|)$. Here $|GHZ\rangle_{123}$ and $|W\rangle_{123}$ are the GHZ state and the W state, respectively, of three qubits 1, 2, and 3. The state $\rho_{\mathrm{new}}$ can be explicitly written as
$$\rho_{\mathrm{new}} = p\,\rho^W_{12} + (1-p)\,\rho^G_{12}, \quad (36)$$
where 0 ≤ p ≤ 1. Note that the Werner state is also a convex sum of a maximally entangled pure state and a maximally mixed state. On the contrary, the state $\rho^W_{12}$ is not a pure state (though entangled), and $\rho^G_{12}$ is not maximally mixed (though separable). In the computational basis of two qubits, $\rho_{\mathrm{new}}$ and $\rho^{PT}_{\mathrm{new}}$ take the following forms:
$$\rho_{\mathrm{new}} = \begin{pmatrix} \frac{1-p}{2} + \frac{p}{3} & 0 & 0 & 0\\ 0 & \frac{p}{3} & \frac{p}{3} & 0\\ 0 & \frac{p}{3} & \frac{p}{3} & 0\\ 0 & 0 & 0 & \frac{1-p}{2} \end{pmatrix} \quad \text{and} \quad \rho^{PT}_{\mathrm{new}} = \begin{pmatrix} \frac{1-p}{2} + \frac{p}{3} & 0 & 0 & \frac{p}{3}\\ 0 & \frac{p}{3} & 0 & 0\\ 0 & 0 & \frac{p}{3} & 0\\ \frac{p}{3} & 0 & 0 & \frac{1-p}{2} \end{pmatrix}.$$
According to the PT criterion, that $\rho_{\mathrm{new}}$ is entangled for p > 0.708 can be easily verified by finding the eigenvalues of $\rho^{PT}_{\mathrm{new}}$.
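The threshold p > 0.708 can be reproduced numerically from the PT spectrum. The sketch below assumes the mixture weights $\rho_{\mathrm{new}} = p\,\rho^W_{12} + (1-p)\,\rho^G_{12}$, which is our reading of Eq. (36); with these weights the minimum eigenvalue of $\rho^{PT}_{\mathrm{new}}$ crosses zero at $p = 3(\sqrt{5} - 2) \approx 0.708$.

```python
import numpy as np

def rho_new(p):
    # Two-qubit marginals Tr_3|GHZ><GHZ| and Tr_3|W><W|, basis 00,01,10,11.
    e00 = np.zeros(4); e00[0] = 1.0
    e11 = np.zeros(4); e11[3] = 1.0
    ghz12 = 0.5 * (np.outer(e00, e00) + np.outer(e11, e11))
    w_sym = np.array([0, 1, 1, 0]) / np.sqrt(2)      # (|01> + |10>)/sqrt(2)
    w12 = (np.outer(e00, e00) + 2 * np.outer(w_sym, w_sym)) / 3
    return p * w12 + (1 - p) * ghz12                 # assumed mixture weights

def pt2(rho):
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for p in (0.70, 0.71, 0.75):
    print(p, np.linalg.eigvalsh(pt2(rho_new(p))).min())
# Minimum eigenvalue: positive at p = 0.70, negative from ~0.708 onward.
```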
But this cannot be revealed by using Eq. (9) with $\rho^{PT}_{\mathrm{new}}$ and the set of local operators $A = \sigma_z \otimes I_2$ and $B = I_1 \otimes \sigma_z$: this leads to the inequality p ≥ 0, which means that, according to (9), the state $\rho_{\mathrm{new}}$ is separable for all p. On the contrary, as we show below, the inequality (19) can successfully detect the entanglement in this state as well. To evaluate the condition (19), we choose the same set of local operators as above and obtain an inequality [Eq. (39)] that we find is always violated for p > 0.708, which correctly matches the result obtained by directly using the Peres criterion.

D. Discussions
It is worth noting that the usefulness of the criterion (19) becomes more prominent when the state under consideration is mixed in nature. In fact, when ρ is pure, the classical mixing is zero and therefore $I(\rho^{PT}, A) = V(\rho^{PT}, A)$, and similarly for B. So we have $U(\rho^{PT}, A) = V(\rho^{PT}, A)$ and $U(\rho^{PT}, B) = V(\rho^{PT}, B)$, and the left-hand side of (19) becomes the same as that in (9). On the right-hand side, $C_{\rho^{PT}}(A, B)$ also becomes equal to the covariance $\mathrm{Cov}_{\rho^{PT}}(A, B)$ for a pure state, where for any ρ
$$\mathrm{Cov}_\rho(A, B) = \mathrm{Tr}(\rho A B) - (\mathrm{Tr}\,\rho A)(\mathrm{Tr}\,\rho B). \quad (40)$$
In this way, the criterion (9) is sufficient to identify entanglement in two-qubit pure states. But it fails to identify entanglement in two-qubit mixed states, as we have shown it is satisfied by all the two-qubit mixed states considered in this paper. Note that if the partial transpose is taken on the operators A and B instead of on ρ, the inequality (19) reduces to the Schrödinger-Robertson partial transpose (SRPT) inequality [5], i.e., the Schrödinger-Robertson relation evaluated with the partially transposed observables. This inequality is able to detect entanglement in any pure entangled state of bipartite and tripartite systems, by experimentally measuring the mean values and variances of different observables [5]. For mixed states, the SRPT inequality detects the entanglement of bipartite Werner states better than the Bell inequalities. On the contrary, the criterion (19) cannot be verified experimentally, since it involves terms like $\mathrm{Tr}(\sqrt{\rho}\, A \sqrt{\rho}\, A)$, which cannot be measured by usual quantum measurements. However, it is possible to set a nontrivial lower bound. For all ρ and A, we have $\tfrac{1}{2}\mathrm{Tr}[\rho, A]^2 \ge \mathrm{Tr}[\sqrt{\rho}, A]^2$ [25]. This implies that $I(\rho, A) \ge I_L(\rho, A) \ge 0$, i.e., the skew information has a non-negative lower bound. For the spectral decomposition $\rho = \sum_i \lambda_i |\phi_i\rangle\langle\phi_i|$, putting $A_{ij} = \langle\phi_i|A|\phi_j\rangle$, we have
$$I(\rho, A) = \frac{1}{2}\sum_{ij} \big(\sqrt{\lambda_i} - \sqrt{\lambda_j}\big)^2 |A_{ij}|^2,$$
with the lower bound
$$I_L(\rho, A) = \frac{1}{4}\sum_{ij} (\lambda_i - \lambda_j)^2 |A_{ij}|^2.$$
This lower bound is experimentally measurable.

IV. CONCLUSIONS
In conclusion, we have formulated a strong entanglement criterion for mixed states. This criterion uses the Peres-Horodecki partial transposition applied to a suitable uncertainty relation. We show by explicit analysis that this criterion can be useful not only for pure states, but also for several generalized forms of mixed states. For example, it can correctly reveal the lower bound of the mixing probability (i.e., p > 1/3) of the Bell state in the Werner state. Thus it turns out to be stronger than any other known criteria based on, e.g., the Bell inequality, the uncertainty relation proposed in [12], or the Schrödinger-Robertson inequality. The strength of our criterion lies in the fact that it suitably takes care of the quantum share of the uncertainties (the Wigner-Yanase skew information) and correlations of the relevant observables.
The regulation of self-tolerance and the role of inflammasome molecules

Inflammasome molecules make up a family of receptors that typically function to initiate a proinflammatory response upon infection by microbial pathogens. Dysregulation of inflammasome activity has been linked to unwanted chronic inflammation, which has also been implicated in certain autoimmune diseases such as multiple sclerosis, rheumatoid arthritis, type 1 diabetes, systemic lupus erythematosus, and related animal models. Classical inflammasome activation-dependent events have intrinsic and extrinsic effects on both innate and adaptive immune effectors, as well as on resident cells in the target tissue, all of which can contribute to an autoimmune response. Recently, inflammasome molecules have also been found to regulate the differentiation and function of immune effector cells independently of classical inflammasome-activated inflammation. These alternative functions of inflammasome molecules shape the nature of the adaptive immune response, which in turn can either promote or suppress the progression of autoimmunity. In this review we summarize the roles of inflammasome molecules in regulating self-tolerance and the development of autoimmunity.

Introduction
A functioning immune system is characterized by the capacity to distinguish between self-antigens and microbial pathogens or foreign molecules. Several mechanisms are in place regulating both innate and adaptive immunity to establish persistent self-tolerance. These mechanisms maintain self-tolerance by limiting the activation and maturation of innate effectors such as monocytes, macrophages, and dendritic cells (DC), while regulating self-specific T and B cells via intrinsic and extrinsic events. Immunoregulation is a dominant mechanism by which self-tolerance is established and maintained. Multiple subsets of self-specific T cells, including forkhead box P3 (FoxP3)-expressing regulatory CD4+ T cells (Foxp3+ Treg), as well as regulatory B cells, mediate immunoregulation via 1) secretion of anti-inflammatory cytokines (e.g., TGF-β1, IL-10) and modulatory factors, among other mechanisms.

Inflammasome-mediated inflammation: an overview
Inflammasome-driven inflammation in the context of innate immunity generally entails the production of proinflammatory cytokines such as IL-1β and IL-18, as well as the induction of programmed cell death. The typical inflammasome complex consists of three components, namely: 1) a sensor molecule such as a nucleotide oligomerization domain-like receptor (NLR), an absent in melanoma 2-like receptor (ALR), or pyrin; 2) the adaptor molecule apoptosis-associated speck-like protein (ASC), which contains a caspase activation and recruitment domain (CARD); and 3) pro-caspase-1 (Figure 1) (4). The assembled inflammasome provides a platform for the cleavage of pro-caspase-1 (4). Once activated via an autolytic processing event, caspase-1 mediates maturation of the pro-IL-1β and pro-IL-18 precursors, as well as initiating pyroptosis (4). Pyroptosis, a lytic form of programmed cell death, is induced through caspase-1-mediated cleavage of gasdermin D (GSDMD), which removes the autoinhibitory C-terminal portion of the protein (10). Cleaved GSDMD also forms pores in the cell membrane, which facilitate the secretion of mature IL-1β and IL-18 (11). Cleavage of GSDMD and induction of pyroptosis are also achieved by a noncanonical pathway in which murine caspase-11 or human caspase-4/5 is activated by cytosolic lipopolysaccharide (LPS), a gram-negative bacterial endotoxin (11,12).
In addition to pyroptosis, certain inflammasome molecules, such as NLR family pyrin domain containing 3 (NLRP3) and absent in melanoma 2 (AIM2), have been associated with PANoptosis-driven cell death in response to microbial infection and changes in cellular homeostasis (13). PANoptosis is regulated by the PANoptosome, a multimeric complex consisting in part of effector molecules involved in the pyroptotic (caspase-1), apoptotic (caspase-8), and necroptotic (receptor-interacting protein kinase 1 (RIPK1) and receptor-interacting protein kinase 3 (RIPK3)) cell death pathways (14). The composition of the PANoptosome varies with the nature of the stimulatory response, and complexes consisting of the ASC adaptor and the NLRP3 or AIM2 sensor molecules have been identified (15). Inflammasome activation is achieved in response to a broad range of stimuli derived from microbial infection, tissue damage, and/or dysregulation of metabolic events (Figure 1). The process of inflammasome activation typically entails two sets of signaling events that prime (signal 1) and activate (signal 2) the inflammasome (11). This multiple-step pathway ensures robust regulation of inflammasome activity. Signal one, induced by PRRs (e.g., toll-like receptors (TLRs)), primes inflammasome assembly via activation of NF-κB, upregulation of pro-IL-1β and pro-IL-18 expression, and induction of post-translational events that favor the formation of an inflammasome complex (11,12). Signal two is specific for a given sensor molecule and induces inflammasome activation (12). Binding of an agonist to the leucine-rich repeat (LRR) portion of the sensor protein leads to oligomerization via homotypic pyrin domain (PYD) interactions with the ASC adaptor molecule. ASC is important for linking the sensor protein with caspase-1 via CARD interactions (11,12). Events driving caspase-1 activation, IL-1β and IL-18 maturation, and induction of pyroptosis and/or PANoptosis then follow (11,12). To date, studies of the role of inflammasomes in autoimmunity have largely focused on NLRP3 and AIM2, but other inflammasome molecules, such as NLRP1 and NLR family CARD domain-containing protein 4 (NLRC4), have also been implicated in autoimmunity (16,17). The respective inflammasomes are defined by their sensor protein. NLRP3 has been the most extensively studied inflammasome, both in general and in autoimmunity (18). NLRP3 agonists are structurally and chemically diverse: such agonists include 1) PAMPs expressed by bacteria, viruses, and fungi, and 2) DAMPs, including cholesterol, extracellular ATP, microbial pore-forming toxins, and particulate matter such as uric acid crystals (19). Consequently, it is believed that these agonists are sensed indirectly by NLRP3. Instead, agonist-induced K⁺ and Cl⁻ effluxes, Ca²⁺ fluxes, lysosomal damage, and mitochondrial damage and/or dysfunction coupled with the release of reactive oxygen species (ROS) are directly sensed by NLRP3 (20). For instance, noncanonically induced activation of GSDMD results in K⁺ efflux, which activates NLRP3 and leads to caspase-1-mediated IL-1β and IL-18 production via the classical pathway (21-23). Gain-of-function variants in the NLRP3 gene resulting in aberrant NLRP3 inflammasome activation cause a family of diseases referred to as cryopyrin-associated periodic syndromes (CAPS), which are marked by recurring systemic inflammation (20). NLRP3 activation has also been linked to diseases of the central nervous system (CNS) such as Alzheimer's disease (AD) (24,25).
In AD, the accumulation and subsequent uptake of amyloid-β by microglia residing in the brain results in lysosomal destabilization and NLRP3 activation (24). Production of IL-1β also has neurotoxic effects on microglia and astrocytes (25).

The process of NLRP1 activation is distinct from that of other inflammasomes (26). Here, motif-dependent ubiquitination followed by degradation of the N-terminal subunit by the proteasome is required for activation of NLRP1 (27, 28). Various bacterial toxins and viral proteases have been reported to activate NLRP1 in mice and humans (29-33). However, since mice encode several NLRP1 orthologues with sequences that differ from the single human NLRP1 gene, the specific PAMPs and DAMPs triggering NLRP1 activation are variable and not fully defined among the species (34-37).

The NLRC4 inflammasome is also distinct from other inflammasomes in that its sensor protein does not itself function as an agonist receptor. Instead, the NLRC4 protein associates with NLR family apoptosis inhibitory proteins (NAIPs), which act as cytosolic innate immune receptors and bind bacterial flagellin and type III secretion system (T3SS) components (38, 39). Gain-of-function variants in NLRC4 lead to periodic fever syndromes marked by increased systemic IL-18 (40).

AIM2 is responsive to cytosolic double-stranded DNA (dsDNA) from bacteria and DNA viruses. Notably, AIM2 binds both endogenous and microbe-derived dsDNA independent of nucleic acid sequence (41). Expression of AIM2 is upregulated by type I interferon (IFN), and the AIM2 inflammasome is key in host defense against bacterial pathogens such as Francisella tularensis and Listeria monocytogenes, and against viral pathogens such as vaccinia virus and cytomegalovirus (42). In addition, the AIM2 inflammasome promotes caspase-1-driven death of intestinal epithelial cells and hematopoietic bone marrow cells upon recognition of dsDNA breaks due to ionizing radiation or chemotherapeutic drugs (43).

FIGURE 1 | Inflammasome assembly and activation. Canonical activation of the inflammasome pathway begins with a primary signal, such as PAMPs, endogenous DAMPs, or dsDNA, that is recognized by pattern recognition receptors (PRRs), such as Toll-like receptors (TLRs). PRR activation induces NF-κB and subsequent expression of NLRP3, pro-IL-1β, and pro-IL-18, as well as post-translational events. Formation of the inflammasome complex occurs when the sensor protein, such as NLRP3, binds to ASC, driving caspase activation and inflammasome assembly. Caspase enzymes cleave pro-IL-1β and pro-IL-18 as well as the C-terminus from gasdermin D, allowing the gasdermin D N-terminal domain to form the pores necessary for pyroptosis. IL-1β and IL-18, as well as cellular contents, are released to establish a proinflammatory response. In autoimmune disease, inflammasome activation can also occur in a noncanonical manner, including via agonist-induced ion flux and lysosomal and mitochondrial reactive oxygen species (ROS). The figure was prepared using Biorender software licensed to the UNC Lineberger Comprehensive Cancer Center.

The roles of IL-1β and IL-18 in inflammation

Inflammasome-generated IL-1β and IL-18 enhance both innate and adaptive immunity against microbial pathogens. However, dysregulated production of these two cytokines by inflammasomes is also linked to chronic autoimmune diseases. T cell responses are regulated both indirectly and directly by IL-1β.
For instance, IL-1β enhances the stimulatory capacity of DC by driving maturation and upregulation of the co-stimulatory molecules needed for efficient T cell activation and expansion (47). Increased IL-12 secretion by IL-1β-stimulated DC favors differentiation of antigen-stimulated T cells towards a type 1 phenotype, marked by IFNγ production by CD4+ Th1 and CD8+ Tc1 cells (48). IL-1β also has direct effects on CD4+ and CD8+ T cells, influencing expansion and subset differentiation depending on the extracellular milieu (49). In mice, IL-1β synergizes with IL-6, IL-21, and IL-23 to induce the differentiation of CD4+ T cells into IL-17-secreting Th17 cells (49). In humans, IL-1β has an even more potent role in driving Th17 differentiation. Both Th1 and Th17 cells play key roles in several autoimmune diseases. Furthermore, IL-1β can suppress the function and/or reduce the stability of Foxp3+ Treg (50, 51). Dysregulation of the Foxp3+ Treg pool, leading to skewed differentiation and pathogenic function of autoreactive effector T cells (Teff), is associated with a number of autoimmune diseases (52-56). CD8+ T cell expansion and differentiation are also regulated by IL-1β (57). IL-1β has regulatory effects on the B cell compartment by enhancing B cell proliferation and antibody production (45). In addition, IL-1β increases proliferation and secretion of IL-4 and IL-21 by CD4+ T follicular helper (Tfh) cells (58). Tfh cells play a critical role in regulating antibody production by B cells and have also been implicated in the production of autoantibodies during autoimmunity (59).

IL-18 is expressed by a variety of cells such as Kupffer cells, macrophages, and DC, and by non-hematopoietic cells including intestinal epithelial cells, keratinocytes, and endothelial cells (60). Locally, IL-18 stimulates myeloid and endothelial cells to upregulate nitric oxide (NO) synthesis and the expression of cell adhesion molecules and chemokines, thereby recruiting and activating additional immune effectors at the site (60). In addition, IL-18 has potent regulatory effects on T cells and natural killer (NK) cells (60). IL-18, together with IL-12, drives the differentiation of Th1 cells and induces IFNγ production by CD8+ T cells and NK cells (60, 61). Furthermore, IL-18 stimulation upregulates 1) perforin- and Fas ligand (FasL)-dependent cytotoxicity in CD8+ T cells and NK cells, and 2) IL-17 secretion by γδ T cells (62). Not only is IL-18 linked to autoimmune diseases such as type 1 diabetes (T1D) and systemic lupus erythematosus (SLE), it has also been shown to play a key role in maintenance of the intestinal epithelial barrier and regulation of gut microbiota composition (63, 64). Dysbiosis of the gut microbiota has been suggested as a risk factor for the development of autoimmunity (65, 66).

Classical inflammasome activation-dependent events in autoimmunity

In view of these highly potent proinflammatory effects, it is not surprising that classical inflammasome activation is linked to a host of autoimmune diseases. Inflammasome activation is detected in innate and adaptive immune effectors, and thereby has indirect and direct effects that shape and maintain the proinflammatory response locally and/or systemically in autoimmunity. In addition, inflammasome activation in the non-immune cell types that make up a given organ can initiate and/or exacerbate an autoimmune response. Finally, evidence indicates that inflammasome activation can have a protective role and contribute to the maintenance of self-tolerance.
In the following, we describe the different roles classical inflammasome activation plays in common tissue-specific and systemic autoimmune diseases (Table 1).

Multiple sclerosis and inflammasome-mediated neuroinflammation

MS is a demyelinating autoimmune disease marked by chronic inflammation of the CNS, leading to variable neurological symptoms and heterogeneous clinical outcomes (143, 144). MS susceptibility and disease progression are influenced by both genetic and environmental factors (145). Although ill-defined, the autoimmune response in MS is believed to be initiated in the periphery, involving stimulation of CD4+ and CD8+ T cells specific for myelin proteins (146, 147). Differentiation of the encephalitogenic CD4+ T cell pool is skewed towards the Th1 and Th17 subsets. This pool, coupled with CD8+ T cells and B cells, migrates across the CNS microvascular endothelium and into the brain and spinal cord (148, 149). The CNS infiltrate includes peripheral monocytes/macrophages and DC that further amplify the autoimmune response. Upon activation, microglia, which are tissue-resident macrophages, as well as resident astrocytes also contribute to inflammation (144, 150) by producing: 1) proinflammatory cytokines such as IL-1β, which has neurotoxic and immunomodulatory effects in the CNS, and 2) chemokines that promote recruitment of immune effector cells (151, 152).

Studies of MS patients and of rodent experimental autoimmune encephalomyelitis (EAE), a model of MS, demonstrate that inflammasomes such as NLRP3 are associated with various aspects of the autoimmune process (153-155) (Figure 2). mRNA expression of NLRP3 and IL1B is detected in MS lesions, and increased levels of IL-1β and IL-18 are found in blood and cerebrospinal fluid (CSF) (150, 156). Furthermore, the P2X7 purinergic receptor (P2X7R), a ligand-gated ion channel regulated by extracellular ATP that activates the NLRP3 inflammasome (157), is elevated in the spinal cord of MS patients. Indeed, increased extracellular levels of ATP and uric acid are found in the CSF and serum of MS patients (158, 159). ATP is normally abundant in the extracellular space of the CNS, where it functions as an excitatory neurotransmitter. Interestingly, various drugs used clinically to treat MS, such as recombinant IFNβ, glatiramer acetate, and natalizumab, suppress NLRP3 mRNA expression and decrease IL-1β in the blood and CSF of MS patients (160-162). In the brain lesions of MS patients, NLRP9 protein is also upregulated in microglia but not astrocytes, suggesting a role for NLRP9 in modulating the encephalitogenic response (76).

In EAE, NLRP3 and ASC expression are needed to adequately activate T cells and upregulate their expression of osteopontin (OPN) and the chemokine receptors CCR2 and CXCR6 for efficient migration into the CNS (83). In addition, lack of NLRP3 and ASC expression limits the ability of DC and macrophages to upregulate the matching ligands/receptors for OPN (α4β1 integrin), CCR2 (CCL7/CCL8), and CXCR6 (CXCL16) (83), resulting in aberrant migration of antigen-presenting cells (APC) into the CNS. These findings support a role for APC-expressed NLRP3 in mediating chemotactic recruitment of immune effectors to the CNS. Peripheral APC also regulate the progression of EAE via inflammasome-mediated pyroptosis. EAE is attenuated in mice lacking GSDMD expression in peripheral myeloid cells (85). On the other hand, selective deletion of GSDMD in microglia has no effect on EAE, indicating that pyroptosis of CNS-resident APC may have only a limited role.
The T cell stimulatory capacity of GSDMD-/- APC is reduced, reflected by diminished numbers and effector function of Th1 and Th17 cells in the CNS. Notably, selectively blocking GSDMD-mediated pyroptosis with the inhibitor disulfiram also attenuates EAE, demonstrating a direct role for pyroptosis (85). It is believed that pyroptosis of APC heightens local inflammation to promote the efficient T cell activation and subset differentiation needed to generate a robust encephalitogenic T cell pool.

In addition to APC, inflammasome activity intrinsic to T cells impacts EAE progression (Figure 2). Selective ASC-deficiency in T cells attenuates EAE, marked by reduced infiltration of CD4+ T cells, B cells, and neutrophils (86). ASC-/- T cells are readily activated and undergo normal in vitro and in vivo differentiation into Th1, Th2, Th17, and Foxp3+ Treg subsets. However, ASC-deficiency affects the properties of Th17 but not Th1 cells. ASC-/- Th17 cells exhibit reduced survival and pathogenicity, reflected by decreased secretion of IL-17A, IFNγ, and TNFα, as well as IL-1β. Here, IL-1β plays a key role in an autocrine manner by enhancing the survival and effector function of Th17 cells residing in the CNS. Interestingly, cleavage of pro-IL-1β in Th17 cells is mediated via a noncanonical pathway involving caspase-8 activation. In this scenario, increased extracellular ATP levels, due to release by stressed and dying cells, drive activation of the NLRP3-ASC-caspase-8 complex, establishing a feed-forward loop promoting Th17 cell-mediated pathogenicity.

FIGURE 2 | The role of inflammasomes in multiple sclerosis (MS) and experimental autoimmune encephalomyelitis (EAE). The autoimmune response in MS is believed to begin in the periphery. Activation of the NLRP3 and NLRC4 inflammasome pathways in antigen-presenting cells (APC) enhances stimulation and differentiation of pathogenic CD4+ Th1/Th17 and CD8+ Tc1 subsets. On the other hand, NLRC3 activation in dendritic cells (DC) is protective against disease by inhibiting DC maturation. Secretion of IL-1β and IL-18 increases T cell expression of osteopontin (OPN), CCR2 (binding CCL7/8), and CXCR6 (binding CXCL16) to promote infiltration of the central nervous system (CNS). Upon activation and differentiation, CD4+ and CD8+ T cells and B cells migrate to the CNS. In the CNS, peripheral DC, macrophages (MP), and monocytes (MO) further amplify inflammation. CNS-resident cells such as microglia and astrocytes also promote inflammation. Lysophosphatidylcholine (LPC) activates NLRP3 and NLRC4, causing secretion of IL-1β and chemokines, leading to further inflammation and demyelination. NLRP9 expression is increased in microglia. NLRX1 and NLRP12 serve to downregulate neuroinflammation and provide protection against disease, as indicated by the red arrows. Reduction of NLRX1 and NLRP12 can lead to exacerbated disease states. Purinergic receptor (P2X7R). The figure was prepared using Biorender software licensed to the UNC Lineberger Comprehensive Cancer Center.

In addition to NLRP3, the activity of other inflammasome molecules in non-immune CNS-resident cell types has been found to promote neuroinflammation. Both NLRP3 and NLRC4 regulate the activity of microglia and astrocytes in the cuprizone model of inflammation-induced demyelination (77). Both cell types are known mediators of neuroinflammation through secretion of proinflammatory cytokines and chemokines.
Cuprizone-induced pathology is prevented in NLRP3- and NLRC4-deficient mice, in which microglia and astrocytes lack IL-1β production and exhibit reduced expression of G2A, the receptor for lysophosphatidylcholine (LPC) (Figure 2). LPC, known for its proinflammatory properties, is rapidly metabolized under homeostasis but accumulates in the CNS under pathological conditions (77). Following cuprizone treatment, LPC levels are increased, and LPC, functioning as a DAMP, activates NLRP3 and NLRC4 expressed by microglia and astrocytes (77). In MS patients, expression of G2A and NLRC4 is increased, suggesting a role in the MS autoimmune response (77).

Interestingly, inflammasomes have also been shown to play a protective role in EAE. For instance, deficiency of NLRC3 exacerbates EAE (84). Lack of NLRC3 results in DC producing increased amounts of proinflammatory cytokines such as IL-12, IL-6, and IL-23, which in turn enhance differentiation of encephalitogenic Th1 and Th17 cells (84). NLRC3 negatively regulates DC maturation by inhibiting activation of the p38 signaling pathway (84). The ligand(s) regulating NLRC3 activity in DC is currently undefined (84). Also serving a protective function is NLR family member X1 (NLRX1), a more recently characterized NLR that is ubiquitously expressed and located in the mitochondria (78, 90). NLRX1 inhibits proinflammatory pathways, including type I IFN and TLR-mediated NF-κB signaling events, and may play a role in regulating mitochondrial oxidative damage (78). Mice deficient in NLRX1 have increased T cell infiltration of the CNS and consequently develop more severe EAE (79). Their microglia exhibit a hyperactivated phenotype characterized by elevated expression of MHC class II molecules and production of IL-6 and chemokines, which in turn aid T cell recruitment and expansion (79). Accordingly, NLRX1 function is predicted to attenuate the proinflammatory properties of microglia. On the other hand, NLRX1-deficiency has no intrinsic effect on the pool of encephalitogenic T cells (79). NLRX1 may also play a protective role in astrocytes; NLRX1-/- astrocytes release excess glutamate in a Ca2+-dependent manner and contain reduced ATP levels compared to wild-type astrocytes, suggesting that NLRX1 promotes mitochondrial ATP production (90). Furthermore, ROS levels in NLRX1-deficient astrocytes are increased compared to wild-type astrocytes, which may explain their reduced glutamate uptake (90). Recent evidence suggests that NLRX1 inhibits microglial activation in the early stages of EAE, which prevents activation of neurotoxic astrocytes (78).

NLRP12 has also been shown to regulate the progression and nature of CNS inflammation in EAE (87, 88, 153). NLRP12 mediates classical inflammasome-driven inflammation in innate effector cells in response to certain microbes (164, 165), but also serves as a negative regulator of the NF-κB signaling pathway (80, 87, 88, 166, 167). In mice deficient in NLRP12, a more rapid and severe EAE develops (81). This exacerbated disease is characterized by increased mRNA levels encoding IL-1β and other proinflammatory molecules in the CNS, as well as activated microglia producing heightened levels of inducible NO synthase (iNOS), NO, TNFα, and IL-6 (81). A second study reported that EAE induction in NLRP12-/- mice results in neuroinflammation that promotes ataxia and poor balance, rather than the ascending paralysis that normally develops in wild-type mice (87). Furthermore, NLRP12-deficiency has intrinsic effects on T cells.
In the absence of NLRP12 expression, T cells exhibit increased proliferation and secretion of IFNγ, IL-17, and IL-4, in part due to hyperactivation of NF-κB (87). Therefore, NLRP12 negatively regulates various aspects of innate cell activation, as well as CD4+ T cell expansion and effector function, via blocking NF-κB signaling (88).

Rheumatoid arthritis and inflammasome-mediated joint inflammation

RA is a chronic autoimmune disease characterized by inflammation of the joints, leading to synovial tissue proliferation, cartilage erosion, and joint destruction (168-170). Pathology is in part driven by Th1 and Th17 CD4+ T cells and B cells, as well as innate effectors such as monocytes, DC, and neutrophils that traffic into the synovium (171-173). Joint-resident cells such as fibroblast-like synoviocytes (FLS) also promote local inflammation (174). Normally, FLS play a key role in maintaining joint homeostasis via production of the extracellular matrix and matrix metalloproteinases (MMPs) (175). The autoimmune response of RA also involves high levels of serum complement and the production of autoantibodies that target the Fc region of IgG (i.e., rheumatoid factor), cartilage components, nuclear proteins, and proteins post-translationally modified by citrullination (176, 177). Key proinflammatory cytokines driving RA include IL-1β and IL-18, as well as IL-6 and TNFα (178). In addition to having immunomodulatory effects, IL-1β mediates cartilage erosion and prevents chondrocyte matrix formation (179). Furthermore, the severity of RA correlates with elevated serum IL-18 (180, 181). Moreover, during the early stages of RA, FLS proliferate and differentiate into distinct subsets of activated synovial fibroblasts that produce inflammatory cytokines, matrix-degrading enzymes, and proangiogenic factors, leading to the release of inflammatory mediators, bone destruction, and angiogenesis (182-184). FLS also promote T cell survival and Tfh and Th17 cell differentiation, and can function as antigen presenters to autoreactive T cells (185-193).

The etiology of RA is ill-defined, but genetic and a host of environmental factors are known to influence disease susceptibility and progression. Evidence also suggests that inflammasomes likely have an important role in RA pathogenesis (Figure 3). In RA patients, NLRP3 and NLRP3-inflammasome-related proteins are upregulated in a cell-specific manner among innate effectors. For instance, expression of NLRP3, ASC, and caspase-1, as well as IL-1β secretion, is generally increased in monocytes, macrophages, and DC from RA patients (99-102) (Figure 3). CD4+ T cells from RA patients also exhibit increased NLRP3 expression, which correlates with elevated serum IL-17A concentrations and disease activity (109) (Figure 3). Notably, differentiation of Th17 cells is inhibited by NLRP3 knockdown (109), suggesting that NLRP3 regulates the proinflammatory activity of both innate and adaptive effectors in RA. Interestingly, NLRP3 activation in monocytes is mediated via multiple mechanisms in RA patients. C1q binding to pentraxin 3, a key regulator of complement activity that is increased on the surface of CD14+ monocytes from RA patients, leads to NLRP3 activation, enhanced IL-1β and IL-6 secretion, and GSDMD-induced pyroptosis (178).
In addition, due to elevated extracellular Ca2+ in the joint and concomitant heightened activity of calcium-sensitive receptors, macropinocytosis of calciprotein particles (CPPs) by local monocytes is elevated (194). After uptake, CPPs disrupt lysosome integrity, resulting in enhanced NLRP3 activation and IL-1β secretion (194). Whereas NLRP3 and related inflammasome proteins are typically elevated in various innate and adaptive immune effectors, neutrophils from RA patients exhibit reduced NLRP3, ASC, and pro-caspase-1 expression (108). Here, NLRP3 mRNA levels in neutrophils negatively correlate with disease severity (108). This suggests that NLRP3 may serve a protective role in the context of neutrophil function via an ill-defined mechanism (108).

Various inflammasome molecules, in addition to NLRP3, have been found to be involved in RA (Figure 3). NLRC4 activity is increased in DC residing in the synovial membrane of RA patients (105). These DC secrete elevated IL-1β, have increased expression of CD64, an IgG Fc receptor, and display an enhanced capacity to stimulate Th1 and Th17 subset differentiation (105). This capacity is due to a novel mechanism of upregulation of NLRC4 expression and activity: dsDNA-IgG complexes bind to CD64 and are internalized, and the combination of CD64 signaling and intracellular sensing of the dsDNA increases NLRC4 activity (105). AIM2 expression is increased in the synovial tissue of RA patients, and knockdown of AIM2 mRNA inhibits the in vitro proliferation of FLS derived from RA patients (111). On the other hand, NLRP6 levels are reduced in FLS from patients with RA versus osteoarthritis (112). Furthermore, ectopic expression of NLRP6 in RA patient-derived FLS blocks the production of inflammatory cytokines such as IL-1β, IL-6, and TNFα, as well as MMPs, via inhibition of the NF-κB pathway. The latter indicates that NLRP6 serves a protective role in RA (112), consistent with NLRP6 having a negative regulatory function in colitis (195).

FIGURE 3 | Events in dysregulated inflammasome activation in rheumatoid arthritis (RA). NLRP3 and NLRC4 activity is increased in monocytes (MO) and DC by Fc-receptor (FcR) binding of DNA-IgG immune complexes and by complement component 1q (C1q) binding to pentraxin 3 (PTX3). Uptake of elevated levels of calciprotein particles (CPPs) in the joint by resident DC also leads to NLRP3 activation. The resulting pyroptosis and secretion of proinflammatory cytokines promote RA progression by favoring Th1 and Th17 differentiation and the development of autoantibody- and RA factor-producing plasma cells. NLRP3 activation is also increased in Th17 cells. Aberrant lysosomal processing of endocytosed dsDNA can lead to AIM2 activation in joint-resident macrophages (MP). Neutrophils exhibit reduced expression of inflammasome molecules, which correlates with increased disease severity. NLRP3, NLRC5, and AIM2 are associated with proinflammatory properties of fibroblast-like synoviocytes (FLS), while NLRP6 and NLRP12 serve protective roles, indicated by the red arrows. NLRP6 limits FLS cytokine production, and NLRP12 negatively regulates Th17 subset differentiation. Reduced expression of NLRP6 and NLRP12 leads to pathology. Matrix metalloproteinases (MMP). The figure was prepared using Biorender software licensed to the UNC Lineberger Comprehensive Cancer Center.
Animal studies further support the notion that the role of inflammasomes in RA is complex, and that, in a cell type-dependent manner, inflammasome molecules can have distinct effects on immune cells and effector molecules depending on the RA model (103, 196) (Figure 3). Mice deficient in ASC are resistant to collagen-induced arthritis (CIA), in part due to a reduced T cell stimulatory capacity of ASC-/- DC (103). However, CIA develops in both NLRP3-/- and Caspase1-/- mice, suggesting that ASC has caspase-1-independent effects in DC (103). On the other hand, NLRP3 and caspase-1 play a key role in the spontaneous polyarthritis that develops in mice in which the RA susceptibility gene A20/Tnfaip3 is selectively ablated in myeloid cells (A20myel-KO mice) (104). Here, macrophages lacking A20 have increased constitutive and LPS-induced expression of NLRP3 and pro-IL-1β. The latter is indicative of the established role of A20 as an inhibitor of NF-κB activation (197), which is needed for NLRP3 and pro-IL-1β transcription following inflammasome priming. Furthermore, activation of NLRP3 in A20-deficient macrophages results in enhanced caspase-1 activation, IL-1β secretion, and pyroptosis. Notably, pathology in A20myel-KO mice is blocked by ablation of NLRP3, caspase-1, or the IL-1 receptor (IL-1R), demonstrating a direct role for classical NLRP3 inflammasome activation in this spontaneous autoimmune model of cartilage destruction (104). NLRP3 is also associated with the proinflammatory properties of FLS. NLRP3 expression is increased in FLS isolated from mice with adjuvant-induced arthritis (AA) (113), and knockdown of Nlrp3 mRNA expression in FLS reduces disease severity in a monosodium urate-induced model of gout arthritis in rats (114).

AIM2 has also been shown to have a key role in joint inflammation. Mice deficient in expression of the lysosomal endonuclease DNase II and the type I IFN receptor (IFNαR) develop polyarthritis marked by production of autoantibodies and of macrophage-secreted proinflammatory cytokines such as IL-1β, IL-6, and TNFα (106). Lack of lysosomal DNase II results in aberrant processing of dsDNA in lysosomal compartments and translocation of undigested DNA into the cytoplasm of macrophages (106, 107). AIM2-deficiency limits joint inflammation, marked by reduced caspase-1 activity, IL-1β and IL-18 expression, and macrophage infiltration (106, 107). Notably, however, autoantibody production is unaffected by AIM2-ablation, indicating a tissue-specific role for AIM2. Furthermore, AIM2-ablation has no effect on arthritis induced by transfer of arthritogenic serum from K/BxN mice (107). In this passive model, arthritis is induced by the deposition of immune complexes within the joint, leading to complement fixation and ensuing pathology (106, 107). Therefore, AIM2 regulates inflammation when cytosolic DNA is the key driving event.

A contribution of NLRC5 to joint inflammation has also been reported (115). NLRC5 expression is elevated in the synovium and FLS in rat AA (115), and knockdown of Nlrc5 mRNA blocks FLS proliferation and production of TNFα and IL-6, due to suppressed NF-κB activation (115). Similar to NLRP6, NLRP12 has been shown to negatively regulate joint inflammation (110). The severity of antigen-induced arthritis in NLRP12-/- mice is increased, marked by elevated levels of joint-infiltrating Th17 cells (110). Notably, in vitro Th17 cell differentiation is enhanced in NLRP12-/- CD4+ T cells, marked by elevated IL-6-induced activation of signal transducer and activator of transcription 3 (STAT3) (110).
Type 1 diabetes and inflammasome-mediated pancreatic islet inflammation

T1D is characterized by chronic inflammation of the pancreatic islets (insulitis) that results in the dysfunction and/or destruction of the insulin-producing β cells (198-200). Despite life-long insulin therapy, T1D patients typically develop a variety of complications, including retinopathy, neuropathy, and nephropathy, related to hyperglycemia and inflammation. The autoimmune response involves islet infiltration by CD4+ and CD8+ T cells, B cells, macrophages, and DC. β cell-specific CD4+ and CD8+ T cells are generally believed to be the key drivers of pathology (198-200). Diabetogenic CD4+ and CD8+ T cells typically exhibit a type 1 effector phenotype, although Th17 cells are also implicated in the disease process (199). In addition to serving as APC, islet-infiltrating macrophages and DC mediate β cell destruction through secretion of proinflammatory mediators and cytokines such as IL-1β, IFNγ, and TNFα that have direct β cell-cytotoxic effects (199). The initiation and progression of T1D are influenced by genetic and poorly defined environmental factors (201-204). The latter include viral infections and dysbiosis of the gut microbiota, events that can be impacted by inflammasome activity (16, 201, 205).

Studies using murine models of T1D show that NLRP3 regulates the diabetogenic response (Figure 4). In non-obese diabetic (NOD) mice, which spontaneously develop β cell autoimmunity and overt diabetes, NLRP3 deficiency results in a reduced incidence of diabetes (123). This attenuated diabetes is due in part to NLRP3-/- APC having a decreased capacity to promote Th1 cell differentiation; Th17 cell differentiation, however, is unaffected. Importantly, NLRP3-/- β cells exhibit decreased production of IL-1β and of chemokines such as CCL5 and CXCL10 (123). The latter limits migration into the islets by immune effectors, including diabetogenic T cells (123) (Figure 4). Interestingly, limited IL-1β production leads to reduced activation of interferon regulatory factor 1 (IRF1), which is needed for β cell expression of CCL5 and CXCL10. Diminished IL-1β secretion by β cells is also expected to aid β cell viability and function, as well as to enhance the maintenance and function of protective Foxp3+ Treg in the islets. Notably, upregulation of NLRP3 and IL-1β is also detected in human β cells upon LPS and ATP stimulation in vitro (206).

A regulatory function for NLRP3 in the disease process is also seen in the multiple low-dose streptozotocin (MLD-STZ)-induced model of T1D. Here, progression of β cell autoimmunity is reduced in MLD-STZ-treated C57BL/6 mice lacking NLRP3 expression (207). In this model, NLRP3 is activated in macrophages residing in the draining pancreatic lymph nodes (PLN) by mitochondrial DNA (mtDNA) released following STZ treatment. NLRP3 activation results in increased caspase-1 activity and IL-1β production, which drives expansion of pathogenic Th1 and Th17 cells and the induction of diabetes. The PLN are a key site for priming of diabetogenic CD4+ and CD8+ T cells. Interestingly, plasma levels of mtDNA are increased in T1D versus healthy subjects, which is expected to contribute to systemic inflammasome activation (208). Indeed, circulatory mtDNA induced by MLD-STZ in mice activates NLRP3 in endothelial cells via Ca2+ influx and mitochondrial ROS generation, which leads to endothelial dysfunction and vascular inflammation (208).
Vascular inflammation is a key driver of the complications that develop in T1D. Together, these studies indicate that NLRP3 promotes pathological events driving β cell autoimmunity. Nevertheless, the mechanisms by which NLRP3 mediates these effects are likely to be complex and cell dependent. For instance, disease progression in NOD mice is unaffected by caspase-1 deficiency (209, 210) and only minimally affected by IL-1R ablation (211).

In contrast to NLRP3-deficient C57BL/6 mice (207), MLD-STZ enhances diabetes development in AIM2-deficient C57BL/6 mice (124). Interestingly, disease exacerbation in the AIM2-/- mice is mediated by enhanced intestinal permeability, alterations in the gut microbiota, and increased bacterial translocation to the PLN, where CD4+ Th1 and CD8+ Tc1 cells are readily expanded (Figure 4). Importantly, AIM2 deficiency results in decreased maturation of IL-18, which is needed to maintain intestinal barrier function (124). On the other hand, reduced NLRP3 expression in colonic NOD mouse tissue is associated with decreased microbiota dysbiosis, enhanced intestinal barrier function, and diabetes prevention (125, 126). It is well established that dysbiosis within the gut microbiota significantly affects disease progression in NOD mice, and clinical findings suggest similar effects may also occur in T1D subjects (16, 205, 212-215). These studies provide evidence that inflammasomes may play a key role in regulating T1D progression in part via effects on the gut microbiota and intestinal barrier function (16). Gut microbiota composition and/or intestinal barrier permeability are also influenced by other inflammasome molecules such as NLRP6 (216), NLRC4 (217), NLRX1 (218, 219), and NLRP12 (220, 221). Further investigation is necessary to elucidate the connection between inflammasomes, gut microbiota homeostasis, and autoimmunity.

FIGURE 4 | The roles of inflammasomes in type 1 diabetes (T1D). Under homeostasis, healthy intestinal epithelial cells maintain intestinal barrier function and regulate permeability to prevent passage of harmful elements such as microorganisms and toxins. AIM2 serves a protective function (indicated by the red arrow). Dysregulation of inflammasome function, such as AIM2 deficiency, leads to reduced production of IL-18, which is necessary for maintaining intestinal barrier function. Consequently, inflammasome dysregulation enhances intestinal permeability and triggers inflammation. On the other hand, NLRP3 is linked to dysbiosis within the gut microbiota, which can exacerbate T1D progression. In the pancreatic lymph node (PLN), upregulation of NLRP3 in APC promotes IL-1β production that ultimately drives differentiation of diabetogenic CD8+ Tc1, CD4+ Th1, and Th17 cells. In the pancreatic islets, NLRP3 hyperactivity in β cells induces release of cytokines and chemokines. These conditions, combined with other immunomodulatory factors, establish a positive feedback loop that further perpetuates pancreatic inflammation. Macrophage (MP), dendritic cell (DC), antigen-presenting cell (APC). The figure was prepared using Biorender software licensed to the UNC Lineberger Comprehensive Cancer Center.

Systemic lupus erythematosus and the role of inflammasome activity in widespread inflammation

SLE is a chronic autoimmune disease with diverse clinical manifestations. Development of SLE is influenced by genetic, hormonal, and environmental factors that lead to dysregulation of the mechanisms of innate and adaptive self-tolerance. The autoimmune response is characterized by the generation of antinuclear autoantibodies, tissue deposition of immune complexes, increased type I IFN production, and inflammation in multiple organs, with the kidneys being the most commonly affected (222).
CD4+ T cells such as Tfh cells are key drivers of the autoantibody response, and Th17 cells, found infiltrating the kidneys and skin, contribute to tissue damage (223). Innate effectors such as monocytes, macrophages, DC, and neutrophils also play roles in mediating the systemic inflammation and tissue damage in SLE (223).

The etiology of SLE is not fully understood, but evidence from humans and animal models indicates that inflammasomes contribute to disease progression (Figure 5). Inflammasome components are typically upregulated in kidney biopsies from SLE patients, and NLRP3, IL-1β, and IL-18 are increased in SLE patient macrophages, peripheral blood mononuclear cells (PBMC), and serum (133, 134). Critical mediators of pathology in SLE are the anti-nuclear autoantibodies (ANA) that target endogenous dsDNA and ribonucleoproteins (RNP) (224). Immune complexes (IC) of dsDNA upregulate NLRP3 and caspase-1 activity, leading to increased IL-1β production by monocytes and macrophages of SLE patients (225). Here, the IC activates TLR9, a DNA sensor, which subsequently upregulates NF-κB and primes inflammasome assembly by increasing NLRP3 and pro-IL-1β (225). Upon IC binding, TLR9 also promotes mitochondrial ROS production and K+ efflux, with subsequent NLRP3 activation. Notably, SLE monocytes stimulated with dsDNA-antibody complexes readily promote differentiation of Th17 cells, which is also seen in vivo in lupus-prone NZBW/F1 mice injected with anti-dsDNA autoantibodies from SLE patients (224). Similarly, autoantibody complexes of U1-small nuclear RNP (U1-snRNP) activate the NLRP3 inflammasome in human monocytes via signaling through the RNA sensors TLR7 and TLR8 (226). Antibody complexes of endogenous snRNP also induce production of macrophage migration inhibitory factor (MIF) in human monocytes, which enhances NLRP3 activation and IL-1β production (227).

Interestingly, the context of nucleic acid uptake appears to determine the identity of the inflammasome molecule being engaged. For instance, unbound dsDNA, normally found at high levels in SLE patient serum, is taken up by monocytes via macropinocytosis, which activates AIM2 as well as NLRP3 (135). Uptake of free nucleic acid, however, occurs via macropinocytosis independent of antibody and the Fc receptor (FcR) (135). On the other hand, internalization of dsDNA/snRNP autoantibody complexes via FcR may favor activation of NLRP3, and possibly NLRC4, as seen in RA (105).

FIGURE 5 | The roles of inflammasomes in systemic lupus erythematosus (SLE). Upregulation of the NLRP3 inflammasome in macrophages (MP) and DC by DNA or RNA immune complexes (IC) or small nuclear ribonucleoprotein (snRNP) leads to release of proinflammatory cytokines such as IL-1β, IL-18, and IFNα. Dysregulation of inflammasomes in APC also promotes Th17 and Tfh cell differentiation. Tfh cells and IFNα facilitate B cell maturation and autoantibody production. However, production of IFNα is regulated by AIM2-mediated pyroptosis (indicated by red arrows). Deposition of IC, infiltrating Th17 cells, and production of autoantibodies and cytokines all contribute to tissue damage. IL-18 activates NETosis in neutrophils, and NETs in turn upregulate NLRP3 and IL-1β and IL-18 secretion in macrophages via cathelicidin antimicrobial peptide (LL37)-driven K+ efflux mediated by the P2X7 receptor (P2X7R). These cytokines further induce pyroptosis and release of cellular and nuclear contents, leading to the production of antinuclear autoantibodies and further amplifying systemic inflammation. Inflammasome activation in cells of target tissues, such as kidney-resident podocytes, also contributes to disease pathology via production of IL-1β. The figure was prepared using Biorender software licensed to the UNC Lineberger Comprehensive Cancer Center.
In each of the aforementioned scenarios, IL-1β and IL-18 are secreted to maintain and amplify inflammation. Furthermore, induced pyroptotic death and release of cellular and nuclear contents lead to the production of ANA that further fuel the autoimmune response (228, 229).

Aberrant clearance of neutrophil extracellular traps (NETs) is also linked with the pathogenesis of SLE and inflammasome activation (Figure 5). NETs are networks of chromatin fibers containing antimicrobial peptides such as LL37 and enzymes that participate in host defense (230). NETs are primarily released by activated neutrophils that undergo NETosis, a programmed cell-death mechanism (231). Notably, NETs activate the NLRP3 inflammasome and IL-1β and IL-18 secretion in macrophages from SLE patients via LL37-driven K+ efflux mediated by the P2X7R (136). Furthermore, IL-18 activates neutrophils and promotes NETosis, suggesting that a feed-forward loop exists that helps to maintain inflammation (136).

Monocytes from SLE patients exhibit enhanced NLRP3 activation and IL-1β secretion relative to healthy controls (138, 139). This hyperactivity is attributed to chronic IFNα stimulation of monocytes. Elevated type I IFN-induced gene expression "signatures" correlate with the presence of autoantibodies, nephritis, and disease activity (232). Prolonged IFNα exposure in vivo induces NLRP3 hyperactivity via an IRF1 signaling pathway (138). However, consistent with other studies (233), short-term IFNα exposure of monocytes blocks NLRP3 activation (138). The latter, importantly, indicates that chronic versus acute type I IFN stimulation can have distinct effects on inflammasome activation.

The study of different murine lupus models provides further evidence that inflammasomes regulate SLE pathogenesis. Mice deficient in caspase-1 expression exhibit, relative to wild-type mice, reduced autoantibody production, a limited IFN signature, and diminished NETosis and kidney pathology induced by pristane administration (136). In addition, blocking the P2X7R significantly impacts the development of spontaneous lupus in MRL/lpr mice. Here, limiting NLRP3 activation reduces the production of anti-dsDNA autoantibodies and IL-1β, and decreases Th17 cell expansion and the severity of nephritis (234). Furthermore, various drugs that inhibit NLRP3 inflammasome activation attenuate disease severity in different lupus mouse models (137, 235-237). On the other hand, nephritis induced by pristane treatment is exacerbated in mice in which myeloid cells selectively express a transgene encoding a hyperactive Nlrp3 R258W mutant protein (238).

In addition to immune effector cell types, inflammasome activation in target tissues also contributes to disease pathology (Figure 5). Endothelial cells, the basement membrane, and podocytes form the glomerular filtration barrier, which is essential for maintaining kidney function (239).
In NZM2328 mice, which spontaneously develop lupus nephritis, severe proteinuria correlates with increased activation of NLRP3 and caspase-1, as well as IL-1β secretion, by glomerular podocytes (141, 142). NZM2328 mice treated with MCC950, an NLRP3 inhibitor, exhibit reduced NLRP3 activation in podocytes and attenuated renal tissue damage and proteinuria (141, 142).

Depending on the lupus model, inflammasome molecules have also been shown to play a protective role. In C57BL/6 lpr/lpr mice, which develop mild lupus, deficiency of NLRP3 or ASC exacerbates pathology, marked by an increase in activated macrophages and DC, production of proinflammatory cytokines, and T and B cell proliferation, but with no effect on autoantibody production (240). This enhanced pathology is marked by reduced SMAD2/3 phosphorylation during TGF-β receptor signaling, consistent with the role of TGF-β1 as a key regulator of immune homeostasis (240). In this scenario, NLRP3 or ASC likely serve functions independent of classical inflammasome activation (see below), consistent with the observation that IL-1R- or IL-18-deficiency in C57BL/6 lpr/lpr mice does not exacerbate pathology. Studies have indicated that AIM2 may also serve a protective role in lupus by negatively regulating type I IFN production. In B6.Nba2 mice, which spontaneously develop lupus nephritis, p202, another IFN-inducible p200 family member, is upregulated (241, 242). Notably, p202 blocks AIM2 inflammasome assembly and pyroptosis-mediated cell death. Consequently, p202 or other dsDNA sensors such as cyclic GMP-AMP synthase (cGAS) bind cytosolic DNA to promote prolonged type I IFN production that would normally be terminated by AIM2-induced pyroptosis (243).

Regulation of pyroptosis has also been found to impact other aspects of the autoimmune response driving lupus nephritis. Pristane-induced lupus nephritis is exacerbated in mice lacking T cell expression of the P2X7R (140). Here, the P2X7R normally mediates GSDMD-driven pyroptosis of Tfh cells, which limits differentiation of autoantibody-secreting plasma cells in the germinal centers. Together, these findings demonstrate the complexity of the roles inflammasomes play in both promoting and suppressing the autoimmune response of SLE.

Alternative roles of inflammasome molecule-mediated regulation

Classical inflammasome activation and induction of a proinflammatory response contribute to autoimmunity in a variety of ways, as described above. It is becoming apparent, however, that inflammasome molecules also serve regulatory functions independent of typical inflammation-driving events (Table 2). Caspase-1, for instance, in addition to being involved in the maturation of IL-1β and IL-18, has been shown to modulate protein secretion, cell death, and lysosomal function in many cell types such as neurons, hepatocytes, epithelial cells, and cardiomyocytes (244-251). These alternative roles of inflammasome molecules have been linked to the regulation of immune effector cells such as T and B cells, as well as non-immune tissue-resident cell types. Accordingly, some of these events have been reported to be directly involved in the progression of autoimmunity, and/or can be expected to contribute to an autoimmune response.

ASC: a regulatory function in CD4+ T cells

ASC has a T cell-intrinsic effect regulating the production of the IL-1β needed to maintain CNS-resident Th17 cells in EAE.
Recent findings indicate that ASC also regulates properties of murine CD4+ T cells independent of classical inflammasome activation and IL-1β maturation (252). ASC is constitutively expressed in naïve CD4+ T cells, and after anti-CD3/CD28 antibody-stimulated TCR signaling, ASC is upregulated, but no IL-1β or IL-18 secretion is detected (252). Naïve CD4+ T cells lacking ASC expression differentiate normally in vitro into Th1, Th2, Th17, Th9, and Foxp3+ Treg subsets under polarizing conditions (252). Notably, recombination-activating gene (Rag)-deficient mice develop more severe colitis after transfer of ASC-/- CD4+ T cells than after transfer of wild-type, NLRP3-/-, or Caspase1-/- CD4+ T cells (252). This increased pathogenic function of ASC-/- CD4+ T cells is marked by enhanced TCR signaling in vitro, elevated lymphopenic proliferation in vivo, and an increased metabolic state with higher glycolytic flux and increased glucose transporter 1 (Glut-1) surface expression (252). These findings suggest a negative regulatory function for ASC in CD4+ T cell TCR signaling, proliferation, and metabolism. The mechanism(s) by which ASC regulates these events still needs to be defined. Nevertheless, one could envision a scenario in which dysregulation of this alternative ASC function enhances the pathogenic potential of autoreactive CD4+ (and possibly CD8+) T cells to aid autoimmune disease progression.

NLRP3 and Th2 cell differentiation

NLRP3 has also been found to have T cell-intrinsic effects independent of classical inflammasome activation. Specifically, NLRP3 positively regulates Th2 subset differentiation (253). Upon TCR stimulation by anti-CD3/CD28 antibody, expression of NLRP3 is increased in both Th1 and Th2 cells, due in part to IL-2-induced STAT5 activity (253). However, NLRP3-deficiency reduces Th2 but not Th1 cell differentiation (253). Importantly, ASC or caspase-1 deficiency has no effect on NLRP3-mediated Th2 lineage differentiation, ruling out a role for classical NLRP3 inflammasome activity (253). Findings indicate that NLRP3 functions as a transcription factor regulating Il4 transcription (253). Here, NLRP3 forms a complex with the transcription factor IRF4 that enhances the binding of IRF4 to the Il4 promoter; NLRP3 alone, however, is insufficient to mediate Il4 transcription (253). Notably, induction of asthma, which is Th2 cell-dependent, is reduced in NLRP3-deficient mice (253). Furthermore, NLRP3-/- mice also more readily reject implanted B16F10 tumor cells due to an elevated Th1 cell response (253). In wild-type recipients, increased differentiation of Th2 cells permits the progression of B16F10 tumors (253). In the case of autoimmunity, aberrant Th2 cell differentiation has been associated with skewed development of the Th1 and Th17 cells that drive the pathology in MS, RA, T1D, and SLE (254). Accordingly, aberrant expression and/or function of NLRP3 that is independent of inflammasome activity may favor the development of pathogenic autoreactive Th1 and Th17 effectors. For instance, reduced IL-2 signaling and STAT5 activation, which is associated with T1D (255), would be expected to limit Nlrp3 transcription and Th2 cell differentiation.

Roles of AIM2 independent of inflammasome activation

Studies demonstrate that AIM2 displays a number of alternative functions, independent of inflammasome activation, in various cell types that affect the progression of autoimmunity. Recently, AIM2 was shown to have a T cell-intrinsic role in regulating peripheral Foxp3+ Treg (256).
AIM2 is highly expressed in murine and human Foxp3+ Treg, and AIM2 expression is upregulated by TGF-β1 stimulation (256). TGF-β1 is required for peripheral differentiation of CD4+ T cells into Foxp3+ Treg (257). In AIM2-deficient C57BL/6 mice, MOG35-55-induced EAE is exacerbated, characterized by increased Th1 and Th17 cell infiltration and a reduced frequency of Foxp3+ Treg in the CNS (256). A diminished local pool of Foxp3+ Treg favors the expansion and effector function of encephalitogenic Teff (256, 257). Foxp3+ Treg are unaffected by ASC-deficiency, indicating that this role of AIM2 is inflammasome-independent (256). Notably, AIM2 in Foxp3+ Treg attenuates AKT activation and the downstream mTOR and MYC signaling that lead to glycolysis (256). Normal Foxp3+ Treg differentiation and lineage maintenance are achieved under metabolic conditions favoring oxidative phosphorylation of lipids (256). Glycolysis, on the other hand, negatively impacts Foxp3+ Treg stability and function (256). AIM2 serves to maintain Foxp3+ Treg under proinflammatory conditions by forming a complex with the adaptor protein receptor for activated C kinase 1 (RACK1) and protein phosphatase 2A (PP2A) that blocks AKT phosphorylation (256).

AIM2 has also been reported to regulate Tfh cells independent of inflammasome activation (258). Tfh cells from the blood and skin lesions of SLE patients express elevated levels of AIM2. In mice in which AIM2 is conditionally ablated in T cells, the severity of pristane-induced lupus nephritis is reduced relative to control animals, corresponding with a decreased Tfh pool. Notably, AIM2 regulates Tfh differentiation through an interaction with the transcription factor c-MAF, which in turn is needed to promote Il21 gene transcription (258). Interestingly, Aim2 mRNA expression is upregulated by IL-21 stimulation, suggesting that AIM2 participates in a feed-forward loop promoting Tfh differentiation and function.

In addition to T cells, AIM2 has been shown to have a B cell-intrinsic effect independent of inflammasome activation. SLE patients exhibit elevated AIM2 expression in germinal center (GC) B cells, memory B cells, and antibody-secreting plasma cells prepared from the tonsils, blood, and/or skin lesions (259). Furthermore, pristane-induced lupus nephritis is attenuated in mice in which AIM2 is conditionally ablated in B cells. The limited disease is reflected by diminished numbers of GC B cells and plasma cells. These findings suggest that AIM2 is an upstream regulator of the Blimp1-BCL6 transcriptional axis, which drives GC B cell and plasma cell differentiation (259).

AIM2 also serves a protective role in EAE by limiting the inflammatory properties of brain-resident microglia (151). Whereas ASC-deficiency in mice attenuates EAE, as discussed above, AIM2-deficiency exacerbates EAE severity. Furthermore, selective ablation of AIM2 in microglia is sufficient to enhance the encephalitogenic response. In microglia, AIM2 negatively regulates a proinflammatory phenotype by suppressing the activity of DNA-dependent protein kinase (DNA-PK) and downstream activation of AKT3. Inhibition of AKT3 reduces phosphorylation of the key transcription factor IRF3, which blocks the production of chemokines and type I IFN, and the expression of antigen presentation molecules, by microglia (151). AIM2 similarly inhibits DNA-PK and AKT activation in colon epithelial cells to protect mice from colitis and colon cancer (260).
Interestingly, a recent study provides evidence that AIM2 has an alternative role in an EAE model independent of robust classical inflammasome activation (152). Using a novel reporter mouse to track inflammasome activation in situ, AIM2 activation was found to be prevalent in astrocytes but not in CNS-infiltrating monocytes and macrophages. Despite elevated AIM2 expression, no marked Il1b expression or cell death is detected in astrocytes (152). The role of AIM2 in this scenario needs to be further defined.

Targeting inflammasome molecules to prevent/treat autoimmunity

Inflammasome molecules offer an appealing target for immunotherapy and the treatment of autoimmunity. Several inhibitors targeting inflammasome-related molecules have been identified, developed, and tested in preclinical studies or clinical trials (Table 3). MCC950, a small-molecule inhibitor, specifically binds to the Walker B motif of the NACHT domain of NLRP3 to block function (287). The therapeutic efficacy and safety of MCC950 and its analogs (Inzomelid and Somalix) have been assessed in several preclinical studies with promising results (288-294) (TrialTrovelID-368867; TrialTrovelID-360928). Nevertheless, a phase II clinical trial for RA showed that MCC950 has safety concerns related to elevated serum liver enzyme levels. Other NLRP3 inhibitors are currently being evaluated in animal studies of EAE (264, 266, 272, 279).

Caspase-1 is another key target for therapeutic intervention in autoimmunity. VX-765 (belnacasan), a caspase-1 inhibitor, blocks GSDMD-mediated pyroptosis, reduces inflammasome-associated proteins in the CNS, and attenuates EAE in mice (275). However, testing of the related caspase-1 inhibitor VX-740 was discontinued in an RA clinical trial due to the liver toxicity observed in animal models (295). Inhibition of GSDMD by necrosulfonamide reduces neuroinflammation and necroptosis in a collagenase VII-induced mouse model of intracerebral hemorrhage (277). In addition, dimethyl fumarate, an immunosuppressive drug used for the treatment of relapsing-remitting MS and plaque psoriasis, promotes succination of GSDMD, which in turn disrupts the interaction with caspase-1 and blocks pyroptosis (278). Disulfiram, a drug used for the treatment of alcohol addiction, blocks pore formation by targeting Cys191/Cys192 in GSDMD (261).

IL-1β, which is associated with the pathogenesis of several autoimmune diseases, has also been therapeutically targeted. Two FDA-approved biologics that block IL-1 activity have been clinically tested. Anakinra is a recombinant human IL-1R antagonist mainly applied for the treatment of RA. Due to a short half-life and a low response rate compared to other available treatments, the usage of anakinra is limited, and its efficacy is selective. For example, anakinra shows no efficacy for the treatment of T1D or Sjogren's disease. Canakinumab is an anti-IL-1β neutralizing monoclonal antibody that has shown efficacy in RA and systemic juvenile idiopathic arthritis but no benefit for recent-onset T1D patients (285, 296). IL-18 blockers have also been established but have not been applied to the treatment of autoimmunity.

Summary/conclusions

The evidence at hand establishes roles for classical inflammasome-activated inflammation, and for alternative pathways regulated by inflammasome molecules, in autoimmunity. Inflammasome molecules have been implicated in human MS, RA, T1D, and SLE, and shown in corresponding disease models to override and/or maintain self-tolerance (Table 1).
Intrinsic and extrinsic effects on APC and other innate effectors, as well as on T and B cells, enable inflammasome molecules to establish the nature and specificity of an autoimmune response. Similarly, inflammasome molecules have intrinsic and extrinsic effects that alter the cellular integrity of tissues, independent of immune effectors. In a given tissue, inflammasome activity can impact inflammation by initiating and/or further driving a local autoimmune response, which in turn may be influenced by the induction of pyroptosis versus PANoptosis cell death pathways. Alternatively, dysregulated inflammasome function can have broader effects. This is seen with aberrant inflammasome activity reducing intestinal barrier function, which results in shifts in microbiota composition that can impact the production of systemically released metabolites and favor proinflammatory versus immunoregulatory events (214).

The key events that drive inflammasome molecule activity in autoimmunity are poorly understood. What is apparent, however, is that multiple pathways and mechanisms exist to induce activation, and single-nucleotide polymorphisms (SNPs) in genes encoding inflammasome molecules have been linked to autoimmune diseases (120, 153, 297-303). However, whether the disease-linked SNPs override the normally tight regulation of gene expression and/or function of inflammasome molecules needs to be ascertained. Inflammasome activity is also a consequence of collateral damage induced by autoimmunity. Autoimmune-mediated cytotoxicity leads to the release of DAMPs, and a proinflammatory milieu induces local cellular stress, affecting metabolism and mitochondrial function, for instance, that drives inflammasome molecule activity.

The relative contribution of inflammasome molecule activity to autoimmunity is poorly understood. Questions of whether inflammasome molecules mediate initiating events and/or modulate the progression and severity of autoimmunity need to be addressed. Environmental insults, in which inflammasome activation is likely to occur, have typically been proposed to initiate autoimmunity (Table 1). Alternatively, sterile inflammation driven by metabolically stressed cells may stimulate dysregulated inflammasome activity and initiate autoimmunity. Pancreatic β cells, for example, are susceptible to metabolic stress due to high levels of insulin expression and secretion (304, 305), which may lead to NLRP3 activation. Reports showing that inflammasome expression and activity are upregulated in MS, T1D, RA, and SLE patients suggest a role in at least supporting disease progression. Feed-forward loops in which inflammasome molecule activity is self-sustaining, as well as promoting autoimmune reactivity and vice versa, have been described. The use of murine models of spontaneous autoimmunity coupled with cell-specific and inducible expression systems will be helpful in further defining the contribution of a given inflammasome molecule to the disease process.

Of keen interest moving forward is defining the regulation of inflammasome molecule-mediated events that are independent of classical activation of inflammation (Table 2). A hint of the complexity involved is exemplified by AIM2. As discussed above, AIM2 regulates peripheral Foxp3+ Treg differentiation by blocking AKT signaling through an AIM2-RACK1-PP2A complex (256). On the other hand, AIM2 suppresses colon carcinoma by binding to and inhibiting DNA-PK and the downstream AKT signaling events needed for colon epithelial cell transformation (260).
Therefore, depending on the cell type, AIM2 inhibits PI3K-AKT signaling, but via distinct complexes and mechanisms. Furthermore, AIM2 is reported to interact with the c-MAF transcription factor to promote Tfh differentiation (258). The nature of the signaling events that stimulate alternative inflammasome molecule activity, and the outcome of that activity in immune and nonimmune cell types, are important issues that require continued investigation. To date, the therapeutic benefit of inhibiting inflammasome activation has mostly been demonstrated in animal disease models, with limited success in the clinic (Table 3). The general lack of efficacy may reflect the timing and relative contribution of an inflammasome molecule in a given autoimmune disease. For instance, inflammasome activation may play a prominent role early in a disease process. Therefore, targeting inflammasome activity once an autoimmune response is well established, which is typical in the clinic, may have only a minimal effect. There is the important concern that inhibiting a given inflammasome molecule, particularly long-term, may compromise immunity against pathogens. Therefore, both efficacy and safety may be enhanced by combining an inflammasome-based approach with other types of immunotherapies. For example, limiting ongoing inflammation by blocking inflammasome activity may enhance the efficacy of antigen-based immunotherapy and the induction of protective Treg. The etiology of MS, T1D, RA and SLE is highly complex and ill-defined. Establishing the roles of inflammasome activity in autoimmunity will aid our understanding of the mechanisms that drive these disease processes, as well as provide the impetus for the development of novel strategies of immunotherapy for disease prevention and treatment.

Author contributions

QK, AG, VM, XJ and RT contributed to the preparation of the review article. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by National Institutes of Health grants R01DK100256, R01AI139475, R01AI141631, and R21AI115752 (RT).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
No Representation Without Compensation: The Effect of Interest Groups on Legislators' Policy Area Focus

Interest groups seek to influence parliamentarians' actions by establishing exchange relationships. We scrutinize the role of exchange by investigating how interest groups impact parliamentarians' use of individual parliamentary instruments such as questions, motions, and bills. We utilize a new longitudinal dataset (2000-2015) with 524 Swiss parliamentarians, their 6342 formal ties to interest groups (i.e., board seats), and a variety of 23,750 parliamentary instruments across 15 policy areas. This enables us to show that interest groups systematically relate to parliamentarians' use of parliamentary instruments in the respective policy areas in which they operate, even when parliamentarians' time-invariant (fixed effects) and time-variant personal affinities (occupation, committee membership) to the policy area are accounted for. Personal affinities heavily moderate interest groups' impact on their board members' parliamentary activities. Moreover, once formal ties end, the impact of interest groups also wanes. These findings have implications for our understanding of how interest groups foster representation in legislatures.

In Western democracies, privileged access to members of parliament (MPs) is heavily sought after by interest groups. By means of financial contributions, information provision, electoral support, and other resources, interest groups establish relationships with the goal of influencing MPs to act in their interest. A large and influential theoretical strand considers exchange as the foundation of these relationships (Berkhout 2013; Bouwen 2004; Hall and Deardorff 2006; Hopkins, Klüver, and Pickup 2019). It builds on the notion that collaboration between MPs and interest groups is mutually beneficial, and accordingly sustained if both actors profit (or perceive themselves to profit). Yet the exchange mechanism is generally assumed rather than tested. In this article, we therefore scrutinize the role of exchange in MPs' behavior by addressing three of its key aspects: the impact of ongoing exchange relationships, MPs' susceptibility to exchange, and the reciprocity of exchange. More specifically, we address these aspects by studying how MPs' seats on the boards of interest groups (i.e., formal ties) affect MPs' focus on specific policy areas when using the parliamentary instruments at their individual disposal in the Swiss case. This focus on parliamentary instruments entails several advantages when studying exchange. Parliamentary instruments such as parliamentary questions, interpellations, postulates, motions, or bills constitute an important signaling tool towards a variety of political actors. Members of parliament use them constantly. They reveal the position and the intensity (effort and time investment) that MPs exhibit in specific policy areas (Hall 1996). MPs' available attention is scarce, so they can focus on only a limited number of issues at a time (see Jones and Baumgartner 2005). When forming relationships with MPs, interest groups may influence legislators' focus of attention. Whether this occurs, and if so, to what extent, remains empirically unclear. There are several reasons for this. Damgaard (1980, 223) already argued that legislators' areas of interest and expertise are most strongly indicated by their occupations.
Indeed, occupational background drives MPs' political behavior in several fields, including self-selection into committees (Hamm, Hedlund, and Post 2011; McElroy 2006; Shepsle 1978, 79; Yordanova 2009), bill introduction, and the focus of their legislative agendas (Burden 2007). At the same time, committee membership relates to MPs' legislative agenda (Schiller 1995). Given these two known empirical patterns, empirically disentangling the effect of interest group influence from these personal causes of policy area focus constitutes a key challenge for understanding the role of exchange. For example, a legislator with medical training or a seat on the health committee is more likely to submit bills on health policy. Simultaneously, she is arguably also more likely to work with health lobby organizations. When measured cross-sectionally, we thus observe a spurious correlation between the policy area of the representative's interest group ties and her policy focus in parliament. The observed statistical correlation could be due to the representative's occupational background or committee membership, and hence her affinity to the policy area, rather than actual influence on the part of her interest groups. We propose an original approach to overcome this confounding effect of personal affinity, and make use of the unique part-time nature of the Swiss national parliament. It is the only national parliament where, similar to some U.S. state legislatures, parliamentarians retain their regular occupations while serving part time in the legislature (Bütikofer 2013). We leverage this context as an opportunity to provide a test for interest group influence by using not only time-invariant MP fixed effects but additionally the policy area of MPs' occupation and committee membership as time-variant control variables. Our choice of a longitudinal design with separate analyses for 15 policy areas arguably constitutes a fitting scenario of statistical control in a real-life observational context. As it would be difficult to devise an experimental design to scrutinize this dynamic, we consider the presented setup to be as close as reasonably possible to empirically establishing whether interest groups and parliamentarians (implicitly) exchange benefits for influence. 1 The presented design offers two additional empirical contributions. First, our choice of adopting a per-policy area approach enables us to put the effect size of interest group influence into perspective by comparing it to current occupation and committee membership as alternative key drivers of MPs' focus on a certain policy area, and to inspect their moderating effect on legislative subsidies and other benefits provided by interest groups. At the same time, focusing on individual policy areas allows us to control for MPs' traditional principals, voters and the party (Carey 2007; Hix 2002), by using constituency and party fixed effects per policy area. Second, the longitudinal nature of our design accounts for changes in relations between MPs and interest groups over time. Our key independent variable is MPs' memberships on interest group boards. We focus on these strong ties because, in comparison to measures such as campaign contributions, such formal ties are institutionalized relationships between parliamentarians and interest groups.
This has not only theoretical merit but also analytical advantages: board memberships come with start and end dates, and therefore clearly delimit the period during which benefits can be given in return for favors. The number of formal ties MPs have at any given time serves as a proxy for interest groups' supply of benefits. Empirically, this approach is made possible by data from the Swiss context. Board seats allow us to measure the role of interest groups at concrete moments in time. Swiss parliamentarians are required to provide yearly overviews of their board positions in interest groups. We estimate the effects of formal ties separately across 15 policy areas, testing, for example, whether a farmer with many formal ties to agricultural interest groups submits more instruments than a farmer with fewer or no such ties. We then synthesize these policy area-specific results into an overall image of how MPs' interest group board positions affect the policy focus of the parliamentary instruments they submit. To anticipate, the results show that more formal ties to interest groups lead to more activity of parliamentarians in the respective policy areas. Evidence for the existence of a strategic exchange relationship between MPs and interest groups, where board positions are "traded" for influence, can be found even when MPs' personal affinity (fixed effect, occupation, and committee membership), their political parties, and their voters are all explicitly accounted for. Nonetheless, the effect of a single interest group is smaller than that of MPs' personal interests (occupation, committee membership). In fact, interest groups' impact is cut almost in half when personal interests in a policy area exist, thus highlighting that not all legislators are equally susceptible to exchange benefits. The exchange conceptualization is further corroborated by results indicating that, with said controls, it is current rather than former formal ties that affect MPs' use of parliamentary instruments: we find support for the reciprocity of formal ties by showing that the effect of ties dissipates over time once MPs and interest groups end them. Our findings are robust when using alternative model estimation approaches and are not driven by the distinct purposes of specific parliamentary instruments (i.e., gathering information/government oversight, or introducing new policy).

The Impact of Interest Groups on Parliamentary Instruments

Traditionally, the impact of interest groups has been conceptualized as particular policy outcomes that come about or are prevented as reactions to interest group activities (see Leech 2010), with voting behavior being one of the primary examples (e.g., Fellowes and Wolf 2004; Baumgartner et al. 2009). While such key decisions are undeniably important, interest groups arguably also target an entirely different layer of parliamentary behavior, namely, parliamentary instruments. It matters to interest groups how engaged MPs are in policy issues important to their organizations. Parliamentarians need to decide on a continuous basis what kind of policy areas to focus on and specialize in. Parliamentary instruments reflect this and are used to signal concerns and take positions in policy areas (Martin 2011), traditionally towards voters (Bräuninger, Brunner, and Däubler 2012; Highton and Rocca 2005).
Parliamentary instruments are particularly suited for this purpose because they are closed instruments: policy statements that entail detailed policy questions, requests, and drafts that require specific action from a certain addressee (see Keh 2015, 1088). They also allow legislators to signal positions that are not related to the current legislative agenda. Interest groups have been shown to reward position taking and signaling behavior, bill sponsorship being a case in point, with campaign contributions (Rocca and Gordon 2010). In this article, we argue there is an additional aspect to this relationship: interest groups not only reward this behavior post hoc but also induce it. Existing research on how interest groups affect parliamentary instruments is scarce. Evidence so far has been limited to the study of a few select interest groups at the aggregate legislature level (Hertel-Fernandez 2019). At the individual level, interest groups' influence on the use of parliamentary instruments has been addressed (Martin 2011; Pedersen 2013), but could not, with the notable exception of the effect of campaign contributions on committee amendments (McKay 2020), be systematically corroborated. There are two possible reasons for this. On the one hand, the absence of effects may arguably relate to the difficulties that interest groups face when attempting to gain the attention of legislators in the first place (see Fraussen, Graham, and Halpin 2018; Jones and Baumgartner 2005). On the other hand, the apparent absence of impact might also be the consequence of a relatively rough cross-sectional measurement that includes observations in which interest groups struggle to assert influence, thus increasing the chance of finding contradictory patterns of interest group influence (Leech 2010). Hence, we adopt a fine-grained longitudinal approach in which we focus on long-term MP-interest group relationships as a strategy to identify sizeable shifts in legislators' focus towards interest groups' areas of concern.

The Exchange Relationship Between Interest Groups and Parliamentarians

One of the key theoretical ideas in the literature for understanding interest groups' influence on parliamentarians is to conceptualize it as an ongoing exchange relationship. To exert influence on MPs and their behavior, interest groups seek to establish connections with them (Fellowes and Wolf 2004; Grossman and Helpman 1996; Roscoe and Jenkins 2005; Stratmann 1998). Interest groups use their current access to parliamentarians to gather information on political processes, to foster the representation of their interests in committees and on the legislative floor, and to hold the government accountable (Fouirnaies and Hall 2018; Kalla and Broockman 2016; Varone, Bundi, and Gava 2020). In exchange, interest groups compensate parliamentarians with political benefits like information, electoral support, personal gifts, and additional, often financial, favors during this collaboration period (Berkhout 2013; Bouwen 2004; Eichenberger and Mach 2017; Hall and Wayman 1990; Lutz, Mach, and Primavesi 2018). Until recently, it was generally assumed that interest groups tend to be interested in establishing informal links to parliamentarians (e.g., Marshall 2015; Wonka and Haunss 2019). However, as a result of the increasing demand for transparency in ever more countries, 2 both in terms of connections to organized interests and financial disclosure, light has been shed on the frequent occurrence of long-term collaboration between MPs and interest groups.
Interest groups initiate formal exchange agreements by actively recruiting MPs for board positions in their organization (Huwyler 2021). Such formal ties between interest groups and parliamentarians constitute an official statement that these two actors cooperate. They entail both tangible effects, with Eichenberger and Mach (2017, 2) speaking of policy-seeking interest groups "tying the knot" with vote-seeking MPs, and less direct ones such as the mutual enhancement of prestige (Gaugler 2009), for example, formal ties as a signal to interest groups' supporters. For interest groups, such relationships with MPs entail a long period of institutionalized access to a political arena to which they usually lack direct access. As long as they support their MPs, they can ask them to use the parliamentary instruments at their disposal to their benefit, for example, to propose draft laws, suggest new measures or legislative regulations, and demand information or reports. Patterns in MPs' submissions of parliamentary instruments should reflect ongoing interest group affiliations accordingly. A formal tie to an interest group offers several valuable benefits to MPs. First, it provides non-financial compensation such as specialized knowledge and information that benefits parliamentarians' work, as they deal with issues in a wide array of policy areas (Bouwen 2004; Hall and Deardorff 2006; Klüver 2013). Specialized knowledge and information function as a legislative subsidy, which offers MPs the possibility to appear competent in often complex policy areas (Hall and Deardorff 2006). In consequence, parliamentarians are in a position to display more activity in policy areas to which these resources pertain. Second, interest groups also offer financial benefits, such as resources for election campaigns (Fellowes and Wolf 2004; Lutz, Mach, and Primavesi 2018). They may also incentivize parliamentary activities by providing side payments such as gifts or continuous financial support in the form of paid board positions, if the legal framework 3 allows this (Djankov et al. 2010). Third, interest groups can offer opportunities for post-parliamentary employment (Claessen, Bailer, and Turner-Zwinkels 2021; Lazarus, McKay, and Herbel 2016). While sitting on the boards of interest groups, MPs therefore want to demonstrate their worth, make use of additional information and resources in order to raise specific issues in parliament, and demand laws benefitting their interest groups. Accordingly, we expect parliamentarians to show an increased level of individual activity in certain policy areas as a result of ongoing formal interest group ties in those areas.

H1: The more formal ties parliamentarians have with interest groups in a certain policy area, the more parliamentary instruments they submit in this policy area.

Parliamentary activities are also affected by MPs' personal interests, values, and expertise in specific policy areas (Burden 2007, 15). Prior familiarity with a policy area reduces MPs' costs of being active in said area. It has been contended that those interests and expertise are best indicated by legislators' occupations (Damgaard 1980, 223) and by membership in the respective parliamentary committees. In the latter case, activity is affected not only by expertise but also by the institutional advantages that committees offer (Hamm, Hedlund, and Post 2011; McElroy 2006; Shepsle 1978, 79; Yordanova 2009). As a consequence, benefits provided by interest groups will exert a weaker effect on those parliamentarians.
On the one hand, MPs with expertise in a given policy area depend less on knowledge and information provided by interest groups to work effectively. On the other hand, specialized MPs' attention to the policy area is less shaped by their interest groups, given that they also derive cues for their parliamentary activities from their occupational and committee-related expertise and experiences.

H2: The effect of formal interest group ties on parliamentarians' use of parliamentary instruments in a policy area is smaller if parliamentarians demonstrate personal affinity (relevant occupation, committee membership) towards said policy area.

The conceptualization of the relationship between MPs and interest groups as an exchange implies that the influence of interest groups on parliamentarians depends on ongoing formal ties. Parliamentarians should thus remain responsive to interest groups only as long as they obtain benefits. Accordingly, we should observe that parliamentarians' current instrument use depends primarily on their current number of formal ties. An (implicit) exchange contract implies that once one side stops offering benefits, the other side will respond in kind. In consequence, if exchange is reciprocal, parliamentarians should become less responsive to interest groups once their formal relationships have ended. Importantly, we expect the effect of previous formal ties to wane gradually. While sitting on interest group boards, parliamentarians will have deepened their knowledge, made relevant contacts, and possibly increased their personal interest in the specific policy area. In this way, resources and past experiences may induce MPs to be active in certain policy areas for some time even after the end of formal collaboration with interest groups. As these effects are not permanent, though, we should no longer observe any effect of former interest group ties after some time.

H3: When formal ties to interest groups in a certain policy area have ended, their effect on parliamentarians' submission of parliamentary instruments in this policy area decreases towards zero over time.

Case Selection

We use the Swiss case for a test of the exchange mechanism because it allows us to control for time-variant personal policy area focus not only with committee membership but also with occupation. There are 72 democracies globally, including Switzerland, where national MPs can occupy at least some form of paid or unpaid job while in office (Djankov et al. 2010). However, in contrast to other national parliaments, and similar to some U.S. state legislatures, 4 Swiss MPs are part-time legislators who retain their normal occupations. Their parliament is in session 12 weeks per year, while committees also operate between sessions. As occupations may exert both a direct influence on MPs' parliamentary instrument use and a direct effect on their collaboration with interest groups, not accounting for personal motivation would bias the results towards an overestimation of the formal tie effect. Apart from MPs retaining their occupations, the Swiss system is similar to other Western democracies in many aspects: its legislature shares several characteristics with the U.S. Congress. The Swiss Federal Assembly is a bicameral legislature that consists of two equally powerful chambers. In the 200-seat Lower House, seats are allocated to cantons (states) based on resident population. In the 46-seat Upper House, each canton is represented by two seats. 5
Parties are traditionally relatively weak, similar to parties in the U.S.

Data

The data in this study are drawn from two sources: Curia Vista, the Swiss Parliament's database of parliamentary proceedings, and the Parliaments Day-By-Day (PDBD) database (Turner-Zwinkels et al. 2021). Based on these two sources, we generate a data frame with 58,455 observations, using parliamentarian-years (i.e., repeated measures of parliamentarians across time, with one observation for every year they have a seat in the Swiss Parliament) crossed with the 15 policy areas as the unit of analysis (3,897 parliamentarian-years times 15 policy areas). This represents the idea that, in principle, in every year, formal ties between parliamentarians and interest groups can form and end. The sample contains 524 unique politicians, nested in 24 political parties across five legislative periods. These data bring together four types of information: parliamentary instrument use, interest group affiliations, annual self-reported occupations, and additional biographical information, including committee membership. Information on parliamentary instrument use is taken from Curia Vista. Swiss parliamentarians can use an extensive range of parliamentary instruments (see Online Appendix B). There are neither limits to the number of submitted instruments, nor approval requirements from parliamentary party groups, nor co-sponsorship quorums. All instruments available to MPs individually are also available to party groups and committees collectively. For every instrument, Curia Vista contains a unique database entry with meta-information, including the parliament's own policy area classification. We restrict our analysis to the 23,750 instruments submitted by individual MPs from 2000 to 2015. For MPs' interest group affiliations, the parliament's official Register of Interest Ties served as the original source of the PDBD data collection. At the beginning of every calendar year, legislators have to report their interest group activities to the Parliamentary Services. This includes their seats in domestic and foreign leadership bodies, supervisory bodies, and advisory bodies of all organizations, institutions, and foundations under private and public law. These lists have been published annually since 1985 and are available online. 6 The published formal ties are self-reported. Failure to report is not sanctioned. We will thus somewhat underestimate the extent of MPs' extraparliamentary work for interest groups. The Register of Interest Ties also lists self-reported occupations of parliamentarians on an annual basis. Due to the part-time nature of the Swiss parliament, legislators typically hold regular jobs next to their parliamentary mandate, and these may change over time.

Fifteen Separate Policy Areas

To establish the hypothesized match between MPs' formal interest group ties and their use of parliamentary instruments, we classified both into 15 distinct policy areas. Assignment to more than one policy area was possible. We used the policy area boundaries developed by the Comparative Agendas Project (CAP) whenever possible. In 12 policy areas, we rely directly on the CAP's main categories. For three of the areas in our analysis (Economic and Financial Affairs, Social Affairs, Environment), we combine either two or three CAP categories (see Online Appendix C). 7 This was necessary since we rely on pre-coded data from the Swiss Parliament. Of key theoretical interest is the occurrence of policy area matches across a variety of variables.
A policy area match occurs when the values of different variables refer to the same policy area. Consider, for example, formal interest group ties in the area of transportation policy. For every MP, we count both (1) how many ties to transportation interest groups they have and (2) how many parliamentary instruments on transportation policy they submitted each year. 8 We consider a match to have occurred if formal interest group ties have the same policy area as the submitted parliamentary instruments. This operationalization strategy builds on the assumption that MPs will use parliamentary instruments in line with the preferences of interest groups on whose boards they sit. Figure 1 shows our approach graphically. Of key interest are the hypothesized positive correlations (H1) on the diagonal from top-left to bottom-right of the correlogram.

Key Measures Building on Policy Area Match

The measures that follow all rely on the idea of policy area matching. This means that each occurs 15 times in our data, once per policy area.

Number of parliamentary instruments. The dependent variable in all of our analyses is the number of all 9 instruments submitted by an MP in a certain policy area in a given year. Instruments include questions (38.6%), interpellations (26.9%), postulates (9.8%), motions (20.3%), and bills (4.4%) (see Online Appendix B). Based on the policy area codes preassigned by the Parliamentary Services, instruments were mostly 10 automatically attributed to the 15 policy areas. When the Parliamentary Services indicated multiple policy areas for a parliamentary instrument, the instrument was counted in all applicable policy areas. This served to create 15 distinct instrument counts per parliamentarian per year. Of the 6.27 instruments that Swiss MPs submitted annually on average, the most, 1.5 on average, pertain to economic and financial policy (Figure 1).

Number of formal interest group ties. The main independent variable in our analysis is the number of interest group ties of any given parliamentarian in a certain policy area in a given year. All entries from the Register of Interest Ties 11 were coded by the authors according to their policy areas. MPs sit on average on 6.15 interest group boards, resulting in 0.41 formal ties per policy area.

Number of formal ties ended at t minus x years. This relative measure indicates how many more ties MPs had in the past in a certain policy area. Its use in regression analysis is preferred over the absolute measure (the number of ties at t minus x) because the latter would strongly correlate with the current number of ties and hence cause multicollinearity issues. We first calculate decreases in the number of ties between two subsequent years by policy area. Decreases are positive integers; increases are coded as decreases of zero. We then apply lags of varying durations to this measure. This allows us to measure how many formal interest group ties ended a certain number of years ago. Hence, the variable captures the effect of former ties (for example, two ties that ended in 2009) in subsequent years, for example, in 2010 (t-1), in 2011 (t-2), etc. The construction of these match counts and of the lagged tie measure is sketched below.
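To make these constructions concrete, the following minimal sketch (ours, in Python with pandas; all column and variable names are illustrative placeholders, not the actual PDBD or Curia Vista field names) builds the per-MP, per-year, per-policy-area match counts and the number of formal ties ended at t minus x years.

```python
import pandas as pd

# Illustrative long-format data: one row per MP, year, and policy area.
ties = pd.DataFrame({
    "mp": [1, 1, 1, 1],
    "year": [2008, 2009, 2010, 2011],
    "area": ["transport"] * 4,
    "n_ties": [2, 2, 0, 0],          # two transport board seats lapse after 2009
})
instruments = pd.DataFrame({
    "mp": [1, 1, 1, 1],
    "year": [2008, 2009, 2010, 2011],
    "area": ["transport"] * 4,
    "n_instruments": [3, 4, 2, 1],
})

# Policy area match: merge ties and instruments on MP, year, and area, so each
# row pairs the tie count with the instrument count in the SAME policy area.
panel = ties.merge(instruments, on=["mp", "year", "area"])
panel = panel.sort_values(["mp", "area", "year"])

# Number of formal ties ended: year-over-year decreases in the tie count,
# with increases coded as decreases of zero. Ties listed in the January 2009
# registry but gone by January 2010 (i.e., ended in 2009) show up as a
# decrease recorded in 2010.
delta = panel.groupby(["mp", "area"])["n_ties"].diff()
panel["ties_ended"] = (-delta).clip(lower=0).fillna(0)

# Lags of varying duration: how many ties ended a given number of years ago.
for lag in (1, 2):
    panel[f"ties_ended_lag{lag}"] = (
        panel.groupby(["mp", "area"])["ties_ended"].shift(lag).fillna(0)
    )
print(panel)
```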
Occupation (0/1). Swiss parliamentarians typically hold day jobs. When occupations and policy areas match, parliamentarians should be more active in the policy area concerned. We expect, for example, that, all else equal, elected medical professionals will be more active in health policy than other MPs. We measure dichotomously whether MPs' occupation in a given year falls into a certain policy area (1) or not (0).

Committee membership (0/1). Committee membership in the policy area in which the instruments are submitted (0 = no, 1 = yes) constitutes the second way in which we control for MPs' personal interests and expertise. Committee membership is assumed to relate to personal interests and expertise, and therefore also to a higher propensity to use parliamentary instruments in the committee's policy area. At the same time, working in a committee also makes it more attractive to use instruments from that area, as it gives MPs more control over the fate of their submissions. Moreover, two control variables also follow the idea of policy area matching: we control for the money at stake in the policy area and for the salience of the policy area. Beyond policy area-specific measures, we add controls for leadership positions, the total number of parliamentary instruments, tenure, election year, and parliamentary chamber. Finally, we add fixed effects for constituency and party affiliation to remove any variance on these levels 12 (for a detailed description of all variables, see Online Appendix E).

Analytical Strategy

The key relationship under scrutiny is that between formal interest group ties and the use of parliamentary instruments. We use negative binomial models instead of Poisson regression because our dependent variable, the number of parliamentary instruments used per policy area, is an over-dispersed count variable. Likelihood ratio tests on the residuals confirm this decision. 13 Since parliamentarian-specific and year-specific effects need to be accounted for, a non-nested (crossed) random-effects structure is estimated for each policy area. This partial pooling is appropriate for the time-series cross-sectional data at hand (Gelman and Hill 2006, 289). We estimate 15 different models, one for every policy area. These 15 models can be found in Online Appendix G. However, to test our hypotheses, we need to calculate the average overall statistical relationship between formal ties (i.e., interest group board seats) in a specific policy area and the use of parliamentary instruments in this policy area. To obtain this overall general effect, we conduct a meta-analysis to synthesize the regression slopes across models using a univariate weighted least squares approach (Becker and Wu 2007, 7). This is possible because all our predictors are measured on the same scale and available across all 15 models. This setup allows us to test our hypotheses only once instead of separately for each of the 15 policy areas; a sketch of this two-stage estimation follows below. To illustrate the robustness of our key findings, we provide extensive additional tests that we report in Online Appendices I to K. We show that our results are not driven by the specific functions that parliamentary instruments serve, that is, gathering information/government oversight 14 or introducing new policy: we estimate our models separately, using either only the number of oversight/information gathering instruments or the number of policy instruments as the dependent variable, respectively. To ensure that our results hold, we additionally estimate a model that controls for MPs ever having formal ties in a certain policy area; a non-synthesized model in which the function of the aforementioned meta-analysis is fulfilled by simply stacking the 15 different policy area models together into one model; and a regular linear regression model. 15
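As a rough illustration of the two-stage approach described above, the sketch below (ours, in Python with statsmodels, using purely simulated data; none of the variable names or coefficient values are the authors') fits a negative binomial model per policy area and then pools the tie slopes by inverse-variance weighting, one simple form of the univariate weighted least squares synthesis cited above. It omits the crossed random effects and the full control set for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_area(n=400):
    # Simulated MP-year data for one policy area (illustrative only).
    ties = rng.poisson(0.4, n)
    committee = rng.integers(0, 2, n)
    mu = np.exp(-1.0 + 0.07 * ties + 0.5 * committee)
    # Overdispersed counts via a gamma-Poisson mixture (mean-1 gamma noise).
    y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))
    return pd.DataFrame({"instruments": y, "ties": ties, "committee": committee})

slopes, variances = [], []
for area in range(15):                      # one model per policy area
    df = simulate_area()
    fit = smf.negativebinomial(
        "instruments ~ ties + committee", data=df
    ).fit(disp=False)
    slopes.append(fit.params["ties"])
    variances.append(fit.bse["ties"] ** 2)

# Inverse-variance weighted synthesis of the 15 slopes.
w = 1.0 / np.array(variances)
pooled = np.sum(w * np.array(slopes)) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled slope: {pooled:.3f} (SE {pooled_se:.3f}), "
      f"rate ratio per tie: {np.exp(pooled):.3f}")
```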
Furthermore, we replicate the results of the effect of former formal ties (Figure 3) with models using parliamentarian fixed effects. The findings presented are robust to these alternative specifications. As a final remark on causality in our design, it is important to point out that the measurement of our independent variable, formal ties (in January), precedes that of the instruments (submitted during the rest of the calendar year). This strengthens a semi-causal interpretation of the key effects of interest. The inclusion of occupation, committee membership, and MP fixed effects as control variables deals with the policy area affinity of MPs as far as possible.

Results

The General Effect of Formal Ties

Table 1 addresses the first two hypotheses: H1, that, all else equal, formal interest group ties predict the submission of parliamentary instruments in the same policy area, and H2, on the moderating effect of personal affinity on formal interest group ties. Our regression analysis progresses in several steps through subsequent models with increasingly far-reaching statistical controls. Model 1 tests the non-controlled relationship. In Model 2, we add MP-level controls (e.g., tenure) and policy area-level controls (e.g., salience). In Model 3, the party and constituency fixed effects are introduced to control for their influence on MPs' parliamentary instrument use. In Model 4a, we inspect the effect of formal ties in the context of dynamic personal interests: occupation in the policy area and committee membership in the policy area. 16 In Models 4b and 4c, we inspect the moderation effect of occupation and committee membership on formal interest group ties. Finally, in Model 4d, parliamentarian fixed effects are added as a strong time-invariant control for MPs' affinity towards a policy area. 17 (To give an indication of the random part of the model, that is, variance estimates and diagnostics, we report average values across the models and their standard deviation.) Finally, for those who prefer such a presentation, the key results are also displayed graphically in Figure 2. There we show average marginal effects to compare interest groups' impact on legislators with occupations or committee seats in the relevant policy area to those without policy area affinity. In line with H1, the results of the combined models provide corroborating evidence for a positive relationship between the number of formal interest group ties within a specific policy area and the number of parliamentary instruments that a parliamentarian uses within this policy area. The synthesized estimates in Table 1 reveal that this effect is significant across all models, including Model 4d with MP fixed effects. In terms of substance, even the most conservative estimates, in Model 4d of Table 1, teach us that for every additional formal interest group tie that MPs have, there is still an increase in the rate of submitted parliamentary instruments in that policy area by a factor of 1.022 [1.002, 1.042], all else equal. When comparing Models 2 and 3, we furthermore learn that controlling for MPs' traditional principals, voters and parties, does not strongly decrease the magnitude of the interest group effect. Only once we account for personal interests and expertise does the effect of interest groups decrease: from a factor of 1.109 [1.092, 1.127] in Model 3, to 1.075 [1.058, 1.092] in Model 4a when current occupation and committee membership are added, and to a factor of 1.022 [1.002, 1.042] in Model 4d when MP fixed effects are included.
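The arithmetic behind these rate-ratio comparisons can be made explicit. The following lines (our reconstruction, using only the coefficients reported in Table 1) reproduce the "factor of 5" attenuation on the log scale and anticipate the cumulative effect of the average 6.15 board seats discussed next.

```python
import math

# Incidence-rate ratios per formal tie, as reported for Models 3, 4a, and 4d.
irr_model3, irr_model4a, irr_model4d = 1.109, 1.075, 1.022

# "Overestimate by up to a factor of 5": ratio of effects on the log scale.
print(math.log(irr_model3) / math.log(irr_model4d))   # ~4.8

# Cumulative impact of the average 6.15 board seats (Model 4a estimate):
# rate ratios multiply, so the collective effect is irr ** 6.15.
print(irr_model4a ** 6.15)   # ~1.56, close to occupation's factor of 1.493
```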
The attenuation from Model 3 to Model 4d corroborates that, without accounting for MPs' personal affinity, one would overestimate the impact of formal ties on MPs' parliamentary instrument use by up to a factor of 5. These results show for the first time that interest groups have an impact on MPs' use of parliamentary instruments across policy areas, even when personal affinity towards the policy area is explicitly accounted for. To better understand the substance of the interest group effect, it is valuable to compare it to that of occupation and committee membership. The latter two are stronger than a single formal interest group tie. Model 4a 18 estimates the effect of occupation at a factor of 1.493 [1.380, 1.606]. This effect is 6.5 times larger than that of a single formal interest group tie (1.075 [1.058, 1.092]). 19 The effect of a committee seat is estimated at a factor of 1.666 [1.591, 1.741], which is about 9 times larger. By establishing a formal tie, interest groups can thus impact the policy area focus of an MP, but only to a limited extent. However, from the parliamentarian's perspective, the collective impact of interest groups is far from negligible. Parliamentarians hold on average 6.15 interest group board seats across all policy areas. This means that, taken together, a parliamentarian's formal interest group ties will affect her pattern of parliamentary instrument use about as strongly as her occupation. Importantly, though, interest groups' influence depends on MPs' specialization in a given policy area (H2). Parliamentarians with personal affinity towards a policy area are significantly less impacted by interest group ties than their colleagues without personal interest in the area. As we learn from Model 4b, parliamentarians whose occupational background matches interest groups' policy area increase their rate of submitted parliamentary instruments in said area by only a factor of 1.064 per tie, all else equal, while those without relevant experience from their occupation increase their rate of parliamentary instrument use by a factor of 1.140. Similarly, Model 4c indicates that MPs without policy area-relevant committee seats have their rate of instrument submission increased by a factor of 1.100, those sitting on a relevant committee only by a factor of 1.059. This suggests that MPs who lack relevant expertise from their occupation or committee membership have their parliamentary activities almost twice as strongly affected by interest group board seats as those who possess such expertise. Figure 2 further emphasizes these group effects by displaying average marginal effects. The charts show that the difference in the impact of interest groups on MPs with and without relevant expertise is particularly pronounced for MPs with few formal ties in a policy area. In other words, MPs with and without personal interests and expertise in a policy area act more similarly the more formal ties they have.

Do Former Ties Have an Effect?

When hypothesizing about the impact of interest group ties, we argued their impact would wane once they had ended (H3). Figure 3, a coefficient plot with adjusted pooled standard errors, inspects this hypothesized waning effect of former ties. The figure plots the estimated effect of the number of formal ties ended at t minus x years. The analytical strategy for significance testing requires some adjustments, as H3 states a null effect. This entails the risk of biasing the analysis towards finding support for H3 because, as time progresses, our sample size decreases.
This renders not finding a significant difference easier and easier. We avoid such bias in two ways. First, so as not to "overask" the relatively small year-specific subsamples, we test H3 not on single-year estimates (e.g., current versus 1 year ago versus 2 years ago, etc.), but estimate the 2-year synthesized effect of former ties (current versus 1-2 years ago, versus 3-4 years ago, etc.). The second adjustment is to pool the standard errors: instead of using our regression models to estimate the standard errors for each year separately, we additionally assume that the variance in the subsamples for each year is equal. This allows us to estimate one standard error and confidence interval, importantly not year-dependent, around the biennially estimated means. Without this, the drops in sample size and the associated increases in the width of the confidence intervals would yield biased support for H3. Together, these two adjustments result in a fair test of H3. Following this procedure, the overall time trend in Figure 3 generally supports the idea of a decreasing impact of former exchange relationships. At the latest 2 years after MPs' interest group board membership ended, the former formal tie no longer exerts a significant effect on their parliamentary behavior. We also see an indication that, with every step into the future, the estimated effect of former formal ties is smaller than in the previous step. This suggests that MPs might still make use of interest groups' non-financial resources, or that informal MP-interest group contacts are still quite strong right after ties end. The way the effect dissipates after formal ties end quite strongly supports the proposed exchange conceptualization of parliamentarian-interest group collaborations: when the benefits stop, representation of interest groups' interests begins to wane.

Conclusion

The goal of our study was to examine the exchange mechanism by gauging interest groups' impact on parliamentarians' focus on specific policy areas. To operationalize the relationship between these two sets of actors, we relied on parliamentarians' board seats in interest groups as a measure of formal relationships. The results of our longitudinal analysis provided strong evidence for a significant and substantive effect of formal interest group ties on MPs' use of parliamentary instruments in the respective policy areas. They suggest that exchange plays a defining role. Importantly, the behavior of MPs with prior expertise (and thus lower demand) in interest groups' policy area is less strongly impacted. Moreover, at the latest 2 years after a formal tie ended, its effect disappears. In line with the reciprocal nature of exchange, MPs' behavior is primarily affected by the current number of formal ties, not by past relationships to interest groups. Our choice of the Swiss case allowed us to conduct a test in which key confounding sources of MPs' attention to specific policy areas are controlled for. We were able to show, to our knowledge for the first time, that even when controlling for time-variant sources of personal interests and expertise (occupation and committee membership), and when incorporating MP, constituency, and party fixed effects, having more formal ties with interest groups at the start of a year leads to higher levels of legislative activity in the respective policy area of the interest groups throughout the rest of that year.
This provides first evidence that interest groups are able to shift legislators' attention towards the policy areas that serve their interests and are not just subsidizing activities in areas where MPs would have been active regardless. The analysis revealed that the effect of interest groups on parliamentary instruments exists across a broad set of policy areas. Arguably, this is because MPs' use of parliamentary instruments is not conflictual among their principals. A parliamentarian's choice of how she casts a floor vote may favor some interest groups at the expense of her voters (Giger and Klüver 2016). With parliamentary instruments, however, she can submit multiple ones to meet the expectations of several actors. Increased activity in certain policy areas is unlikely to be perceived as a threat by the party or voters. This has important implications for our understanding of interest group influence. Interest groups' effect on parliamentary instruments can arguably be observed clearly because parties and voters either tolerate legislators' use of parliamentary instruments for interest groups, or because they are unaware of the systematic nature of the phenomenon. Our findings also point to a dilemma for interest groups: legislators' previous affinity with policy areas decreases their need for exchange, which in turn reduces interest group influence on parliamentary instrument use. However, previous research shows that this type of legislator, that is, the one with an occupation and committee assignment in interest groups' policy area, is more likely to sit on the boards of the respective interest groups (Gava et al. 2016; Huwyler 2021). Taken together, this suggests that the legislators whom interest groups covet the most for their boards are not necessarily the ones most receptive to their requests. This study provides a first step towards more rigorous tests of the exchange relationship. Further research might benefit from disaggregating the frequency variables (the number of formal ties and parliamentary instruments) by adopting MP-interest group dyads or even MP-interest group-instrument triads as the unit of analysis. Ideally, such an approach entails studying the impact of interest groups on policy issues, not only areas, to gain a more detailed (context-dependent) picture of the workings of exchange. However, any such approach requires additional and more fine-grained data. For example, the absence of lobbying disclosure requirements beyond formal ties limited our capacity to address variation in MPs' behavior when collaborating with different interest groups. We know neither the extent nor the combination of the benefits that specific interest groups provide to MPs. It remains subject to future research how variation in interest group benefits (pay, information, campaign support, gifts) triggers different reactions in MPs' behavior. At the same time, the generalizability of our findings could be further reinforced with alternative operationalizations of ties. It is conceivable that formal ties, while fueled by exchange, may also induce other, concomitant mechanisms that hinge on ongoing collaboration. On the one hand, board membership may lead MPs to also act out of loyalty and responsibility for the interest group (see Buchanan 1974, 533). On the other hand, board membership creates procedures and routines for frequent, close personal contact between MPs and interest groups, which is known to impact legislators' parliamentary behavior most strongly (Huwyler and Martin 2021).
Moreover, we need to study interest groups' reliance on different parliamentary instruments in more detail. There is, on the one hand, the question of what instruments these organizations request under given circumstances, and on the other, how consequential the impact of interest group-induced parliamentary instruments is. Submitting them is only the first step, and there is variation in their success rate (see Sciarini et al. 2021). The findings of this article could furthermore be confronted with the conditions of other polities. Switzerland has a relatively non-professionalized parliament comparable to some state legislatures in the U.S. Previous evidence suggests that interest group influence on parliamentary instruments relates to the professionalization of legislatures (Hertel-Fernandez 2019). In a similar vein, in other contexts, more professionalized parties may play a more pronounced role in MPs' use of parliamentary instruments. The extent to which our findings translate to more professionalized settings should be explored in order to bolster external validity. This would provide an even more nuanced understanding of the impact of interest groups compared to personal interests and other principals. In multiple ways, the findings of our study supported the conceptualization of the relationship between parliamentarians and interest groups as one of (implicit) quid pro quo. The idea that parliamentarians primarily do interest groups' bidding when they are compensated for their efforts is provocative. It means that both parties will stay in the relationship as long as they derive sufficient benefits from these long-term issue area-based alliances. The suggestion that former ties are largely without an effect on MPs' current parliamentary behavior highlights that relationships probably do not transform MPs' personal interests. As our study suggests, MPs' investment in their alliances will be relative to the resources they obtain. Parliamentarians who sit on more boards arguably obtain more resources, and therefore submit more parliamentary instruments. This has important implications for parliaments' collective attention to policy issues. In light of the widespread presence of interest groups in legislatures (Kriesi, Tresch, and Jochum 2007), interest groups as a collective arguably drive a substantive part of parliamentarians' attention, reaching an influence level similar to that of the occupational background of MPs. This renders the question of what kinds of interest groups manage to obtain access to parliamentarians (e.g., Fellowes and Wolf 2004; Grossman and Helpman 1996; Roscoe and Jenkins 2005; Stratmann 1998) very relevant, particularly in light of the finding that, collectively, interest group ties affect MPs' use of parliamentary instruments about as strongly as their professions.

8. Counting captures the notion that more interest group ties translate to more resources and more pressure for MPs to be active in interest groups' policy areas. It also entails the decision to not study ties as networks. In the Swiss case, interest groups typically do not have more than one board member in parliament (analysis available upon request). As such, we consider the risk of autocorrelation on our dependent variable through interest group-MP networks relatively minimal.

9. As our more general theory does not suggest different effects for different instruments (interest groups' demand for a specific instrument at a particular time is arguably context-dependent), we use an overall count.
Nonetheless, in Online Appendix J, we aggregate parliamentary instruments according to their function (information gathering and government oversight vs. introducing new legislation) as a robustness test.

10. Three categories required some manual coding: general law, private law, and security policy.

11. We include all the organizations listed in the Register without any distinction according to political activity (for such an approach, see Eichenberger 2020). The use of this measure constitutes a hard test of our hypotheses, as we potentially underestimate the effect of formal ties by also including organizations, such as companies, that may not or only rarely seek to influence MPs' use of parliamentary instruments. We consider companies as interest groups, as we expect legislators to act on their behalf, for example, by seeking to improve their (sector's) regulatory environment.

12. Formal ties may, for example, be related to MPs' attention to issue areas that they expect their party and constituents to deem important.

13. According to likelihood ratio tests, negative binomial models offer a significantly better fit compared to Poisson models across all 15 policy areas.

14. Interest groups arguably benefit from the signaling function of information instruments. The latter are a cheap tool to indicate to members, donors, and other actors with a stake in the organization that interest groups (or, ultimately, MPs) work on their behalf. This, in turn, enables these organizations to retain and mobilize supporters.

15. We do not use logistic regression in the main model because dichotomization of the number of instruments constitutes a loss of information. Moreover, logistic regression models would force an artificial function on the distribution and make interpretation more difficult.

16. Our design hinges both on cases where the policy areas of the formal ties do and do not match MPs' personal affinities (e.g., farming MPs who have only ties to agricultural interest groups versus those who have formal ties in completely different areas). Online Appendix F sketches this variation and shows that both types of observations occur frequently.

17. While Model 4d is arguably the most stringent test of H1, we run the risk of overfitting the underlying 15 models. Since we have 524 MPs but only 3897 observations, Model 4d goes against the common one-in-ten rule for the predictors-to-observations ratio (Hofmann 1997). For Figures 2 and 3, the underlying models therefore do not use parliamentarian fixed effects.

18. We rely on Model 4a (the model without parliamentarian fixed effects) for this comparison because there is not enough variation within politicians over time for occupation to warrant a meaningful interpretation of the effect of occupation in a model with parliamentarian fixed effects.

19. We use the effect of formal ties from Model 4a instead of Model 4d. The reason is that we want to compare the relative strength of occupation, committee membership, and formal ties and thus need to do so while using the same control variables.
Submilligram-scale separation of near-zigzag single-chirality carbon nanotubes by temperature controlling a binary surfactant system

Submilligram-scale separation of near-zigzag single-chirality carbon nanotubes has been achieved by gel chromatography.

INTRODUCTION

The most valuable asset of single-wall carbon nanotubes (SWCNTs) for photonics applications is their structure-dependent optical transitions, which can be optically or electrically stimulated to emit light in the near-infrared (IR) wavelength range (1-3). This, together with the compatibility of SWCNTs with a range of biological (4), chemical (5), and complementary metal-oxide semiconductor processing methods (6, 7), makes SWCNTs highly attractive for application as fluorescence markers in photoluminescence (PL) microscopy or as nanoscale emitters for on-chip data transmission with light (8-10). In particular, the sp3 functionalization of SWCNTs enables not only tuning of their photon emission into telecom wavelengths but also great enhancement of their photon emission efficiency (5, 11). The enhancement of the photoelectric properties of SWCNTs by sp3 functionalization is heavily dependent on their chiral angle (11). As the chiral structures of SWCNTs move from near armchair to zigzag, sp3 defect-state emission is progressively moved to the more redshifted energy band, ultimately collapsing to emission from a single state in the true zigzag limit (5, 11), implying that nanotubes with smaller chiral angles may have previously unidentified and unique photoelectric properties. Industrial production of zigzag and near-zigzag single-chirality (n, m) species with identical properties is fundamental for the disclosure of their previously unknown properties and for their practical application in photonics, optoelectronics, and related integrated circuits. In the present work, we developed a method to realize submilligram-scale separation of near-zigzag single-chirality SWCNTs by temperature controlling their selective adsorption onto a gel medium in the binary surfactant system of sodium cholate (SC) and sodium dodecyl sulfate (SDS). The gel chromatography technique has been demonstrated to be highly efficient, simple, and scalable (8, 16-18). In this technique, the surfactants that disperse raw SWCNTs in an aqueous solution exhibit selective adsorption toward different-structure SWCNTs, producing a structural or coverage difference in the surfactant coating around different (n, m) SWCNTs (16-18), which induces a difference in their interaction with the gel medium and thus enables separation via their selective adsorption onto the gel. Recently, we demonstrated that SC exhibits adsorption selectivity toward the chiral angle of SWCNTs in the presence of SDS, while sodium deoxycholate (DOC) displays selectivity toward the diameter (8, 28, 29). However, separation of SWCNTs by chiral angle has not been achieved with gel chromatography. In addition, pH (30, 31), ethanol (32), strong salts (33), strong oxidants (34), and even temperature (17) have been proven to finely tune the surfactant coating by driving the selective adsorption onto different SWCNTs and to promote the separation efficiency of (n, m) SWCNTs, because a slight difference of only 0.003% in the surfactant coating will produce very different interactions with the gel (8, 28, 29). Among these techniques, temperature adjustment has the advantages of simplicity, efficiency, and being impurity free (17).
On the basis of these results, we proposed combining temperature tuning and the binary surfactants SC and SDS to finely tune the selectivity of the surfactant coating toward the chiral angle of SWCNTs and thus realize high-efficiency separation of single-chirality SWCNTs with small chiral angles, especially zigzag and near-zigzag SWCNTs. Here, we systematically investigate the effect of temperature on the selective adsorption of SWCNTs dispersed in a cosurfactant system of SC and SDS onto a gel column. The results show that only SWCNTs with chiral angles less than 20° were adsorbed in the gel column at temperatures below 18°C. In contrast, at temperatures above 18°C, SWCNTs with chiral angles larger than 20° start to adsorb in the gel column. On the basis of these results, we proposed two steps for the mass separation of zigzag and near-zigzag single-chirality SWCNTs. The first step is to separate the raw SWCNTs by diameter at the quasi-industrial scale at room temperature by using cosurfactants of SDS, SC, and DOC (8, 29). Subsequently, by controlling the temperature of the SDS/SC binary surfactant system, SWCNTs with similar diameters are separated by chiral angle. The results show that we achieved nine types of single-chirality SWCNTs with chiral angles less than 20°, including (7, 3), (8, 3), (8, 4), (9, 1), (9, 2), (10, 2), (11, 0), (11, 1), and (12, 1) SWCNTs, accompanied by the separation of (6, 4), (6, 5), (7, 5), (7, 6), (9, 4), and (10, 3). Among them, more than 10 types of single-chirality species, including the near-zigzag species (9, 1), (9, 2), (10, 2), and (11, 1), can be prepared on the submilligram scale in a one-run separation, exhibiting potential for industrial preparation of near-zigzag single-chirality SWCNTs. These results indicate that temperature control is extremely important for enhancing the selectivity of the gel toward the chiral angles of SWCNTs in the binary surfactant system of SDS and SC. Submilligram separation of single-chirality SWCNTs with chiral angles less than 20°, and even near-zigzag SWCNTs, can thus be realized; these species were difficult to separate in the past (8, 17, 28, 29), let alone on a large scale. We further detected the structure of the SC/SDS surfactant layer on SWCNTs with different chiral angles and its evolution with the SDS/SC ratio and temperature using optical absorption spectra. Our present work lays a material foundation for fundamental property research on, and application of, near-zigzag SWCNTs. The present technique also provides a reference for other methods, such as aqueous two-phase extraction (ATPE) and density gradient ultracentrifugation (DGU), for high-efficiency separation of zigzag and near-zigzag SWCNTs, because the main feature of these techniques is that surfactants such as SDS and SC are used to distinguish the structures of various SWCNTs (35-39). Adsorption selectivity toward the chiral angle of SWCNTs To reveal the effect of temperature on the selective adsorption of SWCNTs into a gel column in the binary surfactant system of SC and SDS, the separation temperature was varied to study the structure distribution of the SWCNTs adsorbed in the gel (see the detailed process in the experimental section). Raw high-pressure carbon monoxide conversion-grown SWCNTs (HiPco-SWCNTs) were dispersed in a binary surfactant solution of 0.5 weight % (wt %) SC and 0.5 wt % SDS. The initial separation temperature was set at 12°C, and 5 ml of an SWCNT dispersion was loaded into a 30-ml gel column.
The unadsorbed SWCNTs were washed with an aqueous solution of 0.5 wt % SC and 0.5 wt % SDS, and the adsorbed SWCNTs were eluted by 5 wt % SDS. Subsequently, the unadsorbed SWCNTs were loaded into a second, identical gel column, whose temperature was increased by 2°C to capture the SWCNTs not adsorbed by the first gel column. In this manner, by increasing the separation temperature in steps of 2°C, the SWCNTs adsorbed at each temperature were collected. The separation scheme is shown in Fig. 1A. Submilligram-scale separation of single-chirality near-zigzag SWCNTs It is well known that the atomic arrangement of a specific SWCNT is uniquely determined by its diameter and chiral angle. We therefore expected that single-chirality separation of zigzag and near-zigzag SWCNTs could be realized by successive separations by diameter and chiral angle, as shown in Fig. 2A; a code sketch relating (n, m) indices to diameter and chiral angle follows this passage. In previous work, the surfactant DOC exhibited a high ability to recognize the diameters of different SWCNTs (8, 29). When the SWCNTs dispersed in the binary surfactants 1 wt % SDS and 0.5 wt % SC were loaded into a Sephacryl gel column, quasi-industrial diameter separation of the adsorbed SWCNTs could be obtained via stepwise elution by increasing the DOC concentration in the eluent (8, 29). As shown in Fig. 2B, the optical absorption peaks of the fractions eluted with increasing DOC concentration show an overall redshift trend, suggesting that diameter separation of SWCNTs was achieved, although the diameter separation shows some deviation, possibly due to the presence of SC (8, 29). The SWCNT fractions eluted at each DOC concentration exhibited a narrow diameter distribution but differed greatly in chiral angle. For example, the fraction eluted by 0.07 wt % DOC contained predominantly (6, 5) and (9, 1) SWCNTs, which have identical diameters but very different chiral angles. Single-chirality (9, 4) and (10, 3) SWCNTs were also achieved. Most of the separated fractions contained a higher content of SWCNTs with chiral angles less than 20° than the raw materials, which provides feedstock suitable for single-chirality separation on a large scale. Chiral angle separation was further performed at low temperatures, as shown in the right panel of Fig. 2A. The SWCNTs separated by diameter in the first step were simply redispersed in the binary surfactants SDS and SC and then loaded into a gel column at a low temperature (less than 18°C). In this manner, the SWCNTs with smaller chiral angles were adsorbed, while the species with larger chiral angles flowed directly through the gel column. Thus, high-efficiency separation of single-chirality SWCNTs with chiral angles less than 20° was obtained on a large scale. For instance, the SWCNT fraction containing (7, 3) and (6, 4) SWCNTs eluted at 0.06 wt % DOC was redispersed in the cosurfactants 0.5 wt % SC and 0.5 wt % SDS and then applied to a gel column at 15°C. The (7, 3) SWCNTs, with a smaller chiral angle of 17°, were adsorbed in the gel column, while the (6, 4) SWCNTs, with a chiral angle of 23.41°, flowed directly through the gel column. Thus, single-chirality (7, 3) SWCNTs were easily obtained, simultaneously accompanied by enrichment of (6, 4) SWCNTs in the flow-through dispersion, as shown in the top left panel of Fig. 2C (see also Fig. 3, A and B). Because of the simple separation process, submilligram-scale separation of the single-chirality species (7, 3), (7, 6), (8, 3), (8, 4), (9, 1), (9, 2), (10, 2), and (11, 1) was achieved (Fig. 3D) using a 200-ml gel column.
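For readers checking the chiral-angle values quoted above (17° for (7, 3), 23.41° for (6, 4)), both the angle and the diameter follow directly from the (n, m) indices. The following minimal Python sketch is not part of the original work; the graphene lattice constant a = 0.246 nm is the standard assumed value. It reproduces the quoted angles and shows why (6, 5) and (9, 1) have identical diameters but very different chiral angles:

```python
import math

A_LATTICE = 0.246  # graphene lattice constant in nm (assumed standard value)

def chiral_angle_deg(n: int, m: int) -> float:
    """Chiral angle theta = arctan(sqrt(3)*m / (2n + m)), in degrees.
    Zigzag tubes (m = 0) give 0 degrees; armchair tubes (n = m) give 30 degrees."""
    return math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))

def diameter_nm(n: int, m: int) -> float:
    """Tube diameter d = a * sqrt(n^2 + n*m + m^2) / pi, in nm."""
    return A_LATTICE * math.sqrt(n**2 + n * m + m**2) / math.pi

for nm in [(7, 3), (6, 4), (6, 5), (9, 1), (11, 0)]:
    print(nm, f"d = {diameter_nm(*nm):.3f} nm, theta = {chiral_angle_deg(*nm):.2f} deg")
```

Running this gives 17.00° for (7, 3) and 23.41° for (6, 4), matching the text, and d ≈ 0.747 nm for both (6, 5) and (9, 1), which is why those two species co-elute in the diameter step and must be resolved by chiral angle.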
The purity of the separated SWCNTs was evaluated with a previously reported method (16). PeakFit software was used to fit the near-IR optical absorption spectra with peaks representing the individual (n, m) species at wavelengths from 700 to 1350 nm, as shown in fig. S5. The purity of each (n, m) species was computed as the ratio of the area of the dominant absorption peak to the sum of all peak areas in the near-IR region (a minimal sketch of this calculation appears at the end of this passage). The results are summarized in Fig. 3C. The purity of 10 species is greater than 90%, and only the purity of the (11, 0) species is below 80%, indicating that the present technique is highly efficient in recognizing the atomic structures of SWCNTs, especially for SWCNTs with chiral angles less than 20°. Note that, although the stock SWCNT solution was dispersed for 5 hours at 0.38 W/ml before separation, the high G-band/D-band ratios in the Raman spectra of the different (n, m) species indicate that the dispersion process did not introduce excessive defects (fig. S6). When the dispersion time was reduced to 30 min, we demonstrated that high-purity single-chirality species could still be isolated, and these had greater lengths and higher PL intensity. However, the concentration of the monodispersed SWCNT solution was greatly reduced, which would inevitably decrease the overall throughput of the separated species (fig. S7). Recently, by characterizing the PL spectra of the separated single-chirality SWCNTs, we revealed the relationship between the PL quantum yield and the chiral structure of SWCNTs (41). Detecting the temperature-driven surfactant coating change The length distributions of the separated (n, m) species are essentially identical, ranging from 100 to 700 nm (fig. S8), indicating that length difference is not the main cause of the separation of zigzag and non-zigzag SWCNTs. Lowering the temperature would decrease the solubility of SDS and SC in the aqueous solution and facilitate their aggregation (42-44), driving their selective adsorption onto distinct (n, m) SWCNTs (17). Our results above indicate that the temperature-driven adsorption of surfactants onto SWCNTs should be chiral angle-dependent in the binary system of SDS and SC, which enlarges the difference in the surfactant coatings around SWCNTs with different chiral angles. The optical transition properties of SWCNTs are sensitive to adsorbed molecules, including various surfactant molecules, because of changes in the molecular interaction with SWCNTs, the dielectric environment, or the strain around them (17, 45-49). To detect the selective adsorption of the binary surfactants SDS and SC onto (n, m) species with different chiral angles driven by lowering the temperature, we explored the effect of temperature on the spectral changes of the first van Hove singularity transition (S11) absorption peak of single-chirality (9, 1) and (6, 5) species with identical diameters but different chiral angles, which were highly dispersed in an aqueous solution of the cosurfactants 0.5 wt % SDS and 0.5 wt % SC. The results show that lowering the temperature induces a clear redshift and quenching of the S11 optical absorption peak of both types of SWCNTs (Fig. 4). The S11 peak of (6, 5) SWCNTs exhibits a greater redshift than that of (9, 1) SWCNTs, while its degree of quenching is smaller. These results imply that more surfactant molecules selectively adsorb onto (6, 5) SWCNTs and form a denser or tighter surfactant coating at lower temperatures, resulting in much weaker adsorbability onto the gel.
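The purity metric defined earlier in this passage — the dominant S11 peak area divided by the total fitted peak area in the 700-1350 nm window — is straightforward to compute once the spectrum has been decomposed into per-(n, m) peaks. A minimal sketch, assuming the peak fitting (done with PeakFit in the paper) has already produced per-species peak areas; the function name and the numeric areas below are illustrative, not values from the original work:

```python
def chirality_purity(peak_areas: dict[str, float], target: str) -> float:
    """Purity of `target` (n, m) species: area of its dominant absorption
    peak divided by the sum of all fitted peak areas in the near-IR region."""
    total = sum(peak_areas.values())
    if total <= 0:
        raise ValueError("fitted peak areas must sum to a positive value")
    return peak_areas[target] / total

# Illustrative (made-up) peak areas from a 700-1350 nm fit:
areas = {"(9,1)": 8.7, "(6,5)": 0.5, "(8,3)": 0.4}
print(f"(9,1) purity: {chirality_purity(areas, '(9,1)'):.1%}")  # ~90.6%
```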
The more compact surfactant coating also protects the (6, 5) SWCNTs from oxidation and protonation by oxygen or hydronium ions in the aqueous solution, such that their degree of quenching is smaller than that of the (9, 1) SWCNTs (49). We can anticipate three possible processes in the adsorption of the cosurfactants onto SWCNT surfaces driven by lowering the temperature: (i) the ratio of SC to SDS in the surfactant coating around a specific SWCNT remains unchanged; (ii) the composition ratio of the cosurfactants changes; or (iii) the morphology and structure of the cosurfactant coating change via reorganization. The physical adsorption of surfactants onto SWCNTs is a dynamic equilibrium phenomenon. The adsorption probability of each surfactant mainly depends on its concentration and its ratio in the mixed surfactants (16, 17). To test whether the first process occurs, we ideally assume that the composition ratio of the surfactant coating around an SWCNT remains constant as the concentration of each surfactant component in the solution increases equally. For this, we investigated the effect of the concentrations of SDS and SC on the optical absorption spectra while fixing their concentration ratio at 1:1, with the concentration of each surfactant varied simultaneously from 0.5 to 2 wt %. The results show that the wavelengths of the S11 peaks of both nanotubes remain unchanged (fig. S9 and Supplementary Materials) at a fixed temperature. These results differ from those induced by lowering the temperature, indicating that lowering the temperature does not simply increase the surfactant density around SWCNTs and that the ratio of SDS to SC adsorbed on the SWCNTs is likely altered at lowered temperatures. We further studied the spectral changes of the (9, 1) and (6, 5) species while varying the concentration ratio of SDS to SC at different temperatures, which could drive a change in the composition ratio of the surfactant coating around SWCNTs. The results are presented in Fig. 5A and figs. S10 to S12. As the SDS concentration increases with the SC concentration fixed at 0.5 wt %, the shift of the S11 absorption peaks of (6, 5) and (9, 1) SWCNTs shows similar behavior at different temperatures. In general, the spectral changes can be divided into three stages. In the first stage, as the SDS concentration gradually increases from 0 to 0.5 wt %, the S11 peaks of both the (9, 1) and (6, 5) nanotubes exhibit an increasing redshift, reaching a maximum at approximately 0.5 wt % SDS. In the second stage, as the SDS concentration increases from 0.5 to 1.0 wt %, the S11 peak position fluctuates slightly. In the third stage, with a further increase from 1.0 to 2.0 wt %, the S11 peaks of the (9, 1) and (6, 5) species shift back toward the blue. Similarly, when the SDS concentration is fixed at 2 wt %, the addition of SC to the SDS-dispersed SWCNTs also causes a redshift of the S11 peaks of (9, 1) and (6, 5) SWCNTs. In comparison, the redshift of the (6, 5) SWCNTs is substantially greater than that of the (9, 1) SWCNTs, regardless of whether SDS is added to SC-dispersed SWCNTs or SC to SDS-dispersed SWCNTs. Note that the change in the surfactant composition ratio redshifts the S11 absorbance of (6, 5) SWCNTs by at most 8 meV and that of (9, 1) SWCNTs by at most 3 meV at 24°C, which is smaller than the redshift caused by lowering the temperature from 24° to 8°C at the fixed concentrations of 0.5 wt % SDS and 0.5 wt % SC [13 meV for (6, 5) and 9 meV for (9, 1) SWCNTs].
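The shifts quoted in meV are obtained by converting the S11 peak wavelengths to photon energies (E = hc/λ, with hc ≈ 1239.84 eV·nm) and taking the difference. A short sketch; the example wavelengths are illustrative, chosen only to show the size of wavelength shift that corresponds to ~13 meV near a (6, 5)-like S11 transition, and are not measured values from the paper:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

def shift_mev(lambda_before_nm: float, lambda_after_nm: float) -> float:
    """Energy shift in meV between two peak wavelengths (positive = redshift)."""
    return (HC_EV_NM / lambda_before_nm - HC_EV_NM / lambda_after_nm) * 1000.0

# Illustrative: an S11 peak near 985 nm moving to ~995.3 nm on cooling
print(f"{shift_mev(985.0, 995.3):.1f} meV")  # ~13.0 meV, i.e. a ~10 nm redshift
```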
On the basis of the above results, we believe that the spectral redshift induced by temperature reduction may be due, in part, to a change in the SDS/SC ratio on SWCNTs. In the single-surfactant SDS system, a redshift of the optical absorption spectrum was also reported on lowering the temperature (17). That spectral change was attributed to a change in the microdielectric environment or to strain enhancement of the SWCNTs due to the adsorption and reorganization of the surfactant (17, 45-48). However, the redshifts caused by lowering the temperature from 24° to 8°C in the single-surfactant SDS or SC systems are less than 5 meV for (9, 1) and (6, 5) SWCNTs (figs. S13 and S14), which is much smaller than in the case of the mixed surfactants [13 meV for (6, 5) and 9 meV for (9, 1) SWCNTs] (Fig. 5A). Combining these results — adding SDS to SC-dispersed SWCNTs or adding SC to SDS-dispersed SWCNTs both cause a marked redshift of the optical absorption spectra — we propose that an interaction between SDS and SC must be present in the binary surfactant system, because simple competitive adsorption or replacement between SDS and SC on the surface of SWCNTs should not cause a large spectral redshift (50-52). Compared with the head-tail surfactant SDS, SC shows a lower self-aggregation tendency and forms a tighter coating on the SWCNT surface by accommodating the SWCNT curvature and wrapping around the SWCNTs like a ring, preventing the adsorption of SWCNTs onto gel (53), while SDS is prone to form a more loosely packed structure owing to van der Waals interactions on SWCNT surfaces (54, 55), allowing a stronger interaction of SWCNTs with the gel. Several studies have reported that the flexible alkyl chains of SDS tend to interact with the nonplanar hydrophobic β-faces of SC molecules to form compound SDS/SC micelles (56-59). We propose that the introduced SDS molecules likely shift the dynamic equilibrium of the physisorption of SC molecules on SWCNTs and disrupt the well-packed SC coating by removing a fraction of the monomeric SC molecules on the SWCNTs and forming an SDS/SC compound (cosurfactant) that loosely coats the SWCNTs, increasing the exposed area of the SWCNT sidewalls (as shown in Fig. 5B) and thereby enhancing the adsorbability of SWCNTs onto the gel. Compared with single-surfactant SDS- or SC-dispersed (9, 1) and (6, 5) SWCNTs, the adsorption of the SDS/SC compound on SWCNTs causes a large redshift of the S11 optical absorption peak, probably due to the increased contact area and the enhanced interaction between SWCNTs and cosurfactants (Fig. 5A and fig. S14). Because of the presence of chiral SC molecules, the compound surfactants prefer to adsorb on the SWCNTs with larger chiral angles, resulting in a larger redshift of the S11 peak of (6, 5) SWCNTs than of (9, 1) SWCNTs. This hypothesis is consistent with the trend in the optical absorbance of the SWCNTs. With the introduction of SDS, the optical absorbance of SWCNTs decreases, likely because the formation of the loosely packed SDS/SC layer weakens the protection of SWCNTs from oxidation. Because of its relatively dense SC/SDS layer, the absorbance of (6, 5) SWCNTs decreases less than that of the (9, 1) species (fig. S12). It has been reported that dissolved oxygen may cause a spectral redshift of SWCNTs due to the reorganization of the surfactant layer resulting from oxidation (34).
[Fig. 4 caption: (A and B) S11 absorption peaks of (6, 5) and (9, 1) SWCNTs dispersed in 0.5 wt % SC and 0.5 wt % SDS at various temperatures. (C) Redshift and relative absorbance of the S11 peaks of (9, 1) and (6, 5) SWCNTs as a function of temperature.]
Here, the dissolved oxygen may partially contribute to the spectral shift of the SWCNTs because of the loosely packed SDS/SC coat. The density of the SDS/SC cosurfactant adsorbed on SWCNTs should strongly depend on the concentrations and ratio of SC and SDS. As shown in Fig. 5C, with the introduction of SDS (at less than the SC concentration) in the first stage, the SDS/SC cosurfactant starts to form and adsorb on SWCNTs. However, because of the low SDS concentration, much of the surface area is still covered by well-packed SC, which results in weak adsorbability onto the gel. With an increase in the SDS concentration, an increasing amount of SDS/SC cosurfactant is formed. When the SDS concentration increases to 0.5 to 1.0 wt % (equal to or higher than that of SC), the concentration of the SDS/SC cosurfactant reaches its maximum and dominates the surface coating on the SWCNTs, and the exposed area of the SWCNT sidewalls reaches its largest value. With a further increase in the SDS concentration, SDS molecules gradually replace the SDS/SC compound surfactant and finally dominate the surfactant layer structure on the SWCNTs, so the S11 peak shifts back toward the blue. In comparison, the relative blueshift of the S11 peak of (6, 5) SWCNTs is significantly smaller than that of (9, 1) SWCNTs. For example, at 24°C and an SDS concentration of 2 wt %, the S11 peak of (6, 5) SWCNTs shifts back by ~24% of the maximum redshift, while that of (9, 1) shifts back by ~88% of the previous redshift, possibly because the interaction between the SDS/SC cosurfactant and (6, 5) SWCNTs is stronger, making the cosurfactant harder for monomeric SDS to replace (29). According to the experimental results and the proposed structure of the cosurfactant SDS/SC coating, we further propose how the structure of the cosurfactant layer changes as the temperature decreases. At room temperature (24°C), although the coating structure of the cosurfactant was tuned by altering the concentrations of SDS and SC, reaching maximum redshifts at 0.5% SDS/0.5% SC of 8 meV for (6, 5) and 3 meV for (9, 1), the difference in the coating is not sufficient to separate the SWCNTs by chiral angle. Selective adsorption onto the gel by chiral angle can be observed only by overloading, and even then a large number of near-armchair SWCNTs are still adsorbed in the gel (fig. S15). Lowering the temperature induces a further spectral redshift of (6, 5) and (9, 1) SWCNTs at various SDS/SC ratios (Fig. 5A), implying that more cosurfactant adsorbs and reorganizes on the SWCNTs, driven by the reduced solubility of the surfactants (as shown in Fig. 5C) (43, 44). Because of the different effects of temperature on the solubility of SDS and SC, the SDS/SC ratio in the cosurfactant coating adsorbed on SWCNTs is likely altered at lowered temperatures (42, 56). (6, 5) SWCNTs exhibit a greater redshift than (9, 1) SWCNTs (Fig. 5A and table S1) at various concentrations of SC and SDS, together with smaller quenching of the S11 peak (fig. S12). As shown in Figs. 5A and 4C, lowering the temperature from 24° to 12°C further redshifts the S11 peaks of (6, 5) by 9 meV and of (9, 1) by 5.8 meV at 0.5 wt % SDS/0.5 wt % SC.
These results indicate that a tighter or denser cosurfactant coating is selectively adsorbed on (6, 5) SWCNTs compared with (9, 1) SWCNTs, which further amplifies the difference in the interactions of (6, 5) and (9, 1) SWCNTs with the gel. Thus, the denser and tighter SDS/SC cosurfactant markedly weakens the adsorbability of (6, 5) SWCNTs onto the gel, while the adsorbability of (9, 1) SWCNTs is preserved at lower temperature. Fully elucidating the structure of the SC/SDS surfactant layer and how it varies with the SDS/SC ratio and temperature is a complex and systematic task; in this work, we have only proposed a possible model on the basis of spectral evidence (Fig. 5). More systematic studies are needed to clarify the structure of the SDS/SC cosurfactant layer and its evolution with the environment. Effect of the ratio of SDS and SC on the adsorbability of SWCNTs The hydrophobic interaction and electrostatic repulsion between SDS and SC molecules can be adjusted by changing the ratio and total concentration of these two surfactants, which will tune the molecular self-assembly process on SWCNTs. The critical point for the structural separation of nanotubes via gel chromatography is the selective adsorption of SWCNTs onto the gel medium, which strongly depends on the exposed area of the SWCNT sidewalls. As mentioned above, the separation of (9, 1) SWCNTs from a (6, 5)/(9, 1) mixture was achieved with 0.5 wt % SC and 0.5 wt % SDS because of the maximal difference in the cosurfactant coating at low temperature. For the separation of larger-diameter SWCNTs, the concentration/ratio should be tuned for the same purpose. Given that SC molecules need not bend as much to cover an SWCNT with a larger radius, larger-diameter SWCNTs should have a stronger interaction with SC molecules and usually form a denser SC coating (59). Thus, more SDS molecules should be introduced to interact with or replace the SC adsorbed on the SWCNTs to form the SDS/SC cosurfactant, increase the exposed area of the nanotube surface, and enable adsorption onto the gel. To verify the effect of the concentrations and composition ratio of the mixed surfactants on their selective adsorption onto SWCNTs at low temperatures, we further investigated the separation of SWCNTs by chiral angle while varying the surfactant concentration/ratio. In this experiment, we dispersed SWCNTs in 0.5 wt % SC and 0.3 wt % SDS; 0.5 wt % SC and 0.4 wt % SDS; 0.5 wt % SC and 1 wt % SDS; 0.05 wt % SC and 2 wt % SDS; and 0.5 wt % SC and 2 wt % SDS. In the two cases of 0.5 wt % SC/0.3 wt % SDS and 0.5 wt % SC/2 wt % SDS, adsorption of SWCNTs onto the gel medium was not observed, possibly due to the well-packed surfactant coating around them. In the other cases, the nanotubes adsorbed at temperatures of 12° to 22°C were characterized by optical absorption spectra (fig. S16), and the relative contents of the different (n, m) species among the adsorbed SWCNTs are summarized in Fig. 6. In both the 0.4 wt % SDS/0.5 wt % SC and 1 wt % SDS/0.5 wt % SC cases, the SWCNTs with smaller chiral angles adsorb selectively at low temperatures, showing strong chiral angle selectivity. The selective adsorption of small-diameter species with smaller chiral angles is even more distinct in the 0.4 wt % SDS/0.5 wt % SC case, but the amount of adsorbed SWCNTs, especially large-diameter nanotubes, is much smaller (fig. S16). The (11, 1), (10, 2), (10, 3), and (9, 4) SWCNTs with relatively large diameters were not adsorbed.
This should be attributed to the stronger interaction of SC molecules with these SWCNTs and thus the denser SC coating on them. Meanwhile, as we predicted, in the case of 0.5 wt % SC and 1 wt % SDS, where the SDS concentration exceeds that of SC, chiral angle selectivity still dominates, while the adsorbability of SWCNTs with relatively large diameters or medium chiral angles (slightly smaller than 20°) is enhanced. This scenario is consistent with the use of 0.5 wt % SC and 1 wt % SDS, or 0.25 wt % SC and 1.25 wt % SDS, for the separation of single-chirality (11, 1), (10, 2), (10, 3), and (9, 4) SWCNTs, which have larger diameters. In the case of 0.05 wt % SC and 2 wt % SDS, in which only a trace amount of SC is introduced, chirality selectivity at low temperature (12°C) is also observed. As shown in Fig. 6A and fig. S16, a small amount of multiple (n, m) species with chiral angles less than 20° [i.e., (9, 1), (7, 3), and (8, 3) SWCNTs] is adsorbed in the gel column, indicating that chiral angle selectivity emerges. In contrast, only near-armchair nanotubes, such as (6, 4), (6, 5), and (7, 5), are adsorbed at temperatures lower than 12°C in the single-surfactant SDS system (17). We developed a novel and efficient method to separate SWCNTs by chiral angle, in which highly selective adsorption of SWCNTs with chiral angles less than 20° onto a gel medium was achieved by temperature control of the binary surfactant system of SDS and SC. On the basis of this result, we designed a two-step strategy to separate single-chirality zigzag and near-zigzag SWCNTs: the raw SWCNT mixture was first separated by diameter using stepwise elution, and the eluted fractions with narrow diameter distributions were subsequently separated by chiral angle through temperature control. With this technique, more than 10 types of single-chirality species, including the near-zigzag SWCNTs (9, 1), (9, 2), (10, 2), and (11, 1), were separated on the submilligram scale. We further probed the temperature-driven adsorption selectivity of SC/SDS toward the chiral angle of SWCNTs using optical absorption spectra and revealed that lowering the temperature caused more SC/SDS cosurfactant to adsorb on the SWCNTs with larger chiral angles, which amplified the difference in the interactions of SWCNTs of different chiral angles with the gel and improved the efficiency of separating SWCNTs by chiral angle. Our present results lay a foundation for the industrial separation of single-chirality near-zigzag SWCNTs and provide guidance for other methods, such as ATPE and DGU, in separating small-chiral-angle SWCNTs. In addition, the achievement of multiple single-chirality near-zigzag SWCNTs with a broad diameter distribution by tuning the SDS/SC ratio provides a possible pathway for the mass separation of larger-diameter single-chirality SWCNTs, which exhibit higher carrier mobility and saturation current owing to the formation of ohmic contacts with metal electrodes. Dispersion of SWCNTs HiPco-SWCNTs were purchased from NanoIntegris (raw powder batch no. 29-037). The as-received SWCNT powder was dispersed in a 100-ml aqueous solution of 1 wt % SC (99%; Sigma-Aldrich) using a homogenizer equipped with a half-inch tip at an output power density of 0.38 W/ml for 5 hours (Sonifier 450D, Branson). To dissipate the heat generated during sonication, the dispersion was immersed in a water bath at 15°C. Subsequently, ultracentrifugation (S50A rotor, Hitachi CS150FNX) was performed on the dispersion at 210,000g for 30 min.
Eighty percent of the supernatant was collected as the as-prepared dispersion. SDS was introduced by adding SDS (99%; Sigma-Aldrich) aqueous solutions of various concentrations to achieve the specific SC and SDS concentrations required by the experimental conditions. Alternatively, the raw SWCNTs were directly dispersed in an aqueous solution of 0.5 wt % SDS and 0.5 wt % SC, followed by centrifugation. The supernatant was collected as the parent solution for the separation of SWCNTs. Temperature-controlled separation in the binary surfactant (SC and SDS) system Several columns filled with 30 ml of gel (Sephacryl S-200 HR, GE Healthcare) were prepared. The columns, surfactant solutions, and SWCNT dispersion were soaked in a bath at 12°C. Then, 5 ml of the dispersion was applied to an equilibrated column. The adsorbed SWCNTs were eluted by 5 wt % SDS. The flow-through fraction was collected and loaded onto the next column until no SWCNTs could be adsorbed at this temperature. Then, repeated separation of the unadsorbed SWCNTs was performed as described above after increasing the temperature in steps of 2°C. Mass separation of single-chirality SWCNTs with different chiral angles Two hundred milliliters of gel was packed in a column (XK 50/40, GE Healthcare). The column was then connected to an automated chromatography system (AVANT 150, GE Healthcare). The whole system was kept in a homemade thermostat to control the temperature. At 18° to 22°C, the SWCNT dispersion in the mixed surfactant of 0.5 wt % SC and 1 wt % SDS was loaded onto the equilibrated column. After the unadsorbed SWCNTs were eluted, mixed surfactants of X wt % DOC/0.5 wt % SC/1 wt % SDS were loaded to stepwise elute the adsorbed SWCNTs, where X was increased from 0.06 to 0.2 wt % in steps of 0.01 wt %. The eluted fractions were collected and characterized by optical absorption spectra. Optical absorption characterization Optical absorption spectra were recorded using an ultraviolet-visible-near-IR spectrophotometer (UV-3600, Shimadzu). A temperature-control module was used to cover the cuvette and was connected to a circulating-water system to control the sample temperature during measurement.
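The two stepping schedules in these methods — the 2°C temperature ramp for the chiral-angle screen and the 0.01 wt % DOC increments for the diameter elution — are simple parameter sweeps. A minimal sketch of how such schedules might be generated for an automated run; the function names are illustrative, and any integration with the chromatography system is an assumption, not something described in the paper:

```python
def temperature_schedule(start_c=12.0, stop_c=22.0, step_c=2.0):
    """Column temperatures for the chiral-angle screen (12 to 22 C in 2 C steps)."""
    t = start_c
    while t <= stop_c + 1e-9:  # small tolerance for float accumulation
        yield round(t, 1)
        t += step_c

def doc_elution_schedule(start_wt=0.06, stop_wt=0.20, step_wt=0.01):
    """DOC concentrations (wt %) for stepwise elution in 0.5 wt % SC / 1 wt % SDS."""
    n_steps = round((stop_wt - start_wt) / step_wt)
    return [round(start_wt + i * step_wt, 2) for i in range(n_steps + 1)]

print(list(temperature_schedule()))  # [12.0, 14.0, 16.0, 18.0, 20.0, 22.0]
print(doc_elution_schedule())        # [0.06, 0.07, ..., 0.2]
```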
Knowledge, attitudes and ethical and social perspectives towards fecal microbiota transplantation (FMT) among Jordanian health care providers Background: Fecal microbiota transplant (FMT) is a treatment modality that involves the introduction of stool from a healthy pre-screened donor into the gastrointestinal tract of a patient. It exerts its therapeutic effects by remodeling the gut microbiota and treating microbial dysbiosis (imbalance). FMT is not regulated in Jordan, and regulatory efforts for FMT therapy in Jordan, a conservative Islamic country, may face unique cultural, social, religious, and ethical challenges. We aimed to assess knowledge, attitudes, and perceptions of ethical and social issues of FMT use among Jordanian healthcare professionals. Methods: An observational, cross-sectional study design was used to assess knowledge, attitudes, and perceptions of ethical and social issues of FMT among 300 Jordanian healthcare professionals. Results: A large proportion (39%) thought that the safety and efficacy of this technique are limited, and 29.3% thought there is no evidence to support its use. Almost all (95%) responded that they would only perform it in certain cases, if ethically justified, and 48.3% would use it after treatment failure of other approaches. When reporting reasons for not using it, 40% reported that they would not perform it due to concerns about medical litigation, fear of infections (38%), and lack of knowledge of long-term safety and efficacy (31.3%). Interestingly, all practitioners said they would perform this procedure through the lower rather than the upper gastrointestinal tract, and most would protect the patient's confidentiality via double-blinding (43.3%). For a subset of participants (n=100), the cultural constraints that might affect the choice of performing FMT were mainly the donor's religion, followed by dietary intake and alcohol consumption. Conclusion: Our health care practitioners are generally reluctant to use the FMT modality for religious and ethical reasons but would consider it after failure of other treatments and after taking into consideration many legislative, social, ethical, and practice-based challenges, including safety, efficacy, and the absence of guidelines. Introduction Fecal microbiota transplantation (FMT) is a procedure involving the transfer of stool from a healthy screened donor into the intestinal tract of a diseased recipient. FMT is claimed to possess a therapeutic effect by remodeling the gut microbiota and treating microbial dysbiosis, which is often defined as an "imbalance" in the gut microbial community that is associated with disease (1-4). Traditionally, FMT is prepared as a crude fecal matter using a manual method where the "fecal matter, or stool, is collected from a tested donor, mixed with a saline or other solution, strained, and placed in a patient, by colonoscopy, endoscopy, sigmoidoscopy, or enema" (5). Recently, a standardized automated washed microbiota transplant (WMT) preparation method was introduced and was found to significantly reduce FMT-related adverse events.
FMT has been used to successfully treat recurrent Clostridium difficile infection (8, 9), and guidelines towards its safe use are continuously evolving. As such, the Infectious Diseases Society of America (IDSA) (10), the Society for Healthcare Epidemiology of America (SHEA) (11), and the World Society of Emergency Surgery (WSES) (12) have recently updated the Clinical Practice Guidelines for Clostridium difficile Infection (CDI). The updated IDSA and SHEA guidelines include the use of FMT as a CDI treatment in the second or subsequent recurrence, with strong to moderate strength of recommendation and quality of evidence (10, 11). According to the Food and Drug Administration (FDA), in regulating the use of FMT for recurrent CDI, it should clearly be explained as being an experimental approach. As such, the use of FMT is a mix of a clinical trial and standard care (13). The long history of FMT has witnessed an evolution in methodology, clinical strategies, and delivery methods (14). Therefore, modernized FMT guidelines have been formulated (8, 15, 16). Nevertheless, efforts are still needed to establish standardized protocols for stool preparation, FMT administration and delivery methods, donor and recipient selection criteria (17), and stool banking (18, 19). Meanwhile, there has been great interest in FMT applications, and growing evidence suggests its potential use in the management of GI conditions other than recurrent CDI (20), including ulcerative colitis (21), cardiometabolic syndrome (22), Crohn's disease (23), irritable bowel syndrome (24), and some neurological disorders such as multiple sclerosis (25) and Parkinson's disease (26). Nevertheless, these potential therapeutic uses face many challenges (27), and thus safety and efficacy studies are still needed. Moreover, technical, legislative, regulatory, ethical, and social concerns in creating a standardized treatment modality should be addressed and resolved. FMT therapy has faced, and still faces, numerous regulatory, ethical, cultural, and social challenges. Ethical challenges include (28): (1) "informed consent and the vulnerability of patients"; (2) "determining what is a suitable healthy donor"; (3) "safety and risk"; (4) "commercialization and potential exploitation of vulnerable patients"; and (5) "public health implications" (28). Personal identity and family relations (29-31) have been identified as additional ethical challenges. The findings that altered microbiota can be passed to offspring (32), and the possibility that family members may be secondary recipients, raised calls for consideration of the ethical complexity and challenges associated with microbiome research in FMT procedures and regulations (33). Moreover, due to the strong symbolic or emotive objection to certain types of diet in relation to the recipient's culture, religion, or self-perception, it was shown that the dietary intake of a stranger donor might be considered an ethical challenge in the FMT consenting procedure (30-33). All these challenges make it very difficult to demarcate the regulatory framework. Indeed, the regulatory status of FMT has been changed several times and is continuously modified (35, 36).
Despite the reported therapeutic effects of FMT in recurrent CDI management, its use is limited by many factors, including the lack of specialized centers, difficulties with donor selection and recruitment, and difficulties related to regulation and safety monitoring (19, 37), in addition to the social and ethical challenges described above. In contrast to the long-standing use of FMT in China, FMT is not regulated as a therapeutic tool in Jordan, nor is it officially practiced. In light of the growing evidence of FMT's therapeutic effectiveness in the management of different GIT dysbiosis-related disorders, we expect it will become an approach used by Jordanian practitioners in the future. Nevertheless, the differences in the cultural, social, and religious makeup of the Jordanian and Islamic conservative tradition compared with China or Western countries might entail unique ethical challenges towards FMT therapy and thus specific regulations for its use. Indeed, it has been shown that the cross-cultural differences between Chinese (28) and Western cultures shaped their respective FMT regulations (39, 40). The aim of our current study was to investigate the knowledge, attitudes, and perceptions of ethical and social issues regarding FMT use by Jordanian health care providers, to highlight the ethical challenges in the context of Jordan's cultural and social makeup. Study design, settings, and subjects This was an observational, cross-sectional study, the aim of which was to assess knowledge, attitudes, and perceptions of ethical and social issues of FMT among Jordanian healthcare professionals. The study was conducted in Amman, Jordan between June and August 2019. Using convenience sampling, 300 healthcare practitioners, including gastroenterologists and/or internists, medical doctors, nurses, medical laboratory technicians, and pharmacists, were invited to participate in the study and asked to fill out a paper-based questionnaire. The goals of the study, as well as the questionnaire, were thoroughly explained to each participant before obtaining their verbal consent to participate. Their participation was voluntary and their responses were anonymous. This study was approved by the Institutional Review Board of the Jordan University Hospital (IRB no. 80/20/9/535) dated 3/11/2019. Questionnaire development The questionnaire was based on that used in a previous study by Ma et al. (28), with some modifications. The latter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). In brief, the questionnaire consisted of four sections comprising 20 items: general knowledge and attitudes towards FMT (four items); perception of ethical concerns (nine items); beliefs about social and regulatory issues (four items); and views about FMT bank ethics (three items). Question formats included single-choice, multiple-choice, and written short answer. To this questionnaire, we added questions about cultural constraints, including religion, dietary intake, and alcohol consumption, for a subset of participants (n=100). Moreover, using an open-ended question, participants were asked to write any other comments regarding FMT that they wished to make.
Sample Size Calculation For the questionnaire, sample size was calculated based on O'Rourke et al., 2013 (41), where it is recommended that the number of subjects be 5-10 times the number of items, or at least 100. Given that we have 21 items in our questionnaire, a sample size of 105-210 participants was considered representative for the purpose of this study (a minimal arithmetic sketch of this rule follows this section). Statistical analysis Data were analyzed using the Statistical Package for the Social Sciences (SPSS®) version 22 (SPSS® Inc., Chicago, IL, USA). Descriptive analysis was done using frequencies and percentages. The chi-square (or Fisher's exact) test was used to compare practitioners who were familiar and/or involved with FMT vs. those who were not. An independent Student's t-test was used to compare scores between practitioners familiar with FMT vs. those who were not. In addition, an ANOVA test was done to check for differences by profession. An arbitrary negative score was created from the negative views about FMT, assigning a value of 1 to answers with a negative attitude and 0 to positive attitudes. A P-value less than 0.05 was considered statistically significant. Results Data were collected from 300 healthcare professionals. Table 1 below describes the results as frequency (n) and percentage (%). Most of the participants were gastroenterologists (38%), followed by medical doctors (23.7%). The vast majority (95.7%) had not performed FMT but had heard about it. Ethics: Regarding ethical issues, most of the respondents were skeptical of, and not supportive of, using the FMT method. A large proportion (39%) thought that the safety and efficacy of this technique are limited, and another 29.3% reported that there is no evidence to support its use. When asked whether they would use the method if it were medically indicated and ethically approved, still only 5% would refer a patient for FMT. About 40% would not perform it due to concerns about medical litigation, followed by fear of infections (38%) and lack of knowledge of long-term safety and efficacy (31.3%). However, 48.3% would do it when other treatments fail, and another 29.7% would do it if there was a need for organic or natural treatments. Most commonly, respondents would protect the patient's confidentiality via double-blinding (43.3%). Not everyone was willing to inform patients about all risks: some would inform them about the actual physical risk of the procedure, and others would inform patients depending on their comprehension. Concerning the FMT bank, all participants viewed donor anonymity and data de-identification as problematic, and 47.7% were worried about the consent methods. The ethical concerns were numerous and included the mode of informed consent, privacy protection, and ownership of samples. Perceptions about the use and efficacy of FMT: Only 20.7% believed that FMT was overrated, 42% did not agree that its value is overrated, and the rest did not know. Interestingly, all practitioners would perform this procedure through the lower gastrointestinal rather than the upper gastrointestinal tract. A total of 43% supported the statement that FMT negatively impacts the patient's dignity. As for social and regulatory issues, 87% believed that the application of FMT should be suspended and that its adoption is not urgent, 84% believed that FMT will not have other future applications, and 100% said that it should not be used as the first line for CDI. Barriers to the use of FMT were the lack of guidelines (40.3%) and the unknown mechanism of action of this treatment (33.7%).
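The sample-size rule of thumb used above (5-10 respondents per questionnaire item, with a floor of 100) is easy to make explicit. A minimal sketch; this is illustrative arithmetic only, not a statistical power calculation, and the function name is hypothetical:

```python
def sample_size_range(n_items: int, low: int = 5, high: int = 10, floor: int = 100):
    """Recommended respondent range: low-high subjects per item, at least `floor`."""
    return max(low * n_items, floor), max(high * n_items, floor)

print(sample_size_range(21))  # (105, 210); the study recruited 300, exceeding the range
```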
"Do-It-Yourself" (DIY)-FMT-meaning lay individuals adopting FMT clinical techniques performed on and/or by themselves at home.Social media has facilitated widespread exposure to and awareness of the relationship between the gut microbiome and human health (42).Given the availability of the necessary raw materials, the straightforward technique of FMT administration, and anecdotal success stories online, numerous websites have already sprung up advertising home DIY FMT kits as a direct-to-consumer (DTC) product.The concept of commercialization of FMT raises concerns regarding proprietary rights, accessibility of data and biological material, and the implications of DTC products (43). With regards to commercialization, 81.3% of participants thought that DIY and DTC advertisement is not concerning as it is common in other areas, and 86.3% believe that FMT should not be charged for.Almost above half of them did not care about the justice of the allocation of bene ts to the patients. Cultural aspects: For a subset of participants (n=100), we asked about the cultural constraints that might affect the choice of performing FMT, for 52% it was the religion, then dietary intake (25%), and alcohol consumption (23%).Due to the scarce number of practitioners who fall in the category of being familiar and/or involved in FMT (13 out 300), the comparison of different variables according to this parameter would not be accurate due to the large difference in sample size between these categories.However, some results were worth mentioning. Those who were familiar with FMT were gastroenterologists and/or internists and only one of them did not think that it is a promising modality.All of them (n=13) would not recommend FMT due to concerns about infection, while 36.2%(n=104) of those who are not familiar with FMT have such concerns (p-value=0.003). Moreover, half of those who are familiar would inform patients about physical risks vs. 16.7% in those who are not familiar with FMT informing patients about physical risks (p<0.002).Alcohol was the main cultural concern among those who did not perform FMT (25.3%), and dietary intake was the concern of those who did perform FMT (44.4% vs. 23.1%),and religion was equally concerning for both groups.Almost all the health care providers have heard of FMT but only 4.3% performed or were involved with this procedure.An independent t-test showed that there was no signi cant difference in the negative views' score between practitioners familiar with FMT (n=13; mean= 9.9, SD=2.1) vs. those who are not (n=287; mean=10.3,SD=2.1); (p=0.55)(Table 2).ANOVA test showed that medical doctors hadhigher negative scores than Laboratory technicians, but not statistically signi cant (p=0.15), and no signi cant differences were found between other professions, as shown in Table 2: In addition, an open-ended question allowed participants to express their views about FMT in a category called "others".Seventy of them answered this question, 17% of them expressed (n=12) religious objections and 30% (n=21) of the participants declared the need to consider the religious point of view and to seek Fatwa.Moreover, 41% of them were concerned about lack of experience and clinical trials in the Arab region (n=29), and 11% thought it should be sought as last resort, with strict monitoring, or might have role in future (n=8). 
Discussion Harmonized FMT regulations are lacking, and the current regulatory status ranges from non-existent to strictly regulated (35). For now, the US FDA has classified FMT as a live biotherapeutic drug that requires the submission of an Investigational New Drug (IND) application for its therapeutic uses (36). CDI has recently been exempted from IND application filing, a decision received with high appreciation by clinicians who use FMT in a potentially fatal ailment. Meanwhile, strict regulation and control over the use of such treatment were recommended by Renzong Qiu, 2017 (44). Moreover, although FMT has recently received great attention, there is still a gap in the understanding of FMT around the world, even in countries using it (45, 46). Wider acceptance of this therapy can be achieved by implementing regulations that address the ethical and social issues facing its application, such as the autonomy and privacy of patients and donors; by promoting research investigating its safety and efficacy; and by using standardized methods in its preparation and application, including stool banking (14, 18, 19). Moreover, to promote its dissemination to countries in the Middle East such as Jordan, country-specific social norms, traditions, customs, religious backgrounds, and structures should be taken into consideration in introducing and regulating FMT (44). Our results demonstrated that the majority of respondents had heard of FMT treatment but did not practice it. In contrast to Jordan, where FMT is not yet regulated or practiced, FMT has been practiced in China since the fourth century, where traditional Chinese medicine used "yellow soup", a fecal slurry taken orally, to treat food poisoning and diarrhea (28, 47). This justifies the high familiarity with this treatment modality among Chinese clinicians (28). Nevertheless, familiarity does not guarantee experience in using it; Zipursky et al. (48) reported that physicians have limited experience with FMT despite having treated patients with multiple recurrent CDIs. In general, our study population was neither enthusiastic about nor supportive of the introduction of such treatment. They did not see promising utility for it in other future applications. Barriers to the promotion and recommendation of FMT mostly include the absence of official guidelines and regulations, followed by the risk of infections and long-term risk and safety. This is in concordance with Kelly et al.
(49), who reported on physicians' attitudes towards FMT in 2010 at the American College of Gastroenterology meeting. They found that 40% of physicians who had heard of FMT were not willing to try it, pending further demonstration of its efficacy and safety. Nevertheless, Kelly et al. showed that physicians' recommendations were positively influenced by patients' perceived acceptance (49, 50). This was not the case for our respondents. In general, unwillingness to recommend FMT treatment was related to many factors: limited knowledge among the study population (38), the limited number of practitioners performing it (40), and the "yuck" factor (51). Other reasons for physicians not offering or referring a patient for FMT were "not having the right clinical situation", "the belief that patients would find it too unappealing", and "institutional or logistical barriers" (48). In his commentary, Brandt (51) related physicians' hesitation to recommend FMT to the limited number of randomized controlled trials demonstrating effectiveness and safety. He predicted that patients' needs, in addition to the availability of aesthetically acceptable formulations, are influential parameters in the acceptance of this treatment modality among physicians. Indeed, we found that the lower GI tract was the only acceptable route of administration of FMT. This might affect how accepted FMT formulations will need to be regulated in Jordan in the future. In line with international legislation, our respondents would not recommend FMT as a first-line treatment, but only when there is failure of conventional treatment or a desire for organic, natural treatments. This is in agreement with the attitudes of Iranian clinicians and gastroenterologists, who reported a willingness to accept FMT as a therapeutic option if it is scientifically justified and ethically approved, given that it was used as synthetic microbiota rather than fecal matter (52).
Clinical efficacy is a crucial factor that maintains patients' positive attitudes towards fecal microbiota transplantation (53) and physicians' willingness to advise and refer patients to the FMT treatment modality. The reported physicians' responses regarding the efficacy and safety of FMT were diverse: while a major concern about FMT efficacy and safety was reported among Chinese clinicians (43), Zipursky et al. (48) reported only minor doubts about FMT's efficacy and safety among physician respondents at Dartmouth-Hitchcock Medical Center and Baylor College of Medicine (Texas, USA). In light of the above-described barriers and the limited efforts to increase awareness of the uses, efficacy, and safety of the FMT treatment modality, we predict that the introduction and regulation of this treatment modality in Jordan will not happen soon. Accordingly, efforts should be put forth to increase awareness of its utility and effectiveness and to highlight the ethical and cultural/religious challenges facing its application, such as patients' vulnerability, donor anonymity and data de-identification, and the consenting procedure. Moreover, the legislative and ethical challenges facing the establishment of biobanks in Jordan, including privacy and confidentiality, specimen ownership, and informed consent, should be addressed (54). According to the US FDA, during the investigational use of FMT, the potential risks and benefits, including the unknown risks and the long-term risks, should be clarified for qualified patients during the consenting procedure (Food and Drug Administration 2013). Consenting is an ethical challenge in FMT, which was recognized by close to 50% of our participants. The FMT consenting procedure should consider patients' vulnerability, unforeseen long-term risks, and limited knowledge of the actual benefits and risks of the treatment, in addition to the universal ethical requirements of biomedical research (55). Ma et al. (2017) (28) believe patients' compromised decision-making capacity and vulnerability are the main challenges to informed consent. They consider CDI patients vulnerable, desperate individuals who can easily be affected by emotive language describing the treatment as natural and safe, whether from physicians or the media. This was opposed by Bunnik et al. (2017) (13), who believe that it is not vulnerability or the capacity to consent but rather inadequate information that poses difficulties with regard to the FMT consenting procedure. Other important challenging parameters in the consenting process are the cultural/religious or personal/ideological food restrictions of a stranger donor. In their commentary, the authors questioned whether informed consent to FMT can be obtained without information about the donor's diet. This is an important ethical challenge that is highly relevant to our region's population, which is mostly Muslim; observing the religious commitment to halal, nonalcoholic foods and beverages is therefore essential. Our respondents think that religion, dietary intake, and alcohol consumption will be considered barriers to patients' acceptance of FMT. Accordingly, we perceive that it could be necessary to declare the donor's dietary habits to obtain an autonomous decision in this region.
An important parameter highlighted by the respondents' comments was the need to consider the religious point of view and to seek a Fatwa. This was declared by 30% of the participants, in addition to their perception of the need for more knowledge about safety. Therefore, we conclude that our health care practitioners are reluctant to use FMT because of concerns about safety and religious beliefs. Ma et al., 2017 (56) highlighted important cultural and religious beliefs that might affect public acceptance of FMT. Some people might consider FMT an unsanitary treatment, and some will limit the donor to those who eat specific foods, such as vegans, or to those of a specific religion, such as Muslim patients who might not accept a fecal transplant from non-Muslim donors. All these barriers underline the importance of demarcating region-specific FMT regulations that take into consideration the cultural and religious background of the public. Although there is growing awareness of ethics in human research, Alahmad et al., 2012 have shown that research ethics regulations and guidelines in Middle Eastern Arab countries suffer from various degrees of deficiency with regard to ethical protection (57). They recommended that social norms, traditions, customs, and familial ties should all be taken into consideration when developing policies and regulations. In interviews with medical professionals from the Middle East, Alahmad et al., 2015 (58) reported the social importance of protecting confidentiality. In our study, de-identification and anonymity of donors scored 100% as an ethical concern in conducting FMT among the Jordanian clinicians. They mostly agreed that confidentiality can be protected by double-blinding both the donor and the recipient and by ensuring the confidentiality of patient information during communication with others. Limitations: Firstly, our study adopted convenience sampling from the capital of Jordan (Amman); therefore, the findings may not be generalizable to other provinces or worldwide. However, the objective of this study was to assess the perceptions of health care providers regarding ethical and social concerns about FMT, as the first such study among this population, and we do not expect our results would substantially change among other Jordanian physicians. Secondly, we had a limited number of physicians who had used FMT, making it more difficult to fully comprehend the procedure and its risks and benefits; attitudes might change if they had positive experiences treating patients with it. In conclusion, our study demonstrated a lack of enthusiasm among health care providers to implement FMT in Jordan, although there is general support for its potential use as a second line of treatment when other traditional medical treatments fail. There are complex ethical, religious, and practice-based challenges that need to be addressed before FMT becomes an established practice. Future studies should examine FMT from local traditional and especially religious perspectives, as well as the other barriers found in our study, including consenting, privacy, and risks. Patient (end-user) perspectives are lacking and would be important for understanding the level of acceptability among those who need FMT. Furthermore, there should be more education to increase the understanding of FMT benefits and risks among Jordanian health care practitioners. List of Abbreviations FMT: fecal microbiota transplant
Table fragment (survey items on ethical concerns):
b. Privacy protection of personal information
c. De-identification and anonymity of donors
d. Ownership and property of samples
e. Access regulation to data and samples
Table 2: Mean negative views' score regarding FMT among different health care practitioners.
…and physicians advising and referring patients to the FMT treatment modality. The reported physicians' responses regarding the efficacy and safety of FMT were diverse. While a major concern about FMT efficacy and safety was reported among Chinese clinicians (43), Zipursky et al. (48) reported minor doubts about FMT's efficacy and safety among physician respondents at Dartmouth-Hitchcock Medical Center and Baylor College of Medicine (Texas, USA).
Interpreting ecology and physiology of fossil decapod crustaceans
Decapods are the most diverse and complex group of crustaceans, adapted for life in all parts of the marine environment, many aquatic habitats, and some terrestrial niches. With this diversity of life styles, a vast range of morphotypes of decapods has evolved, exploiting almost every imaginable variation in morphology of the complex exoskeleton that characterizes them. Many of the morphological variants are a response to exploiting a particular niche in which the organisms live or an adaptation to particular behavioral characteristics. Assessing the significance of morphological variation in the fossil record is challenging because of the taphonomic overprint that results in loss of soft tissue, preservation of partial remains of hard parts, and vastly reduced numbers of preserved individuals as contrasted to the once-living population. The purpose of the present paper is to identify aspects of morphology that may be useful in interpreting the behavioral responses of the organism to its environment, with primary emphasis on morphological features of the exoskeleton that are not expressed on all individuals but that occur at low, and unpredictable, frequencies.
Introduction: functional morphology
When assigning significance to a specific morphological attribute, neontologists are able to study the living organisms, observe their behavior, and interpret the functional significance of that morphological attribute. Because paleontologists can observe behavior only by analogy with living organisms, they rely on deductive reasoning to interpret the fossil record. Thus, by noting the crenulated regions near the antennal base of some lobsters or along the inner surface of the propodus of some extant crabs, and observing the behavioral pattern in living animals of generating a rasping sound by rubbing that surface against the carapace margin (stridulation), it is possible to identify similar structures in some fossil remains and to assign to them a similar function (Feldmann & Bearlin, 1988). Similarly, the function of grooming appendages, characteristic of many shrimp and other decapods, can be inferred on the basis of modern analogy. Much of the work of paleontologists is characterized by this functional morphological approach. Form and function of the carapace and appendages has been the subject of voluminous literature. The two most comprehensive works are those of Schafer (1972) and Manton (1977), and their efforts will not be repeated here. Suffice it to say that feeding behavior may be deduced from the shape of claws and the nature of their denticles, and life style, including swimming, burrowing, and free living, can be interpreted from the shape of the carapace. Numerous examples are given within those familiar works. Thus, form reflects behavior.
Some crabs exhibit behavior that permits them to live within other organisms. Perhaps the best known are the Pinnotheridae, tiny crabs which may be endosymbiotic within bivalve, gastropod, and polyplacophoran molluscs, brachiopods, echinoderms, enteropneusts, ascidians, worms, and burrows of worms and other decapods (Ross, 1983). Their presence has been noted since the time of Aristotle (Aristotle, 1862). Several free-living pinnotherids have been described from the fossil record (Schweitzer & Feldmann, 2001) but there has been only one report of pinnotherids within the host organism (Zullo & Chivers, 1969). Another endosymbiotic association is typified by modern gall crabs, the Haplocarcinidae. Some of these tiny crabs perform a specific behavior to induce corals to form a gall that surrounds and protects the animal (Ross, 1983). Their presence in the fossil record should be assured because they are essentially entombed within coral tissue; despite that, none has been reported from the fossil record. However, there are several reports of tiny, cryptic crabs living within sponge and coral structures that are of note (Collins & Wierzbowski, 1985; Muller & Collins, 1991; Beschin et al., 1996, 2001). These crabs often form a very diverse assemblage but are almost completely unknown because they are very small, tend to be preserved in the same fashion as the surrounding coral, and are extremely difficult to detect.
Frequency of occurrence of morphological traits within a species
A different approach to understanding the relationship between morphology and behavioral adaptations may be taken by assessing the frequency of occurrence of a particular morphological trait within a fossil population. Morphological traits may be expressed in all individuals within a species or may be found in only a subset, depending upon the origin of the trait. Morphological attributes that are necessary for the functioning of all individuals within a taxon should be discernible on all members of the taxon, if the appropriate region is preserved. Swimming behaviour in nektobenthic crabs, for example, is often facilitated by expansion and flattening of the distal elements of some of the pereiopods and transverse elongation of the carapace (Schafer, 1972). Similarly, decapods adapted for burrowing, for a purely nektonic life, or for other life styles have been characterized on the basis of morphological attributes. Many of these types of characters not only have functional significance but, because of their ubiquitous presence in the population, are useful in identifying and classifying the organism. Other characters, associated primarily with ontogeny, may exhibit systematic variation that can be expressed as changes in morphometrics of the organism. Thus, allometric growth may be definable within a collection of fossil decapods. Schweitzer & Feldmann (2000) recognized allometric growth patterns in the Eocene geryonid, Chaceon peruvianus (d'Orbigny), that helped distinguish it from a contemporaneous portunid in which the proportions of the adult portunids were similar to the proportions of the juvenile geryonid.
Characters that are gender-specific are anticipated to occur with a frequency equal to the gender distribution within the taxon. In decapods, that ratio is quite variable and in the fossil record it may be difficult to discern. Schweitzer Hopkins & Feldmann (1997) studied a large suite of specimens of Eocene-Oligocene mud shrimp from Washington State (USA) and determined that 65% of the claws were male and 35% were female. This study is particularly significant because it demonstrated that two previously named species were, in fact, sexual dimorphs of a single taxon, Callianopsis clallamensis (Withers). In a more recent study of Pleistocene-Holocene decapods from Guam, Schweitzer et al. (2002) noted that the ratio of males to females, determined on specimens exhibiting well-preserved pleons, ranged from 50% to 89%. Some shrimp, notably the Pandaloidea within the Caridea, undergo sexual reversal within their ontogeny. The shrimp typically grow to maturity as males, undergo sexual reversal, and live the final part of their lives as females (Bliss, 1982). In this instance, secondary sexual characteristics should be expressed in a 50/50 ratio; however, the females should typically be larger than the males. Regardless of the ratio, it is clear that gender differences will result in occurrences of both primary and secondary sexual characteristics at a frequency less than 100% for each gender.
Finally, certain morphological attributes observable in fossils occur at very low, and unpredictable, frequencies. These characters reflect interaction of the individual crab with its environment. Such conditions as pathology and parasitism, infestation by epibionts, predation, autotomy, and regeneration of normal or abnormal appendages lie within this category. In many cases, these conditions have little effect on the hard parts of the organism and, therefore, are not recognizable in the fossil record. In other cases, such as autotomy or regeneration, the evidence may be quite circumstantial. If an appendage is not preserved in the fossil record, it may be attributed to loss after death of the organism rather than to autotomy. However, some conditions do leave evidences that can be recognized and interpreted, as discussed below.
Fig. 1. 1, dorsal view of an extant Callinectes sp. with Balanus sp. epibionts (scale bar equals 10 mm); 2, dorsal view of Lobocarcinus pustulosus Feldmann & Fordyce, with arrows showing the position of a large, straight serpulid and a small, coiled serpulid worm tube (scale bar equals 10 mm); 3, enlarged view of the concave surface of the counterpart of Trichopeltarion greggi Dell, from the Miocene of New Zealand; outer cuticular material adheres to the counterpart, and where the cuticle is fortuitously broken away, a mould of the interior of a balanid barnacle epibiont is revealed (scale bar equals 5 mm); 4, ventral view of Tumidocarcinus giganteus Glaessner, from the Miocene of New Zealand, showing the straight-sided, exceptionally wide male abdomen of a feminised individual (scale bar equals 10 mm); 5, dorsal view of Torynomma australis Feldmann et al., from the Cretaceous of Antarctica, showing a severe bopyrid isopod swelling on the right branchial chamber (scale bar equals 10 mm); 6, scanning electron micrograph of a portion of the cuticle of an extant Cancer sp., from Mexico, showing exfoliating exocuticle and concomitant loss of an encrusting bryozoan (scale bar equals 1 mm); 7, malformed claw of an extant Homarus americanus H. Milne Edwards, from Maine, USA (scale bar equals 10 mm).
Most pathological and parasitic conditions in decapods have little effect on the exoskeleton. Two that have been recognized are parasitization within the branchial chamber by bopyrid isopods and within the reproductive system by rhizocephalan barnacles. The former condition has been summarized by Forster (1969), who reported numerous swellings attributed to isopods in the Galatheidae and Prosoponidae of Late Jurassic age and in Cretaceous Raninidae. Prior to his work, van Straelen (1928) and Housa (1963) had also summarized occurrences of bopyrids. Subsequent to Forster's work, Muller (1984) documented bopyrid swellings in Miocene decapods (Galatheidae, Porcellanidae), Bishop (1986) recognized bopyrids in Cretaceous Homolidae, Collins & Rasmussen (1992) noted their presence in a Late Cretaceous raninid, and Feldmann et al. (1993) recorded them from a Cretaceous species of the Torynommidae (Fig. 1.5). Although this list may not be exhaustive, it is interesting to note that all the fossil occurrences of bopyrids are in galatheids, porcellanids, or in the so-called primitive crabs. Geographically, fossil bopyrids have been identified from as far north as Greenland to as far south as Antarctica. A review of parasitism in extant crustaceans (Overstreet, 1983) noted that bopyrids typically were found in macrurans and anomurans. The only macrurans discussed were shrimp, a group with a poor fossil record.
Other parasites, rhizocephalan barnacles, produce a condition called parasitic castration. The process involves introduction of the barnacle into the intestinal tract and destruction of the androgenic gland in males, and the barnacle ultimately manifests itself by suppressing male hormones and feminizing the males. Females that are infested take on a mature appearance at a prematurely early stage (Overstreet, 1983). This condition has most frequently been observed in portunids, although it is known in other brachyurans as well. The only record of parasitic castration in the fossil record is that of a Miocene xanthid from New Zealand (Feldmann, 1998). Recognition of this condition can be made only by determining that the abdomen of infected males is unusually broadened to simulate the form of the abdomen of females (Fig. 1.4); thus, it is necessary to observe the ventral morphology of a large number of crabs to recognize the condition.
The exoskeleton of decapods provides a firm substratum that may serve as a base for attachment of a host of epibionts on extant taxa (Fig. 1.1), including hydroid and anthozoan cnidarians, bryozoans, bivalves, barnacles, and annelid worms. Almost any organism requiring a firm base of attachment might be anticipated. The incidence of fouling of the decapod carapace by these organisms is quite variable, based upon age of the host, duration of the inter-molt period, location on the organism, ecological setting, and life habit of the host (Ross, 1983). Many of these epizoans have no hard parts and have only a slight chance of being preserved in the fossil record. Others with hard parts may be lost after death of the host as the waxy epicuticle separates from the remainder of the cuticle (Fig. 1.6) (Waugh & Feldmann, work under way). Still others are not recognized because, when opening concretions, a thin layer of exocuticle may remain attached to the counterpart, obscuring the epibionts (Fig. 1.3) (Waugh & Feldmann, work under way). The result is that the known occurrence of epibionts on fossil decapods is considerably lower than would be anticipated based upon rates of fouling on living decapods. Despite these conditions, a variety of attached epibionts have been recorded (Fig. 1.2), including bryozoans and brachiopods (S.L. Jakobsen, pers. comm.), oysters (Bishop, 1981, 1983; Tshudy & Feldmann, 1988), barnacles (Glaessner, 1969; Feldmann & Fordyce, 1996), and serpulid worms (Tshudy & Feldmann, 1988; Feldmann & Fordyce, 1996). The most promising possibility for discovering more, and different, epibionts is under study now. Sten L. Jakobsen (Geological Museum, Copenhagen) is using clever, novel preparation techniques to expose a diverse assemblage of organisms from the Middle Danian Fakse Beds in Denmark. This may develop into the most important single locality for studying epibionts, both in terms of prevalence and diversity of occurrence.
Hermit crabs present a quite different combination of host-epibiont interaction because a typical hermit crab occupies the empty shell of a gastropod, and a variety of organisms, including hydrozoan cnidarians and bryozoans, are known to invest the shell (Taylor, 1994). Specimens from Argentina, currently under study, are typical because the hermit crab initially occupies a relatively small gastropod shell and, instead of replacing the shell with a larger one periodically, the hermit crab relies on the incrusting bryozoans to continue growing and developing a larger, coiled, protective sheath mimicking the form of the original shell. Until very recently, gastropod shells have been taken to be the primary domiciles of choice for hermit crabs, but Fraaije (2003) described an in situ occurrence of a hermit crab within an Early Cretaceous ammonite.
One type of facultative epibiotic relationship that does not seem to manifest itself in the fossil record is that of carrying, or snagging, epibionts as a defensive or camouflaging technique. The sponge crabs, Dromiidae, carry a cap of a sponge, an anemone, or a piece of shell over the carapace that is held by the fifth pereiopods. The cap does not adhere to the carapace and, when released by the crab, leaves no trace. Similarly, the spider crabs, Majoidea, typically have setal hairs shaped like the hooks on Velcro. The hairs trap vegetation and other material as camouflage but, again, leave no trace upon death of the organism.
Epibionts have not been used often by paleontologists to infer aspects of the depositional environment, and their occurrence is all too frequently nested into systematic papers; thus, the information is difficult to extract. Because some of the epibionts may be ecologically sensitive, it is quite possible that we can learn more about the setting in which the decapods lived by studying their intimate associates.
Predation is, of course, a daily occurrence for decapods in modern settings, because they form a food resource for many organisms. However, in the fossil record, evidence of predation is limited, partly because the effects are unrecognized and partly because the effects include total destruction of the remains. Bishop (1975) described a partial crab specimen within a phosphatic nodule and interpreted it to be a regurgitate. A similar interpretation was given for a specimen collected from the Lower Cretaceous of Mexico (Feldmann et al., 1998). Other occurrences have been noted but probably represent hydraulic accumulations. Tshudy et al. (1989) described the feeding habits of nautiloid cephalopods on exuvia of lobsters, noted an instinctive pattern of eating the remains from the posterior of the abdomen towards the anterior, and postulated that this selective ingestion of the abdomen might explain the larger percentage of carapaces than pleons found in the marine fossil record. Interestingly, examination of a large collection of freshwater crayfish from the Pliocene of the western United States showed no difference in the number of carapaces and pleons. Nothing in that environment was utilizing the crayfish skeletons in the fashion of the cephalopods.
Crayfish have been documented as prey species in one instance where it was concluded (Feldmann & May, 1991) that the systematic removal of the dorsal part of the carapace of Pleistocene crayfish was the result of predation either by a small mammal or by man. Bishop (1972) noted the only occurrence known to me of a crab that was attacked by a toothed animal, presumably a fish, and escaped. Large puncture marks document the unsuccessful interaction, unsuccessful at least in terms of the predator.
Autotomy, casting off an appendage in the face of predation, and regeneration of the lost limb occur frequently in the modern world but are difficult to document in the fossil record. Occasionally, a fossil specimen will be illustrated that appears to have an unusually small first pereiopod; however, this is only circumstantial evidence of autotomy. As stated earlier, absence of the appendage on a fossil could arise as a result of many factors. The one unequivocal example of regeneration would be the growth of a deformed appendage as a result of damage during the growth process. The presence of not only deformed, regenerated claws, but also deformed carapaces, has been well documented in living Homarus americanus H. Milne Edwards (Fig. 1.7). I am not aware of any demonstration of the phenomenon in the fossil record, although I have always been curious about the bizarre claw depicted on Schlueteria tetracheles Fritsch & Kafka (1887, fig. 53).
Summary
Ecological and physiological characteristics of fossil decapod Crustacea can be inferred by using a variety of functional morphological approaches. Additionally, considering the frequency of occurrence of a morphotype in a population of fossil crabs may reveal features unique to the individual as an indication of its interaction with the environment. Although much is known about the behavioral patterns of living crabs and, by analogy, fossil forms, many significant observations are presented within detailed systematic works and are difficult to locate. This summary is, in part, a notice of the types of interpretations that can be made and a plea to call specific attention to low-frequency, unpredictable morphological characters.
Energy Efficiency in Water Supply Systems: GA for Pump Schedule Optimization and ANN for Hybrid Energy Prediction
According to Watergy (2009), about two or three percent of the energy consumption in the world is used for pumping and water treatment for urban and industrial purposes. The consumption of energy in most water systems all over the world could be reduced by at least 25% through improvements in energy efficiency. Hence, the development of models which define operational strategies for pumping stations, aiming at the best energy efficiency solution, is clearly important.
Introduction
In the last decades, the managers of water distribution systems have been concerned with the reduction of energy consumption and the strong influence of climate changes on water patterns. The subsequent increase in oil prices has intensified the search for alternatives to generate energy using renewable sources and to create hybrid energy solutions, in particular associated with water consumption. According to Watergy (2009), about two or three percent of the energy consumption in the world is used for pumping and water treatment for urban and industrial purposes. The consumption of energy in most water systems all over the world could be reduced by at least 25% through improvements in energy efficiency. Hence, the development of models which define operational strategies for pumping stations, aiming at the best energy efficiency solution, is clearly important.
The consumption of electric energy due to water pumping represents the biggest part of the energy expenses in the water industry sector. Among several practical solutions which can enable the reduction of energy consumption, the change in pumping operational procedures proves to be very effective, since it does not need any additional investment but is able to induce a significant energy cost reduction in the short term. As is well known, the tasks of operators of drinking network systems are very complex because several distinct goals are involved in this process. To determine, among an extensive set of possibilities, the best operational rules that safeguard the quality of the public service and also provide energy savings, through optimization tools which take into consideration all the system parameters and components, is undoubtedly a priority.
The technological advances in the computational area have enabled, in recent years, an improvement in the quality of scientific work on optimization tools aiming at the reduction of energy costs in the operation of drinking systems. Nevertheless, most of the optimization models developed were applied to specific cases. The first studies to optimize the energy costs of pumping used operational research techniques such as linear programming (Jowitt and Germanopoulos, 1992), integer linear programming (Little and McCrodden, 1989), non-linear programming (Ormsbee et al., 1989) and dynamic programming (Lansey and Awumah, 1994). The limited use of these models in real cases is mainly due to the complexity of solving the equations that ensure the hydraulic balance and the difficulty of generalizing such optimization models to any water supply system (WSS).
Brion and Mays (1991), in an attempt to reduce the operational costs of a drinking pipeline in Austin, Texas (USA), tested a model of optimization and simulation, achieving a reduction of 17.3% in the operational costs. Ormsbee and Reddy (1995) applied an optimization algorithm in Washington, DC and obtained significant results with the management implementation provided by the model, observing a reduction of 6.9% in the costs with electric energy. During this period, the use of evolutionary algorithms was quite limited. Wood and Reddy (1994) were the pioneers in the use of such algorithms.
The remarkable use of evolutionary algorithms in this research topic in recent years is mainly due to the fact that the Genetic Algorithm (GA) provides great flexibility in exploring the search space and allows an easy link to other simulation models. In contrast, however, the GA does not directly handle problems with constraints. Since the operation of a WSS is considered a complex procedure with many constraints, there remains the question of the speed of the modelling and the convergence to optimal solutions when GAs are coupled with hydraulic simulators. Additionally, the concern with the reduction of computational time is due to the applicability of energy optimization models in real time (Jamieson et al., 2007; Rao and Salomons, 2007; Alvis et al., 2007). To reduce the computational time for seeking solutions with reduced energy costs, these authors used the technique of Artificial Neural Networks (ANN) to reproduce the results obtained by the hydraulic simulator EPANET (Rossman, 2000). This new ANN-based tool for hydraulic simulation was then connected to a GA model. After several analyses of a hypothetical system and two real case studies, the authors concluded that the GA-ANN model found optimal solutions in a period 20 times shorter than GA-EPANET. Shamir and Salomons (2008) sought to reduce the computational simulation time based on a scale model of a real case system for different operating conditions.
In the present research a different approach was adopted. In order to reduce the computational simulation time in the search for optimal solutions, a change in the GA algorithm type was made, instead of replacing the hydraulic simulator model (EPANET) as in the former references. Thus, new algorithms were created which work directly on the infeasible solutions generated by the GA to make them feasible, through the development of a hybrid genetic algorithm (HGA), i.e. a genetic algorithm plus repair algorithms. This new model determines, in discrete intervals (every hour), the best programming to be followed by the pumps' switching on/off, in a daily perspective of operation. In this way, the decisions are oriented by a search among thousands of possible combinations, choosing, through an iterative process, the energy management strategy that presents the best energy savings.
The world's economy is directly connected to energy, and energy is the straight way to produce quality of life for society. China is nowadays one of the biggest consumers of energy in the world (Wu, 2009). In order to have enough energy to make its economy grow, the prediction of new solutions to produce sustainable energy in a more feasible way is imperative, not only depending on conventional sources (i.e. fossil fuels) but also using renewable sources.
The increase of energy consumption, the desired reduction of the use of fossil fuels and the rise of the harmful effects of pollution produced by non-renewable sources are among the most important reasons for conducting research in renewable and sustainable solutions. In the analysis of Koroneos (2003), renewable sources are used to produce energy with high efficiencies and significant social and environmental benefits. Renewable energy includes hydro, wind, solar and many other resources.
To avoid problems caused by weather and environmental uncertainties that hinder the reliability of a continuous production of energy from renewable sources when only a single-source production system is considered, the integration of various sources, creating hybrid energy solutions, can greatly reduce the intermittences and uncertainties of energy production, bringing a new perspective for the future. These hybrid solutions are feasible applications for water distribution systems that need to decrease their costs with the electrical component. Such solutions, when installed in water systems, take advantage of power production based on the system's own available flow energy, as well as on locally available renewable sources, saving on the purchase of energy produced from fossil sources and contributing to the reduction of the greenhouse effect. In recent studies (Moura and Almeida, 2009; Ramos and Ramos, 2009a; Ramos and Ramos, 2009b; Ramos, 2008, 2009), the option to mix complementary energy sources like hydropower, wind or solar seems to be a solution to mitigate the energy intermittency when compared with a single source. The idea of a hybrid solution thus has the advantage of compensating the fluctuations between available sources with decentralized renewable generation technologies.
In the literature, a sustainable energy system has been commonly defined in terms of its energy efficiency, its reliability, and its environmental impacts. The basic requirement for an efficient energy system is its ability to generate enough power for the world's needs at an affordable price, with a clean supply, in safe and reliable conditions. On the other hand, the typical characteristics of a sustainable energy system can be derived from policy definitions and objectives, since these are quite similar in industrialized countries. The improvement of efficiency in energy production and the guarantee of a reliable energy supply seem nowadays to be common interests of developed and developing countries (Alanne and Saari, 2006). This work aims to present an artificial neural network model for the optimization of the best economical hybrid solution configuration applied to a typical water distribution system.
Objective function
The search for the optimal control settings of pumps in a real drinking network system is a problem of high complexity, because it involves a high number of decision variables and several constraints particular to each system. The decision variables are the operational states of the pumps $x_t = (x_{1t}, x_{2t}, \ldots, x_{Nt})$, where N represents the number of pumps and t is the time-step throughout the operational time. To represent the states of the decision variables in each time-step, binary notation is used: the configuration of each pump is represented by a bit indicating whether the pump is switched on or off. The main goal of the model is to find the configuration of the pumps' status which leads to the lowest energy cost scenario over the operational time duration.
To calculate this cost, several variables must be considered in each time-step, such as the variation of consumption, the energy tariff pattern and the operational status of each pump. The objective function is the sum of the energy consumed by the pumps over the whole operational time, given the water consumption and the tanks' storage capacity. It can be expressed according to the following equation:

$$\min \; F = \sum_{t=1}^{T} E_t \, C_t$$

where $E_t$ and $C_t$ denote the consumed energy (kWh) and the energy cost of the pumps' operation in time-step t.
Constraints
The main constraint of the model is the hydraulic balance verification for the network. To establish such balance, the equations of the conservation of mass at each junction node and the conservation of energy around each loop in the network must be satisfied. For these conditions to be met, the hydraulic verification must be carried out for each system configuration; the hydraulic simulator EPANET (Rossman, 2000) was used for this purpose. These constraints are implicit in the calculation of the objective function: they are equations that need to be solved in order to obtain the total energy cost of the solution under analysis. After this stage, some variables from the hydraulic simulation are verified in order to evaluate the hydraulic performance of the system by means of the following explicit constraints.
Pressure: for each time-step of the operational time, the pressures at all junction nodes must lie between the minimum and maximum limits,

$$P_{\min,i} \le P_{it} \le P_{\max,i}$$

where $P_{it}$ represents the pressure at node i in time-step t, and $P_{\min,i}$ and $P_{\max,i}$ are the minimum and maximum pressures required for node i.
Levels of storage tanks: the levels of the storage tanks must lie between the minimum and maximum limits in each time-step. Besides, at the end of the operational time they must not be lower than the levels at the beginning; this last constraint ensures that the levels of the tanks do not decrease over repeated operational cycles,

$$S_{\min,j} \le S_{jt} \le S_{\max,j}, \qquad S_{j,T} \ge S_{j,0}$$

where $S_{jt}$ is the level of tank j in time-step t, and $S_{\min,j}$ and $S_{\max,j}$ are the minimum and maximum levels of storage tank j.
Pumping power capacity: the power used by each pump during the operational time must be less than its maximum capacity,

$$PP_{kt} \le PP_{\max,k}$$

where $PP_{kt}$ is the power used by pump k in time-step t and $PP_{\max,k}$ is the maximum capacity of pump k.
Actuation of the pumps: the number of pump start-ups in the operational strategy must be below a pre-established limit. This constraint, presented by Lansey and Awumah (1994), affects the maintenance of each pump, since the more often it is put into action within a single operational cycle, the greater its wear will be. Lansey and Awumah (1994) suggest a maximum of 3 pump start-ups in 24 hours; a greater value can cause problems in the pumps, inducing the need for maintenance and repair and, consequently, the interruption of the system operation. The constraint reads

$$NA_k \le NA_{\max,k}$$

where $NA_k$ represents the number of start-ups of pump k and $NA_{\max,k}$ the maximum allowable number of start-ups for pump k.
Optimization algorithm
The definition of optimal control strategies in water distribution systems, where the rules evaluate the behaviour of the system and make decisions at each time-step, requires a great computational effort. Among the several available optimization methods, the Genetic Algorithm (GA) was chosen for offering great flexibility in the search space, allied to the possibility of using discrete variables.
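To make the formulation concrete, the following is a minimal Python sketch of how a candidate schedule could be scored against the objective and the explicit constraints. The `simulate_hydraulics` helper is a hypothetical stand-in for an EPANET run (it is not part of the EPANET API), `limits` is an assumed container for the bounds, and the power-capacity check is omitted since it follows the same pattern.

```python
import numpy as np

def pump_startups(schedule_row):
    """Count pump start-ups as 0 -> 1 transitions within the horizon."""
    s = np.asarray(schedule_row)
    return int(np.sum((s[:-1] == 0) & (s[1:] == 1)))

def evaluate(schedule, tariff, simulate_hydraulics, limits):
    """Score one candidate schedule against the objective and constraints.

    schedule : (N_pumps, T) array of 0/1 bits, one per pump per time-step
    tariff   : (T,) energy price in each time-step
    simulate_hydraulics : hypothetical EPANET wrapper returning
        pressures (nodes x T), levels (tanks x T) and energy (pumps x T, kWh)
    limits   : dict with the bound values used by the explicit constraints
    """
    pressures, levels, energy = simulate_hydraulics(schedule)

    # Objective: F = sum_t E_t * C_t (consumed energy times tariff).
    cost = float(np.sum(energy.sum(axis=0) * tariff))

    violations = []
    # Pressure limits at every junction node and time-step.
    if np.any(pressures < limits["p_min"]) or np.any(pressures > limits["p_max"]):
        violations.append("pressure")
    # Tank levels within bounds in every time-step ...
    if np.any(levels < limits["s_min"]) or np.any(levels > limits["s_max"]):
        violations.append("tank_level")
    # ... and the final level must not fall below the initial one.
    if np.any(levels[:, -1] < levels[:, 0]):
        violations.append("final_level")
    # Maximum number of start-ups per pump in the operational cycle.
    if any(pump_startups(row) > limits["na_max"] for row in schedule):
        violations.append("startups")
    return cost, violations
```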
The GA technique is also easy to manipulate, which facilitates its connection with simulation models. The model developed is composed of two modules that work together: the hydraulic simulation routine is called to simulate each operational alternative scenario proposed by the GA, in the search for alternatives with better performance.
Prediction algorithm
The conception of an ANN that captures the knowledge domain of a configuration and economical simulator (CES) in a much more efficient way is based on the following steps: first of all, a robust database has to be developed to create the input and output data set that will be used in the ANN conception and training; the data then have to be analysed to determine a structure that fits the problem; and finally the ANN is trained and validated. A flowchart describing the procedures of the designed ANN is shown in Figure 2. The data used in this study are calculated by means of a CES model that gives an optimized ranking of the best hybrid solution for each particular case, based on an economic analysis of the production and consumption of energy (Figure 2). This data set is organized around the subject of the study: to evaluate the use of hybrid energy solutions in water distribution systems based on micro-hydro, wind turbines and the national electric grid. Hence, the range of data is defined so as to suit the installation of such energy converters. The data ranges for flow, power head and water level variation in reservoirs are used in a hydraulic and power simulator (HPS) to determine the power consumed by the pump and the power produced in a micro-hydro turbine installed in a gravity pipe branch whenever there is energy available in the system.
Simple Genetic Algorithm (SGA)
The GA is a stochastic method of global search that develops the search through the evolution of a population, where each element (or individual) is the representation of a possible solution to the problem. The principle is based on the theory of natural selection and was first presented by Goldberg (1989). In drinking systems' operation, the GA stands out for being very efficient when binary and discrete variables are used. The individuals represent a set of optimal solutions, not only one. At each new computational step, solutions containing the status of the pumps are evaluated and then classified according to their fitness. The tendency is that, as the run proceeds, the elements with less fitness disappear and those more adapted to the impositions (or constraints) of the problem arise.
GAs do not deal directly with optimization problems that contain constraints. This impediment in the minimization procedure can be overcome by employing penalty methods, in which pre-defined constraints are added to the objective function in terms of penalties, making a solution less apt the more violations occur. The Multiplicative Penalty Method (MPM), presented by Hilton & Culver (2000), is implemented in this model. The penalty function can be expressed as

$$F_p = F \cdot \prod_{TR} \prod_{i=1}^{NTR} k_{TR,i}$$

where TR is the type of constraint; NTR is the number of hydraulic elements (nodes, reservoirs or pumps) which have violated a given constraint; and k is a coefficient which varies with the hydraulic element and the type of violated constraint. The values of k represent how much the energy cost is increased for a particular type of violated constraint (TR). These values were determined from the number and importance of the constraints in the model.
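A minimal sketch of how the Multiplicative Penalty Method could be applied follows; only the 1.05 (pressure) and 1.80 (discontinuity of supply) coefficients come from the text, the other values are placeholders within that stated range, and `violation_counts` is assumed to come from a constraint check such as the one sketched above.

```python
# Illustrative penalty coefficients per constraint type; k = 1 when a
# constraint is not violated, so only violated types appear in the product.
K = {"pressure": 1.05, "tank_level": 1.80, "final_level": 1.40, "startups": 1.40}

def penalized_cost(cost, violation_counts):
    """Multiplicative Penalty Method: the energy cost is multiplied by the
    coefficient k once for every hydraulic element that violates each
    constraint type, making heavily violated solutions less apt."""
    fitness = cost
    for constraint_type, n_elements in violation_counts.items():
        fitness *= K[constraint_type] ** n_elements
    return fitness

# Example: three nodes out of their pressure limits multiply the cost by
# 1.05 ** 3, i.e. penalized_cost(100.0, {"pressure": 3}) -> 115.7625.
```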
Considering the extreme values of k (1.05 and 1.80): each node that exceeds its pressure limits increases the value of the objective function by 5%. The lower value was adopted for this violation because, commonly, the number of nodes in a WSS is much higher than the number of tanks and pumps. On the other hand, when a discontinuity of supply occurs in the system, it has great importance for the feasibility of the solution, so the maximum value was adopted for this type of violation, increasing the energy cost by 80%. Following this logic, the remaining violations have intermediate k values. When a constraint is not violated, the coefficient k has the unit value.
The first stage of the SGA process (Figure 1) is characterized by the random generation of operational rules, the demand definition and the tariff costs. Next, these variables are used by the hydraulic simulator (i.e. EPANET), which calculates the pressures at the pipe system nodes, the energy consumed and the levels of the tanks, all of them necessary for the evaluation of the solution. The following stage is the calculation of the objective function, obtained from the total energy cost and, in the case of an infeasible solution, from the penalty function. The process is repeated until the parameters of the operational control meet the hydraulic requirements at the lowest possible cost.
Hybrid Genetic Algorithm
The SGA makes use of the penalty method, turning the infeasible solutions into solutions with reduced fitness. The genetic operators only diversify the solutions; they do not make them feasible. In this case, the search process for hydraulically feasible solutions with minimum energy costs is strongly stochastic. During the evaluation of the objective function, the explicitly constrained variables can be evaluated every hour. Thus, at this time interval, it is possible to verify which types of constraints were violated. For this reason, repair algorithms were created which, every hour, try to correct the solutions generated by the GA, making them hydraulically feasible. The HGA layout of the model is presented in Figure 3. Each solution generated by the GA is passed on to the repair algorithms. After this stage two solutions are stored: the original, generated by the GA, and the modified solution, generated after the correction attempts. If the penalty function of the modified solution is zero, it is sent to a data bank; otherwise, this solution is discarded. Independently of the destiny of the modified solution, the original solution is conserved and sent to the next generations of the GA, avoiding a premature convergence of the solutions.
The repair algorithms are simply a set of rules that modify the decision variables, trying to make solutions hydraulically feasible at all hours (Figure 4). Among the types of corrections presented in Figure 4, the one related to the maximum number of pump start-ups is the only one that does not use the EPANET routines. This is the first type of repair applied to infeasible solutions and aims mainly at the reduction of pump start-ups, changing the original configuration of the solution as little as possible, as sketched below. In Figure 5, with only four changes, the number of start-ups was reduced from six to two. Besides the considerable reduction, a greater uniformity of the pumps' switch-on schedule is visible in the repaired solution, which presents only two periods with the pump switched on.
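A sketch of this start-up repair idea follows, under the simplifying assumption that start-ups are counted as 0-to-1 transitions within the horizon; it merges on-periods by filling the shortest off-gap between two of them, changing as few bits as possible, and is a heuristic illustration rather than the exact rule set of the model.

```python
def count_startups(s):
    """Start-ups counted as 0 -> 1 transitions within the horizon."""
    return sum(1 for i in range(1, len(s)) if s[i - 1] == 0 and s[i] == 1)

def repair_startups(schedule_row, na_max):
    """Merge on-periods by filling the shortest off-gap between two of them;
    each merge removes one start-up while changing the fewest bits."""
    s = list(schedule_row)
    while count_startups(s) > na_max:
        gaps, start = [], None
        for i in range(1, len(s)):
            if s[i - 1] == 1 and s[i] == 0:
                start = i                           # an off-gap opens after an on-period
            elif start is not None and s[i] == 1:
                gaps.append((i - start, start, i))  # (length, begin, end)
                start = None
        if not gaps:                                # no two on-periods left to merge
            break
        _, begin, end = min(gaps)                   # shortest gap = fewest changed bits
        s[begin:end] = [1] * (end - begin)
    return s

# Example in the spirit of Figure 5: six start-ups reduced to two.
# repair_startups([0,1,0,1,0,1,1,0,0,1,0,1,0,0,0,0,0,0,1,1,1,0,0,0], na_max=2)
```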
The use of long operation periods is characteristic of common strategies in real pumping systems, due to less intervention in the operation and reduced wear of the pumps. After the HGA iterations finish, the solutions stored in the data bank (feasible solutions) are sent to a specialized local search process. This search algorithm is an iterative process in which, hour by hour, the pumps are switched off one by one, verifying whether the constraints remain inviolate. If the solution becomes hydraulically infeasible, the initial solution is restored. The selected hour is the one with the highest energy cost. The process is repeated until there are no alterations that result in feasible solutions. With the specialized local search algorithm it is possible to evolve good solutions into locally optimal solutions. These solutions would probably require great computational effort to be found by the conventional GA.
Artificial Neural Network
The data on the performance characteristics of the renewable sources are included in the CES model to determine the best hybrid energy solution to be selected. One of the resource data items is the power curve of a selected wind turbine, which corresponds to the local wind source along an average year for the region under analysis (Figure 6), together with the annual average wind speed applied to the wind turbine. Table 2 presents an example of the data set range to be used in the CES model to determine the inputs and outputs of the developed ANN. These data are used to calculate all the energy and economic parameters to be included in the CES model, completing the data needed to train the ANN. Based on a basic data range, depending on the system characteristics (Table 2), to be used in the CES model, and on auxiliary hydraulic and energy formulations, the complete input data is then obtained (Table 3). At the end of the modelling process the input data set is assembled into a matrix. The ANN data set created for water distribution systems is then ready to determine the NPV of each hybrid system evaluated, for each type of configuration (e.g. grid, grid + hydro, grid + wind, grid + hydro + wind).
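A minimal sketch of the NPV prediction step follows, assuming scikit-learn's MLPRegressor as the network; the feature set and the training rows are toy placeholders standing in for the CES-generated data set described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy CES-generated rows: flow (l/s), net head (m), mean wind speed (m/s),
# energy tariff, configuration code (1 = grid+hydro, 2 = grid+wind, ...).
X_train = np.array([[6.6, 40.0, 5.0, 0.10, 1],
                    [6.6, 40.0, 5.0, 0.10, 2],
                    [7.0, 35.0, 4.0, 0.12, 1],
                    [5.0, 30.0, 6.0, 0.08, 2]])
y_train = np.array([18950.0, -2500.0, 15000.0, -1000.0])  # NPV targets (toy values)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
ann.fit(X_train, y_train)

# Predict the NPV of a candidate system; the configuration with the highest
# predicted NPV is chosen as the best hybrid solution.
print(ann.predict([[6.6, 40.0, 5.0, 0.10, 1]]))
```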
The reduction of this cost would only be possible with the implementation of water loss control by leakage. The level of the tank Cascalheira is always maintained close to the maximum limit in a way that it increases the reliability of the system. Thus, in the optimization model, it was chosen to consider only the variation of the level of the tank Fazarga at downstream of the pump-station. The tank of Cascalheira has the storage capacity of 4000 m ³ of water, whereas Fazarga has a total volume of 347 m ³ and operates with the initial, minimum and maximum levels of 2.0 m, 0.3 m and 2.3 m, respectively. The pump-station comprises two pumps of Grundfos NK65-250 type which work for an average flow of 42 1/s with an efficiency of 65%. The average time variation of the consumption in the region of Fátima during the day was obtained from the sensors located at the exit of the tank Fazarga. The period analyzed was from March to September, 2007. The water consumption in this year is more noticeable for comprising spring and summer. Figure 8 presents the average time variation calculated. The hours with the pump working are considered as regular and discrete intervals by the optimization algorithm. Thus, for this case study, a day in which the pumps remained switched-on, in intervals similar to the format considered in the optimization model, were chosen. The hydraulic model of the system was built, in which the tanks Cascalheira and Fazarga were considered as reservoir and storage tank, respectively. The variation in the level of the tank of Fazarga during the day calculated by the hydraulic simulator was similar to the real values. The maximum number of pump start-ups (Na max) used by Veolia was three (pump 1) and the level of the tank at the end of the operational time is very close to the initial one ( Figure 9). The variation of the energy rate is presented in Table 5. ,0465 0,0465 0,0465 0,0465 0,0465 0,0465 0,0465 0,0465 0,0465 0,0761 0,1299 0 ,1299 0,0761 0,0761 0,0761 0,0761 0,0761 0,0761 0,0761 0,1299 0,1299 0,0761 0 Both GA models presented in this analysis were implemented to determine the best operational strategy with a reduced energy cost in the system Cascalheira/Fazarga. Figure 10 presents the evolution of the objective function with the computational time, in minutes. Figure 10. Convergence of the fitness functions It is possible to evaluate the efficiency of the HGA model. Only with the feasible solutions obtained with 20 generations, from the repair algorithms and from the specialized local search system, it is possible to find a local optimal solution in about 5 minutes, whereas the SGA took a little more than 33 minutes to find a good solution, with also a bit higher energy cost when compared to the solution found by the HGA. The difficulty for GA to find a good feasible solution can quickly be confirmed. Such behaviour occurs due to the high level of randomness existent in GA models. The alterations of the solutions provided by the genetic operators diversify the type of answer without a guarantee of the evolution in each generation. Among all possible solutions, the probability of extracting, for each pump, a solution with at most three start-ups is 0.0173. Now, it is possible to confirm the difficulty of obtaining a feasible solution, because besides the determination of a solution it is necessary the other constraints (pressure limits, water levels in tanks and power pumps start-ups) be satisfied. 
These constraints depend on the complexity of the drinking system under evaluation. The pumps remained switched on during 12 hours; a period of two hours (13h and 22h) belongs to the period with the most expensive energy tariff (Figure 11). The variation in the reservoir level is the main factor in the operational decision-making, and the variation of the energy tariff is the second. The best solution obtained by the HGA, in each iteration step, is selected from a set of solutions containing only hydraulically feasible individuals; the objective function in this case is the total energy cost. For the SGA, while the model does not find a feasible solution, the objective function is the sum of the energy cost and the penalty function. The operational strategy found by the HGA and the variations of the water level in the Fazarga tank, for the real situation and for the solution with reduced energy cost, are shown in Figures 11 and 12.
From Figures 9, 11 and 12 it is possible to compare the operational strategy presently adopted by the water manager company with the one obtained by the HGA optimization model. The variation of the energy tariff was well exploited in the solution with the important reduction of the energy cost (HGA); a significant difference between the strategies can be observed, the pumps notably not working in the hours with the more expensive energy tariff. With the implementation of the optimization model, an economy of 31% was achieved for the period chosen for the analysis. In operational terms, the strategy obtained from the HGA can be considered more daring: at the critical time (1:00 p.m.) the level of the tank in the present operation by the water company reached values above 1 m, whereas the minimum water level allowed in the Fazarga tank is 0.30 m. If an economic solution with higher levels in the tanks is desired, it is easy to increase the minimum limit of the water level in the constraints of the developed HGA model. The trade-off between the minimum water level attained in the tank and the energy costs to be paid by the water company will depend on the water company's priorities, economic and social impacts, and performance or feasibility factors.
Prediction of hybrid energy solutions in the Espite system
Espite is located in Ourém; it is a small system that distributes water to the villages of Couções and Arneiros do Carvalhal, and the average flow in this pipe system is approximately 7 l/s. This system is hydraulically analysed to determine the best hydro solution. Then the ANN is applied to establish the best economical hybrid solution, employing the same data set used to develop the ANN model. A simplified scheme of the Espite water drinking system is presented in Figure 13. The pump station considered in the analysis comprises pumps Carvalhal 1 and 2, and the micro hydro power plant will be installed in the gravity pipe system between node 5 and the tank Carvalhal. The population consumption (i.e. the demand points) must be guaranteed, and the tanks' water level variation should stay between recommended limits. The elevation profile of the Espite system is shown in Figure 14. The HPS model is used to verify all hydraulic parameters and the system behaviour when a hydropower plant is installed. Rule-based controls are defined in the optimisation process to guarantee that the limit tank levels are always respected, as sketched below.
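Such rule-based tank-level controls can be pictured as a simple threshold rule with a hysteresis band; the limits in the sketch below are illustrative (borrowed from the Fazarga tank example), not the actual Carvalhal settings.

```python
def pump_command(tank_level, current_state, level_min=0.3, level_max=2.3):
    """Threshold rule with a hysteresis band: switch the pump on below the
    minimum tank level, off above the maximum, otherwise hold the status."""
    if tank_level <= level_min:
        return 1            # switch on to refill the tank
    if tank_level >= level_max:
        return 0            # switch off to avoid overflow
    return current_state    # keep the current status inside the band

# Example: pump_command(1.2, current_state=1) -> 1 (keeps pumping mid-band).
```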
In order to determine the most adequate hydro turbine for this water pipe system, bearing in mind the importance of always maintaining good operational management and satisfying the demand flows, the available energy must be evaluated and a characteristic turbine curve compatible with all operating and hydraulic constraints must be developed. According to Araujo (2005) and Ramos et al. (2010), evaluating a characteristic curve for the turbine, and thereby selecting the most adequate turbine, is key to the success of this solution. The system is then analysed using the electricity tariff for the worst conditions. The energy report for the original situation is shown in Table 6. To reduce pump consumption, the pumping schedule is optimized: the pumps are turned on during the low-tariff period and off during the high-tariff period, always imposing tank-level restrictions so that the minimum and maximum advisable values for good operation are satisfied. Figure 15 shows the system behaviour in terms of water-level variation and the optimized pump operating times, and Table 7 shows the savings achieved with water-level control and pump operation optimization for the adopted energy tariff pattern. The energy production of the hydropower plant is calculated for the selected hydraulic turbine, considering a sell rate of 0.10 €/kWh over 24-hour production, as shown in Table 8 (energy production of the hydropower solution), together with the saving achieved with this energy configuration. The operating point of the turbine corresponds to a net head of 40 m and an average flow of 6.6 l/s, determined by the HPS model from extended-period simulations of 24 h. After calculating the pump consumption and the turbine production, these values are inserted in the developed ANN model and compared with the results obtained with the CES model. The analysis of the best hybrid energy solution takes into account that the wind speed in the region of this case study has an average value of 5 m/s. The wind turbine considered was the SW Skystream 3.7 model, with a rated power of 1.8 kW and a market price of €15,000, together with a micro hydro turbine (or a pump working as a turbine, PAT) with an estimated market price of €2,500 and a nominal power of 3.14 kW. For a lifetime analysis of 25 years, the ANN results show that the best hybrid solution for this case study is grid + hydro, with an NPV of €18,966; the CES results point to the same solution with an NPV of €18,950, giving a relative error of 0.08% and a correlation coefficient of 0.999996. Figure 16 presents the results for all configurations calculated by the ANN and CES models, clearly showing the best solution. The negative NPV of the Grid+Wind and Grid+Hydro+Wind configurations derives from the initial installation cost of the wind turbine and its small energy production. A larger wind turbine with a higher installed capacity was not chosen because the wind speed in the case-study area is very low, and wind turbines with satisfactory energy production at such wind speeds are extremely expensive, which makes them inadequate for this small system with limited investment resources.

Optimization of the pumps' schedule in the Fátima system

The feasibility of the developed HGA model in the search for the best operational strategy, with the lowest energy cost, in the real Fátima system was analysed. Two algorithms were developed and linked to the GA.
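The NPV comparison of configurations can be reproduced in outline as follows. This is a minimal sketch, not the CES model itself: the discount rate, availability and capacity factors, and any cost figure other than those quoted above (wind turbine €15,000 and 1.8 kW; hydro/PAT €2,500 and 3.14 kW; sell rate 0.10 €/kWh; 25-year horizon) are placeholders, so the printed values will not match the paper's.

```python
def npv(annual_cash_flow, initial_cost, years=25, discount_rate=0.05):
    """Net present value of a constant annual cash flow after an upfront cost."""
    pv = sum(annual_cash_flow / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - initial_cost

# Illustrative inputs (assumed, except where quoted in the text).
SELL_RATE = 0.10                                  # €/kWh, as quoted
hydro_kwh_per_year = 3.14 * 0.8 * 8760            # 3.14 kW, assumed 80% availability
wind_kwh_per_year = 1.8 * 0.15 * 8760             # 1.8 kW, assumed 15% capacity factor

configs = {
    "grid+hydro": npv(hydro_kwh_per_year * SELL_RATE, 2_500),
    "grid+wind": npv(wind_kwh_per_year * SELL_RATE, 15_000),
    "grid+hydro+wind": npv((hydro_kwh_per_year + wind_kwh_per_year)
                           * SELL_RATE, 17_500),
}
for name, value in sorted(configs.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: NPV = {value:10.0f} EUR")
```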
The first, a repair algorithm, analyses the infeasible solutions generated by the GA and alters their decision variables in an attempt to make them feasible. After the GA generations finish, the second algorithm performs a local search on these solutions in an attempt to find local optima. The efficiency of the developed HGA in the search for the solution with the lowest operational cost is confirmed, as convergence occurred six times faster. One of the biggest limitations of the GA is the treatment of problems with a large number of constraints: the application of penalties only allows infeasible solutions to be identified. In problems of this kind, it is likely that, across the generations of candidates, the number of infeasible candidates does not decrease, making the search for good solutions very difficult. With the implementation of repair algorithms, super-candidates appear in less time, since the alterations are made directly in the problematic genes of the individuals. An evaluation of the need for genetic operators, when these repair algorithms are applied directly to a large set of randomly generated solutions, also shows good final results. Determining the best strategy among thousands of possible solutions must also take into consideration the hydraulic reliability of the system. The HGA model presented can be implemented in any network; moreover, its application is practical and useful, and it can be used by water supply companies to support decisions aimed at energy efficiency in pumping systems.

Prediction of hybrid energy solutions in the Espite system

The present research work aims at predicting the best energy system configuration, depending on the renewable sources available in the region, and at optimizing operating strategies for water distribution systems (WDS), in which about 80% of costs are associated with energy consumption. Hence, an integrated methodology based on economic, technical and hydraulic performance has been developed using the following steps: (i) an Artificial Neural Network (ANN) to determine the best hybrid energy system configuration; (ii) a configuration and economic base simulator model (CES), used for the ANN training process; (iii) a hydraulic and power simulator model (HPS) to describe the hydraulic behaviour; and (iv) an optimization-based model to minimize pumping costs and maximize hydraulic reliability and energy efficiency. The objective is to capture the knowledge domain much more efficiently than with the CES alone, ensuring good reliability and the most economical hybrid energy solution for improving the energy efficiency and sustainability of WDS. In this case study, the installation of a micro hydro plant together with water-level controls and pump operation optimization improved the energy efficiency by 63.35%. Within this methodology for determining the best hybrid energy solution, the ANN demonstrated a significant reduction in modelling time, with good correlation and a low mean relative error.
The Impact of Color Rendering on Visual Fatigue in the Interior Zone of Tunnels

Judicious use of lamps is of profound significance for improving traffic safety inside tunnels. This study evaluated the effect of LED color on human visual fatigue under the mesopic vision category. According to the difference in the human eye's response to light radiation of different wavelengths, the mesopic spectral luminous efficiency curve is applied to the visual fatigue evaluation method. Taking the critical fusion frequency as the physiological index, a detection experiment on human visual fatigue was carried out in a simulated tunnel environment. The results show that spectra with a high color rendering index have a positive effect on alleviating drivers' visual fatigue and are more suitable for tunnel interior lighting.

Introduction

Visual fatigue is one of the important causes of traffic accidents. Due to the semi-enclosed spatial characteristics of highway tunnels, their visual pressure and monotonous environment are likely to aggravate visual fatigue, which seriously affects road traffic safety [1]. Therefore, it is imperative to study tunnel lighting from the perspective of visual fatigue. In recent years, light-emitting diodes (LEDs) have been widely used in traffic lighting due to their long life, energy saving, and other advantages. The LED spectrum is adjustable, which makes it possible to choose the light color of an LED; how to select the light color according to the needs of the scene has become an urgent problem, and at present there is no clear basis for light color selection in tunnel lighting design. The LEDs currently used in road lighting consist of a yellow phosphor excited by blue light, which makes the blue-light content of the spectrum relatively high. Because of the short wavelength of blue light, its focal point does not fall on the center of the human retina but slightly in front of it; to see clearly, the eye remains in a state of tension, causing visual fatigue and affecting driving performance. How to choose appropriate light color parameters to reduce the harm of blue light and alleviate driver visual fatigue therefore needs to be studied. Hawes et al. compared the effects of fluorescent lamps and LED lamps on cognitive performance and visual fatigue, and the results showed that subjects' fatigue was higher under fluorescent lamps than under LEDs [2]. Wang Qing et al. studied the effect of LEDs on visual fatigue in a reading task, and the results showed that, compared with low illuminance and low correlated color temperature (CCT), visual fatigue symptoms were milder under high illuminance and high CCT [3]. Liang Bo et al. studied the influence of light-source CCT on reaction time, and the results showed that increasing the CCT can improve visual efficiency [4]. At present, most research on visual fatigue focuses on light intensity and CCT, while research on the color rendering index (CRI) is still lacking. Therefore, this paper studies the influence of CRI on visual fatigue under low visibility, aiming to select the light color parameters most conducive to alleviating driver visual fatigue and thereby reduce the occurrence of traffic accidents. The research results can provide a reference for tunnel lighting standards, which is of great significance for improving road safety.
Theoretical calculation

In recent years, the blue-light hazard factor K_b of light radiation has been used to quantitatively reflect the influence of different wavelengths on the blue-light hazard to human eyes; the higher the value, the greater the impact on visual fatigue. It can be calculated by equation (1):

$$K_b = \frac{\int L_{ed}(\lambda)\,B(\lambda)\,\mathrm{d}\lambda}{K_e \int L_{ed}(\lambda)\,E(\lambda)\,\mathrm{d}\lambda} \tag{1}$$

where L_ed(λ) is the spectral power distribution (SPD) of the light source, B(λ) is the spectral weighting function for retinal blue-light damage, shown in Figure 1(b), E(λ) is the spectral luminous efficiency function of human vision, and K_e is its maximum spectral luminous efficiency. Huai et al. studied the influence of three CCTs on the penetration characteristics of fog and, by calculating the transmittance, concluded that 3000 K is more suitable for street lighting [5]. We chose spectra with different CRIs (60, 70, 80 and 90) at 3000 K for the experimental study, as shown in Figure 1(a). Because the tunnel interior lighting section belongs to the category of mesopic vision, it is unreasonable to use the human visual curve for photopic vision. Therefore, this paper uses the MES-2 model to obtain the mesopic spectral luminous efficiency curve; previous studies show that the luminance range of this model (below 5.0 cd/m²) is suitable for tunnel lighting [6]. The MES-2 mesopic spectral luminous efficiency model takes the form

$$M(m)\,V_{\mathrm{mes}}(\lambda) = m\,V(\lambda) + (1-m)\,V'(\lambda), \qquad 0 \le m \le 1 \tag{2}$$

$$V'(\lambda_0) = K_m/K'_m, \qquad \lambda_0 = 555\ \mathrm{nm} \tag{3}$$

$$L_{\mathrm{mes},n} = \frac{m_{n-1}\,L_P + (1-m_{n-1})\,L_S\,V'(\lambda_0)}{m_{n-1} + (1-m_{n-1})\,V'(\lambda_0)} \tag{4}$$

$$m_n = 0.7670 + 0.3334\,\log_{10}\!\left(L_{\mathrm{mes},n}\right), \qquad 0 \le m_n \le 1 \tag{5}$$

where V′(λ) is the scotopic spectral luminous efficiency function and K_m′ its peak value, V(λ) is the photopic spectral luminous efficiency function and K_m its peak value, T(λ) is the transmittance at different wavelengths for a given fog concentration, shown in Figure 1(c), L_S is the scotopic luminance, and L_P is the photopic luminance. Equations (4) and (5) form an iterative process, where n is the iteration step and m_n at convergence is the final output. V_mes(λ) is the resulting mesopic spectral luminous efficiency function, shown in Figure 1(d). In the calculation, we use the product of T(λ) and V_mes(λ) as E(λ). Figure 2 shows the blue-light hazard factors of the four CRI (60, 70, 80, 90) spectra under mesopic vision when the fog transmittance is 20% and 80%, respectively. The blue-light hazard factor at 20% transmittance is greater than that at 80% transmittance, and, regarding the effect of CRI, the hazard factor decreases as CRI increases. From the theoretical calculation results, it can be preliminarily judged that using a high-CRI light source helps reduce driver visual fatigue and improve road traffic safety.
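For readers who want to experiment with the weighting, the sketch below implements the iteration of equations (4) and (5) numerically. It is a sketch only: the Gaussian stand-ins for V(λ), V′(λ) and B(λ) are crude approximations of the tabulated CIE functions, and the constants a = 0.7670 and b = 0.3334 are those of the published MES-2 system.

```python
import numpy as np

wl = np.arange(380, 781, 1.0)  # wavelength grid, nm

# Crude Gaussian stand-ins for the tabulated CIE functions (illustrative only).
V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)    # photopic V(lambda)
Vp = np.exp(-0.5 * ((wl - 507.0) / 45.0) ** 2)   # scotopic V'(lambda)
B = np.exp(-0.5 * ((wl - 437.0) / 30.0) ** 2)    # blue-light hazard B(lambda)

def mes2_m(L_p, L_s, a=0.7670, b=0.3334, v555=683.0 / 1699.0):
    """Iterate equations (4)-(5) to the adaptation coefficient m."""
    m = 0.5
    for _ in range(100):
        L_mes = (m * L_p + (1 - m) * L_s * v555) / (m + (1 - m) * v555)
        m_new = min(1.0, max(0.0, a + b * np.log10(L_mes)))
        if abs(m_new - m) < 1e-6:
            break
        m = m_new
    return m

m = mes2_m(L_p=3.0, L_s=3.0)       # ~3 cd/m2, the luminance used in the experiment
V_mes = m * V + (1 - m) * Vp
V_mes /= V_mes.max()                # normalization, playing the role of M(m)

def blue_hazard_factor(spd, T):
    """Equation (1) with E = T * V_mes; K_e folded into the normalization."""
    E = T * V_mes
    return np.trapz(spd * B, wl) / np.trapz(spd * E, wl)

# Example call with a flat spectrum and no fog attenuation.
print(blue_hazard_factor(np.ones_like(wl), np.ones_like(wl)))
```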
Experimental setup

According to the above theoretical analysis, we conclude that a high-CRI light source under mesopic vision helps to alleviate visual fatigue. However, in an actual tunnel driving environment, the influence of spectra with different CRIs on visual fatigue still requires further study. As the organ that directly senses light stimulation, the eye's change in activity most directly reflects visual fatigue. Previous studies have confirmed that a decline in the critical fusion frequency (CFF) reflects a weakening of human eye function and can be used to measure the degree of visual fatigue [7]; the lower the CFF value, the greater the fatigue [8]. Compared with other physiological indicators, CFF is simpler and more accurate for measuring asthenopia, so CFF is selected here as the physiological parameter for detecting visual fatigue. The fog chamber is made of plexiglass with dimensions of 3 m × 2 m × 2 m. The inside of the chamber was lined with wallpaper imitating the road surface and walls of a tunnel. LED cubes are placed on both sides of the chamber to simulate different SPDs and light intensities. An observation window is set on the side close to the observer, and the bright-spot flicker indicator is placed on the observer's chest for CFF measurement. A white visual chart "E" is placed directly in front of the observer. The fog generator is placed behind the fog chamber to produce stable and uniform fog. Ten subjects with normal color vision and normal (or corrected) visual acuity participated in the experiment. First, the fog generator emits fog until the fog concentration in the chamber reaches the required value (20% or 80% transmittance). At the same time, the LED cube is adjusted to the required light environment (CRI set to 60, 70, 80 or 90). The luminance is set to 3 cd/m², corresponding to the actual luminance inside a tunnel. The subjects measured their CFF before the experiment; the flicker frequency was swept once from low to high and once from high to low, and the CFF value recorded by the experimenter is the average of the two measurements. The subjects were then asked to fix their eyes on the opposite visual chart, and CFF values were measured at 5, 10 and 15 minutes into the experiment.

Results and discussion

The CFF values of the 10 subjects under the different spectral conditions were obtained by this procedure. The CFF value at 0 minutes of each run is taken as the initial value, and the initial value minus the CFF values at 5, 10 and 15 minutes is taken as the CFF drop value. Figure 4 shows the CFF drop values for the different CRI sources at 20% and 80% transmittance. The CFF drop value increases with time and decreases with increasing CRI and fog transmittance. This shows that reducing the fog concentration inside a tunnel and improving the CRI of the light source both help to alleviate driver visual fatigue.

Conclusions

This paper discusses the effect of CRI on human visual fatigue under mesopic vision and low visibility. First, a theoretical calculation model was established by combining the mesopic vision curve with the blue-light hazard factor curve. The theoretical results show that a high CRI has a positive effect on alleviating driving fatigue. Then, a fog chamber system was built to simulate the driving environment inside a tunnel, and the CFF of simulated drivers was tested. The results show that the CFF drop decreases as the CRI and the fog transmittance increase. Given the fog transmittance and mesopic conditions, a high-CRI light source is recommended in the interior zone of tunnels to alleviate driver visual fatigue and ensure driving safety.
The Influence of Acid-Base Balance on Anesthetic Muscle Relaxants: A Comprehensive Review of Clinical Applications and Mechanisms

Muscle relaxants have broad application in anesthesiology. They can be used for safe intubation, for preparing the patient for surgery, or for improving mechanical ventilation. Muscle relaxants can be classified, based on their mechanism of action, into depolarizing muscle relaxants, non-depolarizing muscle relaxants, and centrally acting muscle relaxants. Non-depolarizing neuromuscular blocking drugs (NMBDs) (eg, tubocurarine, atracurium, pipecuronium, mivacurium, pancuronium, rocuronium, vecuronium) act as competitive antagonists of nicotinic receptors. By doing so, these drugs hinder the depolarizing effect of acetylcholine, thereby preventing stimulation of the muscle fibers. Depolarizing drugs like succinylcholine and decamethonium induce an initial activation (depolarization) of the receptor followed by a sustained and steady blockade. These drugs do not act as competitive antagonists; instead, they function as more enduring agonists than acetylcholine itself. Many factors can influence the duration of action of these drugs; among them, electrolyte disturbances and disruptions of acid-base balance have an impact. Acidosis increases the potency of non-depolarizing muscle relaxants, while alkalosis induces resistance to their effects; in depolarizing drugs, acidosis and alkalosis produce the opposite effects. The results of studies on the impact of acid-base balance disturbances on non-depolarizing relaxants have been conflicting. This work is based on the available literature and the authors' experience, and it aims to review the use of anesthetic muscle relaxants in patients with acid-base disturbances.

Introduction

Muscle relaxants are a large group of chemical compounds that can relax skeletal muscles. They play a crucial role in various clinical situations, being used as primary drugs for safe endotracheal intubation, in surgical procedures (eg, combined with anesthetics to prepare patients for surgery), or to assist ventilation in patients requiring mechanical ventilation. Muscle relaxants can be classified into 3 groups based on their mechanism of action: non-depolarizing neuromuscular blocking drugs (NMBDs), depolarizing neuromuscular blocking drugs, and centrally acting skeletal muscle relaxants [1-5]. Drugs belonging to the first group can further be categorized into 3 subgroups based on structure and reversal methods: steroid, benzylisoquinolinium, and asymmetrical mixed-onium chlorofumarate [2]. NMBDs, such as tubocurarine, atracurium, pipecuronium, mivacurium, pancuronium, rocuronium, and vecuronium, counteract the effects of acetylcholine on postsynaptic membranes, hindering its depolarizing impact and preventing stimulation of muscle fibers. On the other hand, depolarizing relaxants, like succinylcholine and decamethonium, induce an initial activation followed by sustained blockade, acting as more enduring agonists than acetylcholine [4,6]. The choice of muscle relaxant should consider aspects such as the patient's clinical condition (eg, liver and kidney function, electrolyte and acid-base balance disorders), the duration of drug action, the onset time, and the clinical context for which it is intended. Additionally, factors like sex, age, weight, body temperature, and concurrently administered medications can influence the effects of muscle relaxants [4,7-9].
Acid-base balance disturbances can affect the efficacy of these drugs. The normal pH in the human body ranges between 7.35 and 7.45. When the pH drops below 7.35, we refer to it as acidosis, and when it rises above 7.45, it is called alkalosis. There are 4 main acid-base balance disorders - metabolic acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis [10] - and this article discusses their impact on muscle relaxants. Currently, it is believed that non-depolarizing muscle relaxants exhibit increased efficacy in acidic conditions and decreased efficacy in alkalosis [2,9]; conversely, the depolarizing muscle relaxant succinylcholine shows the opposite pattern [9].

Medications classified as centrally acting muscle relaxants form a diverse group in terms of both their structure and the receptors they target in the central nervous system. These drugs are used to alleviate tension and spasms of skeletal muscles as adjunct therapy for discomfort and pain associated with various conditions involving musculoskeletal pain. Because of their different mechanisms of action and the different clinical scenarios in which they are used, this article focuses on medications from the first 2 groups - those that induce neuromuscular blockade [11,12]. This work is based on the available literature and the authors' experience. Results of studies on the effect of acid-base balance disturbances on non-depolarizing relaxants have been conflicting. This article aims to review the use of anesthetic muscle relaxants in patients with acid-base disturbances.

Acid-Base Balance Disturbances

One of the human body's adaptations to maintain homeostasis is to keep the pH between 7.35 and 7.45. Arterial blood gas analysis is performed to assess the parameters of acid-base balance. The normal values for the parameters assessed in this test are as follows: pH = 7.35 to 7.45, pCO2 (partial pressure of carbon dioxide) = 35 to 45 mmHg, pO2 (partial pressure of oxygen) = 75 to 100 mmHg, HCO3- (bicarbonate ions) = 21 to 28 mEq/L (parameters may vary between devices used for arterial blood gas analysis, so it is important to check the device's specifications and its reference values), and oxygen saturation ≥95% [10]. The maintenance of pH within this range is primarily governed by the bicarbonate buffering system, with involvement of the renal system and carbon dioxide regulation by the respiratory system. This mechanism allows partial or even complete compensation of certain disturbances, whereby the pH remains within the physiological range despite the presence of the disorder. Situations where the pH drops below 7.35 are called acidosis, while situations where it rises above 7.45 are termed alkalosis [10,13]. We distinguish 4 main disorders of acid-base balance: metabolic acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis [10]. Respiratory alkalosis is the most common acid-base abnormality, with no difference between males and females [14]. Table 1 shows the types of acid-base disorders and the parameters used for their differentiation [10,13-17].
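The first-pass differentiation summarized in Table 1 can be expressed as a simple decision rule. The sketch below is a deliberately simplified illustration using the reference ranges quoted above; it ignores compensation and mixed disorders, which require the full clinical algorithm.

```python
def classify_primary_disorder(ph, pco2, hco3):
    """Naive primary acid-base classification from arterial blood gas values.

    ph   : arterial pH (normal 7.35-7.45)
    pco2 : partial pressure of CO2, mmHg (normal 35-45)
    hco3 : bicarbonate, mEq/L (normal 21-28)
    """
    if ph < 7.35:  # acidemia
        if pco2 > 45:
            return "respiratory acidosis"
        if hco3 < 21:
            return "metabolic acidosis"
        return "acidemia, indeterminate (evaluate for mixed disorder)"
    if ph > 7.45:  # alkalemia
        if pco2 < 35:
            return "respiratory alkalosis"
        if hco3 > 28:
            return "metabolic alkalosis"
        return "alkalemia, indeterminate (evaluate for mixed disorder)"
    return "pH within normal range (possible compensated disorder)"

print(classify_primary_disorder(7.30, 50, 24))  # -> respiratory acidosis
```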
Metabolic acidosis can be caused by various poisons, such as cyanides, carbon monoxide, arsenic, toluene, methanol, ethylene glycol, or paraldehyde, by medications such as metformin or salicylates, or by diarrhea, renal tubular acidosis, diabetic ketoacidosis, and accumulation of lactate (lactic acidosis) in the course of sepsis [15,18,19]. The treatment of metabolic acidosis largely depends on the underlying cause. In sepsis or diabetic ketoacidosis, it involves appropriate fluid therapy and correction of electrolyte imbalances; in poisoning, it may involve administration of antidotes, dialysis therapy, or, in some cases, administration of bicarbonate [15].

Respiratory acidosis is caused by accumulation of carbon dioxide in the body. It can result from respiratory failure, which may be caused by chronic obstructive pulmonary disease, asthma, interstitial lung disease, myasthenia gravis, or centrally acting depressant medications such as opioids. The treatment of respiratory acidosis involves non-invasive and invasive respiratory support and medications that dilate the airways. In the case of respiratory depression induced by opioids, naloxone is used [16,20].

There can be several causes of metabolic alkalosis. One is excessive loss of hydrogen ions due to vomiting, for example in pyloric stenosis. Metabolic alkalosis can also occur in primary hyperaldosteronism or as a result of treatment with loop or thiazide diuretics [17]. Respiratory alkalosis may occur when CO2 production is low, in states of reduced metabolism such as coma, or when CO2 loss is excessive, as in psychogenic hyperventilation or mechanical ventilation. The treatment of alkalosis depends on its primary cause - in pyloric stenosis, surgical treatment is indicated. During the diagnosis of alkalosis, electrolyte disturbances such as hypokalemia or hypocalcemia should be assessed, as they can lead to cardiac rhythm disturbances; their evaluation may require an EKG. If they are diagnosed, appropriate treatment should be initiated [14,17].

The symptoms of acidosis and alkalosis vary with the underlying cause. In acidosis, symptoms may include weakness, drowsiness, altered consciousness, excessive sweating, and warm, flushed skin. In alkalosis, symptoms may include trembling hands, tingling in the hands and feet, muscle cramps, nausea, vomiting, dizziness, and altered consciousness [16,17,19-22].
Characteristics of Non-Depolarizing Muscle Relaxants

The mechanism of action of NMBDs is competitive antagonism of acetylcholine: they block the alpha subunit of the acetylcholine receptor on the postsynaptic membrane of the neuromuscular junction, preventing the attachment of acetylcholine. As a result, the motor endplate cannot depolarize, leading to muscle paralysis [1,2,4,7,8,23]. In some cases, these drugs can also directly block the ionotropic activity of acetylcholine receptors [2]. As previously mentioned, these drugs can be classified into several subgroups based on their structure [1,8]. This classification is clinically significant, as different methods are employed to reverse the blockade they induce, and compounds in these subgroups also exhibit different additional activities. The classification is presented in Table 2 [1-3,24-31]. Aminosteroid drugs can induce vagolytic activity, leading to tachycardia and hypertension. Benzylisoquinolinium compounds, especially mivacurium, atracurium, and doxacurium, demonstrate dose- and delivery-rate-dependent non-immunologic histamine release, causing facial flushing, hypotension, peripheral vasodilation, and, in rare cases, bronchospasm [1,2].

When selecting an appropriate skeletal muscle relaxant, considerations should be based on the onset time, the duration of action, the patient's clinical condition, and assessment of liver and kidney function. Muscle relaxants differ in their affinity for receptors (dissociation constant), metabolism, and elimination [8]. It is believed that the ideal non-depolarizing neuromuscular blocking drug should possess the specific characteristics listed in Table 3 [32,33], but no single currently available NMBD has all of them.

A clinically significant aspect is the ability to reverse the effects of muscle relaxants. Drugs with anticholinesterase activity, such as neostigmine and edrophonium, can be used for reversal. However, when using them, it is essential to also administer drugs with anticholinergic effects (such as glycopyrrolate or atropine) to block the action of acetylcholine on muscarinic receptors [1-3,29]. The use of neostigmine for reversing neuromuscular blockade should be avoided in patients with myasthenia, because its mechanism of action poses a risk of cholinergic crisis [46]. A specific drug that can be used to reverse the effects of rocuronium and vecuronium is sugammadex (a Selective Relaxant Binding Agent, SRBA), which allows faster reversal of blockade than the drugs mentioned above, with no adverse effects on the parasympathetic nervous system. It provides more efficient and safer reversal of moderate and deep muscle blockade than neostigmine - patients experience fewer adverse effects such as bradycardia, postoperative nausea, or postoperative residual paralysis [2,23].

In recent years, a new group of drugs has been discovered that can reverse blockade from both the aminosteroid and the benzylisoquinolinium groups - Calabadion 1 and Calabadion 2 - which have shown good results in studies on rats. However, safety and efficacy results from human studies are still lacking [24,47].
The Use of Non-Depolarizing Muscle Relaxants in Acidosis and Alkalosis

Many factors can influence the timing and strength of drug action, including organ dysfunction, electrolyte imbalances, and disturbances of acid-base balance. The action of NMBDs such as rocuronium, atracurium, vecuronium, pancuronium, and tubocurarine is likewise affected by acid-base imbalance. In general, it is believed that electrolyte abnormalities like hypokalemia, hypocalcemia, hypophosphatemia, and hypermagnesemia, as well as respiratory and metabolic acidosis (with respiratory acidosis having a stronger effect than metabolic acidosis), potentiate neuromuscular blockade, while hypothermia can prolong blockade (due to decreased elimination and metabolism) [1-3,7,23,46,48]. Respiratory acidosis also antagonizes reversal [1]. Furthermore, acidosis can decrease renal and hepatic blood flow, resulting in a prolonged drug half-life [46]. Potentiation of muscle paralysis may also occur in patients with eclampsia who have developed hypermagnesemia following magnesium sulfate treatment [1].

In sepsis, when the acid-base balance is disrupted and hemodynamic disturbances occur, recovery after administering NMBDs may be delayed due to reduced acetylcholinesterase activity in the neuromuscular junction; however, sepsis does not affect the onset of NMBDs [7].

Rocuronium is widely used for perioperative muscle relaxation to prepare patients for anesthetic and surgical procedures and to assist lung ventilation. It is also used off-label in defasciculating doses to prevent muscle fasciculation during muscle blockade, to prevent myalgia, and to prevent shivering in patients undergoing therapeutic hypothermia after cardiac resuscitation. The neuromuscular blocking strength of rocuronium is influenced by alterations in respiratory pH, rising with lower pH and falling with higher pH [35]. Respiratory alkalosis can delay the action of rocuronium, and this delay should be taken into account during hyperventilation [50]. It has also been found that ventilation-induced respiratory acidosis prolongs the neuromuscular blockade caused by rocuronium [51].

Atracurium is eliminated by Hofmann elimination and via ester hydrolysis by non-specific esterases in plasma. The speed of Hofmann elimination depends on temperature and pH and is slowed by acidosis and hypothermia [41]. Studies of atracurium blockade in patients undergoing renal transplantation have concluded that acid-base disturbances can affect recovery time and neuromuscular blockade: an acidic environment can prolong the metabolism of the muscle relaxant through diminished blood perfusion of the muscles, and lowering blood pH enhances the attraction of atracurium to the anionic acetylcholine receptors [52]. An experiment on 24 cats, in which the effects of acid-base imbalance on the neuromuscular actions of atracurium or vecuronium were studied, concluded that the potentiation of blockade induced by atracurium can be increased in both respiratory and metabolic acidosis; onset and recovery, however, were not influenced by the experimental imbalance [53]. Research on patients undergoing abdominal surgery has established the impact of acid-base balance on vecuronium: respiratory acidosis prolongs the duration and recovery time of vecuronium, while respiratory alkalosis shortens them [54].
Vecuronium is metabolized by the liver, so it should be used cautiously in patients with impaired liver function, in whom recovery from muscle paralysis can be prolonged. The drug should also be used with caution in patients with renal failure, as elevated urea concentrations may impair hepatic elimination and can lead to accumulation of an active metabolite [34].

Pancuronium is commonly recommended for use in pediatric cardiac surgery and other high-risk procedures in infants and children [55]. It can also be used for shivering during therapeutic hypothermia in cardiac arrest protocols [45]. Acidosis and hypokalemia contribute to an extended duration of paralysis, while alkalosis can counteract the blockade [56].

Tubocurarine is a myorelaxant that can cause apnea and is contraindicated in asthmatic patients. Postoperative respiratory acidosis can enhance undetected residual curarization [57].

Despite the general belief that acidosis enhances and alkalosis weakens neuromuscular blockade, there are exceptions. Pipecuronium-induced neuromuscular block is increased in metabolic alkalosis, as well as in acute respiratory and metabolic acidosis, while its action is decreased by respiratory alkalosis [58]. The effects of hypocalcemia, hypokalemia, hypermagnesemia, and respiratory acidosis on the benzylisoquinolinium derivative cisatracurium are unclear, but may be related to its metabolism through ester hydrolysis and Hofmann degradation [7].

Characteristics of Depolarizing Muscle Relaxants

Depolarizing muscle relaxants act as agonists at the acetylcholine receptor in the postsynaptic membrane of the neuromuscular junction, thereby depolarizing the motor endplate. These drugs are resistant to the action of acetylcholinesterase, and by inducing continuous depolarization they prevent further stimulation by acetylcholine. The block induced by depolarizing drugs occurs in 2 phases - depolarizing and desensitizing. In the first phase, stiffening and transient muscle fasciculation occur, corresponding to muscle depolarization. In the second phase, muscles cease to respond to acetylcholine released by motoneurons, leading to complete neuromuscular block [1,4].

Succinylcholine is the most widely recognized depolarizing neuromuscular blocking drug. It is the only drug in this category used in clinical settings and is the preferred choice for rapid sequence intubation (RSI) in emergency departments. It has a rapid onset of action (approximately 30 s) and a very short duration of action (5-10 min), and it is hydrolyzed by various cholinesterases present in plasma (eg, butyrylcholinesterase) [6,59]. Another drug in this group is decamethonium, but it is rarely used in clinical practice [4]. Drugs from this group are contraindicated in individuals with degenerative neuromuscular diseases and those with a history of malignant hyperthermia. In children with skeletal muscle myopathies such as Duchenne muscular dystrophy, there is a risk of rhabdomyolysis with hyperkalemia [1].
The Use of Depolarizing Muscle Relaxants in Acidosis and Alkalosis

Information about the influence of acid-base metabolism on succinylcholine dates from the 1960s, when scientists studied the effect of sodium carbonate-induced alkalosis on the action of neuromuscular blocking drugs in cat muscle preparations. The experiment showed that alkalosis potentiated the action of succinylcholine [60]. A study on the impact of acidosis on neuromuscular blockade showed that succinylcholine and decamethonium are antagonized by both metabolic and respiratory acidosis [61]. Succinylcholine remains the preferred drug for inducing paralysis, especially when a swift onset and offset of effect is required. Unfortunately, succinylcholine-induced lethal hyperkalemia is still being reported: in patients with metabolic acidosis, hypovolemia, or bleeding, succinylcholine can produce a greater rise in serum potassium than in patients with maintained homeostasis. In the event of cardiac arrhythmias caused by hyperkalemia after succinylcholine administration, treatment with calcium chloride, bicarbonate, and hyperventilation should be initiated as soon as possible [46]. Table 5 summarizes the implications of acid-base disorders for non-depolarizing and depolarizing muscle relaxants [7,8,23,52,54].

Changes in acid-base balance and electrolyte alterations may weaken or strengthen blockade in a patient, changing the recovery time of neuromuscular blockade. Metabolic factors (such as hypo- and hyperglycemia), electrolyte imbalances, acid-base disorders, and hypothermia may contribute to delayed emergence from anesthesia. A case report of 2 patients undergoing surgery observed that stress, pain, increased sympathetic activity with release of catecholamines, and continuous stimulation of beta receptors, combined with hyperventilation leading to respiratory alkalosis, could shift potassium ions into cells; this effect may be more pronounced in individuals with preoperative hypokalemia. The report demonstrated that raising potassium levels in these patients improved the level of consciousness, and recommended balancing serum potassium in cases of delayed emergence from anesthesia [48]. In a study of kidney transplant surgery patients in whom atracurium was used for blockade, a significant reduction in muscle relaxation duration and faster reversal of the blockade were observed when acid-base imbalances were treated using calcium carbonate. Based on this study, it was concluded that intraoperative treatment of acid-base disorders can shorten neuromuscular blockade and is a potential factor improving transplant outcomes [52].
The most popular methods for monitoring and predicting the course of blockade are the train-of-four (TOF), the train-of-four count (TOFC), and the train-of-four ratio (TOFR). TOF consists of 4 consecutive stimuli at 2 Hz applied to a chosen muscle group, typically the adductor pollicis muscle via stimulation of the ulnar nerve [3]. The desired response is a twitch indicating a specific muscle contraction. TOFR is determined by dividing the amplitude of the fourth twitch by the amplitude of the first twitch [1]. If TOFR is <0.9, there is a higher risk of residual blockade and postoperative complications, requiring use of a reversal agent; a TOFR below 0.7 indicates persistent blockade. TOFC provides information about the percentage of blocked receptors [3]. Adequate muscle blockade for surgery occurs when approximately 90% of receptors are blocked and 1 or 2 signals (twitches) are present [1,62]. The correlation between the percentage of blocked receptors and TOFC is presented in Table 6 [2]. About 75% of acetylcholine receptors are antagonized when the fourth twitch of the TOF disappears, and the level of receptor occupancy increases as further twitches disappear, from 85% for the third twitch to 95-100% for the first twitch. Adequate relaxation for surgery is considered present when 1 to 2 twitches of the TOF are observed [2].

Assessment of TOF is largely examiner-dependent, which leaves a wide margin of error. Therefore, other methods for evaluating the course of blockade should be considered, such as acceleromyography, strain-gauge monitoring, and electromyography [2,23]. Currently, a TOF ratio (TOFR) of 0.9 is recommended for reversal of blockade. However, because of the examiner-dependent margin of error mentioned above, the patient's muscle strength should also be assessed, including a sustained tetanic response and the ability to lift the head for at least 5-10 s, indicating an appropriate level of reversal of blockade [1]. It is recommended that clinically weak patients be left intubated with supported respiration until they can demonstrate a return of strength [23].

In 2023, the American Society of Anesthesiologists published a report containing practical guidelines and recommendations regarding assessment of blockade reversal. The report strongly recommended:
- do not rely solely on clinical assessment of blockade reversal,
- choose quantitative monitoring over qualitative assessment for residual neuromuscular blockade,
- confirm a TOFR ≥0.9 before extubation when using quantitative monitoring,
- use the adductor pollicis muscle for neuromuscular monitoring,
- avoid using ocular muscles for monitoring,
- in cases of minimal depth of neuromuscular blockade, consider using neostigmine as an alternative to sugammadex.
It is also conditionally recommended (due to low strength of evidence):
- when using atracurium or cisatracurium at minimal depth of neuromuscular blockade, consider using neostigmine to avoid residual blockade,
- in the absence of quantitative monitoring, after using neostigmine for blockade reversal, wait at least 10 min before extubation [62].
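The TOFR arithmetic and the thresholds quoted above can be summarized in a small helper. The twitch amplitudes below are illustrative; quantitative monitors implement this calculation internally.

```python
def tof_ratio(twitches):
    """Train-of-four ratio: amplitude of the 4th twitch over the 1st."""
    if len(twitches) != 4:
        raise ValueError("TOF requires exactly four twitch amplitudes")
    return twitches[3] / twitches[0]

def interpret_tofr(tofr):
    """Map a TOF ratio to the thresholds discussed in the text."""
    if tofr >= 0.9:
        return "adequate recovery (threshold for confirming reversal)"
    if tofr < 0.7:
        return "persistent blockade"
    return "residual blockade risk; reversal agent indicated"

amplitudes = [1.0, 0.9, 0.7, 0.5]   # example twitch amplitudes
r = tof_ratio(amplitudes)
print(f"TOFR = {r:.2f}: {interpret_tofr(r)}")
```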
Use of Muscle Relaxants in Metabolic and Neuromuscular Disorders

Patients with genetically determined metabolic disorders, such as the autosomal recessive inborn organic acidemias propionic acidemia and methylmalonic acidemia, require special attention and assessment by an anesthesiologist. These patients may experience recurrent episodes of metabolic acidosis, and some drugs should be avoided or used with great caution. This applies to neuromuscular blocking agents such as succinylcholine, cisatracurium, and mivacurium, which are metabolized through ester hydrolysis to odd-chain organic acids [63]. NMBDs should also be used with caution in patients with glycogen storage disease type II (Pompe disease), which is characterized by skeletal muscle myopathy, because residual weakness can be poorly tolerated [64].

Caution is also needed when using skeletal muscle relaxants in patients with neuromuscular disorders (NMDs), whose common feature is weakening of muscle strength and fatigue. This group of diseases can affect both children and adults and can be divided into 3 categories: prejunctional (including motor neuron diseases such as amyotrophic lateral sclerosis (ALS) and spinal muscular atrophy (SMA), peripheral neuropathies, and hereditary neuropathies such as Charcot-Marie-Tooth (CMT) disease), junctional (myasthenia gravis (MG) and Lambert-Eaton myasthenic syndrome (LEMS)), and postjunctional (including muscular dystrophies such as Duchenne and Becker muscular dystrophy, and congenital myopathies). In NMDs, muscle relaxants should be used only if necessary, and succinylcholine should be avoided. Studies have shown that succinylcholine can be used in patients with myasthenia gravis; however, the best choices of skeletal muscle relaxant for them are mivacurium and atracurium. Their action will be prolonged, so it is important to use them in reduced doses [65]. The use of skeletal muscle relaxants is challenging in patients with muscular dystrophies, especially in the pediatric population. Muscular dystrophies are a diverse group of genetically based diseases characterized by weakness and progressive damage of muscles, resulting from impaired synthesis or regeneration of contractile proteins. When using NMBDs in patients with muscular dystrophies, it should be remembered that they may have a faster onset of action, a longer duration of action, irregular action, and a high risk of residual paralysis [66]. In 2007, the American College of Chest Physicians published the Consensus Statement on the Respiratory and Related Management of Patients With Duchenne Muscular Dystrophy Undergoing Anesthesia or Sedation, which stated that the use of depolarizing neuromuscular blocking agents such as succinylcholine in patients with Duchenne muscular dystrophy is absolutely contraindicated due to the risk of rhabdomyolysis, hyperkalemia, and cardiac arrest [67].

Future Directions

Contemporary technologies enable the design and creation of new drugs whose properties can be applied in clinical practice; however, there is still no ideal non-depolarizing skeletal muscle relaxant. The search also continues for a universal drug that would allow the reversal of muscle blockade caused by both steroid and benzylisoquinolinium NMBDs. Progress in medicine will also lead to better and more accurate monitoring of the course of muscle blockade in the future.
Conclusions

Depolarizing agents provide a rapid onset and short duration of paralysis, while NMBDs offer more controllable muscle relaxation with the advantage of reversal options. The choice between them depends on the clinical scenario, patient characteristics, and the preferences of the anesthesia provider. These 2 groups of drugs are reported to act differently during acid-base balance disturbances, and it is essential to consider the effects of hypo- or hyperventilation when using myorelaxants. Regardless of the presence of acid-base balance disorders, it is important to remember that when using skeletal muscle relaxants, neuromuscular transmission should be monitored using TOF.

Institution Where Work Was Done

The article was written at Collegium Medicum, University of Warmia and Mazury, Olsztyn, Poland.

Table 1. Values of parameters assessed in arterial blood gas analysis evaluated during the differentiation of acid-base balance disorders.
Table 2. Structural and clinical classification of subcategories of NMBDs based on drug reversal patterns and additional activity.
Table 3. Characteristics that the "ideal" NMBD should have.
Table 4. Division of NMBDs based on their action and elimination mechanism from the body.
Table 5. Effects of acid-base disorders on neuromuscular blockade produced by non-depolarizing and depolarizing muscle agents.
Table 6. Number of blocked receptors depending on the number of signals during the train-of-four count measurement.
Trend Analysis of Las Vegas Land Cover and Temperature Using Remote Sensing

The Las Vegas urban area expanded rapidly during the last two decades. In order to understand the impacts on the environment, it is imperative that the rate and type of urban expansion be determined. Remote sensing is an efficient and effective way to study spatial change in urban areas, and Spectral Mixture Analysis (SMA) is a valuable technique for retrieving subpixel land cover information from remote sensing images. In this research, urban growth trends in Las Vegas are studied over the 1990 to 2010 period using images from the Landsat 5 Thematic Mapper (TM) and the National Agricultural Imagery Program (NAIP). The SMA model of TM pixels is calibrated using a high-resolution classified NAIP image. The trends of land cover change are related to the land surface temperature trends derived from TM thermal infrared images. The results show that the rates of change of the various land covers followed linear trends in Las Vegas. The largest increase occurred in residential buildings, followed by roads and commercial buildings. Some increase in vegetation cover, in the form of tree cover and open spaces (grass), is also seen, and there is a gradual decrease in barren land and bladed ground. Trend analysis of temperature shows a reduction over the new development areas with increased vegetation cover, especially in the form of golf courses and parks. This research provides useful insight into the role of vegetation in ameliorating temperature rise in arid urban areas.

Introduction

The Las Vegas urban area expanded rapidly during the last two decades [1]. The expansion slowed during the 2007-2009 recession, but more recently development activities have regained momentum. Las Vegas development has mostly consisted of residential and commercial areas. As a city in an arid environment with limited water resources [2] and geographically bounded by Federal lands [3], there is a limit to its sustainable urban growth; it is therefore important to monitor and understand the expansion of the Las Vegas urban area.

Urban expansion is an outcome of the complex interplay of natural and anthropogenic processes linked through feedbacks. In order to understand and model these processes, it is imperative that the rate and type of urban expansion be determined [4]. Urban and social scientists can use this information about the rate and type of urban expansion to develop and calibrate urban sprawl models [5,6]. Moreover, this information can help in better understanding the interaction between natural and urban phenomena and their impacts on issues such as urban temperature, air quality, and flooding [4,7,8]. Furthermore, maps of urban expansion can help urban managers and planners assess the efficacy of previous decisions.

Remote sensing continues to be an efficient and effective way to study spatial change in urban areas [9-12]. In particular, Thematic Mapper (TM) data from the Landsat missions has been used to understand urban growth and trends, manage and monitor resources and the environment, and plan future development [13-17]. The spatial resolution of TM data (30-m pixel) can be a limiting factor in capturing the details of an urban surface. Therefore, spectral unmixing of the surface response using Spectral Mixture Analysis (SMA) is a valuable technique for retrieving subpixel land cover information [15,17-20].
Nevertheless, the quality of spectral unmixing is highly dependent on the presence and abundance of pure pixels (called endmembers). This difficulty can be overcome by calibrating an SMA model with land cover from high-spatial-resolution images such as National Agricultural Imagery Program (NAIP) multispectral data (1-m pixel).

This paper presents geospatial aspects of urban growth in Las Vegas using remote sensing and relates the urban land cover change to surface temperature trends. The urban expansion of Las Vegas is studied over the 1990 to 2010 period using TM imagery. A linear spectral mixture model for the TM pixel is calibrated using a high-resolution classified NAIP image to extract subpixel land cover. The trends of land cover classes over the two decades are studied and related to the land surface temperature trends derived from TM thermal infrared images.

This paper is organized as follows. The following subsection discusses the relevant literature. A description of the study area and data used is provided in Section 2. Section 3 describes the remote sensing techniques used for data processing, key results are presented and discussed in Section 4, and Section 5 provides a brief summary and conclusions.

Background

The spatial nature of urban growth makes it suitable for the application of remote sensing and GIS analysis. Urban areas have complex dynamics, and it is imperative to monitor, understand, and predict their growth for sustainable development [21,22]. Due to urban complexity at various scales, remote sensing provides an efficient way to understand urban growth and sprawl [4]. GIS has been successfully applied to model urban sprawl [23,24], determine urban land use change [10], and predict long-term urban growth [25,26]. Donnay et al. [27] provide a wide range of techniques for urban analysis using remote sensing data, including segmentation, temporal prediction, and geostatistical analysis. Bhatta [12] analyzed various statistical methods in GIS and remote sensing applications to study urban development and growth processes. Wilson et al. [28] developed a geospatial model to quantify and map urban growth. Land cover change alters the surface energy balance due to anthropogenic materials and urban infrastructure, leading to changes in surface temperature. Fall et al. [29] analyzed climate data over the US and revealed a warming trend that could be explained on the basis of land use/land cover change. Wichansky et al. [30] analyzed the New Jersey region and showed through simulation that daytime maximum temperatures over urban landscapes are increased more than nighttime minimum temperatures, enhancing the daily temperature range.

Spaceborne remote sensing provides a synoptic view of urban areas and has been applied to many areas of urban studies, including urban surface runoff [31,32], road networks [33], and impacts on surface temperature [2,7,34]. Remote sensing can be applied to characterize urban sprawl [11,35], monitor urban growth [9,13,36], measure land use change [10], and understand surface reflectance from urban areas [37].
Almeida et al. [38] used Bayesian methods on GIS and remote sensing data to study land use change and generated forecasts of growth trends in São Paulo State, Brazil. Abed and Kaysi [39] used fuzzy logic on GIS and remote sensing data to identify the urban boundaries of the Beirut area, exploiting the strong dependency of surrounding areas on the core metropolitan region of Beirut. Bhatta [40] modeled urban boundary growth using geoinformatics and showed its benefits for controlling urban growth and sprawl in Kolkata, India.

Thematic Mapper datasets from multiple Landsat missions have proven very valuable for urban studies [13,15,16]. Landsat TM imagery has been used with spectral unmixing to assess land cover change at multiple scales for decision-making purposes [14], and it has also been used to study urban dynamics [13]. Spectral indices derived from TM imagery, such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Built-up Index (NDBI), have also been useful for analyzing urban change [41,42]. Zha et al. [42] achieved a high-accuracy urban map of Nanjing city in China using NDBI and NDVI. In particular, NDBI is based on the difference between shortwave infrared and near infrared reflectances to identify buildings and transportation infrastructure. Huang et al. [43] analyzed 77 cities of the world using TM data and used cluster analysis to understand the compactness of urban areas. As NDBI and NDVI from TM imagery only provide pixel-based distinctions at 30-m resolution, further enhancement requires subpixel approaches. Several techniques based on spectral unmixing or SMA of TM surface reflectance have been developed to estimate subpixel land cover information [15-20,44-46]. Typically, SMA techniques use the pure-pixel spectral responses of land cover classes to estimate fractional land cover in mixed pixels.

Classified high-resolution imagery can provide the land cover information needed to understand the spectral responses of larger pixels. High-resolution data are acquired less frequently, which is a limitation in itself; nevertheless, in combination with lower-resolution data of higher temporal resolution, a meaningful method to retrieve subpixel information can be developed.
Las Vegas Urban Sprawl

Las Vegas, since its establishment in 1905, has seen spurts of growth stimulated by events such as the construction of the Hoover Dam and the legalization of gambling. At other times, it has continued to attract a population seeking a warm and dry climate, especially senior citizens [1,47,48]. More recently, it was among the fastest growing urban areas: the Las Vegas population almost doubled during the 1990-2010 period, accompanied by rapid urban expansion. Many new residential and commercial developments appeared, along with many recreational areas such as golf courses and parks. Figure 1a shows the trend of population increase between 1990 and 2010 [49], and Figure 1b compares the corresponding TM false color composite images (i.e., red, green, and blue assigned to Bands 4, 3, and 2, respectively), in which red shades reflect vegetation. The two decades considered in this article represent the period of rapid growth of Las Vegas that slowed with the 2007-2009 recession. This study is part of an urban land cover analysis conducted for the Nevada Division of Forestry to understand the role of urban forestry efforts in relation to urban climate during the rapid growth period. Clearly, the doubling of the population has resulted in comparable urban expansion. The spatial patterns in the TM images illustrate changes in several land cover types, such as roads, high-density buildings, and open spaces. Figure 1b also depicts 12 sites (A through L) chosen to understand the effect of NDVI on temperature; these sites represent various changed and unchanged land covers over the 1990-2010 period.

Generally, NDBI and NDVI are used to estimate the land cover composition of built-up and vegetated areas. A comparison of NDVI and NDBI derived from TM data is shown in Figure 2 to depict urban development. The left graphs correspond to a residential site L in the Anthem area with a golf course, whereas the right graphs are over site D, which already existed before 1990. The older development shows very weak trends in NDVI and NDBI. The new development, on the other hand, reveals that a gradual change took place during approximately the 1997-2005 period. Since the water conservation initiatives in Las Vegas did not start until 2003, this new development shows a significant rise in the vegetation signature. This visual outlook is not sufficient, and a more quantitative understanding of land cover changes, incorporating subpixel information, is needed.

Despite its rapid spatial expansion, Las Vegas is approaching an upper limit. Under the Southern Nevada Public Land Management Act of 1998 (SNPLMA), the Bureau of Land Management (BLM) created a disposal boundary around the Las Vegas metropolitan area (the solid line bounding the urban area in Figure 1b). Any BLM lands within this boundary have been disposed to Clark County; thus, the disposal boundary defines the limit of the available spatial expansion of the Las Vegas metropolitan area [3].

Land cover changes in Las Vegas due to urbanization affect the hydro- and thermodynamics of the surface, and subsequently affect human life, for example in the form of urban flooding and the urban heat island effect [50]. Previous work on urban temperature in Las Vegas has shown trends that are related to the urban development [2,34].
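For reference, the two indices discussed above are simple band ratios of the TM reflectances. A minimal sketch follows, assuming reflectance arrays already converted from digital numbers; the band numbering follows Landsat 5 TM (red = band 3, NIR = band 4, SWIR = band 5), and the sample values are invented for illustration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / (swir + nir + 1e-10)

# Example with small TM surface reflectance arrays (values in [0, 1]).
red = np.array([[0.10, 0.12], [0.20, 0.18]])   # band 3
nir = np.array([[0.40, 0.35], [0.22, 0.25]])   # band 4
swir = np.array([[0.20, 0.22], [0.30, 0.28]])  # band 5
print(ndvi(red, nir))   # high over vegetated pixels
print(ndbi(swir, nir))  # high over built-up pixels
```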
Remote Sensing Data

Landsat 5 Thematic Mapper Data: Landsat has proven to be a successful joint mission of NASA and USGS. The Thematic Mapper onboard Landsat 5 provides a long time series of global multi-band surface reflectance that has found wide application [51,52]. Landsat 5 is the only platform that covers the entire study period while providing data from a single Thematic Mapper sensor at a resolution useful for a decadal urban study. In this research, bands 1-5 and 7 are used for subpixel land cover extraction, whereas band 6 is used for land surface temperature estimation. The data conversion procedure described by Chander et al. [52] and Giannini et al. [53] was followed to estimate surface reflectance and surface temperature. The TM data are available for the whole study period at an average repeat cycle of 16 days and 30-m spatial resolution. To ensure meaningful results, the TM imagery was carefully screened and cloudy images were removed. Although Las Vegas has clear sky most of the year, atmospheric correction of the images was performed using the dark object subtraction method [54].

National Agricultural Imagery Program Data: The NAIP was initiated by the US Department of Agriculture to map agricultural activities during the growing season. NAIP imagery is acquired only once or twice a year at 1-m spatial resolution. In general, NAIP imagery is available in three bands (blue, green, and red), but four-band imagery has also been acquired over some states [55]. In this research, the 3-band 2010 NAIP image of the Las Vegas valley was used. It was classified into eight classes using supervised image classification. The classified NAIP data provide a detailed view of the land surface, whereas TM provides a coarse view. Figure 3 compares the NAIP coverage with the corresponding 2010 TM image, where a TM pixel is 900 m² compared to a NAIP pixel of 1 m². Likewise, Table 1 lists and compares the bands of the TM and NAIP imagery. The TM image closest to the NAIP acquisition time is used for comparison. Being 900 times larger, a TM pixel can be considered an average response of the 900 underlying NAIP pixels, with the combined effect of various land cover classes. The comparative level of detail available in NAIP imagery is evident in Figure 3, where the top images show a major freeway intersection dominated by asphalt, concrete, and bare soil. The bottom images compare a residential and commercial area showing significant vegetation and roof tiles. When available, these two datasets can be used to decompose the spectral response of a TM pixel into its constituent land cover classes.

Air Temperature Data for Validation

To validate the land surface temperature (LST) derived from thermal infrared imagery, ground-based dry-bulb thermometric data, which reflect air temperature, were used. These thermometric observations at McCarran Airport (36°43′8″ N, 115°9′48″ W) were retrieved from Climate Data Online of the National Climatic Data Center of the National Oceanic and Atmospheric Administration and used to validate the remote sensing-based LST.

Method

The main approach is to develop an SMA model of a TM pixel in terms of its land cover fractions and to calibrate this model using a classified NAIP image. The calibrated model is then used to retrieve land cover information from 20 years of TM images. Figure 4 shows the overall workflow of this modeling approach, which is further explained below. In addition, the corresponding land surface temperature maps are prepared from the TM thermal images.
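As an aside, the dark object subtraction correction mentioned above is simple enough to sketch directly (a minimal per-band version under the assumption that the darkest pixel in each band should have near-zero reflectance; operational implementations estimate the haze offset more carefully):

```python
import numpy as np

def dark_object_subtraction(bands):
    """Subtract each band's darkest value as an estimate of atmospheric haze.

    bands: float array of shape (n_bands, rows, cols).
    """
    dark = bands.reshape(bands.shape[0], -1).min(axis=1)
    return bands - dark[:, None, None]
```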
NAIP Image Classification

The NAIP imagery is processed with supervised classification in which training regions of interest for the land cover classes are provided. The Las Vegas urban area is considered to be composed of eight land cover classes: residential buildings (houses), commercial buildings (commercial), asphalt, tree cover, open spaces, water, barren ground (barren land), and bladed ground. These classes represent the general composition of the city. The asphalt class represents roads and parking lots, whereas residential buildings are identified by tiled rooftops. Commercial buildings are differentiated from houses because their rooftops are generally concrete. Vegetation is split into open spaces and tree cover, as these two can be clearly distinguished in the NAIP images and, moreover, have different spectral responses. Open spaces are primarily parks with grass and residential yard turf. The tree cover is sparse in general, with some areas of higher tree coverage including exogenous trees. Barren land represents the undisturbed surface with sparse vegetation (mostly brush), whereas bladed ground represents areas that have been prepared for construction. The bladed ground class is included to represent the progress of construction activities over time.

Since the NAIP image resolution is 1 m, any feature of at least 2-by-2 pixels could be considered the minimum mapping unit. In the case of Las Vegas, the minimum mapping units of the chosen classes are variable. Even though features of houses, commercial buildings, open spaces, barren land, and bladed ground were greater than 2-by-2 pixels, tree cover, asphalt paths, and water ponds smaller than 1 m could not be detected and classified. In general, it is difficult to achieve good classification accuracy in high resolution images with few bands; in this particular case, the lack of a near infrared band posed a serious limitation. Nevertheless, with a careful selection of training regions representing all eight classes, supervised classification with a maximum likelihood classifier achieved a reasonable overall accuracy of more than 80%.

Subpixel Land Cover Fraction Model

The SMA method is a common approach for retrieving subpixel land cover fractions. This section provides a brief overview of the method to show its connection with the calibration technique. There is a large body of literature describing and applying SMA to TM data, which can be consulted for more details [15-20,44-46].
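The maximum likelihood classification used for the NAIP image can be sketched as follows (a minimal Gaussian classifier, assuming training pixels are supplied per class; the names and data layout are illustrative rather than the exact processing chain used in this study):

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_ml_classifier(training):
    """Fit a Gaussian per class.

    training: dict mapping class name -> array of shape (n_pixels, n_bands).
    """
    return {c: multivariate_normal(x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in training.items()}

def classify(models, pixels):
    """Assign each pixel (n_pixels, n_bands) to its maximum likelihood class."""
    classes = list(models)
    loglik = np.stack([models[c].logpdf(pixels) for c in classes], axis=-1)
    return np.array(classes)[np.argmax(loglik, axis=-1)]
```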
The basis of SMA modeling is that the spectral response of a pixel is a linear combination of the spectral responses of the constituent pure land cover classes (endmembers), weighted by their fractional areas in the footprint. Mathematically, it is given by

$$\rho_j(p) = \sum_{i=1}^{N} r_{ji}\, f_i(p) + e_j, \qquad j = 1, \ldots, K, \tag{1}$$

where $\rho_j(p)$ is the observed surface reflectance of pixel $p$ in the $j$th TM band. The summation is performed over the $N$ endmembers of choice, where $f_i(p)$ is the fractional area of the $i$th class in the $p$th pixel. The $r_{ji}$ is a calibration parameter representing the surface reflectance of the $i$th endmember in the $j$th band, and $e_j$ is the modeling error. Note that $j$ runs from 1 to $K$, where $K$ is the number of bands used in the SMA model. The above equation is conveniently represented in matrix form as

$$\boldsymbol{\rho} = \mathbf{R}\,\mathbf{f} + \mathbf{e}, \tag{2}$$

where $\mathbf{R}$ is a $K \times N$ matrix with elements $r_{ji}$, $\boldsymbol{\rho}$ is a $K \times 1$ column vector of the spectral response, $\mathbf{f}$ is an $N \times 1$ column vector of the subpixel endmember fractions, and $\mathbf{e}$ is a $K \times 1$ column vector of the modeling errors. Each row of $\mathbf{R}$ corresponds to the spectral response of an endmember. This approach depends on the provision of the $\mathbf{R}$ matrix, which is often estimated from training samples of pure pixels representing the endmembers. An image may not have many pure pixels, and thus $\mathbf{R}$ may not be readily available. Conversely, if the fractional areas of the training pixels are available, the above model can be reformulated to estimate $\mathbf{R}$. As these fractional areas are available from a high resolution classified image, the model (1) can be rewritten as a system of equations in which the $r_{ji}$ are the unknowns. This is the step that differs from conventional SMA approaches: instead of pure pixels, the model is calibrated using $f_i$ values obtained from a high resolution classified image.

In matrix form, the reformulation of (1) is given by

$$\boldsymbol{\rho} = \mathbf{F}\,\mathbf{r} + \mathbf{e}, \tag{3}$$

where $\boldsymbol{\rho}$ is a $KP \times 1$ column vector containing the $P$ training pixels, each having $K$-band observations; $\mathbf{F}$ is a $KP \times KN$ matrix; and $\mathbf{r}$ is a $KN \times 1$ column vector, the vectorized version of the $K \times N$ matrix $\mathbf{R}$. Here $\mathbf{e}$ is a $KP \times 1$ column vector of the errors. The $\mathbf{F}$ matrix is created from the training samples of the high resolution classified NAIP image, whereas the $\boldsymbol{\rho}$ column vector holds the corresponding surface reflectance values from TM bands 1-5 and 7. The classified image results in 900 pixels under each TM pixel, as shown in Figure 5. The land cover fractions under the TM pixels, retrieved from the underlying classified NAIP training samples, are used to estimate $\mathbf{r}$ by least squares:

$$\hat{\mathbf{r}} = \mathbf{F}^{\dagger} \boldsymbol{\rho}, \tag{4}$$

where $\mathbf{F}^{\dagger}$ is the pseudo-inverse of $\mathbf{F}$. The estimated $\mathbf{r}$ provides $\mathbf{R}$ as the calibration term for (2). As typically done for SMA approaches, subpixel fractional areas of the remaining data can then be estimated by solving the following constrained optimization problem for each pixel:

$$\min_{\mathbf{f}} \; \lVert \boldsymbol{\rho} - \mathbf{R}\,\mathbf{f} \rVert_2^2 \quad \text{subject to} \quad f_i \geq 0, \;\; \sum_{i=1}^{N} f_i = 1. \tag{5}$$

After estimating $\mathbf{R}$ using the NAIP classified image, Equation (5) is applied to all cloud-free TM images between 1990 and 2010 to prepare their fractional land cover images. These fractional images are used to understand the spatial and temporal change of the various land cover classes during the study period.
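A compact numerical sketch of this calibrate-then-unmix procedure is given below (illustrative only: the variable names are ours, and the fully constrained inversion of Eq. (5) is written with SciPy's SLSQP solver, which is one of several reasonable choices):

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_R(rho_train, f_train):
    """Estimate endmember reflectances R (K x N) from training data.

    rho_train: (P, K) TM reflectances of P training pixels in K bands.
    f_train:   (P, N) land cover fractions from the classified NAIP image.
    Each band j solves rho_train[:, j] = f_train @ R[j, :] in least squares.
    """
    R_T, *_ = np.linalg.lstsq(f_train, rho_train, rcond=None)
    return R_T.T

def unmix_pixel(R, rho):
    """Fully constrained unmixing of one pixel (Eq. 5)."""
    N = R.shape[1]
    f0 = np.full(N, 1.0 / N)  # start from equal fractions
    res = minimize(lambda f: np.sum((rho - R @ f) ** 2), f0,
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * N,
                   constraints=[{"type": "eq",
                                 "fun": lambda f: f.sum() - 1.0}])
    return res.x
```

With R estimated once from the 2010 NAIP/TM pair, applying unmix_pixel to every pixel of each TM scene yields the per-class fraction maps used in the analysis below.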
The accuracy of the SMA output is highly dependent on the proper selection of the endmembers. The endmembers are land cover classes that are large enough to cover multiple pixels in the image. They are identified using the Pixel Purity Index or by reducing data dimensionality. For dimensionality reduction, Principal Component Analysis and Minimum Noise Fraction methods are often used, which limit the number of possible endmembers: the number of endmembers cannot exceed the number of dimensions. Moreover, the heterogeneous nature of the urban surface makes it difficult to find pure pixels in TM imagery. These limitations can be overcome if corresponding high resolution imagery is available and can be used to calibrate an SMA model.

Land Surface Temperature Estimation

The land surface temperature ($T_{surface}$) is retrieved from the thermal infrared data (TM band 6). The digital numbers are converted to at-sensor brightness temperature following Chander et al. [52] and Giannini et al. [53]:

$$T_B = \frac{K_2}{\ln\left(K_1 / L_6 + 1\right)},$$

where $T_B$ is the at-sensor brightness temperature computed from the thermal band radiance $L_6$ using the calibration constants $K_1$ and $K_2$. The emissivity-corrected land surface temperature is then

$$T_{surface} = \frac{T_B}{1 + \left(\lambda\, T_B / \rho_c\right) \ln \varepsilon},$$

where $\lambda$ is the central thermal band wavelength (11.45 µm), $\rho_c$ = 1.438 × 10⁻² m·K is a constant formed from Planck's constant, the speed of light, and the Boltzmann constant, and $\varepsilon$ is the NDVI-based surface emissivity computed from TM bands 3 (red) and 4 (near infrared). The algorithm is described in detail in [53]. The derived LST is validated using ground-based air temperature data, as shown in Figure 6. The time series comparison in this figure reveals that LST matches air temperature reasonably well until 1995 but later shows higher summertime values. The area around the observation point saw additional development of McCarran Airport after 1995, a possible reason for the change in local thermodynamics. Nevertheless, the winter values match better, as is also evident in the scatter plot. The overall correlation between LST and air temperature is 0.92, with an RMS error of 6.15 °C.

Subpixel Land Cover Fractions

The spatial analysis of the land cover images depicts consistent behavior of the urban expansion in Las Vegas. For example, Figure 7 compares the maps of the asphalt land cover fraction between 1990 and 2010. In these maps, the high values of asphalt fraction, shown in red, correspond to roads and parking lots. The analysis showed that the area covered by asphalt increased from ≈150 km² in 1990 to ≈250 km² in 2010. As new road infrastructure appears on the landscape, it promotes further urban development and subsequently further extension and enrichment of the road network, where extension implies lengthening of roads beyond the existing urban limit and enrichment implies increasing road linkages within the existing urban area. In the Las Vegas valley, road network expansion is a surrogate for urban expansion and shows that it is reaching the limit of the BLM disposal boundary. In a similar comparison, the houses class showed a much greater fractional increase in land cover area than asphalt, whereas the commercial class showed a relatively smaller increase.

The land cover fractions of tree cover, open spaces, and water changed negligibly over the study period. For example, Figure 8 compares the tree cover fraction change over the two decades. The total area of the tree canopy increased from ≈120 km² in 1990 to ≈130 km² in 2010 (depicted as shades of green and red). Although this increase is small, it is spatially distributed over the whole urban area. Similar to the road network, tree canopy expansion in Las Vegas is a direct measure of urban growth and reveals that the expansion is reaching its spatial limits. In Las Vegas, the expansion of the tree canopy class also reflects an increased water demand. Although urban forestry best practices recommend growing indigenous desert plants, there is some provision of water through drip irrigation systems. In 2003, Las Vegas also promoted xeriscaping practices to conserve outdoor water use. The analysis presented in this paper did not reveal any significant impact of xeriscaping; the impact would be expected to appear in the fractional area of open spaces, which showed negligible change.
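Returning briefly to the LST retrieval described above, a minimal implementation is sketched below (using the published Landsat 5 TM band 6 calibration constants K1 = 607.76 W m⁻² sr⁻¹ µm⁻¹ and K2 = 1260.56 K; the emissivity argument is assumed to come from an NDVI-based estimate, which is not reproduced here):

```python
import numpy as np

K1, K2 = 607.76, 1260.56   # Landsat 5 TM band 6 calibration constants
LAMBDA = 11.45e-6          # central thermal band wavelength (m)
RHO_C  = 1.438e-2          # h * c / k_B (m K)

def brightness_temperature(L6):
    """At-sensor brightness temperature (K) from band 6 radiance."""
    return K2 / np.log(K1 / L6 + 1.0)

def land_surface_temperature(L6, emissivity):
    """Emissivity-corrected LST (K); emissivity from an NDVI-based estimate."""
    Tb = brightness_temperature(L6)
    return Tb / (1.0 + (LAMBDA * Tb / RHO_C) * np.log(emissivity))
```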
The land cover fractions of barren land and bladed ground showed a reduction in area. The barren area is expected to shrink as it is converted into other urban land surfaces. The reduction in bladed ground reflects that construction activity had decreased by 2010. To understand the trends during the 20-year period, a time series analysis of the land cover fractions is presented next.

Trend Analysis of Land Cover Fractions and Temperature

This section discusses the temporal variation in the land cover fractions over the study period and explores its relation to the temperature change. Figure 9 shows the time series plots of the surface area (km²) of each land cover class and the regression line through the data points. Urban growth in Las Vegas followed an overall linear trend. The variability about the trend can be attributed to inherent fluctuations in the remote sensing imagery due to the solar illumination angle and seasonal variations. Nevertheless, the behavior of the trend lines of the various land cover classes is mutually consistent. As found through the spatial analysis, the temporal variations reveal a consistent result: the houses, asphalt, and commercial classes gradually increased, whereas barren land and bladed ground gradually decreased. Tree cover, open spaces, and water show small increasing trends.

The slope of each trend line is the rate of change of the surface area of the given class per year. Table 2 lists the rate of change of each land cover class and reveals that the maximum occurred for houses (10.5 km²/year), followed by asphalt (5.7 km²/year) and commercial (2.6 km²/year). These three classes reflect the key components of urban growth. Tree cover, open spaces (grass), and water show negligible increases. Note that open spaces represent yards and public parks, whereas the water class is primarily residential pools and ponds in public parks. The barren land and bladed ground classes decreased at rates of 3.6 and 4.5 km²/year, respectively. The bladed ground trend was expected to mimic the variation in construction activity in Las Vegas; it may be correlated with the barren land class due to their similar spectral signatures. Nevertheless, this result can be interpreted as an overall reduction in the natural landscape, which is replaced by the urban land cover classes. The land cover trends derived from remote sensing data reveal that the urban spatial expansion of Las Vegas followed a linear trend. This is insightful, as urban growth is often thought to be exponential. Moreover, the Las Vegas urban area has experienced most of its expansion in the residential sector, followed by asphalt, which reflects increased transportation infrastructure.

Urban development is often believed to lead to the urban heat island effect, as urban growth increases the retention of heat in high-specific-heat urban materials (asphalt and concrete). Previous studies have shown that temperature trends in arid regions behave contrary to this belief [34,56]: the temperature has been observed to decrease after construction of a new development. This is attributed to the change from a barren surface to a vegetated surface with more water and consumptive use. Figure 10 shows time series plots of the land surface temperature at sites L and D. Comparing these with the corresponding NDVI and NDBI plots of the sites (shown in Figure 2) reveals that the temperature at site L decreased after 1997, whereas NDVI increased. On the other hand, site D underwent a gradual increase in temperature matched by a gradual
decrease in NDVI. Such opposing trends of LST and NDVI are observed in various parts of Las Vegas. An increase in vegetation at these points is also revealed by ancillary observations from Google Earth high resolution imagery, when available. Similar inverse relationships between LST and NDVI were observed by Yue et al. [57] over various land covers in Shanghai, China. In Las Vegas, this relation is consistent with the increasing trends of tree cover, open spaces, and water revealed in Figure 9. Even though the increase in vegetation is small, it has a measurable impact in reducing temperature. The contributing factors could include surface shading and the role of vegetation in redirecting heat through transpiration.

To further confirm this behavior, Figure 11 illustrates the change in LST and NDVI between 1990 and 2010. These images are computed using the annual averages of LST and NDVI during 1990 and 2010, respectively, and the two-decadal change is shown as the difference image. A comparison of the LST images shows relatively lower temperatures in 2010, especially in the areas that were developed after 1990. These areas can be verified in the NDVI images showing urban expansion. The highest rise in NDVI is observed in the new developments with golf courses and parks. Similar to the previous observation, pixels showing increased NDVI correspond to pixels with decreased LST.

To quantify this effect, the averages of LST and NDVI over sites A through L were computed for 1990 and 2010 and are listed in Table 3. The differences are listed in the columns titled ∆LST and ∆NDVI. Additionally, the land cover for the two years is provided to analyze the effect of land cover change. Five sites represent areas with no land cover change (A, B, C, D, and K), whereas six sites represent areas where rural land cover was changed to some form of urban land cover (F, G, H, I, J, and L), such as residential, commercial, and open space (golf course and park). One site (E) was chosen where the land cover changed from a park to residential. The differences reveal that at sites where rural land cover was changed to residential, park, or golf course land cover, ∆LST is negative (decreased) and ∆NDVI is positive (increased). Overall, ∆LST and ∆NDVI have opposite signs. This is further elaborated in Figure 12 as a scatter plot between ∆LST and ∆NDVI. A regression line fitted with R² = 0.832 establishes a strong relation between LST and NDVI over the selected sites. Similar phenomena have been observed in many arid cities around the world and are often referred to as the urban cool island effect [58-60]. The analysis of the land cover change and the corresponding relation between LST and NDVI shows that the Las Vegas growth has lowered LST in areas where NDVI increased. The expansion of residential areas has brought a significant increase in roads, but it has also increased vegetated cover in some areas in the form of trees, golf courses, and parks. This has resulted in an interesting temperature trend, which reveals that the vegetated cover has counteracted the temperature rise due to urbanization.
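The site-level regression reported above can be reproduced in a few lines (a sketch with placeholder values; the actual ∆LST and ∆NDVI pairs come from Table 3):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder site-level changes; the real values come from Table 3.
d_ndvi = np.array([0.02, -0.01, 0.15, 0.10, 0.00, 0.20])
d_lst  = np.array([-0.5,  0.3, -3.1, -2.2,  0.1, -4.0])

fit = linregress(d_ndvi, d_lst)
print(f"slope = {fit.slope:.2f} degC per unit NDVI, R^2 = {fit.rvalue**2:.3f}")
```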
Summary

The urban expansion of Las Vegas was studied using remote sensing data. Las Vegas grew rapidly during the 1990-2010 period owing to the accelerated building of residential neighborhoods to accommodate people moving for better wages and a warmer climate. Remote sensing is a useful way to study the response of the landscape and environment to such rapid urbanization. In particular, the response of land surface temperature can be analyzed, which is directly related to urban geophysical processes, including hydrology and aerodynamics, that impact the quality of urban living.

This research used remote sensing techniques to quantify land cover change in Las Vegas and analyzed its surface temperature change. Spectral mixture analysis was used to retrieve subpixel land cover fractions of eight classes. Generally, SMA models are applied using endmember spectral information. This research demonstrates a calibration technique for SMA modeling that takes advantage of a high resolution classified NAIP image. It is shown that when pure pixels are difficult to identify or insufficient, a high resolution classified image can be used to calibrate the SMA model. This approach provides the spectral response of pure land cover classes through the known fractional areas derived from a high resolution image.

The results from the inversion of the SMA model reveal that the rates of change of the various land cover classes followed linear trends in Las Vegas. The largest increase occurred for the houses class, followed by roads and commercial areas; that is, the largest expansion has been due to houses and to providing roadway access to them. Being a city in an arid region, this expansion has been accompanied by an increase in vegetation cover in the form of tree cover and open spaces (golf courses and parks). Since the pre-development surface was primarily desert landscape with brush, various post-development localities with more vegetation have increased shading, water consumption, and evapotranspiration. Among the other land cover classes, an increase in the water class is also observed, most likely due to residential pools and ponds in public parks. The results reveal a gradual decrease in the barren land and bladed ground classes, consistent with the corresponding increase in the other land covers.

The trend analysis of the LST revealed an inverse relation with NDVI, especially in the new developments of Las Vegas. This result is consistent with previously reported findings of temperature change in arid regions [34,56,59,61]. Moreover, despite an increase in heat-absorbing asphalt and concrete, even a modest increase in vegetation cover can play a key role in counteracting the temperature rise. The overall cooling response is observed in vegetated new developments due to shading from vegetation as well as increased water use. It is noted that there are many other inter-related factors, related to surface geometry and materials, that must be considered for a complete thermodynamic analysis of an urban surface. Nevertheless, this research provides a simple relation between LST and NDVI with useful insight into the role of vegetation in ameliorating temperature rise in arid urban areas.

Figure 1. (a) Plot showing the Clark County population growth trend and (b,c) the corresponding expansion in the Las Vegas urban footprint revealed by Landsat TM images. Overlaid regions of interest are used to analyze land cover change and its impact on temperature.
Figure 2. Trends of NDVI (a,b) and NDBI (c,d) at a new development site L (a,c) and an older development site D (b,d).

Figure 3. Comparison of (a,c) Landsat and (b,d) NAIP pixels at two sites, including (a,b) a road network and (c,d) a residential area.

Figure 4. SMA modeling workflow showing the step of computing the calibration parameter R and estimating subpixel land cover fractions f from surface reflectance ρ.

Figure 5. Comparison of the classified NAIP image and TM image at the two sites of Figure 3. A magnification of 4 × 4 TM pixels with the underlying NAIP pixels is shown on the left.

Figure 6. Validation of LST estimation by comparison with ground-based thermometric temperature.

Figure 7. Change in the asphalt land cover fraction between 1990 and 2010 in Las Vegas.

Figure 8. Change in the tree cover fraction between 1990 and 2010 in Las Vegas.

Figure 9. Time series plots of surface area of the eight land cover classes in the Las Vegas urban area.
Figure 10. Time series of temperature at selected sites L and D.

Figure 11. Comparison of annual averages of LST (a,b) and NDVI (d,e) during 1990 and 2010. The differences are shown in (c) for LST and (f) for NDVI.

Figure 12. Scatterplot between ∆NDVI and ∆LST showing a regression line fit.

Table 1. Comparison of bands of Landsat TM and NAIP images.

Table 2. Rate of change of land cover classes from 1990 to 2010.

Table 3. List of selected sites showing change in land cover, LST, and NDVI between 1990 and 2010.
The PRad Windowless Gas Flow Target

We report on a windowless, high-density, gas flow target at Jefferson Lab that was used to measure $r_p$, the root-mean-square charge radius of the proton. To our knowledge, this is the first such system used in a fixed-target experiment at a (non-storage ring) electron accelerator. The target achieved its design goal of an areal density of 2$\times$10$^{18}$ atoms/cm$^2$, with the gas uniformly distributed over the 4 cm length of the cell and less than 1% residual gas outside the cell. This design eliminated scattering from the end caps of the target cell, a problem endemic to previous measurements of the proton charge radius in electron scattering experiments, and permitted a precise, model-independent extraction of $r_p$ by reaching unprecedentedly low values of $Q^2$, the square of the electron's transfer of four-momentum to the proton.

Introduction

The Proton Radius Experiment at Jefferson Lab (PRad) [1] carried out a precise measurement of an important quantity in physics, the root-mean-square (rms) charge radius of the proton, $r_p$. Precise knowledge of $r_p$ has a wide-ranging impact: from our understanding of the structure of the proton in terms of its quark and gluon degrees of freedom, to our knowledge of the Rydberg constant (a fundamental constant of nature), due to the impact $r_p$ has on bound-state quantum electrodynamics (QED) calculations of atomic energy levels.

The charge radius of the proton can be measured using two techniques. In the first, it is extracted from spectroscopic measurements of energy level differences of the hydrogen atom (e.g., the Lamb shift), combined with state-of-the-art QED calculations. In the second method, utilized by PRad, $r_p$ is determined from the slope of the proton's electric form factor $G_E$, extracted from the electron-proton (e-p) elastic scattering cross section and extrapolated to zero momentum transfer. More formally, $r_p$ is given by

$$r_p = \left( -6 \left. \frac{dG_E(Q^2)}{dQ^2} \right|_{Q^2 = 0} \right)^{1/2}, \qquad (1)$$

where $Q^2$ is the square of the four-momentum transfer in e-p elastic scattering.

Historically, $r_p$ obtained from these two methods agreed within experimental uncertainties [2]. However, in 2010 $r_p$ was obtained for the first time from a measurement of the Lamb shift of muonic hydrogen, in which the electron of the H atom is replaced by the much heavier muon. The result was a factor of ten more precise than all previous measurements [3], but significantly smaller than them. Around the same time, a new electron scattering experiment with over 1400 data points was performed at Mainz [4], and a new value of $r_p$ was extracted. Although the new result was more precise than previous scattering measurements, it was consistent with the old results, leading to a >7σ discrepancy between the muonic hydrogen and regular hydrogen values of $r_p$. This triggered the "proton charge radius puzzle" and led to major experimental and theoretical efforts to understand and/or resolve the discrepancy. In this regard, significant progress has been made in recent years. The latest Lamb shift results on regular hydrogen [5] favor the smaller value of $r_p$ indicated by muonic hydrogen. Likewise, the PRad result [1] also agrees with the muonic hydrogen $r_p$. The PRad experiment featured a number of innovations that made it the least model-dependent of all modern, high-precision electron scattering measurements of $r_p$ to date.
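To make Eq. (1) concrete, the sketch below extracts a radius from synthetic low-$Q^2$ form factor data by fitting a low-order polynomial and evaluating the slope at $Q^2 = 0$ (illustrative only; the actual PRad analysis uses more sophisticated fitters and a careful treatment of systematics):

```python
import numpy as np

hbarc = 0.19733  # GeV fm, converts GeV^-1 to fm

# Synthetic G_E data generated with a known radius of 0.84 fm.
r_true = 0.84                        # fm
Q2 = np.linspace(2e-4, 6e-2, 40)     # GeV^2, roughly PRad's low-Q^2 reach
GE = 1 - (r_true / hbarc) ** 2 * Q2 / 6 + 1e-4 * np.random.randn(Q2.size)

# Fit G_E(Q^2) with a quadratic; the linear coefficient approximates
# dG_E/dQ^2 at Q^2 = 0, from which Eq. (1) gives the radius.
c = np.polyfit(Q2, GE, 2)
r_fit = np.sqrt(-6 * c[1]) * hbarc
print(f"extracted r_p = {r_fit:.3f} fm")
```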
First, utilizing a large-acceptance, high-resolution electromagnetic calorimeter (HyCal), it achieved the lowest $Q^2$ ever observed for e-p scattering in a magnetic-spectrometer-free measurement. Additionally, the large acceptance of the calorimeter allowed coverage in $Q^2$ that was wide enough to ensure the necessary extrapolation to $Q^2 = 0$ in Eq. 1 was robust. The second innovation was the simultaneous detection of e-e (Møller) and e-p elastic scattering in the same experimental acceptance. Doing so helped control systematic uncertainties associated with the beam-target luminosity to an unprecedented level. The third innovation, and the topic of this article, was a new hydrogen gas target that eliminated scattering from the end caps of the target cell, a problem common to previous electron scattering measurements of $r_p$. Together, these innovative methods permitted a precise electron scattering measurement at unprecedentedly low values of $Q^2$, allowing for extraction of $r_p$ in a model-independent manner. The PRad result agrees with the muonic hydrogen results and gives support to the recently revised value of the Rydberg constant [6], one of the most accurately determined fundamental constants in nature.

Here we report on the design, construction, and performance of the windowless, cryo-cooled, continuous-flow hydrogen gas target that was used in the PRad experiment. The target incorporated a novel design feature of small apertures on the front and back surfaces of the target cell, such that the electron beam interacted almost exclusively with the hydrogen gas inside the cell. Gas that escaped through the apertures into the accelerator beam line was removed by a number of high capacity vacuum pumps, reducing its density by three or more orders of magnitude. With this design, the target maintained an areal density of approximately 2 × 10¹⁸ hydrogen atoms/cm² distributed uniformly over the 4 cm length of the target, while also minimizing the amount of material exposed to the beam outside the cell, a critical factor for reducing systematic uncertainties in the experiment.

Target Design and Construction

To detect the energy and scattering angle of electrons in both e-p and e-e scattering at very low values of $Q^2$, the PRad experiment (Fig. 1) utilized HyCal, an electromagnetic hybrid calorimeter originally built for a precise measurement of the neutral pion radiative decay width by the PrimEx collaboration [7,8,9]. The angular resolution of the measurements was further improved using two gas electron multiplier (GEM) detectors directly in front of HyCal. Nevertheless, as the detectors were placed at very forward angles, it was not possible to reconstruct the scattering vertex with extreme precision. Furthermore, backgrounds are often a serious issue for very forward-angle electron scattering experiments because the cross sections of many processes increase with decreasing scattering angle. These aspects made it critical to localize the hydrogen target sample to a relatively small volume free from any contaminants, including beam-entrance and beam-exit windows. At the same time, a highly accurate determination of the absolute target density was not necessary, thanks to the simultaneous measurement of e-e rates from Møller scattering alongside the elastic e-p rates. To this end, the PRad target was a sample of hydrogen gas flowing continuously through an open (windowless) target cell 4 cm long.
The gas was cooled to cryogenic temperatures to increase its volumetric density inside the cell to about 5 × 10¹⁷ atoms per cm³, and the cell was specifically designed to create a large pressure difference between the gas inside the cell and the surrounding beam line vacuum. Figure 2 is a sectional drawing of the PRad target chamber and shows most of its major components. A photograph of the target installed on the Hall B beam line is shown on the left in Fig. 3.

High-purity hydrogen gas (>99.99%) was supplied from a high-pressure cylinder located outside the experimental hall and metered into the target system via a 0-10 slpm mass flow controller. Using a pair of remotely actuated valves, the gas could be directed into the target cell for production data-taking, or into the top of the chamber for background measurements. Before entering the cell, the gas was cooled to cryogenic temperatures using a two-stage pulse tube cryocooler (Cryomech model PT810) with a base temperature of 8 K and a cooling power of 20 W at 14 K. The cryocooler's first stage served two purposes: it cooled a tubular, copper heat exchanger that lowered the hydrogen gas temperature to approximately 60 K, and it cooled a copper heat shield surrounding the lower temperature components of the target, including the target cell. The second stage of the cryocooler cooled the gas to its final operating temperature using a similar heat exchanger and cooled the target cell via a 40 cm long, flexible copper strap. The temperature of the second stage was measured by a calibrated Cernox (Lake Shore Cryotronics) thermometer and stabilized at approximately 15 K using a small cartridge heater and an automated temperature controller. Without this, the hydrogen gas would condense or even freeze inside the second stage heat exchanger.

The target cell, shown on the right in Figure 3, was machined from a single block of C101 copper. Its outer dimensions were 7.5 × 7.5 × 4.0 cm³, with a 6.3 cm diameter hole along the axis of the beam line. The hole was covered at both ends by 7.5 µm thick polyimide foils held in place by aluminum end caps. Cold hydrogen gas flowed into the cell at its midpoint and exited via 2 mm holes at the center of either polyimide foil. The holes also allowed the electron beam to pass through the H₂ gas without interacting with the foils themselves, effectively making this a "windowless" gas target. Compared to a long thin tube, the design of a relatively large target cell with small orifices had two important advantages. First, it produced a more uniform density profile along the beam path, allowing us to better estimate the gas density based upon its temperature and pressure. Second, it greatly reduced a potential source of background scattering: rather than scattering from the 4 cm long copper cell walls, any "halo" electrons outside the primary beam radius could only scatter from the much thinner 7.5 µm polyimide foils. A second calibrated Cernox thermometer, suspended inside the cell, provided a direct measure of the gas temperature. Approximately 50 cm of each of the thermometer's four lead wires was coiled inside the cell to improve the thermal conduction between the thermometer and the gas. The gas pressure was measured by a 0-10 torr capacitance manometer located outside the vacuum chamber and connected to the cell by a carbon fiber tube approximately one meter long and 2.5 cm in diameter.
The same tube was used to suspend the target cell from a motorized, 5-axis motion controller, which could position the target with a precision of about ±10 µm. The motion controller was also used to lift the cell out of the beam in order to investigate possible scattering of beam halo from the polyimide windows. Finally, two 1 µm thick carbon and aluminum foils were attached to the bottom of the copper target cell for background and calibration measurements.

High-speed turbomolecular pumps were used to evacuate the hydrogen gas as it left the target cell and to maintain the surrounding vacuum chamber and beam line at low pressure. Two Pfeiffer HiPace 3400 magnetically levitated turbo pumps, each with a nominal pumping speed of 3000 l/s, were attached directly under the chamber, while two additional Pfeiffer HiPace 1500 turbo pumps with a speed of 1400 l/s each were used on the upstream and downstream portions of the beam line. A second capacitance manometer measured the hydrogen gas pressure inside the target chamber, while cold cathode vacuum gauges were utilized in all other locations. While the response of the capacitance manometers was independent of the gas species being measured, the cold cathode gauge readings required correction for the ionization energy of hydrogen, made according to the manufacturer's specifications. As illustrated in Fig. 2, additional polyimide orifices were installed at various locations to limit the extent of target (hydrogen) gas along the path of the beam. With this design, the density of gas decreased significantly outside the target cell, with an estimated 99% of scattering occurring within the 4 cm length of the cell (Sec. 4.1). For obvious reasons of safety, the hydrogen exhausted from all vacuum pumps was vented outside the experimental hall. A continuous flow of nitrogen gas was also added to the vent line to prevent the formation of a combustible mixture of hydrogen and oxygen.

Target Performance

The temperature and pressure of the H₂ gas flowing through the PRad target cell, as well as the resulting areal density, are shown as a function of flow rate in Fig. 4. For these measurements, the temperature of the cryocooler was regulated at 15 K. At lower temperatures, target operation became unstable as hydrogen condensed and eventually froze inside the second stage heat exchanger. The Cernox thermometer inside the cell had a calibration accuracy of ±9 mK, while the accuracy of the capacitance manometer was ±0.01 torr. No attempt was made to determine temperature gradients within the cell. The pressure difference between the cold gas in the cell and the room temperature gas in the manometer was estimated using the correlation function of Ref. [11], in which $P_H$, $P_L$ and $T_H$, $T_L$ are the pressures and temperatures of the gas at the high and low temperature ends of the connecting tube of radius $r$, expressed in Pa, K, and m, respectively; it was less than 0.2% under all measured conditions.

As shown in Fig. 4, the temperature of the gas inside the cell was largely independent of flow rate, while the pressure increased in a linear manner. This is the expected behavior of a compressible, near-ideal gas flowing through an orifice of diameter $d_2$, where the mass flow rate can be written as [12]

$$\dot{m} = \frac{C\,\epsilon}{\sqrt{1 - \beta^4}}\, \frac{\pi d_2^2}{4} \sqrt{2 \rho_1 (P_1 - P_2)}. \qquad (3)$$

Here $\rho_1$ is the density of the gas on the upstream side of the orifice, and $P_1$ and $P_2$ are its pressures on the upstream and downstream sides, respectively.
$C$ is the discharge coefficient (about 0.6 for an orifice with sharp edges), $\beta = d_2/d_1$ is the ratio of the orifice diameter to the upstream pipe diameter, and $\epsilon$ is the expansibility factor for small-bore orifices [13], which depends on $\gamma$, the ratio of the gas's specific heats. For hydrogen gas at the PRad operating conditions, $\gamma = C_p/C_v = 1.66$; at these cryogenic temperatures the rotational degrees of freedom of H₂ are frozen out, so the gas behaves like a monatomic one. Taking $P_1 \gg P_2$, $\beta \ll 1$, and $\rho_1 \propto P_1$, Eq. 3 reduces to the linear relationship between pressure and flow that is seen in Fig. 4. The red curve in Fig. 4 was generated using Eq. 3 to calculate the flow of H₂ gas through two 2 mm orifices at the measured pressures and temperatures, using a discharge coefficient $C = 0.65$.

Target Operation

Data collection during the PRad experiment was typically broken into one hour segments, or "runs", with the target operating in one of the four configurations illustrated in Fig. 5. Production data for measuring the proton charge radius utilized configuration (a), in which high-density H₂ gas flowed through the target cell while the surrounding vacuum chamber and beam line were filled with lower-density gas escaping from the cell. The performance of the target in this configuration is described in Sec. 4.1. Configurations (b)-(d) were utilized to examine scattering of electrons from material other than hydrogen atoms in the target cell and are the subject of Sec. 4.2.

Production Run Performance

All production runs for measuring $r_p$ were made with 600 sccm of H₂ gas flowing through the target cell, giving pressure and temperature measurements of about 0.47 torr and 19.5 K, respectively. The resulting gas density was 0.78 µg/cm³ [14], which corresponded to an areal density of 1.9 × 10¹⁸ hydrogen atoms/cm² within the 4 cm long cell. The performance of the target throughout all 110 production runs is shown in Fig. 6. During the course of any one hour run, the gas temperature and pressure varied by less than one percent, although fluctuations of up to a few percent between runs can be seen in Fig. 6. These occurred following long periods of operation with other target configurations but had no impact on the extracted value of $r_p$ because the e-p elastic scattering rates were always normalized to the Møller scattering rates.

Gas pressures measured in other regions of the beam line ("residual gas") were two to four orders of magnitude lower than the cell pressure (Table 4.1). The greatest quantity of residual gas along the beam path was inside the 4 m long downstream vacuum chamber (Fig. 1). Here the pressure was slightly higher than at the downstream turbo pump, presumably due to outgassing or leaks in the chamber. This can be greatly reduced in future installations by additional pumping on the chamber or by reducing the 22.9 mm orifice at the chamber's entrance (see Fig. 2). Table 4.1 indicates that approximately 99% of all hydrogen in the beam's path was constrained within the 4 cm length of the target cell. Because the pressure sensors were mounted several centimeters from the beam axis, the values in Table 4.1 could not be utilized to accurately correct for the presence of the residual gas. Instead, these corrections were made using the background measurements described below. In addition, the COMSOL Multiphysics® modelling software was used to simulate the density of H₂ gas flowing through the target system and beam line in configurations (a) and (b) (Figure 7).
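As a consistency check on these numbers, the sketch below recomputes the gas density, areal density, and orifice-limited flow from the ideal gas law and Eq. 3 (C = 0.65 is taken from the paper; the fixed expansibility factor of 0.65 is our assumption for the P₂ ≪ P₁ limit, not a value quoted by the authors):

```python
import numpy as np

k_B  = 1.380649e-23        # J/K
m_H2 = 2.016 * 1.6605e-27  # kg per H2 molecule

P = 0.47 * 133.322         # cell pressure: 0.47 torr in Pa
T = 19.5                   # cell temperature (K)
L = 0.04                   # cell length (m)

n = P / (k_B * T)          # molecular number density (m^-3)
print(f"density       = {n * m_H2 * 1e3:.2f} ug/cm^3")       # ~0.78
print(f"areal density = {2 * n * L * 1e-4:.2e} atoms/cm^2")  # ~1.9e18

# Orifice-limited flow through the two 2 mm exit holes (Eq. 3), with
# C = 0.65 (from the paper) and expansibility ~0.65 (our assumption).
C, eps, d = 0.65, 0.65, 2e-3
rho1 = n * m_H2                                              # kg/m^3
mdot = 2 * C * eps * (np.pi / 4) * d**2 * np.sqrt(2 * rho1 * P)

rho_std = 101325 * m_H2 / (k_B * 273.15)  # H2 density at standard conditions
print(f"flow ~ {mdot / rho_std * 6e7:.0f} sccm (measured: 600 sccm)")
```

Both the density and the areal density reproduce the quoted values, and the predicted flow comes out within roughly 10% of the measured 600 sccm.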
Additional studies, including simulations with various density profiles outside the target cell, were performed, and systematic uncertainties were assigned to account for the presence of the residual gas [15]. This, along with halo scattering, contributed a systematic uncertainty of less than 0.5% [16] to the extracted value of $r_p$.

Background Measurements

The target configurations (b), (c), and (d) shown in Fig. 5 were used to study sources of background in the PRad measurements, that is, electrons that scattered from material other than hydrogen atoms inside the target cell. In configuration (b), the hydrogen gas flow was kept at 600 sccm but was admitted directly into the target chamber rather than the target cell. Thus, all scattering sources along the beam path were the same as in production runs except for gas inside the cell, which was reduced by more than three orders of magnitude. The resulting charge-normalized data rates for e-p and e-e (Møller) scattering made with background configuration (b) were then subtracted from the full-cell measurements to isolate scattering from hydrogen atoms within the target cell.

Configurations (c) and (d) were used to better understand the origin of background events. There was no gas flowing into the system in either configuration, and the only difference was the location of the target cell: the cell remained in the beam path in (c) but was lifted in (d), thus removing the cell windows as a possible source of background. Scattering rates for each of the three background configurations are plotted as a function of reconstructed electron scattering angle in Fig. 8. These measurements were made at a 2.2 GeV beam energy and normalized to the production scattering rates measured with configuration (a). All rates display prominent peaks at very forward angles, indicating that the greatest sources of background scattering were near or upstream of the target cell. As expected, the rates from configuration (b) were the greatest, since they included all sources of background scattering, including residual hydrogen gas in the beam line. The background contribution from this residual gas can be determined from the difference (b)-(c) and is seen to be approximately 1%, consistent with the results shown in Table 4.1. Rates for configurations (c) and (d) are similar, which indicates little background from the target cell windows. We conclude that the majority of the background (6-8%) came from halo scattering from beam line elements other than the target and was likely produced by the upstream Beam Halo Blocker seen in Fig. 1, a 12.7-mm diameter collimator designed to reduce the intrinsic size of the halo.

Summary

We have described a new hydrogen gas target utilized in PRad, an electron scattering measurement of the root-mean-square charge radius of the proton conducted at Jefferson Lab. The target design eliminated the beam entrance and exit windows that have constituted major sources of background scattering in previous $r_p$ measurements from electron scattering. Together with other innovative instrumentation and measurement techniques, the target permitted a precise and model-independent extraction of $r_p$ from e-p elastic scattering. This target will be used in the newly approved PRad-II experiment [17] at JLab, which will improve the proton charge radius measurement by a factor of nearly four compared with the PRad experiment.
The apparatus described here is also compatible with practically any noncorrosive target gas (deuterium, helium, argon, neon, etc.), and can be used in other experiments where such a target system is advantageous.
Low-Dose Propranolol as Secondary Prophylaxis for Varix Bleeding Decreases Mortality and Rebleeding Rate in Patients with Tense Ascites

Background and Aim: The risk and benefit of non-selective propranolol in patients with tense ascites are controversial. This study aimed to investigate the effect of propranolol as secondary prophylaxis on varix rebleeding and overall mortality in patients with tense ascites. Methods: This study used a database of the Health Insurance Review and Assessment Service (HIRAS), which provides health insurance to 97.2% of the total population of Korea. A total of 80,071 patients with first variceal bleeding as the first decompensated complication were enrolled from 2007 to 2014. Results: There were 2,274 patients with large-volume ascites who were prescribed propranolol as secondary prophylaxis after first varix bleeding. The average prescription dose of propranolol as secondary prophylaxis was 74 mg/day in patients with large-volume ascites. The mean time to rebleeding was 22.8 months. The analysis showed that low-dose propranolol (40-120 mg/day), compared to an inadequate propranolol dose (<40 mg/day), as secondary prophylaxis decreased overall mortality and varix rebleeding in patients with tense ascites. Conclusions: Low-dose propranolol (40-120 mg/day) as secondary prophylaxis for variceal rebleeding decreased overall mortality and varix rebleeding in patients with tense ascites.

Introduction

Variceal bleeding is a major complication of liver cirrhosis [1]. The mortality rate within 6 weeks after the first occurrence of variceal bleeding is approximately 15-20% [2,3]. The incidence of variceal rebleeding within six weeks of the first bleeding is 30% and increases to 60% within one year. The mortality rate of variceal rebleeding is 34% within 12 months [4,5]. Non-selective beta-blockers (NSBBs) can reduce the incidence of variceal rebleeding, and most guidelines recommend the use of beta-blockers as secondary prophylaxis for variceal bleeding [6-8].

Recently, however, physicians have faced controversy over the safety of NSBBs in patients with large-volume ascites. A landmark study by Serste in 2010 first reported that the use of NSBBs increases the mortality rate in patients with large-volume ascites [9]. Their follow-up study also showed that the use of NSBBs in patients with large-volume ascites was associated with paracentesis-induced circulatory dysfunction [10]. Subsequent reports, however, indicated that lower doses of propranolol (mean doses of approximately 48.9-80 mg/day) were relatively effective and safe in patients with large-volume ascites [11-14]. Thus, most recent reports hold that a small dose of propranolol is safe, provided patients do not have hypotension, hyponatremia, or impaired renal function [12,15-18]. However, most previous studies used NSBBs as primary prophylaxis for varix bleeding (i.e., in patients who had never bled) in compensated patients, or as a mix of primary and secondary prophylaxis. Moreover, the sample sizes of most studies were small. Therefore, this study aimed to investigate the efficacy and safety of NSBBs as secondary prophylaxis for varix rebleeding in patients with large-volume ascites who require paracentesis.
These claims are approved and funded by the National Health Insurance Corporation, and data are recorded using encrypted numbers according to the disclosure principle (IRB: HYUH 2017-04-006). All processes were performed in accordance with the relevant guidelines and regulations of both the IRB and HIRA. Because an open data source was used, the requirement for informed consent was waived.

Study Design

This is a retrospective cohort study using data from January 2007 to December 2014, as requested from the Health Insurance Review and Assessment Service. Informed consent was waived because of the retrospective nature of the study.

Study Population

A total of 80,071 patients with first varix bleeding as the first decompensated complication were enrolled from 2007 to 2014. Of these, 27,372 patients with initial variceal hemorrhage were followed up for more than 2 years. Among them, 6,826 patients received propranolol for over 30 days after the initial variceal bleeding. Among varix bleeding-naïve patients, this study included 2,274 patients with large-volume ascites, defined as a serum albumin concentration of <3.0 g/dL and aspirated ascitic fluid of >3 L after paracentesis (Figure 1).

Inclusion Criteria

The inclusion criteria were as follows: (a) patients with variceal bleeding as the first complication who had not been treated for decompensated complications (e.g., varix bleeding, large-volume ascites, hepatic encephalopathy, and hepatorenal syndrome) for the past 2 years; (b) patients treated with the beta-blocker propranolol for more than 30 days.

Exclusion Criteria

The exclusion criteria were as follows: (a) patients who received nadolol or carvedilol besides propranolol; (b) patients treated for decompensated complications of liver cirrhosis within 2 years of enrollment; (c) patients who died within 365 days after variceal bleeding; (d) patients who were prescribed propranolol for less than 30 days; (e) patients who were younger than 18 years; (f) patients diagnosed with malignant tumors within 5 years; (g) patients who had hepatic encephalopathy or hepatorenal syndrome at the time of variceal bleeding.

Baseline Adjustment of Study Population

Correction of the baseline characteristics in the compared groups was very important in this study. We adopted variceal bleeding as the first complication. Patients who had no decompensated complications within 2 years after diagnosis were included because median survival after first decompensation is approximately 1.6 years. Moreover, patients who died within 1 year after variceal bleeding were excluded to normalize the severity of variceal hemorrhage and disease, and also because the dose of propranolol prescribed for one year after bleeding was calculated using the number of days of medication. Patients who underwent diagnostic paracentesis or those who used diuretics to control ascites were also excluded.

Definition of Inadequate User and Medication Compliance

Patients were included in the inadequate user group if they were prescribed propranolol for more than 30 days but showed inappropriate compliance, with an average annual prescription dose of <40 mg/day. Medication compliance was calculated from the amount of propranolol prescribed during the first year after variceal bleeding. Only the beta-blocker propranolol was included in the calculation.

Operational Definitions of Decompensated Complications

Decompensated cirrhotic complications were defined as varix bleeding, large-volume ascites, hepatic encephalopathy, and hepatorenal syndrome.
Patients with large-volume ascites were defined as those who received paracentesis treatment or were prescribed albumin (227104BIJ) under insurance reimbursement; this insurance code identifies albumin prescribed for volume expansion after large-volume paracentesis. The reimbursement conditions for albumin are very strict in HIRAS: albumin use is limited to cases in which the blood albumin concentration is <3.0 g/dL and more than 3 L of ascitic fluid is aspirated by paracentesis. Patients with varix bleeding were defined as those treated with endoscopic sclerotherapy, ligation, or medication (vasopressin, terlipressin, somatostatin, or octreotide). Patients with hepatic encephalopathy and hepatorenal syndrome were defined as those managed using lactulose enema (M0076) and those admitted to a hospital and administered terlipressin and albumin simultaneously as insurance benefits, respectively. Patients were covered by insurance if (a) their serum creatinine level was more than two times higher than baseline and more than 2.5 mg/dL within 2 weeks, and (b) their creatinine clearance was reduced by more than 50% over 24 h to <20 mL/min.

Validation of Operational Definitions

After IRB approval, the medical records of patients from two hospitals were reviewed retrospectively using the operational definitions of varix bleeding, large-volume ascites, and decompensated cirrhosis. A total of 144 patients met the operational definition of variceal bleeding, and 87 patients from the two hospital databases met the operational definition of large-volume ascites. The electronic chart of each patient was checked to confirm consistency with the operational definitions (Figure 2).

Primary and Secondary Endpoints

The primary endpoint was mortality associated with the use of propranolol in patients with simultaneous variceal bleeding and uncontrolled ascites. The secondary endpoint was variceal rebleeding.

Statistical Analysis

The ANOVA and Chi-square tests were used to analyze differences in demographic and biochemical data according to sex. The Kaplan-Meier analysis was used to evaluate the survival rate and frequency of rebleeding with respect to the prescribed dose of propranolol. Data were analyzed separately for viral and alcohol-induced cirrhosis.

Baseline Characteristics

This study included 2,274 patients with large-volume ascites who were prescribed propranolol as secondary prophylaxis for more than 30 days (Figure 1). The mean age of the subjects was 52.6 years, and 79.6% of the study population were men. The average prescription dose of propranolol was 74 mg/day (Table 1). The mean follow-up period was 43.7 months, and the mean time to rebleeding was 22.8 months. Inadequate users (the noncompliance group) were defined as those prescribed an average beta-blocker dose of <40 mg/day; their mean dose was 31.6 mg/day.
Statistical Analysis
The ANOVA test and chi-square test were used to analyze differences in demographic and biochemical data according to sex. Kaplan-Meier analysis was used to evaluate the survival rate and the frequency of rebleeding with respect to the prescribed dose of propranolol. Data were also analyzed separately for viral and alcohol-induced cirrhosis.

Baseline Characteristics
This study included 2,274 patients with large-volume ascites who were prescribed propranolol as secondary prophylaxis for more than 30 days (Figure 1). The mean age of the subjects was 52.6 years, and 79.6% of the study population were men. The average prescription dose of propranolol was 74 mg/day (Table 1). The mean follow-up period was 43.7 months, and the mean time to rebleeding was 22.8 months. Inadequate users (the noncompliance group) were defined as those prescribed an average beta-blocker dose of <40 mg/day, with a mean dose of 31.6 mg/day.

Validation of Operational Definitions
Patient records that met the operational definitions were extracted from two independent hospitals to confirm the operational definitions and their agreement (Figure 2). Records of 144 patients were extracted from the two hospitals as variceal bleeding according to its operational definition, and all of these (100%) were consistent with variceal bleeding caused by liver cirrhosis. Records of 87 patients were extracted as large-volume ascites according to its operational definition, and 85 of these (97.7%) were consistent with large-volume ascites. Two inconsistencies were noted: one patient with cirrhosis underwent surgery for panperitonitis but did not have large-volume ascites, and one cirrhotic patient underwent brain aneurysm surgery.

Effects of Low-Dose Propranolol on Overall Mortality
The Kaplan-Meier survival curves showed that the mortality rate was lower in the low-dose propranolol groups (40-120 mg/day) than in the inadequate user group (<40 mg/day) (p < 0.001 and p = 0.0028, respectively) (Figure 3A). However, this advantage of propranolol was not observed in the high-dose propranolol group (≥120 mg/day). Data were analyzed according to the cause of cirrhosis (viral and non-viral). Propranolol at 40-120 mg/day decreased overall mortality in the viral cirrhosis group (p = 0.003; Figure 3B), whereas in the non-viral cirrhosis group only propranolol at 40-80 mg/day decreased overall mortality (p = 0.006; Figure 3C).
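For readers who want to reproduce this kind of dose-stratified survival comparison, the sketch below shows one plausible reading of the analysis: the average daily dose is taken as the total amount prescribed in the first year divided by the days of medication, patients are binned into the paper's dose strata, and survival is compared with Kaplan-Meier curves and a log-rank test. The data frame, its column names, and the compliance formula are assumptions for illustration; this is not the authors' code. It uses the third-party lifelines library:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: first-year propranolol exposure and follow-up.
df = pd.DataFrame({
    "total_mg_first_year": [9000, 11000, 22000, 26000, 33000, 38000, 47000, 55000],
    "days_on_drug":        [300, 365, 365, 365, 340, 365, 365, 365],
    "followup_months":     [20, 15, 43, 50, 36, 40, 12, 18],
    "died":                [1, 1, 0, 0, 0, 1, 1, 1],
})

# One plausible reading of "average prescription dose": total amount / days.
df["avg_dose_mg_day"] = df["total_mg_first_year"] / df["days_on_drug"]
bins = [0, 40, 80, 120, float("inf")]               # the paper's dose strata
labels = ["<40 (inadequate)", "40-80", "80-120", ">=120"]
df["group"] = pd.cut(df["avg_dose_mg_day"], bins=bins, labels=labels, right=False)

# Kaplan-Meier survival per dose group.
kmf = KaplanMeierFitter()
for name, g in df.groupby("group", observed=True):
    kmf.fit(g["followup_months"], g["died"], label=str(name))
    print(name, "median survival (months):", kmf.median_survival_time_)

# Log-rank test: inadequate users vs. one low-dose stratum.
ref = df[df["group"] == "<40 (inadequate)"]
cmp = df[df["group"] == "40-80"]
res = logrank_test(ref["followup_months"], cmp["followup_months"],
                   event_observed_A=ref["died"], event_observed_B=cmp["died"])
print("log-rank p-value:", res.p_value)
```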
Effects of Low-Dose Propranolol on Rebleeding Rate
The Kaplan-Meier curves showed that propranolol treatment at all doses decreased the varix rebleeding rate in patients with tense ascites to a greater extent than inadequate propranolol use (<40 mg/day) (Figure 4A). Propranolol decreased varix rebleeding in patients with both viral and non-viral cirrhosis (Figure 4B,C).

Discussion
This study showed that low-dose propranolol (40-120 mg/day) as secondary prophylaxis decreased overall mortality and the recurrence of varix bleeding. It used the database of HIRA, which covers 97.2% of the total population of Korea (n = 49,989,620), and is the first large-scale study to identify the effects of propranolol as secondary prophylaxis in patients with large-volume ascites. Beta-blockers are a commonly used treatment modality for reducing portal venous pressure and preventing rebleeding in cirrhotic patients with varices [7,19]. The rationale for using nonselective beta-blockers in varices is to reduce portal vein pressure by decreasing blood flow into the portal vein. Non-selective beta-blockers reduce cardiac output and block the adrenergic dilatory tone of the mesenteric arterioles, leaving only alpha-adrenergic-mediated vasoconstriction. They have been used for many years as pharmacological treatment for the prevention of variceal bleeding because of this vasoconstriction-mediated lowering of portal pressure [20,21]. The prevalence of rebleeding was low in the beta-blocker group of a randomized controlled trial [2], and a meta-analysis showed that beta-blockers were effective in reducing mortality (absolute risk reduction = 7%) [22]. However, in a study published in 2012, the use of beta-blockers in patients with variceal bleeding events was associated with an increased incidence of death and rebleeding [9]. The use of beta-blockers in patients with refractory ascites may produce a fragile hemodynamic status through a decrease in cardiac output, leading to decreased organ perfusion and increased mortality. It has therefore been suggested that beta-blockers should not be used in patients with hypotension or organ dysfunction [23,24].
In subsequent studies, low-dose non-selective beta-blockers appeared safe in patients with refractory ascites, although the effects of beta-blockers in refractory ascites remain controversial [25,26]. The purpose of this study was to evaluate the efficacy and safety of beta-blockers in a large number of patients with variceal bleeding.

In this study, no difference in the effect of propranolol was observed according to etiology. In other preliminary studies, no difference in effect was observed after adjusting for etiology either [2,14,25]. Regardless of the etiology of liver cirrhosis, varices and ascites in cirrhotic patients are caused by increased portal venous pressure, so the effect of beta-blockers is thought to be independent of the underlying cause. In addition, propranolol at less than 120 mg/day had a beneficial effect in this study, but this benefit was lost at doses greater than 120 mg/day. This tendency is similar to the results of previous studies suggesting that low-dose beta-blockers below 80 mg are safe [26]. The mean dose of NSBB used in Serste's study [9], which reported results different from ours, was 113.25 mg, higher than the average dose in this study and in previous papers. In addition, the disease severity in that study was probably higher than in ours and in other reports, because 67.5% of the patients were Child-Pugh class C and the mean survival was only 8 months. Previous studies analyzed only the relationship between the mean beta-blocker dose and mortality. In this paper, we propose a clearer cut-off for safe use by presenting beta-blocker usage by dose intervals. High-dose beta-blockers may further reduce portal venous pressure; conversely, cardiac reserve may be reduced, increasing complications such as acute renal failure or hepatorenal syndrome. Caution is therefore warranted with high-dose beta-blocker use.

The most critical point in using reimbursement claim data is to balance the baseline characteristics of the compared groups. Owing to the nature of the data, there were no laboratory data, such as albumin or prothrombin time, with which to evaluate the exact severity of disease. To compensate for this, we attempted to make the patient groups as homogeneous as possible. First, we included only patients with variceal bleeding as the first decompensation, reviewing each patient's reimbursement claim data recorded over the preceding two years.
We also excluded patients with any of the following complications within the preceding 2 years: tense ascites (paracentesis: C8050, C8051, and Q2470), variceal bleeding (variceal ligation: Q2430-Q2438, Q7631-Q7634), use of any vasoactive drug (octreotide, vasopressin, terlipressin, or somatostatin), hepatic encephalopathy (lactulose enema: M0076), and hepatorenal syndrome (co-administration of terlipressin and albumin). Second, our definition of tense ascites was homogeneous. Although patients with tense ascites who did not undergo paracentesis or who used high-dose diuretics were not included in this study, the definition was applied uniformly: we selected patients with large-volume ascites, defined as those who were hospitalized and underwent paracentesis with an aspirated fluid volume of >3 L and a blood albumin concentration of <3.0 g/dL. The reimbursement conditions for albumin are very strict in HIRA; all physicians must submit serum albumin test results as well as a paracentesis code for albumin to be reimbursed. All patients who died within 1 year of variceal hemorrhage were excluded because we compared mortality and rebleeding rates with medication compliance, and medication compliance was calculated on the basis of adherence to medication during the first year after bleeding; patients who died within one year therefore could not be classified. In addition, the inadequate group was defined as those who were inadequately prescribed propranolol (<40 mg/day) during the first year, rather than those who did not take propranolol at all. We excluded patients who were not prescribed propranolol at all in the year after variceal bleeding, because they might have had severe comorbidities (for example, hypotension, chronic obstructive pulmonary disease, diabetes, or other cardiovascular diseases).

To date, several studies on the efficacy and safety of NSBBs have been conducted in patients with large-volume ascites, with conflicting results [9,11-14]. These discrepancies are attributed to the different doses of NSBBs used. Serste first reported that the use of beta-blockers increases mortality in patients with large-volume ascites [9]; in that study, the mean dose of NSBBs was 113.25 mg/day. However, in the following three studies, the mean NSBB dosage was <80 mg. On the basis of these data, low-dose (<80 mg) NSBBs are believed to be relatively safe for patients with large-volume ascites [16,26]. Our data support these recent findings. We used several cut-off doses of propranolol (40-80 mg/day, 80-120 mg/day, and ≥120 mg/day) to evaluate the benefit of propranolol as secondary prophylaxis in patients with large-volume ascites and found that propranolol at low doses reduced overall mortality and the rebleeding rate. To the best of our knowledge, the present study had a relatively long follow-up period, averaging 43.7 months (3.64 years), and included more than 2,000 patients with large-volume ascites, whereas all previous studies had short follow-up periods (<10 months) and small numbers of subjects (<150 people).

However, this study had some limitations. First, we could not confirm the medical records of individual patients because we used data recorded for insurance claims, and we defined large-volume ascites and variceal hemorrhage using operational definitions. To mitigate these problems, we examined the validity of the operational definitions by reviewing medical records from two hospitals.
Second, the inadequate user group was defined as a propranolol noncompliance group (inadequate dose of propranolol) rather than as a non-user group. The mean dose of propranolol in the inadequate user group was 31.6 mg/day, because almost all patients took propranolol after variceal bleeding according to the guidelines in the absence of contraindications. Patients who did not use NSBBs after variceal bleeding were not included as controls because they were highly likely to have severe cardiovascular or respiratory illnesses or a very poor general condition. As such, the inadequate group was defined as those who received propranolol for more than 30 days after variceal bleeding but were improperly prescribed an average dose of <40 mg/day. In a previous study, the appropriate beta-blocker dose in Korean patients, titrated according to the hepatic venous pressure gradient after taking propranolol, was 154.4 ± 59.4 mg [27]. Third, we defined propranolol adherence on the basis of the dose of propranolol prescribed in the first year. However, medication compliance in the first year is not necessarily representative of compliance during the entire treatment period. We analyzed only first-year adherence because calculating the mean propranolol dose over the entire study period would itself have been affected by survival, and thus would have biased the statistical analyses. Fourth, we assumed that the effect of other comorbidities (e.g., diabetes, hypertension, and kidney disease) on short-term mortality was limited, because life expectancy in decompensated cirrhosis is very short, with a mean survival of only about two years. Nevertheless, additional information that could help assess patient severity, such as comorbidities and comorbidity burden, prior hospitalizations or emergency room visits, and prior healthcare costs, would help readers judge whether there were baseline differences between patients over a pre-defined look-back period; we were unable to address or adjust for these important factors.

In conclusion, low-dose propranolol (40-120 mg/day) as secondary prophylaxis decreased the overall mortality rate in patients with tense ascites. This finding has potential applications in clinical practice: secondary prophylaxis using low-dose propranolol (40-120 mg/day) after variceal bleeding can be safe even in patients with large-volume ascites.
2019-05-01T13:04:03.008Z
2019-04-26T00:00:00.000
{ "year": 2019, "sha1": "d8988d32de883d0bc544ebecaffe39e7b9a1366a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/jcm8050573", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d8988d32de883d0bc544ebecaffe39e7b9a1366a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258488069
pes2o/s2orc
v3-fos-license
Factors associated with caregiver compliance to an HIV disclosure intervention and its effect on HIV and mental health outcomes among children living with HIV: post-hoc instrumental variable-based analysis of a cluster randomized trial in Eldoret, Kenya

Background
The HADITHI study is a cluster-randomized trial of children living with HIV and their caregivers in Kenya that aimed to increase rates of caregiver disclosure of their child's HIV status, encourage earlier status disclosure, and improve pediatric mental health and HIV outcomes. This analysis identified characteristics predicting caregiver non-responsiveness and compared outcomes among children based on disclosure status.

Methods
A penalized logistic regression model with lasso regularization identified the most important predictors of disclosure. The two-stage least squares instrumental variable approach was used to assess outcomes accounting for non-compliance to disclosure.

Results
Caregiver non-isolation and shorter time on antiretroviral therapy were predictive of HIV status disclosure. There were no statistically significant differences found in CD4 percentage, depression status, or mental and emotional status based on disclosure status up to 24 months post-intervention.

Conclusion
These findings have implications for specialists seeking to tailor disclosure interventions to improve caregiver-child dyad responsiveness.

Introduction
In 2021, there were approximately 1.7 million children living with HIV and 160,000 children newly infected with HIV globally (1). As these children age, appropriate timing and methods of disclosure of HIV status become important components in the management of their health. The World Health Organization recommends that the HIV disclosure process for children perinatally infected with HIV should be started by age six and completed by age twelve (2). Adolescents living with HIV who are aware of their HIV status have been found to have improved adherence to antiretroviral treatment (3-8), knowledge of sexual and reproductive health (5,8), and psychological wellbeing (3,5,9) across multiple global settings. In addition, disclosure provides adolescents critical autonomy and personal control over their health, which becomes increasingly important as they navigate the lifelong impact of their HIV status (10,11). Despite the clear importance of disclosure for children living with HIV, most adolescents in resource-limited settings remain undisclosed. Up to 50% of adolescents across studies in low- and middle-income countries were told non-HIV-related reasons for their HIV illness and healthcare visits (12). These differences are most prominent in African countries, which are home to 75% of children living with HIV under the age of fifteen and have HIV disclosure rates for HIV-positive adolescents between 15 and 64% (4,13-17). Qualitative studies have found that caregivers choose not to disclose their child's HIV-positive status due to their beliefs about HIV-related stigma, including stereotypes associating HIV with immorality and death, concerns about the impact of others' stereotypes about HIV on the child, worry about how others will treat their family if others found out about the diagnosis, and fear of a negative psychological reaction from the child (18-21).
Based on this literature, it is well understood that HIV disclosure interventions are successful in improving disclosure rates and play an important role in supporting the overall health and wellbeing of adolescents living with HIV. Yet an important assumption remains unquestioned in existing research: is the action of disclosure itself, or participation in a disclosure intervention, more predictive of improved health outcomes? In other words, does disclosure matter after a dedicated HIV disclosure intervention, or could these same outcomes be found from participation in the intervention even without disclosure? Relatedly, are there common characteristics among caregivers who decide to disclose post-intervention, and are these relevant to their child's post-disclosure outcomes? This study attempts to answer these questions by employing instrumental variable analysis to isolate disclosure as an independent variable separate from intervention participation, using data from a cluster-randomized controlled trial of an HIV disclosure intervention conducted in Kenya from 2013 to 2015 (22). Using this statistical methodology, we analyze factors associated with compliance to disclosure after completion of a disclosure intervention and assess the impact of disclosure on HIV, mental, and behavioral health outcomes of adolescents living with HIV.

Methods
The HADITHI intervention for HIV disclosure
The Academic Model Providing Access to Healthcare (AMPATH) Consortium is a partnership established in 2001 between 14 universities and academic health centers across North America and Moi University and Moi Teaching and Referral Hospital in Eldoret, Kenya, that aims to provide comprehensive and preventative care, advance research findings, and educate medical students, residents, and community healthcare workers. Based on over a decade of medical practice and research related to pediatric HIV/AIDS in western Kenya, the AMPATH consortium developed a culturally adapted, multicomponent intervention to support disclosure of HIV status to perinatally infected children, referred to collectively as the HADITHI ("Helping AMPATH Disclose Information and Talk about HIV Infection") intervention. The disclosure intervention included patient-centered materials to guide disclosure, disclosure counselors, and post-disclosure child support groups to supplement usual care resources (30). A cluster randomized controlled trial (Vreeman 1R01MH0099747-01, "Patient-Centered Disclosure Intervention for HIV-Infected Children") conducted between 2013 and 2015 evaluated the effectiveness of the HADITHI intervention on 285 caregiver-child dyads recruited from eight facilities in Eldoret, Kenya, using an as-treated approach with intensive clinical and psychosocial assessments at 6-month intervals until 2 years post-intervention. The primary outcome was prevalence of HIV disclosure, and pre-specified secondary outcomes included mental and behavioral outcomes for children living with HIV.

Current study design
The overall objective of this post-hoc study was to examine the effect of non-adherence to disclosure among HADITHI intervention participants. In this study, we applied instrumental variable estimation to the intention-to-treat analysis previously published in 2019 (22).
Our primary aims were two-fold: (1) to identify covariates that predict caregiver responsiveness to the HADITHI HIV disclosure counseling intervention, and (2) to assess the impact of disclosure on HIV, mental health, and behavioral outcomes while accounting for non-compliance.

Patient selection
Two hundred and eighty-five caregiver-child dyads were recruited for the HADITHI trial from eight facilities in Eldoret, Kenya between June and August 2013. We restricted our analysis to children who were not disclosed to at baseline. Disclosure was defined as a binary variable (whether or not the child knew his/her HIV status) as reported by either child or caregiver via disclosure questionnaires. Of the 285 caregiver-child dyads, 146 children (51.2%) were non-disclosed at baseline by either caregiver or child report.

Measures
Disclosure post-intervention was defined as a binary variable of whether or not the child knew his/her HIV status as reported by both child and caregiver via disclosure questionnaires. Disclosure was assessed at 6-month intervals from baseline (immediately post-intervention) to 24 months post-intervention. Eighty demographic and clinical covariates were extracted from children's medical files or compiled from baseline questionnaires provided to caregivers and children during the HADITHI trial, including the Pediatric AIDS Clinical Trials Group General Health Assessment for Children Quality of Life Questionnaire, the Strengths and Difficulties Questionnaire-Youth Version (SDQ), the Patient Health Questionnaire nine-item depression instrument (PHQ-9), and locally developed and validated adherence and Stigma in AIDS Family Inventory (SAFI) stigma questionnaires (Supplementary Table 1). Details regarding the measures, their administration, and their use in the HADITHI trial have been previously published (31,32).

Statistical analysis
Sample characteristics and distributions of categorical predictors were summarized using numbers and percentages for categorical variables, mean and standard deviation for normally distributed continuous variables, and median and interquartile range for non-normally distributed continuous variables. Baseline characteristics of children in the intervention and control groups were compared using Pearson chi-square tests, Fisher's exact tests, two-sample t tests, and two-sample Wilcoxon rank sum tests, as appropriate, to test for statistical significance.

Caregiver compliance to disclosure
The least absolute shrinkage and selection operator (LASSO) penalized regression was used to select the best subset of predictors of disclosure for the 60 caregiver-child dyads who were randomized into the intervention group. We defined disclosure as a binary response of "disclosed" vs. "not disclosed," with disclosure defined as caregiver-reported disclosure at any point within the 24-month follow-up period. Caregiver-child dyads who participated in the intervention were analyzed to select the best subset of multilevel predictors (HIV, mental health, overall health, economic, household, or community-related factors) of disclosure. LASSO selects a subset of predictors by shrinking the coefficients of the least contributive variables to zero, thereby excluding them from the model. LASSO combines variable selection with coefficient shrinkage and is particularly useful in studies such as this one, where the number of observations is smaller than the number of variables and there are groups of correlated variables. To identify relevant predictors, all explanatory variables available in the data set were entered into the LASSO procedure.
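As a rough illustration of this selection strategy (not the SAS procedure actually used by the authors), the following Python sketch fits an L1-penalized logistic regression to repeated bootstrap resamples of simulated data with the same shape as this study (60 dyads, 80 covariates) and counts how often each covariate survives the penalty. The penalty strength C and the simulated data are assumptions, and the stability-counting criterion is a simplification of the validation-ASE criterion described below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 60 dyads x 80 baseline covariates.
X = rng.normal(size=(60, 80))
y = rng.integers(0, 2, size=60)          # 1 = caregiver disclosed

selected_counts = np.zeros(X.shape[1])
B = 1000                                  # bootstrap repetitions, as in the paper
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))             # resample with replacement
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[idx], y[idx], test_size=0.5, random_state=0)
    if len(np.unique(y_tr)) < 2:                      # skip degenerate resamples
        continue
    # L1-penalized (LASSO) logistic regression shrinks weak predictors to zero.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_tr, y_tr)
    selected_counts += (model.coef_.ravel() != 0)

# Predictors retained most often across bootstrap splits are the "stable" ones.
top = np.argsort(selected_counts)[::-1][:5]
print("most frequently selected covariate indices:", top)
```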
To obtain a reliable measure of model validity, we used a conventional validation technique and randomly split the data into two data sets: 50% as training data, on which variable selection via the LASSO was done, and 50% as test data, on which the logistic regression and corresponding pseudo R-squared were calculated. The dependent variable was disclosure after baseline. Independent variables included the 80 demographic and clinical covariates described above. In the test data, the most important predictors were selected using the validation average squared error (ASE). To reduce the random effect arising from the random split of the data, we used bootstrapping to repeat the random partition 1,000 times with replacement, and the average value of the obtained ASEs was calculated to produce the best regularization model (33).

Local average treatment effect
Instrumental variable (IV) methods were used to assess outcomes of children with HIV while accounting for non-compliance to disclosure among intervention participants (Figure 1). While standard intent-to-treat analysis estimates the effect of treatment assignment on outcomes, this does not carry a causal interpretation in the presence of treatment non-compliance. IV methods provide an alternative approach by using "instruments" to isolate the variance in outcomes attributable to non-compliance with the intervention (34,35). IV analysis was chosen because its estimates have been shown to be unbiased when non-compliant behaviors are systematically dependent on patients' conditions (36), which was consistent with other studies of HIV disclosure compliance (4,14,16). Randomization at the clinic level was used as the instrument for IV analysis because this instrument satisfied the two required assumptions: (1) the instrument affects the processes patients receive, and (2) the instrument is not correlated with unmeasured factors or directly related to outcomes (37,38).

Figure 1. Causal diagram for the intent-to-treat HADITHI intervention data analysis, with an updated causal diagram for the instrumental variable analysis isolating the HADITHI counseling intervention from HIV disclosure to depict potential non-compliance.

Randomization into the intervention or control group directly affected the treatment that the patient received. It was expected that the cluster randomization and the study design would also not be correlated with unmeasured factors affecting study outcomes. However, this may not always be the case, since cluster randomization at the clinic level may not fully balance patient characteristics between groups. When randomization influences treatment monotonically, IV methods estimate the causal effect among the adherers, also known as the local average treatment effect (LATE) or complier-average causal effect (39). IV models were estimated using the two-stage least squares (2SLS) approach (40). The fully specified 2SLS model for each outcome included a first-stage disclosure equation explained by the control variables (gender, age, and tribe) and one indicator variable for the instrument (1 if the subject was in the intervention group, 0 otherwise). Because the outcome measures were repeated measures, the model also included a variable representing time since disclosure and an interaction term of time period * intervention status. In the second stage of 2SLS, the predicted disclosure values and the same set of control variables were used to estimate the process effects on the outcome.
The second-stage 2SLS model was run independently for data at post-intervention time points of 6, 12, 18, and 24 months. The regression coefficient for the predicted treatment received in the second stage of the model is a consistent estimator of the LATE if the first-stage model is a linear regression containing all the variables appearing in the second stage (41,42). As a comparison, ordinary least squares (OLS) linear regression models were used to estimate the effects of the processes on outcomes using an as-treated approach. The outcome model included intervention status and the control variables and was again run independently at all four post-intervention time points. SAS Studio 3.8 (Enterprise Edition) was used for data management, descriptive statistics, comparisons, diagnostic tests, and the regression analyses. A statistical significance level of 5% was used for all analyses, and clinical significance was defined as a 5-fold change in the test statistic.

Results
Baseline demographic and clinical characteristics
Of the 146 non-disclosed caregiver-child dyads who participated in the HADITHI trial, 130 (89%) completed all follow-up assessments and were included in this secondary analysis. Among all non-disclosed participants, the median child age was 11.42 years and 55% were girls (Table 1). The majority of primary caregivers (n = 86) were the child's biological mother. Forty-eight children (37.5%) had Stage 3 HIV disease at baseline. Demographic characteristics were not significantly different between non-disclosed caregiver-child dyads in the control and intervention subgroups except for tribe (p < 0.0001). The plurality of non-disclosed dyads in the control group were members of the Luhya tribe (45%), while the plurality of non-disclosed dyads in the intervention group were members of the Kalenjin tribe (40%).

Caregiver compliance to disclosure
Ninety-one percent of caregiver-child dyads who participated in the HADITHI intervention completed follow-up disclosure questionnaires until 24 months post-intervention. Eighty clinical and demographic covariates collected from the 60 caregiver-child dyads who participated in the HADITHI intervention were entered into the regression procedure. Two of the 80 predictors, caregiver isolation (test statistic = 8.79, p = 0.0030) and length of time on ARVs (test statistic = 5.20, p = 0.0226), were retained as significant in the regression model (Table 2). The model's discrimination between caregivers' disclosure status was strong, with an area under the curve (AUC) of 81.21%. Caregivers who experienced isolation, defined as responding "ever happened" to the prompt "Because the child or someone else in my family has HIV or because I have HIV, I am isolated or avoided by others" on the SAFI stigma questionnaire, were significantly less likely to disclose their child's HIV status to them post-intervention. While 44 out of 52 (84.6%) non-isolated caregivers disclosed HIV status after the intervention, only 3 out of 8 (37.5%) isolated caregivers disclosed. In addition, children who had been on ARVs for a longer time prior to the intervention were less likely to be disclosed to about their HIV status post-intervention. The mean (SD) length of time on ARVs for children who were disclosed to was 1.3 (1.5) years, while that of those who were not disclosed to was 2.5 (2.4) years.
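Before turning to the outcome comparisons, a minimal sketch of the two-stage procedure may help make the estimation concrete. The Python code below hand-rolls 2SLS on simulated data: randomization (Z) instruments for disclosure (D), the first stage regresses disclosure on the instrument and a control, and the second stage regresses the outcome on the fitted disclosure values. All variable names and data are hypothetical, and this is not the authors' SAS model; plain second-stage OLS standard errors are also not valid for inference:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 130  # roughly the analyzed sample size

# Hypothetical variables: Z = cluster-randomized arm (the instrument),
# X = a control (e.g., child age), D = disclosure (endogenous, imperfect
# compliance), Y = an outcome such as an SDQ score.
Z = rng.integers(0, 2, n)
X = rng.normal(11.4, 2.0, n)
D = (0.5 * Z + 0.4 * rng.normal(size=n) > 0.3).astype(float)
Y = 2.0 + 0.0 * D + 0.05 * X + rng.normal(size=n)  # true disclosure effect = 0

# Stage 1: regress disclosure on instrument + controls; keep fitted values.
stage1 = sm.OLS(D, sm.add_constant(np.column_stack([Z, X]))).fit()
D_hat = stage1.fittedvalues

# Stage 2: regress the outcome on predicted disclosure + the same controls.
# The coefficient on D_hat estimates the LATE among compliers. NOTE: the
# naive standard errors of this second stage are invalid; dedicated IV
# routines (e.g., linearmodels' IV2SLS) compute corrected ones.
stage2 = sm.OLS(Y, sm.add_constant(np.column_stack([D_hat, X]))).fit()
print("estimated LATE of disclosure:", stage2.params[1])
```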
Local average treatment effect on clinical HIV and mental and behavioral health outcomes
IV regression was conducted to compare clinical outcomes between intervention participants who disclosed HIV status post-intervention and those who did not. There were no significant differences in CD4 count, PHQ-9 scores, or SDQ scores between the children of intervention compliers and non-compliers at 6 months, 12 months, 18 months, or 24 months post-intervention (Table 3). All p-values were >0.16, indicating no statistical significance. OLS regression was also conducted for comparison using the predetermined subset of caregiver-child dyads who were non-disclosed at baseline. There were also no significant differences in CD4 count, PHQ-9 scores, or SDQ scores between those who completed the intervention and those who were in the control group at any point post-intervention. All p-values were >0.12. When comparing OLS and IV methodology, a clinically significant difference between models was defined as a 5-fold difference in the test statistic, i.e., the difference in outcomes between intervention participation (OLS methodology) and disclosure (IV methodology). IV analysis found a clinically significant decrease in SDQ scores at 6 months (0.031 vs. ...).

Discussion
This study defined caregiver-child characteristics that predicted caregiver compliance to a disclosure intervention for children living with HIV and their families in western Kenya. We also used instrumental variable methods to explore local average treatment effects of disclosure post-intervention on HIV, mental health, and behavioral health outcomes. In completing this analysis, we aimed to isolate disclosure as a variable separate from intervention participation in order to assess the prediction and impact of disclosure independently. Two caregiver-child characteristics were found to be predictive of compliance to the HIV disclosure intervention: caregiver isolation and the length of time that the child had been taking ARVs. While almost 85% of non-isolated caregivers disclosed their child's HIV status to them after the intervention, less than 40% of caregivers who self-reported feeling isolated complied with HIV disclosure. This finding substantiates qualitative research conducted within this population prior to the intervention, when caregivers cited their own isolation and fears of their child's isolation post-disclosure as reasons for not previously disclosing their child's HIV status (18). It also links to studies in other settings that cite caregiver isolation as a barrier to HIV disclosure for children living with HIV (43-45). This study used the SAFI questionnaire definition of caregiver isolation as self-report of anxiety or isolation based on their own or a family member's HIV status. This introduces additional considerations that may impact caregiver isolation leading to non-disclosure, including beliefs about HIV-related stigma and stigma regarding their own HIV status. This link has been studied in other settings, with lack of knowledge about HIV and HIV stigma found to be barriers to disclosure among caregivers of children with HIV (13,15,16). Studies also show a statistically increased likelihood of disclosure with caregiver beliefs that disclosure has benefits (14,16) and with participation in HIV-positive communities (46). Those with more isolation may not feel they have a safe place to share their worries about disclosure, and they may not have the contacts to hear about the potential benefits of disclosure.
Each of these factors may contribute to the feelings of isolation self-reported by caregivers in this study. In combination, this research suggests that HIV-related stigma may isolate caregivers and affect their willingness to disclose HIV status, even after a specific curriculum within a disclosure intervention to address and attempt to combat HIV stigma. These isolated caregivers may continue to have substantial fear about the potential for a disclosed child to share their HIV status with others and subsequently be subject to further stigma. It also suggests the need for increased focus on caregiver HIV status and stigma within pediatric HIV disclosure interventions, which has already been implemented in various disclosure models (47,48). Caregivers of children who were on ARVs for a longer time were also less likely to disclose post-intervention in this study. The average length of time on ARVs for children who were disclosed to was more than a year shorter than for those who were not disclosed to. This result is surprising, since other studies have found that increased length of time on ARV medication was significantly associated with HIV disclosure (14,15,17). This variable may be a proxy for the length of time since a child's HIV diagnosis, suggesting that the longer a caregiver has hidden HIV status from their child and created false narratives about their HIV treatment, the more difficult it may be for the caregiver to disclose. One study in Zimbabwe provided insight into this potential issue, as caregivers noted that they did not disclose to their children because they felt that the child would reject the caregiver in anger for not disclosing sooner (49). This has also been found in other studies from Kenya among adults with HIV, who were shown in some studies to be less likely to disclose their own HIV status to others if they had hidden their status for a longer time (50,51). Additional studies should assess the impact of length of time on ARVs on rates of pediatric HIV disclosure across contexts to determine when to administer disclosure interventions for optimal compliance. Of note, this study did not find that any of the other 78 baseline caregiver-child characteristics were significantly associated with disclosure post-intervention. The full list of baseline characteristics included in the regression model can be found in the Supplementary Table. Other studies have found a variety of other characteristics to be statistically significantly associated with disclosure in other settings, including child age, place of follow-up, caregiver educational level, child weight, and child sex (4,13-17). It is possible that a larger sample size may have found more associations, or that something specific to each study setting led to these differences. It is also important to consider that this study assesses characteristics of compliance to disclosure after a disclosure intervention, rather than assessing the prevalence of disclosure outside of an intervention. We did find characteristics similar to those of other studies when predicting baseline disclosure, as described elsewhere (6), but only children who were not disclosed to at baseline were included in our analysis. Our study is the first to assess characteristics of compliance to disclosure after a specific intervention in this context. Our study did not find statistically significant differences in CD4 percentage, PHQ-9 score, or SDQ score between children based on disclosure status across the intervention.
Disclosure was shown not to impact mental health or HIV outcomes, which contradicts caregiver fears that disclosing HIV status would worsen children's mental health and control over HIV treatment (18), a reported reason for non-disclosure in other low-resource settings (52). These results may be encouraging to share with community members in future interventions to lessen anxieties about the potential negative impacts of disclosure for their children. In addition, there appear to be mixed clinically significant differences in PHQ-9 and SDQ scores across time points when using IV analysis as compared to OLS. Our results may suggest clinically relevant mental health changes for adolescents with newly disclosed HIV, with improved behavioral health outcomes found at 6 and 12 months in the IV analysis, followed by clinically relevant worsening of depression and behavioral health outcomes at 12 and 24 months in the IV analysis. Given the lack of overall statistical significance, in combination with the mixed results between the IV analysis and the OLS analysis, we cannot conclude whether disclosure impacted the mental health of adolescents who participated in the HADITHI intervention. A larger trial may show transient yet statistically significant changes in mental and behavioral health outcomes within 1 year post-disclosure that return to baseline over time. These outcomes fit within a growing literature suggesting that disclosure of HIV status to children in resource-limited settings may lead to improved HIV, psychological, and quality of life outcomes. One unmatched case-control study of 309 children living with HIV in Tanzania found that patients who had their HIV status disclosed to them were more likely to have improved ART adherence, as measured by a treatment adherence manual, and improved quality of life, as measured by the World Health Organization Quality of Life standard tool (16). A second observational prospective cohort study of 160 children with HIV in Bangkok, Thailand evaluated the psychosocial outcomes of disclosure as measured by the Child Behavioral Checklist (53). Researchers found that the median depression score decreased significantly at 2-month and 6-month follow-up. Similar observational and cohort-based research has been published in Ghana (54), Namibia (25), and South Africa (55). Despite this initial evidence, however, randomized controlled trials have only assessed parental disclosure of their own HIV status to seronegative children (56,57) rather than caregiver disclosure of a child's own HIV status. This study was the first to assess the direct impact of disclosure using a cluster-randomized controlled trial. Although the HADITHI trial found improved outcomes among children living with HIV when comparing children who completed the disclosure intervention to those who did not (22), these results did not translate to this study's analysis isolating disclosure status from participation in the intervention. This suggests that participation in the intervention may support health outcomes even without ultimate compliance to disclosure. Future studies should continue to distinguish between outcomes attributed specifically to disclosure and those correlated with completion of a disclosure-focused intervention. Our study contained several limitations. First, this analysis was limited by its sample size, since the study was not sufficiently powered for these effect sizes.
Almost 50% of children who participated in the HADITHI intervention were already aware of their HIV status at baseline, significantly limiting the number of caregiver-child pairs in this analysis and thus the study's power. Second, the definition of disclosure used in this quantitative analysis could not capture the nuances of disclosure studied in this and other settings. Some caregiver-child dyads reported divergent answers for disclosure across the study, with the caregiver stating that they had disclosed to the child but the child not expressing knowledge of their HIV status, or vice versa. For this analysis, we defined disclosure as both the parent and the child reporting disclosure, but this may underestimate study results. In addition, it is well studied that disclosure is not a binary variable but a longitudinal process (49,55), which could not be addressed within the scope of this paper. Finally, the statistical methodology of instrumental variable analysis requires the assumption that the disclosure intervention itself did not impact HIV and mental health outcomes, instead attributing these changes only to disclosure. In reality, it is possible that participating in the intervention, even without disclosure, may have impacted the mental health or HIV outcomes of adolescents involved in the study. This is important to note especially when comparing the IV and OLS analyses, as these methods analyze different populations; the IV analysis compares only adolescents who completed the intervention, based on disclosure status, while the OLS analysis also includes adolescents who did not complete the intervention. This limits comparison of local average treatment effect test statistics from the IV analysis with average treatment effect test statistics from the OLS analysis. Despite these potential limitations, we chose the instrumental variable method to most closely approximate the isolated impact of disclosure in our study population.

Conclusion
Our study found that caregiver isolation status and the length of time a child had been on antiretroviral therapy were predictive of disclosure of HIV status to children living with HIV in western Kenya after participation in a disclosure intervention. We also found that children who had their HIV status disclosed to them and those who did not showed no statistically significant differences in CD4 percentage, depression status, or mental and emotional status up to 24 months post-intervention. The results of this study can inform future adaptation of the HADITHI intervention as well as disclosure interventions in other low-resource contexts. Disclosure interventions should integrate additional mental health and treatment adherence counseling aimed at continuing to stabilize mental health post-disclosure and improving mental health and HIV outcomes over time. Educational components of disclosure interventions can also be tailored to include concepts related to HIV-related stigma and caregiver and family isolation, and to target patients newly started on ARVs. Future research should replicate this study design elsewhere to understand factors contributing to disclosure compliance in other settings and further explore the impact of disclosure on health outcomes at later timepoints post-disclosure.

Data availability statement
The original contributions presented in this study are included in the article/Supplementary material. Additional data from the original HADITHI study and further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board at Indiana University School of Medicine in Indianapolis, Indiana, USA, and the Institutional Research Ethics Committee at Moi University School of Medicine in Eldoret, Kenya. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Funding
The HADITHI study was funded by a National Institutes of Health R01 research grant to RV (1R01MH0099747-01, Patient-Centered Disclosure Intervention for HIV-Infected Children). EM also received funding from the Icahn School of Medicine at Mount Sinai's Global Health Summer Research Program in support of this sub-analysis.
2023-05-05T13:27:41.270Z
2023-05-05T00:00:00.000
{ "year": 2023, "sha1": "e95c4c680ff7f71ef845b40576cdb8738672a1e1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "e95c4c680ff7f71ef845b40576cdb8738672a1e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
83504499
pes2o/s2orc
v3-fos-license
Generic revision of Notodasus Fauchald, 1972 (Polychaeta: Capitellidae) with descriptions of four new species from the coasts of Mexico

The capitellid genus Notodasus Fauchald, 1972 is emended and its previously known species are redescribed. Since the original descriptions of N. magnus Fauchald, 1972, N. dexterae Fauchald, 1973, and N. arenicola Hartmann-Schröder, 1992 omitted important morphological details and were either incomplete or misleading, these species are redescribed based upon examination of type materials. Four new species are described from tropical localities in Mexico: Notodasus harrisae n. sp., N. hartmanae n. sp., N. kristiani n. sp. and N. salazari n. sp. Standardised descriptions are provided for all species, including the methyl green staining pattern, the epithelial texture and the shape of the hooded hooks. A key to all described species is provided.

The genus Notodasus is one of the poorly studied capitellid groups. It was established by Fauchald (1972) to include N. magnus Fauchald, 1972, its type species, from Carmen Island (Gulf of California). The genus was erected as having 11 thoracic chaetigers with bilimbate capillary chaetae, the first chaetiger being uniramous and the first two abdominal chaetigers having only capillary chaetae. Fauchald (1973) later described N. dexterae from Naos Island (Pacific Panama), and Hartmann-Schröder (1992) described N. arenicola from Ascension Island (central Atlantic Ocean). These two species match the original diagnosis of Notodasus except that they both have biramous parapodia in the first chaetiger. The study of some specimens from several localities on the Atlantic and Pacific coasts of Mexico revealed some undescribed species belonging to Notodasus. This prompted the study of the type materials of all species, and the results are presented herein as a revision, with a redefinition of the genus and redescriptions of all previously known species, together with the description of four new species.

MATERIAL AND METHODS
Type material was examined from the Natural History Museum of Los Angeles County, Allan Hancock Foundation Polychaete Collection (LACM-AHF) and the Zoologisches Museum of the Universität Hamburg (ZMH). Material of the new species and non-type specimens were collected by hand, mostly intertidally, at several localities on both coasts of Mexico. Specimens were fixed in a 10% formaldehyde-seawater solution, washed with tap water to remove the fixing agent, and preserved in 80% ethanol. Holotypes and paratypes were deposited in the Colección Poliquetológica de la Universidad Autónoma de Nuevo León (UANL), and some paratypes were deposited in LACM-AHF, in the Muséum national d'Histoire naturelle, Paris (MNHN), and in the Zoologisches Museum, Universität Hamburg (ZMH). The methyl green staining pattern (MGSP) was used to determine specific patterns of glandular areas. We submerged specimens for one or two minutes in a solution of methyl green in 70% alcohol and washed them in several alcohol changes to eliminate the excess (Warren et al. 1994). Green (2002) included a detailed discussion of morphological terminology, principally on hooded hooks; in addition to the characters used by Green, we employed the proportion and form of the main fang (triangular or subtriangular).

SYSTEMATICS
Family Capitellidae Grube, 1862
Genus Notodasus Fauchald, 1972, emended
Notodasus Fauchald, 1972: 246-247, Pl. 51 Fig.
a-c.

Type species. Notodasus magnus Fauchald, 1972.

Diagnosis. Thorax with eleven chaetigers with bilimbate capillary chaetae. First chaetiger biramous. First two abdominal chaetigers with bilimbate capillaries on both rami, following ones with hooded hooks. Lateral organs and branchiae present.

Remarks. The genus Notodasus was erected by Fauchald (1972) for N. magnus. In the original description, Fauchald assumed that Notodasus had only a neuropodium on the first chaetiger, but our revision of the N. magnus holotype showed the presence of both a notopodium and a neuropodium on the first chaetiger. Notodasus differs from other capitellids by the presence of bilimbate capillaries on both rami of the first two abdominal segments. Notodasus is very close to Notomastus, both having eleven thoracic chaetigers with bilimbate capillary chaetae, but in Notomastus all abdominal parapodia possess only hooded hooks, while Notodasus has bilimbate capillary chaetae on the first two abdominal parapodia. Dodecaseta is also similar to Notodasus in that it has eleven thoracic chaetigers with bilimbate capillaries, but differs in the chaetal arrangement of the first two abdominal segments. The holotype of Dodecaseta oraria was found to have capillary chaetae mixed with hooded hooks in both the notopodia and the neuropodia of the first abdominal segment, whereas the second abdominal segment had only hooded hooks on both rami. McCammon and Stull (1978) explain the possible chaetal variation in Dodecaseta oraria: "the first abdominal neuropodium may bear all capillary chaetae, all rostrate uncini, or a mixture of both." Dodecaseta eibyejacobseni, described by Green (2002) from the Andaman Sea, could be included under Notodasus due to the presence of eleven thoracic chaetigers and bilimbate capillary chaetae on both rami of the first two abdominal chaetigers, but the type material was not available for a detailed revision. Notodasus differs from other genera with eleven thoracic segments as follows: Mastobranchus has the first two abdominal chaetigers with mixed hooks and capillaries; Rashgua lacks chaetae in the abdominal notopodia. Like Dasybranchus, Notodasus possesses thirteen chaetigers with capillary chaetae, but in Notodasus the transition between thorax and abdomen is marked by the change in segment length, evident in all its species, in which the first abdominal segment is about half the size of the last thoracic segments; furthermore, a constriction between the two regions is evident. In Dasybranchus the transition between the thorax and the abdomen is marked by the chaetal change, and no changes are observed in segment structure.

Notodasus arenicola Hartmann-Schröder, 1992
Figs 1A-E, 8A

Redescription. The following redescription is based upon the paratype; the holotype was not available. Paratype complete but broken in two, with more than 300 segments, 270 mm long, and 2 mm wide in the abdomen. Anterior fragment with 244 chaetigers, 220 mm long; posterior one with 56 chaetigers, 22 mm long. Colour in alcohol dark brown. Prostomium conical with a small palpode. Eyespots present. First nine segments, including peristomium, with tessellated epithelium, subsequent segments smooth (Fig. 1A). Thorax with 11 chaetigers with bilimbate capillaries in both rami. All segments biannulated. Chaetae inserted in middle part of thoracic segments. Thoracic notopodia lateral in the first segments, moving dorsally posteriorly.
Distinct lateral organs protruding between the noto- and neuropodia throughout the body; nearer to the notopodia, as a small pore, in the thorax, and closer to the neuropodia in abdominal segments. Genital pores not seen. Transition between thorax and abdomen marked by the abrupt reduction in abdominal segment length. Notopodial lobes of chaetigers 13-20 short, fused medially, with a line of approximately 70 hooded hooks, separated dorsally by a constriction along the abdomen (Fig. 1B); notopodial lobes separate from chaetiger 20. Abdominal neuropodial lobes fused ventrally, anteriorly with about 160 hooded hooks (Fig. 1C); posterior lobes shorter, with around 80 hooded hooks per fascicle (Fig. 1D). Notopodial and neuropodial abdominal hooded hooks similar along body, with long anterior shaft, angled node, moderate constriction, slight shoulder and short hood; posterior shaft curved, longer than anterior one, attenuated to the end. Four rows of teeth above triangular main fang: basal row with four teeth, middle basal row with three, middle apical row with three and distal row with two teeth (Fig. 1E).

Methyl green staining pattern. The stain differs in thorax and abdomen. Peristomium and thoracic chaetigers 1-5 staining slightly, following thoracic segments and first and second abdominal segments with medium intensity. Prechaetal area of the third abdominal segment with a dark transverse line; three longitudinal bands from the postchaetal area of the third abdominal segment, the central band less pigmented, the lateral ones darker, interrupted by the lateral organs and the intersegmental ring; central band interrupted by the intersegmental ring and the notopodial lobes (Figs 1A-B, 8A).

Habitat. Rock pools, between calcareous algae and rocky sand, 0.5 m depth.

Distribution. Panam Beach and Mars Bay, Ascension Island (central Atlantic Ocean).

Remarks. In the description of N. arenicola, Hartmann-Schröder (1992) mentioned the absence of eyespots in the holotype, but the paratype examined has eyespots partially covered by the peristomium. Also, she described lateral organs only to chaetiger 8; in our analyses we observed lateral organs throughout the entire body.

Notodasus dexterae Fauchald, 1973
Figs 2A-D, 8B

Thorax with 11 chaetigers, with bilimbate capillaries in both rami. Thoracic segments biannulated, abdominal ones uniannulated. Chaetae inserted in middle part of thoracic segments. Notopodia lateral in the first thoracic segments, moving dorsally in subsequent segments. Lateral organs present along the body, located between noto- and neuropodia; those on thoracic segments closer to notopodia, as small rounded pores; those on abdominal region closer to neuropodia, larger, projecting. Genital pores small, poorly visible, located between intersegmental rings of segments 8/9, 9/10 and 10/11. Transition between thorax and abdomen marked by the reduced length of the first two abdominal segments; remaining abdominal segments as long as thoracic ones (Fig.
Abdomen with notopodial lobes short, fused, with a small line of 9 hooded hooks per fascicle, separated dorsally along the abdomen (Figs. 2B-C). Neuropodial lobes projected to the dorsal region, each with a line of around 150 hooded hooks per fascicle, separated ventrally. Notopodial and neuropodial abdominal hooded hooks similar along the body, with long anterior shaft, indistinct constriction, bulbous node extended to posterior end, short shoulder, hood short, inserted medially on shoulder; posterior shaft shorter than anterior one. Five rows of teeth above triangular main fang, basal row with nine teeth, middle basal row with five, middle apical row with three, and distal two rows with only one tooth each. Main fang subtriangular, slightly longer than wide (Fig. 2D). Branchiae and pygidium not seen. Posterior part of body with eggs in coelom, each egg about 14.7 μm in diameter.

Methyl green staining pattern. Weakly staining from the peristomium to chaetiger 8; more darkly staining from thoracic chaetiger 9 to second abdominal segment. Intersegmental area stained slightly (Fig. 2A). Postchaetal area of second abdominal chaetiger with a transverse continuous dark line. From third abdominal segment, two dorsolateral dark bands, interrupted by parapodial lobes and lateral organs (Figs. 2B-C, 8B).

Distribution. Only known from Naos Island, Panama Bay.

Notodasus harrisae n. sp. (Figs. 3A-D, 8C)

Thorax with 11 chaetigers, with bilimbate capillaries in both rami. Thoracic and abdominal segments biannulated. Notopodia lateral in the first thoracic segments, moving dorsally in subsequent segments. Lateral organs present along the body, located between notopodia and neuropodia; those on thoracic region closer to notopodium, abdominal ones closer to neuropodial lobe. Thoracic lateral organs larger than abdominal ones. Genital pores not seen. Transition between thorax and abdomen marked by abrupt shortening of abdominal segments. Notopodial lobes of abdominal chaetigers 3-9 fused dorsally, each line of hooded hooks almost fused, each with approximately 50 hooks; following segments with a middle constriction, chaetal fascicles clearly separated (Fig. 3B). Neuropodial lobes fused, abdominal chaetal fascicles with about 135 hooded hooks. Notopodial and neuropodial abdominal hooded hooks similar along the body, with long anterior shaft, angled node, distinct constriction, developed shoulder, short hood, and posterior shaft longer than anterior one. Three rows of teeth above main fang, basal row with five teeth, middle with nine and distal one multidentate. Main fang subtriangular, longer than wide (Fig. 3D). Branchiae emerge from a ventral pore, evident from chaetiger 60, with around 14 well-developed filaments (Fig. 3C). Pygidium not present. Posterior part of body with eggs in the coelom, each egg about 68 μm in diameter.

Methyl green staining pattern. Peristomium and first seven chaetigers weakly staining; first and second abdominal chaetigers slightly darker staining ventrally and laterally; subsequent twenty abdominal segments with two dark dorsolateral longitudinal bands, separated by lateral organs (Figs. 3B, 8C). Remaining segments with a light green continuous discrete dorsal band. Abdominal ventrum with a narrow transverse band at the posterior margin of each neuropodial lobe.
Habitat. These specimens were collected from different substrates characterised by fine sands with high content of organic matter and shell fragments (Estero de Urías and Ensenada de la Paz), sandy beach with high energy (El Mogote), sandy beaches with low energy (El Tesoro Beach, Balandra Beach, La Choya Beach, Los Angeles Bay, Municipal Beach), sandy beaches with coarse sand (El Quemadito, Los Cocos Beach and Santispac Beach) and mangrove.

Type locality. El Tesoro Beach, La Paz Bay (Gulf of California).

Distribution. Gulf of California.

Etymology. The species is named in honour of Leslie Harris for her constant support and help during our visits to the Los Angeles Museum.

Remarks. Notodasus harrisae n. sp. resembles N. dexterae in having fused anterior abdominal notopodial lobes; however, those of N. harrisae are long and thin, while in N. dexterae they are short and wide. The number of hooded hooks on the notopodial lobes of the two species also differs: N. harrisae has around 50 hooks in each fascicle, while N. dexterae has 9 hooks in each fascicle. The hooded hooks of N. harrisae have an angled node and three apical rows of teeth; those of N. dexterae have a bulbous node and five apical rows of teeth.

Notodasus hartmanae n. sp. (Figs. 4A-D, 8D)

Description. Holotype anterior fragment with 44 segments, 34 mm long, 4 mm wide in abdomen. Paratypes and other specimens 15-35 mm by 2-4 mm. Colour in alcohol dark brown. Prostomium rounded with palpode. Eyespots present, partially covered by anterior margin of peristomium. Thorax, first two abdominal segments and prechaetal area of the third one with tessellated epithelium dorsally (Fig. 4A, B). Thorax with 11 biannulate chaetigers, with bilimbate capillaries in both rami. Notopodia lateral in anterior thoracic segments, moving dorsally in subsequent segments. Lateral organs present between notopodia and neuropodia; those of abdominal region closer to neuropodia, larger than thoracic ones. Genital pores absent. Transition between thorax and abdomen marked by an abrupt increase in the height of each segment; the first two abdominal segments triannulate, with bilimbate capillaries in both rami; subsequent abdominal segments shorter, triannulate, with only hooded hooks. Each abdominal segment longer than thoracic ones. Notopodial lobes free along all abdominal segments, each notopodium with around 80 hooded hooks (Fig. 4B). Neuropodial lobes extending from the ventrum to a laterodorsal position, with fascicles of around 200 hooks. Notopodial and neuropodial abdominal hooded hooks similar along body, with long anterior shaft, angled node, distinct constriction, developed shoulder, long hood, and posterior shaft longer than anterior one. Three rows of teeth above triangular main fang, basal row with six teeth, middle row with seven, distal row with three (Fig. 4D). Branchiae emerge from a ventral pore, evident from chaetiger 59, with around 15-20 well-developed filaments (Fig. 4C). Pygidium not seen. Posterior part of body with eggs in coelom, each egg 22.7 μm in diameter.

Methyl green staining pattern. Thoracic region weakly staining; dorsum of abdominal region slightly darkly staining except at the notopodial and neuropodial lobes; moderately staining longitudinal line with two lateral dark lines between notopodia and neuropodia (Figs. 4B, 8D). Abdomen with a dark, thin mid-ventral line (Fig. 4C).

Remarks. N. hartmanae n. sp. and N. salazari n. sp.
are similar in that, unlike other species in this genus, they have free abdominal notopodial lobes. These two species differ in epithelium rugosity: in N. salazari, the epithelium is tessellated up to chaetiger 7, becoming smooth posteriorly, while in N. hartmanae it is tessellated along the entire thorax and the first three abdominal chaetigers, becoming smooth posteriorly.

Habitat. Compact mud, among numerous tubes of Diopatra rhizophorae Grube, with high content of organic matter.

Distribution. Chiapas, Mexico.

Etymology. This species is named to honour the late Dr. Olga Hartman, in recognition of her many useful publications on polychaetes, especially her study on capitellids and her contributions to the study of polychaetes from western Mexico.

Notodasus kristiani n. sp. (Figs. 5A-D)

Description. Holotype incomplete, with 59 segments, 30 mm long, 3 mm wide in abdomen. Paratypes and other specimens 25-85 mm long, 2-4 mm wide. Colour in alcohol light brown. Prostomium conical with palpode. Eyespots present, covered by anterior margin of peristomium. Peristomium to the eighth thoracic chaetiger tessellated, remaining segments smooth (Fig. 5A). Thorax with 11 chaetigers, with bilimbate capillaries in both rami. Thoracic and abdominal segments biannulated. Notopodia lateral in the first thoracic segments, moving dorsally in subsequent segments. Lateral organs between notopodia and neuropodia throughout body; those of thoracic region closer to notopodium, those of abdominal region closer to neuropodial lobe; thoracic lateral organs larger than abdominal ones. Genital pores between segments 9-10. Transition between thorax and abdomen marked by abrupt shortening of abdominal segments. Notopodial lobes of abdominal chaetigers 3-7 fused dorsally (Fig. 5B), each line of hooded hooks completely separated, with around 20 hooks per fascicle. Neuropodial lobes separate ventrally, with chaetal fascicles of about 80 hooded hooks (Fig. 5C). Notopodial and neuropodial abdominal hooded hooks similar along the body, with long anterior shaft, bulbous node, indistinct constriction, developed shoulder, short hood, posterior shaft longer than anterior one. Four rows of teeth above triangular main fang, basal row with five teeth, middle basal row with seven, middle apical row with nine and distal row with two teeth (Fig. 5D). Branchiae emerge from a ventral pore, evident from chaetiger 51, with around 18 well-developed filaments (Fig. 5C). Pygidium not seen. Posterior part of body with eggs in coelom, each egg 113.5 μm in diameter.

Methyl green staining pattern. Peristomium and first five chaetigers weakly staining, chaetiger 6 to the second abdominal segment more darkly staining. Third to sixth abdominal chaetigers with two dorsal transverse, darkly staining bands, separated by notopodia and lateral organs (Figs. 5B, 8E); remaining abdominal segments moderately staining.

Habitat. Mud with high content of organic matter (Varadero Beach), soft sediments retained in Nestier boxes (Santa Marina Bay), and mud pockets between Mytilus edulis beds (Los Angeles Bay, Municipal Beach).

Distribution. Gulf of California and western coast of Baja California.

Etymology. The species is named in honour of Kristian Fauchald, who established Notodasus and described two of its species, and especially in recognition of his many contributions to the study of polychaetes.

Remarks. N. kristiani n. sp. resembles N.
arenicola in having a constriction of the fused notopodial lobes throughout the abdominal parapodia. They differ in the shape of the neuropodial lobes, which are free along the entire body in N. kristiani and fused in N. arenicola, expanded on the anterior region and considerably reduced posteriorly. Further, their methyl green staining patterns are very different, as described above.

Notodasus magnus Fauchald, 1972 (Figs. 6A-D)

Redescription. Holotype an anterior fragment, with 92 chaetigers, 90 mm long, 5 mm wide in the abdomen. Colour in alcohol light brown; the epithelium slightly damaged. Prostomium conical with palpode. Eyespots absent. Peristomium tessellated, thoracic epithelium longitudinally striated, abdominal segments with smooth epithelium (Fig. 6A). Thorax with 11 chaetigers, with bilimbate capillaries in both rami. Thoracic segments biannulate, anterior abdominal segments also biannulate, triannulate posteriorly. Notopodia lateral in the first thoracic segments, moving dorsally in subsequent segments. Lateral organs located between notopodia and neuropodia; those of thoracic region closer to notopodium, rounded like pores, those of the abdomen as small protuberances found closer to neuropodial lobe. Genital pores not seen. Transition between thorax and abdomen marked by sharp reduction in last thoracic segment length. Notopodial lobes of abdominal chaetigers 3-8 fused dorsally, those of subsequent chaetigers separate (Fig. 6B); notopodial fascicles with a line of around 40 hooded hooks. Neuropodial lobes separate ventrally, extending to dorsolateral region, with a line of around 90 hooded hooks per fascicle ventrally on the neuropodium (Fig. 6C). Notopodial and neuropodial abdominal hooded hooks similar along body, with long anterior shaft, well-developed node, distinct constriction, wide shoulder, well-developed hood, posterior shaft longer than anterior one, slightly curved. Three rows of teeth above triangular main fang, basal row with six teeth, middle row with five and distal row with three small denticles (Fig. 6D). Branchiae emerging from a ventrolateral pore of the neuropodial lobe, distinct from chaetiger 61, with around six filaments per branchia (Fig. 6C). Pygidium not seen. Posterior part of body with eggs in coelom, each egg 6.8 μm in diameter.

Methyl green staining pattern. Weakly staining dorsally from thorax to second abdominal segment; third to fifth abdominal chaetigers with a darker prechaetal and postchaetal transverse band; remaining chaetigers unstained (Figs. 6B, 8F).

Habitat. This species is known from one locality, collected by dredge in sediments with sand, mud and pebbles, 29-35 m depth.

Type locality. SW Punta Arena, Carmen Island (Gulf of California).

Distribution. Gulf of California.

Remarks. One of the characters that Fauchald (1972) employed to differentiate Notodasus from other genera was the absence of notopodial setae on the first thoracic chaetiger; however, we have observed that the first chaetiger is biramous in the holotype of N. magnus. Fauchald also failed to note the presence of branchiae on the posterior region and of lateral organs along the entire length of the body.

Notodasus salazari n. sp. (Figs. 7A-E, 8G)

Description. Holotype complete, about 400 segments, 185 mm long, 2.7 mm wide in abdomen. Paratype incomplete, with 30 segments, 25 mm long, 2.5 mm wide. Colour in alcohol light brown. Prostomium conical with palpode. Eyespots present, covered by anterior margin of peristomium. Peristomium, first seven thoracic segments and prechaetal area of chaetiger 8 with tessellated epithelium; remaining segments smooth (Fig. 7A).
Thorax with 11 chaetigers, with bilimbate capillaries in both rami. Thoracic and first two abdominal segments biannulated, remaining abdominal segments uniannulated. Notopodia lateral on thoracic segments, dorsal on abdominal segments. Lateral organs present along entire body, between notopodia and neuropodia; those of thoracic region closer to notopodium, those of abdominal segments closer to neuropodia. Genital pores present on all abdominal segments. Transition between thorax and abdomen marked by abrupt decrease in segment length; first two abdominal segments with capillaries in both rami; subsequent abdominal segments with hooded hooks only (Fig. 7B). Notopodial lobes of each abdominal segment separated medially, each one with about 40 hooded hooks. Neuropodial abdominal lobes separated ventrally, extending to lateral surface, chaetal fascicles with about 90 hooded hooks (Fig. 7C). Notopodial and neuropodial abdominal hooded hooks similar along body, with long anterior shaft, angled node, evident constriction, developed shoulder, short hood, and posterior shaft longer than anterior one but attenuate to terminal end. Main fang subtriangular, longer than wide; three rows of teeth above main fang, basal row with five teeth, middle row with six, and distal row multidentate (Fig. 7D). Branchiae emerge from a ventral pore, evident from chaetiger 72, with about 6-7 well-developed filaments (Fig. 7E). Pygidium with a triangular caudal cirrus. Posterior part of body with eggs in coelom, each egg 11.3 μm in diameter.

Methyl green staining pattern. Peristomium and first two chaetigers with dorsum moderately staining over a large area. First two abdominal segments strongly staining, following abdominal segments with moderately staining transverse postchaetal band (Figs. 7A-B, C, E, 8G).

Remarks. Notodasus salazari n. sp. differs from the other species in having the abdominal notopodial lobes separated throughout. The first six abdominal segments of N. magnus have fused notopodial lobes, while those of the remaining segments are separated as in N. salazari; moreover, the epithelium in N. magnus is longitudinally striated on chaetigers 1-11, while in N. salazari the epithelium is tessellated from chaetigers 1 to 7. The methyl green staining pattern of N. salazari, with the first two abdominal segments completely stained, is also distinctive, as the other Notodasus species show incomplete staining on the first two abdominal segments.

Conclusions

Fauchald (1972) described Notodasus based on a single specimen of N. magnus collected in the Gulf of California. Since then, only two species have been described: N. dexterae Fauchald (1973) from the Pacific coast of Panama, and N. arenicola Hartmann-Schröder (1992) from the mid-Atlantic Ocean. Knowledge of these species had been limited to the original descriptions. Several misinterpretations or omissions of characters were found during revision of the type material of each species, including the form of the first chaetiger, the presence of lateral organs, genital pores and branchiae, and the shape of the notopodial lobes and hooded hooks. Based on specimens found on the Mexican coasts, four species of Notodasus were described in this work, evidencing how poorly known this group is.
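The generic comparisons in the Remarks above amount to a small diagnostic key. Purely as an illustration, the sketch below (in Python) encodes those stated differences among capitellid genera with eleven thoracic chaetigers as a simple lookup; the character coding is ours and greatly simplified, and it is not a substitute for the formal diagnoses.

```python
# Illustrative (and much simplified) coding of the generic differences
# discussed in the Remarks above. Character states are paraphrased from
# the text; this is not a formal identification key.
KEY = {
    "Notodasus":     "first two abdominal chaetigers with bilimbate capillaries on both rami; "
                     "thorax/abdomen transition marked by segment-length change and a constriction",
    "Notomastus":    "all abdominal parapodia with hooded hooks only",
    "Dodecaseta":    "first abdominal segment with capillaries mixed with hooded hooks; "
                     "second abdominal segment with hooded hooks only on both rami",
    "Mastobranchus": "first two abdominal chaetigers with mixed hooks and capillaries",
    "Rashgua":       "abdominal notopodia without chaetae",
    "Dasybranchus":  "thirteen chaetigers with capillary chaetae, but thorax/abdomen "
                     "transition marked by chaetal change, not by segment structure",
}

def compare(genus_a: str, genus_b: str) -> str:
    """Return the contrasting diagnostic states of two genera."""
    return f"{genus_a}: {KEY[genus_a]}\n{genus_b}: {KEY[genus_b]}"

print(compare("Notodasus", "Notomastus"))
```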
Changes on the Structural and Physicochemical Properties of Conjugates Prepared by the Maillard Reaction of Black Bean Protein Isolates and Glucose with Ultrasound Pretreatment

Conjugates of black bean protein isolate (BBPI) and glucose (G) were prepared via the wet-heating Maillard reaction with ultrasound pretreatment. The physicochemical properties of the UBBPI-G conjugates prepared by the ultrasound-pretreatment Maillard reaction were compared with those from the classical Maillard reaction (BBPI-G). The reaction rate between BBPI and glucose was accelerated by ultrasound pretreatment: a degree of glycation (DG) of 20.49% was achieved by 2 h of treatment for UBBPI-G, whereas 5 h was required using classical heating. SDS-PAGE patterns revealed that BBPI-G conjugates with higher molecular weight were formed after glycosylation. Secondary structure analysis suggested that the α-helix and β-sheet content of UBBPI-G was lower than that of BBPI-G. In addition, UBBPI-G conjugates exhibited a bathochromic shift compared with BBPI in fluorescence spectroscopy analysis. Finally, UBBPI-G achieved higher levels of surface hydrophobicity, solubility, emulsifying properties and antioxidant activity than BBPI and BBPI-G (classical Maillard reaction).

Introduction

Black soybean is planted and consumed in various regions of the world, as it is nutritionally rich in biologically active compounds such as proteins, essential amino acids, anthocyanins, isoflavones and polyunsaturated fatty acids [1]. Furthermore, black soybean has a higher protein content than other kinds of soybean and contains a favourable balance of amino acids [2]. Therefore, black soybean is an excellent source of protein for extraction and modification. Black bean protein isolate, due to its good solubility, emulsifying attributes and antioxidant activity, has exhibited remarkable potential for application in the food industry [2,3]. Moreover, these functional properties can be improved by appropriate modifications, including physical, chemical and enzymatic treatments [2-4]. However, up to now, most research on black soybean has focused on the anthocyanins and phenolics in the seed coat [1], while the protein ingredient has not received enough attention. Recently, the conjugate of protein and sugar produced by the Maillard-type reaction has attracted much attention because of its commendable functional properties as well as its antioxidant activity and emulsifying properties [5]. Mu et al. [6] reported that the solubility of soy protein isolate-acacia gum

Preparation of Ultrasound-Pretreated BBPI

The preparation of BBPI followed the method reported by Jiang et al. [2]. Phosphate buffer (0.1 M, pH 7.0) was added to the BBPI powder and the mixture was stirred for 2 h at ambient temperature (20 °C) to obtain a solution with a total protein concentration of 5 mg/mL. This solution was treated with an ultrasound processor (NingBo Scientz Biotechnology Co. Ltd., Ningbo, Zhejiang, China) at an ultrasound power of 150 W and a frequency of 20 kHz for 30 min (pulse: 2 s on, 2 s off). During the ultrasonic process, an ice-water bath was used to keep the reaction temperature at 20 °C. After ultrasonic pretreatment, the ultrasound-pretreated black soybean protein (UBBPI) was obtained.
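As a quick illustration of the pretreatment protocol above, the sketch below encodes the stated sonication parameters and computes the net sonication time implied by the 2 s on / 2 s off pulse cycle. The parameter names and class structure are ours, purely for illustration; they are not taken from the apparatus or the original paper.

```python
# Minimal sketch of the ultrasound pretreatment parameters described above.
# All numeric values come from the text; names and structure are illustrative.
from dataclasses import dataclass

@dataclass
class SonicationProtocol:
    power_w: float = 150.0        # ultrasound power (W)
    frequency_khz: float = 20.0   # ultrasound frequency (kHz)
    total_min: float = 30.0       # total treatment time (min)
    on_s: float = 2.0             # pulse on-time (s)
    off_s: float = 2.0            # pulse off-time (s)
    temperature_c: float = 20.0   # bath temperature held by ice water (°C)

    def duty_cycle(self) -> float:
        return self.on_s / (self.on_s + self.off_s)

    def net_sonication_min(self) -> float:
        # A 2 s on / 2 s off cycle is a 50% duty cycle, i.e. 15 min of
        # actual sonication within the 30 min treatment window.
        return self.total_min * self.duty_cycle()

p = SonicationProtocol()
print(f"duty cycle: {p.duty_cycle():.0%}, net sonication: {p.net_sonication_min():.0f} min")
```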
Preparation of BBPI-G and UBBPI-G Conjugates

BBPI or UBBPI was mixed with glucose at a protein (5 mg/mL) to glucose mass ratio (w/w) of 2:1 in phosphate buffer (0.1 M, pH 7.0) and thoroughly mixed. The solution was then incubated at 80 °C for different times (1-6 h). After the grafting process, all preparations were ended by cooling to ambient temperature, and the corresponding lyophilized samples were named UBBPI-G or BBPI-G conjugates.

Degree of Glycation (DG)

A slightly modified o-phthaldialdehyde (OPA) assay was used to measure the free amino groups [13]. 1 mL OPA (40 mg) methanol solution, 25 mL Na2B4O7 solution (100 mM), 2.5 mL 20% (w/v) SDS and 100 µL β-mercaptoethanol were mixed and diluted to 50 mL with distilled water to obtain the OPA reagent. Then, 4 mL OPA reagent was mixed with 200 µL protein sample (5 mg/mL) and incubated for 2 min at 35 °C. Using deionized water as blank, the absorbance was measured at 340 nm. The free amino groups were calculated from a standard curve constructed using 0.25-2 mM L-lysine, and the DG was obtained as

DG (%) = (A0 - At)/A0 × 100,

where A0 is the absorbance of the sample before the Maillard reaction, and At is its level after Maillard reaction for t h.

Browning Value

The browning value was determined as previously reported [14] with some modification: 0.1% (w/v) SDS was used to dilute the protein samples to 0.2% (w/v), with the SDS solution as blank. The absorbance at 420 nm was measured to evaluate the browning value.

Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis (SDS-PAGE)

SDS-PAGE was carried out in a discontinuous buffer system [15] on a 12% (v/v) separating gel and a 5% (v/v) stacking gel. The gels were stained with Coomassie Blue after the run.

Fluorescence Spectroscopy

An F-4500 fluorophotometer (Hitachi, Tokyo, Japan) was used to obtain the fluorescence spectra. The protein solution (0.2 mg/mL) was prepared in phosphate buffer (pH 7.0, 10 mM). Using 295 nm as the excitation wavelength, the emission spectra were recorded from 300 to 440 nm with a constant slit of 5 nm [16].

Surface Hydrophobicity (H0)

Surface hydrophobicity (H0) was determined according to Haskard and Li-Chan [18]. 1,8-Anilinonaphthalenesulfonate (ANS) was used as the fluorescence probe. Protein solutions (0.04-0.4 mg/mL) were prepared in phosphate buffer (10 mM, pH 7.0). Then 4 mL of each solution was mixed with 40 µL ANS (8.0 mmol/L). The fluorescence intensity (FI) was measured at an emission wavelength of 468 nm with excitation at 390 nm. The index of surface hydrophobicity (H0) was obtained as the initial slope of a plot of FI versus protein concentration (mg/mL).

Solubility

A slightly modified protein solubility measurement was used [19]. The protein solution (5 mg/mL) in phosphate buffer (10 mM, pH 7.0) was centrifuged at 12,000× g for 20 min. The protein content of the supernatant was determined by the Coomassie blue method, with a standard curve constructed using bovine serum albumin.

Emulsifying Property

To assess the emulsifying property, the emulsifying activity index (EAI) and emulsifying stability index (ESI) were determined [20]. For emulsion formation, the protein solution (2 mg/mL) and soybean oil were homogenized at 3:1 (v/v) using a homogenizer (AE300L-H; Shanghai Angni Instruments Co., Shanghai, China). After homogenization, emulsion (50 µL) was taken from the bottom at 0 and 10 min and diluted 1:100 (v/v) with 0.1% (w/v) SDS solution.
The absorbance was measured at 500 nm, and EAI and ESI were calculated as

EAI (m²/g) = (2 × T × A0 × N)/(C × φ × L × 10,000)

ESI (min) = A0/(A0 - A10) × 10,

where T is 2.303, A0 and A10 are the absorbance at 0 and 10 min, N is the dilution factor (100), φ is the oil volume fraction (0.25), L is the path length of the cuvette (1 cm), and C is the protein concentration (g/mL).

Iron Chelating Capacity

The iron chelating capacity of BBPI, UBBPI and UBBPI-G was evaluated by the method of Dinis et al. [21]. Using distilled water as control, the absorbance of the protein samples (A) and the control (A0) was measured at 562 nm, and the chelating capacity was calculated as (1 - A/A0) × 100%.

Reducing Power

The reducing power of BBPI, UBBPI and UBBPI-G was evaluated using the method of Oyaizu [22]. 1 mL of the BBPI, BBPI-G or UBBPI-G solution (5 mg/mL), adjusted to pH 6.6 with sodium phosphate buffer, was blended with 1.0 mL of potassium ferricyanide (1%). The mixture was kept at 50 °C for 20 min and then cooled to room temperature. 1.0 mL of trichloroacetic acid (10%) was added, and the mixture was centrifuged. The supernatant was diluted 2-fold with distilled water, and 400 µL of 0.1% FeCl3 was added. After mixing and standing for 10 min, the absorbance was measured at 700 nm to assess the reducing power.

Hydroxyl Radical Scavenging Rate

The hydroxyl radical scavenging rate of BBPI, UBBPI and UBBPI-G was evaluated by the method of Amarowicz et al. [23], using distilled water as blank. Distilled water instead of the salicylic acid ethanol solution was used as control. The absorbance at 510 nm was measured.

Statistical Analysis

All experiments were repeated 3 times, and the results are given as means ± standard deviations. The analysis of significant differences (p < 0.05) was performed through Duncan's multiple range test in SPSS 20.0 software (New York, NY, USA).
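Since the assays above reduce to simple absorbance arithmetic, a short sketch may help. The formulas follow the symbol definitions given in the text (DG from the loss of OPA-reactive free amino groups; EAI/ESI in the usual turbidimetric form with T = 2.303, N = 100, φ = 0.25, L = 1 cm); the function names, and the normalization of EAI to m²/g, are our assumptions, not code from the paper.

```python
# Hedged sketch of the absorbance arithmetic used in the methods above.
# DG uses the OPA definitions (A0 before reaction, At after t hours);
# EAI/ESI use the turbidimetric constants listed in the text.

def degree_of_glycation(a0: float, at: float) -> float:
    """DG (%) from OPA absorbance at 340 nm before (a0) and after (at)."""
    return (a0 - at) / a0 * 100.0

def eai_esi(a0: float, a10: float, c: float,
            n: float = 100.0, phi: float = 0.25,
            l_cm: float = 1.0, t: float = 2.303):
    """EAI (m^2/g) and ESI (min) from 500 nm absorbance at 0 and 10 min.

    c is protein concentration in g/mL; n the dilution factor; phi the
    oil volume fraction; l_cm the cuvette path length.
    """
    eai = (2.0 * t * a0 * n) / (c * phi * l_cm * 1e4)
    esi = a0 / (a0 - a10) * 10.0  # 10 = time interval in minutes
    return eai, esi

# Example with made-up absorbances (protein at 2 mg/mL, as in the text):
print(degree_of_glycation(a0=0.90, at=0.72))   # -> 20.0 (% DG)
print(eai_esi(a0=0.50, a10=0.40, c=0.002))     # -> (~46.1 m^2/g, 50 min)
```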
Effect of Ultrasound Pretreatment on the BBPI-G Grafting Reaction

The SDS-PAGE analysis reported in Figure 1 indicated that the ultrasound pretreatment (Figure 1B, lane 0) did not induce any change in the protein pattern in comparison with BBPI (Figure 1B, lane N). This observation confirmed that there were no major changes in the protein electrophoresis profiles of the UBBPI samples, similar to the results of Jiang et al. [2]. For both BBPI-G (Figure 1A) and UBBPI-G (Figure 1B), glucose molecules combined with BBPI molecules, as confirmed by the characteristic new, slower-migrating bands in Figure 1. This suggested that high-molecular-weight protein-sugar conjugates were indeed generated, in good agreement with previous research on soy protein isolate-glucose conjugates [6]. In addition, ultrasound pretreatment induced a faster appearance of the high-molecular-weight components (Figure 1B) than the wet Maillard reaction (Figure 1A). This phenomenon is similar to the SDS-PAGE patterns of peanut protein-maltodextrin conjugates produced by an ultrasound-assisted wet-heating Maillard reaction, which showed that ultrasound treatment could promote the Maillard reaction and make protein glycation occur more readily [12]. From these observations, it could be inferred that UBBPI-G should reach a higher DG than BBPI-G in the same time.

The changes in DG and browning values of BBPI and glucose produced by classical heating and ultrasound-assisted pretreatment are shown in Table 1. It was obvious that the ultrasound-assisted Maillard reaction required less time to reach the same DG than the classical Maillard reaction. For example, a DG of 20.49% was obtained for UBBPI-G samples within 2 h, whereas a DG of 20.60% was obtained by classical heating only after a much longer time of 5 h. This result was similar to that of a previous study [12] and indicated that the Maillard reaction was effectively enhanced by ultrasound pretreatment. Because ultrasound can improve the rate of heat and mass transport, provide good mixing and promote the graft reaction, it is considered to supply a sonocatalytic effect [12,24,25]. Moreover, ultrasound cavitation can speed up the motion of protein molecules and rearrange and unfold them, causing changes in protein secondary and tertiary structure [17]. These structural changes could expose more reactive free amino groups in BBPI for the grafting process [5,11]. Generally, the Maillard reaction is divided into three stages: initial, intermediate and advanced. In the initial stage, the products formed via condensation of carbonyl groups with amino groups usually do not absorb in the visible spectrum. In contrast, melanoidin compounds with maximum absorbance at 420 nm appear in the latter two stages [26]. Therefore, the browning values became higher as the graft reaction time was extended (Table 1).
However, compared with the BBPI-G conjugates, the UBBPI-G conjugates showed smaller browning values at similar grafting degrees, which means that the ultrasonic treatment reduced the browning intensity of the Maillard reaction. This might suggest that ultrasonication prevents the polymerization of intermediate products into melanoidins during the grafting process [27]. Similarly, the browning values of β-conglycinin-maltodextrin conjugates prepared by the classical Maillard reaction were much higher than those obtained with ultrasound treatment at the same DG [28]. Hence, ultrasound treatment not only accelerates the graft reaction but also reduces the brown colour, which favours industrial application.

Fourier Transform Infrared (FTIR) Spectroscopy

The secondary structure of the BBPI-G and UBBPI-G conjugates was analysed by FTIR spectroscopy, and the calculated value of each secondary structure component is shown in Table 2. The results indicated that the secondary structure of BBPI was changed by both types of Maillard reaction. In particular, the unordered structure content (β-turn + random coil) grew evidently (p < 0.05) following glucose attachment, while the ordered structure content (α-helix + β-sheet) declined in an opposite fashion. During the grafting process, the interaction between BBPI and glucose molecules could affect the hydrogen bonds and van der Waals forces that maintain the stability of the protein secondary structure. Therefore, the secondary structure of the protein molecules was changed, as in the study of Zhang et al. [5]. On the other hand, due to the heating during the Maillard reaction, heat denaturation of the proteins cannot be excluded [29]. Kim et al. [30] found that soybean glycinin showed obvious changes in secondary structure after heating above 80 °C. Compared with BBPI-G, the UBBPI-G conjugates lost more ordered secondary structure content and gained more unordered structure content. This result was ascribed to the pressure alterations and turbulence caused by the ultrasonic treatment, leading to structural transformations [25]. Moreover, ultrasound could partially disrupt the interaction forces between protein molecules and thereby facilitate the glycosylation of protein and sugar. Therefore, the UBBPI-G conjugates underwent more effective changes in structure distribution, which could yield conjugates with better uniformity and flexibility compared with the classical Maillard reaction. It is known that functionality is closely related to structure [7,11]. Unordered structure has better flexibility than ordered structure, which is beneficial to protein function. Mu et al. [11] showed that the greater the flexibility of the protein molecules, the better the emulsification.

Fluorescence Spectroscopy Analysis

Based on fluorescence spectroscopy, the microenvironment around tryptophan in proteins can be probed to detect changes in protein conformation [31]. The fluorescence spectra of the BBPI-G and UBBPI-G conjugates are shown in Figure 2. In comparison with BBPI, the λmax of BBPI-G showed a bathochromic shift, illustrating that the microenvironment of the tryptophan groups had become more polar (Figure 2A). Generally, a bathochromic shift is known to appear with an increase of maximal fluorescence, but here the fluorescence spectra decreased gradually owing to the shielding effect of the hydrophilic sugar chain on the tryptophan residues.
Similar results were also found in previous work [25]. In addition, the UBBPI samples clearly exhibited lower FI after the ultrasound pretreatment (Figure 2B). Because this treatment can generate fluid mixing and shear forces through cavitation effects [32], the protein molecules were partly disrupted and exposed more chromophores to the solvent, which led to the FI decrease [2,5]. Therefore, the tertiary structure of the protein was changed after ultrasonic pretreatment. For this reason, the UBBPI-G conjugates showed even lower fluorescence intensity than the BBPI-G conjugates. This result is in accord with the DG data in Table 1, showing that the UBBPI-G conjugates were grafted with more glucose molecules, generating a stronger shielding effect in the UBBPI-G conjugates obtained by the ultrasound-pretreatment Maillard reaction compared with classical heating. The conjugates possessed fluorescence features similar to those of glycosylated products in previous reports [29], which might be related to the high degree of grafting during the Maillard reaction.

Surface Hydrophobicity (H0)

Surface hydrophobicity (H0) reflects the number of hydrophobic groups on the protein surface [33]. The effect of the classical wet-heating and ultrasonic-pretreatment Maillard reactions on the H0 values is shown in Figure 3. The results revealed that the H0 values of the BBPI-G conjugates were much higher than those of BBPI. It is well known that most hydrophobic residues are buried in the interior of the compact globular region [34]. Hence, binding of the fluorescence probe (ANS) to hydrophobic residues was hindered, and the H0 value of BBPI was comparatively low [25]. In a previous study, Zhao et al. [27] showed that the H0 value of heated soy protein isolate was clearly higher than that of native soy protein isolate, meaning that more hydrophobic groups were exposed on the protein surface after heating. This indicates that surface hydrophobicity is greatly influenced by temperature. Consistent with this, conjugates of β-conglycinin and dextran showed a higher H0 value than native β-conglycinin, as expected [35]. A similar result was also reported by Wang et al. [29], who showed that mung bean protein isolate (MBPI)-glucose (G) conjugates subjected to either ultrasound treatment or classical heating both exhibited higher surface hydrophobicity than untreated MBPI, presumably resulting from aggregate dissociation or protein unfolding. In Figure 3, we also noticed that the H0 values of the BBPI-G and UBBPI-G conjugates declined slightly with increasing reaction time. This phenomenon may be due to more hydrophilic glucose molecules being linked to the proteins during the glycosylation reaction, as confirmed by the DG data in Table 1, partly covering the hydrophobic area of the proteins.
Moreover, the H0 values of the UBBPI-G conjugates were even higher than those of BBPI-G, since the cavitation phenomenon and mechanical effects induced by the ultrasonic treatment disrupted the protein conformation and structure. The hydrophobic residues buried in the protein interior were then exposed to the aqueous environment more effectively, which resulted in enhanced surface hydrophobicity [28].

Solubility

Solubility is an important physicochemical property of proteins and, beyond that, is also deemed a prerequisite for the other functional properties [29]. The solubility of BBPI and of the BBPI-G and UBBPI-G conjugates is shown in Figure 4. It can be observed that the solubility of BBPI-G and UBBPI-G was slightly higher than that of native BBPI. The solubility of BBPI was evidently enhanced by combination with glucose despite the increase in surface hydrophobicity (as shown in Figure 3). Normally, when hydrophobic groups are exposed, the protein molecules can be rearranged into larger supramacromolecular complexes by non-covalent interaction, resulting in a decrease of solubility. However, the increase of BBPI-G solubility might be owing to the attachment of a hydrophilic saccharide on the protein surface, as the hydrogen bonding capacity of the saccharide's -OH groups could lead to an increased affinity between proteins and water molecules [29,36]. Furthermore, the attachment of glucose might also inhibit the non-covalent interaction of protein molecules and thus facilitate protein dissolution [37]. In addition, as shown in the DG study, the ultrasonic pretreatment could improve the reaction between protein and glucose. In fact, more hydrophilic glucose molecules were conjugated to the protein via the ultrasound-pretreatment Maillard reaction than via classical wet heating at the same reaction time. Therefore, the large number of hydrophilic groups resulted in the increased solubility of UBBPI-G.
Meanwhile, ultrasonic treatment can unfold proteins and break peptide bonds, transforming insoluble protein aggregates into soluble ones [25]. Hence, ultrasound-assisted glycosylation was a more effective method for improving protein solubility.

Emulsifying Property

The emulsifying activity index (EAI) and emulsifying stability index (ESI) of BBPI grafted by classical heating and by the ultrasound-pretreatment Maillard reaction at different reaction times are shown in Figure 5. Protein molecules have a strong adsorbing ability at the oil-water interface, and saccharides dissolve well in the aqueous phase. Therefore, protein-saccharide conjugates, combining these two characteristic properties of proteins and saccharides, often exhibit favourable emulsifying properties [7]. As expected, the EAI and ESI of the BBPI-G conjugates increased significantly (p < 0.05) after glycosylation. This result is the same as that obtained by Wang et al. [29] for the mung bean protein isolate (MBPI)-glucose (G) conjugate. Glycosylation optimized the hydrophobic-hydrophilic balance on the protein surfaces and modified the protein surface properties, which supported emulsion stability through electrostatic interaction [38]. Furthermore, the BBPI-G conjugates were predicted to show better emulsifying properties due to their more unordered, less compacted and more flexible structure compared with native BBPI.

Compared with the BBPI-G conjugates, the UBBPI-G conjugates achieved higher EAI and ESI. These phenomena are consistent with the results reported by Zhang et al. [5], in which the significant increase in the EAI and ESI of β-conglycinin-maltodextrin samples was attributed to the exposure of the internal hydrophobic groups of the protein under ultrasonic treatment, which reacted easily with the reducing-end carbonyl groups of the sugars and favoured emulsion formation and stabilization. Moreover, the ultrasound treatment was capable of accelerating molecular mobility and adsorption at the oil-water interface because of the mechanical effects caused by cavitation. Additionally, it can be observed that the trends of the emulsifying properties as a function of reaction time were consistent with the solubility results (Figure 4). Therefore, the enhanced emulsifying property was probably correlated with the higher solubility, which demonstrates that solubility is an important factor in assessing emulsifying ability [39].
Antioxidant Activity

The changes in the antioxidant activity of BBPI-G are shown in Figure 6. The iron chelating capacity of BBPI was improved after conjugation with glucose, which might be due to the formation of high-molecular-weight compounds through the cross-linking of free amino acids with sugars [40]. This result is the same as that obtained by Liu et al. [41], who showed that high-molecular-weight compounds possess strong iron chelating capacity. Furthermore, Liu et al. [42] predicted that the Maillard reaction is an effective method to improve the free radical scavenging, iron chelating activity and reducing power of proteins. Accordingly, the hydroxyl radical scavenging activity and reducing power of BBPI-G both increased drastically after the Maillard reaction. This may be ascribed to the finding that the intermediate compounds of the Maillard reaction, known as reductones, exhibit a good ability to break the radical chain as hydrogen atom donors [8]. On the other hand, glycosylation could cause structural changes in the protein molecules, generating a wide range of compounds and leading to conjugates that contribute to the reducing power [43].

Compared with the classical heating treatment, the ultrasound-pretreatment Maillard reaction showed higher antioxidant potential, similar to the results obtained by Abdelhedi et al. [44], who found that conjugates with higher antioxidant ability could be prepared via an ultrasound-pretreated Maillard reaction instead of the conventional process. This might be caused by the higher DG obtained with ultrasound treatment, which could produce more intermediate compounds functioning as antioxidants. Similarly, Daglia et al. [45] explained that advanced Maillard reaction products are a particularly complex mix of numerous compounds. Hence, the protein antioxidant activity was improved by the contribution of the product mixture.

Conclusions

In this study, BBPI-G and UBBPI-G conjugates with higher molecular weight were successfully prepared using the classical and ultrasound-pretreated Maillard reactions. Comparing the two types of glycosylation, we found that the Maillard reaction could be effectively promoted by ultrasound pretreatment to obtain a higher DG in the same reaction time. This behaviour can be attributed to the finding that ultrasound can change the secondary and tertiary structure of BBPI, unfold the protein molecules and increase the speed of molecular motion, thereby accelerating the Maillard reaction. In addition, UBBPI-G also exhibited significantly higher levels of surface hydrophobicity, solubility, emulsifying activity, emulsion stability and antioxidant activity than native BBPI and BBPI-G.
Therefore, the combination of ultrasound pretreatment and the Maillard reaction is expected to find applications in producing conjugates with desirable functional properties.
The Possibility of Complex Treatment of Optic Nerve Atrophy Based on an Etiopathogenetic Approach Using the New Classification of this Ophthalmopathology

Treatment differentiated according to the degree of functional changes and stage of atrophy, the type of atrophy and the nature of the lesion significantly alters the effectiveness of treatment when compared with isolated electropharmacological stimulation, and even more so when compared with the traditional medication-based method of treatment.

Introduction

Optic nerve atrophy (ONA) is the end result of disease, intoxication, a genetically determined abnormality or injury of retinal ganglion cells and/or their axons situated between the retina and the lateral geniculate bodies of the brain. The prevalence of various optic nerve diseases in eye disease hospitals is approximately 1-1.5%, with 19 to 26% of those cases resulting in complete atrophy of the optic nerve and incurable blindness. Causes of ONA include: diseases of the retina and optic nerve (inflammation, dystrophy, including glaucomatous and involutional, poor circulation due to hypertension, atherosclerosis, diabetes, etc., swelling, profuse bleeding, compression and damage of the optic nerve), diseases and injuries of the orbit, central nervous system diseases (optic-chiasmal leptomeningitis, abscesses and brain tumors with increased intracranial pressure, neurosyphilis, demyelinating disease, traumatic brain injury), and intoxication with methyl alcohol, antibiotics (streptomycin, gentamicin) or antimalarial drugs (quinine, hingamin). ONA may be a component or the sole manifestation of a number of hereditary diseases (congenital amaurosis, hereditary optic nerve atrophy, etc.) [1,2].

Treatment of optic nerve atrophy is a very complex and difficult problem because of the extremely limited regenerative ability of neural tissue. Everything depends on how widespread the degenerative process in the nerve fibers is and whether their viability is preserved. Some progress in the treatment of optic nerve atrophy has been achieved with the help of pathogenetically directed measures aimed at improving the viability of nervous tissue. The development of new methods of treatment of partial optic nerve atrophy (PONA) has greatly enhanced the possibility of rehabilitation of patients with this pathology. However, the abundance of methods in the absence of clear indications complicates the choice of a treatment plan in each individual case [1,3,4,5,6,7]. The analysis of literature on the diagnosis and treatment of PONA showed a lack of clear classification and the existence of various approaches to assessing the severity of the disease [2,8,9,10]. The classification presented in Table 1 was used to determine the treatment plan [7,9].

The purpose of the work. To create a method of optic nerve atrophy treatment differentiated according to severity and other individual characteristics of the patient, and to analyse the effect of applying this technique.

Material and Methods

To treat patients with partial atrophy of the optic nerve, we used the following methods. Infita is a low-frequency pulse physiotherapy device designed to expose the central nervous system (CNS) to a low-frequency pulsed electromagnetic field (without direct contact with the patient), which results in improved central blood flow, saturation of blood with oxygen, and increased redox processes in the nervous tissue.
The device has the following characteristics: output signal, a triangular voltage pulse of negative polarity; pulse frequency 20-80 Hz (most frequently 40-60 Hz); pulse amplitude 3 ± 2 V; recommended number of procedures 12-15, starting with 5 minutes and increasing to 10 and then 12 minutes from the fifth procedure onward.

The treatment method, hereinafter called direct electropharmacological stimulation (EPS), includes installation of a soft PVC catheter into the retrobulbar space and repeated administration of various medications through it into the retrobulbar space, selected on the basis of the etiopathogenesis of the atrophy. All patients were infused with a 10% solution of piracetam and, 40 minutes later, exposed to electrical stimulation through a needle electrode inserted into the retrobulbar space through the catheter with the device "AMPLIPULS" [11,12].

The following surgical methods can also be used: ligation of the superficial temporal artery, implantation of a collagen sponge into the subtenon space, and decompression of the optic nerve. In connection with the specifics of performing surgical procedures in our clinic, the technique of their execution is given below.

Ligation of the superficial temporal artery. Local anesthesia: lidocaine 2.0% subcutaneously. A 3 cm long skin incision is made 1 cm in front of the tragus. The tissue is bluntly separated. The superficial temporal artery is ligated with two sutures and transected between them. Albucidum powder is applied into the wound. The soft tissue is closed with catgut suture. Silk sutures are placed on the skin. The wound is treated with a solution of brilliant green dye, and an aseptic dressing is applied.

Implantation of a collagen sponge into the subtenon space. Local anesthesia: lidocaine 2.0% subcutaneously and dicain 0.5% epibulbarly. A 5-6 mm long incision is made in the upper nasal quadrant, 5-6 mm away from the limbus and parallel to it. A tunnel is formed between the sclera and the Tenon capsule toward the posterior pole using a spatula. A collagen sponge implant, 8-10 mm long and 5-6 mm wide, pre-soaked with a solution of emoxipine (cortexin, retinalamin and other drugs or their combinations), is implanted into the tunnel closer to the optic nerve. A suture is placed on the conjunctiva, and antibiotics and dexamethasone are injected under the conjunctiva. After the implantation, antibiotics and a solution of diclofenac are applied locally for 5-7 days [13,14].

Decompression of the optic nerve is performed under general anesthesia. A blepharostat is used. An incision is made on the inner side of the conjunctiva. The medial rectus muscle is secured with a suture in front of the tendon and detached. Three incisions of the scleral ring around the optic nerve are made. A solution of albucid is applied, and the muscle is fixed back in place. A suture is placed on the conjunctiva. Dexamethasone and antibiotics are placed under the conjunctiva.

The following treatment scheme was suggested for lesions of the peripheral section of the optic nerve:

III degree: catheterization of the retrobulbar space; direct EPS + piracetam, dexamethasone, emoxipine 2 times a day; implantation of a collagen sponge with emoxipine into the subtenon space; ligation of the superficial temporal artery; piracetam 20.0 intravenously with physiological saline 200.0.

IV degree:
Step 1: decompression of the optic nerve. Step 2, or in case Step 1 is not possible (severe somatic pathology): catheterization + direct EPS; piracetam, dexamethasone, emoxipine 2 times a day retrobulbarly into the catheter; ligation of the superficial temporal artery (if not done earlier); implantation of a collagen sponge with emoxipine into the subtenon space; phenotropil tablets according to the treatment scheme; piracetam 20.0 intravenously with physiological saline 200.0.

Treatment scheme for lesions of the central part of the visual pathway:

a. Stage I: glycine 1 tablet 3 times a day sublingually for one month; cavinton according to the treatment scheme, then phenotropil (tablets) according to the treatment scheme; "Infita" percutaneous low-frequency electrical stimulation. No cases of deterioration were recorded.

Conclusion

Medication therapy combining medications with various effects on the nervous tissue is effective only in the initial stages of optic nerve atrophy. The use of non-invasive physiotherapeutic methods is also effective in the early stages. The use of direct electropharmacological stimulation is more reasonable for
Functional brainstem representations of the human trigeminal cervical complex

Background: The human in-vivo functional somatotopy of the three branches of the trigeminal nerve (V1, V2, V3) and of the greater occipital nerve in the brainstem, and also in the thalamus and insula, is still not well understood.

Methods: After preregistration (clinicaltrials.gov: NCT03999060), we mapped the functional representations of this trigemino-cervical complex non-invasively in 87 humans using high-resolution protocols for functional magnetic resonance imaging during painful electrical stimulation in two separate experiments. The imaging protocol and analysis were optimized for the lower brainstem and upper spinal cord, to identify activation of the spinal trigeminal nuclei. The stimulation protocol involved four electrodes positioned on the left side according to the three branches of the trigeminal nerve and the greater occipital nerve. The stimulation site was randomized and each site was repeated 10 times per session. The participants took part in three sessions, resulting in 30 trials per stimulation site.

Results: We show a large overlap of peripheral dermatomes on brainstem representations and a somatotopic arrangement of the three branches of the trigeminal nerve along the perioral-periauricular axis, and of the greater occipital nerve, in the brainstem below the pons, as well as in the thalamus, insula and cerebellum. The co-localization of the greater occipital nerve with V1 along the lower part of the brainstem is of particular interest, since some headache patients profit from an anesthetic block of the greater occipital nerve.

Conclusion: Our data provide anatomical evidence for a functional inter-inhibitory network between the trigeminal branches and the greater occipital nerve in healthy humans, as postulated in animal work. We further show that functional trigeminal representations intermingle perioral and periauricular facial dermatomes with individual branches of the trigeminal nerve in an onion-shaped manner, and overlap in a typical within-body-part somatotopic arrangement.

Trial registration: clinicaltrials.gov: NCT03999060

Introduction

The trigeminal nerve with its three branches (V1, V2, V3; see Figure 1a) is the anatomical source of primary headache and facial pain syndromes. Although the anatomical location of the spinal trigeminal nucleus (STN) is principally known, the functional representations of all three branches at the brainstem level (1-6), at least below the pons (7), are unknown in humans. In particular, the exact anatomical correlation between the nuclei of the fifth cranial nerve and the greater occipital nerve (GON), the so-called trigemino-cervical complex (TCC), has not been shown, although the evidence for its existence is strong (8). The latter is clinically important: primary headache syndromes such as migraine and cluster headache involve mainly the first trigeminal branch (9), while trigeminal neuralgia mainly affects V2 and V3 (10). Pharmacologically blocking the GON reduces the number of headache days in certain primary headaches such as cluster headache (11-16), trigeminal neuralgia (17) and probably also migraine (18-25). This raises the question of the central processing of trigemino-nociceptive pain and indicates that the central organization of the head's somatotopy is more complex than assumed.
Animal (26-28) and recently also human data (29) suggest a convergence of V1 and GON input (30) in the TCC at the level of C2, where the GON enters the brainstem, which is thought to explain the functional interrelation of the two systems (Figure 1a). Based on animal data (1-6), it has further been suggested that the representation of the trigeminal branches in the STN follows a somatotopic arrangement with an onion-shaped segmentation (7,31), reaching from perioral (front, nose) to periauricular areas of the face (ear, back of the head) (Figure 1b). According to this theory, the perioral dermatomes in the STN are represented more rostrally, whereas the periauricular dermatomes are placed more caudally. This rostral-versus-caudal somatotopy in the brainstem is thought to be intermingled with that of the three trigeminal branches (7,31). It is interesting that although a somatotopic arrangement, for example of the limbs, has clearly been shown in the somatosensory and motor cortices in the homunculus, it has also been shown that the central representation within one limb exhibits wide overlaps of the representations of its individual dermatomes (e.g. for the individual digits) in the motor cortex, and partly also in the somatosensory areas (32-35). Within-limb representations also seem to overlap in the spinal cord, at least in some nuclei, while others show a more distinct somatotopy (36-41). To investigate the functional representations of the individual dermatomes of the head at the brainstem level, we recorded advanced high-resolution functional magnetic resonance imaging (fMRI) (42,43) simultaneously with painful electrical stimulation of rostral V1, V2, V3, and the GON. Since the template for the Montreal Neurological Institute (MNI) space only reaches as low as z = −70 (about 1 cm below the cerebellum), previous studies (7,44) might have missed important nuclei involved in trigemino-cervical nociception. Hence, we specifically acquired data with a lower anatomical boundary of C2/3 (Figure 1a). Of specific interest is the co-localization of the GON with the first branch of the trigeminal nerve (V1) and the connectivity to the hypothalamus (45) as a possible modulator/generator of migraine attacks (46,47). In a second step, we aimed to disentangle the representations of the trigeminal branches according to their rostral versus caudal positions along the perioral-periauricular axis of the facial dermatomes. We aimed to provide evidence of an onion-shaped representation intermingled with the brainstem's representation of the individual branches of the trigeminal nerve in the lower brainstem.

Study population and experimental design

Results from a pilot study with 25 healthy participants, using the same paradigm of repetitive, randomized, peripheral, painful electrical stimulation of the rostral part of the three trigeminal branches (V1R, V2R, V3R) and the GON, were used to generate a hypothesis, i.e. primary and secondary outcomes, and to estimate effect sizes. The actual study, built on these preliminary results, was consequently preregistered (clinicaltrials.gov: NCT03999060).
We then investigated a new and independent group of 38 participants, of whom two were excluded. Exclusion criteria were: pain and/or headache disorders (people reporting infrequent episodic tension-type headache (TTH) were accepted, as long as they had fewer than 12 episodes a year), a first-generation family member with a primary headache disorder, intake of weak analgesics on more than 14 days a month, skin lesions or other dermatological illnesses in the stimulated areas (neck and face region), a full beard (men), neurological, psychiatric or other chronic disorders, whiplash or other damage to the cervical spine, pregnancy or breast feeding, misuse of alcohol or drugs, and inability to speak or understand German. The study was approved by the local ethics committee (Ethikkommission der Ärztekammer Hamburg, PV5490) and all participants gave written, informed consent.

Experimental design: Electrode positioning and equipment. Electrical stimulation was applied using an MR-compatible Digitimer DS7A Current Stimulator (Digitimer Ltd., Welwyn Garden City, UK), which was coupled to four WASP electrodes (Specialty Developments, Bexley, UK) via a D188 Remote Electrode Selector (Digitimer Ltd., Welwyn Garden City, UK) and custom-built MR-compatible cables. The cables were built using an MR-safety-tested design (48) to prevent tissue damage due to currents induced by electromagnetic wave coupling. The four electrodes were positioned on the left side according to the three branches of the trigeminal nerve and the GON (Figure 1a). The GON was located by palpation according to validated procedures (49,50) and the electrode was positioned immediately above. Rostral V1 (V1R) was stimulated by means of an electrode placed on a vertical line between the medial and lateral quarter of the face, corresponding to the middle of the eyebrow, and approximately 1 cm above it. Rostral V2 (V2R) was stimulated 1 cm lateral of the same vertical line, at the level of a horizontal line through the inferior part of the left ala of the nose. Rostral V3 (V3R) was stimulated along the same vertical line, approximately 1 cm caudal from the corner of the mouth. Figure 2a shows the location of the electrodes in the first experiment. In the second experiment, we changed the locations of two of the four stimulating electrodes to additionally cover the caudal parts of the first and third trigeminal branch (V1C and V3C, see Figure 2b). The stimulating electrode for V1C was attached 2 cm lateral to the left and 2 cm rostral from the midpoint between nasion and inion. Stimulation at V3C was located 1 cm rostral to the tragus.

Experimental design: Stimulation paradigm. After fixing the electrodes, the subjects were moved into the scanner and the electrical detection thresholds (EDTs) at all electrode sites were determined by means of the QUEST procedure (51). The final current was set to 10 times the EDT of the V3R electrode for both experiments, but was not allowed to exceed 5 mA, nor to produce a pain rating above 50 (on a numeric rating scale from 0 to 100) for a single pulse. The actual stimulation consisted of a short train of three pulses separated by 100 ms, each delivered at 400 V with 2 ms duration. Each stimulus was followed by a break of 3 s (jittered between 2 and 4 seconds); then a pain intensity rating on a visual analogue scale (VAS) with levels between 0 and 100, given via a button box with the right hand, and then another break before the next trial. The inter-trial interval was set to 15 s (jittered between 12 and 18 s).
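The intensity rule above reduces to a couple of constraints that are easy to state in code. The following Python sketch is purely illustrative (the function name and return convention are ours, not part of the study protocol): it applies the 10 × EDT rule with the 5 mA cap and flags violations of the 50/100 single-pulse rating limit.

```python
def stimulation_current_ma(edt_v3r_ma: float, single_pulse_rating: float):
    """Illustrative sketch of the stimulus intensity rule described above.

    edt_v3r_ma: electrical detection threshold at the V3R electrode (mA),
        estimated with an adaptive staircase such as QUEST.
    single_pulse_rating: participant's pain rating (0-100) for a test pulse.
    """
    current = min(10.0 * edt_v3r_ma, 5.0)   # 10 x EDT, hard cap at 5 mA
    # The protocol additionally forbids single-pulse ratings above 50/100;
    # in practice the experimenter would lower the current and re-test.
    rating_ok = single_pulse_rating <= 50
    return current, rating_ok
```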
The stimulation site was randomized and each site was repeated 10 times per session. The participants took part in three sessions, resulting in 30 trials per stimulation site during approximately 30 minutes of fMRI scanning. The experimental design is shown in Figure 1c.

MR data acquisition and processing

All MR data were recorded with a Siemens 3T PRISMA scanner (Siemens, Erlangen) using a 64-channel head coil. During the actual experiment we recorded three sessions with 230 images each, using an echo-planar imaging (EPI) protocol (repetition time 2.93 s, echo time 33 ms, 1.3 × 1.3 × 2.0 mm³ spatial resolution, GRAPPA acceleration, flip angle 80°, 72 slices with a multiband factor of 2, FOV 215 mm, no gap, flow rephasing) with a field of view covering the brainstem as low as C2/3, the cerebellum, the midbrain and the insular cortices. In each session the first five images were removed to avoid scanner saturation effects. Afterwards we recorded fieldmaps (repetition time 0.792 s, echo times 5.51 and 7.97 ms, 3 × 3 × 2 mm³ spatial resolution, flip angle 20°, 72 slices, FOV 222 mm, no gap) covering the same volume as the EPIs to attenuate inhomogeneities of the magnetic field. Pulse and breathing were recorded simultaneously to attenuate extra-cerebral (i.e. cardiovascular) artifacts. Afterwards we acquired high-resolution (1 mm³) anatomical images (MPRAGE, repetition time 2.3 s, echo time 2.98 ms, flip angle 9°, 240 slices, FOV 256 mm). All fMRI data were first filtered using the spatially adaptive non-local means filter (52) implemented in the CAT12 toolbox. The fMRI data were then corrected for movements and for distortions of the homogeneity of the magnetic field (fieldmaps) using the realign and unwarp algorithm as implemented in SPM12. Additionally, slice-time correction was performed using the onsets of the single slices, as suited to our multiband protocol. We then calculated a subject-wise general linear model (GLM) including condition-wise onsets of each stimulus as stick functions, which were then convolved with a hemodynamic response function (HRF). The button box responses as well as the onset and duration of the VAS were modelled as regressors of no interest. Additional regressors of no interest were included to correct for (uncorrelated) movement, for cardiovascular influence using the algorithms proposed by Deckers and colleagues (53), and for changes in the spinal fluid extracted from the fourth ventricle. The co-registered structural images were segmented with the unified segmentation approach (54) implemented in SPM12, but using the templates provided by Blaiotta et al. (55), which are optimized for the brainstem and spinal cord, to gain deformation fields used to warp the contrast images of the subject-wise GLM into MNI space. Each step was controlled by visual inspection. We further calculated a group template and gray and white matter masks from the warped structural images.

Primary and secondary outcomes

Our primary outcome was that, using functional neuroimaging and a standardized trigemino-nociceptive input, we would be able to observe a somatotopic arrangement of the trigeminal areas in the brainstem below the pons at a statistical threshold of T-value 3. To test our preregistered primary and secondary outcomes we calculated repeated-measures ANOVAs as implemented in SPM12 for the four stimulated regions for both cohorts of the first experiment separately.
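To make the first-level GLM described above concrete, the sketch below builds a single design-matrix column the way the text describes: condition onsets enter as stick functions and are convolved with a canonical HRF. This is a minimal illustration in Python; the double-gamma parameters are generic SPM-like defaults and are not taken from the paper.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr: float, duration: float = 32.0) -> np.ndarray:
    """Double-gamma HRF sampled at the TR (generic SPM-like parameters)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak minus undershoot
    return h / h.sum()

def design_column(onsets_s, n_scans: int, tr: float) -> np.ndarray:
    """One GLM regressor: stick functions at stimulus onsets, HRF-convolved."""
    sticks = np.zeros(n_scans)
    for onset in onsets_s:
        idx = int(round(onset / tr))
        if idx < n_scans:
            sticks[idx] += 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

# e.g. one column per stimulation site (V1R, V2R, V3R, GON) per session;
# onsets here are hypothetical, 225 = 230 images minus 5 dummy scans
column_v1r = design_column(onsets_s=[12.0, 27.5, 45.1], n_scans=225, tr=2.93)
```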
As described in the preregistration, we calculated small volume corrections in the second cohort at MNI coordinates and T-value thresholds derived from the first cohort. For the brainstem below the pons, these are a T-value > 3 and a sphere with a search radius of 10 mm (Figure 1).

Functional representations of the trigeminal branches and the GON within the lower brainstem

To investigate the functional somatotopy of the rostral part of the three branches of the trigeminal nerve and the GON in the brainstem, we calculated a repeated-measures ANOVA with all data from both studies (i.e. n = 61 subjects). Since the co-localization of V1 and GON was of special interest, we further conducted a conjunction analysis between them. We performed differential contrasts between the three trigeminal branches (i.e. V1 vs V2, V1 vs V3, V2 vs V3). The statistical threshold for the results was set to T-values higher than 3. We additionally assigned each voxel surviving the aforementioned threshold to the stimulation site (dermatome) which, within this voxel, had the highest T-value, in order to present not only the overlaps but also the somatotopic arrangements of the individual trigeminal branches and the GON within the brainstem.

Functional connectivity for the co-localization of V1 and GON

Because the co-localization of V1 and GON carries a particular significance in primary headache syndromes, and since the hypothalamus plays a specific role in generating migraine attacks (46,47), we investigated the functional connectivity between the individual clusters of their conjunction and the hypothalamus with a psychophysiological interaction (PPI) analysis (57). Since we had four different stimulation sites, and the current implementation of PPI is only capable of analyzing a contrast between two conditions, we used a generalized approach for this analysis (58). We report results at a statistical threshold of p < 0.05 (FWE-corrected) using a small volume correction within a sphere of 6 mm radius around previously reported coordinates (45-47) ([0, 2, −6] mm xyz in MNI space) of the hypothalamus.

Somatotopy of perioral versus periauricular facial dermatomes of the trigeminal branches

To reveal the functional somatotopy of the trigeminal branches according to their perioral or periauricular position, we calculated a repeated-measures ANOVA for the 26 subjects included in the second experiment. We further analyzed differential contrasts between the stimulated sites. The statistical threshold for the main effects was set to FWE-corrected p < 0.05, and for the differential contrasts to T-values higher than 3, both exceeding a minimal cluster extent of five voxels. Results are presented for activations in the lower brainstem.

Data availability statement

European and German data protection rules as well as the hospital's data protection officer prevent us from uploading raw data to a public repository. Nevertheless, unconditional access to anonymized data and access to the code are available to qualified investigators on request to the corresponding author.

Primary and secondary outcomes

All preregistered primary and secondary outcomes regarding the functional representations of the three trigeminal branches and the GON in the lower and upper brainstem, insula, thalamus and cerebellum were significant (Table 1). The representations of the trigeminal branches in thalamus, insula and cerebellum are presented in Figure 3, using data from all participants (n = 61) of the first experiment.
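The voxel-wise assignment described in the methods above amounts to a thresholded winner-take-all labeling across the condition T-maps. A minimal sketch, assuming the per-condition T-maps are already warped to a common space and loaded as NumPy arrays:

```python
import numpy as np

def winner_take_all(t_maps: dict, threshold: float = 3.0):
    """Label each voxel with the condition holding the highest T-value.

    t_maps: mapping of condition name (e.g. 'V1R', 'GON') to a 3D T-map;
    returns (labels, names), where labels is 0 for sub-threshold voxels
    and a 1-based index into names otherwise.
    """
    names = list(t_maps)
    stack = np.stack([t_maps[n] for n in names])  # shape: (cond, x, y, z)
    labels = stack.argmax(axis=0) + 1
    labels[stack.max(axis=0) <= threshold] = 0    # keep only suprathreshold
    return labels, names
```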
The representations of the individual trigeminal branches overlap greatly at the brainstem level and less in the thalamic and insular cortex, which is typical for within-body-part representations as opposed to the more distinct somatotopy between body parts.

Functional representations in the lower brainstem

The functional somatotopic arrangement in the brainstem below the pons revealed several clusters for each of the three trigeminal branches as well as for the GON. The trigeminal branches are represented at up to six locations along the spinal trigeminal nucleus, namely: at the level of the pons (around MNI z = −50), below the pons (around MNI z = −56), at the lower end of the cerebellum (around MNI z = −66), at C1 (around MNI z = −78), at C2 (around MNI z = −84), and between C2 and C3 (around MNI z = −94). The GON is represented at five different levels within the lower brainstem. A co-localization, and thereby possible locations for interaction between V1 and GON, is present at five levels of the lower brainstem. We summarized these results in Table 2, Online Supplemental Table 1 and Figure 4. The differential contrasts between the individual branches of the trigeminal nerve as well as the GON did not show any significant results at the chosen statistical threshold (Figure 2b). This is probably due to the onion-shaped central representation of the trigeminal nerve at the brainstem level, which is why we conducted the second experiment testing perioral versus periauricular facial dermatomes (see below). The analysis in which we assigned each voxel surviving the aforementioned threshold to the stimulation site (dermatome) which, within this voxel, had the highest T-value (Figure 4f) showed a clear somatotopic representation of the individual trigeminal branches at different heights of the brainstem/spinal cord.

Functional connectivity between the hypothalamus and the conjunction of V1 and GON

Using the five co-localized clusters of V1 and GON as seeds for a functional connectivity analysis, we found a stronger connectivity (p = 0.027, FWE small-volume corrected; T = 3.85) between the cluster directly below the pons (MNI coordinate z = −56) and the hypothalamus (MNI coordinates: [−4, 0, −9]) for V1, but not for the GON. We found no significant results for the opposite contrast (GON > V1R), nor for the other four clusters which showed a conjunction between V1R and GON (Table 2) within the lower brainstem (Figure 5).

Functional representations of perioral versus periauricular facial dermatomes of the trigeminal branches

The main effect of the second experiment, testing perioral versus periauricular facial dermatomes along V1 and V3, revealed more rostral representations of the perioral and more caudal representations of the periauricular facial dermatomes along the first as well as the third trigeminal branch (Figure 6a, Table 3, Online Supplemental Table 2) at the level of C1. We found differential effects directly below the pons (for V3C vs V3R, and V3C vs V1R), at the level of C2 (V1C vs V3C) and between C2 and C3 (V1R vs V3R) (Figure 6).

Discussion

The aim of this study was to map the functional representation of the three trigeminal branches in the cerebellum, thalamus, insula, and brainstem, and to pinpoint a functional co-localization of the trigeminal nuclei with the nucleus of the GON in the lower parts of the brainstem. The STN is the first relay station of the trigeminal nerve for sensory and nociceptive input.
We found that this macro-anatomically long and thin structure of the lower brainstem holds functional representations of the trigeminal branches at six different levels, three of which lie below the default templates of human fMRI analysis. Our findings are limited by the variance of the individuals' anatomy in the brainstem and spinal cord (for the imaging data) as well as of the face and back of the head (for stimulation electrode positioning). Evidence of a functional relationship between the sensory systems of the face and occipital regions was recently revisited with behavioral data (59), and the question arises where in the brainstem such a functional connection could take place. So far, evidence from neuroimaging has pointed towards areas within the brainstem at levels as low as C2 (44). In our study, we identified possible sites of interaction at five different brainstem levels where representations of the GON are co-localized with the trigeminal nerve. Of particular interest is the level directly below the pons, since this is the only area where the functional connectivity of the first trigeminal branch and of the GON to the hypothalamus differs. It will be interesting to investigate whether the functional connectivity differs between patients in whom the migraine attack starts in the neck/back of the head versus those in whom the pain starts in the forehead. It is also of interest whether and where in these five anatomical brainstem areas the functional connectivity between the occipital and the trigeminal system changes during the migraine phases, or indeed following clinical intervention, e.g. a GON block, and whether this could serve as a predictor of efficacy. Nevertheless, since the occipital nerve enters the brainstem and the TCC at the level of C2, and at this same level crosses to the contralateral side from where the nociceptive signal is transmitted to higher processing centers, the most likely functional modulatory effect between both systems is at the level of C2 (27,29,30,60). The central representation of the three trigeminal branches in the spinal trigeminal nucleus intermingles in an onion-shaped manner with the peripheral facial dermatomes along the perioral-periauricular axis, in such a way that, at least at the level of C1, perioral dermatomes are represented more rostrally than periauricular dermatomes. While no differences between the perioral sites of the three trigeminal branches could be shown in the first experiment, an optimization of the stimulation sites toward presumably more distant representations in the STN revealed differences between the trigeminal branches as well as along the perioral-periauricular axis. This supports the findings of a central onion-shaped representation of facial dermatomes in humans (7). In conclusion, our study provides evidence for multiple central representations of the three trigeminal branches as low as C2/C3 in the brainstem, as well as in the cerebellum and the thalamus in humans. Thereby, the individual branches in the STN seem to follow an onion-shaped representation of the facial dermatomes in which perioral dermatomes are located more rostrally. Furthermore, we have identified several sites of action for a GON blockade along the STN, where functional activity of V1 and GON are co-localized.
Design of robotic arm for the porcelain bushing in substation

With the development and popularization of artificial intelligence, robot technology and 5G technology, a robotic arm is designed and developed in this paper for rinsing porcelain bushings in high voltage substations. Firstly, the components and implementation of the robotic arm are presented; subsequently, a circular cleaning structure with a 120-degree split is proposed to rinse the porcelain bushing. Secondly, a simple and effective two-stage method to realize automatic orientation is proposed utilizing photoelectric switches. Moreover, a prototype of the robotic arm with its control system is developed based on the regime switching function, and the result of edge computing is transmitted by 5G technology. Finally, the feasibility and effectiveness of the robotic arm are verified in the Nanjing power grid. The case study shows that the robotic arm developed by the proposed method can achieve efficient rinsing, and all the corresponding information can be transmitted precisely. The proposed method lays a foundation for the wide application of cleaning robots in high voltage substations.

There is a need for automatic cleaning devices for ceramic post insulators that can improve cleaning efficiency while ensuring consistency in cleaning quality. With the development of power automation and robotics, research on automatic cleaning devices for ceramic post insulators is becoming increasingly important. This article develops an automatic cleaning device for 500 kV porcelain sleeves for power-outage maintenance in substations. The device adopts an open cleaning ring structure, which adapts well to the complex on-site environment of the substation. At the same time, the design of its variable-axis-length cleaning brush rod can effectively improve cleaning efficiency and ensure consistency of cleaning quality.

In this paper, a 5G-driven porcelain bushing cleaning robotic arm for HV electrical devices, a mechanical arm with intelligent functions, is developed for the maintenance of electrical devices in HV substations. Precise positioning of the mechanical arm is achieved with the aid of photoelectric switches and perception of the porcelain bushing position in space. Furthermore, based on the IoT and edge computing 19, the platform tilt angle is perceived and a leveling control method is proposed. In addition, based on a 5G 20,21 shared base station, information interaction during the cleaning of the porcelain bushing is achieved. The field test results indicate that this robotic arm can clean the porcelain bushing effectively with high quality and high security. It enriches the technical measures against porcelain bushing pollution flashover and improves the state control level of substation equipment.

Overall design

The robotic arm is composed of five components: the mobile electric lifting platform, the cleaning brush "finger", the operation box, the drive control box, and the photoelectric switch and lead screw module. The design scheme of the robotic arm is shown in Fig. 1.
The electric lifting platform adopts a scissor-lift mechanical structure with hydraulic power transmission to provide stable support for the whole device. The cleaning brush "finger" is specifically designed to clean the surface of the porcelain bushing. A control button box and an industrial control touch screen are set in the operation box; these are used to control the lifting of the electric lifting platform, the moving speed of the lead screw module and the operating speed of the cleaning brush head motors. The drive control box realizes the control of the automatic mechanical cleaning arm. The photoelectric switch and lead screw modules are used for precise positioning of the robotic arm, which ensures that the porcelain bushing is centered in the opening ring of the cleaning brush "finger".

Design of cleaning brush "finger"

The cleaning brush "finger" is the critical unit of the automatic mechanical cleaning arm. As shown in Fig. 2, a cleaning brush "finger" structure with a 120-degree split is proposed. The upper and lower ends of the porcelain bushings in the substation are connected, respectively, with the base of the inspected device and with other devices, so a fixed-ring cleaning structure is not suitable for site conditions. In addition, due to the long spatial distance between the porcelain bushing and the ground, it is not convenient to adopt a ring cleaning structure with mechanical opening and closing. The recommended 120-degree-split structure of the cleaning brush "finger" can readily adapt to the complex field environment in the substation, and the open structure makes it possible to approach the porcelain bushing from different directions.

The cleaning brush "finger" is made of aluminum alloy, on which 6 cleaning motors (brushless motors) and 3 photoelectric switches are installed. Each cleaning motor is connected to a cleaning brush of different coaxial length by a rigid coupling, to prevent collision with adjacent bristles and to achieve all-round cleaning within 240 degrees. In the process of porcelain bushing cleaning, in order to avoid collision between the electric lifting platform and other devices in the substation, the cleaning brush head is required to extend out of the platform as far as possible. Meanwhile, because the brush head is suspended, an additional torque is produced under the influence of gravity. Therefore, the structure adopts an extension rod with a cardan wheel support device installed in the middle position. On the one hand, the extension of the brush head is achieved; on the other hand, the stability of the structure is improved by the cardan wheel support device, which significantly decreases the torque caused by the self-weight of the brush head. The cleaning brush on the brush "finger" is driven by a DC brushless motor, and its speed can be adjusted flexibly depending on site conditions. The DC motor and the cleaning brush are connected by a rigid coupling, which gives good structural stability.
The cleaning brush adopts internal and external double-layer bristles. The outer bristles are longer than the inner ones, and the material of the outer bristles is softer than that of the inner bristles. The two layers of bristles rotate around the same axis to achieve efficient cleaning. According to practical cleaning experience, there may be friction and collision between adjacent cleaning brush hairs on the same plane, which may affect the actual cleaning effect. Therefore, the brush rod structure allows dynamic adjustment of adjacent cleaning brushes between different working surfaces and avoids collision between brush hairs. Moreover, the effective cleaning area is enlarged and the cleaning efficiency is improved.

To express these ideas more clearly, the two-stage positioning process is illustrated with the schematic diagram of the cleaning claw and porcelain sleeve shown in Fig. 3. The specific positioning process can be summarized in the following steps (a control sketch of these steps follows below):

(1) The operator performs preliminary alignment, and the insulator cleaning device moves longitudinally to approach the insulator to be cleaned. When the No. 1 or No. 2 photoelectric switch is activated, the first-level positioning stage begins.

(2) Based on the operation of the No. 1 and No. 2 photoelectric switches, the horizontal position of the cleaning brush head is adjusted until both photoelectric switches operate simultaneously. This indicates that the opening of the cleaning brush head is aligned with the insulator and the first-level positioning is completed.

(3) The cleaning brush head is fed longitudinally again, further approaching the insulator. When the No. 3 photoelectric switch is activated, it indicates that the insulator to be cleaned is already inside the cleaning brush head and very close to the No. 3 photoelectric switch at the root of the brush head. At this point, the secondary positioning stage begins.

(4) Based on the pre-measured distance information, the cleaning brush head is controlled to retreat a given distance, ensuring that the insulator is at the center of the open cleaning brush head, as shown in Fig. 3. At this point, the secondary positioning is completed, and the device waits for the next cleaning process to proceed.
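Steps (1)-(4) describe a small finite-state controller driven by the three switches. The sketch below is one plausible encoding in Python; the state names, motion primitives and the left/right convention for centering the opening are illustrative assumptions, not the authors' implementation.

```python
from enum import Enum, auto

class Stage(Enum):
    APPROACH = auto()    # step (1): longitudinal feed until switch 1 or 2 fires
    PRIMARY = auto()     # step (2): center the opening using switches 1 and 2
    ADVANCE = auto()     # step (3): feed until switch 3 at the ring root fires
    SECONDARY = auto()   # step (4): retreat a pre-measured distance
    DONE = auto()

def control_step(stage: Stage, sw1: bool, sw2: bool, sw3: bool):
    """One update of the two-stage positioning logic; returns (stage, command).
    The lateral direction chosen when only one switch fires is arbitrary here."""
    if stage is Stage.APPROACH:
        return (Stage.PRIMARY, "stop") if (sw1 or sw2) else (stage, "feed_forward")
    if stage is Stage.PRIMARY:
        if sw1 and sw2:                      # opening aligned with the bushing
            return Stage.ADVANCE, "feed_forward"
        return stage, ("shift_right" if sw1 else "shift_left")
    if stage is Stage.ADVANCE:
        return (Stage.SECONDARY, "retreat_preset") if sw3 else (stage, "feed_forward")
    if stage is Stage.SECONDARY:
        return Stage.DONE, "hold"            # bushing centered in the open ring
    return stage, "hold"
```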
Photoelectric switch design

The photoelectric switches are the positioning core of the porcelain bushing automatic mechanical cleaning arm; they are installed at a spacing of 120 degrees on the cleaning brush "finger" ring, as shown in Fig. 4. The No. 1 and No. 2 photoelectric switches at the opening of the ring are installed face to face. Using this primary positioning group, the porcelain bushing can be aligned accurately at the connection center of the ring opening. The No. 3 photoelectric switch is installed at the joint between the end of the cleaning ring and the extension rod, facing the center of the ring, as the secondary positioning group. The inductive distance of the photoelectric switches is about 50 mm. According to the feedback of the positioning groups, the position of the porcelain sleeve can be detected and the movement of the mechanical cleaning arm adjusted accordingly, so that the porcelain bushing is located at the center of the opening ring of the cleaning brush "finger". When the first-level positioning is completed, the porcelain bushing is located at the connection center of the ring opening; the lead screw module then drives the cleaning brush head forward, and the porcelain sleeve enters the working range of the ring, gradually approaching the No. 3 photoelectric switch. When the No. 3 photoelectric switch at the joint operates, it indicates that the porcelain bushing is located at the center of the opening ring, i.e. the insulator target location, and the secondary positioning is completed. The three photoelectric switches at a spacing of 120 degrees thus enable two-stage positioning, performing the alignment of the porcelain bushing simply and effectively and preventing the collision risk caused by deviation of the porcelain bushing position. The position of the porcelain bushing is judged by the photoelectric switches. After receiving the corresponding signal, a high-speed pulse is sent out by the PLC to drive the stepper motor driver, so that the lead screw module moves to the appropriate position. The photoelectric switches and the module limit switch judge the position of the target porcelain bushing accurately according to the sensor feedback. A relay is used for height adjustment of the lifting platform by controlling the oil pump motor and the oil pump solenoid valve. The upper device uses an industrial touch screen to communicate with the S7-200 PLC for motion control and parameter setting.

Automatic leveling control mechanism

In order to ensure that the cleaning brush "finger" ring does not collide with the porcelain sleeve during leveling of the electric lifting platform, the angle of the lifting platform must stay within an allowable error range, and the angle should be adjusted automatically with edge computing when it is out of range. The control flow chart is shown in Fig. 6. The platform leveling of this arm adopts a real-time angle-feedback control method and utilizes three groups of gyroscopes installed in the drive control box to collect the tilt angle of the platform under working conditions. Before angle acquisition, three groups of initial angle reference values α₀ = (α₀^X, α₀^Y, α₀^Z) are set in advance. The inclination angles α_i of the load surface relative to the horizontal plane in the three dimensions (x, y, z) are then collected at a rate of 40 samples per second. The absolute inclination used for control is calculated as A_i = α_i − α₀ = (A_i^X, A_i^Y, A_i^Z), which represents the absolute inclination of the platform in three dimensions. In order to level the arm, the ideal adjustment rates in the three dimensions, R_j = (r_j^X, r_j^Y, r_j^Z), j = 1, 2, are further introduced. According to the absolute tilt angle A_i, the rotation speed of the stepping motors located at the bottom legs of the electric lifting platform is dynamically adjusted.
To reduce the influence of accumulated error, the reference values of the angle in each dimension are reset every 10 s while the arm moves, and when the movement of the arm is aborted and restarted, the reference values are reset as well. After the angle data α of each group of gyroscopes is detected, the stepping motor corresponding to the bottom leg of the platform is activated to achieve angle control of the platform. When A_i is in a neighborhood of the origin of the 3D angle space, namely |A_i^X| ≤ ε_X, |A_i^Y| ≤ ε_Y and |A_i^Z| ≤ ε_Z, the motor is locked. It should be noted that ε_X, ε_Y, ε_Z are the locked sensitivity angles of each dimension. To simplify the discussion, the locked sensitivity angle of each dimension is taken to be θ₀.

The real-time angle-feedback control method used in this arm comprises two practical control strategies, threshold control and flexible control, which can be selected and switched through the Human Machine Interface (HMI).

The threshold control strategy, taking x-axis leveling as an example, is as follows. When the gyroscope detects that α_i^X exceeds [−θ₁, θ₁], the speed of the stepping motor is adjusted at the rate r_1^X. When α_i^X returns to [−θ₁, θ₁] and is not in the locked range, the speed of the stepping motor slows down, i.e. the speed is adjusted at the rate r_2^X. Under the threshold control strategy, the x-axis regulation rate at time i is

r_i^X = r_1^X if |α_i^X| > θ₁; r_i^X = r_2^X if θ₀ < |α_i^X| ≤ θ₁; r_i^X = 0 if |α_i^X| ≤ θ₀,

where r_1^X > r_2^X and θ₁ > θ₀.

The flexible control strategy refers to the flexible control of the leg stepping motor by introducing the regime switching function. The regime switching function proposed by Chen et al. 8, taking the x-axis as an example, can be written in logistic form as

G(α_i^X) = 1 / (1 + exp(−γ(|α_i^X| − c))),

where c is the threshold value and γ is the conversion speed parameter, which is generally taken as a positive integer and satisfies γ ≫ 1. When the gyroscope detects that α_i^X exceeds [−θ₁, θ₁], the speed regulation transitions to r_1^X through the regime switching function (3). When α_i^X returns to [−θ₁, θ₁] and the motor is not locked, the speed of the stepping motor slows down and the angle of the platform is adjusted softly. According to the flexible control strategy, based on the regime switching function, the regulation rate at time i is

r_i^X = r_2^X + (r_1^X − r_2^X) · G(α_i^X),

where r_1^X > r_2^X. The recommended threshold value is θ₀, and the recommended range of the conversion speed parameter γ is [400, 600].

The threshold control strategy achieves real-time control of the absolute tilt angle and has the advantage of fast response time (within 500 ms), though its adjustment accuracy is low. The advantage of the flexible control strategy is that the adjustment is more accurate, the adjustment process is smoother, and the overshoot is smaller, which ensures that the inclination of the load surface can be adjusted to within θ₁ at a suitable speed. The strategy is selected on the basis of the practical restrictions of the site; generally, both strategies can meet the practical needs of speed adjustment. According to practical maintenance experience in HV substations, the recommended value of θ₀ is 2°. Moreover, the value of θ₁ should satisfy

θ₁ ≥ k · θ_min,

where the margin parameter k is greater than 2 and θ_min is the minimum angle that can be identified. The recommended value of θ₁ is 5°.
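Taken together, the two strategies reduce to two rate laws per axis. The Python sketch below implements them for the x-axis using the logistic form of the switching function given above; the defaults follow the recommendations in the text (θ₀ = 2°, θ₁ = 5°, γ in [400, 600]), while the concrete rates r₁ and r₂ are left to the caller.

```python
from scipy.special import expit  # numerically stable logistic function

def threshold_rate(alpha_x: float, r1: float, r2: float,
                   theta0: float = 2.0, theta1: float = 5.0) -> float:
    """Threshold control: fast rate r1 outside [-theta1, theta1], slow rate
    r2 between theta0 and theta1, motor locked inside the dead band."""
    a = abs(alpha_x)
    if a <= theta0:
        return 0.0                 # locked
    return r1 if a > theta1 else r2

def flexible_rate(alpha_x: float, r1: float, r2: float,
                  c: float = 2.0, gamma: float = 500.0) -> float:
    """Flexible control: smooth blend between r2 and r1 through the regime
    switching function G = 1 / (1 + exp(-gamma * (|alpha| - c)))."""
    g = expit(gamma * (abs(alpha_x) - c))
    return r2 + (r1 - r2) * g
```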
To illustrate the requirements of the coordinate system more clearly, refer to Fig. 7. An Oxyz coordinate system is established, with the ground as the reference plane and the projection point of sensor 3 as the origin, to describe the spatial position information. The real-time values of the x-axis, y-axis and z-axis regulating speeds and of the torque sensor are collected by the intelligent maintenance terminal. Afterwards, the regulating speed vector is synthesized and a real-time map of the regulating speed vector is drawn utilizing edge computing; ultimately, the optimized control strategy is obtained. All this valuable information is sent back to the monitoring system over the 5G network.

Perception technology of the charged area based on UWB

Considering the complex electrical environment in a 500 kV substation, when the automatic cleaning robotic arm works it is necessary to keep sufficient distance between the arm and the electrical devices in the HV substation to ensure the safety of devices and personnel. Owing to edge computing, a safe-distance control method and a location method based on UWB location technology are presented. The principle of UWB ranging is the two-way time-of-flight method: the base station transmits a request pulse signal at time T_a1; the tag receives the request signal at time T_b1 and transmits a response signal at time T_b2; the base station receives the response signal at time T_a2. The distance R between base station and tag can then be calculated by

R = C · [(T_a2 − T_a1) − (T_b2 − T_b1)] / 2,   (6)

where C is the speed of light. The dual safe-distance control method, combining a mobile UWB base station array and a standing UWB base station array, is employed to perform charged-area sensing. This method can be divided into two types (type I control and type II control), as shown in Fig. 8.

Safe distance control based on the mobile UWB base station array (type I control). A mobile light UWB base station array is arranged around the outage area, and a UWB label is fitted on the automatic mechanical cleaning arm to perform dynamic calculation of the distance between the arm and the charged area. When the distance R between any light UWB base station and the label is less than the low warning threshold (LWT), an audible and visual alarm is triggered. When the distance R between the base station and the label is less than the high warning threshold (HWT), the arm is locked and the power supply is interrupted forcibly.

Safe distance control based on the standing UWB base station array (type II control). In order to monitor the position of equipment and workers in the substation and achieve better professional management, a standing UWB base station array is arranged at the primary site of the substation. Based on the distance measurements of the UWB label mounted on the automatic sweeping arm, the distances between the device label and the different standing UWB base stations can be obtained. Using more than three distance measurements, the specific position of the automatic sweeping arm in the working area of the substation can be determined by the intersecting circle principle, and the relevant position and sweeping information can be transmitted back to the main control room of the substation over the 5G network. Therefore, a real-time charged-area warning map can be drawn based on the returned data.
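The ranging and positioning steps above translate directly into code. The sketch below implements the two-way time-of-flight distance of Eq. (6) and a simple least-squares version of the "intersecting circle" position fix from three or more anchors; the numeric LWT/HWT values in the safety helper are placeholders, since the paper does not state them.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_a1, t_a2, t_b1, t_b2):
    """Eq. (6): half of the round-trip time minus the tag's reply delay."""
    return C * ((t_a2 - t_a1) - (t_b2 - t_b1)) / 2.0

def locate(anchors, ranges):
    """Least-squares 'intersecting circles': anchors is (n, d) with n >= d + 1,
    ranges is (n,). Linearized by subtracting the first anchor's equation."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def safety_action(distance_m, lwt=2.0, hwt=1.0):
    """Type I control logic; lwt/hwt are hypothetical placeholder values."""
    if distance_m < hwt:
        return "lock_arm_and_cut_power"
    if distance_m < lwt:
        return "audible_visual_alarm"
    return "ok"
```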
Owing to edge computing, small-scale positioning of the working area of the arm is achieved using type II control, which isolates the charged area near the work site and ensures the safety of equipment and personnel during cleaning. Efficient control of personnel movement tracks is obtained, and major accidents such as inrush into a charged bay are effectively prevented, by means of type I control. This article thus proposes a safe-distance control and positioning method based on UWB positioning technology. Ultra-wideband (UWB) technology is a radio technology based on the IEEE 802.15.4a and 802.15.4z standards, which can accurately measure the flight time of radio signals, thereby achieving centimeter-precision distance/position measurement. Unlike other positioning technologies such as Bluetooth and WiFi, the inherent physical characteristics of UWB RF signals have defined UWB technology from the beginning: real-time, ultra-precise and ultra-reliable positioning and communication. UWB positioning technology has high positioning accuracy and is not affected by harsh conditions such as dust, rain and snow. It can be used under high voltage and strong magnetic field conditions. At the same time, its position refresh frequency is high, and it can send the target object position to the management platform for presentation without delay.

Simulation

The basic design requirement of the cleaning device is to realize safe and reliable cleaning of the porcelain bushing. Operational reliability requires that the device can accurately identify the porcelain bushing and ensure that it is essentially at the center of the cleaning brush, whereas operational safety requires that the device does not collide with the body of the porcelain bushing during the process of embracing and cleaning it. Since the low-cost two-stage automatic orientation method used in this paper realizes alignment of the porcelain bushing by measuring distance, a set of laser sensors distributed at 120° (No. 1, No. 2, No. 3) was installed at the cleaning brush "finger" of the device to verify its reliability and safety. Whether the porcelain bushing is at the center of the cleaning brush "finger" is judged by the distance measurements fed back during the cleaning process, as shown in Fig. 9. Table 1 records the ranging results. The data demonstrate that the simple and effective two-stage method proposed in this paper can effectively realize alignment of the porcelain bushing and ensure the safety and stability of the subsequent cleaning work. The coordinate system used for sensor positioning is shown in Fig. 10, and the laser sensor experiment is shown in Fig. 11.

In the actual experiments, the key distance measurements during the adjustment process of the cleaning brush are shown in Fig. 12. The robotic arm smoothly approaches the porcelain bushing without exceeding the preset safety distance (0.05 m), which ensures the reliability and safety of subsequent cleaning. Further, the action speed of the cleaning robot can be flexibly adjusted according to the preset motion parameters, and good stability and consistency of the whole cleaning process was ensured through repeated debugging.
Construction of the 5G shared base station

The signal intensity can be weakened in particular areas of the switch yard by the strong electromagnetic interference in the HV substation. The transmission of the edge computing results obtained from the switch yard would then be seriously affected and the control performance would be poor. To solve this problem, the 5G network can be used as the physical basis for the robotic arm to transmit the edge computing results to the monitoring and control system. Although China is racing to build 5G networks, the construction of 5G base stations still seems a long way off. Accelerating the deployment of 5G base stations near substations is conducive to advanced technology based on 5G, which may improve the intelligent maintenance level of the substation. In this paper, a new idea of 5G shared base station construction is proposed.

The roofs of some substations will be opened to 5G operators for the placement of 5G antennas, so that the difficulty of base station siting no longer exists. Massive MIMO technology is widely applied in 5G base stations, and the power supply demand is two to three times that of a 4G base station, so power supply is a huge challenge for the 5G operators. In a 5G shared base station, since highly reliable power transformed from the HV substation can supply dozens of 5G base stations, uninterrupted power supply to the communication equipment of the 5G base station is guaranteed.

The inspection, operation and maintenance of the 5G base station should be included in the overall maintenance system of the substation, to reduce the maintenance costs of operators significantly. For example, the inspection robot in the 500 kV HV substation inspects all the devices three times a day, so the inspection cost of the equipment in the 5G base station is substantially reduced. Therefore, although the construction of the 5G base station is costly, the benefit of using 5G is still considerable.

According to the above methods, a 500 kV substation was rebuilt as a 5G shared base station in advance, and a 5G information transmission test and a network storm test using simulation data were carried out. Before the field test of the robotic arm, the technical preparation for 5G information interaction had been completed.

Field test

According to the porcelain structure of a typical 220 kV circuit breaker in a 500 kV substation, the corresponding structural parameters of the porcelain cleaning robotic arm were set, and the prototype of the robotic arm was developed. The parameters of the robotic arm are summarized in Table 2. After preliminary commissioning in the laboratory, a cleaning test on the porcelain bushing of a CB was performed in the power grid of Nanjing, as shown in Fig. 13. When the robotic arm is close to the CB under test, the platform is leveled according to the established control strategy to ensure that the lifting platform always remains balanced, and the adjustment needs to be completed within 0.5 s.
When the robotic arm reached the designated test position, the arm started to clean the CB porcelain bushing. All sensors operated normally. It takes 30 s to clean a 4.8 m long 500 kV porcelain bushing. The field test shows that the initial positioning time of workers before manual cleaning varies widely (20-75 s), giving a total manual cleaning time of 50-105 s; the robotic arm therefore improves significantly on the efficiency of manual cleaning. During both dynamic and static operation, the robotic arm correctly perceives the surrounding energized area and effectively raises an alarm or cuts off the power supply, with a sensitivity that reaches the design expectation. The robotic arm achieves platform leveling according to the established control strategy, with a single leveling time of about 300 ms (threshold control strategy) to 650 ms (flexible control strategy). Edge computing is employed to process the sensed information; based on the result of edge computing, the control strategy drives the arm to clean the CB porcelain bushing. Moreover, the result of edge computing is transmitted to the monitoring center by 5G technology. The performance of the robotic arm is summarized in Table 3; for comparison, the performance of conventional manual methods is shown in Table 3 as well. According to Table 3, the robotic arm shows higher efficiency than conventional methods.

Table 2. Parameters of the robotic arm.
Weight of the cleaning brush "finger" and the lead screw module: 55 kg
Weight of the robotic arm: 520 kg
Length of the extension rod of the cleaning brush "finger": 0.9 m
Number of photoelectric switches: 3

In the 5G shared base station, the signal was free of distortion during the field test, which effectively realizes fast transmission of the related information. In conclusion, the main performance targets of the porcelain cleaning robotic arm have been achieved and even exceeded the design expectations.

Conclusion

In this paper, a cleaning robotic arm for the porcelain bushings of high-voltage devices in substations is proposed. In order to adapt to the complex environment of the substation, an open cleaning ring structure is adopted in the mechanical design, and edge computing is used in the control system. A new platform leveling control strategy and a perception method for the charged area based on UWB are proposed. Moreover, the technical framework for the construction of a 5G shared base station is introduced. The field test results show that the robotic arm has a good porcelain bushing cleaning capability. The technical measures against pollution flashover of porcelain bushings are enriched, the state control level of substation equipment is improved, and important guidance for future research on intelligent operation and inspection is provided.

In the future, the structure of the brush head of the porcelain cleaning robotic arm will be further optimized and its adaptability to porcelain of different sizes should be enhanced. Along with edge computing and 5G technology, the positioning accuracy and control efficiency of the arm can be further improved. Besides, the integration of the arm, its drive control system and the upper computer software also needs to be improved.
Figure 3. Schematic diagram of the cleaning claw and porcelain sleeve.
Figure 6. Control flow chart of the real-time angle feedback control method.
Figure 7. Schematic diagram of the coordinate system description. (a, b) Coordinate system diagrams.
Figure 8. Two types of location method.
Figure 9. Measuring results of the sensors during cleaning. (a) Measuring result of sensor No. 1. (b) Measuring result of sensor No. 2. (c) Measuring result of sensor No. 3.
Figure 12. Positioning adjustment and measuring results.
Table 1. Detection results of features.
Table 3. Performance comparison of the robotic arm and the conventional method.
Study on the species composition and ecology of anophelines in Addis Zemen, South Gondar, Ethiopia

Background: Malaria is a public health problem in Ethiopia and its transmission is generally unstable and seasonal. For the selection of the most appropriate vector control measures, knowledge of the ecology of the vector is necessary at a local level. Therefore, the objectives of this study were to document the species composition, breeding habitat characteristics and occurrence of anopheline larvae in Sheni stream, and the vectorial role of the prevailing Anopheles in relation to malaria transmission in Addis Zemen, Ethiopia.

Methods: Immature anophelines were sampled from breeding habitats, and characteristics of the habitats, such as water temperature, turbidity, water current, water pH and other variables, were measured from October 2011 to February 2012. Adult anophelines were sampled inside human dwellings using space spray and Center for Disease Control light traps. Artificial pit shelters and clay pots were also used for outdoor adult collections. The anophelines collected were identified using morphological keys. The enzyme-linked immunosorbent assay was applied to detect circumsporozoite proteins of Plasmodium and the sources of blood meals.

Results: A total of 6258 Anopheles larvae were collected and identified morphologically. Five anopheline species were found: An. gambiae (s.l.), An. cinereus, An. demeilloni, An. christyi and An. pretoriensis. Anopheles gambiae (s.l.) existed in most of the habitats investigated; only the former three species were captured in the adult collections. Sun-lit Sheni stream, rain pools, hoof prints, and drainage and irrigation canals were found to be larval habitats. Anopheles gambiae (s.l.) larvae were most abundantly sampled from sand-mining and natural sand pools of Sheni stream. Multiple regression analysis showed that clear, permanent and temporary habitats devoid of mats of algae were the best predictors of An. gambiae (s.l.) larval abundance. It is also the malaria vector responsible in the study area and exhibits anthropophilic and endophagic behaviour.

Conclusions: The malaria vector An. gambiae (s.l.) was found in Addis Zemen throughout the study period in both adult and larval collections. Sheni stream is the main larval habitat responsible for the occurrence of anopheline larvae during the dry season of the study area, when other breeding sites perish.

Background

Malaria is one of the main public health problems globally and is endemic in 91 countries of the world. An estimated 212 million cases occurred in 2015; of these, 90% were in Africa [1]. In Ethiopia, about 68% of the population (approximately 67.5 million people in 2015) is at risk of malaria [2]. The transmission of malaria in Ethiopia is generally unstable and seasonal. There are two malaria transmission seasons in the country: the major transmission season occurs between September and December, following the rains from June to August, and the second occurs between April and May, due to the February and March rains. Some localities may also experience perennial malaria transmission, as the environmental and climatic situation permits the continual breeding of vectors in permanent breeding sites [2,3]. In Ethiopia, there are four species of Anopheles mosquitoes which transmit malaria, namely Anopheles arabiensis, An. pharoensis, An. funestus and An. nili. The first is the major vector, whereas the rest are secondary vectors [2].
The control of malaria involves education, vector control and chemotherapy; however, vector control has been recognized as the most effective [4]. To implement effective and locally suitable vector control measures, a detailed understanding of the ecology and behaviour of the local vectors and of local malaria transmission dynamics is necessary [5]. Although malaria is prevalent in Addis Zemen, Libo-Kemkem Woreda [6,7], information on the species composition, breeding sites, distribution and densities of malaria vectors is lacking. Therefore, this study aimed to document the species composition, larval habitat characteristics and the role of a small stream in maintaining larvae during the dry months. Study area This study was conducted from October 2011 to February 2012 in Addis Zemen town in Libo-Kemkem District, in the South Gondar Zone of the Amhara Regional State. The district is situated at 37°15′36"E, 11°54′36"N, at an average elevation of 2000 m above sea level. The area receives a unimodal rainfall of approximately 1300 mm per year, mostly between June and August. The mean annual temperature is 19.7°C. The district is divided into 30 kebeles, the smallest administrative units. According to the 2007 census report of the Ethiopian Central Statistical Agency (ECSA), its total population was 196,813, of which 88.9% lived in rural areas. Addis Zemen is the capital town of Libo-Kemkem. It is divided into three kebeles, which are separated by a road and Sheni stream (Fig. 1). Various government institutions and residential houses are located close to the stream. Local residents use Sheni stream for irrigation, swimming, washing clothes and sand mining. The present entomological study covered all three kebeles to understand the situation of malaria transmission. The study included inspecting whether Sheni stream maintains the aquatic stages of the malaria vector(s) in the dry months. Entomological studies Larval sampling and species identification Anopheline larvae were sampled twice monthly between October 2011 and February 2012, with particular focus on larval habitats along Sheni stream. Larvae were also sampled from habitats outside Sheni stream in Addis Zemen. A standard larval dipper (11.5 cm diameter, 350 ml capacity), pipettes and a plastic tray were used in larval sampling. After inspecting for the presence of anopheline mosquito larvae, ten dips were taken from each mosquito breeding habitat [4,8]. The water was left to settle for about 2 min after each dip. Anopheline larvae were separated from culicines and recorded according to their larval instar stages as first-, second-, third- and fourth-instar on prepared data sheets. Sampling was done by the same person in the morning (09:00-12:00 h) or afternoon (14:00-17:00 h) for about 60 min or less at each larval habitat throughout the sampling period. From the collected larvae, all third- and fourth-instar anopheline larvae were killed and preserved in small vials containing 3% formalin solution. Each larva was mounted separately on a glass slide in a drop of Gum-Chloral mounting medium and covered with a coverslip [9]. Identification of larvae was carried out using a compound microscope based on the keys of Gillies & Coetzee [10]. Larval habitat characterization For each habitat, environmental factors that could potentially be associated with the abundance of anopheline larvae were measured and recorded simultaneously with larval sampling.
These characteristics included habitat depth, width and length, water pH, water temperature, exposure to sunlight, turbidity, vegetation type, water current, substrate type, whether the habitat was natural or man-made, presence of green algae, permanence of the habitat and distance of the habitat to the nearest house. Water temperature was measured using an ordinary mercury thermometer and pH was measured using a pH meter. A metal ruler was used to measure breeding habitat length, width and depth. The depth of each habitat was measured at three different points and the average of these measures was recorded. Water current was determined by visual inspection and categorized as slow-flowing or stagnant. Turbidity was assessed by taking water in glass test tubes and holding them against a white background to categorize the water as either clear or turbid. Exposure to sunlight was visually categorized as sunlit or shaded. The type and presence of aquatic vegetation was observed and recorded as emergent, floating, emergent plus floating, or none if there was no vegetation at all. The type of substrate was observed and recorded as muddy, stone and soil, stone and sand, or gravel with little soil and stone. Distance to the nearest house was measured with a measuring tape when it was shorter than 100 m and by footsteps when it exceeded 100 m. These distances were then categorized into three classes (1: 0-100 m; 2: 100-300 m; 3: > 300 m) [11,12]. The larval habitats were finally grouped according to their stability into temporary, semi-permanent and permanent habitats. Temporary habitats hold water for a short period of time (i.e. until approximately two weeks after larvae were collected from that habitat), whereas semi-permanent habitats retain water for two to eight weeks. Permanent habitats, conversely, hold water for a longer period of time (i.e. for more than two months, until the end of the sampling period) [12]. Collection of indoor adult anophelines Indoor anophelines were sampled using Center for Disease Control (CDC) light traps and pyrethrum spray sheet collections (PSC). The households were selected based on their distance from Sheni stream, which was 50-300 m. The houses and days of collection for PSC and CDC light trap collections were different. CDC light trap (John W. Hock Company, Gainesville, Florida, USA) collections were carried out twice monthly from six selected houses. For sampling of night-biting mosquitoes, light traps were hung near the feet of sleeping persons and operated the whole night from dusk to dawn. Indoor resting mosquitoes were collected twice monthly using the pyrethrum spray sheet collection (PSC) method from five selected houses close to Sheni stream between 06:00-08:00 h [8]. Collected female Anopheles were classified according to their abdominal stages and identified using the keys in Gillies & Coetzee [10]. Specimens were preserved individually in Eppendorf tubes containing silica gel and then transported to the Aklilu Lemma Institute of Pathobiology (ALIPB) for further laboratory analysis. Collection of outdoor resting mosquitoes Two artificial pit shelters [13] were constructed at shaded sites under a tree or large bush near human dwellings close to Sheni stream. In addition to pit shelters, a total of six clay pots (two per kebele) were used to sample outdoor resting anopheline mosquitoes. Pots were placed in shaded places, such as under trees [14]. The outdoor traps were set at least 1 km apart from each other.
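As a minimal illustration of the habitat grouping rules just described, the following Python sketch encodes the permanence and distance classes. The function names, the use of weeks as the duration unit and the example values are our own illustrative assumptions, not part of the original study; the boundary between eight weeks and two months is treated here as permanent for simplicity.

from typing import Union

def classify_permanence(weeks_holding_water: Union[int, float]) -> str:
    # Temporary: held water up to ~2 weeks after larvae were collected.
    if weeks_holding_water <= 2:
        return "temporary"
    # Semi-permanent: retained water for 2-8 weeks.
    elif weeks_holding_water <= 8:
        return "semi-permanent"
    # Permanent: held water for more than ~2 months (approximated here as > 8 weeks).
    return "permanent"

def classify_distance(distance_m: Union[int, float]) -> int:
    # Map distance to the nearest house onto the study's three classes.
    if distance_m <= 100:
        return 1   # class 1: 0-100 m
    elif distance_m <= 300:
        return 2   # class 2: 100-300 m
    return 3       # class 3: > 300 m

print(classify_permanence(5), classify_distance(150))   # semi-permanent 2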
Resting mosquitoes in these shelters were collected twice a month with an aspirator, a torch and a mosquito cage [4]. The collection time for outdoor resting mosquitoes was 13:00-15:00 h. Mosquitoes were identified, categorized according to their abdominal status, preserved individually in Eppendorf tubes containing silica gel and transported to ALIPB. Identification of blood meal origins and circumsporozoite proteins of Plasmodium in anophelines Blood meals of Anopheles captured by the various collection methods were tested using enzyme-linked immunosorbent assay (ELISA), following the procedure of Beier et al. [15], at the Entomology Laboratory of ALIPB. The abdomen of each freshly fed Anopheles was ground in 50 μl PBS (0.01 M phosphate-buffered saline), pH 7.4. Samples were then diluted in PBS (1:50) and 50 μl of the triturate added to each well of the plates, which were then covered and incubated at room temperature for 3 h. At the same time, positive controls (human and bovine whole blood) and a negative control (prepared from laboratory-reared unfed female An. arabiensis) were added to specific wells. Each well was then washed twice with PBS containing 0.5% Tween 20 (PBS-Tw 20). This was followed by the addition of 50 μl of host-specific conjugate [anti-host IgG conjugated to either peroxidase or phosphatase; human IgG diluted 1:2000 and bovine 1:250 in 0.5% boiled casein containing 0.025% Tween 20 (peroxidase conjugate for human, phosphatase conjugate for bovine)]. After 1 h, wells were washed three times with PBS-Tween 20. Finally, the absorbance at 414 nm was determined with a microplate reader 30 min after the addition of 100 μl of ABTS peroxidase substrate. Each blood meal sample was considered positive if its absorbance value exceeded the mean plus three standard deviations of the mean of three negative controls, and also by observing a color change (green color). Similarly, Anopheles mosquitoes were tested for sporozoite infection using ELISA as described in Wirtz et al. [16]. The head and thorax of each Anopheles collected was ground in 50 μl of blocking buffer (BB) (IG-630). After grinding, each pestle was rinsed with two 100 μl volumes of BB to bring the total triturate to 250 μl. Each well of a 96-well plate was coated with 50 μl of monoclonal P. falciparum and P. vivax-210 and -247 capture antibodies; the plates were then covered and incubated overnight at room temperature. Separate plates were used for each parasite species. The next morning the contents of the plates were aspirated, each well filled with blocking buffer and incubated at room temperature for 1 h. Then, the blocking buffer was aspirated and 50 μl of the mosquito triturate was added to the appropriate dried wells. Fifty microliters of positive (commercially prepared controls for each parasite) and negative (prepared from laboratory-reared uninfected female An. arabiensis) controls were also added to specific wells at this time. After 2 h of incubation, the mosquito triturate was aspirated and the wells washed twice with PBS containing Tween-20. Then, 50 μl of monoclonal antibody-peroxidase conjugate was added to each well, and after 1 h the plate was washed three times with PBS-Tw. Finally, the absorbance was read at 405 nm using a microplate reader 30 min after the addition of ABTS substrate. All results were recorded as negative because no absorbance value exceeded the mean plus three standard deviations of the mean of the three negative controls, and no color changes were observed.
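The ELISA positivity rule used above is a simple arithmetic cutoff: a sample is scored positive only when its absorbance exceeds the mean of the negative controls plus three standard deviations. The sketch below shows that computation in Python; all absorbance values are invented for illustration and are not the study's data.

from statistics import mean, stdev

# Three negative-control absorbances (invented values).
negatives = [0.061, 0.058, 0.064]

# Cutoff = mean of negatives + 3 standard deviations.
cutoff = mean(negatives) + 3 * stdev(negatives)

# Hypothetical sample absorbances; IDs are placeholders.
samples = {"mosq_01": 0.055, "mosq_02": 0.412, "mosq_03": 0.068}
for sample_id, absorbance in samples.items():
    status = "positive" if absorbance > cutoff else "negative"
    print(f"{sample_id}: A = {absorbance:.3f} -> {status}")

With these invented values the cutoff is 0.070, so only mosq_02 would be scored positive; in the actual study no sample exceeded the cutoff.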
Data analysis Data were entered into Microsoft Excel 2003 and copied to the data editor window of STATA version 11. The distribution of the data was checked for normality by plotting a histogram. Mean larval density among habitat types and among habitat characteristics with three or more categories was compared using one-way analysis of variance (ANOVA). The mean density of each anopheline species between habitat characteristics with two categories was compared by Student's t-test for independent samples. The effect of environmental variables on the presence or absence of Anopheles larvae in a given habitat was investigated using logistic regression after recording all the variables for each individual larva. Larval densities of a particular anopheline species in each breeding habitat were expressed as the number of larvae per 10 dips [4]. Linear regression was used to determine the predictor habitat characteristics associated with the relative larval abundance of anopheline species. Significant associations observed in linear regression were further examined using multiple regression analysis. Data on adult anopheline species were analyzed using standard descriptive techniques. The mean daily density of anophelines collected in CDC traps was calculated as the number of adult Anopheles collected/number of traps/number of nights; an illustrative sketch of these two density indices appears after this section. Results Anopheles larval collections Anopheline species composition and habitat diversity A total of 6258 Anopheles larvae were collected from different breeding habitats, of which 3926 (62.7%) were early instars and the remaining 2332 (37.3%) were late instars. Five species, namely An. gambiae (s.l.), An. cinereus, An. demeilloni, An. christi and An. pretoriensis, were identified (Table 1). The identity of An. gambiae (s.l.) is inferred from a study conducted in Gorgora, which lies in the same geographical area as Addis Zemen. The most abundant species was An. cinereus, followed by An. gambiae (s.l.). The proportions of the remaining Anopheles species were small. A total of 73 aquatic habitats were sampled, most of them at different sites along Sheni stream (Table 1). The habitats were sand mining and naturally created sand pools along Sheni stream (n = 66), rain pools (n = 4), a hoof print (n = 1), a drainage canal (n = 1) and an irrigation canal (n = 1) (Fig. 2). The habitats on Sheni stream persisted throughout the study period, while the rest dried out in December. Larvae of the two predominant species, An. gambiae (s.l.) and An. cinereus, were collected most abundantly from Sheni stream, and the stream was productive throughout the study period for both species. Sand mining pools of Sheni stream and the drainage canal were inhabited by larvae of An. demeilloni and An. christi; however, these were scarce and absent from other types of habitats. Anopheles pretoriensis was very scarce and found only in sand mining pools of Sheni stream, together with An. gambiae (s.l.) and An. cinereus. Habitat characteristics associated with larval occurrence All the habitats identified were exposed to sunlight. Anopheles gambiae (s.l.) was more likely to occur in temporary than in permanent habitats (OR = 27.80, 95% CI: 1.67-463.25) and in muddy substrates rather than combined soil and stone substrates (OR = 1.0135, 95% CI: 1.000-1.700). Anopheles cinereus was usually absent from habitats without mats of algae (OR = 0.016, 95% CI: 0.001-0.133).
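The two density indices defined in the Data analysis section above reduce to simple ratios. The following Python sketch makes the arithmetic explicit; the counts, trap numbers and nights below are invented for illustration only (they were chosen to reproduce a density of 0.87/trap/night, but they are not the study's raw data).

def larvae_per_10_dips(larvae_counted: int, dips_taken: int) -> float:
    # Standardise a habitat's larval count to the number of larvae per 10 dips.
    return 10 * larvae_counted / dips_taken

def mean_daily_trap_density(total_caught: int, n_traps: int, n_nights: int) -> float:
    # Adults collected / number of traps / number of trap-nights.
    return total_caught / n_traps / n_nights

# e.g. 14 larvae in 10 dips -> a density of 14.0 larvae per 10 dips
print(larvae_per_10_dips(14, 10))
# e.g. 52 adults over 6 traps x 10 nights -> 0.87 anophelines/trap/night
print(round(mean_daily_trap_density(52, 6, 10), 2))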
Habitat characteristics associated with larval density In the linear regression analysis, the crude effect of each of the key environmental factors on anopheline larval density was analyzed (Table 3). The relative density of anopheline larvae was negatively associated with changes in water temperature (16-34°C) and pH (7-10). The abundance of An. cinereus larvae was also negatively associated with water temperature and pH, but positively associated with change in habitat length. The relative larval density of An. gambiae (s.l.) was negatively associated with change in habitat width. Of the eight categorical environmental factors analyzed, two were significantly and positively associated with total anopheline larval density. The relative abundances of both An. gambiae (s.l.) and An. cinereus larvae were significantly associated with four of the environmental variables: habitat water permanence, presence of algae, water turbidity and water current. Further, in multiple regression analysis, after adjustment for environmental characteristics, the relative abundance of total anopheline larvae was negatively associated with change in habitat width, which was also true for An. cinereus larval abundance. The abundance of An. gambiae (s.l.) larvae was positively associated with clear and temporary habitats that had no mats of algae and were located between 0 and 100 m from human dwellings, but negatively associated with permanent habitats. Adult collections A total of 182 adult female anopheline mosquitoes were captured using the various methods of collection (Table 4). Three Anopheles species, namely An. gambiae (s.l.), An. cinereus and An. demeilloni, were identified. Unlike in the larval sampling, An. gambiae (s.l.) was the most abundant species, followed by equal numbers of An. demeilloni and An. cinereus. The two other species that were scarcely obtained in the larval collections were absent from the adult collections. Indoor collections A total of 161 Anopheles representing three species [An. gambiae (s.l.), An. cinereus and An. demeilloni] were caught using CDC light traps and pyrethrum spray sheet collection (Table 4). Very low numbers of An. gambiae (s.l.) and An. cinereus were obtained from the 44 houses inspected using the PSC method. Anopheles gambiae (s.l.) was more predominant in human dwellings than An. cinereus and An. demeilloni. The mean daily density in CDC light traps (number of anophelines/trap/night) was 0.87 for An. gambiae (s.l.), 0.19 for An. cinereus and 0.21 for An. demeilloni. The density of An. gambiae (s.l.) varied between the months; the highest density was in October, whereas the lowest was in December (Fig. 3a, b). Outdoor collections Attempts to collect Anopheles during 17 visits to the 2 pit shelters and 32 visits to the 6 clay pots resulted in the capture of nine An. cinereus, eight An. demeilloni and three An. gambiae (s.l.). Anopheles gambiae (s.l.) was collected from pit shelters only in February. Clay pots were not productive, with only one An. cinereus collected throughout the sampling period. Blood meal source and test for circumsporozoite proteins Blood meals of 29 freshly fed anophelines, the majority of which were sampled from indoor locations, were tested to determine their source. The small number of An. gambiae (s.l.) blood meals indicated both human and bovine sources (Table 5). The other two species also appear to feed on both humans and cattle; however, their blood meals were predominantly taken from bovines.
A total of 182 anophelines captured by the four different methods (88 An. gambiae (s.l.), 47 An. cinereus and 47 An. demeilloni) were tested to detect circumsporozoite proteins of Plasmodium falciparum. Discussion This study provides baseline information on the species composition of anophelines, the types of larval breeding habitats and their characteristics, as well as some entomological indicators, in Addis Zemen in relation to malaria transmission. The presence of An. gambiae (s.l.) (presumably An. arabiensis), the principal vector of malaria in the country [2], and of four other non-vectors (An. cinereus, An. demeilloni, An. christi and An. pretoriensis) was ascertained from both larval and adult sampling. Larvae of An. gambiae (s.l.) were the second most abundant in almost all habitats, including sun-lit pools formed at the bed and edges of Sheni stream, rain pools, hoof prints and drainage canals. All the types of habitats reported here have previously been documented in Ethiopia as well as elsewhere in Africa [11,17-19]. Sheni stream is the most common breeding site in the area, and the density of An. gambiae (s.l.) and An. cinereus was higher here than in other breeding sites. This is inconsistent with previous studies in Eritrea [18,19]. Sand mining and naturally created pools along Sheni stream harboured An. gambiae (s.l.), as the water is clear and sunlit. This observation is similar to the findings noted by Kenea et al. [11] in the Ziway area. Similarly, a higher density of this species was also sampled from rain pool habitats. Like An. gambiae (s.l.), An. cinereus breeds abundantly in Sheni stream, while the contrary was noted for An. demeilloni, An. christi and An. pretoriensis. All these species are regarded as highland mosquitoes, except for An. gambiae (s.l.) and An. pretoriensis, whose distributions extend to the lowlands [20,21]. Multiple regression analysis revealed that clear, sun-lit temporary habitats are positively associated with the abundance of An. gambiae (s.l.). This agrees with recent findings in Ethiopia which indicated that An. arabiensis breeds in clear, temporary and often sun-lit pools of water [11,21]. This could be because inert particles suspended in the larval environment, which may prevent larval mosquitoes from feeding, are less abundant in clear water than in turbid water [22-24]. In contrast, other studies have shown the larval density of this species to be positively associated with turbid semi-permanent habitats [24,25]. The positive association of An. gambiae (s.l.) larval density with habitats devoid of mats of green algae reported here may be because the exposure of habitats with muddy substrates to sunlight provides favorable conditions for the survival of the bacteria from which the larvae obtain their nutrients [26,27]. The negative association of permanent habitats with the abundance of An. gambiae (s.l.) is similar to the findings of Kenea et al. [11]. This may be because larval predation is more prevalent in large, permanent habitats [22-24]. Anopheles gambiae (s.l.) is positively associated with habitats located between 0 and 100 m from human dwellings, which is one of the strong predictors of indoor Anopheles abundance [28]. Change in habitat width, however, is negatively associated with the abundance of An. cinereus. Although the number is small, the adult Anopheles collections contained more An. gambiae (s.l.)
than the other species, the majority of which were captured indoors in CDC light traps, indicating possible host-seeking behaviour, although they may have been attracted by light from the trap. This is consistent with the anthropophilic and endophagic behaviour of the species noted by other investigators [21,29]. The mean daily density of this species in CDC light traps was relatively high compared to that in a study conducted in Fuchucha & Jarso [30], which was 0.3/trap/night. Even though the number of freshly fed An. gambiae (s.l.) tested for blood meal analysis was very low, the few positive reactions exhibited both the zoophilic and anthropophilic behaviour of the vector, which is typical of its biting behaviour [10] and is similar to a number of studies in Ethiopia [21,30,31]. All this indicates that this species might be responsible for local malaria transmission in the study area. The outdoor density of this species in Addis Zemen could not be determined, as only three mosquitoes were collected from pit shelters. The absence of sporozoite infection could also be due to the very small number of adult mosquitoes tested. Anopheles cinereus and An. demeilloni were the next most abundant in the adult sampling, after An. gambiae (s.l.), and the majority of those tested for blood meal analysis showed a preference for feeding on cattle, indicating their zoophilic and poor anthropophilic behaviour. This is in agreement with a study conducted in the highlands of western Kenya [32]. These species are also widely distributed in the east African highlands, at altitudes ranging from 1400 to 2500 m [10]. Only one An. cinereus was caught resting in clay pots in the outdoor collections, showing very little attraction to man-made shelters. However, in western Kenya, more mosquitoes were captured from clay pots (37%) than from pit shelters [14]. Adults of An. gambiae (s.l.), An. cinereus and An. demeilloni were present in collections throughout the study period. Moreover, larvae of the former two species continued to survive in Sheni stream during the dry months, showing the importance of this stream in providing suitable conditions for the survival of the two species, particularly An. gambiae (s.l.), during the period when other breeding sites perish. The presence of An. gambiae (s.l.) in both stages during the entire study period indicates that active transmission of malaria might take place throughout the year. Therefore, further study of the prevalence of malaria in conjunction with anophelines is required to better describe the disease. Conclusions The present study demonstrated the preferred anopheline larval habitats and the environmental factors that best predict larval abundance. Sheni stream plays an important role in maintaining An. gambiae (s.l.) and other Anopheles species in the study area. In addition, this study confirms the presence of the principal malaria vector in the country, An. gambiae (s.l.), in the study area in both larval and adult collections. Since the collection of adult mosquitoes was low, owing to the brief period of study and the dry season, a more detailed, year-round investigation is required to gather appropriate and relevant entomological indices of transmission, contributing towards a knowledge-based strategy for effective vector control management.
Humanitarian crisis and sustainable development: perspectives and preferences of internally displaced persons in the northeastern Nigeria This study seeks to contribute to the knowledge of linkages between humanitarian actions in conflict situations and sustainable development. We analysed data generated from qualitative interviews and focus group discussions with encamped and self-settled internally displaced persons (IDPs) who are victims of the Boko Haram insurgency in northeastern Nigeria. Our analysis searched for themes that summarise their preferences and desires regarding durable solutions. Overall, the majority of the IDPs were more inclined to local integration or resettlement than to return. Female IDPs were more likely than males to cite personal experience of violence as a reason for rejecting voluntary repatriation. Feelings of vulnerability, experience of violence and hope of economic and social empowerment were the major reasons given in support of local integration or resettlement. Self-settled IDPs are more disposed to returning to their places of origin than encamped IDPs. The need to rebuild livelihoods and restore social and community networks were the major factors participants associated with the choice of return. Belief in divine destiny and lack of trust and confidence in the government were the dominant views expressed by participants who were indifferent about durable solutions. There is a sense that androcentric cultural norms, which give men the power to make decisions for the family, shape decision-making even in emergency situations. We conclude that, regardless of their preferences about durable solutions, IDPs have long-term needs that can only be met if humanitarian actions are integrated into the overall development agendas and programmes of governments. Introduction Humanitarian emergencies remain one of the biggest development challenges of the twenty-first century. Despite the growing efforts to address the challenges of poverty and inequality as they affect the most vulnerable members of developing countries, sustainable development agendas and programmes, until recently, did not seem to give much attention to all categories of vulnerable individuals and groups. The initial focus of sustainable development efforts was primarily on long-term programmes that aim to tackle poverty, social inequalities, environmental decline, etcetera, while also addressing structural issues that undermine growth, prosperity and sustainability. On the other hand, humanitarian interventions in conflict situations and areas affected by environmental catastrophes tend to prioritise saving lives, alleviating stress and providing relief materials (Schafer 2002). Even when humanitarian actions seek to achieve long-term solutions to violence, persecution and displacement, such efforts often ignore how those solutions affect or are affected by the overall development programmes of governments. Recently, the scholarship on sustainability and development (Tamminga 2011; CGDEV (Center for Global Development) 2017; Blind 2019) as well as international development initiatives such as Goal 16 of the Sustainable Development Goals (SDGs) have begun to recognise the importance of linking development interventions with humanitarian crises in volatile regions and areas affected by natural disasters. The linkages between sustainable development and humanitarian emergencies caused by natural disasters have been explored by a number of studies (Schipper and Pelling 2006; Eriksen and O'Brien 2007; Strömberg 2007).
A common theme in these studies is that the vulnerability, poverty and suffering that follow natural disasters have significant implications for development policy. Disasters increase poverty and reverse development when people lose lives and livelihoods. Poverty deprives people of food, health, education and other resources. Lack of resources further contributes to vulnerability and increases the risk of suffering (injury, death and loss of livelihoods) in the face of health hazards, natural disasters and violence. Policy responses to emergencies need to involve long-term development interventions aimed at enhancing economic and social development, reducing poverty, rebuilding sustainable livelihoods and strengthening the resilience of populations to future shocks (Eriksen and O'Brien 2007). Thus, addressing the underlying causes of vulnerability to the impacts of emergencies is crucial to sustainable development. Scholarly attention is also shifting towards humanitarian emergencies caused by conflicts and violence. As studies such as Blind (2019) suggest, since humanitarian crises that result from civil wars and other violent conflicts are inherently developmental challenges, they cannot be solved using quick interventions and short-term measures. Their solutions need to include long-term development programmes that can lead to stability and development (Tamminga 2011). In the policy realm, Goal 16 of the United Nations Sustainable Development Goals (SDGs) seeks to put humanitarian interventions at the heart of the development agenda of the international community. The goal stresses the need to gear efforts toward building strong institutions that will bring about peace and justice in places bedevilled by conflicts, violence and other emergencies. In international responses to conflict-induced humanitarian crises, there are now renewed efforts to move beyond providing immediate relief materials (such as food, clothing, shelter and essential medications) to restoring destroyed livelihoods, rehabilitating individuals and communities, building resilience and reducing risks, and preventing the further spread of conflict. Experts believe that these efforts can only achieve the desired results if adequate attention is paid to gaining a better understanding of the peculiar livelihood needs and conditions of the people and communities affected by violent conflicts, how meeting those needs fits into or reflects the overall long-term interest of sustainability, and how humanitarian and development interventions can be conceptualised to integrate those livelihood needs and conditions. This paper seeks to contribute to our understanding of the linkages between humanitarian emergencies and sustainable development. The aim is to get a better understanding of the preferences and perceptions of internally displaced persons (IDPs) who fled the Boko Haram conflict in northeastern Nigeria regarding the traditional 'durable solutions' to displacement, with a view to ascertaining how their preferences reflect the overall goals of development and sustainability. As defined in the Inter-Agency Standing Committee's (IASC) framework on durable solutions to displacement (Inter-Agency Standing Committee 2010:A1), a durable solution entails creating a situation where IDPs 'no longer have any specific assistance and protection needs that are linked to their displacement and can enjoy their human rights without discrimination on account of their displacement'.
This can be achieved through 'sustainable reintegration at the place of origin (return), sustainable local integration in areas where IDPs take refuge and sustainable integration in another part of the country (resettlement)'. As a durable solution, local integration refers to the situation where IDPs voluntarily acquire the legal right to naturalise in their place of refuge, the economic right to establish 'sustainable livelihoods and a standard of living comparable to the host community', and the right to social and cultural adaptation and acceptance that enables them to 'contribute to the social life of the host community and live without fear of discrimination' (Fielden 2008:1). Resettlement implies the movement of displaced persons to a destination within the country, other than their places of origin and refuge, where they would have permanent residency. Like local integration, resettlement involves displaced persons acquiring all the legal, economic and sociocultural rights comparable to members of the host community. The Boko Haram insurgency that has ravaged the Northeast region of Nigeria since 2009 has caused enormous damage to the economy, society and environment in the region. The conflict has led to the collapse of rural livelihoods in much of Borno State, as well as in parts of Adamawa and Yobe States. This conflict, according to the Office for the Coordination of Humanitarian Affairs (OCHA), has resulted in one of the most severe humanitarian crises in the world, with about 1.7 million people internally displaced, over 3 million people facing 'critical and crisis' levels of food and nutrition insecurity, and nearly 600,000 people in urgent need of protection (OCHA 2018). Overall, the United Nations Development Programme (UNDP) has estimated that the number of people in need of lifesaving humanitarian assistance as a result of the conflict reached more than 10 million in 2018 (UNDP 2018). The humanitarian crisis is also responsible for widespread cases of gender-based violence, child abuse and trafficking, and severe public health challenges. According to the Federal Government of Nigeria, the war has also led to the destruction of over 1500 schools, the death of about 2295 teachers and the displacement of 19,000 others (Punch News 2018). The region makes an interesting case for examining the links between humanitarian emergencies, development and sustainability. Even before the 10-year-old brutal insurgency by Boko Haram, the region was already affected by severe development challenges such as extreme poverty and profound environmental change (UNFCCC 2007; BBC World Trust 2010; UNDP 2018). For instance, the Nigerian Living Standard Survey (National Bureau of Statistics 2019) indicated that while 41.1% of the total population of Nigeria was classified as poor, about 72% of the people in the North Eastern Subregion were classified as poor. Similarly, a report by the Federal Ministry of Environment identified the Northeast and Northwest regions as the most vulnerable of the six geopolitical zones to the negative effects of climate change (Federal Ministry of Environment 2014). Despite these challenges, there are hardly any empirical studies that explore the links between the humanitarian crisis and responses and the wider developmental issues affecting the region. Much of the current research on the crisis focuses on assessing the efficiency of humanitarian interventions or the plight and experiences of the victims.
Such studies treat humanitarian emergencies as separate from the deeply rooted challenges of development and sustainability. As Long (2014) observed, the thinking that informs both research and policy on forced migration, which sees displacement as merely a humanitarian challenge instead of a development challenge, needs revisiting. In the same vein, studies of development issues fail to recognise that the livelihood and protection needs of displaced persons and other people affected by brutal conflicts are different from those of other poor people in society. This is because displaced victims of violence suffer from intrinsic risks such as loss of livelihoods, marginalisation, social disarticulation, and emotional and psychosocial trauma, among other impoverishment risks. On the lack of appreciation of the peculiar development needs of people living in conflict situations, Long (2014) contends that one of the reasons why conventional solutions to displacement are failing is the failure to engage with broader development issues and how they affect displaced populations. For sustainability in humanitarian action, there is an urgent need to gain a renewed understanding of the long-term development needs of IDPs by listening to their own voices. As Cohen (2008) observed, most decisions on internal displacement do not sufficiently reflect the needs of the displaced persons. Since sustainable development practitioners and humanitarian actors emphasise the agency of the poor and of vulnerable groups, it is crucial to analyse the perspectives and preferences of these groups. By presenting a contextual analysis of the views of the displaced victims of the Boko Haram war, this study is also likely to help us gain a much better understanding of the challenges of existing policies on humanitarian intervention and development in the region. The institutional arrangement in response to the crisis To understand the humanitarian situation in the northeastern region of Nigeria, it is critical to give a brief background on the general institutional arrangement put in place in response to the crisis. Since the escalation of the Boko Haram conflict in 2014, humanitarian assistance has been jointly provided by the government of Nigeria in collaboration with its local and international partners. The Federal Government of Nigeria, through the National Emergency Management Agency (NEMA), is primarily responsible for the coordination of humanitarian interventions for people affected by the conflict. At regional (state) level, there are agencies established to complement and support NEMA in providing assistance to the victims of the war. Because of the sheer scale of the humanitarian crisis in the region, national and regional humanitarian agencies require the support and expertise of local and international organisations. According to the officials interviewed, there are over 52 organisations providing humanitarian and developmental assistance to victims of the war. These agencies can be classified into three broad groups: the first group includes United Nations agencies such as the United Nations International Children's Emergency Fund (UNICEF), the World Health Organization (WHO), the United Nations Development Programme (UNDP), the International Organisation for Migration (IOM) and the United Nations Fund for Population Activities (UNFPA).
The second group consists of International Non-governmental Organisations (INGOs) such as the International Rescue Committee (IRC), the International Committee of the Red Cross (ICRC) and Premiere Urgence Internationale (PUI). The last group includes Non-governmental Organisations (NGOs) such as the Nigeria Red Cross Society, Faith Based Organisations (FBOs), Community Based Organisations (CBOs) and Civil Society Organisations (CSOs). Humanitarian actors in Northeast Nigeria operate using the cluster approach. Also called the 'sector system' approach, the cluster approach was developed to enhance accountability, predictability and better coordination of humanitarian response and recovery from emergencies. Under this approach, humanitarian response is coordinated through groups called clusters. A cluster is defined as a group of humanitarian organisations operating in one or more of the sectors of humanitarian action (e.g. protection, health, nutrition, education). Humanitarian actors in the Northeast region are grouped into clusters based on their mandate and responsibility. These mandates can be classified as either humanitarian or developmental. By their mandate, 'developmental' agencies such as UNDP and the World Bank focus on the recovery and transition stage of the emergency, while 'humanitarian' agencies such as IOM and the United Nations High Commission for Refugees (UNHCR) handle humanitarian services such as psychosocial support, community mobilisation and protection (IOM 2015). There are, however, overlaps in the mandates and activities of the agencies operating in the region, and some agencies fit into more than one cluster, thereby addressing both humanitarian and developmental issues. For instance, based on its mandate, the UNDP is tagged a developmental actor and is the lead agency in the transition and recovery working group. However, the agency is involved in the provision of immediate humanitarian assistance (UNDP 2020). Similarly, IOM is an agency that specialises in migration issues but belongs to education, health and other working groups that focus on recovery and transition (IOM 2015). Methodology and data To explore the perspectives and preferences of internally displaced persons in Northeast Nigeria, we analysed data collected via 28 semi-structured interviews and 3 focus group discussions (FGDs) with internally displaced victims of the insurgency. The interviews were conducted in April and May 2018. The data was collected as part of a larger study on the crisis of large-scale displacement resulting from the Boko Haram war in the region. In the study, we explore institutional responses to displacement and exile, the impact of war and persecution on women and children, as well as the challenges to attaining durable solutions. Participants were recruited from three locations hosting thousands of internally displaced persons in Adamawa State of Nigeria. The locations are the government-run Malkohi (housing 1329 IDPs) and Fufore (1726 IDPs) camps and the Malkohi village host community (2030 IDPs). Of the 28 interviews conducted, 16 were with male IDPs, while 12 were with female IDPs. All interviews and focus group discussions with encamped displaced persons were conducted with the permission of camp authorities. Access to self-settled IDPs was negotiated with the assistance of host community leaders and humanitarian volunteers.
In line with established ethical requirements of social research, the study was sensitive to participants' needs while securing consent and during data collection. Adequate measures were taken to ensure the security, confidentiality and anonymity of all participants. The average interview lasted for about 33 min, while focus groups lasted for about 65 min on average. An interview guide was developed to help guide the conversation, prompting the researchers on the major themes or issues to probe while also letting the participants freely tell their stories without interruptions. All interviews were manually transcribed and coded using NVivo 11 qualitative data analysis software. All data transcripts and audio records were stored securely and destroyed at the end of the study. Like most qualitative studies, this study considers the research participants (IDPs) as social actors whose agency can facilitate or inhibit humanitarian relief policy and, as vulnerable members of society, the wider development policy in the region. As social agents, IDPs' personal accounts of their experiences of the consequences of violence and displacement, as well as their views on durable solutions in the aftermath of the conflict, are crucial to understanding the intersection between humanitarian interventions and development in the region. Thus, a qualitative analysis procedure that sees IDPs as active agents, who have the capacity to influence the outcomes of humanitarian interventions, was used in this study. The coding process and analysis searched for themes that describe the views of displaced persons on durable solutions, their long-term preferences and desires, as well as the reasons they gave to support their views. A social scientific study of this nature, in such a volatile environment, is likely to face a number of ethical issues and practical challenges. We therefore wish to acknowledge that some ethical and practical challenges associated with field research in a volatile and insecure environment were encountered in the course of data collection. One of the challenges is that some IDPs initially expected that participation in the study would attract financial incentives and made efforts to influence the selection process. Another ethical challenge is the perception that their 'voices would be heard' and that the study would lead to the government taking measures to improve their living conditions. A major practical challenge encountered is that we could not collect data in IDP camps and host communities in the city of Maiduguri, which hosts the largest concentration of IDPs in the region. In tackling these challenges, we tried to be as transparent with our respondents as possible before and during the interviews and FGDs. That is, we provided them with sufficient information about the purpose of the research, who we are, where we are working, who is sponsoring the study and how we are going to disseminate the findings. We secured the informed consent of all participants who took part in the interviews and FGDs while promising them anonymity and confidentiality in reporting the research findings. In addition, participants were given the liberty to choose to answer or not to answer any questions posed by the researchers. Results This section presents the analysis of data generated from the qualitative interviews and focus groups.
The data on preferences for durable solutions are presented under three main headings, namely local integration and resettlement, return, and indifference. Analysis of participants' preferences and views on acceptable lasting solutions to their displacement found that local integration and resettlement are the most preferred solutions among all categories of IDPs, even though more self-settled IDPs than those in camps prefer voluntary repatriation. Local integration and resettlement are reported under the same heading, as most participants mixed the two options in their responses to questions regarding which solution is preferable to them. Also, the reasons given by this category of participants were found to be the same for local integration and resettlement. It is clear in the data that participants reduce the solutions to displacement to two rather than three: return or no return. In this case, those who do not want to return are willing to accept either local integration or resettlement. Local integration and resettlement Concern over security and safety back home is the major reason IDPs gave for preferring to integrate in their place of displacement or to resettle elsewhere in the country, away from their homes. This concern is born out of previous experiences of violence and trauma. Another important reason advanced by this category of participants is the loss of livelihoods in their communities of origin, which makes them think that there are better opportunities for social and economic empowerment in local integration or resettlement than in voluntary repatriation. Personal experience of violence Some of the IDPs who indicated a preference for local integration were emphatic during the interviews that they will never return to their homes: "We'll never go back. Return is not an option at all to me. There is a challenge in return. I cannot go back … I would prefer to remain here and integrate. But we cannot return to our homes. We have suffered enough. We can't." (001 encamped female IDP, Malkohi). "You see the first option (return), I don't like it … Even if I will be given a new house there and all that, I would prefer to stay here…" (002 Female encamped IDP, Malkohi) When one of the participants quoted above was asked why she felt local integration or resettlement was better for her than return, she mentioned "fear" and "suffering" resulting from Boko Haram's violence and from their movement to safety as things she would not want to experience again: "Because of the suffering we went through… Spending days and nights in the bush running for your life barefooted, in rain. You cooked food that you could not eat because of suffering and fear. That is it. So, I prefer to live the rest of my life somewhere far from our former home. If we can get the land to farm where one can get some food to feed one's family and a house to live, that is much better I believe." (001 Female encamped IDP, Malkohi) Others were of the view that although they would like to return to their homes, local reintegration or resettlement is more tenable, since security has remained a big challenge in their communities of origin. They maintained that they would not want a repeated experience of violence and trauma: "Everybody wants to go back home since you were born there, and you grew up there. You would like to go back there if there is peace. But if there is no peace, then you must stay somewhere that the government gives to you. So, wherever you find yourself, to stay, where there is peace would be what you want."
(007 female encamped IDP, Malkohi) "… in my opinion, if I could get somewhere where I would not go through the horror I went through in the past then local integration or resettlement is what would work for me." (003 Male encamped IDP, Malkohi) However, for some female IDPs, being married means it is not up to them to decide which option to accept. For instance, a female IDP indicated her willingness to abide by her husband's choices and decisions even if they were against hers: "…you see even if I choose resettlement, if my husband does not like it then there is nothing I can do but to follow him and return… but I personally prefer resettlement. But if everything normalises, no one will dislike their home… no place like home." (20 Female encamped IDP, Fufore) This thinking shows that there is a gender dimension to decision-making during displacement and exile. Traditional androcentric norms, which give men the power to decide what to do in the family, appear also to influence how decisions are made within families even in emergency situations. Even though the principle of durable solutions requires giving displaced persons the opportunity to make voluntary decisions on whether or not to return to their place of origin, there is a sense that some displaced women lack decision-making power. Although some male encamped displaced persons cited safety and previous experience of violence as the main reasons for rejecting voluntary repatriation, more women echoed this concern during the interviews. This is not surprising given that women and children were targets of abduction, rape and enslavement by the Boko Haram terrorists. Some of the participants narrated how they were forced to watch the gruesome execution of their sons and husbands by the terrorists. Loss of livelihoods Another important reason IDPs gave for their preference for local integration over voluntary repatriation is the loss of economic livelihood following the war and displacement. As one male participant stated during the interview: "all that we had is lost now. Even before we left, everything we had was burnt or stolen. My farm, my house, my belongings… Now we do not have anything… even if we are to go back, where can we even start…?" (019 Male encamped IDP, Fufore) These participants believe that it would not be possible to rebuild their livelihoods if they were to go back to their places of origin. They believe that it is easier to settle in their place of displacement and rebuild their livelihoods than in the war-torn villages and towns they fled. In addition to the loss of livelihood support systems following violence, persecution and exile, some IDPs also cited the availability of immediate needs and safety as major reasons for the choice of local integration. Vulnerability, powerlessness and uncertainty Vulnerability, powerlessness and uncertainty are also among the concepts that emerged in the views expressed by participants who prefer local integration or resettlement. Vulnerability was echoed as a factor that may prevent them from returning. The perception of powerlessness on the part of displaced persons, as well as the feeling of uncertainty as to what the future may hold after return, has also been echoed to justify the choice of local integration over return.
As can be seen from the quotations of participants above, displaced persons living in government-run camps are more likely than self-settled IDPs to express desires for local integration or relocation and to reject voluntary return to their area of origin. The majority of self-settled IDPs interviewed rejected the idea of local integration. For instance, two self-settled IDPs mentioned the following reasons during an FGD: "As I told you earlier, we are well received by the Fulani community here. However, everyone of us is anxious to return home. They gave us land to farm, yet their cattle encroached on our farms all the time and we cannot confront them." (Male self-settled IDP, FGD) "I swear to Allah, we have never received assistance from either Gwoza local government or the Borno state government. We are not receiving assistance as those in the nearby camp. And no matter how long we live here, the hosts will continue to treat us a Gwoza not Adamawa people. So… there is no point in remaining here once Gwoza is safe for return." (Male self-settled IDP, FGD) Voluntary repatriation Study participants who expressed a preference for voluntary repatriation/return to their homes cited socio-economic reasons such as reunion with their families and the restoration of severed social and community networks as their major concerns. Another concern salient in the narratives of participants who prefer voluntary repatriation is the need to rebuild livelihoods. Participants also mentioned 'attachment to home' as an important motivation for return. It is, however, important to note that all participants who considered return their most preferred choice identified peace and safety as a precondition for return. "To be candid, if it was possible, return to Gwoza would be the best solution. If peace returns to Gwoza and every village is secure, you will not find a single individual here tomorrow. All of us will go back." (Self-settled male IDP, FGD) Reunion with family The need to reunite with family and relations emerged as the most widely mentioned reason for return among all categories of IDPs who see voluntary repatriation as the most viable of the three traditional durable solutions to their displacement. "…if home is safe, going back home would be the best, you have your remaining relatives, your kids would also know their relatives, you would be free, there is no place like home, that is if it is safe. But if…" Apart from safety, the participants quoted above imply that a sustainable return home to reunite with family requires economic support through the provision of capital and skills. The first participant quoted further added that, from the information they were receiving from those who fled recently, the area is still not secure, as raids by insurgents and armed confrontations with the military are still ongoing. Restoring social and community networks A number of IDPs emphasised the importance of the social and community ties which were severed following displacement and exile. Such participants opined that if they would be safe and protected, return would be a more desirable and sustainable solution, as it offers them another opportunity to restore and rebuild their social and community networks. As can be seen from the view expressed by the participant quoted below, status deprivation during displacement is another reason advanced by a small number of participants who preferred repatriation over local integration and resettlement.
"if the war ends, and peace returns, it would be necessary for me to go back home, since I am a community leader… I have followers, but if there is no peace, I would rather remain here or resettle elsewhere in the region" (009 male encamped IDP) Despite their cultural connections to 'home' and the need to restore social and community ties, this category of IDPs has serious concerns over security and safety. Rebuilding livelihoods Displacement and exile were viewed by many of the IDPs we interviewed as involving the loss of livelihood resources and other sources of life support. Accordingly, the need to recover lost natural capital, especially land, as well as job was found to be one of the major reasons for wanting to return home among mostly self-settled IDPs. Also, lack of access to natural and man-made capital in their place of refuge and the hope that some of the life-supporting resources that are missing would be regained upon return were mentioned by IDPs: "…farming was our major occupation that sustained us in the past. We used to feed and clothe ourselves from what we got from our farms. Even our weddings and childbirth were funded from the proceeds of farming. There was nothing that we used to do to get sustenance other than farming. Before we came here, all of us were self-sufficient, we never knew anything called begging, neither did we depend on anybody to help us with anything. It was after we arrived here that we began to realise that a human can be so helpless and dependent on others. We never knew anything like this. We are historically an independent and hardworking people…So for us, if we can get back to our farmlands to use our labour and cultivate, we would be grateful to Allah." (Self-Settled IDP FGD) As indicated by the participant quoted above, return home could guarantee self-sufficiency and sustainable income and, in effect, bring an end to their current state of 'dependency' and 'helplessness'. A durable solution to this category of IDPs is one that would guarantee sustainable income through self-reliance and independent pursuit of economic goals. One major distinction between self-settled and encamped IDPs is access to land for farming. Self-settled IDPs in Malkohi village, for instance, have revealed that the host community has provided them with lands for both settlement and farming. IDPs living in governmentrun farms do not have land where they can farm. Despite having access to land and freedom of movement, majority of self-settled IDPs in Malkohi village prefer voluntary repatriation over local integration. These IDPs mentioned stigmatisation in the host community, tension between IDP farmers and their pastoral hosts and lack of humanitarian assistance to self-settled IDPs compared to those in camps as some of the reasons why they prefer voluntary return. Passivity, resignation and pessimism There were a few IDPs interviewed who indicated that they were willing to accept any solution offered to them. The variety of views this category of participants expressed during interviews and FGDs indicate passivity, resignation and pessimism. The first set of views is based on the belief in divine destiny, that is, the view that 'everything is controlled by God', including their sustenance and their future. This belief leads to 'submission' to the will of God as a means of coping with situations where individuals are faced with difficult choices that require difficult decisions. 
Although the belief has featured prominently throughout the data, some participants used it to express their indifference and passivity regarding durable solutions: "Well, the earth belongs to Allah… south, north, east and west…yeah… whichever place is more peaceful is the best... (okay) so… if I were asked to choose one… whether home or here or elsewhere..., to be candid, if we can have the house and other things, anywhere is okay" (016 male encamped IDP)

Even though this participant has surrendered his affairs to the will of God, he still underscores the need for sustainable housing and other life-supporting resources wherever he finds himself. Another participant suggested that whether she is asked to return, integrate or relocate, her major needs are food and farmland to feed her orphaned children and a house to live in: "…well either going home or settling here or in a new place… all that is required is two things - food and land, as for health, it is in the hands of Allah… orphans' care is also in the hands of Allah… that is my thought" (019 female encamped IDP)

Some participants suggested that while they are willing to accept any solution presented to them by the government, they are pessimistic that the government is sincere, committed and capable of fulfilling its promises. The participant quoted below implied that institutional failures and a lack of sincerity will prevent any effective implementation of durable solutions in a way that would address their livelihood needs: "Well, there is one thing, the Nigerian government is good at making empty promises, whether you go home or stayed here it all depends… if you go home you may yet be homeless, if you stay here for how long would you stay?" (020 male encamped IDP)

In addition to the feeling that the government lacks the capacity to support them in meeting their basic economic and social needs, there is an obvious fear of 'homelessness' after return and of 'uncertainty' in the case of local integration among some IDPs. The tension resulting from these conflicting possible outcomes is resolved by fatalistic beliefs in divine preordination and destiny. This thinking on the part of IDPs underscores the pitfalls of institutional failures, such as corruption and inefficiency, for humanitarian assistance and development in conflict situations.

Durable solutions, peace and security

A major recurrent theme in the narratives of all participants, regardless of their preferred durable solution, is the issue of peace, security and an end to the war. IDPs who preferred voluntary repatriation identified insecurity and the continuation of fighting as the biggest obstacle to the realisation of their dream of returning home: "The problem is Boko Haram is still in control of much of the area around Gwoza. The entire villages outside of the town of Gwoza are still under Boko Haram. They are everywhere. You cannot even go outside the town to get firewood without a military escort, and even with the escort you must be very quick, otherwise they would attack you." (Male self-settled IDP, FGD) "The area is still unsafe. You see this man (points to one of the IDPs standing), he has been living in Gwoza. He just escaped from the town two weeks ago. He is now here looking for a place to live." (Male self-settled IDP, Malkohi village) Another participant in the FGD added: The military conducted a patrol once outside the town and sacked Boko Haram from two neighbouring villages. But since then, they did not make any attempt to expand their operations outside the town.
They even rescued some villagers who were unable to escape, including one of my nieces who is crippled. She was brought to Gwoza by the army during that operation. They did that operation with the help of local vigilantes. They did another operation in Belneke once, that is all. But within the town of Gwoza, there is peace. But outside the town, there is no security at all. (Male self-settled IDP, FGD)

In addition to echoing the view that Boko Haram is still in control of much of the areas the Nigerian military claimed to have liberated in Borno, many IDPs pointed out that they do not expect the military to end the war and restore normalcy in the area anytime soon. We asked one of the officers responsible for camp management how they are implementing durable solutions when security remains a big challenge. He responded: "We have done a durable solution survey here recently. The governor of Adamawa State has formed a durable solution committee which comprises all major humanitarian agencies operating in this state. Based on our survey, the majority of them want to go back home, if their towns and villages are secure. A few of them said they prefer to be resettled elsewhere. And you know, they are different people with different preferences and experiences… The main challenge is the war is still going on. The military had reclaimed some territories previously governed by the terrorists. But the war is far from over… and we cannot allow them to return only to be displaced again, abducted, or even killed."

These views show that achieving durable solutions to the problem of protracted displacement in Northeast Nigeria depends on the actual resolution of the conflict and an end to all armed hostilities. As one female IDP recalled, Boko Haram had followed them to their place of refuge in Malkohi camp, Adamawa, and carried out a suicide bomb attack that killed scores of IDPs and camp officials in September 2015. According to her, even after resettlement or local integration, IDPs still have concerns over the ability of the government to prevent future displacement and keep them safe from Boko Haram's violence.

Discussion and conclusion

The 2030 Agenda for Sustainable Development, which ushered in the Sustainable Development Goals in 2015, was anchored on the mantra of 'leaving no one behind'. A year later, the World Humanitarian Summit drew from the 2030 Sustainable Development Agenda and introduced the idea of 'working together differently to end (humanitarian) needs' (Blind 2019). The idea of linking humanitarian action with development, although not new, has received renewed attention with the unveiling of the 2030 Sustainable Development Agenda. In humanitarian crisis situations, the concept of durable solutions offers an analytical lens for exploring the humanitarian and development needs of vulnerable populations and the social and institutional challenges of addressing those needs. However, what appears to be missing from decades of research on sustainable solutions to violence-induced displacement is a critical understanding of the views and preferences of the IDPs. The analysis presented above seeks to capture the perspectives and preferences of both encamped and self-settled displaced persons on durable solutions to displacement caused by the Boko Haram war. The central assumption of our analysis is that, despite their peculiar vulnerabilities, IDPs are social agents who have needs, choices and preferences.
Thus, sustainable solutions to the problem of large-scale protracted displacement are those that consider and reflect the needs and choices of the people suffering from the consequences of violence and displacement. Our analysis therefore explores the views and preferences of IDPs on the permanent solution to their displacement. As the data show, variations exist in IDPs' needs and preferences for durable solutions, as well as in the reasons given for those choices and desires. Despite the variations, it appears that all the reasons given by IDPs to rationalise their preferred durable solution reflect the overall goals of development and sustainability. For instance, for IDPs who rejected voluntary return in favour of local integration, safety and security and economic empowerment are the major considerations. For IDPs who see voluntary repatriation as the appropriate durable solution, the major reasons include the restoration of family and community networks, the rebuilding of livelihoods, and regaining economic independence and self-reliance. Another reason is emotional connection to the place of origin. Although the majority of IDPs were displaced from the same places (Borno and Adamawa States), the desire for voluntary return is more common among self-settled IDPs than encamped IDPs. A common expectation among both IDPs who desire local integration and those who prefer return is that of gaining economic opportunities after the implementation of a durable solution. The last category of IDPs showed indifference and fatalism towards durable solutions. Their perspectives seem to be influenced by either religious beliefs in preordination or a lack of trust in government and humanitarian institutions. Finally, there was a sense of fear among many IDPs and some humanitarian officials that durable solutions can hardly be achieved in the foreseeable future unless the government is ready and able to completely prevent Boko Haram from attacking civilians in the entire region. These findings suggest that in situations of large-scale displacement, such as the case of Northeast Nigeria, IDPs think and hope for a future beyond ad hoc emergency humanitarian relief and assistance. Although displaced persons have diverse preferences and wishes, there is evidence that they seek solutions that would address their poverty, impoverishment and vulnerability. As their different voices suggest, whether they choose voluntary repatriation, local integration or relocation, IDPs expect not just permanent protection from violence but also sustainable employment, housing, education and healthcare, among other essential development services and opportunities. These solutions can only be provided if humanitarian efforts are integrated into the overarching development agendas and programmes of governments. As the poorest and most vulnerable members of society, displaced victims of violent conflicts deserve a special development intervention specifically targeted at improving their economic self-reliance, while reducing the risks of vulnerability, homelessness and powerlessness. If anything, the voices of IDPs in this study suggest the need to reconsider the current thinking on traditional durable solutions, whereby states and international organisations practically recognise voluntary repatriation as the 'best' and 'ideal' solution (Long 2010, 2014) to conflict-induced displacement.
Finally, the Nigerian government and development actors need to come up with a holistic approach to support these poorest and most vulnerable members of society to overcome the risks of impoverishment caused by war and displacement. In other words, there is an urgent need for a special development intervention specifically targeted at improving their economic self-reliance by providing them with the necessary assets, skills and other resources needed to build sustainable livelihoods. Such an approach needs to engage and include the IDPs and other victims of the war in the development and planning of the interventions.
2020-12-09T14:26:49.661Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "7283b6d7fc0ba787075672b87465094d49edbd43", "oa_license": "CCBY", "oa_url": "https://jhumanitarianaction.springeropen.com/track/pdf/10.1186/s41018-020-00084-2", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "335958839ddb429f8ee8dd1ca3ea56be2d95870a", "s2fieldsofstudy": [ "Sociology", "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
118380719
pes2o/s2orc
v3-fos-license
Radion stability and induced, on-brane geometries in an effective scalar-tensor theory of gravity

About a decade ago, using a specific expansion scheme, effective, on-brane scalar-tensor theories of gravity were proposed by Kanno and Soda (Phys. Rev. D 66, 083506 (2002)) in the context of the warped two-brane model of Randall-Sundrum. The inter-related effective theories on both branes were derived, with the spacetime-dependent radion field playing a crucial role. Taking a re-look at this effective theory, we find cosmological and spherically symmetric, static solutions sourced by a radion-induced, effective stress-energy, as well as additional, on-brane matter. The distance between the branes (governed by the time- or space-dependent radion) is shown to be stable and asymptotically non-zero, thereby setting aside any possibility of brane collisions. It turns out that the inclusion of on-brane matter plays a decisive role in stabilising the radion, a fact which we demonstrate through our solutions.

I. INTRODUCTION

The possible existence of extra spatial dimensions is now a well-known theoretical assumption, where our four-dimensional world is considered to be a 3-brane embedded in a higher-dimensional spacetime. Such a description emerges naturally in the backdrop of various string-inspired models [1]. Moreover, extra-dimensional models were developed as a non-supersymmetric, alternative approach to tackling the well-known fine-tuning/gauge hierarchy problem in the regime of the Standard Model of particle physics. It became more and more evident that gravity may be an integral part of addressing issues in physics beyond the Standard Model. Extra-dimensional models can broadly be classified into those having large compact radii [2] or small compact radii [3]. Regarding their geometry, these models are generally compactified under various topological setups. The uncompactified, four-dimensional spacetime then emerges as a low energy effective theory which contains signatures of the higher-dimensional theory. However, among all the models proposed so far, we confine ourselves to the Randall-Sundrum (RS) model [3], which has two 3-branes, with equal and opposite brane tensions, embedded in a five-dimensional spacetime. This model was initially developed to combat the unnatural fine tuning involved in determining the mass of the Higgs boson. When the theoretically predicted mass of the Higgs boson (100-125 GeV) is determined from higher-order self-energy calculations, the boson receives quantum corrections typically of the order of the Planck energy scale. As a result, an extreme fine tuning needs to be carried out at every order of perturbation theory to obtain the theoretically predicted value. This fine tuning is often known as the Higgs mass hierarchy problem, or naturalness problem, in particle physics. Without introducing any intermediate scale in the theory, the RS model successfully resolved the fine-tuning problem by exponentially suppressing all mass scales on one of the 3-branes, known as the visible brane. Thus the entire low energy theory is reproduced on the negative-tension visible brane at the TeV scale. By far, this is one of the most successful approaches to addressing the naturalness problem for a constant inter-brane separation. However, the RS model suffered from a stabilization problem: in the absence of any stabilization scheme, the two-brane system can collapse under the influence of the equal and opposite brane tensions.
Therefore, a reasonably generic method for stabilising the brane separation distance $r_c$, or the modulus field, was proposed by Goldberger and Wise [4], in which a stabilizing potential for the modulus field is generated by a 5D bulk scalar field with appropriate values at the boundaries. The minimum of the modulus potential corresponds to the vev of the modulus field ($kr_c$). From this condition, the vev of the modulus field can be set as $kr_c \simeq 11.5$ (to resolve the naturalness problem) without any fine-tuning of the 4D parameters. In other words, the stabilisation is achieved without sacrificing the conditions necessary to solve the gauge hierarchy problem. Besides offering explanations of problems beyond the Standard Model of particle physics, the RS model has attracted the attention of cosmologists due to its unique interpretation of the cosmological constant fine-tuning problem. Therefore, over the last decade, various cosmological and astrophysical issues, such as galaxy formation, the existence of anisotropies in the cosmic microwave background, dark energy and dark matter, and black hole formation, have been extensively studied in the context of the RS two-brane model (see [5] and references therein). In the present paper, we consider the effective, on-brane, scalar-tensor theories formulated by Kanno and Soda [6], where the radion field, which measures the inter-brane separation between the visible brane and the Planck brane, is not a constant quantity. In fact, while studying the cosmological solution on the visible or the Planck brane, the radion is taken as a time-dependent field. Similarly, for spherically symmetric, static on-brane geometries, the radion field depends on the radial coordinate. The spatial or temporal dependence of the radion therefore leads to the requirement that it must be non-zero everywhere in order to avoid brane collisions. We are able to demonstrate that, by assuming the existence of on-brane matter, a stable, non-zero distance between the branes is possible. In the next section we provide an overview of the effective scalar-tensor theories proposed by Kanno and Soda [6]. Subsequently, in Section III, we deal with cosmological solutions, and in Section IV we look at spherically symmetric solutions. In the last Section, we provide our summary and conclusions.

II. THEORY

Let us now briefly discuss the low energy effective theory on a 3-brane developed by Kanno and Soda [6] in the context of the two-brane model of Randall and Sundrum. The two 3-branes, being $Z_2$-symmetric, are located at the orbifold fixed points $y = 0$ and $y = l$, such that the geometry under consideration in this model is $M_{1,3} \times S^1/Z_2$. Our Universe is assumed to be on the visible 3-brane, which is a hypersurface embedded in a five-dimensional AdS bulk filled only with a 5D bulk cosmological constant. The bulk curvature scale is l. Typically, in the RS model, the Einstein equations are determined by keeping the inter-brane distance fixed and considering a flat 3-brane. However, the scenario changes drastically once the inter-brane separation distance, or proper length, becomes a function of the spacetime coordinates and the on-brane geometry is curved. These generalizations are incorporated while deriving the effective equations of motion on a 3-brane [6]. Beginning with [7], there has been a lot of work on the effective Einstein equations on the brane under various assumptions [8].
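To make the hierarchy statement above concrete before moving on to the effective theory, a short numerical check (ours, not part of the original paper) shows how the RS warp factor $e^{-\pi k r_c}$ with the Goldberger-Wise value $kr_c \simeq 11.5$ brings Planck-scale masses down to the TeV range:

```python
import math

# RS warp suppression: physical masses on the visible brane scale as
# m = m0 * exp(-pi * k * rc), with k*rc the dimensionless modulus vev.
k_rc = 11.5                 # stabilised value quoted in the text
M_planck_GeV = 1.2e19       # Planck scale in GeV, for illustration

suppression = math.exp(-math.pi * k_rc)
m_visible = M_planck_GeV * suppression

print(f"warp suppression factor: {suppression:.2e}")   # ~ 2e-16
print(f"suppressed mass scale:  {m_visible:.2e} GeV")  # ~ a few TeV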
In fact, the effective equations for the two-brane system as obtained in [6] have also been re-derived using a different approach in [9]. An interesting recent work on slanted warped extra dimensions and their phenomenological consequences appeared in [10]. In order to determine the effective theory, we assume a five-dimensional action and a five-dimensional metric with a spacetime-varying proper distance between the two 3-branes. The action functional is of the standard RS two-brane form, where the tensions on the Planck brane and the visible brane are respectively $\sigma_a = 6/(\kappa^2 l)$ and $\sigma_b = -6/(\kappa^2 l)$, with $\kappa^2$ the five-dimensional gravitational coupling constant, and the 5D line element is taken in its most general form. Since both the cosmological and astrophysical solutions that we consider occur at energy scales much lower than the Planck scale, in the effective theory approach the brane curvature radius L is much larger than the bulk curvature scale l. As a result, perturbation theory can be used with a dimensionless perturbation parameter $\epsilon = (l/L)^2 \ll 1$. This method, called the gradient approximation scheme, is a metric-based iterative method in which the bulk metric and extrinsic curvature are expanded in increasing orders of $\epsilon$. The effective Einstein equations on a brane are determined from the solutions for these quantities and the junction conditions. In this method, the RS fine-tuning condition is reproduced at zeroth order, when the inter-brane separation is constant and the two 3-branes are characterised by opposite brane tensions. The effective Einstein equations are then obtained at first order, incorporating the non-zero contributions of the radion field and brane matter. Using the gradient expansion scheme, the effective Einstein equations on the visible brane follow [6], where $\Phi = e^{2d/l} - 1$ and d is the proper distance between the branes, which in general is a spacetime-dependent quantity. $T^a_{\mu\nu}$ and $T^b_{\mu\nu}$ denote the matter on the Planck brane and the visible brane respectively. All covariant derivatives in these equations are defined with respect to the metric on the visible brane (denoted by the superscript 'b'), given by $f_{\mu\nu}$. The proper distance between the two 3-branes, defined over the interval $y = 0$ to $y = l$, is a spacetime-dependent function, and the corresponding equation of motion of the scalar field on the negative-tension brane involves $T^a$ and $T^b$, the traces of the energy-momentum tensors on the Planck brane and the visible brane respectively, together with the coupling function $\omega(\Phi)$, which can be expressed in terms of $\Phi$. It is, however, known that gravity on the two branes is not independent: the dynamics on the Planck brane, situated at y = 0, is related to that of the visible brane by a transformation [6] involving $\Psi$, the radion field defined on the Planck brane. The induced metric on the visible brane can then be expressed in terms of $\Psi$, with $g^{(1)}_{\mu\nu}$ the first-order correction term. It is to be noted that in the subsequent calculations we will assume that on-brane stress-energy is present only on the 'b' brane, i.e. on the visible brane. An important feature of the effective equations given above is that, unlike the ones derived in [7], there is no non-local contribution (the bulk-Weyl-dependent $E_{\mu\nu}$ of [7]) from the bulk geometry.
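Since everything that follows hinges on the definition $\Phi = e^{2d/l} - 1$, it is convenient to record the inverse relation explicitly (a one-line consequence of that definition, added here for clarity):

```latex
% Inverting the definition \Phi = e^{2d/l} - 1 gives the proper distance
\[
  d(x^\mu) \;=\; \frac{l}{2}\,\ln\!\bigl(1 + \Phi(x^\mu)\bigr),
\]
% so a brane collision (d \to 0) corresponds precisely to \Phi \to 0,
% while a non-zero asymptotic value of \Phi implies a stable,
% non-vanishing inter-brane distance.
```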
III. COSMOLOGICAL SOLUTIONS

In order to study the cosmological solution on the negative-tension, visible brane, we assume the radion field to be time dependent; the proper distance between the orbifold fixed points $y = 0$ and $y = l$ is then a function d(t). The Friedmann-Robertson-Walker (FRW) solutions of the Einstein equations can be obtained for three different types of spatial curvature, $k = -1, 0, 1$. In this section, we study the solutions corresponding to each of these values of k separately. We take the FRW metric with spatial curvature k, where $r, \theta, \phi$ are the comoving spatial coordinates and a(t) is the scale factor to be determined. Substituting this metric into eqn. (3), the Einstein equations with spatial curvature k are obtained, together with the scalar field equation, where an overdot represents a derivative with respect to time t. It is to be noted that eqn. (12) is obtained by substituting eqn. (13) into the (ii) component of the Einstein equations. The scalar field equation is found to be independent of the spatial curvature k and hence remains the same for any value of k. However, the scalar field profile is different for different k values due to the different functional forms of a(t). Let us now consider each value of k separately and study the cosmological solution in the presence of a time-dependent radion field.

A. Spatially flat solution (k = 0)

To construct a spatially flat FRW Universe on the visible brane in the presence of a time-dependent radion field, we consider the line element given by eqn. (10) with k = 0. We initially assume that both 3-branes are devoid of brane energy densities and pressures. Therefore, when $\rho = 0 = p$, eqn. (13) can be re-expressed in terms of a first integral of the $\Phi$ equation, and the scalar field equation reduces accordingly. After substituting k = 0 and $\rho = 0 = p$ in eqn. (11) and eqn. (12) and adding the two equations, we obtain eqn. (16). Integrating eqn. (16), and choosing the dimensionful factor $\tilde{C}_1 = 1$ by a scaling choice, the solution for the scale factor follows, where $C_2$ is a constant of integration. Substituting eqn. (15) and the scale factor into eqn. (11) (with k = 0) and then integrating gives the solution for the time-dependent scalar field, where $C_1$ is a non-zero constant with dimensions of $L^{1/2}$. The constant $C_2$ may be set to zero by a time translation so that a(0) = 0. However, $C_1$ must be strictly non-zero so that the scalar field $\Phi(t)$ remains non-zero as well. From the above solution of $\Phi(t)$ we can construct the proper distance d(t). The solution indicates that the scale factor has a decelerating (but expanding) nature, and the scalar field approaches zero at late times whereas it is large in the early universe. The solution is similar to that of the radiation-dominated FRW universe. However, d(t), which measures the inter-brane distance, tends to zero in the limit $t \to \infty$, thereby indicating an instability. Let us now consider a perfect fluid, but with the equation of state $p = \rho/3$, and construct the solutions. The tracelessness of the energy-momentum tensor for a perfect fluid with $p = \rho/3$ offers some simplifications. With this equation of state, adding eqn. (11) and eqn. (12) for k = 0 produces the same differential equation for the scale factor a(t) as before, and hence the same solution, where we have set the constant $C_2 = 0$.
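A minimal symbolic check of the two statements just made, using an illustrative late-time radion profile $\Phi(t) \propto 1/\sqrt{t}$ (a hypothetical form with the same asymptotics described in the text, large at early times and vanishing at late times; it is not the exact solution of eqn. (19)): the scale factor $a(t) = \sqrt{t}$ expands but decelerates, and the proper distance $d(t) = (l/2)\ln(1+\Phi)$ then vanishes as $t \to \infty$, signalling the instability.

```python
import sympy as sp

t, l, C1 = sp.symbols('t l C1', positive=True)

# Radiation-like scale factor found in the vacuum, k = 0 case
a = sp.sqrt(t)
print(sp.simplify(sp.diff(a, t)))      # 1/(2*sqrt(t)) > 0 : expanding
print(sp.simplify(sp.diff(a, t, 2)))   # -1/(4*t**(3/2)) < 0 : decelerating

# Hypothetical late-time radion profile, Phi -> 0 as t -> infinity
Phi = C1 / sp.sqrt(t)
d = (l / 2) * sp.log(1 + Phi)          # proper inter-brane distance
print(sp.limit(d, t, sp.oo))           # 0 : the branes collide asymptotically
```

With on-brane matter, by contrast, the text shows below that $\Phi(t)$ tends to a non-zero constant, so the same formula yields a finite asymptotic d(t).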
Using the scale factor derived above in eqn. (15), the solution of the scalar field can now be written down, where we now have an extra parameter A. Using the solutions for a(t) and $\Phi(t)$, the energy density on the visible brane follows. We note that when A = 2, eqn. (22) exactly reduces to the solution of $\Phi(t)$ given in eqn. (19) (with $C_2 = 0$), which is the scalar field solution in the absence of brane matter on both 3-branes. The variation of $\Phi(t)$ with t is shown in Figure 1 for A = 2, $C_1 = 2\sqrt{2}$ (red curve; no brane matter) and A = 3, $C_1 = 2\sqrt{2}$ (green curve; visible brane matter present). The horizontal line (blue) shows the non-zero asymptotic value of $\Phi(t)$ when brane matter is present. In the presence of matter, the proper distance between the branes follows from eqn. (22). As $t \to \infty$, d(t) is always non-zero and tends to a constant value for all A > 2. Hence the proper distance never vanishes and therefore no instability exists. Thus, perfect fluid matter on the brane with equation of state $p = \rho/3$ stabilizes the distance between the branes. It is to be noted that such an equation of state corresponds to a perfect fluid comprising relativistic particles.

B. Spatially curved solutions (k = -1, +1)

Let us now construct the FRW solution on the visible brane with non-zero spatial curvature. With an appropriate time translation ($t + A_1 \to t$), the solution of the scale factor may in general be written in terms of a real integration constant K. In our form of the solution, we have chosen $K = -A_1^2 < 0$ and a(0) = 0. Here, the universe is eternally expanding, though with deceleration. If we now write the scalar field as $1 + \Phi(t) = e^{2d(t)/l}$, then using this and eqn. (15), we can express the proper distance d(t) in terms of an integral of the scale factor, where $B_1$ is a constant of integration. Thus, d(t) can be obtained from the scale factor for any spatial curvature; for k = -1 this yields $\Phi(t)$ directly, and similarly, for k = +1, using eqn. (26) in eqn. (30), we obtain $\Phi(t)$, where B and D are integration constants. Let us now try to see if $\Phi(t)$ can become zero for any t. For k = -1, the possible roots follow from the solution above. Similarly, for k = 1, we can obtain the roots for t at which $\Phi(t)$ may become zero. These turn out to be (with $A_1 = 1$ and $D' = (\pm 1 - D)^2$) two values, and it is clear that both roots lie within the domain of t, which is $0 \le t \le 2$. If $D' = 0$ (i.e. D = 1, with the upper sign in the expression for $D'$), then there is a single root at t = 1. The variation of the radion field $\Phi(t)$ with time for both k = -1 and k = 1 is shown in Figure 2 and confirms the above discussion. It is clear that in the k = +1 case an instability (brane collision) arises during the evolution of the universe. The condition under which d(t) never vanishes in the spatially flat case has already been shown earlier.

IV. SPHERICALLY SYMMETRIC, STATIC SOLUTIONS

Let us now look at spherically symmetric, static solutions of the effective Einstein equations on the visible brane. In constructing such a solution, it is legitimate to assume a radial-coordinate-dependent radion field $\Phi(r)$. We begin with a line element of the Majumdar-Papapetrou [11] form, which uses isotropic coordinates, where U(r) is the unknown function to be determined by solving the Einstein equations. First, let us assume that the branes are empty, i.e. $T^a_{\mu\nu} = T^b_{\mu\nu} = 0$. Substituting the metric ansatz given by eqn. (35) into eqn.
(3), we arrive at the field equations, where a prime denotes a derivative with respect to r. Adding eqn. (37) and eqn. (38), one obtains eqn. (39). Since $\Phi'(r) \neq 0$, one can consider the term in brackets in that equation as a condition on $\Phi$ and its derivative. However, the scalar field equation for $\Phi(r)$ can be readily integrated once, yielding eqn. (41), where $C_1$ is a positive, non-zero integration constant. Consistency of eqn. (39) (i.e. the equation $\frac{\Phi'}{2(1+\Phi)} + \frac{1}{r} = 0$) with eqn. (41) for $\Phi'(r)$ leads to a unique form of $\Phi(r)$. Further, we can use the condition in eqn. (39) to rewrite the Einstein equations in a form whose right-hand sides lead to a tracelessness requirement on the left-hand sides. Therefore U(r) must satisfy the Laplace equation $\nabla^2 U = 0$ expressed in spherical polar coordinates (this result is the same as what follows in Einstein-Maxwell theory for Majumdar-Papapetrou type solutions [11]). The solution for U(r) is therefore straightforward, involving two positive, non-zero constants $C_2$ and $C_3$. Substituting the solutions obtained for U(r), $\Phi(r)$ and their derivatives into either of the two Einstein equations, i.e. eqn. (43) or eqn. (44), we find a single condition between the non-zero constants, and hence the final solutions for U(r) and $\Phi(r)$ in terms of $C_1$, $C_2$ and $C_3$. At $r = C_1 = C_3/C_2$, U(r) = 0, which implies the existence of a black hole horizon. At the same value of r, the radion field $\Phi(r)$, i.e. the inter-brane distance, vanishes, suggesting an instability which needs to be removed. To keep $\Phi(r)$ always non-zero, we apply the method adopted in the cosmological case (see the earlier section of this article): we add traceless matter on the visible brane. Therefore, using eqn. (35) in eqn. (3) once again (but with matter present on the visible brane), we now obtain the Einstein equations on the visible brane, where $\rho(r)$, $\tau(r)$ and p(r) are the diagonal components (in the frame basis) of the energy-momentum tensor on the visible brane. As long as this additional brane matter is traceless, there is no change in the scalar field differential equation. The general solution of the scalar field equation, however, now involves a positive constant $C_4$ which is responsible for generating the brane matter. Even with $C_4 = 0$, the r-dependent $\Phi(r)$ produces a non-flat on-brane metric, but it involves an unstable radion and also corresponds to the case when the visible brane is empty. We can easily see that as long as $C_4 > 2$, $\Phi(r)$ never vanishes; thus, by having traceless matter on the visible brane, the instability disappears for this particular, spherically symmetric solution with an r-dependent inter-brane distance $\Phi(r)$. It is to be noted further that the solution for U(r) remains unaltered under the tracelessness condition on the brane matter. However, it is now possible to choose $C_3/C_2$ to be different from $C_1$; we assume $C_3/C_2 = C_5$. From the above expressions for U(r) and $\Phi(r)$, the visible brane matter energy-momentum components, i.e. $\rho$, $\tau$ and p, follow. We note that neither $C_5$ nor $C_1$ can be zero, in order to ensure non-constant U(r) and $\Phi(r)$. At the same time, $C_4 = 0$ is also not desirable because it would lead to an instability (i.e. $\Phi(r)$ becoming zero at some r). Further, all three constants must satisfy $C_1 > 0$, $C_4 > 0$ and $C_5 > 0$.
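Since eqn. (45) is just the spherically symmetric Laplace equation, the form of U(r) used above (a constant plus a 1/r piece) can be verified directly; the sketch below assumes the harmonic form $U = C_2 - C_3/r$ implied by the stated horizon condition U(r) = 0 at $r = C_3/C_2$.

```python
import sympy as sp

r, C2, C3 = sp.symbols('r C2 C3', positive=True)

# Spherically symmetric Laplacian of U(r): U'' + (2/r) U'
U = C2 - C3 / r
laplacian = sp.diff(U, r, 2) + (2 / r) * sp.diff(U, r)
print(sp.simplify(laplacian))       # 0 : U solves Laplace's equation

# Horizon location quoted in the text: U(r) = 0 at r = C3/C2
print(sp.solve(sp.Eq(U, 0), r))     # [C3/C2]
```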
It is possible to have both $C_1$ and $C_4$ negative, but this does not affect the functional forms of $\rho$, $\tau$ and p, or of $\Phi(r)$. However, if one chooses $C_5 < 0$, the solution leads to a naked singularity. It is also clear that we cannot have $\tau = p$, because this condition leads to a quadratic equation for r, which implies specific r values as its solutions. The only allowed condition is the one for traceless matter, i.e. $\rho = \tau + 2p$. In addition, the Weak Energy Condition (WEC) or Null Energy Condition (NEC) will be violated. In particular, since we must have $C_1, C_4 > 0$ for stability, $\rho + \tau < 0$, though one can still satisfy $\rho > 0$. The effective quantities may also be checked by defining the R.H.S. of (51)-(53) as $\rho_{eff}$, $\tau_{eff}$, $p_{eff}$ and verifying the validity of $\rho_{eff} > 0$, $\rho_{eff} + \tau_{eff} = 0$ and $\rho_{eff} + p_{eff} > 0$. The functional forms of $\rho$, $\tau$ and p are shown in Figure 5 for a specific choice of the parameters, with $C_1 = C_5$. We have also checked (not shown here) that the profiles of $\rho$, $\tau$ and p are similar when $C_1 \neq C_5$. It is now easy to convert the metric solution (and the scalar field solution) into the extremal Reissner-Nordstrom black hole form by suitable identifications, which lead to the extremal Reissner-Nordstrom black hole metric, $ds^2 = -(1 - M/r')^2 dt^2 + (1 - M/r')^{-2} dr'^2 + r'^2 d\Omega^2$. We note that $r' = M$ is the location of the horizon as well as of the spacetime singularity. For such spherically symmetric solutions, we can also obtain $\Psi(r)$ by exploiting the relation between $\Phi(r)$ and $\Psi(r)$ given in [6]. For example, in the simple case (without visible brane matter), $h_{ij}$ is the metric on the Planck brane, and the visible brane metric functions $f_{ij}$ are given in terms of the U(r) obtained above.

V. CONCLUSION

In summary, we have shown the following:

• In the cosmological case, for traceless matter ($p = \rho/3$) on the visible brane, we find analytic solutions for the scale factor and the radion field. In the spatially flat universe, the scale factor is that of the radiation-dominated FRW case, while the radion is stable and never zero. Instability arises when there is no on-brane matter. In a spatially curved universe with traceless, radiative matter, the results are similar for the case of negative spatial curvature. With positive spatial curvature, instabilities arise even with on-brane matter.

• In the spherically symmetric, static case, in isotropic coordinates, we find that the solution obtained is nothing but the extremal Reissner-Nordstrom solution. However, there is no physical charge or mass here (as there would be in Einstein-Maxwell theory); the radion field parameters play the role of an equivalent charge or mass.

For the case when the matter on the brane is not necessarily traceless, we are unable to find analytical solutions. Numerical work (not discussed here) suggests that the nature of the solutions for, say, p = 0 or $p = -\rho$ differs from that of the $p = \rho/3$ solutions discussed here. It is noteworthy that our analytic solutions are all obtained using traceless, on-brane matter. However, we also note that the stability of the radion may not necessarily have any connection with the tracelessness of on-brane matter, though the need for some on-brane matter to achieve stability has been demonstrated in our examples. A hint about what kind of matter can achieve stability of the radion can be obtained by setting $C_4 = 0$ in the expressions for $\rho$, $\tau$ and p. Notice (from Eqns. (57)-(59)) that for $C_1^2 > C_5^2$ the NEC and WEC will be satisfied. Does this indicate that a stable radion requires energy-condition-violating on-brane matter?
A general statement is unlikely here, though one may surely try to explore the exact link between the nature of on-brane matter and radion stability in future investigations. Finally, the fact that we have rediscovered known solutions (i.e. the FRW scale factors in cosmology and the extremal Reissner-Nordstrom geometry in the static, spherically symmetric case) in the context of a theory different from General Relativity is certainly welcome. This feature was also noticed in the first analytic solution of the Shiromizu-Maeda-Sasaki on-brane effective theory [7], where the Reissner-Nordstrom solution was rediscovered as an exact solution [12]. There, the interpretation of a charge or mass was entirely geometric and largely dependent on the presence of the extra dimensions. Here too, it is the presence of extra dimensions, through the space- or time-dependent radion, which is responsible for the nature of the solutions, though on-brane matter seems to be crucial in maintaining stability.
2013-12-19T14:21:26.000Z
2013-09-17T00:00:00.000
{ "year": 2013, "sha1": "611fcab5c3c4df1f2cd95de13990d521c3a17dc6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1309.4244", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "611fcab5c3c4df1f2cd95de13990d521c3a17dc6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250378908
pes2o/s2orc
v3-fos-license
An assessment of strategies for sustainability priority challenges in Jordan using a water–energy–food Nexus approach

This study aimed at supporting robust decision-making for the planning and management of water–energy–food Nexus systems in the country of Jordan. Nexus priority challenges in Jordan were identified as (1) water scarcity, (2) agricultural productivity and water quality, and (3) the shift to energy independence. We created a water–energy–food Nexus model that integrates three modelling frameworks: (1) the Water Evaluation and Planning system (WEAP) model to estimate water demands, supplies and allocation; (2) the MABIA model to estimate crop production; and (3) a GIS-based energy modelling tool to estimate the energy requirements of the water system. Through a set of scenario runs, results show how desalination is needed to address water scarcity, but it has to be coupled with low-carbon electricity generation in order not to exacerbate climate change. Improving water productivity in agriculture improves most of the studied dimensions across the water–energy–food security nexus; however, it does little for water scarcity at the municipal level. Reducing non-revenue water can have positive effects on municipal unmet demand and the reduction of energy for pumping, but it does not improve agricultural water productivity and may have negative feedback effects on the Jordan Valleys aquifer levels. Energy efficiency can support energy-intensive projects, like desalination, by substantially reducing the load on the energy system, preventing increased emissions and achieving a more resilient water system. Finally, when all interventions are considered together, all of the major drawbacks are reduced and the benefits augmented, producing a more holistic solution to the WEF Nexus challenges in Jordan.

The WEF Nexus quantitative model for Jordan

In light of the challenges and potential solutions identified in the participatory approach, and in order to inform sustainable development, we developed a WEF Nexus model allowing stakeholders to assess the impact of selected nexus interactions. The model is based on the integration of three modelling frameworks: (1) the Water Evaluation and Planning system (WEAP) model to estimate water demands, supplies and allocation in order to assess the sustainability of the water system; (2) the MABIA model to estimate crop production based on the availability of water; and (3) a GIS-based energy modelling tool to estimate the energy requirements for water pumping, water desalination and wastewater treatment (see Fig. 1). The model uses WEAP and MABIA to estimate water supplies based on climate-driven hydrological routines that calculate rainfall runoff and groundwater recharge. It estimates usage patterns for the main water sectors, evaluates the productivity of cropping systems under different climate futures and assesses their impact on the water system. The energy component implements GIS-based methodologies to estimate energy requirements for groundwater and surface water pumping, new water desalination projects and major wastewater treatment plants. Finally, different scenarios are evaluated for solutions targeting at least one of the challenges, together with an integration of all tested solutions. Information about the input datasets used and their sources can be found in section 2 of the Supplementary information, Tables S3 and S4.
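The integration just described can be pictured as a simple data-passing loop between the three components. The sketch below is purely schematic: all functions and numbers are our own illustrative placeholders, not the WEAP, MABIA or GIS-tool APIs.

```python
# Schematic data flow of the integrated WEF Nexus model (placeholders only).

def allocate_water(year):
    """WEAP stand-in: water delivered per sector (MCM), illustrative values."""
    return {"municipal": 450.0, "agriculture": 500.0, "industry": 50.0}

def crop_production(irrigation_mcm):
    """MABIA stand-in: crop output as a simple function of delivered water."""
    return 2.1 * irrigation_mcm            # kt per MCM, purely illustrative

def energy_for_water(deliveries):
    """GIS energy-model stand-in: sector energy intensities (GWh/MCM)."""
    intensity = {"municipal": 4.5, "agriculture": 2.0, "industry": 3.0}
    return sum(v * intensity[k] for k, v in deliveries.items())

for year in (2020, 2021):                  # scenario loop over the horizon
    deliveries = allocate_water(year)
    yields = crop_production(deliveries["agriculture"])
    energy = energy_for_water(deliveries)
    print(year, round(yields), "kt crops,", round(energy), "GWh")
```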
The WEAP hydrological and agricultural model

Over the past decade, WEAP has been used extensively to model water resource allocation in Jordan and to assess potential strategies to address the gap between water supplies and demands [31][32][33][34][35][36]. Based on this, the Jordanian Ministry of Water and Irrigation (MWI) has developed a national WEAP model for Jordan (see Fig. 2) that considers the distribution and consumption of water resources through:
- Water demand: 94 water demand sites aggregated by governorate, including domestic, commercial, refugee, tourism, industrial and agricultural use.
- Groundwater supply: 12 major groundwater basins and 26 groundwater units representing well fields.
- Surface water supply: rainfall-runoff flows in wadis, Yarmouk flows from the Wehdah dam, and Lake Tiberias inflows to the King Abdullah Canal (KAC).
- Desalinated water supply: the existing Aqaba desalination project and the planned Red-Dead desalination project with their conveyance systems.
- Water distribution network: 67 pipelines and canals connecting water supplies and demands.

The conveyance system is represented with a coarse resemblance to reality, but with enough quality to capture the system's magnitude and the geospatial differences between supply and demand points (e.g. location, elevation, water table depth and conveyance distance). The starting point for this research was the existing Jordan WEAP model maintained by MWI. The model represents the main components of the Jordanian water system and is regularly updated by MWI to include the best available data quantifying the physical features of the system as it currently exists. Demand nodes are used to represent the water use of private households, refugees, commerce and tourism. Water use within these sectors is determined by population and sector-specific water use rates. These demand nodes are allocated to each governorate through their respective towns and cities. Supply to these nodes is modelled using in-field information on the domestic use of resources. The WEAP model captures the preference of supply to demand nodes through a priority level system: nodes with higher priority are supplied and fulfilled first, before the next node in the priority order is supplied (a minimal sketch of this allocation logic is given below). The priority structure reflects the national water policy in Jordan by prioritizing domestic demands over other sectors. Moreover, supply to domestic needs is transmitted from freshwater sources only, and the sanitation structure is reflected by connecting the demand sites to wastewater treatment plants, where outflows are reported as a function of the processing capacities of the plants. The WEAP model represents irrigation requirements using fixed yearly water demands with monthly variations for a variety of crop types, including citrus, vegetables, fruits and cereals. This representation allows for a high-level assessment of water allocation to agriculture, but it does not allow accurate simulation of hydrologic processes, nor does it allow simulation of crop yields, which is necessary to project how crop production changes with water supply. To solve this, we updated the Jordan WEAP model to use MABIA as the method to estimate crop water demands, irrigation requirements and crop production for 17 crop types. MABIA simulates daily irrigation demand and related climatic variables (see the next section for more details).
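The priority-driven allocation referred to above can be illustrated with a minimal sketch (ours, not WEAP's internal solver): demands are sorted by priority and supplied in order until the available supply is exhausted, which is why domestic nodes are always served before agriculture and why shortfalls concentrate in the lowest-priority sector.

```python
# Toy illustration of priority-ordered allocation (lower number = higher
# priority, mirroring how domestic demand outranks agriculture in the text).

def allocate(supply, demands):
    """demands: list of (name, priority, requested_volume)."""
    delivered = {}
    for name, _prio, request in sorted(demands, key=lambda d: d[1]):
        take = min(request, supply)    # serve as much as remaining supply allows
        delivered[name] = take
        supply -= take
    return delivered

demands = [("domestic", 1, 400.0), ("industry", 2, 80.0), ("agriculture", 3, 600.0)]
print(allocate(900.0, demands))
# {'domestic': 400.0, 'industry': 80.0, 'agriculture': 420.0}
# agriculture absorbs the entire shortfall -> 'unmet demand' concentrates there
```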
This MABIA update allows users to interact dynamically with different climate futures and provides a higher resolution for representing energy demand in the agriculture sector.

The MABIA method

The MABIA-WEAP package (originally developed at the Institut National Agronomique de Tunisie by Dr. Ali Sahli and Mohamed Jabloun) [37] has been commonly used to project crop and irrigation water requirements and to estimate the effects of climate change on crops [38,39]. We used MABIA to make daily simulations of evapotranspiration, irrigation needs, and climatic and crop-specific variables. The method is based on algorithms outlined in FAO Irrigation and Drainage Paper No. 56 [40], which estimates reference evapotranspiration ($ET_{ref}$) and soil water capacity using the Penman-Monteith equation [40]. MABIA calculates the potential crop evapotranspiration ($ET_c$) on a daily basis throughout the growing season for a given crop type, using the dual crop coefficient ($K_c$) method (Eq. 1). In this method, $K_c$ is divided into two components: the basal crop coefficient ($K_{cb}$) and the evaporation representation factor ($K_e$). This separation of transpiration and evaporation allows the model to represent actual ET conditions under a dry surface with sufficient root zone moisture. $K_{cb}$ is best represented by a crop coefficient curve demonstrating its variation through the growing season in four stages. At the initial stage, shortly after the planting of annual crops or the initiation of new leaves for perennials, $K_{cb,ini}$ is small. During the crop development stage, the coefficient grows to its maximum value, where it remains throughout the mid-season stage ($K_{cb,mid}$). Finally, the late-season stage represents the onset of senescence until crop death or full senescence, where $K_{cb,end}$ falls from its maximum value. The three values of $K_{cb}$ are obtained from the crop library dataset of WEAP, which is based on FAO Irrigation and Drainage Paper No. 56 [40]. The evaporation coefficient ($K_e$) represents the evaporation from the surface that is not transpired by the crop. The value of $K_e$ varies with conditions: it is largest at the beginning of the growing season and after irrigation or rainfall, and the sum of $K_e$ and $K_{cb}$ cannot exceed a maximum value. $K_e$ is determined by calculating the soil evaporation reduction coefficient, which is a function of field capacity, wilting point, and the effective depth of surface soil. Other variables, such as the exposed and wetted soil fractions, are estimated by MABIA from the soil and crop characteristics given in the model. To calculate the actual evapotranspiration ($ET_a$) (Eq. 2), the actual crop coefficient ($K_{act}$) is estimated by factoring a stress coefficient into the crop coefficient ($K_{cb}$). The stress coefficient is estimated through a function that evaluates root zone depletion. When root zone depletion is equal to or lower than the Readily Available Water (RAW), the actual and potential ET are equal; otherwise, the stress factor is estimated as a function of the Total Available Water (TAW), RAW, and root zone depletion. MABIA includes libraries of crop- and site-specific parameters, such as crop coefficients, planting date, root depth, and depletion and yield response factors, which are taken from FAO Irrigation and Drainage Paper No. 56. This study used these libraries to set values for each of the 17 crops considered.
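A compact sketch of one daily step of the dual-coefficient calculation just described, following the standard FAO-56 conventions that the text cites ($ET_c = (K_{cb} + K_e)\,ET_{ref}$; $K_s = (TAW - D_r)/(TAW - RAW)$ once depletion $D_r$ exceeds RAW; $ET_a = (K_s K_{cb} + K_e)\,ET_{ref}$). The displayed equations Eq. 1 and Eq. 2 are not reproduced in this version of the paper, so this is a reconstruction from FAO-56, and the numerical values are illustrative, not taken from the Jordan crop libraries.

```python
def daily_et(et_ref, kcb, ke, taw, raw, depletion):
    """One day of the FAO-56 dual crop coefficient method (mm/day).

    et_ref    : reference evapotranspiration (Penman-Monteith), mm/day
    kcb, ke   : basal crop and soil evaporation coefficients
    taw, raw  : total / readily available water in the root zone, mm
    depletion : current root zone depletion Dr, mm
    """
    et_c = (kcb + ke) * et_ref                      # potential crop ET (Eq. 1)

    if depletion <= raw:                            # no water stress
        ks = 1.0
    else:                                           # stress grows as Dr -> TAW
        ks = max(0.0, (taw - depletion) / (taw - raw))

    k_act = ks * kcb + ke                           # actual crop coefficient
    et_a = k_act * et_ref                           # actual ET (Eq. 2)
    return et_c, et_a

# Illustrative mid-season day under moderate stress: ~ (7.02, 4.82) mm/day
print(daily_et(et_ref=6.0, kcb=1.10, ke=0.07, taw=160.0, raw=64.0, depletion=96.0))
```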
MABIA also allows for the selection of different approaches to irrigation scheduling, including fixed time intervals, fixed depth, or percent depletion, where irrigation is applied when soil moisture falls below a given threshold specific to the crop type. For this study, we selected the percent depletion method. Historical climatic data (1948-2016) were acquired from the Princeton Climate and Weather data, based on the altitude and latitude of the desired site. The dataset includes minimum, maximum and mean temperatures, pressure, and relative humidity. The MABIA algorithm then used latitude and longitude to estimate solar radiation. Moreover, irrigation-specific parameters such as the irrigation schedule, fraction wetted and irrigation efficiency were defined based on data provided by local stakeholders.

The GIS-based energy model

The energy model captures the WEF nexus by using GIS-based methods to quantify the electricity requirements of different processes. These processes include the conveyance of water for agricultural irrigation, the extraction of groundwater, and the conveyance of water for drinking, industrial and other purposes. In addition, the model estimates electricity requirements for wastewater treatment and seawater desalination. GIS methods were selected to prevent, as much as possible, the aggregation of spatial dimensions, which was supported by the geospatial nature of the WEAP model. This approach allowed us to couple outputs from the WEAP-MABIA analysis to spatial objects of the WEAP schematic (see Fig. 2), capturing elevation and groundwater depth differences from remotely sensed data throughout the country. A representation of the energy model is shown in Fig. 3.

Estimating energy demand for groundwater pumping

Energy for water pumping can be expressed as the energy required to lift water from groundwater sources and to overcome friction in pipes, pumps and other elements of the distribution system used for conveyance across the land surface. Electrical energy, or electricity (kWh), is expended when a unit volume (m³) of water passes through a pump during its operation [41]. The electricity demand $E_D$ (kWh) (Eq. 3) depends on the efficiency of the pump, the pipeline length and diameter, the pipe material roughness or friction factor, and the volumetric demand for water, where d is the distance through which the water is lifted, Q is the required volumetric amount of water, P is the pressure required at the point of use, t is the time over which the water is pumped (assuming a constant head), and $f_l$ is the friction loss along the distance within the distribution system. Electricity demand for water pumping can then be calculated as described in Eq. 4, where the Seasonal Scheme Water Demand, SSWD (m³), is defined as the total volume of water required over a selected season, $\rho$ (kg/m³) is the density of water, g (m/s²) is the acceleration of gravity (9.81 m/s²), and the factors 1/3600 (s/h) and 1/1000 (W/kW) convert from Joule units to kWh. Moreover, $TDH_{gw}$ (m) represents the Total Dynamic Head and $PP_{eff}$ (%) accounts for the pumping plant efficiency. The Total Dynamic Head is estimated using Eq. 5, where EL (m) is the Elevation Lift, SL (m) is the Suction Lift, OP (m) stands for the Operating Pressure and accounts for the pressure needed based on the application and conveyance system, and FL (m) expresses the Friction Losses in the piping systems.
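The head and energy relations of Eqs. 4-5 (and the peak-power relation of Eq. 6, introduced next) translate directly into code. Since the displayed equations are not reproduced in this version, the sketch below is our reconstruction from the variable definitions and unit conversions given in the text, with purely illustrative input values; the same energy function is reused for surface water conveyance, with EL derived from the GIS data, and the treatment/desalination energy intensities quoted in the following subsections.

```python
RHO_WATER = 1000.0   # kg/m3
G = 9.81             # m/s2

def total_dynamic_head(el, sl, op, fl):
    """Eq. 5: TDH (m) = elevation lift + suction lift + operating
    pressure head + friction losses."""
    return el + sl + op + fl

def pumping_energy_kwh(sswd_m3, tdh_m, pump_eff):
    """Eq. 4 (as reconstructed): seasonal electricity (kWh) to lift SSWD (m3)
    through TDH (m); 1/(3600*1000) converts Joules to kWh."""
    return sswd_m3 * RHO_WATER * G * tdh_m / pump_eff / (3600.0 * 1000.0)

def pumping_power_kw(pswd_m3_s, tdh_m, pump_eff):
    """Eq. 6 (as reconstructed): peak power (kW) for peak demand PSWD (m3/s)."""
    return pswd_m3_s * RHO_WATER * G * tdh_m / pump_eff / 1000.0

def treatment_energy_kwh(volume_m3, intensity_kwh_per_m3=0.6):
    """Volume x intensity; 0.6 kWh/m3 (activated sludge), 5 and 3.31 kWh/m3
    (Aqaba, Red-Dead desalination) are the values quoted in the text."""
    return volume_m3 * intensity_kwh_per_m3

# Illustrative well: 120 m lift, 10 m friction loss, 60% efficient pump,
# 50,000 m3 pumped over the season with a 0.05 m3/s peak.
tdh = total_dynamic_head(el=120.0, sl=5.0, op=20.0, fl=10.0)   # 155 m
print(round(pumping_energy_kwh(50_000.0, tdh, 0.60)))          # ~35,198 kWh
print(round(pumping_power_kw(0.05, tdh, 0.60), 1))             # ~126.7 kW
```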
Finally, the overall power required for pumping water is determined as per Eq. 6, where PSWD (m³/s) is the peak water demand within the SSWD period.

Estimating energy demand for surface water conveyance

Surface water conveyance in Jordan happens through a wide and complex pipeline network covering the country from south to north (see Fig. 2). This network receives water from several supply points, including groundwater, surface water and desalination plants. Once the water is in the network, it is conveyed to every demand point throughout the country; however, some demand points, especially some agricultural sites, source their water directly from groundwater aquifers. Energy requirements for surface water conveyance are estimated by capturing the geospatial characteristics of every demand site, pipeline section and supply point. Knowing the location and elevation of every point, the Elevation Lift (EL) can be computed as the elevation difference between the start- and end-point of every pipeline section. The length of each section was calculated using geospatial functions from the Python package GeoPandas, and the monthly water flow through every section was derived from the WEAP model. The energy for water conveyance was then estimated using the previously described methodology, adding the specific characteristics of the main pipelines (i.e. diameter and roughness factor).

Estimating energy requirements to improve water quality

The main wastewater treatment plants were captured in the WEAP model (see Fig. 2), and their energy requirements were modelled based on the type of treatment technology and the number of treatment stages. When specific data for a treatment plant were not available, international standards on the energy intensity of wastewater treatment were used, at 0.6 kWh/m³ for the activated sludge treatment process [42]. Similarly, energy requirements for water desalination were modelled according to the specifications of the Aqaba desalination plant and preliminary estimations of the energy intensity of the Red-Dead desalination project, at 5 kWh/m³ and 3.31 kWh/m³ respectively [43]. The additional energy required for pumping water from the Red Sea northwards was considered in the surface water conveyance methodology.

Scenario analysis

A set of scenarios was analyzed in order to explore nexus interactions and the impacts of measures targeted at one of the systems (i.e. water, energy or agriculture) on the other systems. A time horizon of 30 years was selected, covering the period from 2020 to 2050. All of the scenarios were formulated and agreed with the project stakeholders. The scenarios evaluated were:
- No intervention scenario, which takes a business-as-usual approach where the main current trends (in terms of demand, supply and growth) are unchanged. It assumes that domestic demands will increase over time, with refugees staying (but no new refugees coming), and agriculture and industry not growing over time.
- Reduce non-revenue water (NRW) scenario, which assumes a reduction of non-revenue water by 20% by year 2050. Non-revenue water is the amount of water that is either lost in the transmission and distribution processes due to technical issues, or withdrawn and consumed without authorization from the water authorities. The reduction is set as a goal for each municipality to achieve by year 2050.
- New water resources (desalination) scenario, which assumes the construction of the Red Sea-Dead Sea project and its associated desalination plant (with a capacity of 110 MCM/yr).
The production of desalinated water from the plant will start by year 2025 at a quarter of its capacity, and will reach full capacity by year 2029. This water will be transported by pipeline systems from the south to the north of the country, in order to supply the main urban areas.
- Increased agricultural water productivity scenario, which considers a combination of interventions targeting increased crop water productivity. These interventions include improving the efficiency of irrigation schemes and the use of controlled micro-climates such as greenhouses.
- Integrated strategies scenario, which examines the combination of the interventions in the other scenarios, including non-revenue water reduction, construction of the Red Sea-Dead Sea desalination and conveyance project, and increasing agricultural water productivity.
- Pumping energy efficiency, which considers the gradual improvement and modernization of the water network. Pumping energy efficiency is gradually increased, starting from the individual efficiency of each pumping system (i.e. groundwater pumps and conveyance pipelines) and reaching an average target efficiency by year 2050. This strategy is parallel to all scenarios, being tested on top of the previous five scenarios presented.

Effects of increased evapotranspiration

Crop water requirements are strongly related to the amount of water that is evaporated and transpired by the plant [40]. About 99% of the water used by a plant is transpired through the leaves or evaporated from the soil. Different variables affect the amount of evapotranspiration that occurs in any given season of the year; among them, climatic factors such as solar radiation, air temperature, air humidity and wind speed are the most influential [44]. Crop water requirements are thus often calculated as the amount of water required to compensate for the water that is evapotranspired [45]. Historical reference evapotranspiration data in Jordan indicate an increasing trend in evapotranspiration that will potentially impact hydrology and crop water requirements. To assess the impact of evapotranspiration, all scenarios were evaluated under two conditions:
- no increasing trend in evapotranspiration, and
- a drier future with an increasing trend in evapotranspiration applied to hydrology and irrigation requirements.

This analysis helps to evaluate some of the climatic uncertainties that can affect water requirements for irrigation and, in turn, compromise crop yields and affect related pumping energy requirements.

Results and discussion

In this study we developed a WEF Nexus model in order to support the implementation of SDG target 6.4 of the 2030 sustainable development agenda and to define the safe boundaries for water sustainability in the country of Jordan. For this, we used the MABIA method to enhance the current WEAP hydrological model owned by the Jordanian Ministry of Water. The MABIA method implements simulations of daily irrigation water demand and crop yields for several crop types in the country, and therefore allows users to evaluate the effects of future climates on crop production. Moreover, the WEAP-MABIA model was coupled with a custom-made GIS-based model to account for the different energy-for-water requirements throughout the country. A series of scenarios, covering the period from 2020 to 2050, was evaluated in order to test how interventions targeted at one of the sectors affect the other sectors.
Results and discussion

In this study we developed a WEF Nexus model in order to support the implementation of SDG target 6.4 of the 2030 sustainable development agenda and to define the safe boundaries for water sustainability in Jordan. For this, we used the MABIA method to enhance the current WEAP hydrological model owned by the Jordan Ministry of Water. The MABIA method simulates daily irrigation water demand and crop yields for several crop types in the country, allowing users to evaluate the effects of future climates on crop production. Moreover, the WEAP-MABIA model was coupled with a custom-made GIS-based model to account for the different energy-for-water requirements throughout the country. A series of scenarios covering the period from 2020 to 2050 was evaluated in order to test how interventions targeted at one of the sectors affected the other sectors. Finally, an open-source online visualization platform was developed to enable stakeholders to access all data generated with the model and explore the results in detail.

Broadly speaking, no single scenario targeted all of the challenges identified in the participatory approach (Table 1). Therefore, it can be argued that a combination of interventions is needed to achieve a holistic solution. In this section, we first present results for the No Intervention scenario and the implications of increased evapotranspiration in a warmer future. (Table 1: the No Intervention scenario is highlighted with a gray background and taken as the reference case; colored arrows denote positive or negative differences between the tested scenarios and the No Intervention scenario for selected indicators, with the number of arrows indicating the intensity of the change.) Then, we cover the results of each scenario, comparing them to the No Intervention scenario and discussing the broader sustainable development implications.

No intervention scenario

Results show that without intervention, water demands from the municipal and agriculture sectors could be increasingly unmet, at rates of around 17% and 38% respectively in the last decade (i.e. from 2040 to 2050, see Table 1); unmet demand works as a water scarcity indicator by measuring the percentage gap between the water that is actually delivered and the water that is demanded. The effects of increased evapotranspiration in that regard, however, were not significant. Moreover, agricultural water productivity (i.e. unit of crop produced per unit of water applied) would constantly decrease over the entire period (Fig. 4). Increased evapotranspiration exacerbates this, reducing water productivity over the last decade by about 9% on average. Aquifers would continue to be depleted (Fig. 4), affecting the water supply for agricultural production, domestic drinking water and energy needs for pumping. Moreover, increased evapotranspiration substantially increases drawdown in the Jordan Valley aquifer, by about 18 m more by year 2050 compared with the historical climate trend (Fig. 4). On the other hand, energy requirements would increase due to aquifer drawdown and the need to convey more water from south to north (Fig. 5). Increased evapotranspiration, however, would only affect energy requirements for groundwater pumping, with an average increase of about 6.8% in the last decade, directly related to the increase in aquifer drawdown. The effects of increased evapotranspiration on the system can be explained by the water allocation hierarchy in the country: Jordan supplies its domestic and industrial demands first and leaves agriculture at the lowest priority. A warmer future would thus produce greater evapotranspiration, reducing both surface water availability and the water that percolates and recharges the groundwater aquifers. To compensate, more water is pumped from the groundwater aquifers (Fig. 6). This is evidenced by an overall increase in groundwater extraction of 7.5% on average in the last decade (Fig. 6), with the Jordan Valley aquifer being the most affected (13.6% average increase in the last decade). As a consequence, aquifer levels are substantially reduced, especially in the Jordan Valley, which sustains an important share of the country's agricultural activity.
Although extracting more water maintains the same level of water deliveries to the agriculture sector, the warmer climate exerts more stress on crops, affecting agricultural water productivity (Fig. 4). Moreover, the findings presented regarding unmet demands and aquifer drawdown are in agreement with previous water resource management studies of individual basins of Jordan [31,32,35]. These studies also showed that both municipal and agricultural unmet demands are set to increase with time and that aquifers will continue to be depleted in the studied regions. The effects of climate change on crops have also been studied previously in arid and semi-arid regions such as Jordan. Those results agree with our findings: increasing crop water requirements and a steady decline in crop yields are expected due to rising temperatures, increased evapotranspiration and decreased precipitation during planting seasons [38,39,41,46].

Reduced non-revenue water scenario

Domestic water scarcity was shown to improve by implementing measures to reduce non-revenue water. Unmet municipal demands were reduced annually by 4.3% on average in the last decade compared with the No Intervention scenario (Table 1 and Fig. 7). However, agricultural unmet demands did not see substantial improvements, as most of the inefficiencies in water transport and distribution happen at the municipal level (Fig. 7). As a result, agricultural water productivity continued to decrease at a similar rate as without intervention (Fig. 7). In addition, a negative feedback effect is seen in the Jordan Valley aquifer, with an additional decrease of the water table levels of around 7.7 m by year 2050 (Table 1 and Figure S4 in Supplementary information). This happens because the agricultural areas of the Jordan Valley region typically irrigate with substantial amounts of treated wastewater. Reduced non-recoverable losses mean less water being discharged into the Zarqa River from the Samra wastewater treatment plant, which in turn creates the need to extract more groundwater for agricultural irrigation in the region (see Figure S5 in Supplementary information), negatively affecting the levels of the Jordan Valley aquifer. On the other hand, the Dead Sea aquifer saw a slight improvement of its levels of about 3.4 m by year 2050 against the No Intervention scenario (Table 1 and Figure S4 in Supplementary information). Finally, energy demand for water conveyance decreased by an average of 380 GWh in the last decade due to reduced losses in the system (Fig. 7). This is a significant improvement in energy use, which would decrease greenhouse gas (GHG) emissions and support the country's shift to energy independence.

New water resources (desalination) scenario

Domestic water scarcity was alleviated by adding new water resources from the Red Sea-Dead Sea water desalination project. This measure increased the availability of water, which had a direct effect on unmet demands. Unmet municipal demands decreased by an annual average of 5.3% in the last decade (Table 1 and Fig. 7). These results are in line with previous studies of the Red Sea-Dead Sea project, which have also shown substantial reductions in municipal unmet demand in specific regions of Jordan [34]. As agricultural irrigation has the lowest priority in water allocation, most of the newly available water was directly consumed by the municipal sector, translating into little improvement in unmet demand in agriculture (Fig. 7).
Moreover, water levels in the Dead Sea aquifer saw a substantial improvement of about 12.8 m by year 2050, whereas other aquifers retained drawdowns similar to those without intervention (Table 1 and Figure S4 in Supplementary information). This outcome is logical, as the Red Sea-Dead Sea project plans to pump sea water from the south (Red Sea) to the north of the country, desalinate it and use the resulting brine to help recover the Dead Sea levels. On the other hand, agricultural water productivity continued to decrease at a similar rate as without intervention (Fig. 7). (Fig. 7: main results for all scenarios in an increased-evapotranspiration, i.e. warmer, future: a unmet demand in the municipal sector, where the No Intervention and Increased Water Productivity scenarios have exactly the same values, making both lines overlap; b unmet demand in the agricultural sector; c agricultural water productivity; d total energy demand for groundwater pumping, water conveyance, wastewater treatment and sea water desalination.) This is also due to the water allocation hierarchy, which reduces the probability of agriculture using the new water resources (i.e. desalinated water). As the major trade-off, energy demand for water conveyance and desalination substantially increased, by about 218 GWh on average in the last decade (Table 1 and Fig. 7). Moreover, a major increase of about 400 GWh of energy would be seen in year 2029, when the desalination project starts operating. This substantial increase in energy needs will exert great pressure on the energy system to ensure new generation capacity. Although Jordan has high solar energy potential, it has recently restricted new installations of intermittent renewable supply and instead opted for boosting generation with locally produced fossil fuels. This will have repercussions on GHG emissions, hindering progress on Jordan's Nationally Determined Contributions (NDC) [47] and clearly creating trade-offs with SDG target 7.2 on substantially increasing the share of renewable energy in the global energy mix by 2030. The shift to energy independence could also be affected if Jordan keeps its high dependence on fossil fuel imports to supply the energy-for-water requirements, especially with the additional energy needs of the new desalination water resources.

Increased agricultural water productivity scenario

To increase agricultural water productivity, a combination of interventions targeted at producing more crops with less water is applied (e.g. improving the efficiency of irrigation schemes, using controlled micro-climates such as greenhouses for crop harvesting). With these interventions, unmet agricultural demand decreased by 3.7% on average in the last decade, which is the best improvement among all tested scenarios (Table 1 and Fig. 7). However, as this measure targets only the agriculture sector, unmet municipal demands remained the same as without intervention (Fig. 7). Furthermore, this scenario was the only one that achieved an improvement in agricultural water productivity, considerably increasing the crop production per unit of water by an annual average of 15.6% in the last decade (Fig. 7). The more efficient use of water in agriculture translated into fewer water extractions, with consistent improvements across all aquifer drawdown trends (Figures S4 and S5 in Supplementary information).

Adding improved pumping energy efficiency

The current energy efficiency for pumping in the water system throughout Jordan is low, at around 50% on average.
Therefore, we tested a goal of achieving 80% average pumping efficiency by year 2050, applying linear growth from current 2020 levels. The improvement in energy efficiency helps to assess the effect that modernizing pumps and water networks may have on the energy system. Results are presented for the No Intervention scenario and the New Resources scenario in order to capture the state of affairs and the worst case in terms of energy requirements (Fig. 8). In both scenarios, improving pumping efficiency flattened the growing energy requirements for conveyance and groundwater pumping. Moreover, energy requirements by year 2050 were even lower in the New Resources scenario with improved efficiency than in the No Intervention scenario without efficiency improvements. This is especially important, as the New Resources scenario had the greatest increase in energy requirements for water desalination and conveyance. Thus, energy efficiency can be seen as an important complement to water scarcity solutions, as it can reduce the load on the energy system, support the shift to energy independence, improve the resilience of the water system and even reduce the cost of crop production.

Integrated strategies scenario

All of the tested strategies had benefits and drawbacks, but none of them holistically targeted the WEF Nexus challenges in Jordan. Reducing non-revenue water helps with municipal water scarcity and the shift to energy independence, but it has negligible effects on agricultural water scarcity and productivity. Adding new water resources by implementing the Red Sea-Dead Sea desalination project substantially improves municipal water scarcity, but at a high cost in increased energy requirements. If sustainable solutions for supplying the large amount of extra energy needed are not considered, Jordan may face considerable setbacks in its progress towards a cleaner energy mix; this also hinders energy independence, given the high dependence on fossil fuel imports in the power generation system. On the other hand, increasing agricultural water productivity seemed to be the solution that best addresses all WEF sectors, as it improved agricultural water scarcity and productivity, substantially reduced energy requirements for groundwater pumping, and consistently reduced aquifer drawdown. However, it did not have any effect on municipal water scarcity, which is one of the main priority challenges in Jordan. Finally, energy efficiency can be seen as a major ally for sustainable development, as it can support the execution of energy-intensive projects while reducing stress on the energy system and achieving a more resilient water system. It is interesting, however, that some of the benefits of the individual strategies are opposed to the drawbacks of another strategy (see Table 1). Thus, to evaluate the combined effect of all strategies, we tested them in an integrated strategies scenario. Results show how this more holistic approach significantly reduced the drawbacks present in each individual scenario and considerably augmented the benefits (Table 1 and Fig. 7). Municipal unmet water demands in the final decade of the analysis improved by around 9.3% on average compared to no intervention. Agricultural unmet water demands in the final decade improved by around 3.6% on average. Agricultural production in the final decade increased by an additional 15%.
Drawdowns and groundwater extractions were consistently reduced across all aquifers when compared against no-intervention levels (Figures S4 and S5 in Supplementary information). Finally, energy demand would be reduced below no-intervention levels by about 224 GWh on average in the last decade, despite the high extra energy required for the Red-Dead desalination project. More broadly, the mixed-method approach followed in the study proved useful for addressing complex resource and development challenges. The need for such approaches had already been identified by [3] in their systematic review of WEF Nexus assessment methods. By utilizing multiple interdisciplinary approaches and engaging stakeholders and decision-makers, this method ensured strong participation of the key actors in charge of the governance of the WEF Nexus system in Jordan and increased the policy relevance of the results.

Conclusions

In this study, we adopted a water-energy-food Nexus perspective to assess solutions to sustainability challenges in Jordan. Three main challenges were identified through a participatory approach involving key stakeholders in Jordan: (1) water scarcity, (2) agricultural productivity and water quality, and (3) the shift to energy independence. Four scenarios were evaluated, including a No Intervention scenario that captured the current state of affairs under a "business as usual" approach. In the three other scenarios, we evaluated how measures to reduce non-revenue water, produce new water resources and increase agricultural water productivity would reduce water scarcity, affect crop production, alleviate aquifer drawdown and impact the energy system. In addition, the effects of increased evapotranspiration (due to a warmer climate) were also analyzed. Results show how increased evapotranspiration would further exacerbate the sustainability challenges. Aquifers would see extra drawdown due to increased groundwater pumping to maintain water deliveries. Higher temperatures are the main cause: as surface evapotranspiration increases, both the amount of surface water available and the recharge from water percolating to groundwater aquifers are reduced. Agricultural water productivity (i.e. crop per drop) would be negatively affected, mainly due to higher heat stress on crops, and energy requirements for groundwater pumping would substantially increase due to lower aquifer levels. It can be argued that there is no single perfect solution that addresses all challenges holistically. This is apparent because none of the tested sectoral solutions targeted all combined challenges; instead, some solutions even had negative effects on other sectors. Therefore, integrated strategies are needed to holistically target the challenges across all sectors.
This is evidenced by four key findings: (i) desalination is needed to address water scarcity, but it has to be coupled with low-carbon electricity generation in order not to exacerbate climate change; (ii) increasing agricultural water productivity is a win-win across the water-energy-food security nexus; however, it does not target the issue of municipal water scarcity; (iii) reducing non-revenue water can have positive effects on municipal unmet demand and reduce energy for pumping, but it does not improve agricultural water productivity and may have negative feedback effects on the Jordan Valley aquifer levels; (iv) energy efficiency can support energy-intensive projects such as desalination by substantially reducing the load on the energy system, preventing increased emissions and achieving a more resilient water system. In light of this, when all interventions are considered together under an Integrated Strategies scenario, all of the major drawbacks are reduced and the benefits enhanced, producing a more holistic solution to the WEF Nexus challenges. The outcomes of this study help decision-makers and key stakeholders in charge of the governance of the three resource systems to understand the trade-offs and synergies of sustainable solutions for Jordan. Moreover, the participatory approach and the resulting WEF Nexus model constitute the first framework for analysing new strategies in the country under a WEF Nexus sustainability lens. This framework supports decision-making with data-driven insights and promotes holistic governance involving actors from the different resource systems.

Limitations and future research

As future research, climate projections could be implemented to evaluate the effects of different climate futures on the systems. This would allow a more robust estimation of the uncertainties related to climate change. Such a representation would require downscaling known Representative Concentration Pathways (e.g. RCP4.5, RCP6, RCP8.5) from existing climate models to the Jordan scale. Moreover, the geospatial representation and resolution of the WEAP model could be improved to better position demand and supply sites and achieve a more detailed water transmission and distribution system. This would require collecting more in-field data in order to model the water demands of different settlements throughout the country and to disaggregate the currently used irrigation perimeters; it would, however, considerably increase the computing requirements of the model. The characteristics of the pipeline system could also be detailed further in order to model more accurately the friction energy losses of surface water conveyance. Finally, scenarios could be developed to evaluate how energy requirements can be sustainably met, e.g. evaluating the use of renewable power generation technologies, such as solar power, to support the pumping requirements of the Jordan water system.

Data availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request. Results can, however, be explored in the Jordan WEF Nexus interactive visualization platform at https://jordan-nexusmodel.herokuapp.com/. Code for the GIS-based energy model and the soft-linking of models is available under the MIT license at https://doi.org/10.5281/zenodo.6521305.

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-07-09T15:02:11.110Z
2022-07-07T00:00:00.000
{ "year": 2022, "sha1": "9a480a920ac63ff91274125b03016bc38461a453", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s43621-022-00091-w.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9ac2f22a3d04075cc7ff96c90ed497f9a568c992", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
12203576
pes2o/s2orc
v3-fos-license
Knowledge workers collaborative learning behavior modeling in an organizational social network Computations related to learning processes within an organizational social network require some network model preparation and specific algorithms in order to implement human behaviors in simulated environments. The proposals in this research model of collaborative learning in an organizational social network are based on knowledge resource distribution through the establishment of a knowledge flow. The nodes, which represent knowledge workers, contain information about the workers' social and cognitive abilities. Moreover, the workers are described by their set of competences, their skill level, and the collaborative learning behavior that can be detected through knowledge flow analysis. The proposed approach assumes that an increase in workers' competence is a result of collaborative learning. In other words, collaborative learning can be analyzed as a process of knowledge flow that is broadcast in a network. In order to create a more effective organizational social network for co-learning, the authors found the best strategies for the allocation of the knowledge facilitator, knowledge collector, and expert roles. Special attention is paid to the process of knowledge flow in the community of practice. Acceleration within the community of practice happens when knowledge flows more effectively between community members. The presented procedure makes it possible to add new ties to the community of practice in order to influence community members' competences. Both the proposed allocation and acceleration approaches were confirmed through simulations.

Introduction

There is no doubt that the concept of collaboration is closely related to learning. The collaboration process, in which people interact, employs self-critiquing (reflection); inquiry and arguing skills are a solid base for the (social) constructivism pedagogy that is commonly utilized in modern companies (Schaf et al., 2009). Today, almost every company wants to become a knowledge-creating company. Knowledge management pioneer Nonaka (Nonaka et al., 2000) claims that making personal knowledge available to others through social networks is the central activity of a knowledge-creating company, and that it takes place continuously and at all levels of an organization. In the knowledge management area, the main focus rests on information technologies (IT); the problem of how knowledge can be shared effectively among workers using organizational social relationships has been marginalized (Dong et al., 2012). Prior research on knowledge management shows that the proper arrangement of organizational social relationships significantly impacts the efficiency of knowledge sharing. Researchers have noticed a move from a technology-based knowledge management strategy to a socialization-based knowledge management strategy as companies seek to more effectively facilitate knowledge sharing. Recent works bring some insight to the problem. Long and Qing-hong's (2014) study investigated how to divide users into collaborative learning groups. They utilized the users' educational interests to group them into customized clusters. In each cluster, a genetic algorithm was adopted for collaborative learning group division based on users' knowledge levels in order to approximate the optimal development of a collaborative learning group.
Another approach to the problem, recommending learning material suited to students' characteristics, needs, and preferences, was presented in Kozierkiewicz-Hetmańska's work (2011). In this article, the research problem addresses collaborative learning through knowledge flows in the design of an organization. Knowledge flows are the most important elements of the collaborative learning process in an organizational social network. For this reason, we want to understand exactly how they move through the network. Besides the cognitive and social abilities of the knowledge workers and their relationships, the knowledge that flows is the main influence on the workers' collaborative learning process. In addition, an effective collaborative learning process results in competence development. Moreover, we assume that knowledge flow is more intense in a community of practice. As a result, in the presented research, we want to establish different methods to make knowledge flows more efficient with respect to the different roles in the network and the community of practice. In the proposition, a number of concepts are combined into one model, and all of them will be described in the upcoming sections of the article. The approach presented in this article extends the available models toward the concept of knowledge workers, who are described by information concerning their competences (in vector format) and mask data structures, which reflect a worker's ability to labor in a specific area. Moreover, knowledge diffusion in the network is achieved by knowledge resource broadcasting. The workers' collaborative learning behavior is described through a computational model that allows for the analysis of different worker configurations and relationship statuses. This article is divided into four parts. The following section covers the theoretical background related to the problem; in particular, attention is paid to competence development in an organization, knowledge flow in the description of communities of practice, and the collaborative learning development process. The model for a knowledge network in an organization is described in Section 3. The model is based on the formalization of knowledge resources that are transferred by knowledge flows throughout the network. Section 4 describes the method for role allocation in an organizational social network; the roles involved are those of knowledge facilitator, knowledge collector, and expert. The next section analyzes the problem of community of practice acceleration through the addition of new relationships.

Competences in an Organization

There are a number of ways to understand the concept of competence depending on the field of science or humanities being referenced. The French word "compétence" was originally used to describe the capability of performing a task in the context of vocational training (Romainville, 1996). Later on, the word found its place in general education, where it was mainly related to the "ability" or "potential" to act effectively in a certain situation. Perrenoud (1997) claimed that competence is not only limited to the knowledge of how to do something but also reflects the ability to apply this knowledge effectively in different situations. Grant and Young (2010) analyzed and summarized the skills and knowledge approach to competence.
The requirements for the development of a competence-based approach come from staff development and deployment; job analysis reveals the need for new approaches to knowledge modeling in organizations (Radevski et al., 2003). In modern companies, the competence-based approach is a main component of employment planning, recruitment, training, increasing work efficiency, personal development, and managing key competences. Draganidis et al.'s (2008) study showed that a competence-based approach can identify the skills, knowledge, behaviors, and capabilities needed to meet current and future personnel selection needs that are in alignment with various strategic and organizational priorities. Moreover, a competence-based approach can focus on individual as well as group development plans in order to eliminate the gap between the competences needed for a project, job role, or enterprise strategy and those that are currently available. Sanchez (2004) reported some challenging issues that must be addressed with a competence-based approach, including: the development and use of a consistent set of concepts and vocabulary for describing competences, the classification of different types and levels of activities within organizations that collectively contribute to achieving competence, and the articulation of interactions between different types and levels of organizational activities that are critical in the processes of competence building and leveraging. The representation of competence in information systems is based on the ontology framework (García-Barriocanal et al., 2012; Draganidis et al., 2008; Jussupova-Mariethoz and Probst, 2007). Macris et al. (2008) described why the ontological structure is appropriate for competence processing. The most important consideration is that ontology allows for the definition of an organization-wide role structure based on the competences required by different job functions and organizational positions. Moreover, ontology helps identify the competences required to perform the various activities involved in each business process and assigns roles to process these activities based on the competences. Additionally, ontology is a base for the identification of the competences that have been acquired in the organization and for the assignment of users to roles through competence matching. In the literature, two different base concepts of competence coexist (Bass et al., 2008). An interesting discussion of this issue can be found in McHenry and Strønen (2008), who concluded that the first concept defines competence by targeting individual workers while the second one defines competence by the results of the work produced. We analyzed this issue based on McHenry and Strønen's work. The first competence concept focuses on individual competences and takes the workers' attributes as the starting point for discussing competence. The workers' competence value is treated as a stock that can be developed through training and validated in "objective" rating schedules. In the second concept, competence is conceptualized as a characteristic of organizations, where human competences are seen as one of the available resources.

Knowledge Flow in Communities of Practice

According to Kirschner and Lai (2007), a community of practice is a process in which social learning occurs because the people who participate in the process have a common interest in some subject or problem and are willing to collaborate over an extended period with others who share this same interest.
From another perspective, communities of practice are groups of people who share a concern or passion for something they do and who learn how to do it better as they interact regularly (Wenger et al., 2002). The results of community of practice members' collaboration are ideas, the finding of solutions, and the building of a repository of knowledge that changes each member's competence. Moreover, in many industry sectors the community of practice is recognized as a key to improving performance (Abel, 2008). In the work of Zhuge et al. (2005) we found a number of definitions related to the previously discussed issue of knowledge flow in communities of practice. Knowledge flow is the process of passing knowledge within a team; in other words, knowledge flow is a process of knowledge interchange in a cooperative team. A similar definition was created by Li (2007): knowledge flow is the process of knowledge diffusion, knowledge transfer, knowledge sharing, and the relevant knowledge increase caused by the aforementioned items, which results from interaction between different actors, including the organization and the individual. A knowledge flow begins and ends at a knowledge node. A knowledge node is either a team member or a role that can generate, process, and deliver knowledge. A knowledge flow network is made up of knowledge flows and knowledge nodes. In modern companies, knowledge flow networks are used to facilitate knowledge sharing. The research carried out by Cowan and Jonard (2004) presents the impact of different types of network structures on knowledge diffusion across organizations based on a simulation. The knowledge flow network has to satisfy the following predetermined conditions in order to create effective flows (Zhuge et al., 2005): knowledge nodes in the network use similar intelligence to acquire, use, and create knowledge; knowledge nodes share knowledge autonomously; knowledge nodes share knowledge without reserve; and the team is cooperative, small, and flat within the organization. Moreover, geographical, cognitive, and social distance is an important consideration for knowledge flows between individuals (Østergaard, 2009). Guo et al. (2005) describe why knowledge passing and sharing only happen when trust is present. Communities of practice supported by effective knowledge flows can provide task-relevant knowledge to community members that helps them fulfill their knowledge needs quickly and effectively (Liu et al., 2013).

Collaborative Learning Development

Collaborative learning is a learning method that helps workers study through intragroup collaboration and competition between groups (Long and Qing-hong, 2014). Due to the largely Internet-based and intercultural workplace of many professionals, the collaborative learning process is migrating toward computer-supported collaborative learning (Popov et al., 2014; Colace et al., 2006). Knowledge workers, the members of the collaborative learning community, may participate in various collaboration activities in different ways based on their competences (Kolodner, 2007). At the organization level, the group composition, group size, collaborative media, and learning tasks may differ (Rummel and Spada, 2005). The classic learning process in universities is teacher-centered and, due to cost limitations and organizational obstacles, cannot be directly implemented in companies. However, collaborative learning supports a company's needs for training and worker self-learning.
According to Kuljis and Lees (2002), the principles of collaborative learning are based upon a learner-centered model that treats the learner as an active participant. The members of the cooperative group are encouraged to carry on deeper conversations, create multiple perspectives, and develop reliable arguments. This is the main reason why collaborative groups facilitate greater cognitive development than the same individuals can achieve while working alone (Hutchins, 1995). Higher levels of human-human interaction are a solid foundation for collaboration in an organization (Schaf et al., 2009). In order to develop collaborative learning in the company network system, we analyzed users' individual learning interests and knowledge levels, along with their quantifications, and established a user model (Long and Qing-hong, 2014). This approach is similar to community building. In order to make a collaborative learning network effective, all groups need to coordinate their efforts and resources in effective ways (Kwon, 2014). The task of building an effective collaborative learning network is composed of two sub-problems (Long and Qing-hong, 2014): how to choose and quantify the proper features to build a user model for a collaborative learning network, and how to divide the users into optimal teams in order to achieve their learning goals. Research shows that workers need unique group regulatory behavior, because sharing common ground is paramount for effective collaboration with other group members (Kwon, 2014). Moreover, the thoughtful design of a collaborative learning network must include scaffolding to encourage the desired approaches and behavior (Willey and Gardner, 2012). Furthermore, culturally diverse members of a group need to overcome an additional level of complexity due to culture-related differences (Popov et al., 2014). Other issues related to building a collaborative learning network include the cognitive, motivational, and socio-emotional challenges experienced in collaborative learning, understanding how conflict emerges, and what students' emotional reactions and interpretations are (Näykki et al., 2014; Ayoko et al., 2008). From the technological side, collaborative learning activities can be realized through the following modes (Zhao and Zhang, 2009): face-to-face collaborative learning, asynchronous collaborative learning, asynchronous distributed collaborative learning, or synchronous distributed collaborative learning. It should be noted that another research problem is the optimal selection of an information system for different modes of the IT market (Colace et al., 2014).

Knowledge Worker

From the market's point of view, a company's global objective is to maintain its position in the market. In a knowledge-based economy, increasing the company's intellectual capital is a primary element of this strategy (López-Ruiz et al., 2014; Nemetz, 2006). Moreover, from the knowledge perspective, the organization's knowledge worker competences and any related core competences are an important part of intellectual capital (Ulrich, 1998). Core competences are abilities that are unique to the company in the market (Ligen and Zhenlin, 2010). However, due to tough competition, competitive advantage comes not only from owning these kinds of competences, but also from having high levels of them, or at least higher levels than a competitor has.
The key to the successful operation of an organization is to effectively manage the process of transferring knowledge, which allows the company to use its assets in the most effective way (Dong et al., 2012; Różewski et al., 2013). Let us assume that organization X is composed of a set of knowledge workers. All the knowledge workers in the knowledge-based organization are characterized by a set of competences. Knowledge workers enhance their competences by taking part in projects and cooperating with other workers (who are willing to share their knowledge and who have higher competences), by attending training courses, and through self-study (Różewski et al., 2013). All organizational competences are related to a worker's knowledge set and are stored in a competence bank. Some competence values may be equal to zero; in that case, a strategic goal for the organization would be to increase the value of this competence. If we assume that the set $Cb$ consists of all the elements of the competence bank and that the set $Cc$ represents the core competences, then $Cc \subseteq Cb$ is a subset of the organization's competences. The core competences are the most important part of an organization's intellectual capital. More information about core competence can be found in Bonjour and Micaelli (2010). The level of competence $n$ for worker $i$, denoted $c^i_n$, is calculated by an audit procedure. The audit procedure is based on various methods and techniques for competence analysis (Grant and Young, 2010). From the point of view of the competence audit, each competence has a name and a set of attributes that define it. Each of the attributes for a given employee is evaluated in some way (e.g., questionnaire, interview) (Koeppen et al., 2008). The aggregated attributes allow us to calculate a worker's competence level. Every worker $i$ possesses a competence set characterized by a competence vector $C^i = [c^i_1, \dots, c^i_N]$. However, in the discussed model, the level of competence does not have an upper limit due to the open nature of the knowledge process in an organization. In some cases, the competence level can be transformed into a linguistic variable in order to obtain some kind of Likert scale (e.g., based on the fuzzy approach [Guillaume et al., 2014]). Additionally, an employee with more competence (an expert) within a given domain is skilled, competent, and thinks in qualitatively different ways than novices (Anderson, 2000), and is able to teach others. This means that such an individual has the social skills to adapt (personalize) communication to the recipient (Xu et al., 2005). In addition to his/her competence set, every worker is defined by the purpose of his/her action. In the proposed model, the current area of interest is defined by the selection vector $\sigma^i = [\sigma^i_1, \dots, \sigma^i_N]$, with binary elements $\sigma^i_n \in \{0, 1\}$. Applying a selection vector to a competence vector yields the worker's set of active competences. If $\sigma^i_n = 0$, then competence $n$ is outside the scope for the current time. All communication with coworkers and other activities are filtered by the selection vector.
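This worker description can be rendered as a small data structure. The following sketch is illustrative only; the attribute names and the scalar cognitive/social abilities are assumptions layered on the paper's vector-and-mask formulation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class KnowledgeWorker:
    """Hypothetical rendering of the model's worker description."""
    competences: np.ndarray  # C^i: non-negative competence levels, no upper bound
    selection: np.ndarray    # sigma^i: binary mask of the current area of interest
    cognitive: float         # ability to learn (assimilate knowledge)
    social: float            # ability to teach (transfer knowledge)

    def active_competences(self) -> np.ndarray:
        # Applying the selection vector to the competence vector yields
        # the worker's set of active competences
        return self.competences * self.selection

# Example: competence 3 is masked out (sigma_3 = 0), so it is outside the current scope
w = KnowledgeWorker(np.array([4.0, 7.5, 3.0]), np.array([1, 1, 0]),
                    cognitive=0.8, social=0.6)
print(w.active_competences())  # [4.  7.5 0. ]
```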
Network Definition

We can distinguish between different levels of networks in an organization; however, all of these layers should be reduced to a one-dimensional network in order to make processing more effective. For example, every employee is related to his/her peers through social, work-related, and other kinds of relationships. Furthermore, the communication-based social network is created from the data collected within the organization, such as e-mail logs, phone call records, surveys, and other sources (Michalski and Kazienko, 2014). A number of research papers have covered the issue of social networks by mining different organizational sources and metadata (Kilduff and Tsai, 2003), information diffusion in multilayer networks (Michalski et al., 2013), or the application of branching processes. In our approach, the organizational social network is a network structure created from the social, organizational, operational, and other layers of a company. More information about the different layers of company integration can be found in Michalski and Kazienko (2014) and Maier (2007). In order to estimate the strength of existing relationships between employees, we have to integrate all the networks into a common structure. In most cases, we need to assess relationship strength through the analysis of different types of relationships between employees. Moreover, due to the complex nature of organizational relationships, the resulting network will be very dense. All layers are based on the same set of nodes, where every node represents a knowledge worker. Graphs with multiple edge types are denoted as multilayer graphs but can be transformed into a single-layer undirected graph (Boden et al., 2012). The organization network for organization X is an undirected graph without self-loops, in which the set of nodes represents the knowledge workers, the set of edges represents the symmetrical relationships between nodes (knowledge workers), and each edge carries a weight reflecting relationship strength. The neighborhood of a given knowledge worker (node) consists of all nodes directly connected to it.

Knowledge Resource Broadcast in a Network

The traditional approach to knowledge resources includes the following elements in this group (based on Zhen et al., 2011): design cases, patents, technical standards, design formulae, design rules, software, and experts. In our approach, we focused on the communication between knowledge workers and did not model the content of the knowledge resources. The value of specific knowledge resources is determined by their impact on the competence set of a given resource's consumer. As a result, in order to increase the value of a specific knowledge worker's competence, he/she has to receive proper knowledge resources. The knowledge resources are transferred or exchanged during the employees' collaborations. In the competence context, knowledge resources can be possessed, transferred, acquired, developed, and stored. We can estimate the amount of competence available in knowledge resources based on the competences of the person who created the resource and his/her social ability to teach. The change in competence value is influenced by the recipient's cognitive abilities, the social ability of the sender, the recipient's existing competences with regard to the knowledge resource, and the weight of the relationship between them, subject to constraints; condition (3) assumes that the competence of the sending node is greater than that of the receiving node. In the proposed model, an employee distributes newly created knowledge resources to all connected employees. The knowledge resources are broadcast according to the following procedure: 1. knowledge resource creation, 2. knowledge resource transmission, and 3. knowledge resource assimilation. A toy sketch of one such broadcast round is given below.
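Before detailing each step, the three-step broadcast can be sketched over a weighted graph. The update rule, the 0.1 learning rate, and the attribute names below are illustrative assumptions, not the paper's exact formulas:

```python
import networkx as nx
import numpy as np

def broadcast_round(G: nx.Graph, sender: int) -> None:
    """One creation-transmission-assimilation round (illustrative update rule)."""
    w = G.nodes[sender]
    # 1. Creation: the resource carries the sender's selected competences,
    #    scaled by the sender's social (teaching) ability
    resource = w["C"] * w["sigma"] * w["social"]
    for nbr in G.neighbors(sender):
        r = G.nodes[nbr]
        tie = G.edges[sender, nbr].get("weight", 1.0)
        # 2./3. Transmission and assimilation: flow only where the resource exceeds
        #    the receiver's competence (cf. condition (3)); the gain is scaled by
        #    tie strength and the receiver's cognitive (learning) ability
        gap = np.maximum(resource - r["C"], 0.0) * r["sigma"]
        r["C"] = r["C"] + 0.1 * tie * r["cognitive"] * gap  # 0.1: assumed learning rate

G = nx.watts_strogatz_graph(20, 4, 0.1)
rng = np.random.default_rng(0)
for v in G.nodes:
    G.nodes[v].update(C=rng.uniform(0, 10, 5), sigma=rng.integers(0, 2, 5),
                      cognitive=rng.uniform(0.5, 1), social=rng.uniform(0.5, 1))
broadcast_round(G, sender=0)
```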
Knowledge Resource Creation

Let us assume that knowledge resources are created by employee $A$ and described by a vector $r^A$. The quality of the developed knowledge resources depends on worker $A$'s social ability to teach, $s^A$, and on his/her competence vector $C^A$. Moreover, the selection vector for the knowledge resources, $\sigma^A$, locates the selected subset of employee $A$'s competences in the knowledge resources. Some part of the knowledge related to competence $c^A_n$ is stored in the resources; if $\sigma^A_n = 0$, then competence $n$ is outside of the knowledge resources. The knowledge resources can be saved and stored in the knowledge repository for future use.

Knowledge Resource Transmission

The knowledge resource created by worker $A$ is transmitted to all of $A$'s connected coworkers.

Knowledge Resource Assimilation

The knowledge resource received by a worker is assimilated according to his/her cognitive abilities and existing competences.

Roles in the Knowledge Network

From some perspectives, only negative and positive role identification is required in a knowledge network (Brendel and Krawczyk, 2008); in this case, the focus is on knowledge development versus knowledge deterioration and disintegration. However, in the proposed model, we analyzed the different roles related to knowledge processing. A broad overview of roles in a knowledge network can be found in Maier's work. In addition, Awazu (2004) introduced gatekeepers (who control the knowledge that enters or leaves a network) and bridges (who connect people who do not share common backgrounds, skills, or experiences). The paper by Boari and Riboldazzi (2014) adds two other roles: representative (communicates information to or negotiates exchanges with outsiders) and liaison (links distinct groups without any prior allegiance to each other). In our approach, we focused on three roles: knowledge facilitator, knowledge collector, and expert. All of these individuals integrate many of the roles presented earlier. The knowledge facilitator plays the role of the knowledge sponsor, administrator, and broker, who maintains contact between the workers (experts) in different fields and facilitates a faster flow of knowledge in the network. The expert (mentor/coach) introduces new knowledge into the network; as a result, the knowledge flow in the network can be redesigned. The knowledge collector is responsible for knowledge transfer to the company's repositories and plays the role of knowledge administrator and gatekeeper. One important issue we tend to overlook is the problem of management; in our opinion, the management issue will become important after the knowledge flow has been optimized.

Role Allocation

In real-world situations, information about a worker's cognitive and social abilities, as well as his/her level of competence, is difficult and costly to determine. For this reason, in the role allocation process, we focused on the network structure and the social characteristics of the network. Let us define the actions in time with relation to the nodes that accept a new role:

- The node $v_f$, which plays the role of knowledge facilitator, has to increase its relationship power by a given value.
- The expert is the node $v_e$ with an explicitly higher value of competence in the network.

Each allocation strategy takes into consideration a specific set of network characteristics. Strategies S1-S4 rely on well-known metrics from Social Network Analysis (Newman, 2003).
- S3 Closeness (criterion: MAX): closeness centrality focuses on how close a node is to all the other nodes in a network (Wasserman and Faust, 1994) and how long it will take to spread information from the node to all other nodes sequentially (Newman, 2005).
- S4 Betweenness (criterion: MAX): betweenness represents the total amount of flow that a node carries when a unit of flow between each pair of nodes is divided up evenly over the shortest paths possible (Kleinberg and Easley, 2010). High-betweenness nodes occupy critical roles in the network ("gatekeepers").
- S5 Time sharing (criterion: MAX): the network configuration can provide information about the possible working time needed to pass information to a node's neighbors. If a node is connected with a number of other nodes, its working time has to be divided and shared between all connected nodes.
- S6 Dissemination (small world; criterion: MIN): based on information about our neighborhood (the neighbors of our neighbors), we select the most-linked nodes for future cooperation. In this strategy, we select a node with a lower degree, but one that is still connected to high-degree nodes; we focus on the potentially best-connected future source of knowledge.

It is important to notice that strategies S3 and S4 are strongly dependent on the weights in the network beyond the topological effects. The relationship between nodes is weighted in proportion to the organization's structure at an organizational, social, and cognitive level. As a result, we have to use weighted versions of the algorithms to determine closeness and betweenness (Opsahl et al., 2010; Opsahl and Panzarasa, 2009). Strategies S5 and S6 are based on information about the nodes' neighborhood configuration as reflected in the Co-Author Model (Tambayong, 2007). An important aspect of networks with multiple relations is the possibility of node cooperation time (S5). This function is understood as the ability of a node to make its resources available to other nodes. We can define the cooperation time based on the Co-Author Model. The Co-Author Model is a metaphor for the work of researchers who spend time writing papers. According to Jackson and Wolinsky (1996), a link represents the collaboration between two researchers, and the amount of time a researcher spends on any given project is inversely related to the number of projects that particular researcher is involved in. In this model, indirect connections enter the utility function in a negative way, as they detract from one's coauthor time (Tambayong, 2007). The cooperation time strategy for node $i$ from network $N$ is formulated in the following way (Jackson and Wolinsky, 1996):

$$u_i = \sum_{j \in N_i} \left( \frac{1}{n_i} + \frac{1}{n_j} + \frac{1}{n_i n_j} \right),$$

where $N_i$ is the set of node $i$'s direct coworkers and $n_i$, $n_j$ are the numbers of ties maintained by nodes $i$ and $j$. The greatest value of the function is given to the node that works with many coworkers on an exclusive basis. On the other hand, the smallest value means that the node is connected with nodes that are themselves connected to a high degree. Such observations are the basis for strategy S6. Moreover, the weight between nodes does not affect the S5 and S6 strategies.
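A hedged sketch of the selection strategies follows; it uses networkx's weighted centralities for S3/S4 and the co-author utility above for S5, while S6 is approximated by negating that utility so that the MIN criterion can reuse the same ranking (a simplification of the neighborhood-based description):

```python
import networkx as nx

def coauthor_utility(G: nx.Graph, i) -> float:
    """Jackson-Wolinsky co-author utility used as the time-sharing score (S5)."""
    ni = G.degree(i)
    if ni == 0:
        return 0.0
    return sum(1/ni + 1/G.degree(j) + 1/(ni * G.degree(j)) for j in G.neighbors(i))

def allocate_role(G: nx.Graph, strategy: str, k: int = 1):
    """Return the k best nodes for a role under a given selection strategy."""
    if strategy == "S3_closeness":        # weighted closeness, MAX
        scores = nx.closeness_centrality(G, distance="weight")
    elif strategy == "S4_betweenness":    # weighted betweenness, MAX
        scores = nx.betweenness_centrality(G, weight="weight")
    elif strategy == "S5_time_sharing":   # co-author utility, MAX
        scores = {v: coauthor_utility(G, v) for v in G}
    elif strategy == "S6_dissemination":  # small-world score, MIN (negated for ranking)
        scores = {v: -coauthor_utility(G, v) for v in G}
    else:
        raise ValueError(strategy)
    return sorted(scores, key=scores.get, reverse=True)[:k]

G = nx.watts_strogatz_graph(484, 4, 0.1)
print(allocate_role(G, "S4_betweenness", k=3))
```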
Simulation Results

The proposed model was verified through simulations of knowledge diffusion and competence development within an organizational social network based on knowledge workers' collaborative learning. Simulations were performed on a Watts-Strogatz network with 0.1 rewiring probability and 484 nodes. Each network node was assigned an initial competence from the range (0, 10) using the previously defined competence vector $C^i$ with ten elements, masked with binary values representing the availability to receive and transfer competence. The main goal of the simulations was to show the areas of application in competence management within an organization based on knowledge workers and their behaviors in the area of collaborative learning. Simulations were performed with the parameter B = 0.006, which represents the process of forgetting knowledge. During the simulations, the proposed strategies were verified for the selection of knowledge workers for specific roles such as experts, for the increased edge weights representing social relations, and for the knowledge collectors storing knowledge in the knowledge bank. In the first step, the role of experts within the network was modeled and the selection process occurred based on six strategies (see Table 1). The results were compared with a reference simulation (R) based on the knowledge flow without identified roles. Using the proposed model, it was possible to simulate changes after increasing the competence of experts with knowledge randomly assigned from the range (10-50). Ten percent of the nodes were selected according to strategies S1-S6 and the results were modeled over 500 steps. Fig. 2 presents the average competence from the simulation; the reference simulation, without any changes, was added to the results shown in Fig. 2. (Fig. 2: simulations based on increasing the competence of selected nodes, by introducing experts to the network.) The initial starting competence resulted in an average value of five and stabilized during the first 100 steps of the simulation. The best results were obtained for the dissemination strategy, with an average competence of 45 at the 500th step of the simulation. The strategy based on selecting knowledge workers with maximal closeness resulted in a 10% smaller average competence and was similar to the time-sharing-based strategy, with an average competence of 40. Expert selection along with betweenness resulted in a 20% smaller result, with an average competence value of 32.5, while the degree-based strategy was similar to a random strategy in its measurements. The simulations represent a situation in an organization where there is a real need to increase the competence of a selected group of knowledge workers. One approach is training, which generates additional costs. Another approach can be based on the knowledge facilitator, who is responsible for better communication and access to resources. This approach is based on increasing the weights representing social relations for a selected set of nodes. The selection of nodes can be performed using different network measures (strategies S1-S6); the results are presented in Fig. 3. The betweenness-based strategy delivered an average competence of 11.5. The reference average competence based on simulations without changing the weights delivered results similar to random selection. Increasing the value of the weights represents a situation within an organization where social relations can be improved, resulting in better knowledge flow. Selecting simulated roles can improve the flow of knowledge within a network; for example, the role of knowledge collector can improve competence management and the use of stored knowledge. Selecting workers responsible for knowledge collection can be done based on the strategies used for expert selection. This role can be assigned using the presented strategies; results are presented in Fig. 4 for the 50 collectors selected within the network. (Fig. 4: simulation based on the role of the knowledge collector.)
During the simulations, the total gathered knowledge was computed and compared for the different strategies. The best results were obtained for collectors based on the betweenness and closeness strategies; the aggregated value of competence for both strategies was 450 at the 500th step of the simulation. There was a 10% decrease in results, with a value of only 400, for the strategy related to the time-sharing measure, closely followed by the degree strategy. The worst results were obtained for the dissemination strategy, with a value of only 350, which was 8% lower than the random selection strategy. For all strategies, the level of gathered knowledge stabilized after dynamic growth in the first 100 steps of the simulation. The simulations show that the best node for the expert role can be selected according to its neighborhood structure. This is because the expert's main role is to provide new knowledge to the network. In the first step, the new knowledge is distributed to the expert's neighborhood; at this point it is important to accurately transfer as much knowledge as possible. In the next step, nodes from the neighborhood redistribute the knowledge to their own neighborhoods based on their connections. Here, having a dense neighborhood structure is important. If we set aside the nodes' cognitive/social characteristics and knowledge potential, the most effective node for the expert role is the one with the most nested neighborhood. The best neighborhood structure for the expert role is a subject for future research. Moreover, interesting results may be gathered from clique analysis of sets of nodes from a node's neighborhood. The second simulation approach (Fig. 3) focused on more effective knowledge distribution. The knowledge facilitator is selected to speed up the transfer of knowledge in certain parts of the network. The simulation shows that, similar to expert role selection, the knowledge facilitator selection process seeks nodes with the most efficient neighborhood; however, in this situation we focused on cliques that were explicitly separated from other parts of the network. In the last simulation (Fig. 4), we looked for nodes with the best in/out transfer ratio in the network. The potential nodes for knowledge collection should play the role of transfer points in the overall network structure. Closeness and betweenness are best suited for the knowledge collector selection process because they take into consideration the overall network structure; moreover, both of these metrics evaluate the value of relationships between nodes. Generally speaking, interesting results regarding the presented tasks can be obtained if we analyze the nodes' cognitive/social characteristics and their knowledge level in addition to the network structure.

Knowledge Flow

The community of practice is discovered through the analysis of the working area of each user (the node selection vector $\sigma^i$). More specifically, some parts of the selection vector are chosen and form the core of the community of practice $z$, denoted $\sigma^z$, for every member $i$ of $z$. If the selection vectors are compatible, then we can assume that the related workers are working in the same area of interest and can be matched to the same community.
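One way to render this matching is sketched below. The containment test (a worker joins community $z$ when the core mask is covered by the worker's selection vector) is an assumed compatibility criterion; the paper itself uses multi-label classification for this step:

```python
import numpy as np

def detect_communities(sigmas: dict, cores: dict) -> dict:
    """Assign workers to (possibly overlapping) communities of practice.

    sigmas: worker id -> binary selection vector sigma^i
    cores:  community id -> binary core mask sigma^z
    """
    return {z: [i for i, s in sigmas.items() if np.all(s >= core)]
            for z, core in cores.items()}

sigmas = {0: np.array([1, 1, 0, 1]), 1: np.array([0, 1, 0, 1]), 2: np.array([1, 0, 1, 0])}
cores  = {"z1": np.array([0, 1, 0, 1]), "z2": np.array([1, 0, 0, 0])}
print(detect_communities(sigmas, cores))  # {'z1': [0, 1], 'z2': [0, 2]}
```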
In addition to knowledge flows, we analyzed the knowledge energy of each node in order to identify each node's importance in the knowledge flow and in the community. A node's knowledge energy, a concept developed by Zhuge (2006), is a numeric representation of the node's cognitive and creative ability. Knowledge energy is the power that drives knowledge flow, so it is also called "knowledge power" or "knowledge intensity" (Zhuge, 2004). Furthermore, in the proposed model, node i's knowledge energy is estimated from the node's level of competence as well as its cognitive and social abilities, according to formula (7). Formula (7) reflects the node's knowledge potential in a network for the community of practice z. From the dot product of the competence and selection vectors, information about the importance of the community of practice and its levels is obtained. To form the full picture of a node's knowledge potential, we should account for the node's ability to learn and teach as the basis for knowledge transfer and assimilation in the knowledge flow. In order to create effective knowledge flows, the following principles must be fulfilled (as defined by Zhuge et al. [2005]):
- Between any two nodes, knowledge flows only when their energies differ by at least one unit.
- A knowledge flow network is efficient if every flow runs from a node of higher energy to one of lower energy.
- Knowledge energy differences tend to diminish over time.
- If knowledge does not depreciate, then its energy will never decrease.
The presented principles provide some idea of how to manage knowledge flows in the community of practice. The most important statement is the one related to the ordering of nodes: in general, the knowledge flow should move from the node with the highest energy to a node with lower energy, all the way down to the node with the lowest energy.
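Since formula (7) itself is not recoverable from the text, the following minimal sketch shows one plausible combination of the quantities it is said to involve (the dot product of competence and selection vectors, together with the node's learning and teaching abilities), plus a check of the flow-direction principle. All names and the exact functional form are assumptions.

```python
import numpy as np

def knowledge_energy(competence, selection, learn, teach):
    """Illustrative knowledge-energy score for one node and one community:
    the dot product of the competence and selection vectors, scaled by
    the node's cognitive (learning) and social (teaching) abilities."""
    return float(np.dot(competence, selection)) * (learn + teach)

def flow_allowed(energy_src, energy_dst, unit=1.0):
    """Flow principle: knowledge flows only from higher to lower energy,
    and only when the two energies differ by at least one unit."""
    return energy_src - energy_dst >= unit

# example: a node's competence vector c against a community core s_z
c = np.array([3.0, 0.0, 5.0, 1.0])
s_z = np.array([1, 0, 1, 0])
e = knowledge_energy(c, s_z, learn=0.7, teach=0.4)  # -> 8.8
```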
Community of Practice Acceleration Procedure
Due to network complexity, it is extremely difficult to develop methods that find an optimal solution for accelerating the growth of a community of practice. The proposed procedure is therefore a heuristic approach to the problem. The aim of the presented procedure for accelerating a community of practice is to improve the knowledge flows between community members. In other words, the analysis of the relationships between community members and of node energies allows decisions to be made about various ways to accelerate the community's knowledge flows. In the proposed approach, we improve community knowledge transfers by creating new relationships between community members. We did not consider the problem of deleting relationships, as we cannot damage the existing structures in an organization. The community of practice acceleration procedure starts with community detection. The detection process is based on the node selection vectors and looks for the community of practice core s_z. The selection vector for the community of practice core helps identify core community competences. We assume that nodes with similar selection vectors work in the same field of activity and use the same set of competences. Node classification is handled by multi-label classification (Madjarov et al., 2012). As a result, the set of network nodes is divided into overlapping sets of nodes within communities. Another important concept is the efficiency of knowledge transfer between nodes. According to Zhuge (2005), the flow transports knowledge from nodes with higher knowledge energy to nodes with lower energy. The efficiency of knowledge transfer reflects the shortest path over which knowledge is transferred; each step of the transfer is influenced by the starting node's social (teaching) abilities, the weight of the relationship itself, and the receiving node's cognitive (learning) abilities. For a shortest path from a starting node v_x to a final node v_y, combining these factors along the path, the efficiency of knowledge transfer between any two nodes of the community can be written as

  e(v_x, v_y) = (1/d) * Σ σ_a · w_ab · α_b,    (8)

where the sum runs over the subsequent pairs of nodes (v_a, v_b) in the shortest path, σ_a is the teaching ability of the sending node, w_ab is the weight of the tie between them, α_b is the learning ability of the receiving node, and d is the number of nodes in the shortest path. Our concept of community acceleration amounts to a more efficient knowledge flow between the nodes of a selected community. In order to accelerate the knowledge flow, the proposed procedure suggests the location of a new tie and its value. The community of practice acceleration procedure is as follows:
1. Classify nodes in order to discover their communities.
2. For each community, the efficiency of knowledge transfer between its nodes is determined based on formula (8).
3. The pair of nodes with the smallest value of the efficiency of knowledge transfer is selected.
4. A new direct tie between them is created. The strength of this tie can take different values; in our approach, we heuristically assumed that the tie is equal to the average tie strength in the network.
The presented procedure is applied to each community z_k over a period of time in order to achieve the assumed efficiency of knowledge transfer between the nodes. On the one hand, the procedure should be applied based on need, due to the continually changing node energies. On the other hand, the procedure creates ties that can be costly to maintain. In some cases, the idea of creating a new relationship is questionable due to differences in the workers' base knowledge or their different positions in the company's structure.
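A minimal sketch of one acceleration step follows. The `efficiency` callable stands in for formula (8), and the ring-lattice degree used to generate the Watts-Strogatz test network is an assumption not stated in the text.

```python
import itertools
import networkx as nx

def accelerate_community(G, community, efficiency):
    """One acceleration step: evaluate the transfer efficiency (formula (8),
    passed in as `efficiency`) for every unconnected pair in the community
    and connect the pair with the smallest value by a new direct tie whose
    strength equals the average tie strength in the network."""
    candidates = [(x, y) for x, y in itertools.combinations(community, 2)
                  if not G.has_edge(x, y)]
    if not candidates:
        return None
    x, y = min(candidates, key=lambda pair: efficiency(G, *pair))
    weights = [w for _, _, w in G.edges(data="weight", default=1.0)]
    G.add_edge(x, y, weight=sum(weights) / len(weights))
    return x, y

# test network from the simulations: 25 nodes, rewiring probability 0.1
# (the ring-lattice degree k=4 is an assumption)
G = nx.watts_strogatz_graph(25, 4, 0.1)
```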
Simulation Results
To illustrate the proposed approach in detail, a Watts-Strogatz network with a rewiring probability of 0.1 and 25 nodes was generated. Each node was assigned an initial competence c_i from the range (0, 10) and masks m_j with binary values representing the availability to receive and transfer competences; the masks constitute the selection vectors assigned to each node. The nodes were grouped into three clusters, C1, C2, and C3, based on mask similarity, and a core set of identical competences with binary masks was identified for each cluster. The first cluster C1 was assigned the nodes [3, 5, 7, 11, 12, 13, 23]; cluster C2 was assigned nodes [2, 4, 6, 8, 10, 14, 15, 16, 21, 22, 24]; and cluster C3 was assigned nodes [0, 1, 9, 17, 18, 19, 20]. Within the first cluster, competence c5 with mask m5 was identified as a core competence; for the second cluster, the core competences are based on a set with masks m9 and m10; within the third cluster, a set of competences with masks m5 and m7 was identified as the core. The social network with the illustrated clusters based on competence vectors is shown in Fig. 5. The problem within an organization is that even when knowledge workers are similar in terms of attributes, they can be unconnected or only weakly connected to potential coworkers. In such situations, creating additional ties can improve the network's characteristics. Knowledge flow within the network was first analyzed without any changes (I); in the second step, an additional random link was added within cluster C1 between nodes 11 and 23 (II). In the next step, a second random connection, between nodes 7 and 13, was added (III). In the fourth step of the simulations, a connection was computed using the proposed approach, resulting in a connection between nodes 5 and 3 (IV). Simulations were performed on the four versions of the network over 500 steps to compare the results. The main goal of the simulations was to improve knowledge flow and monitor the core competence, represented by mask m5, for the nodes within cluster C1 (N3, N5, N7, N11, N12, N13, N23). The results of the simulations are presented in Figs. 6-9. (Fig. 9: Simulations based on the link N3-N5 added using the proposed approach.) Simulations performed on the unchanged network resulted in a maximal level of 9.5 for competence c5, followed by a continuous drop, which is visible in Fig. 6. Adding a single random link improved the results, and a maximal level of 11 was obtained (Fig. 7). Similar results were achieved for the network with two random links added within the first cluster (Fig. 8). Even though the maximal results improved, a drop in competence across the network was still observed. The use of the proposed method for the selection of a new connection, between nodes N3 and N5, is illustrated in Fig. 9. The proposed approach resulted in improvements within the cluster for most nodes, and competence c5 increased.

Conclusion
One of the important features of the proposed approaches is their ability to accurately predict organizational network development. In order to predict the movement of knowledge flows, we have to acquire information about worker competences and mutual relationships. The competence audit is a complex and costly operation. Under normal conditions, an organization is able to maintain only a limited number of audits, usually once per year for each worker. For this reason, the ability to predict future changes in an organizational network and in worker competence levels is very valuable. The presented approach, based on network behavior, allows the prediction of worker characteristics depending on worker roles, membership in communities of practice, and new relationships between the workers. Moreover, the knowledge collector role facilitates the analysis of the development of the company's repository. The knowledge workers' collaborative learning behavior model is based on knowledge flows and resource modeling. From the modeling side, the learning-teaching process is a complex activity where both sides have their own interests, which are reflected by their strategies. Generally, knowledge workers seek to transfer knowledge to other workers and follow organizational objectives in order to achieve some level of competence through them. The presented model helps analyze and change a given node's learning behaviors by changing competence levels or the tie structure in order to increase a company's (average) level of competences. Analyzing the structure of organizational social networks in terms of knowledge flow should be done in two stages, using both the network structure and the attributes of the nodes. For future work, the proposed approach can be extended to the identification of communities within the graph, seeking relationships between clusters created from the vectors assigned to nodes; the results could then be verified using real-world datasets and more extensive simulations.
Does working at a start-up pay off?

Using representative linked employer-employee data for Germany, this paper analyzes short- and long-run differences in the labor market performance of workers joining start-ups instead of incumbent firms. Applying entropy balancing and following individuals over ten years, we find large and long-lasting drawbacks from entering a start-up in terms of wages, yearly income, and (un)employment. These disadvantages hold for all groups of workers and types of start-ups analyzed. Although our analysis of different subsequent career paths highlights important heterogeneities, it does not reveal any strategy through which workers joining start-ups can catch up with the income of similar workers entering incumbent firms.

Workers do not benefit from entering start-ups. This study analyzes a large data set for Germany and compares similar workers who either join newly founded firms, so-called start-ups, or incumbent firms. It follows both groups of workers over ten years and finds substantial short- and long-run differences in their labor market performance. Entering a start-up instead of an incumbent firm is associated with considerable drawbacks in terms of workers' wages, yearly income, and employment. These disadvantages exist for the various types of workers analyzed and for different types of start-ups. There is no strategy through which workers who join start-ups can catch up with the income of comparable workers who enter mature firms. Thus, the practical implication of this study is that for most workers, it is advisable not to enter a start-up if they have a chance to obtain a decent job in a mature firm.

Introduction
The role of newly founded firms, so-called start-ups, in structural change and job creation is a highly disputed topic in both scientific and political debates (Shane, 2009). There exists a broad empirical literature focusing on the quantity of jobs created and destroyed in new firms, across regions and at the aggregate level, mostly finding positive net effects of start-ups (see, e.g., Haltiwanger et al., 2013 for the USA; Fritsch & Weyh, 2006 for Germany; Criscuolo et al., 2014 for 18 countries; and the review by Block et al., 2018). In contrast, relatively few studies have analyzed the quality of these jobs from the viewpoint of the individual worker. Some of these studies suggest that job quality in start-ups may be questionable, but the evidence so far is too scarce to make any definite statements (see the reviews by Block et al., 2018 and Nyström, 2021). For workers, be they employed or unemployed, it is largely an open question whether joining a newly founded rather than an incumbent firm is advisable or not. Hence, the primary objective of this paper is to analyze empirically whether working at a start-up is beneficial in the short and long run for individual workers. Are there temporary or persistent advantages and disadvantages in terms of remuneration and (un)employment prospects from entering a start-up rather than an incumbent firm? Although the quality of jobs is a multi-dimensional concept that also includes work content and non-monetary benefits (Block et al., 2018), the employment and earnings prospects individuals face in start-ups surely play a major role.
Workers entering a start-up rather than an incumbent firm may receive higher wages as compensation for the higher failure risk of start-ups, but they could also initially face lower wages due to the financial constraints of their young employer operating at an inefficient scale (Brixy et al., 2007). In the latter case, working at a start-up could pay off in the long run if the new firm survives and becomes more profitable (Nyström, 2021). Wages in start-ups might even rise more steeply than in incumbent firms if flat hierarchies in expanding young firms mean that the initial workers are first in line to reach better-paid positions quickly (Fackler et al., 2019). Similarly, the greater variation in performed tasks and the expanded responsibility individuals typically experience in (small) start-ups may accelerate their career progression and earnings growth when moving to other, more mature firms. On the downside, the diverse and often idiosyncratic activities employees perform in start-ups may limit earnings growth and impede workers from moving to incumbent, better-paying firms (Sorenson et al., 2021). Furthermore, wage profiles could be steeper in incumbent firms if these are more likely to offer backloaded compensation schemes to their employees, a strategy that will be less credible for risky new firms (Schmieder, 2013). Regarding (un)employment prospects, workers in newly founded firms face a high risk of involuntary job loss due to their employer's closure (Fackler et al., 2013; Fairlie et al., 2019; Haltiwanger et al., 2013). Hence, entering a start-up might be associated with a higher risk of unemployment and worse future labor market opportunities due to displacement and stigma effects (Sorenson et al., 2021). As start-ups are particularly vulnerable to economic downturns, displaced employees of start-ups may experience more serious problems in finding a new job during a recession (Sorenson et al., 2021), and the resulting spells of unemployment may have negative, long-lasting effects on employment and earnings trajectories. Employees joining start-ups can thus be expected to record fewer days in employment and more days of benefit receipt (and consequently lower annual incomes). These brief considerations suggest that it is initially not clear whether entering a start-up as opposed to an incumbent firm will pay off for workers in the short and the long run. In addressing this open question, previous research has primarily compared average wages in start-ups and incumbents or focused on differences in workers' entry wages at the point of being hired. The empirical evidence so far is ambiguous (see the reviews by Block et al., 2018 and Nyström, 2021). While some papers show that wages are significantly lower in start-ups than in incumbent firms, ceteris paribus (e.g., Fackler et al., 2019; Nyström & Elvung, 2014), others find a positive wage differential, in particular for very successful start-ups (e.g., Ouimet & Zarutskie, 2014; Schmieder, 2013). Brixy et al. (2007) identify a negative wage differential that becomes smaller over time, but they only have data at the level of establishments, not of workers. According to Burton et al. (2018), the typical start-up, which is both young and small, pays less than the average incumbent firm, but the largest start-ups even pay a wage premium. Babina et al. (2019) report a pay penalty at young firms that turns into a small pay premium after controlling for various dimensions of worker and firm heterogeneity.
Finally, Kim (2018) finds that MIT graduates at venture-capital-backed start-ups (but not at other start-ups) earn about 10% higher entry wages than their counterparts at incumbent firms, which mainly reflects worker ability and selection. Very few papers have been able to follow workers and their wages over time. The paper most closely related to our research is the study by Sorenson et al. (2021) using Danish registry data. Like us, the authors use a matched employer-employee database and follow (full-time) employees for ten years after they change employers. They show that individuals who join young firms (i.e., firms less than four years old) earn substantially less than matched employees of large, mature firms over the subsequent ten years, and these earnings disparities are not found to diminish over time. Analyzing linked employer-employee data from Britain, Adrjan (2018) finds that young firms pay slightly higher wages to new hires, but subsequent wage growth is steeper at mature firms. He demonstrates that this finding holds both within continuing employment relationships and for individuals who change jobs, but he is not able to further analyze workers' (un)employment trajectories. A certain limitation of both studies is that they focus only on remuneration as the sole indicator of labor market success. In her recent review article, Nyström (2021, p. 928) concludes that "there is a clear scarcity of research regarding the long-term wage trajectories of employees in entrepreneurial firms." In addition, there is a lack of studies that look at the long-term (un)employment trajectories of individuals. Our paper contributes to this small literature and goes beyond previous studies in various ways. First, when asking whether it pays off to enter a start-up rather than an incumbent establishment, we do not solely focus on wages but also consider other indicators of labor market success, such as days in employment and unemployment benefit receipt. This is important because workers suffer from job loss and unemployment not only in terms of earnings losses but also in terms of non-monetary outcomes such as psychological costs or negative effects on children and families (see, e.g., the survey by Brand, 2015). Second, using a large, representative linked employer-employee data set for Germany, we follow individuals joining a start-up over ten years and analyze whether there are differences in wages and (un)employment compared to similar individuals who have entered incumbent firms. To ensure comparability of the two groups of workers, we apply entropy balancing (Hainmueller, 2012). We then examine whether the remaining differences are only temporary or long-lasting and whether they vary for different groups of workers. Third, we further add to the literature by investigating various potential explanations for the observed short- and long-term differences, such as joining successful vs. failing start-ups or pursuing different subsequent employment paths (like staying in or leaving the establishment). The upshot of our empirical analysis is that there are large and long-lasting drawbacks from entering a start-up rather than an incumbent establishment. Workers joining start-ups experience significantly lower income and daily wages, which is in line with recent studies on wage developments by Adrjan (2018) for Great Britain and Sorenson et al. (2021) for Denmark.
In addition, we present first evidence that workers entering a newly founded firm record fewer days in employment and more days of benefit receipt than their counterparts joining incumbent firms. These disadvantages are persistent and hold for all groups of workers and types of start-ups analyzed. The remainder of the paper is organized as follows: Section 2 explains our data and provides descriptive evidence on the composition of workers entering either a new or an incumbent firm. The methods and results of our econometric analyses are presented and discussed in Section 3. Section 4 concludes.

Data and descriptive evidence
To analyze the different labor market prospects of workers entering either a start-up or an incumbent firm, we use an extensive linked employer-employee data set for Germany based on social security notifications, which is provided by the Institute for Employment Research (IAB). Our data set combines worker-level information from the Integrated Employment Biographies (IEB) and establishment-level information from the Establishment History Panel (BHP). Detailed data on labor market participants are collected in the IEB, which provides daily information on employment relationships for all workers subject to social security notifications, as well as periods of benefit receipt, registered job search, and participation in active labor market programs from 1975 to 2014 for Western Germany. 3 Since 1992, Eastern Germany is included in the data as well, and from 1999 onwards, information on marginally employed workers is collected, too. Additionally, the IEB contains individual characteristics such as age, gender, education, and nationality. 4 Yearly information on all German establishments with at least one worker subject to social security contributions is contained in the BHP, including size, sector, location, and workforce composition as of June 30 of a given year. 5 Crucially for our analysis of newly founded establishments, the BHP also contains information on worker flows (Hethey-Maier & Schmieder, 2013). In order to distinguish whether a new establishment identifier in the data refers to a truly new entry or is caused by mergers, acquisitions, or other changes of the identification number, worker flows are used to identify which fraction of a new establishment's initial workforce has previously been employed together in another establishment. We restrict our analysis to newly founded establishments defined by Hethey-Maier and Schmieder (2013) as "new (small)" or "new (med & big)", implying that the establishment either employs not more than three workers in its first year of business or, if larger, that less than 30% of its initial workforce have worked together under a common establishment identifier in the previous year. Moreover, it must be noted that establishments in the BHP are defined as local production units, which do not necessarily correspond to firms as legal entities. Since we intend to focus our analysis on the foundation of new, independent firms instead of branch openings of multi-plant firms, we exclude establishments with more than 20 employees in their first year of business.
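For intuition, these classification rules can be condensed into a few lines of code. The sketch below is illustrative only; the data frame and its column names (such as size_year1 and common_inflow_share) are assumptions, not the variable names used in the BHP.

```python
import pandas as pd

def flag_startups(est: pd.DataFrame) -> pd.Series:
    """Flag truly new establishments following the worker-flow logic of
    Hethey-Maier and Schmieder (2013): at most three workers in the first
    year of business or, if larger, less than 30% of the initial workforce
    previously employed together; plants with more than 20 employees in
    year one are dropped to exclude branch openings."""
    small_new = est["size_year1"] <= 3
    low_inflow = (est["size_year1"] > 3) & (est["common_inflow_share"] < 0.30)
    not_branch = est["size_year1"] <= 20
    return (small_new | low_inflow) & not_branch
```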
We evaluate the success of this procedure in reducing the number of branch openings by using information from the IAB Establishment Panel, a yearly survey of approximately 16,000 German establishments. 6 Since the Establishment Panel includes information on single- and multi-plant firms, we can link this information with those establishments from the BHP that we classify as start-ups as described above and that meet the further sample restrictions described below. It can be shown that circa 94% of the establishments we define as start-ups are independent new firms, while only 6% are branch openings of existing entities. The sample of start-ups used for our analyses consists of a 10% random draw of all establishments newly founded in the years 2000 to 2004, focusing only on establishments in their very first year of business. We then link information from the IEB on all newly hired workers in the respective year, i.e., workers who have not been working for the same employer in the previous year. Since workers' employment biographies are available until 2014, this allows us to follow each cohort of workers (and firms) over ten subsequent years. Note that we do not restrict our analysis to a balanced panel but allow for attrition, e.g., due to exit from the labor force. The control group of incumbent establishments is constructed by drawing a 5% sample of all establishments existing during that period. Here, for each cohort of workers, we keep only those who join establishments that are five years or older. 7 In both groups, we exclude establishments in agriculture, energy and mining, and in the public sector. We further exclude workers younger than 18 and older than 50 at the time of being hired, as well as apprentices. Table 1 gives a short overview of the establishments in our final sample. To summarize the composition of workers entering new and incumbent establishments, respectively, we present selected individual characteristics at the point of entry in Table 2. We see that the two groups differ significantly in almost all variables presented. Workers entering new establishments are more often women, and they are on average older than the control group. They are more often medium-qualified, while a higher share of workers entering incumbent establishments is either low-qualified, i.e., having no degree at all, or high-qualified, i.e., graduated from university. Moreover, workers taking up a job in a start-up are less often of German nationality, have less frequently performed a job-to-job transition, 8 and are less often hired in a part-time job. In terms of years of working experience, we find no significant differences, while workers entering new establishments have previously spent more time in benefit receipt.

Footnote 3: This implies that the IEB only includes information on hired employees. The founders of the firms are not listed in the data, since they are not subject to social security contributions.
Footnote 4: For more information on the IEB, see Antoni et al. (2016), who provide a description of the Sample of Integrated Labour Market Biographies (SIAB), a 2% random sample from the IEB.
Footnote 5: For detailed information on the BHP, see Schmucker et al. (2016).
Footnote 6: For further information on the IAB Establishment Panel, see Ellguth et al. (2014). We do not use the IAB Establishment Panel in our main analysis, even though it includes some additional information at the firm level, because the number of young establishments in the data set is rather small and typically establishments in their very first year of existence are not included in the survey at all.
Footnote 7: The threshold of five years might appear arbitrary, but Brixy et al. (2006) show that after the first five years of business, differences in wage levels and working conditions between new and incumbent firms become insignificant.
Footnote 8: Following Fackler et al. (2019), we define job-to-job transitions as recruitments where individuals left their previous job not more than 90 days before joining the respective establishment, hence allowing for a short period of frictional unemployment. If workers left their previous job more than 90 days ago and in the meantime were registered as a job seeker, received benefit payments, participated in labor market programs, or were not observed in the data, they are not defined as transitioning from employment.
Moreover, individuals entering start-ups have had more previous employers, which points towards more stable employment biographies in the control group. All these differences in the sample composition might affect the labor market success of the two groups of workers. Our goal in the following empirical analysis is to study workers' employment trajectories in the long run and to investigate whether various indicators of labor market success differ between workers entering either a start-up or an incumbent, thereby conditioning on a broad range of individual and firm characteristics.

Econometric approach
To account for differences in the composition of the groups of workers entering start-ups vis-à-vis incumbents, we apply entropy balancing (Hainmueller, 2012; see Hainmueller & Xu, 2013 for a description of the respective Stata ado-file ebalance). This method allows us to directly impose the first and second moments, i.e., means and variances, of a large set of covariates to be perfectly balanced between both groups. Without having to postulate any further assumptions, entropy balancing reweights observations to match the respective balance constraints while deviating as little as possible from the initial weights. By directly focusing on covariate balance, entropy balancing improves on related methods such as propensity score matching, which often depend on manual adjustment of the weighting scheme and repetitive balance checking and therefore frequently fail to balance all covariates perfectly. Moreover, while matching approaches often discard less comparable individuals in the control group, entropy balancing retains all relevant information by assigning weights smoothly to all observations in the data (Hainmueller & Xu, 2013). 10
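For intuition, entropy balancing can be written compactly in its dual form. The following is a minimal sketch for first moments only (Hainmueller, 2012); balancing variances as well amounts to appending squared deviations from the treated means as additional columns. The function name and the use of NumPy/SciPy are our own choices; the authors use the Stata ado-file ebalance.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_treat, X_ctrl):
    """Choose control weights, as close to uniform as possible in the
    entropy sense, such that the weighted control means equal the
    treated means (dual formulation of the entropy program)."""
    target = X_treat.mean(axis=0)
    Xc = X_ctrl - target                      # constraint: weighted mean = 0

    def dual(lam):                            # convex dual objective
        return np.log(np.exp(Xc @ lam).sum())

    lam = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS").x
    w = np.exp(Xc @ lam)
    return w / w.sum()                        # weights sum to one
```

At the optimum, the gradient of the dual objective is exactly the weighted mean of the centered covariates, so setting it to zero enforces the balance constraints.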
In our case, we aim to compare two groups of individuals with the same preconditions when joining an establishment, so that diverging trajectories in labor market performance in the subsequent years can be more credibly ascribed to entering either a new or an incumbent establishment. Thus, we balance the two groups of workers on a wide range of characteristics at the point of entering a start-up or an incumbent, respectively, and compare the subsequent career paths of the reweighted groups over the ten following years. More specifically, we require observations in the control group to be reweighted so that the means and variances of the workers' year of entry, sex, age, qualification, and German nationality equal those of the group of workers entering a start-up, since all these characteristics typically account for differences in individual career paths and wages. 11 We also balance the two groups in terms of preceding employment status, indicating whether an individual either has performed a job-to-job transition or has come from unemployment or from outside the labor market, and in terms of total previous years of experience and years of benefit receipt. In addition to these variables that might affect workers' labor market opportunities, we include the number of former employers in the balancing procedure to capture previous employment stability. Moreover, we also impose the two groups to be balanced concerning the new job's part-time status and occupation, as well as the (two-digit level) sector and labor market region of the establishment. 12 We do not include establishment size in our balancing procedure because comparing small start-ups with similarly small incumbents may be misleading. According to learning models such as Jovanovic (1982), new firms start at a small scale because they do not know their true efficiency. Firms that are more efficient will grow and survive, whereas less efficient firms shrink and eventually exit the market. Hence, comparing start-ups and incumbents of the same size implies a comparison between young (and potentially efficient) firms unaware of their optimal employment level and inefficient incumbent firms that have not grown or are even shrinking. Nevertheless, we also perform a robustness check making start-ups and incumbents more comparable in size, which is discussed in Section 3.3.

Footnote 10: To check whether our results depend on the empirical method chosen, we additionally run a robustness test where we substitute entropy balancing with propensity score matching. Moreover, we estimate an unweighted OLS regression in which we control for all explanatory variables that are also used in our balancing procedure. Results are almost identical to the main outcomes discussed below and are available upon request.
Footnote 11: While it would be technically possible to balance further moments of the variables' distributions, we act in accordance with Hainmueller (2012, p. 32), who states that "in many empirical cases we would expect the bulk of the confounding to depend on the first and second moments." As a technical side note, introducing skewness into the procedure does not change the balancing of most of our variables since they are coded as dummy variables.

We investigate individuals' labor market performance over time in terms of yearly income, 13 average daily full-time earnings, 14 days in employment, and days of benefit receipt in the reweighted sample for the ten years following workers' entry into the respective establishment. To compare these indicators between the two groups of workers, we run an OLS regression in the balanced sample of the form

  Y_it = Σ_t γ_t T_t + Σ_t β_t (T_t × SU_i) + ε_it,

where Y_it denotes the labor market outcome of interest for individual i in year t, and T indicates a set of relative time dummies, ranging from zero in the year in which the individual newly enters the establishment up to year 10. Additionally, these time dummies are interacted with a start-up indicator SU_i that is equal to one if the worker had entered a new establishment and zero for workers who had joined an incumbent firm at the beginning of the observation period. The coefficient β_t therefore shows the difference in the performance of the two balanced groups of workers in each year. Our empirical approach allows us to render the two groups of workers comparable along a broad set of observable characteristics.
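This event-study-style regression can be sketched as follows. The data frame df and its column names are illustrative assumptions, and the paper's exact specification may differ in detail.

```python
import statsmodels.formula.api as smf

# df holds one observation per worker and year, with (illustrative) columns:
#   income   - yearly labor income (one of the four outcomes studied)
#   rel_year - relative time T, 0 in the year of entry up to 10
#   startup  - SU_i, 1 if the worker entered a start-up in year 0
#   ebweight - entropy-balancing weight (1 for start-up entrants)
model = smf.wls(
    "income ~ C(rel_year) + C(rel_year):startup",
    data=df,
    weights=df["ebweight"],
).fit()

# the coefficients on C(rel_year)[t]:startup estimate beta_t, the yearly
# gap between the two balanced groups
print(model.params.filter(like="startup"))
```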
However, it must be acknowledged that there may be further dissimilarities between the individuals which we are not able to capture with our identification strategy but which could affect their future career paths as well. For example, workers entering start-ups might be less risk averse than workers who choose to work for an incumbent (Kim, 2018). One might also imagine workers joining new firms to have a stronger preference for non-monetary aspects of a job, such as flat hierarchies, more independence and responsibility, or more diverse tasks, which are often associated with working at a start-up (Sauermann, 2018; Sorenson et al., 2021). These characteristics may also play a role in workers' future career decisions and affect their success in terms of wages and employment. Since these (and other) unobservable differences could bias our estimate of differences in labor market performance, we additionally apply a robustness check in which we include workers' labor market outcomes of the three preceding years in our balancing procedure. By controlling for income, as well as days in employment, full-time employment, and benefit receipt, of the three years prior to entering the establishment, we abstract from any unobservable differences between the two groups of workers that had affected their labor market trajectories before our observation period. 16 Results of this robustness test will also be discussed in the following. However, we do not control for labor market outcomes in preceding years in our main analysis, since introducing the additional variables into entropy balancing would force us to discard all labor market entrants from our sample.

Footnote 12: We categorize occupations according to Blossfeld (1987). Labor market regions are classified on the basis of workers' commuting patterns according to Kropp and Schwengler (2011).
Footnote 13: Our income measure cumulates daily wages from all employment relationships of a given year and is deflated by the consumer price index. If an individual holds multiple simultaneous employment relationships, only the main (i.e., the highest paying) job is taken into account.
Footnote 14: Note that our indicator for wages, average daily earnings, is defined conditional on full-time employment. Since our data do not contain information on working hours, we are not able to calculate hourly wages. Hence, part-time workers are excluded from the analysis of wages to reduce heterogeneities in working hours. Wages are deflated by the consumer price index.

Results
The labor market trajectories of workers entering either a start-up or an incumbent establishment, both before and after entropy balancing, are presented in Fig. 1. A first look already reveals that workers who joined a start-up in year zero perform worse in terms of all outcome variables over the whole observation period. Even though entropy balancing strongly reduces the gap between the two groups of workers, pointing towards negative selection into start-ups, the overall patterns remain stable. Taking a closer look at each labor market outcome, one can see that workers entering a start-up already have lower yearly incomes in the year of entry, even after balancing. This gap seems to widen slightly in the first years and then remains very persistent, without any indication that workers who initially entered a new establishment catch up to the control group.
It should be noted that our indicator for yearly income captures two aspects: an employed worker's wage and (periods of) non-employment in the respective year, the latter being assigned zero earnings. We therefore disentangle the two aspects by looking separately at wages (conditional on full-time employment) and days in employment. Focusing on average daily full-time earnings first, Fig. 1b shows lower wages for workers in start-ups already in year zero, and the difference from the balanced group of workers entering incumbents hardly changes over the ten subsequent years. In terms of days in employment as well as days of benefit receipt, there is more variation over time. While differences in the year of entry are comparably small, the gap between the two groups widens considerably in the following two years, potentially picking up the effect of higher failure rates among start-ups. There seems to be some convergence in terms of days in employment, but workers who initially entered a start-up still perform worse than the control group even after ten years. In order to assess the differences in labor market performance and their statistical significance, Fig. 2 shows the estimation results of the OLS regression described above (the full estimates are reported in Tables 3 and 4 in the Appendix). More specifically, the lines indicate the magnitude of the coefficients β_t and the respective 95% confidence intervals for estimations in the unbalanced and the balanced sample. Our results confirm that workers entering a start-up perform significantly worse than the control group over the subsequent ten years. Even after entropy balancing, they earn about € 4000 (or approximately 20%) less yearly income from the second year onwards compared to workers who joined an incumbent firm, and this gap remains stable until the end of our observation period. Two factors contribute to this difference in yearly income: one is the persistently lower wages of approximately € 10 (roughly 15%) less per day, and the other is the continuously lower probability of being employed. After two years, workers in a new establishment spend almost 20 fewer days in employment per year than their peers in incumbents, and while this gap shrinks slightly over the following years, differences remain highly significant throughout the observation period. The fact that there is also a strong increase in days of benefit receipt compared to the control group over the first two years after entry suggests that these workers usually do not have other income sources compensating for employment losses. As discussed above, one might be skeptical whether our empirical approach is successful in reducing all differences between the two groups of workers, since entropy balancing cannot control for unobservable characteristics such as ambition or risk aversion. In Fig. 5 in the Appendix, we therefore present the results of a robustness check in which we include indicators of labor market performance in the three preceding years in the balancing procedure as crude proxies of unobserved characteristics.
It can be shown that although there are no remaining differences in terms of labor market success in the years -3 to -1 after reweighting the two groups, there are still substantial differences in labor market performance after entering the respective establishment, thus confirming the findings of our main specification. 20 These differences remain even if we include worker fixed effects. 21 Additionally, we use workers' previous employment histories to generate further proxies for unobservable preferences, namely the number of occupations (at the three-digit level) held before entering the establishment, the number of start-ups an individual previously worked for, and a dummy indicating whether the last employer was a young establishment not older than five years. While including these variables in the balancing procedure does not change our insights, it reduces our sample considerably, especially by those individuals who have just entered the labor market. Therefore, we do not include these measures in our main specification. Moreover, we estimate an additional robustness check where we restrict our analysis to workers who enter an establishment with a maximum of 20 employees, to make both groups more comparable with respect to establishment size. In our preferred specification, we do not control for establishment size, since comparing small start-ups only to a group of similarly small incumbents may be misleading. While a small start-up might grow quickly in its first years of business, an established firm of comparable size potentially signals that it has not been so successful so far and therefore did not expand. Therefore, our main insights might also be driven by differences in establishment size that come along with establishment age. The results of this robustness check (Fig. 6 in the Appendix) show that even after making the two groups more comparable in terms of establishment size, we still find significant and persistent drawbacks from joining a start-up. While the differences in earnings shrink by more than half compared to the results of our preferred specification, the differences in terms of employment prospects are similar in size. 22 To sum up, our main results imply that workers entering a start-up suffer from severe and long-lasting drawbacks in terms of earnings and employment prospects, compared to workers joining an incumbent establishment instead.

Footnote 20: Note that for this robustness test, we have to exclude all individuals with missing information on labor market performance for one or more of the three preceding years. To test whether this smaller sample differs strongly from our main sample in terms of subsequent labor market performance, we rerun the original balancing procedure (without controlling for previous labor market success) in this subsample and find that the results are in line with our main results.
Footnote 21: Including fixed effects in our regression only takes account of level differences between workers, not of developments. Thus, any interpretation hinges strongly on the chosen reference year and is not easily comparable to our main results. We therefore do not include fixed effects in our preferred specification. However, results are available on request.
Footnote 22: A potential explanation for this finding is that firm age is a more important determinant of employment stability than firm size. That our results are broadly similar when comparing days in employment or benefit receipt (both variables being proxies of employment stability) between start-ups and incumbents of similar size implies that firm size is not an important determinant of employment stability. Hence, the difference between workers entering start-ups and incumbents is largely driven by firm age (rather than size). Size, however, turns out to be an important determinant of wages or earnings, which is in line with previous studies on the relationship between firm size and wages.
To analyze whether these insights hold for various subgroups of workers, we perform entropy balancing separately for subgroups defined by gender, age, qualification, and previous employment status, and run OLS regressions for each of these balanced subsamples. We further investigate whether our insights also apply to different percentiles of the income distribution. The respective regression results for yearly income, as a summary measure for wages and employment prospects, are provided in Fig. 3. With respect to gender, the income penalty of workers entering start-ups rather than incumbents is slightly larger for men than for women. In year zero, for instance, the difference amounts to € 3400 for men and € 2000 for women, which corresponds to percentage income gaps of 18 and 16%, respectively. The development of the income gap over the subsequent ten years is remarkably similar for both sexes. Focusing on subgroups defined by age, the youngest workers experience the smallest (but still significant) drawbacks from joining a new establishment, as differences from the balanced control group amount to approximately € 2000 in all years of observation. The income difference increases with workers' age group, both in absolute and relative terms, indicating that the decision to enter a start-up is most harmful for older workers. One potential explanation for this pattern is that when entering incumbent firms, older workers can make better use of the human capital they have accumulated during their working lives. This is supported by the finding that the income difference between young and old workers is largely driven by wages rather than employment. Analyzing the development of yearly income for workers of different qualification levels, we find that the difference from the control group is largest for workers with a university degree, who earn almost € 6000 less even ten years after entry. A similar pattern emerges when we investigate income trajectories for different percentiles of the income distribution. Here, instead of estimating OLS regressions, we estimate unconditional quantile regressions using recentered influence functions (RIFs) as proposed by Firpo et al. (2009). We focus on the 20th, 50th, and 80th percentiles to study the impact of joining a start-up on low-income and high-income earners as well as on the median. Results show that entering a newly founded establishment decreases income most for workers at the 80th percentile of the distribution, while the 20th percentile is affected to a much smaller extent, indicating that especially workers with high incomes suffer severe drawbacks (in absolute terms) from joining a start-up as opposed to an incumbent.
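An unconditional quantile regression in the spirit of Firpo et al. (2009) can be sketched as follows; the kernel density estimator, the data frame, and the omission of controls are our own illustrative choices.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile:
    RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                 # density estimate at q_tau
    return q + (tau - (y <= q).astype(float)) / f_q

# regress the RIF at the 80th percentile on the start-up dummy by OLS
# (further controls would be added in the same way)
y = df["income"].to_numpy()
X = sm.add_constant(df["startup"].to_numpy())
res = sm.OLS(rif_quantile(y, 0.80), X).fit()
```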
Note that, for a closer look at high-wage workers, we run an additional analysis of those individuals who are in the highest 20% of the overall income distribution in the initial year of joining the start-up and, alternatively, in the year before. Comparing high-wage workers in start-ups with high-wage workers in incumbents confirms the results of our main regression. Finally, we also test whether the consequences of entering a start-up vary for workers with different previous employment statuses and find that the difference from the control group is slightly larger for workers who performed a job-to-job transition than for those who came from non-employment. The percentage income gaps, in contrast, are somewhat larger for workers coming from non-employment due to their overall lower income levels. Nevertheless, developments over time are very similar for both groups. In conclusion, even though the disadvantages from entering a start-up as opposed to an incumbent are most pronounced for men, older employees, and highly qualified employees, as well as for workers in the upper part of the income distribution, we find that all subgroups earn lower incomes when joining start-ups rather than incumbents over the whole period of observation.

Results for different subsequent career paths
To explore potential explanations for the significant and long-lasting difference in performance between workers entering start-ups and those joining incumbents, we also investigate income trajectories for workers with different subsequent career paths. Specifically, we take a closer look at workers who stay with their initial employer to check whether worse labor market prospects in start-ups occur (only) due to their lower employment stability compared to incumbents. Moreover, to shed light on the relevance of the high failure risk of start-ups for workers' labor market performance, we compare workers who join a start-up that turns out to be successful and does not close down in the early years of business with those entering businesses that subsequently fail. We further examine the role of start-ups as "stepping stones" to other positions in workers' subsequent careers. Figure 4 shows the results of these analyses for yearly income. Focusing on continuing matches, we include only those workers who are still employed at the same establishment which they entered in year zero. Comparing income trajectories of stayers in start-ups with those of stayers in incumbent establishments after balancing (Fig. 4a), we see that the difference between the two groups is even more pronounced than in our main analysis, and the gap widens continuously over the observation period. This result indicates that the lower employment stability of start-ups cannot be the only reason for the differences in labor market performance described above. We also find no indication that those workers who remain employed at a start-up over a longer period of time experience steep careers and better earnings prospects (e.g., due to flat hierarchies in newly founded establishments). Instead, continued employment in incumbents seems to lead to steeper wage increases, e.g., due to backloaded compensation schemes or better opportunities for career advancement in internal labor markets. Moreover, we analyze subgroups of workers with different lengths of start-up employment (results are available on request).
For individuals who leave the start-up in the first year or between years two and five, we find a sharp drop in days in employment when workers leave the establishment, followed by a slow recovery, suggesting that their leaving is mostly not voluntary. While workers with a match duration of more than five years perform best in terms of employment over the whole observation period, they are also the group with the largest gap in wages compared to workers entering incumbents. This result shows again that a longer career within a start-up does not seem to pay off in terms of income and wages. Additionally, we analyze whether the main reason why workers entering start-ups are less successful on the labor market can be found in the bad economic performance of these establishments. Since many start-ups fail in their very first years of business (see, e.g., Fackler et al., 2013; Fritsch & Weyh, 2006; Mueller & Stegmaier, 2015), workers will oftentimes be forced to search for a new job or, in the worst case, become unemployed. Therefore, we divide the group of workers entering a new establishment into those whose employer survives over a considerable period of time, i.e., at least five or ten years, and those whose employer closes down within the respective time frame. Figure 4b shows the income trajectories for these specific groups of workers after entropy balancing, indicating that indeed individuals who enter a start-up that survives for at least five or ten years, respectively, perform significantly better than those who joined a start-up that closes down within that time window. Therefore, we also compare the performance of workers entering a surviving start-up with those who initially entered an incumbent establishment, as shown in Fig. 4c. However, our results imply that the gap in income between these two groups still amounts to approximately € 2000 to € 3000 in all periods. Hence, the difference between workers entering start-ups and incumbents cannot solely be explained by the high failure rate of risky new businesses. Finally, at least one successful strategy for workers joining a start-up might be to use this establishment as a stepping stone to other, potentially more stable or better-paid positions. We define workers using the start-up as a stepping stone as those who leave it reasonably early, i.e., within the first five years after entry, and without an imminent threat of firm exit, i.e., at least two years before closure. Moreover, they are required to take up a job at a different establishment within a maximum of 90 days. We then compare these workers who use the start-up as a stepping stone to a balanced sample of all other individuals entering a newly founded establishment, as presented in Fig. 4d, and find that this indeed seems to be a successful strategy. Workers who quickly leave start-ups for positions in other establishments earn approximately € 3000 more income than the comparison group in year one, the year in which the majority of these workers leave the start-up. This gap remains remarkably stable over the subsequent observation period.
Nonetheless, when we compare their yearly income with that of all workers who instead joined an incumbent in year zero, we find that the latter still perform significantly better (see Fig. 4e). Thus, even though our analysis of different subsequent career paths after entering a start-up highlights important heterogeneities, it does not reveal any potential channel or strategy through which workers joining a start-up can catch up with, or become even more successful than, workers entering an incumbent establishment.

Conclusions
While a large literature has examined the quantity of jobs created by newly founded firms, the implications of joining a start-up for the individual worker have not been analyzed in depth so far. Therefore, we explore the advantages and disadvantages of entering start-ups instead of incumbent firms, both in terms of remuneration and employment prospects, and investigate whether differences in labor market performance are long-lasting over a worker's subsequent career path. We apply entropy balancing to make both groups of entrants comparable and follow individuals in start-ups and incumbent firms over ten years. Our results imply that workers joining a start-up experience significantly lower income and daily wages, as well as fewer days in employment and more days of benefit receipt, than similar workers joining an incumbent. These severe drawbacks persist over the ten years after entering the respective establishment, and they hold for all groups of workers and types of start-ups analyzed. Concerning earnings differences between workers entering start-ups and those joining incumbents, the negative differential in entry wages we find is in accordance with findings by Nyström and Elvung (2014) for Sweden and Fackler et al. (2019) for Germany but somewhat questions the positive or insignificant wage differentials found in some other studies (e.g., Ouimet & Zarutskie, 2014, and Kim, 2018). Regarding the development of earnings differences over time, our findings are in line with other current research by Adrjan (2018) for Great Britain and Sorenson et al. (2021) for Denmark, as both studies find long-run pecuniary disadvantages from entering a newly founded firm. We go beyond existing research by showing that persistent drawbacks from joining a start-up can also be found in terms of (un)employment prospects. We also provide insights concerning the role of the higher failure risk and the lower employment stability of start-ups as potential explanations for the observed differences in labor market performance (see also Schnabel et al., 2011). Analyzing workers who remain employed with their initial employer and workers who enter successful vis-à-vis failing start-ups, we still find substantial drawbacks compared to similar workers entering incumbents. When focusing on workers who use start-up employment as a stepping stone to positions in other establishments, we find that even this strategy does not render workers joining newly founded firms as successful as those entering incumbents. While our main insights imply long-lasting negative consequences from working at a start-up, some limitations of our analysis must be taken into account when interpreting our results. First and foremost, the various indicators of labor market success investigated in this study do not represent all dimensions of job quality. In particular, our data do not allow us to draw any conclusions concerning job satisfaction.
Hence, it is possible that workers in start-ups experience especially high levels of job satisfaction due to, e.g., flatter hierarchies or more autonomy and responsibility (Sauermann, 2018). Focusing on remuneration, one must bear in mind that our data do not include information on non-standard means of financial compensation, such as fringe benefits or firm shares. We argue that this shortcoming should not affect our insights, since fringe benefits do not play an important role in the German labor market due to the scope of social security provision by the state (Schmieder, 2013), and employee share ownership is not very common in Germany and rarely found in small establishments (Bellmann & Möller, 2016). Moreover, the risky nature of start-ups makes it unlikely that firm shares are regarded as an adequate form of compensation by employees. Another limitation is that our data do not contain self-employed individuals. We thus cannot observe whether some workers who were initially employed at start-ups become entrepreneurs themselves, a potential career path that we are therefore not able to analyze. A final, small caveat when interpreting our results is that we do not observe workers' complete employment biographies after entering the respective establishment. However, we claim that the time span of ten years is long enough to observe whether a convergence process sets in and therefore suffices to make meaningful statements on the long-run effects of entering a start-up. Since all our insights point towards significant disadvantages from entering a start-up, the question arises as to why workers decide to join newly founded firms at all. One reason might be that individuals are simply not well informed about the negative consequences of working for a start-up. Although the high likelihood of failure among new firms is a stylized fact that is often discussed both politically and scientifically (e.g., Fackler et al., 2013; Fairlie et al., 2019; Geroski, 1995; Haltiwanger et al., 2013), workers might not be aware of the disadvantages arising even if their employers do not fail. A second potential explanation for workers' decision to enter a new firm could be the different type of work environment. As already mentioned, employment in start-ups is often associated with flat hierarchies, a broader set of tasks assigned to a job, and more responsibility for the individual worker. These factors might compensate workers with strong preferences for such non-monetary job attributes for foregone earnings and worse employment prospects. 32 Finally, it must be noted that newly founded firms often offer opportunities for workers who face disadvantages in the labor market due to, e.g., their age, foreign nationality, or previous unemployment experience (Coad et al., 2017; Fackler et al., 2019; Nyström, 2012). Put differently, for some groups of workers, the superior alternative of joining an incumbent may simply not be available. From this perspective, working at a start-up can still offer an opportunity for disadvantaged workers who would otherwise be unemployed, especially if they enter start-ups that prosper and survive or if they intend to use the start-up as a stepping stone for (better) positions in other establishments. In conclusion, since our insights indicate that jobs created by start-ups do not provide workers with the same opportunities for long-run career advancement as those created by incumbents, the role of new firms as job creators should be interpreted cautiously (see also Sorenson et al., 2021).
Even though the strong political attention and financial support which start-ups receive in many countries is probably not motivated by the expectation that they create stable high-wage employment, the worker-level perspective taken in our analyses provides some additional support for the skepticism toward start-up subsidization expressed by some authors (e.g., Santarelli & Vivarelli, 2007; Shane, 2009).

32 An analysis of R&D employees in the USA by Sauermann (2018) indeed shows that individuals working in start-ups have strong preferences for job attributes such as autonomy and responsibility but place less importance on job security and income.

Fig. 6 a-d OLS estimates of differences in labor market trajectories between workers entering new and incumbent establishments, sample restricted to establishments with no more than 20 employees in year zero. Sources: Integrated Employment Biographies (IEB); Establishment History Panel (BHP); authors' calculations. Notes: Sample includes only workers entering establishments with no more than 20 employees in the years 2000-2004, excluding agriculture, energy and mining, and the public and nonprofit sectors. The sample comprises individuals of age 18-50, excluding apprentices. Graphs show the OLS estimates of differences in labor market trajectories between workers entering a start-up or an incumbent; the gray dashed lines indicate the 95% confidence intervals
2022-12-24T14:46:47.376Z
2021-09-08T00:00:00.000
{ "year": 2021, "sha1": "06adad329e888d1acd62d63ea2a3b642483c32e3", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11187-021-00508-2.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "06adad329e888d1acd62d63ea2a3b642483c32e3", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [] }
259909409
pes2o/s2orc
v3-fos-license
Video-Based Human Activity Recognition Using Deep Learning Approaches Due to its capacity to gather vast, high-level data about human activity from wearable or stationary sensors, human activity recognition substantially impacts people's day-to-day lives. Multiple people and objects may be seen acting in a video, dispersed throughout the frame in various places. Because of this, modeling the interactions between many entities in spatial dimensions is necessary for visual reasoning in the action recognition task. The main aim of this paper is to evaluate and map the current scenario of human action recognition in red, green, and blue videos, based on deep learning models. A residual network (ResNet) and a vision transformer architecture (ViT) with a semi-supervised learning approach are evaluated. DINO (self-DIstillation with NO labels) is used to enhance the potential of the ResNet and ViT. The evaluated benchmark is the human motion database (HMDB51), which tries to better capture the richness and complexity of human actions. The obtained results for video classification with the proposed ViT are promising based on performance metrics and results from the recent literature. The results obtained using a bi-dimensional ViT with long short-term memory demonstrated great performance in human action recognition when applied to the HMDB51 dataset. The mentioned architecture presented 96.7 ± 0.35% and 41.0 ± 0.27% in terms of accuracy (mean ± standard deviation values) in the train and test phases of the HMDB51 dataset, respectively. Introduction Human action recognition (HAR) is an interdisciplinary field related to computer vision that seeks to analyze human motion, balance, postural control, and interactions with the environment. It comprises biomechanics, machine vision, image processing, data analytics, nonlinear modeling, artificial intelligence, and pattern recognition. Human activity can be analyzed through bi-dimensional, depth, or thermal images, movement sensors attached to the body, or smartphones [1]. In this context, the movements and positions of body parts are used to recognize human actions in human model-based methods. However, to develop an applicable and efficient HAR system, researchers must account for the diversity in human body sizes, postures, motions, appearances, clothing, camera motions, viewing angles, and illumination. HAR has been studied due to its numerous applications in a wide range of domains and complexities, highlighting applications in safety, environmental monitoring, video surveillance [2,3], robotics [4], training and practical courses with immediate response [5], healthcare, specific medical diagnosis and fitness monitoring [6], and biomechanical analysis approaches using data analytics [7], among others. The main challenges in HAR include (i) inference from non-explicit poses and actions; (ii) the fact that different people can classify poses and actions differently; (iii) the possibility of partial occlusion of the body or objects involved in the scene; (iv) videos of questionable quality, such as blurring, and poor-quality sensors that generate noise in the data; (v) large differences between the durations of different actions; (vi) no lighting or high brightness; and (vii) the difficult acquisition of large-scale datasets [8].
With technological advances in smartphones, it has become possible to collect data from various types of sensors, including accelerometers, gyroscopes, microphones [9], and cameras [10], to measure activities of daily living without the explicit interaction of users with acquisition devices, i.e., without interfering with or disturbing the actions [1]. Using these datasets collected by sensors and developing artificial intelligence techniques can provide an advanced understanding of the image captioning task for activity detection or recognition. However, these techniques have been shown to be limited and dependent on the feature extractor, making their usefulness restricted to specific applications [11]. In this context, deep learning approaches begin to stand out due to their generalization capabilities and the fact that there is no need to model manual feature extraction [12]. The convolutional neural networks (CNN) [13] evaluated in this paper are a residual network (ResNet) of depth 50 [14] and a bi-dimensional vision transformer (ViT) with a long short-term memory network (LSTM) [15]. The performance indicator that helps us evaluate the classifiers is the accuracy measure. The main objective of this study is to evaluate a hybrid deep learning model of supervised and semi-supervised learning for HAR in red, green, and blue (RGB) videos applied to the human motion database (HMDB51). The focus is a deep learning architecture that proves to be feasible for application in a real-life scenario, in which the algorithm's processing can keep up with the real rate of image capture. In summary, we make the following contributions: (i) A systematic review of the literature was conducted on themes related to HAR; (ii) A label smoothing technique was tested with a 3D ResNet-50 on the HMDB51; (iii) A model based on a semi-supervised learning methodology was evaluated on the HMDB51; (iv) An analysis of the results of the proposed deep learning approach is presented based on the accuracy indicator applied to the HMDB51. The remainder of this paper is organized as follows: Section 2 introduces relevant works relating to HAR and the ideas that helped define this work. Section 3 presents the used database. Section 4 focuses on the methodology applied to ResNet-50, a CNN with a fully connected layer, and a 2D ViT with LSTM. Section 5 presents the experiments and results analysis. Section 6 concludes this paper and outlines future directions of research. Related Works In this section, we provide a comprehensive introduction to previous studies in the related fields of HAR. To guide this process, a set of keywords was defined: "activity recognition", "action recognition", "behavior recognition", "RGB (red, green and blue) video", "single camera video", "mono camera video", "deep learning", "neural network", and "CNN". The research was carried out over a period ranging from 01/1985 to 01/2021 using three databases, the IEEE (Institute of Electrical and Electronics Engineers) Digital Library, Science Direct, and Springer Link, obtaining a total of 4334 papers as illustrated in Figure 1. From the analysis of the data in Figure 1, studies published before 2015 were discarded due to the low distribution of articles in the early years and the constant evolution of technology, totaling 2952 documents, whose percentage distribution is shown in Figure 2. Then, other exclusion criteria were applied in addition to the number of citations per article. These articles were classified into two groups.
Both groups are related to RGB video image processing applied to HAR: group 1 comprises articles using deep learning techniques, with 42 articles, and group 2 comprises articles on unsupervised learning, with 18 articles. Thus, 60 articles were included in the qualitative analysis. In the recent literature, there are studies with several deep learning architectures [16], varying the type of pre-processing [17], input formats [18], artificial neural network configuration, memories, recurrences, filters [19], and final classification layers, among others [20]. However, there is still room to improve video classification when compared to image classification [21]. The 2D CNNs are still widely used for the recognition of actions [22], and even though they cannot capture temporal characteristics, other complementary structures have been proposed, such as optical flows [23], LSTM [24-26], and temporal groupings [27]. A complete review of human activity recognition using CNNs can be found in [28]. Another frequently used strategy is that of streams, in which various types of input are processed in different networks; the most common is the two-stream network that processes RGB video frames in one stream and optical flow in the other. Hao and Zhang employed this architecture [29]. The use of artificial intelligence models is growing in line with increased processing power, making deep learning applications increasingly popular [30]; these applications include time series prediction [31-33] and classification, especially in computer vision [34-36]. A structure that has been widely explored with the emergence of large datasets is the 3D CNN, as described in Table 1. A disadvantage of this architecture is the high number of parameters, an order of magnitude greater than that of 2D CNNs, which often leads to overfitting [21]. Hara et al. [37] performed tests using a 3D CNN applied to HMDB51, the University of Central Florida (UCF101) dataset, and the ActivityNet dataset, but they did not obtain acceptable generalization results; however, when using Kinetics, those authors obtained a performance comparable to that presented in the literature. Recently, video databases for human activity recognition have started to get bigger, reaching hundreds of thousands of videos. Kinetics was proposed in 2019 [51] with 700 thousand labeled videos and Sports-1M in 2014 [52] with 1.1 million. Another alternative to a large labeled dataset is using a self-supervised or unsupervised learning method to extend the data universe without needing to go through the long labeling process [51]. ResNet (a CNN) and ViT (a transformer-based model) are becoming popular given their high performance in classification tasks. Using ResNet-50, Wen, Li, and Gao [53] obtained accuracies of 98.95%, 99.99%, and 99.20% for fault diagnosis, outperforming other deep learning models. According to He, Liu, and Tao [54], residual connections boost the performance of neural nets. Xue and Abhayaratne [55] applied ResNet to the classification of COVID-19, and when they used 3D ResNet-101 an accuracy of 90% was achieved, which was better than other methods. Li and He [56] proposed an improved ResNet, and by adjusting the shortcut connections, they obtained an accuracy of 78.63%, which was 2.85% higher than the original ResNet. These results were based on an evaluation using the CIFAR-10 dataset. On CIFAR-100, the accuracy of their method was 42.53%. A variation in the structure of the method was also studied by Paing and Pintavirooj [57], where a fast Fourier ResNet was proposed.
Using a model based on ResNet-50, they achieved an F1-score of 0.95 for colorectal polyp adenoma dysplasia classification. Using ViT, Wang et al. [58] evaluated the genitourinary syndrome of menopause. Considering optical coherence tomography images, they obtained an accuracy of 99.9% for the genitourinary syndrome of menopause dataset and 99.69% for the UCSD dataset. In [59], an application of ViT is presented for fault diagnosis; an average accuracy of 99.9% was achieved considering the 1D-ViT. Besides its accuracy, this model has a low number of floating point operations compared to other CNN structures. Materials The HMDB51 is widely used in the literature [60-62]; it is small and has a high noise rate. Small sets can lead to overfitting, making the main objective of the task difficult. It comprises 6849 videos with 51 action classes and at least 101 clips per class. Most of these videos are taken from movies; however, a portion comes from YouTube (a public video repository). Furthermore, it is one of the most widely used datasets in the research community for benchmarking state-of-the-art video action recognition models. The classes of the HMDB51 dataset are divided into five groups [12]. In addition, metadata are available along with the videos, with information on the selection of test data, the point of view of the cameras, the presence or absence of camera movement, quality, and the number of agents acting [63]. Methods This section describes the 3D ResNet and 2D ViT models applied in this paper. These networks are used as backbones for the classification task, and DINO (self-DIstillation with NO labels) is used to enhance the performance of these structures. DINO is a model developed by Facebook (Meta) for self-supervised vision using transformers [64]. DINO focuses on training vision transformers using two main components: clustering and contrastive learning. The first step is to cluster the representations (embeddings) produced by the vision transformer. This involves grouping similar representations and creating clusters that capture different visual patterns in the data. The clustering step helps to provide structure and organization to the learned representations [64]. After clustering, the DINO method employs contrastive learning to refine the representations further. Contrastive learning is a technique where positive and negative pairs of samples are created to encourage the model to bring similar samples closer and push dissimilar samples apart in the embedding space. By doing so, the model learns to discriminate between different visual patterns and improve the overall quality of the representations. The combination of clustering and contrastive learning in this method allows the vision transformer to learn meaningful visual representations in a self-supervised manner [64]. 3D ResNet ResNet is a popular CNN architecture for image recognition, which utilizes skip connections to avoid the vanishing gradient problem during training. Skip connections allow information from previous layers to be passed directly to deeper layers, improving the flow of gradients and facilitating deep network training. Three-dimensional ResNet builds upon this architecture by adding an extra dimension to the input data. It is used to process 3D spatial-temporal data such as video frames or medical images, where each input is a 3D volume that changes over time [65].
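To make the residual idea concrete, here is a minimal sketch of a 3D residual block in PyTorch (the framework implied by the paper's use of Torchvision). The channel sizes, input shape, and block layout are illustrative assumptions, not the exact configuration of the 3D ResNet-50 used in the experiments.

```python
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """3D conv -> batch norm -> ReLU, twice, plus a skip connection."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_channels)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Project the identity path when the shape changes, so it can be added.
        self.downsample = None
        if stride != 1 or in_channels != out_channels:
            self.downsample = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm3d(out_channels),
            )

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # the skip connection

# Input shape: (batch, channels, frames, height, width)
block = BasicBlock3D(64, 128, stride=2)
video = torch.randn(2, 64, 16, 56, 56)
print(block(video).shape)  # torch.Size([2, 128, 8, 28, 28])
```

The 1 × 1 × 1 projection on the identity path is what allows the addition when the spatial, temporal, or channel dimensions change between the input and the block output.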
The architecture of 3D ResNet (Figure 3) consists of multiple layers, each of which includes a series of 3D convolutional layers, followed by batch normalization and a nonlinear activation function. The convolutional layers extract features from the 3D input data, and the batch normalization layer normalizes the feature maps to improve the stability and convergence of the training process. The activation function introduces nonlinearity to the output of the convolutional layer [66]. The key innovation of 3D ResNet is the use of residual blocks, which comprise multiple convolutional layers with skip connections that enable information to bypass some of the layers. This helps mitigate the vanishing gradient problem that can arise in deep neural networks [67]. One of the most popular 3D ResNet architectures is 3D ResNet-50, which has 50 layers and has been widely used in various applications such as action recognition, medical image segmentation, and 3D reconstruction [68]. Thus, 3D ResNet is a powerful neural network architecture for processing 3D spatial-temporal data. By incorporating skip connections and residual blocks, 3D ResNet can effectively handle the challenges of training deep neural networks and has achieved state-of-the-art performance on various 3D data tasks. It was created to process the time dimension along with the image's width and height [69]. The pre-training phase is performed on large datasets so that fine-tuning can be performed on smaller sets. However, the main difficulty of this network is the number of parameters that need to be trained, often an order of magnitude greater than in the bi-dimensional case. In this architecture, the entire hierarchy and the relationships between space and time are up to the network to create and discover; it does not need other inputs, such as optical flows and other variables. Furthermore, there are no additional steps in the sequence of the network; the input is processed, and the final output is generated, which is also called an end-to-end network. However, so that training does not generate overfitting, a large volume of data is needed, which has become possible with new sets such as Kinetics [70]. Often these architectures become the basis of future models, with pre-trained parameters allowing fine adjustments and small architectural changes to achieve other goals. Figure 4 shows the main steps of using the 3D ResNet architecture, and Figure 5 presents its training process. Hara et al. [39] trained 3D ResNet models with the Kinetics dataset [70] and the Moments in Time dataset [71]. This pre-trained model was fine-tuned with HMDB51, and, additionally, the loss function used was cross-entropy with label smoothing. Label smoothing is a regularization technique that is employed to improve generalization ability and mitigate overfitting in classification tasks. By modifying the target labels during the training procedure, it instills a sense of ambiguity in the model regarding the definitive labels. This prompts the model to consider the complete probability distribution over all classes, rather than solely emphasizing the highest probability. As a result, the model demonstrates an enhanced capacity to generalize to various scenarios and displays increased resilience to disturbances present in the training dataset [72]. Two-Dimensional Vision Transformer The ViT is a recent approach to computer vision that builds upon the success of the Transformer architecture in natural language processing [73].
Traditional computer vision approaches rely on CNNs to extract features from images, but ViT takes a different approach. Instead of using convolutions, ViT splits the image into a grid of patches, which are then flattened and fed into a Transformer network. ViT's input is a sequence of patches, rather than a single image, and the Transformer is used to model the relationships between the patches. ViT consists of two main components: the patch embedding and the Transformer. The patch embedding is responsible for converting each patch into a vector representation that can be fed into the Transformer. This is typically performed using a linear projection layer, which maps each patch to a vector with a fixed dimensionality. The Transformer is then used to model the relationships between the patch embeddings. The Transformer consists of a series of self-attention layers, allowing the network to selectively focus on different parts of the input sequence. The output of the Transformer is a sequence of feature vectors, which can be used for classification or other downstream tasks. A key advantage of ViT is its ability to scale to large image sizes, which is difficult for traditional CNN-based approaches. ViT has achieved state-of-the-art performance on a number of benchmark datasets, including COCO [74], CIFAR-100 [75], and ImageNet [64]. Caron et al. [64] applied self-distillation to train a 2D ViT with the ImageNet dataset. An input image was cropped into a small and a global section; each one passes through a different net with the same architecture. A logarithmic loss function was applied between the two outputs ($y_1$ and $y_2$), the small-section net was trained, and this learning was transferred to the other net by an exponential moving average; see details in Figure 6. To enhance the temporal modeling capabilities of the pre-trained 2D ViT, a fine-tuning approach was employed by replacing the classifier with an LSTM layer. This LSTM layer effectively captures the memory of all inputs from the video segment previously processed by the 2D ViT, generating a corresponding output. A cross-entropy loss function with label smoothing was applied to optimize the classifier parameters during the training process. For further information and a detailed methodology breakdown, please refer to Figure 7. The procedure of the dataset preparation phase for the pre-trained 2D ViT is equivalent to the procedure for 3D ResNet, which is presented in Figure 4. During the training process, the first step involves loading the annotation file, which contains information about image files and their corresponding labels for each video, thereby constructing the dataset object. Following this, the target model is initialized, and its parameters are either randomly initialized or loaded from a pre-trained model. During each epoch, video segments consisting of "n" sequential frames are transformed, passed through the model, and an output is generated. Subsequently, a loss function in the form of cross-entropy with label smoothing is applied, and the model's parameters are optimized. This iterative process is repeated until all the batches have been processed and the previously planned number of epochs is reached. For a more detailed understanding of the proposed approach, please see Figure 8, which visually represents the approach's flowchart. Pre-Processing and Metrics We utilized the "normalize" function in the Torchvision transforms package to perform image normalization.
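As a usage illustration of this pre-processing step, the following sketch composes standard Torchvision transforms. The mean and standard deviation shown are the common ImageNet statistics and are an assumption here, since the paper does not list its exact values.

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # 224 x 224 crop, as in the experiments
    transforms.RandomHorizontalFlip(),   # random horizontal inversion
    transforms.ToTensor(),               # to a float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])
# Each video frame (a PIL image) is passed through `preprocess`
# before being fed to the network.
```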
The normalize transform applies the following channel-wise procedure:

$$\text{output}[c] = \frac{\text{input}[c] - \text{mean}[c]}{\text{std}[c]}. \quad (1)$$

Cross-entropy loss is a commonly used loss function in machine learning for classification tasks. It measures the difference between the predicted and true probability distributions of the target variable. The cross-entropy loss, or cost function, used to train the model was calculated as follows:

$$l(x, y) = L = \{l_1, \ldots, l_N\}^T, \quad (2)$$

$$l_n = -w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^{C} \exp(x_{n,c})}, \quad (3)$$

where $x$ is the output of the model, $y$ is the target, $w$ is the weight, $C$ is the number of classes, and $N$ is the batch dimension. Accuracy is a widely used metric in classification tasks that measures the proportion of correctly classified instances out of the total number of instances. To compute accuracy, the prediction vector ($p$) is first compared to the ground truth ($Y$): if $p = Y$, the result is 1; otherwise, it is 0. Results In this section, the results of the performed experiments are presented. The main objectives of these experiments were to train and test deep learning architectures and apply a semi-supervised learning method to the HAR task to overcome the problem of a lack of labeled data. The application of label smoothing, a technique used to reduce the effect of noise in the dataset, was also analyzed. This study used a Dell® Gaming G5-5590-A25B notebook (Dell, Round Rock, TX, USA) with an Intel® Core i7 9th generation processor (Intel, Santa Clara, CA, USA), an NVIDIA® GeForce GTX 1660 Ti graphics card (NVIDIA, Santa Clara, CA, USA) with 6 GB of dedicated memory, and 16 GB of random access memory. After obtaining the data, the frame rate in frames per second (fps), width, height, and duration were recorded. Then, the videos were processed and partitioned into frames; a 10 s video at 30 fps was partitioned into 300 images, preparing the set to be processed in the algorithm flow. This process was performed for the HMDB51 dataset. The video dataset was organized in a structured folder architecture and prepared for running the machine learning models during training. Thus, all videos were split into frames and saved as images. In each epoch, batches of video segments with n sequential frames pass through the model and generate an output. Supervised and semi-supervised learning techniques were tested with twenty and eight different configurations, respectively, applied to HMDB51. However, for the former, only the best results are presented, either using or not using label smoothing. Supervised Learning HMDB51 has a certain degree of noise, so label smoothing was applied. In this approach, an error factor is inserted in the loss calculation step; considering the batch average at each iteration, a small disturbance is added during network training. In this way, problems such as wrong labels and bad and/or noisy data are minimized. For this experiment, 30 epochs were used. The 3D ResNet was fine-tuned as in Hara et al. [39] with a learning rate of 0.1, a time step equal to four, and a batch size equal to eight. Random temporal selection, horizontal inversion, and cropping were used, along with a multi-step learning rate scheduler (steps at epochs 8, 16, 24, and 27) and a classifier with a fully connected layer. For this experiment, 31 runs were performed for each view of the HMDB51 dataset (1, 2, and 3), and an average over the views was obtained. The views of the dataset are simply different combinations of videos for training and validation; however, the integrity of the dataset always remained the same for all of them.
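The label-smoothed cross-entropy described above can be expressed very compactly in PyTorch. A minimal sketch follows, where the smoothing factor of 0.1 and the batch size are illustrative assumptions rather than the paper's reported settings.

```python
import torch
import torch.nn as nn

# Cross-entropy with label smoothing: each target distribution mixes the
# one-hot label with a small uniform component, softening the targets.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(8, 51, requires_grad=True)  # batch of 8, 51 HMDB51 classes
targets = torch.randint(0, 51, (8,))             # ground-truth class indices
loss = criterion(logits, targets)
loss.backward()  # gradients flow back to the model parameters as usual
```

With smoothing 0.1 and 51 classes, the true class receives a target probability of 0.9 + 0.1/51, and every other class receives 0.1/51, which is exactly the small disturbance in the loss that the text describes.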
Table 2 shows the results of the experiments with 3D ResNet applied to the HMDB51 with and without label smoothing. The performance metric used in this study is accuracy (Acc); when applying label smoothing, there was a drop of approximately eight percentage points, which is reflected in the training loss gain. Comparing the validation loss, there was a reduction of approximately two units. There was a slight loss in training accuracy and a greater reduction in validation accuracy. However, the validation loss was superior when using label smoothing, suggesting greater potential for generalization. Although label smoothing improves part of the overall results of the network, as reflected in the validation loss, classification is the main objective, so this technique does not bring gains in terms of maximizing accuracy. The values presented follow the mean ± standard deviation format of the 31 runs. Semi-Supervised Learning Semi-supervised learning runs were performed based on Caron et al. [64]. Thus, two pre-trained networks were applied to recognize human actions, a 2D ResNet-50 and a 2D ViT. The training process was conducted in an unsupervised manner; that is, the image labels were not used during the training process, only the content itself. It is worth noting that these architectures were developed to work with a single image, so they were adapted for video processing. Each video frame enters the network and generates a set of features that are grouped by video segment and classified into different actions. The details of the different architectures applied to the database are described in Table 3. Variants 1 to 5 used the 2D ResNet-50 as the base architecture with a batch size of eight, while variants 6 to 8 used the 2D ViT architecture with a batch size of 16. Temporal grouping and an LSTM were used in the classifier to adapt the 2D network to the 3D scenario. The runs were performed with the following settings: 30 epochs, a center crop of images with dimensions of 224 × 224 pixels, random horizontal inversion, resizing, standardization, conversion to tensors, and random temporal selection. Table 4 displays the outcomes of the experiments conducted with the variants outlined in Table 3. In this case, we only present the most promising results obtained from the study, which are the DINO 6 to DINO 8 variants that outperform the 3D ResNet results presented in Table 2. These results indicate that 2D ViT architectures have high potential in this task. Table 5 presents a comparison of our proposed hybrid method with two other self-supervised pre-trained models applied to the human activity recognition problem and trained using HMDB51. Our model outperformed the odd-one-out model [76]. Discussion Training on small datasets can be a hard task, as they are difficult to train on from scratch and are likely to cause overfitting. HMDB51, with approximately 7k videos and 51 classes, is a small set and, beyond that, it has noisy labels [78]. Cross-entropy with a label smoothing technique was applied to address the latter issue. To test the hypothesis that a label smoothing process would achieve a better performance, a 3D ResNet-50 pre-trained by [39] was used. The results showed that the model without label smoothing performed better in terms of training and validation accuracy; however, the model with label smoothing obtained a loss function value 17% lower, indicating a slight trend toward better generalization.
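The "2D backbone plus LSTM" classifier used in the DINO 6 to DINO 8 variants can be sketched roughly as follows. The feature dimensionality (768, typical of a ViT-Base encoder), the hidden size, and the clip length are illustrative assumptions, and the per-frame features are assumed to come from a frozen pre-trained encoder.

```python
import torch
import torch.nn as nn

class FrameLSTMClassifier(nn.Module):
    """Aggregate per-frame features over time with an LSTM, then classify."""
    def __init__(self, feat_dim=768, hidden_dim=512, num_classes=51):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, frame_features):
        # frame_features: (batch, num_frames, feat_dim), one vector per frame
        _, (h_n, _) = self.lstm(frame_features)
        return self.fc(h_n[-1])  # classify from the last hidden state

features = torch.randn(4, 16, 768)  # 4 clips, 16 frames each (illustrative)
logits = FrameLSTMClassifier()(features)
print(logits.shape)  # torch.Size([4, 51])
```

Only the LSTM and the linear layer are trained in this setup; the backbone's representations, learned without labels, are reused as-is, which matches the semi-supervised recipe described above.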
In this work, a self-supervised pre-trained network [64] was applied to the HAR task to overcome this barrier. The use of four temporal steps in variant DINO 7 brought higher training accuracy; however, using one temporal step in variant DINO 8 led to a result 0.9 percentage points above the previous one. This indicates that four temporal steps could better model the training data, while one temporal step achieves superior generalization. Comparing the ViT model using only a fully connected layer, in variant DINO 6, with the ViT model using an LSTM layer, in variant DINO 8, the LSTM variant outperformed by 1.7 percentage points, indicating a better aggregation of the temporal information. Conclusions In recent years, data fusion, deep learning approaches, and combinations of models have been widely studied and applied in HAR. Deep learning approaches based on CNN and LSTM have demonstrated remarkable success in HAR. This paper investigated two classifier systems for HAR, based on a 3D CNN and a hybrid 2D ViT with LSTM, both applied to the HMDB51. The classification results using a 3D ResNet-50 with a fully connected layer and a 2D ViT with LSTM demonstrated promising performance on the HMDB51. The latter obtained 96.7 ± 0.35% and 41.0 ± 0.27% accuracy scores in the train and test phases, respectively. In future research, we intend to examine different deep learning architectures such as EfficientNet [79] and NASNet (Neural Architecture Search Network) [80] for ensemble learning design, combined with feature engineering approaches and the hybrid CNN and LSTM approach proposed in this paper. In addition, we should test the proposed hybrid method on longer and more complex datasets to better measure its full capabilities.
2023-07-16T15:03:28.459Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "d612bdfefe3340bfa13fd814d51a6b8992df5b1d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/14/6384/pdf?version=1689256787", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "11027dd80c2ad471973af5836b3effc3f80f639d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
234249251
pes2o/s2orc
v3-fos-license
High pretreatment plasma D-dimer levels predict poor survival in patients with diffuse large B-cell lymphoma in the real world Background Data on the role of pretreatment plasma D-dimer levels in the prognostic prediction of patients with diffuse large B-cell lymphoma (DLBCL) are limited. We here studied the potential prognostic roles of pretreatment plasma D-dimer levels in patients with DLBCL. Methods We retrospectively analyzed medical records of 308 newly diagnosed patients with DLBCL admitted to the Fujian Medical University Union Hospital between January 2011 and December 2018. Receiver operating characteristic (ROC) curve analysis was used to generate an optimal cut-off value for pretreatment plasma D-dimer levels in patients with DLBCL. According to the cut-off value, all patients were divided into low D-dimer and high D-dimer groups. We analyzed the relationship between pretreatment plasma D-dimer levels and clinical and laboratory characteristics in patients with DLBCL. Univariate and multivariate analyses were used to assess prognostic factors for overall survival (OS) and progression-free survival (PFS). Results Patients with B symptoms, plasma lactate dehydrogenase levels > upper limit of normal (ULN), poor Eastern Cooperative Oncology Group score (2 to 4), advanced stage (III–IV), >1 extranodal site, higher International Prognostic Index (IPI) (2 to 5) and higher National Comprehensive Cancer Network IPI (NCCN-IPI) (≥4) (all P<0.001) had higher pretreatment plasma D-dimer levels (≥1.4 µg/mL). Patients with higher plasma D-dimer levels had worse OS and PFS (P<0.001 and P=0.001, respectively). Conclusions Higher pretreatment plasma D-dimer level was associated with poor survival and was an independent poor predictor of OS among untreated patients with DLBCL. The International Prognostic Index (IPI), based on age, Ann Arbor stage, lactate dehydrogenase (LDH) level, performance status, and the number of extranodal disease sites, was used to predict the outcomes of patients with DLBCL. However, the IPI cannot accurately discriminate the outcomes of all patients with DLBCL. Several parameters extracted from clinical characteristics or laboratory examinations were added to this index to improve its prognostic efficiency; however, the precision of this modified index remains controversial. Therefore, it is necessary to identify other factors that can be extracted from patients with DLBCL to provide additional information for prognostication. Abnormal forms of coagulation, characterized by hypercoagulation caused by cancer cells, are found in many cancer types. Cancer-associated thrombosis occurs secondary to hyperfibrinogenemia or low levels of fibrinolysis. Factors involved in coagulation and fibrinolysis have been reported to contribute to the proliferation, migration, and invasion of cancer cells (4-8). D-dimer is a specific product of fibrin degradation. Several studies have shown that elevated pretreatment plasma D-dimer levels are predictors of poor survival in various types of solid tumor (9-15). However, only a few studies have assessed the prognostic role of pretreatment plasma D-dimer level in DLBCL (16,17). Thus, we retrospectively analyzed data of patients with DLBCL at our hospital, aiming to explore the relationship between pretreatment plasma D-dimer levels and the prognosis of patients with DLBCL and to evaluate the prognostic value of pretreatment plasma D-dimer level. We present the following article in accordance with the REMARK reporting checklist (available at http://dx.doi.org/10.21037/tcr-20-2908).
Patient selection The medical information of newly diagnosed patients with DLBCL was reviewed and collected at Fujian Medical University Union Hospital from 1 January 2011 to 31 December 2018. The inclusion criteria were as follows: (I) diagnoses made via tissue biopsy or surgical excision according to the World Health Organization classification; (II) aged ≥14 years; (III) received no less than 4 cycles of immunochemotherapy; (IV) plasma D-dimer concentrations assessed within seven days before treatment. The exclusion criteria were as follows: (I) diagnosed with primary mediastinal lymphoma or primary central nervous system lymphoma; (II) without sufficient clinical data; (III) known congenital coagulative abnormality; (IV) thromboembolic event or ongoing anticoagulant treatment within 3 months before treatment; (V) known active infection or positive serologic tests for the human immunodeficiency virus; (VI) neurosurgery, pregnancy, or stroke within 6 months before treatment. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was approved by the ethics committee of Fujian Medical University Union Hospital (2019KJCX047), and individual consent for this retrospective analysis was waived. Data collection For all patients included, data regarding age, gender, B symptoms, LDH level, Ann Arbor stage, histopathological diagnosis, imaging findings, bone marrow aspiration biopsy results, performance status, and clinical follow-up were collected. Histopathological diagnosis of DLBCL was classified into germinal center B-cell (GCB) or non-GCB phenotype according to the Hans algorithm (18). All patients included underwent immunochemotherapy combined with or without surgery as primary therapeutic regimens. Immunochemotherapy consisted of standard CHOP (cyclophosphamide, doxorubicin, vincristine, and prednisone) or CHOP-like (cyclophosphamide, epirubicin, vincristine, and prednisone) regimens combined with rituximab. Response to treatment was evaluated according to the International Working Group response criteria for malignant lymphoma (19). Patients who experienced treatment failure or disease progression or relapse were treated with second-line regimens recommended by the National Comprehensive Cancer Network (NCCN) guidelines (20-22). Plasma D-dimer levels were assessed using an automatic coagulation analyzer (Stago Co., Paris, France) according to the manufacturer's instructions. Peripheral blood samples were collected from all patients within a week before the primary therapy. Receiver operating characteristic (ROC) curve analysis and the area under the curve (AUC) were used to determine the optimal cut-off value for survival as indicated by D-dimer. Overall survival (OS) was defined as the period between the date on which the patient started treatment and the date of death or last follow-up. Progression-free survival (PFS) was defined as the period from the date on which the patient started treatment to the date of disease progression, relapse, or death, whichever came first. Deaths from all causes were counted as events. Statistical analysis Continuous and dichotomous variables were compared using the t-test and chi-squared test, respectively. Time-to-event data were analyzed using the Kaplan-Meier method. The log-rank test was used to compare the survival times of different groups. The Cox proportional hazards model was used for the univariate analysis of the potential predictors of survival.
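As an aside, the ROC-based cut-off determination described above can be reproduced along the following lines. The paper used SPSS and does not state its selection criterion; this sketch assumes scikit-learn and the commonly used Youden index, with `d_dimer` and `died` as hypothetical arrays standing in for the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: D-dimer levels (ug/mL) and death indicator (1 = died).
d_dimer = np.array([0.4, 0.8, 1.1, 1.6, 2.3, 3.0, 0.6, 1.9])
died = np.array([0, 0, 0, 1, 1, 1, 0, 1])

fpr, tpr, thresholds = roc_curve(died, d_dimer)
youden = tpr - fpr                       # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]   # threshold maximizing Youden's J

print(f"AUC = {roc_auc_score(died, d_dimer):.3f}, cut-off = {cutoff} ug/mL")
```

Patients above the chosen threshold are then assigned to the high D-dimer group and compared to the low D-dimer group with Kaplan-Meier and Cox analyses, as in the study.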
Variables identified as significant prognostic factors in the univariate analysis were included in the multivariate analysis using the Cox regression model. All statistical analyses were performed using SPSS version 19.0 for Windows (SPSS Inc., Chicago, IL, USA). Two-tailed P values <0.05 were considered statistically significant. Patients' characteristics A flowchart showing the screening of the included DLBCL patients is shown in Figure 1. Between January 2011 and December 2018, 308 patients with newly diagnosed DLBCL who met the inclusion criteria were included in the present study. No thrombotic event occurred in these patients. The median age of the study cohort at diagnosis was 56 (range, 14-86) years. Pretreatment plasma D-dimer levels among these patients ranged from 0.22 to >20 μg/mL, with a median value of 0.93 μg/mL. Identification of optimal D-dimer cut-off values and patient outcomes ROC curve analysis determined that the optimal D-dimer cut-off value for OS was 1.4 μg/mL, with an AUC value of 0.746 (95% CI, 0.662-0.829, P<0.001) (Figure 2). The patients were then divided into two groups, high (≥1.4 μg/mL) and low (<1.4 μg/mL) D-dimer level groups, for further analysis. Ninety-six patients had high plasma D-dimer levels and 212 had low levels. The correlation analyses of the relationships between the characteristics of patients with DLBCL and pretreatment plasma D-dimer levels in the study cohort are shown in Table 1. The presence of B symptoms, higher plasma LDH level [> upper limit of normal (ULN)], poor performance status [Eastern Cooperative Oncology Group (ECOG) score 2-4], advanced stage (III-IV), more than 1 extranodal site, higher IPI (2 to 5) and higher NCCN-IPI (≥4) were associated with higher pretreatment plasma D-dimer levels (Table 1). Univariate and multivariate analyses of potential prognostic factors for survival The median follow-up period of this cohort was 22.13 (range, 2.73-89.07) months. Forty-three patients died, and 75 patients were refractory to initial treatment or relapsed after remission during the period of follow-up. The median OS and PFS were not reached in this cohort. The univariate and multivariate analyses of prognostic factors for OS and PFS of DLBCL patients in this cohort are listed in Tables 2 and 3. Kaplan-Meier curves of pretreatment plasma D-dimer level for OS and PFS are presented in Figure 3. Discussion In the last decade, with the addition of rituximab to standard chemotherapy, the outcomes of patients with DLBCL have improved dramatically. Meanwhile, the efficiency of the IPI in prognostic prediction among patients with DLBCL has declined. Thus, experts modified the IPI to the R-IPI and NCCN-IPI by adjusting or refining the factors included previously (23,24). Other researchers added some clinicopathological characteristics to the IPI to constitute new prognostic indicators (25-27). However, the efficiencies of these new indicators in prognostic prediction remain controversial (25,28-31). D-dimer is a fibrin degradation product. Several studies have shown that a higher pretreatment plasma D-dimer level is a poor prognostic factor for lung cancer, colorectal cancer, breast cancer, and other malignancies (9-15). However, data regarding pretreatment plasma D-dimer levels among untreated patients with DLBCL are limited, and its role in determining the prognosis of DLBCL remains controversial. Liu et al.
found that a higher pretreatment plasma D-dimer level was negatively associated with OS and was an independent prognostic factor for worse OS among untreated patients with DLBCL (17). Geng et al. found that a higher pretreatment plasma D-dimer level was associated with some clinicopathological factors, such as advanced Ann Arbor stage (III-IV) and high LDH level (> ULN), and was negatively associated with OS, but was not an independent poor prognostic factor among untreated patients with DLBCL (16). In the present study, we assessed the roles of D-dimer and other potential prognostic factors on survival among untreated patients with DLBCL. First, we determined the optimal cut-off value of pretreatment plasma D-dimer level according to the OS of patients with DLBCL using ROC curve analysis. The optimal cut-off value was 1.4 μg/mL. We then found that a higher pretreatment plasma D-dimer level (≥1.4 μg/mL) was associated with the presence of B symptoms, higher plasma LDH level (> ULN), poor performance status, advanced stage (III-IV), more than 1 extranodal site, and higher IPI and NCCN-IPI, consistent with previous reports (16,17). A higher pretreatment plasma D-dimer level was an independent predictor of poor OS among patients with DLBCL, according to the univariate and multivariate analyses. This result was similar to that reported by Liu et al. but different from that reported by Geng et al. (16,17). The optimal cut-off value of plasma D-dimer in our study (1.4 μg/mL) was approximately equal to that reported by Liu et al. (1.6 μg/mL) but much greater than that reported by Geng et al. (0.92 μg/mL) (16). The low cut-off value reported by Geng et al. might classify the subset of patients with DLBCL with low plasma D-dimer levels and good prognosis into the group with high plasma D-dimer levels, leading to a reduction in the discrimination efficiency of the cut-off value (16). Although a higher pretreatment plasma D-dimer level was negatively associated with PFS among patients with DLBCL, it was not an independent predictor of poor PFS. We believe that this could be because the cut-off value was determined based on the OS of patients with DLBCL. Univariate analysis showed that other factors, including higher plasma LDH level (> ULN), poor performance status (ECOG score 2-4), advanced stage (III-IV), more than 1 extranodal site, higher IPI (score 2-5), and higher NCCN-IPI (≥4), were positively associated with poor OS and PFS among patients with DLBCL. Owing to the larger sample size compared with other studies, the cut-off value of D-dimer level in our study differed from theirs, even though we determined it on the same basis (16,17). However, from the results of our study and those reported by Liu et al., we can conclude that a higher pretreatment plasma D-dimer level is an independent predictor of poor prognosis among patients with DLBCL. Therefore, it is necessary to derive a universal cut-off value for different cohorts of patients with DLBCL. Although abnormalities of coagulation and fibrinolytic factors have been proven to contribute to cancer progression, the mechanisms underlying how abnormal D-dimer levels affect the outcomes of patients with DLBCL remain to be investigated. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).
This study was approved by the ethics committee of Fujian Medical University Union Hospital (2019KJCX047) and individual consent for this retrospective analysis was waived. Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
2021-05-11T00:06:43.762Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "cdb25e5a810ffc39a56d11d5dc758d82510aaf57", "oa_license": "CCBYNCND", "oa_url": "https://tcr.amegroups.com/article/viewFile/50715/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b178207219e32665330d62654cd1d7f7684ff27f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
13023692
pes2o/s2orc
v3-fos-license
Investigation of factors potentially influencing calcitonin levels in the screening and follow-up for medullary thyroid carcinoma: a cautionary note Background The malignant transformation of thyroid C cells is associated with an increase in human calcitonin (hCT), which can thus be helpful in the early diagnosis of medullary thyroid carcinoma (MTC). For this reason, hCT levels should be determined in all patients with nodular goitre. Hashimoto's thyroiditis, nodular goitre and proton pump inhibitor (PPI) therapy are factors reported to influence basal serum hCT concentrations. The diagnostic role of mildly to moderately increased hCT levels is thus a matter of debate. In this study, we attempt to clarify the role of the aforementioned factors. Methods From 2008 to 2009, we collected data from 493 patients who were divided into five groups. We assessed whether there were significant differences in hCT levels between patients with Hashimoto's thyroiditis, patients with nodular goitre, patients with PPI therapy, and healthy control subjects. In addition, we investigated whether a delayed analysis of blood samples has an effect on serum hCT concentrations. Results Immunoradiometric assays (Calcitonin IRMA magnum, MEDIPAN) revealed that the time of analysis did not play a role when low levels were measured. Delayed analysis, however, carried the risk of false low results when serum hCT concentrations were elevated. Men had significantly higher serum hCT levels than women. The serum hCT concentrations of patients with Hashimoto's thyroiditis and nodular goitre were not significantly different from those of control subjects. Likewise, PPI therapy did not lead to a significant increase in serum hCT concentrations regardless of the presence or absence of nodular goitre. Conclusions Increases in serum hCT levels are not necessarily attributable to Hashimoto's thyroiditis, nodular goitre or the regular use of PPIs and always require further diagnostic attention. In patients with MTC, serum hCT concentrations increase as a result of a dysfunction of the regulatory system. For this reason, the measurement of calcitonin levels is a useful tool for the early detection, diagnosis and follow-up of MTCs. Since the early detection of MTCs is associated with excellent prospects for cure, and MTCs, like all highly differentiated tumours, mostly tend to grow slowly, early diagnosis and treatment play an important role despite the low prevalence of MTC [8]. Human calcitonin (hCT) is a peptide hormone that consists of 32 amino acids and is produced in humans by the parafollicular cells (C cells) of the thyroid. It is part of a regulatory system and helps control serum concentrations of calcium. Bones, the kidneys and the gastrointestinal tract are the main targets of the biological effects of calcitonin. Evidence of interactions between C cells and thyroid cells suggests that there is a functional relationship between these types of cells, although precise data are still lacking [9]. Serum contains only very low levels of hCT. There are no ethnic differences in basal serum hCT concentrations, but men are reported to have higher concentrations than women [2,10-12]. Patients with clinically apparent MTC usually have serum hCT levels that are 10 to 100 times higher than normal [13,14]. Markedly elevated basal serum hCT levels or pentagastrin-stimulated serum hCT levels higher than 100 pg/ml are thus indicative of MTC. At postoperative follow-up, such levels may suggest a recurrence or untreated metastases [11,13,14].
Normal serum hCT concentrations range from 0 to 10 pg/ml for women and from 0 to 15 pg/ml for men [15]. Pentagastrin and calcium are the usual provocative agents used worldwide. Both tests are performed in patients with nodular thyroid disease and mildly elevated basal serum calcitonin concentrations. Pentagastrin is currently no longer available in several countries; therefore, the intravenous calcium stimulation test is used more often. In the literature, Hashimoto's thyroiditis, nodular goitre and the use of proton pump inhibitors (PPIs) have been reported to influence basal serum hCT concentrations [3,15-20]. If, for example, patients are intolerant of pentagastrin and cannot undergo a pentagastrin stimulation test for an evaluation of increased serum hCT levels, omeprazole can be used instead to induce an increase in serum hCT concentrations [16]. However, the omeprazole test has not yet been validated. Further studies are required to investigate the influence of the regular use of PPIs [19]. Patients with moderately elevated hCT levels, i.e., levels that are no more than 10 times higher than normal, are difficult to evaluate, especially in the absence of clinical evidence and in the presence of the potential influencing factors described in the literature. For this reason, the diagnostic value of mildly or moderately increased hCT levels is currently being discussed in the literature [20]. A concrete evaluation of these levels is required. The objective of the present study was to investigate factors possibly influencing serum calcitonin concentrations. We assessed whether gender, Hashimoto's thyroiditis, nodular goitre or the regular use of PPIs influences serum hCT levels and whether a delayed analysis of blood samples has an effect on serum hCT measurements. We conducted this study in an attempt to shed more light on the role of mildly or moderately increased serum calcitonin levels. Study design and patients This prospective single-centre study included 493 consecutive patients (406 men and 87 women) with Hashimoto's thyroiditis or nodular goitre, with or without PPI therapy, who attended the thyroid and surgical clinics of the German Armed Forces Central Hospital in Koblenz over a period of two years (from 2008 to 2009). Diagnosis of Hashimoto's thyroiditis was based on laboratory tests including measurements of free triiodothyronine (fT3), free thyroxine (fT4), thyroid-stimulating hormone (TSH), microsomal antibodies (MAB), and thyroglobulin antibodies (TAB), as well as on the clinical picture and ultrasonography. A typical clinical presentation is a painless diffuse enlargement of the thyroid gland accompanied by hypothyroidism. A typical ultrasound scan shows generalized hypoechogenicity, usually of the entire thyroid gland [21-23]. Marked heterogeneity of the internal structure is seen in 70% of patients [24]. A proportion of patients shows hypoechoic pseudonodular and multifocal lesions representing areas of high inflammatory activity, i.e., lymphocytic infiltration [25]. The control group consisted of volunteers with a normal thyroid who did not take PPIs and were attending the surgical outpatient clinic. Regular use of PPIs was defined as the daily intake of a minimum dose of 10 mg of omeprazole, 20 mg of pantoprazole or 10 mg of esomeprazole over a period of at least two weeks.
Patients with a history of radioiodine treatment or thyroid surgery, patients with malignant or severe non-malignant systemic diseases (≥ ASA 3 according to the American Society of Anaesthesiologists classification system), and patients who had undergone surgery involving another organ system during the previous six months were excluded from the study. Further exclusion criteria were known hypercalcitoninaemia, renal insufficiency, bacterial infection, known alcohol abuse, and pseudohypoparathyroidism. On the basis of their data, patients were assigned to one of five groups for analysis. Group 1: patients with Hashimoto's thyroiditis who did not take PPIs (n = 122). Group 2: patients with nodular goitre and PPI therapy (n = 73). Group 3: patients with nodular goitre who were not treated with PPIs (n = 118). Group 4: patients with a normal thyroid and PPI therapy (n = 59). Group 5: patients with a normal thyroid who did not take PPIs (n = 121, control group). The study was conducted in compliance with the Helsinki Declaration and was approved by our local ethics committee (Ethics Committee, Medical Council Rhineland-Palatinate, Germany, IRB00004206). All patients or their legal representatives gave their written informed consent. Laboratory tests Serum levels of thyroid parameters were determined no later than 60 minutes after sampling in the Laboratory of Nuclear Medicine of the German Armed Forces Central Hospital in Koblenz. If this was impossible, serum samples were frozen at −20°C. Serum hCT concentrations were determined using an immunoradiometric assay (Calcitonin IRMA magnum, MEDIPAN; normal levels range from 0 pg/ml to 15 pg/ml for men and from 0 pg/ml to 10 pg/ml for women) [26]. This assay is highly specific and highly sensitive and has a functional sensitivity of 1.5 pg/ml [26]. In accordance with the manufacturer's instructions, venous blood samples were processed no later than two hours after sampling or frozen at −20°C, since otherwise there was a risk of falsely low results. For an assessment of the influence of delayed serum analysis on calcitonin levels, a series of 20 randomly selected serum samples was analysed immediately after sampling and again after storage for two and four hours at room temperature. Statistical analysis We compared the means and medians for the five groups in order to detect any significant differences and to assess the influence of Hashimoto's thyroiditis, nodular goitre or the regular use of PPIs on serum hCT concentrations. All data were entered into an Excel® spreadsheet and analysed using SPSS® Version 15.0. The level of significance was set at p < 0.05. First, we performed an analysis of variance (ANOVA) for a single factor, i.e. mean serum hCT concentrations. If significant differences between the means were found, we used a post-hoc Bonferroni test for multiple comparisons of dependent means. For a gender-specific analysis of serum hCT concentrations and a comparison of the different patient groups, we determined not only means but also medians in order to minimise the effects of outliers. Statistical analysis was performed using a Kruskal-Wallis H test. If the H test showed significant between-group differences, a Mann-Whitney U test was used to identify groups that were significantly different from each other.
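To make the comparison pipeline concrete, a minimal sketch in Python is given below, using synthetic values; the study itself performed these tests in SPSS 15.0, and the group labels, sample sizes and distribution parameters here are illustrative only, not the study data.

```python
# Illustrative sketch of the group-comparison pipeline (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical right-skewed serum hCT values (pg/ml); sizes mirror the five groups.
groups = {
    "G1 Hashimoto, no PPI": rng.lognormal(1.4, 0.8, 122),
    "G2 goitre + PPI":      rng.lognormal(1.7, 0.8, 73),
    "G3 goitre, no PPI":    rng.lognormal(1.7, 0.8, 118),
    "G4 normal + PPI":      rng.lognormal(1.9, 0.9, 59),
    "G5 control":           rng.lognormal(1.7, 0.8, 121),
}
names, samples = list(groups), list(groups.values())

# One-way ANOVA on the means; Kruskal-Wallis H test on ranks (robust to outliers).
f_stat, p_anova = stats.f_oneway(*samples)
h_stat, p_kw = stats.kruskal(*samples)
print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")

# If the H test is significant, pairwise Mann-Whitney U tests identify which
# groups differ; a Bonferroni factor (10 pairs for 5 groups) guards against
# inflation from multiple comparisons.
if p_kw < 0.05:
    n_pairs = len(names) * (len(names) - 1) // 2
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            _, p = stats.mannwhitneyu(samples[i], samples[j], alternative="two-sided")
            print(f"{names[i]} vs {names[j]}: p = {p:.4f} "
                  f"(Bonferroni-adjusted {min(1.0, p * n_pairs):.4f})")
```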
The mean serum hCT concentration was 6.385 pg/ml (± 5.7845 pg/ml) for all subjects. A comparison of serum hCT levels between different age groups did not reveal remarkable differences. A total of 191 patients had a nodular goitre. Ultrasound detected multiple nodules in 156 patients (82%) and a solitary nodule in 35 patients (18%). The mean diameter of the dominant nodule in the patient group with multiple nodules was 19 ± 4 mm (range: 7-34 mm). The mean diameter of the nodules in the patient group with solitary nodules was 22 ± 3 mm (range: 10-42 mm). Influence of time of blood sample analysis Statistically, there was a mean difference of 1.40 pg/ml between the levels measured immediately and after storage for two hours at room temperature, and of 1.42 pg/ml between the levels measured immediately and after storage for four hours. The mean difference between the levels measured after two hours and after four hours was as low as 0.023 pg/ml. With p-values of 0.749 and 0.726, respectively, the mean differences of 1.40 pg/ml and 1.42 pg/ml were not statistically significant. The mean difference of the measured hCT concentrations was below the functional assay sensitivity of approximately 1.5 pg/ml specified by the manufacturer. It should be noted that several calcitonin levels that were mildly elevated during the initial measurement (>7 pg/ml, n = 7) decreased considerably. This difference, however, was not significant. Influence of gender The male subjects had significantly higher mean concentrations (p = 0.0001) and median concentrations of serum hCT (p = 0.0001) than the female subjects. In male subjects (n = 406) we measured mean serum hCT concentrations of 6.876 pg/ml (± 5.85 pg/ml) and median serum hCT concentrations of 6.055 pg/ml. In female subjects the mean serum hCT concentration was 4.094 pg/ml (± 4.91 pg/ml) and the median serum hCT concentration 2.720 pg/ml. Table 1 provides an overview of the mean and median serum hCT concentrations for the five patient groups. There were significant differences between both the mean (p = 0.012) and the median concentrations (p = 0.01) of the patient groups. The group of patients with Hashimoto's thyroiditis showed the lowest mean concentration, i.e. 5.368 pg/ml (± 5 pg/ml), and the lowest median concentration, i.e. 4.12 pg/ml. By contrast, the group of patients with a normal thyroid and PPI therapy had the highest mean concentration, i.e. 8.672 pg/ml (± 9.447 pg/ml), and the highest median concentration, i.e. 6.1 pg/ml. Influence of Hashimoto's thyroiditis, nodular goitre and PPI therapy An analysis of the mean concentrations for the different patient groups revealed no significant differences between Groups 1 to 4 and Group 5 (control subjects with a normal thyroid and no PPI therapy) (Figure 1, Table 1). The only significant difference (p = 0.004) was found between the mean concentrations for patients with Hashimoto's thyroiditis (Group 1) and patients with a healthy thyroid and PPI therapy (Group 4). The mean concentration for Group 1 was 5.368 pg/ml (± 5 pg/ml) and was thus significantly lower than that for Group 4, which was 8.672 pg/ml (± 9.447 pg/ml). All other differences between the mean concentrations for the various groups were not significant. An analysis of the median concentrations showed no significant differences between Groups 1 to 4 and Group 5 (subjects with a normal thyroid and no PPI therapy) (Figure 1, Table 1). Significant differences, however, were found between the group of patients with Hashimoto's thyroiditis and all other groups with the exception of the control group (normal thyroid without PPI therapy).
Patients with Hashimoto's thyroiditis showed a significantly lower median serum hCT concentration than Group 2 patients (nodular goitre with PPI therapy) (p = 0.014), Group 3 patients (nodular goitre without PPI therapy) (p = 0.012), and Group 4 patients (normal thyroid and PPI therapy) (p = 0.001). There were no further significant differences between the medians for the patient groups. The maximum difference between median concentrations, however, was 1.98 pg/ml and was thus only slightly above the functional assay sensitivity of approximately 1.5 pg/ml as specified by the manufacturer. The box plots in Figure 1 show a high number of outliers for Group 4 (normal thyroid and PPI therapy) that resulted in a just significant difference between the means and medians of Groups 1 and 4 (Figure 1) and an increased standard deviation for Group 4. Discussion Since even very small MTCs tend to metastasise [27] and prognosis depends on early diagnosis and treatment [6,28,29], which consists of thyroidectomy and lymphadenectomy [30], early detection and management play an important role despite the low prevalence of MTC. The prospects for cure are excellent after surgery if treatment is instituted at an early stage, i.e. before the MTC has metastasised. A 10-year survival rate of 97.7% has been reported [31]. Because serum hCT measurement by immunoassay technology using two monoclonal antibodies is highly sensitive and specific, routine hCT measurement has been suggested for the assessment of patients with nodular thyroid disease by major authorities in Europe [2,20,32,33] and the United States [34,35]. Yet it is not routinely performed in most cases. In a survey by Bennebaek et al., serum hCT measurement was used in only 43% of all cases of nodular thyroid disease [36]. The routine use of serum hCT screening in patients with nodular thyroid disease is thus still under debate [5]. Several authors do not recommend routine use because of the high prevalence of thyroid nodules and the rarity of MTC. Routine hCT measurement has no general acceptance in the US [37]. The expert panel of the ATA (American Thyroid Association) therefore could not recommend either for or against the routine measurement of serum calcitonin in 2009 (recommendation rating: I) [38]. Although the previous AACE/AME guidelines did not endorse the routine measurement of hCT, the revised 2010 guidelines favour, but do not recommend, routine hCT testing: "measurement of basal serum calcitonin level may be useful in the initial evaluation of thyroid nodules" [39]. The European Thyroid Association (ETA) and the German Society of Endocrinology (DGE) recommend routine hCT screening in patients with nodular thyroid disease [14,15,30]. Patients with clinically apparent MTC usually have serum hCT levels that are 10 to 100 times higher than normal. Markedly elevated hCT levels are thus indicative of MTC. At postoperative follow-up, such levels may suggest a recurrence or untreated metastases [28]. Patients with mildly or moderately elevated hCT levels (not exceeding 100 pg/ml) are difficult to evaluate, especially since the literature reports a number of factors that can influence serum calcitonin concentrations [3,8,15,18]. Hypercalcitoninaemia can occur with a broad spectrum of conditions. An elevation of calcitonin levels can be found not only in patients with MTC but also in patients with C-cell hyperplasia (CCH), which is difficult to differentiate from MTC and may precede it.
In addition, up to 22% of patients with renal failure present with markedly elevated serum hCT concentrations [27]. In the literature, moderate elevations in serum hCT levels (not exceeding 100 pg/ml) have also been reported in patients receiving proton pump inhibitor (PPI) therapy [16,17,40] as well as in patients with Hashimoto's thyroiditis [18,19]. Likewise, hypercalciuria [41], paraneoplastic syndromes [15], and chronic alcoholism [42] were found to induce an increase in hCT levels. By contrast, there is no evidence to support earlier reports that elevated hCT levels are caused by medicines containing calcitonin or salmon calcitonin [15,26], or by elevated procalcitonin levels associated with bacterial infections [26]: modern, highly specific and sensitive assays without cross-reactivities were found to be largely insensitive to these influences. The interpretation of mildly or moderately elevated serum hCT concentrations always requires that the risk of surgery for a benign condition be weighed against the risk of missing an MTC. For this reason, the diagnostic role of serum hCT concentrations is a matter of debate [20]. When the test kit (Calcitonin IRMA magnum) was used in this study to compare immediate and delayed analyses of hCT levels, there were no significant differences in the results when low levels within normal limits were measured. A delayed analysis of blood samples with hCT levels that were initially elevated but still within normal limits led to a few lower values after two hours at room temperature. This suggests that falsely low results can indeed be produced when hCT concentrations are elevated and analysis is delayed. Accordingly, valid results can only be obtained if the processing steps and times specified by the manufacturer of the assay are strictly observed [26]. The present results confirm the finding that men have significantly higher serum hCT concentrations than women. Saller et al. and Vierhapper et al., too, reported gender-specific differences and measured higher basal serum hCT concentrations in men [2,12]. According to the manufacturer of the immunoradiometric assay used in this study, normal calcitonin levels in the serum of healthy persons range from 0 pg/ml to 15 pg/ml in men and from 0 pg/ml to 10 pg/ml in women [26]. Elevated serum hCT concentrations in women must receive particular attention since a medullary thyroid carcinoma is the underlying cause in approximately 80% of the cases [2]. By contrast, moderately elevated serum hCT concentrations in men are the result of C-cell hyperplasia in up to 80% of the cases [2]. (Figure 1 caption: Box plots displaying the 25th and 75th percentiles, the medians, and the whiskers extending from the 2.5th to the 97.5th percentile for the different groups of patients; O = increase, X = outlier.) The maximum difference between mean levels was 3.304 pg/ml (maximum standard deviation: 9.4468 pg/ml). This is mainly attributable to the considerable number of outliers observed for Group 4 (healthy thyroid and PPI therapy). When a statistical analysis of the medians was performed in order to minimise the influence of outliers, the maximum between-group difference was 1.98 pg/ml. This difference was slightly above the functional assay sensitivity (approximately 1.5 pg/ml) specified by the manufacturer. Unlike Schütz et al. and Karanikas et al., we did not observe an elevation of serum hCT concentrations in patients with Hashimoto's thyroiditis [18,19].
The mean and median concentrations of patients with Hashimoto's thyroiditis were not significantly different from those obtained for the control group. Compared with the other groups, patients with Hashimoto's thyroiditis in fact had the lowest mean and median concentrations. The findings presented here show that the serum hCT concentrations in our patient population were uninfluenced by the presence or absence of Hashimoto's thyroiditis. Elevated serum hCT levels are therefore not explained by Hashimoto's thyroiditis and generally require special diagnostic attention. Likewise, neither the mean nor the median serum hCT concentrations of patients with nodular goitre were significantly different from those of the control group, regardless of whether the patients received PPI therapy. For this reason, elevated serum hCT levels were not attributable to nodular goitre and require further diagnostic evaluation. The literature reports mild or moderate increases in serum hCT concentrations after short periods of treatment with PPIs [16,17]. There were considerable differences in these increases. Vitale et al. found, for example, that gastrin responsiveness to omeprazole showed great variability [17]. This can be explained by the effects of gastrin. In the present study, there were no significant differences between patients who regularly took PPIs and control subjects with a normal thyroid who did not take PPIs. The regular use of PPIs may lead to a habituation effect so that calcitonin levels are no longer elevated in response to gastrin levels. The regular use of PPIs is thus not associated with an increase in serum hCT concentrations, irrespective of whether patients have a normal thyroid or present with nodular goitre. It is interesting to note, however, that patients with a healthy thyroid and PPI therapy showed an increased number of outliers. Nevertheless, an elevation of serum hCT concentrations is not necessarily attributable to the regular use of PPIs. The present study emphasises the low susceptibility of calcitonin screening to errors. Contrary to some authors, the study suggests that an elevation of hCT levels cannot be explained by conditions such as Hashimoto's thyroiditis, nodular goitre or PPI therapy [15,16,18]. When patients have hCT levels higher than the gender-specific upper limits, the underlying cause must be thoroughly investigated. Technical problems must be ruled out. Where appropriate, a different assay should be used to perform a second measurement in order to confirm the presence of hypercalcitoninaemia [15]. An intravenous calcium stimulation test or a pentagastrin stimulation test should be performed in those cases [30]. A marked increase in serum hCT levels to ten times the normal level after pentagastrin or calcium stimulation is clear evidence of MTC and is an indication for thyroidectomy [3,15,30]. This approach allows the vast majority of medullary thyroid carcinomas to be detected and treated in time. Although this implies that a notable number of patients with C-cell hyperplasia, especially male patients, are likely to undergo surgery, the poor prognosis of metastatic MTC justifies this approach.
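As a compact restatement of the decision rule just outlined, the sketch below encodes the gender-specific upper reference limits of the assay used here and the functional sensitivity quoted by the manufacturer; the helper names are ours, and the snippet is an illustration rather than a clinical algorithm.

```python
# Gender-specific upper reference limits of the IRMA used in this study (pg/ml)
# and the functional assay sensitivity quoted by the manufacturer.
UPPER_LIMIT = {"male": 15.0, "female": 10.0}
FUNCTIONAL_SENSITIVITY = 1.5  # pg/ml

def exceeds_reference(hct_pg_ml: float, sex: str) -> bool:
    """True if the measured hCT exceeds the gender-specific upper limit;
    such values warrant a repeat measurement (ideally with a second assay)
    and, where confirmed, a calcium or pentagastrin stimulation test."""
    return hct_pg_ml > UPPER_LIMIT[sex]

def is_meaningful_difference(a_pg_ml: float, b_pg_ml: float) -> bool:
    """Differences below the functional sensitivity (~1.5 pg/ml) cannot be
    distinguished from measurement noise and should not be over-interpreted."""
    return abs(a_pg_ml - b_pg_ml) > FUNCTIONAL_SENSITIVITY

print(exceeds_reference(12.0, "female"))   # True: above the 10 pg/ml limit
print(is_meaningful_difference(6.1, 5.4))  # False: within assay noise
```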
Limitations The results of laboratory tests might be affected by the molecular heterogeneity of calcitonin. Serum calcitonin concentrations can therefore vary because different assays use antisera that recognise different epitopes of the calcitonin molecule. At present, two-site immunoassays are commonly used. These tests combine monoclonal antibodies against regions that are unique to the mature form of the calcitonin molecule. Radioisotopic (IRMA) or luminescent (ILMA) labelling is currently regarded as the most accurate [3,43]. In this study, an IRMA was used. There might thus be limitations in transferring the results presented here to results obtained with other assays and labelling methods. The strength of the results presented here is limited by the number of subjects. Furthermore, the men-to-women ratio in this study is 5:1, whereas worldwide the prevalence of thyroid diseases, in particular Hashimoto's thyroiditis, is higher in women than in men. Further studies are therefore required to confirm the findings. Conclusions Our study helps clarify the role of mildly to moderately increased calcitonin levels. The presence of Hashimoto's thyroiditis, nodular goitre or the regular use of PPIs did not significantly influence the measured calcitonin concentrations. As a result of these findings, every above-normal increase in serum hCT levels requires particular attention and careful evaluation, since an increased production of hCT should always raise the suspicion of medullary thyroid carcinoma.
High Viral Fitness during Acute HIV-1 Infection Several clinical studies have shown that, with respect to disease progression, HIV-1 isolates that are less fit are also less pathogenic. The aim of the present study was to investigate the relationship between viral fitness and control of viral load (VL) in acute and early HIV-1 infection. Samples were obtained from subjects participating in two clinical studies. In the PULSE study, antiretroviral therapy (ART) was initiated before, or no later than six months following, seroconversion. Subjects then underwent multiple structured treatment interruptions (STIs). The PHAEDRA study enrolled and monitored a cohort of individuals with documented evidence of primary infection. The subset chosen were individuals identified no later than 12 months following seroconversion to HIV-1 who were not receiving ART. The relative fitness of primary isolates obtained from study participants was investigated ex vivo. Viral DNA production was quantified using a novel real time PCR assay. Following intermittent ART, the fitness of isolates obtained from 5 of 6 PULSE subjects decreased over time. In contrast, in the absence of ART the fitness of paired isolates obtained from 7 of 9 PHAEDRA subjects increased over time. However, viral fitness did not correlate with plasma VL. Most unexpected was the high relative fitness of isolates obtained at Baseline from PULSE subjects, before initiating ART. It is widely thought that the fitness of strains present during the acute phase is low relative to strains present during chronic HIV-1 infection, due to the bottleneck imposed upon transmission. The results of this study provide evidence that the relative fitness of strains present during acute HIV-1 infection may be higher than previously thought. They further suggest that viral fitness may represent an important clinical parameter to be considered when deciding whether to initiate ART during early HIV-1 infection. Introduction HIV-1 exists within the host as a swarm of genetically related strains, termed quasispecies [1]. The heterogeneity of the quasispecies arises largely as a result of the highly erroneous reverse transcription process [2]. Combined with the rapid rate of virion production (between 10^8 and 10^9 virions per day) and the large number of infected cells (10^7 to 10^8), the result is a highly diverse HIV-1 population [3,4,5]. Additionally, recombination between distinct strains within a host can also occur, further increasing diversity within the virus population [1,6]. The inherent genetic diversity of HIV-1 facilitates rapid evolution and adaptation to a given or changing environment within the infected host, referred to as viral fitness [6,7]. Adaptation of HIV-1 involves migration and dissemination throughout the host, escape from adaptive and innate immune responses, and escape from antiretroviral drug pressure [6]. Fitness is therefore dependent upon viral and host factors, and has been associated with HIV-1 disease progression in individuals with chronic HIV-1 infection [6,8,9]. It is thought that individuals harbouring virus isolates that are attenuated or replicate poorly are able to control virus replication and delay disease progression compared with individuals infected with rapidly replicating virus isolates. A correlation between poor ex vivo replication and VL suppression was observed following analysis of individuals infected with a nef/LTR attenuated strain [9,10,11,12,13,14]. In the findings of Trkola et al.
(2003), viral fitness of isolates obtained prior to initiation of ART strongly correlated with the degree of VL rebound following treatment cessation in a group of 20 individuals with chronic HIV-1 infection [8]. A strong correlation between ex vivo viral fitness and disease progression was demonstrated following analysis of virus isolates obtained from three well-characterised long-term survivors (LTS) of HIV-1 infection and three individuals with chronic, progressive HIV-1 infection [15]. Similarly, Campbell et al. (2003) reported a strong linear relationship between HIV-1 replication ex vivo and plasma VL for 12 individuals with chronic HIV-1 infection [16]. Collectively, these observations suggest a correlation between ex vivo viral fitness and clinical outcome in chronic HIV-1 disease [17]. Little is known regarding viral fitness during the acute phase of infection. From what is known, the fitness of isolates present during acute HIV-1 infection is thought to be low relative to isolates present at later stages of infection, due to the significant genetic bottleneck imposed upon transmission [1,18]. Indeed, findings from two studies investigating founder viruses and viral diversification in acute HIV-1 infection revealed that in the majority of individuals investigated, infection occurred as a result of transmission or expansion of a single founder virus [19,20]. The genetic properties required for efficient transmission may differ from those required for effective establishment and dissemination of HIV-1 infection throughout the new host. As a result, the adaptive potential of transmitted strains may be reduced [1]. To examine the relationship between ex vivo viral fitness and control of VL in the acute or early chronic stage of HIV-1 infection, we investigated viral strains obtained from participants of two clinical cohorts [21,22]. Relative viral fitness was assessed using a highly sensitive, quantitative real time PCR (QPCR) assay to measure production of total HIV-1 DNA. Total HIV-1 DNA production can be detected as early as 3 h post-infection ex vivo, preceding production of integrated and circular forms [23]. Hence, total HIV-1 DNA production was thought to represent a sensitive, early and reliable marker with which to assess the relative viral fitness of the isolates investigated in this study. We found that ex vivo viral replicative fitness did not correlate with coincident plasma VL in individuals in the acute and early chronic stages of HIV-1 infection. Surprisingly, the fitness of isolates obtained from individuals prior to, or immediately following, seroconversion to HIV-1 was equal to or greater than that of isolates obtained from ART-naïve individuals with early, chronic HIV-1 infection. The results of this study suggest that despite the genetic bottleneck occurring upon transmission of HIV-1, the replication capacity of transmitted strains is not necessarily reduced. As viral pathogenicity has been linked to fitness, the findings of this study also suggest that the pathogenicity of isolates present during acute HIV-1 infection may be higher than previously thought, perhaps providing further evidence for the initiation of ART during this phase of HIV-1 infection. Patients Plasma samples were obtained from 20 of 60 participants of the PULSE study [21] (Table S1). The PULSE study was designed to investigate whether individuals with acute HIV-1 infection could suppress HIV-1 replication following multiple structured treatment interruptions (STIs) to ART.
Briefly, the PULSE study consisted of four phases: A, B, C and D. Baseline plasma samples were collected from subjects upon enrolment into the study, prior to initiation of ART (Phase A). Subjects received ART [stavudine, lamivudine, and ritonavir-boosted indinavir, with randomisation to hydroxyurea (HU) or not] until the plasma VL had decreased to <50 RNA copies/ml for three consecutive months. Patients selected were stratified to ensure a balance of acute or early primary HIV-1 infection (PHI) with or without HU [21]. Once the VL was contained below detection in Phase A, subjects underwent a carefully monitored STI in Phase B. Subjects remained off ART as long as the VL remained below 5 000 RNA copies/ml. Once the VL increased above 5 000 RNA copies/ml, ART was reinitiated as Phase C. Treatment interruption (Phase B) followed by reinitiation of ART (Phase C) occurred a maximum of three times for each subject, prior to entry into Phase D. Phase D was a follow-up phase, a period of clinical monitoring following completion of the mandated treatment interruptions [21].
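Stated operationally, the phase logic amounts to the schematic sketch below (thresholds as quoted above; the function and its bookkeeping are an illustrative simplification written for this text, not study software).

```python
# Schematic of the PULSE phase transitions; VL values are in RNA copies/ml.
SUPPRESSION_LIMIT = 50       # Phase A/C target, three consecutive monthly values
RESTART_THRESHOLD = 5_000    # Phase B -> Phase C trigger
MAX_INTERRUPTIONS = 3        # B/C cycles permitted before follow-up (Phase D)

def next_phase(phase: str, vl_history: list, interruptions: int) -> str:
    suppressed = (len(vl_history) >= 3
                  and all(v < SUPPRESSION_LIMIT for v in vl_history[-3:]))
    if phase in ("A", "C"):
        # On ART: interrupt once the VL has been <50 copies/ml for three
        # consecutive months, unless the mandated interruptions are exhausted.
        if suppressed:
            return "B" if interruptions < MAX_INTERRUPTIONS else "D"
        return phase
    if phase == "B":
        # Off ART: restart therapy once the VL rises above 5 000 copies/ml.
        return "C" if vl_history[-1] > RESTART_THRESHOLD else "B"
    return "D"  # follow-up phase
```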
Seventeen participants of the PHAEDRA study were investigated in parallel with the PULSE study subjects (Table S2). The PHAEDRA study was a natural history cohort study in which patients could elect to be treated or not. It was established to monitor immunological and virological characteristics of individuals with acute and early HIV-1 infection. Documented acquisition of HIV within the past 12 months was the criterion for entry. This particular substudy was restricted to a cohort of patients who had elected not to receive ART. All of these participants had seroconverted to HIV-1 at enrolment. Samples were collected at Baseline and at 24, 36 and 52 weeks thereafter. Seroconversion for both cohorts was defined according to the stages described by Fiebig and colleagues [24] (Tables S1 and S2). For the subjects from whom virus was successfully isolated and further studied, the PULSE subjects at Baseline had a median Fiebig stage of 4, with a mean VL of 1 383 342 RNA copies/ml and a mean CD4 T cell count of 533.5 cells/µl. At Baseline, the median Fiebig stage was 6 for the PHAEDRA subjects, with a mean VL of 159 286 RNA copies/ml and a mean CD4 cell count of 720.7 cells/µl (Tables S1 and S2). Plasma samples were stored at −80°C, and patient PBMCs in liquid nitrogen, until required. Research ethics approval was given by the St Vincent's Hospital, Sydney, St Vincent's Health, Melbourne and University of New South Wales Research Ethics Committees. All participants signed an informed consent form before study entry. Cells Peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation from buffy packs collected from healthy, HIV-1 seronegative individuals, obtained from the Australian Red Cross Blood Service (ARCBS, Melbourne, Australia), as described [25]. Cells were maintained in RF-10 medium (RPMI-1640 medium supplemented with 10% [v/v] heat-inactivated foetal bovine serum, 0.03 mg/ml L-glutamine, 100 U/ml penicillin and 100 µg/ml streptomycin), and activated with 10 µg/ml of phytohemagglutinin (PHA) for 3 days prior to infection with primary HIV-1 strains. Replication of primary isolates can vary considerably in PBMCs from different donors [26]. To minimise the impact of donor variability, all donor PBMCs used for virus isolation and viral fitness experiments were screened against a diverse panel of primary HIV-1 isolates to determine permissiveness to infection with HIV-1, prior to use. Cells were selected for use in the fitness assay based on their ability to support replication of a genetically diverse panel of primary HIV-1 isolates [9]. The level of CD4 expression on the surface of PBMCs capable of supporting replication of genetically diverse primary HIV-1 strains was significantly higher than on PBMCs that could not (Pate and McPhee, unpublished). To further minimise the effects of donor variability, pooled preferred PHA-PBMCs from two separate HIV-1 negative donors were used for all experiments. Viruses The reference isolate HIV-1 MBC925 was isolated from PBMCs collected from an AIDS patient, and characterised as described [27]. This highly pathogenic, clade B, CCR5-using primary isolate was selected as it had been observed to replicate efficiently and reproducibly in PHA-PBMCs (McPhee, D.A., unpublished) [27]. The use of HIV-1 MBC925 also enabled a direct comparison between the fitness of isolates present during acute infection and that of an isolate obtained from an individual with advanced disease. Virus isolation was attempted from 36 and 34 plasma samples collected from PULSE and PHAEDRA subjects, respectively, by centrifugation over a 20% (w/v) sucrose cushion at 45 000 × g for 1 h. Plasma was preferred as it best represents the circulating quasispecies. The pelleted virus was resuspended in IL-2 medium (RF-10 medium containing 10 U/ml IL-2 and 12 mM HEPES) containing 1×10^7 PHA-PBMCs and cultured for 14 days [25]. Virus production was analysed by measurement of cell-free reverse transcriptase (RT) activity or p24 antigen production. Virus isolation was attempted from plasma collected at Baseline, prior to the initiation of ART, from all PULSE subjects, and from any additional, available Phase B (STI) plasma sample with a VL ≥5 000 RNA copies/ml. Coincident plasma VL measurements ranged from 260 to 7 500 000 RNA copies/ml (Table S1). Isolates were successfully obtained from 19 of the 36 plasma samples: 15 from Baseline and 4 from plasma collected subsequent to Baseline. A strong correlation between plasma virus isolation from PULSE subjects and a high coincident VL was observed. Virus isolation was unsuccessful from plasma samples with a VL <153 000 RNA copies/ml (Table S1). Virus isolation was attempted from two plasma samples obtained from each PHAEDRA subject: a sample collected at Baseline and a sample collected at either week 24, 36 or 52 subsequent to Baseline. Two sequential isolates were successfully obtained from 12 of the 17 PHAEDRA subjects (Table S2). Only one isolate, obtained from plasma collected at Baseline, was obtained from an additional PHAEDRA subject (data not shown). Successful virus isolation from plasma obtained from PHAEDRA subjects did not correlate with plasma VL (Table S2). A total of 25 viruses were isolated from PHAEDRA cohort members (Table S2). The relative fitness of 18 of the 25 isolates was subsequently investigated in this study. Virus isolation was attempted from cryopreserved PBMCs available from a subset of PULSE subjects, where plasma was unavailable or where virus isolation from plasma was unsuccessful, using co-culture with preferred PBMCs as described above [9]. After recovery from storage in liquid nitrogen, the viability of all PBMCs collected from PULSE subjects and subsequently used for co-culture was ≥70% (data not shown). PBMCs were available from two post-Baseline time-points for six of 10 PULSE subjects, and from one post-Baseline time-point for a further four subjects, a total of 16 samples.
Following co-culture, eight additional isolates were successfully obtained from eight PULSE subjects. There was no correlation between VL and successful virus isolation from PBMCs (Table S1). A total of 28 isolates were obtained from 16 of 20 PULSE subjects; the relative fitness of 24 of the 28 isolates was subsequently investigated. Parallel infection assays A standardised input of 600 pg of p24 antigen of each primary or reference isolate was incubated with 2×10^5 PHA-PBMCs for 2 h, in triplicate. Isolates were minimally passaged in an attempt to ensure that they reflected the replication-competent virus present in vivo [28,29]. Where 600 pg of p24 could not be achieved, undiluted infection supernatant was added. Cells were washed in IL-2 medium and transferred to 96-well plates at 2×10^5 cells/well in a final volume of 200 µl, achieved using IL-2 medium. Cells were harvested at various time-points between 0 and 158 h post-infection. Following harvest, cells were washed and resuspended in 200 µl of TE buffer. To lyse infected cells, 300 µl of MagNA Pure lysis buffer (Roche, Castle Hills, NSW, Australia) was added and the cells incubated for 15 min at room temperature. Lysed cells were stored at −80°C until required for DNA extraction. Harvested supernatant was stored at −20°C and virus production analysed by measurement of cell-free RT activity or p24 antigen production. DNA was extracted from HIV-1 infected PHA-PBMCs using the Invitrogen Easy DNA kit as per the manufacturer's instructions, with the exception of the initial cell lysis step. Measurement of total HIV-1 DNA production to estimate relative viral fitness To evaluate relative viral fitness, a quantitative real time PCR (QPCR) assay was developed to measure production of total HIV-1 DNA (extrachromosomal, integrated and 2-LTR circular forms) over a period of between 96 and 158 h post-infection. Published primer and probe sequences targeting a highly conserved region of the 5′-LTR and the human albumin gene were used (Figure 1) [30,31]. Prior to use, assay sensitivity and intra- and inter-assay variation were extensively tested (Text S1; Tables S3 and S4). To detect total HIV-1 DNA, the real time PCR reaction mix contained 5 µl of DNA in a final volume of 20 µl; the mix contained QPCR Probe Mastermix (Integrated Sciences, Australia), 100 nM dual-labelled probe, 300 nM of HIV-1 LTR forward and reverse primers, and nuclease-free water (NFW). To detect albumin DNA, the real time PCR reaction mix likewise contained 5 µl of DNA in a final volume of 20 µl, with QPCR Probe Mastermix, 100 nM dual-labelled probe, 300 nM albumin forward and reverse primers and NFW. All DNA amplifications were performed using a Stratagene MX3000P real time PCR machine (Integrated Sciences, Australia) with the following conditions: 1 cycle at 95°C for 10 min; 40 cycles of 95°C for 30 s and 60°C for 1 min. Quantification and calculation of relative viral fitness scores As each strain was tested in triplicate, the mean Ct for each time-point was calculated following QPCR analysis. Copies of target DNA were quantified by converting the mean Ct value generated for each sample to DNA copies using the standard curve generated by the MX3000P software from the quantified DNA standards included in each run. Copies of total HIV-1 cDNA were calculated per 200 000 cells, the number of PHA-PBMCs in each sample. Using the copies of HIV-1 DNA measured for each strain, a relative viral fitness score was calculated for each isolate.
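As a rough illustration of this quantification step, the sketch below fits a standard curve to invented standards and converts a mean Ct to copies; in the study this conversion was performed by the MX3000P software, so the numbers here are purely for demonstration.

```python
# Sketch of Ct-to-copies conversion via a standard curve (invented standards).
import numpy as np

std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])        # known input copies
std_ct     = np.array([36.1, 32.8, 29.4, 26.0, 22.7, 19.3])  # measured Ct values

# Ct is linear in log10(copies): Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def ct_to_copies(mean_ct: float) -> float:
    """Convert a mean Ct (triplicate wells averaged first) to DNA copies."""
    return 10 ** ((mean_ct - intercept) / slope)

# Each sample contained 200 000 PHA-PBMCs, so a well's copy number is already
# the 'copies per 200 000 cells' figure reported in the text.
print(f"{ct_to_copies(28.0):,.0f} copies per 200 000 cells")
```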
Total HIV-1 DNA production at 96 h post-infection was measured for all isolates tested; hence, calculation of viral fitness scores at this time-point enabled a direct comparison between the relative fitness of all PULSE and PHAEDRA isolates tested. A second score at the final time-point tested (either 110 or 158 h post-infection) enabled analysis of DNA production for those isolates whose replication was not detected at 96 h post-infection. As the viral fitness measured in this study was relative to that of the pathogenic reference strain HIV-1 MBC925, fitness scores for test strains were calculated relative to HIV-1 DNA production by HIV-1 MBC925 at coincident time-points. To calculate viral fitness scores, the copies of DNA produced by a test strain were divided by the copies of DNA produced by the reference strain at a coincident time-point post-infection [Fitness score = HIV-1 DNA_T / HIV-1 DNA_R], where HIV-1 DNA_T and HIV-1 DNA_R correspond to the copies of HIV-1 DNA produced by the test and reference strains, respectively. Fitness scores throughout the text and figures are represented as a fraction of 1. Isolates with a relative fitness score of ≥0.1 were classified as fit; isolates with a relative viral fitness score of 0.01 to 0.1 were classified as moderately fit; and the relative fitness of isolates with a score of <0.01 was classified as low.
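The score and cut-offs just defined reduce to a small helper like the one below (a sketch; the function names and example numbers are ours).

```python
# Relative fitness score: test-strain DNA copies divided by reference-strain
# (HIV-1 MBC925) DNA copies at a coincident time-point post-infection.
def fitness_score(test_copies: float, ref_copies: float) -> float:
    return test_copies / ref_copies

def classify(score: float) -> str:
    """Cut-offs as given in the text: >=0.1 fit, 0.01-0.1 moderately fit, <0.01 low."""
    if score >= 0.1:
        return "fit"
    if score >= 0.01:
        return "moderately fit"
    return "low"

# Example: 5 000 copies against 80 000 reference copies gives 0.0625 -> moderately fit.
s = fitness_score(5_000, 80_000)
print(f"score = {s:.4f} -> {classify(s)}")
```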
Results Reduced relative fitness of a nef/LTR attenuated virus compared with a primary wild-type HIV-1 strain, HIV-1 MBC925 Whether primary HIV-1 isolates of variable replicative fitness could be distinguished on the basis of total HIV-1 DNA production was investigated using HIV-1 MBC925 and a nef/LTR attenuated isolate, HIV-1 D36III. The HIV-1 D36III isolate, obtained from a long-term non-progressor (LTNP), replicates poorly as a result of deletions/mutations in the nef/LTR region [9,11,32]. Production of viral DNA by both isolates was detected at 4 h post-infection; however, a significant difference in replicative fitness over time was observed (Figure 2). The difference between total HIV-1 DNA produced by HIV-1 MBC925 and HIV-1 D36III at 96 h post-infection was 38.7-fold (Figure 2). It has been observed in studies in our laboratory, and in those by Kim and colleagues, that a single replication cycle takes between 20 and 24 h [33,34]. Hence, several rounds of infection were required to demonstrate differences in the kinetics of total HIV-1 DNA production. A slow/low replication phenotype was observed for the attenuated virus strain, compared with a fast/rapid DNA production profile for the reference virus. Increased DNA production at all time-points tested by HIV-1 MBC925 relative to HIV-1 D36III indicated that the replicative fitness of the reference strain was greater than that of the attenuated isolate. For both virus infections there was an increase followed by a modest decrease between 4 and 12 h post-infection, as observed previously in a study of one-step growth kinetics of HIV-1 [34]. Furthermore, these results indicated that, using the QPCR assay, primary HIV-1 isolates with variable replicative fitness could be readily distinguished on the basis of viral DNA production over several rounds of replication. Decreased replicative fitness from acute to early chronic HIV-1 infection, following treatment with ART (PULSE subjects) The replicative fitness of isolates obtained from 14 PULSE subjects was investigated using the QPCR assay. The reference isolate HIV-1 MBC925 was cultured in parallel with the test isolates in each viral fitness experiment, enabling calculation of a relative fitness score and monitoring of any potential inter-assay variation. Two isolates from different time-points obtained from 6 PULSE subjects, and 8 single isolates obtained from 8 PULSE subjects, were tested (Figure 3). From the viral fitness scores calculated using total HIV-1 DNA production at 158 h post-infection, isolates obtained from PULSE subjects were categorised into three groups: high fitness, moderate fitness and low fitness (Figure 3). The seven isolates classified as highly fit were all obtained from plasma collected at Baseline, during acute HIV-1 infection and prior to initiation of ART. For three Baseline isolates, replication was near equivalent to that of the reference strain (Figure 3). Four of the 8 isolates classified as moderately fit were obtained from plasma collected at Baseline; four were obtained from plasma collected during STI, subsequent to Baseline. The relative fitness of eight isolates was classified as low, indicating that total HIV-1 DNA production by these isolates was less than 1% of total HIV-1 DNA production by the reference isolate at a coincident time-point. (Figure 3 caption: Shown are clinical and experimental data obtained for the PULSE subjects from whom virus was successfully isolated and subsequently tested using the real time PCR assay. Indicated by the column headings are the subject identification code and seroconversion status at Baseline ('+' indicates the subject had seroconverted, '−' indicates the subject was seronegative, 'w+' indicates that a weak antibody response was detected). Also shown are the number of STIs experienced by the subject, whether VL was suppressed below 5 000 RNA copies/ml upon STI (indicated by 'controller' or 'non-controller'), and the phase of the PULSE study during which the relevant sample was collected. The time (in weeks) post Baseline that the sample was collected, coincident VL and CD4+ T cell counts, and the sample type from which virus was successfully isolated are also shown. Finally, viral fitness scores calculated using DNA production measured at 96 h post-infection ex vivo, and at the final time-point analysed (158 h post-infection), are shown for each isolate. The fitness scores generated for the isolate obtained from subject 3.13 were calculated from total HIV-1 DNA produced at 60 and 72 h post-infection. 'ND' indicates that the specified measurement was not done. doi:10.1371/journal.pone.0012631.g003) Over time, following the initiation of ART, the fitness of isolates obtained from 6 PULSE subjects decreased (Figure 4). Due to the small number of subjects analysed, the decrease observed was not significant (p = 0.14). Furthermore, although decreasing viral fitness coincided with decreasing plasma VL for 4 of the 6 subjects from whom multiple isolates were obtained, overall viral fitness did not correlate with plasma VL for the 14 PULSE subjects investigated (borderline statistical significance, p = 0.051; Figure 5). There was no correlation between CD4+ T cell counts and relative viral fitness for the PULSE subjects investigated (data not shown). Increasing replicative fitness during chronic HIV-1 infection (PHAEDRA subjects) Sequential isolates from 9 of 12 PHAEDRA subjects were analysed using the QPCR assay (Figure 6). Based on relative viral fitness scores calculated using total HIV-1 DNA production at 110 h post-infection, isolates were classified according to the 3 groups used above.
Four isolates were categorised as highly fit, 3 of which were obtained from plasma collected 36 weeks subsequent to Baseline (Figure 6). In contrast, all of the isolates obtained from PULSE subjects that were classified as fit were obtained from plasma collected at Baseline, prior to the initiation of ART (Figure 3). Of the 5 PHAEDRA isolates classified as moderately fit, 3 isolates were obtained from plasma collected 52 weeks subsequent to Baseline, and 2 isolates were obtained from plasma collected at Baseline (Figure 6). The relative fitness of 9 isolates obtained from PHAEDRA subjects was classified as low. Interestingly, 6 of the 9 isolates with low relative fitness were obtained from plasma collected at Baseline, in contrast to the results obtained for the PULSE subjects investigated. Over time, the relative fitness of isolates obtained from seven PHAEDRA subjects increased significantly (p = 0.03; Figure 7). However, viral fitness was not found to correlate with plasma VL following analysis of the 18 isolates obtained from PHAEDRA subjects (Figure 8). In addition, relative viral fitness was not found to correlate with CD4+ T cell counts for the PHAEDRA subjects investigated (data not shown). High relative fitness of isolates from acute infection (PULSE) compared with early chronic HIV-1 infection (PHAEDRA) It is widely believed that, due to a genetic bottleneck occurring upon transmission, the fitness of isolates present during acute infection is low relative to isolates obtained later in infection [6,35,36,37,38,39]. To investigate this, we compared the viral fitness of isolates obtained from Baseline plasma from PULSE subjects with that of isolates collected 36 to 52 weeks post-Baseline from PHAEDRA subjects. The isolate groups were selected to enable a comparison between the relative fitness of viruses present during acute HIV-1 infection, naïve to any selection pressures exerted by ART (PULSE), and that of viruses found during untreated, early chronic infection (PHAEDRA). Relative to isolates obtained from PHAEDRA subjects, replication of the PULSE Baseline isolates was considerably slower, with replication of 57% of isolates not detected by 96 h post-infection. However, between 96 and 110 or 158 h post-infection, total HIV-1 DNA production increased substantially, with replication of 84% of PULSE viruses detected (Figure 3). By comparison, replication of 39% of PHAEDRA post-Baseline isolates was not detected by 110 h post-infection ex vivo (Figure 6). In addition, the increase in total HIV-1 DNA production between 96 and 110 h post-infection for isolates obtained from PHAEDRA subjects was not substantive relative to wild type or to the isolates obtained from PULSE subjects (data not shown). We observed that, overall, PULSE Baseline isolates were slower to establish a productive infection than the PHAEDRA post-Baseline isolates (Figure 9). From this we suggest that the genetic diversity of isolates obtained post-Baseline from the PHAEDRA subjects was greater than that of the PULSE isolates obtained at Baseline, evidenced by their greater relative adaptive ability. However, once infection was established, the amount of HIV-1 DNA produced by the PULSE Baseline isolates was comparable to, or higher than, the amount of viral DNA produced by the PHAEDRA post-Baseline isolates (Figure 9). These findings provide evidence that the relative fitness of isolates present during acute HIV-1 infection may be higher than previously thought.
When viral fitness scores were plotted relative to the stage of seroconversion, the results were even more striking: the most fit viruses were observed during the earliest stage of seroconversion monitored (Figure 10). Conceivably, in vivo viral fitness is compromised as HIV-1 infection progresses, in response to selective immunological pressure on the replicating virus. (Figure 4 caption: Decreasing fitness over time observed following analysis of paired isolates obtained from acute HIV-1 infection subjects, measured using QPCR. Relative viral fitness scores were calculated for isolates obtained from PULSE subjects and represented on a box plot. Only subjects from whom a Baseline isolate and at least one additional isolate (Week 27 to 106) were obtained were included in the analysis (n = 6). Where multiple isolates from additional time-points were obtained, the average of the combined viral fitness scores was used. Shown are viral fitness scores calculated at the final time-point tested ex vivo (158 h; the exception was subject 3.13, tested at 72 h) for paired isolates obtained from 6 PULSE subjects. The box represents the middle 50% of values for the data set; the solid line indicates the median value. The vertical whiskers extending from the box indicate the lowest and highest observed values, respectively. The open circle represents an outlier; the asterisk represents an extreme outlier. The significance of the observed change in viral fitness over time is shown (p = 0.14), calculated using a signed rank test. doi:10.1371/journal.pone.0012631.g004) (Figure 5 caption: Viral fitness did not correlate with VL following analysis of isolates obtained from acute HIV-1 infection subjects. Coincident plasma VL measurements (log10 RNA copies/ml) were plotted against relative viral fitness scores (log10) for 16 isolates, obtained from plasma, from PULSE subjects. The Pearson correlation was rho = 0.496, p = 0.051. doi:10.1371/journal.pone.0012631.g005) Discussion In this study we investigated the relative viral fitness of isolates obtained from individuals with acute and early HIV-1 infection. Temporal changes in relative viral fitness were observed for 6 and 10 subjects participating respectively in the PULSE (acute HIV-1 infection) and PHAEDRA (early HIV-1 infection) studies (Figures 3, 4, 6 and 7). Consistent with the findings of previous studies investigating viral fitness during untreated HIV-1 infection [6,15,36], the relative fitness of paired isolates obtained from 7 PHAEDRA subjects increased significantly over time (p = 0.03; Figure 7). Viral fitness decreased over time following intermittent ART for 5 of the 6 PULSE subjects analysed (Figure 4), an observation that might be expected due to the potential bottleneck imposed by suppressive ART. Most unexpected was the high relative fitness of isolates obtained from PULSE subjects during acute HIV-1 infection, prior to the initiation of ART, compared with isolates obtained from individuals with early chronic HIV-1 infection. Furthermore, total HIV-1 DNA production by several PULSE Baseline isolates was comparable to, or greater than, that of the highly pathogenic primary reference isolate HIV-1 MBC925, obtained from an individual with AIDS (Figure 3) [27]. These findings provide evidence that, despite the bottleneck occurring upon transmission, the relative fitness of isolates present during acute HIV-1 infection may indeed be high. To investigate relative viral fitness, a 'parallel infection assay' was used [17].
Parallel infection assays have been successfully used in other studies to examine replication of primary HIV-1 isolates in primary cell types [35,40,41]. Alternatively, viral fitness can be investigated using a growth competition assay, whereby replication of test and reference strains is compared in the same culture, primarily performed using recombinant viruses [6,15,17,42,43,44,45]. The use of recombinant strains, as in recent studies by Miura et al. [46] and Kong et al. [47] investigating the contribution of specific genes to the fitness of viruses during acute infection, does not permit investigation of the fitness of the circulating viral quasispecies. We used a parallel infection assay to enable investigation of the replicative fitness of strains isolated directly from patient plasma, to maximise the clinical relevance of the results obtained [8]. It is widely accepted that, regardless of the route of HIV-1 infection, the virus encounters an extreme genetic bottleneck upon transmission, resulting in a highly homogenous virus population in the recipient [19,37,38,39,48]. Decreased genetic diversity is thought to activate Muller's ratchet [49]; therefore, the fitness of strains present during acute infection is thought to be low. As 10 of the 20 PULSE individuals investigated had not fully seroconverted to HIV-1 (Table S1), we anticipated that the fitness of viruses isolated from coincident plasma samples would be low. A virus population with highly constrained genetic diversity would not be expected to readily adapt to an environment distinct from that found within the host, such as the ex vivo system used in this study to measure relative viral fitness [6]. However, 7 of the 13 isolates obtained from plasma collected at Baseline from PULSE subjects were classified as highly fit (Figures 3 and 9). Indeed, analogous to the findings of this study, rapidly replicating variants have been identified in similar, smaller studies investigating the fitness of isolates present during acute and early HIV-1 infection [6,36,40]. In the findings by Ferbas et al. (1996), high viral fitness was observed for one individual following analysis of the ex vivo fitness of isolates obtained at the time of peak viremia, but prior to seroconversion [36]. Kong et al. (2008) recently reported that strains with higher replicative fitness with respect to the env gene were vertically transmitted by mothers with chronic HIV-1 infection [47]. (Figure 7 caption: Increasing viral fitness over time observed following analysis of paired isolates obtained from early chronic HIV-1 infection subjects, measured using QPCR. Relative viral fitness scores were calculated for isolates obtained from PHAEDRA subjects and represented on a box plot. Only subjects from whom a Baseline isolate and at least one additional isolate (Week 36 to 52) were obtained were included in the analysis (n = 8). Shown are viral fitness scores calculated at the final time-point tested ex vivo (110 h) for paired isolates obtained from 8 PHAEDRA subjects. The box represents the middle 50% of values for the data set; the solid line indicates the median value. The vertical whiskers extending from the box indicate the lowest and highest observed values, respectively. The asterisk represents an extreme outlier. The significance of the observed change in viral fitness over time is shown (p = 0.03), calculated using a signed rank test. doi:10.1371/journal.pone.0012631.g007) (Figure 8 caption: Viral fitness did not correlate with VL following analysis of isolates obtained from early chronic HIV-1 infection subjects. Coincident plasma VL measurements (log10 RNA copies/ml) were plotted against relative viral fitness scores (log10) for 16 isolates, obtained from plasma, from PHAEDRA subjects. The Pearson correlation was rho = 0.133, p = 0.697. doi:10.1371/journal.pone.0012631.g008)
Combined with the observation of highly fit strains present during acute HIV-1 infection in this study, these results suggest that the bottleneck that occurs upon initial transmission of HIV-1 does not necessarily result in a loss of fitness. The level of relative viral fitness has been linked to the genetic diversity of the viral quasispecies. Kong et al. (2008) reported transmission of multiple virus strains, and Borderia et al. (2010) recently demonstrated a direct correlation between increasing genetic diversity and increasing in vivo viral fitness of clonal populations [47,50]. Troyer et al. (2005) reported a strong correlation between the genetic diversity of the viral quasispecies and ex vivo viral fitness [6]. In our study of therapy-naïve subjects, viral fitness increased over time for 7 of the 9 PHAEDRA subjects investigated. Observations that genetic diversity correlates with viral fitness are certainly not novel; the fitness of an RNA virus population increasing with genetic diversity is described by the Red Queen hypothesis [51]. This has been applied extensively in the field of HIV-1 research [7,40], and is highly relevant given the level of genetic diversity of the viral quasispecies present in infected individuals. Cloning of the env sequences of isolates obtained from PULSE subjects is currently underway, to investigate whether the observed high level of fitness correlated with the genetic diversity of the quasispecies present at Baseline, during acute infection. Following commencement and subsequent interruption of suppressive ART, viral fitness decreased for 5 of 6 PULSE subjects investigated (Figure 3). Analogous to the findings of this study, reduced viral fitness was also observed by Wang et al. (2007) for individuals undergoing STI following initiation of ART during acute infection [52]. Suppressive antiretroviral therapy can result in the development of drug-resistance mutations in the viral quasispecies to evade inhibition, which has been shown to reduce viral fitness [6,21]. Development of drug resistance mutations was not suspected in this study, as VL suppression was observed upon resumption of ART in all PULSE subjects investigated [21]. Instead, analogous to the findings of Wang et al. (2007) [52] and Borderia et al. (2010) [50], decreasing relative viral fitness over time was thought to be a direct result of a genetic bottleneck created by suppressive ART, activating Muller's ratchet [6]. Muller proposed that when genetically diverse populations are randomly reduced, such as during treatment with ART or upon the development of potent immune responses, the overall fitness of the population also decreases [6,40]. The fitness of Baseline isolates obtained from 6 of the 9 PHAEDRA subjects was also classified as low (Figure 6). At Baseline, all PHAEDRA subjects had clearly seroconverted to HIV-1 (Table S2). The observed low relative fitness may have resulted from mutation of the viral quasispecies as a direct result of the development of potent immune responses following seroconversion.
Indeed, escape from targeted immune responses has been observed in similar studies investigating anti-HIV-1 immune responses during early HIV-1 infection [17,53]. The accumulation of escape mutations can incur a high fitness cost to the virus, depending on the genomic location of the mutation [54,55,56]. Indeed, Goonetilleke and colleagues (2009) reported that selection of viral escape mutants, following development of adaptive T-cell responses, occurred rapidly following containment of peak viremia in 4 individuals with acute HIV-1 infection, confirming earlier studies [57,58,59]. However, there was no obvious fitness cost to the viruses studied [59]. Similarly, as relative viral fitness increased subsequent to Baseline for 7 of 9 PHAEDRA subjects investigated in this study, accumulation of deleterious mutations seems unlikely. The development of potent immune responses upon seroconversion, while not as restrictive as suppressive ART, may have created a "wider" bottleneck, limiting but not preventing the expansion and diversification of the viral quasispecies [6]. Consequently, we propose that the increasing fitness subsequent to seroconversion observed for 7 of 9 PHAEDRA subjects occurred as a result of virus evolution and diversification within the host to evade adaptive immune responses [6,7,40,51]. Although the contribution of cellular immune responses to containment of virus replication has not been investigated, we are currently assessing neutralising antibody responses for both the PULSE and the PHAEDRA subjects. There were several limitations to the present study. The use of an ex vivo system, such as that used in this and other studies, does not reflect the sensitivity of the virus to antiretroviral drugs, chemokines or additional inhibitory agents that may affect fitness in vivo. Furthermore, for 6 of the 14 PULSE subjects from whom plasma virus could not be isolated, virus was isolated from PBMC (Figure 3). In addition to the PBMC-derived isolates, for 5 of these 6 subjects virus was also obtained from plasma collected at distinct time-points throughout the study. The fitness of both PBMC- and plasma-derived viruses was subsequently investigated (Figure 3). It has long been understood that HIV-1 can evolve separately in distinct physiological compartments [60,61]. In addition, it is a widely held belief that the current, circulating viral quasispecies are present in the plasma and that cellular reservoirs of HIV-1 contain archived strains. However, the findings of recent studies suggest otherwise [62,63]. Indeed, we observed that the kinetics of HIV-1 DNA production by the PBMC-derived isolates tested in this study were distinct relative to plasma-derived isolates obtained at different time-points from the same PULSE subject (data not shown). Combined, observations of the relative fitness of PULSE and PHAEDRA isolates suggest selection of the fittest virus, or viruses, upon transmission, which progressively become less fit upon development of adaptive immune pressure and/or commencement of antiviral therapy. Further studies to investigate the long-term impact of viral fitness on disease progression are warranted. Miura and colleagues recently reported the attenuated replication capacity of isolates obtained from individuals who became HIV-1 controllers during early infection [46]. In this study, none of the PULSE subjects from whom Baseline isolates with high replicative fitness were obtained controlled HIV-1 replication in the absence of therapy (data not shown).
Although the role of viral fitness in disease progression remains unclear, what is clear from the findings of this study is that the fitness of strains present during acute/early HIV-1 infection can be high. In conclusion, the findings of this study suggest that, despite the transmission bottleneck, transmission of a strain or strains with high relative fitness does occur. Furthermore, these results suggest that viral fitness decreases subsequent to the development of adaptive immune pressure and/or commencement of antiviral therapy. The findings of this study make a substantial contribution towards understanding that the selection process during transmission of HIV-1 from donor to recipient can favour a very fit virus.

Supporting Information

Text S1 Detail of viral fitness QPCR assay validation. Found at: doi:10.1371/journal.pone.0012631.s001 (0.04 MB DOC)

Table S1 Clinical and virus isolation data for PULSE subjects. Shown are the clinical results and the results of attempted virus isolation from plasma or PBMCs obtained from PULSE subjects; a triangle indicates that virus isolation was attempted from the sample indicated. A single asterisk indicates that the sample used for virus isolation was plasma; a double asterisk indicates that virus isolation was attempted from PBMCs when either plasma was not available or virus isolation from plasma was unsuccessful. Shown is the subject identification number followed by the phase of the PULSE study during which the sample was collected. 'A', 'B' and 'C' indicate PULSE study Phases A, B and C. The subsequent number indicates during which of up to three B or C phases sample collection occurred; prefaced by 'W' (weeks), the following number indicates the duration of the specified phase at sample collection. Seroconversion status according to the Fiebig et al. [24] stages, coincident CD4+ T cell counts and plasma VL at the time of sample collection are shown: '>log10 5.88' indicates that VL was above the upper limit of detection and was not quantified. Whether subjects received HU in addition to ART is indicated. Reverse transcriptase and p24 antigen EIA assay results, performed following virus isolation, are also shown: 'ND' indicates that culture supernatant was not tested using the RT assay; 'NQ' indicates that the relevant result for the isolate was above or below the limit of detection for the assay and was not quantified; '−' indicates that virus isolation was attempted but RT activity or p24 antigen were not detected. Found at: doi:10.1371/journal.pone.0012631.s002 (0.32 MB DOC)

Table S2 Clinical and virus isolation data for PHAEDRA subjects. Shown are the clinical results and the results of attempted virus isolation from plasma obtained from PHAEDRA subjects. A single asterisk indicates the plasma sample from which virus isolation was attempted. Indicated by the column headings are the subject identification code, the phase of the PHAEDRA study at which the relevant sample was collected, and seroconversion status according to Fiebig et al. [24]. Coincident CD4+ T cell counts and plasma VL at the time of sample collection are shown: '>log10 5.88' indicates that VL was above the upper limit of detection and was not quantified.
Reverse transcriptase and p24 antigen EIA assay results, performed following virus isolation, are also shown: 'ND' indicates that culture supernatant was not tested using the RT assay; 'NQ' indicates that the relevant result for the isolate was above or below the limit of detection for the assay and was not quantified; '−' indicates that virus isolation was attempted but RT activity or production of p24 antigen was not subsequently detected. Found at: doi:10.1371/journal.pone.0012631.s003 (0.17 MB DOC)

Table S3 Intra-assay variation analysis for the QPCR assay. To examine intra-assay variation, 20 replicates of each HIV-1 (A) and albumin (B) DNA standard were tested in the same run. Data represent the mean Ct value (Mean), standard deviation (SD) and coefficient of variation (COV, expressed as a percentage) for each standard. 'N' indicates the number of replicates detected for each standard. Found at: doi:10.1371/journal.pone.0012631.s004 (0.04 MB DOC)

Table S4 Inter-assay variation analysis. To examine inter-assay variation, five consecutive runs with the HIV-1 (A) and albumin (B) DNA standards were performed. Standards were tested in triplicate within each run. Shown are the mean Ct values obtained for each standard following each of the five independent runs. Data represent the total number of replicates detected (N), mean Ct value (Mean), standard deviation (SD) and coefficient of variation (COV, expressed as a percentage) for each standard. Mean, SD and COV values were calculated using the Ct values obtained for each detected replicate of the specified standard. An HIV-1-negative non-amplification control (NAC), consisting of cellular DNA, was included. 'ND' indicates that the specified sample was not detected. Found at: doi:10.1371/journal.pone.0012631.s005 (0.06 MB DOC)
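As an aside, the Mean, SD and COV statistics reported in Tables S3 and S4 follow their standard definitions; the short sketch below shows the calculation for one standard, using made-up Ct values.

```python
import numpy as np

# Illustrative Ct values for 20 replicates of one DNA standard (made up).
ct = np.array([24.1, 24.3, 23.9, 24.2, 24.0, 24.4, 24.1, 23.8, 24.2, 24.1,
               24.0, 24.3, 24.2, 23.9, 24.1, 24.0, 24.2, 24.3, 24.1, 24.0])

n = ct.size                  # N: number of replicates detected
mean = ct.mean()             # mean Ct value
sd = ct.std(ddof=1)          # sample standard deviation
cov = 100.0 * sd / mean      # coefficient of variation, as a percentage

print(f"N = {n}, Mean = {mean:.2f}, SD = {sd:.2f}, COV = {cov:.2f}%")
```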
Correlation between body fat components and coronary heart disease risk scores

Introduction

Cardiovascular diseases are one of the most important causes of death all over the world. Several factors are known to be strongly associated with coronary heart disease, and body fat is one such factor. Among the components of body fat, visceral adiposity has been proposed to correlate more strongly with coronary heart disease risk. Visceral fat and WHR are linked to the development of glucose intolerance in many populations, including Asian Indians [1][2][3][4][5]. But whether body fat by itself can identify future coronary heart disease risk is not known. There are well-studied and proven scoring systems to identify the future risk of coronary heart disease events in an individual, such as the Framingham score, PROCAM score and Vascular age. Our study is an effort to find out whether the quantity of body fat, the anthropometric parameters indicating body fat, and its components (visceral and tissue fat) can be considered predictors of future coronary heart disease events, similar to the Framingham risk score, PROCAM score and Vascular age.

Material and Methods

The study was conducted after obtaining clearance from the ethics committee of our college, Karnataka Institute of Medical Sciences. After taking written and informed consent, study subjects were evaluated first by clinical examination and later by body fat measurement and blood tests. Our study included 103 patients who were willing to be part of the study. Inclusion criteria: adult patients willing to undergo clinical evaluation, tests for body fat measurement and a blood test for lipid profile estimation. Anthropometric measurements: using a measuring tape, with the subject standing, the waist circumference was measured as the narrowest circumference between the lower costal margin and the iliac crest. The hip circumference was the maximum circumference at the level of the greater trochanter of the femur. The Waist Hip Ratio (WHR) was then calculated. Body fat measurement: body fat was measured by the bio-impedance method using an Omron Karada Scan HBF 361. The study subjects were asked to hold the body fat measuring instrument in a standing position with arms extended. Total body fat, subcutaneous fat and visceral fat as measured by the instrument were noted. This method has been proven to correlate well with body fat analyzed by DEXA 6. Lipid profile test: the serum lipid profile of the study subjects was tested in the morning after an overnight fast of at least 12 hours.
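The anthropometric calculation described above is straightforward; the sketch below is a minimal illustration (the function name and example values are ours, not from the study).

```python
def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """WHR: narrowest waist circumference divided by widest hip circumference."""
    if hip_cm <= 0:
        raise ValueError("hip circumference must be positive")
    return waist_cm / hip_cm

# Example: waist 88 cm, hip 102 cm (illustrative values only)
print(f"WHR = {waist_hip_ratio(88.0, 102.0):.2f}")   # -> WHR = 0.86
```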
Calculation of coronary heart disease risk scores: the Framingham score, PROCAM score and Vascular age were calculated using software after entering the relevant data from history, anthropometric measurements, blood sugar levels and serum lipid profile values. Statistical analysis: the data were analyzed using SPSS. The mean and standard deviation for each continuous variable were calculated separately for males and females. The correlation of the Framingham risk scores, PROCAM scores and Vascular age with the anthropometric data and components of the serum lipid profile was tested by Karl Pearson's correlation coefficient method. The influence of the anthropometric data and components of the serum lipid profile on the Framingham risk scores, PROCAM scores and Vascular age was tested by multivariate regression analysis.

Results and Data Analysis

Our study included 103 patients. The distribution of cardiac risk factors, results of laboratory investigations and the anthropometric data are summarized in Table 1. A high percentage of patients in our study were suffering from diabetes mellitus (44.7% overall), and 35% of the male patients were smokers. The mean systolic blood pressure was in the hypertensive range (141.70 ± 21.75 mmHg) in the whole study group as well as in males and females separately, but the mean diastolic blood pressure (83.66 ± 11.77 mmHg) was within normal limits in the whole study group and in males and females separately. The mean BMI of the whole study group was 26.71 ± 4.12, suggestive of slight overweight; the same was observed in males and females separately. Mean values of total cholesterol (178.39 ± 43.93), HDL (42.86 ± 25.02) and LDL (114.11 ± 33.90) were within normal limits, but triglyceride levels were slightly higher (157.89 ± 69.36). There was no significant gender difference in Vascular age (p = 0.13) or PROCAM scores (p = 0.97), but the Framingham risk score was significantly higher in males (p < 0.0001) (Table 2). The set of independent predictors for each of the dependent variables was determined through stepwise regression analyses. In these multivariate models, visceral fat remained the strongest correlate of each of the coronary heart disease risk scores, and WHR was the next most significant independent predictor of these outcomes.
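A minimal sketch of the statistical workflow described in the Methods is shown below. The file name and column names are placeholders for whatever the dataset actually uses, and statsmodels' OLS stands in for SPSS's regression procedure.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Placeholder file/column names -- adapt to the actual dataset.
df = pd.read_csv("study_data.csv")
fat_measures = ["bmi", "whr", "total_fat", "visceral_fat", "tissue_fat"]

# Pearson correlation of each fat measure with each risk score
for score in ["framingham", "procam", "vascular_age"]:
    for fat in fat_measures:
        r, p = stats.pearsonr(df[fat], df[score])
        print(f"{score} vs {fat}: r = {r:.3f}, p = {p:.4f}")

# Multivariate regression: which fat measures independently predict a score?
X = sm.add_constant(df[fat_measures])
print(sm.OLS(df["framingham"], X).fit().summary())
```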
Discussion

Body fat is one of the most well-proven risk factors for coronary heart disease. Body mass index (BMI) and waist circumference (WC) are the anthropometric measures commonly employed to quantify overall adiposity. However, as more research has been carried out in this field, it is becoming apparent that regional fat depots may play a greater role than overall adiposity with regard to coronary heart disease etiology. [7][8][9] This has been stressed by several studies which have highlighted pericardial fat and abdominal visceral adipose tissue (VAT) as unique, pathogenic fat depots. [10][11][12][13][14][15][16] However, the results have not been consistent, and in the study by Amir A. Mahabadi et al. 17 none of these fat depots were independently associated with CVD after further adjustment for traditional risk factors. Our study is an effort to understand the concept of the varying influence of different fat tissues on coronary heart disease. We tested this by quantifying total body fat, visceral fat and tissue fat and correlating them with known scoring systems for identifying the future risk of coronary heart disease events: the Framingham Risk Score, PROCAM score and Vascular age. Our study, to our knowledge, is the first to test the correlation between components of body fat and coronary heart disease risk scores. The Framingham risk score is used to predict the 10-year risk of developing coronary heart disease in people without a history of cardiovascular disease. 18 It was developed based on data from a sample of the Framingham Heart and Offspring studies. This scoring system considers sex, age, total cholesterol, HDL cholesterol, systolic blood pressure and smoking. The PROCAM score is also a risk score to predict the risk of coronary heart events in individuals with no coronary heart disease and is derived from the European PROCAM study, performed in Germany. 'Heart Age' or 'Vascular Age' is a newer concept to convey an expression of age-appropriate cardiovascular risk based on the output of Framingham Risk Scores, and it has been shown to promote more accurate risk perception in users. 19 It is a simple method for communicating risk to the general population. Earlier, fat in general was considered to be always associated with increased coronary heart disease risk. But as more research has been carried out, this concept has been proven to be only partly correct. The location of fat is an important determinant of its coronary heart disease risk potential. In the abdomen, visceral fat appears to confer greater disease risk than adipose tissue in the subcutaneous location. 20,21,22 Coronary heart disease risk is also influenced by the location of fat within the thigh. 23,24 Fat in other fat depots (i.e., stored within muscle, around muscle fibers) is related to insulin resistance in obese persons, but there appears to be no such correlation with subcutaneous thigh fat. 25 Although the mechanisms responsible for the differing effects of central and peripheral adiposity on coronary heart disease risk remain to be determined, total adiposity probably does not adequately indicate the extent of coronary heart disease risk in individuals. Hence, the usefulness of BMI, which is an indicator of total adiposity only, in assessing coronary heart disease risk is questionable. The same factor was reflected in our study, and we did not find BMI and total body fat to correlate significantly with coronary heart disease risk prediction scores. By contrast, Waist Hip Ratio, which is a marker of visceral adiposity, and visceral fat itself were found to correlate significantly with coronary heart disease risk prediction scores. On the other hand, tissue fat was not found to correlate with coronary heart disease risk scores. The scientific explanation of why truncal adiposity increases risk for coronary heart disease while lower-extremity adiposity decreases it has been based on the heterogeneity of adipose tissue metabolism in different locations. It is clear from the data available from in vitro studies that adipocytes located in visceral abdominal regions are more sensitive to lipolytic stimuli and more resistant to suppression of lipolysis by insulin than fat cells from gluteal-femoral subcutaneous regions; 26,27 the daily systemic flux of free fatty acids, per unit of fat mass, has been shown to be higher in subjects with predominantly abdominal adiposity than in those with fat predominantly in the lower body, due both to a higher sensitivity to the activation of lipolysis and to a reduced suppression of lipolysis in abdominal fat cells.
In addition, abdominal fat may impact hepatic free fatty acid flux directly due to its location close to the portal circulation and, hence, increase TG synthesis and decrease hepatic insulin clearance. 21,28 Thus, from our study, we recommend considering WHR and visceral fat to be equivalent to coronary heart disease risk prediction scores. In clinical practice, apart from stressing only measures aimed at weight reduction, measures to reduce abdominal adiposity may be more fruitful in coronary heart disease risk reduction. Instead of calculating coronary heart disease risk scores, which are not very easy to calculate in clinical practice, WHR and visceral fat can be used as easy-to-use, scientifically sound tools to convey the risk of future coronary heart disease events to the general population.

Conclusion

- WHR and visceral fat are the best correlates of coronary heart disease risk scores.
- BMI, total body fat and tissue fat do not correlate with coronary heart disease risk scores.
- WHR and visceral fat can be considered surrogates of coronary heart disease risk prediction scores in clinical practice.
Phytochemical analysis and antibacterial activity of traditional plants for the inhibition of DNA gyrase

Introduction and Aim: DNA gyrase is a class of Type II topoisomerases that plays an important role in bacterial viability. It is found in all bacteria and is involved in replication, repair, recombination, and DNA transcription. Negative supercoiling of bacterial DNA by DNA GyrB is essential in replication, which further influences all metabolic activities. Staphylococcus aureus (ATCC 25923) is one of the pathogens that can modify its genome easily under multidrug resistance. This study explores the activity of medicinal compounds to inhibit DNA GyrB. The plant species Solanum nigrum, Vitex negundo, and Euphorbia hirta were studied for potential plant-based molecules. The compound classes alkaloids, glycosides, flavonoids, and terpenoids were considered to have high potential. The study focuses on DNA gyrase as a target and offers insights into future drug development. The research focuses on the discovery of novel plant-based therapeutic compounds to target DNA gyrase B activity. Methods and Materials: Phytochemical screening was performed to study the medication options that could inhibit DNA GyrB. Phytochemicals were determined using GC-MS. Results: Utilizing GC-MS and FT-IR analysis, the phytochemical constituents of Solanum nigrum, Vitex negundo, and Euphorbia hirta were identified. This preliminary data from the analytical procedures will simplify follow-up studies on discovering bioactive compounds and evaluating their effectiveness in inhibiting DNA GyrB. Conclusion: There are countless applications for the phytochemicals that medicinal plants produce. Growth of Staphylococcus aureus can be inhibited by targeting DNA GyrB. The study employs DNA gyrase as its target and provides information on potential therapeutic targets. The study aims to identify innovative plant-based medicinal molecules that specifically target DNA gyrase B activity.
INTRODUCTION

The search for drugs to combat diseases is never-ending. Developing resistance against existing compounds drives the search for new, improved drugs with desirable properties (1). DNA topoisomerases are proven to have therapeutic value as effective targets (2). DNA topoisomerase enzymes were first discovered by James Wang in 1971 within Escherichia coli (3). Supercoiling by gyrase is involved in all DNA-related metabolic processes and is required for replication (4). Gyrase uses a method known as sign inversion to supercoil DNA, in which a positive supercoil is inverted to a negative one by passing a DNA segment through a temporary double-strand break. The ability of gyrase to catenate and decatenate DNA rings is due to the reversal of this strategy, which relaxes DNA (5). Adenosine triphosphate (ATP) binding causes a conformational shift that drives each round of supercoiling; ATP hydrolysis allows for new cycles (6). The inhibition of gyrase by two antimicrobial classes reflects the fact that it is made up of two reversibly linked subunits. The A subunit is linked to coordinated DNA breakage and rejoining, while the B subunit is linked to DNA replication. The GyrA and GyrB subunits of DNA gyrase contain three gates that play an important role in the enzyme's activity. The GyrA subunit of DNA gyrase contains an active-site tyrosine residue that is important for the breakage and reunion of double-stranded DNA (dsDNA). The GyrB subunit, on the other hand, contains the ATPase active site, which provides the energy required for DNA supercoiling (7). The GyrA subunit contains four domains, whereas GyrB consists of three. The winged-helix domain (WHD), long domain, tower domain, and variable C-terminus are the domains of GyrA, whereas the GyrB subunit comprises three domains, i.e., GHKL, ATP transducer, and TOPRIM (8).

Staphylococcus aureus as a causative agent

Staphylococcus aureus was thought to be normal flora inhabiting the human population overall, but it has emerged as the causative agent of many severe infections in immunocompromised patients and in healthy people in the community (9,10). S. aureus infections have recently become a major cause of human morbidity and mortality in both community and hospital settings (11). Furthermore, S. aureus strains combining resistance and virulence genes have become a major treatment issue in Europe, the United States, and developing countries such as India. Because of antibiotic resistance, enzyme and toxin production, biofilm formation capacity, and immune evasion capability, the available therapies are no longer fully effective in treating staphylococcal infections (12). S. aureus (ATCC 25923) is one of the pathogens that can modify its genome easily under multidrug resistance (13). In this study, one region of S. aureus and its potential as a platform for drug targeting will be explored. Plant-based molecules such as alkaloids, phenols, flavonoids, and saponins are considered to have high potential (14). These possess high biological value and merit study.
Preparation of plant extract

The plant species Solanum nigrum, Vitex negundo, and Euphorbia hirta are herbs found throughout India. These are very common household plants that grow in moist soil conditions. The plants were collected from the college premises of the Vels Institute of Technology and Advanced Sciences (VISTAS), Pallavaram, Chennai, Tamil Nadu. Healthy and mature leaves were selected and thoroughly washed under water to remove dirt. After cleaning, the leaves were left to dry on newspaper for a week in a shady place to remove the moisture content.

Ultrasonication

Once the moisture content was eliminated, the dried leaves were pulverized and ready for extraction. Extraction was performed by an ultrasonic method using polar (ethanol and ethyl acetate) and non-polar (petroleum ether and hexane) solvents. Ultrasonication is the application of ultrasound waves for the disintegration of the material. The intense sonication produces compression and rarefaction, which agitates the particles by breaking up droplets and thus disrupting the cells, causing homogenization and dispersion effects. The samples were sonicated in the solvents for 20 min, and the extracts were purified using Whatman filter paper.

Phytochemical analysis

Examination of the medicinal plants' phytochemical qualities was used to identify and separate drug lead compounds and component parts from the plant material. The phytochemical qualities of plants can be used to pinpoint their distinct biological activities. The plant parts most often used for the investigation of phytochemical qualities are the leaves, roots, stems, bark, and fruits. A variety of phytochemicals were extracted from the medicinal plants using various solvents, including ethanol, methanol, chloroform, acetone, hexane, petroleum ether, ethyl acetate, and water (15,16).

Thin layer chromatography

Thin layer chromatography (TLC) is a crucial analytical technique used to separate, detect, and quantify various types of bioactive components. Each solvent extract was subjected to thin-layer chromatography to analyze the variation of bioactive compounds present. The glass slides were coated with silica gel and kept in a hot air oven for 20 minutes. The powdered sample was extracted with solvents such as ethanol, petroleum ether, hexane, and ethyl acetate. The mobile phases used were ethyl acetate:petroleum ether, ethyl acetate:hexane, ethanol:petroleum ether, and ethanol:hexane. These mobile solvents were used for the detection of active compounds such as carbohydrates, alkaloids, saponins, terpenoids, glycosides, phenols, steroids, and flavonoids. The developed chromatograms were studied under UV light. Retention factor (Rf) values were calculated as the distance travelled by the compound divided by the distance travelled by the solvent front.
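The Rf calculation is a simple ratio; the sketch below illustrates it with example distances chosen so that the result matches the highest Rf reported later (the function name and values are ours).

```python
def retention_factor(compound_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf = distance travelled by the compound / distance travelled by the solvent front."""
    return compound_distance_cm / solvent_front_cm

# Illustrative distances only: 4.5 cm spot travel over a 4.95 cm solvent front
print(round(retention_factor(4.5, 4.95), 3))   # -> 0.909
```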
Column chromatography

The plant extract with the highest retention factor in the TLC results was taken forward to column chromatography. A liquid solvent (mobile phase), which gently descends the glass cylindrical column with the aid of gravity or applied external pressure, is in contact with the solid phase. This method is used to separate individual chemicals from a mixture. Once the column is ready, the sample is loaded onto its top portion. The mobile solvent is then allowed to pass down through the column. The compounds in the mixture interact differently with the solid phase and the mobile phase, which causes them to migrate along with the mobile phase at varying rates. In this manner, compounds are successfully separated from the mixture (17). For the mobile phase, ethyl acetate:hexane mixtures were chosen in different ratios: 10:0, 9:1, 8:2, 7:3, 6:4, 5:5, 4:6, 3:7, 2:8, 1:9, and 0:10. The solvent mixtures were poured into the column, and the eluted fractions were collected in test tubes.

FTIR (Fourier transform infrared) analysis

FTIR analysis is a technique for identifying inorganic, organic, and polymeric substances by scanning materials with infrared light. FTIR has proven to be a successful method for characterizing and validating the chemicals or chemical bonds present in an uncharacterized blend of plant extract. The three main IR spectroscopic sampling techniques are attenuated total reflection (ATR), reflection, and transmission; each technique is effective for some sample types, while others present difficulties. The x-axis, or horizontal axis, of the infrared spectrum represents the infrared wavenumber, and the peaks, frequently referred to as absorption bands, correspond to the sample's distinct atomic vibrations when exposed to the infrared portion of the electromagnetic spectrum (18,19).

GC-MS analysis

Compounds present in a plant sample can be determined and identified using the combined analytical approach known as gas chromatography-mass spectrometry (GC-MS). GC-MS is central to the phytochemical screening and chemotaxonomic investigation of medicinal plants with physiologically active components (20)(21)(22). Peak areas are correlated with the concentration of the corresponding chemical. Complex samples separated by GC-MS produce many different peaks, each of which yields a unique mass spectrum used for identifying the compound. Large, commercial collections of mass spectra can be used to identify and analyze unknown compounds and target analytes.

Antibacterial activity

Antibacterial activity was determined by an agar well diffusion test. The glassware was washed and dried in a hot air oven. The agar medium was transferred to Petri dishes and left to solidify at room temperature. The test organism used was E. coli, and it was spread across the agar with a cotton swab. A sterile borer was used to make wells of 8 mm diameter. After the wells were made, the samples were loaded along with a control. Ampicillin was used as the control; the plant sample was added to the well and incubated at 37 °C for 24 hours. After 24 hours, the zones of inhibition were observed and their diameters measured.

Rate of kill assay

The rate of kill assay is the study of the activity of an antimicrobial agent against a bacterial strain to determine its bactericidal activity over time.
The E. coli culture was prepared in LB broth and incubated for 24 hours. The plant sample was mixed with DMSO solution and serially diluted. To each Eppendorf tube, 2 ml of the sample and 1 ml of the bacterial culture were added. Readings were taken at the 0th, 2nd, and 24th hour, measured at 595 nm.

Phytochemical analysis

The phytochemical analysis of Solanum nigrum, Vitex negundo, and Euphorbia hirta confirmed the existence of bioactive chemicals; tests were conducted using four main solvents: ethanol, ethyl acetate, petroleum ether, and hexane. The phytochemical tests and the accompanying observations are summarized below (see also Tables 1 and 2).

Identification of bioactive constituents and result analysis:

Test for carbohydrates: Equal volumes of Fehling A and Fehling B reagents were mixed, and 2 ml of the mixed reagent was added to 2 ml of the leaf extract and gently boiled. A brick-red precipitate at the bottom of the test tube indicated the presence of reducing sugar.

Test for alkaloids: 2 ml of Mayer's reagent was added to 2 ml of the crude extract. A cream precipitate indicated the presence of alkaloids.

Test for saponins: 2 ml of the sample was dissolved in distilled water and shaken vigorously. Froth formation indicated the presence of saponins.

Test for terpenoids: 2 ml of the sample, 2 ml of chloroform, and 2 ml of concentrated sulfuric acid were added to a test tube and shaken well. A reddish-brown color at the interface indicated the presence of terpenoids.

Test for glycosides: 2 ml of the sample, 1 ml of glacial acetic acid, 1 ml of ferric chloride, and 1 ml of concentrated sulfuric acid were added and shaken gently. A greenish brown-blue color indicated the presence of glycosides.

Test for phenols: 2 ml of the sample and 2 ml of 1% ferric chloride were added to a test tube and shaken gently. A dark green color indicated the presence of phenols.

Test for steroids: Concentrated sulfuric acid was added to 2 ml of the sample and shaken gently. A yellow lower layer indicated the presence of steroids.

Test for flavonoids: 2 ml of the sample was treated with 2 ml of 1% sodium hydroxide; drops of hydrochloric acid were then added down the sides of the test tube. A yellow color indicated the presence of flavonoids.

TLC (Thin layer chromatography)

Ethyl acetate, ethanol, petroleum ether, and hexane were the solvent systems employed. For the mobile phase, the combinations ethyl acetate:petroleum ether, ethyl acetate:hexane, ethanol:petroleum ether, and ethanol:hexane were used. The retention factors for the crude extracts of Solanum nigrum and Vitex negundo are shown in Table 3.

Column chromatography

The hexane crude extract of Euphorbia hirta showed the highest retention factor of 0.909; thus, the stationary phase silica gel was mixed with the solvents ethyl acetate and hexane and packed into the column to settle. For the mobile phase, ethyl acetate:hexane mixtures were prepared in the ratios 10:0, 9:1, 8:2, 7:3, 6:4, 5:5, 4:6, 3:7, 2:8, 1:9, and 0:10. The solvent mixtures were poured into the column, and the fractions were collected in test tubes (the collected solutions are shown in the figure). Of the 11 concentrations, the ratios 7:3 and 6:4 showed good results and were taken forward to GC-MS analysis.
FTIR (Fourier transform infrared) analysis

The infrared spectra were recorded over the mid-range IR region, with wavenumbers from 4,000 to 400 cm−1. The ethyl acetate and hexane extracts of the three medicinal plants (Solanum nigrum, Vitex negundo, and Euphorbia hirta) were studied using FT-IR spectroscopy. The various functional groups present in the extracts appear in the FT-IR spectra as peaks with characteristic positions. The strongest values in the infrared (IR) bands were used to characterize the functional groups of the active elements found in the extracts, and based on the peak positions the components were assigned to functional categories. According to the findings of the FT-IR study, the functional groups O-H, C-C, C=C, C-H, R-COO, and CH3 were confirmed. There is evidence that FTIR analysis is a precise and trustworthy technique for determining the composition of biomolecular systems.

GC-MS analysis

The analysis employed a Clarus 680 GC with a fused silica column packed with Elite-5MS (5% biphenyl, 95% dimethylpolysiloxane; 30 m × 0.25 mm ID × 250 μm df); the components were separated using helium as the carrier gas at a constant flow of 1 ml/min. The injector temperature was set at 260 °C during the chromatographic run, and 1 μL of the extract sample was injected into the instrument. The oven temperature program was as follows: 60 °C (held for 2 min), ramped at 10 °C/min to 300 °C, and held at 300 °C for 6 min. The mass detector conditions were: transfer line temperature 240 °C; ion source temperature 240 °C; ionization mode electron impact at 70 eV; scan time 0.2 s; scan interval 0.1 s; fragments scanned from 40 to 600 Da; solvent delay 2.00 min.

Antibacterial activity

The agar well diffusion method showed good results. Escherichia coli was used as the pathogen, and a 10 µg/ml concentration of the plant extracts of Euphorbia hirta was used in each well. The ethyl acetate, petroleum ether, ethanol, and hexane extracts showed zones of inhibition of 2 mm, 2.2 mm, 2.9 mm, and 3.2 mm, respectively (Fig. 1). The hexane crude extract of E. hirta showed the highest zone of inhibition of 3.2 mm (Fig. 2) and was therefore considered to have high potential for the target of DNA gyrase B inhibition.

CONCLUSION

The target chosen in this research, Staphylococcus aureus (ATCC 25923) DNA gyrase B, is appealing because it is a relatively unexplored target in the field of drug discovery and thus holds enormous potential for the development of novel agents. The designed primer offers potential insights into the future cloning of the DNA GyrB enzyme from S. aureus (ATCC 25923). Thin layer chromatography of the hexane crude extract of Euphorbia hirta showed the highest retention factor of 0.909, followed by the ethyl acetate crude extract of Vitex negundo (0.907) and the petroleum ether crude extract of Solanum nigrum (0.781). E. hirta was found to have the highest alkaloid content of 17.29%, followed by Vitex negundo. Thus, the research concludes that Euphorbia hirta can be considered a potential source of therapeutic compounds for S. aureus (ATCC 25923) DNA GyrB inhibition.

Table 3: Retention factor values of crude extracts of Solanum nigrum and Vitex negundo
Superconductivity at the vacancy disorder boundary in K$_x$Fe$_{2-y}$Se$_2$

The role of phase separation in the emergence of superconductivity in alkali metal doped iron selenides A$_{x}$Fe$_{2-y}$Se$_{2}$ (A = K, Rb, Cs) is revisited. High energy X-ray diffraction and Monte Carlo simulation were used to investigate the crystal structure of quenched superconducting (SC) and as-grown non-superconducting (NSC) K$_{x}$Fe$_{2-y}$Se$_{2}$ single crystals. The coexistence of superlattice structures with the in-plane $\sqrt{2}\times\sqrt{2}$ K-vacancy ordering and the $\sqrt{5}\times\sqrt{5}$ Fe-vacancy ordering was observed in SC and NSC crystals alongside the \textit{I4/mmm} Fe-vacancy free phase. Moreover, in the SC crystal an Fe-vacancy disordered phase is additionally present. It appears at the boundary between the \textit{I4/mmm} vacancy free phase and the \textit{I4/m} vacancy ordered phase ($\sqrt{5}\times\sqrt{5}$). The vacancy disordered phase is most likely the host of superconductivity.

The self-organization of electronic nematic states and multi-phase separation are at the heart of the underlying lattice complexity prevalent in the high-temperature iron-based and cuprate superconductors [1]. Superconductivity emerges by suppressing the static antiferromagnetic (AFM) order [2], but spin and charge density fluctuations are commonly inferred and may be responsible for electron pairing. Coupled with these fluctuations is a heterogeneous lattice where the spatial interplay between spin and charge yields nanoscale phase separation [3]. Therefore, the nature of the lattice structure is key to elucidating the symmetry-breaking ground state that may lead to superconductivity. The AxFe2−ySe2 system is a test bed for exploring the very peculiar states that appear with the close proximity of superconductivity to a magnetic insulating state [4], leading to a multiphase complex lattice. The AxFe2−ySe2 (A = K, Rb, Cs) iron selenide superconductor has been intensely studied [5], in part due to the possible role of Fe-vacancy order and the question of whether phase separation occurs between SC and NSC regions [6][7][8][9][10]. With vacancies at both the A and Fe sites, a well-known structural transition occurs when the Fe vacancies order at T_S ∼ 580 K [6]. In the high temperature tetragonal phase with the I4/mmm space group, the vacancies are randomly distributed at both the Fe and A sites. Upon cooling below T_S, a superlattice structure appears due to Fe vacancy order. Several scenarios have been proposed regarding the nature of the crystal structure below T_S.
In one scenario, the lattice is phase separated into a minority I4/mmm phase, which is compressed in-plane and extended out-of-plane in comparison to the high temperature centrosymmetric phase and has no Fe vacancies, and a majority I4/m phase with the Fe vacancies ordered in different superlattice patterns [11,12,14,26]. The most commonly reported superlattice structure with Fe vacancy order is the √5×√5×1 [6,7,15,16]. More recently, other superlattice patterns have been reported in the literature, such as the 2×2×1 [7,17], the 1×2×1 [7,8,17,18] and the √8×√10×1 [9]. The distinction among the different superlattice patterns arises from the underlying order of the Fe and alkali metal sublattices. In the I4/m phase, the Fe site symmetry is broken from the high temperature I4/mmm space group, giving rise to two crystallographic sites. Preferred site occupancy leads to the √5×√5 supercell, in which one site is empty (or sparsely occupied) while the other is almost full. Magnetic ordering is characteristic of this phase. Below T_N ∼ 560 K, AFM order arises in the I4/m phase and persists well below T_c [6]. The AFM state, commonly reported in the literature [6,8,15], is robust, unlike what has been observed in all other Fe-based superconductors, and its coexistence with the superconducting state has raised concerns about the validity of the s+/− coupling mechanism, coupled with the absence of hole pockets at the Fermi surface and the lack of nesting in this system [19]. More recently, evidence of alkali site vacancy order has been presented as well, with a √2×√2 superlattice structure within the I4/mmm phase in KxFe2−ySe2 [11] and CsxFe2−ySe2 [20][21][22]. The centrosymmetry of the I4/mmm is broken due to the alkali metal order. The I4/mmm phase with no Fe vacancies has largely been attributed to be the host of superconductivity, in part because of the absence of magnetism and of vacancies, at least at the Fe site. It is understood at present that by post-annealing and quenching, superconductivity can be enhanced in this system [23,24], even though the actual mechanism remains unknown. Magnetic refinement from neutron powder diffraction measurements revealed that magnetic order does not exclude the presence of a SC phase [25]. Moreover, the SC shielding fraction obtained from bulk magnetic susceptibility measurements correlates with the larger volume fraction of the I4/m phase instead of the smaller volume fraction of the I4/mmm phase. To investigate this issue further, high energy X-ray scattering measurements were performed on two kinds of KxFe2−ySe2 single crystals, one annealed and SC, and the other as-grown and NSC. In combination with Monte Carlo simulation, it is shown that superconductivity in the quenched crystal is most likely present in regions at the √5×√5×1 I4/m domain boundaries, bordering the I4/mmm domains with no Fe vacancies. Thus superconductivity in this system appears at the crossover of the vacancy order-disorder transition. Quenching increases the boundary walls around the I4/m domains, leading to an increase of the percolation paths and an enhancement of superconductivity. Single crystals of KxFe2−ySe2 were grown using the self-flux method. The first step of the synthesis involved the preparation of high-purity FeSe by solid state reaction.
Stoichiometric quantities of iron pieces (Alfa Aesar; 99.99%) and selenium powder (Alfa Aesar; 99.999%) were sealed in an evacuated quartz tube, heated to 1075 °C for 30 hours, then annealed at 400 °C for 50 hours, and finally quenched in liquid nitrogen. In the second step, a potassium grain and FeSe powder with a nominal composition of K:FeSe = 0.8:2 were placed in an alumina crucible and double-sealed in a quartz tube backfilled with ultrahigh-purity argon gas. All samples were heated at 1030 °C for 2 hours, cooled down to 750 °C at a rate of 6 °C/hr, and then cooled to room temperature by switching off the furnace. High quality single crystals were mechanically cleaved from the solid chunks. In the final step, the annealed crystals were additionally thermally treated at 350 °C under argon gas for 2 hours, followed by quenching in liquid nitrogen. The crystals that were not heat treated were labeled as-grown. The magnetic susceptibility and transport were measured from 2 to 300 K; the as-grown crystal is NSC while the annealed crystal is SC. Back-scattered scanning electron microscopy (SEM) measurements were carried out at room temperature on the two samples [25]. The characterization of these crystals was previously reported in Ref. [25]. The SEM measurements showed that the surface morphology of the as-grown crystal has two kinds of regions: rectangular islands with a bright color and a background with a dark color. On the other hand, instead of island-like domains, very small bright dots were observed on the surface of the annealed crystal. The single crystal diffraction measurements were carried out at the Advanced Photon Source of Argonne National Laboratory, at the 11-ID-C beam line. In-plane and out-of-plane measurements were carried out on both types of crystals at room temperature. The X-ray diffraction from the hk0 scattering plane shows evidence of the coexistence of multiple phases. Shown in Figs. 1(a) and 1(b) are the patterns corresponding to the as-grown and quenched crystals, respectively. Several features are observed in both that arise from the presence of the two configurations of the √5×√5×1 superlattice structure with the I4/m symmetry [11,16], indicated by the two inner dashed boxes, as well as the I4/mmm phase, indicated by the outer dashed box. Indicated by an arrow is a superlattice peak indexed as (1/2 1/2 0). The lattice constant calculated from the peak position matches that of the I4/mmm phase, indicating a √2×√2 A-site vacancy ordered structure in the I4/mmm phase. The scattering patterns along the l-direction are shown in Figs. 1(c) and (d) for the as-grown and quenched crystals, respectively. Bragg peaks from I4/mmm appear at the lower Q side of the I4/m peaks. No l = 2n+1 superlattice peaks are observed along the (00l) direction, leaving the out-of-plane stacking of the √2×√2 K-vacancy order unclear. Due to sample rotation during measurement, weak reflections are observed at the lower Q and higher Q sides of the (006) Bragg peaks, and can be indexed to the (204) and (206) Bragg peaks, respectively. In both crystals, the diffraction pattern is dominated by a majority phase with the I4/m space group with Fe vacancies and a minority phase consisting of the high symmetry I4/mmm space group with no vacancies at the Fe site and a weak √2×√2 vacancy order at the K site. Shown in Fig. 2(a) are the (200) Bragg peak from the I4/mmm minority phase and the (420) Bragg peak from the I4/m majority phase in the hk0 plane.
They are well resolved given that the two phases have different lattice constants (a/√5 ∼ 3.90 Å in I4/m, a ∼ 3.84 Å in I4/mmm), a distinction often difficult to see in powders. Shown in Fig. 2(c) are the powder integrated diffraction patterns obtained from the annealed and as-grown crystals in the vicinity of the (1/2 1/2 0) superlattice peak. Even though this peak is observed in both diffraction patterns, it is significantly stronger and clearly above the background level in the as-grown crystal at Q ∼ 1.16 Å−1, but barely visible in the annealed sample. The (1/2 1/2 0) peak is not as intense as the other superlattice features, which suggests that the K-site vacancy order is only partial in the I4/mmm phase. The K-site vacancy order can break the symmetry of the centrosymmetric I4/mmm to P4/mmm or to an even lower symmetry, depending on its out-of-plane stacking pattern. However, our out-of-plane diffraction data did not provide enough information to further confirm the symmetry. Single crystal refinement was performed on the hk0 plane data, and the results are summarized in Tables I and II, where the space group P4/mmm was used to refine the (1/2 1/2 0) superlattice peak of the minority phase. The refinement yielded a volume fraction for the I4/mmm phase of 18.4(3)% in the annealed sample and about 31.6(3)% in the as-grown sample. Given that there is less of the I4/mmm phase, the presumed host of the SC state, in the annealed sample (which is SC) than in the as-grown sample (which is NSC), it is questionable whether this is the phase in which superconductivity occurs. How the K vacancy order affects superconductivity is still an open question. At the same time, the refinement indicates that the I4/m phase is not fully ordered with the √5×√5×1 Fe vacancy ordered supercell. Shown in Fig. 2(b) is a comparison of the integrated intensity of the (110) I4/m superlattice peak to the calculated intensity assuming a fully ordered Fe-vacancy sublattice with no occupancy at the Fe1 site. The experimental intensity is lower, which suggests that even within the majority phase two different Fe sublattices are present: a fully vacancy ordered one and a partially ordered (or disordered) one. The disordered Fe sublattice is described within the I4/mmm symmetry, but it is indistinguishable from the ordered Fe sublattice because their lattice constants are unresolved in the experimental data. To quantify the differences between the ordered and disordered phases, a Monte Carlo simulation on the Fe sublattice was performed. The Hamiltonian was designed to be Ising-like, with the form

$$H = \sum_{\langle ij\rangle_{(2,1)}} J_{21}\,\sigma_i\sigma_j + \sum_{\langle ij\rangle_{(3,1)}} J_{31}\,\sigma_i\sigma_j + \sum_{\langle ij\rangle_{(1,1)}} J_{11}\,\sigma_i\sigma_j + \sum_{\langle ij\rangle_{(2,0)}} J_{20}\,\sigma_i\sigma_j + \sum_{\langle ij\rangle_{(2,2)}} J_{22}\,\sigma_i\sigma_j,$$

where the sums run over pairs of sites separated by the in-plane lattice vectors (2,1), (3,1), (1,1), (2,0) and (2,2), respectively. Here the Ising variable σ_i = 1 represents an Fe atom at site i and σ_i = −1 stands for a vacancy at site i. The coupling constants between σ_i and σ_j were defined up to the 5th nearest neighbor. By setting J_{11,20,22} > 0 and J_{21,31} < 0, the two √5×√5 configurations are energetically favored. For the Monte Carlo step, site swapping was employed instead of site flipping, in order to keep the vacancy ratio unchanged. When vacancies are fewer than 20%, regions with no Fe vacancies form on the lattice, simulating the I4/mmm phase. The as-grown sample was simulated by gradual cooling; the system was then heated to a high temperature, T_a, and cooled back down to simulate the annealed sample. The simulation results with 15% vacancies on a 300 × 300 lattice with J_{11,20,22} = 6, J_{21,31} = −1.5 and T_a = 10 are shown in Fig. 3.
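To make the simulation concrete, a minimal sketch is given below. It is our reconstruction under the parameters quoted above (couplings, 300 × 300 lattice, 15% vacancies), not the authors' code: the Metropolis acceptance rule and the linear cooling schedule are assumptions, since the text does not specify them. The swap (Kawasaki) moves conserve the vacancy fraction exactly as the text requires, whereas single-site flips would not.

```python
import numpy as np

# Sketch (our reconstruction, not the authors' code) of the vacancy-ordering
# Monte Carlo described in the text. sigma = +1 is an Fe atom, sigma = -1 a
# vacancy; the vacancy fraction is conserved by Kawasaki-type swap moves.

L = 300                         # lattice size quoted in the text
VACANCY_FRACTION = 0.15         # 15% vacancies
# Couplings J_{mn} for site pairs separated by lattice vector (m, n);
# values as quoted: J_{11,20,22} = 6, J_{21,31} = -1.5.
J = {(1, 1): 6.0, (2, 0): 6.0, (2, 2): 6.0, (2, 1): -1.5, (3, 1): -1.5}

rng = np.random.default_rng(0)
sigma = np.ones((L, L), dtype=int)
vac = rng.choice(L * L, size=int(VACANCY_FRACTION * L * L), replace=False)
sigma.flat[vac] = -1

def star(m, n):
    """All symmetry-equivalent in-plane vectors of (m, n) on the square lattice."""
    return list({(sm * a, sn * b) for a, b in ((m, n), (n, m))
                 for sm in (1, -1) for sn in (1, -1)})

NEIGHBORS = [(v, Jmn) for (m, n), Jmn in J.items() for v in star(m, n)]

def site_energy(s, i, j):
    """Interaction energy of site (i, j) with its coupled neighbors (periodic BC)."""
    return sum(Jmn * s[i, j] * s[(i + dm) % L, (j + dn) % L]
               for (dm, dn), Jmn in NEIGHBORS)

def sweep(s, T):
    """One sweep of Metropolis swap (site-exchange) moves at temperature T."""
    for _ in range(L * L):
        i1, j1, i2, j2 = rng.integers(0, L, size=4)
        if s[i1, j1] == s[i2, j2]:
            continue                                     # swap would change nothing
        e_old = site_energy(s, i1, j1) + site_energy(s, i2, j2)
        s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]
        d_e = site_energy(s, i1, j1) + site_energy(s, i2, j2) - e_old
        if d_e > 0 and rng.random() >= np.exp(-d_e / T):
            s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]  # reject: swap back

# "As-grown": gradual cooling. "Annealed": reheat to T_a = 10, then cool again.
# The exact schedule is not given in the paper; this linear ramp is illustrative,
# and real runs would need many sweeps per temperature step.
for T in np.linspace(20.0, 1.0, 20):
    sweep(sigma, T)
```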
Before annealing, the two √5×√5 configurations (blue and green) and the Fe vacancy free I4/mmm phase (yellow) appear in big domains (Fig. 3(a)). After annealing, many small domains with the √5×√5 structure form inside the previously vacancy free regions, breaking the I4/mmm domains into smaller islands and creating more domain boundaries (Fig. 3(b)). The simulated lattice is more homogeneous after the annealing process, in agreement with our SEM results [25]. The volume ratio of the I4/mmm phase also decreases after annealing, which agrees with our X-ray data. The annealing process controls the phase distribution, as seen in Fig. 3(c). With annealing, the Fe disordered I4/mmm phase grows significantly relative to the I4/mmm Fe vacancy-free and I4/m Fe vacancy-ordered phases. Difficult as it is to separate the contribution of the I4/mmm Fe disordered phase in the diffraction pattern, the difference between the experimental and calculated (110) superlattice peak intensities shown above in Fig. 2(b) is an indication that the I4/m phase is not fully ordered, consistent with the calculation. How does this affect superconductivity? Our Monte Carlo simulation indicates that the annealing process increases the total area of the domain boundaries, where Fe vacancies tend to be randomized. The increase in the domain boundary walls is seen in Fig. 3(b): the length of the boundary walls increases as the domains get smaller. In a real sample this disorder can be enhanced by the local distortion at the domain boundaries due to the different lattice constants of the two phases. It was previously shown using thin films of KxFe2−ySe2 [26] that the superconducting phase appears when the I4/mmm phase borders the I4/m phase. The domain boundary forms a filamentary network of Fe vacancy disorder. Fe vacancy disorder suppresses the band structure reconstruction and raises the chemical potential without completely destroying the Fermi surface [27]. The Fe vacancy disorder can thus serve as effective doping and lead to superconductivity. This is consistent with our X-ray and simulation results and provides a connection to the transport properties of the two samples. To conclude, the presence of competing degrees of freedom is a common theme in superconductors of interest today. In our system, the SC crystal is a multi-phase separated state just like the NSC crystal, and what distinguishes the two is the extent of the Fe-vacancy disordered state. Our results offer contradictory evidence to the notion that the I4/mmm phase with no Fe vacancies is the host of the superconductivity. On the contrary, the SC crystal tends to form more domain boundaries, with the Fe-vacancy disordered phase sandwiched between the I4/mmm vacancy free and the I4/m vacancy ordered phases, as seen in Fig. 3(d), and this very possibly leads to superconductivity in a filamentary form, in agreement with a reported SPEM study [28]. In this way we provide a reasonable understanding of the enhancement of superconductivity by annealing as well as the filamentary nature of the superconductivity in this compound, a common feature observed in other superconductors. The I4/m and I4/mmm domains are reminiscent of the charge
Epidemiology and Long-Term Outcomes in Thoracic Transplantation

Over the past five decades, outcomes for lung transplantation have significantly improved in the early post-operative period, such that lung transplant is now the gold-standard treatment for end-stage respiratory disease. The major limitation affecting lung transplant survival is the development of chronic lung allograft dysfunction (CLAD), which affects around 50% of lung transplant recipients within five years of transplantation. We must also consider other factors impacting survival, such as the surgical technique (single versus double lung transplant), along with donor and recipient characteristics. The future is promising, with more research looking into ex vivo lung perfusion (EVLP) and bioengineered lungs, with the hope of increasing the donor pool and decreasing the risk of graft rejection.

Introduction

Trailblazing into the realm of medical miracles, James Hardy etched his name in history in 1963 by successfully demonstrating that a lung transplant was surgically feasible [1]. Even though the initial patient could only grasp three additional weeks of life before succumbing to complications, this pivotal moment marked the beginning of an incredible journey toward medical advancement. The following decades, riddled with false starts and heartbreaking failures, were a testament to the resilience of the medical community, which persisted despite setbacks that were predominantly due to anastomotic failure.

From this bedrock of trials, the first glimmers of success emerged with single lung transplants (SLTs), swiftly culminating in a double lung transplant (DLT) in 1986 [1]. Since then, the frequency and success rates of lung transplants have been on an uphill trajectory globally, thanks to leaps in technology and practice. The advent of powerful immunosuppressive agents, notably cyclosporine, coupled with enhanced patient-management techniques, dramatically improved the prognosis post-transplant.

Fast forward to 2017, and the landscape has transformed significantly. As per the International Society for Heart and Lung Transplantation (ISHLT), over 4500 lung transplants were performed annually worldwide, a testament to advanced organ allocation protocols. Nevertheless, the demand for organs still towers above the supply, pointing to a persistent and pressing need for solutions in organ transplantation. The introduction of newer techniques like EVLP has expanded the organ pool. This review focuses mainly on EVLP and bioengineering, along with the outcomes of lung transplantation.

Single Versus Double Lung Transplant

The literature contains no randomized controlled trials specifically comparing single versus double lung transplants, for ethical reasons, in addition to the technical aspects of the procedure and the consideration of the recipients' underlying lung disease. Taking a step back, it is important to know which disease entities warrant single versus double lung transplantation. Indications for double lung transplant include cystic fibrosis and bronchiectasis; with these conditions, a single lung transplant is an absolute contraindication given the risk of contamination of the new lung. For most other diseases, such as interstitial lung disease, emphysema, and primary or secondary pulmonary hypertension, the choice between single and double lung transplantation appears to come down to institutional preference [2].
The ISHLT 2017 report analyzed 53,396 lung transplants over 15 years and concluded that double lung transplantation showed higher one-, three-, five-, and ten-year survival rates; furthermore, the difference appeared to widen over the ten-year period. In that report, the main indication for transplantation was either chronic obstructive pulmonary disease (COPD) or interstitial lung disease (ILD) [3]. Schaffer et al. conducted a comparative study in 2015, analyzing around 7000 patients over 7 years [4]. They looked at outcomes of single versus double lung transplantation for idiopathic pulmonary fibrosis (IPF) and COPD, and found that double lung transplantation was associated with better graft survival in the IPF group; in the COPD group, there was no significant difference in graft survival at 5 years between single and double transplantation. In 2001, Meyer et al. performed a comparative study of 2260 patients over 6 years, mainly comparing single versus bilateral lung transplants in COPD patients, and concluded that bilateral sequential lung transplants demonstrated a higher survival rate for individuals under 60 years of age [5].

We must also consider the development of bronchiolitis obliterans syndrome (BOS), one of the two phenotypes of CLAD, the other being restrictive allograft syndrome (RAS). Hadjiliadis et al. performed a single-center study; even after controlling for baseline characteristics between the two groups, they found that the incidence of BOS was higher with single lung transplantation [6]. Neurohr et al. conducted a study of lung transplantation for pulmonary fibrosis and reported that SLTs were associated with an increased risk of BOS [6]. On the other hand, other studies, such as Meyer et al., found no difference in the development of BOS between SLT and DLT groups [5]. Unlike the two single-center studies, the Meyer et al. study had a larger sample size of 2260 lung transplant recipients.

As seen above, the literature is inconsistent concerning long-term survival and BOS development. Many authors performed comparative studies using data from the comprehensive ISHLT database. Given the unique nature of lung transplantation, there appear to be far too many variables to control for when considering donor and recipient characteristics.

Ex Vivo Lung Perfusion (EVLP)

EVLP has established itself as a formidable intervention in helping to reduce the demand-supply mismatch of viable lungs for transplantation. Jirsch et al. first proposed and tested EVLP in 1970 through animal studies, and over the past 50 years significant advancements have been made, including its successful use in human transplantation in 2001 [7,8]. This revolutionary change has allowed for timely evaluations following explant and has revitalized previously discarded donor lungs.
EVLP is practiced across three different protocols: Lund, Toronto, and the OCS (Organ Care System) [9]. Whilst the Lund group brought EVLP into clinical practice, the approach has been further refined into the Toronto technique, which is most commonly used today. The procedure starts with an initial circuit containing 2 L of perfusate solution, 500 mg of methylprednisolone, and 500 mg of imipenem and cilastatin. Upon retrieval of the donor's lungs, left atrial access is made by suturing a cannula to the cuff with prolene sutures. Following this, pulmonary artery access is established by inserting a cannula just before the artery bifurcation, held with silk ties. Where the donor artery is not retrieved, a conical cannula can be inserted and held with prolene sutures. To prevent deflation of the lungs, the trachea is clamped and an endotracheal tube is inserted. Before attaching to the main circuit, a retrograde flush is passed through the venous line to flush out microthrombi and debris. The lungs are then placed within the dome and the circuit is attached. Before running the perfusate, air must be removed from the lungs. After this, the cannulas are attached to the main circuit and flow is established incrementally, aiming for 40% of the donor's cardiac output. Simultaneously, normothermia is achieved at 37 degrees Celsius. Finally, once the circuit is optimized at physiological parameters, the clamp is removed and the lungs can ventilate. The circuit allows for continuous arterial blood gas (ABG) and venous blood gas (VBG) monitoring, including pre- and post-oxygenation values. Additionally, bronchoscopy and X-ray imaging are used to identify growing lesions. Please refer to Figures 1 and 2 outlining the EVLP circuit.

Whilst EVLP's largest benefit has been to increase the donor organ supply through the desirable restitution of lung physiology, the post-operative outcomes are still yet to be thoroughly explored. The main hurdle thus far is the rate-limiting factor of prolonged ischemia time due to the transport of the lungs from the donor to the recipient.
Cypel et al. published a review looking at the transplantation of high-risk donor lungs with the use of EVLP [10]. They defined "high risk" using five-point criteria, which included a PaO2:FiO2 ratio of less than 300 mmHg, the presence of pulmonary edema, and donation after circulatory death (DCD). Their primary outcome was the development of primary graft dysfunction within 72 h post-operatively. There were numerous secondary outcomes, such as duration of hospital and critical care stay, 30-day mortality, and duration of mechanical ventilation. The study included 20 lungs that underwent EVLP compared with 116 lungs procured in the conventional manner. The team found no significant difference between the groups for the primary or secondary outcomes.

Divithotawela et al. (2019) conducted a nine-year cohort study assessing the post-operative outcomes of 230 EVLP versus 706 control thoracic transplant patients [11]. It was observed that EVLP donor lungs had higher rates of injury, with significantly lower PaO2 values, higher incidences of abnormal chest X-rays, and higher proportions of significant smoking histories. Despite this, there were no significant differences in the time to chronic lung allograft dysfunction (CLAD) or in the survival time of the allograft itself. The study concluded that EVLP increases the donor pool with negligible differences in post-operative outcomes. Additionally, Tian et al. analyzed eight studies comparing the outcomes of 1191 patients between EVLP and non-EVLP lung transplants [12]. The meta-analysis reinforced the aforementioned results, demonstrating similar outcomes between both methods, apart from poorer donor lungs in the EVLP cases, including poorer PaO2/FiO2 values and higher donor smoking rates. Outcome measures included the length of time on ECMO (extracorporeal membrane oxygenation), the length of time in intensive care, and graft dysfunction 72 h post-transplant. While physiological metrics and pathological markers are the main vehicles of assessment, Tikkanen et al. also evaluated quality of life post-transplant [13]. Their specific criteria included the difference in meters walked over six minutes and the maximum predicted FEV1 values. Neither measure showed significant differences across 340 conventional and 63 EVLP transplants.
Notably, other primary research suggests that the effects of EVLP are potentially dependent on the cumulative caseload of transplant centers. Chen et al. (2023) analyzed a dataset of 9708 conventional transplants against 553 EVLP transplants, stratified into high- versus low-caseload EVLP centers [14]. It was observed that EVLP centers with lower caseloads (<15 transplants over the four-year study period) were more likely to have comparably poorer 1-year survival than their conventional-transplant counterparts. This difference was not observed in larger centers. Only 6% of transplants in the United States are EVLP-based, according to the United Network for Organ Sharing (UNOS). As this system remains a minority practice, adequate and routine training is crucial for optimal graft management. If such operations are few and far between, particularly at centers with low existing caseloads, this may explain the poorer outcomes, especially since the same study noted no significant differences in outcomes among conventional thoracic transplants.

Despite this, the outlook for EVLP is generally favorable, which is partly owed to EVLP's ability to correct pathologies during procurement. Nakajima et al. explain this further in their 2021 review article: given the high risk of corrective surgery in vivo, EVLP offers the vital opportunity to operate without the added risks, as well as to isolate the lungs without risk to other organs [15]. As function is assessed and operative correction is made, it allows for predictable outcomes. Furthermore, the review exemplifies this through evidenced and treated pathologies, such as the use of alteplase for donor pulmonary emboli; the fibrinolytic effect was enhanced because there was no additional risk of bleeding compared to treatment in vivo. Additionally, through imaging and bronchoscopy, cases of pneumonia and pulmonary edema have been treated too. Sanchez et al. (2014) exhibited this in a case report of successful reconditioning following neurogenic pulmonary edema [16]. The donor lungs had been rejected by numerous centers due to a PaO2 value of 188 mmHg, but once poor lung compliance and increased pulmonary vascular resistance had been corrected, improved oxygenation during reperfusion at normothermia reflected the correction of the pulmonary edema. The patient made a rapid recovery and was discharged 15 days later. The report demonstrates EVLP's use in expanding the donor pool, as well as ex vivo perfusion's ability to correct pathology with outcomes comparable to conventional transplantation.

The risk of recipient infection from donor disease is present in all transplant cases [17]. The magnitude of this risk increases with poor ciliary clearance and extended time on mechanical ventilation. It is further exacerbated when patients are multimorbid and medication damages other systems, especially given the possibility that multiple organs are being donated. Part of the myriad benefits that EVLP brings in addressing such issues is its allowance for increased drug volumes and concentrations without increasing the risk of morbidity to the recipient or donor. An example given by Ahmad et al.
(2022) involves the administration of vancomycin [18]. The therapeutic range is given as 10 mg/L; however, concentrations above 30 mg/L risk acute renal failure. Yet for a 75 kg male donor whose lungs are placed on EVLP, a 1125 mg dose (15 mg/kg) can be administered to maintain a constant concentration of 225 mg/L running through the circuit, remarkably higher than the nephrotoxicity threshold, providing maximal antibiotic therapy while the lungs are perfused ex vivo. These results are evidenced in prior research by Andreasson et al., who looked into the effect of EVLP on microbial load in 18 lung donors [19]. Microbial samples were collected using bronchoalveolar lavage with 40 mL of saline before aspirate samples were taken, and the lungs were then allowed to ventilate and perfuse via EVLP. Thirteen donor lungs cultured microbial growth, both anaerobic and aerobic, and six donor lungs cultured yeast strains. The EVLP perfusate included amphotericin B and meropenem, to which all fungal species and bacteria, respectively, were sensitive. The microbial and fungal loads significantly decreased in post-perfusion samples after treatment was given.
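The arithmetic behind the vancomycin example above is simple enough to make explicit. The snippet below reproduces the quoted numbers under stated assumptions: full mixing, no drug clearance in the closed circuit, and a circuit volume of 5 L back-calculated from the cited figures (1125 mg / 225 mg/L); note that the Toronto protocol described earlier begins with 2 L of perfusate, so the 5 L figure is implied by the example rather than a stated parameter.

```python
def evlp_dose_mg(donor_weight_kg: float, mg_per_kg: float = 15.0) -> float:
    """Weight-based antibiotic dose added to the EVLP circuit."""
    return donor_weight_kg * mg_per_kg

def circuit_conc_mg_per_l(dose_mg: float, circuit_volume_l: float) -> float:
    """Concentration in a closed circuit, assuming full mixing, no clearance."""
    return dose_mg / circuit_volume_l

dose = evlp_dose_mg(75.0)                # 1125 mg for a 75 kg donor
conc = circuit_conc_mg_per_l(dose, 5.0)  # 225 mg/L at the implied 5 L volume
print(f"dose = {dose:.0f} mg, circuit concentration = {conc:.0f} mg/L")
# For comparison, sustained in vivo levels above ~30 mg/L risk nephrotoxicity;
# ex vivo, the recipient's kidneys are out of the loop, which is the point.
```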
In conclusion, EVLP has certainly helped to boost the donor pool; furthermore, based on our literature review, it has not been associated with poorer outcomes compared to conventional transplantation. Moreover, the use of EVLP has allowed teams to successfully treat conditions such as pneumonia and pulmonary edema in donor lungs, thus enabling the transplantation of viable organs.

Bioengineered Lungs and Organ Repair Centers: Expanding Possibilities for Transplantation

We mentioned earlier that the introduction of EVLP has revolutionized the field of lung transplantation by facilitating the maintenance of harvested lungs in a viable state. This technique has considerably expanded the pool of organs available for transplantation, overcoming the limitations of conventional cold organ preservation. In contrast to cold preservation, EVLP maintains lungs under normothermic conditions, thereby enabling pre-implantation organ repair and regeneration, which we refer to as bioengineering. This innovative approach holds great promise for enhancing organ survival rates, reducing rejection, and facilitating more flexible transplant scheduling.

The underlying principle of lung bioengineering involves decellularizing retrieved lungs and subsequently recellularizing them using either autologous or allogeneic mesenchymal stem cells, utilizing the lung scaffold as a platform. This process is particularly significant for expanding the donor pool, which is currently restricted by a high incidence of lung injury, especially among trauma patients. Targeted regenerative therapies applied to these injured lungs can potentially increase the availability of viable organs for transplantation.

There is growing research on lung tissue scaffolds, either biological (acellular) or artificial (synthetic), and each has its benefits and limitations. First, we must understand the purpose of the scaffold: it essentially functions as the extracellular matrix (ECM), providing structural integrity to the tissue as well as the template for the recellularization process. The main benefit associated with biological scaffolds is the preservation of the complex architecture of the lung, along with retention of the ECM. We do not know exactly how much the ECM is affected by the decellularization process; however, it is relatively better preserved than in artificial scaffolds. The biological scaffold technique carries the risk of infection, and its use is limited by the shortage of donor lungs. To combat this, we may in the future have to use xenogeneic (animal) ECM as the scaffold [20]. There has been research into the use of porcine lungs; however, there are concerns that the α-galactosyl epitope in the ECM may elicit a rejection response. Furthermore, more research needs to be conducted into the surgical technique, especially with respect to the anastomosis of the bronchi and pulmonary blood vessels [21].

There are numerous techniques for manufacturing an artificial scaffold, such as 3D bioprinting, bioreactors, and electrospinning. The benefits associated with artificial scaffolding are the ability to design and create a specific scaffold, along with independence from the donor organ pool. In the past decade, 3D bioprinting, a technique that relies on laser- and inkjet-based technology to create a scaffold, has made advances. So far, there has been some success in bioprinting the trachea in animal studies; however, no trials have looked into bioprinting lung tissue [22]. In theory, this technique should allow the creation of a close replica of native lung tissue, but at present this awaits further advances in technology. Of note, specific limitations include the inability to successfully create a gas-exchange interface, along with difficulty recreating the lung vasculature and alveolar epithelium. Furthermore, modern-day bio-ink does not truly function like an ECM; hence, research is being conducted into possible hybrid solutions. Bioreactors are devices that provide the ideal conditions for growth or reactions to occur; in this setting, the bioreactor allows for decellularization, recellularization, and lung maturation. Limitations of this technique include optimizing factors pertaining to homeostasis, such as ideal temperature management within the device, along with control of pH, ventilation, and perfusion. Finally, electrospinning is a modality whereby nanoscale fibers are created to act as a scaffold for cell adhesion and attachment. The hope with electrospinning is that, by controlling the nanofiber dimensions, one can create the ideal extracellular matrix. This modality has thus far only been used in vitro, mainly to produce tracheal scaffolds, with no published reports of its use in bioengineered lungs [20,23]. The main limitations include inadequate mechanical strength, along with concerns about the toxicity of the nanofiber scaffold.

In 2015, Tan et al.
demonstrated the successful revascularization and re-epithelialization of an implanted lung tissue scaffold [24]. This led to the prolonged survival of a patient with a very limited prognosis, who eventually succumbed to cancer recurrence after more than a year. While this human case study demonstrates the potential of the technique, further advancements are necessary to ensure its feasibility and readiness for human trials.

Bioengineering offers promising avenues for enhancing transplanted organs through genetic therapy, regenerative stem cell therapy, and pharmacotherapy to address infections. By leveraging these approaches, the functionality and viability of transplanted organs can be improved, thereby enhancing patient outcomes and organ availability.

Building upon the pioneering work of Toronto General Hospital in the field of EVLP, the concept of Organ Repair Centers has emerged as a novel approach to facilitate the bioengineering of organs prior to transplantation. Currently, centers located in Silver Spring, Maryland, and Jacksonville, Florida, serve as central hubs catering to large geographic regions, providing the essential infrastructure required for this process. The fundamental concept entails transporting harvested organs to these specialized centers, where they undergo bioengineering procedures before being prepared for implantation. Although the costs associated with these additional maneuvers and tools, such as bioreactors and 3D printers for scaffold printing, are substantial, the potential benefits offered by organ repair centers are extensive and far-reaching. Establishing these centers opens up new possibilities for advancing the field of organ transplantation.

Conclusions

With an increasing number of individuals facing end-stage lung failure, and with lung transplant the curative option, we must find ways either to boost the organ donation pool or to ration the available resources. We still do not have unanimous data to support the use of single versus double lung transplants with regard to long-term survival. The reality is that multiple factors are at play, and it is unlikely we will see a direct randomized controlled trial comparing the two techniques. EVLP demonstrates a notable propensity to augment the supply of viable donor lungs and to achieve outcomes comparable to conventional transplant techniques; we have observed this in comparable incidences of CLAD and allograft dysfunction post-transplant. Whilst still in its infancy, it serves as a beacon of transformation in lung transplantation through the assessment and restitution of marginal donors, via the correction of pulmonary edema, diagnosed infection, and other noted pathologies. Further research is needed to determine the effectiveness of sub-techniques within EVLP, such as portable machines versus fixed center equipment. This becomes especially pertinent where transport logistics are vital to organ procurement and explant, and where centers are resource- and practice-dependent. Furthermore, we live in an age where technology is rapidly advancing; bioengineered lungs may become more prevalent in the future, and only time will tell. The use of 3D bioprinting and electrospinning to create customized lungs, along with the xenogeneic potential for scaffolding, are all exciting areas of research in the coming years.

Figure 2. Donor lungs with labelled inflow to the pulmonary artery (PA) and outflow from the left atrium (LA).
Dynamics of Cough Frequency in Adults Undergoing Treatment for Pulmonary Tuberculosis

Summary

This is the first study to evaluate cough frequency continuously over 24-hour periods and to characterize its associations with mycobacterial load and treatment. It provides novel information on the circadian cycle of cough frequency and on risk factors for increased cough frequency.

In 2015, there were an estimated 10.4 million new tuberculosis cases causing 1.4 million deaths worldwide [1]. The major means of transmission is believed to be aerosolized Mycobacterium tuberculosis expelled from an infectious person. A series of classic experiments in the 1960s showed that the number of M. tuberculosis droplet nuclei formed by coughing greatly exceeded those formed by singing or speaking, and concluded that cough is the main pathway by which bacilli are transmitted from the lung into the environment [2]. Cough frequency has been suggested as a predictor of transmission risk, and high cough frequency late in treatment has been associated with treatment failure [3,4]. However, a recent review highlighted the paucity of information on the dynamics of cough in tuberculosis [3]. This review noted that the last study reporting cough frequency among patients undergoing tuberculosis treatment was conducted nearly 50 years ago and included only an 8-hour overnight assessment of 20 patients [5]. Though it is logistically convenient to monitor solely nocturnal cough patterns, it is not known whether daytime coughs show similar patterns. In this prospective cohort study, we recorded cough frequency using an objective acoustic tool [6,7], as is currently recommended [8]. We surveyed participants before and during tuberculosis treatment to investigate (1) the circadian cycle of cough, (2) risk factors associated with cough, and (3) the impact of appropriate treatment on cough frequency and mycobacterial burden.

Study Design

The parent prospective cohort study followed adults (aged ≥18 years) with a clinically suspected diagnosis of pulmonary tuberculosis in two tertiary academic referral hospitals in Peru, Hospital Nacional Dos de Mayo and Hospital Nacional Daniel Alcides Carrión; a protocol detailing the sample size, selection criteria, and detailed information on variables has been published [9]. Data for human immunodeficiency virus (HIV)-infected participants and for those who did not have confirmed drug-susceptible pulmonary tuberculosis are being reported separately. For the current study, the inclusion criteria selected the subset of the parent study with sputum culture-positive tuberculosis confirmed to be susceptible to isoniazid and rifampicin (to reduce the risk of incorrect treatment confounding results) in participants confirmed to be HIV negative (due to the unknown effect of immunodeficiency on cough). The exclusion criterion was the absence of an adequate recording during the study period. Clinicians treated patients for tuberculosis according to the Peruvian national guidelines, using direct observation of every treatment dose. Patient treatment was not modified by this study. Participants were followed until 62 days after treatment initiation. Cough was recorded among all suspected tuberculosis cases at the time of enrollment. Participants were asked to complete a previously published questionnaire regarding their socioeconomic status [9]. This study was approved by the ethics committees of both participating hospitals, A.B.
PRISMA and Universidad Peruana Cayetano Heredia (UPCH) in Lima, Peru; and Johns Hopkins University in Baltimore, Maryland.

Cough Frequency Assessment

The Cayetano Cough Monitor (CayeCoM) is a semiautomated ambulatory cough monitor [9] that, along with our previously developed algorithm, identifies cough with a sensitivity of 75.5% and a specificity of 99.3% [10] among adults with pulmonary tuberculosis [7]. All recordings that malfunctioned or were of poor sound quality due to high background noise were excluded. Recordings with a positive cough sound were further validated by 2 trained nurses, who listened to the portions of the recording identified by the algorithm to confirm each event as a "cough." To reduce bias, recordings were randomly and blindly assigned to each nurse. For quality-control purposes, a random subset of recordings was listened to by both nurses, and their agreement was assessed by calculating the κ statistic.

Microbiological Assessment

Standard instructions were given to participants to collect early-morning sputum samples by deep coughing. We obtained a single morning sputum sample on the day that each participant started treatment (day 0), and on days 3, 7, 14, 21, 30, and 60 of treatment. All sputum samples underwent microbiological protocols at UPCH for auramine-stained smears and the microscopic-observation drug susceptibility (MODS) broth culture assay incorporating drug susceptibility testing for isoniazid and rifampicin, as previously described [9,11-13]. The numbers of acid-fast bacilli visualized by auramine microscopy were recorded as the smear grade. In MODS culture-positive samples, the number of days from inoculation to positivity was recorded to assess the viable bacillary load in sputum, defined as the time to positivity (TTP). TTP predicts treatment response and correlates with the number of colony-forming units (CFU) prior to and during treatment [14-16].

Statistical Analysis

All analyses were conducted using Stata statistical software version 14 (StataCorp LP, College Station, Texas) at a 95% confidence level.

Cough Definitions

Cough was more likely to occur in clusters, termed salvos, than individually. Therefore, cough episodes were analyzed rather than individual cough events. A cough episode was defined as all consecutive cough events that occurred without a cough-free pause of 2 seconds or more [7]. Cough frequency was defined as the number of cough episodes per hour. Based on previous findings by other groups [17,18], which used cough events rather than episodes, we defined "no cough" as ≤0.7 cough events/hour, and cough cessation was defined as the first of 2 consecutive recordings with no cough.

Mycobacterial Load Calculations

To estimate CFUs from TTP, we used the equation log10(CFU) = 5.1 − 0.16 × TTP, based on our group's data on quantitative cultures [15,19]. Therefore, CFUs were estimated only for positive cultures, and all negative and indeterminate cultures were excluded from CFU analyses. Smear conversion was defined as the first negative smear with no subsequent positive smear; culture conversion was defined as the first negative culture with no subsequent positive culture.
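The episode and load definitions above translate directly into code. The sketch below is an illustrative implementation, not the study's analysis scripts: it clusters time-stamped cough events into episodes using the 2-second pause rule, applies the ≤0.7 events/hour "no cough" threshold, and converts a MODS time to positivity into an estimated log10 CFU with the stated equation.

```python
from typing import List

PAUSE_S = 2.0        # a cough-free pause of >= 2 s ends an episode
NO_COUGH_RATE = 0.7  # "no cough" = at or below this many events per hour

def episodes_from_events(event_times_s: List[float]) -> List[List[float]]:
    """Group time-stamped cough events (seconds) into episodes: consecutive
    events separated by less than PAUSE_S belong to the same episode."""
    episodes: List[List[float]] = []
    for t in sorted(event_times_s):
        if episodes and t - episodes[-1][-1] < PAUSE_S:
            episodes[-1].append(t)
        else:
            episodes.append([t])
    return episodes

def is_no_cough(n_events: int, recording_hours: float) -> bool:
    """Apply the <= 0.7 events/hour 'no cough' definition."""
    return n_events / recording_hours <= NO_COUGH_RATE

def log10_cfu_from_ttp(ttp_days: float) -> float:
    """Estimated bacillary load from MODS time to positivity (positive
    cultures only): log10(CFU) = 5.1 - 0.16 * TTP."""
    return 5.1 - 0.16 * ttp_days

# Toy example: five events, the first three within 2 s of one another.
events = [10.0, 10.8, 12.1, 30.0, 200.0]
print(len(episodes_from_events(events)))   # 3 episodes
print(is_no_cough(len(events), 24.0))      # True: 5/24 ~ 0.21 events/hour
print(round(log10_cfu_from_ttp(6.0), 2))   # 4.14 at the median pretreatment TTP
```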
Circadian cycle of cough frequency. Cough frequency was modeled using nested negative binomial regression with random effects, an exchangeable correlation structure at the level of participant and treatment day, and a robust variance estimate. This structure was chosen based on a comparison of quasi-likelihood under the independence model criterion (QIC) statistics between models with alternative correlation structures. To describe circadian cycles of cough, the model was adjusted for the hour of the day using harmonic sine and cosine terms, as shown in Supplementary Equation 1. To test whether the circadian cycle varied with duration of treatment, models were fitted (1) separately by treatment day and (2) adjusting for treatment day, with interactions between treatment day and the sine/cosine terms.

Risk factors associated with increased cough frequency. Random-effects negative binomial regression with a participant-level random intercept was used to evaluate the association between cough frequency and mycobacterial load, as well as prior participant-reported tuberculosis and socioeconomic status (monthly income). A final multivariable model was created, adjusting for risk factors found to be significant (P ≤ .05) in univariable analysis. In addition, because the relationship between the duration of treatment and cough frequency was nonlinear, both day of treatment and day of treatment squared were included as independent variables. TTP provides a more precise quantification of bacillary load, so it was preferred over smear- or culture-positive status in the multivariable analysis.

Impact of appropriate treatment. The effects of the duration of appropriate treatment on cough frequency, smear grade, and MODS culture conversion were analyzed using Cox proportional hazards models. In this analysis, cough frequency was assessed as (1) a 2-fold reduction compared with the pretreatment cough frequency, as previously used by Loudon and Spohn [5], and (2) no cough.

Feasibility of Shorter Recordings

Cough frequency calculated over a full day (≥23.5 hours) was compared with cough frequencies calculated over shorter periods (2- to 12-hour periods during the day). Cough recordings were split randomly into a discovery set (70% of recordings) and a validation set (30% of recordings). Using the discovery dataset, cough frequency was calculated during 2- to 12-hour periods throughout the day, and the Spearman (nonparametric) correlation between the total cough episodes occurring over each shortened window and the total cough episodes over the full ≥23.5 hours was calculated. The time of day that correlated most highly with 24-hour cough was identified. This result, and the individual intraclass correlation coefficient, was then tested in the validation dataset; a minimal illustration of this windowing analysis is sketched after this section.
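As a concrete picture of the windowing analysis just described, the sketch below computes the Spearman correlation between episode counts in k-hour windows and the full-day total, scanning all start hours. The data are synthetic (Poisson counts with a crude afternoon peak) purely to make the example self-contained; nothing about them is drawn from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic hourly episode counts for 40 recordings, with a smooth circadian
# cycle peaking near 1 pm and bottoming out near 1 am.
hours = np.arange(24)
mean_rate = 1.75 + 0.65 * np.sin(2 * np.pi * (hours - 7) / 24)
counts = rng.poisson(mean_rate, size=(40, 24))
full_day = counts.sum(axis=1)

def window_total(c: np.ndarray, start_hour: int, length: int) -> np.ndarray:
    """Episode count in a window of `length` hours starting at `start_hour`,
    wrapping past midnight."""
    idx = (start_hour + np.arange(length)) % 24
    return c[:, idx].sum(axis=1)

# For each window length, find the start hour that best tracks the 24-h total.
for k in (2, 6, 12):
    rhos = [spearmanr(window_total(counts, s, k), full_day).correlation
            for s in range(24)]
    best = int(np.argmax(rhos))
    print(f"{k:2d}-h window: best start {best:02d}:00, rho = {max(rhos):.2f}")
```

In the study, this search was performed on the 70% discovery set, and the selected window was then re-tested on the held-out 30%, which guards against the optimistic bias of picking the best-correlated window post hoc.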
Demographics and Pretreatment Assessment of Microbiology

Ninety-seven adults were enrolled in the parent study and contributed 957 recordings; 685 of 1642 (42%) recordings were excluded for technical reasons (Supplementary Table 1). Of these participants, 66 met the inclusion criteria for the current study, and 2 were excluded because they had no adequate cough recording, so the study group consisted of 64 participants (Figure 1). All of these participants had at least 1 positive MODS culture, which was the first sputum sample for 95% of participants. Baseline demographic data are shown in Table 1.

Cough Validation and Characteristics

The median length of cough episodes was 0.61 seconds (interquartile range [IQR], 0.26-2.2), and 90% were <3.9 seconds long. Fifty percent of episodes contained only a single cough event, 24% had 2 cough events, and the remaining 26% contained ≥3 events. The maximum number of cough events in a single episode was 21. There was good agreement within the 43% subset of recordings reviewed by both nurses, with a Cohen κ statistic of 0.93.

Circadian Cycle of Cough Frequency

Based on model estimates, the highest pretreatment cough frequency occurred from 1 pm to 2 pm and the lowest from 1 am to 2 am (2.4 vs 1.1 cough episodes/hour, respectively). Thus, cough episodes were twice as frequent during the daytime as at nighttime. This circadian cycle was present throughout the study period (Figure 2). For example, after 14 days of treatment, cough episodes per hour were 1.5 from 1 to 2 pm vs 0.73 from 1 to 2 am. [Figure 2 caption: Each day begins at 9 am, as this is the time when recordings began. B, Separate negative binomial generalized estimating equation models fitted for each day following treatment. All recordings, regardless of total length, were included (n = 12,108 hours of recording). Random-effects modeling was used to adjust for study participant. Circadian cycles of cough were reflected by sine/cosine terms.]

Risk Factors Associated With Increased Cough Frequency

Cough frequency was independently associated with day of treatment (rate ratio [RR] per 10 days, 0.37; P < .01), day of treatment squared (RR, 1.3; P < .001), and TTP (RR, 0.93; P < .01) in the multivariable model (Table 2). Pretreatment cough frequency was correlated with cough frequency on treatment (Supplementary Figure 1). Cough frequency was not independently significantly associated with income or prior tuberculosis (Table 2). The median time to halving of cough frequency was 7.0 days, and cough frequency had at least halved for 73% of participants by day 14. Participants with no cough pretreatment continued to have no cough throughout treatment. A higher pretreatment cough frequency was associated with faster halving of cough frequency during treatment (hazard ratio, 1.2; P = .02). Other pretreatment factors (eg, age, sex) were not significantly associated with time to halving of cough frequency or time to achieving no cough. When cough trends were examined on a participant-by-participant basis, most showed a strong and immediate decrease in cough frequency after treatment, following the overall trend.

There was no statistically significant association between pretreatment cough frequency (median, 2.3 cough episodes/hour; IQR, 1.2-4.1) and time to smear conversion (median, 21 days of treatment; IQR, 6-32) or culture conversion (median, 29 days; IQR, 13-61). Throughout the study, the cough frequency on days when participants had positive MODS cultures was approximately double that on days when cultures were negative (univariable analysis RR, 2.3; P < .001). When assessing bacillary load, TTP was inversely associated with cough frequency, such that as TTP increased, indicating reduced bacillary load, cough frequency decreased (RR, 0.93; P < .01) (Table 2). The relationship between cough frequency, TTP, and estimated log10 CFU over time is shown in Supplementary Figure 2. There was a significant association (P < .001) between TTP and sputum smear positivity in pretreatment samples (Supplementary Figure 3). Among the 41% (26/64) of participants with matched cough recordings and sputum samples available pretreatment, the median cough frequency was 2.3 cough episodes per hour (IQR, 1.2-4.1), the median TTP was 6.0 days (IQR, 6.0-7.0), and 12% initially had no cough (Supplementary Table 2).
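For readers who want to see what the harmonic adjustment in the Methods looks like in practice, here is a hedged sketch using statsmodels' GEE with a negative binomial family and an exchangeable correlation structure. The data are again synthetic, the dispersion parameter is fixed arbitrarily at 1.0, and the single sine/cosine pair is the simplest possible version of the published model, so this illustrates the idea rather than reproducing Supplementary Equation 1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Long-format synthetic data: one row per participant-hour.
n_subj, n_hours = 30, 24
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_hours),
    "hour": np.tile(np.arange(n_hours), n_subj),
})
true_rate = 1.75 + 0.65 * np.sin(2 * np.pi * (df["hour"] - 7) / 24)
df["episodes"] = rng.poisson(true_rate)

# One sine/cosine pair encodes a smooth 24-hour cycle with a free peak time.
df["sin24"] = np.sin(2 * np.pi * df["hour"] / 24)
df["cos24"] = np.cos(2 * np.pi * df["hour"] / 24)

X = sm.add_constant(df[["sin24", "cos24"]])
model = sm.GEE(df["episodes"], X, groups=df["subject"],
               family=sm.families.NegativeBinomial(alpha=1.0),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()

# Peak hour implied by the fit: the phase of b_sin*sin + b_cos*cos.
b_sin, b_cos = res.params["sin24"], res.params["cos24"]
peak_hour = (np.arctan2(b_sin, b_cos) * 24 / (2 * np.pi)) % 24
print(res.params)
print(f"estimated peak cough hour ~ {peak_hour:.1f}")  # ~13 (1 pm) here
```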
At 2 weeks of treatment, 77% (49/64) of participants had matched recordings and sputum samples, with a median cough frequency of 0.43 episodes/hour (IQR, 0.0-1.1) and a median TTP of 11 days (IQR, 10-13). At day 60, 65% (15/23) of participants with matched samples had no cough. Comparing participants who were lost to follow-up on or before day 14 with those who continued past this point, there were no statistically significant differences in baseline cough frequency (2.2 vs 2.4 cough episodes/hour; P = 1), initial smear results (++: 14% vs 12%; P = .16), or baseline TTP (6.5 vs 7.0 days; P = .11). Those lost to follow-up were also similar in sex and age and were no more likely to have had prior tuberculosis.

DISCUSSION

Tuberculosis transmission occurs by aerosol spread, and the bacterial burden within sputum is often used as a proxy for infectiousness [20]. However, airborne tuberculosis transmission can only occur if there is a mechanism for distribution, such as expulsion through cough. Cough frequency in tuberculosis has been poorly studied, with only a single study reported nearly 50 years ago; that study observed patients only at night and did not evaluate subject-specific dynamics over time [5]. To ensure adequate prevention strategies, an improved understanding of cough dynamics, before and during treatment, is required.

Our group used a previously validated cough monitor and algorithm to record cough episodes from 64 HIV-negative participants diagnosed with drug-susceptible pulmonary tuberculosis. We observed that cough frequency varied throughout the day, with the highest frequency in the afternoon, a time of day when patients are likely to be active outside their homes, and the lowest at nighttime, likely during sleep [21]. [Table 2 caption: Results of the univariable and multivariable negative binomial models examining cough frequency. A random-effects negative binomial model was used to adjust for study participant.] When comparing our results to those of Loudon and Spohn [5], we found a similar pattern of decrease in nighttime 8-hour cough frequency (11 pm-7 am) over our study period. We also found that shorter periods of cough recording have reasonable agreement with 24-hour recordings [22]. Our study also shows that cough, at a lower frequency, can continue throughout the first 2 months of treatment, supporting previous results [4,23]. However, it should be noted that cough alone can be a nonspecific symptom of tuberculosis; thus, both cough frequency and sputum MODS cultures were assessed. Increased cough frequency was associated with MODS culture positivity, as well as with decreased time to positivity, a surrogate for bacterial load [15,16]. This suggests that cessation of cough is associated with sputum bacillary load and with MODS culture conversion to negative during the first 2 months of treatment, and that cough reflects treatment response [4].

Current guidelines note that following 2 weeks of tuberculosis treatment, infectiousness is greatly reduced [24-26], despite the presence of viable pathogens far beyond this time [27]. This implies that infectivity depends not only on microbiological positivity but also on other factors such as cough frequency [28]. In support of this, we found that cough frequency dropped rapidly in the first days of treatment. Estimated CFU counts dropped fastest within the first days of treatment [29-31], alongside an exponential decline in cough frequency.
This supports the observation that within the first days of treatment a large proportion of the actively growing mycobacteria are killed [29], and that effective treatment may rapidly diminish transmission [32].

Of the 26 participants who provided concurrent cough recordings and sputum samples prior to commencing treatment, almost one-eighth had no cough. "No cough" patients with pulmonary tuberculosis have been described previously [33,34]; however, this is the first time this phenomenon has been quantified. The World Health Organization, the International Union Against Tuberculosis and Lung Disease, and the Royal Netherlands Tuberculosis Association define the entry point for routine tuberculosis diagnostic screening as cough lasting 2 to 3 weeks [35]. Thus, restricting tuberculosis diagnostic testing to those meeting the current "case detection" definition worldwide may miss a substantial number of patients with active pulmonary disease, and screening must therefore also consider that cough might not be present. Not relying solely on 2-3 weeks of cough as the entry point for screening would increase the number of diagnosed and treated patients, increasing the positive impact of the tuberculosis program, albeit at the cost of more people being eligible for screening.

A strength of this study is that it used an objective cough monitor that has been validated in adults with tuberculosis [6,7], with a sensitivity of 75.5% comparable to that of other semiautomated methods [36,37], and that it utilized day-long recordings that enabled determination of cough frequency by hour. A limitation of our method is the large proportion of recordings that could not be processed due to relatively high levels of background noise, and the relatively small number of early-morning (6-9 am) recordings available. We are working to solve this technical issue through the development of a second-generation, accelerometer-based cough monitor [38]. In addition, we did not quantify CFUs directly but instead estimated them mathematically from other quantitative data in MODS cultures [15,19]. Our formula provides results similar to those obtained by another group who modeled CFUs from TTP in Mycobacteria Growth Indicator Tube (MGIT) culture [16]. Another limitation is that participants in this study were recruited from 2 tertiary academic hospitals and might not reflect the broader population of tuberculosis in the community.

The current convention for infection control is that, following 2 weeks of adequate treatment, patients with tuberculosis pose a significantly reduced risk of onward transmission, so it is safe to consider discontinuing infection control practices, including respiratory isolation [24-26,39]. Despite the fact that bacterial growth can occur from sputum obtained as late as 60 days into adequate treatment [27], our data show a rapid drop in cough frequency, which is associated with microbiological conversion. This supports earlier findings showing that pulmonary tuberculosis transmission is greatly reduced once adequate treatment starts [32,39,40], and suggests that tuberculosis treatment response could be indirectly measured by assessing cough frequency.
[Figure caption: Kaplan-Meier curves for time to coughing cessation and microbiological conversion in the study group. Cough cessation represents the time to the first of 2 consecutive recordings with a cough frequency of ≤0.7 cough events per hour (considered "no cough"); by day 14, the probability of cough cessation was 42% (95% confidence interval [CI], 25%-64%), and by day 60 the probability was 51% (95% CI, 33%-72%). Smear conversion represents the time to the first negative smear with no subsequent positive smear; by day 14, the probability of smear conversion was 26% (95% CI, 17%-39%), and by day 60 the probability was 85% (95% CI, 73%-93%). Microscopic-observation drug susceptibility (MODS) culture conversion represents the time to the first negative culture with no subsequent positive culture; by day 14, the probability of MODS culture conversion was 29% (95% CI, 19%-41%), and by day 60 the probability was 94% (95% CI, 85%-98%).]

Supplementary Data

Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.

Data sharing statement

Data from this study are publicly available through the Dryad Digital Repository at http://dx.doi.org/10.5061/dryad.gv234.

Notes

Author contributions. All authors were involved in the study design and in drafting the manuscript for intellectual content, and all reviewed the final manuscript before submission. M. A. B. and J. W. L. directly contributed to the study design and were responsible for supervision of data gathering. A. P., G. O. L., D. B., and M. Z. directly contributed to data management and statistical analysis. The corresponding authors had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Correction of prominent ears: techniques and complications

Prominent ears are present in approximately 5% of the Caucasian population. The deformity shows autosomal dominant inheritance and is commonly caused by 2 developmental defects: underdevelopment of the antihelical fold and overdevelopment of the conchal wall. A thorough preoperative evaluation includes examination of ear symmetry, size, shape, and projection. Mustarde first described a technique for creating an antihelical fold utilizing permanent conchoscaphal mattress sutures. Furnas popularized a technique of conchal setback using permanent conchomastoidal sutures.

Introduction

Prominent ears are present in approximately 5% of the Caucasian population [1]. They are characterized by autosomal dominant inheritance and are commonly caused by two developmental defects: underdevelopment of the antihelical fold and overdevelopment of the conchal wall [1].

The ear is a complex structure of cartilage and skin with many intricate involutions and folds. It includes five critical elements: the concha, helix, antihelix, tragus, and lobule. Parts of lesser importance include the antitragus, intertragic notch, and Darwin's tubercle [2]. The anatomical divisions of the ear are based in embryology: the ear originates from the first (mandibular) and second (hyoid) branchial arches, whose hillocks fuse to form the external ear. Assigning a specific anatomic location on the ear to any given hillock is speculative, but it is generally thought that the tragus, helical crus, and part of the helix develop from the mandibular arch, and the remainder of the external ear develops from the hillocks of the hyoid arch [2].

The ear attains approximately 85% of its adult size by 3 years of age [3]. Ear width matures in boys at 7 years of age and in girls at 6; ear length matures in boys at 13 years of age and in girls at 12 [4]. This implies that children 5-6 years of age may safely undergo otoplasty without compromising external ear growth. The average height of an adult pinna is 5.5-6.5 cm, with a width of 3.3-3.9 cm. The helix is elevated 1-2 cm from the mastoid. The ideal auriculocephalic angle is 15-30 degrees; the conchocephalic and conchomastoid angles are 90 degrees [5]. The upper pole of the ear should fall at the level of the brow, whereas the inferior aspect of the lobe should fall at the level of the nasal ala. From the scalp, the helical rim commonly projects laterally 10-12 mm at the superior pole, 16-18 mm at the midpoint, and 20-22 mm at the lobule [3].

The superficial temporal and postauricular branches of the external carotid system provide excellent blood supply to the pinna. The innervation of the external ear follows its embryologic branchial arch origins: the auriculotemporal nerve innervates the first branchial arch structures (tragus and helical crus), while the anterior and posterior branches of the great auricular nerve innervate the second branchial arch structures (helix, scapha, antihelix, concha, antitragus, external acoustic meatus, and lobule). The external auditory meatus also receives innervation from branches of the vagus and glossopharyngeal nerves (Fig. 1) [6].

Preoperative evaluation

A preoperative evaluation includes an examination of ear symmetry, size, shape, and projection.
The physical exam of each auricle should be documented, with special attention to the status of the antihelix and the depth and projection of the conchal bowl, as these two deformities most often coexist in a protruding ear. During preoperative evaluation, one should assess the following in a patient with prominent ears: (1) the degree of antihelical folding; (2) the depth of the conchal bowl; (3) the plane of the lobule and any deformity, if present; (4) the angle between the helical rim and the mastoid plane; and (5) the quality and spring of the auricular cartilage [7].

The optimal timing of surgical correction depends on a rational approach based on auricular growth and the age of school matriculation. As the ear is nearly fully developed by 6-7 years of age, correction may be performed at this time [8]. Existing deformities, the postoperative course, realistic outcomes, and complications may be discussed with the patient and parents at that time.

Mustarde technique

In 1963, Mustarde [9] first described the creation of an antihelical fold using permanent conchoscaphal mattress sutures. Several subtle refinements of this technique have been described since that time, but the fundamentals of the procedure remain unchanged. Pediatric patients most commonly undergo this procedure under general anesthesia, and perioperative broad-spectrum antibiotics are administered. The face is prepped as a sterile field so that both ears can be visualized simultaneously. After infiltration with lidocaine 1% with epinephrine 1:100,000, an elliptical or dumbbell-shaped skin excision is made that tapers at both ends for ease of closure. Typically, more skin is excised from the postauricular surface than from the mastoid to camouflage the resulting scar in the postauricular sulcus following setback. Wide undermining is performed in the supraperichondrial plane almost up to the helical rim. The area of the intended neoantihelix may be demarcated by placing 25-gauge needles through the anterior auricular skin at the intended site of antihelix creation and bringing the needles out the posterior side. The cartilage is then marked with methylene blue. The Mustarde procedure consists of inserting three to four horizontal mattress sutures (4-0 nylon) to permanently recreate the antihelix. Each suture is placed through the cartilage and anterior perichondrium, but not the anterior skin. Outer cartilage bites of 1 cm are separated by 2 mm, and the distance between the outer and inner cartilage bites is 16 mm. The ear dressing is removed on the first postoperative day to check for hematomas, and then replaced for 3-4 additional days. Subsequently, a headband is worn continuously for 2 weeks and at night for an additional 4-6 weeks (Fig. 2) [9].

Furnas technique

In 1968, Furnas [10] popularized a conchal setback technique using permanent conchomastoidal sutures. This procedure is often used in conjunction with techniques to correct an absent antihelical fold, as described above. The patient is prepped and draped in a manner similar to that described for antihelical fold correction. After infiltration of lidocaine 1% with epinephrine 1:100,000, a fusiform incision is made in the postauricular region. The width of the incision is estimated by manually pushing the concha toward the mastoid. Care is taken to avoid excessive skin excision, as tension on the wound predisposes to hypertrophic scar formation. Little to no skin excision is required inferior to the level of the antitragus.
After skin excision, the soft tissue and postauricular muscle are excised from the postauricular sulcus. Sufficient soft tissue is excised to produce a pocket to receive the concha during suture placement. The skin over the helix, antihelix, and concha is undermined with scissors, and permanent horizontal mattress sutures (e.g., 4-0 nylon) are placed at the lateral third of the concha cavum and concha cymba, parallel to the natural curve of the auricular cartilage. The sutures are placed through the cartilage and lateral perichondrium, but not the lateral auricular skin. At least three sutures are placed for adequate setback. The sutures are placed on what was the ascending wall of the concha. When tightened, they convert the wall into a longer floor of the concha. For long-term successful conchal reduction, the suture bites must include the mastoid periosteum. Extremely thick cartilage, frequently seen in older individuals, may be weakened by excising small vertical ellipses. Importantly, conchomastoidal sutures must allow the concha to be set both medially and posteriorly, or external auditory canal stenosis can result. The wound is irrigated and closed as described for the Mustarde technique [9]. A mastoid dressing is placed, and subsequent postoperative management is the same as that of antihelical fold surgery (Fig. 3). Combined technique The postauricular pinna and postauricular sulcus are infiltrated with local anesthetic. The patient is then prepped and draped. The central portion of the desired antihelix is marked in three places using an 18-gauge needle dipped in methylene blue to tattoo the desired areas. The surgical procedure begins by removing a postauricular ellipse of skin. The posterior aspect of the auricular cartilage is exposed along the superior and inferior aspects of the pinna up to the helical rim. Small discs of cartilage are then scored and removed from the posterior aspect of the cavum conchae cartilage in areas where the patient has the most cartilage convexity. Conchomastoid setback sutures are then placed. Using a 4-0 clear nylon suture, the conchal bowl is set back to its posterior medial extent. The antihelical fold is then created. Using the tattooed methylene blue markings, a horizontal mattress suture is placed with 4-0 nylon at the mid aspect of the desired antihelical fold. The perichondrium of the contralateral side is usually included in the surgical bite to avoid cheese-wiring of the suture through the cartilage over time. This suture is initially placed approximately 7 mm anterior to the blue marking and then 7 mm posterior to it in a symmetrical fashion. The skin is then closed with running locking 5-0 fast gut sutures. An identical procedure is performed on the contralateral side with careful attention to symmetry. After this procedure, the patient is placed in a secure dressing (Fig. 4). Complications Complications of otoplasty are not infrequently the consequences of poor surgical planning, although some may be unavoidable. Anterior surface irregularities, obliteration of the postauricular sulcus, and telephone ear deformity can be avoided with a meticulous surgical technique. Hematoma Hematomas occur in up to 3.5% of cases. Bleeding is typically an early complication due to incomplete hemostasis during surgery. A common hallmark is bleeding around the dressing or severe unilateral pain. Urgent evacuation of the hematoma is critical to prevent fibrosis and, ultimately, permanent deformity of the auricle, known as "cauliflower ear."
" Careful hemostasis during hematoma evacuation should be obtained, and a drain and pressure dressing should be placed around the ear. The patient should be discharged on oral antibiotics and followed closely until the hematoma has completely resolved. Infection Wound infection occurs in <5% of otoplasties. Sterile technique is critical for prevention, and most surgeons include prophylactic antibiotics. As with hematomas, prompt identification and treatment are essential to avoid permanent deformity. Infections may present as pain, erythema, swelling, and drainage. Any suspicion of infection should be treated aggressively with oral antibiotics. Management includes drainage and irrigation of the wound, followed by treatment with oral anti-pseudomonas antibiotics. Patients with severe infection may require IV antibiotics. Late complications Loss of correction or relapse of auricular deformity: This occurs more commonly after cartilage-sparing techniques. There are multiple technical causes of inadequate correction, including pulling of sutures over time, improper suture placement, failure to correct a deformity during surgery, failure to anchor sutures firmly on the mastoid periosteum, and failure to weaken noncompliant cartilage. Inadequate correction requires revision otoplasty. Telephone and reverse telephone ear deformities: Telephone ear deformity occurs due to overcorrection in the middle third of the ear and relative under-correction of the superior and inferior poles. Reverse telephone ear deformity occurs when the middle third of the auricle remains prominent relative to the superior and inferior poles. Both deformities are avoidable with correct placement of the conchal setback sutures [11]. Conclusion Otoplasty using the Mustard, Furnas, or combined techniques can result in successful outcomes, and these procedures are very useful for prominent ear patients. However, care must be taken to prevent complications such as hematoma, infection, under-correction, and telephone ear deformity.
Diabetes care cascade and associated factors in 10 700 middle-aged adults in four sub-Saharan African countries: a cross-sectional study Objectives We investigated progression through the care cascade and associated factors for people with diabetes in sub-Saharan Africa to identify attrition stages that may be most appropriate for targeted intervention. Design Cross-sectional study. Setting Community-based study in four sub-Saharan African countries. Participants 10 700 individuals, aged 40–60 years. Primary and secondary outcome measures The primary outcome measure was the diabetes cascade of care defined as the age-adjusted diabetes prevalence (self-report of diabetes, fasting plasma glucose (FPG) ≥7 mmol/L or random plasma glucose ≥11.1 mmol/L) and proportions of those who reported awareness of having diabetes, ever having received treatment for diabetes and those who achieved glycaemic control (FPG <7.2 mmol/L). Secondary outcome measures were factors associated with having diabetes and being aware of the diagnosis. Results Diabetes prevalence was 5.5% (95% CI 4.4% to 6.5%). Approximately half of those with diabetes were aware (54%; 95% CI 50% to 58%); 73% (95% CI 67% to 79%) of aware individuals reported ever having received treatment. However, only 38% (95% CI 30% to 46%) of those ever having received treatment were adequately controlled. Increasing age (OR 1.1; 95% CI 1.0 to 1.1), urban residence (OR 2.3; 95% CI 1.6 to 3.5), hypertension (OR 1.9; 95% CI 1.5 to 2.4), family history of diabetes (OR 3.9; 95% CI 3.0 to 5.1) and measures of central adiposity were associated with higher odds of having diabetes. Increasing age (OR 1.1; 95% CI 1.0 to 1.1), semi-rural residence (OR 2.5; 95% CI 1.1 to 5.7), secondary education (OR 2.4; 95% CI 1.2 to 4.9), hypertension (OR 1.6; 95% CI 1.0 to 2.4) and known HIV positivity (OR 2.3; 95% CI 1.2 to 4.4) were associated with greater likelihood of awareness of having diabetes. Conclusions There is attrition at each stage of the diabetes care cascade in sub-Saharan Africa. Public health strategies should target improving diagnosis in high-risk individuals and intensifying therapy in individuals treated for diabetes. INTRODUCTION The number of adults with diabetes in sub-Saharan Africa (SSA) is projected to increase from 23.6 million in 2021 to 54.9 million in 2045. 1 Inadequate control of blood sugar and other cardiovascular risk factors will impose an unsustainable burden of diabetes-related complications on already constrained regional healthcare systems. Existing data suggest that outcomes in individuals in SSA with diabetes are currently suboptimal, with over 300 000 diabetes-related deaths before the age of 60 years in 2021, 1 highlighting the need to improve clinical care. Optimisation of diabetes management is contingent on numerous factors, including the diagnosis of diabetes, appropriate escalation of therapy and patient adherence to therapeutic interventions, but effective strategies to improve diabetes management in SSA are hampered by a lack of knowledge about the extent of the deficiencies in this care continuum. STRENGTHS AND LIMITATIONS OF THIS STUDY ⇒ We present harmonised primary data on the diabetes care cascade from multiple countries in sub-Saharan Africa. ⇒ Our study included over 10 000 participants from eastern, western and southern Africa. ⇒ We did not perform glucose tolerance testing and therefore may not have identified individuals who met criteria for diabetes diagnosis only after a glucose challenge.
⇒ Glycaemic control was assessed using fasting plasma glucose, which provides a point evaluation and may not be reflective of control over a longer period of time. The cascade of care model, frequently used to identify deficits in HIV care, may be applied to diabetes to identify opportunities for improved outcomes. [2][3][4] The elements of the cascade, namely prevalence, awareness, treatment and control, reflect aspects of the healthcare system, including effectiveness of prevention and detection strategies and the ability to implement and escalate therapy as necessary. On an individual level, diabetes awareness in particular is key to the adherence to lifestyle modification and medication that underpin glycaemic control. Evaluation of the diabetes care cascade allows policymakers to assess how well the healthcare system manages patients with diabetes and to identify areas for targeted interventions, particularly important in the resource-constrained lower-income and middle-income countries of SSA. Despite the benefits of establishing the diabetes care cascade, there is a paucity of primary data on it in SSA. Studies have often been limited to diabetes prevalence and awareness and conducted in hospital-based populations, introducing selection bias, while multicountry studies that have reported on the entire cascade have meta-analysed data from heterogeneous studies with methodological differences in determining each cascade stage. 2 3 We aimed to evaluate the diabetes cascade of care in four SSA countries, using harmonised data collected across six sites and performed exploratory analyses of the cascade stratified by sex and study site. We further investigated factors associated with the likelihood of having diabetes and being aware of a diagnosis of diabetes, the first two steps in the cascade. Study setting and participants The Genomic and Environmental Risk Factors for Cardiometabolic Disease in Africans (AWI-Gen) study and participating sites have been described in detail elsewhere. 5 6 In brief, 10 700 individuals were recruited from six sites in SSA in a community-based, cross-sectional study conducted between August 2013 and August 2016. Individuals were eligible for inclusion if they were aged 40-60 years and resided permanently in the study sites. We excluded individuals who were pregnant and, given that one of the broader objectives of the AWI-Gen study was to investigate genomic determinants of cardiometabolic disease, we also excluded individuals who were closely related to an existing participant or who had recently immigrated into the study site. We selected individuals aged 40-60 years as this is a peak time for the development of cardiometabolic disease. Three of the study sites were in South Africa (Soweto, Agincourt and Dikgale), one was in Kenya (Nairobi), one in Ghana (Navrongo) and one in Burkina Faso (Nanoro). Participants were therefore included from southern, eastern and western Africa. The selected sites were also on a continuum of urbanisation: Nairobi and Soweto were urban sites, Agincourt and Dikgale were semi-rural and Nanoro and Navrongo were rural. With the exception of Soweto, each study site is home to a health and socio-demographic surveillance system (HDSS), which enumerates all residents within the HDSS on a regular basis, ensuring a well-defined population sampling frame. In Nairobi, Agincourt, Navrongo and Nanoro, individuals were randomly sampled from the sampling frame, while in Dikgale, a convenience sampling strategy was employed.
In Soweto, 700 women who were participants in the Study of Women Entering an Endocrine Transition study 7 and caregivers of the Birth to Twenty+ cohort 8 were recruited. Additional female and all male participants were randomly recruited, using a sampling frame which covered the Soweto region. Where necessary, there was oversampling to ensure equal numbers of women and men. Patient and public involvement Prior to the initiation of the AWI-Gen study, an extensive process of community engagement was conducted. This included meetings with civic and traditional leadership structures, household visits and group information sessions to discuss planned research activities. Study results were delivered annually to study participants, communities and community leaders. Data collection and definitions Data were collected by study staff trained on standardised protocols. Socio-demographic data and personal and family medical history were self-reported. Additionally, individuals were considered to have hypertension if the mean systolic blood pressure of the latter two of three readings at the study visit was ≥140 mm Hg or the mean diastolic pressure was ≥90 mm Hg (Omron M6, Omron, Kyoto, Japan). 9 Individuals were classified as HIV positive if they reported a previous diagnosis of HIV or if they tested positive on the rapid HIV tests that were offered to participants in South Africa and Kenya (MD HIV 1/2 test (Medical Diagnostech, Cape Town, South Africa); One Step anti-HIV1+2 rapid screen test (InTec, Xiamen, China); Determine rapid test kit (Abbott Pharmaceuticals, Chicago, USA)). Rapid HIV tests were not offered in Ghana and Burkina Faso due to the low prevalence of HIV in those countries; individuals in these sites who did not know their HIV status were classified as HIV negative. Physical activity was assessed using the Global Physical Activity Questionnaire, and occupational, leisure time and travel-related physical activity variables from this questionnaire were summed to give the total moderate-vigorous intensity physical activity (MVPA) in minutes per week. Individuals were classified as having no MVPA (0 min/week), insufficient MVPA (1-150 min/week) or sufficient MVPA (≥150 min/week). 10 Standing height was measured with the participant barefoot or in light socks, using a Harpenden digital stadiometer (Holtain, Wales, UK). Weight was measured with the participant in light clothing, using a digital Physician Large Dial 200 kg capacity scale (Kendon Medical, South Africa) and body mass index was calculated as weight in kg divided by height in metres squared. Using a stretch-resistant measuring tape (SECA, Hamburg, Germany), hip circumference, as a measure of gluteofemoral fat, was measured around the most protruding part of the buttocks. Visceral and subcutaneous adipose tissue, direct measures of central adiposity associated with insulin resistance, were measured using abdominal ultrasound (LOGIQ e ultrasound system (GE HealthCare, Connecticut, USA)). Study staff from all sites were centrally trained in Johannesburg, South Africa to perform the abdominal ultrasounds. Visceral adipose thickness was determined by the thickness of the fat pad between the anterior spine and peritoneal layer at end expiration, while subcutaneous adipose thickness was the thickness of the fat pad between the skin and the outer edge of the linea alba.
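To make the operational definitions above concrete, here is a minimal Python sketch of the derived variables; the function and field names are hypothetical, while the thresholds come directly from the text (hypertension from the mean of the latter two of three readings, the three MVPA categories, and BMI as weight in kg divided by height in metres squared).

```python
# Minimal sketch of the study's derived variables; names are hypothetical.

def mean_of_latter_two(readings):
    """Mean of the last two of three blood-pressure readings."""
    return sum(readings[-2:]) / 2

def has_hypertension(systolic_readings, diastolic_readings):
    # >=140 mm Hg systolic or >=90 mm Hg diastolic, per the text
    return (mean_of_latter_two(systolic_readings) >= 140
            or mean_of_latter_two(diastolic_readings) >= 90)

def mvpa_category(minutes_per_week):
    # 0 = none, <150 = insufficient, >=150 = sufficient
    if minutes_per_week == 0:
        return "no MVPA"
    return "sufficient MVPA" if minutes_per_week >= 150 else "insufficient MVPA"

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(has_hypertension([150, 142, 144], [92, 88, 91]))  # True (mean systolic 143)
print(mvpa_category(120))                               # insufficient MVPA
print(round(bmi(82.0, 1.68), 1))                        # 29.1
```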
Venous blood was collected at study visits in potassium oxalate/sodium fluoride tubes and centrifuged immediately after collection, with the supernatant plasma stored at −80°C until analysis, according to a detailed sample processing protocol provided to all sites. Analyses for glucose were all performed at a central site, using colorimetric methods, on the Randox Plus clinical chemistry analyser (Randox, UK) with a range of 0.36-35 mmol/L and coefficient of variation <2.3%. Diabetes was defined as a previous diagnosis of diabetes by a healthcare provider (which could include a doctor, nurse, community health worker or similar person), ever having received treatment for diabetes, or fasting plasma glucose (FPG) ≥7 mmol/L or random plasma glucose ≥11.1 mmol/L 11 12 on the sample taken during the study visit. Samples were considered random if a participant had not fasted overnight or fasting status could not be confirmed. Participants were considered to be aware of a diagnosis of diabetes if they reported ever having been told by a healthcare provider that they had diabetes and were considered to have been treated for diabetes if they reported ever having received treatment for diabetes (dietary advice and/or glucose lowering agents) from a healthcare provider. Individuals were considered to have their diabetes controlled if fasting glucose was <7.2 mmol/L. 11 Statistical analysis Categorical participant characteristics of marital status, highest level of education, current smoking, known hypertension, known HIV positivity, family history of diabetes and physical activity category were described using frequencies and percentages, while medians and IQRs were used to describe continuous characteristics of age, body mass index, hip circumference, visceral fat and subcutaneous fat. The Mann-Whitney U, χ² and Fisher's exact tests were used to compare continuous and categorical variables, respectively, between groups defined by sex to investigate sex-related differences in potential determinants and groups defined by data missingness status to evaluate for bias between those who were included and those who were excluded from the analysis due to missing data. Age-adjusted diabetes prevalence was determined using the United Nations African population distribution 13 as the reference population structure. The proportion of those aware of having diabetes was calculated as a percentage of those with diabetes and similarly, the proportion of those ever receiving treatment for diabetes was calculated as a percentage of those aware of having diabetes. The proportion of those who had their diabetes controlled was calculated as a percentage of those who reported ever receiving treatment. The method for interval estimation described by Tiwari et al 14 was used to determine the 95% CIs. The Soweto site was excluded from the latter two stages of the cascade as the 'ever receiving treatment' variable was not collected. Multivariable logistic regression was used to assess the relationship between the odds of having diabetes and sociodemographic and clinical characteristics including urbanicity. Independent variables for inclusion in the logistic regression were selected based on previous research. 15 16 The Soweto site did not collect data on family history of diabetes and was therefore not included in this model, as family history of diabetes has been demonstrated in other settings to be strongly associated with higher odds of having the condition.
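Before turning to the regression models, the cascade computation just described can be sketched in a few lines of Python. This is a hedged illustration: the record layout and function names are invented, and a Wilson score interval is substituted for the interval method of Tiwari et al used in the paper.

```python
# Minimal sketch of the conditional cascade proportions; Wilson intervals
# replace the Tiwari et al method, and the data layout is hypothetical.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p, denom = k / n, 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def has_diabetes(person):
    # Prior diagnosis, ever treated, FPG >= 7.0 or random glucose >= 11.1
    if person["prior_diagnosis"] or person["ever_treated"]:
        return True
    g = person["glucose_mmol_l"]
    return g >= 7.0 if person["fasted"] else g >= 11.1

def cascade(people):
    """Each stage is a conditional proportion of the preceding stage."""
    dm = [p for p in people if has_diabetes(p)]
    aware = [p for p in dm if p["prior_diagnosis"]]
    treated = [p for p in aware if p["ever_treated"]]
    controlled = [p for p in treated
                  if p["fasted"] and p["glucose_mmol_l"] < 7.2]
    for name, k, n in [("aware", len(aware), len(dm)),
                       ("ever treated", len(treated), len(aware)),
                       ("controlled", len(controlled), len(treated))]:
        lo, hi = wilson_ci(k, n)
        print(f"{name}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%} to {hi:.0%})")

# Demo with three made-up records
cascade([
    {"prior_diagnosis": True, "ever_treated": True,
     "glucose_mmol_l": 6.4, "fasted": True},
    {"prior_diagnosis": True, "ever_treated": False,
     "glucose_mmol_l": 8.1, "fasted": True},
    {"prior_diagnosis": False, "ever_treated": False,
     "glucose_mmol_l": 7.6, "fasted": True},
])
```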
Additional multivariable logistic regression models were also fit, using data from all sites, to investigate associations with awareness of a diagnosis of diabetes. In the model investigating associations with odds of having diabetes, we included visceral and subcutaneous fat as direct assessments of central obesity and hip circumference as a measure of gluteofemoral fat. In the model investigating associations with awareness, we used body mass index as the measure of obesity as we thought awareness was more likely to be associated with a global assessment of obesity rather than individual fat depots. We were underpowered to assess associations with diabetes treatment and control. Sensitivity analyses were conducted in which associations with having diabetes and awareness of a diagnosis of diabetes were explored in analyses stratified by HIV prevalence, with the South African sites and Nairobi classified as high prevalence sites and Navrongo and Nanoro classified as low prevalence sites. Missing data were handled using pairwise deletion. Analyses were conducted using Stata V.16 (StataCorp, USA). Sample characteristics The characteristics of the 10 700 study participants are shown in online supplemental table S1. There were 5892 women (55%), with a median age of 50 years (IQR 45-55). There was some intersite variation in sociodemographic variables: while most participants in the urban and semi-rural sites had some formal education, between 70% and 80% of participants in the rural sites did not. Smoking prevalence ranged between 6% and 30% overall, with prevalence severalfold higher in men than in women in all sites. There was a high prevalence of chronic disease, with 3755 (37%) participants having hypertension and 1310 (12%) known to be HIV positive, although intersite variation was evident, with HIV prevalence being low, for example, in Nanoro and Navrongo. Family history of diabetes was highest in the urban and semi-rural areas. Anthropometric measures of obesity and subcutaneous fat were higher in women in urban and semi-rural areas, while there were no clear sex differences in Nanoro or Navrongo. Visceral fat was generally similar in both sexes. The majority of individuals (82%) were undertaking at least 150 min of moderate-to-vigorous physical activity weekly. Missing outcome data No participants had missing data on the diabetes status outcome, while 31 individuals had missing data on the awareness outcome and were slightly older (median age 54 vs 52 years; p=0.04), less likely to be employed (32% vs 64%; p<0.01) and had a different marital status distribution (p<0.01) than those who were not missing these data. Overall, just over half of the 613 individuals with diabetes were aware of their condition (54%; 95% CI 50% to 58%), with the highest awareness in Navrongo (65%; 95% CI 43% to 84%) and the lowest in Nanoro (25%; 95% CI 16% to 37%), although CIs across the sites were wide and overlapping. Nearly 75% of individuals aware of having diabetes reported ever receiving treatment, but only 38% (95% CI 30% to 46%) were adequately controlled. More women reported ever being treated for diabetes (p=0.01), but there were no sex differences in participants achieving control (p=0.98). In logistic regression models, increasing age (OR 1.1; 95% CI 1.0 to 1.1; p<0.01) and urban residence (OR 2.3; 95% CI 1.6 to 3.5; p<0.01) were associated with higher odds of having diabetes (table 1).
Hypertension was also associated with having diabetes (OR 1.9; 95% CI 1.5 to 2.4; p<0.01), as was family history of diabetes (OR 3.9; 95% CI 3.0 to 5.1; p<0.01); conversely, known HIV positivity was associated with lower odds of diabetes (OR 0.6; 95% CI 0.4 to 0.9; p<0.01). Visceral and subcutaneous fat were also associated with higher odds, while there was a marginal negative association with hip circumference (table 1). (Estimates for ever receiving treatment and achieving glycaemic control, calculated as a percentage of those who ever received treatment, exclude Soweto, as the treatment variable was not collected at that site; data on diabetes control were missing for a further 17 participants.) Similar associations were evident in sensitivity analyses restricted to sites with high HIV prevalence (online supplemental table S3). However, only family history remained significantly associated with diabetes in low HIV prevalence settings, although previously unobserved associations with male sex and physical activity emerged (online supplemental table S4). These analyses were, however, limited by the low prevalence of diabetes in these settings, which meant they were underpowered. Increasing age (OR 1.1; 95% CI 1.0 to 1.1; p=0.02), semi-rural environment (OR 2.5; 95% CI 1.1 to 5.7; p=0.02) and secondary education (OR 2.4; 95% CI 1.2 to 4.9; p=0.02) were all associated with greater likelihood of awareness of diabetes, as were the chronic conditions of hypertension (OR 1.6; 95% CI 1.0 to 2.4; p=0.04) and known HIV positivity (OR 2.3; 95% CI 1.2 to 4.4; p=0.02) (table 2). In sensitivity analyses in high HIV prevalence sites, only hypertension and known HIV positivity remained associated with higher awareness of diabetes (online supplemental table S5). The sample size in low HIV prevalence sites was too small to perform meaningful analyses. DISCUSSION In this multicountry study of the diabetes care cascade in SSA, we demonstrate attrition at each stage of the cascade with just over half of those with diabetes being aware of their condition and only approximately one-third of those who reported ever receiving treatment achieving optimal glycaemic control. We also report socio-demographic and clinical factors associated with increased odds of having diabetes including older age, urban residence and having hypertension and factors associated with awareness of having diabetes which included increasing age, semi-rural environment, secondary education and having hypertension or known HIV positivity. Our prevalence estimate of 5.5% is similar to the 2019 International Diabetes Federation (IDF) estimate for SSA of 4.7% in adults aged 20-79 years. 1 A subregional meta-analysis from western Africa revealed a lower prevalence (4.0% in urban adults and 2.6% in rural adults), 17 in keeping with our study where prevalence in the western African sites was two to three times lower than in the southern and eastern African sites. Factors in our study associated with higher odds of having diabetes, such as increasing age and urban residence, have been previously reported, with the western African meta-analysis reporting over a threefold increase in prevalence in people over 50 years 17 and Werfalli et al reporting a prevalence of 20% in people living in urban areas versus 7.9% in those in rural areas. 18 Our findings of associations with family history of diabetes, hypertension and adiposity support results from other country-level meta-analyses in Africa.
19 20 We also noted lower odds of having diabetes in individuals with known HIV, in keeping with other studies that have identified lower prevalence of cardiometabolic risk factors in individuals with HIV in SSA. 21 22 While our estimate of the prevalence of diabetes unawareness of 47% was broadly similar to the 2019 IDF estimate of the prevalence of undiagnosed diabetes of 60% in SSA, 1 it did contrast sharply with other studies. A meta-analysis of 23 studies from across Africa estimated a much lower pooled prevalence of undiagnosed diabetes of just under 4%. 23 There was, however, significant heterogeneity in the included studies and the majority of the data originated from a single country, which may not be representative of other countries in the region. This itself differed considerably from data from 12 nationally representative surveys in SSA in which 73% of those with diabetes were unaware of their condition, with factors similar to our study, namely older age and higher level of educational attainment, associated with awareness. 24 Our findings also suggest that those with chronic diseases such as HIV and hypertension may be more aware of having diabetes, which may be due to increased contact with the healthcare system. 25 In a study reporting data from 15 SSA countries, approximately 40% of adults with diabetes received glucose-lowering medication, while approximately 25% received counselling on diet, exercise or weight loss. 2 These proportions are lower than ours, which may be due to the difference in denominators: we used a denominator of individuals aware of having diabetes rather than all those with diabetes. In another study reporting data from 12 SSA countries, just over 30% of those with diabetes were aware of their condition, with a similar percentage ever having received lifestyle advice or currently receiving diabetes medication and just over 20% achieving control. 3 While this study also used a fixed denominator of the number of people with diabetes, the results support our finding that there is not a major fall-off between the stages of awareness and treatment and that the most significant deficits are at the stages of awareness of having diabetes, that is, diagnosis, and achieving glycaemic control. Of note, this study used a more liberal definition of glycaemic control than our study (FPG <10.1 mmol/L or glycosylated haemoglobin (HbA1c) <8% in the single study in which it was available) and may have identified a more drastic control deficit if a threshold for glycaemic control similar to ours had been used. A country-level meta-analysis of 22 studies from Ethiopia suggested a similar degree of glycaemic control to our study, with approximately one-third of those included achieving glycaemic targets, regardless of whether these were assessed using FPG or HbA1c. 26 We describe, to our knowledge, the first study in SSA in which harmonised primary data on the diabetes care cascade have been collected from multiple countries. Previous multicountry research in SSA on this subject has relied on systematic reviews and meta-analyses and has therefore been limited by the methodological heterogeneity of the constituent studies, including the use of different biomarkers to define diabetes. In our work, data were collected in a standardised manner and, in addition to self-report, we used venous blood samples, analysed at a single laboratory, to ascertain biochemical evidence of diabetes. Our study also included over 10 000 men and women from three subregions of SSA.
Our study does have limitations. We did not distinguish between type 1 and type 2 diabetes, and the care cascade and associated factors may differ between these two conditions. While we used accepted and convenient diagnostic criteria for diabetes, we may have underestimated the prevalence of diabetes as we did not assess glucose tolerance and may therefore have excluded those who met the criteria for diabetes only after a glucose challenge, which may be particularly important in populations of African descent. Both oral glucose tolerance tests and HbA1c appear to classify more African-ancestry individuals as having diabetes than FPG alone, 27 28 and use of either of these criteria may have increased diabetes prevalence in our study. Our research was conducted in HDSS sites and among a research cohort in Soweto, populations which may not be nationally representative. Indeed, individuals in these sites may have been told they had diabetes while taking part in previous studies, making the proportion of individuals with diabetes who know they have the condition higher than in the general population. We also used self-report rather than clinical records to determine ever receiving diabetes treatment. FPG was used to assess diabetes control; this provides an evaluation only at a single point in time and may be subject to more analytical variability than HbA1c, which has largely supplanted it in clinical use in well-resourced environments. Several large-scale epidemiological studies have, however, used plasma glucose measures to assess glycaemic control. 2 3 We collected data for this study between 2013 and 2016 and it is conceivable that some of the parameters in the cascade may have changed during or since that time. Despite these limitations, our study provides valuable information on the burden of diabetes in SSA and the deficiencies which need to be addressed to improve outcomes. In areas where diabetes prevalence is low, primordial prevention strategies should be employed to reduce the likelihood of developing risk factors such as obesity, with particular focus on higher-risk urban environments. Screening of at-risk populations needs to be enhanced, and the low percentage of individuals attaining satisfactory glycaemic control suggests that more aggressive, treat-to-target strategies need to be promoted among healthcare workers, although we acknowledge this may be limited by drug availability in many parts of the continent. Additional work is necessary to understand whether our findings are applicable to other SSA countries and subregions at different stages of the epidemiological transition and with variable access to healthcare. It is also essential to understand key determinants of ever receiving diabetes treatment and control, which we were underpowered to investigate, and care cascades for other important vascular risk factors in people with diabetes, such as elevated blood pressure and dyslipidaemia. Identification of the points in each of these care cascades at which significant attrition is occurring will assist public health officials in developing appropriate interventions to reduce diabetes-related morbidity and mortality. Twitter Shukri F Mohamed @shukrifmohamed and Engelbert A Nonterah @EngelbertNonte1 Contributors ANW-conceptualisation, writing-original draft, review and editing, funding acquisition. IM-formal analysis, writing-review and editing. GAg-data collection, investigation, writing-review and editing.
GAs-data collection, investigation, writing-review and editing. PB-data collection, investigation, writing-review and editing, funding acquisition. SSRC-investigation, writing-review and editing. FXGO-data collection, investigation, writing-review and editing, funding acquisition. EM-investigation, writing-review and editing. LKM-investigation, writing-review and editing. SFM-investigation, writing-review and editing. EAN-data collection, investigation, writing-review and editing. SAN-conceptualisation, investigation, writing-review and editing. HS-data collection, investigation, writing-review and editing. MR-conceptualisation, writing-review and editing, project administration, funding acquisition. NJC-conceptualisation, writing-review and editing, funding acquisition. ANW, MR and NJC act as guarantors of the paper. Disclaimer This paper describes the views of the authors and does not necessarily represent the official views of the National Institutes of Health (USA) or the South African Department of Science and Innovation who funded this research. The funders had no role in study design, data collection, analysis and interpretation, report writing or the decision to submit this article for publication. Competing interests ANW declares an honorarium received from Sanofi for serving as a panel member at an educational event on thyroid cancer. SAN declares participation in a data safety monitoring board of a Phase IV open-label trial to assess bone mineral density in a cohort of African women on Depo-Provera and tenofovir disoproxil fumarate switched to tenofovir alafenamide fumarate-based antiretroviral therapy and Council membership in the International Society of Developmental Origins of Health and Disease. Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details. Patient consent for publication Not applicable. Ethics approval Ethical approval for the AWI-Gen study was provided by the Human Research Ethics Committee (Medical) of the University of the Witwatersrand (M121029, M170880). Participants gave informed consent to participate in the study before taking part. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available upon reasonable request. Deidentified individual participant data from the AWI-Gen study are available from the European Genome-Phenome Archive (EGA) at study number EGA00001002482 (https://ega-archive.org/datasets/EGAD00001006425).
Assessment of genetic diversity in Trigonella foenum-graecum and Trigonella caerulea using ISSR and RAPD markers Background Various species of the genus Trigonella are important from medical and culinary aspects. Among these, Trigonella foenum-graecum is commonly grown as a vegetable. This anti-diabetic herb can lower blood glucose and cholesterol levels. Another species, Trigonella caerulea, is used as food in the form of young seedlings. This herb is also used in cheese making. However, little is known about the genetic variation present in these species. In this report we describe the use of ISSR and RAPD markers to study genetic diversity in both Trigonella foenum-graecum and Trigonella caerulea. Results Seventeen accessions of Trigonella foenum-graecum and nine accessions of Trigonella caerulea representing various countries were analyzed using ISSR and RAPD markers. Genetic diversity parameters (average number of alleles per polymorphic locus, percent polymorphism, average heterozygosity and marker index) were calculated for the ISSR, RAPD and ISSR+RAPD approaches in both species. Dendrograms were constructed using the UPGMA algorithm based on the similarity index values for both Trigonella foenum-graecum and Trigonella caerulea. The UPGMA analysis showed that plants from different geographical regions were distributed in different groups in both species. In Trigonella foenum-graecum, accessions from Pakistan and Afghanistan were grouped together in one cluster, while accessions from India and Nepal were grouped together in another cluster. However, in both species, accessions from Turkey did not group together and fell in different clusters. Conclusions Based on genetic similarity indices, higher diversity was observed in Trigonella caerulea as compared to Trigonella foenum-graecum. The genetic similarity matrices generated by ISSR and RAPD markers in both species were highly correlated (r = 0.78 at p = 0.001 for Trigonella foenum-graecum and r = 0.98 at p = 0.001 for Trigonella caerulea), indicating congruence between these two systems. Implications of these observations in the analysis of genetic diversity and in supporting the possible Center of Origin and/or Diversity for Trigonella are discussed. Background The family Fabaceae includes many crops useful for food, forage, fiber, wood and ornamental purposes. In this family, a few legumes such as chickpea, soybean, faba bean, fenugreek, lentil and pea are consumed as grain legumes. The grain legumes are plants used as food in the form of unripe pods, mature seeds or immature dry seeds, directly or indirectly [1]. The grain legumes not only provide variety to the human diet but also supply dietary proteins for vegetarian populations that lack animal and fish protein in their diet. Considering the growing problem of malnutrition, the use of legume species as high-protein food is very important. Moreover, legumes are also capable of symbiotic nitrogen fixation, enriching the soil for the crop that follows the legume crop [2]. The genus Trigonella is one of the largest genera of the tribe Trifoliatae in the family Fabaceae and sub-family Papilionaceae [3]. Among Trigonella species, Trigonella foenum-graecum (commonly known as fenugreek) is a flowering annual with autogamous white flowers occasionally visited by insects. Indigenous to countries on the eastern shores of the Mediterranean, fenugreek is widely cultivated in India, Egypt, Ethiopia, Morocco and occasionally in England [4].
Trigonella foenum-graecum is extensively grown in the tropical and subtropical regions of India. Different parts of the plant such as leaves and seeds are consumed in India. It is also used for medicinal purposes. According to the ancient medicinal system Ayurveda, it is a herbal drug that is bitter or pungent in taste. It is effective against anorexia and is a gastric stimulant [5]. Fenugreek is becoming popular around the world, with its extract used to flavor cheese in Switzerland, artificial maple syrup and bitter-run in Germany, roasted seeds as a coffee substitute in Africa, seed powder mixed with flour as fortification to make flat-bread in Egypt, as an anti-diabetic herb in Israel, whole seed and dried plant used as insect and pest repellent in grain storage, and oil used in perfumery in France [6]. Research reports in recent years have indicated that fenugreek can be a remedy for diabetes by lowering blood sugar and cholesterol levels [7]. T. caerulea, from the same genus and commonly known as Blue fenugreek, on the other hand, is a less commonly grown herb. This flowering annual with autogamous blue flowers is found in the Alps and in the mountains of eastern and south-eastern Europe. Terminal leaves are mainly used for cooking, while young seedlings are eaten with oil and salt. Dried and powdered leaves as well as flowers are used for flavoring and coloring bread, cheese, etc., in China and Germany. They are also used as a condiment in soups and potato dishes, and a decoction of leaves is used as an aromatic tea [8]. Grain legumes like T. foenum-graecum and T. caerulea, although important in food and medicine, are rarely grown outside their native habitat. Across the world, only known and well-defined cultivars are grown in specific areas. Gene banks also harbor scanty germplasm collections of Trigonella species [9]. The neglected and underutilized status of these locally important crops indicates a risk of disappearance of important plant material developed over thousands of years of cultivation. One of the important factors restricting their large-scale production and the development of better varieties is that very little information is available about their genetic diversity, inter- and intraspecific variability and genetic relationships among these species. Therefore, attempts to analyze possible untapped genetic diversity are essential for breeding and crop improvement. The present study was undertaken with the objective of analyzing genetic diversity in various accessions of T. foenum-graecum and T. caerulea, representing the various countries where they are grown, using molecular marker technology. Assessment of genetic diversity in T. foenum-graecum and T. caerulea using ISSR and RAPD markers A set of 100 ISSR primers was used for initial screening of 7 accessions of T. foenum-graecum, of which 40 gave amplification. However, only 14 ISSR primers detected intraspecific variation in 17 accessions of T. foenum-graecum, generating clear, reproducible patterns and revealing 100 bands in the range of 500 bp to 2 kb. Among these, 72 bands were polymorphic, amounting to 72% polymorphism [Table 3]. Furthermore, during ISSR analysis 11 unique bands were obtained, where 6 were contributed by accession TMP = 8714 from Yemen, 4 by accession TMP = 8691 from Turkey, and 1 by accession TMP = 8685 from Iran. Similarly, in T. caerulea, of the 100 ISSR primers used for initial screening, 47 gave amplification. Of these, 18 primers detected intraspecific variation in 9 accessions of T.
caerulea, showing 93.64% polymorphism [Table 4]. With these 18 primers, 16 unique bands were produced, of which 9 and 7 bands were contributed by accessions 206901 and 206486, respectively, from Turkey alone. In the case of RAPD analysis, 100 RAPD primers were used for initial screening in T. foenum-graecum, of which 22 primers generated polymorphic patterns revealing 70.12% polymorphism [Table 3]. Eight unique bands were produced by these primers, of which 3 were contributed by accession TMP = 8714 from Yemen, 2 by accession TMP = 8698 from Egypt, 1 by accession TMP = 8707 from Afghanistan and 1 each by accessions TMP = 8691 and TMP = 8690 from Turkey. Similarly, in T. caerulea, of the 40 primers used for initial screening, 10 primers produced polymorphic patterns giving 95.83% polymorphism [Table 4]. Eight unique bands were produced with these primers, wherein the maximum number of unique bands (4 each) was again produced by the same accessions 206901 and 206486 from Turkey. Dendrogram analysis for T. foenum-graecum and T. caerulea Genetic similarity was calculated from Nei's similarity index values for all 17 accessions of T. foenum-graecum, considering the ISSR and RAPD approaches individually as well as together. Based on ISSR markers alone, the similarity index values ranged from 0.69 to 0.92. These values were used to construct a dendrogram using the Unweighted Pair Group Method with Arithmetic Averages (UPGMA). In the ISSR-based dendrogram, T. foenum-graecum genotypes formed 4 clusters [Figure not shown]. The first cluster grouped together accessions from Afghanistan (8707), Canada (1065), Pakistan (8717, 8718), Iran (8675), and Turkey (8690). The second cluster contained accessions from India (8686, 8689, 8675). The remaining accession from India (8687) and one from Nepal (8706) formed the third cluster. The fourth cluster contained accessions from Egypt (8698, 8679) and Turkey (8692). Accessions from Ethiopia (8696), Turkey (8691) and Yemen (8714) outgrouped from the main clusters. The three accessions from Turkey analyzed in the present study fell into different clusters. Based on RAPD markers alone, the similarity index values ranged from 0.71 to 0.91. In the RAPD-based dendrogram, T. foenum-graecum genotypes formed 2 main clusters [Figure not shown]. The first cluster had two subgroups: the first subgroup contained accessions from Afghanistan (8707), India (8686, 8687, 8675), Turkey (8690) and Egypt (8698), while the second subgroup contained accessions from Pakistan (8717, 8718), Turkey (8692), and Egypt (8679). The accession from Canada was associated with these clusters. The second cluster contained accessions from Iran (8685) and Nepal (8706), and the accession from India was associated with this cluster. Accessions from Ethiopia (8696), Turkey (8691) and Yemen (8714) outgrouped from these two clusters. In the RAPD-based dendrogram also, accessions from Turkey fell into different clusters. Based on both marker systems together, the similarity index values ranged from 0.65 to 0.89 [Fig. 1]. Here the T. foenum-graecum accessions from Egypt (8698, 8679) were grouped together. Accessions from Pakistan (8717, 8718) and Afghanistan (8707) were grouped together in one cluster. Accessions from India (8686, 8689, 8675, 8687), Nepal (8706) and Iran (8685) were grouped together. However, all three accessions from Turkey fell in different clusters and did not group among themselves. Bootstrapping was done using the WinBoot program to estimate the relative support at clades.
The robustness of the clusters was not very strong in T. foenum-graecum (50-70%). In T. caerulea, genetic similarity was calculated from Nei's similarity index values considering the ISSR and RAPD approaches individually as well as together. Based on the ISSR marker system, the similarity index values ranged from 0.41 to 0.92, while those based on RAPD markers ranged from 0.34 to 0.93. Similarity index values based on both marker systems together ranged upward from 0.38. Heterozygosity and marker index Heterozygosity was calculated using the ISSR and RAPD marker systems individually as well as together, as detailed in Table 3 for T. foenum-graecum and in Table 4 for T. caerulea. Correlation between measures of similarity In T. foenum-graecum, when the similarity matrices generated using ISSR and RAPD markers were compared, a value of r = 0.78 at P = 0.001 indicated a good correlation between the data generated by both systems [Fig. 3]. Similarly, when the similarity matrices generated using the ISSR and RAPD systems were compared in the case of T. caerulea, a value of r = 0.98 indicated a very good correlation between the two marker systems [Fig. 4]. Discussion The two marker systems used in the present study, ISSR and RAPD, have also been used as effective tools to evaluate genetic diversity and to throw light on phylogenetic relationships in Brassica napus (rapeseed) [10], Allium sect. Sacculiferum (Alliaceae) [11] and Asimina triloba (pawpaw) [12,13]. Genetic diversity analysis using ISSR and RAPD markers in legumes like Cicer [14,15] and Cajanus [17,18] has been carried out in our own laboratory. These studies have given important clues in understanding species relationships, which may further assist in developing and planning breeding strategies. However, no such reports on genetic diversity using molecular markers were available in the genus Trigonella. In the present study, an attempt has been made to examine the level of genetic variation within T. foenum-graecum and T. caerulea accessions obtained from germplasm collection centers at Saskatoon (Plant Gene Resources of Canada) and Pullman (USDA-ARS Plant Introduction Station, Washington). The T. foenum-graecum accessions were selected in order to represent most of the countries where it is grown. In the case of T. caerulea, all nine accessions available at Saskatoon and Pullman were used in the present study. Analysis of polymorphism detected in T. foenum-graecum and T. caerulea Polymorphism in a given population is often due to the existence of genetic variants represented by the number of alleles at a locus and their frequency of distribution in a population. Heterozygosity corresponds to the probability that two alleles taken at random from a population can be distinguished using the marker in question. Thus a convenient quantitative estimate of marker utility and the polymorphism detected can be given in terms of the mean heterozygosity and the marker index [18]. In T. foenum-graecum as well as in T. caerulea, the Hav and marker index (MI) values for ISSR and RAPD markers [Tables 3 and 4, respectively] did not differ significantly, indicating that similar levels of polymorphism were detected by both marker systems in the given germplasm pools. This was also confirmed by the high correlation coefficients for the ISSR and RAPD marker systems obtained for T. foenum-graecum and T. caerulea [Figs. 3 and 4]. Genetic diversity as measured by heterozygosity was higher in T. caerulea (0.33) as compared to T. foenum-graecum (0.21).
Based on allozyme diversity, estimated mean heterozygosity values have been reported for the self-pollinating species Vigna unguiculata (Hav = 0.027) [19] and Vicia tetrasperma (Hav = 0.342) [20]. The heterozygosity value for Vigna unguiculata was lower, while that for Vicia tetrasperma was higher, as compared to T. foenum-graecum and T. caerulea. Based on ISSR markers, the estimate of genetic diversity, Hav = 0.358, reported in cultivated pawpaw (Asimina triloba) was higher than in T. foenum-graecum and T. caerulea [13]. Estimation of genetic relatedness in T. foenum-graecum and T. caerulea Data collected with the ISSR and RAPD marker systems were used to compare genetic similarity between various accessions of T. foenum-graecum and T. caerulea. The accessions could be a mixture of different genotypes. Therefore, to have a complete representation of a specific accession, DNAs from fifteen plants were mixed in equal proportion. Thus, within-accession diversity was eliminated and a complete banding profile of the accession was used for the analysis. In T. foenum-graecum, ISSR and RAPD detected almost similar levels of polymorphism (72% with ISSR and 70.12% with RAPD). In the UPGMA analysis, T. foenum-graecum accessions from one country and the nearby region grouped together in some cases, while they were placed in different clusters in others. Accessions from Pakistan and Afghanistan grouped together in one cluster, while accessions from India and Nepal grouped in another cluster. Moreover, the three accessions from Turkey fell in different clusters in spite of being geographically very close to each other. Thus, there was no clear clustering pattern of geographically closer accessions in the present study, indicating that the association between genetic similarity and geographical distance was weak. However, it is necessary to use a larger number of accessions from each geographical location to confirm this pattern. In T. caerulea also, ISSR (93.64%) and RAPD (94.83%) detected almost equal levels of polymorphism. T. caerulea showed more polymorphism as compared to T. foenum-graecum. In the case of T. caerulea also, the UPGMA analysis showed that plants from different geographical regions were distributed in different groups. Here again, the accessions from Turkey were not grouped together. Two accessions from Turkey outgrouped from the main cluster, while one was grouped in the first cluster with Australia. In T. caerulea we could obtain three accessions from Turkey and only one accession from each of the other countries. Therefore, it would be premature to comment precisely on the correlation of geographic distance and genetic diversity in this case. To confirm the available pattern it is necessary to use a larger number of accessions from each geographical location. [Figure 3: Regression analysis of similarity matrices generated using RAPD and ISSR marker systems in T. foenum-graecum.] Center of Origin and/or Center of Diversity for Trigonella The place of origin of a species, as explained by Vavilov, is an area that contains a large amount of genetic variability of that species. According to him, variation is a function of time; hence the region containing the greatest variation in a species would have supported and sustained that species for a longer time than other regions, suggesting that region to be the Center of Origin and/or Diversity.
He set up eight geographic centers, two of which, namely the Near Eastern and Mediterranean Centers, extend into Turkey [21]. It has been postulated by Vavilov that the Near East region, extending from Israel through Syria and southern Turkey into Iran and Iraq, together with the Mediterranean Center, including Spain, Morocco and Turkey, is the Center of Origin of Trigonella, Trifolium and Medicago species [22]. In the present study, both T. foenum-graecum and T. caerulea accessions from Turkey exhibited more diversity. These results support Vavilov's hypothesis. The Indian accessions of T. foenum-graecum, i.e., accession numbers 8686 (Khandwa), 8685 (Mumbai), and 8687 (Patiala), separated by aerial distances of 52 km, 104 km, and 135 km, respectively, from each other, were genetically more similar (similarity index 0.893) and clustered together [Fig. 1]. However, the accessions of T. foenum-graecum from Turkey, i.e., accession numbers 8692 (Malatya) and 8691 (Elbistan), separated by a distance of 100 km (similarity index values 0.745-0.875), were outgrouped and genetically more distant from each other, although morphologically they were similar to each other as well as to the accessions from other countries. Turkey is one of the significant and unique countries in the world from the point of view of plant genetic resources and plant diversity. The country has more than 3,000 endemic plants, and immense diversity has been reported [23]. [Figure 4: Regression analysis of similarity matrices generated using RAPD and ISSR marker systems in T. caerulea.] Many genera of cultivated plants like Cicer, Lens, Pisum, Amygdalus, Prunus, Triticum, etc. have their Center of Origin and/or Diversity in this country [22]. Vavilov designated southeastern Turkey and the adjoining Syria as the primary Center of Origin (now the center of diversity) for chickpea [24]. Similar to chickpea (and other grain legumes), in T. foenum-graecum the large-seeded cultivars abound around the Mediterranean region, whereas the small-seeded cultivars predominate eastwards. Thus, Turkey may also be the primary Center of Origin of T. foenum-graecum and T. caerulea. However, this hypothesis needs to be confirmed by considering more accessions distributed over a wide geographic range, especially from the Near East and the Central Mediterranean region. Conclusions In conclusion, molecular markers allowed us to estimate the overall genetic diversity in T. foenum-graecum and T. caerulea and simultaneously revealed molecular-based genetic relationships. In the UPGMA analysis, no significant correlation was observed between geographic distance and genetic diversity. Our data further support the hypothesis of the Near East and the Central Mediterranean being the Center of Origin and/or Diversity for Trigonella, as put forth by Vavilov. Plant material and DNA extraction Seeds of T. foenum-graecum accessions were obtained from Plant Gene Resources of Canada (PGRC), Saskatoon, Canada. These accessions, along with their TMP numbers and the countries from which they were collected, are outlined in Table 1. Seeds for the various T. caerulea accessions were obtained from PGRC, Saskatoon, Canada and the USDA-ARS Plant Introduction Station at Pullman, Washington (W-6), and are detailed in Table 2. Fifteen plants of each accession were grown in pots for DNA isolation.
Two grams of young leaf tissue were harvested from each plant and frozen in liquid nitrogen for DNA extraction. Plant DNA was extracted by the method of Doyle and Doyle [25], and equal amounts of DNA from each of the fifteen plants were pooled for each accession.

PCR amplification
ISSR
A set of 100 anchored microsatellite primers was procured from the University of British Columbia, Canada. PCR amplification of 20 ng of DNA was performed in 10 mM Tris-HCl pH 7.5, 50 mM KCl, 1.5 mM MgCl2, 0.5 mM spermidine, 2% formamide, 0.1 mM dNTPs, 0.3 µM primer and 0.8 U of Taq DNA polymerase (AmpliTaq DNA polymerase, Perkin Elmer, USA) in a 25 µl reaction, using a Perkin Elmer 9700 thermocycler. After an initial denaturation at 94°C for 5 minutes, each of 45 cycles consisted of 30 seconds of denaturation at 94°C, 45 seconds of annealing at 50°C and 2 minutes of extension at 72°C, with a final 5-minute extension at 72°C at the end of the 45 cycles.

RAPD
RAPD analysis was performed using arbitrary decamer primers procured from the University of British Columbia, Canada. The reaction mixture (25 µl) contained 10 mM Tris-HCl pH 7.5, 50 mM KCl, 1.5 mM MgCl2, 0.5 mM spermidine, 0.1 mM dNTPs, 15 pmol of primer, 20 ng of genomic DNA and 0.8 U of Taq DNA polymerase (AmpliTaq DNA polymerase, Perkin Elmer, USA). Amplification was carried out in a Perkin Elmer 9700 thermocycler for 40 cycles, each consisting of a denaturation step of 1 minute at 94°C, an annealing step of 1 minute at 36°C and an extension step of 2 minutes at 72°C. The last cycle was followed by 5 minutes of extension at 72°C.

Agarose gel electrophoresis
Amplified products were electrophoresed on 2% agarose gels using 0.5× TAE buffer (10 mM Tris-HCl and 1 mM EDTA, pH 8.0) and visualized by ethidium bromide staining. The patterns were photographed and stored as digital pictures in a gel documentation system. The reproducibility of the amplification was confirmed by repeating each experiment three times.

Data analysis
Unequivocally reproducible bands were scored and entered into a binary character matrix (1 for presence and 0 for absence). Genetic similarity among accessions was determined by Nei's genetic distance [26], and a dendrogram was constructed from the distance matrix using the Unweighted Pair Group Method with Arithmetic Averages (UPGMA). The scores entered in the matrix were analyzed using TAXAN version 4.0 software on the basis of band sharing. The similarity matrix was generated using the Dice coefficient, SI = 2N_ab / (N_a + N_b), where N_a is the total number of bands in lane a, N_b the total number of bands in lane b, and N_ab the number of bands common to lanes a and b [27]. The Dice values were then used to perform the UPGMA analysis. To evaluate the robustness of the groupings, the binary data matrix was subjected to bootstrapping using the WinBoot program [28]: the phenogram was reconstructed 1000 times by repeated sampling with replacement, and the frequency with which each group formed was used to indicate the strength of that group. Correlation coefficients for the similarity matrices generated by the ISSR and RAPD data, in both T. foenum-graecum and T. caerulea, were calculated by the method of Smouse et al. [29].
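To make the Dice/UPGMA steps described above concrete, here is a minimal sketch in Python using SciPy; the accession names and band scores are invented for illustration and are not data from this study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical binary band matrix: rows = accessions, columns = ISSR/RAPD bands
# (1 = band present, 0 = absent). Real analyses score pooled-DNA profiles.
names = ["acc_A", "acc_B", "acc_C", "acc_D"]
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 0],
])

def dice_similarity(a, b):
    """SI = 2*N_ab / (N_a + N_b), the Dice coefficient used in the text."""
    n_ab = np.sum((a == 1) & (b == 1))
    return 2.0 * n_ab / (a.sum() + b.sum())

n = len(bands)
sim = np.array([[dice_similarity(bands[i], bands[j]) for j in range(n)]
                for i in range(n)])

# UPGMA operates on distances; 1 - similarity is a common choice.
dist = 1.0 - sim
condensed = dist[np.triu_indices(n, k=1)]    # condensed form expected by scipy
tree = linkage(condensed, method="average")  # "average" linkage = UPGMA

print(np.round(sim, 3))
dendrogram(tree, labels=names, no_plot=True)  # set no_plot=False to draw
```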
The expected heterozygosity H_n for a genetic marker was calculated as H_n = 1 − Σ p_i², where p_i is the frequency of the ith allele [26]. The arithmetic mean heterozygosity H_av for each marker class was calculated as H_av = Σ H_n / n, where n is the number of markers or loci analyzed [18]. The average heterozygosity for polymorphic markers, (H_av)p, was further derived as (H_av)p = Σ H_n / n_p, where n_p is the number of polymorphic markers or loci [18]. The marker index (MI) for each marker system was also calculated, as MI = E × (H_av)p, where E is the effective multiplex ratio (E = nβ, with β the fraction of polymorphic markers or loci) [18].
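A minimal sketch of these calculations (Python; the allele frequencies below are made-up values for illustration, not data from this study):

```python
import numpy as np

# Hypothetical allele (band) frequencies per marker: row i gives the
# frequencies p_i of the alleles observed at marker/locus i.
marker_freqs = [
    np.array([0.6, 0.4]),        # polymorphic marker
    np.array([1.0]),             # monomorphic marker
    np.array([0.5, 0.3, 0.2]),   # polymorphic marker
]

# Expected heterozygosity per marker: H_n = 1 - sum(p_i^2)
H = np.array([1.0 - np.sum(p**2) for p in marker_freqs])

n = len(H)                      # number of markers analyzed
n_p = int(np.sum(H > 0))        # number of polymorphic markers

H_av = H.sum() / n              # mean heterozygosity over all markers
H_av_p = H.sum() / n_p          # mean heterozygosity per polymorphic marker
beta = n_p / n                  # fraction of polymorphic markers
E = n * beta                    # effective multiplex ratio
MI = E * H_av_p                 # marker index

print(f"H_av={H_av:.3f}  (H_av)p={H_av_p:.3f}  E={E:.2f}  MI={MI:.3f}")
```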
Inequality in old age cognition across the world

Although cohort and country differences in average cognitive levels are well established, identifying the degree and determinants of inequalities in old age cognitive functioning could guide public health and policymaking efforts. We use all publicly available and representative old age surveys with comparable information to assess inequalities of cognitive functioning for six distinct age groups in 29 countries. We document that cognitive inequalities in old age are largely determined by earlier educational inequalities as well as by gender-differential survival rates. For example, a one percentage point increase in the Gini index of past education is associated with an increase of 0.45 percentage points in the Gini index of delayed recall and 0.23 percentage points in the Gini of immediate recall. Results are robust to a variety of alternative explanations and persist even after controlling for gender-related biases in survival rates. Furthermore, we find evidence that unequal opportunities for education, captured by differences in parental background and gender, also have significant effects on inequality of old age cognition.

Highlights
- A linkage between past educational inequality and today's late life cognition is documented.
- Past educational inequality increases inequality of late life cognitive functioning.
- The relatively larger survival rate of females increases inequality of cognition.
- It is verified that inequality of late life cognition is decreasing over time for some countries.
- Inequality of opportunity (parental background) plays a role in cognitive inequality.

Introduction
Intact cognitive functioning in old age refers to attention, thinking, understanding, learning, decision-making and problem solving. It is fundamental to "an individual's ability to engage in activities, accomplish goals and successfully negotiate the world" (Blazer et al. 2015, p. 2). From an economic perspective, cognitive abilities are an indicator of accumulated human capital that depreciates over time, although the individual can take limited measures for cognitive maintenance or repair (McFadden 2008). At older ages, higher starting levels of cognitive functioning are especially important, as processes of cognitive aging lead to declines in cognitive functioning. Intact cognitive functioning is related to autonomy, quality of life and active aging, whereas cognitive impairment or dementia goes along with increased disability and higher health expenditures (Bonsang et al. 2012). Many studies have focused on measuring the level of cognitive functioning and its determinants (Leist and Mackenbach 2014), and phenomena regarding cohort and country differences in average cognitive levels, such as the Flynn effect and associations with economic development, are well established (Skirbekk et al. 2013, Skirbekk et al. 2012, Rindermann 2008). However, little is known regarding inequalities in cognitive functioning in old age. We argue that the degree and determinants of old age cognitive inequalities may provide important information for public health and policymaking efforts. Given the potential of education to increase cognitive reserve (Chen 2016, Banks and Mazzonna 2012, Meng and D'Arcy 2012, Singh-Manoux et al. 2011, Glymour et al. 2008, Lee et al.
2003), the distribution of cognitive functioning in old age may reflect undeveloped potential for cognitive functioning due to early-life educational inequalities and lack of educational opportunities. Therefore, high inequality in old age cognition may be associated with low average levels of old age cognition. Given the high costs of cognitive impairment and dementia (Prince et al. 2015, Handels et al. 2013) and their importance for health expenditures, high inequality of cognitive functioning may be expected to undermine the sustainability of healthcare. Further, considering the importance of cognitive functioning for financial decision making and financial outcomes (Christelis 2010, Smith et al. 2010), inequalities in cognition may exacerbate the inequality of wealth through poor financial planning and investment decisions. Indeed, a recent study by Lusardi et al. (2017) shows that financial literacy can explain about 30-40% of wealth inequality in the U.S. In a broader perspective, inequality of old age cognitive functioning is also related to the distribution of wellbeing among old people: cognitive functioning may determine key dimensions for this population group, such as autonomy, mental health and planning ability, among others. Educational inequalities have been shown to have long-run consequences, hampering equality of opportunity for the accumulation of resources over the life course (Attewell and Newman 2010, Roemer 1998). In this paper, we analyze current inequalities in old age cognitive functioning around the world. Our goal is to assess the extent to which educational inequalities experienced at young age have long-run effects on inequality in cognitive functioning experienced in old age. We condition our results on the role of survival rates, because differential survival rates may further aggravate today's inequalities owing to the gender-unbalanced access to education often observed in older cohorts (Weber et al. 2014). In our baseline estimation we find that a one percentage point increase in educational inequality is associated with a positive and significant increase in inequality of cognitive functioning in late age that ranges from 0.10 to 0.45 percentage points, depending on the cognitive functioning indicator. The effects are consistent in significance and size across a variety of robustness checks, involving the way in which inequalities are measured and the way in which changes in inequalities are identified. Results are also robust to the effects of unfair differences in parental background and gender. Our investigation is based on a variety of available data sources, including survey data, population projections and the historical distribution of educational attainment, drawn from 29 countries at diverse economic development levels on four continents. The selection of countries is mostly based on the availability of representative survey data measuring cognitive functioning among old individuals. The main results show evidence of significant long-term effects of past educational inequalities on the inequalities in old age cognitive functioning observed today. In addition, we show that the relatively higher life expectancy of women may contribute to increased cognitive inequality. All in all, we also bring new evidence that countries that experienced a large gender gap in education show higher old age cognitive inequalities.
Data
For the measurement of inequality of old-age cognition we use survey data from 29 countries for the years 2008-2015, with most of the surveys (23 out of 29) taken between 2011 and 2015. The complete list of countries, years and surveys is reported in Table 1. The selection of these countries is based on the public availability of the data, the comparability of the cognitive tests and the national representativeness of the samples. All these surveys are specialized studies focused on the elderly population (generally aged 50+) that can be considered sister studies of the Health and Retirement Survey (HRS). Altogether, these surveys represent about 61% of the world's 50+ population. They include, among others, Germany (2015), Sweden (2015), Netherlands (2013), Spain (2015), Italy (2015), France (2015), Denmark (2015), Greece (2015), Switzerland (2015), Belgium (2015), Israel (2015), Czech Republic (2015), Poland (2015), Luxembourg (2015) and Hungary (2011).

For the measurement of past educational inequality we rely on historical educational attainment data (the Barro-Lee, BL, dataset). Thomas et al. (2001) and Checchi (2004), with small differences, compute educational Gini indexes with these data by using the following formula:

G = (1/μ) Σ_{i=2}^{m} Σ_{j=1}^{i-1} p_i |y_i − y_j| p_j,   (1)

where p_i is the population share with educational level i, y_i the average years of schooling of level i, μ the mean years of schooling and m the number of levels. The study by Benaabdelaali et al. (2012) uses the same formula for the Gini index and the seven educational levels in the BL data, but it does not rely on external data for educational level durations; instead, it assumes that males and females show the same average years of schooling at each level. Castelló and Doménech (2002) use BL data and compute Gini indices of educational attainment with a formula employing four educational levels, i.e. no education, primary, secondary and tertiary education; they do not need to rely on any other data source to compute Gini indices.

1 An alternative dataset of historical educational attainment is the one constructed by Cohen and Soto (2007), which reports educational attainment for 95 countries, every ten years from 1960 to 2010. The database displays the average years of education of the population aged 15+, 25+, 25-64 and by 5-year age groups. Likewise, the Wittgenstein Centre Data Explorer includes projections of educational attainment for 1970-2100 in 195 countries by sex and 5-year age groups. These data cannot be used here because information for individuals aged 25-29 in 1960 is missing therein.

In general, all these papers show that educational inequality is negatively related to average years of education and that educational inequality is declining over time.

Inequality in levels of education
The approaches mentioned above provide estimates of inequality of educational attainment under the assumption that attainment is cardinally measurable. Attainment is nevertheless bounded from above. Increasing school attainment in a country, for instance by promoting larger participation in secondary and tertiary education, would raise average attainment in the population as well as decrease attainment gaps (since the attainment of those in tertiary education cannot systematically grow with the expansion of average schooling attainment). This change in the school attainment distribution would mechanically reduce inequalities. One alternative is to focus on inequality in the distribution of levels of education, i.e. in the number of years effectively completed (the levels) within the education system, irrespective of educational attainment. Barro and Lee (2013) report the theoretical duration in years of each educational level. We let ℓ denote a given level of education.
It is measured by natural numbers, with ℓ = 0 for lack of formal education, and we let ℓ_p, ℓ_s and ℓ_t be the theoretical durations of primary, secondary and tertiary education (reported in years, corresponding to the highest level achievable by a given cohort in a given country), respectively. The data also include information on the probability of not attending any form of education (π_1), the probability of attending some primary education or completing it (π_2 and π_3), the probability of attending some secondary education or completing it (π_4 and π_5) and the probability of attending some tertiary education or completing it (π_6 and π_7). For instance, the cumulative distribution function writes F(0) = π_1 and F(ℓ_p) = π_1 + π_2 + π_3, with uniform increments between these points. The cumulative distribution function is hence a step function with uniform increments across levels in either primary, secondary or tertiary education. The distribution F(ℓ) is qualitatively equivalent to the distribution of a counting indicator; in our case, the indicator counts the education levels achieved by a target population. Understanding inequality in levels of education boils down to evaluating the distribution of cumulative probabilities F(0), …, F(ℓ), …, F(ℓ_t) associated with attained years of education. An intuitive measure of inequality in education levels is the Gini index of attained years of education, G(E), which is the Gini mean difference of attained years of education divided by the average number of completed years in education:

G(E) = (2/μ_E) Σ_ℓ F(ℓ)(1 − F(ℓ)),   (2)

where μ_E is the average number of completed years of education and the sum runs over completed years. The G(E) index has interesting properties. It is normalized, so that G(E) = 0 if and only if years of attained education coincide across the population. In sharp contrast with most of the indices inspired by the income inequality literature (such as the Gini index of years of education by Thomas et al. 2001), the average level of education cannot be targeted as a relevant egalitarian objective (as average income would be). Differently from the analysis of income inequality, basic inequality-reducing rich-to-poor income transfers (that are mean preserving) are not sufficient to reach the egalitarian distribution of education attainments. The G(E) index internalizes this feature of the data and offers a consistent and normatively sound inequality indicator for assessing inequality using counting scores for educational achievement. Increments in education below the median (implying an increase of 1 − F(ℓ) and a symmetric decrease of F(ℓ), thus reducing the product F(ℓ)(1 − F(ℓ)) when F(ℓ) < 0.5) are generally accompanied by a reduction in overall inequality compared to increments in education taking place above the median. We use G(E) as our preferred measure of educational inequality. Overall, this measure is strongly associated with the index in Thomas et al. (2001) (the correlation is 99%). This is not surprising: when the underlying variable is continuous (as is the case for population models of income distributions), indices like the ones in equations (1) and (2) are qualitatively equivalent (Yitzhaki 1998). Yet the equivalence breaks down when the population distribution of the data is not continuous, as is the case for the underlying variable counting levels of education achieved. The G(E) index is to be preferred in this case. As a robustness check, we also consider the Thomas et al. (2001) index of inequality in educational attainment as the main treatment variable. In Table A1 in the appendix we report averages of the distribution of G(E) indices by country and by age group, based on the BL dataset.
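A small numerical sketch of a G(E)-style computation (Python; the durations and attainment shares are invented for illustration, and the formula follows the reconstruction of equation (2) above, i.e. the Gini mean difference of completed years divided by their mean):

```python
import numpy as np

# Hypothetical cohort: completed years of education per group and the
# corresponding population shares (summing to one).
years  = np.array([0, 3, 6, 9, 12, 14, 16])
shares = np.array([0.10, 0.15, 0.25, 0.20, 0.15, 0.10, 0.05])

def gini_levels(y, p):
    """Gini mean difference of completed years divided by the mean."""
    mu = np.sum(p * y)
    gmd = np.sum(p[:, None] * p[None, :] * np.abs(y[:, None] - y[None, :]))
    return gmd / mu

print(f"G(E) = {gini_levels(years, shares):.3f}")
```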
Inequality of cognition in old age
We exploit the specific counting nature of the cognitive functioning indicators (memory scores) to construct appropriate inequality measures of cognition in old age. The memory scores reported in the surveys are counts of the number of correct word recalls from a list of 10 words. The verbal fluency score counts the number of animal names produced in a given amount of time. We measure inequality in cognition by the degree of inequality in the distribution of memory test scores across the population. Let c_i be the count of correctly recalled items by individual i (i = 1, …, N) for a given memory test. The count score takes on values c_i = k, with k = 0, 1, …, K, where K is the maximal number of correctly recallable items; for instance, K = 10 for the immediate recall memory indicator. Based on these data, we can calculate the country-age-group-specific probability that exactly k out of K items are correctly recalled, denoted by π_k = Σ_{i: c_i = k} w_i, where w_i is the individual weight. The empirical cumulative distribution (cdf) of counts is F(k) = Σ_{j=0}^{k} π_j. The average cognitive functioning score in the sample, μ, can be directly expressed as a function of the cdf as μ = Σ_{k=0}^{K−1} (1 − F(k)).3

The concept of an unequal distribution of cognitive functioning is not as well defined as in the case of income inequality: first, the distribution of cognitive functioning scores is bounded above and below; second, the notion of an inequality-decreasing transfer does not carry over directly to bounded count data. Building on Aaberge et al. (2015), we propose a social welfare function that is rank dependent: social welfare represents the preferences of a social planner who is concerned with the extent of well-being stemming from the cognitive functioning score (represented by the count measures) and the proportion of the population enjoying that well-being. We write social welfare as W(F) = Σ_{k=0}^{K−1} Γ(1 − F(k)), where Γ is a distortion function which assigns different weights to different ranks of the cognitive functioning score distribution.4 The function Γ should satisfy some desirable properties. Consider first the case of an improvement in the distribution of cognitive functioning scores in the society, implying that 1 − F increases. Improvements in memory are definitely good for societal well-being, which amounts to requiring that Γ is increasing in 1 − F. For instance, a linear well-being function Γ (i.e., Γ(p) = p) would imply that W(F) = μ, which is increasing in memory improvements. A second condition that we require is that societal well-being increases more if the improvement in old-age cognition occurs at the bottom of the cognitive functioning score distribution (where 1 − F is relatively high) rather than at the top (where 1 − F is relatively low). This amounts to additionally requiring that Γ is convex in 1 − F.

3 To see this, it is sufficient to note that F is a step function with increments π_k. The area below the survival function 1 − F is hence an appropriate estimator of the counts expectation.
4 In the context of income inequality the value of W(F) must increase with the mean income and decrease with the level of inequality, encapsulating the trade-off between efficiency and equity (Lambert 2001).

There are many examples of functions Γ that are increasing and convex. In this study, we consider the parametric family Γ(p) = p^α, with α ≥ 1. Larger values of α are associated with welfare evaluations that are more sensitive to the incidence of low old-age cognitive abilities in the population. The associated inequality index is

I_α(F) = 1 − W(F)/μ = 1 − (1/μ) Σ_{k=0}^{K−1} (1 − F(k))^α.   (3)

Our preferred measure of inequality is a Gini-type indicator of inequality, which is obtained by setting α = 2.
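A minimal sketch of this index for a vector of recall scores (Python; the scores are invented, and the index follows the reconstruction of equation (3) above, with alpha = 2 giving the Gini-type measure):

```python
import numpy as np

def cognition_inequality(scores, weights=None, K=10, alpha=2.0):
    """Rank-dependent inequality index for bounded count scores:
    I = 1 - sum_k (1 - F(k))**alpha / mu, with mu = sum_k (1 - F(k))."""
    scores = np.asarray(scores)
    n = len(scores)
    w = np.ones(n) / n if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    # pi_k: weighted share of respondents with exactly k correct recalls
    pi = np.array([w[scores == k].sum() for k in range(K + 1)])
    F = np.cumsum(pi)               # empirical cdf of counts
    surv = 1.0 - F[:-1]             # 1 - F(k) for k = 0..K-1
    mu = surv.sum()                 # mean score via the survival function
    return 1.0 - np.sum(surv**alpha) / mu

# Hypothetical immediate-recall scores (0-10 words) for a small sample
sample = [3, 5, 5, 6, 7, 2, 8, 4, 5, 6, 1, 9]
print(f"Gini-type inequality (alpha=2): {cognition_inequality(sample):.3f}")
```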
We hence refer to the Gini of cognition in the remainder of the paper as the reference measure of inequality for cognitive functioning (based on the memory and verbal fluency tests). The interest and innovativeness of the index I(F) lie in the idea of assessing inequality in cognition through the lens of the counting approach (see Aaberge et al. 2015). This approach explicitly recognizes that the average memory score μ (interesting for comparing memory affluence across populations) does not generally coincide with the egalitarian outcome. Differently from standard (income) inequality indices (e.g. Gini, Atkinson and entropy measures), mean-preserving progressive transfers of outcomes (such as rich-to-poor transfers) converging to μ may not suffice to reduce inequality in cognition to its minimum I(F) = 0, while they suffice to eliminate income inequality. In fact, the egalitarian distribution is only achieved when all individuals display the same cognitive score (which can be a very low cognitive level) rather than μ, which might not be an admissible score. This makes the welfare measure W(F), and the implied inequality indicators in (3), normatively relevant in this context. In the robustness section, we additionally consider specifications of the index in (3) with different values of the sensitivity parameter α.

In the regression models, the subscript t indicates the calendar year when the synthetic individual (an age group within a country) was aged 25-29. The dependent variable is our proposed inequality index of cognitive functioning in (3). The survival rate of a given age group observed in the survey is the expected survival probability of this group from when they were aged 25-29: it is the probability of surviving from age 25-29 to the current age of the age group, and it is specific to each age group in order to take cohort differences into account. We chose the reference age group 25-29 because, for most individuals, decisions on educational investment have already been taken at that age. For the computation of survival rates, we utilize the series of life tables for 1950-2100 (2015).5 In more detail, the survival rate (by sex and in total) is measured back in the year the group was aged 25-29; it measures the probability that individuals aged 25-29 in the past survive until the current age of the age group. The following formula is employed:

s_{i,c,t} = (l_{25+x,t} + l_{30+x,t}) / (l_{25,t} + l_{30,t}),

where the subscript t indicates the year the age group i was aged 25-29, the subscript c stands for country, and x is the number of years elapsed since then. The term l_{25,t} is extracted from a period life table and indicates the number of surviving individuals at age 25 in year t, and the term l_{25+x,t} is the number of individuals who will survive up to the target age 25 + x. Both terms l_{25,t} and l_{30,t} are employed in order to take into account the number of survivors in the 5-year age group.

5 The survey year considered for each country is indicated in Table 1.

Main results
Raw correlations recover a negative association between the average score of cognition and inequality of cognition (Figure 1): age-group observations with a higher level of cognitive functioning are more likely to have a more equitable distribution of cognitive abilities in old age. This resembles the negative relationship found between educational inequality and average years of education in other studies (such as Castelló and Doménech 2002, and Thomas et al. 2001). The descriptive statistics can be consulted in the appendix.
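To make the survival-rate construction above concrete, here is a minimal sketch (Python; the life-table values l_x are invented for illustration, and the formula follows the reconstruction given above):

```python
def cohort_survival_rate(l, x):
    """Probability that the 25-29 age group survives x more years:
    s = (l[25+x] + l[30+x]) / (l[25] + l[30]),
    where l[a] is the life-table number of survivors at exact age a."""
    return (l[25 + x] + l[30 + x]) / (l[25] + l[30])

# Hypothetical period life-table survivors per 100,000 births
l = {25: 97000, 30: 96500, 65: 80000, 70: 74000}

# Survival of the 25-29 group over 40 years (to ages 65-69)
print(f"s = {cohort_survival_rate(l, 40):.3f}")
```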
The positive relationship between inequalities in education and cognition is confirmed in linear regressions that include age-group survival rates and dummy variables for countries. By including country fixed effects, we account for unobserved differences across the countries at diverse development stages considered in our sample. The estimates reported in Table 2 relate the Gini index of cognitive functioning in old age (immediate memory, delayed memory, average memory and verbal fluency) to the Gini index of past education (when the cohort was aged 25-29), and they include the gender difference in survival rates (male minus female survival rates) in the regression models. Here, a larger relative surviving probability of women, who are in general less educated than men, is associated with a higher level of cognitive inequality. The estimates for this variable imply that if the relative survival deficit of males were cancelled out (i.e. if males and females had the same survival rate), the effect of a one percentage point change in educational inequality on inequality in cognitive functioning at old age would scale down to 0.24, 0.45, 0.34 and 0.10 for immediate memory, delayed memory, average memory and verbal fluency, respectively.

Robustness checks
The first robustness check concerns our response variable, inequality in cognitive functioning at old age. In Table 3 we report estimates of the effect of a change in the Gini of past education on inequality of memory scores, using different degrees of sensitivity to low memory scores in the population (the parameter α in equation (3)). The results confirm the findings in Table 2: an increase of one percentage point in the Gini of past education is associated with a positive and significant increase in inequality in cognitive functioning. Effects increase with the sensitivity to low cognitive functioning scores, but the size of the effect is never larger than one. The largest impact throughout the various specifications is on the Gini of delayed memory, consistent with Table 2.

Notes to Table 3: The unit of analysis of the Ordinary Least Squares (OLS) regressions is the synthetic unit formed by age group and country. There are 6 age groups and 29 countries, and therefore the sample consists of 174 observations. Inequality in cognitive functioning is computed according to the index I(F) of equation (3). The parameter α refers to the sensitivity of the inequality index to individuals located at the bottom of the distribution; larger values of α are associated with welfare evaluations that are more sensitive to the incidence of low old-age cognitive abilities in the population. Inequality in past education is measured by the index G(E) of equation (2). Every regression controls for country fixed effects and the gender-based difference in survival rates, and includes a constant. Robust standard errors are clustered by country and are in parentheses. Significance levels are * p<0.10, ** p<0.05, *** p<0.01.

The second robustness check involves the treatment variable, inequality in past education. We replicate estimates of the baseline regression while using the index of educational attainment of equation (1). The estimates (see Table 4, panel A) coincide with those of the baseline regressions based on our preferred measure of educational inequality.

Notes to Table 4: The unit of analysis of the Ordinary Least Squares (OLS) regressions is the synthetic unit formed by age group and country. There are 6 age groups and 29 countries, and therefore the sample consists of 174 observations.
All regressions include dummy variables for countries, a constant, and a control for gender-related differences in survival rates. In panel B, the variable 'first period' takes value 1 for the groups observed around 2004 and zero for the groups observed around 2015. There is a total of 13 countries with observations in two distant periods: United States (2002), United Kingdom (2002), Austria (2004, 2015), Belgium (2005, 2015), Denmark (2004, 2015), France (2004, 2015), Spain (2004, 2015), Germany (2004, 2015), Sweden (2004, 2015), Netherlands (2004), Italy (2004, 2015), Switzerland (2004, 2015) and Israel (2006, 2015). Verbal fluency is not examined, as it is not present in the US survey. Robust standard errors are clustered by country and are in parentheses. Significance levels are * p<0.10, ** p<0.05, *** p<0.01.

The third robustness check challenges the identification condition. In the baseline model we include country fixed effects, implying that identification arises from variability in inequality of past educational attainment across cohorts within the same country. This source of variability would ideally arise from shocks (crises, school reforms) that are exogenous at the individual level but common to all people of the same cohort, and that vary across cohorts. In the baseline setting, however, it is not possible to distinguish cohorts from age groups, increasing the potential risk that inequality in cognitive functioning at old age and educational inequality systematically covary across age groups. Controlling for age fixed effects would substantially reduce identification power to variations within the same country and cohort. We propose an alternative strategy that consists in pooling the country-cohort/age data with information on same-age-range individuals from survey waves distanced by about 10 years from the baseline data. Thus, we expand the country-cohort data with information on inequality in past education and in old age cognitive functioning from synthetic individuals that are of the same age range as in the baseline panel, but born 10 years earlier. By doing so, we isolate cross-cohort variability within the same age group. We can exploit this 'longitudinal' feature of the data (where educational and cognitive inequalities are measured for same-age-range synthetic individuals from two distinct cohorts) only for 13 countries, for which information on cognitive functioning scores is reported for two sufficiently distant periods, the first around 2004 and the second around 2015.

The last robustness check addresses potential bias arising from learning effects in cognitive functioning tests, attributable to the retaking of the tests by individuals participating in longitudinal surveys.7 We include in our baseline regressions a variable indicating the percentage of people (within each age-group-country cell) who took the test both in the wave of analysis and in the previous wave (see Table 4, panel C). We observe a statistically significant coefficient for this variable in the regressions for delayed and average memory. Importantly, the results regarding the association between inequality of cognition and education do not invalidate the baseline estimates.

7 For a substantial number of countries, retaking the test is not an issue.
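For concreteness, the baseline country-fixed-effects specification could be run along these lines (an illustrative sketch in Python with statsmodels; the data frame, variable names and values are hypothetical, not the authors' code or data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: one row per synthetic unit (country x age group), with the
# Gini of cognition, the Gini of past education, and the survival gap.
df = pd.DataFrame({
    "country":   ["A", "A", "B", "B", "C", "C"],
    "gini_cog":  [0.20, 0.25, 0.30, 0.33, 0.22, 0.27],
    "gini_edu":  [0.15, 0.22, 0.35, 0.40, 0.18, 0.25],
    "surv_diff": [-0.05, -0.06, -0.08, -0.09, -0.04, -0.05],  # male - female
})

# Country fixed effects via C(country); cluster-robust SEs by country.
model = smf.ols("gini_cog ~ gini_edu + surv_diff + C(country)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params["gini_edu"])  # marginal effect of educational inequality
```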
Inequality of opportunity for educational attainment and its implications
Attainment of primary and secondary education contributes to the formation of one's human capital and lifelong well-being opportunities. Upper secondary and tertiary education attainment also carries important signaling components: the interplay of preferences, talents and effort determines investment in upper education, with larger attainment working as a signaling device for the accumulation of specialized human capital and for one's abilities. Inequality in educational attainment might hence be desirable for the efficient allocation of talents, provided accessibility to primary, secondary and tertiary education is granted to everybody irrespective of social origin and disposable resources, and skills acquisition depends only on one's choices and innate talents. Recent literature and policy debate have brought about evidence that opportunities for access to, and for benefitting from, (good quality) education are unequally distributed across strata of society. The quality of parental background is one of the major drivers of inequality of opportunity for human capital acquisition (see Cunha and Heckman 2007, Roemer and Trannoy 2015, and Ramos and Van de gaer 2016). Parental investments during childhood (both in terms of disposable income and of quality time spent with children) generate unfair differences in abilities early in life that later capitalize into educational attainment inequalities (Cunha et al. 2006) and cognitive functioning inequalities in old age (Case and Paxson 2008). We isolate and measure the extent of unequal opportunities for educational attainment in a way consistent with the literature, and we quantify the contribution of this aspect of educational inequalities experienced early in life to the unequal distribution of old-age cognition scores. We follow Ferreira and Gignoux (2011) to produce parametric and non-parametric estimates of relative inequality of opportunity (IOP) in educational attainment based on the mean log deviation index of inequality. We are able to estimate cohort-country-specific levels of inequality of opportunity that we later merge with information on inequality in old-age cognition. Cohort survival probabilities vary substantially across educational levels, implying that the educational composition estimated from recent surveys (where respondents belong to the 50+ age group) may differ from the educational composition at age 25. The computation of the IOP estimates accounts for differences arising from survival probabilities that are specific to the country, sex, age and education level of the individual.9

We focus first on the baseline specification of the testing model, where inequality in past education is replaced by indices of inequality of opportunity. We consider four separate sets of regressions for each measure of inequality in old age cognitive functioning, each corresponding to a different definition of types in the population. The results of these regressions are in Table A2 in the appendix. We find that IOP of past education is only weakly associated with old age cognition for types A and B; in most cases, coefficients are not significant. The association is stronger when the IOP index also accounts for the implications of gender differences, as for types AG and BG. In the latter case, which leads to the most significant results, we find that a one percentage point increase in IOP is associated with a significant increase in inequality of old age cognition. One may ask whether IOP, rather than overall educational inequality, drives the baseline results; the estimates in Table A3 in the appendix discard this hypothesis.
There is no statistical support to conclude that IOP has explanatory power on inequality in cognitive functioning once educational inequality is taken into account (models (1)-(8)). Only IOP indices that also account for the implications of gender differences have statistically significant positive marginal effects on inequality in old age cognitive functioning. More interestingly, the effect of inequality of education survives after controlling for the implications of inequality of educational opportunities. Effects in the preferred specification of the model (see models (9) to (16) in Table A3) are in the range of 0.2-0.35, consistent with the baseline estimates in Table 2. This evidence confirms that unequal opportunities for education stemming from differences in family background quality (captured by parental education) and gender have important effects on old age cognition. Nonetheless, other channels may equally explain the tight partial effects of inequality in education on inequality in cognition, which persist even after controlling for IOP.10

10 Educational attainment is treated as a cardinal variable by the IOP measures we consider. As an additional robustness check, we obtain parametric IOP estimates based on the indices in Ferreira and Gignoux (2014), which are appropriate for variables that convey only ordinal information. Regression results (available upon request) are comparable in size and magnitude to those in Table A5 and Table 2.

Conclusions
Our results document significant long-term effects of past educational inequalities on the inequalities in old age cognitive functioning observed in the present. Furthermore, we find that the relatively higher life expectancy of women contributes to increased cognitive inequality. Given the lower educational attainment of older women, and the positive relationship between education and cognitive abilities, we can speculate that countries that experienced a large gender gap in education show higher old age cognitive inequalities. Thus, reducing the gender gap in education and improving the distribution of education among the young will reduce inequalities in cognitive functioning in the future. Furthermore, we assess the role of inequality of opportunity experienced at young age and find evidence suggesting that unequal opportunities for education stemming from differences in parental education and gender have important effects on the distribution of old age cognition.

Notes to appendix tables: The age groups are 50-54, 55-59, 60-64, 65-69, 70-74 and 75-79. In the second panel, each cell indicates the average of the relevant statistic among countries for each age group. The survival rate of an age group is the probability that the individuals aged 25-29 in the past survive until their current age. The differential survival rate is the survival rate of females minus that of males. IOP is the indicator of inequality of opportunity (non-parametric and weighted by survival probabilities).

Note: The unit of analysis is the synthetic unit formed by age group and country. See Section 5.3 for explanations of the sample selection. The dependent variable is inequality in cognitive functioning at old age (I(F) with α = 2). The IOP index is based on the mean log deviation of the distribution of type-specific average educational attainment. IOP indices are non-parametric estimates with survival-probability-adjusted individual weights. Type A originates from all admissible pairs of maternal and paternal education (no education, primary, secondary or tertiary).
Type B originates from the maximum level of education in the family of origin, distinguishing same-education parents. Types AG and BG are obtained by refining types A and B by the gender of the survey respondent. All regressions include dummy variables for countries. Robust standard errors are clustered by country and are in parentheses. Significance levels are * p<0.10, ** p<0.05, *** p<0.01.

Note: The unit of analysis is the synthetic unit formed by age group and country. See Section 5.3 for explanations of the sample selection. The dependent variable is inequality in cognitive functioning at old age (I(F) with α = 2). The Gini of past education is measured by the G(E) index of equation (2). The IOP index is based on the mean log deviation of the distribution of type-specific average educational attainment. IOP indices are non-parametric estimates with survival-probability-adjusted individual weights. Type A originates from all admissible pairs of maternal and paternal education (no education, primary, secondary or tertiary). Type B originates from the maximum level of education in the family of origin, distinguishing same-education parents. Types AG and BG are obtained by refining types A and B by the gender of the survey respondent. All regressions include dummy variables for countries. Robust standard errors are clustered by country and are in parentheses. Significance levels are * p<0.10, ** p<0.05, *** p<0.01.
T-Weyl calculus

Let $\left( W,\sigma \right)$ be a symplectic vector space and let $T:W\rightarrow W$ be a linear map that satisfies a certain condition of non-degeneracy. We define the Schur multiplier $\omega_{\sigma,T}$ on $W$. To this multiplier we associate an $\omega_{\sigma,T}$-representation, and we build the $T$-Weyl calculus, $\mathrm{Op}_{\sigma,T}$, whose properties are systematically studied further.

Introduction
In two classical papers [5] and [19], H.O. Cordes and T. Kato develop an elegant method to deal with pseudo-differential operators. In [5], Cordes shows, among other things, that if a symbol a(x, ξ) defined on R^n × R^n has bounded derivatives ∂_x^α ∂_ξ^β a for |α|, |β| ≤ [n/2] + 1, then the associated pseudo-differential operator A = a(X, D) is L²-bounded. This method can be extended and used to prove trace-class properties of pseudo-differential operators. For example, by using this method it can be proved that A ∈ B_p(L²(R^n)) if ∂_x^α ∂_ξ^β a is in L^p(R^n × R^n) for |α|, |β| ≤ [n/2] + 1 and 1 ≤ p < ∞, where B_p(L²(R^n)) denotes the Schatten ideal of compact operators whose singular values lie in l^p. It is remarkable that this method can be used for the (θ, τ)-quantization, in particular for both the Weyl quantization and the Kohn-Nirenberg quantization.

The (θ, τ)-quantization
In this section we briefly describe the (θ, τ)-quantization. The definitions, results and notions introduced will serve both as models and as examples for the T-Weyl calculus developed in the following sections. Let V be an n-dimensional vector space over R and V* its dual. We denote by dx a Lebesgue measure on V and by dp the dual one on V*, such that Fourier's inversion formula holds with the usual constant. Replacing dx by c dx one must change dp to c^{-1} dp, so dx dp is invariantly defined. Since the equation in a ∈ S′(V × V*), (id ⊗ F_X^{-1})(a ∘ c_{θ,τ}) = K, has a unique solution for each K ∈ S′(V × V), a consequence of the kernel theorem is the fact that the map Op_{θ,τ} : S′(V × V*) → B(S(V), S′(V)), a ↦ Op_{θ,τ}(a) ≡ a_{θ,τ}(X, D), is linear, continuous and bijective. Hence for each A ∈ B(S, S′) there is a distribution a ∈ S′(V × V*) such that A = Op_{θ,τ}(a); this distribution is called the (θ, τ)-symbol of A. The construction highlights the symplectic structure on V × V* and a linear map τ × θ* : V × V* → V × V* which satisfies a certain condition contained in the definition of the set Ω(V); this condition is equivalent to the condition that the Schur factor ω_{θ,τ} is non-degenerate. Note also that the representation indicates the contribution of the symplectic structure of V × V* to the definition of the (θ, τ)-quantization, the operator Op_{θ,τ}. The (θ, τ)-calculus built in this section can be further generalized, and we shall do this in the present paper.

The paper is organized as follows. In Section 2 we summarize the most important notations and results from linear symplectic algebra. In Section 3 we define the ω_{σ,T}-representation and the associated Weyl system and present some of their properties. In Section 4 we define the T-Weyl calculus and prove one of the important results of the paper, Theorem 4.4, a technical result that establishes the connection between the T-Weyl calculus and the standard Weyl calculus. In Section 5 we study modulation spaces and Schatten-class properties of operators in the T-Weyl calculus.
The results of this section on Schatten-class properties of operators in the T-Weyl calculus, together with the results in Section 6, are used to prove an extension of Cordes' lemma in Section 7. Sections 7-10 are devoted to the Cordes-Kato method for the T-Weyl calculus. As can be seen, we started from a natural definition of a pseudo-differential calculus and obtained a projective representation. In this paper we follow the path in the opposite direction: using the symplectic 2-form σ, we associate to any linear map T on W a 2-cocycle, or Schur multiplier, ω_{σ,T}. If the Schur multiplier ω_{σ,T} is non-degenerate, which may be expressed as a non-degeneracy condition on T, then any two irreducible ω_{σ,T}-representations are unitarily equivalent. For an irreducible ω_{σ,T}-representation (H, W_{σ,T}, ω_{σ,T}) of W there is a well-defined linear, continuous and bijective map, the T-Weyl calculus, Op_{σ,T} : S*(W) → B(S, S*), where S is the dense linear subspace of H consisting of the C^∞ vectors of the representation W_{σ,T}, S* is the space of all continuous, anti-linear mappings S → C, and S*(W) is the space of all continuous, anti-linear mappings S(W) → C.

The framework
Our notations are rather standard, but we recall some of them here to avoid any ambiguity. Let (W, σ) be a symplectic vector space, that is, a real finite-dimensional vector space W equipped with a real antisymmetric non-degenerate bilinear form σ. We denote by σ♭ : W → W* the isomorphism associated with the non-degenerate bilinear form σ.

Symplectic adjoint
Suppose that (W_1, σ_1) and (W_2, σ_2) are symplectic vector spaces and T : W_1 → W_2 is a linear map. Define the symplectic adjoint T^σ : W_2 → W_1 by σ_1♭ T^σ = T* σ_2♭, where T* is the usual adjoint. Then

σ_2(Tξ, η) = σ_1(ξ, T^σ η), ξ ∈ W_1, η ∈ W_2.   (2.1)

Remark 2.2. The property (2.1) characterizes the symplectic adjoint. For a linear map S : W → W, the form σ_S(ξ, η) = σ(Sξ, η) is bilinear. Let us note that σ_S is antisymmetric if and only if S^σ = S, and σ_S is non-degenerate if and only if S is an isomorphism. If S : W → W is a linear isomorphism and S^σ = S, then the 2-form σ_S is symplectic. Let φ_S : W → W be a linear isomorphism that takes a symplectic basis with respect to σ_S to a symplectic basis with respect to σ. Then σ_S(ξ, η) = σ(φ_S ξ, φ_S η) for all ξ, η ∈ W; the converse is obvious.

Corollary 2.6. Let (W, σ) be a symplectic vector space and S : W → W a linear isomorphism. If S^σ = S, then det S > 0.

Definition 2.7. Let (W, σ) be a symplectic vector space and X ⊂ W a linear subspace. The symplectic complement of X is the subspace X^σ = {ξ ∈ W : σ(ξ, η) = 0 for all η ∈ X}.

Remark 2.8. (a) Let X be an n-dimensional vector space over R and X* its dual. Denote by x, y, ... the elements of X and by k, p, ... those of X*. Let ⟨·, ·⟩ : X × X* → R be the duality form, which is a non-degenerate bilinear form. The symplectic space is defined by W = T*(X) = X × X*, the symplectic form being σ((x, k), (y, p)) = ⟨y, k⟩ − ⟨x, p⟩. Observe that X and X* are lagrangian subspaces of W. (b) Let us mention that there is a kind of converse to this construction. Let (X, X*) be a couple of lagrangian subspaces of W such that X ∩ X* = 0 or, equivalently, X + X* = W. If for x ∈ X and p ∈ X* we define ⟨x, p⟩ = σ(p, x), then we get a non-degenerate bilinear form on X × X* which allows us to identify X* with the dual of X. A couple (X, X*) of subspaces of W with the preceding properties is called a holonomic decomposition of W. Observe that if ξ = x + k and η = y + p are the corresponding decompositions in W, then σ(ξ, η) = ⟨y, k⟩ − ⟨x, p⟩.

The symplectic Fourier transform
A symplectic vector space (W, σ) is always orientable, since the 2-form σ is non-degenerate if and only if its n-fold exterior power σ^n is non-zero, where dim W = 2n.
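As a worked example of the symplectic adjoint in the coordinates of Remark 2.8 (this computation is our illustration and not part of the original text):

```latex
\text{On } W=\mathbb{R}^2=X\times X^{*} \text{ with } \sigma((x,k),(y,p))=yk-xp,
\text{ let } T=\begin{pmatrix} a & b\\ c & d\end{pmatrix}.
\text{ Matching coefficients in } \sigma(T\xi,\eta)=\sigma(\xi,T^{\sigma}\eta)
\text{ gives}
\qquad
T^{\sigma}=\begin{pmatrix} d & -b\\ -c & a\end{pmatrix},
\qquad
T+T^{\sigma}=(a+d)\,\mathrm{id}.
```

So in this two-dimensional example T + T^σ is an isomorphism exactly when tr T ≠ 0; note that T^σ is the adjugate matrix of T, consistent with Corollary 2.6 (S^σ = S forces det S > 0).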
We will call the exterior power σ^n the symplectic volume form. We define the Fourier measure d_σξ as the unique Haar measure on (W, σ) such that the symplectic Fourier transform, or σ-Fourier transform,

(F_σ a)(ξ) = ∫_W e^{−iσ(ξ,η)} a(η) d_ση,

is involutive (i.e. F_σ² = 1) and unitary on L²(W). We use the same notation F_σ for the extension of this σ-Fourier transform to S′(W). Let us note that d_σξ is the 1-density given by the symplectic volume form σ^{dim W/2}.

Lemma 2.9. Let (W_1, σ_1) and (W_2, σ_2) be two symplectic spaces of the same dimension and let φ : W_1 → W_2 be a symplectic isomorphism. Then φ carries the Fourier measure d_{σ_1} to d_{σ_2}, and F_{σ_1}(b ∘ φ) = (F_{σ_2} b) ∘ φ for b ∈ S′(W_2).
Proof. The first two equalities are direct consequences of the fact that φ is a symplectic isomorphism (φ*σ_2 = σ_1). As for the third equality, it is enough to prove it for b in S(W_2), where it follows by a change of variables; the equivalence used is a consequence of the identity φ^σ ∘ φ = id_{W_1}.

Lemma 2.10. Let (W, σ) be a symplectic vector space and S : W → W a linear isomorphism such that S^σ = S, and let σ_S be the symplectic form σ_S(ξ, η) = σ(Sξ, η). Since d_ση is a multiple of the Lebesgue measure, the change of variables formula implies that if A : W → W is a linear isomorphism, then composition with A rescales the Fourier measure by the factor |det A|^{-1}. For b ∈ S(W) one then expresses F_{σ_S} b through F_σ, composition with S and a power of det S. In particular, if S = S^σ : W → W is a linear isomorphism, then F_{σ_S} factorizes through F_σ and a multiplication operator M_{λ(·)}, where M_{λ(·)} denotes the multiplication operator by the function λ(·), and one sets λ(D_σ) = F_σ M_{λ(·)} F_σ for S = S^σ : W → W a linear isomorphism.
Proof. (c) We use (b) and Lemma 2.10 twice; indeed, using the equality τ_ξ = e^{−iσ(D_σ,ξ)}, one sees how translations interact with these operators.

In many situations we need to consider additional structures, such as inner products or complex structures, and we shall ask that these structures be compatible with the symplectic structure. Recall that a complex structure on a vector space V is an automorphism J : V → V with J² = −1; it is σ-compatible if σ(Jv, Jw) = σ(v, w) and σ(v, Jv) > 0 for all nonzero v ∈ W. This is equivalent to requiring that g_J(v, w) = σ(v, Jw) is a positive definite inner product. We denote by J(W, σ) the space of σ-compatible complex structures on (W, σ). An inner product g on a symplectic vector space (W, σ) is σ-compatible if g(v, w) = σ(v, Jw) for some J ∈ J(W, σ) and all v, w ∈ W. We denote by G(W) the space of inner products on W, and by G(W, σ) the space of σ-compatible inner products on (W, σ) (cf. [20]); so we can associate to any inner product, in a canonical manner, a σ-compatible one.

ω_{σ,T}-representation and the associated Weyl system
Lemma 3.1. Let T : W → W be a linear map, and let ω_{σ,T} be the function

ω_{σ,T}(ξ, η) = e^{iσ(Tξ, η)}, ξ, η ∈ W.

Then ω_{σ,T} is a 2-cocycle, or Schur multiplier. Moreover, the Schur multiplier ω_{σ,T} is non-degenerate, that is, ω_{σ,T}(ξ, η) ω_{σ,T}(η, ξ)^{-1} = 1 for all η ∈ W only if ξ = 0, if and only if T + T^σ is an isomorphism.
Proof. We have to show that ω_{σ,T} satisfies the cocycle equation

ω_{σ,T}(ξ, η) ω_{σ,T}(ξ + η, ζ) = ω_{σ,T}(ξ, η + ζ) ω_{σ,T}(η, ζ), ξ, η, ζ ∈ W.

By definition this equality is equivalent to σ(Tξ, η) + σ(T(ξ + η), ζ) = σ(Tξ, η + ζ) + σ(Tη, ζ), which holds by bilinearity. Obviously we have ω_{σ,T}(ξ, 0) = ω_{σ,T}(0, ξ) = 1, ξ ∈ W. Hence ω_{σ,T} is a 2-cocycle, or Schur multiplier. Let ξ ∈ W. Then, by (2.1),

ω_{σ,T}(ξ, η) ω_{σ,T}(η, ξ)^{-1} = e^{iσ((T + T^σ)ξ, η)}, η ∈ W.

So we deduce that this quantity equals 1 for all η exactly when (T + T^σ)ξ = 0, and this clearly implies that ω_{σ,T} is non-degenerate if and only if T + T^σ is an isomorphism.

Remark 3.2. (a) We know that for any continuous multiplier ω on W, there is a projective representation {W(ξ)}_{ξ∈W} ≡ (H, W, ω) whose multiplier is ω, that is, a strongly continuous map W : W → U(H) which satisfies

W(ξ) W(η) = ω(ξ, η) W(ξ + η), ξ, η ∈ W.

Such a map is called an ω-representation (or, less precisely, a multiplier, ray, or cocycle representation). This representation is called the regular ω-representation of W. Here t → W_{σ,T}(tξ) is not in general a group representation of R; instead, one corrects the phases by passing to a cohomologous normalized multiplier. We recall that the set of all possible multipliers on W can be given an abelian group structure by defining the product of two multipliers as their pointwise product; the resulting group we denote by Z²(W; T).
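For concreteness, a worked special case (our illustration, contingent on the multiplier formula as reconstructed above):

```latex
T=\tfrac{1}{2}\,\mathrm{id}_W
\;\Longrightarrow\;
\omega_{\sigma,T}(\xi,\eta)=e^{\frac{i}{2}\sigma(\xi,\eta)},
\qquad
W(\xi)\,W(\eta)=e^{\frac{i}{2}\sigma(\xi,\eta)}\,W(\xi+\eta).
```

Here T + T^σ = id is an isomorphism, so the multiplier is non-degenerate, and the relation above is the familiar Weyl form of the canonical commutation relations; the T-Weyl calculus then reduces to the standard Weyl calculus.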
The set of all multipliers of the form

ω(ξ, η) = μ(ξ) μ(η) μ(ξ + η)^{-1},

for an arbitrary function μ : W → T such that μ(0) = 1, forms an invariant subgroup B²(W; T) of Z²(W; T). Thus we may form the quotient group H²(W; T) = Z²(W; T)/B²(W; T). Two multipliers ω_1 and ω_2 are equivalent (or cohomologous) if ω_1/ω_2 ∈ B²(W; T). The functions ω_{σ,T} and its normalization ω̃_{σ,T} are cohomologous, and ω̃_{σ,T} is normalized. The map t → W̃_{σ,T}(tξ) is a group representation of R, and for each ξ ∈ W there is a unique self-adjoint operator φ_T(ξ), the T-field operator associated to ξ, such that W̃_{σ,T}(tξ) = e^{itφ_T(ξ)} for all real t. Since the commutant of the representation is trivial, where S′ denotes the commutant of a subset S ⊂ B(H), the representation is irreducible. Thus, we have partially proved the following result.

(b) The map t → W̃_{σ,T}(tξ) is a group representation of R, and for each ξ ∈ W there is a unique self-adjoint operator φ_T(ξ), the T-field operator associated to ξ, such that W̃_{σ,T}(tξ) = e^{itφ_T(ξ)} for all real t. (c) If T + T^σ is an isomorphism (i.e. ω_{σ,T} is a non-degenerate Schur multiplier), then σ_{T+T^σ} is a symplectic form, and in this case (H, W̃_{σ,T}, ω̃_{σ,T}) is a Weyl system for (W, σ_{T+T^σ}). Also, in this case, the map ξ → φ_T(ξ) is R-linear. (e) The projective representations (H, W_{σ,T}, ω_{σ,T}) and (H, W̃_{σ,T}, ω̃_{σ,T}) differ only by phase factors.
Proof. As noted before, almost all the statements have been proven. The rest is a simple interpretation of the definitions, except the R-linearity of the map ξ → φ_T(ξ), which is a consequence of the fact that (H, W̃_{σ,T}, ω̃_{σ,T}) is a Weyl system. From now on, we shall always assume that T + T^σ is an isomorphism.

If S : W → W satisfies S^σ(T + T^σ)S = T + T^σ, then there is a unitary transformation U in H, uniquely determined apart from a constant factor of modulus 1, such that U W̃_{σ,T}(ξ) U^{-1} = W̃_{σ,T}(Sξ) for all ξ ∈ W.
Proof. The hypothesis S^σ(T + T^σ)S = (T + T^σ) is equivalent to the fact that S is a symplectic transformation in (W, σ_{T+T^σ}). Now (a) follows from Segal's theorem, Theorem 18.5.9 in [17], and the previous lemma, and (b) is a consequence of (a). Note also that the condition that T + T^σ is an isomorphism is just the invertibility of T + T^σ.

T-Weyl calculus
Let S be the dense linear subspace of H consisting of the C^∞ vectors of the representation W̃_{σ,T}. The space S can be described in terms of domains of products of the field operators φ_T(ξ) with ξ in a (symplectic) basis B. The topology on S defined by the family of seminorms {‖·‖_{k,ξ_1,…,ξ_k}}_{k∈N, ξ_1,…,ξ_k∈W},

‖ϕ‖_{k,ξ_1,…,ξ_k} = ‖φ_T(ξ_1) ⋯ φ_T(ξ_k) ϕ‖_H, ϕ ∈ S,

makes S a Fréchet space. We denote by S* = S(H, W_{σ,T})* the space of all continuous, antilinear (semilinear) mappings S → C, equipped with the weak topology σ(S*, S). Since S ֒→ H continuously and densely, and since H is always identified with its adjoint H*, we obtain a scale of dense inclusions S ֒→ H ֒→ S* such that, if ⟨·, ·⟩ : S × S* → C is the antiduality between S and S* (antilinear in the first and linear in the second argument), then for ϕ ∈ S and u ∈ H, with u considered as an element of S*, the number ⟨ϕ, u⟩ is just the scalar product in H. For this reason we do not distinguish between the scalar product in H and the antiduality between S and S*. See pp. 83-85 in [1].

Proof. For a proof see Lemma 1.1 in [1]. Moreover, from Lemma 4.1 one obtains an estimate |⟨ϕ, Op_{σ,T}(a)ψ⟩| ≤ p(a) q(ϕ) q′(ψ), where p is a continuous seminorm on S(W) and q and q′ are continuous seminorms on S. If we consider on S*(W) the weak* topology σ(S*(W), S(W)) and on B(S, S*) the topology defined by the seminorms {p_{ϕ,ψ}}_{ϕ,ψ∈S}, then the map Op_{σ,T} : S*(W) → B(S, S*) is continuous.

On W we have two symplectic structures: the first, the initial one, is given by the 2-form σ; the second, obtained in the normalization process of the factor ω_{σ,T} (the passage from ω_{σ,T} to ω̃_{σ,T}), is the structure given by the 2-form σ_{T+T^σ}.
Accordingly, we have two symplectic Fourier transformations, F_σ and F_{σ_{T+T^σ}}. Their connection is established by taking S = T + T^σ in Lemma 2.10, which expresses F_{σ_{T+T^σ}}, both pointwise and in operator form, through F_σ, composition with T + T^σ and a determinant factor.

We write θ_{σ,T} for the quadratic form on W determined by σ and T, with associated symmetric bilinear form β_{σ,T} : W × W → R defined by polarization, β_{σ,T}(ξ, η) = ½(θ_{σ,T}(ξ + η) − θ_{σ,T}(ξ) − θ_{σ,T}(η)). We write λ_{σ,T} for the function in C^∞_pol(W) given by λ_{σ,T}(ξ) = e^{−(i/2)θ_{σ,T}(ξ)}. To this function we associate the convolution operator λ_{σ,T}(D_σ), defined by using the symplectic Fourier transformation, i.e. λ_{σ,T}(D_σ) = F_σ M_{λ_{σ,T}(·)} F_σ, where M_{λ_{σ,T}(·)} denotes the multiplication operator by the function λ_{σ,T}(·). For this operator we shall also use the notation e^{−(i/2)θ_{σ,T}(D_σ)}, i.e.

e^{−(i/2)θ_{σ,T}(D_σ)} = λ_{σ,T}(D_σ) = F_σ M_{λ_{σ,T}(·)} F_σ.

For a in S′(W), we denote by a^w_{σ,T} the temperate distribution in S′(W) given by a^w_{σ,T} = e^{−(i/2)θ_{σ,T}(D_σ)} a. If a ∈ S(W), then a direct computation gives Op_{σ,T}(a) = Op^w(a^w_{σ,T}), a ∈ S(W); here we used the equality relating the two symplectic Fourier transforms. By continuity and density arguments we find that

Op_{σ,T}(a) = Op^w(a^w_{σ,T}), a ∈ S*(W).

Thus we have proved one of the important results of the paper.

Modulation spaces and Schatten-class properties of operators in the T-Weyl calculus
The importance of Theorem 4.4 lies in the fact that it establishes both the connection between the T-Weyl calculus and the standard Weyl calculus, and the connection that exists between the symbols used in them. The maps a ↦ a^w_{σ,T}, acting on S′(W) and on S*(W), are continuous linear isomorphisms. Clearly S(W) is an invariant subspace for both maps. Other invariant subspaces for these maps are particular cases of modulation spaces.

We now recall the definition of the classical modulation space M^{p,q}(R^n) with parameters 1 ≤ p, q ≤ ∞.

Definition 5.1. Let 1 ≤ p, q ≤ ∞. We say that a distribution u ∈ D′(R^n) belongs to M^{p,q}(R^n) if there is χ ∈ C_0^∞(R^n) ∖ 0 such that the measurable function ξ → ‖F(u τ_x χ)(ξ)‖_{L^p(R^n_x)} belongs to L^q(R^n).

These spaces are special cases of the modulation spaces introduced by Hans Georg Feichtinger [8] in 1983 (see also [10]). They have been used by many authors (Boulkhemair, Gröchenig, Heil, Sjöstrand, Toft, ...) in the analysis of pseudo-differential operators defined by symbols more general than usual. We now give some properties of these spaces.

(a) If u ∈ M^{p,q}(R^n), then for every χ ∈ C_0^∞(R^n) ∖ 0 the function ξ → ‖F(u τ_x χ)(ξ)‖_{L^p(R^n_x)} belongs to L^q(R^n).
(b) If we fix χ ∈ C_0^∞(R^n) ∖ 0 and put ‖u‖_{M^{p,q},χ} = ‖ ‖F(u τ_x χ)(ξ)‖_{L^p(R^n_x)} ‖_{L^q(R^n_ξ)}, then ‖·‖_{M^{p,q},χ} is a norm on M^{p,q}(R^n), and the topology that it defines does not depend on the choice of the function χ ∈ C_0^∞(R^n) ∖ 0.

To go further, we need a more convenient way to describe the topology of M^{p,q}(R^n). Let u ∈ D′(R^n) (or u ∈ S′(R^n)) and χ ∈ C_0^∞(R^n) (or χ ∈ S(R^n)); then u τ_x χ is well defined and its Fourier transform makes sense.

Lemma 5.4. Let χ ∈ S(R^n) and u ∈ S′(R^n). Then F(u τ_x χ)(ξ) = ⟨u, e^{−i⟨·,ξ⟩} χ(· − x)⟩.
Proof. Suppose that u, χ ∈ S(R^n); then the equality follows by a direct computation. The general case is obtained from the density of S(R^n) in S′(R^n), noticing that both F(u τ_x χ)(ξ) and ⟨u, e^{−i⟨·,ξ⟩} χ(· − x)⟩ depend continuously on u.

Corollary 5.5. Let 1 ≤ p ≤ ∞, χ ∈ S(R^n) and u ∈ S′(R^n). Then the function ξ → ‖F(u τ_x χ)(ξ)‖_{L^p(R^n_x)} is well defined.
Corollary 5.6. Let 1 ≤ p, q ≤ ∞ and u ∈ S′(R^n). Then the following statements are equivalent: (i) u ∈ M^{p,q}(R^n); (ii) for every χ ∈ S(R^n) ∖ 0 the function ξ → ‖F(u τ_x χ)(ξ)‖_{L^p(R^n_x)} belongs to L^q(R^n).
Corollary 5.7. Let 1 ≤ p, q ≤ ∞ and χ ∈ S(R^n) ∖ 0. Then ‖·‖_{M^{p,q},χ} is a norm on M^{p,q}(R^n), and the topology defined by this norm coincides with the topology of M^{p,q}(R^n).

Let A be a real, symmetric and non-singular matrix, and Φ_A the quadratic form on R^n defined by Φ_A(x) = −⟨Ax, x⟩/2, x ∈ R^n. Then e^{iΦ_A} has an explicit Fourier transform, again a Gaussian of the same type (the Fresnel integral formula). This formula suggests the introduction of the operator u → (δ ⊗ e^{iΦ_A}) * u. Then the map (A, u) → (δ ⊗ e^{iΦ_A}) * u is continuous on S(n) × M^{p,q}(R^m × R^n); here S(n) is the vector space of n × n real symmetric matrices.
Proof. We shall write ξ = (ξ′, ξ″) for an element of R^m × R^n and accordingly D = (D′, D″). We shall also write F_m for F_{R^m} and F_n for F_{R^n}.
Let h ∈ S(R^m) \ {0}, ψ ∈ S(R^n) \ {0}, H = F_m^{-1}(h), Ψ = F_n^{-1}(ψ) and χ = h ⊗ ψ. For u ∈ M^{p,q}(R^m × R^n) we shall evaluate the corresponding norm. Using the equality F^{-1} ∘ τ_ζ = e^{i⟨·,ζ⟩} F^{-1}, if we set Ψ_A = F(e^{iΦ_A} Ψ), it follows that the desired estimate holds. This estimate implies (a) and (b). For part (c), we use this estimate and Lebesgue's dominated convergence theorem.

For λ ∈ End_R(R^n) and u ∈ S'(R^n) put u_λ = u ∘ λ whenever it makes sense.

Theorem 5.10. If λ ∈ End_R(R^n) is invertible and u ∈ M^{p,q}(R^n), then u_λ ∈ M^{p,q}(R^n) and there is C ∈ (0, +∞) independent of u and λ such that the corresponding estimate holds.

Proof. Let χ ∈ C^∞_0(R^n) be such that ∫ χ(x) dx = 1. We shall use the notation ‖·‖_{M^{p,q}} for ‖·‖_{M^{p,q},χ}. Let r > 0 be such that supp χ ⊂ {x : |x| ≤ r}. We denote by χ_1 the characteristic function of the unit ball in R^n. Since χ(x − y) χ(λx − z) = 0 implies |x − y| ≤ r and |λx − z| ≤ r, we get that |z − λy| ≤ r(1 + ‖λ‖) on the support of the integrand. The integral in the last row can be estimated using Hölder's inequality, which by integration with respect to ξ gives the result, with C = c ‖χ‖_{L^1} and c = (2π)^{-n} r^n vol({|x| ≤ 1}).

From Theorems 5.9 and 5.10 we deduce another important result of the present paper. This theorem, together with Theorem 4.4 and the corresponding results from the standard Weyl calculus, implies the first results on Schatten-class properties for operators in the T-Weyl calculus.

Proof. This theorem is a consequence of the previous theorem and the fact that it is true for pseudo-differential operators with symbols in M^{p,1}(W) (see for instance [3] or [23] for 1 ≤ p < ∞ and [4] for p = ∞).

An embedding theorem

Results such as those on pseudo-differential operators from [1] and [2] can be obtained using Theorem 5.12 and an embedding theorem. To formulate the result we define some Sobolev type spaces (L^p-style). These spaces are defined by means of weight functions.

Definition 6.1. (a) A positive measurable function k defined in R^n will be called a weight function of polynomial growth if there are positive constants C and N such that

k(ξ + η) ≤ C ⟨ξ⟩^N k(η), ξ, η ∈ R^n. (6.1)

The set of all such functions k will be denoted by K_pol(R^n). (b) For a weight function of polynomial growth k, we shall write M_k(ξ) = C ⟨ξ⟩^N, where C, N are the positive constants that define k.

Remark 6.2. (a) An immediate consequence of Peetre's inequality is that M_k is weakly submultiplicative, i.e.

M_k(ξ + η) ≤ C_k M_k(ξ) M_k(η), where C_k = 2^{N/2} C^{-1},

and that k is moderate with respect to the function M_k, or simply M_k-moderate, i.e. k(ξ + η) ≤ M_k(ξ) k(η). In fact, the left-hand inequality in C^{-1} ⟨ξ⟩^{-N} k(η) ≤ k(ξ + η) ≤ C ⟨ξ⟩^N k(η) is obtained if ξ is replaced by −ξ and η is replaced by ξ + η in (6.1). If we take η = 0 we obtain the useful estimates C^{-1} ⟨ξ⟩^{-N} k(0) ≤ k(ξ) ≤ C ⟨ξ⟩^N k(0).

The following lemma is an easy consequence of the definition and the above estimates.

Lemma 6.3. Let k, k_1, k_2 ∈ K_pol(R^n). Then: ... continuously and densely.

Lemma 6.5. Let g : (0, +∞) → C and k : R^n → (0, +∞) be two differentiable maps of class at least r satisfying the following conditions: (a) For any j ≤ r, there is C_{g,j} > 0 such that |t^j g^{(j)}(t)| ≤ C_{g,j} |g(t)|, t > 0.

Lemma 6.6. Let 1 ≤ p ≤ ∞, χ ∈ S(R^n) and v ∈ L^p(R^n). Then χ(D) v ∈ L^p(R^n).

Proof. Schur's lemma implies the result.

For 1 ≤ q < ∞, it is clear that if q t_1 > dim V_1, ..., q t_ℓ > dim V_ℓ, then the weight function k satisfies the hypotheses of the previous theorem.

Cordes' lemma

Recall the definition of generalized Sobolev spaces. Let s ∈ R, 1 ≤ p ≤ ∞. Define H^s_p(R^n) = {u ∈ S'(R^n) : (1 − Δ)^{s/2} u ∈ L^p(R^n)}, with ‖u‖_{H^s_p} = ‖(1 − Δ)^{s/2} u‖_{L^p}.

Lemma 7.1. If s > n and 1 ≤ p ≤ ∞, then H^s_p(R^n) ↪ M^{p,1}(R^n).

Proof.
This is a particular case of Theorem 6.7.

To state and prove Cordes' lemma we shall work with a very restrictive class of symbols. We shall say that a : R^n → C is a symbol of order m (m any real number) if a ∈ C^∞(R^n) and for any α ∈ N^n there is C_α > 0 such that

|∂^α a(x)| ≤ C_α ⟨x⟩^{m−|α|}, x ∈ R^n.

We denote by S^m(R^n) the vector space of all symbols of order m. Observe also that a ∈ S^m(R^n) ⇒ ∂^α a ∈ S^{m−|α|}(R^n) for each α ∈ N^n. The function ⟨x⟩^m clearly belongs to S^m(R^n) for any m ∈ R. We denote by S^∞(R^n) the union of all the spaces S^m(R^n), and we note that ⋂_{m∈R} S^m(R^n) = S(R^n), the space of tempered test functions. It is clear that S^m(R^n) is a Fréchet space with the semi-norms given by p_{m,α}(a) = sup_{x∈R^n} ⟨x⟩^{|α|−m} |∂^α a(x)|.

In addition to the previous lemma, we use part (iii) of Proposition 2.4 from [2].

Proof. It is sufficient to prove the inclusion for m > n and s ∈ (n, m). We shall show that if a ∈ S^{−m}(R^n), then F^{-1}a ∈ H^s_p, since ⟨·⟩^s a ∈ S^{s−m}(R^n) and s − m < 0. Using the previous lemma we obtain that F^{-1}a is in the Feichtinger algebra M^{1,1}(R^n).

Kato's identity

In this section we shall describe an extension of a formula due to T. Kato [19]. On the symplectic vector space (W, σ), both the duality ⟨·,·⟩^σ_{S,S'} and the antiduality ⟨·,·⟩^σ_{S,S*} are defined taking into account the symplectic structure of W. Likewise, the convolution *_σ is defined by means of the Fourier measure d_σξ.

Let ϕ ∈ S(W) and u ∈ S'(W) (or u ∈ S*(W)). Then the formal integral has a rigorous meaning when interpreted as the action of the distribution u on the test function η → ϕ(ξ − η) (or its complex conjugate). If S : W → W is a linear isomorphism such that S^σ = S, then d_σ(Sη) = (det S) d_ση, and this formula remains true in all cases where the operations of convolution and composition with a linear bijection make sense.

If b ∈ S(W), c ∈ S*(W), then b *_σ c ∈ S*(W), and from Theorem 4.4 one obtains a representation of Op_{σ,T}(b *_σ c). Recalling that Op^w is the standard Weyl calculus corresponding to the Weyl system (H, W̃_{σ,T}, ω̃_{σ,T}) for the symplectic space (W, σ_{T+T^σ}), this can be used together with Lemma 2.1 and Lemma 2.2 from [1] to represent Op_{σ,T}(b *_σ c), where the first integral is weakly absolutely convergent while the second one must be interpreted in the sense of distributions and represents the operator defined by its matrix elements for all ϕ, ψ ∈ S.

It may be appropriate here to introduce the family {U_{σ,T}(ξ)}_{ξ∈W}. Then a change of variables gives a representation with the first and second integrals weakly absolutely convergent and the third interpreted in the sense of distributions. The family {U_{σ,T}(ξ)}_{ξ∈W} has very nice properties that can be deduced from Lemma 2.1 in [1] by means of Theorem 4.4. Here we applied Corollary 2.13 for S = T + T^σ and λ = λ_{σ,T}(·) = e^{-(i/2)θ_{σ,T}(·)}, obtaining (τ_ξ a)^w_{σ,T}. For (b) and (c) we use (a) and Lemma 4.1. By this lemma we know that w = w_{ϕ,ψ} = ⟨ϕ, W_{σ,T}(·)ψ⟩_{S,S*} ∈ S(W). Assume that a ∈ S*(W). Then, writing â for F_σ a, we obtain the stated equality, and (b) and (c) follow at once from this equality.

Op_{σ,T}(b *_σ c) is given by an integral formula in which the first integral is weakly absolutely convergent while the second one must be interpreted in the sense of distributions and represents the operator defined by its matrix elements for all ϕ, ψ ∈ S, where the integral is weakly absolutely convergent. First we consider the case when b, c ∈ S(W). Then, writing â for F_σ a, we obtain the equality, where in the last step we used the formula above.
for every ξ ∈ W, and the uniform boundedness principle implies that there are M ∈ N and C = C(M, w) > 0 such that the corresponding estimate holds, where ⟨ξ⟩ = (1 + |ξ|^2)^{1/2} and |·| is a Euclidean norm on W.

Remark 8.3. The last two results are true for ω_{σ,T}-representations that are not necessarily irreducible.

Kato's operator calculus

In [5], H. O. Cordes noticed that the L^2-boundedness of an operator a(x, D) in OPS^0_{0,0} could be deduced by a synthesis of a(x, D) from trace-class operators. In [19], T. Kato extended this argument to the general case OPS^0_{ρ,ρ}, 0 < ρ < 1, and abstracted the functional analysis involved in Cordes' argument. This operator calculus can be extended further to investigate the Schatten-class properties of operators in the T-Weyl calculus for an irreducible ω_{σ,T}-representation (H, W_{σ,T}, ω_{σ,T}) of W. Then g ∈ L^1(W) and a = b *_σ g. The proof is established upon employing Theorem 9.5.

Recall that the Sobolev space H^s_p(W), s ∈ R, 1 ≤ p ≤ ∞, consists of all a ∈ S*(W) such that (1 − Δ_W)^{s/2} a ∈ L^p(W), and we set ‖a‖_{H^s_p(W)} = ‖(1 − Δ_W)^{s/2} a‖_{L^p(W)}. With this notation, ‖Op_{σ,T}(a)‖_{B_p(H)} ≤ Cst ‖a‖_{H^s_p(W)}. If we note that Op_{σ,T}(a) ∈ B_2(H) whenever a ∈ L^2(W) = H^0_2(W) for any irreducible ω_{σ,T}-representation (H, W_{σ,T}, ω_{σ,T}) of W, then standard interpolation results in Sobolev spaces give us the following result.
When MINMOD Artifactually Interprets Strong Insulin Secretion as Weak Insulin Action

We address a problem with the Bergman-Cobelli Minimal Model, which has been used for 40 years to estimate SI during an intravenous glucose tolerance test (IVGTT). During the IVGTT blood glucose and insulin concentrations are measured in response to an acute intravenous glucose load. Insulin secretion is often assessed by the area under the insulin curve during the first few minutes (Acute Insulin Response, AIR). The issue addressed here is that we have found in simulated IVGTTs, representing certain contexts, Minimal Model estimates of SI are inversely related to AIR, resulting in artifactually lower SI. This may apply to Minimal Model studies reporting lower SI in Blacks than in Whites, a putative explanation for increased risk of T2D in Blacks. The hyperinsulinemic euglycemic clamp (HIEC), the reference method for assessing insulin sensitivity, by contrast generally does not show differences in insulin sensitivity between these groups. The reason for this difficulty is that glucose rises rapidly at the start of the IVGTT and reaches levels independent of SI, whereas insulin during this time is determined by AIR. The minimal model in effect interprets this combination as low insulin sensitivity even when actual insulin sensitivity is unchanged. This happens in particular when high AIR results from increased number of readily releasable insulin granules, which may occur in Blacks. We conclude that caution should be taken when comparing estimates of SI between Blacks and Whites.

INTRODUCTION

The Minimal Model (MINMOD) has been a resounding success by any measure. The original paper (Bergman et al., 1979) has been cited over 2,000 times, and the numerous variants of the model developed by the Cobelli group have been cited collectively many thousands of times. MINMOD was designed to measure insulin sensitivity (S I) during an intravenous glucose tolerance test (IVGTT) by fitting the glucose response following an injected bolus of glucose, with the measured insulin used as a model input. Acute intravenous injection of glucose stimulates the release of insulin mainly from the rapidly releasable pool (RRP) within the beta cells. The area under the curve of plasma concentrations of insulin during the first 10 min, termed the acute insulin response (AIR), is often used as a measure of insulin secretion (Cobelli et al., 2007). AIR varies inversely with S I, reflecting the compensatory increase of insulin secretion to compensate for deteriorating insulin sensitivity (Bergman et al., 1981; Cobelli et al., 2007).

Our main finding is that MINMOD may underestimate S I when AIR is large and therefore be unreliable in comparing S I between groups with very different characteristic levels of AIR. This is distinct from the fundamental observation that S I and AIR tend to vary inversely. When the product S I * AIR, known as the Disposition Index (DI), is nearly constant as S I decreases, i.e., when insulin secretion, represented by AIR, increases in proportion, normoglycemia is maintained. In contrast, DI decreases as individuals progress from normal glucose tolerance through impaired glucose tolerance to type 2 diabetes (T2D) (Bergman et al., 1981; Cobelli et al., 2007).
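AIR and DI are used repeatedly in what follows. A minimal sketch of how they could be computed from sampled insulin concentrations is given below; the sample values and the assumed S I are hypothetical, chosen only to make the arithmetic concrete:

```python
import numpy as np

# Hypothetical IVGTT insulin samples over the first 10 minutes (uU/ml)
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # min
insulin = np.array([8.0, 95.0, 75.0, 58.0, 46.0, 38.0])
I_basal = insulin[0]

# AIR: area under the insulin curve above basal during the first 10 min
AIR = np.trapz(insulin - I_basal, t)                    # uU/ml * min

# Disposition index for an assumed (illustrative) insulin sensitivity
S_I = 4.0e-4                                            # units illustrative
DI = S_I * AIR
print(f"AIR = {AIR:.1f} uU/ml*min, DI = {DI:.3f}")
```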
This concept is a cornerstone of the modern understanding of T2D pathogenesis, as it makes quantitative the concept that T2D is avoided if insulin secretion (beta-cell function) increases in inverse proportion to falling insulin sensitivity but occurs if the beta cells are unable to mount such a compensatory response. Here we consider a case in which a group with higher DI paradoxically has higher risk of T2D, potentially casting doubt on the DI paradigm.

Our starting point and motivation are the published observations from many groups that Blacks have lower S I than Whites when assessed by MINMOD (Haffner et al., 1996, 1997; Festa et al., 2006; Goedecke et al., 2009; Goree et al., 2010; Kodama et al., 2013). This deficit is a possible explanation for the greater risk of T2D among Blacks. However, other studies of insulin sensitivity using the reference hyperinsulinemic euglycemic clamp method (HIEC) have by and large not found differences between Blacks and Whites (Saad et al., 1991; Stefan et al., 2004; Pisprasert et al., 2013; Ebenibo et al., 2014; Bello et al., 2019), which suggests that the enhanced risk of T2D among Blacks lies elsewhere. Resolving the discordance between these two well-established techniques of assessing insulin sensitivity is important for designing clinical trials and therapies optimized for preventing and treating T2D among Blacks.

FIGURE 1 | The insulin secretion rate, ISR (Eq. 5), can be decomposed into two components, delivery of granules from the reserve pool to the plasma membrane docked pool (variable N6), with rate proportional to σ (Eqs. 6, 7), and the priming of docked granules into readily releasable pool (RRP; variable N5) granules, with rate proportional to parameter r2^0 (Eq. 8). The release steps correspond to the fast calcium-dependent binding steps (variables N1-N4) as well as vesicle fusion and insulin release. For details see Eqs. A10-A12, Supplementary Table 8 in Supplementary Material-Equations and refs (Grodsky, 1972; Topp et al., 2000; Dalla Man et al., 2002; Chen et al., 2008; Ha et al., 2016; Ha and Sherman, 2020).

IVGTT studies also show that Blacks have higher AIR and higher DI (Kodama et al., 2013), which should be protective against T2D, but nonetheless have higher T2D risk. This is a paradox that we will not attempt to resolve in this limited study. Rather, we will examine closely the relationship between AIR and S I with a goal of determining which set of observations and interpretations to credit.

The equations for MINMOD, as implemented in MINMOD Millennium (Boston et al., 2003), are:

dG/dt = -(S_G + X) G + S_G G_b    (1)
dX/dt = -p2 X + p3 (I - I_b)    (2)

where the independent variables are glucose, G, and insulin action, X, taken to be proportional to insulin in a remote (interstitial) compartment, which is not measured but estimated along with G using the measured I values as input to the model. The other measured quantities are basal glucose, G b, and basal insulin, I b. By fitting G, the model estimates parameters p2 and p3, which are combined to yield an estimate of insulin sensitivity S I, defined as p3/p2. Finally, parameter S G is estimated and interpreted as the ability of glucose to promote its own uptake independent of insulin (glucose effectiveness).

We have previously described a model for longitudinal diabetes progression (Ha et al., 2016; Ha and Sherman, 2020) that builds on the core physiology represented by Eqs. 1, 2. That model was shown to be able to represent responses at any stage of glycemic progression during IVGTTs and oral glucose tolerance tests (OGTTs).
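Before turning to that model, a minimal sketch of how Eqs. 1, 2 can be integrated with measured insulin as input is shown below; the parameter values, the insulin samples, and the initial post-bolus glucose are illustrative, not fitted values from any study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not fitted to any dataset)
SG, p2, p3 = 0.025, 0.02, 5e-6    # min^-1, min^-1, min^-2 per (uU/ml)
Gb, Ib = 90.0, 8.0                # basal glucose (mg/dl), basal insulin (uU/ml)
SI = p3 / p2                      # insulin sensitivity, by definition

# Hypothetical measured insulin samples during an IVGTT (time in min)
t_obs = np.array([0, 2, 4, 6, 8, 10, 20, 40, 60, 120, 180])
I_obs = np.array([8, 90, 70, 55, 45, 38, 25, 15, 12, 9, 8])
I = lambda t: np.interp(t, t_obs, I_obs)   # insulin is a model input

def minmod(t, y):
    G, X = y
    dG = -(SG + X) * G + SG * Gb           # Eq. 1
    dX = -p2 * X + p3 * (I(t) - Ib)        # Eq. 2
    return [dG, dX]

# G(0) = 280 mg/dl mimics the state just after the glucose bolus
sol = solve_ivp(minmod, (0, 180), [280.0, 0.0], dense_output=True)
print(f"S_I = p3/p2 = {SI:.2e}, G(180 min) = {sol.y[0, -1]:.0f} mg/dl")
```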
Our approach will be to use that model (referred to here as the synthetic model) to generate responses of virtual individuals with prescribed parameters for insulin sensitivity and beta-cell function and investigate how well MINMOD and HIEC recover the assumed parameters.

MATERIALS AND METHODS

The synthetic model described here was developed to describe the pathogenesis of type 2 diabetes over months and years (Ha et al., 2016) and then extended to simulate oral glucose tolerance tests (OGTTs) and IVGTTs at fixed time points during that process (Ha and Sherman, 2020). Here we employ the model to generate virtual individuals for use in testing the ability of MINMOD and HIEC to estimate parameters of insulin resistance for subjects with defined characteristics. Terms and symbols are listed in Table 2.

Following (Topp et al., 2000) we first rewrite the glucose equation for MINMOD as:

dG/dt = R0 - (S_G + X) G    (3)

where R0 can be viewed as the input of glucose to the plasma compartment from either exogenous sources, such as intravenous injection and absorption from the gut, or from endogenous glucose production. Whereas glucose input is constant in MINMOD, we make it time-dependent and add more physiological detail. First, we subdivide R0 into exogenous and endogenous terms:

R0 = R_exo + R_endo    (4)

For IVGTTs, R exo is a function that rises and decays sharply; in it, BW is body weight, V G is the volume of distribution for glucose, and IVGTT bar sets the scale of the total glucose bolus. The parameters for R exo are fixed in this study and are listed in the Supplementary Material-Equations, Supplementary Table 3. For OGTTs, we use a piece-wise linear function, simplified from the formula in Dalla Man et al. (2002), that rises and falls more gradually than in the IVGTT due to slow absorption from the gut.

FIGURE 2 | (A) insulin and (B) glucose during a simulated IVGTT. The red traces represent a case of AIR increased by increasing the rate r2^0 of vesicle priming in the synthetic model (Figure 1). Although the assumed S I is the same (C), MINMOD reports a reduced value (D). Control and Large RRP cases differ as well in S G, which is adjusted to equalize basal glucose. See Table 1.

The formula for R endo, representing mainly hepatic glucose production (HGP), is:

R_endo = hepa_max(S I) / (α_HGP(S I) + hepa_SI I) + HGP_bas

R endo is a decreasing function of I that depends on S I to account for the correlation between hepatic and peripheral insulin sensitivity and on hepa SI to account for a component of hepatic-specific insulin sensitivity independent of S I. The only parameter varied in this study is S I. The details of hepa max and α HGP(S I) are in the Supplementary Material-Equations, Eqs. A5, A6, and the fixed parameters are in Supplementary Table 4. We rewrite the glucose equation compactly, showing only the parameters of R0 that are varied in this study.

We have added an equation for X to the model in Ha and Sherman (2020) to more accurately represent IVGTTs. It is the same as in MINMOD, but with p2 factored out to show S I explicitly:

dX/dt = p2 [S I (I - I_b) - X]

The synthetic model adds an equation for insulin, which we use to generate virtual subjects with different capacities to secrete insulin and hence different AIR when assessed by IVGTT. It represents the balance between secretion rate, ISR, and clearance:

dI/dt = β ISR / V - k I    (5)

where V is the volume of distribution for insulin and k is the insulin clearance rate. The variable β represents beta-cell mass.
Following (Topp et al., 2000), β satisfies a differential equation representing the hypothesis that mass adapts homeostatically over a period of years to compensate for insulin resistance. The variables γ and σ in the ISR term represent two aspects of compensation in beta-cell function, respectively, the calcium dependence of exocytosis, as mediated by K(ATP) channels, and the rate of delivery of insulin granules to the plasma membrane. These compensatory variables change on time scales of days to years and are thus effectively constant at their initial conditions during IVGTTs and OGTTs. The details of the equations for β, γ, and σ are not important for this study but are provided in the Supplementary Material-Equations. The initial value of σ is varied as a way of increasing AIR. The parameter r2^0 in the ISR controls the rate of transfer of insulin vesicles from the docked pool to the readily releasable pool (RRP; Figure 1), known as vesicle priming. This is another way we use to vary AIR. The values of these parameters for each figure are found in Table 1. The details of how σ and r2^0 determine ISR are described next.

The insulin secretion rate ISR is the output of a model of insulin granule exocytosis (see Figure 1), following broadly the classical two-pool model of Grodsky (1972) as updated and elaborated in Chen et al. (2008) and incorporated in Ha and Sherman (2020) to study the roles of first- and second-phase insulin secretion in diabetes pathogenesis. The key variables in the exocytosis module of the synthetic model are the numbers of vesicles in the docked pool, N6, and the RRP, N5 (Figure 1). The rate of transfer of vesicles from the reserve pool (treated as inexhaustible, so not represented by a discrete compartment) is r3, given by

r3 = σ C_i G_F    (6)

We vary this rate by varying σ, which increases both first- and second-phase secretion because it increases both the docked pool and, by mass action, the RRP. The variable C i is intracellular calcium and G F is an increasing function of glucose that represents the effect of one or more mitochondrial metabolites to amplify the efficacy of calcium by increasing vesicle trafficking to the plasma membrane (Eq. 7). We view the similar effect of the incretins GLP-1 and GIP to amplify insulin secretion as implicitly folded into this expression. When we simulate IVGTTs, we reduce the parameters G F,max and G F,b in the above equation to account for the greatly reduced effect of incretins during an intravenous glucose challenge. Finally, the other parameter we use to vary AIR is r2^0, which controls the rate at which docked vesicles become primed, i.e., transfer from the docked pool to the readily releasable pool (RRP) (Eq. 8).

FIGURE 3 | MINMOD estimate of S I vs. the assumed S I, which is systematically underestimated. Parameters are in Table 1; cases are numbered 1-4 from left to right (increasing S I) for each of Control and Large RRP.

Simulated IVGTTs

We create two classes of virtual individuals, control and enhanced AIR, where AIR is increased by increasing the rate of vesicle priming (parameter r2^0 affecting ISR in Eq. 3), shown in Figure 2. The assumed insulin sensitivity S I is the same for both cases. The insulin levels and AIR are increased (panel A), whereas the glucose profiles are almost identical (panel B). Figure 2C shows the assumed values of S I, which are the same for both cases. However, MINMOD incorrectly finds lower S I for the individual with stronger secretion (Figure 2D).
We repeated the above scenario for several matched pairs of S I values. Figure 3 shows that MINMOD systematically underestimates S I. Further increasing r2^0 and AIR results in still lower estimated values of S I (not shown).

1 http://www.math.pitt.edu/~bard/xpp/xpp.html

The interpretation by MINMOD of this behavior makes sense: in the high AIR case, insulin is higher, but glucose is not changed much. It therefore concludes that the high AIR individuals are insulin resistant. This is analogous to the Matsuda index of insulin sensitivity, which assumes that insulin sensitivity is inversely proportional to the product of AUC glucose and AUC insulin. Nonetheless, we know the ground truth for these simulations because we prescribed the value of S I, and MINMOD is in disagreement with the assumptions.

Simulated HIECs

We also simulated HIECs for the same matched pairs of S I values. One example is illustrated in Figure 4, showing insulin (panel A), glucose (panel B), and the glucose disposal rate normalized for body weight and insulin during the clamp. In contrast to MINMOD, HIEC is indifferent to the RRP size because it does not elicit endogenous insulin secretion and glucose remains near basal levels. Consequently, HIEC correctly estimates S I, independent of AIR (Figure 5).

Alternative Scenario: Increased Vesicle Docking

We next considered an alternative way to attain increased AIR, increasing the rate of vesicle docking (parameter σ in the synthetic model). We proportionally reduced S I as we increased σ so that this case would correspond to compensatory increases in beta-cell function as insulin sensitivity is reduced. In the simulated IVGTTs (Figure 6), this increased AIR as well as AUC insulin over the entire test (Figure 6A) while keeping the glucose profiles almost unchanged (Figure 6B). In agreement with the assumed values of S I (Figure 6C), MINMOD in this scenario correctly estimated reductions in S I inversely proportional to the increased AIR (Figure 6D). This finding is in accord with the definition of insulin resistance: higher insulin with unchanged glucose indicates insulin resistance. HIEC correctly recovers the assumed S I (Figure 7). Thus, in this scenario MINMOD and HIEC are in agreement.

FIGURE 7 | HIEC correctly recovers the assumed S I independent of vesicle docking rate σ and hence AIR. Control cases correspond to Control 1-4 in Figure 3; Large sigma cases correspond to the same cases but with σ increased 2x.

Choosing Between the Scenarios

We have illustrated two ways of increasing AIR, increased vesicle priming and increased vesicle docking. In the former, MINMOD underestimates S I, whereas in the latter, MINMOD's estimates are correct. We are left with the question of which scenario is more relevant for the case of comparing Black and White individuals, for which we do not know the ground truth regarding insulin sensitivity. We address this by looking at the performance of Black and White individuals on another test, the OGTT. Clinical studies show that Blacks, when normally glucose tolerant, have slightly lower glucose levels and somewhat higher insulin levels than Whites (Weiss et al., 2006; Chung et al., 2019; Fosam et al., 2020).

We used the synthetic model to simulate OGTTs for the scenario of increased vesicle priming (Figures 8A,B). The insulin (panel A) and glucose (panel B) profiles are similar, with the high AIR individuals exhibiting slightly higher insulin and slightly lower glucose.
This is in agreement with some but not all clinical observations in Blacks and Whites; see Discussion. The natural interpretation of the OGTT is that the insulin sensitivity of the two hypothetical individuals is similar. This is in contrast with the simulated IVGTTs, in which the high AIR individual had higher insulin but similar glucose levels (Figure 2).

We also simulated OGTTs for the scenario of increased vesicle docking (Figures 8C,D). The insulin (panel C) is higher for the high AIR individuals, whereas the glucose profiles are the same (panel D), the same pattern seen in the IVGTT (Figure 6). The natural interpretation of both the OGTT and the IVGTT for this scenario is that the high AIR individual is more insulin resistant. However, the behavior during the OGTT is in agreement with some observations in Black and White individuals, especially at the later time points. Specifically, the hypothetical high AIR individuals simulated here have much higher glucose at the 2 h time point, as found in some studies (Osei and Schuster, 1994; Osei et al., 1997).

DISCUSSION

Our motivation for this study was an interest in resolving discrepancies between IVGTT and HIEC in estimating insulin sensitivity of Black and White individuals. IVGTT, interpreted using MINMOD, generally finds that Blacks have lower insulin sensitivity, whereas HIEC generally does not find differences (Saad et al., 1991; Stefan et al., 2004; Pisprasert et al., 2013; Ebenibo et al., 2014; Bello et al., 2019). We asked whether MINMOD is misled by the higher AIR during IVGTTs of the Black subgroup to underestimate insulin sensitivity, S I. We investigated this question with a synthetic model (Ha et al., 2016; Ha and Sherman, 2020) to generate hypothetical individuals with varying degrees of AIR and S I and simulate their performance during IVGTTs and HIECs.

We considered two scenarios for increased AIR. In one, high AIR was the result of increased size of the RRP, which is closely related to first-phase insulin secretion; in the other, high AIR was the result of an increased rate of mobilization of insulin granules, which increases both first- and second-phase secretion. In both scenarios, the simulated IVGTTs were qualitatively similar to those exhibited by high AIR and low AIR individuals. However, the first way resulted in a systematic underestimation of S I by MINMOD, that is, lower than the assumed value. The second way resulted in a correct recovery of S I by MINMOD. HIEC by contrast recovered S I equally well in both cases, independent of AIR.

The question then is which scenario corresponds better to the experimentally observed differences between Black and White groups. Published OGTT data vary, likely depending on the age, BMI, sex (including menopausal status) and other characteristics of the population studied, as well as how well the groups are matched. Weiss et al. (2006) showed similar insulin and glucose profiles in Blacks and Whites, consistent with the RRP scenario, whereas Osei and colleagues (Osei and Schuster, 1994; Osei et al., 1997) showed similar glucose but substantially elevated insulin during OGTT in Blacks, consistent with the second scenario. We conclude that a finding using MINMOD of lower S I in Blacks relative to Whites, or any comparison of high and low AIR groups, should be interpreted cautiously in the absence of corroborating evidence from OGTTs or clamps.

It is instructive to view the issue treated here in terms of MINMOD's response to changes in AUC insulin as well as AIR.
The Matsuda index for OGTTs defines insulin sensitivity as inversely related to the product of AUC insulin and AUC glucose. MINMOD estimation of S I in the scenarios considered here is likewise inversely related to the product of AUC insulin and AUC glucose. In one scenario, that inverse relationship correctly corresponds to the assumed physiology; in the other, it is incorrect.

We hasten to point out, however, that MINMOD estimates of S I are not necessarily inversely related to AUC insulin. An example of great importance for the way MINMOD is generally implemented is the effect of infusing exogenous insulin at the 20-minute time point of the IVGTT. This modification, the insulin-modified IVGTT or IM-IVGTT, was introduced to improve estimates for individuals with greatly reduced endogenous secretion, such as those with type 1 diabetes or advanced type 2 diabetes. Estimates of S I obtained with the standard IVGTT and the IM-IVGTT are comparable, but the IM-IVGTT has greater precision, as shown, for example, in (Quon et al., 1994; Pacini et al., 1998). Yang et al. (1987) showed similarly that increasing insulin secretion at the 20-minute time point by injecting tolbutamide or delaying the peak of insulin by injecting somatostatin does not change the estimated value of S I but reduces the error of the estimate. Thus, the response of MINMOD to changes in AUC insulin depends strongly on the context.

The context that we are concerned with here is whether MINMOD or HIEC is more trustworthy in evaluating ethnic differences. In the scenarios we considered, for which S I was known a priori, we found that HIEC was more trustworthy. It is also important to emphasize that HIEC is a method that directly measures insulin sensitivity whereas the minimal model uses a simulation approach. The application to studies of Black and White cohorts depends then on whether either of our scenarios describes correctly the underlying mechanism for enhanced AIR in Black individuals. A PKPD study using the IVGTT suggests that the scenario of increased first-phase secretion due to larger RRP is a better representation (Xie et al., 2010). Of course, other putative mechanisms that we have not considered may be even better. We conclude that, at minimum, caution should be exercised in interpreting MINMOD estimates of S I between populations that differ substantially in AIR such as Blacks and Whites.
Diurnal land surface energy balance partitioning estimated from the thermodynamic limit of a cold heat engine

Abstract. Turbulent fluxes strongly shape the conditions at the land surface, yet they are typically formulated in terms of semiempirical parameterizations that make it difficult to derive theoretical estimates of how global change impacts land surface functioning. Here, we describe these turbulent fluxes as the result of a thermodynamic process that generates work to sustain convective motion and thus maintains the turbulent exchange between the land surface and the atmosphere. We first derive a limit from the second law of thermodynamics that is equivalent to the Carnot limit but which explicitly accounts for diurnal heat storage changes in the lower atmosphere. We call this the limit of a "cold" heat engine and use it together with the surface energy balance to infer the maximum power that can be derived from the turbulent fluxes for a given solar radiative forcing. The surface energy balance partitioning estimated from this thermodynamic limit requires no empirical parameters and compares very well with the observed partitioning of absorbed solar radiation into radiative and turbulent heat fluxes across a range of climates, with correlation coefficients r^2 ≥ 95 % and slopes near 1. These results suggest that turbulent heat fluxes on land operate near their thermodynamic limit on how much convection can be generated from the local radiative forcing. It implies that this type of approach can be used to derive general estimates of global change that are solely based on physical principles.

Introduction

The turbulent fluxes of sensible and latent heat play a critical role in the land surface energy balance during the day as these fluxes represent the principal means by which the surface cools and exchanges moisture, carbon dioxide and other compounds with the atmosphere. Due to their inherently complex nature, these fluxes are typically described by semiempirical expressions (e.g., Businger et al., 1971; Louis, 1979; Beljaars and Holtslag, 1991). Yet representations of this exchange in land surface and climate models are still associated with a high degree of uncertainty. This uncertainty results, for instance, in biases in evapotranspiration and surface temperatures across different models (Mueller and Seneviratne, 2014), in empirical relationships of land surface exchange outperforming land surface models (Best et al., 2015), and in biases in boundary layer heights (Davy and Esau, 2016). The semiempirical and highly coupled nature of land-atmosphere exchange seems to make it almost impossible to derive simple, physically based estimates of the magnitude of turbulent exchange and how it changes with land cover change or global warming.
An alternative approach to describing surface-atmosphere exchange can be based on thermodynamics (Kleidon et al., 2014; Dhara et al., 2016), an aspect that is rarely considered in the description of surface-atmosphere exchange. In this approach, turbulent exchange is formulated as a thermodynamic process by which turbulent heat fluxes drive a convective heat engine within the atmosphere that does the work to maintain convection and thus the turbulent exchange near the surface. This approach specifically invokes the second law of thermodynamics as an additional constraint on atmospheric dynamics (similar to previous approaches, such as the maximization of material entropy production (MEP); e.g., Paltridge, 1978; Ozawa and Ohmura, 1997; Lorenz et al., 2001; Ozawa et al., 2003). The second law sets a limit on how much work can be derived from the local radiative forcing of the system. The dynamics associated with convection are then essentially captured by the implicit assumption that convection works as hard as it can, so that the use of the thermodynamic limit approximates the emergent convective dynamics. Previous applications of this thermodynamic approach have shown that it can successfully describe the broad climatological variation of surface energy balance partitioning on land and ocean (Kleidon et al., 2014; Dhara et al., 2016), the strength and sensitivity of the hydrologic cycle and surface temperatures to global change (Kleidon and Renner, 2013a, b, 2017; Kleidon et al., 2015), and the dynamics of the Earth system in general (Kleidon, 2016).

Here we extend this approach to the diurnal variation of the surface energy balance on land and compare its estimated partitioning to observations across different climates. As in the previous applications of thermodynamics to land-atmosphere exchange, the starting point is to view turbulent fluxes as the result of a heat engine that is driven by these heat fluxes (Fig. 1). The limit on how much work this heat engine can maximally perform is set by the first and second law of thermodynamics, from which the well-known Carnot limit of a heat engine can be derived (e.g., Kleidon, 2016).

When applied to the setting of the diurnal cycle of the land-atmosphere system, two key aspects need to be considered as these shape the thermodynamic limit (as illustrated by the two boxes in Fig. 1b). First, the strong diurnal variation of solar radiation causes strong changes in heat storage within the system that result in a much less varying emission of terrestrial radiation to space. In the absence of such heat storage changes, nighttime temperatures would be much lower than those found on Earth. In the ideal case that is being considered here, the strong variation of solar radiation is completely leveled out to yield a uniform emission of radiation to space, as indicated by the blue line in the graph at the top of Fig.
1, labeled R_l,out. While these heat storage changes predominantly take place below the surface for open water surfaces such as the ocean and lake systems (reflected in nearly uniform turbulent fluxes during night and day; see, e.g., measurements by Liu et al., 2009), the land-atmosphere system accommodates these changes mostly in the lower atmosphere (Kleidon and Renner, 2017) because heat diffusion into the soil is slow (e.g., Oke, 1987). The relevance of this different way of accommodating heat storage changes over land is that it takes place within the heat engine that we consider. The heat storage change is associated with a heating of the engine during the day, which represents an additional term in the entropy balance of the engine. What we show here is that the resulting thermodynamic limit is somewhat different to the common Carnot limit. We refer to this limit as the Carnot limit of a cold heat engine. Our motivation to refer to this limit as the limit of a cold heat engine is the behavior of a cold car engine in winter. When the car engine is still cold just after it has been started, one needs to hit the gas pedal harder to get the same power. As we will show below, the expression we derive here shows the same effect, that is, that a heat gain inside the engine reduces the work output of the engine. We will show that this enhanced heat flux is consistent with observations, so that this effect of heat accumulation during the day is an important factor that shapes the magnitude of turbulent fluxes on land.

The magnitude of the diurnal variation in heat storage is well constrained when assuming that the radiative heating by solar radiation and the emission to space are roughly balanced over the course of day and night. The temporal change in heat storage during the day can then be inferred from the imbalance of radiative fluxes at the top of the atmosphere (indicated in the upper panel of Fig. 1b, and as described by Kleidon and Renner, 2017).

The second aspect that shapes the thermodynamic limit is the reduction in surface temperature in the presence of greater turbulent fluxes at the surface (lower panel at the right of Fig. 1). This reduction in surface temperature reduces the temperature difference that is utilized by the heat engine to derive power, thus setting a limit of maximum power of the heat engine (as in, e.g., Kleidon and Renner, 2013a; Kleidon et al., 2014; Dhara et al., 2016). (This maximum power limit is very closely related to the proposed principle of maximum entropy production (MEP), as maximum power equals maximum dissipation in steady state, and entropy production is proportional to dissipation. An example of the application of MEP to convection is given by Ozawa and Ohmura, 1997.) We then combine the thermodynamic limit of a cold heat engine with the energy balances of the surface and of the whole surface-atmosphere system and maximize the power output of the heat engine to get a fully constrained description of the system that can, in first approximation, be solved analytically. It yields a description of the turbulent exchange between the land surface and the atmosphere that is fully constrained by thermodynamics and free of empirical turbulence parameterizations.
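For reference, the classical Carnot limit that the following derivation generalizes can be written as follows, with J_in the driving heat flux and T_s > T_a the temperatures of the warm and cold reservoirs:

\[
G \;\le\; J_\mathrm{in}\,\frac{T_s - T_a}{T_s}.
\]

The derivation below recovers this form in the appropriate limiting case and in the absence of heat storage changes.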
In the following, we first derive the thermodynamic limit of a cold heat engine, combine it with the energy balances of the system and maximize the power output to estimate surface energy balance partitioning based on the solar forcing of the system. The estimated partitioning is then tested with observations across field sites of contrasting climatological conditions. We then discuss how our thermodynamic approach compares to the common approaches in boundary layer meteorology and consider the utility of our approach for future work as well as potential implications.

Thermodynamic formulation of the land surface energy balance

We consider the land-atmosphere system as a thermodynamic system in a steady state when averaged over the diurnal cycle. Surface heating by absorption of solar radiation, R_s, causes the surface to warm, while the atmosphere is cooled by the emission of radiation to space, R_l,out (Fig. 1). The surface and atmosphere are linked by the net exchange of terrestrial radiation, R_l,net, and turbulent heat fluxes, J_in, that result from convective motion. We consider this system to be a locally forced system with no advection. Convective motion within the boundary layer is seen as the consequence of a heat engine that generates motion out of the turbulent heat fluxes, where, for simplicity, we do not distinguish between the effects of the sensible and latent heat flux and the associated forms of dry and moist convection. The steady-state condition is used for the radiative forcing of the whole system by requiring that the mean radiative fluxes taken over the whole day are balanced such that R_s,avg = R_l,out (with R_s,avg being the average of R_s). Furthermore, we assume that the generation of turbulent kinetic energy, or power G (or work per time), and its frictional dissipation, D, are in balance, so that G = D. In the following, we derive the limit on how much power can be derived from the forcing of the system directly from the first and second law of thermodynamics in a general way, so that we do not need to make the assumption that the atmosphere operates in a Carnot-like cycle. All variables used in the following are summarized and described in Table 1.

Carnot limit with heat storage changes

We first derive a thermodynamic limit akin to the Carnot limit from the energy and entropy balances of the heat engine, which specifically includes the change in heat storage within the engine. The first law of thermodynamics applied to this setup is given by

dU_e/dt = J_in - J_out + D - G,    (1)

where dU_e/dt is the change in heat storage within the heat engine, J_in represents the addition of heat by the turbulent heat fluxes from the surface and J_out is the rate by which the heat engine is being cooled, which is accomplished by radiative cooling. Note that this formulation differs from the derivation of the Carnot limit by accounting for changes in internal energy on the left-hand side and for dissipative heating, D, on the right-hand side as frictional dissipation takes place within the system. As we consider a steady state with G = D, note that the contributions of these terms in Eq. (1) cancel out so that the equation reduces to dU_e/dt = J_in - J_out. Also note that at this point, we neglect the effects of radiative energy transport from the surface to the atmosphere that would contribute to dU_e/dt in the application to the surface-atmosphere system. As it turns out, this contribution by radiation does not alter the limit, as shown in Appendix A.

The associated entropy budget of the heat engine is given by a change in entropy associated with the change in heat storage, dU_e/dt, at an effective engine temperature T_e, the entropy input by J_in at a temperature T_s, the entropy export by J_out at a temperature T_a, frictional dissipation that is assumed to occur at temperature T_e, and possibly some irreversible entropy production σ_irr within the engine, i.e. irreversible losses that are not accounted for by the frictional dissipation term, D/T_e:

(1/T_e) dU_e/dt = J_in/T_s - J_out/T_a + D/T_e + σ_irr.    (2)

Note that this entropy budget is the entropy budget for thermal entropy, not for radiative entropy. This is an important distinction. A contribution by a radiative flux, e.g., a flux R_l,out/T_a, represents a flux of radiative entropy (and would require an additional factor of 4/3 as it deals with radiation); i.e., it is entropy reflected in the composition of radiation but not associated with the thermal motion of molecules that describes heat or thermal energy. As we deal with a convective heat engine, we must not include radiative terms as such but only when radiation is absorbed and heats air and water (adds thermal energy) or when the net emission of radiation cools (removes thermal energy). Radiative terms and radiative entropy production are typically much larger in the Earth system than non-radiative contributions (easily by a factor of 100; e.g., Kleidon, 2016). Yet any form of motion is associated with the much smaller but relevant thermal entropy terms.

For the atmospheric temperature, T_a, we use the radiative temperature associated with R_l,out (i.e., we use the Stefan-Boltzmann law, R_l,out = σ T_a^4, to infer T_a, with σ = 5.67 × 10^-8 W m^-2 K^-4 being the Stefan-Boltzmann constant). This is the most optimistic temperature for the entropy export from the heat engine as it is the coldest temperature possible to emit radiation at a rate R_l,out to space, and it thus represents the highest entropy export from the heat engine (note that blackbody radiation represents the radiative flux with maximum entropy). Note also that this temperature is not bound to a particular height within the atmosphere but is instead inferred from the energy balance constraint. The effective engine temperature, T_e, essentially represents the potential temperature of the lower atmosphere as the temperature variation within the lower atmosphere is shaped by convection and is thus approximately adiabatic.

The thermodynamic limit on how much power, G, can maximally be derived by the engine is obtained from the entropy budget using the ideal case in which σ_irr = 0 (the second law of thermodynamics requires σ_irr ≥ 0). This ideal case implies that the only source of entropy production is the frictional dissipation term, D/T_e (cf. Eq. 2).
( 1) cancel out so that the equation reduces to dU e /dt = J in −J out .Also note that at this point, we neglect the effects of radiative energy transport from the surface to the atmosphere that would contribute to dU e /dt in the application to the surfacewww.earth-syst-dynam.net/9/1127/2018/Earth Syst.Dynam., 9, 1127-1140, 2018 atmosphere system.As it turns out, this contribution by radiation does not alter the limit, as shown in Appendix A. The associated entropy budget of the heat engine is given by a change in entropy associated with the change in heat storage, dU e /dt, at an effective engine temperature T e , the entropy input by J in at a temperature T s , the entropy export by J out at a temperature T a , frictional dissipation that is assumed to occur at temperature T e , and possibly some irreversible entropy production σ irr within the engine, i.e. irreversible losses that are not accounted for by the frictional dissipation term, D/T e : Note that this entropy budget is the entropy budget for thermal entropy, not for radiative entropy.This is an important distinction.A contribution by a radiative flux, e.g., a flux R l,out /T a , represents a flux of radiative entropy (and would require an additional factor of 4/3 as it deals with radiation); i.e., it is entropy reflected in the composition of radiation but not associated with the thermal motion of molecules that describes heat or thermal energy.As we deal with a convective heat engine, we must not include radiative terms as such but only when radiation is absorbed and heats air and water (adds thermal energy) or when the net emission of radiation cools (removes thermal energy).Radiative terms and radiative entropy production are typically much larger in the Earth system than non-radiative contributions (easily by a factor of 100, e.g., Kleidon, 2016).Yet any form of motion is associated with the much smaller but relevant thermal entropy terms. For the atmospheric temperature, T a , we use the radiative temperature associated with R l,out (i.e., we use the Stefan-Boltzmann law, R l,out = σ T 4 a , to infer T a , with σ = 5.67 × 10 −8 W m −2 K −4 being the Stefan-Boltzmann constant).This is the most optimistic temperature for the entropy export from the heat engine as it is the coldest temperature possible to emit radiation at a rate R l,out to space, and it thus represents the highest entropy export from the heat engine (note that blackbody radiation represents the radiative flux with maximum entropy).Note also that this temperature is not bound to a particular height within the atmosphere but is instead inferred from the energy balance constraint.The effective engine temperature, T e , essentially represents the potential temperature of the lower atmosphere as the temperature variation within the lower atmosphere is shaped by convection and is thus approximately adiabatic. The thermodynamic limit on how much power, G, can maximally be derived by the engine is obtained from the entropy budget using the ideal case in which σ irr = 0 (the second law of thermodynamics requires σ irr ≥ 0).This ideal case implies that the only source of entropy production is the frictional dissipation term, D/T e (cf.Eq. 2).Using Eq. (1) to replace J out in Eq. 
( 2), we obtain In this expression, the temperature of the heat engine, T e , plays an important role.In the limiting case of T e ≈ T a , this expression reduces to the common Carnot limit as the effect of the change in heat content is indistinguishable from the waste heat flux, J out , of the heat engine.As the engine temperature essentially represents the potential temperature of the lower atmosphere, it is much closer to the surface temperature, so that the approximation T e ≈ T s is better justified.With this approximation, the thermodynamic limit of power then reduces to In the absence of heat storage changes, the term dU e /dt vanishes and yields, again, the common Carnot limit, except that T a appears in the denominator of the Carnot efficiency rather than T s , an aspect that has previously been derived in the context of a "dissipative" heat engine (Renno and Ingersoll, 1996;Bister and Emanuel, 1998).Note that in the presence of positive heat storage changes, as is the case during the day, the maximum power that can be derived from the heat flux J in is reduced.That is, the increase in heat storage within the engine (dU e /dt > 0) results in a lower efficiency in converting heat into power (with the efficiency given by the ratio G/J in ), consistent with our explanation in the Introduction of why we refer to this effect as that of a cold heat engine. Energy balance constraints We next use the energy balance constraints of the surface and the whole system to express dU e /dt and T s − T a in terms of the absorption of solar radiation at the surface, R s , and the turbulent heat flux J in .This will allow us to replace these two terms in Eq. ( 4), so that the power G only depends on R s and J in .Note that we refer to the atmospheric heat storage change, dU a /dt in the following rather than the engine heat storage change, dU e /dt.The difference is that when we apply the thermodynamic limit on the atmosphere, the heat storage is also affected by the net exchange of longwave radiation, which adds another term to the energy and entropy budget but which does not go through the engine as a heat flux.However, the resulting limit remains unaffected, as shown in Appendix A. The surface energy balance constrains the relationship between the heat flux J in and the temperature difference that drives the heat engine, T s − T a .We express this balance by where we linearize the net longwave radiative exchange, R l,net = k(T s − T a ), between the surface and the atmosphere 3) and (4) J in Turbulent fluxes of sensible and latent heat W m −2 Eqs. ( 1), ( 2) and ( 5) Turbulent fluxes J in optimized to yield max. power W m −2 Eq. ( 7) Cooling rate of the heat engine W m −2 Eqs. ( 1) and ( 2) Flux of terrestrial radiation to space W m −2 Assumed to be in steady state, with Surface absorption of solar radiation (average) W m −2 Eq. ( 6) T a Atmospheric temperature K Assumed to be the radiative temperature T e Temperature of the heat engine K Assumed to be similar to the surface temperature Change in atmospheric heat storage W m −2 Eq. ( 6) dU e /dt Change in heat storage within heat engine W m −2 Eqs. ( 1)-( 4) (assumed to be the same as dU a /dt in Sect.2.2) dU s /dt Change in ground heat storage W m −2 Prescribed from observations, Eq. ( 6) (or ground heat flux) dU tot /dt Change in total heat storage W m −2 Eq. 
( 6) and where dU s /dt describes heat storage changes below the surface, which is represented by the ground heat flux.This formulation of the surface energy balance can be used to express the temperature difference, T s − T a , as a function of R s , J in , and heat storage changes below the surface, dU s /dt.The energy balance of the whole system, neglecting heat advection terms, yields a constraint of the form where dU tot /dt is the total change in heat storage within the surface-atmosphere system.We assume this balance to be in a steady state when averaged over day and night, so that on average, R l,out = R s,avg , where R s,avg is the temporal mean of R s taken over the whole day.The energy balance of the whole system provides an expression for dU a /dt as a function of the instantaneous value of absorbed solar radiation, R s , the mean absorption of solar radiation, R s,avg , and the ground heat flux, dU s /dt. Maximization of convective power The surface energy balance (Eq.5) can now be used to express the temperature difference that drives the heat engine, T s − T a , in the thermodynamic limit given by Eq. ( 4), while the energy balance of the whole system (Eq.6) can be used to constrain the terms describing the changes in heat storage, dU a /dt.As the power G is an increasing function of J in , but the temperature difference declines with greater values of J in , the power has a maximum, which is referred to as the maximum power limit.This limit can be derived analytically by ∂G/∂J in = 0 and is associated with an optimum heat flux of the form This expression is consistent with previous work where the optimum heat flux is given by J opt = R s /2 in the absence of heat storage changes (Kleidon and Renner, 2013a, b).It is, however, modulated by heat storage changes, and it matters whether these changes take place below the surface or in the lower atmosphere as the two forms of heat storage change enter Eq. ( 7) with a different sign. We next consider the two limiting cases.The first limit is when the heat storage changes take place primarily below the surface, like an open water surface of a lake.In this case, dU s /dt ≈ dU tot /dt (and dU a /dt ≈ 0), and the optimum heat flux reduces to The other limiting case is when the heat storage changes take place above the surface.Then, dU a /dt ≈ dU tot /dt (with dU s /dt ≈ 0), and the optimum heat flux is This expression implies that the optimum value of the turbulent heat flux varies directly with the absorbed solar radiation, R s , but has a constant offset given by half of the mean absorption, R s,avg /2.This offset should be a comparatively small value of about 80-100 W m value of surface absorption of solar radiation of 165 W m −2 (Stephens et al., 2012).Note that the power, however, does not differ between the two cases and yields the same value of Hence, the information on absorbed solar radiation (and the ground heat flux to account for dU s /dt) is sufficient to estimate surface energy balance partitioning from the thermodynamic limit of maximum power. Evaluation of the approach Evaluating our estimate requires observations of absorbed solar radiation during the day, R s , and the ground heat flux, dU s /dt.From the diurnal course of R s , the mean value of R s,avg can be calculated, which in turn yields an estimate for dU tot /dt.Taken together with the ground heat flux, this yields the value of dU a /dt, so that all terms in Eq. 
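A minimal sketch of this estimation procedure, using Eq. (7) in the reconstructed form given above; the half-hourly forcing series and the ground heat flux are illustrative, not site data:

```python
import numpy as np

# Illustrative half-hourly series over one day (48 samples)
t = np.arange(48) * 0.5                                          # hours
Rs = np.maximum(0.0, 800.0 * np.sin(np.pi * (t - 6.0) / 12.0))   # toy solar absorption, W m-2
dUs = 0.1 * Rs                                                   # toy ground heat flux, W m-2

Rs_avg = Rs.mean()              # steady state over the day: R_l,out = Rs_avg
dUtot = Rs - Rs_avg             # Eq. (6): total heat storage change
dUa = dUtot - dUs               # atmospheric heat storage change
Jopt = 0.5 * (Rs - dUs + dUa)   # Eq. (7): maximum-power turbulent heat flux

print(f"mean Rs = {Rs_avg:.0f} W m-2, midday Jopt = {Jopt.max():.0f} W m-2")
```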
The resulting estimate of J_opt can then be compared to observations of the turbulent heat fluxes or to the available energy, i.e., net radiation reduced by the ground heat flux.

Data sources

We use two types of data sources to test our approach. To test how reasonable the estimates are for the diurnal heat storage changes in the lower atmosphere, we first use 6-hourly radiosonde data from the DWD meteorological observatory Lindenberg in Brandenburg, Germany (data available at http://weather.uwyo.edu/upperair/sounding.html, last access: 16 April 2018). These observations allow us to derive an estimate of the diurnal variations in temperature (and moisture) in the lower atmosphere and thus of dU_a/dt (Fig. 2a). We use data from this site because this observatory provides a long and consistent record of four vertical profiles a day as well as surface energy balance components, while typically only two vertical profiles a day are taken during routine radiosonde measurements. We use observations from June for the years 2006 to 2009 and calculate the moist static energy at each 6 h interval and then take the difference over the time interval to obtain estimates for changes in atmospheric heat storage. These differences are then compared to the change in atmospheric heat storage expected from solar radiation, as described by Eq. (6).

We then use observations of absorbed solar radiation (R_s) and the ground heat flux (dU_s/dt) at six field sites in highly contrasting climatological settings (listed in Table 2) to calculate the turbulent heat fluxes from maximum power (Eq. 7). The six sites include a grassland and a forested site at Lindenberg, Brandenburg, Germany (Beyrich et al., 2006); three AmeriFlux sites (a tundra site at Anaktuvuk River, Alaska (Rocha and Shaver, 2011); a grassland site at Southern Great Plains, Oklahoma (Fischer et al., 2007; Raz-Yaseef et al., 2015); and a tropical rain forest site at Tapajos National Park, Brazil (Goulden et al., 2004)); and a site in a planted pine forest at Yatir Forest in Israel (Rotenberg and Yakir, 2010, 2011). For each site, we use 1 month of observations for a summer period in which solar radiative heating of the surface is highest and the effects of heat advection are minor; we estimate turbulent fluxes associated with maximum power (using Eq. 7) and compare these to the observed fluxes.

Table 2. Site description of the six sites used for evaluating the estimations of the maximum power limit (with the letters referring to the graphs shown in Fig. 3). Also shown are the correlation statistics of the comparison to observations. The adjusted squared explained variance of the linear regression of J_opt to observed net radiation (R_n = R_s - R_l,net) minus ground heat flux, R_n - dU_s/dt, is reported as r^2. Standard errors of the slope and intercept of the regression are derived by a pre-whitening procedure to reduce the effect of serial correlation of the residuals (Newey and West, 1994; Zeileis, 2004).
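The moist static energy differencing described under Data sources can be sketched as follows; the profiles are hypothetical, and only the c_p T and L_v q terms are differenced in this simplification (the treatment of the geopotential term in the paper's full diagnostic is not reproduced here):

```python
import numpy as np

cp, g, Lv = 1004.0, 9.81, 2.5e6   # J kg-1 K-1, m s-2, J kg-1

def column_heat_change(p_hPa, T1, q1, T2, q2):
    """Column-integrated change in cp*T + Lv*q (J m-2) between two soundings,
    integrated over pressure layers using dp/g as the layer mass."""
    dp = np.abs(np.diff(p_hPa)) * 100.0           # layer thickness in Pa
    h1 = cp * T1 + Lv * q1                        # J kg-1 at each level
    h2 = cp * T2 + Lv * q2
    dh_layer = 0.5 * ((h2[:-1] - h1[:-1]) + (h2[1:] - h1[1:]))  # layer means
    return np.sum(dh_layer * dp) / g

# Hypothetical 06 and 12 UTC profiles over the lowest few levels
p = np.array([1000.0, 950.0, 900.0, 850.0, 800.0])      # hPa
T_06 = np.array([288.0, 285.0, 282.0, 279.0, 276.0])    # K
T_12 = np.array([293.0, 289.0, 285.0, 281.0, 277.0])
q_06 = np.array([8.0, 7.0, 6.0, 5.0, 4.0]) * 1e-3       # kg kg-1
q_12 = np.array([9.0, 8.0, 6.5, 5.5, 4.5]) * 1e-3

dt = 6 * 3600.0                                         # 6 h interval in s
dUa_dt = column_heat_change(p, T_06, q_06, T_12, q_12) / dt
print(f"dUa/dt ~ {dUa_dt:.0f} W m-2")
```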
Results

We first evaluate the extent to which diurnal variations in solar radiation are buffered by heat storage changes in the lower atmosphere. To do so, we use the diagnosed variations of moist static energy from the radiosoundings in Lindenberg, Germany, and compare these to the mean variation in absorbed solar radiation at the surface as well as to variations in the ground heat flux at the site in Fig. 2b. The comparison shows that the heat storage variations in the lower atmosphere are substantially greater than the ground heat flux, so that the diurnal variations in solar radiation are mostly buffered by the lower atmosphere. Although there is considerable variation (as indicated by the blue boxes), mostly due to pressure changes and advective effects, these variations follow the temporal course of what is expected from the variation in absorbed solar radiation (as described by Eq. 6). This confirms our conjecture that the diurnal variations in solar radiation on land are buffered primarily by heat storage changes in the lower atmosphere. This buffering of the diurnal variations over the land surface is rather different from how an open water surface buffers these variations (as also shown by observations; Liu et al., 2009); this is an aspect used previously to explain the difference in climate sensitivity of land and ocean surfaces (Kleidon and Renner, 2017).

The comparison of the estimated surface energy balance partitioning from maximum power to observations at the six sites is shown in Fig. 3. The correlations are summarized in Table 2 in terms of the correlation coefficient as well as the slope and intercept. During nighttime, there is a mismatch between our approach and observations, which is represented by the intercept shown in Table 2. This mismatch may be explained by the prevalent stable nighttime conditions in which the atmosphere does not act as a heat engine, an aspect that we did not consider in our approach. During daytime, we find very high correlations of above 95 % between the turbulent fluxes estimated from the maximum power limit and observed net radiation (reduced by the ground heat flux), with a very good match of the estimated slopes, within 15 % of the observed values. This high level of agreement is found across the range of climatological settings shown in Fig. 3. Also note that the maximum power limit without an explicit consideration of heat storage changes (i.e., with dU_s/dt = 0 and dU_a/dt = 0 in Eq. (7), as in Kleidon et al., 2014) estimates turbulent fluxes that also result in a high correlation but with a magnitude that is too low compared to observations.

Figure 3. Mean diurnal cycle of the absorption of solar radiation at the surface (R_s, red line, observed), the ground heat flux (dU_s/dt, orange line, observed), and the turbulent heat fluxes estimated by maximum power (J_opt, black line, estimated) and observations (J_obs, black circles, observed) for a selected month for six field observations in (a) a tundra ecosystem in Alaska, (b) a cropland in the midwestern US, (c, d) a grassland and a pine forest in a temperate environment in Germany, (e) a planted pine forest in an arid environment in Israel, and (f) a tropical rain forest in the humid Amazon Basin in Brazil. The comparison of the turbulent heat fluxes estimated from maximum power to energy balance measurements is shown for 30 min observations in the right panel for each site for two cases of thermodynamic limits that differ by their consideration of heat storage changes (dark blue: with storage, as in Eq. (4); light blue: without storage, i.e., dU_s/dt = 0 and dU_a/dt = 0, so that J_opt = R_s/2). More information on the sites and the correlation statistics is provided in Table 2.
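The correlation statistics reported in Table 2 can be estimated along the following lines; this is a minimal sketch using the statsmodels library, with a Newey-West (HAC) covariance standing in for the pre-whitening procedure cited above, and with the lag choice and variable names being our own assumptions rather than the exact settings used here.

import statsmodels.api as sm

def regression_stats(j_opt, available_energy, maxlags=48):
    """Regress J_opt on R_n - dU_s/dt with autocorrelation-robust errors.

    j_opt            : estimated turbulent flux from maximum power (W m-2)
    available_energy : observed net radiation minus ground heat flux (W m-2)
    """
    X = sm.add_constant(available_energy)      # intercept + slope
    model = sm.OLS(j_opt, X)
    # HAC (Newey-West) covariance accounts for serially correlated residuals
    fit = model.fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return fit.params, fit.bse, fit.rsquared_adj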
This high level of agreement of the maximum power limit with diurnal heat storage changes suggests that it is an adequate description of surface energy balance partitioning and land-atmosphere exchange at the diurnal timescale, so that turbulent fluxes appear to operate near their thermodynamic limit. It further shows that it is critical to account for diurnal variations in heat storage in the thermodynamic limit to adequately represent the magnitude of the observed turbulent fluxes.

Discussion

Our approach, of course, only represents a general description of the full dynamics of surface-atmosphere exchange. Notable effects not considered in our approach that could alter the results and potentially modulate the outcome of the maximum power limit include a more detailed representation of radiative transfer, a distinction between the sensible and latent heat fluxes, which result in different forms of storage changes in the atmosphere, entrainment effects at the top of the boundary layer, advection and coupling to large-scale atmospheric processes, and a better representation of nighttime processes, particularly regarding the formation of stable conditions at night that prevent convection from occurring. These aspects can be explored further in future extensions. Yet even at this highly simplified level, the agreement of the estimated flux partitioning with observations is rather remarkable, indicating that the dominant forcing and the dominant constraints are captured by our approach.

Our results emphasize the importance of considering the constraint imposed by the second law of thermodynamics on land-atmosphere exchange. While the complex, turbulent nature of this exchange makes it seem almost impossible to describe its outcome in simple terms, the generation of turbulent kinetic energy that drives the diurnal development of the convective boundary layer is nevertheless constrained by thermodynamics. The very good agreement of our results with observations suggests that this constraint imposed by thermodynamics is relevant to this generation, and land-atmosphere exchange appears to operate near this thermodynamic limit. This is consistent with previous research that applied thermodynamics and/or heat engine frameworks to atmospheric motion, for instance approaches using the proposed principle of maximum entropy production (Paltridge, 1978; Ozawa and Ohmura, 1997; Lorenz et al., 2001; Ozawa et al., 2003) or applications to hurricanes and atmospheric convection (Emanuel, 1999; Pauluis and Held, 2002a, b). Note that our maximization of power is almost identical to the maximization of material entropy production, as we assume a steady state in which power equals dissipation (G = D), and entropy production by turbulence is then given by D/T, where T is the temperature at which dissipation occurs (with T ≈ T_s). Yet our approach differs in that it specifically considers the effect of heat storage changes in altering the thermodynamic limit, as well as feedbacks with the surface energy balance that alter the driving temperature difference of the heat engine. The heat storage changes in the lower atmosphere result in an additional term in the Carnot limit, and this can explain why the land-atmosphere system functions quite differently, with its pronounced diurnal variations in turbulent fluxes, than the temporally much more
uniform turbulent fluxes over open water surfaces (e.g., Liu et al., 2009; Kleidon, 2016; Kleidon and Renner, 2017). Thermodynamics combined with these two additional factors then provides sufficient constraints on the magnitude of turbulent heat fluxes. It would seem that this could provide valuable information to better parameterize turbulent fluxes within Monin-Obukhov similarity theory for unstable conditions, specifically regarding the stability functions that are used in this approach (e.g., as in Louis, 1979).

This insight that surface energy balance partitioning is predominantly determined by the local partitioning of the absorbed solar radiation is rather different from the way this exchange is commonly represented in climate models. In these models, surface exchange is parameterized using the aerodynamic bulk approach, in which the aerodynamic drag of the surface and horizontal wind speeds play a dominant role that is modulated by stability functions. Our approach differs in that solar radiation plays the dominant role in surface exchange by the local generation of buoyancy and power to drive convection, rather than wind speed and aerodynamic roughness, as the bulk method would suggest. A recent intercomparison between a number of commonly used land surface models (Best et al., 2015) shows, however, that land surface models using the bulk method generally underestimate the strong correlation of turbulent fluxes with downward solar radiation found in observations. Our approach can resolve this bias and suggests that the bulk method may underestimate the effect of the local forcing by solar radiation on surface-atmosphere exchange.

We think that our approach provides ample opportunities for future applications and research. First, the simple expression of how turbulent heat fluxes on land vary during the day, as given by Eq. (9), provides an easy way to obtain a first-order estimate. It could serve as a baseline estimate that is solely based on physical principles, specifically the first and second laws of thermodynamics, and does not require tuning. This expression should nevertheless be further evaluated in a broader range of climatological conditions and over extended time periods to identify possible shortcomings, for instance with respect to the simple parameterization of longwave radiation or regarding the omission of advective effects. For a broader range of applicability, the approach would need to be extended further to derive expressions for near-surface air temperature, which would be related to the changes in atmospheric heat storage (dU_a/dt), for the aerodynamic conductance, and for boundary layer development, and the turbulent heat fluxes should be separated into the fluxes of sensible and latent heat. It would also be instructive to compare the power associated with the limit with estimates of the turbulent kinetic energy generation rate from observations to develop another possibility for testing the maximization approach.
Our approach can then be used to evaluate aspects of global change analytically, such as land cover change or global warming, providing an alternative approach to these topics that complements complex, numerical modeling approaches. More generally, the success of our approach in reproducing observations very well constitutes another example of processes in complex systems appearing to evolve to and operate at their thermodynamic limit (Ozawa et al., 2003; Martyushev and Seleznev, 2006; Kleidon et al., 2010; Kleidon, 2016). This, in turn, encourages the application of thermodynamics to a broader range of questions and topics to understand the evolution and emergent dynamics of complex Earth systems.

Conclusions

We formulated a Carnot limit that accounts for heat storage changes within the atmospheric heat engine and used this limit to estimate the partitioning of the solar radiative forcing into radiative and turbulent cooling at the diurnal timescale. In contrast to common approaches to describing near-surface turbulent heat transfer into the atmosphere, we explicitly consider the constraint imposed by the second law of thermodynamics by treating turbulent heat fluxes and convection as the result of a heat engine. The maximization of the work output of this convective heat engine then yields estimates of turbulent fluxes that compare very well to observations across a range of climates and do not require empirical parameterizations. This demonstrates that our approach represents an adequate, general description of the land surface energy balance that uses only physical concepts and does not rely on semiempirical turbulence parameterizations.

We conclude that turbulent fluxes over land appear to operate near their thermodynamic limit, at which the power of the convective heat engine is maximized. This limit is shaped by the second law of thermodynamics, as in the case of the Carnot limit of a heat engine in classical thermodynamics, but it also requires the consideration of two additional factors that relate the heat engine to its environmental setting. The first factor relates to the strong diurnal variation of solar radiation, which results in diurnal heat storage changes. Over land, these changes are buffered primarily in the lower atmosphere, and they modulate the Carnot limit, resulting in a reduced efficiency and in what we referred to as a cold heat engine. Second, the limit of maximum power of the atmospheric heat engine is shaped by the trade-off in the driving temperature difference between the surface and the atmosphere, which decreases with greater turbulent heat fluxes. This trade-off results in the maximum power limit and represents a strong coupling between surface conditions and the lower atmosphere.

Overall, our study shows that thermodynamics adds a highly relevant constraint to land-atmosphere coupling. This thermodynamic approach to the surface energy balance and land-atmosphere interactions should help us to better understand the role of the land surface and terrestrial vegetation in the climate system and how they interact with global change.
Appendix A: Effects of radiative exchange on the limit of a cold heat engine

The derivation of the Carnot limit with heat storage changes in Sect. 2.1 assumed in the first law that the heat storage change within the heat engine is entirely caused by the heat flux J_in. When applying this approach to turbulent fluxes between the land surface and the atmosphere, one also needs to consider the net transport of energy by radiative exchange between the surface and the atmosphere. In the derivation above, this net exchange is represented by the flux R_l,net. This flux contributes to the heat storage change in the lower atmosphere, but it is not driven by the heat engine. This results in a small inconsistency when applying the limit of Sect. 2.1 to the lower atmosphere. In the following, we show that the limit derived in Sect. 2.1 is still valid. However, whether the lower atmosphere is opaque to longwave radiative transfer and absorbs R_l,net or whether it is instead transparent makes a difference in the justification, which is why we included this derivation here rather than in the main text.

In the following, we assume that the radiative-convective layer of the lower atmosphere is sufficiently opaque and absorbs the net longwave radiation of the surface, R_l,net. Then, the first law described by Eq. (1) becomes the energy balance of the lower atmosphere and changes to

dU_a/dt = J_in + R_l,net − R_l,out,   (A1)

where G = D and R_l,out = R_s,avg in steady state.

The second law (Eq. 2) obtains another term related to the entropy added by the warming due to the absorption of the net flux of longwave radiation, R_l,net. As this warming takes place at the prevailing physical temperature of the atmosphere (rather than the potential temperature), its temperature is likely closer to T_a than to T_e or T_s. Hence, the entropy budget changes to

(1/T_e) dU_a/dt = …   (A2)

As in Sect. 2.1, we can combine Eqs. (A1) and (A2), solve them for D (= G), and obtain a limit on the power (G) by assuming that the entropy production σ_irr = 0. The result is the same expression as Eq. (3), so that the effect of net longwave radiative transfer actually cancels out.

In the case in which the lower atmosphere is comparatively transparent to longwave radiation, the flux R_l,net passes through the lower atmosphere without being absorbed. In this case, Eqs. (1) to (4) remain unaffected.

Figure 1. Schematic diagram of the land-atmosphere system, where turbulent heat fluxes from the surface, J_in, act as the driver of an atmospheric heat engine that generates convective motion, which in turn sustains the heat fluxes. The heat source of the engine is the absorption of solar radiation at the surface, R_s, reduced by the net exchange of terrestrial radiation, R_l,net, which depends on surface temperature. The two critical effects that set the limit on how much work the engine can perform are illustrated in panel (b): diurnal changes in heat storage in the lower atmosphere due to the diurnal variation of solar radiation, and the reduction in surface temperature, T_s, due to greater turbulent heat fluxes, both lower the work output of the engine.
Figure 2. Diurnal changes in heat storage are reflected in variations of soil temperature near the surface and in variations of air temperature and humidity in the lower atmosphere. Panel (a) shows a schematic diagram of these heat storage changes. It shows a typical, colder nighttime profile with an inversion near the surface and a warmer daytime profile. The difference between the extremes of these temperature (and humidity) profiles (area shaded in light red) corresponds to the diurnal heat storage change in the lower atmosphere, dU_a/dt. Typical changes in belowground temperature profiles are also shown, with the heat storage change dU_s/dt marked in dark red. Panel (b) shows observations from Lindenberg, Germany, for the mean diurnal variation of absorbed solar radiation (shifted by its mean), R_s − R_s,avg, averaged for the month of June over the years 2006-2009 (red line, n = 480), the diurnal variation in heat storage in the lower atmosphere derived from 6-hourly radio soundings, dU_a/dt (blue boxes represent the interquartile range and the horizontal thick blue line the median), and the ground heat flux, dU_s/dt (orange line).

Table 1. Variables and parameters used in this study.
Immunotoxin Therapy for Lung Cancer

Introduction

Lung cancer is the leading cause of cancer-related deaths in both genders throughout the world. In the United States alone, there were 224,390 estimated new lung cancer cases and 158,080 estimated deaths in 2016. [1] As conventional chemotherapy has reached a plateau of effectiveness in lung cancers and fails in those tumors whose growth and metabolism can hardly be distinguished from normal tissues, innovative therapeutic strategies have been explored. The use of novel agents, for example, immunotherapies, has started to show promising potential in the field.

Immunotoxin-based therapeutics

Immunotoxin refers to a toxin whose targeting part is either an intact monoclonal antibody (mAb) or its fragment. The main function of an immunotoxin is to target specific cell surface molecules with a cytotoxic agent, which can be internalized to induce cell death by protein synthesis inhibition. In general, the essential parts of an immunotoxin are the binding unit (mAb or its fragment) and the cytotoxic unit (engineered toxin), which can be recombinantly linked together. Since immunotoxins function by directly killing the cells instead of inhibiting receptor-mediated signaling pathways, there is less chance for tumor cells to upregulate rescue mutations or alternative signaling pathways to resist the immunotoxin therapy. [3]

Plant toxins can be obtained from nature in the form of holotoxins and hemitoxins. Holotoxins contain a binding domain and an enzymatic domain linked by a disulfide bond. These toxins include ricin, modeccin, mistletoe lectin, and abrin. Compared to holotoxins, hemitoxins contain an enzymatic domain without a binding domain; they include pokeweed antiviral protein, gelonin, and saporin. [4] It has been proven that both holotoxins and hemitoxins are able to remove the base of A4324 in 28S rRNA so as to preclude the binding of elongation factors (EF)-1 and -2 to the 60S ribosomal subunit. [5,6]

Bacterial toxins are somewhat different. An important requirement of fusion toxins is that the catalytic domain has to be separated from the other parts intracellularly. The two commonly engineered bacterial toxins are Pseudomonas exotoxin (PE) and diphtheria toxin (DT). [7] Both consist of three functional domains that can be produced as single polypeptide chains. The binding domain, Domain I, is located at the N-terminus, while Domains II and III are located at the C-terminus. Domain II has translocation activity. Domain III catalyzes adenosine diphosphate-ribosylation and EF-2 inactivation, which inhibits protein synthesis and ultimately causes cell death. [7,8] It is reported that up to 300 ribosomes can be irrecoverably inhibited in 35 minutes by a single toxin molecule, which is toxic enough to destroy a cancer cell. [9-12]

After binding to the specific receptor, internalization of PE occurs through clathrin-coated pits into the endocytic compartment, and then PE is proteolytically cleaved between amino acids (AA) 270 and 280, with reduction of the disulfide bond connecting residues 265 and 287, which results in a 37-kDa fragment (AA 280-612) at the C-terminus. With the translocation domain, the fragment is then transported to the endoplasmic reticulum (ER), from which the catalytic domain is released into the cytosol to ribosylate EF-2, leading to its inactivation. DT's killing mechanism is similar but has fewer steps: from the endocytic compartment, DT goes directly into the cytosol to function.
DT has a different AA sequence, with a different enzymatic domain at the N-terminus. [2,13]

Development of immunotoxins

In the early 1980s, as mAbs began showing promise in the field of cancer therapy, Blythman et al. [14] first reported a novel immunotoxin that could kill cancer cells. However, the first-generation immunotoxins chemically conjugated a whole toxin to mAbs, which failed to distinguish the target cancer cells from normal cells due to multiple potential chemical conjugation sites and the presence of the cell-binding domain of the whole toxin, showing unfavorable results in animal models. [15] The second-generation immunotoxins removed the cell-binding domain from the toxin part, thus affecting a much smaller number of normal cells in animal models. [7] Nevertheless, the products were expensive to manufacture and not efficient enough to penetrate large and heterogeneous tumors, although the potency of this generation of immunotoxins was proven. [2] The third and latest generation of immunotoxins is designed to contain only the variable fragment (Fv) portion of a mAb for binding and the translocation and catalytic domains of toxins to kill tumor cells. The current production method for immunotoxins is to use Escherichia coli massively and cost-effectively. A disulfide bond or a peptide linker is engineered to link the heavy and light chains of the antibodies to form a single-chain variable fragment (scFv) or disulfide-stabilized scFv (scdsFv). [2,7] To date, more novel immunotoxins are being developed with features such as stronger potency, higher affinity and specificity, and lower immunogenicity.

Improving potency, affinity, and specificity

There are several main methods to improve the potency of immunotoxins, which include changing or mutating the toxin structure, assembling different fragments of antibodies, and changing the conjugation between the two parts. [8] Point mutation techniques are used in remodeling original toxins. [16] The binding domain of the toxin is removed or mutated to be non-effective, which results in much smaller constructs, such as PE38 (AA 253-364 and AA 381-613) and DT388 or DAB389 (the first 388 AA). [17-20] When the construct PE38 translocates into the cytosol, it transforms into components comprising AA 280-364 and AA 381-613, with only one cysteine residue at position 287. Moreover, with the modification of the antibody from full size to scFv, tumor penetration is further improved, leading to increased access to the tumor mass. [21] Nevertheless, reduced binding stability is found in this altered form, which has led to the use of an intrachain disulfide bond connecting the two Fvs of the immunoglobulin (heavy-chain variable domain and light-chain variable domain) to maintain stability and affinity. [3] In addition, PE gains increased potency when the carboxyl-terminal sequence REDLK is replaced with KDEL. The KDEL residues improve the cytotoxicity of PE by increasing binding to a sorting receptor that retrogradely transports the toxin from the trans-Golgi apparatus to the ER. [3]

Decreasing immunotoxin immunogenicity

One of the biggest challenges for immunotoxin therapy is its potential immunogenicity, which can originate from either the antibody part or the toxin part. [16] To avoid the generation of host anti-murine antibodies against the antibody part of an immunotoxin, this part can be further humanized or replaced by a fully human counterpart.
[22] As for the more immunogenic toxin domain, it can be reengineered by combining mutations that remove lymphocyte epitopes to significantly minimize the immunogenicity induced by a foreign toxin protein. [23] Immunotoxins with mutations at both B- and T-cell epitopes can theoretically eliminate the issue of immunogenicity. Mazor et al. [23] engineered the mesothelin-targeting immunotoxin LMB-T14 for patients with lung cancer by removing both B- and T-cell epitopes to achieve reduced immunogenicity while maintaining cytotoxicity. In addition, studies using immunosuppressant regimens along with the immunotoxins have also shown some benefit. [3,24,25] Other regimens, including a lymphocyte-depleting regimen consisting of pentostatin and cyclophosphamide, were also found promising in delaying the stimulation of neutralizing anti-immunotoxin antibodies, thus allowing repetitive immunotoxin treatments for patients with solid tumors. [25] Pentostatin and cyclophosphamide selectively suppress T- and B-cells while largely sparing myeloid cells. [26]

Reducing adverse effects

Immunotoxin-induced toxicity is either targeted or nonspecific. Vascular leak syndrome (VLS) is a typical nonspecific toxicity, caused by endothelial cell damage from a high concentration of immunotoxins. In this case, capillaries are injured, fluid leaks out and is retained in the tissues, causing edema, and the serum albumin level falls. VLS can usually be managed by adequate hydration. [16] It is reported that high-dose ricin-based immunotoxins induce severe vascular collapse. [2] Bacterial toxins may be better in terms of VLS, given that, compared to ricin toxin A chain (RTA), which can directly bind to endothelial cells, a ligand is required for modified PE to connect with the endothelium. In an animal model, a mutation in RTA decreased the occurrence of VLS. [27] Another source of toxicity is the unintended on-target effect due to the target antigens also being expressed on normal tissues. It has been reported that if organs with crucial functions, including the liver, neurons, and kidneys, express the same antigens targeted by immunotoxins, they will undergo immunotoxin-induced injury. [28-30] Thus, the selected target antigen should be highly specific to avoid targeting normal cells. [2]

Application in lung cancer treatment

Although there are accumulating data on immunotoxins targeting hematologic tumors, solid tumors (e.g., lung cancer) are much more difficult to treat. The tumor cells are highly condensed, with tighter junctions between cells. Furthermore, some researchers suggest that patients with these cancers are less immunosuppressed, less likely to become immunosuppressed with systemic treatment, and more likely to develop neutralizing antibodies to immunotoxins. [27]

Immunotoxin therapy for nonsmall cell lung cancer

MAb L6 is an immunotoxin that targets antigens expressed on human lung, breast, colon, and ovarian cancers. The antibody is chemically conjugated to the whole structure of ricin. In mouse studies, mAb L6 showed cell-killing effects in xenografted human lung adenocarcinoma. [31] Mesothelin is strongly expressed in many solid tumors, including lung adenocarcinoma and mesothelioma, but has low expression in normal mesothelium. [32-41] SS1P is an immunotoxin combining the SS1 anti-mesothelin antibody and PE38.
It is currently being combined with pentostatin and cyclophosphamide, for immune depletion to reduce its immunogenicity, in a phase II study in patients with lung adenocarcinoma and other mesothelin-positive cancers (NCT01362790). [42] A pilot study of SS1P showed that 3 of 10 treatment-refractory mesothelioma patients had major responses persisting more than 18 months. [25] The immunotoxin RG7787, which combines the humanized Fv fragment of SS1 and a modified PE fragment, has been reported to decrease tumor size in a xenograft mesothelin-expressing lung model. [16,43]

High expression of the Lewis Y antigen (Le^y) is found in many epithelial tumors. For instance, the Le^y antigen was found to be expressed in 80% of lung adenocarcinomas and 42% of squamous cell lung carcinomas in an immunohistochemistry analysis. [44] LMB-1 is an immunotoxin of mAb B3 (which reacts with the Le^y antigen) and PE38. [45] It is effective in colon cancer and breast cancer patients, with toxicity involving endothelial cells due to limited specificity. [46] Based on LMB-1, its derivative LMB-9 was later developed by combining the scdsFv of mAb B3 and PE38; it was used to treat several different types of advanced solid tumors, including recurrent nonsmall cell lung cancer (NSCLC) expressing the Le^y antigen (https://clinicaltrials.gov/ct2/show/NCT00019435). However, results from a phase I study did not show significant effectiveness. [2,47]

Naptumomab estafenatox, also known as ABR-217620, is an immunotoxin consisting of the antigen-binding fragment of a mAb targeting 5T4 and the superantigen Staphylococcal enterotoxin A. Over 95% of tumors from patients with NSCLC, renal cancer, and pancreatic cancer express the 5T4 antigen. Thirty-one patients, including 19 NSCLC patients, were enrolled in a phase I study and had moderate and tolerable side effects. Most patients in this study achieved stable disease. [48] Results from the updated MONO study as well as another phase I study combining docetaxel (the COMBO study) revealed that 36% and 38% of patients in the MONO study (51% of whom were NSCLC patients) and the COMBO study (100% of whom were NSCLC patients), respectively, reached stable disease (15% had a partial response in COMBO) at a 2-month follow-up, with maximum-tolerated doses of 26 μg/kg and 22 μg/kg, respectively. [49]

Immunotoxin therapy for small cell lung cancer

Similarly, immunotoxins based on mAbs SWA11 and SWA20, which target the human small cell lung cancer (SCLC) antigen clusters w4 and 5A, respectively, conjugated with RTA, have shown promising activity against SCLC in mice. [50-53] The mouse mAb BrE-3, targeting the polypeptide core of the antigen MUC1, [54] has been combined with RTA to form another immunotoxin, which has been reported to be effective in SCLC. [55] CD56, an antigen of the neural cell adhesion molecule family, is the SCLC cluster 1 antigen. The immunotoxin N901-bR, a fusion of the anti-CD56 antibody N901 and modified ricin, was reported to be potent against SCLC expressing CD56. [56] In a phase I trial, N901-bR was administered to a group of 21 relapsed or refractory SCLC patients. One partial response was reported. [57-59] However, the future use of this drug is limited by the results of a phase II study using the same regimen, in which one fatal case of progressive VLS occurred and all patients developed anti-immunotoxin antibodies, despite one stable disease and one complete remission lasting 3-4 months.
[60] HuD is a neuronal RNA-binding protein detected in all SCLC cells. Ehrlich et al. [61] assembled a type of immunotoxin (BW-2) containing a mouse anti-human-HuD mAb and streptavidin/saporin complexes. It was reported that intratumoral injection of the immunotoxin decreased local tumor progression in six xenograft mouse models of human SCLC without toxicity. [61,62]

Future directions

Improvements in therapeutic techniques that focus on the specific targeting potency and adverse effects of immunotoxins will be useful for lung cancer treatment. First, it is critical to identify new specific tumor antigens on lung cancer cells that can become potential targets. Fortunately, previous studies have revealed many tumor antigens expressed on lung cancer cells that can serve as potential targets, [63,64] including glycoproteins such as epithelial cell adhesion molecules, carcinoembryonic antigen, mucins, podoplanin (PDPN), and tumor-associated glycoprotein 72; growth and differentiation signaling receptors such as epidermal growth factor receptor (EGFR), human epidermal growth factor receptor (HER) 2, HER3, hepatocyte growth factor receptor, insulin-like growth factor 1 receptor, ephrin receptor A3, and tumor necrosis factor-related apoptosis-inducing ligand receptor 1; and stromal and extracellular matrix antigens such as fibroblast activation protein. Antibodies against these potential targets have been developed in previous studies and can be utilized to synthesize targeting immunotoxins to treat cancer cells expressing the specific antigens. For example, the NZ-1 and D2C7 immunotoxins have been developed to specifically target PDPN and EGFR overexpressed on the tumor cell surface, respectively, both of which show robust antitumor efficacy in preclinical studies. [65,66]

Currently, most literature about immunotoxins focuses on hematological malignancies and tumors restricted to a certain area (such as malignant brain tumors) because of the adverse effects of systemic administration of immunotoxins (e.g., the immunogenicity of the immunotoxin, off-target toxicity, and VLS). [16] Thus, it is important to optimize methods for safer and more efficient delivery of immunotoxins in lung cancer treatment. As an initial step, orthotopic murine lung cancer models have been established using either human xenograft lung cancer cells or Lewis lung carcinoma cells, providing a platform to investigate novel immunotoxins in orthotopic animal models. [67,68] Because of the rapid clearance of immunotoxins and their potential immunogenicity, immunotoxins are usually administered by locoregional delivery into the tumor site instead of via systemic delivery. [16,69] Thanks to the development of therapeutic and imaging techniques, Niu et al. [70] successfully injected antitumor agents percutaneously into the lung tumor site using a fine needle under computed tomography guidance without any serious adverse event in patients. With the application of locoregional administration, immunotoxin therapy through intratumoral delivery has already been used to successfully treat patients with glioblastoma, increasing the local drug concentration and minimizing systemic toxicity and immunogenicity. [71,72] Locoregional administration of immunotoxin therapy for the treatment of lung cancer can now move forward based on modern technical improvements.
Although immunotoxin monotherapy has been proven effective for the treatment of many malignant tumors, its antitumor efficacy can be further enhanced by appropriate combination strategies with other agents. Studies have shown that a type of newly developed immunotoxin may have better potency by exerting its cytotoxic effects through human-derived endogenous proteins, such as pro-apoptotic proteins or RNase. [73] The sensitivity of cancer cells to apoptosis will largely affect the cytotoxicity of these sorts of immunotoxins, and inactivation of p53 and upregulation of apoptosis inhibitors (e.g., B-cell lymphoma [Bcl]-2 and Bcl-xL) will lead to drug resistance. Thus, these immunotoxins may be more effective if combined with small-molecule inhibitors of anti-apoptotic proteins to sensitize cancer cells to apoptosis. [73] In addition, Leshem et al. [74] reported that combination therapy of the RG7787 immunotoxin with an anti-cytotoxic T-lymphocyte-associated protein 4 (CTLA4) antibody (an immune checkpoint inhibitor) led to a high rate of complete remissions in their breast cancer model, indicating that combinatorial therapy with immunotoxins and immune checkpoint inhibitors may promote the activation of antitumor immunity to achieve long-term tumor elimination. Currently, two anti-programmed cell death protein 1 (PD-1) immune checkpoint inhibitors, nivolumab and pembrolizumab, have been approved by the US Food and Drug Administration (FDA) to treat NSCLC, and they are potential candidates to be combined with immunotoxins to treat advanced NSCLC in the future.

In conclusion, the latest immunotoxins have emerged from many studies involving engineered immunotoxins that bind to tumor-surface epitopes with reduced in vivo toxicity and immunogenicity. In many preclinical and clinical studies, immunotoxins have displayed a different mechanism of tumor cell killing than traditional chemotherapy or radiation therapy. Further progress and improved clinical responses of immunotoxin therapy against lung cancer depend on the identification of new tumor targets and optimized administration methods to promote specificity and potency while minimizing adverse effects. Furthermore, immunotoxins may work synergistically with other therapeutics to enhance antitumor efficacy as a combinatorial therapy.

Acknowledgment

We would like to thank Jenna Lewis for editing our manuscript.

Financial support and sponsorship

The work was supported by grants from the Young Scientists Fund of the National Natural Science Foundation of China (No. 81401896) and the Pujiang Talent Program of the Shanghai Municipal Human Resource Bureau and Shanghai Science and Technology Committee (No. 14PJ1402000).
Impartial Spectator in the Marketplace of Ideas: The Principles of Adam Smith as an Ethical Basis for Regulation of Corporate Speech

The corporate voice is arguably the loudest in mass communication today and has been the subject of a series of landmark Supreme Court decisions since 1978. This integrative essay offers an ethical basis for justifying regulation of corporate speech, based on the neglected moral and political theories of Adam Smith. His essential tenets on free markets are applied to the First Amendment marketplace of ideas concept that has been prominent in developing corporate free-speech rights. This essay argues that regulation of corporate speech on this basis can actually enable more ideas to flourish in the political marketplace, advancing utilitarian ideals of the common good.

or political outcomes.6 Since Bellotti, federal and state governments have sought to regulate corporate speech without infringing the free-speech protection constitutionalized in that decision, in order to address the potentially damaging effects of corporate speech on democratic processes. However, the debate over whether justification exists for such regulation has continued. That debate centers on the question of whether regulation of corporate speech advances or diminishes free speech in a democratic society.

The purpose of this integrative essay is to offer an ethical basis for justifying regulation of corporate speech, a basis derived from the theories of Adam Smith. Though he is more widely known for his economic principles, Smith was "a moral philosopher by profession, and his writings deal with ethics as much as economics."7 Popular modern images of the eighteenth-century Scottish philosopher as simply a conservative economist and free-trade theorist neglect the larger thrust of his thinking.

Criticism of corporate-speech regulation contends that such regulation is not justified and is not in the best interests of society. However, this study draws on Smith's concepts to argue the opposite. Essentially, Smith called for limiting government in order to allow the motivation of self-interest to flourish and generate material benefits for society, because that advanced the utilitarian value of the common good, Smith's ultimate concern. However, he also emphasized that a system of justice was essential to protect all members of society as well as possible, including protecting free markets from domination by the most powerful business interests.

The approach used in this study conceptualizes ethics as a rational process, based upon underlying principles, for addressing values in conflict. Smith's principles are argued here as an ethical basis for considering the conflicting values reflected in the debate over regulation of corporate speech. This approach draws upon utilitarianism, a school of thought from the teleological branch of ethics, which begins with the premise that consequences are important in deciding whether an act or a rule is ethical. Originally articulated by Jeremy Bentham and John Stuart Mill in the nineteenth century, utilitarianism proposes that justice can be determined through a process of ethical reasoning that considers the degree to which an action contributes to the greater societal good. "Act utilitarianism" is concerned with the ethics of specific decisions, while "rule utilitarianism" deals more broadly with the ethical justification for societal practices or institutions. The latter concept is employed in this study.8
Ethics and Law

While ethics and law are separate concerns in one sense (law being concerned more with what is, and ethics more with what ought to be), they are hardly unrelated. Ethical considerations must underlie the development and interpretation of law in order for justice to be served, particularly when competing values are at stake in such ways that the letter of the law does not offer clear resolution. This study's application of ethical principles as justification for legal doctrine broadly reflects the Smithian concept of such a relationship between ethics and law. Behrman, for example, analyzes Smith's work in terms of key institutions based upon societal values and expressed ethics (or values in action) designed to balance and maximize individual freedom and social good.9 Niebuhr characterized law as "a compromise between moral ideas and practical possibilities."10

Broad philosophical concepts underlie or support many fundamentals of the law. American constitutional law, for example, begins with a priority on promoting the common welfare. First Amendment law reflects a philosophical interest in advancing such values as democratic governance, the search for truth, and individual fulfillment through freedom of expression.11 First Amendment issues frequently involve complicated questions of competing values, and this is very much the case in the debate over regulation of corporate speech. Therefore, this study seeks to advance the debate through an analysis of Smithian theory. That analysis supports a rule-utilitarianism argument that regulation of corporate speech is ethically sound in terms of the degree to which it contributes to the greater societal good.

In particular, this discussion will argue that Smith's essential tenets on free markets can be applied to the First Amendment marketplace-of-ideas concept12 that has been prominent in shaping free-speech rights for corporations. That is, Smith's concept of individuals competing equally in a free market toward the greatest good for society can be applied to the concept of ideas competing in a free market. His principles consistently provide support for this study's assertion that regulation efforts related to corporate speech do not reduce ideas in the political marketplace but enable more ideas to flourish, advancing utilitarian ideals of the common good.

Adam Smith

As a professor of moral philosophy at Glasgow University in the second half of the eighteenth century, Adam Smith lectured in theology, ethics, and jurisprudence. He became part of a circle of scholars at the Scottish universities at Glasgow and Edinburgh whose work is often referred to today as "the Scottish Historical School" or "the Scottish School."13 Broadly, their work analyzed historical changes in concepts of property and the effects of such changes on society. Smith became famous with the 1759 publication of The Theory of Moral Sentiments, in which he focused on ethical theory. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, Smith advanced his economic theories. After his death in 1790, students' notes from his lectures at Glasgow were published as Lectures on Jurisprudence, which focused on his theories of justice.
Smith's main interest was in investigating "whether the effects of market commercial society were good or bad for individuals and government," and his "ultimate normative goal [was] the improvement of men and government."14 Thus, "[t]he political significance of Smith's writings derives from his concern not only for the economic wealth of the nation, but also for the well-being of society as a whole and for the freedom of the individual within that society."15 Yet the popular image of Smith remains that of an economist who advocated the pursuit of profits governed by nothing but the "invisible hand."16 Although Smith actually based his economic theories upon his theories of jurisprudence, and those in turn were based upon his moral theories, "Smith's modern followers tend to be economists without a strong sense of civic life, and so that is how his admirers and detractors see Smith himself."17 As a result, "[e]ven in its traditional, run-of-the-classroom versions, the prevailing view of Adam Smith's philosophy renders him far more like Boesky and Gekko than even the most rabid reading allows."18

After Smith's death, the interpretation of his ideas was taken up by many who were interested only in his work on political economy.19 Wealth of Nations was published the same year that American revolutionaries declared independence, and the book was influential in the new nation. However, too many Americans learned economics from texts that tended to "distort both Smith's moral theory and his economics. These texts emphasized laissez-faire, a word Smith did not use, and competitive individualism, at the cost of the benevolence and justice which Smith emphasized."20 Because of these factors, "[t]he main impact of Wealth of Nations was to establish a powerful economic justification for the untrammeled pursuit of individual self-interest."21

This essay argues that Smith's work in fact justifies the regulation of self-interest, but only to the extent that such pursuit endangers the common good. The next section briefly summarizes the corporate-speech case law as established by Bellotti and related decisions. That is followed by a discussion of issues under debate concerning regulation of corporate speech. Then Smith's work is analyzed in terms of the ethical arguments it supports concerning regulation of corporate speech.

Corporate Speech and the First Amendment

One of the major arguments asserted by the government in Bellotti as justification for regulation of corporate speech in referenda campaigns in Massachusetts was an interest in sustaining the active role of individual citizens in the electoral process and maintaining citizens' confidence in government. The government contended that the wealth and power of corporations could drown out other points of view and undermine democratic processes. The majority and minority on the Court split sharply over that issue in the 5-4 decision, with the majority emphasizing the listener's First Amendment right to receive information on the theory that it contributes to democratic decision making. Thus, the Court restricted government from limiting the marketplace's range of information and ideas, including corporate speech, to which the public is exposed.22

The Court has maintained this restriction on government in most areas of corporate speech. In Central Hudson Gas & Electric Corp. v.
Public Service Commission23 in 1980, the question involved the right of government to regulate corporate speech relating to the 1970s energy crisis.24 Although the case is more often discussed today in terms of the balancing test it established for the protection of commercial speech,25 Justice John Paul Stevens argued in his concurring opinion that the regulation in question suppressed corporate political speech beyond commercial speech because the banned speech could address questions under debate by political leaders.26 The Supreme Court ruled 8-1 that the regulation was not justified, even though it advanced the government's substantial interest in conserving energy, because it was more extensive than necessary to further that interest.27

Another New York utility corporation successfully asserted First Amendment interests in Consolidated Edison Co. of New York v. Public Service Commission of New York.28 The Court based its decision in part on the Bellotti holding that "the inherent worth of the speech in terms of its capacity for informing the public does not depend upon the identity of its source, whether corporation, association, union, or individual."29

A corporate newsletter published by Pacific Gas and Electric Company was the focal point of the controversy that produced Pacific Gas & Electric Co. v. Public Utilities Commission.30 When a consumer group complained to the Public Utilities Commission of California that the newsletter sometimes included items that could be considered political comment, the commission ruled that such groups could have access to "extra space" in the billing envelope.31 PG&E contended the order violated its First Amendment rights, and the Supreme Court agreed, finding in 1986 that the regulation impermissibly burdened the utility's free-speech rights by requiring it to associate with speech with which it might not agree.32

However, the Supreme Court has allowed government regulation that targets corporate speech in order to address corruption or the appearance of corruption. In Federal Election Commission v. National Right to Work Committee, decided in 1982,33 a challenge was brought against a federal campaign regulation that was designed to ensure that money used by corporations in political activity represented the speech interests of those whose money was involved.34 The Supreme Court upheld the regulation, unanimously ruling that the interests the government sought to protect were compelling enough to outweigh the corporate First Amendment rights of association asserted by NRWC. The decision described the regulation as the culmination of a "careful legislative adjustment of the federal electoral laws . . . to prevent both actual and apparent corruption . . . [reflecting] a legislative judgment that the special characteristics of the corporate structure require particularly careful regulation."35

In Federal Election Commission v. Massachusetts Citizens for Life36 in 1986, the Court upheld the principle underlying the same regulation at issue in NRWC: that corporations amassing great wealth in the economic marketplace should not gain unfair advantage in the political marketplace. (However, the Court ruled that the regulation did not apply to Massachusetts Citizens for Life, a nonprofit corporation devoted to ending abortion, because it was formed to disseminate political ideas rather than to amass capital.)37 In 1990's Austin v.
Michigan State Chamber of Commerce, the most recent corporate-speech case, the Court upheld a Michigan regulation addressing, in the words of the Court, "the corrosive and distorting effects of immense aggregations of wealth that are accumulated with the help of the corporate form and that have little or no correlation to the public's support for the corporation's political ideas. . . . It ensures that expenditures reflect actual public support for the political ideas espoused by corporations." Reasoning that "corporate wealth can unfairly influence elections when it is deployed in the form of independent expenditures, just as it can when it assumes the guise of political contributions," the Court held that state governments may regulate independent expenditures by corporations.38

In summary, the constitutional right to corporate speech established in Bellotti was reinforced in Central Hudson, Consolidated Edison, and Pacific Gas & Electric. It was tightened or focused in NRWC, MCFL, and Austin, clarifying the degree to which corporate First Amendment rights are less than those of individuals. The case law emphasizes that regulation of corporate speech should target prevention of corruption or the appearance of corruption, and that political speech by corporations should reflect public support, not just the economic power of the corporation. However, the Court has maintained a strong aversion to any content regulation of corporate speech. The government is required to establish compelling justification that a regulation of corporate speech addresses some form of real or apparent corruption in order to establish constitutionality. Parallels between Smithian theory and the Court's holdings and language in corporate-speech decisions will be detailed below, after discussions of the scholarly debate over corporate speech and of Smithian ethics in more depth.

Critical themes that run through the literature on corporate speech highlight the debate over whether such speech undermines or advances democratic processes. The broad issues under contention have been characterized as an attempt to resolve a core philosophical conflict between liberal democratic ideology and organizational values,39 or between individual freedom of expression and social utility.40 Rome and Roberts characterized the conflict as one between (1) the belief that corporate expression differs from individual expression to such an extent that it should receive lesser or no First Amendment protection, and (2) the belief that "protection of every species of expression . . . not only is protection of the right of the speaker but . . . is at least in part, for the benefit of listeners or recipients."41

Both camps are well represented in the literature. Greenwood dismissed corporate speech as antithetical to the basic principles of democracy and deserving of no constitutional protection.42 Deetz asserted that in Bellotti, "the corporation is given rights like those of an individual, [but] the individual is not given the expression power of the corporation," and argued corporate influence is maintained through "a colonization of public decision making."43 Many scholars writing in response to the Bellotti decision predicted correctly that it would greatly restrict state regulation of corporate spending to influence referenda voting.
"It will be difficult to draft a state statute whose burden would be held a reasonable one," Fox f0recast.4~ Hart and Shore foresaw corporations of all sorts becoming "even more involved in electoral p o l i t i~s . "~~ Baker predicted that Bellotti would seriously undermine citizens' influence on democracy because corporate speech is dictated by profit-maximizing mandates of the market, not the human values of indi~iduals.4~ Prentice anticipated, accurately, it has turned out, that Bellot ti would allow government regulation of corporate electoral activity "when that activity is based upon solid evidence that the target activity would lead to corruption or the appearance of it." 50 Friedman and May strongly advocated such regulatory efforts: "If it weren't for our ultimate political sovereignty [as individuals], then our political speech would not have the constitutional importance which it now has.. . . Corporations are not, as such, sovereign members of our civil society. They exist at the sufferance of law and judicial ruling."51 Gowri also argued against unregulated corporate speech, proposing a system whereby corporations would disclose all political spending and offer rebates to shareholders who did not approve of the causes supported "If corporate speech could be brought into closer alignment with shareholder views, then the voice speaking to hearers in the marketplace would be closer to a human voice; and the loud competing views confronting other speakers would be closer to human views."52 On the other side of the debate, proponents have maintained it is healthier to free the corporate voice than to stifle it. Foreshadowing Bellotti a decade before the Supreme Court decision, Epstein wrote that "the expanding importance of governmental involvement in the operations of the economy . . . has resulted in the necessity of increased corporate political involvement." He argued that corporations "should be placed on a legal parity with other social interests" because the corporation "contributes to the maintenance of pluralistic democracy in America rather than endangers it."53 Barry contended that corporations serve a vital political function in a democratic society, that of upholding the property and contractual rights of their stockholders through lawful expansion of profits.54 Redish and Wasserman asserted that constitutional protection for corporate political speech fulfills First Amendment values because "the corporate form performs an important democratic function in facilitating the personal self-realization of the individuals who have made the voluntary choice to make use of it," and because "corporate speech may serve a vital role in checking potential government excesses."55 Some scholars have asserted that other forces are more effective than government at promoting responsible corporate speech. 
There is "a large social interest in hearing what corporations have to say about public issues," Sunstein argued, emphasizing that "no one is forced to believe what the corporations claim."56 To that end, individuals will be made aware by alternate sources if a corporation's actions are incongru-ent with its messages, Sethi c0ntended.5~ Butler and Ribstein made the case that "Corporate power may, in fact, better represent voter support than the groups that would gain from a reallocation of power," because "corporate speech must conform at least generally with the views of a cross-section of the community" or risk alienating shareholders, consumers, employees, and other publics critical to the success of the organization. 58 Ramler condemned the Austin decision, reasoning that restricting corporate freedom of speech would decrease "the amount of information upon which voters may rely to make intelligent decisions about the officials who will represent them in g~v e r n m e n t . "~~ Schofield predicted that Austin would make the First Amendment protection granted to corporate political speech by Bellotti "very easy to circumvent for a state that wishes to regulate 'free' political speech," contending that Austin "may have weakened other fundamental rights that are currently protected by the 'compelling state interest' requirement."60 Geary criticized Austin's definition of corruption as too broad. 61 In summary, the ideological debate regarding First Amendment protection for corporate speech is highlighted by sharp disagreement as to whether regulation of such speech enhances or diminishes the greater interests of a democratic society. This study advances the discussion by asserting justification for regulation of corporate speech based on ethical principles. Adam Smith's principles provide ethical direction by employing free-market theory to promote utilitarian ideals of the greater common good. In the next section, Smith's essential tenets on free markets are applied to the concept of the First Amendment marketplace of ideas. The argument is made that regulation efforts related to corporate speech canwork to expand the marketplace of ideas and enable more ideas to flourish, thus enhancing democratic processes and the common good. Given that Smith developed his ideas in a pre-capitalist, predemocratic world, his comprehensive eighteenth-century prescriptives cannot simply be transposed whole upon twenty-first-century corporate behavior. However, Smiths enduring principles remain useful in considering ethical issues today, particularly the corporate-speech issues addressed in this study. In fact, the frequent distortion of Smith's ideas reflects a failure to place them in their proper historical context and thus a crucial lack of awareness of the dominant economic realities under which he wrote. Smith's economic theories emphasized market forces and consumer autonomy as an alternative to the political economy of mercantilism. His system, which later would be referred to as capitalism, was "as revolutionary a concept with respect to the dominant mercantilism of its day as Marx's communism was to the capitalism of the mid-nineteenth century."62 The greatest priority of the economic system of mercantilism was enriching the nation-state, basically by maximizing exports and minimizing imports. 
Thus, "[m]ercantilism benefited producers and entrenched interests at the expense of consumers and the growing Considering Corporate Speech within the Ethics of Adam Smith middle classes, who were forced to pay inflated prices for domestically produced goods which were shielded from foreign competition by various protectionist mechanism^."^^ As Smith wrote: "It cannot be very difficult to determinewho havebeen the contrivers of this whole mercantile system; not the customers, we may believe, whose interest has been entirely neglected; but the producers whose interest has been so carefully attended to."64 So Smith was reacting against mercantilism, not defending the modern-day, private-enterprise system.65 The driving force behind Smith's vigorous critique of the status quo was his desire to improve the harsh living conditions he saw in Scotland and England."66 Rather than blaming the poor for their misfortunes, as mercantilist theory did, Smith blamed the economic system. To that end he called for abandoning mercantilist policies of sanctioning monopolies, putting quotas on imports, regulating tradesmen, and restricting other aspects of economic behavior. Smith opposed that sort of government regulation because it privileged the few at the expense of the many and prevented most from competing fairly in a free market. Thus we find "the thread that runs through all his works" is "how the market can be structured to make the pursuit of self-interest benefit When we consider the marketplace of ideas in terms of Smith's free market, it is clear that openness for all competitors and consumers is the priority. However, the interests of the most powerful competitors may work against openness. As Smith observed, "The interest of the dealers . . . in any particular branch of trade or manufactures, is always in some respects different from, and even opposite to, that of the publick. To widen the market and to narrow the competition, is always the interest of the dealers. To widen the market may frequently be agreeable enough to the interest of the publick; but to narrow the competition must always be against it."68 Smith's use of the term "invisible h a n d in Wealth of Nations does not represent a blanket defense of untrammeled self-interest, as it is often characterized in arguments against the regulation of business. Smiths "invisible h a n d represents instead a metaphor for the socially positive but unintended consequences that, as he theorized it, paradoxically can result from the pursuit of self-interest in market activity. Bishop explained that Smith found it desirable to allow individuals to base their economic choices on self-interest because the "constant drive of most people to improve their economic and social condition provided the incentive for individuals to direct their economic activities towards wealth production, and this ultimately would increase the overall wealth of society."69 Thus Smith was a champion of individualism, but not to the extent that its excesses destroyed community. He did not, after all, title his most famous work The Wealth oflndividuals. Smith's concept of limited government encouraging individual self-interest to flourish was based on what he saw as the relentless passion of humans for "bettering our condition, a desire which, though generally calm and dispassionate, comes with us from the womb, and never leaves us till we go in the grave. . . . 
In the whole interval which separates those two moments, there is scarce perhaps a single instant in which any man is so perfectly and completely satisfied with his situation, as to be without any wish of alteration or improvement of any kind."70

Thomas Jefferson regarded The Wealth of Nations as the best book available on political economy,71 and he and others envisioned Smith's concept of self-interest being developed in terms of both economic and political involvement for all: "Self-interest could only be accounted socially benign if it could be demonstrated that all this incessant striving after private ends did not lead to chaos. . . . [Smith theorized that] the urge to improve oneself through profitable exchanges prompted each to commit her and his resources most advantageously, and when disciplined by competition, led inexorably . . ."72

Smith's emphasis on equality of economic and political opportunity does not "imply that Smith favored equality of outcomes. Clearly he did not, nor did he think that such equality would be a result of a free-market economy. But the market is most efficient and most fair when there is competition among similarly matched parties."73 In that vein, this study asserts that the marketplace of ideas will operate more efficiently and fairly when competing parties have similar opportunities to communicate ideas, and that a free marketplace of ideas contributes to utilitarian ideals of the greater common good.

This essay does not suggest that any Supreme Court holdings in corporate-speech cases have been based expressly upon Smithian principles. In his dissenting opinion in the Central Hudson case, Justice William Rehnquist did make a brief reference to the concept of the marketplace of ideas being analogous to Smith's concept of a free economic market.74 However, none of the other justices has cited Smith or specifically articulated any of Smith's concepts in a corporate-speech decision. That said, this study maintains that when we apply Smithian principles to corporate speech in the manner articulated in this essay, we find that such application is consistent with the holdings and language of the Supreme Court's corporate-speech decisions.

Smithian Ethical Balance and the Supreme Court

When characterized in terms of Smith's concepts, Supreme Court decisions on corporate speech represent an ongoing process aimed at preventing stronger competitors from diminishing freedom within the marketplace of ideas. In Bellotti and related decisions, the Court has maintained a difficult Smithian ethical balance regarding regulation of the corporate voice. The Court's corporate-speech decisions have emphasized preventing corruption and ensuring that corporate political speech represents public support. At the same time, those decisions stress a limiting of government, which promotes opportunity for the expression of the self-interest reflected in corporate speech. Such a balance advances ethical concerns, in Smithian theory, because it works in the long-term interests of society at large by resisting the narrowing of the marketplace.

Prominent throughout Smith's work is the community-oriented concept of the "impartial spectator," articulated at length in his Theory of Moral Sentiments: "It is [the impartial spectator] who, whenever we are about to act so as to affect the happiness of others, calls to us, with a voice capable of astonishing the most presumptuous of our passions, that we are but one of the multitude, in no respect better than any other in it. . . .
It is he who shews us the propriety of generosity and the deformity of injustice; the propriety of resigning the greatest interests of our own for the yet greater interests of others."75 This concept is crucial to Smith's concept of justice advancing the common good, which is equally central to both Theory of Moral Sentiments and Wealth of Nations.76 In the former, Smith wrote that a competitor "may run as hard as he can, and strain every nerve and every muscle, in order to outstrip all his competitors. But if he should jostle, or throw down any of them, the indulgence of the spectators is entirely at an end. It is a violation of fair play, which they cannot admit of."77 And in Wealth of Nations, he wrote that government is responsible for "protecting, as far as possible, every member of the society from the injustice or oppression of every other member of it."78 Smith asserted the essential duties of government as external defense, justice, and public works.79 In this element of Smithian theory, "Laws of justice act as an impartial spectator. . . . The invisible hand, then, is a dependent, not an independent variable."80

Thus it distorts Smith's work to focus only on the self-interest concepts while ignoring the emphasis he placed on their context within a system of social justice. A system of justice, serving the function of the impartial spectator, must maintain a free market that protects "as far as possible, every member of the society from the injustice or oppression of every other member of it,"81 emphasizing liberty, competition, fair play, and limited government. The Supreme Court's corporate-speech decisions as a whole have served an "impartial spectator" function, seeking to ensure that some competitors in the marketplace of ideas do not "jostle, or throw down" any of the other competitors, as well as to prevent government from stifling expression of the self-interest reflected in corporate speech. Thus we see that Smith's theory of free markets and its application here to the marketplace of ideas are ethically consistent with justice related to corporate speech.

The language of the Court in its corporate-speech decisions offers parallels between Smith's concept of individuals competing equally in a free market toward the greatest good for society and the concept of ideas competing in a free market. In the Bellotti decision, the Court made it clear that any regulation of corporate speech could not be based solely on the corporate identity of the speaker, finding "no support in the First or Fourteenth Amendment, or in the decisions of this Court, for the proposition that speech that otherwise would be within the protection of the First Amendment loses that protection simply because its source is a corporation."82 The Court's decision struck down a Massachusetts statute prohibiting corporations from campaigning to influence the outcome of referenda that did not materially affect their business interests.
The majority held that speech concerning the issues in a referendum on a state constitutional amendment is the type of speech indispensable to decision making in a democracy, and that corporate speech does not present a potential for corruption in referendum campaigns, which focus on issues rather than individuals, because the former cannot be corrupted by political debt as the latter may.83 Thus, in Smithian terms, the Court began in Bellotti to define when the corporate speaker is only running as hard as he can within the bounds of fair play in the marketplace of ideas, and thus is not subject to government interference.

Similarly, the Court held in the Central Hudson case that the New York Public Service Commission could not ban promotional advertising by electric-utility corporations in an effort to reduce energy consumption, because the government failed to show that a more limited regulation would not protect its interest in conservation.84 In the Consolidated Edison case, the Court ruled that the state's Public Service Commission could not prohibit public utility corporations from discussing controversial issues of public policy in monthly utility-bill inserts. Allowing the government to determine what material was useful to consumers and what was not clearly represented content regulation of political speech, the Court said, a practice unconstitutional even when the source of such speech is a corporation.85 In the Pacific Gas & Electric decision, the Court held that a corporation could not be forced to associate with speech with which it might not agree by subjecting it to a regulation requiring the corporation to include competing political messages in mailings of a corporate newsletter, "speech that the First Amendment is designed to protect."86

In all those cases, the Court deemed the expressions of corporate speech involved to be, as Smithian theory would characterize it, fair play in the marketplace of ideas. In particular, the cases firmly established First Amendment protection for corporate speech from content regulation by government. As Justice Powell wrote in Central Hudson: "If the marketplace of ideas is to remain free and open, governments must not be allowed to choose 'which issues are worth discussing or debating.'"87

However, consistent with Smithian theory that the indulgence of the spectators may end if a violation of fair play should occur, the Court has delineated ways in which corporate speech may corrupt the marketplace of ideas. In Bellotti, the Court said: "According to [the government], corporations are wealthy and powerful and their views may drown out other points of view. If [these] arguments were supported by record or legislative findings that corporate advocacy threatened imminently to undermine democratic processes, thereby denigrating rather than serving First Amendment interests, these arguments would merit our consideration." In that decision, the Court held there was no showing of democratic processes being undermined.88 However, in later corporate-speech cases, the Court did in fact deem that the undermining of democratic processes was addressed by the regulations at issue. In the NRWC decision, for example, the Court noted: "The governmental interest in preventing both actual corruption and the appearance of corruption of elected representatives has long been recognized, and there is no reason why it may not in this case be accomplished by treating unions, corporations and similar organizations differently from individuals."89
In Buckley v. Valeo in 1976, the Court had defined corruption as "a subversion of the political process" in which "[e]lected officials are influenced to act contrary to their obligations of office."90 In NRWC, the Court accepted the government's assertion that the regulation of corporate speech under question in the case ensured "that substantial aggregations of wealth amassed by the special advantages which go with the corporate form of organization should not be converted into political 'war chests' which could be used to incur political debts from legislators."91 In Smithian theory, such corruption would work to narrow the political marketplace of ideas because democratic processes would then be influenced more by unfair corporate influence on elected officials than by ideas competing freely.

In language strikingly resonant of the Smithian emphasis on government maintaining fairness in the marketplace of ideas, the Court in the MCFL decision declared: "Resources amassed in the economic marketplace may be used to provide an unfair advantage in the political marketplace. Political 'free trade' does not necessarily require that all who participate in the political marketplace do so with exactly equal resources. . . . Relative availability of funds is after all a rough barometer of public support. The resources in the treasury of a business corporation, however, are not an indication of popular support for the corporation's political ideas. They reflect instead the economically motivated decisions of investors and customers. . . . [T]hese resources may make a corporation a formidable political presence, even though the power of the corporation may be no reflection of the power of its ideas."92

In this assertion, the Court sought to prevent competitors with advantages in the economic marketplace (such as the corporation's limited liability and perpetual life) from utilizing those advantages to unfairly diminish the freedom of the marketplace of ideas. "This concern over the corrosive influence of concentrated corporate wealth reflects the conviction that it is important to protect the integrity of the marketplace of political ideas. . . . By requiring that corporate independent expenditures be financed through a political committee expressly established to engage in campaign spending, . . . [the regulation in question] seeks to prevent this threat to the political marketplace," wrote Justice William J. Brennan, Jr.93

In the Austin decision, the Court concluded, "Michigan identified as a serious danger the significant possibility that corporate political expenditures will undermine the integrity of the political process, and it has implemented a narrowly tailored solution to that problem." The Court held that the regulation requiring corporations to make campaign expenditures through separate funds solicited expressly for political purposes reduced the threat that "huge corporate treasuries amassed with the aid of favorable state laws will be used to influence unfairly the outcome of elections."94 These assertions are clearly consistent with Smithian theory. Allowing corporate power to undermine the integrity of democratic processes would unfairly distort the political marketplace of ideas and work against the utilitarian ideals of the common good that are represented in maintaining the freedom of that marketplace. That the Supreme Court has found potential in corporate speech for corruption of democratic processes would not surprise Smith.
He "expected that concentrated economic resources could be readily translated into political influence, which he considered similar to other commodities for which there was a supply and demand."95 He believed that keeping markets free actually involved making sure powerful business interests did not use their influence to overwhelm the freedom of the market. "Smith thought . . . in particular, if business people pursued their self-interest in the political arena, they would only seek the overthrow of the free market system for their own benefit and everyone else's Smith found merchants and manufacturers "an order of men whose interest is never exactly the same with that of the public." 97 The arguments and evidence outlined above establish the usefulness of Smith's theories in providing an ethical basis justifying regulation of corporate speech-a basis that is consistent with the reasoning in Supreme Court cases upholding such regulation. Smith championed the concept of individuals competing with equal opportunities in a free market as a process that advanced utilitarian ideals of the greater good for society. The principles underlying that concept can be applied to the First Amendment concept of ideas competing in a free market, and doing so provides ethical justification for regulating corporate speech in an effort to protect democratic processes. In such an application, the justice system is crucial in acting as an "impartial spectator" to ensure that some competitors in the marketplace of ideas do not dominate it and disadvantage other competitors -but also to prevent government from stifling expression of the self-interest reflected in corporate speech. In its corporate-speech decisions, the Supreme Court has particularly emphasized preventing corruption and ensuring that corporate speech represents public support. The holdings and language of the Court in its corporate-speech decisions reflect Smith's "impartial spectator" function at work in the marketplace of Conczusion ideas, seeking to protect "as far as possible, every member of the society from the injustice or oppression of every other member of it,"98 and stressing fair play by competitors in that marketplace. Assessing regulation of corporate speech within these parameters provides ethical direction on Smith's terms, employing free-market theory in the manner he actually intended-as a means for advancing utilitarian ideals of the greater common good. 630 (1919), Justice Oliver Wendell Holmes articulated the reasoning underlying this enduring concept in law: "The best test of truth is the power of the thought to get itself accepted in the competition of the market." In his First Inaugural Address, Thomas Jefferson extolled "the safety with which error of opinion be tolerated where reason is left free to combat it." John Gabriel Hunt, ed., The Essential Thomas Jefferson (New York: Gramercy, 1984), 199. This concept, that truth naturally overcomes falsehood when they are allowed to compete, derives from Enlightenment philosophy regarding the value of free exchange of ideas and has been prominent in American discourse on freedom of speech and press since before the nation's founding. See Jeffrey A. Smith 16. "As every individual, therefore, endeavours as much as he can to both employ his capital in the support of domestic industry, and so to direct that industry that its produce may be of the greatest value; every individual necessarily labours to render the annual revenue of the society as great as he can. 
He generally, indeed, neither intends to promote the publick interest, nor knows how much he is promoting it. . . . [H]e intends only his own security. . . . [H]e intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it." Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations.

18. Robert C. Solomon, Ethics and Excellence: Cooperation and Integrity (New York: Oxford University Press, 1992), 85. The reference "more like Boesky and Gekko" is shorthand for the "greed is good" philosophy espoused by 1980s Wall Street icons Ivan Boesky, whose deals made him a multimillionaire before he was sent to prison for insider trading, and Gordon Gekko, the fictional character with a similar story in Oliver Stone's 1987 film, Wall Street. In Solomon's scathing rejection of associating that philosophy with Smith: "First, there is nothing in Smith's work that would even for a moment suggest that 'greed is good,' and the 'invisible hand' metaphor . . . plays a much smaller role in Smith's view of the market and morality than is usually implied. Second, Smith . . . emphasizes the importance of institutions and social and interpersonal relationships much more than he does our concern for individual self-interest, though he does not, of course, deny the latter. Third, Smith's notion of 'self-interest' is not at all the asocial or antisocial sentiment it is usually made out to be. . . . Doing good is a matter neither of duty nor compulsion . . . but a genuine source of pleasure for its own sake."

19. J. Ralph Lindgren, The Social Philosophy of Adam Smith ( . . . ).

". . . speech cases, then, a four-part analysis has developed. At the outset, we must determine whether the expression is protected by the First Amendment. For commercial speech to come within that provision, it at least must concern lawful activity and not be misleading. Next, we ask whether the asserted governmental interest is substantial. If both inquiries yield positive answers, we must determine whether the regulation directly advances the governmental interest asserted, and whether it is not more extensive than is necessary to serve that interest."

26. . . . sending the bills by first-class mail at a minimum cost of between 16.5 and 17 cents each. The billing material weighed less than one ounce, so there was "extra space" in which additional material could be mailed for no additional postage.

. . . 263-264. In its decision, the Court established a three-part test to distinguish ideological corporations like MCFL from business corporations: "In particular, MCFL has three features essential to our holding that it may not constitutionally be bound by § 441b's restriction on independent spending. First, it was formed for the express purpose of promoting political ideas, and cannot engage in business activities. If political fundraising events are expressly denominated as requests for contributions that will be used for political purposes, including direct expenditures, these events cannot be considered business activities. This ensures that political resources reflect political support. Second, it has no shareholders or other persons affiliated so as to have a claim on its assets or earnings.
This ensures that persons connected with the organization will have no economic disincentive for disassociating with it if they disagree with its political activity. Third, MCFL was not established by a business corporation or a labor union, and it is its policy not to accept contributions from such entities. This prevents such corporations from serving as conduits for the type of direct spending that creates a threat to the political marketplace."

38. Austin v. Michigan State Chamber of Commerce, 659-660. The Michigan regulation prohibited corporations from using general treasury funds to make independent expenditures in connection with state candidate elections, requiring that such expenditures be made from segregated funds raised from contributors to the fund and used solely for political purposes.

39. Pacific Gas . . .
Optimized functional and structural design of dual-target LMRAP, a bifunctional fusion protein with a 25-amino-acid antitumor peptide and GnRH Fc fragment

To develop a fusion protein of a GnRH Fc fragment and the integrin-targeting AP25 antitumor peptide for therapy of GnRH receptor-expressing cancers, the LMRAP fusion protein was constructed. A transwell invasion assay was performed. The mRNA and protein levels of GnRHR-I, α5β1, and αvβ3 in different cancer cell lines were assessed. Cell proliferation was measured using a cell counting kit-8. An antagonist assay was performed on GnRH receptors. Anti-tumor activity was evaluated in a mouse xenograft tumor model. Immunohistochemistry (IHC) was applied to detect CD31 and CD34 expression. Pharmacokinetic characteristics were determined with an indirect competition ELISA. The bifunctional fusion protein LMRAP not only inhibited HUVEC invasion, but also inhibited the proliferation of cancer cells with high expression of GnRHR-I, α5β1, and αvβ3. The IC50 of LMRAP at the GnRH receptor was 6.235 × 10⁻⁴ mol/L. LMRAP significantly inhibited proliferation of the human prostate cancer cell line 22RV1 in vivo and in vitro, and significantly inhibited CD31 and CD34 expression. The elimination half-life of the fusion protein LMRAP was 33 h in rats. The fusion protein made of a GnRH Fc fragment and the integrin-targeting AP25 peptide retained the bifunctional biological activity of GnRHR blocking and angiogenesis inhibition, with a prolonged half-life and good tolerance.

Introduction

The hypothalamic decapeptide gonadotropin-releasing hormone (GnRH), sometimes called luteinizing hormone-releasing hormone (LHRH), has an important role in the regulation of mammalian reproduction1. It has been shown that 86% of human prostate adenocarcinomas have high-affinity binding sites for GnRH. The GnRH receptor (GnRHR) is detected at lower levels in the normal prostate than in prostate cancer specimens, and some normal human prostate cell lines have no GnRH signaling2. Tumors with higher Gleason scores have fewer receptors, but those receptors have higher affinity3. In addition to prostate cancer, breast, endometrial, ovarian, pancreatic, and hepatoma cancers, as well as endometrial cells in endometriosis, contain cells that express GnRHR4. About 50% of breast cancers and 80% of endometrial cancers express both GnRH and GnRHR within an autocrine system5.

The neutralizing effect of hormone-specific antibodies against LHRH/GnRH has been established in a wide range of species. Some studies have used passive immunization based on infusion of anti-LHRH antibodies6. GnRH vaccines have also shown promise for managing hormone-dependent breast and prostate cancers7–9. However, the clinical use of these vaccines requires powerful adjuvants to elicit antibody responses that can effectively block hormone-receptor binding10.

AP25 is a polypeptide designed in our laboratory by modifying an endostatin-derived peptide fragment: a 25-amino-acid arginine-glycine-aspartic acid (RGD)-modified polypeptide targeting the αvβ3 and α5β1 integrins expressed on endothelial and tumor cells. Previous in vivo and in vitro experiments have indicated that this integrin-antagonist peptide has an extraordinary antitumor effect on different types of cancer11.

In this study, we developed a new strategy for GnRH receptor-expressing cancers by fusing a GnRH Fc (fragment crystallizable) fragment and the AP25 antitumor peptide.
The design goal was to maintain the antitumor epitopes and activities of both AP25 and the GnRH Fc fragment. Direct fusion of functional domains may lead to misfolding of the product12, a low yield13, or impaired bioactivity or half-life14. The choice of a peptide linker that can maintain domain function is therefore essential in the design of a bifunctional fusion protein. By choosing a suitable flexible peptide linker and optimizing the structure of the fusion protein, we hypothesized that the bifunctional fusion protein would possess the functions of each of its component moieties and achieve enhanced therapeutic effects.

Animals

Male BALB/c nude mice aged 6–8 weeks, male and female BALB/c mice, and Sprague-Dawley (SD) rats were purchased from the Nanjing Model Animal Research Center (Nanjing, China). All animals were given water and sterilized food. The Animal Care and Use Committee of the Nanjing Han and Zaenker Cancer Institute approved the study, which was performed strictly according to the Guide for the Care and Use of Laboratory Animals.

Optimized structures of fusion proteins in the LMRAP series, including linkers

The sequence of AP25 was ACDCRGDCFCGGGGIVRRADRAAVP.

Construction of vectors

The target genes of the three fusion proteins were cloned into the EcoRI site of the plasmid vector pEE12.4 by homologous recombination. The host bacteria were Trans1-T1 cells (Transgen Biotech, Beijing, China). TAA/TGA was set as the termination codon. After transformation, a single transformed colony was selected and inoculated into 2 mL Luria-Bertani (LB) medium containing ampicillin. After 6–7 h of incubation at 37 °C with shaking at 220 rpm (thermostatic oscillator, Taicang, China), sequence-verified bacterial culture was transferred into 300 mL ampicillin-containing LB medium at a 0.5% (v/v) inoculum. After 16 h of shaking culture at 37 °C and 220 rpm, plasmids for stable transfection were prepared with a NucleoBond Xtra Midi Plus EF kit (Macherey-Nagel, Düren, Germany).

Stable transfection screening

The recombinant plasmid was transfected into Chinese hamster ovary (CHO)-K1 cells with a Neon electroporation system at 1400 V, 20 ms, and 2 pulses. After transfection, the cells were incubated for two days in 5 mL Dynamis medium (Gibco) containing 4 mmol/L Gln, preheated to 37 °C. They were then seeded in 96-well plates at 5000 cells/well and selected with 50 µmol/L L-methionine sulfoximine (MSX, Sigma-Aldrich, St. Louis, MO, USA) at 37 °C in a 7% CO2 incubator for 3 weeks. Highly expressing clones grown in 96-well plates were subcultured to 24-well static plates and then to 24-deep-well plates, with 2 mL of Dynamis + 25 µmol/L MSX per well, cultured at 37 °C, 5% CO2, and 220 rpm. Cells in the 24-deep-well plates were passaged 2–4 times at a density of 0.3–0.5 × 10⁶/mL until the clones adapted to suspension culture. The clones with the highest expression levels were selected for production and preparation of protein samples.

Production and affinity chromatography purification of the fusion proteins LMRAP, LMRAP-A and LMRAP-B

Cells were inoculated in 1 L Dynamis medium at a density of 0.5 × 10⁶/mL.
The cells were grown in fed-batch culture for 14 days on a shaker at 37 °C, 5% CO2, and 130 rpm. On day 3 the temperature was lowered to 34 °C, and on days 3, 5, 7, and 10 the cells were fed 2× CD EfficientFeed C+ (Gibco) at 5%, 5%, 8%, and 8% of the culture volume, respectively. On days 7 and 10, glucose was supplemented to 3 g/L after measurement, and the culture was harvested on day 14. After centrifugation for 15 min, the supernatant was collected and filtered through a 0.45 µm membrane. The target protein was an Fc fusion protein, which could be captured by specific adsorption of the Fc fragment to the affinity resin Prosep Ultra Plus (Millipore, Burlington, MA, USA). The column was first equilibrated with three column volumes of equilibration buffer, phosphate-buffered saline (PBS, Sigma-Aldrich) at pH 7.0. After equilibration, the sample retention time was controlled between 1 and 2 min according to the actual column pressure. After loading, the column was washed with five column volumes of equilibration buffer. The protein was eluted with 50 mmol/L NaAc-HAc buffer (Sigma-Aldrich), pH 3.6, with the retention time controlled at 3 min, and fractions were collected according to the UV signal. Eluted protein samples were neutralized with 3 mol/L Tris (Sigma-Aldrich) to pH 6.0–7.0 and quantified.

Ultrafiltration concentration

An ultrafiltration membrane with a 30 kD cutoff and a membrane area of 0.14 m² was used. The membrane was conditioned with 50 mmol/L phosphate buffer (PB, Sigma-Aldrich); the exchange buffer was pH 6.6, and the pH in the tank was kept the same as that of the exchange buffer. With the filtrate end closed, the sample was slowly poured into the tank and recirculated. Once the sample concentration was stable, the filtrate end was opened, the volume was reduced to the theoretical volume, and the filtrate end was closed for internal circulation. When the concentration was again stable, the inlet and outlet were opened and their flow rates adjusted until the volume remained constant. After 10 volume exchanges, the inlet and outlet were closed, the sample was concentrated to the target volume, and the outlet was closed. Internal circulation continued for 30 min, after which the reflux end was opened to collect the sample. The ultrafiltration equipment was rinsed with a volume of exchange buffer, which was also collected. The final sample matrix was 50 mmol/L PB with 6% sucrose (Sigma-Aldrich) at pH 6.6. The protein was then quantified.

Confirmation of protein sequences by liquid chromatography-mass spectrometry (LC-MS)

A filter-aided sample preparation (FASP)15 method was employed for enzymatic hydrolysis of the three final products. A total of 200 µg of protein was combined with 30 µL SDT buffer [4% sodium dodecyl sulfate (SDS, Sigma-Aldrich), 100 mmol/L dithiothreitol (DTT, Sigma-Aldrich), 150 mmol/L Tris-HCl (Sigma-Aldrich), pH 8.0]. Using repeated ultrafiltration (Microcon units, 10 kD), DTT, the detergent, and other low-molecular-weight components were removed with UA buffer (8 mol/L urea, 150 mmol/L Tris-HCl, pH 8.0). Next, 100 µL of 100 mmol/L iodoacetamide (IAM, Sigma-Aldrich) in UA buffer was added to block the reduced cysteine residues, and the samples were incubated in darkness for 30 min.
The filters were washed three times with 100 µL UA buffer and then twice with 100 µL 25 mmol/L NH4HCO3 (Sigma-Aldrich) buffer. Finally, 4 µg of trypsin (Promega, Madison, WI, USA) in 40 µL 25 mmol/L NH4HCO3 buffer was used to digest the protein suspensions overnight at 37 °C, and the resulting peptides were collected as the filtrate. The peptides from each sample were desalted on C18 cartridges [Empore™ SPE Cartridges C18 (standard density), volume 3 mL, bed I.D. 7 mm, Sigma-Aldrich], concentrated by vacuum centrifugation, and reconstituted in 40 µL of 0.1% (v/v) formic acid. The peptide content was estimated from the UV absorbance at 280 nm using an extinction coefficient of 1.1 for a 0.1% (w/v) solution, calculated from the frequency of tyrosine and tryptophan in vertebrate proteins.

A Q Exactive mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) coupled to an Easy nLC (Proxeon Biosystems, Odense, Denmark) was used for 60-min LC-MS/MS analyses16. The mass spectrometer was operated in positive ion mode. MS data were acquired using a data-dependent top-10 method in which the most abundant precursor ions in the survey scan (300–1800 m/z) were dynamically chosen for higher-energy collisional dissociation (HCD) fragmentation. The maximum injection time was 10 ms and the automatic gain control (AGC) target was 3 × 10⁶. The dynamic exclusion duration was 40 s. Survey scans were acquired at a resolution of 70,000 at m/z 200, and the resolution for HCD spectra was set to 17,500 at m/z 200. The isolation width was 2 m/z and the normalized collision energy was set at 30. The underfill ratio, which specifies the minimum percentage of the target value likely to be reached at maximum fill time, was defined as 0.1%. Peptide recognition mode was enabled.

MaxQuant software version 1.5.3.17 (Max Planck Institute of Biochemistry, Martinsried, Germany)17 was used for analysis of the MS data, which were searched against the target protein sequence database. The initial search was set to a precursor mass window of 6 ppm, with a mass tolerance of 20 ppm for fragment ions. Search settings were: enzyme = Trypsin/P; maximum missed cleavages = 2; variable modification = oxidation (M); fixed modification = carbamidomethyl (C); decoy database pattern = reverse. A cutoff of 0.01 was used for the global false discovery rate (FDR) for protein and peptide identification18.

Antagonist assay on GnRH receptors

Cells of a CHO-K1/GnRHR/Gα15 stable cell line (Genscript, Nanjing, China) were seeded with complete medium in 384-well plates (20 µL, 10,000 cells/well). After overnight incubation at 37 °C/5% CO2, we added 20 µL/well dye and 10 µL/well gonadorelin or LMRAP (five-fold dilutions, eight concentrations in triplicate) and incubated the cells for 1 h. The plate was equilibrated at room temperature for 15 min and fluorescence was read on a FLIPR Tetra fluorometric imaging plate reader (Molecular Devices, Los Angeles, CA, USA)19. A positive antagonist was used as the reference compound for sample concentration determination.

Cell invasion assay

A transwell invasion assay using Boyden chambers (BD Biosciences, Franklin Lakes, NJ, USA) with 8-µm pore size membranes coated with Matrigel was used to evaluate cell invasive ability20.
Human umbilical vein endothelial cells (HUVECs) were placed into the upper chamber of an insert in serum-free medium, and medium with 10% FBS was added to the lower chamber. Cells that had invaded through the membrane after several hours of incubation were fixed with methanol and stained with 0.1% crystal violet, then imaged and counted under a microscope in random fields at 100× magnification in each well.

Cell viability assay

A cell counting kit-8 (CCK-8; EnoGene) was used to assess cell proliferation21. Briefly, cells were plated in 96-well plates at a density of 1 × 10⁴ cells/well and allowed to adhere overnight in a humidified atmosphere of 5% CO2 at 37 °C. Cells were incubated with serially diluted concentrations of LMRAP, and cytotoxicity was measured by CCK-8 staining after 72 h of incubation. A total of 10 µL CCK-8 was added to each well, the plates were incubated at 37 °C for 4 h, and the absorbance was measured at 450 nm with a microplate reader (Thermo Fisher Scientific). IC50 values were calculated with GraphPad Prism software (San Diego, CA, USA) using four-parameter curve fitting. All experiments were carried out in sextuplicate.

Anti-tumor activity in a mouse xenograft tumor model

LMRAP antitumor activity was assessed in a human prostate carcinoma model employing the 22RV1 human prostate cancer cell line22. The site's Institutional Animal Care and Use Committee approved the experimental protocol. BALB/c nude male mice aged 6–8 weeks were implanted subcutaneously in the right flank with 5 × 10⁶ 22RV1 tumor cells. Animals (n = 8 per group; n = 16 in the model group) were randomized at a tumor volume of 80–100 mm³, 15 days after tumor cell implantation. Animals received tail vein injections (i.v.) of LMRAP at 12.5, 25, or 50 mg/kg for two weeks; i.v. AP25 at 20 mg/kg for two weeks; intramuscular (i.m.) gonadorelin at 65 mg/kg for two weeks; AP25 20 mg/kg (i.v.) combined with gonadorelin 65 mg/kg (i.m.) for two weeks; or i.v. Avastin 20 mg/kg on days 1 and 8. Tumor size was measured every other day with digital calipers, and tumor volume was calculated as volume (mm³) = length × width²/2. At the end of the study, the mice were sacrificed in a CO2 gas-filled chamber, and the excised tumors were recovered and weighed.
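To make the endpoint definitions above concrete, the following is a minimal Python sketch of the growth metrics: the volume formula is taken from the Methods, while the per-animal relative tumor volume (RTV = Vt/V0), the T/C ratio of group-mean RTVs, and a tumor-weight-based inhibition rate are common conventions assumed here rather than definitions spelled out in the paper. All numbers in the example are illustrative, not study data.

```python
# Minimal sketch of the xenograft growth metrics (assumed conventions noted above).
from statistics import mean

def tumor_volume(length_mm, width_mm):
    """Volume (mm^3) = length x width^2 / 2, as given in the Methods."""
    return length_mm * width_mm ** 2 / 2

def t_over_c_percent(rtv_treated, rtv_control):
    """T/C (%): mean relative tumor volume of treated group over control group."""
    return mean(rtv_treated) / mean(rtv_control) * 100

def inhibition_rate_percent(weights_treated, weights_control):
    """Inhibition rate (%) from excised tumor weights (assumed convention)."""
    return (1 - mean(weights_treated) / mean(weights_control)) * 100

# Illustrative values only:
print(tumor_volume(10.0, 8.0))                              # 320.0 mm^3
print(t_over_c_percent([2.1, 2.4, 2.2], [4.0, 4.4, 4.2]))   # ~53%
print(inhibition_rate_percent([0.5, 0.6], [1.1, 1.2]))      # ~52%
```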
IHC

Histological sections from formalin-fixed, paraffin-embedded (FFPE) xenograft tumors were used. IHC was applied to detect cluster of differentiation 31 (CD31) and cluster of differentiation 34 (CD34) expression. IHC staining was performed with CD31 and CD34 antibodies (both EnoGene) at a 1:100 dilution using routine methods. Positive signals for CD34 and CD31 were located on the cell membrane. Staining intensity was evaluated on a previously described scale: null (0), weak (1+), moderate (2+), and strong (3+). Two experienced pathologists judged all IHC staining results independently.

Determination of LMRAP in SD rat plasma with an indirect competition enzyme-linked immunosorbent assay (ELISA)

LMRAP was administered to SD rats (3 females and 3 males) by tail vein injection at 12.5 mg/kg. All procedures were approved by the Institutional Ethical Review Committees and conducted under the authority of the Project License. Samples were taken 5 min before administration and at 0, 5, 10, and 30 min and 1, 2, 4, 6, 8, 12, 24, 36, 48, 72, 96, 120, 144, 168, 192, and 216 h after LMRAP administration, and animal weights were recorded. Blood was collected from the orbital sinus and centrifuged, and each serum sample of 100–200 µL was stored in an Eppendorf (EP) tube at −80 °C, with sampling times clearly marked. Standard samples were diluted over a concentration gradient with a mixture of blank SD rat plasma and PBS. An indirect competitive ELISA was performed and a standard curve was constructed23. The OD450 values of standards along the concentration gradient were recorded as B, and those of standards containing no analyte were recorded as B0. ELISA Calc software (Customized Applications Inc., Chicago Heights, IL, USA) was used to fit the logit-log linear regression and establish the standard curve. For the fit, let p = B/B0, q = 1 − p, y = ln(p/q), and x = lg(C); the fitted equation was then y = a + b·x. The results were processed with the pharmacokinetic software Drug and Statistics (DAS) 1.0 (Mathematical Pharmacology Professional Committee of China, Shanghai, China), ELISA Calc, and the SPSS package (SPSS Inc., Chicago, IL, USA). OD values (n < 3) were calculated with ELISA Calc software, and pharmacokinetic parameters were then calculated with DAS 1.0.

Determination of the maximum tolerated dose (MTD)

To determine the MTD, 10 male and 10 female BALB/c mice were randomly assigned to the study. The animals received dose formulations containing LMRAP at various dosages via a single i.v. injection in one day. If no obvious toxicity was observed for the single dose, the animals received dose formulations containing LMRAP at various dosages via i.v. injection three times in one day. The MTD in this study was defined as the highest dose that was tolerated without major life-threatening toxicity over the 14-day study duration24.

Statistical analysis

Data are shown as the mean ± standard deviation. The significance of differences between groups was evaluated with Student's unpaired t-test and one-way analysis of variance (ANOVA). All statistical analyses were performed with SPSS 18.0 (SPSS, Inc.). A two-tailed P value less than 0.05 was considered statistically significant, and P < 0.01 was considered highly significant.

LMRAP series fusion protein design, expression, production and purification

According to the arrangement of AP25, GnRH, the Fc fragment, and the flexible linker sequence, three fusion protein sequences were designed and named LMRAP, LMRAP-A, and LMRAP-B. The domain arrangements are presented in Fig. 1A. The target plasmid for each fusion protein was stably transfected into CHO-K1 cells, and after stable transfection screening, the clones with the highest expression levels were selected for production and preparation of protein samples. The final products were identified by SDS-PAGE (Fig. 1B), and their primary sequences were confirmed by LC-MS/MS (Fig. 1C–E). The SDS-PAGE results indicated that the reduced molecular weight of each of the three fusion proteins was 34 kD, while the non-reduced molecular weights were each 68 kD, indicating the presence of natural dimers.
We also confirmed the deglycosylated molecular weight by time-of-flight mass spectrometry (TOF-MS): the reduced molecular weight was 31,007 Da, which matched the theoretical molecular weight of the monomer (31,006 Da) very well. LC-MS/MS peptide mapping analysis indicated that the primary sequences were identical to the theoretical sequences.

Effect of LMRAP, LMRAP-A and LMRAP-B on invasion of HUVECs

To screen the anti-tumor activities of the fusion proteins LMRAP, LMRAP-A, and LMRAP-B, their effects on the invasion of HUVECs were evaluated in vitro. AP25 produced significant inhibition of HUVEC migration at 0.8 µmol/L, and LMRAP inhibited HUVEC invasion in a dose-dependent manner (Fig. 2A and B). The inhibition rates for each dose group are shown in Fig. 2C. In this invasion inhibition experiment, the activity of LMRAP and LMRAP-A at high concentrations was similar to that of AP25, whereas LMRAP-B, compared with the blank control group, did not inhibit HUVEC invasion at any concentration.

LMRAP and LMRAP-A inhibited cancer cell proliferation in vitro

The mRNA and protein expression levels of GnRHR-I, α5β1, and αvβ3 were analyzed in human prostate cancer cells (22RV1, DU145, PC-3, and LNCaP), human ovarian cancer cells (SKOV3, OVCAR-3, SW626, and A2780), and human cervical cancer cell lines (SiHa and HeLa). As shown in Fig. 3, human prostate cancer 22RV1, human ovarian cancer SKOV3, and human cervical cancer SiHa cells had high GnRHR-I expression, while human prostate cancer PC-3 and human ovarian cancer A2780 cells had medium GnRHR-I expression (Fig. 3A and F). SiHa, SKOV3, and PC-3 cells had high α5β1 expression, while A2780, 22RV1, and DU145 cells had medium α5β1 expression (Fig. 3B, C and F). SKOV3, SiHa, and HeLa cells had high αvβ3 expression, while 22RV1 cells had medium αvβ3 expression (Fig. 3D–F).

Since LMRAP-B showed no obvious inhibitory effect on HUVEC invasion, it was excluded from further study. To assess the in vitro antiproliferative effects of LMRAP and LMRAP-A, cancer cells with different GnRHR-I, α5β1, and αvβ3 integrin expression levels were incubated with a series of increasing doses of LMRAP or LMRAP-A, and cell viability was determined with CCK-8. As shown in Fig. 4, LMRAP at 6.25 to 12.5 µmol/L significantly inhibited the viability of cells with high GnRHR-I, α5β1, and αvβ3 integrin expression, namely human prostate cancer 22RV1 and PC-3 cells (Fig. 4A and D) and human ovarian cancer SKOV3 and A2780 cells (Fig. 4B and C) (P < 0.05), while LMRAP-A inhibited the viability of 22RV1 and PC-3 cells only above 50 µmol/L. At 20 to 50 µmol/L, AP25 inhibited the viability of PC-3 and SiHa cells in vitro (Fig. 4D and E). Gonadorelin had no significant effect on proliferation in any of the tested cells in vitro. LMRAP had an improved antiproliferative effect compared with AP25, which indicated that the GnRHR-I-specific binding of LMRAP might promote the cytotoxicity of AP25. By contrast, LMRAP did not show an obvious proliferation inhibition effect on cancer cells with low expression of GnRHR-I, α5β1, or αvβ3 integrin (Table 1). LMRAP showed the best antiproliferation activity compared with LMRAP-A and AP25.
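The IC50 values above were obtained by four-parameter curve fitting in GraphPad Prism, as stated in the Methods. The sketch below is a minimal scipy equivalent of that four-parameter logistic (4PL) fit, not the authors' code; the concentrations and viability readings are invented placeholders, not study measurements.

```python
# Minimal sketch of a four-parameter logistic (4PL) fit for CCK-8 viability data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL: response = bottom + (top - bottom) / (1 + (conc / ic50)**hill)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.78, 1.56, 3.125, 6.25, 12.5, 25.0, 50.0])       # µmol/L (placeholder)
viability = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 25.0, 12.0])   # % of control (placeholder)

# Initial guesses: observed response range, a mid-range IC50, Hill slope of 1.
p0 = [viability.min(), viability.max(), float(np.median(conc)), 1.0]
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.2f} µmol/L, Hill slope ~ {hill:.2f}")
```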
Functional characterization of LMRAP

Based on computer modeling, the three moieties AP25, GnRH, and Fc remained relatively independent when the flexible linkers were changed, and their spatial structures and epitope presentation did not interfere with one another. Antagonist assay results for gonadorelin and LMRAP on the GnRH receptor in the CHO-K1/GnRHR/Gα15 stable cell line are shown in Fig. 5. The IC50 values for gonadorelin (Fig. 5A) and LMRAP (Fig. 5B) were 1.641 × 10⁻⁹ and 6.235 × 10⁻⁴ mol/L, respectively.

In vivo anti-tumor study of LMRAP

A xenograft tumor model was established in nude mice by s.c. flank injection of the human prostate cancer cell line 22RV1 and used to evaluate the therapeutic efficacy of LMRAP in vivo. Different doses of LMRAP were injected into the tail vein for 14 consecutive days to evaluate the anti-tumor activity of LMRAP against human prostate cancer. As shown in Fig. 6A and B, the ratios of relative tumor volume (RTV) in the treatment groups to RTV in the control group (T/C, %) for LMRAP at 12.5, 25, and 50 mg/kg were 56.34%, 47.44%, and 32.16%, respectively, and the inhibition rates were 29.56%, 48.00%, and 61.97%, respectively. The T/C (%) values for the control drugs AP25, gonadorelin, the AP25/gonadorelin combination, and Avastin were 70.83%, 82.19%, 50.52%, and 15.23%, respectively, with inhibition rates of 37.59%, 18.80%, 52.70%, and 82.72%. There was no significant difference in body weight gain between the treatment groups and the model group, indicating that the treatments caused no general toxicity.

IHC staining of CD34 and CD31

Considering the importance of angiogenesis in prostate cancer progression, we further evaluated angiogenesis by IHC analysis of CD34 and CD31 expression in the xenografted 22RV1 tumors. As shown in Fig. 7A and B, LMRAP significantly inhibited both CD31 and CD34 expression in prostate cancer (P < 0.01).

Pharmacokinetic study of LMRAP injected into the tail veins of SD rats

First, the working concentrations of the coating antigen and monoclonal antibody were established by checkerboard (square array) titration. The antigen was diluted 1:200, 1:400, 1:800, 1:3200, 1:6400, and 1:12,800, together with a negative well, and coated across the plate; after washing, the antibody was added at dilutions of 1:2000, 1:4000, 1:8000, and 1:16,000 for ELISA detection. The dilution giving an OD450 of 1.0 was chosen as the ideal concentration. Based on these results, the optimal antigen dilution was 1:800, the optimal monoclonal antibody dilution was 1:4000, and the secondary antibody was used at its optimal dilution of 1:2000. Three batches of standard curves for the LMRAP concentration in SD rat plasma covered 12,800 to 100 ng/mL. The curve equations were y = 6.2824 − 2.0017x, y = 6.1193 − 1.7859x, and y = 8.1738 − 2.5234x, with linear coefficients (R²) of 0.9901, 0.9902, and 0.9974, respectively. The intra-batch precision of the high (10,000 ng/mL), medium (1000 ng/mL), and low (200 ng/mL) concentration quality control samples was 9.39%, 5.87%, and 7.26%, the inter-batch precision was 10.55%, 7.42%, and 8.14%, and the recovery rates were 111.00%, 97.10%, and 100.64%, respectively.
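The standard curves above follow the logit-log model defined in the Methods (p = B/B0, y = ln(p/q), x = lg C, y = a + b·x). The following is a minimal sketch of that fit and its inversion for reading unknown concentrations from OD450, using numpy rather than the ELISA Calc software named in the paper; the standard OD values are invented placeholders spanning the reported 100–12,800 ng/mL range.

```python
# Minimal sketch of the logit-log ELISA standard curve and back-calculation.
import numpy as np

def fit_logit_log(conc_ng_ml, od, od_b0):
    """Fit y = a + b*x with y = ln(p/(1-p)), p = OD/B0, x = log10(C); return (a, b)."""
    p = np.asarray(od) / od_b0
    y = np.log(p / (1.0 - p))
    x = np.log10(np.asarray(conc_ng_ml))
    b, a = np.polyfit(x, y, 1)          # polyfit returns the slope first
    return a, b

def back_calculate(od, od_b0, a, b):
    """Invert the curve: C = 10 ** ((ln(p/(1-p)) - a) / b)."""
    p = od / od_b0
    y = np.log(p / (1.0 - p))
    return 10.0 ** ((y - a) / b)

standards = [100, 200, 400, 800, 1600, 3200, 6400, 12800]        # ng/mL
od450     = [0.92, 0.85, 0.74, 0.61, 0.47, 0.33, 0.21, 0.12]     # OD falls as C rises (competition)
a, b = fit_logit_log(standards, od450, od_b0=1.00)
print(back_calculate(0.50, 1.00, a, b))  # estimated ng/mL for an unknown sample
```

As in the reported curves, the slope b comes out negative, reflecting the inverse OD-concentration relationship of a competition ELISA.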
After tail vein injection of 12.5 mg/kg LMRAP in SD rats, the blood concentration-time curve was as shown in Fig. 8, from which the pharmacokinetic parameters were derived.

Discussion

Prostate cancer accounts for one-fifth of new cancer diagnoses and is the third leading cause of cancer death in the United States25,26. Androgen deprivation therapy is indicated in multiple clinical settings for advanced prostate cancer, including chemical castration with GnRH agonists and antagonists27. One of the main concerns when using GnRH agonists, such as goserelin or gonadorelin, is a testosterone surge caused by initial stimulation of the pituitary gland, which may lead to tumor flare, a rapid expansion of the prostate cancer causing pain and potential debilitation, specifically in patients with spinal metastases28. GnRH agonists also generate considerable side effects, including hot flashes, impotence, accelerated bone resorption, loss of muscle mass, loss of libido and, in some instances, profound psychologic effects29. An immunological approach to androgen deprivation for prostate cancer, such as LHRH vaccination, has also been tested in men30. Passive immunization, that is, infusion of anti-LHRH antibodies that neutralize the action of LHRH/GnRH through hormone-specific antibodies, has been demonstrated in many animal species6. GnRH-tetanus toxoid conjugates, owing to their large size, can induce anti-haptenic immunosuppression, and they are difficult to produce reproducibly on an industrial scale31. Studies have reported that administration of either polyvalent or monoclonal anti-GnRH antibodies in males leads to cessation of spermatogenesis, decreased testicular size, and a severe reduction of testosterone levels, as does immunization with GnRH-carrier conjugates32,33.

Angiogenesis is an important process in both physiological and pathological conditions, and it has been shown to affect the behavior and biology of various neoplastic and non-neoplastic diseases34,35. Angiogenesis plays a crucial role in prostate cancer survival, progression, and metastasis36–38. It is a complicated process that depends on the balance between inhibitors and activators of angiogenesis39. Vascular endothelial growth factor (VEGF) and several neurosecretory peptides, such as bombesin and gastrin, are known to promote angiogenesis in prostate cancer40.

Bifunctional molecules, classified as novel therapeutics, have been shown to have multifunctional properties41. A fusion protein with two or more domains genetically fused together may have improved product stability, which can help preserve biological activity42. AP25 is an antiangiogenic and anti-tumor peptide whose molecular targets include the integrins α5β1 and αvβ3. AP25 contains the ES-2 sequence, which is included in one of the two active domains of endostatin, and it reproduces the inhibitory effect of endostatin on angiogenesis43. GnRH is a hypothalamic decapeptide gonadotropin-releasing hormone that binds to the GnRHR expressed on cancer cells, such as prostate cancer cells. The Fc fragment comprises part of the constant region heavy chain 2 (CH2) and constant region heavy chain 3 (CH3) domains of IgG, and it can improve the stability and prolong the plasma half-life of a fusion protein.
This is due to reduced kidney filtration and the prevention of degradation through binding to the neonatal Fc receptor (FcRn) 44,45. However, direct fusion of various functional domains may cause misfolding of the fusion protein's spatial structure, lower potency, or inefficient expression 12–14. To maintain domain function, the choice of peptide linker is critical in the design of a bifunctional fusion. The efficacy of domain separation in the fusion protein is influenced by linker sequence flexibility. The linker GGGGSGGGGSGGGGS 46, comprising small amino acids (Gly, Ser), provides enhanced flexibility and mobility of the connected functional domains, exposing binding areas and avoiding occlusion by the Fc fragment, which promotes effective targeting.

LMRAP is an Fc fusion protein produced by fusing GnRH, an Fc fragment, and the AP25 peptide using genetic engineering technology. The Fc segment in the molecular design originates from IgG4 and has a weak antibody-dependent cellular cytotoxicity (ADCC) effect 47. The fusion protein not only retains the biological activity of the functional proteins, but also prolongs the half-life, reduces glomerular clearance, and avoids lysosomal hydrolysis in cells. Fusion protein characteristics are influenced in various ways by the flexibility of the linker sequence 48. Our study showed that the sequence of LMRAP, GnRH-linker-hIgG4 Fc-linker-AP25, had the best activity compared with the sequence of LMRAP-A, AP25-linker-hIgG4 Fc-linker-GnRH, and the sequence of LMRAP-B, AP25-linker-GnRH-linker-hIgG4 Fc. LMRAP not only significantly inhibited the invasion activity of HUVECs, but also significantly inhibited the viability of GnRHR-I-positive cells in vitro, including human prostate cancer 22RV1 and PC-3 cells and human ovarian cancer SKOV3 and A2780 cells. The IC50 of LMRAP at the GnRH receptor was 6.235 × 10⁻⁴ mol/L in the antagonist assay. In prostate cancer cells, GnRHR signaling includes activation of phosphoinositide (PI) turnover and Gi 49. This signaling can further activate protein kinase C (PKC) and result in negative transmodulation of the epidermal growth factor receptor (EGFR) through Thr654 phosphorylation, which is known to downregulate EGFR and inhibit its signaling 50. In addition, GnRH reduced cyclic adenosine monophosphate (cAMP) levels and EGF binding sites in prostate cancer cells 51. LMRAP significantly inhibited proliferation of the human prostate cancer cell line 22RV1 in vivo, consistent with the in vitro study. This may be caused in part by GnRH receptor blocking.

To further investigate whether LMRAP inhibited angiogenesis, microvessel density (MVD) was assessed using endothelial cell markers, including CD34 and CD31. CD34 is a 110-kDa cell surface glycoprotein that functions as a cell–cell adhesion factor; it mediates the attachment of stem cells to stromal cells or the bone marrow extracellular matrix. CD31, a 130-kDa glycoprotein, is found on the surface of blood endothelial cells, platelets, lymphocytes, and macrophages. Both CD31 and CD34 can be used to demonstrate the presence of endothelial cells in histological tissue sections to assess tumor angiogenesis 52. Our study showed that LMRAP significantly inhibited both CD31 and CD34 expression in prostate cancer xenograft tumor tissues, which further confirmed that LMRAP has bifunctional properties.
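For illustration, a group comparison behind a result such as "P < 0.01" for marker expression could be run as in the sketch below. This is a generic example rather than the authors' analysis: the per-field microvessel counts are hypothetical, and a two-sample t-test is assumed as the comparison method.

```python
from scipy import stats

# Hypothetical CD31-positive microvessel counts per high-power field
control_counts = [34, 29, 41, 38, 33, 36]   # vehicle-treated xenografts
treated_counts = [18, 15, 22, 19, 14, 21]   # LMRAP-treated xenografts

# Two-sample t-test comparing mean microvessel density between groups
t_stat, p_value = stats.ttest_ind(control_counts, treated_counts)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.01 indicates a significant reduction
```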
The study of LMRAP pharmacokinetic characteristics in the plasma of SD rats indicated that the elimination half-life of the fusion protein LMRAP was prolonged to 33 h, compared with an elimination half-life of 55 min for the polypeptide AP25. Gonadorelin is not a cytotoxic drug and, in terms of its mechanism of action, cannot directly produce anti-tumor effects. In the molecular design, we therefore introduced the polypeptide AP25, which has anti-tumor effects. Accordingly, in this study, the anti-prostate cancer effect of the fusion protein is mainly produced by AP25. Because of the addition of the gonadorelin domain, the fusion protein has the additional potential function of regulating the release of gonadal axis hormones. In addition to the improved pharmacokinetic characteristics, LMRAP was also well tolerated in mice, based on the absence of clinical side effects in all animals and minimal changes in body weight and animal activity. The maximum tolerated dose (MTD) was 307.2 times the pharmacodynamic dose, indicating that LMRAP has a good safety profile.

Conclusions

A new strategy for GnRH receptor-expressing cancers was developed by fusing a GnRH–Fc fragment with the integrin-targeting anti-tumor peptide AP25. The fusion protein not only retained the bifunctional biological activity of GnRH receptor blocking and angiogenesis inhibition, but also prolonged the half-life, which provides a reliable basis for later pre-clinical research. The fusion protein may therefore have good potential as a therapeutic agent.
Differences in mental health inequalities based on university attendance: Intersectional multilevel analyses of individual heterogeneity and discriminatory accuracy

There is an increasing focus on structural and social determinants of inequalities in young people's mental health across different social contexts. Taking higher education as a specific social context, it is unclear whether university attendance shapes the impact of intersectional social identities and positions on young people's mental health outcomes. Multilevel Analysis of Individual Heterogeneity and Discriminatory Accuracy (MAIHDA) was used to estimate the extent to which mental distress during adolescence, sex, socioeconomic status, sexual identity, ethnicity, and their intersections were associated with young people's mental health outcomes at age 25, and whether this differed based on university attendance. Data from the Longitudinal Study of Young People in England cohort study were analysed with the MAIHDA approach, and the results did not reveal any evidence of multiplicative intersectional (i.e., aggravating) effects on young people's mental health outcomes. However, important main effects of social identities and positions (i.e., an additive model) were observed. The findings suggested that being female or identifying as a sexual minority increased the odds of young people experiencing mental health problems at age 25, although the odds of self-harming were half the size for sexual minorities who had attended university. Black and Asian individuals were less likely to declare a mental illness than White individuals. Young people who grew up in a more deprived area and had not attended university were more likely to experience mental health problems. These findings imply that mental health interventions for young people do not necessarily have to be designed exclusively for specific intersectional groups. Further, university attendance appears to produce better mental health outcomes for some young people, hence more investigation is needed to understand what universities do for young people, and whether this could be replicated in the wider general population.

Introduction

Experiencing mental health problems early in life can lead to profound adverse consequences for an individual's mental health outcomes in adulthood (Essau et al., 2014), with the potential for further negative impacts on their educational and employment life outcomes (Frijters et al., 2014; Hale & Viner, 2018). These mental health trajectories are also shaped by social group memberships, including social identities and social positions. For example, there may be a heightened risk of poor mental health and wellbeing outcomes for women (Rosenfield & Smith, 2012; Thorley, 2017), individuals from a low socioeconomic status (SES) background (Cosco et al., 2016; McLaughlin et al., 2012), individuals identifying as LGBT or a sexual minority (Plöderl & Tremblay, 2015; Russell & Fish, 2016), and those from an ethnic minority background (Stevenson & Rao, 2014). From a population health perspective, the incidence of disease and poor health is influenced by social inequity or social policies (McAllister et al., 2018) and structural discrimination (Krieger, 2014) that advantage or disadvantage particular groups (Bauer, 2014; Rose, 1985). For example, trauma exposure and victimisation of Black individuals increases their risk of psychosis (Harnett & Ressler, 2021).
Thus, social identities and positions might in fact be seen as proxies for systemic marginalisation, such as sexism, racism, homophobia, and classism (Dhamoon & Hankivsky, 2011; Evans, 2019a). In essence, although social identities and positions might occur at the individual level, their impact on predicting health outcomes is shaped by macro level factors: oppressive social relations (e.g., structural racism) are expressed in political, social, and economic processes that create unequal living and working conditions and harm the health of marginalized groups through multiple "pathways of embodiment," including social and economic deprivation, toxic/hazardous living conditions, social trauma, and inadequate healthcare (Homan, 2019, p. 492). In the current study, we examine how social identities and positions, and their intersections, predict the mental health outcomes of young people, with a particular focus on whether there are differences in relationships based on university attendance.

Intersectionality and Multilevel Analysis of Individual Heterogeneity and Discriminatory Accuracy (MAIHDA)

Influenced by Black feminism, Kimberlé Crenshaw introduced the term intersectionality to argue that disadvantage does not occur along a "single-axis framework" (Crenshaw, 1989, p. 140) of individual social identities, such as race or sex in isolation. Instead, Crenshaw (1991) examined how being a member of multiple social categories can help explain discrimination, such as comparing the experiences of Black women with those of White males. An intersectional perspective reinforces the position that health inequalities are a result of the structural power hierarchies that shape individuals' experiences (Dhamoon & Hankivsky, 2011). Intersectionality has traditionally been explored within a qualitative paradigm (Bauer et al., 2021). However, recently, intersectional scholars have used existing social identities and positions (e.g., sex, ethnicity, etc.) as provisional analytical categories (McCall, 2005) to draw on quantitative analyses (Codiroli Mcmaster & Cook, 2019), which has been found to be particularly beneficial within a population health context (Bauer, 2014). Quantitative approaches make it possible to determine whether multiple memberships of marginalised groups combine to have a cumulative or aggravating negative effect on health outcomes (Kern et al., 2020). A cumulative effect, in which the social identities and positions act independently, is known as an additive model, whereas an aggravating effect, in which there are interactions between categories indicating that characteristics multiply and amplify each other, is known as a multiplicative model (Kern et al., 2020). Adopting existing categories of social identities and positions for exploring intersectionality was termed by McCall (2005) as intercategorical complexity, so this approach lends itself to quantitative analyses. This contrasts with anticategorical complexity, which rejects the notion that social life can be reduced to categories, and intracategorical complexity, which focuses on inequalities within, rather than between, social groups (McCall, 2005). While single-level regression analyses have traditionally been used to construct interaction terms for evidence of intersectionality (Bell et al., 2019), Evans et al. (2018) proposed a multilevel approach to analysing quantitative intersectional data, which is considered to be "the new gold standard for investigating health disparities" (Merlo, 2018, p. 79).
For this approach, known as Multilevel Analysis of Individual Heterogeneity and Discriminatory Accuracy (MAIHDA) (Merlo, 2018), Evans et al. (2018) outlined a number of advantages over single-level regressions. Firstly, estimates are adjusted to account for the sample size within a particular social stratum (social strata are the points of intersection between social identities). Therefore, MAIHDA has been recommended as the preferred intersectional analytic approach (for binary outcomes) when sample sizes are small and there are a large number of intersections (Mahendran et al., 2022). Secondly, interpretability can be increased through the use of graphs comparing the different outcomes across the social strata. These graphs also enable comparisons across combinations of privilege and marginalisation (e.g., low SES White males vs. high SES Black females), rather than simply one combination of privilege as the reference point for all other combinations (Evans et al., 2018). Finally, multilevel modelling positions the intersection between categories at the level of the social system rather than the social identity or position, which fits more closely with intersectionality theory: "intersectionality considers the interaction of such categories as organizing structures of society, recognizing that these key components influence political access, equality, and the potential for any form of justice" (Hancock, 2007, p. 64).

The current mental health inequalities discourse highlights an interest in how intersectionality can be used to understand mental health outcomes (Fagrell Trygg et al., 2019). Thus far, the few studies using the MAIHDA approach for investigating mental health outcomes within an intersectional framework have found little evidence that inequalities are predominantly explained by a multiplicative model. For example, the first MAIHDA study examining mental health inequalities found that the majority of between-strata variance in depression could be explained by an additive model (Evans & Erickson, 2019). That is, the negative impact of membership of marginalised social groups was cumulative. Fagrell Trygg et al. (2019) note that some intersections may only hold relevance and meaning within certain population group contexts. Therefore, the roles of intersectional identities might benefit from being explored through the lenses of different social contexts (Evans, 2019b; Ghavami et al., 2016). Indeed, when drawing on data collapsed across multiple countries, Kern et al. (2020) found no evidence for multiplicative intersectional effects on adolescent mental wellbeing (life dissatisfaction and psychosomatic complaints), but when they considered variation in national contexts, they found evidence of more negative impacts on mental wellbeing for the multiply marginalised in some countries only. Thus, multiplicative models may still hold further explanatory power for understanding mental health inequalities within certain social contexts that are yet to be explored.

University as a social context

Worldwide, there are increasing concerns about the mental health of university students. An international survey found that over a third of students reported a lifetime disorder (Auerbach et al., 2018). Students are exposed to a particular set of psychosocial stressors and pressures to participate in risky behaviours (e.g., binge drinking and use of recreational drugs), which increase their risks of developing a mental health problem (Duffy et al., 2019).
With around 75% of people experiencing a problem by age 24 (Jones, 2013), the period when the majority of students attend university (i.e., late adolescence and young adulthood) occurs during a critical developmental stage. In England, over 50% of young people now participate in higher education (Department for Education, 2021), so university is an important social context that could be impacting on young people's mental health outcomes. University may also play a role in shaping mental health inequalities; although universities might aspire to increase opportunities for upward social mobility, they may simultaneously reinforce and strengthen dominant societal modes of elitism, privilege and inequality (Brennan & Naidoo, 2008). Despite this, intersectional frameworks for understanding this context are underexplored. Therefore, the current study uses the university as a social context, comparing outcomes for both those who had attended university and those who had not. It is also still not clear whether there are longer-term effects of university attendance on mental health outcomes. With the move towards university-based mental health and wellbeing interventions (Byrom, 2018), it is vital to understand whether certain intersectional groups are more in need of targeted approaches within the university space. That is, are the multiply marginalised more likely to have negative or positive mental health outcomes as a result of having been to university? This is important to understand, since targeted intervention risks stigmatisation if there is no evidence for those particular social groups being more in need of targeted support (Bauer & Scheim, 2019; Hernández-Yumar et al., 2018). Mental health stigma leads to negative stereotypes that can affect an individual's quality of life, and intersectional stigma has a compounding effect (Hermaszewska et al., 2022).

The present study

In the current study, an intercategorical approach to intersectionality was adopted, and MAIHDA analyses (Evans et al., 2018; Merlo, 2018) were performed to predict the odds of young people having mental health problems at age 25 based on the intersection of social identities and positions known to be associated with mental health problems (i.e., sex, SES, sexual identity and ethnicity). By the time this developmental stage occurs, it is anticipated that it would be possible to ascertain the longer-term effects of university on mental health outcomes. Additionally, since subjective social status has been found to be associated with ill-health (Singh-Manoux et al., 2003), we postulated that having a history of mental health problems might mean that it becomes intrinsically tied to a young person's social identity and/or position. Therefore, we also positioned experience of mental distress during adolescence as a social category in these intersectional analyses. Finally, in order to understand the role of university, as a social context, in shaping the impact of social identities and positions, and their intersections, these analyses were performed separately for those who had attended university and those who had not. Hence, we intended to answer the call for more quantitative intersectional research that considers the roles of different environmental social contexts (Evans, 2019b). The aim was to explore whether differences between social strata (i.e., a multiplicative model) explain mental health outcomes better than independent social identities and positions (i.e., an additive model).
Thus, the research questions were:

1. Does the university context shape any multiplicative effects of social identities and positions on longer-term mental health outcomes?
2. Does the university context shape any additive effects of social identities and positions on longer-term mental health outcomes?

Data and sample

Survey responses from a representative panel study, the Longitudinal Study of Young People in England (LSYPE) (University College London, UCL Institute of Education, Centre for Longitudinal Studies, 2020), were analysed. Respondents (N = 15,770) were born in England in 1989-90, then followed up annually over seven recruitment sweeps between 2004 and 2010 (Waves 1-7; 14-20 years old), and again in 2015 (Wave 8; 25 years old). In Wave 4 there was also a boost sample of 352 respondents. In order to create combinations of social identities and positions (i.e., social strata) for the MAIHDA analyses, only participants for whom responses were available for all identity/position variables were included. Therefore, out of the 16,122 respondents who were part of the full LSYPE cohort (original data set plus the boost sample in Wave 4), 10,374 were excluded listwise from the model, mainly because some of the variables (e.g., university attendance and sexual identity) were recorded during a later recruitment sweep, by which point there had been substantial attrition in participation in the LSYPE. This resulted in a sample size of 2605 for those who had not attended university and 2791 for those who had attended university. Table 1 displays a breakdown of descriptive statistics of the sample, taking into account listwise deletion of missing data.

Social identities/positions and the social context

The 12-item short version of the General Health Questionnaire (GHQ) (Goldberg & Williams, 1988) was used as a measure of adolescent mental distress at two time-points: age 15 and age 17. Each item uses a Likert response scale from 0 to 3 (e.g., not at all to much more than usual), and item scores were summed to give a total of between 0 and 36 (higher scores indicating greater mental distress). The GHQ can be used as a screening tool for minor diagnosable psychiatric disorders in the general population, with the 11/12 threshold having the optimum sensitivity and specificity when scored using the above Likert response scoring method (Lundin et al., 2016). Therefore, scores of 12 and above were used to indicate a case of probable diagnosable mental health problem. Cronbach's alpha values showed the scale to have good levels of reliability at age 15 (α = 0.87) and age 17 (α = 0.86). Two groups were created based on the GHQ cut-offs: no mental distress at both ages 15 and 17 (i.e., GHQ score of 0-11 at both time-points); and mental distress at either age 15 or 17 (i.e., GHQ score of 12+ at either time-point). Biological sex was coded as male or female based on participants' survey responses at the earliest time-point this variable was available (Waves 1-8). A binary variable for sexual identity was computed based on whether the respondent identified as heterosexual/straight or as a sexual minority, the latter category consisting of the following responses: gay/lesbian, bisexual, or other. This variable was based on the latest given response by the respondent from Waves 6, 7, or 8 (i.e., if the response was not available in the most recent sweep, the earlier response was used, but if it changed, the most recent response was used).
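As a concrete illustration of the caseness coding just described, the sketch below sums the 12 Likert-scored items and applies the 11/12 threshold. It is a minimal sketch, assuming the item responses are already available as integers coded 0-3; the example scores are hypothetical.

```python
def ghq_case(item_scores: list[int], threshold: int = 12) -> bool:
    """Return True if a GHQ-12 Likert sum (0-36) indicates probable distress.

    Each of the 12 items is scored 0-3; with the 11/12 cut-off, a total of
    12 or above indicates a probable diagnosable mental health problem.
    """
    assert len(item_scores) == 12 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores) >= threshold

# Mental distress at either age 15 or age 17 defines the adolescent
# distress category used in the social strata.
distress_15 = ghq_case([1] * 12)      # sum = 12 -> True
distress_17 = ghq_case([0, 1] * 6)    # sum = 6  -> False
adolescent_distress = distress_15 or distress_17
```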
Ethnicity included four groups: White, Black, Asian, or Other Ethnic Group (including Mixed). Ethnicity was predominantly taken from responses during Wave 1, but if not present, the response from Waves 2, 4, or 8 was used. Social deprivation was used as a measure of SES. It was based on the Income Deprivation Affecting Children Index (IDACI), which is a geographical indicator of whether the respondent grew up in an area with a larger proportion of children under 16 years old who live in a low-income household. A geographical SES indicator was used to represent the structural inequalities of mental health outcomes, as an individual's mental health and wellbeing may be affected by the extent of their neighbourhood poverty and disadvantage (Graif et al., 2016; Ludwig et al., 2012). In order to use this continuous variable as part of the social strata, a tertile split based on responses from the overall LSYPE data set was used to create three categories: lowest deprivation (individuals who had an IDACI score in the bottom tertile of all scores across respondents in the full LSYPE data set); medium deprivation (IDACI score in the middle tertile); and highest deprivation (IDACI score in the top tertile). Three or more categories are seen as preferable to only two categories, because this enables the slope representing the relationship between the predictor and outcome variables for the low vs. medium comparison to be different from the slope for the medium vs. high comparison (DeCoster et al., 2011). Tertile splits are also often used in MAIHDA research in order to create categories for intersections (e.g., Axelsson Fisk et al., 2018; Holman et al., 2020; Kern et al., 2020; Khalaf et al., 2020; Persmark et al., 2019; Wemrell et al., 2021). The full data set was used for creating this split, because recruitment of the complete cohort involved stratified sampling across all regions of England, so it should have been representative of the population of young people at the time it was collected. IDACI was measured at Waves 2 and 3. Responses from Wave 2 were used, but if these were missing and present at Wave 3, the responses from the later time-point were used. University attendance was a binary variable representing the social context, based on whether the respondent had been to university or not by age 25.

Adulthood mental health problems at age 25

Three mental health outcomes were taken from Wave 8 responses to the LSYPE survey: mental distress, chronic mental illness, and self-harm. Firstly, respondents completed the GHQ at age 25 (α = 0.90) and two groups were created based on the same cut-offs used during adolescence: no mental distress at age 25 or mental distress at age 25. Secondly, respondents declared whether they had a longstanding illness and reported whether this illness was related to mental health. This was taken as a measure of whether they had declared a chronic mental illness or not at age 25. Thirdly, respondents reported whether they had self-harmed on purpose in the past year at age 25.

Social strata

A Stratum ID variable was constructed for the strata to indicate the intersectional group membership for each respondent, which is necessary for fitting the multilevel models (as discussed below). As outlined above, there were two categories for adolescent mental distress, two categories for sex, three categories for social deprivation, two categories for sexual identity, and four categories for ethnicity; a sketch of how such stratum identifiers can be constructed from the category codes is given below.
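The sketch assembles a Stratum ID by concatenating one category code per dimension. The code-to-category mapping for values not stated explicitly in the text (e.g., that males are coded 1) is an assumption, as are the function and variable names.

```python
def stratum_id(distress: int, sex: int, deprivation: int,
               sexual_identity: int, ethnicity: int) -> str:
    """Concatenate one category code per dimension into a stratum ID.

    Assumed coding (only the codes quoted in the text are confirmed):
    distress: 1 = no distress at 15/17, 2 = distress at either age
    sex: 1 = male, 2 = female
    deprivation: 1 = lowest, 2 = medium, 3 = highest IDACI tertile
    sexual_identity: 1 = heterosexual/straight, 2 = sexual minority
    ethnicity: 1 = White, 2 = Black, 3 = Asian, 4 = Other (incl. Mixed)
    """
    return f"{distress}{sex}{deprivation}{sexual_identity}{ethnicity}"

# The stratum described in the text:
print(stratum_id(2, 2, 1, 2, 3))  # -> "22123"
```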
For example, the ID code 22123 represents respondents who experienced mental distress at either age 15 or 17 (2), are female (2), from an area with the lowest social deprivation (1), identify as a sexual minority (2), and are of Asian ethnicity (3). By combining all combinations of social identity and position categories, there were 96 possible intersectional social strata (i.e., 2 × 2 × 3 × 2 × 4 = 96). However, because no respondents in the LSYPE matched certain intersectional group memberships, there were 69 strata with responses to the outcome variables (i.e., adulthood mental health problems at age 25) for those who had not attended university, and 79 strata for those who had attended university.

Statistical analyses

MAIHDA analyses involve the fitting of multilevel models whereby social strata (as denoted by the Stratum ID variable) are placed at level 2 and individual respondents are nested within this at level 1 (Evans et al., 2018). The total variance in the outcome is partitioned into between-strata (i.e., between intersections of identities/positions) and within-strata (i.e., within intersections of identities/positions) variance. A null model (with no main effects included) is first produced to calculate a Variance Partition Coefficient (VPC). The VPC can be used in a similar manner to an R² model fit statistic to determine the extent to which the social strata can predict scores on the outcome variable (Kern et al., 2020). The VPC of the null model is a measure of the discriminatory accuracy of the different intersectional strata (Axelsson Fisk et al., 2018; Evans et al., 2018). Mathematically, the VPC is analogous to an intra-class correlation coefficient (ICC), which expresses the correlation in scores on the outcome variable between individuals within a cluster. Interpreted as a VPC, a greater ICC value indicates more between-strata variability in the outcome variable, with less variability being explained by differences between individuals nested within strata (Evans, 2019a; Holman & Walker, 2021; Merlo, 2018). Axelsson Fisk et al. (2018) proposed the following grading scale for assessing VPC values, which are multiplied by 100 and expressed as percentages (0-100): non-existent (0-1), poor (>1 to ≤5), fair (>5 to ≤10), good (>10 to ≤20), very good (>20 to ≤30), and excellent (>30) differentiation between strata. The VPC is equal to the between-strata variance (σ²u) divided by the total variance, where the total variance is the sum of the between- and within-strata variance (σ²u + σ²e). In the current study, logistic regression models were used because all outcomes were binary. Hence, the within-strata variance (σ²e) is equal to the variance of the standard logistic distribution, which is π²/3 (Goldstein et al., 2002), and can be substituted into the following equation:

VPC = σ²u / (σ²u + π²/3)

Since the VPC is calculated for the null model, the main effects and interaction effects are conflated, which means it is not clear how much of the variability in the outcome is explained by the additive models (i.e., the main effects) and how much by the multiplicative models (i.e., the interactions). Therefore, a model that includes main effects only is then produced to determine whether the additive main effects of the social strata (i.e., fixed effects) can explain variance in the outcome variable. The Proportional Change in Variance (PCV) value is used to estimate what percentage of the variance in the VPC is accounted for by the additive main effects.
A PCV value is calculated based on the difference in between-strata variance between the null and main effects models. The PCV is calculated by deducting the between-strata variance of the main effects model from the between-strata variance of the null model, then dividing this by the between-strata variance of the null model:

PCV = (σ²u(null) − σ²u(main effects)) / σ²u(null)

PCV values are also multiplied by 100 and presented as percentages, with a high score indicating that the between-strata variability is mostly explained by the main effects, and a low score suggesting it may be mostly explained by interactions between social strata. Finally, where the PCV values indicate that the between-strata variability is mostly explained by interaction effects, an examination of the strata-level residuals shows the extent to which the predicted score for each stratum differs from the expected score based on the additive main effects. In order to do this, the expected incidences of the outcome based on the additive main effects are subtracted from the aforementioned predicted scores, which results in a difference known as the strata-level residual. Negative residuals indicate that incidences for the stratum are lower than would be expected based on the additive main effects, whereas positive residuals are higher than expected. That is, the residual shows how much an interaction effect (i.e., the combination of multiple identities/positions) differs from what is explained by the main effects alone (i.e., can interactions explain the outcome better than the main effects?). If the 95% credible intervals for the residual do not cross zero, these effects are considered to be statistically significant. The MAIHDA approach down-weights the residuals for intersections with small samples, so these social strata will not have a disproportionate effect on the results (Bell et al., 2019; Mahendran et al., 2022). All multilevel models were fit using MLwiN 3.02 (Rasbash et al., 2020) called from Stata 16.1 using the runmlwin command (Leckie & Charlton, 2012). Following the same estimation approaches and options used in many previous MAIHDA analyses (e.g., Evans et al., 2018), all analyses used Bayesian Markov Chain Monte Carlo (MCMC) estimation (Browne, 2019) with diffuse (non-informative) priors. The burn-in phase was 5000 iterations, with a total chain length of 50,000 iterations and thinning every 50 iterations. Stata syntax was adapted from Axelsson Fisk et al. (2018) to fit the models and obtain 95% credible intervals around estimates.

Research question 1: Does the university context shape any multiplicative effects of social identities and positions on longer-term mental health outcomes?

Before taking into account the main effects, the VPC values for the null models (see Table 2) suggested fair to excellent levels of between-stratum differences occurring at the intersectional strata level. This is supported by the graphs in Fig. 1, which display the predicted incidences of mental health problems at age 25 by social strata. In the graphs in Fig. 1, the predicted incidences (as represented by the black circles) differ for each stratum, indicating that there is variability occurring between strata. After taking into account the main effects, the PCV values were all above 90% (see Table 2), indicating that the main effects accounted for the majority of this variance for each outcome.
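As an illustration of how the VPC and PCV quantities reported here can be computed from model output, the sketch below re-implements the two formulas for a logistic multilevel model, with the within-strata variance fixed at π²/3. It is not the MLwiN/Stata code used in the study, and the example variance values are hypothetical.

```python
import math

LOGISTIC_VARIANCE = math.pi ** 2 / 3  # within-strata variance for logistic models

def vpc(between_strata_var: float) -> float:
    """Variance Partition Coefficient (%) for a logistic multilevel model."""
    return between_strata_var / (between_strata_var + LOGISTIC_VARIANCE) * 100

def pcv(null_var: float, main_effects_var: float) -> float:
    """Proportional Change in Variance (%) from the null to the main effects model."""
    return (null_var - main_effects_var) / null_var * 100

# Hypothetical between-strata variances from the null and main effects models
sigma2_null, sigma2_main = 0.35, 0.02
print(f"VPC = {vpc(sigma2_null):.1f}%")               # discriminatory accuracy of the strata
print(f"PCV = {pcv(sigma2_null, sigma2_main):.1f}%")  # share explained by the main effects
```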
Therefore, the additive models appeared to explain most of the differences in the incidence of adulthood mental health problems between strata (as extrapolated further below under Research Question 2). Fig. 2 displays the strata-level residuals (i.e., the extent to which each social stratum differed from what was explained by the main effects alone for each of the outcomes). In line with what was suggested by the large PCV values, the 95% credible intervals for each of the intersectional effects cross zero, so they are all non-significant. Therefore, no evidence of multiplicative intersectional effects could be ascertained from these analyses. This means that incidences of adulthood mental health problems (i.e., mental distress, chronic mental illness, and self-harm) are better explained by the main effects than by the interaction effects, for both those who had attended university and those who had not. Tables A1-A6 in Appendix A include the predicted incidence scores within each social stratum for each model, and can be used to identify the different strata in Figs. 1 and 2.

Research question 2: Does the university context shape any additive effects of social identities and positions on longer-term mental health outcomes?

Focusing on the additive main effects only (Table 2), there were some differences in main effects based on university context.

[Table 2. MAIHDA models predicting the likelihood of experiencing mental distress (GHQ), declaring a chronic mental illness, or declaring self-harm in the last year (at age 25), split by university attendance. Note: *95% credible intervals do not cross one, so the effect is significant. OR = odds ratio. The Deviance Information Criterion (DIC) is used as a goodness-of-fit measure for Bayesian multilevel models; lower DIC scores indicate a better fit. Both the VPC and PCV values have been multiplied by 100 and presented as percentages.]

For those who had not attended university, experiencing mental distress during adolescence led to a greater likelihood of experiencing mental distress (at age 25), declaring a chronic mental illness, or reporting self-harm in the last year (3.1, 2.6, and 3.5 times greater, respectively, than for those who did not experience mental distress during adolescence). For those who had attended university, experiencing mental distress during adolescence still led to significantly greater odds of experiencing mental distress (at age 25) or reporting self-harm in the last year, although the sizes of these odds were smaller (2.3 vs. 3.1 times and 2.9 vs. 3.5 times, respectively). However, the odds of declaring a chronic mental illness among those who experienced mental distress during adolescence and had attended university were non-significant. Females were significantly more likely to experience mental distress (at age 25) and declare a chronic mental illness than males, both for those who had attended university (1.3 and 1.5 times, respectively) and for those who had not (1.3 and 1.4 times, respectively). For those who had not attended university, respondents with the medium or highest deprivation levels (IDACI) had a greater likelihood of experiencing mental distress (1.4 times for both, compared to the lowest IDACI tertile). Respondents with the highest deprivation levels also had significantly greater odds of declaring a chronic mental illness (1.6 times).
However, for those who had attended university, there were no significant associations between social deprivation and mental health problems. Sexual minority respondents were significantly more likely to experience mental health problems than heterosexual/straight respondents regardless of university attendance (2.1-7.2 times), although sexual minority respondents who had not attended university had far greater odds of reporting having self-harmed than those who had attended university (7.2 vs. 3.9 times). Finally, Black and Asian respondents were less likely to declare a chronic mental illness than White respondents, regardless of university attendance (3.0-3.4 times).

Discussion

The current study aimed to ascertain whether the social context of university has an effect on shaping mental health inequalities, and whether such inequalities are multiplicative, additive, or both. For young people who held particular group memberships, the findings suggested that they were more likely to have better mental health outcomes if they had attended university. The analyses did not reveal any evidence of multiplicative intersectional effects, which is consistent with many other MAIHDA studies exploring various health inequalities (Holman et al., 2020). Therefore, social identities and positions do not appear to amplify each other in their predictions of mental health problems at age 25.

[Fig. 1. Predicted incidence (%) of mental health problems by social strata for the null models (main effects and interaction effects conflated). The black circles represent the predicted incidence in each social stratum; the vertical lines are 95% credible intervals. Strata are ranked from the intersections with the lowest to the highest incidence rates.]

Instead, these social dimensions are layered and independent, so based on the current analyses, additive models appear to be most suitable for understanding mental health inequalities in young people. Similar to Evans and Erickson's (2019) findings on depression among adolescents and young adults, the current study's findings suggest that interactions of social identities may not be appropriate for predicting longer-term mental health outcomes within certain contexts. Thus, the main social identities of young people that we investigated may help explain mental health inequalities better, but we need to be cautious when interpreting the impact of multiple identities. Hence, we will now focus on the additive model results.

There were some differences in main effects based on university attendance. For respondents who had not attended university, experiencing mental distress during adolescence, being female, growing up in a more deprived area, and identifying as a sexual minority all appeared to increase the odds of experiencing mental distress at age 25. Not all of these main effects were present for those who had attended university. Females were more likely to experience mental distress or declare a mental illness than males, regardless of the university context. This is consistent with previous findings showing that females are more likely to experience internalising problems (anxiety and depression) (Rosenfield & Smith, 2012). There was a lack of association between experiences of adolescent mental distress and declarations of chronic mental illness at age 25 for those who had attended university (despite a relationship being present for those who had not attended university).
[Fig. 2. Intersectional effects on the predicted incidence (%) of mental health problems by social strata. The black circles represent the predicted incidence in each social stratum based on the interaction effects minus the main effects (represented by the horizontal line); the vertical lines are 95% credible intervals. Strata are ranked according to the extent to which each interaction effect differs from what is explained by the main effects alone.]

This suggests that the university environment could be having a positive effect on outcomes even for those with a history of mental distress. On the one hand, it may be the case that young people who have not experienced mental distress during adolescence are more likely to attend university. On the other hand, 56.9% of respondents who experienced mental distress during their adolescence attended university, compared to 53.4% of those who had not experienced mental distress (see Table 1). Hence, this may suggest that universities are "reducing" the mental distress faced by young people. The mechanisms for how this occurs are unclear, but it may be related to the sense of community, social networks, realisation of life goals, or supportive culture of university. Policies have existed for some time that emphasise the benefits of universities fostering social cohesion and a sense of belonging among students, with the potential for positive impacts on their wellbeing (Ahn & Davis, 2020; Hughes & Spanner, 2019; Mountford-Zimdars et al., 2015). Furthermore, increasing efforts to embed mental health and wellbeing support at university (e.g., Byrom, 2018) could be having a longer-term benefit on graduates' mental health outcomes (e.g., through increasing resilience). University environments could therefore act as a protective factor against mental health problems.

For those who had attended university, growing up in a more deprived area did not predict mental distress at age 25. It is unclear why this was the case, but one argument is that those from areas with the highest levels of deprivation were less likely to attend university, and that is why there was no association. Indeed, within the university population, our descriptive statistics showed that 20.6% were from the highest deprivation group, whereas outside of university, 29.8% were from this same group. Alternatively, these positive effects could be due to the upward social mobility opportunities afforded by higher education rather than the environment specifically, since university education might reduce some of the economic disparities that lead to unequal health outcomes. For example, austerity measures that disproportionately impact the most deprived groups are associated with poorer mental health outcomes (McAllister et al., 2018). However, individuals from a low SES background are also more likely to experience traumatic events that lead to mental distress, and these might occur early in life (Ashton et al., 2016; Hatch & Dohrenwend, 2007; Sweeney & Taggart, 2018). Therefore, the reduction in disadvantage that potentially results from a university education might not be enough to counter pre-existing mental distress from childhood and adolescence.

Interestingly, despite previous research showing that ethnic minority individuals are at a greater risk of experiencing mental health problems (Harnett & Ressler, 2021), the current findings revealed that both Black and Asian individuals were less likely to declare a mental illness than White individuals, regardless of whether they had attended university.
However, African-Caribbean groups in the UK have been found to experience stigma and negative pathways to accessing mental health services (e.g., police-enforced mandatory attendance at psychiatric services), which can delay their help-seeking compared to White individuals (Mantovani et al., 2017; Morgan et al., 2004). Similarly, South Asian individuals have conveyed reluctance to share concerns at UK-based mental health services due to the perception that there will be a lack of sensitivity to their cultural needs (Bowl, 2007). Thus, the lower incidence of declarations of mental illness among Black and Asian respondents in the current study could be due to them not having sought help in the past that might have led to a diagnosis. Indeed, the absence of significant effects for ethnicity on the mental distress (GHQ) measure suggests that Black and Asian individuals may only be faring better than White respondents in terms of having a lower incidence of declaring a mental illness, not necessarily in terms of having fewer mental health problems.

Sexual minority individuals were more likely to experience all types of mental health problems in both the university and non-university contexts. However, the odds ratios for self-harm were half the size for those who had attended university. For some, higher education is seen as an open and inclusive environment in which individuals are more free to explore their sexual identities (Formby, 2013, 2015). Scourfield et al. (2008) found that one strategy of resilience for LGBT young people facing homophobia was to move to a gay-friendly safe place like university. In a survey of LGBTQ students in England, many respondents noted that they had seen posters related to LGBTQ issues around campus, which made them feel visible, and 76.5% of them also reported feeling comfortable challenging homophobic, biphobic or transphobic discrimination in the university environment (Grimwood, 2017). Thus, having a space to express their true sexual identities may have longer-term mitigating effects on the risk of self-harming behaviour for some sexual minority individuals.

Limitations and implications

One of the limitations of the current study is that the sample sizes were relatively small for examining intersectional effects, and some social strata were not present in the sample at all. It is anticipated that the MAIHDA approach will have addressed this issue by down-weighting the residuals for intersections with small samples (Bell et al., 2019; Mahendran et al., 2022). However, it is also possible that the analyses will only have performed as well as (although no worse than) main effects regressions (Mahendran et al., 2022). Nonetheless, main effects regressions would not have been a useful means of exploring multiplicative intersectional effects, so MAIHDA was the optimal approach for ascertaining the absence of a multiplicative model. The follow-up additive model would then have performed at least as well as a single-level regression analysis. Future studies might benefit from exploring different social identities and positions across new social contexts, drawing on data from larger cohort studies using the MAIHDA approach. It would also be useful to explore the role of different university types to help disentangle whether mental health outcomes differ based on environmental differences or as a result of receiving a university education.
The implications of the current study are that interventions for marginalised groups might be beneficial if they are targeted at the broad social group memberships found to be associated with mental health problems, instead of being targeted at specific intersectional groups. That is, interventions could be designed, for example, for females and for sexual minority individuals, rather than specifically targeted at sexual minority females only. This could benefit individuals who hold one or both group memberships and avoids the risk of stigmatisation (Bauer & Scheim, 2019; Hernández-Yumar et al., 2018). This may be a judicious approach until actual evidence has been found for combinations of identities and positions having an aggravating effect on mental health problems within the contexts investigated in this study. However, there is a recognition that sometimes when people share the same social identity or position, they may be able to feel more connection and work together (Haslam et al., 2022). Hence, if interventions are held in person and in groups, then having interventions based on intersectional identities and positions may be appropriate. Furthermore, since outcomes appeared to be better for some marginalised groups who attended university, it would be useful to understand more about what appears to be benefiting those particular groups of individuals who have been to university. For example, the increase in university-based initiatives for promoting positive mental health and wellbeing may be key to supporting longer-term outcomes, so there is now a question about how these could be replicated within the general population.

Conclusion

In conclusion, use of the quantitative MAIHDA approach revealed that the university context does not appear to shape any multiplicative intersectional effects of social identities and positions on longer-term mental health outcomes. However, differences in mental health inequalities based on university attendance were found for the additive effects. Better mental health outcomes were found for sexual minorities who had attended university.
Design Principles for Distributed Context Modeling of Autonomous Systems

The use of unmanned aerial vehicles (UAV) has seen a rapid increase due to advancements in drone technology and the wide range of applications. Their adaptability and versatility make them suitable for a great variety of tasks. To fully realize their potential, autonomous operation is crucial. For modeling environmental perception (i.e., contextual information) as a key enabler of autonomous operations, guiding principles are needed to support system designers in modeling contextual information for autonomous systems. This article addresses precisely this concern and seeks to establish a set of design principles for the distributed context modeling of autonomous systems, such as autonomous UAVs. This is achieved through a systematic review of the literature and the identification of meta-requirements by leveraging a generic context classification model, which serves as the foundation for deriving the design principles. Subsequently, these design principles undergo evaluation within the context of autonomous UAVs through a use case analysis. The goal of this research is to provide a foundation for the development of autonomous systems that can effectively perceive, interpret, and distribute their context. The design principles can serve as a prescriptive guide for the future development of autonomous systems, ensuring efficient and effective operations.

I. INTRODUCTION

While unmanned aerial vehicles (UAV), commonly referred to as drones, have been utilized in military contexts for a while, their civilian application has advanced considerably in recent times. The crucial factors behind the sharp increase in commercial use are the rapid development of drone technology and the wide range of possible applications [1]. In this regard, drones are particularly well-suited for surveying extensive terrains that are challenging to access. Drones can be used for a variety of purposes and in various applications, such as the observation, monitoring, and inspection of facilities, the transportation of goods, or the support of post-disaster operations. However, drones can only realize their full potential if they can be deployed autonomously. Autonomous drones, like autonomous systems in general, are capable of performing an assigned task autonomously and largely without human intervention [2]. Autonomous systems differ from the automated systems widely used today in their high adaptability, which requires, among other things, that the autonomous system can perceive and interpret its environment, also referred to as its context. Thus, the perception and interpretation of contextual information are essential prerequisites for fully autonomous behavior [3]. Additionally, the distribution of context information between individual system participants offers enormous opportunities and can efficiently improve the overall system by providing additional contextual information and expanding the perception range. However, special requirements arise for distributed context modeling of autonomous systems, and prescriptive knowledge is required for the development of such complex dependencies. Design principles can help document such prescriptive knowledge and are a suitable medium to capture prescriptive design knowledge [4].
The goal of this research is to develop design principles for distributed context modeling of autonomous systems. For this purpose, a systematic literature review of distributed context models in the area of autonomous drones was conducted, but other systems were also considered, such as autonomous robots, industrial environments, and ubiquitous computing. The rest of this article is organized as follows. In Section II, a theoretical background is given to serve as a basis for the context modeling of autonomous, cooperative systems. Section III presents the applied research method for developing design principles for distributed context modeling, and introduces a use case representing a cooperative system. The meta-requirement generation is described in Section IV and, finally, the design principles are elaborated in Section V. After an evaluation (Section VI), a short summary concludes this article and a research outlook is given in Section VII.

II. THEORETICAL BACKGROUND

A. CONTEXT MODELING

Since the early 1990s, there has been a long-lasting interest in context-aware, intelligent systems that can adapt their behavior based on contextual inputs [5]. For the development of such sophisticated, context-aware systems, an understanding of the notion of "context" is essential. While several researchers have tried to define what context is, the approach of Dey and Abowd in the field of ubiquitous computing prevailed, defining context as "any information that can be used to characterize the situation of an entity" [6].

Apart from defining context through conceptualization, efforts have also been made to categorize and subcategorize it into various types [7]. Structuring relevant context information in the initial stages of engineering is known as "context modeling." A context model is a simplified depiction of context utilized to delineate and structure context elements. A context element is defined as a fragment of context that characterizes a contextual aspect, such as the present location of the system under consideration. Given the high complexity of context modeling, some researchers have elaborated the fundamental characteristics that may influence the creation of a context model. For instance, Liu's classification [8] of context modeling presents a thorough outline of its fundamental conceptual aspects. This categorization draws upon studies of context awareness and highlights five crucial dimensions and 17 distinctive characteristics of context modeling (see Table 1).
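To make the notion of a context element more concrete, the sketch below represents one as a small data structure. This is an illustrative design only, not a structure prescribed by the cited classification; the field names and the confidence attribute are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContextElement:
    """One fragment of context characterizing a single contextual aspect."""
    name: str               # e.g., "position" or "battery_level"
    value: object           # the measured or derived value
    source: str             # which sensor or peer system produced it
    timestamp: datetime     # when the value was observed
    confidence: float = 1.0 # 0..1, supports reasoning about uncertainty

# A context model is then a structured collection of such elements, e.g.,
# the current GPS position of a UAV as perceived by its own receiver:
position = ContextElement(
    name="position",
    value=(48.137, 11.575, 120.0),  # lat, lon, altitude in m (hypothetical)
    source="uav_1/gps",
    timestamp=datetime.now(timezone.utc),
    confidence=0.95,
)
```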
The conceptual approach of context modeling has since been applied by researchers to other domains as well. In addition to intensive investigation in the domain of ubiquitous computing, attempts have been made to transfer context modeling to the domain of industrial applications [9], [10], [11] and to the domain of autonomous systems [12], [13], [14], [15]. In most cases, current methods concentrate on the development of individual systems rather than explicitly emphasizing the elicitation and documentation of context information, regardless of the application domain. However, for autonomous, collaborating systems, it is crucial, and also challenging, to give adequate attention to the system's context. This is because not only does each individual system perceive its context with the help of sensors in order to derive its individual behavior, but a group of systems also exchanges context information among its members. This exchange of context information enhances perception considerably, but simultaneously introduces novel challenges, such as grappling with contextual uncertainty stemming from divergent sensor measurements.

B. AUTONOMOUS SYSTEMS

A system that can achieve a given goal independently and can adapt to the situation without human control or detailed programming is referred to as "autonomous" [16]. The capabilities of such systems and their application domains have expanded significantly in recent years, with widely acclaimed successes in several applications [17]. The notion of an "autonomous system" is very broad in this sense, and two main types can be distinguished:

1) Autonomous systems that operate only in a virtual world, such as the Internet.
2) Autonomous systems that have an impact on the physical world, such as robots, UAVs, an autonomous energy management system, or an entire smart city that autonomously controls processes.

This article primarily focuses on autonomous robot systems, which constitute the second type of autonomous system mentioned above. To achieve their objectives, such systems must accurately perceive and evaluate the environment, the system's state, and the task at hand. Based on the system's state and the situation, the system independently formulates and selects various actions to accomplish the respective objectives.

Autonomous systems possess the capability to perceive and operate in complex and dynamic environments [18], [19] and, compared to automated systems, accomplish diverse actions and tasks independently [20], [21]. Autonomy typically requires a system equipped with multisource sensors and software that can process complex tasks, enabling the system to achieve its goals and objectives independently within a specific timeframe, without external intervention and with no or only limited communication with the outside world. Furthermore, such systems can learn and develop in unfamiliar environments, continually enhancing their task-completion capabilities and maintaining exceptional performance. Autonomy can be regarded as an advancement of automation, leading toward higher mobility and intelligence [22].
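The perceive-evaluate-act behavior described above is commonly realized as a control loop. The sketch below is a schematic illustration under assumed method names (sense, interpret, plan, act, and goal.achieved are placeholders), not an implementation of any specific system.

```python
def autonomy_loop(system, goal):
    """Schematic perceive-interpret-act cycle of an autonomous system."""
    while not goal.achieved():
        observations = system.sense()             # multisource sensor readings
        context = system.interpret(observations)  # update the context model
        action = system.plan(goal, context)       # select an action toward the goal
        system.act(action)                        # affect the physical world
```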
It can be recognized that incremental and agile software development processes are becoming increasingly important in the domain of autonomous system design [23]. Although the applicability of agile methodologies to autonomous system design is subject to shifts in emphasis and preference within the engineering and development community, standardized design principles provide a structured framework for design decision-making, promote collaboration, enhance efficiency, and ensure a user-centered focus throughout agile software development processes. They are a valuable tool for agile teams looking to consistently deliver high-quality products that meet both user needs and business objectives. Thus, incorporating agile principles into the design paradigm can enhance the responsiveness and effectiveness of agile autonomous system development, particularly in addressing dynamic requirements gathering challenges.

III. RESEARCH METHOD

A. STUDY DESIGN

Design principles, as a form of formalized knowledge, are becoming increasingly popular in many scientific fields because they allow researchers to capture abstract knowledge that relates to a class of problems rather than a single problem [4], [24]. To ensure practical relevance and applicability, design principles should provide accessibility, effectiveness, and, most importantly, guidance for action.

This scientific analysis uses a rigorous method to create design principles, which is based on [25]. A graphical description of the method is shown in Fig. 1. This method not only supports various approaches to the formation of design principles (Design Expert Observation, Derivation from Laboratory-Based Design Practice, Derivation from Design Practice, Experience, Review of Existing Principles, Analysis of Existing Designs, etc. [26]) but has also been used by several researchers in the past to create design principles [27], [28], [29].

First, in accordance with the method for creating design principles from Möller et al. [25], it was defined what purpose the design principles to be formed should serve. The aim of the so-called Solution Objective (methodical step - see Section I) is to state the purpose of the design principles concisely and precisely.

In the next step, the Research Context and the Research Approach (see Sections II and III) were defined. For the creation of design principles in this article, an empirical approach was chosen and existing designs of context models in different application areas were analyzed. Since the design principles to be created aim at providing design knowledge in advance to support the design of a context model before the design process has taken place, a supportive approach was chosen.
Since context modeling for autonomous robot systems is still under development and thus subject to continuous change, existing concepts from the relevant literature were evaluated for the creation of the design principles (empirical approach). A literature search was conducted according to established guidelines [30], [31]. Various combinations of search terms (e.g., "context modeling" or "environment model") were used in a Google Scholar search. In order to investigate a broad spectrum of context modeling, the set of application domains was not narrowed down. To ensure timeliness, only articles published in the year 2000 or later were reviewed. Furthermore, only articles that had undergone a peer-review process were considered in order to meet a quality standard. Where the content was suitable, cross-references of the reviewed literature were also considered. In the end, 31 papers (see the Appendix) were considered as the Knowledge Base (see Section IV) of the analysis.

Initially, Meta-Requirements (see Section V) were identified as a typical avenue for design principle generation. Following the advice of Koppenhagen et al. [32], the array of requirements was clustered before the design principle formulation. This also guarantees that the design principles produced target issues of significant importance, rather than merely addressing a wide range of specific problems [25].

In the following, the design knowledge for the creation of context models of autonomous systems was derived in the form of Design Principles (see Section VI). The design principles follow the general concept of Gregor et al. [24], who defined the anatomy of design principles so that design principles are "understandable and useful in real-world design contexts." A linguistic template from Chandra et al. [4] was used because it provides conceptual guidance on the building blocks of the design principles in addition to linguistic guidance. The template is as follows [4]: "Provide the system with [material property - in terms of form and function] in order for users to [activity of user/group of users - in terms of action], given that [boundary conditions - user group's characteristics or implementation settings]."

The evaluation of constructed artifacts such as design principles is an essential step in the design cycle to generate rigorous design knowledge. For the preliminary Evaluation (see Section VII), each design principle was applied to the use case "Tracking and Surrounding a Stationary Target with UAVs" after the creation process. Additionally, the collection of design principles was assessed regarding its reusability following the framework of Iivari et al. [33]. During evaluation, parts of the design principles and their characteristics were revised and, whenever useful, the descriptions were extended to increase common understanding.
B. USE CASE: TRACKING AND SURROUNDING A STATIONARY TARGET WITH UAVS

For preliminary use case validation, a group of two homogeneous, autonomous UAVs is considered, which means both UAVs have the same context perception and action capabilities. The UAVs use decentralized coordination, without a centralized controller or leader that possesses complete information about the environment and makes decisions for each UAV. To simplify the process, the environment is divided into a square grid, allowing for the identification of specific grid cells in the scenario. Utilizing this information, each UAV is able to determine its coordinates and its current heading, specifying the UAV's orientation. The use case consists of two phases: the tracking phase and the surrounding phase, which are illustrated in Fig. 2(a) and (b).

In the tracking phase, each UAV flies straight with constant speed and altitude until it detects the target ahead or the border of the area to be searched, which is known to the UAVs. It is assumed that the target T (i.e., a tower) is higher than the initial altitude of the UAVs. When a UAV reaches the end of the area to be searched, it rotates 120° and continues tracking. This blind search is continued until the target is found. As soon as the target is found, the surround phase is initialized. For this, the first UAV that finds the target transmits the coordinates of the target to the other UAV. The UAVs use direct communication and have the goal of staying equidistant from the target and from each other while surrounding the target. The surround task of the stationary target is the main goal of the UAVs in the proposed use case. The UAVs have to perceive their context and have to communicate this perception in order to be able to fulfill a collaborative task. Therefore, coordination mechanisms are necessary. However, this work focuses only on the creation of a context model for the design of a collaborative robot system. The design principles formulated in this study provide substantial guidance for the implementation of collaborative use cases involving robot systems.

IV. META-REQUIREMENTS FOR DISTRIBUTED CONTEXT MODELS

Meta-requirements concern a collection of requirements and are therefore formulated in an abstract and general way. Applying meta-requirements provides an effective method to support requirements specification completeness [34]. The collected meta-requirements (see Table 2) rely on the generic classification for context modeling, which acts as a structure for organizing them because it facilitates a comprehensive understanding of the context modeling subject. To achieve this, the 17 distinctive characteristics within the five dimensions of the classification are used as a baseline: Context Acquisition, Context Modeling, Context Filtering and Fusion, Context Storage, and Context Application. The dimensions Context Acquisition, Context Modeling, and Context Filtering/Fusion were adopted unchanged for the creation of the meta-requirements, whereas Context Storage was renamed to Context Distribution to emphasize the architectural character of this dimension. These four dimensions can directly be used for the creation of technology-oriented design principles, as they have relevant implications for the context modeling of autonomous robot systems and are equally relevant for generic context modeling as well as for use cases in the field of autonomous robot systems.
The dimension Context Application remains unconsidered, as the initial definition of its characteristics is exclusively focused on the field of mobile computing. In the field of autonomous systems, however, context information is used for the generation of situational awareness. The autonomous system shall be put in a state to perceive and understand its contextual environment. Based on this perception and its individual and collective goals, decisions shall be made to achieve these goals. Therefore, a distinction between different context applications for autonomous systems is not required, and this dimension was not considered further for the creation of the meta-requirements.

The aspect of Context Interoperability is also relevant, as several approaches to modeling context typically lack formality and interoperability. An early proposal by Strang et al. [35] tried to close the formality gap by using ontologies as a foundation to describe contextual facts and interdependencies. Several researchers [36], [37], [38], [39] followed this approach and tried to counteract the lack of formality. Additionally, IEEE introduced the standard 1872 "Standard Ontologies for Robotics and Automation" [40] to further formalize the creation of ontologies for robotic systems, which was extended in 2021 to represent additional domain-specific concepts, definitions, and axioms commonly used in autonomous robotics.

Representing context information in a formal and standardized way is the key enabler for an intelligent use of this information. This task is often referred to as Context Reasoning, which has been included in the meta-requirements, as many researchers exploit it to approach the inherent complexity of context-aware applications [41], [42], [43], [44]. As highlighted by Nurmi [45], context reasoning can be used for checking correctness on the one hand and for deducing new and relevant information from the various sources of context data on the other. Table 2 summarizes the six meta-requirements for distributed context models derived from the literature and the generic classification as introduced in Section II.

V. DESIGN PRINCIPLES FOR DISTRIBUTED CONTEXT MODELING

A. CONTEXT ACQUISITION

1) DESIGN PRINCIPLE 1

Provide the context model with convenient functionalities in order to allow the autonomous system to process sensed, derived, and externally provided context information, given that the context model enables efficient cooperation and multilateral information distribution.

This design principle considers the different ways in which context information can be acquired. Damak et al. [12], for example, stress the importance of including the "system's environment, and external interfaces, as well as the system's features and characteristics" in their operational context model to include all instances of information of an autonomous system and the expected behavior. Therefore, context models must fulfill all three dimensions of context information acquisition to be used for autonomous systems.

2) USE CASE ILLUSTRATION

For the considered use case, the UAVs need to be able to directly sense context information (e.g., GPS coordinates, motor RPM, etc.) and to derive information as well (e.g., flight speed and distance to target derived from GPS). For the collaborative part, it is also important that they are able to capture and process context information that is externally provided by other UAVs (e.g., position, surrounding speed, and distance).
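As a minimal illustration of this design principle (a sketch under assumed names; none of the identifiers below come from the cited literature), the three acquisition paths can be made explicit in the context model:

```python
from dataclasses import dataclass
from enum import Enum, auto
import math

class Acquisition(Enum):
    SENSED = auto()    # read directly from an on-board sensor
    DERIVED = auto()   # computed from other context elements
    EXTERNAL = auto()  # received from a cooperating system

@dataclass
class ContextElement:
    aspect: str
    value: object
    acquisition: Acquisition

def derive_speed(fix_old, fix_new, dt):
    """Derive ground speed (m/s) from two sensed position fixes in a
    local metric frame; a stand-in for proper geodetic conversion."""
    return math.dist(fix_old, fix_new) / dt

fix_old, fix_new = (0.0, 0.0), (12.0, 5.0)  # sensed positions in meters
model = [
    ContextElement("position", fix_new, Acquisition.SENSED),
    ContextElement("speed", derive_speed(fix_old, fix_new, 1.0), Acquisition.DERIVED),
    ContextElement("peer_position", (40.0, -3.0), Acquisition.EXTERNAL),  # from the other UAV
]
```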
B. CONTEXT MODELING

1) DESIGN PRINCIPLE 2

Provide the context model with capabilities to represent context information in a structured and comprehensible way in order to allow the designers of an autonomous system to visualize, integrate, and implement the relevant context information, given that the context model makes efficient cooperation and multilateral information distribution possible.

Model-based approaches are of particular relevance if persons with different working backgrounds have to interact during the design of complex systems. Models improve communication and information sharing, as the problem is presented in a simple and comprehensible form. While some authors use graphical modeling approaches [46], [47], it can be recognized that ontology exploitation can be seen as a quasi-standard [36], [37], [42], [48], which is also summarized by Bayat et al. [38]: "The concepts in an ontology are, in general, the concepts that are shared by most of the community. Thus, we can say that an ontology captures a common understanding, or the consensual knowledge, about the domain."

2) USE CASE ILLUSTRATION

These aspects are relevant for the Tracking and Surrounding use case. The representation of context information within an ontology allows the semantic visualization of all relevant context information and can, therefore, foster a common understanding among stakeholders. Thus, the rigorous specification in a machine-processable and formal way supports the integration and implementation of the context model into the autonomous system.

C. CONTEXT FILTERING/FUSION

1) DESIGN PRINCIPLE 3

Provide the context model with filtering and fusion capabilities in order to allow the autonomous system to only use relevant and valuable context information, given that the context model enables efficient cooperation and multilateral information distribution.

It is necessary to effectively reduce the large amount of context information by selective filtering, since raw sensor data are simple, unstable, and inaccurate after acquisition. On the one hand, semantic information can already be extracted from the sensor data; on the other hand, only relevant context information has to be processed by the autonomous system. The issue is summarized by Yeong et al. by highlighting that "it is always essential to consider the advantages, disadvantages, and limitations of the selected group of sensors …" [49].

The goal of context fusion is to generate a maximum gain from inconsistent information. This inconsistent, sometimes even contradictory, information results from the generation of context information via different sensor devices within a single system or from the exchange of context information by cooperative system participants. Various researchers are exploring approaches and rules to maximize the gain from context fusion [15], [18], [49].

2) USE CASE ILLUSTRATION

Although Tracking and Surrounding a Stationary Target is a simplified use case representing cooperative tasks, the amount of possible context information could already be huge. However, for an efficient context model, the system does not need to process more obstacles than necessary, as only the target and the other UAV are foreseen in the use case. Also, the implementation of knowledge fusion techniques can help to enhance the efficiency of this simplified use case whenever inconsistent information is generated.
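One common fusion rule, offered here purely as an illustrative sketch (the Gaussian-noise assumption and all names are ours, not drawn from the cited works), is the variance-weighted combination of two divergent estimates of the same context element, e.g., the target coordinate reported by both UAVs:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Variance-weighted fusion of two scalar estimates of the same
    quantity; the more certain estimate receives the larger weight."""
    w_a = var_b / (var_a + var_b)  # weight is inverse to own variance
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)  # never exceeds min(var_a, var_b)
    return fused, fused_var

# Divergent target x-coordinates sensed by the two UAVs (meters).
x, var = fuse(est_a=104.0, var_a=4.0, est_b=98.0, var_b=1.0)
print(f"fused x = {x:.1f} m, variance = {var:.2f}")  # lands closer to the more certain UAV
```

Note that the fused variance is smaller than either input variance, which is exactly the gain from inconsistent information that the principle asks for.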
D. CONTEXT DISTRIBUTION

1) DESIGN PRINCIPLE 4

Provide the context model with capable interfaces that allow decentralized distribution of the relevant context information if no central distribution architecture is applied, in order to share context information between system contributors, given that the context model enables efficient cooperation and multilateral information distribution.

This design principle is critical, especially for the cooperative and architectural aspects of the autonomous robot system. Distributed context models must either be designed in such a way that context information can be shared directly between system contributors, or the information is collected and merged by a collector (often a ground control station), which then distributes it centrally. An example of a centrally organized architecture can be found in Cavaliere et al. [50], whereas de Freitas et al. [51] discuss a decentralized architecture for a "Multipurpose Localization Service for Cooperative Multi-UAV Systems."

2) USE CASE ILLUSTRATION

Similarly, for the considered use case, the UAVs would directly share the relevant context information during use case execution, as there is no central collector like a ground control station available.

E. CONTEXT INTEROPERABILITY

1) DESIGN PRINCIPLE 5

Provide the context model with standardized data formats and communication protocols in order to enable interoperability of the context information between system contributors, given that the context model enables efficient cooperation and multilateral information distribution.

This design principle refers to the capability of system contributors to share context information by using common data formats and communication protocols. With the help of semantic interoperability, the system contributors are able to interpret the exchanged information meaningfully and accurately. Rode and Turner [52] emphasize the necessity "that there is a way for different agents to represent contexts and agree on the meaning of contextual knowledge. This implies that there exists a representation language for contextual knowledge."

2) USE CASE ILLUSTRATION

The aspect of interoperability is of high relevance because the distribution of context information is the key enabler for cooperation in the use case "Tracking and Surrounding a Stationary Target with UAVs." An easy solution would be an agreement to adopt identical semantic standards for the individual UAVs, which is straightforward for the simplified use case with two UAVs. However, this agreement becomes much more difficult and complex for larger system networks, especially if these are deployed by different operators.

F. CONTEXT REASONING

1) DESIGN PRINCIPLE 6

Provide the context model with sufficient reasoning capabilities in order to allow system contributors to deduce new information from uncertain context information, given that the context model enables efficient cooperation and multilateral information distribution.
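As a minimal illustration of this principle (the plausibility rule, the speed bound, and all names below are our assumptions, not taken from the cited works), reasoning can both check incoming context information for correctness and deduce missing values:

```python
import math

MAX_SPEED = 20.0  # assumed maximum UAV speed in m/s (illustrative bound)

def plausible(last_pos, last_t, reported_pos, reported_t, max_speed=MAX_SPEED):
    """Reject a reported position that would require exceeding the known
    maximum speed -- a simple correctness check on context data."""
    dt = reported_t - last_t
    if dt <= 0:
        return False  # stale or reordered report
    return math.dist(last_pos, reported_pos) / dt <= max_speed

def estimate_missing(last_pos, velocity, dt):
    """Deduce a missing position value by dead reckoning from the last
    known state -- new information derived from available context."""
    return (last_pos[0] + velocity[0] * dt, last_pos[1] + velocity[1] * dt)

# A peer report that jumps 500 m in one second is flagged as implausible.
print(plausible((0.0, 0.0), 10.0, (500.0, 0.0), 11.0))  # -> False
print(estimate_missing((0.0, 0.0), (5.0, 0.0), 2.0))    # -> (10.0, 0.0)
```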
Since context data are fundamentally characterized by uncertainty and imperfection [53], there is a need for reasoning over context information in autonomous systems. Even in a single system, sensor inaccuracies or connectivity failures result in the problem that not all context information is available with certainty at all times. Furthermore, if a piece of context information comes from multiple sources, the information may become ambiguous. In this case, the task of reasoning is to identify possible errors, make predictions about missing values, and decide on the quality and validity of the collected information. For this reason, reasoning plays an essential role in the decision-making process of the autonomous system, which defines its behavior based on the collected context information and a set of decision rules.

2) USE CASE ILLUSTRATION

Considering the problem of imperfect context information is relevant for the use case "Tracking and Surrounding a Stationary Target with UAVs." Due to the nature of imperfect sensory data, it can be assumed that both UAVs generate differing information about a specific sensed context element. With the integration of context reasoning, the error can possibly be identified, and the UAVs can combine the probabilistic context information to act cooperatively.

VI. PRELIMINARY EVALUATION AND DISCUSSION

In this section, the created design principles for distributed context modeling of autonomous systems are examined and preliminarily evaluated in an analytical way. Additionally, contributing aspects of and limitations to the proposed design principles are discussed.

A. PRELIMINARY EVALUATION

As prescriptive statements, the proposed design principles are adequately general in order to address a class of artifacts rather than one specific instance [4], [24]. For the evaluation of reusability, the "light reusability evaluation of design principles" framework proposed by Iivari et al. [33] is used, which applies the criteria Accessibility, Importance, Novelty and Insightfulness, Actability and Guidance, and Effectiveness.
For Accessibility, the set of design principles needs to be understandable and comprehensible for the target community. The proposed design principles are created for domain experts in the field of context modeling for autonomous systems and are expressed in understandable language. This, together with a recurring structure, ensures successful communication with the domain experts, who can exploit the set of design principles as a practical guideline. The Importance to domain experts and practitioners is given by addressing distributed context modeling of autonomous systems as a real-world problem. Developments and innovations in the field of autonomous systems enable their use in various areas of society and bring the technology from the laboratory into public life. The complexity rises even further with distributed autonomous systems, and it is of crucial importance to consider all relevant aspects of the system design, including distributed context modeling. Regarding Novelty and Insightfulness, one can recognize that a collection of six design principles focusing on distributed context modeling to enable efficient cooperation and multilateral data distribution is, to the best of our knowledge, a novel approach. Relevant insights are given to meet the expectations of practitioners. The set of design principles is based on a context modeling classification model [8] and is derived from a case study including various context modeling approaches. Using this broad knowledge base and providing an exemplary use case, it is assured that the design principles "can be acted and carried out in practice" [33]. Therefore, the requirements are met for both Actability and Guidance. By focusing only on the main dimensions of context modeling, a sound balance between guidance and flexibility is assured, so that the design principles provide appropriate guidance without being too restrictive. A complete evaluation of Effectiveness, as mentioned by Iivari et al., would require a long-lasting "naturalistic approach so that a real instantiated system is used by real users in a real organizational context over a longer period so that possible effects of the system can be identified." However, the design principles are derived from practitioners for practitioners and are tailored to the application level. Thus, they prove effective for the design and development of distributed context models in autonomous systems. The analytical evaluation refers to the entirety of all six design principles and is consistent with the evaluation framework of Iivari et al. [33]. Despite their abstractness and reusability, the proposed design principles, as a form of generic and descriptive knowledge, provide sufficient guidelines for standardized implementation measures.
B. PRACTICAL AND RESEARCH CONTRIBUTIONS

The proposed design principles, drawn from existing literature, yield numerous practical insights, as exemplified in design principles 4 and 5, which pertain to distribution and interoperability. These two principles play pivotal roles in establishing collaborative systems, enabling assets with distributed context models to seamlessly exchange information with other system participants. Concurrently, the presented design principles offer a practical roadmap for the implementation of distributed context models in autonomous systems, providing practitioners with a clear framework encompassing the six meta-requirements and their associated technical implications. In the use case "Tracking and Surrounding a Stationary Target with UAVs," these principles empower unmanned vehicles to autonomously execute missions, enhancing service performance. They are likely applicable to various similar use cases, serving as a starting point for guiding the next generation of practitioners who may need to address similar challenges [54].

This article consolidates knowledge from existing literature, focusing on prescriptive knowledge for designing distributed context models in autonomous systems, making notable research contributions. Design principles, traditionally subject to infrequent evaluation, become a central point of discussion in this article, addressing the need for their increased reusability. Moreover, this study not only spotlights open research inquiries and ongoing endeavors but also emphasizes the essential requirement for further evaluation. This emphasis is critical for bridging the gap that often exists between theoretical context modeling and its practical implementation.

C. LIMITATIONS

One of the limiting aspects is that the proposed design principles primarily focus on technical aspects, as they are derived from existing contextual models found in the available literature. It should be noted that these contextual models may not encompass all relevant factors that could exert an influence. Consequently, achieving a comprehensive, universally applicable set of design principles is uncertain. Thus, exploring the field of context modeling for autonomous systems requires a multimethod approach that extends beyond technical considerations, encompassing methodologies such as use case analysis, case studies, expert interviews, and more. Additionally, this contribution underscores the inadequacy of relying solely on literature-based approaches for developing comprehensive taxonomies of descriptive knowledge. In particular, it highlights the need to enhance and expand upon Liu et al.'s [8] classification by introducing dimensions such as context distribution, context interoperability, and context reasoning. Moreover, it is important to recognize that the completeness of the proposed design principles remains elusive. As per Fu et al. [26], the definition of design principles is inherently use-case dependent and ever-evolving, representing a snapshot at a specific moment in time. Therefore, future assessments should adopt a broader perspective on completeness, incorporating diverse opinions and expert feedback.
VII. CONCLUSION AND OUTLOOK

In this article, design principles for distributed context modeling of autonomous systems have been elaborated. Based on an existing classification for context models, the essential aspects were summarized in meta-requirements to finally derive the design principles for distributed context modeling. The concept aims at the cooperative use of context information in a network of different actors. During the elaboration, special emphasis was put on improving and standardizing the creation and application of a distributed context model for autonomous systems. Furthermore, the conceptual approaches of classical context modeling were extended to consider requirements for distributed context information. On the one hand, this distribution promotes cooperation among system participants; on the other hand, it requires a stronger focus on context structuring and interoperability. Overall, the proposed design principles simplify the creation, integration, and implementation of a distributed context model for autonomous systems.

The next step is to further validate the created design principles in real applications and expert interviews. In particular, this qualitative validation should prove the relevance and effectiveness of the design principles on the basis of instantiated distributed context models.

APPENDIX

Examination of the relevant literature and the contextual models used in each instance was an essential element of this research. Table 3 provides a summary and classification of the sources used to establish the design principles.

FIGURE 2. Schematic illustration of "tracking (a) and surrounding (b) a stationary target with UAVs."

TABLE 2. Short Description of the Meta-Requirements
2023-12-16T16:42:42.310Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "ab6b1d5c143bc898ed3758c1464d48afb207049c", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/9745883/9956023/10356718.pdf", "oa_status": "HYBRID", "pdf_src": "IEEE", "pdf_hash": "8094914c059c120a7fe40c48fc78b486b739c401", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
233298326
pes2o/s2orc
v3-fos-license
Potts shunt as an effective palliation for patients with end-stage pulmonary arterial hypertension

Background
Potts shunt has been suggested as an effective palliative therapy for patients with pulmonary arterial hypertension (PAH) not associated with congenital heart disease.

Materials and methods
This is a prospective single-center study performed to assess outcomes of Potts shunt in patients with PAH who are in functional class III or IV.

Results
52 patients in functional class III/IV with pulmonary arterial hypertension without significant intra- or extracardiac shunt on maximal medical therapy were evaluated and counseled for undergoing Potts shunt/patent ductus arteriosus (PDA) stenting. 16/52 patients (13 females) consented for the procedure; 14 patients underwent surgical creation of Potts shunt, and 2 underwent transcatheter stenting of the PDA, which physiologically acted like a Potts shunt. Standard medical therapy was continued in patients who did not consent for the procedure. 12/16 patients survived the procedure. Patients who did not survive the procedure were older, with severe right ventricular systolic dysfunction, and in functional class IV. Patients who survived the procedure were followed up in the pulmonary hypertension clinic. The median follow-up was 17 months (1–40 months). 11/13 patients discharged after the operation showed sustained clinical, echocardiographic, and biochemical improvement, which reduced the need for pulmonary vasodilator therapy in 10/13 patients. There was one death in the follow-up period, 16 months post-surgery, due to lower respiratory tract infection.

Conclusion
Potts shunt is feasible in patients with PAH without significant intra- or extracardiac shunts. It can be done safely with an acceptable success rate. Patient selection, preoperative stabilization, and meticulous postoperative management are essential. It should be performed at the earliest sign of clinical, echocardiographic, or laboratory deterioration for optimal outcomes. Long-term follow-up is required to see a sustained improvement in functional class and the need for a lung transplant in the future.

Introduction
Pulmonary arterial hypertension (PAH) is a progressive disease that can present at any age, from infancy to adulthood. Despite advances in medical therapy, prognosis remains guarded, with 5-year survival ranging from 57 to 75%. 1 Moreover, prostacyclin analogues, which significantly reduce mortality, are still unavailable or cannot be afforded in some developing countries. Until recently, lung or heart-lung transplant was thought to be the only surgical option available, which is also cost-prohibitive, with limited availability and guarded long-term outcomes. 2 Creating a Potts shunt is considered an alternative option in both adult and pediatric patients with PAH who show symptomatic deterioration on maximal medical therapy. 3,4 This innovative therapy postulates that, by creating an unrestrictive communication between the descending aorta and the left pulmonary artery, there would be a reduction in the right ventricular afterload with an improvement of the right ventricular function and of right ventricle to pulmonary artery coupling. This would result in the conversion of an idiopathic PAH physiology to Eisenmenger physiology, with better functional capacity and survival. 5,6 Potts shunt can be effectively created surgically or, as an alternative, one can stent the patent ductus arteriosus (PDA) in patients with a probe patent ductus. 7,8
The creation of Potts shunt has shown morbidity and mortality benefits with a reduction in the requirement of pulmonary vasodilators. 3,9 We report clinical outcomes of the first 16 patients with PAH who underwent Potts shunt at our institute from April 2015 to September 2019.

Material and methods
This is a prospective observational study from a single tertiary care center in India. Institutional ethics committee clearance was obtained.

Patient selection: Secondary causes of PAH were ruled out as per the protocol. 10 Patients meeting the following criteria were considered for performing Potts shunt.

Inclusion criteria
- Patients with group I PAH without intra- or extracardiac shunt, or with PAH out of proportion to the shunt
- Receiving maximal doses of phosphodiesterase-5 inhibitors and endothelin receptor antagonists for at least six months prior to the procedure; patients who could afford prostacyclin analogues were on maximal doses of the same
- Functional class IV or deterioration in functional class on maximal medical therapy, and
- Consent for undergoing Potts shunt

Exclusion criteria
- Patients/legal guardians not consenting for undergoing Potts shunt
- Significant intra- or extracardiac shunt

Clinical symptoms of syncope, functional class, and right heart failure were assessed in all patients. Clinical evaluation included upper- and lower-limb oxygen saturations; signs of right heart failure, such as elevation of the jugular venous pulse, hepatomegaly, and pedal edema, were noted. Chest X-ray and electrocardiography were performed in all patients. A detailed echocardiogram was performed prior to the procedure, at the time of discharge, and during follow-up in all patients. The following parameters were studied during echocardiography:
1. Structural heart defects were ruled out in the initial evaluation
2. Accurate estimation of pulmonary artery pressures whenever possible
3. Right ventricular size and function
4. Pulmonary artery acceleration time (PAAT) and RV ejection time (ET), measured using 2D echocardiogram and Doppler study
5. Assessment of right atrial pressure and pericardial effusion

N-terminal pro-brain natriuretic peptide (NT-proBNP) was measured before and immediately after the procedure and reassessed on follow-up. Computerized tomography (CT) with a pulmonary angiogram was performed in all patients prior to the procedure. The CT angiogram was used to ascertain the size of the interposition graft or PDA stent during the procedure (80% of the size of the descending aorta at the level of the diaphragm). 9 Cardiac catheterization was performed in 10/16 patients. Six patients did not undergo right heart catheterization due to their vulnerable clinical scenario and unstable hemodynamic condition. Hemodynamic data were obtained in these six patients after induction of general anesthesia prior to surgery. A Swan-Ganz catheter of appropriate size was inserted through the right internal jugular vein for the same.

Preoperative stabilization
All patients were admitted to the pediatric cardiac intensive care unit 3 ± 1 days prior to the procedure. All of them received inotropes (dopamine), inodilators (milrinone), diuretics, intravenous sildenafil, and an oral endothelin receptor blocker (ambrisentan). Inhaled iloprost was continued in 3 patients. Inhaled nitric oxide was administered pre-operatively in 4/13 patients.
Serial NT-proBNP was monitored, and the patients were taken up for the procedure after demonstrating a serial drop in NT-proBNP over 3–5 days and improvement in right heart function on echocardiogram. At least a 30% fall in NT-proBNP and a similar improvement in the RV functional assessment on echocardiogram were considered sufficient for taking up the patient for Potts shunt.

Procedure: Patients with a probe patent PDA identified on CT angiogram or on cardiac catheterization underwent PDA stenting. The remaining patients underwent surgical creation of Potts shunt using an interposition graft.

Surgical details of Potts shunt
After initial stabilization, 14 out of 16 patients underwent surgical Potts shunt using an interposition tube graft via a left lateral thoracotomy through the 4th intercostal space, without cardiopulmonary bypass (CPB). We ensured that we could put the patient on CPB if required via aortic and pulmonary artery cannulation. A polytetrafluoroethylene (PTFE) graft was used for the shunt, and the size of the graft used was 80% of the size of the descending aorta. 9 (Table 1).

Stenting of the duct
Potts shunt can also be created by stenting the PDA if present. 11,12 Two patients underwent PDA stenting. The procedure was performed under general anesthesia. Femoral artery and venous accesses were obtained. An aortic angiogram was done in the lateral view to demonstrate the PDA. The PDA was stented using the antegrade approach in one patient and the retrograde approach in the second. Bare-metal stents of 6 mm were used in both patients.

Post-operative management
All patients were managed postoperatively in the pediatric cardiac ICU. They were shifted to the PCICU on inhaled nitric oxide (20–30 ppm), IV sildenafil 1.6 mg/kg/day, IV milrinone (0.5–0.7 mcg/kg/min), and adrenaline infusion (0.04–0.08 mcg/kg/min). Postoperative monitoring included arterial pressure, upper- and lower-limb saturations, and PaO2. The inotropes and pulmonary vasodilators were finely tuned to achieve a difference in upper- and lower-limb SpO2 of 15–20% while maintaining normal cardiac output.

Discharge
Pulmonary vasodilators were continued at the time of discharge; anti-failure medications were continued as clinically indicated; additionally, all patients received antiplatelet agents.

Follow up
All patients who survived the procedure were followed up in the institutional PAH clinic. Pulmonary vasodilators were adjusted based on the functional class, echocardiographic findings, and the difference in upper- and lower-limb saturations. Improvement in functional class, shunting of blood across the Potts shunt, improvement in right ventricular function, and decrease in NT-proBNP levels were used as criteria to modify the pulmonary vasodilators on follow-up.

Survival analyses: Survival analysis was performed for the entire cohort as well as comparing patients who underwent Potts shunt/PDA stenting, using Kaplan-Meier graphs. Patients who underwent Potts shunt/PDA stenting were divided into two groups depending on age (<16 years / ≥16 years), right ventricular function (TAPSE ≤12 mm / TAPSE >12 mm), right atrial pressure (RAP <8 mm Hg / RAP ≥8 mm Hg), and cardiac index (CI >2.5 L/min/m² / CI ≤2.5 L/min/m²), and survival was compared using Kaplan-Meier graphs.

Statistical analyses
Statistical analyses were performed using SPSS 20 software. Parametric data are expressed as mean ± standard deviation, and non-parametric data are expressed as median with ranges.
Student's t-test and the Mann-Whitney U test were performed to compare parametric and non-parametric data, respectively. Kaplan-Meier survival graphs were plotted, and the log-rank test was used to compare survival between groups.

Results
Fifty-two patients with pulmonary arterial hypertension without significant intra- or extracardiac shunt, in functional class III/IV on maximal medical therapy, were evaluated and counseled for undergoing Potts shunt/PDA stenting. 16/52 (32%) patients (13 females) consented for the procedure and underwent Potts shunt (14 surgical and 2 PDA stent) for PAH in our center. The median age was eight years. Demographic and hemodynamic data are presented in Tables 1 and 2, respectively. Echocardiographic data are presented in Table 3. 10/14 patients in the surgical group and 2/2 patients of the stenting group survived the procedure. After completion of the procedure, the patients' saturations were 10–20% lower in the lower extremity compared to the upper extremity. The average ventilation duration was 16 ± 4 hours. Inotropes were continued until the patients were hemodynamically stable. The postoperative duration of ICU stay was 4 ± 2 days, and the hospital stay was 10 ± 2 days.

Immediate postoperative mortality
There were four deaths in the immediate postoperative period. Two patients did not tolerate the intra-operative clamping of the pulmonary artery, and two patients had pulmonary hemorrhage with respiratory failure, requiring an extracorporeal membrane oxygenator in one.

Follow up
All the survivors were followed up in the PAH clinic. The median duration of follow-up was 17 months (1 month–40 months). 10/12 patients who survived the procedure had improvement in functional class by at least one grade (Fig. 1). One of the two patients who did not improve post-procedure expired 20 months later due to lower respiratory tract infection, and the second is currently listed for transplant.

Pulmonary vasodilators
10/12 surviving patients received dual pulmonary vasodilators, and 2 received inhaled iloprost in addition to the oral medications until three months after surgery/PDA stenting. At three-month follow-up, improvement in functional class and right ventricular function allowed discontinuation of iloprost. Table 4 demonstrates the differences between the patients who benefited from Potts shunt and those who did not. The patients who did not benefit from the procedure were older, had a higher functional class, worse RV function on echocardiogram, higher right atrial pressure, lower cardiac index, and higher NT-proBNP levels (Figs. 3–6).

Survival analyses: Kaplan-Meier survival graphs were plotted; survival of the entire cohort was 44%, 34%, and 28% at six months, one year, and two years, respectively. The mean survival for patients undergoing Potts shunt was 28 ± 4 months vs. 13.6 ± 2.7 months (p = 0.007), which was significantly better as compared to the patients who did not consent for the procedure (Fig. 7). Age less than 16 years, TAPSE more than 13 mm, right atrial pressure less than 8 mm Hg, and cardiac index more than 2.5 L/min/m² were associated with better survival (Fig. 8).

Discussion
Pulmonary arterial hypertension is a chronic and progressive disease with very high morbidity and mortality. 13,14 PAH-associated mortality has decreased in the last few years, mostly secondary to early diagnosis, better risk stratification, and upfront dual and triple therapy in high-risk individuals. 15
However, prostacyclin analogues, which form the cornerstone of this management strategy, are not marketed in India and are beyond the reach of many. Similarly, lung transplant is available in very few centers in India, with limited medium-term survival. 2 Patients with repeated syncopal episodes or evidence of right ventricular failure are traditionally referred for balloon atrial septostomy. However, it is limited by a very high incidence of the septostomy decreasing in size and closing spontaneously over time. 16 Recently, the use of an atrial septal flow regulator has mitigated the risk of spontaneous decrease in size over time. However, the echocardiographic parameters, as well as the BNP, did not show significant improvement. 17 Also, the need for pulmonary vasodilators remains unchanged after creation of an interatrial communication.

Advantages of Potts shunt over creation of an interatrial communication
Creation of a non-restrictive communication between the left pulmonary artery and the descending aorta (Potts shunt) has shown significant mortality and morbidity benefits in patients with PAH. 3,12 Unlike atrial septostomy, Potts shunt does not create arterial desaturation in the upper part of the body, including the cerebral and coronary circulation, and the shunt remains open throughout the cardiac cycle. 18 Improvement in functional class, reduction in the need for PAH-specific medications, and improvement in right ventricular function have been demonstrated after creation of the Potts shunt. 19–21

Appropriate selection of cases and preoperative stabilization
Timing of the Potts shunt, as well as preoperative stabilization, is critical for a successful outcome. Creation of a Potts shunt is a high-risk procedure; the risk increases incrementally with worsening of functional class and deterioration in right heart function. All four immediate postoperative deaths and the two patients who did not show improvement after the surgery in our series were older, were in functional class IV, and had worse echocardiographic and hemodynamic parameters. Age more than 16 years, right atrial pressure more than 8 mm Hg, TAPSE <13 mm, and cardiac index less than 2.5 L/min/m² were associated with poor short- and intermediate-term outcomes. Hence, it might be prudent to perform this high-risk procedure at the first sign of clinical/echocardiographic deterioration and not to wait until the patient is in functional class IV or severe right ventricular dysfunction ensues.

Two of our patients had sub-systemic PA pressures at the time of performing Potts shunt. Both of them had a history of syncope on exertion, which disappeared after the procedure. PA pressure and PVRI are dynamic in nature and are known to increase on effort. Decompression of the RV by the Potts shunt during such episodic pulmonary hypertensive crises offered symptomatic relief to these patients.

Pre-operative stabilization and creation of the Potts shunt
Pre-operative stabilization with milrinone, IV sildenafil, and nitric oxide is essential for successful post-operative outcomes. This is especially true in the Indian scenario, where prostacyclin analogues are not readily available. Potts shunt can be done surgically through a left lateral thoracotomy or in the cardiac catheterization laboratory by stenting the patent ductus arteriosus. 7,22,23 In our series, two patients with probe patent ductus underwent PDA stenting.
Although transcatheter creation of Potts shunt in the absence of a probe patent ductus has been reported, procedural risks and long-term follow-up need to be looked at carefully. 24 Surgically, Potts shunt can be done using an interposition graft or a direct side-to-side anastomosis between the left pulmonary artery and the descending aorta. The creation of the shunt using a unidirectional valved conduit has also been described. 25 In our series, an interposition graft was used in all the patients who had surgical Potts shunt. The advantage of the interposition graft is that the flow can be controlled by putting a band across it if required. We could successfully discharge 12/16 patients who underwent the procedure at our center. Overall survival in patients undergoing Potts shunt/PDA stent was significantly higher than in those who did not undergo the procedure.

Clinical, echocardiographic and laboratory improvements on follow up
On follow-up, all but two surviving patients demonstrated at least 1-grade improvement in functional class, reduced RV afterload, and improvement in right ventricular function in echocardiographic as well as laboratory parameters. 26 Most of the above factors have been shown to predict clinical outcomes in adult as well as pediatric patients with PAH. 15 Hence, improvement in these factors could translate into better clinical outcomes. We could successfully wean iloprost and endothelin receptor antagonists in patients demonstrating improvement of the above parameters. This might be an important indication to perform Potts shunt in a developing country like India, where PAH-specific medications are either not readily available or beyond the reach of the majority of the population. The experience of our center is similar to the results described in other countries. 3,9,22 In addition, we have demonstrated that early referral for Potts shunt, before severe RV dysfunction ensues, is essential for optimal results. The immediate goal in creating Potts shunt in patients with PAH is to create a physiology like Eisenmenger syndrome, thereby ensuring that the right ventricle never faces more than systemic pressures.

Fig. 6. Box and whisker plot comparing the levels of N-terminal pro-brain natriuretic peptide (NT-proBNP) of group 1 (benefited from Potts shunt) and group 2 (did not benefit from the Potts shunt). Patients in group 2 had significantly higher NT-proBNP levels as compared to those in group 1 (p value < 0.001).

Conclusion
Potts shunt/PDA stenting is feasible in patients with PAH; it can be done safely with an acceptable success rate. Patient selection, preoperative stabilization, and meticulous intra- and postoperative management are essential. For optimal outcomes, it should be performed at the earliest sign of clinical, echocardiographic, or laboratory deterioration, before severe right ventricular dysfunction sets in. Long-term follow-up is required to ascertain sustainable improvement in functional class and the need for a lung transplant.

Limitations of the study
Single-center experience with limited duration of follow-up.

Funding
None.

Declaration of competing interest
None.
2021-01-07T09:00:50.321Z
2021-01-06T00:00:00.000
{ "year": 2021, "sha1": "71bbadccd3cbb6bcb504df5862944ede637e235e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ihj.2021.01.007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a584d87e30f8a6e3d1418acc410d309cbd987ce3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
937356
pes2o/s2orc
v3-fos-license
Advising overweight persons about diet and physical activity in primary health care: Lithuanian health behaviour monitoring study

Background
Obesity is a globally spreading health problem. Behavioural interventions aimed at modifying dietary habits and physical activity patterns are essential in the prevention and management of obesity. General practitioners (GP) have a unique opportunity to counsel overweight patients on weight control. The purpose of the study was to assess the level of giving advice on diet and physical activity by GPs using the data of Lithuanian health behaviour monitoring among the adult population.

Methods
Data from cross-sectional postal surveys of 2000, 2002 and 2004 were analysed. Nationally representative random samples were drawn from the population register. Each sample consisted of 3000 persons aged 20–64 years. The response rates were 74.4% in 2000, 63.4% in 2002 and 61.7% in 2004. Self-reported body weight and height were used to calculate body mass index (BMI). Information on advising in primary health care was obtained by asking whether the GP had advised overweight patients to change dietary habits and to increase physical activity. The odds of receiving advice on diet and physical activity were calculated using multiple logistic regression analyses according to a range of sociodemographic variables, perceived health, number of visits to GPs and body-weight status.

Results
Almost half of the respondents were overweight or obese. Only one fourth of respondents reported that they had been advised to change their diet. The proportion of persons who received advice on physical activity was even lower. The odds of receiving advice increased with age. A strong association was found between perceived health and receiving advice. The likelihood of receiving advice was related to BMI: GPs were more likely to give advice when BMI was high. More than half of the obese respondents (63.3%) reported that they had tried to lose weight. An association between receiving advice and self-reported attempts to lose weight was found.

Conclusion
The low rate of dietary and physical activity advice reported by overweight patients implies that more lifestyle counselling should be provided in primary health care. There is an obvious need for improved training and education of GPs in counselling of overweight patients, focusing on methods of giving dietary and physical activity advice.

Background
Obesity is a globally spreading health problem. Diabetes, hypertension, dyslipidaemia, cardiovascular disease, and some cancers are associated with obesity [1,2]. These comorbidities have considerable health care and social costs [3,4]. The World Health Organisation report on obesity states that a sedentary lifestyle and the consumption of high-fat, energy-dense diets are fundamental causes of the obesity epidemic [1]. Health promotion strategies, including behavioural interventions aimed at modifying dietary habits and physical activity patterns, are essential in the prevention and management of obesity. Primary health care offers a unique opportunity for health promotion activities. A substantial part (60-70%) of the population visits their general practitioner (GP) each year [5]. There is sufficient evidence that the majority of people regard doctors as the best and most credible source of advice on a range of issues, including diet and physical activity [6,7]. However, studies have shown a low rate of counselling on lifestyle changes given to overweight patients in primary health care [8,9].
Prevalence of overweight and obesity is high in Lithuania [10]. Every tenth adult is obese and every third is overweight. Effective management strategies for overweight require joint efforts of health care services and the community. Primary health care as a separately organised sector of health services is a new concept in Lithuania. In 1995, the Ministry of Health approved the establishment of the GP institution and defined its role. Programmes of training and retraining of GPs were started. The Lithuanian regulation "The GP norm" specifies GPs' activities, including counselling on weight control. However, there is a lack of data on how often GPs give advice to obese patients. This study is aimed at assessing the level of giving advice on diet and physical activity by GPs using the data of Lithuanian health behaviour monitoring among the adult population within the framework of the international FINBALT HEALTH MONITOR project [11].

Methods
Data from cross-sectional postal surveys of 2000, 2002 and 2004 were used. Nationally representative random samples were drawn from the population register. The sampling unit was an individual in all the surveys, and no measures were taken to substitute for non-respondents. Each sample consisted of 3000 persons aged 20-64 years. The questionnaires were mailed in April and one

Self-reported body weight and height were used to calculate body mass index (BMI), defined as weight in kilograms divided by the square of height in meters. BMI was categorised into four groups: normal weight (BMI < 25 kg/m²), overweight (BMI 25-29 kg/m²), obese (BMI 30-34 kg/m²) and severely obese (BMI ≥ 35 kg/m²). The data of overweight and obese persons were included in the analysis.

Information on advising in primary health care was obtained by asking the following questions: 'During the last year (12 months) have you been advised to change your dietary habits?' and 'During the last year (12 months) have you been advised to increase your physical activity?'

Education was measured by three educational levels: incomplete secondary, secondary and university. The respondents were grouped according to their place of residence as living in cities, towns or villages. Marital status was dichotomised as 'married' and 'others'. Information on perceived health was elicited by the following question: 'How would you assess your present state of health? 1) Good, 2) reasonably good, 3) average, 4) rather poor, 5) poor'. It was categorised as 'good' (1+2), 'average' (3) and 'poor' (4+5) (Table 1).

People were asked how often they had seen a GP during the previous 12 months. They were grouped as visiting 1-2 times, 3-4 times, and 5 times or more. Data were analysed using the statistical package SPSS version 12.1. One database was compiled from the three surveys, and it included the corresponding variables described. The differences in the distribution of respondents by body-weight status were assessed using chi-squared tests. The odds of receiving advice on diet and physical activity were calculated using multiple logistic regression analyses according to a range of sociodemographic variables, perceived health, number of visits to GPs and body-weight status. The first category of each factor was the reference category. When the 95% confidence interval did not include 1, the odds ratio was considered statistically significant. The investigation conformed to the principles outlined in the Declaration of Helsinki and was approved by the regional ethics committee.
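For illustration only, the stated BMI definition and cut-offs can be written as follows; this is our sketch, not part of the survey software, and it reads the published 25-29 and 30-34 ranges as the half-open intervals [25, 30) and [30, 35).

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """BMI = weight (kg) / height (m)^2, grouped into the four
    categories used in the study."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "normal weight"
    elif bmi < 30:
        return "overweight"
    elif bmi < 35:
        return "obese"
    return "severely obese"

print(bmi_category(92.0, 1.75))  # 92 / 1.75**2 ≈ 30.04 -> "obese"
```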
Results

The distribution of men and women according to BMI is shown in Table 2. Almost half of the respondents were overweight or obese, and the prevalence of overweight and obesity increased with age in both men and women.

GPs were not very active in advising overweight persons to change their dietary habits and to increase physical activity. Only one quarter of respondents reported being told to change their diet (Table 3), and the proportion who received advice on physical activity was even lower. The odds of receiving advice increased with age, whereas the reported receipt of advice was not related to gender, education or place of residence. A strong association was found between perceived health and receiving advice: persons with poor health were five times as likely to be advised as those with good health. The proportion of persons reporting advice increased with the number of visits to a GP. The likelihood of receiving advice was also related to BMI, with GPs more likely to give advice when BMI was high.

More than half of obese respondents (63.3%) reported that they had tried to lose weight, women more often than men. Receiving advice was associated with self-reported attempts to lose weight: men and women advised to increase physical activity, and women advised to change their diet, were more likely to attempt weight reduction (Table 4).

Discussion

Our findings show that only a small proportion of the overweight respondents who had visited a general practitioner during the past 12 months received advice on losing weight. The receipt of advice was associated with age, perceived health, number of GP visits and BMI. Respondents who were advised reported attempts to lose weight more often than those who were not.

Our study has several limitations. Overweight and obesity were assessed using self-reported data on weight and height, and overweight persons tend to underreport their weight. Our sample may therefore have missed marginally overweight persons, who are probably advised less often than obese ones. Another obvious limitation lies in assessing GPs' activities from patients' reports: people may underestimate the frequency of receiving advice.

Other studies evaluating GPs' counselling of obese patients have shown that the proportion of advised persons varies from 5% to more than 40% [8,9,12,13]. Researchers have emphasised several barriers to advising patients. The patient's lack of motivation or will to make the required lifestyle changes is regarded as the main barrier. The study undertaken by the European Network for Prevention and Health Promotion in Family Medicine and General Practice (EUROPREV) demonstrated that more than half of GPs were sceptical about helping patients achieve or maintain normal weight [14]. In Canada, 48% of GPs considered that dietary change had little effect on weight control [15]. GPs were more likely to provide advice to patients they believed most likely to change unhealthy behaviour [12,16]. Consistent with previous findings, GPs in Lithuania tended to advise those who were most overweight or who might have had health problems. Persons who visited a GP frequently received advice more often than those who had a check-up only once or twice a year. Time resources in primary health care are limited, and most GPs mentioned lack of time as one of the constraints on offering advice [14,17].
Time spent on advice reduces the time available for the rest of the consultation. Several studies have shown that GPs' knowledge about nutrition and physical activity in the management of obesity is incomplete [8,18], and GPs themselves have expressed the need for clinical guidelines and supplementary training. Even when GPs have nutritional knowledge, they find it difficult to communicate it effectively; they would therefore benefit from additional training in counselling skills.

Studies have reported that education and medical advice to lose weight are strongly associated with trying to reduce weight [12,13,19], and the same association was found in our study. Women appear more likely than men to attempt weight loss. One reason could be that women are more dissatisfied with their body shape than men [20]; being thin is a desirable body ideal for women. Women also choose food they consider healthy more often than men, while men more often prefer food they like [21].

In our study, overweight and obesity were estimated using BMI, which alone is not a sufficient predictor of metabolic abnormalities. There is growing evidence that waist circumference is associated with all-cause mortality and risk of coronary heart disease [22,23]. Because weight loss can reduce risk factors for chronic diseases, appropriate measurement of height, weight and waist circumference should be carried out in primary health care settings, and health care professionals need to be more active in advising obese patients on diet and physical activity. However, many doctors practising in Lithuania today have never been taught how to counsel obese patients.

Conclusion

The low rate of dietary and physical activity advice reported by overweight patients implies that more lifestyle counselling should be provided in primary health care. There is an obvious need for improved training and education of GPs in counselling overweight patients, focusing on methods of giving dietary and physical activity advice.
Paradigm of new service development projects (NSDPs): "One Basket Fits all"

Purpose - The aim of this research is to examine the key determinants influencing the success of new service development projects (NSDPs) in the context of four service typologies.

Design/methodology/approach - The researchers used a scenario-based survey method in an NSDP setting. Structural equation modelling (SEM) was used to test the proposed hypotheses on survey data from 570 managers across four service typologies.

Findings - Service firms' cross-functional integration (CFI) and internal project team efficiency (IPTE) positively influenced NSDPs. The results also indicated that both technology infrastructure (TI) and IPTE mediated the relationship between CFI and NSDPs, and that TI mediated the relationship between IPTE and NSDPs. Furthermore, the proposed model confirms that knowledge-sharing behaviour (KSB), authentic leadership (AL) and firm's culture (FC) moderated these relationships across the four service typologies.

Practical implications - With a better understanding of the dynamics of the aforementioned variables, service managers and project teams can more effectively develop and execute strategies for an NSDP. The article enables practitioners to expand their current understanding of NSDPs by providing insights into the unique antecedents that are significant for new service development across four service types.

Originality/value - This research is among the first to examine the mediating roles of KSB and TI in determining NSDP success. It provides one of the first empirical examinations of NSDPs across four service typologies from the perspective of a developing country with a competitive service industry. The study demonstrates that the critical success factors of NSDPs do not differ across service types, thereby confirming the "One Basket Fits all" assumption in current NSDP research.

Introduction

The literature and empirical investigation on new service development projects (NSDPs) are receiving growing attention from both practitioners and researchers, expanding into many domains and disciplines, while researchers offer new perspectives and tools on various dimensions of NSDP success (Alam and Perry, 2002; Carbonell and Rodriguez Escudero, 2015; Farashah et al., 2019; de Oliveira and Rabechini, 2019; Garwood and Poole, 2018; Pivec and Macek, 2019). This is timely, because there have been calls for research exploring service design priorities and specifically leveraging service design (Liu et al., 2020; Valtakoski et al., 2019). The existing literature offers research focused on the constructs that improve the practice, discipline and success or failure of a project in specific contexts, such as information systems, construction, farming, enterprise resource planning (ERP), software, building and disaster management (Costantino et al., 2015; Jimenez-Zarco et al., 2011; Vasudevan et al., 2018); research focused specifically on the new service development process, however, remains limited. While the extant literature has identified, explored, investigated, observed and exploited many areas of project success, some of the most fundamental insights remain unexplored with regard to the four service typologies, i.e. technology-, contact-, knowledge- and routine-intensive services.
This includes projects undertaken by service firms to launch new service offerings to the market. In recent times, service firms have experienced an ongoing transformation of service industry structures and an acceleration of innovation and competitive pressures (Chu et al., 2019; Martinez et al., 2019; Mao, 2019). Dynamic and diversified competitive market environments, service costs, service quality expectations and leadership in technology-based services may require service enterprises to develop offerings for their target consumers through a successful NSDP process (Edvardsson et al., 2012; Storey et al., 2016; Storey and Hull, 2010).

In the context of a developing country like Bangladesh, service providers are aware that present organizational structures and processes are inadequate for developing and launching services efficiently through appropriate NSDPs. Service firms constantly encounter complexities when implementing new services because the existing literature does not specify the critical success factors for NSDPs. Decisions that facilitate the success factors in the earlier phases of an NSDP may have a bigger impact on its success than decisions taken in later stages or during project operation. If service managers and their teams are not aware of the antecedents that may influence the objectives set in the initial phase, the project will not succeed. Hence, this study identifies and constructs the critical success factors for NSDPs, enabling service firms and their relevant stakeholders to evaluate and understand the overall project outcome. Previous studies noted that exploring these factors and establishing the relationships amongst them, considering direct, indirect and external influences, allows firms to apply standard management skills to improve overall project performance (Muller and Jugdev, 2012; Pinto and Slevin, 1988a, b).

Previous researchers clustered NSDPs into four typologies (routine-, technology-, contact- and knowledge-intensive services), which are also adopted in the current study (Matzner et al., 2018). The first cluster comprises service firms featuring a low degree of technological complexity and contact intensity, such as real-estate service providers, transportation, logistics, maintenance, banking and insurance firms; 140 respondents from this cluster were used in this study. The second cluster comprises service firms with the highest degree of technological concentration and complexity combined with a relatively low degree of contact intensity in serving customers, such as engineering firms, repair shops and technical support service firms; 120 respondents in this category were used. The third cluster features a high degree of labour intensity and customer interaction but low technological complexity; examples include customer care, retail houses, healthcare services, hospitality and catering services, and data were collected from 150 respondents in this category. The fourth cluster comprises knowledge-intensive services, which exhibit both a high degree of complexity and high contact intensity during service operations.
These include education, legal services, consulting, medical and auditing services, which typically require a high degree of customer involvement and a close connection with customers. A total of 160 respondents were taken from this category.

Currently, most tools that depict project success have been developed in the field of project management and seem insufficient for this role in the existing service management literature (Alam, 2012). Previous studies suggest that a promising way to build an overall framework for successful projects is to link the traditional project success criteria with the critical success factors that influence success directly, indirectly and externally (Gardiner and Stewart, 2000). Furthermore, previous researchers such as Biemans et al. (2016), Witell et al. (2017) and Jaw et al. (2010) only explored and tested specific new service development practices, the nature of service characteristics, innovation and their significance for NSDPs; academics and practitioners have not yet constructed and tested a model that is generic across the four service typologies in the context of NSDP research. To capture a comprehensive view of the nature of NSDPs, this study applies a set of common measurable NSDP features in terms of resources (technology infrastructure [TI]), practices (knowledge-sharing behaviour [KSB] amongst team members, authentic leadership [AL], institutional culture, etc.), methods (project team efficiency, cross-functional integration [CFI]) and results (success of the NSDP) (Biemans et al., 2016; Antons and Breidbach, 2018).

Therefore, the current study addresses the existing research gaps and makes the following contributions to the service development literature. It focuses on an important aspect of conceptual integration, namely the success of NSDPs, and examines the effect of significant antecedents in four different service typologies. This leads the researchers to seek answers to the following broad and specific research questions.

Broad research question:

RQ1. To what extent do the determinants CFI, internal project team efficiency (IPTE), technology infrastructure (TI), KSB and AL influence the success of NSDPs amongst the four service typologies?

Specific research questions:

RQ1a. Does TI mediate the relationship between IPTE and NSDP amongst the four service typologies?

RQ1b. Does IPTE mediate the relationship between CFI and NSDP amongst the four service typologies?

RQ1c. Does TI mediate the relationship between CFI and NSDP amongst the four service typologies?

RQ1d. To what extent does KSB amongst the project team members of service firms moderate the relationship between IPTE and the success of NSDP?

RQ1e. To what extent does AL style in the NSDP moderate the relationship between CFI and the success of NSDP?

RQ1f. To what extent do firm's cultures (FCs) in service firms moderate the relationship between CFI and the success of NSDP?

In view of the above research questions, the current study considers the mediating influence of TI between IPTE and the success of NSDPs under four different service typologies, and simultaneously analyses the moderating effect of KSB on the relationship between IPTE and the success of NSDP.
Furthermore, the study incorporates AL and FC as moderating influences between CFI and the success of NSDPs, which could lead to a reconsideration of the existing literature in which IPTE, TI, KSB, CFI, AL and FC are considered the predominant drivers of NSDP success.

The remainder of the manuscript is organized as follows. The next section presents a critical literature review of the key variables used to construct the conceptual model. Section 3 presents the methodology, justifying the proposed method, and the results of an empirical examination of the proposed model in the context of service firms in Bangladesh. Section 4 discusses the significance and implications of the findings. The paper closes with conclusions, limitations and future directions.

Conceptual framework and hypotheses development

Existing literature reviews fall short of examining the critical success factors of NSDPs across the four service typologies. Moreover, research offers the separate perspectives of firms, employees or consumers, which extends the breadth of the field rather than building depth on existing knowledge. For example, Avlonitis et al. (2001) linked service typology with the innovativeness that shapes new service development, while Kuester et al. (2013) explored four service innovation types amongst service firm employees, categorizing them as efficient, innovative, interactive developers and standardized adopters. Cheng et al. (2012), on the other hand, explored the consumer aspect of service innovation; they empirically tested service innovation typologies and linked consumer involvement to the various NSDP process stages. Furthermore, Gustafsson et al. (2012) and Witell et al. (2014) examined how practices and customer involvement differ between firms developing incremental and radical innovative services. In sum, four ranges of service typologies exist; they have been developed and tested on the basis of theoretical considerations, with the objective of adding value to firms' operations, management and marketing of services.

The relationship between IPTE and the success of NSDP

The literature on project success argues that IPTE and CFI have a positive relationship with NSDPs (Cooper, 2019; Stahle et al., 2019; Perez-Luno et al., 2019; Bjorvatn and Wald, 2018; Andriopoulos et al., 2018). These researchers further emphasized that the role of these constructs in NSDPs must be considered in the context of wider organizational strategy and the long-term fulfilment of stakeholders' expectations. From this discussion, it is argued that CFI and IPTE are distinct yet interrelated concepts that both have a positive relationship with NSDPs. Figure 1 presents the conceptual model proposed and tested in this research.

Past research reveals that project team efficiency significantly influences the success of a project, but previous investigations do not fully address the new service development context (Trischler et al., 2018; Chen et al., 2017a, b); the current research aims to contribute in terms of the four service typologies. "Team" efficiency is explained as a small number of individuals with complementary skills who are equally committed to a common purpose, goal and working approach for which the group holds them mutually accountable (Katzenbach and Smith, 2015). The authors also differentiate between a "team" and a "working group", considering that a "team" delivers greater performance than a "working group".
In particular, when a firm launches a new service, it requires rational planning and execution, and the efficiency of the development team has been identified as a critical factor predicting whether or not a new service will succeed in the market (Bstieler, 2005). Wirtz et al. (2008) revealed that team efficiency is one of the significant elements of service firms' superior performance.

The relationship between CFI and the success of NSDP

Sherman et al. (2000), Holland et al. (2000) and Ernst et al. (2010) have made significant contributions concerning project success and the implications of CFI in different sectors. Previous research defines CFI as a behavioural approach of team members that captures a high level of communication and information sharing between members from different departments concurrently (Luca and Atuahene-Gima, 2007). Gebauer et al. (2008) found that cross-functional teams can combine knowledge and competencies from different perspectives within service organizations, contributing to overall effectiveness. CFI in service firms, especially between management, administration and other departments such as marketing, finance, production, human resources, and research and development (R&D), has been strongly recognized as one of the key factors in the success of NSDPs (Alam, 2002; Im and Workman, 2004; Krishnan and Ulrich, 2001). Hence, the following hypotheses are derived for further examination:

H1. There is a statistically significant and positive relationship between IPTE and the success of NSDP amongst the four service typologies.

H2. There is a statistically significant and positive relationship between CFI and the success of NSDP amongst the four service typologies.

The mathematical equations underpinning the above hypotheses (see Figure 1) are as follows:

NSDP = b0 + b1 IPTE + e (H1)
NSDP = b0 + b2 CFI + e (H2)

The mediating role of internal project team efficiency and technology infrastructure

The competitiveness of service firms is increasingly driven by their success in new service development (Krishnan and Ulrich, 2001; Cooper and Kleinschmidt, 1995). As market conditions have become increasingly competitive through globalization and the adoption of new technologies by service firms, offering new services in a timely manner has become strategically important, and this requires a substantial technology infrastructure (Neirotti and Pesce, 2019). In this study, TI is defined as one of the foundations of the firm's information technology portfolio, combining the technical and human-related assets shared throughout the company in the form of consistent cross-functional coordination and project team efficiency aimed at NSDP success (Bhatt and Grover, 2005). Thus, higher levels of a firm's TI result in a greater chance of NSDP success (Sanchez-Morcilio and Quiles-Torres, 2016). Previous studies argued that the firm's TI may optimize the team efficiency and CFI through which firms generate and deploy NSDPs (Chen, 2007; Denison et al., 1996; Hoegl and Gemuenden, 2001; Lovelace et al., 2001). When IPTE is effective, the firm's TI needs to be present and to mediate the relationship between IPTE and NSDP success (Sicotte et al., 2019). Dimitriadis and Stevens (2008) found that management and technology need to be coordinated and aligned with the organization and its strategies to deliver improved service activities.
Other studies have also revealed that firms achieve greater success in their projects when IPTE mediates the relationship between CFI and project success (i.e. NSDPs) (Laurent and Leicht, 2019; Perez-Luno et al., 2019; Stahle et al., 2019), and that TI mediates the relationship between CFI and project success (Tornjanski et al., 2019; Pellathy et al., 2019; Daniel Sherman et al., 2005). Thus, the following hypotheses are derived:

H3. TI mediates the relationship between IPTE and NSDP amongst the four service typologies.

H4. IPTE mediates the relationship between CFI and NSDP amongst the four service typologies.

H5. TI mediates the relationship between CFI and NSDP amongst the four service typologies.

The mathematical equations derived from the above hypotheses (see Figure 1) take the standard mediation form, for example for H3:

TI = a IPTE + e1; NSDP = c' IPTE + b TI + e2, with indirect effect a x b (H3)

and analogously for H4 and H5 with CFI as the predictor (mediated by IPTE and TI, respectively).

The moderating role of knowledge-sharing behaviour, authentic leadership and firm's culture

KSB amongst team members is a part of knowledge management that has received a great deal of interest from managers and academics investigating how the dynamics of group knowledge sharing are managed, i.e. producing, capturing, storing, sharing and implementing knowledge amongst team members in order to improve team efficiency (Madhavan and Grover, 1998; Lawson et al., 2009; Kremer et al., 2019; Duong and Swierczek, 2019). Within these parameters, KSB is crucial for a group to perform efficiently in a successful NSDP (Ouriques et al., 2019; Hoegl et al., 2003). Limited research evidence exists to support the moderating effect of KSB on the success of NSDPs, although project management studies provide some empirical support for its effects. For instance, Madhavan and Grover (1998) found that a higher level of KSB within the project team enhances the relationship between team efficiency and new product development. In addition, KSB is beneficial for NSDPs because (1) it gives the project team a resourceful knowledge repository that pushes them to be more efficient and effective in completing the NSDP; (2) it allows the team to find needed knowledge directly, helping the team's overall efficiency; and (3) it makes all members more likely to accept new knowledge from others (Chen et al., 2017a, b). Hence, it can be hypothesized that:

H6. The positive relationship between IPTE and the success of NSDP will be stronger when KSB amongst the project team members of service firms moderates the relationship.

Based on this hypothesis, the following moderated-regression equation is proposed:

NSDP = b0 + b1 IPTE + b2 KSB + b3 (IPTE x KSB) + e (H6)

Project leadership requires a new dimension, using an authentic style of leadership, in order to adapt to and meet the changing needs of the relevant stakeholders in a 21st-century product- or service-intensive industry. In considering the relevance of leadership, a number of researchers explain its significance, while others have given considerable attention to the AL style (Sok et al., 2018; Lloyd-Walker and Walker, 2011; Yang et al., 2011). There has been little empirical research on the moderating role of AL in the relationship between CFI and the success of NSDPs (Zhu et al., 2019; Khan et al., 2014). The research that has been done indicates quite strongly that AL operates indirectly as an enabler of the project process by optimizing CFI (Oh et al., 2019; Toor et al., 2007).
For example, the leadership role provided by NSDP leaders helps cross-functional team members participate in developing the NSDP in line with stakeholders' expectations. Combining CFI and AL in the context of new service development therefore enhances the success of the NSDP; the cost of service development may be reduced and the service firm's performance promoted. Hence, when the degree of AL in NSDP projects is high, CFI in new service development strengthens the capability of service firms, improving their overall project performance and further reducing costs. In other words, AL in the context of new service development strengthens the positive impact of CFI on NSDP performance (Floris and Cuganesan, 2019; Zhu et al., 2019; Swain et al., 2018).

H7. The positive relationship between CFI and the success of NSDP will be stronger when AL style in the NSDP moderates the relationship.

Thus, the study applies the following equation to test the moderating effect of AL:

NSDP = b0 + b1 CFI + b2 AL + b3 (CFI x AL) + e (H7)

Again, FC is one of the fundamental elements that fosters the overall integration of the functional team and timely project success. Considering this versatile element across different types of industry, researchers have argued that the contribution of culture to project success is positive (Patterson et al., 2005; Ajmal and Koskinen, 2008; Wei and Miraglia, 2017; Teller, 2013). When the FC is strong, a cross-functional team plays an important role in strengthening project success (Van Poucke et al., 2018; Mueller, 2014, 2015; Hoda and Murugesan, 2016). An FC combines the practices, symbols, values and assumptions that members of the firm share with regard to reaching objectives through appropriate behaviour (Patterson et al., 2005; Schneider et al., 2013). FC provides direction concerning the norms that stabilize methods of operation. Thus, project managers need to merge different organizational and professional cultures into one project culture that produces a successful NSDP (Ajmal and Koskinen, 2008). Previous researchers argue that FC serves as the foundation for management systems and practices such as CFI (Kuo and Tsai, 2019; Bridges, 2018). Hence, the cultural traits of the organization establish a high level of social interaction amongst team members, producing new knowledge that is legitimate and shared (Karlsen and Gottschalk, 2004). Given the novelty of FC in the success of NSDPs, service firms are unlikely to depend on new solutions alone; a uniform standard of cultural norms across the team helps to optimize the speed of the NSDP. Specifically, FC in service firms intensifies the positive impact of CFI on NSDPs. Thus, it can be hypothesized that:

H8. The positive relationship between CFI and the success of NSDP will be stronger when FC in service firms moderates the relationship.

Based on this hypothesis, the following equation is proposed:

NSDP = b0 + b1 CFI + b2 FC + b3 (CFI x FC) + e (H8)

Figure 1 summarizes the hypotheses of this study in a conceptual model based on the above conceptual and theoretical foundations; H1-H8 are proposed for further empirical examination. The study used all variables in unobservable (latent) form, and each construct was formed by its indicators (observable variables) using a first-order analysis operationalizing reflective indicators (there are common factors within the indicators of each variable).
This research involved 56 items representing the seven variables of the study.

Sample and procedure

The researchers appointed well-trained graduate research assistants from marketing specialization courses in the Master of Business Administration (MBA) and Executive Master of Business Administration (EMBA) programmes at a large private university in Bangladesh to assist in collecting responses. The respondents were mid-level and senior executives with varied experience of NSDPs initiated by their respective service firms, such as banks, restaurants, hotels, information technology (IT) firms, education and consultancy. The graduate research assistants also applied the chain-referral sampling method adopted from Harun et al. (2018) and Balaji et al. (2017), contacting several respondents (managers of service firms) to complete the survey. The data were collected between August 2018 and March 2019 from the executives by MBA and EMBA students using classroom intercepts, referrals and workplace intercepts.

In total, the researchers collected 660 responses, of which 570 instruments were fully completed, giving an 83% response rate for the survey. The 570 usable responses comprised 140 from cluster 1 (routine-intensive services), 120 from cluster 2 (technology-intensive services), 150 from cluster 3 (contact-intensive services) and 160 from cluster 4 (knowledge-intensive services). The majority of respondents (60%) were aged between 30 and 40 years, and the overall sample consisted of 56% male and 44% female respondents.

The researchers operationalized a scenario-based survey to obtain quality, in-depth responses from the study's participants. Respondents were instructed to read a successful service project scenario developed by the researchers in line with the theoretical framework and to respond to statements under each relationship of the NSDP. The project scenario was revised and improved through repeated review by three researchers together with four academic experts and four project managers under each service typology to ensure clarity of meaning and correct understanding. The researchers applied the scenario-based survey because previous researchers have explained that this approach has numerous advantages over the traditional survey method: it creates a rational and realistic situation for respondents, eliminates the difficulty of noticing the common success factors of NSDPs, minimizes memory bias and overcomes ethical concerns relating to respondents recalling their experiences (Andreassen and Streukens, 2012; Dabholkar and Spaid, 2012). The following NSDP scenario was used to obtain responses from the managers of the service firms:

Imagine the following circumstances. You have been nominated by your company to be one of the team members for launching a new service offering to your target customers. This is the first time you have been in such a team. You were chosen for this team because the firm understands that you have all the qualities that fit you for this position, such as efficiency, knowledge-sharing behaviour and a strong connection with the company.
In addition, you also believe that to launch a successful project other important criteria are significant for your team, such as cross-functional integration, a favourable IT infrastructure, leadership and total team efficiency.

In addition, the researchers conducted a pre-test of the survey instrument with 50 service employees to ensure that the items reflect true critical success factors of NSDPs. The data extracted from this pilot survey were analysed to check internal consistency and the relevant factor structure, verifying and purifying the construct items so that the final survey data would fit the research context and confirm reliability and validity with greater clarity (Flynn et al., 1990) (see Table 1).

Operationalization of the variables

The adapted scales used to measure the constructs of this research were developed from previous research and are highlighted in Table 2. To assess CFI, respondents rated their level of integration with other departments towards project success; CFI was measured on a ten-item Likert scale adapted from Sherman et al. (2000), Holland et al. (2000) and Ernst et al. (2010), with one item reverse-worded (CFI10). To assess IPTE and TI, participants responded to items adapted from Bstieler (2005) and Chen (2007); these scales consisted of ten items each. KSB was measured with ten items adapted from Navimipour and Charband (2016), Kanawattanachai and Yoo (2002), Lewis (2003), and Scott and Tiessen (1999), with KSB5 reverse-worded. AL was measured on a ten-item Likert scale adapted from Lloyd-Walker and Walker (2011) and Toor et al. (2007). FC was measured with 16 items adapted from Patterson et al. (2005), Wei and Miraglia (2017) and Ajmal and Koskinen (2008), with FC7 reverse-worded. Finally, NSDP success was measured with nine items adapted from Cooper and Kleinschmidt (1988, 1995) and Ernst et al. (2010). The Likert-type scales used a five-point format anchored by "1" (strongly disagree) and "5" (strongly agree). The realism of the scenario was measured with a single item, "How realistic is the scenario described in the questionnaire?" (1 = very unrealistic, 5 = very realistic), and comprehension with the item "The scenario described is easy to comprehend" (1 = strongly disagree, 5 = strongly agree).

The current study included multiple control variables: type of service firm, size of service firm and new service development budget. Including the type and size of the service firm as overall control variables allows the study to adjust for any significant differences that may exist between the service-typology clusters with regard to new service development success. The researchers also included project budget allocation and team size as overall control variables, since the size of a project team and the budget allocated to new service development potentially influence the quality of performance by the project team and management; individual team member support has a positive influence on the success of NSDPs (Henard and Szymanski, 2001).
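Because the pilot survey's internal-consistency check is central to purifying the construct items, a short sketch of Cronbach's alpha may be useful here. This is a generic Python reconstruction rather than the study's SPSS output; the simulated 50 x 10 response matrix is a hypothetical stand-in for one construct's pilot data.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(50, 10)).astype(float)  # 50 pilot respondents, 10 Likert items
print(f"alpha = {cronbach_alpha(pilot):.2f}")  # values of about 0.7 or above are usually deemed acceptable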
Table 2. Construct measurement items

Cross-functional integration (CFI) - "In the success of NSDP X, I as a team member (e.g. respondent from sales unit) integrated with (e.g. R&D, finance, marketing, HR and production) during the following NSD activities . . ."
CFI1: CFI is required to plan and formulate new service development objectives.
CFI2: A high level of information flow is required between all the operational units of the service firm.
CFI3: Cross-functional participation is needed to identify specific problems that are a barrier to the success of the new service development project.
CFI4: It is critical to explain the determination of the overall strategy across the departments before introducing the new service into the market.
CFI5: The team measures the execution of test-marketing assessment cross-functionally before market introduction of the new service.
CFI6: CFI is required to monitor competitors' reactions and their strategies.
CFI7: Unhealthy behaviour, such as distortion and withholding of information, always hurts decisions, creates distrust during interaction and obstructs the decision process of NSD. (reverse item)

Internal project team efficiency (IPTE) - "In the NSD project X, I (e.g. respondent from sales) need overall team efficiency in the following activities . . ."
IPTE1: The team should have the capability to reach the project objectives.
IPTE2: Meeting schedules amongst the project team are executed on time.
IPTE3: The team promptly understands the market trend.
IPTE4: The team promptly executes the market trend.
IPTE5: The team is capable in terms of technical activities.
IPTE6: The team is proficient at forecasting an unpredictable market that is hard to anticipate.
IPTE7: The team should not carry the project from beginning to end. (reverse item)

Technology infrastructure (TI) - "In the NSD project X, I need overall IT infrastructure in the following activities . . ."
TI1: The firm's current IT, which facilitates and showcases the services innovation database, is available to the project team.
TI2: The firm's current IT facilitates a competitive advantage over its competitors in NSD.
TI3: The firm's current IT infrastructure improves the CFI and decision-making process.
TI4: The current IT addresses the specific control requirements of the team for higher efficiency in NSD.
TI5: The current IT improves our NSD project's strategic planning process.
TI6: The current IT helps to make a pre-emptive strike against competitors in NSD.
TI7: The current IT provides minimal administrative support for the project team (such as billing, collection, inventory management). (reverse item)

Knowledge-sharing behaviour (KSB) - "In the NSD project X, I (e.g. respondent from sales) need overall KSB in the following activities . . ."
KSB1: The NSD project team members clearly have the knowledge that they need to share, which can guide them in doing the prescribed job.
KSB2: To do our work in NSD, we actually rely on standard procedures and practices to share knowledge.
KSB3: Team efficiency will be maximized through KSB amongst the team members, which has an essential role in leveraging the team's resources.
KSB4: Team members share their knowledge when they trust their partners.
KSB5: The team neglects knowledge sharing between their project teams. (reverse item)
KSB6: Sharing knowledge and experience may reduce the costs associated with the NSD project.
KSB7: Explicit knowledge (i.e. data, information, documents, records, files, etc.) promotes knowledge-sharing behaviour amongst the project team.
KSB8: Tacit knowledge (i.e. experience, thinking, competence, commitment, deeds, etc.) is also required to promote KSB amongst the project team.

Authentic leadership (AL) - "In the NSD project X, I (e.g. respondent from sales) expect my team leaders in the following role . . ."
AL1: Authentic leaders are those who are confident, hopeful, optimistic, resilient and of high moral character towards achieving the success of the NSD project.
AL2: My project leader is true to himself/herself (rather than conforming to the expectations of others).
AL3: My project leader is motivated by personal convictions rather than by status, honours or other personal benefits.
AL4: My project leader is original in the sense of not copying, that is, he/she leads from his/her own personal point of view.
AL5: My project leader takes pleasure in empowering others rather than concentrating power around himself/herself.
AL6: My project leader is guided by qualities of the heart and mind together.
AL7: My project leader has emotional intelligence competency.
AL8: My project leader maintains a relationship amongst the cross-functional team that is unfair and biased. (reverse item)

New service development project (NSDP) - "To what extent do you agree with the following statements related to the success of the new service (project X)?"
NSDP1: How successful was this new service development from the standpoint of the firm's overall profitability? (1 = "a great financial failure", 5 = "a great financial success")
NSDP2: Relative to your service firm's other new services, how successful was this new service in terms of profits? (1 = "far less than our other new services", 5 = "far greater than our other new services")
NSDP3: Relative to your service firm's objectives, how successful was this new service in terms of profits? (1 = "far less than our objectives", 5 = "far exceeded our objectives")
NSDP4: Relative to your service firm, the current new service exceeded sales expectations. (1 = "far less than our expectations", 5 = "far exceeded our expectations")
NSDP5: Relative to your service firm, the current new service exceeded return on investment (ROI) expectations. (1 = "far less than our expectations", 5 = "far exceeded our expectations")
NSDP6: Relative to your service firm, the current new service exceeded senior management's expectations. (1 = "far less than our expectations", 5 = "far exceeded our expectations")
NSDP7: Relative to your service firm, the current new service exceeded customer expectations. (1 = "far less than our expectations", 5 = "far exceeded our expectations")
NSDP8: Relative to your service firm, the current new service exceeded the specialized knowledge of several different team members, which was needed to complete the project deliverables. (1 = "strongly disagree", 5 = "strongly agree")

Common method variance (CMV)

The researchers managed the risk of common method variance (CMV) at both the research design and data analysis stages, following the guidelines of Podsakoff et al. (2003) and Serrano Archimi et al. (2018). In the design stage, the project scenario and the survey instrument were reviewed by academic experts, service employees and NSDP managers; the survey included one reverse-coded (negative) statement under each construct; respondents were assured of anonymity; and the researchers emphasized that there were no right or wrong answers, asking respondents to answer as accurately as possible.

In the statistical stage, the researchers ran Harman's one-factor test, applying an exploratory factor analysis with an unrotated factor solution to each data set (cluster 1, routine-intensive services: 140 responses; cluster 2, technology-intensive services: 120 responses; cluster 3, contact-intensive services: 150 responses; cluster 4, knowledge-intensive services: 160 responses). The variance explained by a single factor did not exceed 29.68% in any cluster, below the 50% threshold suggested by Podsakoff et al. (2003). In addition, the researchers ran Harman's single-factor test using a confirmatory factor analysis (CFA); Malhotra et al. (2006, p. 1867) note that "method biases are assumed to be substantial if the hypothesized model fits the data". Our Harman's single-factor model showed a poor fit in each cluster (GFI = 0.684; AGFI = 0.628; NFI = 0.617; IFI = 0.684; TLI = 0.642; RMR = 0.123; RMSEA = 0.123), supporting the absence of CMV in our data set. Finally, the study used a common latent factor (CLF) test for each cluster, comparing the standardized regression weights of all items in models with and without the CLF. The differences in regression weights for each item were very small (<0.200), confirming that CMV is not a major issue for any cluster of data (Gaski, 2017; Siyal et al., 2019). Together, these results indicate that CMV is not a major concern in this study.

Data analysis method

The study applied CFA together with the structural equation modelling (SEM) technique using AMOS software, adopting the two-stage procedure suggested by Anderson and Gerbing (1998). In the first stage, the measurement model was used to assess the reliability and validity of the adapted scales under each construct; in the second stage, the structural model was used to test the proposed hypotheses. SEM was used because the method combines exploratory factor analysis and multiple regression (Ullman and Bentler, 2003).

Turning to the mediation analysis, the researchers calculated the effects of the intervening variables TI and IPTE using the indirect-effect method, which suits the SEM technique adopted here (Baron and Kenny, 1986; Hayes, 2013). The study used the standardized regression weights, p-values, regression weights and the direct and indirect effects of the above-mentioned variables. With these values, the study compared the direct and indirect effects via the change in the beta values across the two scenarios (with and without the mediator) to examine the mediating effects of TI on the relationship between IPTE and the success of NSDP, of IPTE on the relationship between CFI and the success of NSDP, and of TI on the relationship between CFI and NSDPs. For testing the moderating effects of KSB, AL and FC, the study used the interaction effect between the independent variables and the moderating variable on the dependent variable (Hakim and Fernandes, 2017; Famiyeh et al., 2018).
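The indirect-effect procedure described above can be illustrated with a compact bootstrap in Python. This is a simplified sketch using observed composite scores and OLS path estimates, not the AMOS latent-variable analysis the study actually ran; the simulated IPTE, TI and NSDP scores are placeholders for the real data.

import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    """a*b: path X->M times path M->Y controlling for X (Baron-Kenny style)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

rng = np.random.default_rng(42)
n = 570
ipte = rng.normal(size=n)                          # stand-in for IPTE composite scores
ti = 0.4 * ipte + rng.normal(size=n)               # stand-in for TI
nsdp = 0.3 * ipte + 0.4 * ti + rng.normal(size=n)  # stand-in for NSDP success

boot = np.empty(5000)                              # 5,000 re-samples, as in the study
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)               # resample cases with replacement
    boot[i] = indirect_effect(ipte[idx], ti[idx], nsdp[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])          # percentile CI (the study used bias-corrected)
print(f"indirect effect = {indirect_effect(ipte, ti, nsdp):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# The mediation is judged significant when this interval excludes zero.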
Measurement model

The current study followed the logic of Anderson and Gerbing (1998) in conducting the CFA (Table 3) to establish the construct reliability and discriminant validity of the multi-item scales adapted in this research. The chi-square value for this model was significant (chi-square = 1049.826, df = 389, p = 0.000). In addition, given the sensitivity of chi-square to sample size and model complexity, the researchers assessed model fit with the goodness-of-fit index (GFI), Tucker-Lewis index (TLI), standardized root-mean-square residual (SRMR), root-mean-square error of approximation (RMSEA) and comparative fit index (CFI) (Bagozzi and Yi, 1988). All corresponding values of the CFA model were in line with or above the accepted thresholds, indicating satisfactory model fit (GFI = 0.912; AGFI = 0.890; TLI = 0.967; CFI = 0.973; SRMR = 0.028; RMSEA = 0.047). In addition, all individual constructs (CFI, IPTE, TI, KSB, FC, AL and NSDP) exceeded the standards proposed by Bagozzi and Yi (1988) for construct reliability (>0.80) and average variance extracted (AVE) by the latent construct (>0.50). Furthermore, all item loadings had significant t-values, confirming convergent validity, and the AVE for each construct was greater than 0.5, further supporting the convergent validity of the measures (Fornell and Larcker, 1981). The construct reliabilities and Cronbach's alphas of the individual constructs indicated satisfactory reliability as recommended by Hair et al. (2010). The square root of the AVE of each construct was greater than its correlations with the other constructs, supporting the measurement model and confirming discriminant validity (Fornell and Larcker, 1981).

The invariance analysis

The researchers performed an invariance analysis to verify the uniformity of the measures in the proposed model across the four service typologies before hypothesis testing (Steenkamp and Baumgartner, 1998; Park et al., 2015). A configural invariance test was conducted to determine whether service typology 1 (technology-intensive services, cluster 1), service typology 2 (contact-intensive services, cluster 2), service typology 3 (knowledge-intensive services, cluster 3) and service typology 4 (routine-intensive services, cluster 4) would use the same pattern in measuring the items adapted under each construct. The results of the configural and metric invariance analyses, shown in Table 4, indicate that the chi-square and model-fit indicators for each group are sufficient to support configural invariance of the constructs. Finally, a partial metric invariance model with six of the 56 invariance constraints relaxed was supported (Table 4).

Structural model

The results of the structural equation model indicate that the overall model fits the data well, with all the recognized fit indices at acceptable levels (chi-square = 723.86; df = 387; chi-square/df = 1.87; CFI = 0.967; IFI = 0.968; TLI = 0.960; RMSEA = 0.054). The path coefficients of the structural model, highlighted in Table 5, indicate that all seven paths in the proposed structural model were significant and positive towards their respective endogenous variables.
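Before turning to the individual paths, readers without AMOS may find it helpful to see how the measurement and structural models above could be approximated with the open-source semopy package in Python. The snippet below is a scaled-down sketch, with three illustrative indicators per construct and a hypothetical data file, not the study's full 56-item model; semopy's fit statistics (chi-square, CFI, TLI, GFI, AGFI, RMSEA) parallel those reported here.

import pandas as pd
import semopy

# lavaan-style syntax: "=~" defines latent constructs, "~" defines structural paths
desc = """
CFI_lv  =~ cfi1 + cfi2 + cfi3
IPTE_lv =~ ipte1 + ipte2 + ipte3
TI_lv   =~ ti1 + ti2 + ti3
NSDP_lv =~ nsdp1 + nsdp2 + nsdp3
IPTE_lv ~ CFI_lv
TI_lv   ~ CFI_lv + IPTE_lv
NSDP_lv ~ CFI_lv + IPTE_lv + TI_lv
"""

data = pd.read_csv("nsdp_survey_items.csv")  # hypothetical item-level data file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())             # factor loadings and path coefficients
print(semopy.calc_stats(model).T)  # chi2, df, CFI, TLI, GFI, AGFI, RMSEA, etc.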
CFI was found to have a positive and significant influence on NSDP (beta = 0.45, t = 4.88, p < 0.01), and IPTE also proved positive and significant towards NSDP (beta = 0.56, t = 6.24, p < 0.01). The results therefore provide empirical support for H1 and H2: both CFI and IPTE have a positive and significant relationship with the service firm's NSDP. Furthermore, CFI has a significant positive influence on IPTE (beta = 0.56, t = 6.24, p < 0.01) and a positive effect on TI (beta = 0.34, t = 3.77, p < 0.01). The effect of IPTE on TI also proved significant and positive (beta = 0.37, t = 3.88, p < 0.03). Finally, TI has a direct positive influence on NSDP (beta = 0.42, t = 4.58, p < 0.01) (Table 3).

Mediating effect of technology infrastructure and internal project team efficiency

The study adopted the recommendations of Preacher and Hayes (2008) and Balaji et al. (2017) to test the mediation effects, running 5,000 bootstrap re-samples with bias-corrected 95% confidence intervals; an indirect effect is considered significant when the confidence interval does not contain zero. The bootstrapping results revealed that IPTE had a significant total effect on NSDP (beta = 0.36, t = 5.76, p < 0.01). When TI is introduced as a mediator, the direct effect of IPTE on the success of NSDP is reduced (beta = 0.31, t = 4.66, p < 0.01), while the indirect effect of IPTE on NSDP via the service firm's TI reaches a point estimate of 0.05, with a bias-corrected 95% confidence interval that does not contain zero (lower 95% confidence interval = 0.05; upper 95% confidence interval = 0.01). Thus, TI mediates the relationship between IPTE and NSDP, supporting H3.

The study also supported H4: the indirect effect of CFI on the success of NSDP via IPTE had a point estimate of 0.07 (total effect: beta = 0.26, t = 4.76, p < 0.01; after introducing the mediating effect of IPTE: beta = 0.19, t = 3.56, p < 0.01), with a bias-corrected 95% confidence interval containing no zero (lower 95% confidence interval = 0.04; upper 95% confidence interval = 0.01). Thus, IPTE was shown to mediate the influence of CFI on the success of NSDP amongst the four service typologies, and hypothesis 4 (H4) was accepted.

Table 4. Results of configural and metric invariance of the clusters
Table 5. Results of the SEM and its respective path coefficients

Finally, for the mediating effect of TI, the analysis indicated that the point estimate for the indirect effect of CFI on NSDP is 0.08 (total effect: beta = 0.22, t = 5.76, p < 0.01; after introducing the mediating effect of TI: beta = 0.14, t = 4.56, p < 0.01).
As the bias-corrected 95% confidence interval also contained no zero (lower 95% confidence interval = 0.04; upper 95% confidence interval = 0.01), H5 is supported: TI mediates the relationship between CFI and NSDP amongst the four service typologies.

Moderation analysis

The study applied interaction-effect analysis to test the moderating effects between the independent variables and the moderating variables (Fairchild and MacKinnon, 2009). The results of the moderation tests are presented below.

Moderating effect of knowledge-sharing behaviour on the IPTE-NSDP relationship. The SEM analysis revealed an interaction coefficient (beta) of 0.262 with a significant p-value, showing that the project team's KSB acts as a moderator between project team efficiency and the success of an NSDP. The direct effect of IPTE on the NSDP was not significant without the strong moderating presence of KSB. The positive coefficient of the interaction effect indicates that KSB strengthens the relationship: slope analysis showed that the greater the influence of KSB, the greater the effect of IPTE on the service firm's NSDP. The research therefore accepted hypothesis 6 (H6).

Moderating effect of authentic leadership on the CFI-NSDP relationship. The study proposed that the positive relationship between CFI and the success of NSDP would be stronger when the AL style of the NSDP leaders moderates the relationship. The results supported this, as the interaction effect of AL was strong and significant (see Table 6), indicating that AL acts as a moderator between CFI and the success of NSDP. From the slope analysis, the study concludes that the direct effect of CFI on the success of NSDP is not significant unless AL is present as a significant moderator; a higher level of AL strengthens the positive effect of CFI on the success of NSDP. Thus, H7 was accepted.

Moderating effect of firm's culture on the CFI-NSDP relationship. The results also supported H8, as service FC positively moderates the effect of CFI on the success of NSDP (interaction term beta = 0.213, p = 0.01).

Table 6. Results of the SEM moderation analysis (e.g. CFI x FC -> NSDP: beta = 0.213, p < 0.01, significant)

The slope analysis revealed that the stronger the influence of FC, the higher the level of CFI leading to the success of the NSDP for the service firms, while no significant difference was found at lower levels of FC in the CFI-NSDP relationship.
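The interaction (moderation) tests reported above follow the standard product-term approach, sketched below with statsmodels. Again, this is an illustrative OLS analogue of the AMOS analysis, assuming mean-centred composite scores; the simulated columns CFI, FC and NSDP are hypothetical stand-ins for the study's variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 570
cfi = rng.normal(size=n)
fc = rng.normal(size=n)
nsdp = 0.4 * cfi + 0.2 * fc + 0.2 * cfi * fc + rng.normal(size=n)
df = pd.DataFrame({"CFI": cfi, "FC": fc, "NSDP": nsdp})

# "CFI * FC" expands to both main effects plus the CFI:FC product term.
fit = smf.ols("NSDP ~ CFI * FC", data=df).fit()
print(fit.params["CFI:FC"], fit.pvalues["CFI:FC"])  # interaction coefficient and its p-value

# Simple-slopes probe: the effect of CFI on NSDP at low (-1 SD) and high (+1 SD) FC.
for level in (-1, 1):
    fc_val = level * df["FC"].std()
    slope = fit.params["CFI"] + fit.params["CFI:FC"] * fc_val
    print(f"slope of CFI on NSDP at FC = {level:+d} SD: {slope:.3f}")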
Discussion

Recognizing the challenges of new service development and of meeting the needs of the target markets, NSDPs need to be accomplished successfully, together with the important phenomena that influence the overall performance of service firms. A number of prior investigations analysed various aspects of new product development project success and found that the success of new product development significantly influences the overall success of the company (Henard and Szymanski, 2001; Carbonell and Rodríguez Escudero, 2019; Good and Calantone, 2019). The specific objective of this study was to propose and empirically investigate an integrated conceptual model examining the role of CFI and IPTE in influencing the success of NSDP, the mediating role of TI and IPTE, and the moderating role of KSB, AL and FC towards the success of NSDP under the paradigm of four service typologies. The findings from the configural invariance test revealed that the determinants are significant across the four service typologies, and service firms may use the same pattern in measuring the items for the success of NSDP. The findings also revealed that full-metric invariance was not supported, owing to the χ² difference between the non-restricted and the full-metric invariance models (Steenkamp and Baumgartner, 1998). Thus, the researchers relaxed the invariance constraints step by step on the basis of the respective modification indices. In the end, a partial-metric invariance model with six of 56 invariance constraints relaxed was supported. The findings support previous work confirming that the generic determinants for the success of NSDP do not vary at a significant level (de Jong and Vermeulen, 2003; Kindström and Kowalkowski, 2009; Menor and Roth, 2008; Paswan et al., 2009). The findings of the study both support and contribute to the service management and project management literature. In line with existing research, this study corroborates the relationship amongst CFI, IPTE and the success of NSDP (Cooper, 2019; Laurent and Leicht, 2019; Castro et al., 2019; Hoegl and Parboteeah, 2006). In addition, the present study uses TI and IPTE to extend the understanding of the relationship between these constructs. The present study finds that TI mediates the relationship between IPTE and the success of NSDP, which supports the previous findings of Bstieler (2005) and McNally et al. (2011). The findings also demonstrate that the success of NSDP elicits both high and low levels of KSB amongst the project team, which affects project team efficiency along with the success of NSDP. This happens because NSDPs require team members who have different perceptions, views, functional backgrounds, conflict-handling capabilities, motivation and knowledge (Mitchell and Boyle, 2010; Todorović et al., 2015). Similarly, the present study finds that AL and FC significantly moderate the relationship between CFI and the success of NSDP. AL describes leadership skills that integrate transformational and ethical leadership with a high level of transparency to guide the success of NSDP, where leaders are true to themselves rather than using the role simply to develop an image amongst the team members (Avolio et al., 2004; Lemoine et al., 2019; Tonkin, 2013). Again, the complexity of NSDP requires close collaboration in a cross-functional team, whose members come from diverse backgrounds with different professional cultures and subcultures, aggregated in one frame through a standard culture established by the project team management (Ajmal and Koskinen, 2008; Bartsch et al., 2013; Wiewiora et al., 2013).
The present study finds that project team efficiency and CFI are regulated by KSB, AL and FC, which strengthens the findings of Floris and Cuganesan (2019), Zhu et al. (2019), Donnelly (2019), Ajmal and Koskinen (2008) and Maitlo et al. (2019). A higher level of KSB amongst the team members and higher IPTE lead to a greater chance of success of NSDP, while no significant difference was observed at a lower level of KSB in the relationship between IPTE and NSDP. The following sections explain the theoretical and managerial contributions.

Theoretical contributions

The results of the study contribute to the service management literature on the success of NSDP amongst the four typologies suggested by Jaakkola et al. (2017). The results indicate that the critical success factors of NSDP do not differ across service types, thereby confirming the "One Basket Fits All" postulation of the current NSDP research. The present study demonstrates that, when developing a new service, firms need to address CFI, IPTE and the firm's TI: TI mediates the influence of CFI on the success of NSDPs, and the same variable also mediates the influence of project team efficiency on the success of NSDPs. In addition, overall project team efficiency mediates the relationship between CFI and the success of NSDPs. Furthermore, the current study contributes to the new service development literature by highlighting the important moderating role of the team's KSB, AL and FC in the success of NSDPs. The current study also conceptualizes and empirically investigates the role of CFI in the NSDP. As the results indicate that CFI is critical for NSDP, we can conclude that prior research on CFI has overlooked this by not linking it with the mediating effects of IPTE and TI and the moderating effects of AL and FC, which play a significant role in NSDP across the four service typologies. In addition, the study links with the prediction from resource dependency theory, as NSDPs require a high level of interdependence amongst the functional divisions of service firms (Kim and Wilemon, 2002). It is noteworthy that CFI alone cannot determine service development project success; IPTE also has a strong impact, directly and indirectly, on NSDPs. The role of KSB is also dominant for the functioning of IPTE and NSDP. Hence, a rational belief is that a lack of KSB amongst the team members at the NSDP stage can cause a "good" service to fail. This study finds a positive impact of KSB on the NSDP amongst the four service typologies. Above all, the current study is an attempt to examine the mediating role of TI and the moderating roles of KSB, AL and FC in the success of NSDP in the context of a developing country.

Managerial implications

Traditionally, the successful development of a new service has been linked to CFI amongst all the departments of the service firm. However, the results of this study indicate that the leader's role is vital and that service firms need to carefully consider managing and selecting authentic leaders to run the project successfully. The integration of all the relevant departments is critical to the success of NSDP, thus increasing the likelihood of new service success, which requires a strong culture that can guide and set the principles, norms and values across the NSDP team members.
Therefore, managers of NSDPs should understand that bringing CFI into the NSDP process is an effective way to bring the "voice of the staff" into the firm in order to make the project successful. This research further shows that fostering CFI amongst departments is not possible without an appropriate TI and IPTE at all stages of the NSDP. Thus, service firms' managers need to focus on training and development programmes to enhance team efficiency in order to finish the NSDP on time. The researchers also recommend that training programmes be designed to equip service employees with the requisite skills to manage various service development projects. The training should be directed at the development of AL, the process of adopting the culture of the service firm, managing and sharing knowledge with colleagues, and the importance of technology, in order to enhance employees' capability for the success of NSDP. In addition, service firms need to establish a standard level of TI, which optimizes both team efficiency and CFI for the success of NSDP. Furthermore, a promising way forward could be to increase team efficiency by encouraging the team members to share knowledge in the NSDP process. In sum, the present study advances service managers' understanding of the critical success factors of NSDP in the context of four service typologies. A more specific practical implication of these findings is that service firms need to design an effective action programme for the NSDP process, in which they should attend to the aforementioned antecedents and their relationship dynamics.

Limitations and further research

The present study investigates the critical success factors of NSDP in the context of four service typologies. It contributes to the existing body of knowledge on leadership, FC, KSB and the service firm's TI in the success of NSDP, while acknowledging that its limitations also provide avenues for future research in the service management field. The researchers applied a scenario-based survey approach amongst the employees of service firms to minimize the memory bias associated with recalling the critical success factors of NSDP. Although this type of survey approach fulfils internal validity, it may lack external validity (Martinez et al., 2009). The current study applied convenience and purposive sampling owing to the unavailability of a sampling frame for the target population of interest. Therefore, future research may use probability sampling methods for better representation of the population. In addition, the current research could be extended further by examining the ranking of importance of these factors across the different service typologies. Finally, future studies could examine other issues such as the service firm's structural complexity, uncertainty of market needs, pace of competition, dynamic information technology (IT) complexity, and social-political and institutional complexities in NSDPs.
2020-10-19T18:09:26.217Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "88742899916aaa7fd97e29fd361fb4dde40e7747", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JCMARS-09-2019-0035/full/pdf?title=paradigm-of-new-service-development-projects-nsdps-italicone-basket-fits-allitalic", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "209a480b75a9043a40e7648ada5db7996bb2e8e0", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
9525823
pes2o/s2orc
v3-fos-license
On the $\mathcal{NP}$-hardness of GRacSim Drawing and k-SEFE Problems

We study the complexity of two problems in simultaneous graph drawing. The first problem, GRacSim Drawing, asks for finding a simultaneous geometric embedding of two graphs such that only crossings at right angles are allowed. The second problem, k-SEFE, is a restricted version of the topological simultaneous embedding with fixed edges (SEFE) problem, for two planar graphs, in which every private edge may receive at most $k$ crossings, where $k$ is a prescribed positive integer. We show that GRacSim Drawing is $\mathcal{NP}$-hard and that k-SEFE is $\mathcal{NP}$-complete. The $\mathcal{NP}$-hardness of both problems is proved using two similar reductions from 3-Partition.

Introduction

The problem of computing a simultaneous embedding of two or more graphs has been extensively explored by the graph drawing community. Indeed, besides its inherent theoretical interest [1,2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,19,22,23,24,25,26], it has several applications in dynamic network visualization, especially when a visual analysis of an evolving network is needed. Although many variants of this problem have been investigated so far, a general formulation for two graphs can be stated as follows: let G1 = (V1, E1) and G2 = (V2, E2) be two planar graphs sharing a common (or shared) subgraph G = (V, E), where V = V1 ∩ V2 and E = E1 ∩ E2; compute a planar drawing Γ1 of G1 and a planar drawing Γ2 of G2 such that the restrictions to G of these drawings are identical. By overlapping Γ1 and Γ2 in such a way that they perfectly coincide on G, it follows that edge crossings may only occur between a private edge of G1 and a private edge of G2, where a private (or exclusive) edge of Gi is an edge of Ei \ E (i = 1, 2). Depending on the drawing model adopted for the edges, two main variants of the simultaneous embedding problem have been proposed: topological and geometric. The topological variant, known as SIMULTANEOUS EMBEDDING WITH FIXED EDGES (or SEFE for short), allows one to draw the edges of Γ1 and Γ2 as arbitrary open Jordan curves, provided that every edge of G is represented by the same curve in Γ1 and Γ2. Instead, the geometric variant, known as SIMULTANEOUS GEOMETRIC EMBEDDING (or SGE for short), imposes that Γ1 and Γ2 are two straight-line drawings. The SGE problem is therefore a restricted version of SEFE, and it has turned out to be too restrictive: there are examples of pairs of structurally simple graphs, such as a path and a tree [6], that do not admit an SGE. Also, testing whether two planar graphs admit a simultaneous geometric embedding is NP-hard [16]. Compared with SGE, pairs of graphs from much broader families always admit a SEFE; in particular, there always exists a SEFE when the input graphs are a planar graph and a tree [18]. In contrast, it is a long-standing open problem to determine whether the existence of a SEFE can be tested in polynomial time for two planar graphs, although the testing problem is NP-complete when SEFE is generalized to three or more graphs [22]. However, several polynomial-time testing algorithms have been provided under different assumptions [3,4,11,12,24,26]; most of them involve the connectivity or the maximum degree of the input graphs or of their common subgraph.
In this paper we study the complexity of the GEOMETRIC RAC SIMULTANEOUS DRAWING problem [8] (GRACSIM DRAWING for short): a restricted version of SGE, which asks for finding a simultaneous geometric embedding of two graphs such that all edge crossings occur at right angles. We show that GRACSIM DRAWING is NP-hard by a reduction from 3-PARTITION; see Section 3. Moreover, we introduce a new restricted version of the SEFE problem, called k-SEFE, in which every private edge may receive at most k crossings, where k is a prescribed positive integer. We then show that k-SEFE is NP-complete for any fixed positive k; to prove the NP-hardness, we use a reduction technique similar to that for GRACSIM DRAWING; see Section 4.

Preliminaries

Let G = (V, E) be a simple graph. A drawing Γ of G maps each vertex of V to a distinct point in the plane and each edge of E to a simple Jordan curve connecting its end-vertices. The drawing Γ is planar if no two distinct edges intersect, except at common end-vertices. Γ is a straight-line planar drawing if it is planar and all its edges are represented by straight-line segments. G is planar if it admits a planar drawing. A planar drawing Γ of G partitions the plane into topologically connected regions called faces. The unbounded face is called the external (or outer) face; the other faces are the internal (or inner) faces. A face f is described by the circular ordering of vertices and edges that are encountered when walking along its boundary, in clockwise direction if f is internal and in counterclockwise direction if f is external. A planar embedding of a planar graph G is an equivalence class of planar drawings that define the same set of faces for G. A plane graph is a planar graph with an associated planar embedding and a prescribed outer face. Let H be a plane graph. The weak dual H* of H is the graph whose vertices correspond to the internal faces of H and that has an edge between two vertices if the corresponding internal faces of H share one or more edges. A fan is a graph formed by a path π plus a vertex v and a set of edges connecting v to every vertex of π; vertex v is called the apex of the fan. A wheel is a graph consisting of a cycle C plus a vertex c and a set of edges connecting c to every vertex of C; vertex c is the center of the wheel.

NP-hardness of GRACSIM DRAWING

In this section, we study the complexity of the following problem.

Problem: GRACSIM DRAWING
Instance: Two planar graphs G1 = (V, E1) and G2 = (V, E2), sharing a common subgraph G = (V, E) = (V, E1 ∩ E2).
Question: Are there two straight-line planar drawings Γ1 and Γ2, of G1 and G2, respectively, such that (i) every vertex is mapped to the same point in both drawings, and (ii) any two crossing edges e1 and e2, with e1 ∈ E1 \ E and e2 ∈ E2 \ E, cross only at right angles?

Theorem 1. Deciding whether two graphs have a GRACSIM DRAWING is NP-hard.

Proof. We prove the NP-hardness by a reduction from 3-PARTITION (3P).

Problem: 3-PARTITION (3P)
Instance: A multiset A = {a1, a2, ..., a3m} of 3m positive integers and a bound B such that B/4 < ai < B/2 for each 1 ≤ i ≤ 3m.
Question: Can A be partitioned into m subsets A1, A2, ..., Am such that each subset contains exactly 3 elements of A, whose sum is B?

We recall that 3P is a strongly NP-hard problem [20], i.e., it remains NP-hard even if B is bounded by a polynomial in m. Also, a trivial necessary condition for the existence of a solution is that $\sum_{i=1}^{3m} a_i = mB$; therefore it is not restrictive to consider only instances satisfying this equality. We first give an overview of this reduction, then we describe in detail the construction for transforming an instance of 3P into an instance ⟨G1, G2⟩ of GRACSIM DRAWING, and finally we prove that an instance of 3P is a Yes-instance if and only if the transformed instance ⟨G1, G2⟩ admits a GRACSIM drawing.
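For concreteness, here is a brute-force sketch of the source problem; 3P is strongly NP-hard, so exhaustive search is only feasible for toy instances, and the sketch is meant purely as an illustration of the definition above.

```python
# Brute-force 3-Partition for tiny instances (illustration only; 3P is
# strongly NP-hard, so this does not scale).
from itertools import combinations

def three_partition(A, B):
    A = list(A)
    m = len(A) // 3
    assert len(A) == 3 * m and sum(A) == m * B  # necessary condition
    def solve(items):
        if not items:
            return []
        first = items[0]
        # the first remaining element must go into some triple summing to B
        for pair in combinations(items[1:], 2):
            triple = (first,) + pair
            if sum(triple) == B:
                rest = list(items)
                for x in triple:
                    rest.remove(x)
                sub = solve(rest)
                if sub is not None:
                    return [triple] + sub
        return None
    return solve(A)  # list of m triples, or None if no 3-partition exists

# e.g. three_partition([3, 3, 4, 3, 3, 4], 10) -> [(3, 3, 4), (3, 3, 4)]
```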
OVERVIEW

The transformed instance ⟨G1, G2⟩ of GRACSIM DRAWING is obtained by combining a subdivided pumpkin gadget with 3m subdivided slice gadgets and m transversal paths; see Fig. 1 for an illustration. A pumpkin gadget consists of a biclique K2,m+1 plus an additional edge, called the handle, that connects two vertices of the partite set of cardinality m+1; the two vertices of the other partite set are the poles of the pumpkin. A subdivided pumpkin is a pumpkin where each edge, other than the handle, is subdivided exactly once, while the handle is subdivided twice. We remark that it is not strictly necessary to use a subdivided pumpkin instead of a normal pumpkin; the only reason is to exploit the subdivision vertices as bend points, which yields more readable and compact GRACSIM drawings. Hereafter, when it is not ambiguous, we will use the terms pumpkin and slice in place of subdivided pumpkin and subdivided slice, respectively. All the edges of a pumpkin are shared edges, that is, they belong to both graphs; therefore they cannot be crossed in any GRACSIM drawing. Moreover, any planar embedding of a subdivided pumpkin contains exactly two faces of degree seven and m faces of degree eight; the latter are called wedges and are the only faces incident to both poles (Fig. 1(c) shows a wedge Wj, its transversal path πj, and the subdivided slices Sj1, Sj2 and Sj3). Wedges are used to contain (subdivided) slice gadgets, which are 3m subgraphs attached to the two poles of the pumpkin, with no other vertices in common with each other or with the pumpkin. Every slice has a "width" that suitably encodes a distinct element a_i of A (recall that two distinct elements could be equal), and the structure of a slice is sufficiently "rigid" that overlaps and nestings among slices cannot occur in a GRACSIM drawing. The basic idea of the reduction is to obtain the subsets A_j (1 ≤ j ≤ m) of a solution of 3P, in case one exists, by looking at the slices in each wedge of a GRACSIM drawing, which implies that every wedge must contain exactly three slices whose widths sum to B. Of course, without introducing some further gadget, each wedge could contain even all the slices, i.e., its width can be considered unlimited. Hence, in order to make all wedges of the same width B, m transversal paths are attached to the pumpkin, one for each wedge. Precisely, a transversal path is an alternating path that connects the two vertices of a wedge other than the poles and the subdivision vertices, and it contains only non-shared edges that belong alternately to G1 and to G2. Therefore, the pumpkin plus the transversal paths form a subdivision of a maximal planar graph, which has a unique embedding (up to the choice of the external face). Further, every transversal path has an "effective length" that encodes the integer B, which also establishes the width of the corresponding wedge. Crossings between slices and transversal paths are thus unavoidable in a GRACSIM drawing, because every transversal path splits its wedge into two parts, separating the two poles of the pumpkin; clearly, every slice crosses only one transversal path. However, by choosing a suitable structure for the slices, it is possible to create only crossings that are allowed in a GRACSIM drawing. The key point of the reduction is that this is possible if and only if each slice of width a_i can cross a portion of its transversal path with an effective length greater than or equal to a_i.
In other words, the slice structure and the transversal path effective length are defined in such a way that, in a GRACSIM drawing, (i) every transversal path cannot cross more than three slices, and (ii) the total width of the slices crossed by a same transversal path equals the integer B, which yields a solution of 3P.

CONSTRUCTION

We now describe in detail a procedure to incrementally construct an instance ⟨G1, G2⟩ of GRACSIM DRAWING starting from an instance of 3P. At each step, this procedure adds one or more subgraphs (gadgets) to the current pair of graphs. As G1 and G2 have the same vertex set, for each added subgraph we only specify which edges are shared and which are exclusive; the final vertex set is then known implicitly. Start with a biclique K2,m+1, and denote by s, t and by v0, v1, ..., vm its vertices of the partite sets of cardinality 2 and m+1, respectively. Add edge h = (v0, vm) to the biclique, subdivide h twice, and denote by π_h the resulting 3-edge path. Then, for every 0 ≤ j ≤ m, subdivide edge (s, vj) ((t, vj), respectively) exactly once, denote the subdivision vertex by v_j^s (v_j^t, respectively) and the 2-edge path obtained from this subdivision by π_s(j) (π_t(j), respectively). The resulting graph G_p is the subdivided pumpkin and all its edges are shared edges, i.e., G_p ⊂ G; vertices s and t are the poles of the pumpkin, while π_h is called the subdivided handle of the pumpkin. Connect each pair of vertices v_{j−1}, v_j (1 ≤ j ≤ m) of G_p with a transversal path π_j, consisting of 2B + 1 non-shared edges, so that edges in odd positions (starting from v_{j−1}) are private edges of G1, while those in even positions are private edges of G2; hence, every transversal path starts and ends with an edge of G1 and has exactly 2B inner vertices. The integer B represents the effective length of a transversal path, which is defined as half the number of its inner vertices. For each integer a_i ∈ A (1 ≤ i ≤ 3m), construct a (subdivided) slice S_i by suitably attaching two fan subgraphs and by subdividing a subset of their edges as follows (see, e.g., Fig. 1(b)). Add a fan of a_i + 2 vertices with apex at pole t and subdivide every edge incident to t exactly once; denote the resulting subdivided fan by F_i^t. Specularly, add a subdivided fan F_i^s with apex at the other pole s, having the same number of vertices as F_i^t. All the edges of F_i^s and F_i^t are shared edges, i.e., F_i^s ∪ F_i^t ⊂ G. Now, let π_i^t and π_i^s be the two paths of these fans, i.e., π_i^t = F_i^t \ {t} and π_i^s = F_i^s \ {s}. Visit path π_i^t starting from one of its end-vertices and denote the k-th encountered vertex by π_i^t(k) (1 ≤ k ≤ a_i + 1); in an analogous way, define the k-th vertex π_i^s(k) of path π_i^s. For each 1 ≤ k ≤ a_i + 1, connect π_i^s(k) to π_i^t(k) with a private edge of G2. Further, for each 1 ≤ k ≤ a_i, add a private edge of G1 joining either π_i^s(k) to π_i^t(k + 1) or π_i^t(k) to π_i^s(k + 1), depending on whether k is odd or even, respectively. We conclude this construction by introducing the concepts of tunnel and of width of a slice. The tunnel ∆_i is the subgraph of S_i induced by the vertices of π_i^t and π_i^s, i.e., ∆_i = S_i \ {s, t}. It is straightforward to see that every tunnel is a biconnected internally-triangulated outer-plane graph, its weak dual is a path, and it contains exactly 2a_i triangles. The width w(S_i) of a slice S_i is defined as half the number of triangles in its tunnel. It is not difficult to see that the transformed instance of GRACSIM DRAWING contains 6Bm + 21m + 7 vertices and 10Bm + 20m + 7 edges; therefore its construction can be performed in polynomial time. We observe that the common subgraph is not connected: G consists of the pumpkin G_p along with all fans and all inner vertices of the transversal paths, so there are 2Bm isolated vertices in the common subgraph. Moreover, even G1 and G2 are not connected, because in addition to G they also contain their own private edges of the slices S_i (1 ≤ i ≤ 3m) and of the transversal paths π_j (1 ≤ j ≤ m); in particular, due to the latter paths, G1 and G2 contain induced matchings of (B − 1)m and Bm (private) edges, respectively.
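A small helper reflecting the counts just stated; the vertex and edge formulas are taken verbatim from the text, and the input checks are the standard 3P restrictions.

```python
# Size of the transformed GRacSim instance, using the counts stated above
# (6Bm + 21m + 7 vertices, 10Bm + 20m + 7 edges).
def gracsim_instance_size(A, B):
    m = len(A) // 3
    assert len(A) == 3 * m and sum(A) == m * B   # necessary 3P condition
    assert all(B / 4 < a < B / 2 for a in A)     # standard 3P restriction
    vertices = 6 * B * m + 21 * m + 7
    edges = 10 * B * m + 20 * m + 7
    # polynomial in m, since B is polynomially bounded for strong NP-hardness
    return vertices, edges
```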
CORRECTNESS

We now prove that a Yes-instance of 3P is transformed into a Yes-instance of GRACSIM DRAWING, and vice versa.

(⇒) Let A be a Yes-instance of 3P; we show how to compute a GRACSIM drawing of the transformed instance ⟨G1, G2⟩ on an integer grid. It suffices to compute the vertex coordinates, because edges are represented by straight-line segments. The drawing construction strongly relies on the concepts of square cell and of cell array. A square cell, or briefly a cell, is a 4 × 4 square, with corners at grid points and with opposite sides that are either horizontal or vertical. The diagonal of a cell connecting the bottom-left (top-left, respectively) and the top-right (bottom-right, respectively) corners is called the positive-slope diagonal (negative-slope diagonal, respectively). The center of a cell is the intersection point of its diagonals, which meet at right angles. Every cell contains four special grid points, called anchor points, which are the corners of a 2 × 2 square having the same center as the cell; two anchor points lie on the positive-slope diagonal, while the other two are on the negative-slope diagonal. A horizontal cell array CA of length l > 0 is an ordered sequence c1, c2, ..., cl of l cells such that any two consecutive cells c_p, c_{p+1} (1 ≤ p < l) share a vertical side; namely, the right side of c_p coincides with the left side of c_{p+1}. Consider now a solution {A1, A2, ..., Am} of 3P for the instance A. For each triple A_j (1 ≤ j ≤ m), denote its elements by a_j1, a_j2, a_j3, i.e., A_j = {a_j1, a_j2, a_j3} ⊂ A, and denote by S_j1, S_j2 and S_j3, and by ∆_j1, ∆_j2 and ∆_j3, the corresponding slices and their tunnels in the transformed instance. Embed each tunnel ∆_jk (1 ≤ k ≤ 3) on a horizontal cell array CA_jk of length a_jk in such a way that the private edges of G2 are represented by the vertical sides of the cells in CA_jk. The private edges of G1 are thus embedded on a sequence of a_jk cell diagonals, whose slopes are alternately +1 (positive-slope diagonal) and −1 (negative-slope diagonal), starting from +1; hence, in every cell, the anchor points of one of the two diagonals are occupied, i.e., they overlap with a straight-line segment representing a private edge of G1, while the remaining two anchor points are (still) free. Place the cell arrays CA_jk one after another, from left to right, in increasing order of j = 1, 2, ..., m and, in case of ties, in increasing order of k = 1, 2, 3. Also, leave a horizontal gap of one cell between intra-partition consecutive arrays and a horizontal gap of two cells between inter-partition consecutive arrays. Concerning the vertical placement, proceed as follows. Let CA and CA′ be two arbitrary consecutive arrays (intra- or inter-partition), with CA to the left of CA′.
If CA has an even length, then CA and CA′ are top- and bottom-aligned along the vertical axis, while if CA has an odd length, then CA′ is shifted down by half a cell with respect to CA. It follows that the rightmost free anchor point of CA is always horizontally aligned with the leftmost free anchor point of CA′. Now, let R be the smallest rectangle containing all the previous cell arrays with a top, right, bottom and left margin of one cell. Place pole t (s, respectively) at a grid point above (below, respectively) the top side (bottom side, respectively) of R, as close as possible to its vertical bisector line, leaving a vertical offset of two cells; in Fig. 1(d) we deliberately increased this offset to get a better aspect ratio. Place vertex v_j (0 ≤ j < m) at the grid point that is horizontally aligned with, and to the left of, the first free anchor point of CA_j1, leaving a margin of one cell; also, place vertex v_m at the grid point that is horizontally aligned with, and one cell to the right of, the rightmost free anchor point. Observe that v_0 and v_m lie on the left and right side of R, respectively. Now, embed the vertices v_j^t and v_j^s (j = 0, 1, ..., m) of the pumpkin G_p along the top and bottom side of R, respectively, in such a way that they are vertically aligned with v_j. Then, embed the missing vertices of the slices in an analogous way; that is, a vertex adjacent to t (s, respectively) must be vertically aligned with its neighbor in the tunnel and must lie along the top side (bottom side, respectively) of R. Concerning the handle π_h, place its subdivision vertex adjacent to v_0 at the point whose x- and y-coordinates are one cell to the left of v_0 and one cell above t, respectively; with a symmetrical argument, choose the position of the other subdivision vertex of π_h. It is not hard to see that (i) no crossing has been introduced so far; (ii) slices S_j1, S_j2 and S_j3 are within wedge W_j (1 ≤ j ≤ m); and (iii) every triangle in a tunnel contains exactly one free anchor point. To complete the drawing, it remains to embed the inner vertices of the transversal paths, taking into account that every path π_j will unavoidably cross the three slices in its wedge W_j. Place these vertices at the free anchor points, so that the p-th inner vertex of π_j occupies the p-th free anchor point, from left to right. It turns out that the produced crossings always occur at right angles and involve a private edge of G1 and a private edge of G2. Note that this is possible because, by construction, w(W_j) = B = w(S_j1) + w(S_j2) + w(S_j3), where w(W_j) is the width of wedge W_j, defined as the effective length of π_j. Indeed, π_j has 2B inner vertices, there are 2(a_j1 + a_j2 + a_j3) free anchor points in W_j, and a_j1 + a_j2 + a_j3 = B, since we start from a solution of 3P.
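A rough sketch of the square-cell bookkeeping used in this direction of the proof, under the conventions above (4 × 4 cells, anchor points at the corners of the centred 2 × 2 square, G1 diagonals alternating in slope starting from +1); the function name and coordinate origin are illustrative only.

```python
# Anchor points of a horizontal cell array of 4x4 cells, following the
# conventions of the proof: per cell, the two anchor points on one diagonal
# are occupied by a private G1 edge (slopes alternate +1, -1, ... along the
# array), and the other two stay free for the transversal path's vertices.
def cell_array_anchor_points(x0, y0, length):
    """(x0, y0) is the bottom-left corner of the first cell."""
    cells = []
    for p in range(length):
        cx, cy = x0 + 4 * p + 2, y0 + 2                    # centre of cell p
        pos = [(cx - 1, cy - 1), (cx + 1, cy + 1)]         # +1-slope diagonal
        neg = [(cx - 1, cy + 1), (cx + 1, cy - 1)]         # -1-slope diagonal
        occupied, free = (pos, neg) if p % 2 == 0 else (neg, pos)
        cells.append({"occupied": occupied, "free": free})
    return cells
```

For an array of length a this yields exactly 2a free anchor points, matching the counting argument above.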
(⇐) Let ⟨Γ1, Γ2⟩ be any GRACSIM drawing of ⟨G1, G2⟩, and let Γ_p be the drawing of G_p induced by ⟨Γ1, Γ2⟩. Also, let C_j ⊂ G_p (1 ≤ j ≤ m) be the cycle consisting of paths π_s(j − 1), π_t(j − 1), π_t(j) and π_s(j). We first claim that the following invariants are satisfied.

(I1) C_j (1 ≤ j ≤ m) is the boundary of a wedge W_j in Γ_p, where a wedge is a bounded or unbounded face of degree eight in Γ_p.
(I2) Transversal path π_j (1 ≤ j ≤ m) is drawn within wedge W_j.
(I3) Any two slices cannot be contained one in another and do not overlap with each other except at the poles s and t.
(I4) Every edge of π_j (1 ≤ j ≤ m) crosses at most one edge of a same slice.
(I5) Every wedge contains exactly three slices.

Let R_b(C_j) and R_u(C_j) be the bounded and the unbounded plane regions, respectively, delimited by C_j in Γ_p. Since v_{j−1} and v_j are two vertices of C_j, path π_j has to be drawn within either R_b(C_j) or R_u(C_j); otherwise, an inner edge of π_j would cross an edge of C_j, which is not allowed in a GRACSIM drawing of ⟨G1, G2⟩ because C_j ⊂ G. Also, if π_j is contained in R_b(C_j) (R_u(C_j), respectively), then all the other paths of the pumpkin that connect the two poles s and t must be drawn within R_u(C_j) (R_b(C_j), respectively). Invariants I1 and I2 are thus satisfied. Concerning invariant I3, it is immediate to see that any two slices cannot be contained one in another. Further, in case of overlap, an edge e1 of a slice S1 would cross a boundary edge e2 of a slice S2, where e2 is a private edge of G2 and e1 is a private edge of G1. But this is not possible, because the end-vertices of e1 are also connected in S1 by a 2-edge path consisting of a shared edge and of a private edge of G2. Invariant I4 holds because every transversal path π_j (1 ≤ j ≤ m) can only cross edges of tunnels in W_j, and every tunnel is drawn as a straight-line internally triangulated outer-plane graph; therefore, π_j cannot enter and then exit a triangle through the same private edge in such a way that all edge crossings are at right angles. Namely, every triangle of a tunnel in W_j takes at least one inner vertex of π_j. We now show that invariant I5 is satisfied. It is straightforward to see that every slice must be drawn within some wedge, and all the slices in a wedge W_j are crossed by its transversal path π_j. In particular, π_j has to pass through the tunnels of these slices, and such tunnels are pairwise disjoint and none of them contains another. Suppose by contradiction that invariant I5 does not hold. Then, there would be a wedge W_p (1 ≤ p ≤ m) containing at least four slices; recall that there are 3m slices to be distributed among m wedges. Let us denote such slices by S_p1, S_p2, ..., S_pk, with k ≥ 4, and let a_pl ∈ A be the integer encoded by slice S_pl (1 ≤ l ≤ k). Since each element of A is strictly greater than B/4, it follows that $\sum_{l=1}^{k} w(S_{pl}) = \sum_{l=1}^{k} a_{pl} > \sum_{l=1}^{k} B/4 \geq B = w(W_p)$; thus wedge W_p is not wide enough to host all its slices, a contradiction. In other words, the alternating path π_p does not have enough inner vertices to pass through all the tunnels of the slices in W_p while avoiding crossings that are not allowed in a GRACSIM drawing. Now, for each wedge W_j (1 ≤ j ≤ m), denote by S_j1, S_j2 and S_j3 the three slices that are within W_j, and let a_j1, a_j2 and a_j3 be their corresponding elements of A. We claim that a_j1 + a_j2 + a_j3 = B. Indeed, it cannot be $\sum_{k=1}^{3} a_{jk} > B$, because this would imply $\sum_{k=1}^{3} w(S_{jk}) > w(W_j)$, which is not possible as seen above. On the other hand, if $\sum_{k=1}^{3} a_{jk} < B$ for some wedge, then, since no wedge can host slices of total width greater than B, the total sum $\sum_{i=1}^{3m} a_i$ would be strictly less than mB, which violates our initial hypothesis on the elements of A. Hence, even this case is not possible. In conclusion, every wedge W_j (1 ≤ j ≤ m) contains exactly three slices S_j1, S_j2 and S_j3, each of these slices has a width w(S_jk) (1 ≤ k ≤ 3) that encodes a distinct element of A, and the sum of these widths is equal to B, i.e., w(S_j1) + w(S_j2) + w(S_j3) = B. Therefore, the partitioning of A defined by A_1, A_2, ..., A_m, where A_j = {w(S_j1), w(S_j2), w(S_j3)}, is a solution of 3P for the instance A. ⊓⊔
We conclude this section with two remarks.

Remark 1. It is not hard to see that this reduction can also be used to give an alternative proof of the NP-hardness of SGE, which was proved by Estrella-Balderrama et al. [16].

Remark 2. It would be interesting to study the complexity of a relaxed version of GRACSIM DRAWING in which a prescribed number of bends per edge is allowed; this open problem was already posed in [9].

NP-completeness of k-SEFE

In order to increase the readability of a simultaneous embedding, which is particularly desired in graph drawing applications, one may wonder whether it is possible to compute a SEFE where every private edge receives at most a limited and fixed number of crossings. We recall that there is no restriction on the number of crossings that involve a private edge in a SEFE drawing. Further, two private edges may cross more than once, and these multiple crossings could be necessary for the existence of a simultaneous embedding; however, Frati et al. [19] have shown that whenever two planar graphs admit a SEFE, they also admit a SEFE with at most sixteen crossings per edge pair. Motivated by the previous considerations, we introduce and study the complexity of the following problem, named k-SEFE, where k denotes a fixed bound on the number of crossings per edge that are allowed.

Problem: k-SEFE
Instance: Two planar graphs G1 = (V, E1) and G2 = (V, E2), sharing a common subgraph G = (V, E) = (V, E1 ∩ E2), and a positive integer k.
Question: Do G1 and G2 admit a SEFE such that every private edge receives at most k crossings?

It is straightforward to see that k-SEFE is, in general, a restricted version of SEFE. Namely, for any positive integer k, it is easy to find pairs of graphs that admit a (k+1)-SEFE, and thus a SEFE, but not a k-SEFE. For example, consider a pair of graphs G1 = (V, E1) and G2 = (V, E2) defined as follows (an illustration for k = 4 is given in Fig. 2). The common subgraph G = (V, E) is a wheel of 2k + 5 vertices, where u_0, u_1, ..., u_{k+1}, v_0, v_1, ..., v_{k+1} are the 2(k + 2) vertices of its cycle in clockwise order and c is the center of the wheel; G1 contains the private edge (u_0, v_0), while G2 contains the k + 1 private edges (u_i, v_{k+2−i}), for 1 ≤ i ≤ k + 1. Since G has a unique planar embedding (up to a homeomorphism of the plane), the private edge (u_0, v_0) of G1 crosses all the k + 1 private edges of G2, i.e., all the edges (u_i, v_{k+2−i}) with 1 ≤ i ≤ k + 1. Therefore, G1 and G2 admit a (k+1)-SEFE, and thus a SEFE, but not a k-SEFE.

Fig. 2. A pair of graphs that admit a k-SEFE only for k ≥ 5.
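The wheel-based pair just described is easy to generate programmatically. The following hedged sketch returns the edge lists of G1 and G2 for a given k; the vertex labels are illustrative.

```python
# Edge lists of the wheel-based pair above: the common graph G is a wheel on
# 2k+5 vertices (cycle u_0..u_{k+1}, v_0..v_{k+1}, plus centre c); G1 adds the
# private edge (u0, v0), G2 adds (u_i, v_{k+2-i}) for 1 <= i <= k+1.
def wheel_pair(k):
    us = [f"u{i}" for i in range(k + 2)]
    vs = [f"v{i}" for i in range(k + 2)]
    cycle = us + vs                                   # clockwise cycle order
    rim = [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]
    spokes = [("c", w) for w in cycle]
    shared = rim + spokes                             # the wheel G (shared)
    g1 = shared + [("u0", "v0")]                      # one private edge of G1
    g2 = shared + [(f"u{i}", f"v{k + 2 - i}") for i in range(1, k + 2)]
    return g1, g2    # (u0, v0) is forced to cross all k+1 private G2 edges
```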
Theorem 2. Deciding whether two graphs have a 1-SEFE is NP-hard.

Proof. We use a reduction from 3P similar to that in the proof of Theorem 1; subdivision vertices are now omitted, since we are no longer in a geometric setting.

CONSTRUCTION

Start with a (non-subdivided) pumpkin G_p ⊂ G whose vertices v_0, v_1, ..., v_m are adjacent to the two poles s and t, and whose handle is the single edge (v_0, v_m). Add a transversal path π_j between every pair of vertices v_{j−1} and v_j (1 ≤ j ≤ m). Differently from the proof of Theorem 1, π_j has to contain 2B − 1 inner vertices instead of 2B; the reason for this will be clarified later. Also, the effective length of π_j is now defined as half the number of its edges; hence it is still equal to B. Slice gadgets S_i (1 ≤ i ≤ 3m) and their tunnels ∆_i are also slightly modified and are defined as follows. For each integer a_i ∈ A, create an alternating path π(S_i) of 2a_i non-shared edges; thus, π(S_i) has 2a_i + 1 vertices and its extremal edges never belong to the same graph G_i (i = 1, 2). Construct a fan F_i^t by adding an edge between every pair of consecutive vertices of π(S_i) in even positions and by connecting such vertices to the pole t of the pumpkin; F_i^t \ {t} is a path of a_i − 1 edges, because π(S_i) has a_i vertices in even positions and a_i + 1 vertices in odd positions. Similarly, construct a fan F_i^s by connecting the pole s with a path of a_i edges passing through all the vertices of π(S_i) in odd positions. Slice S_i is composed of the two fans F_i^t and F_i^s plus all the edges of π(S_i). Further, all the edges of the fans are shared, while those of π(S_i) are not shared and belong alternately to G2 and to G1. The tunnel ∆_i of a slice S_i is the subgraph that results from S_i after removing the two poles s and t, i.e., ∆_i = S_i \ {s, t}. It is straightforward to see that every tunnel is a biconnected internally-triangulated outer-plane graph whose weak dual is a path, and it contains exactly 2a_i − 1 triangles if the corresponding slice encodes the integer a_i. The width w(S_i) of a slice S_i is defined as half the number of private edges in its tunnel ∆_i; thus w(S_i) = a_i. It is not hard to see that the transformed instance ⟨G1, G2⟩ contains 4Bm + 9m + 3 vertices and 8Bm + 2m + 3 edges, thus its construction can be done in polynomial time. Furthermore, we observe that G, G1 and G2 are not connected. Indeed, G contains (2B − 1)m isolated vertices, i.e., all the inner vertices of the transversal paths, while G1 and G2 each contain an induced matching of (B − 1)m (private) edges.

CORRECTNESS

Let A be an instance of 3P, and let ⟨G1, G2⟩ be an instance of 1-SEFE obtained by using the previous transformation. We show that A admits a 3-partition if and only if ⟨G1, G2⟩ admits a 1-SEFE drawing.

(⇒) Suppose that A admits a 3-partition {A_1, A_2, ..., A_m}; then a 1-SEFE drawing of ⟨G1, G2⟩ can be constructed as follows. Compute a plane drawing Γ_p of the pumpkin G_p (see, e.g., Fig. 3(a)) such that (i) the external face is delimited by the edges (s, v_0), (v_0, v_m) and (v_m, s), and (ii) for each j = 1, 2, ..., m, edge (t, v_j) immediately follows edge (t, v_{j−1}) in the counterclockwise edge ordering around t. Γ_p contains m inner faces of degree four, delimited by the edges (s, v_{j−1}), (v_{j−1}, t), (t, v_j), (v_j, s) (1 ≤ j ≤ m), which are the wedges W_j of the pumpkin. Consider now each triple A_j = {a_j1, a_j2, a_j3} (1 ≤ j ≤ m), and denote by S_j1, S_j2, S_j3 the corresponding slices in the transformed instance. For each slice S_jk (1 ≤ k ≤ 3), compute a plane drawing with both poles on the external face. Place these drawings one next to the other within wedge W_j, in any order; for simplicity, we may assume that S_j1 is the leftmost slice, S_j2 the middle slice and S_j3 the rightmost one. Also, if necessary, flip each slice around its poles so that the leftmost private edge always belongs to G2; clearly, this implies that the rightmost private edge belongs to G1. It is not difficult to see that the drawing produced so far is planar, i.e., even the private edges do not create crossings. Moreover, since w(W_j) = B = a_j1 + a_j2 + a_j3 = w(S_j1) + w(S_j2) + w(S_j3), every transversal path π_j (1 ≤ j ≤ m) can be drawn within wedge W_j in such a way that (i) every edge of π_j crosses exactly one private edge of a tunnel in W_j, and (ii) every crossing involves a private edge of G1 and a private edge of G2.
(⇐) We conclude the proof by showing that if ⟨G1, G2⟩ admits a 1-SEFE drawing ⟨Γ1, Γ2⟩, then A admits a 3-partition. By an argument similar to that in the proof of Theorem 1, ⟨Γ1, Γ2⟩ induces a plane drawing Γ_p of the pumpkin G_p, in which each wedge W_j, i.e., each bounded or unbounded face of degree four of G_p, is delimited by a cycle C_j consisting of the edges (s, v_{j−1}), (v_{j−1}, t), (t, v_j) and (v_j, s), for some 1 ≤ j ≤ m. Further, path π_j has to be drawn within W_j and, for each 1 ≤ i ≤ 3m, fans F_i^t and F_i^s, and thus the slice S_i they belong to, must be placed within a same wedge. Let S_j1, S_j2, ..., S_jk be the slices within wedge W_j, for some k ≥ 0. Since every private edge receives at most one crossing in ⟨Γ1, Γ2⟩, it follows that $\sum_{l=1}^{k} w(S_{jl}) \leq w(W_j) = B$, i.e., the number of edges of π_j must be greater than or equal to the number of private edges of the tunnels in W_j. We now show that there are exactly three slices in every wedge, i.e., k = 3. It cannot be k > 3, otherwise $\sum_{l=1}^{k} w(S_{jl}) = \sum_{l=1}^{k} a_{jl} > \sum_{l=1}^{k} B/4 \geq B = w(W_j)$. On the other hand, it cannot be k < 3, otherwise there would be some other wedge with k′ > 3 slices; recall that there are a total of 3m slices and a total of m wedges. Suppose now that $\sum_{l=1}^{3} w(S_{jl}) < w(W_j) = B$, for some 1 ≤ j ≤ m. Then, there would exist some j′ ≠ j with 1 ≤ j′ ≤ m such that $\sum_{l=1}^{3} w(S_{j'l}) > w(W_{j'}) = B$, otherwise the equality $\sum_{i=1}^{3m} a_i = mB$ would be violated. In conclusion, there are exactly three slices in every wedge, and the sum of their widths coincides with B. Therefore, the partitioning of A defined by A_1, A_2, ..., A_m, where A_j = {w(S_j1), w(S_j2), w(S_j3)}, is a solution of 3P for the instance A. ⊓⊔

Theorem 3. k-SEFE is NP-complete for every fixed positive integer k.

Proof. Concerning the NP-hardness, it suffices to repeat the proof of Theorem 2, replacing every private edge e of each tunnel of G_i (i = 1, 2) with a set of k internally vertex-disjoint paths π_1(e), π_2(e), ..., π_k(e), each consisting of two private edges of G_i. We now introduce some definitions and then prove the membership in NP using an approach similar to that described in [21]. An edge crossing structure χ(e1) of a private edge e1 ∈ E1 is a pair ⟨ε2, σ(ε2)⟩, where ε2 is a multiset on the set E2 \ E with cardinality at most k, and σ(ε2) is a permutation of the multiset ε2. A crossing structure χ(G1, G2) of a pair of graphs ⟨G1, G2⟩ is an assignment of an edge crossing structure to each private edge of E1. Of course, all crossing structures of ⟨G1, G2⟩ can be non-deterministically generated in a time that is polynomial in |V| = n, and they include the crossing structures induced by all k-SEFE drawings of ⟨G1, G2⟩. We conclude the proof by describing a polynomial-time algorithm for testing whether a given crossing structure χ(G1, G2) is a crossing structure induced by some k-SEFE drawing of ⟨G1, G2⟩. Let G∪ be the union graph of G1 and G2, i.e., G∪ = (V, E1 ∪ E2). For each edge e of G∪ such that e ∈ E1 \ E, consider its crossing structure χ(e) = ⟨ε2, σ(ε2)⟩, replace every crossing between e and the edges in ε2 with a dummy vertex, preserving the ordering given by σ(ε2), and then test the resulting (multi)graph for planarity. ⊓⊔
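The verification step of this membership argument can be sketched as follows. The sketch assumes networkx, and for simplicity it restricts itself to the case in which every edge is crossed at most once overall (as in 1-SEFE), so the permutation σ(ε2) reduces to the order of a plain list; the encoding of the crossing structure is our own simplification, not the paper's formal one.

```python
# Sketch of the verification step: planarize the union graph according to a
# guessed crossing structure and test planarity.  Assumes every edge is
# crossed at most once overall (k = 1); the general case must additionally
# split a crossed edge according to the permutation sigma(eps2).
import networkx as nx

def induced_by_some_drawing(union_edges, crossing_structure):
    """crossing_structure: {e1: [e2, ...]} listing, in order along e1, the
    private edges assumed to cross e1."""
    G = nx.Graph(union_edges)
    counter = 0
    for e1, crossed in crossing_structure.items():
        chain = [e1[0]]
        for e2 in crossed:
            d = ("dummy", counter)
            counter += 1
            chain.append(d)
            G.remove_edge(*e2)          # split the crossed edge at the dummy
            G.add_edge(e2[0], d)
            G.add_edge(d, e2[1])
        chain.append(e1[1])
        G.remove_edge(*e1)              # split e1 along all its crossings
        nx.add_path(G, chain)
    is_planar, _ = nx.check_planarity(G)
    return is_planar                    # planarity of the planarized graph
```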
We conclude this section, too, with two remarks.

Remark 3. The previous reduction cannot be successfully applied to SEFE because of the 2-edge penetration vulnerability: every transversal path π_j (1 ≤ j ≤ m) can pass through all the tunnels in W_j using only its first two edges; an illustration of this vulnerability is given in Fig. 4.

Fig. 4. Illustration of the 2-edge penetration vulnerability.

Also, any attempt to patch this vulnerability by replacing the transversal paths with different graphs, and modifying the slices accordingly, always resulted in constructions in which overlapping slices were possible.

Remark 4. From a theoretical point of view, it also makes sense to study a slightly different restriction of SEFE where, instead of limiting the number of crossings per edge, one limits the number of distinct edges that cross a same private edge; recall that two private edges may cross each other more than once, which gives rise to a problem different from k-SEFE. We may call this problem k-PAIR-SEFE, because k is now the bound on the allowed number of crossing edge pairs involving a same edge. It is not hard to see that a reduction analogous to that given in the proofs of Theorems 2 and 3 can be used to prove the NP-hardness of k-PAIR-SEFE. An interesting theoretical aspect of k-PAIR-SEFE is the following: if k is greater than or equal to the maximum number of edges of G_i (i = 1, 2), then a k-PAIR-SEFE is also a SEFE; in particular, if k ≥ 3|V| − 6, the two problems are identical.

Conclusions and Open Problems

In this work we have shown the NP-hardness of the GRACSIM DRAWING problem, a restricted version of the SGE problem in which edge crossings must occur only at right angles. Then, we have introduced and studied the NP-completeness of the k-SEFE problem, a restricted version of the SEFE problem where every private edge can receive at most k crossings. Our results raise two main questions. First, as already mentioned at the end of Section 3, it would be interesting to study the complexity of a relaxed version of the GRACSIM DRAWING problem where a prescribed number of bends per edge is allowed; this open problem was already posed in [9]. In particular, it is not clear whether the reduction given in the proof of Theorem 1 can be adapted to prove the NP-hardness of the one-bend extension of GRACSIM DRAWING. Another interesting open problem is to investigate the complexity of k-PAIR-SEFE when the ratio |V|/k tends to $\frac{1}{3} + \frac{2}{k}$ from the right; we recall that for k ≥ 3|V| − 6, k-PAIR-SEFE and SEFE are the same problem, and that the NP-hardness of k-PAIR-SEFE strongly relies on a construction where the ratio |V|/k is significantly greater than $\frac{1}{3} + \frac{2}{k}$.
2016-11-13T20:06:33.000Z
2016-11-13T00:00:00.000
{ "year": 2016, "sha1": "4acbdd7c361e810e84a94003459002372804ddf3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7155/jgaa.00456", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "4acbdd7c361e810e84a94003459002372804ddf3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
49739927
pes2o/s2orc
v3-fos-license
Trigger and Timing Distributions using the TTC-PON and GBT Bridge Connection in ALICE for the LHC Run 3 Upgrade

The ALICE experiment at CERN is preparing for a major upgrade for the third phase of the data-taking run (Run 3), when the high-luminosity phase of the Large Hadron Collider (LHC) starts. The increase in the beam luminosity will result in a high interaction rate, causing the data acquisition rate to exceed 3 TB/s. In order to acquire data for all events and to handle the increased data rate, a transition in the readout electronics architecture from the triggered to the triggerless acquisition mode is required. In this new architecture, a dedicated electronics block called the Common Readout Unit (CRU) is defined to act as a nodal communication point for detector data aggregation and as a distribution point for timing, trigger and control (TTC) information. TTC information in the upgraded triggerless readout architecture uses two asynchronous high-speed serial link connections: the TTC-PON and the GBT. We have carried out a study to evaluate the quality of the embedded timing signals forwarded by the CRU to the connected electronics using the TTC-PON and GBT bridge connection. We have used four performance metrics to characterize the communication bridge: (a) the latency added by the firmware logic, (b) the jitter-cleaning effect of the PLL on the timing signal, (c) BER analysis for a quantitative measurement of signal quality, and (d) the effect of optical transceiver parameter settings on the signal strength. A reliability study of the bridge connection in maintaining the phase consistency of timing signals is conducted by performing multiple iterations of the power on/off cycle, firmware upgrade and reset assertion/de-assertion cycle (PFR cycle). The test results are presented and discussed concerning the performance of the TTC-PON and GBT bridge communication chain using the CRU prototype and its compliance with the ALICE timing requirements.

Introduction

A Large Ion Collider Experiment (ALICE) at the CERN Large Hadron Collider (LHC) is designed to address the physics of strongly interacting matter and the quark-gluon plasma (QGP) at extreme conditions of temperature and energy density. By inclusive studies of proton-proton (pp), proton-lead (pPb) and lead-lead (PbPb) collisions at the LHC, ALICE aims to study the properties of the QGP. ALICE has been acquiring data since 2009 and has achieved significant milestones and discoveries since then. An increase in the beam luminosity at the LHC will commence in the 2020s, which will extend the physics reach of the experiments. ALICE will fully exploit the scientific potential offered by this third phase of LHC data taking (Run 3) by upgrading the major detector systems, the associated electronics and the data acquisition systems [1]. For Run 3, the pp collision energy will reach 14 TeV, with a maximum instantaneous luminosity of L = 5 × 10^34 cm^-2 s^-1. For PbPb collisions, the center-of-mass energy per nucleon pair, √s_NN, will be 5.5 TeV, at an instantaneous luminosity of L = 6 × 10^27 cm^-2 s^-1. This corresponds to an interaction rate for PbPb collisions of 50 kHz, compared to the Run 2 rate of 8 kHz. The ALICE upgrade will witness an upsurge in the data volume, with an estimated data flow of more than 3 TB/s. The existing trigger-based readout architecture is not suitable to cope with a hundred-fold increase in the data-taking rate.
To handle such a data volume, a dedicated data-balancing system is introduced in the ALICE upgrade of the readout and trigger system [2] in the form of the Common Readout Unit (CRU). Being at the crossing point of the ALICE data streams, the CRU simultaneously manages the aggregation of the detector data streams, the flow of control requests and the distribution of trigger and timing information. In this article, we focus on the trigger and timing distribution in ALICE using the CRU framework. The integrity of the timing signal forms an important technical requirement of the CRU system. A detailed study has been performed to ascertain that the multiple high-speed communication links act together, synchronously to the LHC clock signal, to transmit the detector readout data and the timing signals with constant latency. Moreover, the phase information of the embedded clock needs to be preserved, and the jitter introduced by channel noise must also be kept at a low level. Tests were conducted to confirm the ability of the CRU system to retain the same behaviour with each power on/off cycle, firmware upgrade and reset assertion/de-assertion cycle (PFR cycle). The trigger distribution system using the CRU is an amalgamation of multi-link technologies involving different protocol standards. For the entire system to operate synchronously and efficiently, the individual designs are optimized to work in coherence with the neighbouring blocks. In essence, the GigaBit Transceiver optical link (GBT) and the Timing, Trigger and Control (TTC) system based on Passive Optical Networks (TTC-PON) work in conjunction to communicate the TTC information from the Central Trigger Processor (CTP) to a detector through the CRU electronics system. The propagation path involves multiple transition points with different protocol conversions, and their concurrent execution may interact in subtle ways. These interactions and their interdependencies at the juncture points are prone to stochastic fluctuations, and hence proper characterization is needed to affirm that the behaviour is deterministic. Hence, piece-wise qualification tests are performed before the design elements are finally integrated, implemented and deployed. The Intel Arria 10 development board was chosen as the test board; it carries the same FPGA chip that will be used in the final CRU card, the PCIe40 [3]. The advantage of using a development board is that it provides easy access to the pins and ports needed to conduct the signal integrity analysis. The article is organized as follows. Section 2 gives an overview of the ALICE trigger and the reasoning behind the triggerless architecture for Run 3. The data flow of the triggerless detector readout raises the need for a new online data frame marker called the heartbeat (HB) trigger, which is also examined in the same section. The role of the TTC systems and the timing distribution to the different parts of the ALICE experiment are outlined in Section 3. High-speed serial links in the TTC communication are discussed in Section 4. The flow of detector raw data to the CRU using the GBT link is discussed in Section 5. In Section 6, the use of a single CTP link to communicate with multiple CRUs using the TTC-PON in a time-multiplexed manner is elaborated. The symbiosis of the TTC-PON and GBT link technologies, forming the TTC-PON and GBT link bridge implemented by the CRU firmware, is discussed briefly in Section 7.
The design integrates multiple links to detect any unwarranted behaviour in the system and to prevent cascading failures. Intrinsic system monitoring tools were built to gather statistics about the macroscopic behaviour of the system; these are explained briefly in Section 8. Section 9 covers the results related to the latency measurement, the jitter measurement, the Bit Error Rate (BER) measurement and the optimization of transceiver parameters. A discussion of the results is given in Section 10. Finally, a summary of the present studies and the future outlook are presented in the last section.

Triggered and triggerless architecture in ALICE

The Central Trigger Processor (CTP) in ALICE manages the trigger decisions globally and supervises the production of trigger requests by combining the inputs from a system of trigger-generating detectors. The CTP plays a pivotal role in identifying rare events, which are recorded for later analysis. In Run 2, ALICE uses a hardware trigger strategy, where events from the minimum-bias data sample are selected using thresholds on event multiplicity, transverse momenta of tracks and other such observables, combining several detectors [4,5]. In Run 2, the maximum readout rate was limited to 500 Hz for Pb-Pb events. If the trigger rate exceeds a sub-detector's readout capability, the system saturates and asserts a busy signal. In Run 3, ALICE will operate at six times the current peak luminosity of 10^27 cm^-2 s^-1, collect more than ten times the targeted integrated luminosity of 1 nb^-1 over the allocated runtime, and operate at a Pb-Pb collision rate of 50 kHz instead of 8 kHz [1]. The physics objective of the upgrade is to improve the precision of the measurement of QGP signatures. The QGP physics processes do not exhibit signatures that can be selected directly by hardware triggers. In the triggerless readout scheme, all events are read out. The upgraded event selection strategy uses a combination of the triggerless readout scheme and the minimum-bias trigger generated by the Fast Interaction Trigger (FIT) detector system [2]. The new readout architecture for the timing and trigger distribution topology in ALICE is briefly explained in Section 3. The principle of the ALICE upgrade readout architecture relies on the ability of the Trigger and Timing System (TTS) to efficiently distribute the critical TTC signals with constant latency over optical links to the readout front-end cards and to receive the busy signal to throttle the trigger distribution when needed. All the data packets originating from the sub-detectors are time-tagged. The transmission delay requirement of the readout data path is not stringent, so non-constant-latency links to the online farm can be used. The triggerless data acquisition allows readout of multiple sub-detectors without stressing the trigger decision system when a sub-detector gets busy or faulty. In this manner, the continuous triggerless readout mode increases the event selectivity and allows sampling of the full luminosity. The only drawback the upgraded system faces is coping with the massive amount of generated data, approximately 3.6 TB/s in total. The data flow is reduced before final storage by the combined effort of the Online and Offline (O2) computing systems. To delimit the overflow of assembled events across the time frame boundary during data packet formation in the O2 system, a new trigger called the heartbeat [6,7] is defined, as explained in the next paragraph.
Heartbeat (HB) triggers in the continuous readout mode are asserted periodically to delimit a stream of readout data [8]. As illustrated in Figure 1, the HB trigger is used to generate a manageable flow of Heartbeat Frames (HBF). The HBFs are forwarded from the readout electronics to the CRU over the TTS links, where they are processed and forwarded to the First Level Processor nodes (FLP) and then to the Event Processing Nodes (EPN), as shown in Figure 2. For each successful HBF delivery to the FLP, the CRU sends an HB acknowledge message to the CTP along with information about the CRU data buffer. For the entire operation the LHC clock is used as the reference signal to synchronize the data flow. Under nominal conditions, HBs are issued once per LHC orbit period of 89.4 µs, i.e. at a rate of about 11.2 kHz. One FLP accumulates 256 HBFs to generate a Sub-Time Frame (STF) every ~23 ms, i.e. at a rate of ~44 Hz. Within the EPN, the STFs coming from all the FLPs over the same time period, including both triggered and continuously read-out detectors, are aggregated to form a complete Time Frame (TF). To allow for anticipated customization, both the HBF length and the number of HBFs in a TF are programmable. The HBF header, trailer and other IDs are defined to aid flow management on the TTS links. One-hot encoding is used for the 16-bit trigger codes of the HB trigger.

ALICE Clock Distribution strategy for Run 3

The ALICE detector readout system has three configuration modes for receiving the TTS information [2]:

I. Detectors whose trigger path is not latency-critical use the CRU connectivity only, such as the Time Projection Chamber (TPC) and the Muon Tracking Chambers (MCH);
II. Detectors with latency-critical trigger information connect directly to the CTP, such as the Inner Tracking System (ITS);
III. Detectors that do not upgrade to the new readout architecture use the C-RORC (the Run 2 readout card) and receive the TTS information on the on-detector electronics via the TTC protocol.

The details of the connectivity of the three modes are highlighted in Figure 2. The detectors operating in the Type I mode use the TTC-PON/GBT bridge connection to forward the timing information, as illustrated in Figure 3. The detectors can operate in a continuous or a triggered readout mode; depending on the configuration, the heartbeat trigger or the physics trigger is employed.

High-Speed Serial Links in the TTC communication

For a communication system to operate reliably, one of four classes of clocking methods is employed: the asynchronous scheme (no clock signal), the synchronous scheme (same clock frequency, known phase), the mesochronous scheme (same clock frequency, unknown phase) and the plesiochronous scheme (same clock frequency, drifting phase) [9]. In the implementation of asynchronous serial links, the clock is embedded within the data stream, and the link then behaves in the same manner as a synchronous communication system. The embedded clock removes the need for a dedicated clock connection separate from the data stream. However, for the receiver to recover the embedded clock efficiently, there needs to be a sufficient transition density in the transmitted bits. The bit transition density is maintained with the help of a scrambling algorithm. Plenty of commercially viable high-speed asynchronous link standards are available; however, they are not suitable for application in the LHC environment.
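To illustrate the transition-density point above, here is a minimal Python sketch of a self-synchronizing (multiplicative) scrambler. The polynomial x^58 + x^39 + 1 is borrowed from the Interlaken protocol purely as an example; the GBT and TTC-PON use their own channel-coding schemes, so treat this as a sketch of the principle, not of either implementation.

```python
# Illustrative multiplicative scrambler: the shift register holds the last
# 58 output bits, and the feedback taps (x^58, x^39) are XORed into each
# input bit. A non-zero seed is used so that even the worst-case all-zero
# input produces transitions for the receiver's clock-data recovery.

def scramble(bits, state=1):
    out = []
    for b in bits:
        fb = ((state >> 57) ^ (state >> 38)) & 1  # taps at x^58 and x^39
        s = b ^ fb                                # scrambled output bit
        out.append(s)
        state = ((state << 1) | s) & ((1 << 58) - 1)
    return out

def transitions(bits):
    return sum(a != b for a, b in zip(bits, bits[1:]))

raw = [0] * 1000                      # worst case: no transitions at all
print(transitions(raw), transitions(scramble(raw)))
# the scrambled stream settles near ~50% transition density,
# which is what keeps the receiver's recovered clock locked
```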
Commercial standards are unsuitable because the LHC operates at a unique frequency of about 40 MHz that is not compatible with the clocking of the commercial links. The use of an unconventional clock frequency for the data payload communication led the CERN electronics teams to develop custom solutions, referred to as the TTC interface link standards. The TTC standards used in the CRU are the GBT and the TTC-PON; a comparison of the specifications of the two standards is given in Table 1. For embedding the clock in the serial link and maintaining the bit transition density, the GBT uses a scrambler combined with a block interleaver, while the TTC-PON uses the 8b/10b channel encoding scheme. To prevent timing ambiguity during data-packet formation, elastic buffers are avoided. This approach helps to maintain the constant link latency that is necessary for aligning data types that carry no time-stamp. The synchronous relationship of the flow of events between the source and the receiver over an asynchronous link is preserved by maintaining a certain timing relationship at the physical level of the communication chain. An analogy can be drawn with distributed systems, which behave in a fully synchronous manner only when certain degrees of synchrony are satisfied [14]. Similarly, an asynchronous system can act as a synchronous system provided it abides by certain constraints; Table 2 attempts to correlate these constraints. The causal relationship of the events is not preserved at the physical level of the protocol stack. Clock synchronization and syntonization are used jointly at the physical level for the recovery of the embedded clock. This approach ensures frequency stability with low jitter of the recovered phase-locked clock. If the links are operated in the latency-optimized mode [15], then the delay path of the data lines remains constant, which is needed for the Timing, Trigger and Control (TTC) data communication. Other levels of complex synchronous information handling, such as the time-stamp and the trigger management, are preserved at the higher levels of the CRU firmware logic stack.

GBT Specifications

The GBT framework [11] defines the technology standards necessary to allow high-speed, time-critical data communication with high error resilience, so that the LHC radiation zone can communicate reliably with the remotely situated readout electronics. The GBT ecosystem, shown in Figure 4, is composed of three parts: the GBT ASIC together with the Versatile Link components and the GBT Slow Control ASIC; an optical fibre connection operating in single mode (1310 nm) or multi-mode (850 nm); and an FPGA programmed with the GBT logic core. The GBT link supports two modes of operation, the standard mode and the latency-optimized mode. The latency-optimized mode is needed for time-critical applications that require constant latency. The GBT link supports two frame formats, the GBT frame format and the Widebus format. The GBT frame format appends an error-correction code formed from the Reed-Solomon algorithm cascaded with the interleaver and the scrambler, while the Widebus frame format trades the error-correction field for additional user payload. In ALICE, a maximum of 24 GBT links are required per CRU board. Most of the CRU FPGA resources are needed for the detector-specific logic, so a new design approach is required to save on the GBT-specific periphery logic resources; the savings achievable by sharing one decoder block among several links are studied by Baron et al. [16].
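As a brief numeric aside on the two frame formats described above, the sketch below computes the commonly quoted GBT user bandwidths. The field widths (120-bit frame at the 40 MHz bunch-crossing clock, 80 data bits in GBT mode, 112 in Widebus mode) are assumptions taken from the general GBT literature, not from this article's Table 1.

```python
# Back-of-envelope user bandwidth of the two GBT frame formats.
F_CLOCK = 40e6          # frame rate, Hz (one 120-bit frame per 25 ns)
FRAME_BITS = 120        # 120 bits * 40 MHz -> 4.8 Gb/s line rate

line_rate = FRAME_BITS * F_CLOCK
gbt_user = 80 * F_CLOCK       # GBT frame: FEC-protected data field
widebus_user = 112 * F_CLOCK  # Widebus: FEC field reused for user data

print(f"line rate      : {line_rate / 1e9:.2f} Gb/s")   # 4.80
print(f"GBT frame user : {gbt_user / 1e9:.2f} Gb/s")    # 3.20
print(f"Widebus user   : {widebus_user / 1e9:.2f} Gb/s")# 4.48
```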
Another optimization available in the Arria 10 FPGA, which saves clocking resources and reduces intra-link clock skew, is the x6 Physical Medium Attachment (PMA) bonded mode [17]. The PMA bonded mode allows six GBT links to be packed closely together; together, these links are referred to as a GBT Bank. In other words, the GBT Bank is the largest common group of GBT links that are bonded together for FPGA resource optimization, which is six in this case. The bonded architecture comes with the constraint that all links within a particular GBT bank must follow the same standards, while the settings of each GBT bank are individually configurable. For example, if a designer needs 20 links per CRU board, they can be split into three GBT banks of six links and one GBT bank of two links.

TTC-PON architecture

Passive Optical Networks (PON) for particle-physics applications at CERN were first proposed in 2009 [18] and later extended to higher speeds in 2013 [19]. Since then the protocol has gone through several upgrades; in our study we used the 2016 version of the TTC-PON, as shown in Table 1. The TTC-PON architecture is based on the PON technology that finds application in Fiber To The Home (FTTH, FTTx) networks. The TTC-PON is a single-fibre, bi-directional, point-to-multipoint network architecture that uses optical splitters to enable a master node, or Optical Line Terminal (OLT), to communicate with multiple slave nodes, or Optical Network Units (ONUs) [13], as illustrated in Figure 6. The downstream (from OLT to ONUs) runs at 9.6 Gbps in the 1577 nm wavelength band, while the upstream (from ONU to OLT) runs at 2.4 Gbps in the 1270 nm wavelength window. Using the TTC-PON technology, the Timing, Trigger and Control (TTC) information from the CTP is communicated over an optical link in a time-multiplexed fashion, allowing a single link to split the TTC information among multiple CRUs [20,21], as shown in Figure 3. This link topology reduces the number of links required and hence significantly minimizes the hardware costs involved.

TTC-PON and GBT bridge for TTC communication

The TTC-PON and GBT bridge is the interconnection between the two mutually independent GBT and TTC-PON links, connected using firmware-defined logic. The bridge connection is dedicated to the delivery of the TTC payload. Different topologies are possible for the bridge connection. For the CRU design, the star topology is used, where the TTC-PON forms the central nodal hub forwarding the TTC information to 24 GBT links, as shown in Figure 7. The connection from one TTC-PON link to multiple GBT links is elaborated below. The initial implementation and testing were based on a scheme where the 240 MHz clock is recovered from the TTC-PON'15 protocol and then fed into the jitter cleaner before being forwarded to the GBT at 120 MHz, shown in Figure 8 as configuration-I. However, the implementation on the GBT side suffered from a phase inconsistency of the forwarded clock with each power cycle or reset cycle. The issue arose because the divider of the Multi-Gigabit Transceiver (MGT) locks the recovered fabric clock onto an arbitrary rising edge of the serial clock. Figure 9 shows that over 10,000 soft-reset cycles of the firmware, the phase variation follows a uniform distribution over the range [-4 ns, +4 ns].
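The uniform phase distribution of Figure 9 can be reproduced with a small Monte-Carlo sketch: a divider that starts on an arbitrary rising edge of the serial clock produces an output phase quantized in steps of the serial period. The 2.4 GHz serial / 120 MHz fabric ratio below is an assumption chosen for illustration; the measured spread of roughly [-4 ns, +4 ns] corresponds to one full 120 MHz period (~8.3 ns).

```python
# Monte-Carlo sketch of the reset-phase ambiguity of a clock divider.
import random

T_SERIAL_NS = 1 / 2.4      # serial clock period, ns (assumed 2.4 GHz)
RATIO = 20                 # assumed divide ratio down to 120 MHz
T_OUT_NS = T_SERIAL_NS * RATIO

def one_reset():
    k = random.randrange(RATIO)          # divider starts on a random edge
    phase = k * T_SERIAL_NS              # offset of the divided clock
    # wrap the offset into +/- half an output period
    return (phase + T_OUT_NS / 2) % T_OUT_NS - T_OUT_NS / 2

phases = [one_reset() for _ in range(10_000)]
print(min(phases), max(phases))          # spans roughly -4.2 ... +3.8 ns
```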
This phase ambiguity has been solved by calibration logic that slips the GBT Tx clock into alignment with the phase of the recovered clock using a Finite State Machine (FSM) [17]. An improved design option emerged with a version upgrade of the Intel FPGA technology that allows a feedback compensation mode in the transceiver PLL, ensuring the deterministic nature of the clock. However, this feature constrains the design to operate at 240 MHz [22]. Hence, the latest CRU firmware design uses a single 240 MHz clock across the entire link chain, instead of stepping the frequency down and then up again. This development led to the TTC-PON and GBT connection configuration-II shown in Figure 10. In both configurations, given in Figures 8 and 10, the trigger data path has to cross two clock domains. Even if the clocks are phase-locked and of the same frequency, the FPGA sees them as coming from two different sources. Hence, synchronizers are added in the firmware design to avoid metastability issues [23]. Moreover, proper fitter constraints are applied to lock the logic placements, reducing intra-link skew variation with each firmware upgrade.

Design Resilience

The CRU, being a complex heterogeneous system, has to deal with multiple links of different communication standards. During a stressful run-time scenario, the stochastic fluctuations in the data-link pathways might drift outside the tolerable zone. This stochastic behaviour is associated with uncertainties and can trigger a chain of cascading upsets in the link chains. Hence, a quantifiable autonomous acquisition system is required to monitor and trace any unwarranted behaviour. As a fallback, housekeeping tools are included with the main CRU firmware to act as a caretaker, predicting errors and disruptions by tracking the macroscopic behaviour of the CRU system. Any registered deviation of the system behaviour is flagged as a warning or an error to the system-management console of the ALICE online computing system. The inclusive monitoring system comprises three main tools to detect aberrations in the system behaviour, as shown in Table 3. The monitoring system helps to increase the resilience and reliability of the system.

Results and Discussions

The entire trigger-related logic involves the roles and functioning of multiple blocks. Each sub-block is treated individually, tested, characterized and then integrated into the design. The test systems are subjected to various stress scenarios before the final results are compiled under optimal environmental conditions. Since this work deals with timing-information transmission, tests that characterize the clock quality across a link transition are included. The tests covered in the following sub-sections are the latency measurement, the jitter measurement, the BER measurement and the optimization of the transceiver parameters.

Latency Measurement

The latency measurement gives an estimate of the logic path delay involved and also reveals whether the path traverses an elastic or an inelastic buffer. The lower the latency, the more suitable the path is for communicating time-sensitive information, so that a service response can be delivered within the shortest period. A variable latency, however, means the path is adequate for data payloads but not for time-critical payloads such as the timing and trigger information.
So the challenge for both protocols, the TTC-PON and the GBT, is to meet the low latency together with the high throughput, and yet preserve the same latency without any variation over the entire run period. Since the serial links traverse multiple clock multiplication and subsequent division stages, they are subject to the risk of sudden major variations in latency. The latency measurement therefore also checks for any significant deviations that could perturb the entire time-critical pathway. A special comma character is used to measure the latency in the TTC-PON. The comma character (K28.1) is sent every 8 µs, and a flag is raised at the Optical Line Terminal (OLT) to indicate that the value has been sent. A match-flag is raised when the Optical Network Unit (ONU) receives the special character (just after the 8b/10b decoder). The latency between those two flags is then measured with an oscilloscope. Several ONU RX resets were performed, and the position between the flags remained deterministic, as shown in Figure 11. For the GBT measurement, an initial version of the PCIe40 DAQ Engine with the Arria 10 FPGA engineering sample is used. Since this setup does not allow probing at the individual points, a firmware-based measurement logic giving a coarse estimate of the latency is devised. For the measurement, a 32-bit ripple counter is used as the generated pattern communicated over the GBT stream. The principle of the measurement is to enable the loopback, receive the packet, unwrap the received GBT payload, and compare the received counter value with the sender's live value to estimate the round-trip delay. As the GBT frame arrival rate is 40 MHz, the coarse measurement of the round-trip delay has a resolution of 25 ns. The round-trip delay corresponds to the time a signal takes to be sent plus the time for the reflected echo of that signal to be registered back at the sender; it includes the serialization and deserialization times along with the propagation delay. The measurement setup is shown in Figure 12, and Table 4 lists the latency of the GBT protocol with Tx and Rx configured in the latency-optimized mode or the standard mode. The measurement for the Widebus mode is skipped, as there was no requirement from any ALICE detector group at the time of measurement. The GBT firmware used for this measurement is the development version from 2015-16, which uses a 120 MHz word clock. Both links exhibited stable latency even when subjected to multiple soft or hard resets and power cycles; no significant unwanted deviations were registered. Some added jitter is present, which is standard behaviour due to channel interference and the system noise involved. Details of the jitter measurement of the recovered clock are covered in the following sub-section 9.2.
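As a compact illustration of the counter-based round-trip estimate described above, the sketch below compares a looped-back counter value with the live one. The snapshot values are hypothetical; only the 25 ns frame period and the 32-bit counter width come from the text.

```python
# Coarse round-trip latency from a looped-back free-running counter.
FRAME_PERIOD_NS = 25          # one GBT frame every 25 ns (40 MHz)
COUNTER_BITS = 32

def round_trip_ns(local_count, received_count):
    """Counter difference modulo 2^32, scaled to nanoseconds."""
    diff = (local_count - received_count) % (1 << COUNTER_BITS)
    return diff * FRAME_PERIOD_NS

# hypothetical snapshot: the echoed value lags the live counter by 14 frames
print(round_trip_ns(0x0000_1A2C, 0x0000_1A1E))   # -> 350 ns
```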
Jitter Measurement

The asynchronous fast serial trigger links (the GBT and the TTC-PON in the CRU application) embed the clock signal in the serial data transmission line, and the deterministic latency of the timing-information transmission relies on this embedded clock. Since the design uses the bonded mode, only the ATX PLL and the fPLL can be used. The best jitter performance is observed with the ATX PLL; however, based on the data rates, the Intel recommendation is to use the fPLL as the transmit PLL [22], so the fPLL is used in the design. During the pre-validation tests, the ideal test scenarios are prototyped with FPGA development boards carrying the same FPGA family as the final production version. To emulate the CTP and the CRU hardware, Kintex Ultrascale and Arria 10 development boards are used, respectively. For rapid prototyping of the test design, a split hardware setup is used to model the behaviour of the CRU: the GBT protocol is implemented in one Arria 10 FPGA board and the TTC-PON in another Arria 10 FPGA board, while the clock is transmitted from one board to the other after jitter cleaning in the SI5344 PLL module. The split hardware model allows easy access to the Test Points (TP) of the jitter-measurement setup, as can be seen in Figure 13. The configuration parameters of the SI5344 PLL module and of the Versatile Link Demo Board (VLDB) used for the test are given in Tables 5 and 6, respectively. The phase-noise representation used for the jitter measurement gives an accurate estimate of the phase fluctuations in the frequency domain. Phase noise is defined as the ratio of the noise in a 1 Hz bandwidth at a specified frequency offset, f_m, to the oscillator signal amplitude at the carrier frequency, f_O; the unit is dBc/Hz, where dBc (decibels relative to the carrier) is the power ratio of a signal to the carrier signal, expressed in decibels. It is conventional to characterize an oscillator in terms of its single-sideband phase noise, as shown in Figure 14, where the phase noise in dBc/Hz is plotted as a function of the frequency offset f_m, with the frequency axis on a log scale. The RMS jitter (in linear terms, not dB) is calculated from a piecewise linear integration of the single-sideband phase-noise data points, using Equation 1, adapted from the Analog Devices tutorial MT-008 [25]:

RMS jitter [s] = sqrt(2 × 10^(A/10)) / (2π × f_O),   (1)

where f_O is the oscillator frequency and A is the integrated single-sideband phase-noise power in dBc. The results are cross-checked against the values generated by the phase-noise analyzer software. The integration of the phase-noise power values uses the trapezoidal rule [26] over a defined bandwidth, Equation 2:

A = 10 log10( Σ_i ½ (P_i + P_{i+1}) (f_{i+1} − f_i) ),   (2)

with the phase-noise points P_i first converted to linear power.

Figure 14: Oscillator phase noise in dBc/Hz vs. frequency offset [25]

The experiment calculates the RMS jitter from the phase-noise power within the bandwidth of 10 Hz to 20 MHz. As the zone of operation is in the high-frequency range, the effect of the lower-frequency phase noise is neglected [27]. This implies that a curve A whose integrated jitter evaluates higher than that of another curve B is not automatically the worse one; if, in the high-frequency region of interest, the phase-noise curve A lies below curve B, then curve A is considered the better performer. This concept is applied when interpreting the phase-noise measurement plots shown in Figures 15, 16, 17, 18 and 19. The oscilloscope settings are kept the same across the different measurements for the sake of consistency. The results contain the phase-noise measurements of the clock output at the different test points (TP) in the link chain. The Arria 10 transceiver-specific phase-noise data points [28], relative to the reference-clock phase noise, are also included; the Intel data-sheet [28] specifies the reference-clock phase-noise points for the Arria 10 transceivers at an operating frequency of 622 MHz.
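A minimal Python sketch of this jitter calculation is given below, following the MT-008 recipe of Equations 1 and 2. The (offset, dBc/Hz) points are made-up placeholders spanning the 10 Hz - 20 MHz band, not measured values from this work.

```python
# RMS jitter from single-sideband phase noise (Eq. 1 and 2 above).
import math

points = [          # (offset frequency in Hz, SSB phase noise in dBc/Hz)
    (1e1, -70.0),
    (1e2, -95.0),
    (1e3, -115.0),
    (1e5, -130.0),
    (2e7, -145.0),
]
F_OSC = 240e6       # carrier frequency, Hz

# trapezoidal integration of the phase-noise power in linear units
area = 0.0
for (f1, p1), (f2, p2) in zip(points, points[1:]):
    area += 0.5 * (10 ** (p1 / 10) + 10 ** (p2 / 10)) * (f2 - f1)

a_dbc = 10 * math.log10(area)                        # integrated noise, dBc
jitter_rms = math.sqrt(2 * area) / (2 * math.pi * F_OSC)
print(f"integrated noise {a_dbc:.1f} dBc, "
      f"RMS jitter {jitter_rms * 1e12:.2f} ps")
```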
The REFCLK phase-noise requirement at frequencies other than 622 MHz is calculated using Equation 3, adapted from the Intel data-sheet [28]:

REFCLK phase noise at f [MHz] = REFCLK phase noise at 622 MHz + 20 × log10(f / 622).   (3)

The details of the tapping points used during the measurements are illustrated in Figure 13. The following measurements are conducted to evaluate the performance and to achieve the best jitter-cleaning effect.

Performance comparison between the SI53XX PLL family

The initial purpose of the phase-noise measurements is to determine the PLL family that fulfills the CRU requirement. PLLs from different vendors were characterized by the CERN electronics team members, out of which the PLLs from the Silicon Labs SI53XX family were found suitable for the jitter-cleaning requirement of the LHC timing signal. Figure 15 gives a comparative result of the jitter-cleaning performance of two PLLs of the SI53XX family, the SI5338 and the SI5344. The test demonstrates that the SI5344 jitter cleaning is the better match for the phase-noise requirement of the Arria 10 FPGA SerDes at a 240 MHz reference-clock frequency. The test points used for the measurements are TP1 and TP3. This study played a significant role in deciding the PLL type to be installed on the CRU hardware, the PCIe40 DAQ Engine. The SI5345 PLL, a variant of the SI5344 family with ten output nodes [29], was chosen as the on-board jitter cleaner for the CRU PCIe40 DAQ Engine [30].

Performance comparison with PLL bandwidth variation

The following test determines at which loop-bandwidth configuration the PLL performs best. Figure 16 shows the phase-noise study of the clock signal tapped at test point TP3, evaluating the effect of the bandwidth variation on the integrated RMS jitter. From the plot it can be inferred that the PLL gives its best jitter-cleaning performance in the 200 Hz bandwidth configuration mode. The tests with the two frequencies at test point TP3 are shown in Figure 17. The CRU firmware with the latest specification uses a 240 MHz clock to carry the timing signal from the TTC-PON to the GBT without stepping the frequency down at intermediate points. The result shows that the jitter cleaner satisfies the Arria 10 FPGA SerDes jitter specification at the 240 MHz reference-clock frequency.

Effect of the jitter-cleaning performance on the integrated GBT and TTC-PON chain

The subsequent test of the SI5344 PLL performance is conducted with the PLL fitted in the integrated system. Test points TP1, TP2 and TP3 are used to derive the phase-noise plot shown in Figure 18. The test result validates that the jitter-cleaning performance of the PLL keeps the jitter within the tolerable level specified for the Arria 10 FPGA.

Comparison of jitter-cleaner performance against an ideal test-case scenario

The purpose of this test is to compare the jitter present in an ideal experimental condition against the practical scenario with the jitter cleaner in use. The terminal destination of the embedded clock signal in the link chain is the GBT chipset; in our test case we used the VLDB, as it houses the GBT chipset. The quality of the embedded clock received by the end-point VLDB is studied, with the recovered output clock frequency set to 40 MHz and 80 MHz, respectively.
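Equation 3 above is straightforward to apply; as a two-line sketch, the scaling of the 622 MHz data-sheet mask to the reference-clock frequencies appearing in this study (120 MHz in the older firmware, 240 MHz in the current one) evaluates as follows.

```python
# Equation 3: scale the 622 MHz REFCLK phase-noise mask to other frequencies.
import math

def refclk_mask_offset_db(f_mhz):
    return 20 * math.log10(f_mhz / 622.0)

for f in (120.0, 240.0):
    print(f"{f:5.0f} MHz: {refclk_mask_offset_db(f):+.2f} dB "
          "relative to the 622 MHz mask")
# 120 MHz: -14.29 dB, 240 MHz: -8.27 dB
```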
Two types of connection chains are constituted for the test. For the ideal test scenario, all noisy source points are dropped and the transition points are minimized: the clock signal originates directly from the clock generator, is embedded into the GBT payload by the CRU firmware, and is finally communicated to the VLDB to be recovered at 40/80 MHz. The setup connection is shown in Figure 20. For the practical test scenario, the previously used experimental setup is employed, as shown in Figure 13. The results are plotted in Figure 19 and are labelled "VLDB CLK OUT WITH GBT" and "VLDB CLK OUT WITH TTC-PON GBT BRIDGE", respectively. The results of the two test setups show a strong positive correlation; hence it can be concluded that the SI5344 jitter cleaner can successfully be employed for cleaning the embedded clock during its traversal of the TTC-PON and GBT bridge connection. The Jitter Transfer Function (JTF) [31] is the ratio of the output jitter to the jitter applied on the reference clock, both measured as a function of frequency; the calculated JTF values are given in Table 8. To summarize, the SI5344 PLL satisfies the jitter-cleaning requirement of the clock crossing the TTC-PON and GBT bridge connection.

BER Measurement

Jitter is not the only contributor to bit errors; amplitude noise also plays a role. Bit Error Rate (BER) analysis is done for a quantitative measurement of the signal quality. The Intel Arria 10 development board is used for the test. A 10G 850 nm multimode datacom SFP+ optical transceiver is configured to operate at 4.8 Gbps, the operating line rate of the GBT protocol. A customized variable fibre-optic assembly with an in-line attenuator capability in the range 0-60 dB is used to attenuate the signal. For the optical power measurement, a hand-held power meter with an SC-ST connector is used. The attenuator cable adds an additional insertion loss of ≤ 3 dB to the entire test chain. Snapshots of the measurement setup are provided in Figure 21. Interested users can use the firmware uploaded at the CERN GitLab link [32] to reproduce the results in other hardware conditions. For the BER measurement, the default transceiver configuration is used. The BER is evaluated as the ratio of the number of errors received to the total number of bits transmitted. Ideally, as the number of transmitted bits approaches infinity, the BER becomes a perfect estimate; for practical tests, however, a procedure is needed that allows the BER to be measured with a high confidence level. J. Rudd [33] has documented a method for reducing the test time by calculating the number of bits that must be transmitted to estimate the error probability with a particular statistical confidence level. Equation 4 shows the trade-off between the test time T and the confidence level CL; assuming Poisson statistics,

CL = 1 − e^(−n·BER) × Σ_{k=0..N} (n·BER)^k / k!,  with T = n / R,   (4)

where n is the total number of bits transmitted, N is the number of errors that occurred during the transmission and R is the line rate. For N = 0 the solution is trivial: n = −ln(1 − CL) / BER. In the work of Detraz et al. [34], the minimum experiment time required for a GBT BER measurement at different confidence levels is derived from Eq. (4). The concept is further extended to mark the minimum measurement time needed for the TTC-PON link as well, as shown in Figure 22.
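A short sketch of the zero-error case of Equation 4 makes the required test times concrete. The target BER and the confidence levels below are illustrative choices; only the 4.8 Gbps line rate comes from the text.

```python
# Required BER test time for a given confidence level (N = 0 case of Eq. 4).
import math

def test_time_s(ber_target, confidence, line_rate_bps, errors=0):
    """Test time needed when no errors are observed during the run."""
    if errors != 0:
        raise NotImplementedError("general N needs the full Poisson sum")
    n_bits = -math.log(1.0 - confidence) / ber_target
    return n_bits / line_rate_bps

for cl in (0.90, 0.95, 0.99):
    t = test_time_s(1e-12, cl, 4.8e9)     # GBT line rate, BER target 1e-12
    print(f"CL {cl:.0%}: {t / 60:.1f} minutes")
# CL 90%: 8.0 min, CL 95%: 10.4 min, CL 99%: 16.0 min
```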
BER Measurement for the GBT

The BER measurements for the GBT link operating in the GBT mode and in the Widebus mode are plotted in Figure 23, together with an exponential fit to the measured data. Below a receiver power of −17 dBm, the clock is lost and further BER measurements cannot be pursued; however, the plot can be extrapolated based on the standard 'erfc' shape of the curve, assuming Gaussian noise. The margin of receiver sensitivity at the target BER between the two schemes is

(−12.9) − (−15.0) dBm = 2.1 dB.   (5)

The measured difference of 2.1 dB is in close agreement with the measurement conducted by Csaba Soos for the GBT protocol implemented on a Xilinx FPGA [35], which is around 2.5 dB.

The GBT link signal quality. Data from the FPGA transceivers are transmitted using QSFP+ transceiver modules, which convert the electrical signals to optical signals for communication over the fibre. A LeCroy Serial Data Analyser (SDA) oscilloscope is used for analyzing the signal quality. An eye diagram is used as the indicator of the quality of the optical transmission signal at the GBT line rate of 4.8 Gbps; the signal-to-noise ratio of the high-speed data signal is directly indicated by the amount of eye closure, or eye height. For the GBT transmission signal using a QSFP+ transceiver module, an eye height of 406.6 mV and an eye width of 173.3 ps are achieved, as shown in Figure 24.

Power measurement. Intel internal monitoring tools are used to register the power consumed and the temperature of the Arria 10 FPGA chip during the test measurements. Figure 25 shows the power-variation plot as monitored with the tool, and Table 9 summarizes the power consumed by a single GBT link under the different encoding schemes. In the CRU project, the link connecting the radiation-hard components to the non-radiation-hard components is based on the GBT link technology operating at 4.8 Gbit/s over an 850 nm multimode optical fibre. The link runs from the Versatile Link transceiver to a Multifiber Push-On (MPO) optical connector at the CRU PCIe40 DAQ Engine through an optical fibre splitter. A study of the performance on 400 m long OM3 and OM4 cables has already been conducted by Schwemmer et al. [36], so further optical-cable characterization is not pursued. The TTC-PON BER results are summarized in Table 10. The transmitted power of the OLT is +3.67 dBm. For each change of the attenuation value, 5 × 10^11 bits are transmitted to count the errors.

The TTC-PON Arria 10 signal quality. Figure 28 shows the transmission quality of the optical signal of the TTC-PON from the CTP prototype board, which uses a Xilinx Ultrascale FPGA. Figure 29 shows the received signal quality of the TTC-PON as monitored within the Arria 10 FPGA using the transceiver toolkit (TTK). An eye-width to eye-height ratio of 53/39 is registered in the TTK after 1.1 × 10^12 test bits using the PRBS31 stress pattern.
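The attenuation scan above amounts to a simple link-budget sweep; the sketch below reproduces it. The +3.67 dBm launch power is taken from the text, while treating the attenuator cable's ≤ 3 dB insertion loss as a fixed worst case is an assumption for illustration.

```python
# Received optical power versus attenuator setting (rough link budget).
def rx_power_dbm(attenuation_db, tx_dbm=3.67, insertion_db=3.0):
    """Optical power reaching the receiver for a given attenuator setting."""
    return tx_dbm - insertion_db - attenuation_db

for att in (0, 5, 10, 15, 20):
    print(f"{att:2d} dB attenuation -> {rx_power_dbm(att):+.2f} dBm "
          "at the receiver")
# this received power is the x-axis of the BER curves discussed above
```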
Transceiver Optimization

Transceiver parameter tuning plays a significant role in reducing the BER. A test procedure is developed to tune the high-speed link using the signal-conditioning circuitry provided in the Arria 10 transceivers; the Quartus v15.1 transceiver testing toolkit [37] is used to monitor the signal characteristics. Several articles in the Altera (now Intel) literature stress the need for proper transceiver optimization to reach maximum performance [37-40]. Those articles address older FPGA generations such as the Stratix IV and the Cyclone, so a study was necessary to obtain first-hand results of transceiver tuning on the Arria 10 FPGAs [22]. For the transceiver tests, the GBT line rate of 4.8 Gbps is used. Optimizing the transceiver by testing all combinations would be an inefficient approach, as the time required would be prohibitively long; a short calculation, reproduced in the sketch below, shows the measurement time needed to scan all configurations. Empirically, the readout time for each test configuration is found to be about 10 s. With the modifiable transceiver properties allowed in the Arria 10, the configuration range possible for each parameter is listed in Table 11. The total number of configuration tuples is the product of all the per-parameter ranges, 32 × 63 × 63 × 31 × 15 × 16 × 8 = 7,559,516,160 (approximately 7.5 billion) test cases. Executing all of them would take about 2397 years (7,559,516,160 × 10 s = 20,998,656 hours = 874,944 days). Instead of spending this enormous measurement time looking for the optimal solution, a good-enough workaround is to find a suboptimal solution. Each parameter's configuration range is scanned individually while all other parameters are kept at the Intel default values, and a range of optimized values is obtained from the eye-width to eye-height ratio in the EyeQ signal-monitoring tool. The quasi-linear nature of the configuration parameters causes the evaluated optimized values to appear as a contiguous subsequence, as depicted in Figure 30. Out of the multiple parameters, only the variation of the VOD parameter against the eye height and the eye width is plotted, using a spider chart, in Figure 31 to illustrate the procedure. The parameters can be grouped and optimized separately without any notable interference effect on the adjacent parameters: {VOD}, {Pre-Emphasis 1st Post Tap, Pre-Emphasis 1st Pre Tap}, {Pre-Emphasis 2nd Post Tap, Pre-Emphasis 2nd Pre Tap} and {VGA, EQUALIZATION} are grouped separately. For the fast approximation, the order of optimization is kept the same as the order listed; the pictorial representation is given in Figure 30, and users can change the order for further optimization. By this method, the time taken is reduced significantly: the total number of configuration cases comes down to 70 (= 4 + (3 × 4) + (6 × 3) + (9 × 4)), and the total time needed is about 11.7 minutes (70 × 10 s = 700 s). The major reason for opting for this quick estimation is the need to characterize more than 24 links per CRU board and to repeat this for over 100 CRU boards in a short period, given that the result for one board cannot be applied to another; hence this heuristic approach was developed. After the parameters are optimized, the values for the default and the best conditions are shown in Figure 32. For the entire test, PRBS31 is used to stress the system, as it produces the most bit transitions among the available patterns. The eye diagrams for the default and the optimized parameters are shown in Figure 33. Different parameter configurations of the transceiver can share the same eye-diagram values, or performance metrics; the set of such configuration-parameter tuples is referred to as the solution space. The best configuration parameters obtained for the device under test are tabulated in Table 13.
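As a sanity check, the scan-size arithmetic above is easy to reproduce; the per-parameter ranges are those of Table 11 as quoted in the text.

```python
# Exhaustive scan versus the grouped one-parameter-at-a-time scan,
# at ~10 s of measurement time per configuration.
RANGES = [32, 63, 63, 31, 15, 16, 8]      # per-parameter settings (Table 11)
SECONDS_PER_TEST = 10

exhaustive = 1
for r in RANGES:
    exhaustive *= r
print(f"exhaustive: {exhaustive:,} cases, "
      f"{exhaustive * SECONDS_PER_TEST / 3600 / 24 / 365:,.0f} years")

grouped = 4 + 3 * 4 + 6 * 3 + 9 * 4       # reduced scan quoted in the text
print(f"grouped   : {grouped} cases, "
      f"{grouped * SECONDS_PER_TEST / 60:.1f} minutes")
# exhaustive: 7,559,516,160 cases (~2,397 years); grouped: 70 cases (11.7 min)
```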
The optimized configuration values are highly sensitive to the FPGA process technology, the system temperature and the optical transceiver used; hence, even for a minor hardware modification, the configuration parameter values need to be re-evaluated.

Discussions

The behaviour of all the composite elements in the TTC-PON and GBT bridge connection was analyzed for compatibility regarding interconnection and interoperability. A detailed characterization test of the integrated prototype design was conducted to check for any unanticipated design faults before the final commissioning in the CRU firmware. Four performance metrics were used in the characterization: (1) latency; (2) jitter; (3) BER; and (4) transceiver parameter settings. The testing phase of the firmware passed through several iterations of power on/off cycles, firmware upgrades and reset assertion/de-assertion cycles (PFR cycles) as a reliability test of the communication bridge. During the entire study, large sets of empirical results were collected for analysis. The analysis of these statistics confirms that the TTC communication bridge connection behaves deterministically. The results meet the specified trigger and timing communication standards, hence no further compensatory measures are needed. To avoid any unprecedented failure during data taking, a set of monitoring logic is integrated alongside the CRU firmware logic core as a preventive measure to register the macroscopic behaviour of the system, as mentioned in Section 8. The intrinsic details of the CRU firmware designs are available on the ALICE-CERN CRU TWiki page [41].

Summary and Outlook

We have carried out a detailed study of the trigger and timing distribution using the TTC-PON and GBT bridge connection in ALICE. The study was carried out using CRU development boards, allowing rapid evaluation of the performance metrics. The results show that the TTC-PON and the GBT can work in synergy to communicate the timing and trigger information successfully and can effectively be deployed. The study confirmed that the system behaviour is completely deterministic over multiple rounds of PFR cycles. The FPGA used on the CRU board is a 20 nm Intel Arria 10. The CRU firmware logic uses a static placement configuration, hence the stress points remain fixed over the operational runtime. A future direction is a reliability study under accelerated stress scenarios to mimic the effect of degradation in the timing circuits and wear-out in the logic/memory cells [42]. This would identify the stress hotspots and allow us to overcome system faults by applying mitigation solutions accordingly. Another equally important study is a data-flow analysis of the spatiotemporal behaviour of the data traffic [43] for each sub-detector system, in order to arrange and reallocate the CRU peripheral logic resources in an optimized manner.
Solid Organ Transplantation in Patients with Inflammatory Bowel Diseases (IBD): Analysis of Transplantation Outcome and IBD Activity in a Large Single Center Cohort

Background: Currently, limited data on the outcome of inflammatory bowel disease (IBD) in patients after solid organ transplantation (SOT) are available. We aimed to analyze the effects of SOT on the IBD course in a large IBD patient cohort.

Methods: Clinical data from 1537 IBD patients were analyzed for patients who underwent SOT (n = 31) between July 2002 and May 2014. Sub-analyses included SOT outcome parameters, IBD activity before and after SOT, and efficacy of IBD treatment.

Results: 4.74% of patients with ulcerative colitis (UC) and 0.84% of patients with Crohn's disease (CD) underwent SOT (p = 2.69 × 10^-6, UC vs. CD). 77.4% of patients with SOT underwent liver transplantation (LTx) with tacrolimus-based immunosuppressive therapy after SOT. All LTx were due to primary sclerosing cholangitis (PSC) or PSC overlap syndromes. Six patients (19.4%) required renal transplantation and one patient (3.2%) heart transplantation. A survival rate of 83.9% after a median follow-up period of 103 months was observed. Before SOT, 65.0% of patients were in clinical remission and 5 patients received immunosuppressive therapy (16.1%). After SOT, 61.0% of patients were in remission (p = 1.00 vs. before SOT) and 29.0% required IBD-specific immunosuppressive or anti-TNF therapy (p = 0.54 vs. before SOT). 42.9% of patients with worsening of IBD after SOT were at higher risk of needing steroid therapy for increased IBD activity (p = 0.03; relative risk (RR): 10.29; 95% CI 1.26-84.06). Four patients (13.0%) needed anti-TNF therapy after SOT (response rate 75%).

Conclusions: SOT was more common in UC patients due to the higher prevalence of PSC-related liver cirrhosis in UC. Despite mainly tacrolimus-based immunosuppressive regimens, the outcome of SOT and IBD was excellent in this cohort. In this SOT cohort, concomitant immunosuppressive therapy due to IBD was well tolerated.

Introduction

The clinical course of inflammatory bowel diseases (IBD) such as ulcerative colitis (UC) and Crohn's disease (CD) is typically characterized by alternating episodes of flares and remission. In up to one third of IBD patients, extraintestinal manifestations such as primary sclerosing cholangitis (PSC) or renal dysfunction (e.g., due to amyloidosis) are found [1-3]. PSC is a chronic cholestatic liver disease with chronic inflammation and fibrosis of the hepatic bile ducts, resulting in liver cirrhosis and progressive impairment of liver function with consecutive liver failure in a subgroup of PSC patients [3,4]. Liver transplantation is currently the only curative therapy for PSC, as medical treatments for PSC are limited and non-curative [5]. PSC is more frequent in UC patients than in CD patients, with prevalence rates ranging from 0.76% to 5.4% in UC patients and from 1.2% to 3.4% in CD patients [1,6-8]. Most IBD patients with PSC display a characteristic disease course compared to IBD patients without cholestatic liver diseases [4,8-17]. Furthermore, the frequency of pancolitis is higher in UC-PSC patients, with more right-sided colitis, and more of these patients have rectal sparing and backwash ileitis, although the course of UC is often mild [4,9-12,14,15,18].
In contrast, the risk of malignancies, including colorectal cancer (CRC) and cholangiocarcinoma, is significantly increased in UC patients with concomitant PSC, independently of the underlying risk of CRC in UC alone [13,16,17,19]. In addition, the risk of pouchitis was reported to be high after proctocolectomy with ileal pouch-anal anastomosis (IPAA) [9]. Given the high prevalence of PSC among IBD patients, PSC is the most frequent indication for liver transplantation (LTx) in IBD patients. Another, less frequent cause of solid organ transplantation (SOT) in IBD patients is renal insufficiency, e.g., due to amyloidosis [2,20]. In IBD patients undergoing SOT, the disease course after SOT is highly variable, and data on the subsequent IBD course are conflicting [2,3,9-18,20-31]. A recently published meta-analysis included a total of 609 IBD patients from 14 clinical studies and investigated the natural history of IBD after LTx in patients with PSC/UC. Among these IBD patients, one third (31%) showed improvement of IBD activity after LTx and 39% displayed no significant change, whereas in 30% of patients the IBD activity worsened after LTx, requiring treatment intensification [5]. Similarly, after renal transplantation, approximately 30% of patients develop IBD flares, and one fifth of patients have to undergo colectomy after renal transplantation [32-35]. Therefore, for approximately one third of IBD patients, treatment has to be adapted due to increasing IBD activity after SOT. Anti-tumour necrosis factor alpha (TNF-α) therapy has proven to be an effective therapeutic option for patients with refractory IBD in numerous clinical trials; anti-TNF-α therapy therefore represents a treatment option for IBD patients who have undergone SOT. However, clinical experience with anti-TNF-α therapy in IBD patients after SOT is very limited. To date, a total of 21 IBD patients, including patients with UC, CD, indeterminate colitis and pouchitis, have been treated with infliximab or adalimumab after LTx [36-40]. Some case reports have been published on anti-TNF-α therapy in IBD patients after renal transplantation, but no data exist on anti-TNF-α therapy in IBD after heart transplantation [41,42]. Given the low incidence of SOT in IBD, our large IBD patient cohort enabled us to perform a large single-center study (n = 31 SOT cases) on the IBD disease course and anti-TNF treatment efficacy before and after SOT in a well-characterized cohort. One aim was to investigate the outcome of SOT in IBD patients and to evaluate the course of IBD before and after SOT. In addition, we aimed to analyze the treatment outcome of anti-TNF therapy among these patients. These data were finally compared to other available clinical trials and analyses of SOT in IBD patients.

Ethical Statement

All individuals gave their written, informed consent prior to study inclusion. The study was approved by the local Ethics committee (Ludwig-Maximilians-University Munich) and adhered to the ethical principles for medical research involving human subjects of the Helsinki Declaration.

Study population

All IBD patients were recruited from the IBD outpatient department of the University Hospital Munich-Grosshadern and from our Center for Solid Organ Transplantation (Ludwig-Maximilians-University Munich, Germany).
The databases of all IBD patients followed at the IBD outpatient department and of all patients who underwent SOT at the University Hospital Munich-Grosshadern, or who were followed after SOT at our Center for Solid Organ Transplantation, were merged to identify IBD patients who underwent SOT. Two senior gastroenterologists reviewed the relevant data of the 31 IBD patients who underwent SOT between July 2002 and May 2014. Clinical data were collected prospectively; however, the data analysis was performed retrospectively. Two senior gastroenterologists analyzed the data, which were recorded by patient chart analysis and a detailed questionnaire based on an interview at the time of enrolment. All patients were regularly seen at the IBD outpatient department and at the Center for Solid Organ Transplantation at the University Hospital Munich-Grosshadern. The diagnosis of UC and CD was based on the Montreal classification, including endoscopic, radiological, and histopathological parameters [43]. IBD activity was evaluated clinically before and after SOT and was based on endoscopic findings before and after SOT. Endoscopic assessment for UC was based on the Mayo endoscopic subscore: (0) inactive disease; (1) mild disease with erythema, decreased vascular pattern and mild friability; (2) moderate disease with marked erythema, absent vascular pattern, friability and erosions; and (3) severe disease with spontaneous bleeding and ulcerations. For CD, endoscopic activity was defined as "remission" in the absence of erosions, ulcers, stenoses and fistulas; as "mild" in the presence of signs of inflammation with erosions but without ulcers, stenoses or fistulas; and as "severe" in the presence of ulcerations, stenoses or fistulas. For the clinical assessment of CD, the Crohn's Disease Activity Index (CDAI) was used; a score of < 150 points was defined as clinical remission. For UC, the Clinical Activity Index (CAI, Lichtiger score) was used; a CAI of ≤ 4 points was defined as clinical remission. For endoscopic activity, the last endoscopy before SOT and the first endoscopy after SOT were analyzed. Steroid treatment for IBD after SOT was defined as daily steroid treatment > 10 mg prednisolone due to high IBD activity.

Statistical analysis

Data were described with proportions for categorical variables and medians with ranges for continuous variables. Crude associations between categorical variables were assessed with the Chi-square test or Fisher's exact test, where appropriate. Quantitative variables were compared between subgroups using Student's t-test. All tests were two-tailed, and p-values < 0.05 were considered significant.

Solid organ transplantation in IBD patients

Out of the total IBD cohort of 1073 CD patients and 464 UC patients analyzed in this study, 31 patients (2.0% of all IBD patients) underwent SOT during the study period (between July 2002 and May 2014). Among the 31 IBD patients were 22 UC patients (71.0%) and 9 CD patients (29.0%). Therefore, 0.84% of all 1073 CD patients and 22 of all 464 UC patients (4.74%) underwent SOT, confirming the increased incidence of SOT among UC patients compared to CD patients (p = 2.69 × 10^-6, UC vs. CD). Twenty-four IBD patients underwent LTx (77.4%), six IBD patients underwent kidney transplantation (19.4%) and one patient underwent heart transplantation (3.2%; Table 1, Fig 1).
Outcome of liver transplantation in IBD patients

Twenty-four IBD patients, including 21 UC patients and 3 CD patients, underwent LTx for PSC or PSC/AIH overlap syndrome; one UC patient had concomitant PSC and hemochromatosis (Fig 1, Table 1). The median age at first diagnosis of liver disease was 27.2 years (range 12.5-56.0 years), compared to a median age of 21.2 years at first IBD diagnosis (range 9.1-50.0 years). The median age at first LTx was 41.2 years (range 27.1-66.0 years). Therefore, the median interval from first diagnosis of liver disease to LTx was 156.8 months (range 10.1-418.0 months). All patients with LTx received immunosuppressive therapy with tacrolimus after transplantation; five patients received tacrolimus in combination with mycophenolate mofetil (MMF) (20.8% of all LTx patients), 17 patients received concomitant steroid treatment (70.8%), and one UC patient received cyclosporine A followed by tacrolimus (4.2%, Table 1). Three of the 24 IBD patients (12.5%) needed re-transplantation because of acute ischemic organ failure after the first LTx; one of them (4.2%) required a total of four LTx due to recurrent acute ischemic organ failures after transplantation (Table 1). One UC patient (4.2%) needed a second LTx one month after the first SOT because of organ failure after recurrent cholangitis and intra-hepatic bleeding complications. Another UC/PSC patient (4.2%) needed re-transplantation three years after the first LTx because of chronic vascular complications resulting in chronic ischemic organ failure (Table 1).

Outcome of renal transplantation in IBD patients

Six of the 1537 IBD patients analyzed (0.4%), including five CD patients (0.47% of all CD patients) and one UC patient (0.22% of all UC patients), underwent renal transplantation for terminal renal failure (Fig 1, Table 1).

Table 1. Clinical characteristics of the 31 IBD patients who underwent solid organ transplantation (SOT): sex, age, anti-rejection immunosuppressive regimen, malignancies before/after organ transplantation, re-transplantation and the reason for re-transplantation, severe complications after SOT, IBD activity before and after SOT, medical treatment of IBD, and history of CD-related surgeries. The diagnosis and classification of UC and CD were based on the Montreal classification, including endoscopic, radiological, and histopathological parameters [43].

Two of these six patients (33%) were diagnosed with AA amyloidosis resulting in chronic renal failure. Another UC patient (16.7%) was diagnosed with IgA nephropathy. One CD patient (16.7%) was diagnosed with oxalate nephropathy and consecutive chronic renal failure, requiring hemodialysis followed by renal transplantation. Another CD patient (16.6%) developed an acute hemolytic uremic syndrome with acute renal failure and underwent renal transplantation (Table 1). In one CD patient with chronic renal failure, the cause of the renal failure could not be diagnosed (Table 1).

Outcome of solid organ transplantation in the IBD patient with heart transplantation

One CD patient out of the 1537 IBD patients analyzed (0.07%) underwent heart transplantation because of ischemic dilatative cardiomyopathy with consecutive congestive heart failure at the age of 29 years (Fig 1, Table 1). This patient was diagnosed with CD at the age of 22 years and received tacrolimus, MMF and steroid treatment after heart transplantation. The time from diagnosis of heart failure to heart transplantation was 15 months.
IBD activity and medical treatment before and after solid organ transplantation

The clinical characteristics of UC and CD based on the Montreal classification, disease activity before and after SOT, as well as the history of IBD-related surgery and IBD-related medical treatment before and after SOT, are given for the 22 UC patients and 9 CD patients in Table 1. All 31 IBD patients underwent endoscopy within a median of 16.5 months before SOT; they also underwent endoscopy within a median of 24.4 months after SOT. Overall, 20 of the 31 IBD patients were clinically and endoscopically in remission before SOT (64.5%). Six IBD patients had mild disease activity before SOT (19.4%) and five IBD patients had severe IBD activity before SOT (16.1%; Table 1, Fig 2A). After SOT, no IBD activity was seen endoscopically in 19 of the 31 IBD patients (61.3%), while nine IBD patients had mild disease activity after SOT (29.0%) and three patients (9.7%) had severe disease activity (Table 1, Fig 2B). The history of medical treatment before and after SOT is given in Table 1 and Fig 3. Sixteen UC patients (out of the 22 UC patients with SOT; 72.8%) received 5-amino-salicylic acid (5-ASA) treatment pre-SOT, eight patients received steroids (36.4%) and one patient with severe pancolitis received azathioprine (4.5%), while five UC patients had no maintenance treatment before SOT (22.7%; Table 1, Fig 3). After SOT, nine of the 22 UC patients received 5-ASA treatment (40.9%), three patients needed steroid treatment (13.6%) and two UC patients received immunosuppressive therapy with azathioprine after SOT (9.1%; Table 1, Fig 3). Two UC patients (9.1%) were treated with infliximab after SOT, and another UC patient was treated with adalimumab 2.5 years after SOT (Table 1, Fig 3). Four of the nine CD patients had no maintenance treatment before SOT (44.4%; Table 1, Fig 3); two CD patients received immunosuppressive therapy with azathioprine (22.2%) and two other CD patients received 6-mercaptopurine (22.2%). Three CD patients received steroid treatment, two of them with concomitant immunosuppressive treatment with azathioprine or 6-mercaptopurine, respectively, and one of them with 5-ASA (33.3%; Table 1, Fig 3). After SOT, one CD patient was treated with infliximab (Table 1 and Fig 3). One patient received steroid treatment after SOT because of increased CD activity (11.1%), three patients received immunosuppressive therapy with azathioprine after SOT (33.3%) and five patients (55.6%) had no IBD-specific treatment after SOT (Table 1, Fig 3).

IBD activity changes after solid organ transplantation

A change of disease activity was seen in twelve of the 31 IBD patients after solid organ transplantation (39%), while in 19 patients (61%) no significant influence of SOT on IBD activity was observed. Worsening of disease activity after SOT was seen in 7 patients (23%), while IBD activity decreased in five patients after SOT (16%; Table 1, Fig 4). Univariate analysis revealed that the requirement of additional corticosteroid therapy (defined as a prednisolone equivalent of 10 mg or greater for IBD activity and not for therapy of transplant rejection) was a good predictor of worsening IBD activity after SOT (p = 0.03; relative risk (RR): 10.29; 95% CI 1.26-84.06; Table 2). All patients who died during the follow-up interval were male.
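The reported relative risk can be verified with a short calculation. The 2x2 counts below are reconstructed from the percentages in the text (42.9% of the 7 patients with worsening IBD, i.e. 3 patients, needed steroids, versus 1 of the 24 patients without worsening); treat these counts as an inference, not as source data.

```python
# Sanity check of the reported RR 10.29 (95% CI 1.26-84.06), using a
# log-normal approximation for the confidence interval.
import math

a, n1 = 3, 7       # steroids / total, IBD worsened after SOT (inferred)
b, n2 = 1, 24      # steroids / total, IBD unchanged or improved (inferred)

rr = (a / n1) / (b / n2)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)          # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 10.29, CI 1.26-84.06
```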
Severe complications after solid organ transplantation

Three male UC patients who underwent LTx for PSC (9.7%) died because of acute ischemic failure of the transplanted liver after a median of 5.8 months (range 3.6-8.9 months). One male CD-PSC patient (3.2%) died one year after LTx because of septic and bleeding complications, at an age of 35 years; this patient received MMF, steroids and tacrolimus as the immunosuppressive regimen after SOT. Another male UC patient, who had undergone a total of four LTx because of recurrent acute ischemic organ failures, died six years after the last LTx because of septic complications and gastro-intestinal bleeding complications, at an age of 35 years, under concomitant immunosuppressive therapy with tacrolimus (3.2%, Table 1). Two years after LTx, another male UC-PSC patient died because of congestive heart failure (3.2%, Table 1). Despite incomplete data regarding the cytomegalovirus (CMV) infection status before SOT, none of the 31 IBD patients with SOT developed infectious complications related to CMV or CMV reactivation.

Malignancies in IBD patients before and after SOT

Almost one third of the 31 IBD-SOT patients were diagnosed with malignancies or dysplasia (n = 9, 29%). Six IBD patients (19.4%) were diagnosed with a malignancy or dysplasia before SOT (6/9 patients, 66.7%): one male UC patient with abdominal cutaneous malignant mesothelioma; one female UC patient with cholangiocellular carcinoma diagnosed in the explanted liver; one male UC patient with hepatocellular carcinoma diagnosed in the explanted liver; one male UC patient with severe pancolitis diagnosed with high-grade dysplasia-associated lesions (DALM); one male CD patient with cholangiocellular carcinoma diagnosed in the explanted liver; and one UC patient with colorectal cancer before SOT (Table 1). Three IBD patients (9.7%) were diagnosed with malignancies after SOT: one female UC patient with adenocarcinoma of the papilla of Vater; one male CD patient with post-transplant lymphoproliferative disease (PTLD); and one male CD patient with papillary renal cell carcinoma in the transplanted kidney (Table 1).

Discussion

The aim of this study was to analyze the effect of SOT on the IBD course. Only a minority of 2% of all IBD patients (31 out of 1537 IBD patients) needed SOT in our cohort, demonstrating that this is an overall rare event in IBD, especially in CD patients. Importantly, significantly more UC patients than CD patients underwent SOT in our study cohort, due to the higher prevalence of PSC-related liver cirrhosis in UC (4.74% of all UC patients vs. 0.84% of all CD patients, p = 2.69 × 10^-6). All LTx were performed due to PSC or PSC overlap syndromes. Epidemiologic data from Northern European countries demonstrated a lifetime risk of 5% for developing PSC in IBD patients [6]. Also in Northern European countries, PSC is a major indication for LTx, constituting approximately 17% of all indications for LTx in the general population (including IBD patients) [21]. Overall, the outcome of SOT in the 31 patients in our cohort was favourable. The survival rate was 84% (n = 26) during a total follow-up of 103.0 months (range 7.0-182.0 months) and a median follow-up period of 33.3 months after SOT (Table 1). Five male IBD patients who underwent SOT died (16%), at a median age of 49.0 years. The most common complications were ischemic organ failure of the transplanted liver, septic complications and uncontrollable bleeding complications.
Renal failure is a rare complication, especially in patients with CD [20]. Age and duration of IBD have been identified as independent risk factors for developing renal failure [44]. Systemic AA amyloidosis is associated with IBD, and at least 1% of IBD patients will develop amyloidosis [45]. Two of our CD patients needed renal transplantation for AA amyloidosis and had favourable long-term outcomes. An association between IgA nephropathy and IBD seems possible [46], and there is an association between oxalate nephropathy and IBD, since the prevalence of calcium oxalate urolithiasis is up to five-fold higher in CD than in the general population [47]. Hemolytic-uremic syndrome (HUS) is characterized by microangiopathic hemolytic anemia, impaired renal function and excessive platelet consumption leading to thrombocytopenia, especially related to gastrointestinal tract infections with Shiga toxin-producing Escherichia coli (STEC) [48]. CD seems to be a likely predisposing factor for HUS because of recurrent gastrointestinal tract infections [49]. Importantly, the majority of IBD patients in our cohort received a tacrolimus-based anti-rejection treatment regimen after SOT (87.1%). In some studies, this immunosuppressive treatment regimen was associated with an unfavourable outcome in IBD patients who underwent SOT, with an up to four-fold higher risk of post-LTx IBD relapse [5, 50-52]. However, we could not confirm this unfavourable outcome in IBD patients with a tacrolimus-based anti-rejection treatment regimen post-SOT, as in 61% of patients disease activity was not influenced by SOT (and SOT-associated immunosuppressive therapy), and 16% of patients even had improvement of disease activity after SOT. Cyclosporine-based anti-rejection regimens after SOT were not associated with worsening of disease activity in patients with IBD [52,53]. However, only four of our 31 IBD patients (12.9%; 3 CD patients and one UC patient) had cyclosporine-based immunosuppression after SOT (Table 1). Disease activity did not change in two of these patients after the start of cyclosporine A; two patients had mild activity after SOT, while all patients had been clinically and endoscopically in remission before SOT. However, our subgroup of patients with cyclosporine A treatment after SOT is too small to draw definite conclusions. These observations were confirmed by univariate analysis of risk factors, which demonstrated no association of tacrolimus or cyclosporine treatment after SOT with worsening of disease activity. Steroid treatment for IBD after SOT was associated with active disease in this univariate analysis (p = 0.028, Table 2). This association is most likely explained by the fact that patients with active IBD after SOT will primarily be started on steroid treatment to control disease activity, considering the limited experience with other treatment options for IBD maintenance therapy after SOT, such as anti-TNF treatment. Therefore, steroid therapy is not necessarily a predictor of disease worsening after SOT but rather an indicator of active IBD following SOT. Based on the results of a large Scandinavian meta-analysis with unfavourable outcomes of IBD under tacrolimus-based anti-rejection treatment regimens after liver transplantation, Jørgensen et al. suggested a shift of immunosuppressive treatment to cyclosporine as potentially beneficial [5,21].
However, tacrolimus-based anti-rejection therapy seems superior to cyclosporine-based anti-rejection treatment regimens in significantly reducing the risk of acute rejection and steroid-resistant rejection, as well as the risk of graft loss [54]. For every 100 LTx patients treated with tacrolimus instead of cyclosporine, rejection and graft loss could be avoided in 9 and 5 patients, respectively [54]. None of the IBD patients in our cohort had severe episodes of acute rejection after SOT or loss of the transplant due to an acute rejection reaction. Therefore, our data cannot support unfavourable outcomes of the IBD course in tacrolimus-treated patients. Taking into account the lower risk of acute rejection and steroid-resistant rejection, as well as the lower risk of graft loss, in patients with tacrolimus treatment, a switch to cyclosporine in IBD patients with SOT cannot be recommended considering the results of our study. Although calcineurin inhibitors (CNIs) are the main anti-rejection treatment after LTx, CNI treatment is associated with unfavourable side effects such as worsening of renal dysfunction, neurotoxicity, and diabetes in patients following LTx. The use of mammalian target of rapamycin (mTOR) inhibitors after liver transplantation has been associated with favourable benefits on renal function, with efficacy comparable to CNIs, and would therefore be a good alternative in IBD patients following LTx [55]. However, data on mTOR treatment for IBD are very limited, and this treatment is currently not established for controlling disease activity in patients with IBD [55]. Data on the prevalence of colectomy after SOT are conflicting. Whereas progressive PSC with a consequent need for LTx seems to be associated with a decrease of disease activity in some IBD/PSC patients, other clinical trials report a prevalence of colectomy of up to 35% in UC patients after LTx [56,57]. In our cohort, only one patient needed colectomy after SOT because of refractory pancolitis despite anti-TNF maintenance treatment with infliximab. In the literature, a total of 21 patients with anti-TNF treatment after SOT have been reported to date [36-40]. The majority of these patients showed good response rates after the start of anti-TNF treatment. With the exception of one study [37], which reported infectious complications in several patients and a case of post-transplant lymphoproliferative disorder, there was also an overall good safety outcome (Table 3). Considering the combined patient number of these studies (n = 21), the clinical experience with anti-TNF-treated IBD patients with SOT is still very limited. In addition, the overall incidence of SOT in IBD is low; therefore, very large studies are needed to draw definitive conclusions on the safety of anti-TNF therapies in SOT patients. In our cohort, four patients (12.9%) received anti-TNF treatment after SOT, including one CD patient after heart transplantation. This patient suffered from an inflammatory intestinal stenosis before heart transplantation. After the start of infliximab, this patient was clinically in remission; endoscopically, no signs of inflammation were seen.

Table 2. Comparison of IBD-SOT patients with unchanged or improved IBD activity (n = 24) and IBD-SOT patients with worsened IBD activity (n = 7; univariate analysis). Steroid treatment for IBD after SOT was significantly associated with worsening of disease activity (p = 0.028). However, this association is most likely explained by the fact that IBD patients with worsening of IBD activity after SOT will primarily be treated with steroids, rather than steroid treatment being an independent risk factor for worsening of IBD activity after SOT.
No side effects occurred. This is, to our knowledge, the first report of an IBD patient with infliximab treatment after heart transplantation. Overall, the outcome of anti-TNF treatment was good in our cohort, although the number of patients is small. None of the four anti-TNF-treated patients developed infectious complications; in one UC patient, infliximab treatment was stopped prophylactically because of recurrent episodes of cholangitis, most likely caused by stenosis of the biliary-enteric anastomosis. Despite the limited data, anti-TNF treatment seems effective and safe in IBD patients who are refractory to conventional treatment post-SOT [36-40]. Only one UC patient needed surgery after SOT, with proctocolectomy and ileal pouch-anal anastomosis for treatment-refractory UC. Tacrolimus-based anti-rejection therapy after solid organ transplantation had a favourable outcome in patients with IBD. The risk for colorectal cancer was low in our IBD cohort; only one UC patient (3%) was diagnosed with colorectal cancer before SOT, and none of the IBD patients were diagnosed with colorectal cancer after SOT. However, given that the majority of patients were PSC-IBD patients with a high risk of developing colorectal cancer, annual screening colonoscopies were performed in most patients, likely contributing to the low number of colorectal cancers. Our study represents one of the largest single-center experiences on SOT outcomes in IBD patients. A major limitation of our study was the limited number of patients included in the analysis; e.g., the subgroup of anti-TNF-treated patients included only four patients. However, considering that the total number of all anti-TNF-treated IBD-SOT patients in the medical literature is only n = 21, this study adds important information to our knowledge of how to treat IBD patients after SOT.

Table 3. Overview of publications on IBD patients who received anti-TNF therapy after solid organ transplantation, including the 4 anti-TNF-treated patients of this study.

In conclusion, due to the stronger association of PSC-associated liver cirrhosis with UC (compared to CD), SOT is significantly more often required in UC (4.74% of our patients) than in CD (0.84% of our patients; p = 2.69 x 10^-6, UC vs. CD). The overall outcome of SOT in our IBD cohort was favourable, with a survival rate of 84%. Anti-TNF treatment was effective and safe in all IBD patients who underwent SOT. This suggests a good safety profile of anti-TNF treatment in IBD after SOT, although larger, multi-center cohort analyses are needed to confirm these findings.
2016-05-04T20:20:58.661Z
2015-08-19T00:00:00.000
{ "year": 2015, "sha1": "8706efbe94d46bd1ba8ce91db3bf92127991c8ea", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0135807&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8706efbe94d46bd1ba8ce91db3bf92127991c8ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23332397
pes2o/s2orc
v3-fos-license
Trends in Down's syndrome live births and antenatal diagnoses in England and Wales from 1989 to 2008: analysis of data from the National Down Syndrome Cytogenetic Register

Objectives To describe trends in the numbers of Down's syndrome live births and antenatal diagnoses in England and Wales from 1989 to 2008. Design and setting The National Down Syndrome Cytogenetic Register holds details of 26 488 antenatal and postnatal diagnoses of Down's syndrome made by all cytogenetic laboratories in England and Wales since 1989. Interventions Antenatal screening, diagnosis, and subsequent termination of Down's syndrome pregnancies. Main outcome measures The number of live births with Down's syndrome. Results Despite the number of births in 1989/90 being similar to that in 2007/8, antenatal and postnatal diagnoses of Down's syndrome increased by 71% (from 1075 in 1989/90 to 1843 in 2007/8). However, numbers of live births with Down's syndrome fell by 1% (752 to 743; 1.10 to 1.08 per 1000 births) because of antenatal screening and subsequent terminations. In the absence of such screening, numbers of live births with Down's syndrome would have increased by 48% (from 959 to 1422), since couples are starting families at an older age. Among mothers aged 37 years and older, a consistent 70% of affected pregnancies were diagnosed antenatally. In younger mothers, the proportion of pregnancies diagnosed antenatally increased from 3% to 43% owing to improvements in the availability and sensitivity of screening tests. Conclusions Since 1989, expansion of and improvements in antenatal screening have offset an increase in Down's syndrome resulting from rising maternal age. The proportion of antenatal diagnoses has increased most strikingly in younger women, whereas that in older women has stayed relatively constant. This trend suggests that, even with future improvements in screening, a large number of births with Down's syndrome are still likely, and that monitoring of the numbers of babies born with Down's syndrome is essential to ensure adequate provision for their needs.

INTRODUCTION

Between 1989 and 2008 two changes occurred that influenced the numbers of diagnosed Down's syndrome pregnancies, despite no change in the overall number of births in England and Wales. First was the considerable increase in maternal age, which is a major known risk factor for Down's syndrome. 1 2 Second was the increase in antenatal diagnoses of Down's syndrome, which included non-viable fetuses that would not have survived to term and would therefore otherwise have remained undiagnosed. 3 In the early years of the period from 1989 to 2008, the major indication for invasive antenatal diagnosis was a maternal age of 37 years or older. Since the mid-1990s, maternal serum testing and, later, measurement of fetal nuchal translucency have provided successful screening tests, and antenatal screening has achieved higher rates of correct predictions and higher coverage year on year. In 2001, the UK National Screening Committee advised that all pregnant mothers should be offered one of the available screening tests for Down's syndrome, and its recommendations for 2007-10 are that these tests should have a screen-positive rate of less than 3% and a detection rate of more than 75%. 4 This report describes the effects of the changes in maternal age and advances in screening on the incidence of live births with Down's syndrome, and on the number of antenatal diagnoses, between 1989 and 2008 in England and Wales.
Data collected

The National Down Syndrome Cytogenetic Register 1 was set up on January 1 1989, and holds anonymous data from all clinical cytogenetic laboratories in England and Wales for more than 26 000 cases of Down's syndrome diagnosed antenatally or postnatally. 5 Almost every baby with clinical features suggesting Down's syndrome, as well as any antenatal diagnostic sample from a pregnancy suspected to have Down's syndrome, receives a cytogenetic examination, since the definitive test for the syndrome is detection of an extra chromosome 21 (trisomy 21). All clinical cytogenetic laboratories in England and Wales are asked to submit to the register a completed form for each such diagnosis and its variants. The form contains details of the date, place of, and indications for referral, maternal age, and family history. Most laboratories send a copy of this form to the referring physician for confirmation and completion. The data have been compared with those from other congenital anomaly registers and those of the UK Office for National Statistics. These comparisons have shown that since its inception the register has captured data for an estimated 93% of all diagnosed births and pregnancy terminations to residents of England and Wales. 6 All data are presented by financial year (from April 1 1989 to March 31 2008).

Missing maternal ages

Five per cent of records had missing maternal age, of which more than 95% were postnatal diagnoses. Every such case was assigned a set of probabilities of the mother being aged from 15 to 50 years, calculated from the distribution of single years of known maternal ages registered in the same year, matched for antenatal or postnatal diagnosis. These probabilities were then used in any calculations involving maternal age. For presentation purposes, all women younger than 37 years were classified as younger women and women aged 37 and older as older women. Age 37 years was chosen as the threshold because at the start of the register, age was used as the initial screening test, with many women of this age or older being offered an antenatal diagnostic test before other antenatal screening tests became available.

Adjustment for natural fetal losses

The data included pregnancies that were diagnosed antenatally and subsequently terminated. Many of these pregnancies would not have survived to term and would therefore previously never have been diagnosed (miscarried fetuses are generally not karyotyped). For a comparison of the annual numbers of live births that would have occurred in the absence of antenatal diagnosis and subsequent terminations, adjustment must be made for the risk of a natural miscarriage. To do this, the number of terminations is weighted by the estimated risk of miscarriage, allowing for the fall in risk with increasing gestational maturity and the increase in risk with increasing maternal age. 7 For example, at a maternal age of 35 years, only 57% of Down's syndrome fetuses diagnosed at 13 weeks' gestation result in a live birth (the others miscarry or are stillborn), so terminations that occurred at around 13 weeks' gestation to such mothers are weighted by 0.57 to estimate the number of live births that would have occurred at term. This means, for example, that the occurrence of two terminations at 13 weeks to 35-year-old mothers is equivalent to around one live birth.
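The weighting scheme above can be made concrete with a short sketch: each termination contributes to the counterfactual live-birth count in proportion to the probability that the fetus would have survived to term. A minimal MATLAB illustration; apart from the 0.57 value quoted in the text, the survival probabilities and counts are hypothetical placeholders.

```matlab
% Live-birth equivalents from terminated pregnancies, following the
% register's weighting.  Each termination is weighted by the estimated
% probability of survival to term for its (maternal age, gestation)
% stratum.  Only the 0.57 value is taken from the text (age 35,
% diagnosis at 13 weeks); the other numbers are placeholders.
terminations = [2, 10, 5];          % counts per stratum (hypothetical)
p_survive    = [0.57, 0.70, 0.80];  % survival-to-term probabilities

live_birth_equiv = sum(terminations .* p_survive);
fprintf('Estimated live births that terminations represent: %.1f\n', ...
        live_birth_equiv);
```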
Missing outcomes

The outcome of each pregnancy diagnosed antenatally is followed up, but ascertainment has been slow for certain laboratories in recent years. This is partly because of the increased use of private diagnostic testing, so that the place of testing is not the same as the place of pregnancy outcome. However, the reasons for missing outcomes are unrelated to the actual outcome and to maternal and gestational age in cases subsequently traced. To examine trends in the proportion of women deciding to continue with the pregnancy on receiving an antenatal diagnosis of Down's syndrome, we excluded all cases with missing outcomes. To check the validity of doing so, we also examined every year's outcome data separately, and data from a specific laboratory were included only if outcomes were available for more than 95% of diagnoses. With this smaller dataset, the estimated proportion of women deciding to terminate the pregnancy was the same as that derived from the first method of excluding all cases with missing outcomes, the findings of which are presented in the results.

Trends in diagnoses and live births

The table shows the increase in diagnoses of Down's syndrome between 1989 and 2008, from 1075 in 1989/90 to 1843 in 2007/8. These values include live births and stillbirths diagnosed postnatally, and outcomes after antenatal diagnoses (terminations, fetal losses, and a small number in which the pregnancy was continued to term). The number of affected live births was 752 in 1989/90 and 743 in 2007/8 (a 1% decrease). Around 92% of women who received an antenatal diagnosis of Down's syndrome decided to terminate the pregnancy, and this proportion was constant throughout the period covered by the register. Figure 1 compares the total number of Down's syndrome diagnoses (top line) with the estimated number of Down's syndrome live births that would have occurred in the absence of antenatal diagnoses and selective termination (middle line). The two lines differ because some of these pregnancies would have miscarried naturally and not resulted in a live birth. The bottom line gives the estimated number of Down's syndrome live births that did occur in the presence of antenatal diagnoses and selective termination. The difference between the bottom two lines is attributable to antenatal screening and subsequent terminations, the effects of which have clearly increased over time.

Trends in maternal age

The middle line in figure 1 is the number of live births expected in the absence of screening and subsequent terminations; the rise (from 959 in 1989/90 to 1422 in 2007/8) is therefore due to a true increase in the incidence of Down's syndrome, which can be attributed to the increase in maternal age. Figure 2a shows the changes in maternal age for all births in England and Wales, and figure 2b shows the consequent effect on Down's syndrome pregnancies. 8 The small increase in the number of older mothers has a large effect on the number of Down's syndrome pregnancies because the risk of an affected pregnancy is greatly increased for older mothers; the risk for a 40-year-old mother is 16 times that for a 25-year-old mother.

Trends in antenatal screening

Because maternal age is such a powerful predictor, it is the most important element in the estimation of risk in all screening programmes. In the early years of the register, maternal age was the only method of screening, and women older than 36 years were offered an amniocentesis. For women younger than 37 years old (fig 3a), few screening tests were available in 1989 and the early 1990s.
Antenatal diagnosis was generally done in these women for other reasons, such as a family history of Down's syndrome, or findings of so-called soft markers at fetal ultrasound examinations, which became more common in the early 1990s. Even when validated screening tests became available, they had lower detection rates in younger than in older women. That the proportion of pregnancies diagnosed antenatally in younger women, which was 3% at the start of the register, began to increase rapidly after about 1993 to around 43% in 2007/8 is therefore unsurprising. Figure 3b shows that the proportion of pregnancies to women of 37 years and older diagnosed antenatally remained at around 70%, with the proportion diagnosed antenatally due to age alone being replaced by diagnoses due to other types of screening. An important consequence of these changes is that the mean age of mothers of live born children with Down's syndrome has risen over time, from 30.6 years (sd = 6.1 years) in 1989/90 to 34.4 years (sd = 6.8 years) in 2007/8, while the age of mothers of antenatally diagnosed cases has fallen.

DISCUSSION

We have shown in this paper that the two factors that influence the numbers of live births with Down's syndrome are rising maternal age and antenatal screening with subsequent terminations. In older women, a constant proportion of around 70% of diagnoses of Down's syndrome are antenatal. In the early years of the register, this was because most accepted a diagnostic test because of their advanced maternal age alone, but it is now accounted for by most accepting a diagnostic test after having had a different screening test. In younger women, the implementation of more recent methods of screening has had a greater effect, because reasons to offer them diagnostic tests were rare before the availability of these methods, and the earliest screening tests were not very sensitive. By 2007 all women should have been offered screening, and the tests available were much improved, with higher detection rates and fewer false positives. Data from the register show that the proportion of antenatal diagnoses in younger women has increased rapidly, but the data shown in figure 3 suggest that this will also plateau at around 70%. In 1992, a prediction was made, based on available evidence, that no more than 60% of all women would take up antenatal screening. 9 In view of the apparent plateau in uptake of antenatal diagnosis and the increasing maternal age, an increase in the number of affected births was predicted. However, this prediction underestimated the future power and effectiveness of new screening techniques. The annual number of Down's syndrome live births has remained fairly steady, as the number of pregnancies terminated balances the additional affected pregnancies resulting from the age-related increase. This plateau will not necessarily remain if maternal age continues to increase and the proportion of parents accepting screening and opting for a termination remains the same or decreases. However, the proportion of women who decide to terminate the pregnancy when they receive an antenatal diagnosis of Down's syndrome has remained constant at 92% throughout the life of the register. The findings also show that parents currently expecting a baby with Down's syndrome tend to be older than those in previous cohorts, a fact that needs to be considered when planning long term care for those affected. Moreover, such care will need to be extended, as life expectancy is probably rising faster in individuals with Down's syndrome than in others.
These concerns might be mitigated somewhat by the much improved educational attainment and social acceptance of people with Down's syndrome. The National Down Syndrome Cytogenetic Register is a unique resource that has ascertained over 93% of diagnoses of Down's syndrome in all of England and Wales over 19 years. It has enabled the effects of changes in screening policies to be accurately monitored. The only other national dataset on this syndrome in England and Wales is that collected by the National Congenital Anomaly System, which is too incomplete to monitor trends. 6 10 Regional congenital anomaly registers collect data that enable monitoring of regional trends, but these registers cover only around 55% of all births in England and 100% of all births in Wales. 10 The main current weakness of the National Down Syndrome Cytogenetic Register is the necessity to estimate the number of live births, because of largely administrative delays in receiving data for some pregnancy outcomes after an antenatal diagnosis. However, these delays are unrelated to the outcome in cases that we have subsequently managed to trace, and we are finding that with increased resources cases are often traceable. Other countries have reported similar trends in Down's syndrome diagnoses, screening, and subsequent live births, generally by merging their birth registries with data from cytogenetic laboratories. [11-15] The National Down Syndrome Cytogenetic Register covers a greater population (all of England and Wales) and a longer period of changing diagnostic technologies than do registers from other countries. In conclusion, dramatic changes in demography have been offset by improved medical technology and have resulted in no substantial changes in the birth prevalence of this quite disabling condition. Despite these underlying changes, it is striking that for women older than 36 years with a Down's syndrome pregnancy, the proportion who have had an antenatal diagnosis has remained constant at 70%, and for all women with an antenatal diagnosis of Down's syndrome the proportion who decide to terminate the pregnancy has remained constant at 92%. These findings indicate that even with improved screening tests, a considerable proportion of women may still decide not to be screened, and a qualitative study investigating why some women decide not to be screened would be valuable. It is important to ascertain whether the decision is an informed one and, if not, to address the lack of information. Knowledge about how much the risk of fetal loss after amniocentesis or chorionic villus sampling influences the decision to have a diagnostic test would help us to predict the impact of the future introduction of non-invasive diagnostic tests. The lack of iatrogenic risk may result in a higher uptake than that of current diagnostic tests, which could reduce, or even abolish, the use of invasive diagnostic tests, and could substantially increase the number of therapeutic abortions of affected fetuses at an earlier gestational age. 16 These future changes need to be closely monitored to ensure that appropriate resources are available both for the potentially increasing numbers of therapeutic abortions and also for the babies who will still be born with Down's syndrome.

Contributors: JM is Director of the National Down Syndrome Cytogenetic Register, is responsible for the data and the analysis presented, and worked jointly with EA in the writing of this paper.
Haiyan Wu, Annabelle Stapleton, and Khadeeja Wahid maintain the National Down Syndrome Cytogenetic Register database. JM is the guarantor for the study.
2017-09-01T11:49:55.298Z
2009-10-26T00:00:00.000
{ "year": 2009, "sha1": "77f1d9fd49f1e981088212ecc6a70b462ff376fc", "oa_license": "CCBYNC", "oa_url": "https://www.bmj.com/content/339/bmj.b3794.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "77f1d9fd49f1e981088212ecc6a70b462ff376fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268484837
pes2o/s2orc
v3-fos-license
OPTIMIZING ZERO BETA PORTFOLIOS: A COMPARATIVE ANALYSIS OF ROBUST AND NORMAL PORTFOLIO METHODOLOGIES

When building a "zero beta portfolio", neglecting the parameters' uncertainty may harm the investor. This paper analyzes a way to build a zero beta portfolio that does not rely only on the parameters' point estimates.

Introduction

In the realm of unrestricted arbitrage, an investor theoretically holds the potential of securing a risk-free profit, as illustrated in Figure 1. This could be attained when two well-diversified portfolios, both possessing identical betas, produce varying expected returns. The strategy involves shorting the portfolio that yields a lower expected return and procuring the one that offers a higher expected return. The expected return point estimate of Portfolio A presented in Figure 1 may be lower than that of Portfolio B. However, if both estimations have large enough estimation errors, it is reasonable that Portfolio A could result in a higher realized return than Portfolio B. From the beta perspective, neglecting its estimation error may also be bad for an investor seeking a statistical arbitrage. If one (or both) of the portfolios shown in Figure 1 has a beta with a large estimation error, what seems to be a statistical arbitrage opportunity may also result in a loss for the investor. To illustrate that, Figure 2 shows an example in which the real beta of Portfolio A is lower than that of Portfolio B, even though the point estimates were the same. In the example shown in Figure 2, if an investor shorts Portfolio A and buys Portfolio B, then in a bear market Portfolio B may have a higher loss than Portfolio A, resulting in a loss to the investor. Therefore, from the expected return perspective, for a certain value of beta, when an investor aims to develop a statistical arbitrage position, it may be rational to build two portfolios with expected-return point estimates as far apart as possible and with the lowest estimation errors possible. On the other hand, from the beta perspective, it may be rational to pick two portfolios with the closest point estimates possible and, again, with low estimation errors. Hence, when analyzing the feasibility of statistical arbitrage, it may be plausible to consider not only accurate point estimates for the parameters but also their estimation errors. This study therefore aims to present a way to take the parameters' uncertainty into account when building portfolios with long and short positions.

It is worth mentioning that Göncü & Akyldirim (2016) and Anish (2021) state that once there is uncertainty about portfolio mean and standard deviation, statistical arbitrage is no longer a guaranteed approach, due to "error in trader's guess or forecast of the long-term mean levels". The expected profit is also based on mean-reverting price behavior (Ziping, Rui, & Palomar, 2019). Regarding this problem, Do and Faff (2010) claim that there is a continuing downward trend in statistical arbitrage profitability. Beyond forecast errors that may cause a loss on long and short portfolios, arbitrageurs face other risks (2010), such as "noise trader risk", when illogical trading caused by noise traders prevents arbitrage approaches.
The Beta and Its Uncertainty Using the Kalman Filter

As a reminder, this study aims to build zero beta portfolios considering the uncertainties of the parameters. The Kalman Filter can be used as a tool to reduce the lack of precision caused by noise or by variables not considered in the valuation models, by minimizing a quadratic function of the estimator error (Grewal and Andrews, 2014). Given the characteristics of the problem presented in this paper, the Kalman Filter may be a useful tool for beta estimation.

The Assets' Expected Return and Their Uncertainty

Neto (2014) states that an asset's return is gauged by its value fluctuation plus the cash flow it generates. This research, therefore, proposes that a firm's expected return is the expected change in its market capitalization, determined by the ratio of the analysts' market cap forecast to the actual market cap at a given time t, coupled with the dividends projected by analysts. Furthermore, the standard deviation of all estimations (market cap and dividends) will quantify the uncertainties of these parameters. Therefore, the expected excess return of an asset can be stated as presented in Equation 1:

E[R_i] = (\widehat{MC}_i + \widehat{D}_i) / MC_{i,t} - 1 - r_f    (1)

where \widehat{MC}_i is the market cap target set by analysts, \widehat{D}_i is the target dividend set by analysts, MC_{i,t} is the market cap observed at time t, and r_f is the risk-free rate. The parameter uncertainties will be measured by the standard deviation of all estimates, where \Delta\widehat{MC}_i and \Delta\widehat{D}_i will be set by the deviations of the analysts' estimations, and \Delta\beta_i will be the uncertainty calculated by the Kalman Filter. Considering a naive approach where the uncertainties are independent, the uncertainty of a sum of uncertain values can be calculated by adding the individual uncertainties, while the fractional uncertainty of a product or quotient of uncertain values can be calculated by adding the fractional uncertainties (Taylor, 1997). Therefore, by applying those rules, the uncertainty of the expected excess return can be expressed as Equation 2:

\Delta E[R_i] = (\Delta\widehat{MC}_i + \Delta\widehat{D}_i) / MC_{i,t} + \Delta r_f    (2)

where \Delta represents a parameter's uncertainty. In addition, as r_f is the same for all assets, in this paper its value will be set as zero.

Uncertainties of the Parameters

The expected return of a portfolio is determined by the weighted mean of the predicted returns of each constituent asset. Correspondingly, the beta value of the portfolio for each factor is the weighted mean of the respective betas of each asset. Both the projected return and the beta values are point estimates. Yet, as alluded to in the study's introduction, it may be prudent to acknowledge the inherent uncertainty of these parameters. When building one portfolio for a long position and another for a short position with the aim of conducting a statistical arbitrage process, it may also be prudent to consider interval estimates, so that the investor is aware of the risks to be borne. Hence, in light of the mean-variance approach originally introduced by Markowitz (1952), the optimal "zero beta" portfolio in this research will be identified as the one possessing the greatest ratio of expected return to projected uncertainty. This approach underscores the significance of balancing potential returns and risk, paving the way for a more comprehensive and informed portfolio management strategy. The "zero beta" portfolio means that the short and the long portfolios have the same beta value. In addition, the portfolio expected return will be set by the difference between the expected return of the long positions and the expected return of the short position.
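As a concrete illustration of Equations 1 and 2 under the naive independence assumption above, consider the following minimal MATLAB sketch; all input values are hypothetical placeholders.

```matlab
% Expected excess return (Equation 1) and its naive uncertainty (Equation 2).
% All input values are HYPOTHETICAL placeholders.
mc_now  = 100;  % market cap observed at time t
mc_hat  = 112;  % analysts' market cap target
d_hat   = 3;    % analysts' dividend target
dmc_hat = 8;    % std dev of the analysts' market cap estimates
dd_hat  = 1;    % std dev of the analysts' dividend estimates
rf      = 0;    % risk-free term, set to zero as in the paper

exp_ret = (mc_hat + d_hat) / mc_now - 1 - rf;  % Equation 1
d_ret   = (dmc_hat + dd_hat) / mc_now;         % Equation 2 (naive rules)
fprintf('E[R] = %.3f +/- %.3f\n', exp_ret, d_ret);
```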
As in Markowitz (1952), the expected returns of the portfolios (either the long or the short one) are set by the weighted average of the individual assets' expected returns. Also, to pursue Markowitz's (1952) mean-variance maximization, the variance of each asset is set equal to the sum of the combined fractional uncertainties of its beta, market cap target, and dividends, as shown in the next section. Finally, this study makes a naive assumption for the covariances: the parameter uncertainties of different assets are treated as independent.

The Data

Data elements such as market capitalization, target market capitalization, distributed and expected dividends, along with their corresponding estimation standard deviations, have been obtained from the Refinitiv Eikon Database. In order to compute the betas via the Kalman Filter methodology, this research took into account the weekly percentage shift in each asset's value and the equally weighted average return. We initiated the beta estimation by performing a linear regression from January 18, 2013, through December 27, 2013. The coefficients obtained for each asset were subsequently implemented as an estimator for 2014. Additionally, we computed a fresh linear regression for each asset throughout 2014. Following that, the Kalman Filter, as described in section 2, was utilized in this research to amalgamate the estimates and estimate the betas for the successive years, in addition to their standard errors. During portfolio construction, computational constraints in optimally weighting the assets restricted the portfolio to one hundred assets per annum. In this research, we first established the long and short portfolios at the onset of 2015 and subsequently updated them at the start of every subsequent year through 2022. Furthermore, upon the conclusion of each period, the portfolios were assessed and juxtaposed with a portfolio that employs identical parameters but does not take their uncertainties into account.

The Results

As shown in Table 1, the actual realized return for each year differs substantially from the portfolio's expected return. In all cases, the realized returns were much lower than the expected ones, and in some cases, the realized returns were even negative. Those results are consonant with the market efficiency hypothesis defined by Fama (1970), even in its weak form. Columns 1 and 3 of Table 1 present the portfolios' expected returns after running the optimization tool. Additionally, columns 2 and 4 present the realized returns of the respective portfolios. From now on, the "zero beta" portfolios that maximized the ratio between the expected return and the parameters' uncertainties will be called long-short robust portfolios, while the "zero beta" portfolios that simply maximized the expected return, neglecting the parameters' uncertainties, will be called long-short normal portfolios. As expected, the long-short robust portfolio was more stable over time than the long-short normal portfolio.
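The beta estimation just described, an initial OLS beta that is then updated through time, can be sketched as a one-dimensional Kalman Filter with a random-walk state. This is a minimal illustration under assumed noise variances Q and R; it is not the authors' exact specification.

```matlab
% Random-walk Kalman Filter for a time-varying beta and its uncertainty.
% Assumed minimal model (not the authors' exact specification):
%   state:       beta(t) = beta(t-1) + w(t),    w ~ N(0, Q)
%   observation: r(t)    = beta(t)*m(t) + v(t), v ~ N(0, R)
% r: weekly asset return; m: equally weighted market return.
% Save as kalman_beta.m, or place at the end of a script.
function [beta_hat, beta_se] = kalman_beta(r, m, b0, Q, R)
    n = numel(r);
    beta_hat = zeros(n, 1);
    beta_se  = zeros(n, 1);
    b = b0;      % e.g. the OLS beta estimated on 2013 data
    P = 1;       % diffuse initial state variance
    for t = 1:n
        P = P + Q;                          % predict (random walk)
        K = P * m(t) / (m(t)^2 * P + R);    % Kalman gain
        b = b + K * (r(t) - m(t) * b);      % update with the innovation
        P = (1 - K * m(t)) * P;
        beta_hat(t) = b;
        beta_se(t)  = sqrt(P);              % beta estimation uncertainty
    end
end
```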
Figure 3 presents the box plots calculated from the weekly returns of both portfolios, where the blue box plot represents the weekly returns of the long-short robust portfolio and the orange box plot represents the weekly returns of the long-short normal portfolio.

Figure 3 - Box plot of the weekly returns of both portfolios. Source: own elaboration.

Since the long-short robust portfolio had a higher accumulated realized return from 2015 to 2022 and a lower standard deviation than the long-short normal portfolio, the robust portfolio consequently resulted in a higher ratio of realized return to variance, as shown in Table 2.

Conclusion and Future Studies

Even though the long-short robust portfolio developed in this study had a positive accumulated return from 2015 to 2022, in some years the returns were negative, consonant with the market efficiency hypothesis defined by Fama (1970). Considering the parameters' uncertainties when building the portfolio appeared beneficial in reducing its standard deviation. Future studies may include other risk factors besides market risk, such as the economic factors suggested by Chan, Roll, and Ross (1986) and/or the five-factor asset pricing model proposed by Fama and French (2015). It may also be worthwhile to test the approach in other markets as well as over longer periods.

Figure 2 - The risk of neglecting the estimation errors when pursuing an arbitrage.

According to Xue, Di, & Zhang (2019), Qin, Kar, & Zheng (2016), Chen & Peng (2017), and Huang (2012), the security market is very complex, and there are situations in which historical data cannot be used to predict a security's return, making it necessary to use experts' estimations. Echterling, Eierle, & Ketterer (2015) affirm that a common method presented in the financial literature to set the implied cost of capital is the use of analyst forecasts. Bielstein & Hanauer (2019) state that one of the practical difficulties of Markowitz's mean-variance portfolio optimization is estimating the stocks' expected returns; for that parameter, the authors use analysts' forecasts. Balakrishnan, Shivakumar, and Taori's (2021) empirical study concludes that "analysts' cost of equity estimates are meaningful expected return proxies". Fernandes, Ornelas, & Cusicanqui (2012) present a portfolio optimization technique that combines analysts' expectations with estimation risk. Zhai and Bai (2018) build a portfolio with experts' opinions about the expected return, in which the return distributions are considered as the securities' expected-return uncertainty. Xue, Di, and Zhang (2019) discuss portfolio selection in an environment of uncertainty in which the expected return is extracted from an expert's estimation. The portfolio selection articles of Chen, Li, & Liu (2019), Chen & Peng (2017), and Huang (2012) consider experts' estimations of the securities' returns and treat them as uncertain, with intervals instead of only a point estimate. Fabozzi, Huang, & Zhou (2009) assert that parameter estimates can be set by historical data or by expert prediction and that, in the latter case, instead of using the predictions of only one expert, it might be useful to combine the estimations from "several experts and consider each of their predictions as a likelihood distribution". Goetzmann and Massa (2005) construct a portfolio considering the dispersion of stock return opinions. Rapach, Strauss, and Zhou recommend the combination of numerous forecasts, which delivers better empirical out-of-sample equity premium predictions than individual forecasts. Finally, Verardo (2009) measured the uncertainty about a firm's fundamentals by the dispersion in analyst forecasts.
Table 1 - Portfolios' expected and realized returns per year.
Table 2 - Portfolios' return, standard deviation, and return-to-variance ratio.
2024-03-17T16:15:30.859Z
2024-03-13T00:00:00.000
{ "year": 2024, "sha1": "6d1671388088cbf7fa249cc96e3b75296a949c6a", "oa_license": "CCBYNC", "oa_url": "https://ojs.revistacontemporanea.com/ojs/index.php/home/article/download/3631/2798", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1b8778ba9609adbe17a0da468fe2cfd86def0418", "s2fieldsofstudy": [ "Economics", "Business", "Mathematics" ], "extfieldsofstudy": [] }
146953563
pes2o/s2orc
v3-fos-license
Comparison of frequencies of the notes in the range of B3 to D5 with the human voice: A software design approach in MATLAB and its availability in music skill tests

In this study, a program was developed that allows one to listen to sounds stored on a computer, with algorithms based on signal processing logic available via the Fourier Transform in the MATLAB program. The heard sound is recorded when it is repeated by the human voice, and its frequency is compared to the frequencies of the existing sounds in the computer to obtain measurement results. The first version of the program used piano notes in the range of B3-D5. The program generates measurement results in graphic form, corresponding to the notes sung. In addition, the frequency results are shown in the information display. One of the main objectives of the study is to develop a measurement system that could be applied in music special aptitude tests. In music aptitude tests, in the questions about repeating sounds, the jury's perception is altered by factors such as fatigue, and candidates' measurement results are affected by this problem. This program seeks to eliminate this negative effect on the measurement results of the aptitude test. The first version of the program developed in this study was tested on 10 people. A Pearson correlation analysis was performed with the aim of examining the relationship between students' achievement levels in the single-voice task and the fixed frequency values. The result of this analysis was a meaningful, positive and high-level relationship between the fixed values of the 16 note frequencies in the range of B3-D5 and the students' singing values for these 16 notes.

INTRODUCTION

Music is an art which has an important place in society. Performance of music is observed in different dimensions of human life. Those who wish to pursue the art of music must possess a certain degree of musical ability. Today, whether at a professional or amateur level, people begin their musical training only once their level of aptitude has been determined. To this end, students' skills and aptitudes in terms of music are measured and assessed in musical aptitude tests. In the field of measurement and evaluation, scores often consist of hearing, writing, and playing music (Sağer et al., 2014, 1). The programs admit the candidates to the exams individually and in groups, determining levels according to the identified sets of questions (Sağer et al., 2014, 36).
Measurement of musical talent is a concept based on sensation. How does this seemingly arbitrary concept work? We can answer this based on what we know about the physical structure of sounds and the sound perception mechanism of our hearing system. A certain sound that is closely connected to the brain and stored in its memory files will evoke aural memories (Zeren, 2008: 1). The sounds stored in the mind from birth, and even before birth in the womb, create various images and concept maps in our brains. Every aspect of the art of music is linked to the special meanings and concepts generated by distinguishing among these sound images. The current preferred explanation for these concepts is a physical one. The third step of a musical event is the analysis of the sound waves passing through the hearing organs (ear and brain); in other words, it is the perception and evaluation of sound. A major part of this step is a physical phenomenon; therefore, it is physically grounded and physically explained (Zeren, 2003: 5).

In physical terms, human beings have a limited ability to perceive sounds. The average human ear can detect sounds in the range of 20 to 20,000 Hz. However, people's perception of sound can be easily altered by environmental factors, among others. The main objective of this study is concerned with this situation: how can perception-based errors in the results of musical aptitude tests be removed?

There are several tests used to measure musical behaviours, abilities and skill levels, and there are a variety of approaches for the classification of these tests. Boyle and Radocy (1987) have divided music tests into four main categories:

1. Musical Aptitude Tests
2. Musical Achievement Tests
3. Musical Performance Tests
4. Musical Attitude and Other Emotional Variables Tests (Sağer et al., 2013: 543).

This study attempts to develop a program based on musical aptitude tests. It is not uncommon for members of the jury who evaluate the musical aptitude of the candidates to have some problems in evaluating the sounds produced by students, particularly as they become fatigued. This results in measurement errors which affect the results of the exam and, more importantly, the prospects of the candidates. The main principle in musical aptitude tests is to eliminate this kind of measurement error. However much attention is given to this kind of problem, measurement-related errors are nearly impossible to eradicate. In this vein, to what extent could computers be used to eliminate or minimize this kind of problem? The technical information needed to carry out such a study with current computer technology is given briefly below.

Moorer discusses the aims of signal processing in his work titled "Signal Processing Aspects of Computer Music". The results obtained in what we call analysis-based synthesis are refined and used to improve the synthesis. Also, recent developments in this field have made this concept more viable than previously thought, and these developments promise that this field will be rich for future studies (Moorer, 1977: 4). The rich study field projected in 1977 is now at our fingertips with today's technological facilities. That is to say, techniques in the field of sound synthesis have undergone striking developments.
Today, the most common sound synthesis setup is a computer and the speakers it controls. Everything from the simplest to the most complex voice synthesis can be generated in this environment. Such a broad sound space is available to musicians that it provides a bridge between the imagined and the heard (Arapgirlioğlu, 2003: 161).

On the basis of several studies, sine-curve models are often used for the signal representation, analysis, or transformation of music or speech. The most important step in the sine model is the estimation of the sine parameters from spectral peaks, which are acquired via approximate amplitude and frequency values and the Discrete Fourier Transform (DFT). Estimation is quite easy when the signals are stationary, and the quadratically interpolated FFT (QIFFT) estimator is used as a standard for this process. QIFFT fits a second-order polynomial model around the maximum of the spectral peak, using the logarithm of the amplitude and the unwrapped phase (Röbel, 2003: 68).

Continuing with this kind of study, Naranjo and Koffi's work named "geometric image modelling of musical objects" emphasizes that, after the revolutions in graphic systems, drawing two-dimensional graphics of musical phrases or three-dimensional topographic maps is now possible. When the article is investigated, it is understood that these drawings are made with the Discrete Fourier Transform (DFT) (Naranjo and Koffi, 1988: 70). In Alm and Walker's (2002) study, "Time-frequency analysis of musical instruments", there is detailed information on the principles and mathematical equations on which the Fourier Transform works. The authors mention that Fourier series are the classical mathematical theory that explains musical notes, and they present the relevant formulas and methods in detail (Alm and Walker, 2002: 459).

Studies on music education technology have gained momentum in the last twenty years. When these studies are assessed in terms of software, the following headings describe the software actively used in music education and its purposes:

1. Ear training programs prepared with various sound libraries
2. Notation programs
3. Virtual instrument programs
4. Instrument tuning programs
5. Studio recording programs

Koç, who suggests that by assessing the general features of computer-aided musical software it is possible to categorize these programs into groups, has classified the important field headings as music education programs (Instructional Software), practice-based programs (Practice/Accompaniment Software), notation programs (Notation/Scoring Software), and the "sequencer" system (Sequencing Software), that is, a desktop system used for music making by creating sequences of musical events (Koç, 2004: 2). Whichever program is used, they all have a common basis of computer code. With interfaces prepared using appropriate code, we have the opportunity to make computers operate precisely how we want. Some very effective software created in this way is being used actively in music education today.
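Returning to the QIFFT estimator cited above (Röbel, 2003), its quadratic-interpolation step can be illustrated compactly: a parabola is fitted through the log-magnitude of the peak FFT bin and its two neighbours to refine the frequency estimate. A minimal MATLAB sketch assuming a single stationary sinusoid; the test frequency is an arbitrary example.

```matlab
% Quadratically interpolated FFT (QIFFT) peak estimate for one sinusoid.
fs = 8000; N = 4096;
f_true = 442.3;                          % arbitrary test frequency (Hz)
t = (0:N-1)' / fs;
x = sin(2*pi*f_true*t);
w = 0.5 - 0.5*cos(2*pi*(0:N-1)'/(N-1));  % Hann window (no toolbox needed)
X = abs(fft(x .* w));
[~, k] = max(X(2:N/2)); k = k + 1;       % peak bin (skipping DC)
a = log(X(k-1)); b = log(X(k)); c = log(X(k+1));
delta = 0.5*(a - c) / (a - 2*b + c);     % parabolic peak offset in bins
f_hat = (k - 1 + delta) * fs / N;        % refined frequency estimate
fprintf('true %.2f Hz, estimated %.2f Hz\n', f_true, f_hat);
```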
Başuğur stresses that the advent of interactive software in education led to discussions and investigations about the applicability, the positive and negative sides, and the benefit-cost aspects of these programs; however, with the good results obtained from experiments, this new methodological understanding is no longer controversial. He mentions that interactive software is being used in the world's most respected music institutions, despite opposition suggesting that these programs eliminate emotion and spontaneity and limit creativity. The advantages of such software for lesson preparation or for compensating for personal shortcomings are significant (Başuğur, 2009: 2). Bauer et al. shed light on the situation of music education and training in 1984 in their study. They discuss research and investigations stressing that, in those years, effective music education required the effective use of technological facilities. The authors specifically note that, even in those years, there was a tendency to assess and underline the necessity of software and hardware resources (Bauer et al., 2003, 290). If we can make proper use of the technological facilities available today, we can improve the quality of music education through technology, as projected.

One of the programs designed for the development of musical talent is "Sibelius Auralia". In addition to providing rhythmic, melodic and range exercises, this music education software aims to improve the perception of variation or repetition while training the listener to hear with a musical ear. For this purpose, different test groups were created in the program. Through this program, the musical ear could be improved in preparation for music aptitude tests. AMMA, or "Advanced Measures of Music Audition", is another piece of music education software. In this program, which has different versions for various age and education levels, there is a system used to measure musical reasoning by presenting questions from an existing sound library in different ways. This program could potentially be a highly beneficial source of preparation for music aptitude tests. While these programs provide a helpful source of preparation for prospective candidates, what about software that will increase the reliability of measurement results in different question types? Is it possible to design such a program? What positive contributions could it make toward eliminating the measurement problems encountered in music talent exams and comprehensively identifying the talent of the candidates? To design programs that will answer these questions, we must look for environments which facilitate software development. MATLAB is one of them. Algorithms were designed as signal processing functions that can be implemented in MATLAB software.

The basic working units in MATLAB are matrices. The elements of the matrices may be real, complex, or symbolic. The matrices may be column vectors, row vectors, or scalars. For our purposes, almost all units are vectors or scalars (Su, 2002: 7). The widely used signal processing package MATLAB® makes the division by N in the inverse transform, giving a version more closely related to the Fourier transform pair of (1) and (2). However, the sum will then only represent the integral (over a finite-length time signal) when the forward transform is multiplied by Δt, the time interval between samples (Havelock et al., 2008: 83).
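The scaling convention noted by Havelock et al. can be checked directly: multiplying MATLAB's forward fft by the sample interval makes the finite sum approximate the continuous Fourier integral. A minimal sketch; the test signal and its parameters are arbitrary examples.

```matlab
% MATLAB's fft sum approximates the continuous Fourier integral when the
% forward transform is multiplied by dt, the interval between samples.
fs = 1000; dt = 1/fs; N = 1000;     % with N = fs, bins fall on whole Hz
t = (0:N-1)' * dt;
a = 50;
x = exp(-a*t);                      % decays to ~0 within the record
X = fft(x) * dt;                    % dt-scaled forward DFT
f = (0:N-1)' * fs / N;
X_true = 1 ./ (a + 1i*2*pi*f);      % analytic transform of exp(-a*t)u(t)
k = 11;                             % bin for f = 10 Hz
% The two agree up to the rectangle-rule discretization error.
fprintf('|X| at 10 Hz: DFT %.5f vs integral %.5f\n', ...
        abs(X(k)), abs(X_true(k)));
```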
It is possible to compare human sounds with the ones recorded in the computer using the capabilities of MATLAB. Rumsey, in his 2004 study, mentions MATLAB as a popular program whose interfaces can be used in signal processing (Rumsey, 2004: 235). Using the signal processing command lines and FFT functions of the MATLAB editor, a musical aptitude test application has been developed in this study. The basis of this application is comparing the frequencies of the human voice on single sounds. Further details about this program can be found in the findings and interpretations of this study.

The objective of the study

The aim is that music software prepared for the computer environment will contribute to the elimination of the measurement problems encountered during music talent tests, thereby fully identifying the talents of the candidates as part of a highly sensitive measurement program.

The importance of the research

The software prepared in this study will be a step towards eliminating human errors in scoring music aptitude tests, points of measurement and assessment which have a crucial position in educational plans. Additionally, in tests where the effort expended in these exams is taken into consideration, the installation of a multifaceted system in the exam environment will prevent a substantial loss of time and labour. Candidates are largely unable to perform satisfactorily, as they keenly feel the time pressure and are made nervous as a result; this impediment is particularly apparent in the question types dealt with by this program. In music aptitude tests where there are long queues, candidates cannot perform to their actual potential in a measurement which is taken hurriedly. By creating a full music talent test derived from the first version of this program and setting it up on 5 or 6 computers, multiple candidates could be tested at the same time. Organized this way, the candidates would not get nervous due to long waits. Thus, the negative impact of time pressure and overexcitement on the results obtained is eliminated; this, too, contributes to the importance of the study.

Premises

In this study, it has been assumed that measurement problems in music aptitude tests could be eliminated via new software that measures frequencies, and this software can also be considered an application for music aptitude tests consisting of code written with the MATLAB program. In the first testing of this software, the number of subjects was 10, all of whom had prior music education. It is assumed that this number is sufficient to test whether this frequency-measuring software runs properly.

Limitations

In the first version of the program, the sound range of the notes measured is limited to B3-D5. Candidates sitting the musical aptitude test are generally questioned about notes from this range. For this reason, files covering the B3-D5 range (created using piano sounds and the sound library of the program Logic Pro X) were generated in WAV format for use in the MATLAB editing program. It is possible to measure their frequencies with the functions in MATLAB and compare them with the human voice.
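The 16 equal-tempered notes from B3 to D5 span roughly 247 to 587 Hz, and their reference frequencies, against which a sung note can be compared, follow from the standard relation f = 440 * 2^((n - 69)/12) for MIDI note number n. A minimal MATLAB sketch; the use of MIDI numbering is an implementation choice here, not something the paper specifies.

```matlab
% Reference frequencies of the 16 equal-tempered notes B3..D5 (A4 = 440 Hz).
% MIDI numbering (B3 = 59, D5 = 74) is an implementation choice.
names = {'B3','C4','C#4','D4','D#4','E4','F4','F#4','G4','G#4', ...
         'A4','A#4','B4','C5','C#5','D5'};
midi  = 59:74;
freqs = 440 * 2.^((midi - 69) / 12);
for k = 1:numel(midi)
    fprintf('%-4s %7.2f Hz\n', names{k}, freqs(k));
end
```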
1. Playing the sounds in the B3-D5 range from the audio library so that the candidate hears them. The relevant MATLAB command lines were investigated and used for this purpose, completing the algorithms needed for the first version of the program.
2. Whenever a sound is played, displaying its measured frequency in the corresponding part of the program interface. The MATLAB editor was used to create the application interface.
3. Recording the sound heard through the microphone. An algorithm attached to the Save button stores the user's voice in wav format during program execution.
4. Measuring the frequency of the recorded sound and displaying it in the corresponding part of the program interface. The frequency of the candidate's voice and that of the reference sounds already on the computer are shown on the information display. This is achieved by extending the algorithm of the first step to measure the frequency of the sound both on the computer and as sung by the candidate.
5. Creating an "add user" option to name the recorded sounds. As this application has been designed for aptitude tests, recording candidates' information is important, so a filing system was created in the program. Candidates with doubts about their recordings can easily check their responses: the sounds they have sung are filed on the computer in wav format together with the related information entered in the program.
6. Creating graphics both of the sounds already in the program and of the ones to be recorded. The graphing features enhance the program's capacity for analysis and detailed research. Graphics of amplitude, magnitude, frequency and playing duration are produced with various command lines for the sounds the program plays or records. Information about these graphics is included in the following sections of the study.

After this stage, the program was tested on 10 different people, and the frequencies produced when they sang the notes between B3 and D5 were measured. Only musically trained individuals were involved in this experimental implementation. Detailed information on the test results is given in the findings and interpretation of this study.

Research model
This research is a trial of developing a computer-oriented program to measure and assess special aptitude exams. It also uses a relational scanning model to assess the relationship between the values students produce through the program and its fixed frequency values.

FINDINGS AND INTERPRETATION
The interface of the program created with MATLAB is shown in Figure 1. As this is the first version of the program, functions in the image such as "play two sounds", "play three sounds" and "play four sounds" have not been activated; they will be activated in subsequent versions. For the time being, this section describes the core principles of the program and its implementation for a single voice. The notes in the B3-D5 range were converted to wav format using the piano sounds in Logic Pro and are played when the buttons bearing their names are clicked in the interface created with MATLAB (Figure 2). The operation of the note buttons is generally provided by the wavread and sound functions of the program.
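The core operation the program builds on, estimating the fundamental frequency of a short recording by locating the dominant peak of its FFT magnitude spectrum, can be sketched compactly. The paper implements this in MATLAB; the sketch below is an illustrative Python equivalent rather than the paper's own code, and the file name in the usage comment is a hypothetical example.

```python
import numpy as np
from scipy.io import wavfile

def estimate_frequency(wav_path):
    """Estimate the dominant frequency (Hz) of a wav file via the FFT peak,
    mirroring the FFT-based measurement described for the MATLAB program."""
    fs, samples = wavfile.read(wav_path)          # sample rate and raw samples
    if samples.ndim > 1:                          # fold stereo to mono if needed
        samples = samples.mean(axis=1)
    samples = samples - samples.mean()            # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples))       # magnitude of one-sided FFT
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    valid = freqs > 50.0                          # ignore DC/rumble bins before peak picking
    peak_bin = np.argmax(spectrum[valid])
    return freqs[valid][peak_bin]

# Hypothetical usage: compare a candidate's recording with the A4 reference.
# print(estimate_frequency("recorded_A4.wav"))   # e.g. close to 440 Hz for A4
```

For sung notes in the B3-D5 range the strongest FFT peak usually coincides with the fundamental, but for some voices a harmonic can dominate; a production version might therefore prefer autocorrelation or a harmonic product spectrum. That refinement is a design suggestion, not something described in the paper.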
To save the note as reproduced by the human voice, the Save button is pressed and the voice is recorded for five seconds; the interface's recording feature uses the MATLAB object har = dsp.AudioRecorder, which makes audio recording available (Figure 3). The frequency of the recorded voice is measured by pressing the Result button; results are displayed on the information screen of the interface using the wavread and display functions in the relevant instruction line (Figure 4).

The "Add User" button creates identification information for the sounds whose frequencies will be measured. Clicking the empty field shown below it and typing one's name performs this process. For example, after playing the note A4, if you want to record your own singing of it, you click the Save button, and a wav file named as entered under "Add User" appears on the desktop (Figure 5). The information screen displays instant messages such as "user was added successfully" or "recording has been completed; to see the result click on the Results button". When the Results button is clicked, the frequency of the recorded voice is measured and shown in the "recorded sound" section of the user interface (Figure 6).

Data obtained from the program
The frequencies of the notes in the B3-D5 range used in the program were first re-measured by the program itself; the results are listed in Table 1. In the second stage of the study, the sounds were reproduced by the candidates and recorded with a microphone using the program functions. Four screenshots from this application are shown in Figures 7-10. Tables 2 and 3 were created to compare the frequencies. In the first test results, the differences between the reference sounds on the computer and those sung by the candidates, arising from the nature of the human voice, were observed not to exceed 10 Hz.

Interpreting the results of candidate P1 in Table 1, the difference between the frequency values does not exceed 5 Hz, and the closest value differs by less than 1 Hz. This difference is due to the intensity of the vibrations of the human voice. The frequency comparisons in the table also show that the user who sought to imitate the sounds achieved a high rate of success. Broadly, differences within the range of -10 to +10 Hz are considered acceptable; in this test even a 5 Hz difference was not exceeded, near or far from the reference.

When the correlation results in Table 4 are analysed, there is a positive and significant relationship between the fixed frequency values of the notes between B3 and D5 (x̄ = 394.87) and the students' sung values (x̄ = 390.39) at the end of the program application (Figure 11). Given this finding, it can be said that the students closely replicated the fixed frequency values and were very successful in the program implementation. Additionally, according to the Pearson analysis results, the students came very near to the fixed frequency values and were very successful in the single-voice exam when differences of -10 to +10 Hz are accepted. It therefore seems reasonable to accept as correct those results that fall within ±10 Hz of the exact pitch; subsequent versions of the program will be developed accordingly.
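The ±10 Hz acceptance rule translates directly into a scoring function. The following Python sketch is illustrative (the helper name, the reference table and the default tolerance are assumptions, with standard equal-temperament frequencies used for the references); it reuses the estimate_frequency helper sketched earlier.

```python
# Equal-temperament reference frequencies (Hz) for a few notes in the B3-D5 range.
REFERENCE_HZ = {"B3": 246.94, "C4": 261.63, "A4": 440.00, "D5": 587.33}

def score_attempt(note_name, measured_hz, tolerance_hz=10.0):
    """Return True if the sung frequency is within +/- tolerance of the reference,
    following the -10/+10 Hz acceptance band described in the study."""
    reference = REFERENCE_HZ[note_name]
    return abs(measured_hz - reference) <= tolerance_hz

# Hypothetical usage with a measured value of 438.7 Hz for A4:
# print(score_attempt("A4", 438.7))   # True: |438.7 - 440.0| = 1.3 Hz <= 10 Hz
```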
Graphics in the first step
Figure 12 shows the waveforms of the single voices used in the program. To draw the graphic, the signals of the notes between B3 and D5 are taken from the program's sound library with the function named t. The graphic shows the duration of the signal in seconds on the x-axis and its level in dB on the y-axis.

Graphics in the second step
In Figure 13, using the graphing feature of the program and the t function mentioned above, the magnitude values of the sound data are plotted against the recording duration on the x-axis, with magnitude in dB on the y-axis.

Graphics in the third step
In Figure 14, the Fourier transform algorithm is first applied to compute the absolute values of the spectra and the lengths of the existing B3-D5 sound files. The graphic of the related note is then drawn with frequency (Hz) on the x-axis and magnitude on the y-axis.

Graphics obtained from the recorded voice in the fifth step
Figure 16 shows frequency on the x-axis and magnitude on the y-axis, obtained by applying the same functions and commands to the recorded sound; it again generally displays the absolute spectrum. More detailed data and graphic displays are obtained using the zoom features of the program in the third, fourth and fifth steps; this study presents only general information about these graphics.

CONCLUSION AND SUGGESTIONS
According to the data obtained so far, measurement and assessment can easily be carried out with this software prepared for single-note music talent exams and, more importantly, reliable results are obtained. Analysing the graphics of the results gives access to more detailed information. Using this program, developed with MATLAB code and in particular the Fourier transform algorithm, in music aptitude tests could prevent erroneous measurements that arise from reliance on hearing based on human perception. This is very important from an educational standpoint: measurement and evaluation is one of the main objectives of education, and it matters even more in musical aptitude tests, where candidates are selected according to their level of aptitude. It is very important that students taking these exams can trust the reliability of the tests and thereby minimize their nervousness.

Another aspect of the aptitude test is the excitement factor, which affects candidates' performance. A program such as this could be extended to question types identifying intervals and chords of two, three and four notes. The negative factors of time and excitement could likewise be reduced with an aptitude measurement exam that allows questions on melody and rhythm to be repeated; it would be fairly straightforward to incorporate these into the program by designing the necessary algorithms.
In many music aptitude tests, problems result from the fallibility of the evaluating jury, as well as from the stress of candidates who take the exam in quick succession. Candidates may sing the sound they hear incorrectly due to excitement and tiredness after a prolonged wait. To avoid this, exams could be administered individually in a more relaxed environment, using many computers placed in dedicated exam rooms equipped with security cameras; candidates would take their exams whenever they are ready. Moreover, the measurement would yield a definitive score. An exam of this kind, which ranks candidates by comparing their frequencies, is needed when considered in these respects.

By developing the basic principles of the program's algorithm, precise measurement could also be carried out in ear-training exercises, by applying these principles to two-note intervals and to three- or four-note chords in root position and in their inversions. In this way, the material errors encountered in aptitude test results would be reduced to a minimum. Although it may seem difficult at first, this function can be constructed logically. When two, three or four simultaneous notes are played to a candidate, he or she reproduces the sounds one by one, separating them. By adding a few steps to the algorithm used for single sounds, the sounds given at certain time intervals can be recorded with a microphone and their measured frequencies displayed separately in the results section for responses to two, three or four notes. In this way, the extent of error or accuracy can be determined more clearly and comprehensively than with a single voice alone.

Regarding the usefulness of this study, the first measurement showed a significant positive relationship in the Pearson correlation test results (**p < .01) for the musically trained students. Given this finding, it can be said that the students produced results significantly close to the fixed frequency values and were quite successful in the program implementation. It seems essential to develop this application further for use in aptitude tests; by developing the first version of the program, positive contributions could be made to music education with measurements at this level.

Regarding possibilities for use in other fields, this program could potentially be applied in medical studies related to voice, such as determining which frequency intervals particular vocal cords produce. It could also be used as a competitive music game, presenting sounds at graded levels as in a musical intelligence game. The sounds of the piano and other instruments can be compared using the Save button; for instruments other than piano, information about the pitch and frequency of their characteristic sounds can be obtained by recording them.
Such examples can easily be multiplied; the MATLAB command set offers these facilities. It is also clear that the "musical aptitude test" developed in MATLAB runs according to the foreseen objectives. All the results of the program and of its implementation (on trained people, with single sounds) indicate that the musical aptitude test developed here fulfils the objectives of this study and will shed light on further studies by constituting a basis for them. With further development, a full-fledged Music Aptitude Test application could be produced for worldwide use; the project underlying this study will thus contribute to the field of music.

Figure 1. General overview of the program.
Figure 7. Measurement results from P1 for the D4 note.
Figure 9. Measurement results from P1 for the A4 note.
Figure 13. Magnitude values of the sound data.
Figure 14. Absolute values of the spectra and lengths of the existing B3-D5 sound files.
Figure 16. Frequency on the x-axis and magnitude on the y-axis.
Table 1. Frequencies of the notes in the range B3-D5.
Table 2. Frequency values obtained from the computer sounds (B3-D5) and candidates P1-P5.
Table 3. Frequency levels of the computer sounds and candidates P6-P10 in B3-D5.
Table 4. Frequency levels of the computer sounds and candidates P1-P10 in B3-D5; Pearson correlation analysis results, **p < .01.
Meta Distribution-optimal Base Station Deployment for Finite-Area Mobile Networks

User mobility in cellular networks deployed in finite areas results in a non-uniform spatial distribution of the mobile users (MUs), necessitating the effective deployment of base stations (BSs) in order to enhance network connectivity. In this paper, we analyze the optimal non-uniform distribution of the BSs that provides ultra-reliable connectivity in finite-area cellular networks, where the MUs move according to the random waypoint (RWP) model. Using stochastic geometry tools, we first establish an analytical and tractable mathematical framework to investigate the uplink success probability in non-uniform cellular networks. Furthermore, analytical expressions for the moments of the conditional success probability are derived, and a simple approximation of the meta distribution (MD) is calculated, leveraging the moment-matching method with the Beta distribution. Finally, we exploit the MD as a criterion for defining the optimal BS distribution, conditioned on the users' mobility. Our results reveal that the developed framework provides guidance for the design of cellular networks, efficiently determining the optimal spatial deployment that offers ultra-reliable connectivity based on the mobility of the MUs.

I. INTRODUCTION
Network densification via dense deployment of small cells, such as pico- and femtocells, is advocated as the key enabling technology to enhance network capacity and achieve high throughput [1]. Nevertheless, the existence of areas with heavy traffic load (also known as hotspots), which depends strongly on the properties of user mobility, may compromise network performance [2]. Hence, modeling and analyzing user mobility in small-cell networks is of paramount importance, as insightful design guidelines can be derived to ensure ultra-reliable connectivity.

Regarding the effect of node mobility on network performance, different mobility models have been proposed. The most widely used, owing to its tractability, is the random waypoint (RWP) model [3]. The authors in [4] evaluated the secrecy outage performance in the presence of moving interferers under the RWP model. For unmanned aerial vehicle (UAV) communications, the authors in [5] derived the statistics of the signal-to-interference ratio (SIR) when multiple UAVs move according to the RWP model. Nevertheless, the above-mentioned works assume an infinite network area and therefore preserve the uniformity of the node distribution. In practice, cellular networks, and especially the small cells of HetNet deployments, have finite boundaries [6]. In such finite-area networks, whose nodes follow the RWP model, the resulting (steady-state) node distribution is far from uniform [7]. Because of that, the ability of a device to access the network and the interference it observes depend on the device's location. Many works have shown that RWP-based mobility models result in a concentration of nodes close to the center of the area. Thus, knowledge of the actual node distribution is of critical importance when studying the impact of mobility on the performance metric of interest. The above studies evaluate the considered performance metric, i.e., coverage probability, throughput, etc., at the typical link.
While such a metric is certainly important, it cannot reflect the performance variation among individual users, due to the spatial averaging involved. To overcome this limitation, the authors in [8] introduced the concept of the meta distribution (MD), which provides fine-grained information about the performance of individual links. This metric can answer the critical question: "What fraction of devices in a finite-area cellular network achieve an SIR of θ with probability at least x, in a realization of the cellular network?", and hence allows an in-depth exploration of how the node distribution can be exploited to achieve ubiquitous connectivity.

Motivated by the above, in this paper we evaluate the MD of the uplink (UL) SIR in finite-area cellular networks with non-uniform base station (BS) deployment, where the users move according to the RWP model. The main contribution of this paper is the development of a novel mathematical framework that determines the optimal spatial distribution of the BSs with respect to the RWP-based movement of the mobile users (MUs), leading to maximum network connectivity. By leveraging tools from stochastic geometry, we derive the b-th moment of the conditional success probability in finite-area cellular networks, and a Beta-distribution approximation of the MD is calculated. Finally, we exploit the MD as a criterion for defining the optimal BS distribution, conditioned on the users' mobility. Numerical results reveal that there exists an optimal BS deployment that maximizes network connectivity according to the MUs' mobility, highlighting the impact of non-uniform MU distributions.

II. SYSTEM MODEL
A. Network Model
We consider the UL of a single-tier cellular network, whose nodes are confined in a circular disk A ⊂ R² of area |A| = πR² [2]. We model the locations of the BSs as a non-uniform Poisson point process (PPP) Φ_b of intensity λ_b(r, φ) in the disk A and zero intensity elsewhere. Although real-world network deployments are usually non-uniform, a PPP-based spatial deployment is chosen to facilitate the analysis and can be regarded as a benchmark for evaluating the performance achieved with more sophisticated point processes. For simplicity, we focus on quadratic, radially symmetric BS distributions, for which the intensity function is given by
λ_b(r) = λ_b (1 − β_b R²/2 + β_b r²),   (1)
such that there are, on average, 2π ∫₀^R λ_b(r) r dr = λ_b |A| BSs in A [9]. As illustrated in Fig. 1, the parameter β_b defines the BS deployment within A and allows us to interpolate between three deployment distributions, namely uniform (β_b = 0), concentrated near the border (β_b > 0), and concentrated near the center of the deployment region (β_b < 0). By Slivnyak's theorem [12], the BS at the origin becomes the typical BS under expectation over Φ_b. Orthogonal access, such as orthogonal frequency-division multiple access, is assumed, i.e., each BS schedules only one MU on each resource block. MUs are spatially distributed according to a homogeneous PPP Φ_u of intensity λ_u. We consider the nearest-MU association rule, i.e., the typical BS at the origin communicates with its closest MU. All wireless signals are assumed to experience both large-scale path-loss effects and small-scale fading. Specifically, the small-scale fading between two nodes is modeled as Rayleigh fading with unit average power, where different links are assumed to be independent and identically distributed. Hence, the power of the channel fading is an exponential random variable with unit mean, i.e., h ~ exp(1). For the large-scale path loss, we assume the unbounded singular path-loss model L(X, Y) = ||X − Y||^(−a), in which the received power decays with the distance between the transmitter at X and the receiver at Y, where a > 2 is the path-loss exponent.
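The quadratic deployment model in (1) is easy to sample numerically by thinning a homogeneous PPP whose intensity upper-bounds λ_b(r). The following Python sketch is a simulation aid of our own, not part of the paper's framework; the function name, RNG seed, and example parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_quadratic_ppp(lam, beta, R):
    """Sample a PPP on a disk of radius R with intensity
    lam * (1 - beta*R^2/2 + beta*r^2), via thinning of a homogeneous PPP."""
    lam_max = lam * max(1 - beta * R**2 / 2, 1 + beta * R**2 / 2)  # intensity upper bound
    n = rng.poisson(lam_max * np.pi * R**2)                        # points of the bounding PPP
    r = R * np.sqrt(rng.random(n))                                 # uniform radii on the disk
    phi = 2 * np.pi * rng.random(n)
    keep = rng.random(n) < lam * (1 - beta * R**2 / 2 + beta * r**2) / lam_max
    return r[keep] * np.cos(phi[keep]), r[keep] * np.sin(phi[keep])

# Example: BSs concentrated near the center (beta < 0), lam = 1e-4, R = 500 m.
x, y = sample_quadratic_ppp(1e-4, -2 / 500**2, 500)
```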
B. User Mobility Model
The movement of a MU within the finite area A ⊂ R² can be described as follows. Initially, each MU is placed at a point P1 chosen from the uniform distribution Φ_u in A. Each MU then uniformly chooses a destination (also called a waypoint) P2 in the region A and moves towards it with a randomly selected speed u ∈ [u_min, u_max], which remains constant during that movement. A new direction and speed are chosen only after the MU reaches its destination. It is important to mention that the MUs bounce back when they reach the boundary, so the number of MUs in A remains constant, i.e., λ_u|A|. A MU starting near the boundary of the network area clearly finds more destination waypoints in directions toward the center of the area than toward the border. As time passes and the MUs perform a number of movement periods, the spatial distribution of the MUs becomes more and more non-uniform. For a long running time of the movement process, a stationary distribution, also known as the steady-state distribution, is reached [3]. The intensity measure of this steady-state distribution of MUs is
Λ_u(B(o, r)) = λ_u π r² (α_u + β_u r²/2), r ≤ R,
and the intensity function is thus
λ_u(r) = λ_u (α_u + β_u r²), with α_u = 1 − β_u R²/2.
Note that the special case β_u = 2/R² corresponds to the RWP-on-border (RWPB) mobility model, while the case where the MUs are static, i.e., u = 0, and therefore uniformly distributed, is captured by β_u = 0. Then, the probability density function (pdf) of the distance R between the typical BS and its serving MU is given by [3]
f_R(r) = 2π r λ_u(r) exp(−Λ_u(B(o, r))).   (2)
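The non-uniformity of the RWP steady state can be checked by simulation. The Python sketch below is an illustrative aid (not from the paper): it samples positions along waypoint legs, with each leg contributing samples in proportion to its length, which approximates time-uniform (steady-state) positions under the simplifying assumption of a common constant speed on every leg.

```python
import numpy as np

rng = np.random.default_rng(1)

def _uniform_point(R):
    r = R * np.sqrt(rng.random())
    phi = 2 * np.pi * rng.random()
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def rwp_radii(R=500.0, legs=5000, step=5.0):
    """Radii of positions visited by an RWP node in a disk of radius R.
    Samples per leg are proportional to leg length (constant speed),
    so the collection approximates the steady-state spatial distribution."""
    radii = []
    p = _uniform_point(R)
    for _ in range(legs):
        q = _uniform_point(R)                              # next waypoint, uniform in the disk
        n = max(1, int(np.linalg.norm(q - p) / step))      # samples proportional to leg length
        t = (np.arange(n)[:, None] + 0.5) / n              # fractions along the leg
        pts = (1 - t) * p + t * q
        radii.append(np.hypot(pts[:, 0], pts[:, 1]))
        p = q
    return np.concatenate(radii)

r = rwp_radii()
hist, edges = np.histogram(r, bins=20, range=(0.0, 500.0))
area = np.pi * (edges[1:]**2 - edges[:-1]**2)
density = hist / area / len(r)   # empirical spatial density: highest near the center
```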
The MD has been introduced as a performance metric that provides a complete spatial distribution rather than merely the spatial averages considered in the majority of works in the literature. The MD of the SIR is the distribution of the conditional success probability given a realization of the point process. Specifically, the MD of the SIR is the two-parameter distribution function
F̄_{P_s}(θ, x) = P^{!o}[P_s(θ) > x], θ ≥ 0, x ∈ [0, 1],
where P_s(θ) = P[SIR > θ | Φ] is the success probability conditioned on the PPP Φ, and P^{!o} is the reduced Palm probability [8]. For our network setup, the UL SIR at the typical BS is given by
SIR = h_{x₀} r_{x₀}^{−a} / Σ_{x ∈ Φ_I} h_x r_x^{−a},   (5)
where x₀ is the location of the MU that communicates with the typical BS, h_x is the power of the channel fading between the typical BS and the MU at x, and Φ_I ⊂ Φ_u \ {x₀} represents the point process of active interfering MUs (see Section III-A).

III. ANALYTICAL RESULTS FOR THE META DISTRIBUTION
In this section, we first characterize the received interference at the typical BS and then derive the moments of the conditional success probability. We provide analytical expressions that will be useful for computing the MD of the UL SIR in Section IV. Throughout this paper, we denote by r_x the distance between the origin and a MU located at x.

A. Interference Characterization
Firstly, we investigate the statistical properties of Φ_I, the non-uniform distribution of the active interfering MUs. The locations of the active MUs can be seen as a Voronoi-perturbed lattice process [10]. However, the analysis of the network performance with such an approach is not mathematically tractable. Therefore, for tractability, the aggregate interference seen at the typical BS is approximated by the interference from a non-homogeneous PPP [11], which is characterized in the following lemma.

Lemma 1. The active interfering MUs follow a non-homogeneous PPP Φ_I with density function
λ_I^u(r) = λ_u(r) δ(r) [1 − exp(−π λ_u(r) δ(r) r²)],
where δ(r) represents the load factor of a cell at distance r,
δ(r) = (Ω(r))^K γ[K, π(λ_u(r)R² + Ω(r))] / (Γ[K] (R²λ_u(r) + Ω(r))^K),   (6)
Ω(r) = K R² λ_b(r), K = 3.575, and γ[·, ·] denotes the lower incomplete gamma function.
Proof. See Appendix A.

Recall that λ_f(r) = λ_f (1 − β_f R²/2 + β_f r²) is the density function of the nodes belonging to the non-homogeneous PPP Φ_f, where f ∈ {b, u}. We simplify the analysis by considering the asymptotic case λ_u → ∞, which is in line with the expected huge proliferation of end-user devices in future communication systems. Hence, we approximate the above densities by step functions,
λ̃_f(r) = 2 λ_f ∆_f(r),
where ∆_f(r) = 1_{r<R̄} for β_f < 0 and ∆_f(r) = 1_{r>R̄} for β_f > 0; here R̄ represents the maximum (minimum) distance of a node x ∈ Φ_f from the origin for the case β_f < 0 (β_f > 0), and 1_X denotes the indicator function, with 1_X = 1 if X holds and 1_X = 0 otherwise. Based on Fig. 1, we assume R̄ = R/√2 and β_u = β_b = β. The validity of these assumptions is shown in the numerical results.

Proposition 1. In an ultra-dense MU deployment, every BS has an active MU in its cell, so the intensity of the active interfering MUs can be rewritten as
λ̃_I^u(r) = 2 λ_b ∆(r),
where ∆(r) = 1_{r<R̄} if β < 0, and ∆(r) = 1_{r>R̄} otherwise.

B. Moments of the Conditional Success Probability
By definition, the b-th moment of the MD is M_b(θ) ≜ E[P_s(θ)^b]. The following lemma provides analytical expressions for the moments of the conditional success probability.

Lemma 2. The b-th moment M_b of the conditional success probability for the typical BS in UL finite-area cellular networks with user mobility is given by
M_b(θ) = ∫₀^R exp(−2π ∫₀^R [1 − (1 + θ r^a v^{−a})^{−b}] λ_I^u(v) v dv) f_R(r) dr,
where f_R(r) denotes the pdf of the distance between a BS and its serving MU, given by expression (2).
Proof. Using the moment generating function of an exponential random variable [12], the conditional UL success probability is given by
P_s(θ) = ∏_{x∈Φ_I} (1 + θ r_{x₀}^a r_x^{−a})^{−1}.
Then the b-th moment of the conditional success probability can be written as
M_b = E[∏_{x∈Φ_I} (1 + θ r_{x₀}^a r_x^{−a})^{−b}] = E_{r_{x₀}}[exp(−2π ∫₀^R (1 − (1 + θ r_{x₀}^a v^{−a})^{−b}) λ_I^u(v) v dv)],
where the second equality follows from (5) and the probability generating functional of the PPP. Unconditioning on r_{x₀} using (2) yields the desired expression.
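For concreteness, the moments in Lemma 2 can be evaluated by nested numerical quadrature. The Python sketch below does so under the step approximation of Proposition 1 (β < 0 case); the parameter values mirror the illustrative ones from the numerical section, and the implementation reflects the reconstruction of Lemma 2 above rather than the paper's own code.

```python
import numpy as np
from scipy import integrate

R, lam_b, lam_u, a_pl = 500.0, 1e-4, 1e-4, 4.0   # assumed illustrative parameters
beta = -2 / R**2
R_bar = R / np.sqrt(2)

def lam_u_r(r):
    return lam_u * (1 - beta * R**2 / 2 + beta * r**2)

def f_R(r):
    """Serving-distance pdf of Eq. (2)."""
    Lam = lam_u * np.pi * r**2 * (1 - beta * R**2 / 2 + beta * r**2 / 2)
    return 2 * np.pi * r * lam_u_r(r) * np.exp(-Lam)

def lam_I(v):
    """Step-approximated interferer intensity (Proposition 1, beta < 0)."""
    return 2 * lam_b * (v < R_bar)

def M_b(theta, b):
    def inner(r):
        g = lambda v: (1 - (1 + theta * r**a_pl / v**a_pl) ** (-b)) * lam_I(v) * v
        # as v -> 0 the integrand tends to lam_I(v)*v -> 0, so the quad is well behaved
        val, _ = integrate.quad(g, 1e-6, R, limit=200, points=[R_bar])
        return np.exp(-2 * np.pi * val) * f_R(r)
    out, _ = integrate.quad(inner, 1e-6, R, limit=200)
    return out

# First two moments at theta = 1 (0 dB), feeding the Beta approximation below:
m1, m2 = M_b(1.0, 1), M_b(1.0, 2)
```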
Even though the expressions in Lemma 2 can be evaluated using numerical tools, this can be difficult due to the presence of multiple integrals. To address this, we simplify the analysis by employing the assumptions used for Proposition 1, and we also consider the special case a = 4.

Lemma 3. In an ultra-dense MU deployment with a = 4, the b-th moment M_b of the conditional success probability can be rewritten in closed form, with separate expressions for the cases β < 0 and β > 0 involving the inverse tangent function arctan[φ].
Proof. Based on the approximated intensity function λ̃_u(r), the pdf can be calculated as f_R(r) = d[1 − exp(−Λ(r))]/dr, where the intensity measure is Λ(r) = 2π ∫₀^r λ̃_u(v) v dv and ∆(·) is given in Proposition 1. Then, based on the Bernoulli inequality (1 + x)^y ≥ 1 + yx for x > −1, the moments of the conditional success probability can be simplified; due to space limitations, the simplified expression is omitted. Finally, the desired expressions in Lemma 3 are derived by substituting a = 4 and unconditioning on r_{x₀} with the aid of f_R(r).

IV. META DISTRIBUTION AND OPTIMAL DEPLOYMENT
In this section, we evaluate the MD of the SIR to characterize the reliability of connectivity when MUs move according to the RWP model in finite-area networks. This fine-grained performance metric indicates what fraction of MUs, in each realization of the point process, can successfully decode the received signal with probability at least τ. Finally, the optimal deployment of the small cells that maximizes the MD of the SIR is investigated, aiming to provide ultra-reliable connectivity for the MUs.

A. Meta Distribution and its Beta Approximation
According to the Gil-Pelaez theorem [8], the MD can be calculated as
F̄_{P_s}(θ, x) = 1/2 + (1/π) ∫₀^∞ Im[e^{−jt ln x} M_{jt}(θ)] t^{−1} dt,
where Im[z] denotes the imaginary part of z ∈ C and M_{jt} are the imaginary moments. The authors in [8] showed that the MD can be approximated by matching the mean and variance of a Beta distribution with M₁ and M₂ given in Lemma 2. The first and second moments of a Beta-distributed random variable X with shape parameters η, κ > 0 are given by E[X] = η/(η + κ) and E[X²] = η(η + 1)/((η + κ)(η + κ + 1)). The following theorem provides the MD for our considered network setup.

Theorem 1. For the considered network deployments, the MD is approximated by the Beta distribution as
F̄_{P_s}(θ, x) ≈ 1 − I_x(η, κ) = 1 − (∫₀^x t^{η−1}(1 − t)^{κ−1} dt) / B(η, κ),   (13)
where B(·, ·) is the Beta function, η = M₁(M₁ − M₂)/(M₂ − M₁²), κ = η(1 − M₁)/M₁, and M₁ and M₂ are given in Lemma 2.
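The moment matching in Theorem 1 is mechanical once M₁ and M₂ are available. A minimal Python sketch follows; the moment values in the usage example are placeholders, whereas in the paper they come from Lemma 2.

```python
from scipy.stats import beta as beta_dist

def md_beta_approximation(m1, m2):
    """Match a Beta(eta, kappa) distribution to the first two moments of the
    conditional success probability and return the approximate MD,
    F(x) = P[P_s(theta) > x] ~ 1 - I_x(eta, kappa)."""
    var = m2 - m1**2
    eta = m1 * (m1 - m2) / var          # shape parameters from moment matching
    kappa = eta * (1.0 - m1) / m1
    return lambda x: 1.0 - beta_dist.cdf(x, eta, kappa)

# Example with illustrative moments M1 = 0.7, M2 = 0.55:
md = md_beta_approximation(0.7, 0.55)
print(md(0.9))   # fraction of links whose conditional success probability exceeds 0.9
```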
B. Meta-Distribution-Optimal Deployment
From the network operator's point of view, the deployment of small cells focuses on eliminating coverage holes in outdoor and indoor environments, so their distribution and density must ensure ubiquitous connectivity. Since increasing the number of small cells is prohibitively expensive, we aim to maximize network connectivity by effectively deploying the small cells according to the traffic density (i.e., the hotspots) of the network. Hence, in this section we investigate the optimal distribution parameter of the BSs that provides ultra-reliable connectivity for the considered network setup. Specifically, based on the RWP-based movement of the MUs, we determine the parameter of the non-uniform BS distribution that maximizes the achieved MD of the UL SIR. Let β*_b denote the BS distribution parameter that maximizes expression (13),
β*_b = arg max over β_b of F̄_{P_s}(θ, x),   (14)
conditioned on β_u and on the densities λ_u of the MUs and λ_b of the BSs. Equation (14) is a maximization over multiple integrals and is therefore computationally difficult; an exact closed-form solution cannot be obtained, and the problem is tackled numerically.

V. NUMERICAL RESULTS
The spatial densities of the BSs and the MUs are λ_b = 10⁻⁴ and λ_u = 10⁻⁴, respectively, the path-loss exponent is a = 4, and R = 500 m. Fig. 2 illustrates the effect of MU mobility on the MD of the UL SIR for different reliability values τ, with the BSs uniformly distributed in the area A, i.e., β_b = 0. Specifically, we plot the approximated MD given by (13) versus the decoding threshold θ (dB) for reliability values τ ∈ {0.1, 0.9} and mobility parameters β_u ∈ {0, −2/R², 2/R²}. Firstly, Fig. 2 shows good agreement between the theoretical (solid, dashed and dotted lines) and simulation (markers) results, validating our analysis and verifying the accuracy of the moment approximation of Lemma 2.

An important observation from this figure is that, for each reliability value τ, MU mobility reduces the MD achieved by uniformly deployed BSs compared with the case of static MUs, β_u = 0. This is expected: RWP-based mobility in finite-area networks concentrates the MUs near the center (β_u = −2/R²) or the borders (β_u = 2/R²) of the network. As a result, the number of BSs whose Voronoi cell is empty of MUs, i.e., inactive BSs, increases, and the percentage of uniformly distributed BSs that successfully decode the received signal falls. This observation highlights the need for a non-uniform BS deployment to counteract the negative effect of MU mobility on network performance.

Fig. 3 reveals the impact of MU density on the MD of the UL SIR for spatial distribution parameters β_u ∈ {0, −2/R², 2/R²}. Specifically, Fig. 3 compares the performance achieved by our proposed scheme (β_b = β*_b) with a conventional scheme (β_b = β_u). Interestingly, at low MU densities, increasing the number of MUs improves network performance; beyond a critical point, however, performance decreases. At low densities, the number of BSs with at least one MU in their serving area grows, so the fraction of BSs that achieve an SIR of θ (dB) increases. In contrast, at high MU densities the overall interference caused by the active MUs increases, reducing the ability of the BSs to decode the received signal, so the fraction of successful BSs decreases. Moreover, for ultra-dense MU deployments the network performance converges to a constant floor. This is expected, since both the number of active interfering MUs and the overall network interference remain constant under the assumption that each BS schedules only one MU on each resource block. Fig. 3 also shows that adopting the simplified model for the node density functions provides a lower-complexity methodology for evaluating system performance in ultra-dense deployments without significant loss of accuracy. Finally, Fig. 3 shows that the proposed scheme outperforms the conventional one, highlighting the importance of the proposed technique for achieving ultra-reliable connectivity.

Fig. 4 shows the optimal BS deployment parameter β*_b that maximizes the MD as a function of the non-uniform MU distribution parameter β_u, for different density ratios λ_u/λ_b. For RWP-based mobility models with β_u > 0, the optimal BS distribution parameter is also β*_b > 0. For β_u < 0, the optimal parameter depends on both β_u and the ratio λ_u/λ_b: increasing λ_u/λ_b decreases β*_b for a given MU distribution parameter β_u.

VI. CONCLUSION
In this paper, we studied the MD of the UL SIR for finite-area non-uniform cellular networks in which the MUs move according to the RWP model. Specifically, based on the MUs' mobility, we derived the optimal parameter of the spatial distribution of BSs that maximizes the achieved MD.
By applying tools from stochastic geometry, the moments of the conditional success probability were derived analytically, and the actual MD was approximated using the moment-matching method with the Beta distribution. Our results highlight the impact of MU mobility on the optimal BS distribution, providing guidance for the planning of cellular networks that achieve ultra-reliable connectivity. A future extension of this work is to consider multi-cell heterogeneous deployments and to investigate spatially correlated finite-area cellular networks.

APPENDIX A
PROOF OF LEMMA 1
The probability density function (pdf) of the size of a Voronoi cell at distance r from the origin is accurately predicted by a Gamma distribution [10],
f_A(x, r) = ((K λ_b(r))^K / Γ[K]) x^{K−1} exp(−K λ_b(r) x),
where A is a random variable denoting the size of the Voronoi cell, K = 3.575, and Γ[·] denotes the Gamma function. For tractability, we consider a constant MU intensity in a small area around a random point. The load factor δ(r) is then obtained by first writing the probability of having n MUs in a cell of size A,
P_A(n) = ((λ_u(r)A)^n / n!) exp(−λ_u(r)A),
solving it for n = 0, and then integrating over the distribution of A (the cell size cannot exceed |A| = πR²). Hence, the load factor δ(r) is given by (6). To account for the fact that a single MU is associated with each BS, the resulting intensity λ_u(r)δ(r) must be appropriately thinned with the probability 1 − exp(−πλ_u(r)δ(r)r²) [10]. Hence the expressions of Lemma 1 are derived.
Effective Selection of Translation Model Training Data

Data selection has been demonstrated to be an effective approach to addressing the lack of high-quality bitext for statistical machine translation in the domain of interest. Most current data selection methods solely use language models trained on small-scale in-domain data to select domain-relevant sentence pairs from a general-domain parallel corpus. By contrast, we argue that the relevance between a sentence pair and the target domain can be better evaluated by the combination of a language model and a translation model. In this paper, we study and experiment with novel methods that apply translation models to domain-relevant data selection. The results show that our methods outperform previous methods. When the selected sentence pairs are evaluated on an end-to-end MT task, our methods can increase translation performance by 3 BLEU points.

Introduction
Statistical machine translation depends heavily on large-scale parallel corpora, which are necessary prior knowledge for training an effective translation model. However, domain-specific machine translation has few parallel corpora for translation model training in the domain of interest. An effective approach is to automatically select and expand domain-specific sentence pairs from a large-scale general-domain parallel corpus; this approach is named data selection. Current data selection methods mostly use language models trained on small-scale in-domain data to measure domain relevance and to select domain-relevant parallel sentence pairs to expand the training corpora. Related work in the literature has proven that the expanded corpora can substantially improve the performance of machine translation. Two difficulties nevertheless remain:
- There is no ready-made domain-specific parallel bitext, so data selection must be capable of mining parallel bitext from assorted free text; yet existing methods seldom ensure the parallelism of the selected domain-relevant bitext in the target domain.
- Usable domain-relevant bitext must remain highly domain-relevant on both the source and target sides, and it is difficult for current methods to maintain two-sided domain relevance while also enhancing the parallelism of the bitext.

In a word, current data selection methods cannot well maintain both the parallelism and the domain relevance of bitext. To overcome this problem, we propose methods that combine a translation model with a language model in data selection. The language model measures the domain-specific generation probability of sentences and is used to select domain-relevant sentences on both the source and target sides; meanwhile, the translation model measures the translation probability of a sentence pair and is used to verify the parallelism of the selected domain-relevant bitext.

Related Work
The existing data selection methods are mostly based on language models. Yasuda et al. (2008) and Foster et al. (2010) ranked the sentence pairs in the general-domain corpus according to the perplexity scores of sentences, computed with respect to in-domain language models. Axelrod et al. (2011) improved the perplexity-based approach and proposed bilingual cross-entropy difference as a ranking function with in- and general-domain language models. Duh et al. (2013) employed the method of Axelrod et al. (2011) and further explored neural language models for data selection rather than the conventional n-gram language model.
Although previous works in data selection (Duh et al., 2013; Axelrod et al., 2011; Foster et al., 2010; Yasuda et al., 2008) have achieved good performance, methods that adopt only language models to score the sentence pairs are sub-optimal. The reason is that a sentence pair contains a source-language sentence and a target-language sentence, while the existing methods are incapable of evaluating the mutual translation probability of the sentence pair in the target domain. Thus, we propose novel methods based on both a translation model and a language model for data selection.

Training Data Selection Methods
We present three data selection methods for ranking and selecting domain-relevant sentence pairs from the general-domain corpus, with an eye towards improving domain-specific translation model performance. These methods are based on a language model and a translation model trained on a small in-domain parallel corpus.

Data Selection with Translation Model
The translation model is a key component in statistical machine translation, commonly used to translate a source-language sentence into a target-language sentence. In this paper, however, we adopt the translation model to evaluate the translation probability of a sentence pair, and we develop a simple but effective variant of the translation model to rank the sentence pairs in the general-domain corpus:
P(t|s) = (ε / (l_s + 1)^{l_t}) ∏_{j=1}^{l_t} Σ_{i=0}^{l_s} p(t_j | s_i),
T(s, t) = (1/l_t) log P(t|s),
where P(t|s) is the translation model, here IBM Model 1, representing the translation probability of the target-language sentence t conditioned on the source-language sentence s; l_s and l_t are the numbers of words in sentences s and t, respectively; p(t_j|s_i) is the translation probability of word t_j conditioned on word s_i and is estimated from the small in-domain parallel data; and the parameter ε is a constant assigned the value 1.0. T(s, t) is the length-normalized IBM Model 1 score, used to rank general-domain sentence pairs. A sentence pair with a higher score is more likely to have been generated by the in-domain translation model; it is therefore more relevant to the in-domain corpus and is retained to expand the training data.

Data Selection by Combining Translation and Language Models
As described in Section 1, data selection methods that adopt only a language model to score sentence pairs cannot measure the mutual translation probability of a sentence pair. To solve this problem, we develop a second data selection method based on the combination of a translation model and a language model:
P(s, t) = P(t|s) · P_lm(s),
V(s, t) = (1/l_t) log P(t|s) + (1/l_s) log P_lm(s),
where P(s, t) is the joint probability of sentences s and t according to the translation model P(t|s) and the language model P_lm(s), whose parameters are estimated from the small in-domain text. V(s, t) is the improved ranking function, scoring sentence pairs with the length-normalized translation model and language model. A sentence pair with a higher score is more similar to the in-domain corpus and is picked out.

Data Selection by Bidirectionally Combining Translation and Language Models
The method above combines a translation model and a language model to rank the sentence pairs in the general-domain corpus; however, it does not evaluate the inverse translation probability of the sentence pair or the probability of the target-language sentence. We therefore take bidirectional scores into account and simply sum the scores in both directions:
V_bi(s, t) = (1/l_t) log P(t|s) + (1/l_s) log P_lm(s) + (1/l_s) log P(s|t) + (1/l_t) log P_lm(t).
Again, the sentence pairs with higher scores are presumed to be better and are selected for incorporation into the domain-specific training data. This approach makes full use of two translation models and two language models for sentence-pair ranking.
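A sketch of the bidirectional score follows. The function names and the representation of the models (a word-translation probability dictionary and callables returning per-token language-model log-probabilities) are illustrative assumptions; the paper estimates these models with GIZA++ word alignments and n-gram LM training, not with this code.

```python
import math

def ibm1_logprob(src_tokens, tgt_tokens, t_table, epsilon=1.0):
    """Length-normalized IBM Model 1 log-probability, log P(t|s) / len(t).
    t_table[(tgt_word, src_word)] holds word translation probabilities
    estimated from the small in-domain corpus (a NULL source word is included)."""
    src = ["<NULL>"] + src_tokens
    logp = math.log(epsilon) - len(tgt_tokens) * math.log(len(src))
    for t_word in tgt_tokens:
        s = sum(t_table.get((t_word, s_word), 1e-12) for s_word in src)
        logp += math.log(s)
    return logp / len(tgt_tokens)

def bidirectional_score(src, tgt, t_fwd, t_rev, lm_src, lm_tgt):
    """Sum of length-normalized TM and LM scores in both directions;
    lm_src/lm_tgt return the per-token log-probability of a sentence."""
    return (ibm1_logprob(src, tgt, t_fwd) + lm_src(src)
            + ibm1_logprob(tgt, src, t_rev) + lm_tgt(tgt))

# Sentence pairs are then ranked by this score and the top N are kept.
```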
Corpora
We conduct our experiments on a Spoken Language Translation English-to-Chinese task. Two corpora are needed for data selection. The in-domain data is collected from CWMT09 and consists of spoken dialogues in a travel setting, containing approximately 50,000 parallel sentence pairs in English and Chinese. Our general-domain corpus, mined from the Internet, contains 16 million sentence pairs. Both the in- and general-domain corpora are identically tokenized (in English) and segmented (in Chinese). The details of the corpora are listed in Table 1.

System Settings
We use the NiuTrans toolkit, which adopts GIZA++ (Och and Ney, 2003) and MERT, to train and tune the machine translation system. As NiuTrans integrates the mainstream translation engines, we select the hierarchical phrase-based engine (Chiang, 2007) to extract the translation rules and carry out our experiments. In the decoding process, we use the NiuTrans decoder to produce the best outputs and score them with the widely used NIST mteval-v13a tool. This tool scores the outputs by several criteria; case-insensitive BLEU-4 (Papineni et al., 2002) is used as the evaluation metric for the machine translation system.

Translation and Language Models
Our work relies on in-domain language models and translation models to rank the sentence pairs from the general-domain bilingual training set. We employ the SRILM toolkit (Stolcke, 2002) to train the in-domain 4-gram language model with interpolated modified Kneser-Ney discounting (Chen and Goodman, 1998); this language model is used only to score the general-domain sentences. Meanwhile, we use the language model training scripts integrated in the NiuTrans toolkit to train another 4-gram language model, used in MT tuning and decoding. Additionally, we adopt GIZA++ to obtain the word alignment of the in-domain parallel data and form the word translation probability table; this table is used to compute the translation probability of the general-domain sentence pairs.

Baseline Systems
As described above, using the NiuTrans toolkit, we built two baseline systems to fulfil the "863" SLT task in our experiments. The in-domain baseline trained on the spoken-language corpus has 1.05 million rules in its hierarchical-phrase rule table. The results show that the general-domain system, trained on a far larger amount of bilingual resources, outperforms the system trained on the in-domain corpus by over 12 BLEU points. The reason is that the large-scale parallel corpus contains more bilingual knowledge and language phenomena, while the small in-domain corpus suffers from data sparseness, which degrades translation performance. However, the performance of the general-domain baseline can be improved further. We use our three methods to refine the general-domain corpus and improve translation performance in the domain of interest. Thus, we build several contrasting systems trained on refined training data selected by the following methods:
- Ngram: data selection with conventional n-gram language models;
- Neural net: data selection with neural network language models;
- TM: data selection with the translation model;
- TM+LM: data selection by combining translation and language models;
- Bidirectional TM+LM: data selection by bidirectionally combining translation and language models (equal weight).
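For comparison, the Ngram baseline can be sketched as the bilingual cross-entropy difference of Axelrod et al. (2011), cited in the related work; the sketch below is illustrative, with the language models assumed to expose a per-word cross-entropy callable.

```python
def cross_entropy_difference(pair, lm_in_src, lm_gen_src, lm_in_tgt, lm_gen_tgt):
    """Bilingual cross-entropy difference of Axelrod et al. (2011).
    Each lm_* returns the per-word cross-entropy of a sentence under the
    corresponding LM; lower scores indicate more in-domain-like pairs."""
    src, tgt = pair
    return ((lm_in_src(src) - lm_gen_src(src))
            + (lm_in_tgt(tgt) - lm_gen_tgt(tgt)))

# Rank ascending and keep the top N pairs:
# selected = sorted(corpus, key=lambda p: cross_entropy_difference(p, *lms))[:N]
```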
Results of Training Data Selection
We adopt five methods for extracting domain-relevant parallel data from the general-domain corpus. Using the scoring methods, we rank the sentence pairs of the general-domain corpus and select only the top N ∈ {50k, 100k, 200k, 400k, 600k, 800k, 1000k} sentence pairs as refined training data. New MT systems are then trained on these smaller refined corpora. Figure 1 shows the performance of systems trained on corpora selected from the general-domain corpus; the horizontal axis is the number of selected sentence pairs and the vertical axis is the BLEU score of the MT system.

From Figure 1 we conclude that all five data selection methods are effective for domain-specific translation. When the top 600k sentence pairs are selected from the general-domain corpus to train machine translation systems, the resulting systems outperform the general-domain baseline trained on 16 million parallel sentence pairs. The results indicate that more training data for the translation model is not always better: when domain-specific bilingual resources are deficient, domain-relevant sentence pairs play an important role in improving translation performance. Additionally, our methods (TM, TM+LM and Bidirectional TM+LM) are indeed more effective in selecting domain-relevant sentence pairs. In the end-to-end SMT evaluation, TM selects only the top 600k sentence pairs of the general-domain corpus yet increases translation performance by 2.7 BLEU points, while TM+LM and Bidirectional TM+LM gain 3.66 and 3.56 BLEU points, respectively, over the general-domain baseline. Compared with the mainstream methods (Ngram and Neural net), our methods increase translation performance by nearly 3 BLEU points when the top 600k sentence pairs are selected. Although in Figure 1 our three methods do not outperform the existing methods in every case, their overall performance is higher. We therefore believe that combining an in-domain translation model and language model to score the sentence pairs is well suited for domain-relevant sentence-pair selection. Furthermore, the overall performance of our methods improves progressively, because they combine more statistical characteristics of the in-domain data when ranking and selecting sentence pairs; this again demonstrates their effectiveness.

Conclusion
We present three novel methods for selecting translation model training data, based on the translation model and the language model. Compared with methods that employ only a language model for data selection, our methods select high-quality domain-relevant sentence pairs and improve translation performance by nearly 3 BLEU points. In addition, our methods make full use of the limited in-domain data and are easily implemented. In the future, we are interested in applying our methods to the domain adaptation task of statistical machine translation at the model level.
Surface plasmon polaritons in the ultraviolet region

We study a surface plasmon polariton mode that is strongly confined in the transverse direction and propagates along a periodically nanostructured metal-dielectric interface. We show that the wavelength of this mode is determined by the period of the structure and may therefore be orders of magnitude smaller than the wavelength of a plasmon polariton propagating along a flat surface. This plasmon polariton exists in the frequency region in which the sum of the real parts of the permittivities of the metal and dielectric is positive, a frequency region in which surface plasmon polaritons do not exist on a flat surface. The propagation length of the new mode can reach several dozen wavelengths. This mode can be observed in materials that are uncommon in plasmonics, such as aluminum or sodium.

INTRODUCTION
Our appetite for higher-speed devices inevitably leads to the transition from electronic or optoelectronic to all-optical devices. At the same time, the necessity for higher clock frequencies for information processing requires greater integration of photonic devices and their scaling down to nanometers. In optics, the dielectric fiber replaces the coaxial and strip transmission lines. However, the characteristic transverse size of a fiber line is orders of magnitude larger than the characteristic size of components of semiconductor integrated circuitry. Moreover, the radius of curvature for optical line bending reaches hundreds of microns. The resulting large total size of the device hinders the realization of a high clock frequency, which is limited by the signal propagation time within the device.

A possible solution to this problem is a transition from photons to surface plasmon polaritons (SPPs). The SPP is an electromagnetic wave propagating along the interface between a metal and a dielectric. The wavelength of an SPP, λ_SPP = 2π/Re k_SPP, is smaller than the wavelength λ_0/√ε_d of the electromagnetic wave in the dielectric with permittivity ε_d. The SPP is therefore confined to the metal surface, with a transverse (perpendicular to the surface) confinement length δ << λ_0. This enables the miniaturization of optical devices and makes the transition from electronics to on-chip plasmonics technology possible [1-6].

The main obstacle to the use of SPPs in applications is ohmic loss in the metal. This substantially decreases the SPP propagation length l_pr and also raises the minimum SPP wavelength λ_SPP^cutoff; SPPs with λ_SPP smaller than λ_SPP^cutoff cannot exist [1]. This weakens the transverse confinement of the SPP. The values of l_pr, λ_SPP^cutoff, and δ are strongly determined by the geometrical configuration. In the simplest configuration, an SPP propagating along the flat interface between half-spaces filled by the metal and the dielectric, the wavenumber of the SPP is given by
k_SPP = k_0 √( ε_m ε_d / (ε_m + ε_d) ),   (1)
where k_0 is the wavenumber of the electromagnetic wave in free space [1]. At low frequencies, |ε_m| >> ε_d, and k_SPP approaches the light line k_0√ε_d.

The SPP propagation length can be increased by shifting into the low-frequency part of the spectrum. The decrease in frequency corresponds to movement along curve 1 in Fig. 1a, starting from the point A₁. Note that both λ_SPP and l_pr increase with decreasing frequency, but l_pr increases more rapidly; therefore, a decrease in frequency corresponds to a movement to the right along the horizontal axis in Fig. 1b.
As curve 1 in Fig. 1b shows, the transverse confinement length increases as the frequency decreases (this curve starts at the point A₁, which corresponds to the plasmon dispersion curve crossing the light cone).

The propagation length may also be increased, even to hundreds of wavelengths, by using thin metal films [7]. The dispersion of the long-range SPP is determined by the equation
tanh(κ_m h/2) = −ε_m κ_d / (ε_d κ_m),   (2)
where κ_{m,d} = (k_SPP² − ε_{m,d} k_0²)^{1/2} and h is the thickness of the metal film. According to Eq. (2), the dispersion curve of the long-range SPP lies close to the light cone (see curve 2 in Fig. 1a), so its wavelength λ_SPP is about λ_0, while the confinement length tends towards infinity [7-9]. In other words, the SPP tends to a plane wave, and the plasmonic thin film becomes an analog of a single-wire transmission line (see curve 2 in Figs. 1a, b). Curve 2 ends at the point A₂, where it crosses the light cone. Such a transmission line has no advantage over a common dielectric optical waveguide.

Using chains of metal nanoparticles allows one to decrease both the wavelength of an SPP and its transverse confinement length [10-16]. If the dipole moment of a nanoparticle is parallel to the axis of the chain, then in the quasistatic and tight-binding approximations the dispersion law of the chain is set by the nanosphere polarizability
α(ω) = r³ (ε_m(ω) − ε_d) / (ε_m(ω) + 2ε_d),
where r is the radius of the metal nanosphere [17]. The shortest wavelength is of the order of the distance a between nanoparticles. Since the SPP frequency is close to the plasmon resonance of the nanoparticle, in the quasistatic approach the electric field is mainly concentrated inside the nanoparticles [18]. Therefore, losses in such chains are large, and the propagation length of the SPP is only a few λ_SPP (in Figs. 1a, b, curve 3 originates and ends at the points A₃ and B₃, where the dispersion curve crosses the light cone). Thus, in the visible region, even the best plasmonic materials, such as gold and silver, are not suitable for applications that require δ << λ_0 and l_pr of the order of a dozen SPP wavelengths.

Crossing from the optical to the infrared region can significantly improve the characteristics of the system, as has been observed in chains of strongly elongated plasmonic particles [19] (see curve 3 in Fig. 1b). For example, in a chain of spheroids with a period of 20 nm and a semiaxis ratio of 0.15, the propagation length is about 700 λ_SPP.

The search for new plasmonic materials has grown sharply in recent years [20-23]. In particular, the transition to the ultraviolet part of the spectrum brings new materials into consideration, such as aluminum, sodium, and rubidium [22,24], and allows for significant enhancement of fluorescence and of the rates of photochemical reactions [24-27]. In Sections II-IV, using aluminum as an example, we consider the propagation of strongly localized SPPs; the effects under consideration cannot be observed in silver and gold. It would also be highly desirable to obtain topologically more complicated interfaces that combine the advantages of the aforementioned systems without their shortcomings.
Fig. 1. Curve 1 corresponds to an SPP propagating on a flat surface of a silver half-space; the green line 2 corresponds to a long-range SPP propagating along a 30-nm-thin silver film. In this system, the unlimited growth of $\delta$ is due to the transition to the IR region. The purple 3, blue 4, and brown 6 lines show propagation lengths of SPPs along the chain of silver nanospheres, the nanostructured silver-vacuum interface, and the nanostructured aluminum-vacuum interface, respectively. The cyan line 5 corresponds to the spoof SPP. Curves 3 and 5 have been calculated by using formulas from Refs. [19] and [28], respectively. Curves 1-5 have been calculated for values of the metal permittivity taken from Ref. [29], while curve 6 is calculated using the data from Ref. [30]. In Fig. 1b, points $C_i$ correspond to the boundary of the visible region, assumed to be 780 nm; segments $A_iB_i$ correspond to the visible region. For numerical calculations in this figure and in the manuscript, we assume that the dielectric is vacuum with $\varepsilon_d = 1$.

In this paper, we demonstrate that a periodically nanostructured metal-dielectric interface (an array of metal nanoparticles deposited on the metal surface) supports an SPP mode with a short transverse confinement length that is comparable to the period of the structure (see curves 4 and 5 in Figs. 1a, b). In contrast to spoof SPPs, which exist on structured surfaces in the frequency range in which the metal behaves as a near-perfect conductor [28,[31][32][33] (curve 5 in Fig. 1), the eigenfrequency of this mode is in the ultraviolet part of the spectrum, in which $-\varepsilon_d < \mathrm{Re}\,\varepsilon_m(\omega) < 0$. This is far from the usual plasmonic resonances, ensuring a longer propagation length. In addition, the wavelength of the SPP is approximately the same as the period of the surface structure. This differs from a spoof SPP, whose wavelength belongs to the first Brillouin zone near the light cone. The SPP that we consider has a subwavelength confinement thanks to its small wavelength. This SPP mode cannot be observed in traditional plasmonic materials due to high losses caused by interband transitions in this part of the spectrum. In aluminum, the required frequencies are in the ultraviolet region, in which losses are relatively small because interband transitions are in the visible region. Therefore, the propagation length of the SPP can be as large as hundreds of nanometers on a periodically nanostructured aluminum-dielectric interface.

NANOSTRUCTURED SURFACE

In the absence of losses, on a rough metal surface, an additional SPP may arise [34]. The solution corresponding to this SPP has been obtained with the assumption that, for a smooth surface, near the frequency for which $\varepsilon_m(\omega) = -\varepsilon_d$, the group velocity of the SPP is zero. This provides a resonant interaction of the field with all harmonics of the roughness. As a result, the frequency curve splits, and a second SPP mode with large $k$ arises. However, in a lossy system, an SPP with sufficiently large wavenumbers does not exist on a smooth surface. In addition, the assumption of zero group velocity, which is necessary for the second branch of the SPP, is not realistic. Therefore, it is not clear whether an additional SPP may arise in a lossy system. To model the periodically nanostructured metal-dielectric interface, we consider a periodically corrugated surface. To solve Maxwell's equations, we make a coordinate transformation [35,36] which makes the interface flat, while the coefficients in the equations become periodic with respect to $x$. This method is applicable for a large amplitude of roughness, when the Rayleigh hypothesis is invalid [37,38]. Within the framework of this method, the tangential components of the electric and magnetic fields are represented as a series of Bloch harmonics.
This allows one to reduce Maxwell's equations to a system of linear differential equations with constant coefficients and to obtain expressions for the fields in both media analytically. By using the Maxwell boundary conditions, one can obtain the propagation constant $k_{SPP}(\omega)$.

ANALYSIS OF THE DISPERSION CURVES FOR THE NANOSTRUCTURED SURFACE

For an SPP, a periodically nanostructured surface is a photonic crystal. For typical plasmonic materials such as silver and gold, the SPP curve in the second Brillouin zone cannot be observed due to high losses in the UV frequency region, in which these losses are due to interband transitions. In the UV part of the spectrum of aluminum, losses are small because interband transitions are in the visible part of the spectrum. Therefore, in aluminum, this SPP dispersion branch can be observed. First, we consider a lossless vacuum-aluminum system. Near the frequency at which $\varepsilon_m(\omega) = -\varepsilon_d$, a band gap should arise. However, there is a pass band because, in a periodic system, the energy can be transferred by evanescent fields [11]. In a lossless system, the dispersion curves for interfaces with technologically achievable amplitudes of the surface perturbation ($h = 5$ nm in our calculations) are shown in Fig. 4a. In this case, the band gap arises for the wavelengths 97-207 nm. In the first Brillouin zone, the dispersion curve of the SPP becomes non-monotonic, and a point at which the group velocity is zero arises. In the second Brillouin zone, the dispersion curve moves completely into the frequency region defined by the inequality $\mathrm{Re}\,\varepsilon_m(\omega) > -\varepsilon_d$. In this zone, the waves are backward. Losses change the SPP dispersion curves significantly (see curves 4 and 5 in Figs. 1a, b and 4b). One can see that in the first Brillouin zone, at the point at which the group velocity is zero in the absence of loss, the SPP curve splits into two branches (curves 4 and 4'). Both of these branches are in the visible region. One of the branches (curve 4) has a negative slope, corresponding to a backward wave. This wave exists far from the light cone, where it may be strongly confined because the respective wavenumber is of the order of $\pi/(2a)$ (see Fig. 3b). However, computer simulations show that on an aluminum surface, the propagation length of the SPP associated with this curve is small: it is no more than one SPP wavelength. Even for silver, which does not have interband transitions in the visible, the propagation length does not exceed one SPP wavelength. The dependence of the confinement length of this mode in silver is shown by curve 4 in Fig. 1b. In this figure, as well as in Fig. 5, the line numbering is the same as in Fig. 1. Losses also cause significant distortion of the dispersion curve in the second Brillouin zone; as noted above, this branch is unobservable in silver and gold because of interband losses in the UV, whereas in aluminum it can be observed. Since the SPP dispersion curve 6 in Fig. 4b is near the second band gap, the wavelength of the SPP should be determined by the period of the surface structure. Indeed, this wavelength is related to the propagation constant by $\lambda_{SPP} = 2\pi/\mathrm{Re}\,k_{SPP} \approx a$. The smallness of the SPP wavelength implies the subwavelength confinement of the SPP.
Indeed, numerical calculations presented in Fig. 5 show that the field intensity is mainly confined near the surface. Since the frequency of the SPP equals that of light in vacuum while its wavelength is much smaller, the field of the SPP is confined on a subwavelength scale. The propagation length $l_{pr}$ is defined as the distance over which the field intensity decreases by a factor of $e$. On a sine-profile surface with an amplitude of 5 nm, the maximum value of the SPP propagation length is about 17 SPP wavelengths. By optimizing the surface structure, one can increase this length significantly. By using the Nelder-Mead method for optimization [40] (a schematic sketch of this optimization step is given at the end of the next section), we find that the optimal surface profile is given parametrically by Eq. (6), where the height is $h = 18.5$ nm, the period is $a = 10$ nm, $\gamma = 2.5$ nm, and $v$ is a parameter varying in the range from $-a/2$ to $a/2$. The profile of the surface is shown in Fig. 6. With the surface profile given by Eq. (6), the SPP propagation length is about 53 $\lambda_{SPP}$, or 628 nm. Its frequency dependence is shown in Fig. 7.

MECHANISM FOR THE FORMATION OF THE STRONGLY CONFINED SPP

To explain the formation of the strongly confined SPP, let us consider a chain of spikes with the profile given by Eq. (7), where $2h$ is the height of a spike, $b/2$ is its width, and $a$ is the distance between the centers of neighboring spikes. Electric charge accumulates on inhomogeneities with a small radius of curvature, increasing the electric field strength near the spike [5]. Spikes, therefore, have high polarizabilities. In addition, the high polarizability of a spike can arise due to localized plasmon resonances [41][42][43]. If the spikes have the same shape and are located close to each other, then a propagating mode arises [16]. Assuming that the chain of spikes is equivalent to a chain of two-dimensional dipoles with the polarizability $\alpha(\omega)$, one can obtain the dispersion law for the propagating mode. For the sake of specificity, we assume that the dipole moment of a spike is directed along the SPP propagation direction, $x$. The dipole moment of the $i$-th spike is determined by the coupled-dipole equation, in which $E_{x,n}(x_i)$ is the electric field of the $n$-th dipole acting on the $i$-th dipole; this field is expressed through the Green function of the two-dimensional Helmholtz equation. Since the chain is periodic, the dipole moments of the spikes are related via the Bloch theorem:

$$p_n = p_i\, e^{i k_x (n-i) a}. \qquad (11)$$

Let us assume that the polarizability of a spike with the profile of Eq. (7) is a resonant function of the metal permittivity, $\alpha(\omega) \propto \left(\varepsilon_m(\omega) - \varepsilon_{max}\right)^{-1}$, where $\varepsilon_{max}$ is the permittivity for which the polarizability reaches its maximum value. The dispersion curves calculated by using the coordinate transformation [35,36] and with the help of Eqs. (7) and (9) are shown in Fig. 8. For the calculations, we assumed that $a = b = 10$ nm, $h = 5$ nm, and $\varepsilon_{max} = -0.5\,\varepsilon_d$. Both curves are qualitatively the same: they have the same slopes and are approximately in the same frequency region. This shows that strongly confined SPP modes arise due to the high polarizability of surface inhomogeneities.

Fig. 8. The dispersion curves calculated by using the coordinate transformation [35,36] (the red line) and with the help of Eqs. (11) and (12) (the blue line). The latter curve is extended to the second Brillouin zone.
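The sketch below illustrates the kind of profile optimization referred to above. The true objective, the SPP propagation length, must come from the full Bloch-harmonic Maxwell solver of Refs. [35,36], which is not reproduced here; the `propagation_length` function is a stand-in surrogate, and the parameter pair `(h, gamma)` mirrors the profile parameters of Eq. (6) only by assumption.

```python
import numpy as np
from scipy.optimize import minimize

def propagation_length(params):
    """Placeholder for the real objective: l_pr(h, gamma) computed from the
    Bloch-harmonic Maxwell solver. Here an arbitrary smooth surrogate with a
    maximum near (18.5 nm, 2.5 nm) is used purely to demonstrate the workflow."""
    h, gamma = params  # nm
    return 53.0 * np.exp(-((h - 18.5) / 6.0) ** 2 - ((gamma - 2.5) / 1.5) ** 2)

# Nelder-Mead maximizes l_pr by minimizing its negative; it needs no gradients,
# which suits objectives evaluated by an expensive electromagnetic solver.
result = minimize(lambda p: -propagation_length(p), x0=[10.0, 1.0],
                  method="Nelder-Mead", options={"xatol": 1e-3, "fatol": 1e-6})

h_opt, gamma_opt = result.x
print(f"optimal profile: h = {h_opt:.2f} nm, gamma = {gamma_opt:.2f} nm, "
      f"l_pr = {-result.fun:.1f} SPP wavelengths")
```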
CONCLUSION

In this paper, we have shown that highly confined SPPs traveling dozens of wavelengths may exist in the far-ultraviolet region ($\lambda \approx 80$ nm). We demonstrate that SPP modes that are strongly confined in the direction perpendicular to the direction of propagation can arise on a periodically nanostructured interface between a metal and a dielectric in the frequency range in which $\mathrm{Re}\,\varepsilon_m(\omega) > -\varepsilon_d$. In aluminum, this inequality is fulfilled in the ultraviolet part of the spectrum. For these frequencies, there are no SPPs travelling along flat surfaces. The SPP can propagate over distances of several dozen SPP wavelengths. On the nanostructured interface considered, the modes arise due to tunneling between neighboring inhomogeneities, similar to a chain of plasmonic particles. As a result, the wavenumber of the SPP is determined by the period of the nanostructure, which may be 10-20 nm. A high value of the SPP wavenumber results in strong confinement, which is crucial for SPP sensing and enhancement of nonlinear effects. Strong field localization also makes possible further miniaturization of a variety of "plasmonic optical" devices.
Regulation of Wnt signaling by nociceptive input in animal models Background Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH) critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t.) injection of HIV-gp120 protein or spinal nerve ligation (SNL). Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain. Introduction During the development of chronic pain, spinal neurons in the spinal cord dorsal horn (SCDH) become sensitized and hyper-active (termed central sensitization). A spectrum of neuronal and glial processes has been implicated in the establishment of central sensitization. For instance, in the spinal nerve ligation (SNL) and spared nerve injury (SNI) models of neuropathic pain, the central terminals of primary sensory neurons were reported to sprout [1][2][3][4]. This sprouting may increase inputs of nociceptive signals. Indeed, increased release of neurotransmitters or neuromodulators such as glutamate, substance P, prostaglandin E2 (PGE2) and calcitonin-gene related peptide (CGRP) has been reported in animal pain models (reviewed in [5]). Another neuronal alteration associated with central sensitization is the expression of long-term potentiation (LTP) at the synapses in superficial layers of the SCDH, which is considered to be a critical synaptic mechanism underlying chronic pain [6,7] and a potential target for chronic pain therapy [8]. Furthermore, loss of inhibitory functions of GABAergic and glycinergic interneurons may contribute to enhanced pain sensitivity in chronic pain [9,10]. In addition to neuronal changes, more recent studies revealed an important role of glial cells, especially microglia and astrocytes, in central sensitization, and glia are emerging as a promising target for chronic pain treatment [11]. Activated microglia and astrocytes facilitate the development of central sensitization by releasing chemokines, cytokines and neurotrophins [12][13][14]. These factors can markedly enhance the excitability of neurons processing nociceptive input. For example, tumor necrosis factor-alpha (TNFα), a key proinflammatory cytokine, was shown to increase the frequency of excitatory postsynaptic currents (EPSCs) and N-methyl-D-aspartate (NMDA) currents in lamina II neurons by stimulating TNF receptor subtypes 1 and 2 (TNFR1 and TNFR2) in an inflammatory pain model [15].
Despite significant progress in identifying various cellular processes that contribute to central sensitization and chronic pain, the molecular mechanisms by which the spectrum of cellular alterations is initiated and established remain poorly understood. In this study, we report the spatial distribution of specific Wnt signaling proteins in mouse spinal cords and the regulated expression of the proteins in multiple pain models. Our results reveal the expression of Wnt signaling proteins in the superficial layers of the SCDH and the up-regulation of their expression in acute and chronic pain models. These findings indicate that Wnt signaling pathways may play a role in the regulation of central sensitization and chronic pain development. Results Spatial distribution of β-catenin and Wnt3a in the mouse SCDH Because the Wnt/β-catenin pathway plays important roles in synaptic plasticity such as long-term potentiation (LTP) [22,33], we were interested in testing if this pathway is involved in the regulation of central sensitization. As an initial step toward this goal, we performed fluorescent immunostaining in naïve mice to determine the spinal distribution of β-catenin and Wnt3a, two signaling proteins in the canonical pathway. We observed that β-catenin immunostaining formed a predominant band in the dorsal horn, although a low level of signal was detected throughout the gray matter of the spinal cord (Figure 1). To define further the laminar distribution of the protein in the SCDH, we used molecular markers to label the specific layers of the dorsal horn. We found that β-catenin was enriched in lamina II, both the inner (IIi) and outer (IIo) segments (Figure 1), while its staining in lamina I was relatively low (Figure 1 A1-2). Staining for isolectin B4 (IB4) and PKCγ, considered specific markers for the outer and inner segments of lamina II, respectively, confirmed the presence of β-catenin in both lamina IIi and IIo (Figure 1 B1-2 and C1-2). These observations indicate that β-catenin is enriched in lamina II of the SCDH. Previous studies revealed that β-catenin is expressed in hippocampal neurons [33]. We performed double-staining experiments, using NeuN to label neuronal cell bodies. As shown in Figure 2 A-C, label for β-catenin in lamina II was observed in regions surrounding neuronal nuclei labeled by NeuN, indicating that the majority of β-catenin was in neuronal cytoplasm. On the other hand, β-catenin staining in non-neuronal cell bodies (NeuN-negative; DAPI-labeled) was detectable but relatively low. The β-catenin label was also clustered into small spots or dots (Figure 2). Because β-catenin is enriched in synapses [33], we next tested if the clustered β-catenin dots corresponded to synapses. To this end, we performed double-labeling experiments with β-catenin and synapsin I (a pre-synaptic marker) or PSD95 (a post-synaptic marker). We observed that β-catenin staining substantially overlapped with that of synapsin I or PSD95. Next, we determined the spatial distribution of Wnt3a, a Wnt ligand that activates the canonical pathway. As shown in Figure 3 A-B, Wnt3a was detected throughout the dorsal horn, with the highest concentration in the superficial layers. In addition, some brightly stained profiles that are likely to be cell bodies in the gray matter were also detected. To determine the laminar distribution of Wnt3a, we double-labeled Wnt3a with SP or PKCγ.
The results showed that Wnt3a signals were observed in regions labeled by both SP and PKCγ, indicating that Wnt3a is enriched in laminae I and II (Figure 3 C-F). Relatively low levels of Wnt3a staining were also observed in deep SCDH layers (Figure 3 E-F). Previous studies demonstrated that Wnt3a was localized in the cell bodies and dendrites of hippocampal neurons [22,33,51]. Similarly, we found Wnt3a in the cell bodies and dendrites of SCDH neurons. Wnt5a protein is also mainly expressed in neurons in the SCDH, while its co-receptor Ror2 is expressed in both neurons and astrocytes (submitted). Wnt3a and Wnt5a protein in mouse dorsal root ganglia (DRG) We also determined the cellular localization of Wnt3a in DRGs. Regulated expression of Wnt signaling proteins in the capsaicin pain model The expression of Wnt3a and β-catenin in SCDH superficial layers suggests a potential role of Wnt signaling in nociceptive processing. Thus, we sought to determine whether peripheral painful stimulation affected the expression of the Wnt signaling proteins in the SCDH. We first employed the capsaicin pain model, created by intradermal (i.d.) injection of capsaicin into the hind paw [52]. It is well established that this pain model develops central sensitization [50,53,54]. Following capsaicin administration, Wnt3a, active β-catenin (ABC) and total β-catenin (TBC) increased in the SCDH during the period of increased mechanical sensitivity (Figure 6 B-D). Consistent with the assumption that Wnt3a and β-catenin are in the same (canonical) pathway, Wnt3a, ABC and TBC proteins followed similar temporal profiles of up-regulation. The protein levels started to increase at 1 h after capsaicin injection and peaked at 3-5 h. Furthermore, significantly higher levels of these proteins were still observed at 9 h (Figure 6 B-D). The magnitude of increase differed for each protein: Wnt3a peaked at a ~2.5-fold increase whereas ABC or TBC peaked at a ~1.8-fold increase. Although care was taken to avoid potential contamination of the dorsal horn tissues by DRGs and dorsal root fibers, we anticipate that some peripheral fibers were still intermingled in the dissected dorsal horn. Thus, although it is likely that the observed up-regulation of Wnt signaling proteins was mainly contributed by dorsal horn cells, we cannot exclude the possibility that the up-regulation also occurred in peripheral sensory neurons. In addition, we also examined the effect of capsaicin-induced pain on proteins in the non-canonical pathways. We focused here on Wnt5a, a prototypic Wnt ligand that activates the non-canonical pathways. As shown in Figure 6 E, Wnt5a was also induced in the SCDH following i.d. injection of capsaicin. The temporal profile of capsaicin-induced Wnt5a alteration differed from that of Wnt3a and β-catenin. The Wnt5a up-regulation peaked at 2 h after capsaicin injection, but returned to baseline by 3 h (Figure 6 E). These data indicate that capsaicin up-regulates Wnt5a in a more rapid and transient manner. Furthermore, we also examined the temporal profile of Ror2, a Wnt5a receptor tyrosine kinase that activates JNK signaling [55]. Similar to Wnt5a, Ror2 was also transiently up-regulated (Figure 6 F). Compared with Wnt5a, the Ror2 up-regulation was delayed by 1 h (Figure 6 F). The overlapping but distinct temporal profiles of Wnt5a and Ror2 indicate that Wnt5a does not solely depend on Ror2 to transmit signals. Regulated expression of Wnt signaling proteins in the HIV-gp120 pain model We next determined the regulated expression of Wnt proteins in the HIV-gp120 pain model.
Previous work established that intrathecal (i.t.) injection of HIV-gp120 protein induces hyperalgesia and mechanical allodynia in animals [56][57][58][59]. Indeed, following gp120 administration, mice showed a progressive decrease in paw withdrawal threshold (PWT) evoked by von Frey filaments (Figure 7 A). The mechanical allodynia was observed at 1 h after gp120 injection, and fully developed by 2-5 h. Immunoblotting results showed that Wnt3a, ABC and TBC progressively increased in the SCDH. We also examined the expression profiles of Wnt5a and Ror2 in the gp120 pain model. The results showed that Wnt5a rapidly increased and peaked at 15-30 min after gp120 injection (Figure 7 E). Similar to the Wnt5a expression in the capsaicin pain model (Figure 6 E), the up-regulation of Wnt5a was relatively transient and returned to baseline by 2 h (Figure 7 E). Like Wnt5a, Ror2 also rapidly increased and peaked at 15-30 min after gp120 injection (Figure 7 F). Unlike Wnt5a, expression levels of Ror2 were maintained at significantly higher levels over baseline for 3 h (Figure 7 F). In addition, the magnitude of the Ror2 increase (3.6-fold) was higher than that of Wnt5a (1.9-fold). Regulated expression of Wnt signaling proteins in the neuropathic pain model Next, we were interested in examining the regulatory effect of peripheral nerve injury on Wnt signaling proteins. In this experiment, we used the neuropathic pain model produced by unilateral L5 spinal nerve ligation (SNL) [60], which is a well-established model that develops various hallmarks of chronic pain and central sensitization, including neuroinflammation, hyperexcitation of spinal dorsal horn neurons and disinhibition of inhibitory interneurons [61][62][63]. As shown in Figure 8A, one week after SNL, the mice demonstrated increased paw withdrawal frequencies in response to mechanical stimulation with von Frey filaments: at 0.10 g force, 92.86 ± 3.59% compared to 7.15 ± 1.84% for the sham-operated mice (p < 0.05, n = 6), and at 0.40 g force, 98.57 ± 1.43% compared to 18.57 ± 2.6% (p < 0.05, n = 6). Immunoblotting analysis of the SCDH from SNL mice at one week post-ligation showed that Wnt3a was significantly up-regulated on the ipsilateral (ipsi) compared to the contralateral (contra) side (5.9-fold, p < 0.01) (Figure 8 B). Similarly, both ABC (Figure 8 C) and TBC (Figure 8 D) were increased on the ipsi side of the SCDH with similar magnitudes of increase (2.0-fold, p < 0.05 for ABC and 1.6-fold, p < 0.05 for TBC). In addition, we also observed that the non-canonical pathway signaling proteins Wnt5a (Figure 8 E) and its co-receptor Ror2 (Figure 8 F) increased in the SNL model. Thus, Wnt signaling proteins are up-regulated following peripheral nerve injury. Discussion We describe here the expression of Wnt signaling proteins in the SCDH and their change in expression in three pain models. We show that β-catenin is enriched in neurons in lamina II, and that Wnt3a is abundant in neurons in the superficial layers (laminae I-III). We also show that these and other Wnt signaling proteins (Wnt5a and Ror2) are up-regulated in the SCDH in these pain models. Our data suggest potential involvement of Wnt signaling pathways in the regulation of central sensitization in acute and chronic pain. Future studies are warranted to directly test this hypothesis. Wnt signaling may contribute to chronic pain via multiple routes. We found that β-catenin is enriched in SCDH lamina II, especially at synaptic regions.
Lamina II neurons, which include both excitatory and inhibitory interneurons, play crucial roles in central sensitization [48,64,65]. Because β-catenin is known to regulate synaptic transmission and synapse/spine assembly and remodeling [38,39], the observation of enriched β-catenin protein at the synapses in lamina II suggests that canonical Wnt/β-catenin signaling may regulate synaptic plasticity in the neurocircuitry processing nociceptive input in the SCDH. Consistent with its role in central sensitization, β-catenin is up-regulated in the capsaicin, HIV-gp120, and SNL pain models. While we found β-catenin up-regulated at 7 days after SNL, a recent study showed that this protein significantly increases at 1 and 3 days and returns to baseline at 7 days in the rat SCDH after unilateral spared nerve injury (SNI) [66]. These findings indicate that the regulated expression of β-catenin in the SCDH of different neuropathic pain models follows different temporal patterns. In further support of a role of β-catenin signaling, Wnt3a, a prototypic Wnt ligand for the canonical Wnt/β-catenin pathway, is also expressed in the superficial laminae (including lamina I) and is up-regulated in these pain models. Previous studies show that activation of NMDA receptors by synaptic stimulation elicits Wnt3a secretion from hippocampal synapses to activate β-catenin signaling and facilitate long-term potentiation [33]. One may conceive that activation of NMDA receptors in the SCDH by nociceptive stimuli could also cause Wnt3a secretion to facilitate central sensitization via β-catenin. Signaling proteins in the non-canonical pathway, including Wnt5a and Ror2, are also up-regulated in the pain models. Recent studies have shown that Wnt5a is an NMDAR-regulated protein [34] and is critical for the differentiation and plasticity of excitatory synapses [21,23]. Ror2 may mediate the activity of Wnt5a in the regulation of synapse differentiation [67]. In addition, Wnt5a also regulates GABA receptor recycling at inhibitory synapses [68]. These previous findings suggest that the observed up-regulation of Wnt5a and Ror2 in these pain models may also contribute to synaptic remodeling during the development of chronic pain. Neuroinflammation in the SCDH is a constant manifestation of chronic pain in animal models. Proinflammatory factors such as IL-6, IL-1β, TNF-α and MCP-1 play important roles in the initiation and maintenance of chronic pain [11,15,69]. Recent studies have suggested that Wnt5a signaling may regulate the peripheral inflammatory response in chronic disorders, including sepsis [70], rheumatoid arthritis [71], atherosclerosis [72], melanoma [73], and psoriasis [74]. Wnt5a is known to activate CaMKII signaling to modulate the macrophage-mediated inflammatory response [70]. Our previous studies revealed that Wnt5a evokes the expression of proinflammatory cytokines (IL-1β and TNF-α) in primary cortical cultures, indicating a role of Wnt5a in the regulation of neuroinflammation in the CNS [75]. Wnt5a is up-regulated by i.t. gp120 and peripheral nerve injury, and each of these is known to induce persistent neuroinflammation in the SCDH [76][77][78]. We propose that one potential mechanism by which up-regulated Wnt5a may facilitate chronic pain development is by promoting neuroinflammation. The temporal expression of proteins in the canonical and non-canonical pathways appears to follow differential profiles after pain induction.
β-catenin and Wnt3a, which are in the canonical pathway, displayed a gradual increase following capsaicin or HIV-gp120 administration. Their gradual up-regulation correlates with the progressive development of capsaicin-induced and gp120-induced mechanical allodynia and stays at a peak level when mechanical sensitivity starts decreasing. On the other hand, Wnt5a and Ror2 in the non-canonical pathway showed a more rapid but transient increase; their up-regulated expression came back to baseline when capsaicin-induced allodynia was still at a maximal level. These observations suggest that the canonical and non-canonical Wnt signaling pathways may have distinct biological functions in different phases of chronic pain development. Experimental animals Young adult male C57BL/6J mice (8-10 weeks), purchased from the Jackson Laboratory (Bar Harbor, Maine, USA), were used for all studies. Animals were housed in a constant-temperature environment with soft bedding and free access to food and water under a 12/12-h light-dark cycle. All animal procedures were performed in accordance with an animal protocol approved by the Institutional Animal Care and Use Committee at the University of Texas Medical Branch (protocol #: 0904031) and adhered to the guidelines of the International Association for the Study of Pain for the ethical care and use of laboratory animals [79]. Capsaicin pain model The mouse capsaicin pain model was generated as described [52]. Briefly, mice were anesthetized with isoflurane (2% for induction and 1.5% for maintenance) in a flow of O2 and placed in a prone position. For each mouse, 5 μl of capsaicin (0.5% in saline containing 20% alcohol and 7% Tween 80; purchased from Sigma) was injected intradermally (i.d.) into the plantar region of the hind paw using a 30-gauge needle attached to a Hamilton syringe. Mice injected with vehicle were used as controls. Five minutes later, injected mice were returned to their home cages. HIV-gp120 pain model The recombinant HIV-gp120 protein (HIV Bal gp120; NIH AIDS Research and Reference Reagent Program) in PBS was stored in a −80°C freezer. At the time of injection, gp120 was slowly thawed, diluted to a concentration of 20 ng/μl in ice-cold PBS and maintained on ice. For gp120 administration, mice were anesthetized under 2% isoflurane, and 5 μl gp120 (100 ng) was intrathecally (i.t.) injected into the subarachnoid space between the L5 and L6 vertebrae using a 30-gauge needle attached to a Hamilton syringe [57,80]. Mice injected with vehicle were used as controls. Neuropathic pain model Peripheral neuropathy in mice was produced by unilateral L5 spinal nerve ligation as previously reported [60,81]. Briefly, mice were anesthetized with 2% isoflurane, and the left L5 spinal nerve was isolated and tightly ligated with 7-0 silk thread. Mechanical sensitivity was assessed 7 days after ligation. Immunohistochemistry Adult mice were deeply anesthetized with 4% isoflurane and perfused transcardially with 50 ml of D-PBS, followed by 50 ml of paraformaldehyde (PFA; 4% in 0.1 M phosphate buffer). The L4 and L5 DRG and lumbar spinal cord tissues were dissected out, post-fixed in the same PFA solution for 3 h at 4°C, and then cryoprotected in sucrose solution (30% in 0.1 M phosphate buffer) overnight at 4°C. Transverse sections (15 μm) were prepared on a cryostat (Leica CM 1900) and thaw-mounted onto Superfrost Plus microscope slides. For immunostaining, sections were incubated in blocking
buffer (5% BSA and 0.3% Triton X-100 in 0.1 M phosphate buffer) for 1 h at room temperature, followed by overnight incubation with primary antibodies, including anti-β-catenin (1:500, BD: 610153) and anti-substance P (SP), then with secondary antibodies (Jackson ImmunoResearch Laboratories), followed by incubation with DAPI (Sigma). IgG from the same animal sources was used as a negative control for immunostaining. Images were captured using a laser confocal microscope (Zeiss).

Figure 8. Up-regulation of Wnt signaling proteins in the neuropathic pain model. A. Neuropathic pain was induced 1 week post L5 spinal nerve ligation (SNL, n = 6). Mice with sham operation (without SNL, n = 6) were used as controls. B-F. Wnt3a (B), ABC (C), TBC (D), Wnt5a (E), and Ror2 (F) proteins in the ipsilateral (ipsi) and contralateral (contra) sides of the SCDH at 7 days after unilateral L5 spinal nerve ligation (SNL). Compared with the contra side, significant increases in the levels of Wnt signaling proteins were detected in the ipsi side of SNL but not control mice (n = 3). Data are summarized in the graphs at right (**, p < 0.01; *, p < 0.05; #, p > 0.05; Student's t-test).

Mechanical allodynia For the capsaicin and gp120 pain models, a series of calibrated von Frey filaments (0.1 to 2.0 g) were applied to the plantar surface of the mouse hind paw using the "up and down paradigm" described previously [82,83]. Mechanical allodynia was assessed by changes in paw withdrawal threshold in response to von Frey stimuli. For the SNL neuropathic pain model, mechanical sensitivity was assessed before and seven days after ligation by paw withdrawal frequencies in response to von Frey stimuli as previously reported [60,81]. Data analysis and statistics Densitometry of Western blotting was conducted and quantified using the ImageJ software (NIH) with β-actin as the loading control. Values are presented as mean ± SEM of 3 separate experiments. Statistical analysis was performed using Prism 5 (GraphPad) software. One-way ANOVA or Student's t-test was used to analyze data from different groups. Two-way repeated measures ANOVA with one repeated factor (time) was used for mechanical threshold data analysis (p < 0.05 was considered significant).
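As a concrete illustration of the densitometry workflow just described, the short Python sketch below normalizes band intensities to β-actin, computes the fold change over control, and applies an unpaired t-test. The numbers are made-up placeholder values, not data from this study, and scipy is used here in place of Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry values (arbitrary units) from 3 blots per group
wnt3a_ctrl = np.array([1.10, 0.95, 1.02])    # contralateral side / sham
wnt3a_snl = np.array([5.80, 6.30, 5.55])     # ipsilateral side after SNL
actin_ctrl = np.array([1.00, 1.05, 0.98])    # beta-actin loading controls
actin_snl = np.array([1.02, 0.99, 1.01])

# Normalize each band to its beta-actin loading control
norm_ctrl = wnt3a_ctrl / actin_ctrl
norm_snl = wnt3a_snl / actin_snl

fold_change = norm_snl.mean() / norm_ctrl.mean()
t_stat, p_value = stats.ttest_ind(norm_snl, norm_ctrl)  # unpaired Student's t-test

print(f"fold change = {fold_change:.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```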
Study on the Bridge Surface Deicing System in the Yuebei Section of the Jingzhu Highway

The severe snowy weather in early 2008 caused large-scale damage to the transportation systems of the southern provinces of China and seriously disrupted normal social and economic activity. The establishment of highway deicing and snow-melting systems has therefore become an urgent task. Because of the structural characteristics of bridges and the particularity of their construction technology, the design and construction of deicing systems for bridge sections are the key problems. In this article, considering the characteristics of the bridge sections in the Yuebei section of the Jingzhu Highway, we compare current highway and bridge snow-melting and deicing technologies and select appropriate ones to establish a deicing program that enhances traffic quality and reduces the number of accidents.

Introduction

In the cold areas of north China, long freezing periods and heavy snowfall reduce the road surface friction coefficient markedly compared with other seasons, which makes driving and braking difficult, easily induces traffic accidents, greatly reduces highway capacity, and brings large losses to daily life and the economy. The snow disaster in south China in January 2008 sounded an alarm for local traffic management departments, so it is imperative to establish a corresponding emergency snow-melting and deicing system. For the development of highway and bridge snow-melting and deicing systems, foreign and domestic industrial management departments and scientific research institutions have made active attempts.

In 1992, with the joint support of the US DOE, DOT, the Federal Highway Administration and the National Base Research Fund, the US began to implement the HBT (Heated Bridge Technologies) plan and systematically studied heated snow-melting and deicing for highways and bridges. Since 1998, Oklahoma State University (OSU) has carried out research on heated-liquid-circulation snow-melting and deicing technology for highways and bridges with financial support from the DOE, the Federal Highway Administration and the Oklahoma Traffic Office, and established the largest highway and bridge expert experiment system in the world. The Terrestrial Heat (Geo-Heat) Center at the Oregon Institute of Technology carried out a comprehensive comparison and analysis of road-surface snow-melting and deicing technologies and performed empirical studies on a ramp section of the Oak Ridge highway in Virginia and on a highway in Cheyenne, Wyoming.

Since 1994, Japan's Misawa Environmental Technology Co., Ltd. has developed about 40 demonstration highway snow-melting and deicing projects powered by geothermal and solar energy, making a large contribution to the protection of the natural environment. In 1995, Japan's National Resource and Environment Research Institute, with the help of the OECD and IEA, established in the city of Ninohe the first automatic road-surface heat-storage circulating-liquid snow-melting and deicing system; tests indicated that the system could save 84% of the electrical energy used by a heating-cable system. Cooperating with Yamaguchi University, the Japan 8th Technical Consultation Company carefully studied and compared snow-melting and deicing schemes for the Ushinogou highway tunnel exit, and finally adopted a heating-tube design using natural energy resources.
In 1994, cooperating with Zurich Polydynamics Ltd, the Switzerland Highway and Bridge Committee carried out an energy-storage snow-melting and deicing experiment on a bridge in the Darligen section of the Switzerland A8 Highway. Scholars at Warsaw University in Poland simulated and computed the solar-energy heat-storage transfer process with pipes buried under the bridge surface. Since 1980, Iceland has utilized its abundant geothermal resources and extended the application of road snow-melting and deicing engineering; the area covered in the whole of Iceland has reached 740,000 m2 at present. At present, urban road snow removal in China mainly depends on snow-melting agents and manual ice cutting and snow removal, while snow removal on main highways relies chiefly on chemical melting and mechanical removal. Physical snow-melting is limited to colleges, scientific research institutions and small-scale experiments because research started late. Li Yanfeng and Wu Haiqin of the Beijing University of Technology, together with the Harbin Institute of Technology, carried out early research on electrically heated road-surface and bridge snow-removal technology, and Zhu Qing and Zhao Jun of Tianjin University, with Tianjin Municipal Development Ltd, carried out a detailed theoretical study of the application of solar-energy soil heat-storage technology to highway snow-melting and deicing.

Applicability comparison of modern deicing technologies in the Yuebei section

A deicing system is a typical capital-intensive system: it requires a large investment of manpower, material and financial resources during development, construction and operation. At present, the usual snow-melting and deicing technologies include manual snow and ice removal, chemical snow and ice melting, mechanical snow and ice removal, and physical snow and ice removal. The manual deicing method can remove ice and snow with fairly good results. The chemical deicing technology spreads chemical agents on the road surface to lower the melting point of snow and ice and thereby remove them; this method is in common international use. The mechanical deicing technology uses machines to remove snow and ice from the road surface. The physical deicing technology mainly includes the following approaches at present. (1) Energy-storage road deicing technology establishes an energy-storage circulation system that heats the road surface through the flow of heated liquid driven by a circulation pump in icy and snowy weather, and thereby removes snow and ice. (2) Electrothermal road-surface deicing technology lays heating resistance wire or electrically conductive materials in the pavement and electrically heats the road surface to deice when ice and snow come. (3) Heating mechanical composite deicing technology combines the mechanical method with the heating method, exploiting the advantages of each and increasing the efficiency of removing ice and snow. The comparison results for the various deicing technologies are shown in Table 1.
The Yuebei section of the Jingzhu Highway is located in a mountainous region with a special geographical environment, and the highway network has special characteristics, so a proper snow-melting and deicing technology must be selected carefully to optimize the benefit-cost ratio. There are 74 bridges, accounting for 14% of the total mileage, in the Yuebei section of the Jingzhu Highway, of which four are large bridges with spans exceeding 500 meters. In general, the bridge deck pavement is thinner than 10 cm. It is therefore very important that the adopted heating mode or radiating materials do not affect the normal working state of the bridge deck, and the installation of the heating system should not impair the waterproofing performance or the serviceability of the deck. The construction of a bridge-deck heating system has special technical requirements. Moreover, a bridge is exposed to the air, has multiple radiating surfaces, and experiences fast air circulation, so its heat loss is much larger than that of an ordinary road surface and its heat-utilization efficiency is much lower; a control system is therefore needed that can automatically adjust the energy supply according to the ambient temperature and wind speed. Because heating produces uneven temperature fields in the bridge, the temperature difference between the bridge deck and the girder bottom will induce additional thermal stress. Therefore, the design of a bridge-deck deicing system faces more constraints.

Although the manual deicing method can eliminate the ice layer on the bridge deck, it has low efficiency, high cost and too long a response time; it interferes with traffic and safety and can damage the deck surface during the work. Chemical deicing easily damages the environment on both sides of the highway: vegetation withers and drinking water is polluted. At the same time, chloride deicing agents strongly degrade the performance of structural materials. The cost of structural corrosion and environmental damage caused by chloride deicing agents has been estimated at 4% of GDP, and the annual repair cost is about 200 billion dollars, roughly four times the initial construction cost. Of 102 bridges investigated in Copenhagen, 50% have serious corrosion of the reinforcing steel bars. The flyover at Xizhimen in Beijing has been in service for only 20 years, but serious concrete spalling and reinforcement corrosion have occurred in the deck and piers. The purely mechanical deicing method is slow, and the cutting and knocking methods easily damage the road surface, leading to water leakage and structural damage; furthermore, the purchase and maintenance costs of the machines are high, skilled operators are scarce, and flexibility is poor, all of which limit large-scale use of the mechanical method. Because of its large energy consumption and expensive operation, electric-heating road-surface deicing cannot be implemented as the main measure and can only be considered as an auxiliary one. Given the limited length of the bridge sections, the energy-storage deicing technology that buries circulating-liquid pipes under the bridge deck deserves serious consideration; it can fully utilize renewable natural energy and save energy, and the surface heat-storage rate can reach
36%; the benefits for environmental protection and rational resource use are obvious, and automation and timely response are easy to implement. However, its concrete implementation requires laying pipes over a large road-surface area and establishing an environmental monitoring and control system and an energy supply system, which consumes large amounts of manpower, material and financial resources and requires comprehensive overall planning; this technology is therefore suitable for the deicing schemes of newly built highway bridges.

Heating mechanical composite deicing technology uses heating equipment to warm the frozen ice and snow before removal, appropriately raising the temperature of the ice and snow layer, reducing its strength, reducing the difficulty of removal, and increasing the speed of mechanical ice and snow removal. The technology can be realized by modifying existing deicing machines so that their performance meets the requirements of composite deicing, which effectively reduces the initial cost; it also reduces damage to the road and bridge surfaces and extends their service life. Based on a comparison of the performance indexes and economic feasibility of the various technologies, in this article we apply the heating mechanical composite deicing technology to the bridge-deck deicing scheme in the Yuebei section of the Jingzhu Highway.

Heating mechanical composite deicing method

The thickness and strength of the snow and ice layer largely determine the snow-removal and deicing effect. Table 2 shows how the shear-strength coefficient of artificially compacted snow changes with temperature and density, and Table 3 shows how the hardness of ice changes with temperature. The shear strength and compressive stress of the ice and snow layer increase markedly as the temperature decreases and the density increases: the denser the layer and the lower the temperature, the harder it is to remove. This is why the purely mechanical deicing method is slow and why the cutting and knocking methods easily damage the road surface, causing water leakage and structural damage.
The method in this article uses heating equipment to raise the temperature of the ice layer appropriately before deicing, reducing the strength of the snow and ice layer and the resistance to the mechanical shovel, and thereby increasing the clearance rate and working speed. The method controls the temperature of the ice layer so that the average temperature stays below 0 °C; the ice layer is therefore not melted, which avoids the large heat consumption of melting ice and reduces energy consumption and cost. There are many heating methods, among which microwave heating and far-infrared heating deserve consideration. However, microwave heating requires specially developed equipment, and microwave leakage in particular can harm people and the environment. In this article, we propose the mature direct-fired far-infrared heating method, which uses liquefied petroleum gas as fuel and causes little environmental pollution. Figure 1 is a sketch of the heating mechanical composite snow-removal and deicing design, which consists of a tractor and a semi-trailer deicing unit towed by the tractor. Ice and snow removal still uses the mechanical method: the front of the tractor carries a snow shovel with an advanced profile that quickly shovels the snow, and the semi-trailer carries the heating equipment for deicing, a high-strength steel-wire roller brush and a cutting shovel. After the snow shovel removes the snow and only a thin ice and snow layer remains, the far-infrared equipment heats the layer to reduce its shear strength and compressive stress, and then the steel-wire brush and the cutting shovel clear the thin ice and snow remaining on the bridge deck; the steel-wire brush and the cutting shovel are set at an angle to the direction of travel so that the ice and snow are pushed to the side of the bridge.
The advanced snow shovel, with automatic obstacle-avoiding equipment and strong profile-following ability, consists of four sections and can press as close as possible to the road surface. According to highway snow-removal standards, and on the premise of not damaging the road surface, the snow shovel can thin the ice and snow layer to 6-10 cm; when the ice and snow layer is thin, the far-infrared heating equipment is used to raise the average temperature of the ice and snow, reduce the cutting resistance of the machine, increase the clearance rate and reduce wear on the cutting blade. In the scheme of this article, we adopt a metal-fiber burner as the far-infrared radiator; the burner uses premixed-gas surface combustion technology. Compared with other surface burners, the combustion intensity of the metal-fiber burner is high, the adjustment range is large (the same burner can operate in either radiant red-fire or blue-fire mode), the combustion is very uniform, and the combustion efficiency is high (the infrared radiation efficiency can reach 50%); it also offers low pollutant emission, low pressure drop, good flashback safety, good control of thermal expansion, resistance to thermal shock, quick cooling and fast response control. Experiments indicate that in an open environment, the surface temperature of an upward-facing metal-fiber burner varies from 750 °C at 100 kW/m2 to 1000 °C at 500 kW/m2; when the burner is in a closed environment, the surface temperature and radiation efficiency are further enhanced, and heat radiation becomes the main form of heat release in this temperature range. Considering the ice and snow removal speed and the running cost, we select a heating temperature of 800 °C.

Heat transfer computation and analysis in the ice layer heating process

To select a proper heating temperature that meets the requirements of deicing speed and cost, we study the heating and temperature rise of the ice and snow layer and use numerical simulation to choose optimal working parameters. According to the deicing and snow-removal requirements and the space available on the vehicle, the heater is a flat panel 3 m long placed 300 mm above the upper surface of the ice and snow layer, with a panel surface temperature of 1100 °C; the ice and snow layer is 10 mm thick, the bituminous layer is 150 mm thick, the cement-stabilized base is 200 mm thick, and the lime-fly-ash soil subbase is 300 mm thick.
The heating of the ice layer is an unsteady heat-transfer process, and in the simplified computation and analysis we simplify the model as follows. The vehicle speed is low (<5 m/s) and the temperature difference between the air and the ice surface is smaller than 20 K; because convective heat exchange is only a small fraction of the radiative heat exchange, it is neglected in the computation, and only radiative heating is considered. The side walls of the heater are made of insulating material and their temperature is not high compared with the surface of the ice and snow layer, so their radiation is also neglected. Because of the small temperature differences, the faces of the ice and snow layer perpendicular to the road surface can be regarded as adiabatic, with no heat flow, so the heat-transfer process is simplified to one-dimensional unsteady conduction. The boundary condition is that the temperature at a depth of 400 mm below the bridge surface is constant, and the radiative heat-exchange properties of the heating surface and the ice surface are taken from the relevant literature. The initial temperature field from the ice layer down to 400 mm depth is set from measured conditions reported in the literature and is computed with the ANSYS software. In this computation, we first compute the temperature field for a specified air temperature and soil temperature; after checking against measured results, we compute the temperature field of the ice-covered deck under constant soil and air temperatures and use it as the initial field for the heating computation. According to the literature, the far-infrared band accounts for the main radiation absorption of ice; in the absence of exact experimental data for the ice layer, the penetration depth of far-infrared radiation into ice is taken as approximately 10 mm and the radiation absorptivity as 0.5, and the absorbed radiation is treated as a volumetric heat load. The results are given in Table 4, which lists the deicing speeds at which the temperature of the ice and snow layer rises to 0 °C, and the corresponding fuel consumption, for different times of day and air temperatures; when the air temperature is -2 °C or -4 °C, mechanical machines alone can deice without the heating equipment.
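To make the simplified model concrete, the sketch below solves the one-dimensional unsteady conduction problem with an explicit finite-difference scheme, treating the absorbed far-infrared radiation as a volumetric source in the top 10 mm of ice. The material properties and the incident flux are illustrative round numbers, not the values used in the paper's ANSYS computation, and the layered pavement is lumped into a single substrate for brevity.

```python
import numpy as np

# Grid: 10 mm ice on top of a 390 mm substrate, fixed temperature at the bottom
dx = 0.002                      # m, node spacing
n_ice = 5                       # 10 mm of ice
n_sub = 195                     # 390 mm of substrate (pavement layers lumped)
n = n_ice + n_sub

# Illustrative properties (ice / lumped pavement); no harmonic averaging at
# the interface, which is acceptable for a sketch but not for production use
k = np.where(np.arange(n) < n_ice, 2.2, 1.5)        # W/(m K)
rho = np.where(np.arange(n) < n_ice, 917.0, 2200.0)   # kg/m^3
cp = np.where(np.arange(n) < n_ice, 2100.0, 900.0)    # J/(kg K)
alpha = k / (rho * cp)

# Volumetric source: assumed flux q0, absorptivity 0.5, deposited over the ice
q0 = 20e3                       # W/m^2, assumed incident far-infrared flux
q_vol = np.zeros(n)
q_vol[:n_ice] = 0.5 * q0 / (n_ice * dx)

T = np.full(n, -10.0)           # initial field, deg C (uniform for simplicity)
dt = 0.4 * dx**2 / alpha.max()  # explicit (FTCS) stability limit with margin

t = 0.0
while T[:n_ice].mean() < 0.0:   # heat until the ice layer averages 0 deg C
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha[1:-1] * lap[1:-1] + q_vol[1:-1] / (rho[1:-1] * cp[1:-1]))
    # top node: adiabatic surface (radiation already counted as a volumetric source)
    T[0] += dt * (alpha[0] * (T[1] - T[0]) / dx**2 + q_vol[0] / (rho[0] * cp[0]))
    T[-1] = -10.0               # constant deep-ground temperature
    t += dt

print(f"ice layer reaches 0 C after ~{t:.0f} s of heating")
```

Dividing the heater length (3 m) by the heating time obtained this way gives an estimate of the admissible vehicle speed, which is the quantity tabulated in Table 4.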
Conclusion

Winter highway and bridge deicing and snow removal is an important task of the traffic management department and plays an important role in maintaining normal social and economic life, while the choice of concrete method is constrained by cost. For the concrete task of bridge deicing in the Yuebei section of the Jingzhu Highway, this article analyzed and compared the characteristics, applicability and costs of the modern mainstream deicing technologies. The heating mechanical composite deicing technology best matches the deicing demands of this highway because of its low cost, strong applicability and flexible maneuverability, so it is the optimal alternative. In implementing the concrete scheme, further study is needed of problems such as machine maintenance, frost warning, and the coordination and organization of deicing operations. At the same time, because of its simple operation and management, obvious effect, low energy consumption and environmental benefit, the energy-storage snow-melting and deicing technology should also be considered in the deicing systems of newly built highway and bridge projects.

Table 1. Performance comparison of various deicing technologies
Table 2. Shear-strength coefficients of artificially compacted snow
Table 4. Deicing speed and fuel consumption under different temperature conditions
Figure 1. Sketch of the heating mechanical composite deicing car
Figure 2. Heating computation sketch of the ice layer
A Neuro-Fuzzy System for Diagnosis of Soya-Bean Diseases

INTRODUCTION

The technological evolution in computing has been a principal tool in increasing agricultural output. Nevertheless, numerous problems and constraints work against a bountiful, high-quality harvest in commercial soyabean production, Dugje et al (2009). ICTs play a vital role in facilitating agricultural growth. Scientific and technological developments, including e-agriculture, decision support systems for farmers, and mobile applications, have delivered relevant services for farmers in tackling all forms of crop diseases. ICTs have promoted new farming techniques and distributed new knowledge through the use of computing technology for facilitating the diagnosis and treatment of crop diseases, Swanson and Rajalahti (2010).

An expert system is a branch of artificial intelligence that is highly beneficial to experts in various fields in providing solutions to uncertain and imprecise tasks. The capacity and efficiency of expert systems in imitating the human reasoning process and providing advice comparable to human expertise have singled them out as one of the artificial intelligence branches widely embraced in many fields today, Yialouris and Sideridis (1996). Neuro-fuzzy is also one of the artificial intelligence (AI) techniques that researchers have adopted for decades to develop systems providing optimal solutions to problems that are vague and imprecise in nature. Neuro-fuzzy is an efficient technique that combines the strengths of neural networks and fuzzy logic by utilizing the approximation method of a neural network to compute the parameters of a fuzzy system. The layered architecture of neuro-fuzzy systems, with distinct functionality in each layer, makes them well suited to building a system whose final output is more optimized. The choice of the neuro-fuzzy technique for this work is justified by the need to accept five major symptoms of soyabean disease as input parameters for disease classification and for computing the intensity proportion of a particular disease.

STATEMENT OF THE PROBLEM

Findings have shown that the non-availability of improved and modern technologies is the main constraint on agricultural production in Nigeria, Asoegwu (2007). A few studies in the past have demonstrated fuzzy-logic based systems for diagnosing crop diseases, while a few authors adopted only the neural-network technique to develop expert systems for soyabean. However, hybridization of AI techniques, in which different techniques are combined to develop a more robust expert system, has been fairly ignored, Dybowsky and Gant (2015). Due to this research gap, there is a need to apply a hybrid of neural networks and fuzzy logic to form a robust, highly intelligent system for an optimal solution.

AIM AND OBJECTIVES

The aim of this study is to develop an interactive neuro-fuzzy based system for identifying a specific disease and determining the degree of damage (intensity level) inflicted on a soyabean plant. The specific objectives are:

I. Review the literature on interactive neuro-fuzzy based systems.
II. Design a neuro-fuzzy based system with five input parameter fields for symptoms on the root, leaves, stem, flowers, and pod, with two outputs for disease type and intensity level of the disease, respectively.
III. Model the dataset extracted from a database for training with the aid of the Adaptive Neuro-Fuzzy Inference System (ANFIS).
IV. Implement, test, and evaluate the neuro-fuzzy based system.

SIGNIFICANCE OF THE STUDY

Indigenous farmers usually diagnose crop diseases in the form of linguistic values that are vague and imprecise by nature.
SCOPE OF THE STUDY

This study is strictly restricted to using a particular type of neuro-fuzzy system, known as the Adaptive Neuro-Fuzzy Inference System (ANFIS), to develop a system that captures specific input parameters as symptoms on the leaves, pod, root, flower, and stem of a soyabean plant and identifies the disease type the plant is infected with, along with the extent or degree of the infection, as output.

LITERATURE REVIEW

The adoption of expert systems for providing up-to-date information and diagnosis of crop diseases in agriculture dates back to 1980. It is not a new concept, because a large number of agricultural institutes and researchers across the globe have been developing different types of expert systems for local farmers within their regions and catchment areas. Every expert system requires human expertise to provide a knowledge base that can be encoded to solve related problems in a specified domain, Patterson (2004). Tremendous technological advancement in the software and hardware industries has provided opportunities for researchers to explore every aspect of artificial intelligence techniques in building relevant systems and devices for indigenous farmers. It should be noted that expert systems can be developed in different ways, depending on the theory or technique adopted by developers to enhance their efficiency. In computer science, many researchers have adopted neural networks and fuzzy logic as artificial intelligence techniques for enhancing the effectiveness and efficiency of expert systems for agricultural use. Farming all over the world has embraced all forms of technological advancement in classifying livestock disorders, finding tactical solutions for crop cross-breeding, and diagnosing crop diseases, Khan (2008).

According to Duan (2005), most existing expert systems for diagnosis were confined to the medical field, and robust agricultural expert systems for practitioners are still few in number, Gal et al (2011). Numerous studies have equally acknowledged the scarcity of work on expert systems for diagnosing and managing the pests and diseases of a given crop (Yialouris and Sideridis, 1996; Mahaman et al, 2003; Clark et al, 1991; Koumpouros, 2004). In addition, most research still focuses on the adoption of only one artificial intelligence technique for developing expert systems for diagnosing nutritional disorders in crops such as tomato, soyabean, cassava, and rice. A comparative study of various non-AI-based expert systems in agriculture by Babu et al (2006) established that most classical expert systems had capabilities for fertilization scheduling, farm assessment and pest control, and the diagnosis and classification of crop diseases. Artificial intelligence is one of the concepts most widely used by computer science researchers for modelling and simulating ambiguous tasks. Previous works have adopted various methodologies and techniques in building expert systems.
The diversity of methodologies used in previous works includes neural networks, fuzzy logic, case-based reasoning, intelligent agent systems, object-oriented methodology, database methodology, knowledge-based systems, and ontology, Shafinah et al (2013).

Pratibha and Toran (2012) developed a neural network model to detect infected and non-infected areas on a soyabean plant. The symptoms were quantified and used as the dataset for training and learning. It employed the back-propagation algorithm for effective learning, which made the model more robust in classifying and detecting the two crisp linguistic outputs, referred to as non-infected and infected areas in the model. However, the proportions of these two areas could not be ascertained by the model. The main objective of the system developed and implemented by Singh et al (2011) was to provide an accessible graphical user interface for literate soyabean farmers, so that they could have up-to-date knowledge of how to diagnose soyabean diseases based on symptoms. The functionality of the web-based system depended strongly on the fuzzy-logic technique: symptoms given as linguistic values were converted into fuzzy sets with the help of triangular and trapezium membership functions, and the Center of Area (COA) defuzzification method was adopted to compute the weighted average of the fuzzy set. The strength of the system was its capacity to identify the specific pest responsible for certain damage and to suggest control measures for farmers. The knowledge base of the system consisted only of limited, common pests and their symptoms, and the system could not tackle or diagnose pests and diseases not included in the rule base. The web-based fuzzy system proposed by Saini et al (2011) for pest management equally adopted the fuzzy-logic technique, with more robust features to estimate pest activity levels on soyabean. It was a decision support system for farmers to acquire the knowledge and information needed to manage pest and disease attacks on crops.

Most researchers and software developers have discovered the potential of AI techniques as a powerful engine for developing expert systems. The use of classical expert systems in most disciplines has become obsolete because of the imprecise and vague nature of some contemporary problems that classical expert systems cannot solve. The contribution of Li et al (2002), a web-based expert system for the diagnosis of fish diseases with an embedded 400-rule base and a graphical user interface, was another contribution to knowledge. It was a flexible web-based application used for diagnosing freshwater fish diseases. The system could identify the disease type but could not determine the intensity of the disease.

In building efficient expert systems, some researchers have found the hybridization approach to be the best option for providing optimal decision-support expert systems for farmers. The integration of more than one artificial intelligence technique so that they complement each other, as adopted in this work, is known as hybridization. Gouzalez-Diaz et al (2009) investigated the need to assist farmers by developing an expert system that could appropriately identify harmful organisms affecting the pepper crop in Spain.
The artificial intelligence technique adopted for that system was rule-based knowledge, with a user interface for capturing symptoms and an embedded IF-THEN structure to infer the expert rules for the possible organism affecting the pepper plant. Although the reasoning and knowledge representation of the system were highly modular and expressed as units of knowledge for solving particular problems, the system proved complicated to modify, particularly when new rules introduced to the knowledge base contradicted previous rules. The proposed work of Mahaman et al (2002) equally used rule-based knowledge as the power engine for an expert system developed to diagnose attacks of honeybee pests on crops and to provide suitable treatments. It was a Boolean-logic approach and lacked the capacity to provide comprehensive, specific treatment for various degrees of attack. Pests and diseases affecting bountiful yields in tomato production were seriously tackled by the integral intelligent system developed by Lopez-Morales (2008). The rule-based system was implemented to prevent, diagnose, and control possible attacks on the tomato crop by pests and diseases. It was most useful for farmers who could read and access the internet, but the system was too wordy and difficult for farmers who were illiterate. An Operational Automatic Identification Tool (OAIT), supported by a rule base, was designed for indigenous farmers who could not specifically identify the pests attacking their crops. The main functional module of the tool was its potential for disease classification and suggested treatments, Mahaman et al (2003).

The rice plant is badly affected by paddy diseases. Research has shown that rice farmers in some regions were threatened and discouraged from planting rice as a result of paddy disease attacks. The proposed model of Abdullah et al (2007) was a robust, enterprise image-based diagnosis expert system for paddy disease in rice. The expert system used fuzzy-logic architecture, such as membership functions, to define the degree of vagueness in the appearance of lesions and colours on the infected area of a rice plant for proper diagnosis. The system performed better than other expert systems that explored the rule-based knowledge technique; yet it was a model simulated in MATLAB for research purposes and not for end users, because it lacked a friendly user interface. The agricultural expert system developed by Chakraborty (2008) was a fuzzy-based system able to predict the likely degree of occurrence of malformation diseases in mangoes. Although the system explored the features of fuzzy logic with one type of membership function for better performance, it performed relatively worse in comparison with the fuzzy-based expert system proposed by Cintra et al (2011) for diagnosing tomato disease. The system developed by Cintra provided various input parameters for each disease and their corresponding outputs. The work used triangular, trapezoidal, and Gaussian membership functions to define the fuzzy sets for the system, with the Mamdani fuzzy inference method to formulate suitable rules based on the rule knowledge and the decision made. The output of the system was more reliable and delivered an optimal solution. Zhang et al (2010) developed an expert system with an artificial neural network to diagnose the various diseases that attack tomato.
A large dataset was used for training and testing, and a high percentage of accuracy, estimated at 95%, was recorded. Nevertheless, it was not a user-friendly application for end users. The hybrid system developed by Kolhe et al (2011) was a web-based intelligent system that provided a graphical user interface for end users to diagnose diseases affecting oilseed crops. The system was designed with an object-oriented approach, with the rule base stored in an MS-SQL Server database engine and fuzzy logic adopted for drawing inferences. The hybridization of the system comprised the fuzzy-logic, object-oriented, and rule-based knowledge techniques. It was highly scalable, with good functionality for diagnosing crop diseases. Kaloudis et al (2005) also designed an expert system that combined rule-based knowledge and the object-oriented technique to diagnose forest pests that damage forest trees. The system was developed with an n-tier architecture. An enterprise relational database management system, MS-SQL Server, was used as the base for the expert rules, while the interface was designed with ASP.NET and C# was used for the middle-tier coding. Its benefits to users included audio-visual functionality and visual recording, but it remained technically deficient in the accurate computation of imprecise and vague problems. As mentioned in the literature regarding the scarcity of robust expert systems that integrate a variety of AI techniques, there is an urgent need to fill this gap by designing and developing expert systems that combine the strengths of several powerful techniques into a hybrid solution for indigenous soyabean farmers in diagnosing and managing farm pests and diseases.

DATASET DESCRIPTION

The dataset used for implementation will be extracted from the Soybean Large Dataset (SLD) in the University of California, Irvine (UCI) machine learning repository. In this study, the intention of the researcher is to collect more than one thousand (1000) records of soyabean plant disease symptoms and the corresponding target outputs from the online SLD database, and to employ the assistance of agricultural experts in the domain of crop planting for proper interpretation of the dataset. For clarity, each record in the dataset will be arranged with soyabean plants on the rows, and the columns will consist of the root, leaves, stem, flower, and pod. These are the major components of soyabean plants that can be infected with fungi, viruses, pests, and nematodes. When a soyabean plant is attacked by fungi, the symptom can easily be manifested on the leaves or pod, and so on.

INPUT VARIABLES

In any dataset, the input variables are the most important parameters, which farmers submit to the system for investigation in order to form the basis for disease diagnosis. Each attribute (input variable) is graded on a five-point linguistic scale, from Very Low (VL) = "1" up to Very High (VH) = "5".

DESCRIPTION OF THE MODELING TOOL

The proposed neuro-fuzzy system will be developed in the MATLAB software with the ANFIS (Adaptive Neuro-Fuzzy Inference System) toolbox, as shown in Fig. 3.1. The use of MATLAB guarantees accurate results and remains the best tool for system training and testing within a short time (Maryam and Laya, 2016).
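As a sketch of how such a record might be encoded before training, the snippet below maps the five-point linguistic severity scale (Very Low = 1 through Very High = 5) on the five plant parts to a numeric feature vector with the two targets; the field names, class label, and intensity value are hypothetical and not the actual SLD schema.

```python
# Minimal sketch of record encoding; all names and values are illustrative.
SEVERITY = {"VL": 1, "L": 2, "M": 3, "H": 4, "VH": 5}

def encode_record(symptoms, disease_id, intensity):
    """Map linguistic symptom grades on the five plant parts to a numeric
    feature vector plus the two targets (disease type, intensity level)."""
    parts = ("root", "leaves", "stem", "flower", "pod")
    features = [SEVERITY[symptoms[p]] for p in parts]
    return features, (disease_id, intensity)

x, y = encode_record(
    {"root": "L", "leaves": "VH", "stem": "M", "flower": "VL", "pod": "H"},
    disease_id=3,      # hypothetical class label
    intensity=0.72,    # hypothetical intensity proportion in [0, 1]
)
print(x, y)            # [2, 5, 3, 1, 4] (3, 0.72)
```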
The following steps are involved in modeling with the ANFIS editor in MATLAB for soyabean disease classification and intensity-level computation.

Step 1: The collection of symptoms from various soyabean plants as inputs, paired with target outputs, will be identified and allocated for training and testing.
Step 2: The dataset will be saved in MS-Excel file format and imported into the MATLAB workspace using the uiimport command.
Step 3: To display the ANFIS editor dialogue box, the anfisedit command will be typed in the MATLAB command area.
Step 4: In the ANFIS editor environment, clicking the "Load Data" command button loads the data from the specified dataset for training and testing and plots it in the plot region.
Step 5: To view the structure and model of the proposed system based on the inputs and outputs, the "Generate FIS" command and structure buttons are clicked, respectively.
Step 6: Under the "Train FIS" section group, the FIS model parameter hybrid optimization method (a mixture of back-propagation and least-squares methods) can be selected. In this section, the number of training epochs and the error tolerance are also chosen.
Step 7: Clicking the "Train now" button trains the FIS model, adjusts the membership function parameters, and plots the training data error in the plot region.
Step 8: Under the "Test FIS" section group, the Test button is clicked to validate the trained FIS.

ANFIS (ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM) ARCHITECTURE

The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a hybrid neuro-fuzzy inference expert system that works as a Takagi-Sugeno-type fuzzy inference system; it was developed by Jyh-Shing Roger Jang (1993). The technique provides a method for the fuzzy modeling procedure to learn information about a data set, in order to compute the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. This learning method works in a manner similar to that of neural networks. In this work, ANFIS, as a hybridization of a fuzzy inference system and an artificial neural network, is adopted as the power engine for the proposed system. The design of appropriate membership functions to produce input-output pairs and the construction of fuzzy if-then rules are carried out by ANFIS. The block diagram of the ANFIS architecture is presented in Fig. 3.

Layer-one is the fuzzification layer, with node output

$O_{1,i} = \mu_{A_i}(x)$,

where x and y are the linguistic variables input to node i, A is a linguistic label (Very Low, Low, Medium, High, Very High) associated with this node, and $\mu_{A_i}$ is the membership function of $A_i$, specifying the degree to which the given x (or y) satisfies the quantifier $A_i$; $\mu_{A_i}(x)$ and $\mu_{B_i}(y)$ can adopt any fuzzy membership function. Layer-two has fixed nodes in which the incoming signals from Layer-one are multiplied and the product is sent out. Each node output represents the firing strength of a rule:

$O_{2,i} = w_i = \mu_{A_i}(x)\,\mu_{B_i}(y)$.

Layer-three also consists of fixed nodes. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths; the outputs of this layer are called normalized firing strengths:

$O_{3,i} = \bar{w}_i = \dfrac{w_i}{w_1 + w_2}$.

In Layer-four, the nodes are adaptive, like those in Layer-one. The output of each node in this layer is the product of the normalized firing strength and a first-order polynomial:

$O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)$.
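A minimal sketch of the layer computations above, for a first-order Sugeno system with two inputs and two rules, is shown below; the final summation layer of the standard ANFIS formulation, not detailed above, appears as the return value. The Gaussian membership parameters and linear consequent coefficients are illustrative; in ANFIS they would be fitted by the hybrid learning described in Step 6.

```python
# Minimal sketch of a first-order Sugeno forward pass mirroring the ANFIS layers.
import numpy as np

def gauss(v, c, s):                      # Layer 1: fuzzification
    return np.exp(-0.5 * ((v - c) / s) ** 2)

def anfis_forward(x, y, premise, consequents):
    w = np.array([gauss(x, cx, sx) * gauss(y, cy, sy)   # Layer 2: firing
                  for (cx, sx, cy, sy) in premise])     # strengths w_i
    w_bar = w / w.sum()                                 # Layer 3: normalization
    f = np.array([p * x + q * y + r                     # Layer 4: rule outputs
                  for (p, q, r) in consequents])        # w_bar_i * f_i
    return float(np.dot(w_bar, f))                      # Layer 5: summation

premise = [(1.0, 0.8, 2.0, 0.8),         # rule 1: (c_x, s_x, c_y, s_y), assumed
           (4.0, 0.8, 4.5, 0.8)]         # rule 2
consequents = [(0.5, 0.2, 0.1),          # rule 1: (p, q, r), assumed
               (0.1, 0.9, 1.0)]          # rule 2
print(anfis_forward(2.0, 3.0, premise, consequents))
```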
PROPOSED NEURO-FUZZY SYSTEM

The diagram in Fig. 3.3 below consists of five stages: the input stage, fuzzification, the rule base, the inference engine, and defuzzification. The first stage allows the crisp inputs, the manifested symptoms on the root, leaf, stem, flower, and pod, to be passed into the fuzzification stage, where they are converted into fuzzy inputs with the help of the Gaussian membership function. From the fuzzifier, the membership functions of the fuzzy inputs are fed into the neural network block, which consists of the inference engine connected to the rule base. The back-propagation algorithm will be used to train the inference engine for the appropriate selection of the rule base. It should be noted that the purpose of training the inference engine with the back-propagation algorithm is to generate the proper rules, which are fired from the neural network to produce the linguistic output. The defuzzifier then converts the linguistic output generated by the neural network into a crisp output, i.e., the severity level of a particular soyabean disease as a floating-point number.

System Validation

This is the process of presenting untrained input/output data to the trained fuzzy inference system (FIS) in order to ascertain how well the FIS model determines the intensity level corresponding to the set output values. The proposed neuro-fuzzy system will be validated with the testing data supplied to it after training. The mean square error (MSE) will be used to check the accuracy of the model by measuring the differences between the original data and the obtained values. The MSE formula is

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(a(i) - b(i)\big)^2$,

where n is the number of data pairs in the dataset, a(i) is the i-th desired output, and b(i) is the i-th target output.

Defuzzification

This technique involves computing the sampled membership functions to establish their membership grades for use in a fuzzy logic expression, thereby determining the outcome region or producing a single scalar quantity. Of the two defuzzification methods, Mean of Maximum (MOM) and Center of Area or Centroid (COA), COA is the most popular and appealing (Sugeno, 1985; Lee, 1990). For this reason, the COA method will be adopted to calculate the weighted average of the fuzzy set in the system, using the expression

$z^{*} = \dfrac{\int \mu_A(z)\, z \, dz}{\int \mu_A(z)\, dz}$,

where z is the output variable and $\mu_A(z)$ is the membership function of the aggregated fuzzy set A with respect to z.

Fuzzy Rule Base

Fuzzy implication rules will be adopted to mimic an expert's reasoning with statements that are imprecise by nature. With the assistance of agricultural experts, more than one thousand rules will be generated, based on the diverse symptoms affecting the root, stem, leaf, pod, and flower of a soyabean plant, in accordance with knowledge of the disease domain. The fuzzy rules are strictly linguistic rules of the IF A THEN B format, where A is referred to as the premise and B depicts the consequence of the rule. The rule bank will contain one thousand rules. Each rule is a collection of fuzzy sets (Very Low, Low, Moderate, High, and Very High) from a list of symptoms that have occurred, combined with "AND" to indicate the definite status of a specific soyabean disease. The proposed system will provide optimal benefits over the shortcomings found in the works reviewed because, once the system has been set up, the embedded neuro-fuzzy technique has the capacity to identify which rules the system has developed, so that experts can examine them and ensure that the problems are appropriately addressed.
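The two formulas above can be checked numerically with the short sketch below: an MSE over paired desired/target outputs, and a centre-of-area defuzzification evaluated by trapezoidal integration over a sampled output universe. The triangular aggregated set used here is an illustrative placeholder.

```python
# Minimal sketch of MSE validation and COA (centroid) defuzzification.
import numpy as np

def mse(desired, target):
    desired, target = np.asarray(desired), np.asarray(target)
    return float(np.mean((desired - target) ** 2))

def coa(z, mu):
    """z* = integral(mu(z) * z dz) / integral(mu(z) dz), via trapezoids."""
    return float(np.trapz(mu * z, z) / np.trapz(mu, z))

z = np.linspace(0.0, 100.0, 501)                    # intensity universe, %
mu = np.clip(1.0 - np.abs(z - 60.0) / 25.0, 0, 1)   # triangular set around 60
print(mse([0.7, 0.4, 0.9], [0.65, 0.5, 0.85]))      # validation error
print(coa(z, mu))                                   # crisp intensity, ~60
```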
This system will be more effective and efficient for diagnosing soyabean diseases and determining the intensity level of a disease using ANFIS. The output will show the intensity and classification of any disease as very low, low, medium, high, or very high. The design of the system can be divided into three stages: ANFIS model development, network training, and system validation and testing. When the system is implemented, the neuro-fuzzy based system is expected to prove suitable and feasible as a supportive tool for soyabean disease diagnosis.
Intractable Otitis Media Presenting as Falsely Positive for Proteinase 3-ANCA: A Case Report

Herein, we report a case of otitis media caused by methicillin-resistant Staphylococcus aureus (MRSA) presenting as falsely positive for proteinase 3 (PR3)-antineutrophil cytoplasmic antibodies (ANCA). A 47-year-old woman was referred to our hospital with a complaint of left otorrhea. An otorrhea culture yielded MRSA, and the patient was treated using tympanoplasty. Postoperative administration of teicoplanin led to drug-induced neutropenia and was discontinued 4 days after the operation. One month after the operation, the patient's otorrhea recurred, accompanied by hearing impairment. The otorrhea culture yielded MRSA again, while serum was positive for PR3-ANCA (6.8 U/mL). As MRSA was detected in the patient's otorrhea sample, she was treated with linezolid, and her symptoms improved immediately. Although the PR3-ANCA positivity remained, the patient's otorrhea and hearing impairment had not recurred for 3 years when this report was submitted. Therefore, we conclude that this is a case of false PR3-ANCA positivity.

INTRODUCTION

Antineutrophil cytoplasmic antibodies (ANCAs) to myeloperoxidase (MPO) and proteinase 3 (PR3) are associated with primary vasculitis affecting small- to medium-sized vessels. Systemic vasculitis involving these antibodies is known as ANCA-associated vasculitis (AAV) [1]. The initial signs of AAV are otologic symptoms, such as otitis media, hearing loss, vertigo, and facial palsy. Nonetheless, the AAV diagnosis is often challenging when symptoms are localized to the ear [2]. For this reason, the study group of the Japan Otological Society recently proposed a new diagnosis: otitis media with AAV (OMAAV) [3]. OMAAV is classified if the following three criteria (A, B, C) are fulfilled: (A) a disease onset with initial signs/symptoms due to intractable otitis media with effusion or granulation that is resistant to antibiotics and insertion of tympanic ventilation tubes; (B) at least one of the following three findings: (1) positivity for serum MPO- or PR3-ANCA; (2) histopathology consistent with AAV, namely necrotizing vasculitis predominantly affecting small vessels, with or without granulomatous extravascular inflammation; and (3) at least one accompanying sign/symptom of AAV-related involvement other than the ear (eye, nose, pharynx/larynx, lung, kidney, facial palsy, hypertrophic pachymeningitis, and others); and (C) exclusion of other types of intractable otitis media, such as bacterial otitis media, cholesterol granuloma, cholesteatoma, malignant osteomyelitis, tuberculosis, neoplasm, and eosinophilic otitis media, as well as exclusion of autoimmune diseases and vasculitides other than AAV, such as Cogan's syndrome and polyarteritis nodosa, among others. According to these criteria, OMAAV should only be diagnosed when the patient is positive for ANCAs or vasculitis, of which the latter is revealed during a pathological examination [3]. However, vasculitis is rarely diagnosed after the pathological examination of a head and neck lesion. Thus, positivity for ANCAs is an important finding in the diagnosis of OMAAV.
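As a compact restatement, the sketch below encodes criteria (A)-(C) as a boolean check; the field names are illustrative, real diagnosis of course rests on clinical judgement rather than a lookup, and the example mirrors the reported case, where criterion (B) was (falsely) satisfied by PR3-ANCA while criterion (C) failed once MRSA infection explained the clinical course.

```python
# Minimal sketch of the OMAAV criteria (A)-(C); field names are illustrative.
def meets_omaav_criteria(p):
    a = p["intractable_om_resistant_to_antibiotics_and_tubes"]
    b = (p["mpo_or_pr3_anca_positive"]
         or p["histopathology_consistent_with_aav"]
         or p["other_aav_related_organ_involvement"])
    c = p["other_intractable_om_and_vasculitides_excluded"]
    return a and b and c

case = {
    "intractable_om_resistant_to_antibiotics_and_tubes": True,
    "mpo_or_pr3_anca_positive": True,          # falsely positive in this case
    "histopathology_consistent_with_aav": False,
    "other_aav_related_organ_involvement": False,
    "other_intractable_om_and_vasculitides_excluded": False,  # MRSA not excluded
}
print(meets_omaav_criteria(case))   # False -> OMAAV should not be diagnosed
```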
KEYWORDS: Methicillin-resistant Staphylococcus aureus, granulomatosis with polyangiitis, Wegener granulomatosis, teicoplanin, linezolid

Masahiro Okada, Hideo Ogawa, Koichiro Suemori, Daiki Takagi, Masato Teraoka, Hiroyuki Yamada, Naohito Hato

CASE PRESENTATION

A 47-year-old woman with a history of hypertension was referred to our hospital with a complaint of left otorrhea that had persisted for 5 years. An otoscopic examination revealed that the tympanic membrane of her left ear was perforated (Figure 1a). A sample of the otorrhea was cultured, yielding MRSA. An audiogram showed mixed hearing loss in the woman's left ear (Figure 2a), and computed tomography (CT) revealed that the tympanic cavity was slightly clouded (Figure 3a).

Based on the diagnosis of chronic otitis media with MRSA, we performed a left tympanoplasty, with postoperative administration of teicoplanin (Teicoplanin F; Fuji Pharma, Tokyo, Japan). Four days after the operation, the patient's temperature increased, and she developed a urinary tract infection. A blood test revealed a white blood cell (WBC) count of 1600/μL, with a differential neutrophil count of 640/μL, as well as an elevated C-reactive protein (CRP) level of 4.6 mg/dL (normal range 0.00-0.20 mg/dL). The woman was diagnosed with drug-induced neutropenia caused by teicoplanin, and her antibiotics were discontinued. Her fever then subsided, and her WBC and neutrophil counts normalized over several days. However, the left otorrhea recurred 1 month after the operation. Again, hearing impairment was observed (Figure 2b), and CT revealed that the tympanic cavity was slightly clouded (Figure 3b). A tympanic ventilation tube was inserted, and the ear was irrigated. However, these treatments were ineffective (Figure 1b). MRSA was once again detected in her otorrhea, while her serum PR3-ANCA level, tested using the chemiluminescence enzyme immunoassay (CLEIA), had increased to 6.8 U/mL (normal: <3.5 U/mL). No disorders were detected in any other organs (kidneys, lungs, eyes, etc.). A pathological examination of the patient's middle ear granulation revealed non-specific inflammation. As MRSA was detected in a sample of her left otorrhea, she was administered linezolid (Zyvox; Pfizer, New York, USA) for 2 weeks, even though she was positive for PR3-ANCA. Following this treatment, the patient's otorrhea and hearing loss quickly improved (Figure 2c). After the treatment, her PR3-ANCA level remained mildly elevated (6.5 U/mL), although her otitis media had not recurred for 3 years by the time this article was submitted. Therefore, we concluded that the patient was falsely positive for PR3-ANCA.
DISCUSSION

The typical clinical features of OMAAV, recently proposed by the Japan Otological Society, are intractable otitis media with effusion or granulation and, occasionally, facial palsy and hypertrophic pachymeningitis [3]. According to the OMAAV diagnostic criteria, other diseases, including intractable bacterial otitis media, must be excluded before OMAAV can be diagnosed [3]. In the present case, the otitis media was intractable, and the patient was positive for PR3-ANCA. However, MRSA was detected in her otorrhea, and neutropenia caused by teicoplanin was presumed to be one of the reasons for her intractable otitis media. In addition, linezolid administration improved her clinical symptoms immediately, and the disease did not recur, even though neither glucocorticoids nor immunosuppressants were used. Therefore, we concluded that the patient was falsely positive for PR3-ANCA.

False positivity for ANCA can occur in various diseases, such as other autoimmune diseases (ulcerative colitis, rheumatoid arthritis, systemic lupus erythematosus), infections (bacterial, fungal, tuberculosis), and malignant tumors [4-6]. In this regard, higher ANCA titers and the involvement of multiple affected organ systems may help to discriminate between AAV and other diseases in ANCA-positive patients [5]. In cases of a low ANCA titer in which symptoms are localized to the ear, as in the present report, the possibility of false ANCA positivity should be considered.

The mechanism of false ANCA positivity is unknown. Some patients with PR3-ANCA-associated vasculitis have antibodies that react with a protein produced from PR3-antisense RNA. The amino acid sequence of this protein is partially homologous with proteins found in many microbes and viruses, including Staphylococcus aureus. Therefore, it has been speculated that such bacterial organisms mimic the peptide sequences of granule components and that this leads to PR3-ANCA production [8]. It follows that the false ANCA positivity in the present case may have been related to a chronic MRSA infection.

Yamauchi et al. [7] reported a case of tuberculous otitis media presenting false PR3-ANCA positivity. Such intractable tuberculous otitis media shares some clinical features with MRSA infection with false ANCA positivity, as well as with OMAAV. It follows that this disease might be misdiagnosed as OMAAV, and the various diseases that can yield false ANCA positivity must be excluded when diagnosing OMAAV. Conversely, Azuma et al. [9] reported a case of MPO-ANCA-positive otitis media in which the otorrhea culture showed MRSA infection. In that case, the administration of an anti-MRSA drug was ineffective, while immunosuppressant therapy did improve the otic symptoms. The authors concluded that the otitis media was a symptom of vasculitis. In future similar cases, tentative use of antibiotics might be useful in diagnosis.

In addition, the ANCA detection method should be considered carefully. A variety of methods have been developed to detect ANCA, such as the enzyme-linked immunosorbent assay (ELISA), capture ELISA, anchor ELISA, and CLEIA. In particular, CLEIA, which has high sensitivity and specificity, has yielded a larger number of false-positive results than ELISA [10]. In the present study, we tested ANCA using CLEIA. Therefore, this detection method may have influenced the results of the PR3-ANCA detection.
In Japan, the number of patients diagnosed with AAV has increased two- to threefold over the past 10 years. The number of patients with OMAAV is expected to increase accordingly. Thus, in cases of intractable otitis media, clinicians should consider the possibility of OMAAV. However, they should also exclude the other diseases mentioned in the diagnostic criteria for OMAAV, even in cases of ANCA positivity.

Informed Consent: Written informed consent was obtained from the patient who participated in this study.

Figure 1, a, b. (a) Left tympanic membrane (TM) before surgery. The TM is thickened and perforated. Otorrhea is observed. (b) Left TM after insertion of a tympanic tube. The TM is still thickened.
Figure 3, a, b. (a) CT scan before surgery. The tympanic cavity is slightly clouded. (b) CT scan performed when the left otorrhea recurred 1 month after surgery. The tympanic membrane is thickened, and the tympanic cavity is slightly clouded. No improvement is observed. A mastoidectomy was performed.
New insights into ATR inhibition in muscle invasive bladder cancer: The role of apolipoprotein B mRNA editing catalytic subunit 3B

Background: Apolipoprotein B mRNA editing catalytic polypeptide (APOBEC), an endogenous mutator, induces DNA damage and activates the ataxia telangiectasia and Rad3-related (ATR)-checkpoint kinase 1 (Chk1) pathway. Although cisplatin-based therapy is the mainstay for muscle-invasive bladder cancer (MIBC), it has a poor survival rate. Therefore, this study aimed to evaluate the efficacy of an ATR inhibitor combined with cisplatin in the treatment of APOBEC catalytic subunit 3B (APOBEC3B)-expressing MIBC.

Methods: Immunohistochemical staining was performed to analyze the association between APOBEC3B and ATR in patients with MIBC. APOBEC3B expression in MIBC cell lines was assessed using real-time polymerase chain reaction and western blot analysis. Western blot analysis was performed to confirm differences in phosphorylated Chk1 (pChk1) expression according to APOBEC3B expression. Cell viability and apoptosis analyses were performed to examine the anti-tumor activity of ATR inhibitors combined with cisplatin.

Results: There was a significant association between APOBEC3B and ATR expression in the tumor tissues obtained from patients with MIBC. Cells with higher APOBEC3B expression showed higher pChk1 expression than cells expressing low APOBEC3B levels. Combination treatment with an ATR inhibitor and cisplatin inhibited cell growth in MIBC cells with higher APOBEC3B expression. Compared with cisplatin single treatment, the combination treatment induced more apoptotic cell death in cells with higher APOBEC3B expression.

Conclusion: Our study shows that higher APOBEC3B expression can enhance the sensitivity of MIBC to cisplatin upon ATR inhibition. This result provides new insight into appropriate patient selection for the effective application of ATR inhibitors in MIBC.

Introduction

Currently, cisplatin-based chemotherapy is the standard chemotherapy regimen for patients with advanced muscle-invasive bladder cancer (MIBC) [1-3]. However, even immune checkpoint inhibitors combined with cisplatin in the palliative first-line setting do not improve survival [4-6]. Moreover, although avelumab is approved as a maintenance treatment following platinum-based chemotherapy, the survival benefit is approximately six months, and a long-term survival benefit is observed in only a limited number of patients [7-9]. Therefore, novel treatment options are required to overcome these limitations and improve cisplatin efficacy in the treatment of MIBC.
Ataxia telangiectasia and Rad3-related (ATR) kinase is a key checkpoint molecule that initiates the DNA damage response when DNA damage occurs at specific sites within single-stranded DNA, including stressed replication forks [10,11]. Therefore, ATR inhibition can force cell cycle progression with incomplete DNA, inducing cell apoptosis [12]. Previous in vitro studies have presented the potential activity of several ATR inhibitors in various cancer cell lines [12-16]. However, a clinical phase 2 study using ATR inhibitors in MIBC failed to show significant survival improvement [17]. Although statistical significance was not achieved, the combination of an ATR inhibitor and cisplatin-based chemotherapy showed numerically improved survival [17]. Moreover, another study on ovarian cancer conducted at the same time succeeded in deriving significant results [18]. Therefore, that study raised the need to select suitable patients with MIBC for ATR inhibitor treatment.

Apolipoprotein B mRNA editing catalytic subunit 3B (APOBEC3B), an endogenous carcinogen, is overexpressed in approximately two-thirds of patients with MIBC [19,20], and its overexpression is associated with enhancement of the ATR signaling pathway [19-22]. This may be because APOBEC3B induces ATR activity by increasing replication stress in the process of cytidine deamination, which creates abasic sites [22,23]. Similarly, a previous cell line study also reported a relationship between APOBEC3B overexpression and the anti-cancer activity of ATR inhibition [22].

Accordingly, we hypothesized that the combination of ATR inhibitors and cisplatin in APOBEC3B-overexpressing MIBC may be more effective in increasing the sensitivity of MIBC to cisplatin. Therefore, our study aimed to evaluate the potential of a combination treatment strategy of cisplatin and an ATR inhibitor in APOBEC3B-high expressing MIBC.

Patient population and tissue samples

This study evaluated patients diagnosed with bladder cancer between 2013 and 2021 at St. Vincent's Hospital, Suwon, Republic of Korea. The study was approved by the Institutional Review Board of St. Vincent's Hospital of the Catholic University of Korea (grant numbers: VC21ZASI0036 and VC20SISI0187) and was conducted in accordance with the principles of the Declaration of Helsinki and its later amendments. Written informed consent was obtained from all participants.

All patients underwent surgery, including transurethral resection of their bladder tumor or radical cystectomy. The pathological diagnosis was muscle-invasive urothelial carcinoma in all included patients. Formalin-fixed, paraffin-embedded tumor blocks were used to evaluate APOBEC3B protein expression, and patient medical records were thoroughly reviewed. All clinical information was extracted anonymously in a de-identified manner.
Immunohistochemistry to interpret APOBEC3B expression

Immunohistochemistry (IHC) staining was performed as described previously [24]. A rabbit polyclonal anti-APOBEC3B antibody (ab191695, Abcam, Cambridge, UK; dilution 1:200) was used as the primary antibody against human APOBEC3B, and a rabbit monoclonal anti-ATR (phospho S428) antibody (ab178407; Abcam, Cambridge, UK; dilution 1:200) was used as the primary antibody against human ATR. The nuclear and cytoplasmic staining of APOBEC3B and ATR in each sample was evaluated by an independent pathologist using a semiquantitative score. The staining intensity was interpreted as 2+ for control-level staining, 1+ for weaker staining, 3+ for stronger staining, and 0 for negative staining. H-scores were calculated on a scale of 0-300 by multiplying each staining intensity (0, no staining; 1, weak; 2, moderate; and 3, strong) by the percentage of cells (0%-100%) at that intensity level.

Cell lines and culture conditions

The MIBC cell lines evaluated in this study (HT-1376, 5637, HT-1197, and 253J) were obtained from the Korean Cell Line Bank (KCLB; Seoul, Republic of Korea). The cell lines were cultivated according to KCLB recommendations. Briefly, the cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; 90%) with fetal bovine serum (FBS; 10%), penicillin, and streptomycin at 37°C in a humidified environment.

Real-time polymerase chain reaction analysis

Real-time polymerase chain reaction (PCR) was performed to confirm the expression of APOBEC3B messenger RNA (mRNA) in the cell lines. RNA was extracted from each cell line using an RNA-spin™ Total RNA Extraction Kit (iNtRON Biotechnology, Seoul, Republic of Korea). The extracted RNA was quantified using a NanoDrop™ spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and 2 µg of complementary DNA (cDNA) was subsequently synthesized using an AMPEGENE® cDNA Synthesis Kit (Enzo Life Sciences, Farmingdale, NY, USA). Real-time PCR was performed using DNA Master SYBR Green I and LightCycler 2.0 real-time PCR equipment (Roche, Basel, Switzerland). The primers used for human APOBEC3B were GACCCTTTGGTCCTTCGAC (sense) and GCACAGCCCCAGGAGAAG (antisense). The primers for human β-actin were GTCCACCTTCCAGCAGATGT (sense) and AAAGCCATGCCAATCTCATC (antisense). The detailed experiments were performed as previously described [21].

Cell viability assays

Each cell line was seeded in a 96-well plate (5 × 10³ cells in 100 µL), and reagents were added at the concentrations and time points of the various assays, as described below. Cell viability was determined using a cell counting kit (CCK; EZ-Cytox Cell-Based Assay, DoGenBio Co., Ltd., Seoul, Korea). According to the CCK assay kit protocol, 10 µL of CCK reagent was added to each well, and the absorbance was read at 490 nm using a microplate reader 1-4 h after incubation; cell viability was measured 48 h after drug treatment over a range of drug concentrations. In the time-dependent assay, cell viability was evaluated using 3 µM cisplatin and 2.5 µM VE-821, an ATP-competitive ATR inhibitor, and incubation was maintained for up to 72 h.
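A minimal sketch of the H-score computation described above, assuming the percentage of cells at each intensity level has already been estimated (the example percentages are illustrative):

```python
# Minimal sketch of the 0-300 H-score: sum over intensity levels of
# (intensity 0-3) x (percentage of cells at that intensity).
def h_score(pct_by_intensity):
    """pct_by_intensity: {0: %, 1: %, 2: %, 3: %}, percentages summing to ~100."""
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

print(h_score({0: 10, 1: 20, 2: 40, 3: 30}))   # 1*20 + 2*40 + 3*30 = 190
```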
Assessment of cell apoptosis and cell cycle analysis

Annexin V and propidium iodide (PI) apoptosis assay kits (Thermo Fisher Scientific, Waltham, MA, USA) were used according to the manufacturer's instructions. Cells treated with the apoptosis assay kit were evaluated using flow cytometry. The detailed experimental process was described previously [20]. Annexin V-negative and PI-negative cells were considered viable, annexin V-positive and PI-negative cells were considered early apoptotic cells, and annexin V-positive and PI-positive cells were considered late apoptotic cells. For cell cycle analysis, samples were stained using a BD Cycletest™ Plus DNA Kit and assessed using flow cytometry on a Beckman Coulter Navios instrument. The results were analyzed using the Beckman Coulter Kaluza analysis program, version 2.1.

Statistical analysis

Differences in APOBEC3B expression and cell viability in each cell line were compared using the t-test and the Mann-Whitney U test. APOBEC3B expression levels and response rates to chemotherapy were compared using chi-square tests. The association between APOBEC3B and ATR was also analyzed using a chi-square test. All statistical analyses were performed using SPSS statistical software, version 25 (IBM Corp., Armonk, NY, USA). Results with a two-sided p < 0.05 were considered statistically significant.
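The annexin V/PI classification rule stated in the apoptosis assay above can be written as a simple gating function. The sketch below assumes per-event positive/negative calls have already been made (instrument-specific thresholding is omitted), and the label for annexin-negative/PI-positive events, which the text does not classify, is an assumption.

```python
# Minimal sketch of annexin V / PI quadrant classification per the rule above.
import numpy as np

def classify_events(annexin_pos, pi_pos):
    annexin_pos, pi_pos = np.asarray(annexin_pos), np.asarray(pi_pos)
    labels = np.full(annexin_pos.shape, "viable", dtype=object)  # Ann-/PI-
    labels[annexin_pos & ~pi_pos] = "early apoptotic"            # Ann+/PI-
    labels[annexin_pos & pi_pos] = "late apoptotic"              # Ann+/PI+
    labels[~annexin_pos & pi_pos] = "necrotic/other"             # assumed label
    return labels

lab = classify_events([False, True, True, False], [False, False, True, True])
print(lab)   # ['viable' 'early apoptotic' 'late apoptotic' 'necrotic/other']
```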
APOBEC3B expression and tumor response to cisplatin in patients with MIBC

We first evaluated APOBEC3B expression using immunohistochemical staining in samples obtained from 61 patients with MIBC (Suppl. Table S1). Strong (3+), moderate (2+), and weak (1+) APOBEC3B expression was observed in 43 (70.5%), 16 (26.2%), and 2 (3.3%) patients, respectively (Fig. 1a). No instances of negative APOBEC3B expression were observed. We classified patients into high (3+) and low (2+ or less) expression groups according to the intensity of APOBEC3B protein expression. The mean H-score differed significantly between the high and low APOBEC3B expression groups (p < 0.0001). Notably, the squamous-differentiated area of the tumor was almost unstained with the APOBEC3B antibody (Fig. 1b). No association between tumor response and APOBEC3B was observed. Additionally, neither TNM stage nor tumor recurrence was associated with APOBEC3B expression (Suppl. Table S1).

ATR expression and its association with APOBEC3B in patients with MIBC

Immunohistochemical staining (Fig. 2a) revealed that, similar to APOBEC3B, ATR was expressed in over 50% of patients with MIBC. Strong (3+), moderate (2+), and negative ATR expression was observed in 36 (59.0%), 5 (8.2%), and 20 (32.8%) patients, respectively. As for APOBEC3B, we classified patients into high (3+) and low (2+ or less) expression groups according to ATR expression intensity. Cytoplasmic ATR expression without nuclear staining was interpreted as negative (Fig. 2b). A significant correlation was observed between ATR and APOBEC3B expression (p = 0.039, OR = 3.255), and the association was linear (p = 0.040) (Table 1). As for clinical factors, recurrence was associated with ATR expression, but response to cisplatin was not (Suppl. Table S2).

APOBEC3B expression and ATR-Chk1 signals in bladder cancer cell lines

In the real-time PCR results, HT-1376, 5637, and 253J MIBC cells showed higher APOBEC3B mRNA expression than HT-1197 cells (Fig. 3a). Therefore, we designated the HT-1376, 5637, and 253J cell lines as cells with higher APOBEC3B expression and the HT-1197 cell line as cells with lower APOBEC3B expression. The APOBEC3B protein expression levels were consistent with the mRNA expression levels in the MIBC cell lines (Fig. 3b). The expression of Chk1 and pChk1 tended to be higher in MIBC cells with higher APOBEC3B expression than in MIBC cells with lower APOBEC3B expression (Fig. 3b). VE-821 treatment (5 µM) increased γH2AX expression and decreased pChk1 expression, but not Chk1 expression (Suppl. Fig. S1).

Antitumor activities of the ATR inhibitor and cisplatin alone and in combination in bladder cancer cell lines

Cisplatin alone inhibited cell proliferation in a dose-dependent manner in MIBC cells with higher APOBEC3B expression, and low-dose cisplatin (3 µM) modestly inhibited cell proliferation at 48 h. VE-821 alone showed antiproliferative activity at a lower concentration (2.5 µM) only in 253J cells at 48 h, but not in the other MIBC cells (Figs. 4a, 4b). For dose determination of the combination, the cisplatin and VE-821 doses were fixed, and cell growth curves were observed over time. Low-dose cisplatin (3 µM) modestly inhibited cell growth regardless of the cell line, whereas low-dose VE-821 (2.5 µM) did not show any antitumor activity over time in the cell viability assays (Figs. 4c-4d).

To enhance the antitumor effect, we implemented a combination treatment. Considering the possibility of synergistic interaction between the two drugs, the low doses (3 µM cisplatin and 2.5 µM VE-821) that showed insufficient effect as single treatments were selected as the combination concentrations. VE-821 (2.5 µM) in combination with cisplatin (3 µM) greatly suppressed cell viability in MIBC cells expressing higher APOBEC3B levels, but not in HT-1197 cells expressing lower APOBEC3B levels (Fig. 4e). The difference in antitumor activity appeared at 24 h after combination treatment and gradually increased over time (Fig. 4e).

Notably, the antiproliferative activity of the combination treatment was pronounced even at a lower concentration of VE-821 (1 µM) in the MIBC cells with higher APOBEC3B expression (Fig. 4f). As the concentration of VE-821 increased, cell growth inhibition gradually increased in HT-1197 cells, which were insensitive to VE-821 (2.5 µM) single treatment. The antitumor activity of the combination treatment was more evident in cells expressing higher APOBEC3B when the cell growth curves for the single treatments were compared with that of the combination treatment (Figs. 4c-4e).
Induction of apoptotic cell death by ATR inhibition and cisplatin

We performed flow cytometric analysis of cell apoptosis induced by the combination treatment at lower concentrations of cisplatin (1 µM) and VE-821 (2.5 µM). Both VE-821 and cisplatin alone induced little apoptotic cell death in HT-1376, 5637, and HT-1197 cells, whereas their combination induced apoptotic cell death in both HT-1376 and 5637 cells, which have higher APOBEC3B expression. Notably, the apoptosis induced by the combination treatment was most pronounced in 5637 cells: compared with cisplatin alone, apoptotic cell death increased significantly from 30.1% to 63.8% (Figs. 5a-5h). In contrast, no additional apoptosis was found in HT-1197 cells with lower APOBEC3B expression (Figs. 5i-5n).

Cell cycle progression by ATR inhibition and cisplatin

The cell cycle states affected by the combination treatment (1 µM cisplatin and 2.5 µM VE-821) differed depending on APOBEC3B expression. No significant cell cycle changes occurred in 5637 cells treated with VE-821 alone. However, when cisplatin alone was administered, the G0-G1 subpopulation decreased and the S subpopulation notably increased. Meanwhile, VE-821 combined with cisplatin markedly reduced the G2-M subpopulation and greatly increased the S-phase subpopulation compared with cisplatin alone (Suppl. Fig. S2a). However, no cell cycle changes were observed in HT-1197 cells, which have lower APOBEC3B expression (Suppl. Fig. S2b).

Discussion

This study aimed to investigate the differential effects of cisplatin and ATR inhibitor combination treatment based on APOBEC3B expression in patients with MIBC. The combination treatment proposed herein effectively suppressed cell growth when APOBEC3B expression was higher, whereas no difference was observed when APOBEC3B expression was lower. These results indicate that ATR inhibitors can increase sensitivity to cisplatin in patients with MIBC with higher APOBEC3B expression. Thus, it is crucial to identify patients with MIBC suitable for clinical treatment with ATR inhibitors. The best-known epidemiologic cause of bladder cancer is smoking [25]. However, genomic analyses reveal that MIBC primarily exhibits APOBEC3-mediated mutations rather than smoking-related mutations [19,26], suggesting that APOBEC3B plays an important role in MIBC pathogenesis. Moreover, APOBEC3B overexpression is reportedly observed in approximately two-thirds of patients with MIBC [19,24]. Therefore, a deeper biological analysis and understanding of APOBEC3B overexpression is necessary for developing treatment strategies. Similarly, our study observed APOBEC3B overexpression in approximately 70% of patients with MIBC.

We observed a significant correlation between APOBEC3B and ATR expression in patient tissue samples and MIBC cells. This result is consistent with those of previous studies [12,22]. Regarding this correlation, Buisson et al. showed that high APOBEC3B expression induced abasic sites and increased stress at the replication fork, leading to ATR pathway activation to protect cancer cells from replication stress [22]. Swanton et al. showed that the expression of pRPA (S33; a marker of replication stress) decreased in APOBEC3B-knockdown lung cancer cells compared with APOBEC3B-expressing lung cancer cells [23]. We also observed that ATR inhibition induces DNA damage in cells with high APOBEC3B expression, as shown by analysis of molecules such as pChk1 and γH2AX. In detail, APOBEC3B catalyzes the deamination of cytosine to uracil in DNA, and the substituted uracil is subsequently removed by uracil DNA glycosylase, creating abasic sites [27]. This induced DNA damage leads to replication stalling and increased replication stress [22]. The exposed single-stranded DNA at the slowed replication fork is bound by RPA, which recruits ATRIP, Rad17, TopBP1, and others [11]. This complex activates ATR, which in turn phosphorylates Chk1 [11]. This process stabilizes the replication fork and regulates checkpoints, allowing replication to complete [11]. Therefore, ATR inhibition results in checkpoint defects and replication catastrophe, suppressing cell growth and survival. This suggests ATR as a novel therapeutic target in MIBC with high APOBEC3B expression.
The assumption that ATR inhibition would be effective in a state of increased replication stress can also be inferred from a previous clinical trial in which patients with ovarian cancer showed prolonged survival with ATR inhibition plus DNA-damaging cytotoxic chemotherapy [18]. Ovarian cancer classically exhibits increased replication stress [28]: it harbors a loss of cell cycle checkpoints related to TP53 mutations, premature cell cycle progression due to cyclin E1 (CCNE1) amplification, and deficiencies in DNA repair processes [18]. Similarly, MIBC with high APOBEC3 expression exhibits more genetic alterations related to the cell cycle checkpoint, including in TP53 and ATM, than MIBC expressing low levels of APOBEC3 [19]. Cisplatin reportedly forms DNA adducts via cross-linking between nucleotide bases [29]. This cross-linking slows progression of the cell cycle and induces abnormal cell cycle arrest through activation of the Chk1 pathway [29,30]. Therefore, if ATR inhibition is applied in MIBC with high APOBEC3B expression, which exists in a state of increased replication stress, the DNA damage induced by cisplatin will be more difficult to repair than usual, producing a greater increase in abnormal cell cycle arrest and apoptosis than cisplatin alone. We tested this using MIBC cells with differing APOBEC3B expression. As expected, the combination treatment in higher-APOBEC3B MIBC increased apoptosis compared with cisplatin alone. Further, in cell cycle analysis, we observed a decrease in the proportion of cells in the G0-G1 and G2 phases and an increase in the S and M phases, suggesting a shortened preparation phase and an extended active phase of the cell division cycle, ultimately contributing to the increased cisplatin sensitivity.

Taken together, patients with MIBC with higher APOBEC3B expression may be an effective target population for combined cisplatin-based chemotherapy and ATR inhibitor treatment.

However, our study has several limitations. First, our results on the relationship between ATR and APOBEC3B must be interpreted cautiously because of the small sample size. In previous studies using public data from The Cancer Genome Atlas and the Beijing Genomics Institute, APOBEC3 in MIBC was shown to be related to DNA damage genes, including ATR [19]. More extensive research in diverse cohorts with larger sample sizes is needed for validation.

Additionally, our sample size was inadequate for analyzing clinical characteristics associated with APOBEC3B expression in MIBC. In our previous study, differences in survival were observed in patients with metastatic bladder cancer according to APOBEC3B expression, and a potential relationship with tumor-infiltrating lymphocytes was proposed [24]. Another study has also shown that bladder cancer with high APOBEC3B expression has a better prognosis than bladder cancer with low expression, together with higher infiltration of various immune cells and higher expression of immune checkpoints, including CD276 [31]. It is therefore necessary to analyze possible correlations between APOBEC3B and basic clinical information in various cohorts.

Next, our study did not include a detailed molecular evaluation of how ATR inhibition increases cisplatin sensitivity. Such an analysis could have strengthened the mechanistic understanding of why ATR inhibition was effective only in the cases of high APOBEC3B expression observed in our study. However, Sigala et al.
previously reported that ATR is involved in translocating serine-arginine protein kinases, which participate in basic cellular processes such as mRNA splicing, chromatin reorganization, and cell cycle regulation, into the nucleus; this could be linked to the DNA damage pathway and influence sensitivity to chemotherapeutic agents [32].

Lastly, we did not create APOBEC3B-overexpressing or -silenced cells, for example with siRNA, to investigate the function of APOBEC3B under in vitro conditions. This is a notable limitation, as cell lines with modulated APOBEC3B expression could reveal deeper mechanisms linking ATR inhibitors and APOBEC3B expression and demonstrate their clinical relevance. To address this concern, our study was conducted using cell lines with different levels of APOBEC3B expression. Considering the rarity of negative APOBEC3B expression in MIBC, we believe that our approach is likely to offer insights with clinical implications. As replication stress is influenced by a variety of factors [30], there could be several uncontrolled variables affecting our results besides APOBEC3B expression. Additionally, we did not explore the potential effects of the relationship between APOBEC3B and other APOBEC3 family members. Recent reports suggest that APOBEC3A and APOBEC3B can influence each other's expression and activity in relation to APOBEC-mediated mutation burden [33], indicating the need for further research in this area.

Therefore, caution is necessary when interpreting our results, and further investigations are warranted to explore not only the ATR-Chk1 pathway but also the various other factors that can increase replication stress [30]. Our findings should be read as highlighting the need for additional research into one of the potential options for enhancing the efficacy of existing standard treatments for MIBC.

In conclusion, our study demonstrated that higher expression of APOBEC3B enhances the sensitivity of MIBC to cisplatin upon ATR inhibition. This is associated with upregulation of the ATR-Chk1 pathway and with induction of DNA damage and premature cell cycle progression upon ATR inhibition. These findings explain why the combination of cisplatin and ATR inhibitors in MIBC with higher APOBEC3B expression led to more pronounced inhibition of cell growth and increased apoptotic cell death compared with cisplatin alone, whereas no such effects were observed with lower APOBEC3B expression. To the best of our knowledge, this is the first in vitro study on the selective application of an ATR inhibitor in MIBC cell lines. Furthermore, a correlation between APOBEC3B and ATR expression was observed in actual patient tissues, with high expression of both proteins observed in approximately two-thirds of patients with MIBC. These results provide new information about appropriate patient selection for the effective application of ATR inhibitors in MIBC. Moreover, as the effect of immune checkpoint inhibitors can be closely related to platinum sensitivity [8,34], our results on platinum sensitivity not only suggest a way to enhance the therapeutic effect of cisplatin in MIBC but may also offer translationally important insights into new treatment strategies.

FIGURE 1.
Apolipoprotein B mRNA editing enzyme catalytic subunit 3B (APOBEC3B) expression following immunohistochemical staining in patients with bladder cancer. (a) Findings for muscle invasive bladder cancer (MIBC; urothelial carcinoma). Tissue samples from patients with bladder cancer were immunohistochemically stained for APOBEC3B expression (ab191695; Abcam, Cambridge, UK; dilution 1:200). (b) Squamous differentiation. There is almost no staining with the APOBEC3B antibody in the squamous differentiated area.

FIGURE 2. Ataxia telangiectasia and Rad3-related (ATR) kinase expression following immunohistochemical staining in patients with bladder cancer. (a) Findings for muscle invasive bladder cancer (MIBC; urothelial carcinoma). Tissue samples from patients with bladder cancer were immunohistochemically stained for ATR expression (ab178407; Abcam, Cambridge, UK). (b) Results for nuclear and cytoplasmic staining. ATR is stained in the cytoplasm but not in the nucleus.

FIGURE 3. Apolipoprotein B mRNA editing enzyme catalytic subunit 3B (APOBEC3B) expression and ataxia telangiectasia and Rad3-related (ATR) activity in bladder cancer cell lines. (a) Real-time polymerase chain reaction (PCR) findings. APOBEC3B mRNA expression is significantly different between the cell lines expressing high levels of APOBEC3B (HT-1376, 5637, 253J) and the cell line expressing low levels of APOBEC3B (HT-1197; *p < 0.05). Relative expression values (2^−ΔΔCt) of HT-1376, 5637, 253J, and HT-1197 are 0.00999, 0.00457, 0.00544, and 0.000494, respectively, calculated with LightCycler software 4.1. (b) Western blotting. APOBEC3B protein expression is consistent with the mRNA expression levels observed in this study. Cell lines expressing high levels of APOBEC3B have more prominent Chk1 expression than the cell line expressing low levels of APOBEC3B.

FIGURE 4. Cell viability assays following treatment with cisplatin and ataxia telangiectasia and Rad3-related (ATR) inhibitor (VE-821) in bladder cancer cell lines. (a) Concentration-dependent cell growth curves for cisplatin treatment. Regardless of APOBEC3B expression, cisplatin shows a dose-dependent effect at 48 h in MIBC cells. (b) Concentration-dependent cell growth curves for VE-821 treatment. Regardless of APOBEC3B expression, VE-821 shows a dose-dependent effect at 48 h in MIBC cells. (c) Time-dependent cell growth curves for cisplatin treatment. A low dose of cisplatin (3 µM) was administered in all cell lines. (d) Time-dependent cell growth curves for VE-821 treatment. All cell lines were treated with a low dose of VE-821 (2.5 µM). (e) Time-dependent cell growth curves for combination treatment: 2.5 µM VE-821 was added to 3 µM cisplatin in all cell lines. The efficacy of the combination therapy is more pronounced than that of the low-dose monotherapies (c, d), which were determined to be ineffective; the combination shows a synergistic effect in the high-APOBEC3B cell lines compared to the low-APOBEC3B cell line. (f) Concentration-dependent cell growth curves for combination treatment. The addition of VE-821 is effective even at a low concentration (1 µM) in the high-APOBEC3B cell lines.

FIGURE 5.
Apoptotic cell death induced by ataxia telangiectasia and Rad3-related (ATR) inhibition in bladder cancer cells showing high expression of apolipoprotein B mRNA editing enzyme catalytic subunit 3B (APOBEC3B). (a) Findings in comparison with the control group. The control group comprised untreated HT-1376 cells. Flow cytometry was performed to identify apoptotic cell death. (b) Results of cisplatin single treatment. There is almost no induction of apoptotic cell death in HT-1376 cells treated with low-dose cisplatin (1 µM). (c) Findings for VE-821 (an ATR inhibitor) single treatment. VE-821 shows little induction of apoptotic cell death in HT-1376 cells treated with low-dose VE-821 (2.5 µM). (d) Results for the combination treatment in HT-1376 cells. The combination of cisplatin (1 µM) and VE-821 (2.5 µM) induces apoptotic cell death at 48 h after treatment. (e-h) Findings in 5637 cells. Flow cytometry was performed to identify apoptotic cell death. (i-l) Results for HT-1197 cells. Flow cytometry was performed to identify apoptotic cell death. (m, n) Comparison of apoptotic cell death between cisplatin single treatment and the combination treatment in 5637 and HT-1197 cells.
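The relative expression values quoted in the Figure 3 legend follow the standard 2^−ΔΔCt (Livak) quantification. A minimal sketch of the calculation, with hypothetical Ct values (the numbers and the calibrator choice are illustrative, not the study's raw data):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak method: dCt = Ct(target) - Ct(reference gene);
    ddCt = dCt(sample) - dCt(calibrator); expression = 2^-ddCt."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)

# Hypothetical Ct values for APOBEC3B vs. a housekeeping gene in one
# cell line, relative to an arbitrary calibrator sample.
print(round(relative_expression(28.1, 18.0, 24.0, 20.5), 4))  # ~0.0103
```

Values on the order of 10^-2 to 10^-4, as printed here, are in the range reported for the four cell lines in Figure 3a.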
Protein O-Mannosyltransferases B and C Support Hyphal Development and Differentiation in Aspergillus nidulans

ABSTRACT Aspergillus nidulans possesses three pmt genes encoding protein O-D-mannosyltransferases (Pmt). Previously, we reported that PmtA, a member of the PMT2 subfamily, is involved in the proper maintenance of fungal morphology and formation of conidia (T. Oka, T. Hamaguchi, Y. Sameshima, M. Goto, and K. Furukawa, Microbiology 150:1973-1982, 2004). In the present paper, we describe the characterization of the pmtA paralogues pmtB and pmtC. PmtB and PmtC were classified as members of the PMT1 and PMT4 subfamilies, respectively. A pmtB disruptant showed wild-type (wt) colony formation at 30°C but slightly repressed growth at 42°C. Conidiation of the pmtB disruptant was reduced to approximately 50% of that of the wt strain; in addition, hyperbranching of hyphae indicated that PmtB is involved in polarity maintenance. A pmtA and pmtB double disruptant was viable but very slow growing, with morphological characteristics that were cumulative with respect to either single disruptant. Of the three single pmt mutants, the pmtC disruptant showed the highest growth repression; the hyphae were swollen and frequently branched, and the ability to form conidia under normal growth conditions was lost. Recovery from the aberrant hyphal structures occurred in the presence of osmotic stabilizer, implying that PmtC is responsible for the maintenance of cell wall integrity. Osmotic stabilization at 42°C further enabled the pmtC disruptant to form conidiophores and conidia, but they were abnormal and much fewer than those of the wt strain. Apart from their different, abnormal phenotypes, the three pmt disruptants exhibited differences in their sensitivities to antifungal reagents, mannosylation activities, and glycoprotein profiles, indicating that PmtA, PmtB, and PmtC perform unique functions during cell growth.

Protein glycosylation, which is a major posttranslational modification, plays essential roles in eukaryotic cells from fungi to mammals (19). N-linked oligosaccharides in glycoproteins that share relatively common structures are structurally classified into high-mannose, complex, and hybrid types (3). O-linked oligosaccharides in glycoproteins are diverse with respect to their sugar components and the mode of sugar linkages among the eukaryotic organisms (8,19). O mannosylation, which is commonly found in the glycoproteins of fungi, has been extensively studied in the budding yeast Saccharomyces cerevisiae (4,21,35). The initial reaction of mannose transfer to serine and threonine residues in proteins is catalyzed by protein O-D-mannosyltransferase (Pmt) in the endoplasmic reticulum (ER), where dolichyl phosphate-mannose is required as the immediate sugar donor (4). In the Golgi complex, O mannosylation in S. cerevisiae is linearly elongated by up to five mannose residues by mannosyltransferases (Mnt) that utilize GDP-mannose as the mannosyl donor. At least six Pmt-encoding genes (PMT1 to -6), three α-1,2-Mnt-encoding genes (KRE2, KTR1, and KTR3), and three α-1,3-Mnt-encoding genes (MNN1, MNT2, and MNT3) are known to be involved in O mannosylation in S. cerevisiae (21,31,45). The Pmt family of proteins can be classified into the PMT1, PMT2, and PMT4 subfamilies based on phylogeny (6). Proteins of the PMT1 subfamily form a heteromeric complex with proteins belonging to the PMT2 subfamily, and PMT4 subfamily proteins form a homomeric complex (7).
Simultaneous disruption of three different types of PMT genes was lethal (4), suggesting that each class provides a unique function for O mannosylation. Yeasts other than S. cerevisiae, such as Schizosaccharomyces pombe (38,41), Candida albicans (29), and Cryptococcus neoformans (28), possess three to five pmt genes, which have been characterized. Several studies provide evidence that protein O mannosylation modulates the functions and stability of secretory proteins and thereby affects the growth and morphology of these yeasts. O mannosylation by Pmt2 in S. cerevisiae (ScPmt2) provides protection from ER-associated degradation and also functions as a fail-safe mechanism for ER-associated degradation (11,13,23). Likewise, in C. albicans, CaPmt1- and CaPmt4-mediated O mannosylation specifically protects CaSec20 from proteolytic degradation in the ER (40). Cell wall integrity is maintained in S. cerevisiae by increased stabilization and correct localization of the sensor proteins ScWsc and ScMid2 due to O mannosylation by ScPmt2 and ScPmt4 (20). Similarly, the stability and localization to the plasma membrane of the axial budding factor ScAxl2/Bud10 is enhanced by ScPmt4-mediated O mannosylation, increasing its activity (32). ScPmt4-mediated O glycosylation also functions as a sorting determinant for cell surface delivery of ScFus1 (30). CaPmt4-mediated O glycosylation is required for environment-specific morphogenetic signaling and for the full virulence of C. albicans (29).

With respect to filamentous fungi like Aspergillus, which develop hyphae in a highly ordered manner that then differentiate to form conidiospores, little is known about the function and synthetic pathway of the O-mannose-type oligosaccharides. O-glycans in glycoproteins of Aspergillus include sugars other than mannose, and their structures have been determined (8). The initial mannosylation catalyzed by Pmts is found in Aspergillus and occurs as in yeasts (8). We characterized the pmtA gene of Aspergillus nidulans (AnpmtA), belonging to the PMT2 subfamily, and found that the mutant exhibited a fragile cell wall phenotype and alteration in the carbohydrate composition, with a reduction in the amount of skeletal polysaccharides in the cell wall (26,33). Recently, the Afpmt1 gene, belonging to the PMT1 family, of Aspergillus fumigatus, a human pathogen, was characterized. AfPmt1 is crucial for cell wall integrity and conidium morphology (46). In this study, we characterize the pmtB and pmtC genes of A. nidulans to understand their contribution to the cell morphology of this filamentous fungus. We also demonstrate that the PmtA, PmtB, and PmtC proteins have distinct specificities for protein substrates and function differently during cell growth of filamentous fungi.

MATERIALS AND METHODS

Strains, media, and growth conditions. The A. nidulans strains used (listed in Table 1) were grown in minimal medium (MM) containing [...]PO4 and Hunter's trace elements, pH 6.5. Liquid growth experiments to allow hyphal development in a submerged culture were done by inoculation of 2 × 10^8 conidia into 100 ml MM or YG medium in 500-ml shaking flasks. The flasks were reciprocally shaken at 120 rpm at 30°C. Standard transformation procedures for A. nidulans were used (44). Plasmids were propagated in Escherichia coli XL-1 Blue. Genomic DNA and total RNA of A. nidulans were prepared as previously described (26). Southern and Northern hybridizations were done using a DIG labeling kit (Roche) according to the manufacturer's protocols.

Isolation of the AnpmtB and AnpmtC genes.
All oligonucleotide primers used in this study are listed in Table S1 in the supplemental material. Based on a multiple-sequence alignment among Pmt proteins of A. nidulans, S. cerevisiae, and C. albicans, degenerate oligonucleotide primers for the amplification of the A. nidulans pmtB and pmtC genes were synthesized. Using primers pmtB-F/pmtB-R and pmtC-F/pmtC-R, regions of AnpmtB and AnpmtC, respectively, were amplified from A. nidulans A26 genomic DNA and used as probes to screen an A. nidulans cosmid library (Fungal Genetics Stock Center) for the entire AnpmtB and AnpmtC genes. The isolated AnpmtB and AnpmtC genes were sequenced using a LIC4200L DNA sequencer (Li-Cor). The cDNAs of pmtB and pmtC were amplified by reverse transcription-PCR using total RNA from strain A26 with primer pairs An-pmtB-RT-F/An-pmtB-RT-R and An-pmtC-RT-F/An-pmtC-RT-R, respectively. The amplified DNA fragments were inserted into pGEM-T Easy (Promega) and sequenced. The sequences were analyzed with Genetyx (Genetyx Corp., Japan). BLAST searches were done using the A. nidulans genome database at http://www.broad.mit.edu/annotation/genome/aspergillus_group/MultiHome.html.

For disruption of AnpmtB with ptrA, conferring resistance to pyrithiamine, plasmid pGEM-ΔpmtB was constructed by insertion of a 2.0-kb KpnI fragment containing ptrA, amplified with primers ptrA-KpnI-F and ptrA-KpnI-R from pPTR I (TakaraBio), into the KpnI site of AnpmtB, which had been cloned into pGEM-T Easy after PCR amplification with primers pmtB-around-F and pmtB-around-R. Strain AKU89 was transformed with pGEM-ΔpmtB linearized with NaeI. The disruption of AnpmtB in pyrithiamine-resistant transformants was confirmed by Southern blot analysis, using a 1.1-kb region of AnpmtB amplified with primers pmtB-pr-F and pmtB-pr-R as a probe, and by PCR using primers F2-AnpmtB and R2-AnpmtB.

Analysis of the efficiency of conidiation. About 10^5 conidia were spread onto an 84-mm minimal agar medium plate. After 3 days of incubation at 30°C or 42°C, the conidia formed were suspended in 5 ml 0.01% (wt/vol) Tween 20 solution and counted using a hemocytometer.

Microscopy. Submerged hyphae of A. nidulans were observed as follows. Conidia were inoculated into liquid medium, and then the culture was poured into a petri dish containing glass coverslips. After incubation at 30°C or 42°C for 10 to 50 h, the submerged hyphae adhering to the coverslips were stained and fixed with Myco-Perm Blue (Scientific Device Laboratory). The aerial hyphae of A. nidulans were observed as follows. Conidia were inoculated on agar medium and incubated at 30°C or 42°C for 2 to 7 days. The adhesive side of Fungi-Tape (Scientific Device Laboratory) was gently pressed against the aerial hyphae. The tape with the aerial hyphae was mounted on a glass slide and then stained and fixed with Myco-Perm Blue. The hyphae were observed using a Nikon Eclipse E600 microscope.

Expression of GAI of A. awamori. For expression of the gene encoding glucoamylase I (GAI) of Aspergillus awamori (AaglaA), pGEM-glaAargB was constructed as follows. AaglaA, amplified by PCR from pBR-glaA (9) with primers F1-PglaA and R1-TglaA, was inserted into pGEM-T Easy to yield pGT-glaA. The argB gene was amplified from pDC1 (1) with primers argB-ApaI-F and argB-ApaI-R and inserted into the ApaI site of pGT-glaA to yield pGEM-glaAargB. Strains AKU89, ΔAnpmtB, and ΔAnpmtC were transformed with pGEM-glaAargB, and integration of the AaglaA expression cassette was confirmed by Southern blot analysis.
The selected argB+ transformants carrying AaglaA were cultured in 100 ml culture A medium (27) with 2.0% (wt/vol) maltose and 0.05% (wt/vol) glucose to induce AaglaA expression for 36 h at 30°C. Culture filtrates of the transformants were concentrated by centrifugation through Microcon YM-10 (Millipore) filter units and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Detection of GAI was done by immunoblotting using an anti-GAI antibody as described previously (27).

Triton X-100 membrane protein fraction preparation. A. nidulans was grown in liquid MM and harvested. Freeze-dried cells were mechanically broken with a Multi-beads shocker with an equal volume of 0.5-mm glass beads, suspended in TM buffer (50 mM Tris-Cl, 5 mM MgCl2, Complete protease inhibitor [Roche], pH 7.5), and centrifuged at 17,530 × g for 10 min. The resultant pellet was washed two times with TM buffer, resuspended in extraction buffer (50 mM Tris-Cl, 5 mM MgCl2, 1% [vol/vol] Triton X-100, pH 7.5), and centrifuged at 17,530 × g for 10 min. The supernatant, designated the Triton X-100 fraction, contained solubilized membrane proteins that were separated by SDS-PAGE and stained with Coomassie brilliant blue (CBB). Glycoproteins were detected by lectin blotting (10) using alkaline phosphatase-conjugated concanavalin A (EY Laboratory Inc.).

RESULTS

AnpmtB and AnpmtC genes encode Pmts. We previously characterized AnpmtA, encoding a Pmt of A. nidulans (26). In the present paper, we describe the remaining Anpmt genes, termed AnpmtB and AnpmtC, which we cloned from a genomic cosmid library of A. nidulans using probes prepared with degenerate primers (see Materials and Methods). BLAST searches against the A. nidulans genome database with the obtained cDNA sequences of AnpmtB and AnpmtC identified the genes as AN4761.3 and AN1459.3, respectively. AnpmtB is a gene of 3,044 bp containing six exons and five introns and encodes the AnPmtB protein, which consists of 918 amino acids with a putative molecular mass of 103.3 kDa. AnpmtC comprises 2,424 bp with three exons and two introns and encodes a protein, AnPmtC, of 773 amino acids with a putative molecular mass of 88.2 kDa. AnPmtB and AnPmtC share relatively low (34.5%) amino acid sequence homology with each other and 37.5% and 32.8%, respectively, with AnPmtA. AnPmtB showed the highest sequence homology with Pmts from other sources, such as AfPmt1 (76.0%), CaPmt1 (41.9%), and ScPmt1 (39.6%). The highest sequence homology of AnPmtC was with ScPmt4 (48.4%). Thus, the AnPmtA, AnPmtB, and AnPmtC proteins belong to distinct families, namely, PMT2, PMT1, and PMT4, respectively, based on the phylogenetic tree constructed by the unweighted-pair group method using average linkages (34) (see Fig. S1 in the supplemental material).

We previously demonstrated that AnpmtA was constitutively transcribed during incubation in liquid MM at 30°C. As determined by Northern blotting, AnpmtB and AnpmtC were expressed in liquid MM from 16 h to 48 h at 30°C (Fig. 1), indicating that both genes, like AnpmtA as previously reported (26), are functional throughout hyphal development; however, the expression levels of AnpmtB and AnpmtC decreased over time, suggesting that the genes are mainly required during early stages of hyphal development.

Disruptions of pmtB and pmtC genes. To understand the effects of AnpmtB and AnpmtC on the growth of A. nidulans, we disrupted each gene in A. nidulans AKU89 by gene replacement with ptrA+ (see Fig.
S2 in the supplemental material), yielding the ΔAnpmtB and ΔAnpmtC strains, respectively. Southern blot analysis using the 3′ region of AnpmtB or AnpmtC as a probe, and PCRs with primer pairs F2-AnpmtB/R2-AnpmtB and F3-AnpmtC/R3-AnpmtC, revealed that site-specific recombination had occurred at the AnpmtB or AnpmtC locus and that a single copy of ptrA+ had been integrated into the chromosomal DNA. We introduced the wt AnpmtB and wt AnpmtC genes into the ΔAnpmtB and ΔAnpmtC strains, yielding BΔAnpmtB and CΔAnpmtC, respectively. PCRs with primer pairs F-pmtBPr/R-pmtBPr and F1-AnpyrG/R1-AnpyrG, or F-pmtCPr/R-pmtCPr and F1-AnpyrG/R1-AnpyrG, revealed that site-specific recombination of wt AnpmtB or wt AnpmtC had occurred at the pyrG locus and that a single copy of the wt pmt gene had been integrated into the chromosomal DNA (see Fig. S2 in the supplemental material).

Underglycosylation of heterologous GAI in Anpmt disruptants. GAI from A. awamori is an extracellular protein consisting of three domains, namely, the amino-terminal catalytic domain, a serine/threonine-rich region that is glycosylated, and the starch binding domain at the carboxy terminus (12). We previously used GAI as a reporter to measure glycosylation activity and demonstrated that AnpmtA disruption affected the O mannosylation of GAI (26). We integrated the GAI-encoding gene into the ΔAnpmtB and ΔAnpmtC strains (see Materials and Methods) and determined whether the absence of either Pmt activity would have an effect on the glycosylation of GAI. The electrophoretic mobilities of GAI secreted from the ΔAnpmtB and ΔAnpmtC strains were similar to each other on 7% SDS-PAGE but slightly faster than that of GAI produced by the wt strain (Fig. 2). This result indicates that both the AnpmtB and AnpmtC disruptions led to underglycosylation of GAI. Thus, in A. nidulans, complete glycosylation of GAI in vivo requires the presence of all three functionally active Pmts: AnPmtA (26), AnPmtB, and AnPmtC.

AnPmtB functions independently of AnPmtA. Since PMT1 and PMT2 subfamily proteins form heterodimers, whereas PMT4 subfamily proteins homodimerize (7), we expected that the ΔAnpmtB strain would show a phenotype comparable to that of the ΔAnpmtA strain (26). However, in contrast to the ΔAnpmtA strain, the ΔAnpmtB strain showed colony phenotypes more similar to those of the wt strain at 30°C, and only at an elevated temperature of 42°C were slightly smaller colonies formed, which was remedied in the presence of 0.6 M KCl as an osmotic stabilizer (Fig. 3). Furthermore, the ΔAnpmtB strain formed submerged hyphae similar to those of the wt strain, although with more frequent hyphal branching (Fig. 4). Aerial hyphae of the ΔAnpmtB strain developed normal conidiophores with wt conidia, but some hyphae ended in abnormally swollen vesicles containing a few conidia. Accordingly, the number of conidia formed by the ΔAnpmtB strain was reduced to 56% of that of the wt strain cultivated under the same growth conditions (on MM at 30°C for 3 days). The BΔAnpmtB strain, carrying wt pmtB at the pyrG locus in the ΔAnpmtB strain, showed a phenotype identical to that of the wt strain with respect to colony and hyphal morphologies (Fig. 3 and 4), confirming that disruption of AnpmtB affects only the function of AnpmtB.
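Conidiation efficiencies such as the 56%-of-wt figure above come from hemocytometer counts of harvested conidia (see Materials and Methods). A minimal sketch of the underlying arithmetic, using hypothetical counts and the standard chamber factor (one 1-mm² large square at 0.1-mm depth holds 0.1 µl):

```python
def conidia_per_ml(mean_count_per_square, dilution=1):
    """Hemocytometer conversion: one large square holds 0.1 uL,
    so concentration (conidia/mL) = mean count * dilution * 1e4."""
    return mean_count_per_square * dilution * 1e4

# Hypothetical mean counts for wt and a disruptant suspension:
wt = conidia_per_ml(142, dilution=10)
mutant = conidia_per_ml(80, dilution=10)
print(f"{wt:.2e} vs {mutant:.2e}; efficiency = {mutant / wt:.0%}")  # ~56%
```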
The fact that the ΔAnpmtB strain phenotypes were completely different from those of the ΔAnpmtA strain (26) suggested that the proteins have independent functions that do not rely on their heterodimerization, as reported for members of the PMT2 and PMT1 subfamilies (7). We further assessed the relative contributions of these proteins to the growth of A. nidulans by testing the ΔAnpmtA-AnpmtB double disruptant (see Fig. S2 in the supplemental material). The growth of the ΔAnpmtA-AnpmtB strain was severely impaired at 30 and 42°C, and although it improved slightly upon addition of the osmotic stabilizer, colony formation was under all conditions far more impaired than in either single disruptant, suggesting a synthetic defect (Fig. 3) (26). The abnormalities observed in the hyphal structure of the ΔAnpmtA-AnpmtB strain were also cumulative with respect to the single disruptants (Fig. 4). The hyphae of the double disruptant were slightly swollen, with the balloon structures characteristic of the ΔAnpmtA strain (26), and hyperbranched, as found in the ΔAnpmtB strain. These results confirmed that AnPmtA and AnPmtB have independent functions and that disruption of these genes causes divergent phenotypes in A. nidulans.

Disruption of AnpmtC impairs hyphal elongation and conidium formation. Of the three Anpmt disruptions, removal of AnpmtC caused the most remarkable defect in colony formation, which was significantly recovered only at 42°C in the presence of 0.6 M KCl (Fig. 3), 0.8 M NaCl, or 1.2 M sorbitol as an osmotic stabilizer (data not shown). Interestingly, osmotic stabilization and high temperature restored wt-like extension of submerged ΔAnpmtC hyphae, which otherwise remained aberrantly swollen and branched frequently in a random spatial pattern with shorter cells (Fig. 5). Stalks with a vesicle at the hyphal tip developed after 40 h at 42°C in liquid cultures under conditions of osmotic stabilization. Conidiophores were not observed in aerial hyphae of the ΔAnpmtC strain unless the cultures were grown on plates containing an osmotic stabilizer, and conidia were formed only at 42°C, despite an aberrant conidiophore structure containing several clusters of sterigmata and conidia without vesicles. Conidium formation was reduced to 6% of wt levels under these conditions. Despite osmotic stabilization, conidia were not produced from sterigmata at 30°C. The CΔAnpmtC strain, carrying wt pmtC at the pyrG locus in the ΔAnpmtC strain, showed a phenotype identical to that of the wt strain with respect to colony formation, hyphal morphology, and conidiation (Fig. 3 and 5), confirming that disruption of AnpmtC affected only the function of pmtC.

Sensitivity to antifungal reagents. All Anpmt disruptants differed from each other and from the wt in their morphologies. Osmotic stabilization of the media reduced many of these defects, suggesting that the cell wall no longer fully contributed to maintaining the proper architecture of this filamentous fungus. Since Congo red, micafungin, and calcofluor white (CFW) are known to inhibit cell wall synthesis, we determined the sensitivities of the pmt disruptants to these compounds (Fig. 6). Compared to the wt strain, both the ΔAnpmtB and ΔAnpmtC strains were more sensitive to Congo red and micafungin, which inhibit the production of β-glucans and β-1,3-glucans.
In contrast, and unlike the hypersensitive ΔAnpmtA strain (26), neither the ΔAnpmtB nor the ΔAnpmtC strain showed sensitivity to CFW, which inhibits chitin synthesis. Glycosylation mutants of yeast are hypersensitive to hygromycin B (HygB), probably due to increased permeability of the cell wall. Indeed, we previously found that the ΔAnpmtA strain was hypersensitive to HygB (data not shown). Of the other Anpmt disruptants, only the ΔAnpmtC strain was more sensitive to HygB than the wt strain; the ΔAnpmtB strain did not show any sensitivity to HygB.

Glycoprotein profiles of Anpmt disruptants. To obtain a better understanding of the substrate specificities of the individual AnPmts, we compared the glycoprotein profiles of the three Anpmt disruptants on SDS-PAGE (Fig. 7). Secretory proteins that are subjected to protein glycosylation often localize to the plasma membrane. We therefore prepared membrane proteins extracted with Triton X-100 and analyzed them by staining and lectin blotting. Staining with CBB revealed comparable sets of membrane proteins from the wt, ΔAnpmtA, and ΔAnpmtB strains. In the protein set of the ΔAnpmtC strain, however, proteins larger than 100 kDa and with a mass of about 75 kDa were reduced, whereas proteins in the region of 65 kDa appeared to be more abundant. Lectin blotting with concanavalin A revealed an increase in mannose-containing glycoproteins of about 80 kDa and a decrease in those of about 40 kDa in the ΔAnpmtC strain. The glycoprotein profile of the ΔAnpmtB strain was almost indistinguishable from the wt profile, suggesting a minor role in glycosylation of proteins localizing to the plasma membrane.

Substrate specificities of AnPmts. In S. cerevisiae, ScPmt2 and ScPmt4 mannosylate the plasma membrane proteins ScWsc1 and ScMid2, which function as cell wall stress sensors (20). We tested whether the A. nidulans PMT2 (AnPmtA) and PMT4 (AnPmtC) proteins had comparable substrate specificities and therefore cloned the gene encoding AN5660.3 (termed AnWscA), which we identified in the A. nidulans genome as a homolog of ScWsc1. AnWscA is composed of 280 amino acids and shares 31.2% amino acid identity with ScWsc1. The protein has an N-terminal signal sequence of 24 amino acids, as predicted by the SignalP program, and contains a Wsc motif (amino acids 25 to 123) rich in cysteine residues, a serine- and threonine-rich region (amino acids 124 to 186), a transmembrane region (amino acids 187 to 210), and a cytoplasmic domain at the C terminus (amino acids 211 to 280) (Fig. 8A). Three putative N-glycosylation sites are found at Asn135, Asn176, and Asn258, while 31 (19 serines and 13 threonines) of the 63 residues of the serine/threonine-rich region can serve as O-glycosylation sites. We expressed a 3HA-tagged version of AnWscA, with the HA epitope tag attached to the C terminus, in the wt and the Anpmt disruptants and assessed the extent of glycosylation by comparing the gel mobilities of the tagged proteins (Fig. 8B). The wt strain produced AnWscA-3HA proteins with an apparent molecular mass of 50 kDa, which is higher than the calculated molecular mass of 33.5 kDa due to N and O glycosylation. Deletion of the A. nidulans PMT1 protein in the ΔAnpmtB strain did not affect the mobility of the tagged protein on SDS-PAGE. However, absence of the PMT2- and PMT4-like proteins in the ΔAnpmtA and ΔAnpmtC strains caused slightly faster mobility of AnWscA-3HA, presumably due to underglycosylation of the tagged protein.
In addition, several bands of around 20 to 25 kDa were detected in these disruptants, but not in the wt strain or when AnPmtB was absent, suggesting that glycosylation protects the protein from N-terminal degradation. These results indicate that AnWscA-3HA is a natural substrate of AnPmtA and AnPmtC, supporting their classification as PMT2 and PMT4 proteins.

DISCUSSION

Proteins going through the secretory pathway are posttranslationally modified by O glycosylation, which is generally protein O mannosylation in fungi. In Saccharomyces, Schizosaccharomyces, Candida, Cryptococcus, Trichoderma, and Aspergillus, initial protein O mannosylation is catalyzed by Pmts (9). Accordingly, all three pmt genes of A. nidulans contribute to normal hyphal development and are expressed throughout growth, implying that protein O mannosylation plays important roles in this fungus. We previously characterized the AnpmtA and AapmtA genes encoding PmtA, belonging to the PMT2 subfamily (26,27). Here we characterized the two Anpmt genes encoding AnPmtB and AnPmtC, belonging to the PMT1 and PMT4 subfamilies, respectively. In A. nidulans, the phenotypes caused by gene disruption of AnpmtA and AnpmtB are different, and those of the double disruptant are cumulative with respect to each single pmt disruptant, suggesting that, unlike most yeast PMT1 and PMT2 subfamily proteins, the corresponding AnPmtB and AnPmtA proteins of A. nidulans function in an independent manner. Interestingly, ScPmt6, the third PMT2 protein in S. cerevisiae, also does not behave as a canonical PMT protein, in the sense that no interactions with other Pmts or with itself have been observed (7).

We attempted to determine the in vivo substrate specificities of the AnPmt proteins by assessing the extent of glycosylation of GAI. Underglycosylation of GAI in the absence of AnPmtA was demonstrated previously (26). Strains in which either AnpmtB or AnpmtC was disrupted also secreted underglycosylated GAI, indicating that the three AnPmts share substrate specificity for GAI and are involved in the mannosylation of its high number of hydroxyamino acids. In contrast, AnPmtB did not significantly contribute to the glycosylation of AnWscA-3HA. However, in the absence of either AnPmtA or AnPmtC, AnWscA mannosylation was affected, and two major protein bands with molecular masses between 20 and 25 kDa were formed, suggesting that proteolytic cleavage had occurred at around the middle of the protein, which corresponds to the serine/threonine-rich region. Thus, AnPmtB has a substrate specificity different from that of AnPmtA and AnPmtC.

[FIG. 4. Hyphal morphology of Anpmt disruptants. The wt strain AKU89 and the ΔAnpmtB, BΔAnpmtB, and ΔAnpmtA-pmtB strains were grown at 30°C in liquid MM or MMU (MM supplemented with ...).]

In S. cerevisiae, Wsc1, Wsc3, and Mid2, which act as cell wall sensors, were determined to be substrates for ScPmt1-ScPmt2 complexes and the ScPmt4 protein (20). In particular, ScPmt4 preferentially mannosylates Ser/Thr-rich regions flanked by a membrane anchor in secretory proteins (14). Thus, the substrate specificities toward Wsc proteins are conserved between the same PMT subfamily proteins of Saccharomyces and Aspergillus. Disruption of pmt genes in Aspergillus led to phenotypes with pleiotropic abnormalities.
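The AnWscA domain coordinates given under Results translate directly into a simple domain map. The sketch below encodes them and recomputes the size of the Ser/Thr-rich region, assuming only the residue ranges stated in the text (the dictionary layout is our own, not from the paper):

```python
# AnWscA (280 aa) domain boundaries as stated in the text (1-based, inclusive).
domains = {
    "signal_sequence": (1, 24),
    "wsc_motif": (25, 123),
    "ser_thr_rich": (124, 186),
    "transmembrane": (187, 210),
    "cytoplasmic": (211, 280),
}

def region_length(name):
    start, end = domains[name]
    return end - start + 1

n = region_length("ser_thr_rich")  # 63 residues
print(n, f"reported candidate O-glycosylation sites: 31/{n} = {31 / n:.0%}")
```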
As in the case of AnWscA, underglycosylation due to a defect in Pmt activity may lead in general to proteolytic cleavage within the serine/threonine-rich regions of Pmt substrates and subsequent underrepresentation of active versions of these proteins at their sites of action. In the absence of either the A. nidulans PMT2 protein AnPmtA or A. fumigatus PMT1, the mutant fungi lost cell wall integrity, resulting in repressed colony formation (26,46). The AnpmtA and Afpmt1 disruptants were therefore also hypersensitive to high temperature, CFW, and HygB. In contrast, AnpmtB disruption did not significantly affect colony formation or make the fungus hypersensitive to CFW and HygB, despite the high homology between AnPmtB and AfPmt1, which argues that they belong to the same PMT1 subfamily. While the AnpmtB disruptant was also sensitive to high concentrations of Congo red or micafungin, the major phenotype characterizing this mutant was its highly branched hyphae. Therefore, presumably due to the absence of a protein normally glycosylated by AnPmtB, the mechanism by which the germination site for a new hypha is determined is no longer properly regulated. Thus, there is a possibility that AnPmtB is involved in polarity maintenance. Since the disruption of Afpmt1 does not affect the polarized growth of A. fumigatus, this strengthens our finding that the PMT1 subfamily proteins of A. nidulans and A. fumigatus have different functions. Not only the disruption of AnpmtB, but also that of the Angmt and AfmsdS genes involved in protein glycosylation, caused abnormal polarity (16,17). Thus, some downstream, but unidentified, glycoproteins seem to control hyphal polarity.

Of the three analyzed AnPmts, AnPmtC appeared to be the most essential, as the absence of the protein caused the severest growth defect in terms of a repressed growth rate and aberrant morphology. The vital role AnPmtC plays in hyphal development and morphogenesis resembles that of AnChsB, a class III chitin synthase involved in the synthesis of cell wall chitin during hyphal growth and conidiation (2,43). AnchsB mutants grow as minute colonies, form hyphae with a very high degree of branching, and cannot conidiate (15), a phenotype very similar to that of the AnpmtC disruptant. As AnChsB is a membrane protein of 916 amino acids that contains a total of 130 serine and threonine residues, it is tempting to speculate that the protein is a specific substrate for AnPmtC or interacts closely with one. Interestingly, a null mutant for another chitin synthase gene, AncsmB, forms abnormally branched conidiophores (37). Conidiophores generated in the absence of AnPmtC were also abnormal, with several clusters of sterigmata and conidia without vesicle formation.

Hyphal polarity, as well as cell wall integrity, is closely associated with the synthesis and degradation of α- and β-glucans and chitin. A. nidulans chiA encodes a class III chitinase with a Ser/Thr/Pro-rich region and a glycosylphosphatidylinositol anchor attachment motif. AnChiA is heavily O glycosylated and localizes at hyphal branching sites (42); however, disruption of the gene did not affect hyphal and conidiophore morphology, as we observed in the absence of AnPmtA or AnPmtC, but decreased the hyphal growth rate (36). In S. cerevisiae, the β-1,3-glucanosyltransferase ScGas1, which is localized at the cell surface via a glycosylphosphatidylinositol anchor, is a substrate of ScPmt4 and ScPmt6 (5,39). The Scgas1 null mutation resulted in defective cell wall architecture. The A. fumigatus genes homologous to ScGAS1 were found to be Afgel1 and Afgel2. Disruption of Afgel1 did not cause a phenotype, but the Afgel2 disruptant exhibited slower growth than the wt, abnormal conidiogenesis, and altered cell wall composition (22). Both proteins contain a serine/threonine-rich region near the C terminus, and it will be interesting to know which AnPmts glycosylate these proteins.

[FIG. 7 legend fragment: Total proteins were stained with CBB, and the glycoproteins were detected by lectin blot analysis with concanavalin A as described in Materials and Methods. WT, ΔA, ΔB, and ΔC indicate proteins from wt, ΔAnpmtA, ΔAnpmtB, and ΔAnpmtC cells, respectively. Lanes M contained Precision Plus protein standards used as molecular mass markers. Proteins that appeared in the wt but disappeared in a ΔAnpmt strain are indicated by black arrowheads; proteins that appeared in a ΔAnpmt strain but not in the wt are indicated by white arrowheads.]

An increasing number of genes responsible for synthesis and degradation of the cell wall and for polarity establishment and maintenance have been identified in filamentous fungi. However, the localization and glycosylation of most of these proteins remain to be characterized. We are currently in the process of identifying these target proteins using the pmt disruptants, hoping to reveal how the glycoproteins maintain fungal morphology, hyphal development, and differentiation.
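The subfamily assignments in this paper rest on a UPGMA tree of Pmt sequences, and UPGMA is simply average-linkage hierarchical clustering of a distance matrix. A minimal SciPy sketch, with an illustrative distance matrix (1 − fractional identity) loosely echoing the pairwise homologies quoted under Results, not the authors' actual alignment:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ["AnPmtA", "AnPmtB", "AnPmtC", "ScPmt4"]
# Illustrative pairwise distances (symmetric, zero diagonal).
D = np.array([
    [0.000, 0.625, 0.672, 0.700],
    [0.625, 0.000, 0.655, 0.680],
    [0.672, 0.655, 0.000, 0.516],
    [0.700, 0.680, 0.516, 0.000],
])

# UPGMA = average linkage applied to the condensed distance matrix.
tree = linkage(squareform(D), method="average")
order = dendrogram(tree, no_plot=True, labels=labels)["ivl"]
print(order)  # AnPmtC pairs with ScPmt4 first, echoing the PMT4 assignment
```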
Nocturnal haemoglobin oxygen saturation variability is associated with vitamin C deficiency in Tanzanian children with sickle cell anaemia

Aim To compare pulse oximetry in children with sickle cell anaemia (SCA) and controls and test the hypothesis that vitamin C deficiency (VCD; <11.4 μmol/L) is associated with nocturnal haemoglobin oxygen desaturation in SCA.

Methods We undertook nocturnal and daytime pulse oximetry in 23 children with SCA (median age 8 years) with known steady-state plasma vitamin C concentrations and 18 siblings (median 7 years).

Results Median nocturnal delta 12 s index (delta12s), a measure of haemoglobin oxygen saturation (SpO2) variability, was 0.38 (interquartile range 0.28-0.51) in SCA and 0.35 (0.23-0.48) in controls, with 9/23 and 6/18, respectively, having a delta12s >0.4, compatible with obstructive sleep apnoea (OSA). Eleven of twenty-three with SCA had VCD; logged vitamin C concentrations showed a 66% decrease per 0.1 unit increase in delta12s ([95% CI −86%, −15%]; p = 0.023), and delta12s >0.4 was associated with VCD (odds ratio 8.75 [1.24-61.7], p = 0.029). Daytime and mean nocturnal SpO2 were lower in SCA, but there was no association with vitamin C.

Conclusion Obstructive sleep apnoea (OSA), detected from nocturnal haemoglobin oxygen saturation variability, is common in Tanzanian children and associated with vitamin C deficiency in SCA. The direction of causality could be determined by comparing OSA treatment with vitamin C supplementation.

INTRODUCTION

Erythrocytes containing haemoglobin S (HbS) experience chronic redox imbalance from increased production of hemichromes and therefore reactive oxygen species (ROS). The associated haemolysis contributes to many of the pathophysiological pathways in sickle cell anaemia (SCA), potentially mediated by oxidant stress, decreased nitric oxide bioavailability, inflammation and hypoxia. The compromise of endothelial function may be exacerbated by intermittent nocturnal hypoxia (1) associated with obstructive sleep apnoea (OSA), which is common in SCA (2). The delta 12 s index (delta12s), the absolute difference in haemoglobin oxygen saturation (SpO2) between successive 12-s intervals, measures baseline SpO2 variability. In adults in the general population, delta12s values of >0.4 predict an apnoea/hypopnea index (AHI) of >15 with 88% specificity and 70% sensitivity (3). In 71 children with sickle cell disease enrolled in the Sleep Asthma cohort (4), using the same cut-offs for delta12s and AHI, specificity and sensitivity were 100% and 89%, respectively (Gavlak et al., unpublished). OSA is associated with oxidative stress and endothelial dysfunction (5). Supplemental vitamin C improves endothelial function in adults with OSA in the general population (6). Although vitamin C deficiency (VCD) appears to be common in children with SCA (7), the possibility of an association with low SpO2, either intermittent or chronic, has not been explored.

Abbreviations: AHI, apnoea/hypopnea index; BMI, body mass index; CPAP, continuous positive airway pressure; delta12s, delta 12 second index; FEV1, forced expiratory volume in 1 second; Hb, haemoglobin; MCHC, mean cell haemoglobin concentration; OSA, obstructive sleep apnoea; ROS, reactive oxygen species; SCA, sickle cell anaemia; SpO2, haemoglobin oxygen saturation; VCD, vitamin C deficiency.

To test the hypothesis that low antioxidant status is associated with intermittent and/or chronic hypoxia in children with SCA, we undertook overnight pulse oximetry in well SCA children who were enrolled in an African urban
To test the hypothesis that low antioxidant status is associated with intermittent and ⁄ or chronic hypoxia in children with SCA, we undertook overnight pulse oximetry in well SCA children who were enrolled in an African urban Abbreviations AHI, Apnoea ⁄ hypopnea index; BMI, Body mass index; CPAP, Continuous positive air pressure; delta12 s, Delta 12 second index; FEV1, Forced expiratory volume at 1 second; Hb, Haemoglobin; MCHC, Mean cell haemoglobin concentration; OSA, Obstructive sleep apnoea; ROS, Reactive oxygen species; SCA, Sickle cell anaemia; SpO2, Haemoglobin oxygen saturation; VCD, vitamin C deficiency. cohort and sibling controls; those with SCA also had steady-state vitamin C levels measured. Ethical permission was granted by the Muhimibili University of Health & Allied Sciences ethics committee (Ref: MU ⁄ RP ⁄ AECNoI.XII ⁄ 77). Children were recruited from confirmed HbSS patients enrolled in a cohort study at Muhimbili National Hospital, Dar-es-Salaam, from April to July 2009 and their siblings. They were not selected as having sleep or breathing problems. Informed consent was obtained from parents of the children; where appropriate, assent was obtained from children themselves. Pulse oximetry was sampled in the day at rest and over a single night using a 2-s averaging time and 1 Hz sampling rate (Masimo Radical, Irvine, CA, USA). Data analysis was performed with Download 2001 software (Stowood Scientific, Oxford, UK). Poor perfusion, low signal IQ and movement artefact data were rejected. Analysis software yielded standard measures including mean and minimum SpO 2 , delta12 s and desaturation index of 3% or greater from baseline. Analyses of artefact-free recordings were conducted and data were compared between children with SCA and their siblings using the independent t-test for normally distributed data or the non-parametric Mann-Whitney U-test. Steady-state vitamin C concentrations were measured using a fluorometric method by Human Nutrition Research, Cambridge, UK, in plasma samples separated and stabilized within 2 h of collection with metaphosphoric acid. In the children with SCA, associations between logarithmically transformed vitamin C concentrations and oximetry variables were assessed using linear regression and by logistic regression of VCD and binary oximetry data. All oximetry variables were tested for associations with the potential covariates: age, sex, body mass index (BMI)-z-score for age and averaged steady-state haemoglobin, from data collected at routine clinic visits and entered into the cohort study database. RESULTS Eighteen control siblings, six boys, median age 7 (range 2-12) years, underwent overnight pulse oximetry, as did 23 children with SCA, 13 boys, median age 7.8 (range 2.9-15.1) years, who had had steady-state vitamin C concentrations measured prior to the sleep study. Ethics was not granted for venepuncture in the controls. Descriptive statistics for haemoglobin and pulse oximetry data in controls and children with SCA and for steady-state vitamin C in those with SCA are given in Table 1. Daytime haemoglobin oxygen saturation was lower in the children with SCA than in the controls and there was a trend for lower mean nocturnal haemoglobin oxygen saturation but there was no difference between children with SCA and controls in sleep duration, minimum overnight SpO2, number of overnight SpO 2 dips >3% ⁄ hr and the delta 12 s index ( Table 1). 
Forty-eight per cent (11/23) of the children with SCA had VCD (<11.4 μmol/L), a similar proportion to that in all patients with SCA with data available (58%; 463/799) but higher than the proportion in a historical group of Tanzanian control children (32%; 24/74) (Cox et al., unpublished data). There was no association between vitamin C and age, sex, nutritional status (BMI z-score) or steady-state haemoglobin in the children with SCA. In the children with SCA, geometric mean vitamin C decreased by 66% per 0.1 unit increase in delta12s ([95% CI −86%, −15%]; p = 0.023), but delta12s was not associated with duration of sleep, age, sex, BMI z-score or steady-state haemoglobin. Vitamin C concentration also decreased with higher numbers of episodes of SpO2 desaturation >3%/h (6.2% decrease [95% CI −11.8%, −2.5%]; p = 0.042). There were no associations between vitamin C and the other oximetry variables. A high delta12s (>0.4) was significantly associated with nearly nine-times-greater odds of VCD (Table 2).

DISCUSSION

In line with previous data (1), our study reports lower daytime and mean nocturnal haemoglobin oxygen saturation in children with SCA than in controls, although the latter did not reach statistical significance. OSA is commoner in black children, but the limits acceptable as within the normal range have not been defined in this population (8). There are few data comparing measures of desaturation and OSA between children with SCA and ethnically matched controls, although in one study, mean and minimum overnight SpO2 of <95.8% and <80%, respectively, were not seen in 50 controls, half of whom were siblings (2). Interestingly, in our data, 2 and 3 controls, respectively, had values below the mean and minimum overnight SpO2 in Samuels' study of children living in England (2). In addition, there was no difference between children with SCA and sibling controls in study duration, minimum overnight SpO2, the number of SpO2 dips >3% and delta12s, a measure of the variability of SpO2 predictive of OSA in adults in the general population and in children with SCA. Our sample was small, and further studies should explore genetic and environmental factors, including VCD, in children with SCA and ethnically matched controls, both siblings and unrelated.

This is the first report to test for an association between vitamin C status and nocturnal oximetry measures in SCA. We hypothesized that intermittent and/or chronic nocturnal hypoxia would be associated with low vitamin C status because of the effects of the associated oxidant stress. Delta12s correlated with vitamin C concentrations in children with SCA, and VCD was associated with greatly increased odds of a high delta12s. OSA is known to cause intermittent hypoxia, and vitamin C concentrations were also inversely correlated with the number of dips in SpO2 >3%/h, although with VCD and the number of dips in SpO2 >3%/h dichotomized, this relationship did not reach statistical significance. There is evidence for links between VCD and endothelial function in adults with OSA and in children with SCA. In a study by Grebe et al. (6), patients with untreated OSA had significantly reduced endothelium-dependent vasodilation compared to controls, an effect suggested to be mediated via increased oxidant stress. Intravenous vitamin C supplementation increased endothelial vasodilation in the patients with OSA but had no effect in controls; unfortunately, vitamin C concentrations were not reported in that manuscript.
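The log-linear effect size reported above (a 66% fall in geometric-mean vitamin C per 0.1-unit increase in delta12s) can be converted back to a regression slope on the log scale. A short sketch of the arithmetic, which uses only the published percentage:

```python
import numpy as np

# Under log(vitC) = a + b * delta12s, the percent change per 0.1 unit
# is (exp(0.1 * b) - 1) * 100, so the implied slope is:
b = np.log(1 - 0.66) / 0.1             # about -10.8 per unit of delta12s
pct_per_01 = (np.exp(0.1 * b) - 1) * 100
print(round(b, 2), round(pct_per_01, 1))  # -10.79 -66.0
```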
In another study, of adult patients with SCA not investigated for OSA, oral vitamin C supplementation (300 mg/day) for 6 weeks decreased forearm vascular resistance as well as increasing forearm blood flow and the vasodilator effect of warmth stimulation (9). Patients with SCA may have increased antioxidant requirements because of the production of ROS from unstable erythrocytes. Exposure of sickle erythrocytes to vitamins C and E in vitro reduced markers of erythrocyte oxidant stress (10). In addition, markers of lung function have been associated with the percentage of irreversibly sickled cells (11), which was shown by the same group to be decreased by vitamin C supplementation (12). Our study suggests a link between SpO2 variability and vitamin C deficiency in SCA. However, it is not possible to conclude that low vitamin C concentrations are a result of SpO2 variability and hypoxia rather than causal. Vitamin C may be important in the response to hypoxia, through its role in carotid body sensitivity (13) and in the stabilization, by vitamin C-dependent prolyl-hydroxylase enzymes, of hypoxia-inducible factor 1 (HIF-1), which regulates acute and chronic hypoxic responses (14). In support of low vitamin C being causal for intermittent hypoxia is the observation that vitamin C supplementation reversed the age-associated depression of the hypoxic hyperventilatory response in elderly subjects (15). Differences in the hypoxic hyperventilatory response might be important in adaptation to both chronic and intermittent hypoxia in conditions, such as SCA, in which OSA is also a feature. Further studies are justified, adequately powered to examine measures of sleep-disordered breathing other than the delta12s index in children with SCA, and including vitamin C measurement in control children with and without OSA. To guide future therapeutic interventions, the direction of causality could be tested by investigating the effect of continuous positive airway pressure treatment on vitamin C concentrations (16) and the effect of vitamin C supplementation on OSA and responses to hypoxia.
Decreased secretion of adiponectin through its intracellular accumulation in adipose tissue during tobacco smoke exposure

Background: Cigarette smoking is associated with an increased risk of type 2 diabetes mellitus (T2DM). Smokers exhibit low circulating levels of total adiponectin (ADPN) and of high-molecular-weight (HMW) ADPN multimers. Blood concentrations of HMW ADPN multimers closely correlate with insulin sensitivity for handling glucose. How tobacco smoke exposure lowers blood levels of ADPN, however, has not been investigated. In the current study, we examined the effects of tobacco smoke exposure in vitro and in vivo on the intracellular and extracellular distribution of ADPN and its HMW multimers, as well as potential mechanisms.

Findings: We found that exposure of cultured adipocytes to tobacco smoke extract (TSE) suppressed total ADPN secretion, and TSE administration to mice lowered their plasma ADPN concentrations. Surprisingly, TSE caused intracellular accumulation of HMW ADPN in cultured adipocytes and in the adipose tissue of wild-type mice, while preferentially decreasing HMW ADPN in culture medium and in plasma. Importantly, we found that TSE up-regulated the ADPN retention chaperone ERp44, which colocalized with ADPN in the endoplasmic reticulum. In addition, TSE down-regulated DsbA-L, a factor for ADPN secretion.

Conclusions: Tobacco smoke exposure traps HMW ADPN intracellularly, thereby blocking its secretion. Our results provide a novel mechanism for hypoadiponectinemia and may help to explain the increased risk of T2DM in smokers.

Introduction
Over 1.3 billion people smoke worldwide, and even more are exposed to second-hand smoke. Smokers often exhibit impairments in insulin-mediated glucose handling and an increased incidence of type 2 diabetes mellitus (T2DM) [1]. Smoking cessation improves these conditions [2]. Nevertheless, the mechanisms by which smoking impairs insulin-stimulated glucose metabolism and increases T2DM are still unclear.

Materials
We purchased polyclonal antibodies against mouse adiponectin and ERp44 (endoplasmic reticulum [ER] resident protein of 44 kDa) from Cell Signaling. Monoclonal antibodies against Ero1 L-α (ER oxidoreductase 1-Lα) and GAPDH, as well as secondary antibodies (anti-rabbit and anti-mouse IgG horseradish peroxidase conjugates), were from Santa Cruz. The antibody against DsbA-L (disulfide-bond A oxidoreductase-like protein) was from Abcam. Tobacco smoke extract (TSE, 100%) containing the water-soluble components of smoke was prepared by using a Kontes gas-washing bottle to bubble mainstream smoke from research cigarettes through serum-free, phenol red-free RPMI medium containing 0.2% BSA (RPMI/BSA), followed by filtration (0.22 μm) and standardization according to absorbance at 320 nm, as we previously published [12,13].

Cell culture, preparation of primary mouse adipocytes, and TSE exposure
Murine 3T3-L1 preadipocytes (ATCC) were cultured and differentiated into adipocytes as described [14]. Briefly, two days after reaching 100% confluence, the cells were stimulated for an additional two days with FBS/DMEM containing 100 nM insulin, 0.5 mM IBMX, 0.25 μM dexamethasone, and 1 μM rosiglitazone. Cells were then maintained in FBS/DMEM medium with 100 nM insulin for another 2 days to differentiate into mature adipocytes with fat droplets. Cells were serum-starved for 3 h in DMEM containing 0.2% BSA, followed by exposure to 0-1.5% TSE for 0-20 h. Primary mouse adipocytes were prepared as described [15].
Briefly, epididymal adipose tissue from wild-type C57BL/6 mice (Jackson Laboratory) was placed in pre-warmed DMEM with 10% FBS and penicillin/streptomycin, and then minced into 5-10 mg pieces. Minced tissue fragments were filtered through a nylon mesh (350-μm pore size) and washed with DMEM. Then 200-300 mg of minced, filtered tissue was placed into 1 ml DMEM with 0.2% BSA and penicillin/streptomycin for 18 h before being treated without or with 1.5% TSE for an additional 20 h. At the end of the incubation, the supernatants were collected and the explants were lysed for ADPN immunoblots.

TSE exposure of wild-type mice
To mimic the effects of tobacco smoke exposure on a non-respiratory organ with well-controlled dosing, we followed the recently established methodology of intraperitoneal administration of TSE [16-18]. Wild-type mice were injected intraperitoneally in the lower left quadrant of the abdomen with 400 μl of pre-warmed, filtered TSE diluted to 20% strength in RPMI-1640 (this amount of TSE is equivalent to smoking 2 packs of cigarettes for a 60-kg person) or with RPMI-1640 alone (control) on days 1, 3, 5, 8, and 10 [16]. Twenty-four hours after the final injection, the mice were euthanized by an overdose of pentobarbital. We collected whole blood by cardiac puncture and then epididymal adipose tissue from the lower right abdominal quadrant, away from the application sites. All animal protocols were approved by the Institutional Animal Care and Use Committee of the Philadelphia Veterans Administration Medical Center.

Immunoblots
Immunoblots were performed as described in our previous publications [12,13]. For detection of adiponectin oligomers and multimers, cells were lysed in non-reducing lysis buffer and loaded onto a gel without boiling.

Determination of adiponectin concentration
Total adiponectin concentrations in conditioned media from control and TSE-treated 3T3-L1 adipocytes and in plasma from control and TSE-treated mice were measured by ELISA for mouse ADPN according to the manufacturer's instructions (BioVendor).

Data analysis
All column and line graphs depict the mean ± SEM of data that passed tests for normality. Comparisons amongst three or more groups were performed using one-way analysis of variance (ANOVA), followed by pairwise comparisons using the Student-Newman-Keuls (SNK) test, with p < 0.05 considered significant. Comparisons between two groups used Student's unpaired, two-tailed t-test (an illustrative sketch of this workflow appears after the first Results paragraph below).

Results and discussion
Tobacco smoke exposure decreases secretion of ADPN while inducing its intracellular accumulation
We found that TSE exposure caused dose- and time-dependent suppression of total ADPN secretion from 3T3-L1 adipocytes, while increasing total intracellular ADPN detected by immunoblots (Figure 1A-1D). Viability of the adipocytes was unaffected by these low concentrations of TSE (not shown). Inhibition of total ADPN secretion into the conditioned medium of TSE-exposed 3T3-L1 adipocytes was confirmed by ADPN ELISA (Figure 1E). In addition, exposure of mouse primary adipocytes to 1.5% TSE ex vivo for 20 h significantly suppressed ADPN secretion, while increasing intracellular ADPN content (Figure 1F). Importantly, exposure of mice to TSE also induced a large decrease in plasma concentrations of total ADPN in vivo (Figure 1G), consistent with prior publications in smoke-exposed mice [11] and in human smokers [6,7]. Thus, smoke-induced retention of ADPN within adipocytes contributes to the decreased secretion of ADPN from adipose tissue and to the low plasma levels of ADPN in smokers [6,7,9,10].
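As an illustrative aside, the group-comparison workflow described under Data analysis might be sketched as follows in Python. The ELISA readings are hypothetical, and Tukey's HSD (statsmodels) is used here as a stand-in for the Student-Newman-Keuls post hoc test.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ADPN ELISA readings (ng/ml) for three TSE doses, n = 5 each.
rng = np.random.default_rng(0)
groups = {
    "0% TSE": rng.normal(100, 10, 5),
    "0.5% TSE": rng.normal(80, 10, 5),
    "1.5% TSE": rng.normal(55, 10, 5),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise post hoc comparisons (Tukey HSD as a stand-in for SNK).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Two-group comparisons used Student's unpaired, two-tailed t-test.
t_stat, p_t = stats.ttest_ind(groups["0% TSE"], groups["1.5% TSE"])
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
```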
[Figure 1 legend: Tobacco smoke exposure decreases secretion of ADPN while inducing its intracellular accumulation. Panels A,B: Representative immunoblots (A) and summary statistics (B) for the dose-dependent effects of TSE on ADPN accumulation in conditioned medium and in cellular homogenates of 3T3-L1 adipocytes during a 20-h incubation. Panels C,D: Representative immunoblots (C) and summary statistics (D) of the time course of the effects of TSE on ADPN accumulation in conditioned medium and in cellular homogenates of 3T3-L1 adipocytes exposed to 1.5% TSE for 0-20 h. Panel E: ADPN concentrations measured by ELISA in the culture supernatants of 3T3-L1 adipocytes exposed for 20 h to 0 (control) or 1.5% TSE. Panel F: Immunoblots of ADPN in conditioned medium and in cellular homogenates of primary mouse adipocytes treated without or with 1.5% TSE for 20 h. Panel G: Immunoblots of total plasma ADPN in mice after RPMI (control) or TSE injections. Panels B, D, E, and G: n = 3-5. In panels B and D, P < 0.01 by ANOVA of all cellular values, and P < 0.01 by ANOVA of all medium values. *P < 0.05, **P < 0.01, ***P < 0.001 vs. control values (0% TSE or t = 0) by the SNK test. In panels E and G, Student's t-test was used.]

Tobacco smoke exposure traps HMW ADPN intracellularly
Among the three different multimeric forms of ADPN, HMW ADPN has been shown to be the most biologically active [4,5] in promoting insulin-induced glucose handling [3-6]. In the current study, we assessed the three major multimeric forms of ADPN by immunoblots under non-reducing conditions [19]. We found that the decrease in total ADPN secretion from cultured 3T3-L1 adipocytes after TSE exposure (Figure 1A-E) was mainly attributable to decreased secretion of HMW ADPN (Figure 2A,B), accompanied by increased intracellular accumulation of HMW ADPN (Figure 2A,B). Likewise, we found that mice injected with TSE exhibited a loss of mainly HMW ADPN from plasma (Figure 2C) and an accumulation of HMW ADPN in their adipose tissue (Figure 2D).

Tobacco smoke exposure dysregulates the expression of ADPN chaperones
Assembly and secretion of adiponectin oligomers from adipocytes is tightly regulated by the thiol redox status in the ER through ERp44 and Ero1-Lα [20,21]. ERp44 is an ER-resident chaperone that inhibits the secretion of ADPN through thiol-mediated retention, while Ero1-Lα releases HMW adiponectin from ERp44 [20,21].

[Figure 3 legend, partially recovered: Panel A: ... DsbA-L in 3T3-L1 adipocytes exposed to 1.5% TSE for 0-20 h. Panel B: confocal fluorescent micrographs of representative 3T3-L1 cells that were stained simultaneously with anti-ERp44 (red) and anti-ADPN (green) antibodies, as well as DAPI (blue; nuclear stain). The yellow color in the merged images (Merge) demonstrates co-localization of ERp44 and ADPN in the ER around the nucleus.]

In addition, DsbA-L has been shown to promote adiponectin multimerization and secretion [22,23]. In the current study, we found that TSE exposure of cultured adipocytes induced time- (Figure 3A) and dose- (not shown) dependent up-regulation of ERp44 and down-regulation of DsbA-L. TSE exposure, however, did not affect the amount of Ero1-Lα in adipocytes (Figure 3A), suggesting that the high intracellular levels of ERp44 would be unopposed. Additionally, our confocal microscopic analyses revealed that TSE exposure of adipocytes increased intracellular staining for ERp44 (red) and ADPN (green, Figure 3B).
Importantly, this intracellular ADPN colocalized with ERp44 (yellow color in the merged images, Figure 3B), indicating ADPN accumulation in the ER, presumably physically associated with ERp44. We conclude that tobacco smoke exposure suppresses ADPN secretion from adipocytes by specifically trapping HMW ADPN intracellularly, thereby contributing to decreased blood levels of ADPN in smokers. These results provide a novel mechanism for hypoadiponectinemia, which may help to explain impaired insulin-mediated glucose handling and the increased risk of T2DM in smokers.
Were interstitial space features the main factors affecting sediment microbial community structures in Chaohu Lake?

Sediments cover a majority of Earth's surface and are essential for global biogeochemical cycles. The effects of sediment physicochemical features on microbial community structures have attracted attention in recent years. However, whether the interstitial space has significant effects on microbial community structures in submerged sediments remains unclear. In this study, based on the identified OTUs (operational taxonomic units), correlation analysis, RDA analysis, and PERMANOVA analysis were applied to investigate the effects of interstitial space volume, interstitial gas space, volumetric water content, sediment particle features (average size and evenness), and sediment depth on microbial community structures in different sedimentation areas of Chaohu Lake (Anhui Province, China). Our results indicated that sediment depth was the factor closest to the main environmental gradient. The destructive effects of gas space on sediment structure can physically affect the similarity of the whole microbial community across all layers in the river-dominated sedimentation area (where methane is emitted actively). However, none of the five interstitial space parameters, including gas space, significantly accounted for the microbial community structure within a single sediment layer. Thus, except where active physical destruction of sediment structure occurs (for example, methane ebullition), sediment interstitial space parameters were ineffective in shaping microbial community structures in all sedimentation areas.

Introduction
Submerged sediments often exhibit strong microbial activities (greenhouse gas metabolism (Aben et al., 2017; Comer-Warner et al., 2019), nitrogenous compound metabolism (Reis et al., 2019; Liu and Yang, 2020; Tan et al., 2020), etc.) and are often biodegradation hotspots that balance global biogeochemical cycles (Mercier et al., 2014; Derrien et al., 2019). The relationships between sediment microbial community structure and environmental factors, such as pH, redox gradients (Ruuskanen et al., 2018), dissolved oxygen, total organic carbon content (Bryant et al., 2012), and sand features (Legg et al., 2012), have attracted much attention in recent years. Among these physicochemical factors, sediment interstitial space volume (porosity) can directly affect the absolute abundance of microbial communities, as it is the primary living space for sediment microorganisms (Hassard et al., 2017; Ahmerkamp et al., 2020). However, it is still not clear whether the features of the sediment interstitial space have effects on microbial community structures (Rebata-Landa and Santamarina, 2006; Scheidweiler et al., 2021). The sediment interstitial space has two components, interstitial water and interstitial gas space. The structure of the interstitial space is initially sustained by the sediment particles (Van Damme and Henri, 2018). The volume and structure of the interstitial space can be affected by particle features (size and evenness) (Mckinley et al., 2011; Dowey et al., 2017), gas emissions (Jain and Juanes, 2009; Lu et al., 2021), and microbial activities (cementation and mineralization) (Vorhies and Gaines, 2009; Gerbersdorf and Wieprecht, 2014). Thus, interstitial water content, interstitial gas space content, and particle features are all characteristics of the interstitial space of sediments.
Recent studies have shown that sediment microbial community structures are strongly correlated with sediment interstitial space volume (Li et al., 2020), particle sizes (Highton et al., 2016), and interstitial water content (Hollister et al., 2010; Zhang et al., 2021). These findings suggest that sediment interstitial space features may have effects on sediment microbial community structures. However, since the existing correlations are insufficient to confirm the bioeffects of the above interstitial space features, further study is required. In a stable sedimentary environment, sediment depth is typically the main direction of the environmental gradient. Environmental factors vary consistently along the depth direction, for example, TOC (Zan et al., 2011), dissolved oxygen (D'Hondt et al., 2015; Han et al., 2016), temperature (Biddle et al., 2011), pH (Nielsen et al., 2010), etc. Meanwhile, these environmental factors can significantly affect sediment microbial communities (Fierer and Jackson, 2006; Gilbert et al., 2012). As a result, it is common to find that sediment microbial communities vary consistently with sediment depth (Hiraoka et al., 2019). Since the sediment interstitial space features may also vary consistently along the sediment depth direction, it becomes harder to distinguish whether the correlations between interstitial space features and sediment microbial communities result from the bioeffects of the interstitial space features. Therefore, the objectives of our study were (1) to determine the correlations of sediment interstitial space features (including total interstitial space volume, interstitial volumetric water content, interstitial gas space content, interstitial particle sizes, and evenness) with microbial community structure and (2) to determine whether any discerned correlations reflect significant effects on microbial community structures. For this purpose, we analysed the abovementioned physicochemical parameters of sediments and calculated their correlations with the relative abundance of individual OTUs. Through comparison with sediment depth, the effects of these parameters on microbial community structures in sediments were assessed. The results stress the importance of analysing the interconnections of environmental factors with sediment depth when investigating their relationships with sediment microbial community structures.

Materials and methods
Environmental background and sediment sampling work
Chaohu Lake (560-825 km², 16°C on average, 31°25′N-31°43′N, 117°17′E-117°50′E) is the fifth-largest shallow (2.8 m deep on average) freshwater lake in the Chang Jiang basin, China. It has 35 tributaries, of which the Hangbu River (length: 145 kilometres; watershed area: 3064 km²) is the largest, discharging 3.06 billion m³ a year. Water quality assessments (Grade V under the China Surface Water Quality Standards, GB3838-2002) show that Chaohu Lake is heavily eutrophic, especially in the western area (Shang and Shang, 2007). Compared with the western and northern surrounding areas, where more than 6 million people live with developed urban industries, the watershed of the Hangbu River consists of agricultural fields. The water quality of the Hangbu River (Grade II) is also much less impacted than that of the western lake (see Figure 1). In this study, an in situ sampler equipped with a heavy hammer was used to obtain sediment samples at four sampling sites along a transect from the Hangbu estuary to the western part of central Chaohu Lake on June 2, 2019.
To improve the representativeness of this study, the four sampling sites were located in three kinds of sedimentation areas (river-dominated, transition, and lake-dominated). The sedimentation areas were delineated according to changes in hydraulic conditions (velocity, flow direction) and particle properties; a detailed description and discussion can be found in our previous study (Lu et al., 2021). As a result, the intervals between sampling sites 2, 3, and 4 were all approximately 600 metres. Sampling site 5 was in the centre of the western part of Chaohu Lake; the distance between site 4 and site 5 was approximately 8.5 kilometres. The water depths were 2.25 m, 2.55 m, 1.30 m and 3.00 m at sampling sites 2-5, respectively. Four sampling tubes with sediment were obtained with an in situ sampler equipped with a heavy hammer (the structure of the sampler can be seen in the Supplementary files of our previous studies (Lu et al., 2021)). During the short transportation time (less than 2 h), ice bags and an insulated cabinet were used to keep the sediments in a stable state. The depths of the four sediment columns varied from 45 cm to 60 cm. After they were transported to the laboratory, each sampling tube was divided vertically, and the sediment columns were immediately separated into 5 cm sections (5 cm was chosen as the layer thickness because it smoothly captured the changes in microbial community structures and met the limitations of the measuring device (TR-6D)). During the dissection process, physicochemical parameters such as volumetric water content and the other parameters described below were measured at each sediment layer. The sediment sample from each 5 cm layer was placed into a sterile plastic bag and squeezed evenly before being divided into five small plastic bags for testing different parameters. The divided sediment samples that were used for sequencing were stored in a refrigerator (−80°C). In addition, at each sampling location the top 9 layers (0-45 cm) were selected for further analysis, giving 36 sediment samples in total. More details of the sampling protocol can be found in a previous study (Lu et al., 2021).

Data collection of sediment interstitial space features
Six environmental factors, including sediment depth (cm), volumetric water content (%), total interstitial space percent (%), gas space percent (%), average particle size (μm), and particle evenness, were measured at each of the 5 cm increments. The volumetric water content (Moi_v) was measured using a soil water content meter (TR-6D, Shunkeda, Beijing, China). Physically, it is defined as Moi_v = V_w / V_T, where V_w is the volume of the pore water, V_s is the volume of the solid particles, V_a is the volume of the gas space, and V_T = V_w + V_s + V_a is the total volume of the sediment layer sample. The percentage of layered gas space volume (VP_a) was calculated following Lu et al. (2021) from the pore water density (ρ_w), measured gravimetrically; the sediment mass water content (Moi_m), measured by the drying method described in Lu et al. (2021); and the density of the mixed sediment (ρ_w&s), measured by the submerged method (Lu et al., 2021). The data for each layered sediment sample consisted of the average value of five replicates. The total interstitial space volume percentage (TIS) was the sum of the gas space volume percentage and the volumetric water content: TIS = Moi_v + VP_a (a numerical sketch of this bookkeeping follows below).
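As an illustrative numerical aside, the bookkeeping among these quantities might look as follows. The conversion from mass to volumetric water content (Moi_v = Moi_m × ρ_w&s / ρ_w) is a standard identity that we assume here; the gas space fraction is simply given a hypothetical value rather than computed with the (unreproduced) formula of Lu et al. (2021), and all numbers are invented.

```python
# Minimal sketch of the interstitial-space bookkeeping, with invented values.
# Assumption (standard identity, not quoted from the paper):
#   Moi_v = Moi_m * rho_ws / rho_w
rho_w = 1.00    # pore water density (g/cm^3), measured gravimetrically
rho_ws = 1.45   # density of the mixed (wet) sediment (g/cm^3), submerged method
moi_m = 0.45    # sediment mass water content (g water per g wet sediment)

moi_v = moi_m * rho_ws / rho_w   # volumetric water content, as a fraction
vp_a = 0.10                      # gas space fraction; hypothetical, not computed
tis = moi_v + vp_a               # total interstitial space = water + gas fractions

print(f"Moi_v = {moi_v:.1%}, VP_a = {vp_a:.1%}, TIS = {tis:.1%}")
```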
[Figure 1: The locations of the sampling sites. Sampling site 1 was upstream of site 2; the sediments there were nearly all composed of sand and could not be obtained by our sampling devices.]

The average particle sizes were measured by a laser particle sizer (LS13320, Beckman Coulter, Brea, CA, United States). Particle evenness was expressed as the coefficient of variation (Cv). Similarly, each layered sample had five replicates.

DNA extraction and high-throughput sequencing
The homogenized sediment samples that had been stored in the refrigerator (−80°C) were freeze-dried. Then, three subsamples of 250 mg freeze-dried sediment from each layer were weighed for DNA extraction, using a PowerSoil DNA Isolation Kit (QIAGEN, Carlsbad, United States; previously from MoBio Laboratories Inc.). The absorbance ratios of the DNA subsamples at OD260/280 and OD260/230 were required to fall within the ranges 1.7-1.9 and 2.0-2.5, respectively; DNA subsamples that did not satisfy these conditions were extracted again. After extraction, the three qualified DNA subsamples of each layer were mixed and used for PCR amplification. Here, a pair of universal primers, 515F (5'-GTGYCAGCMGCCGCGGTAA-3') and 926R (5'-CCGYCAATTYMTTTRAGTTT-3'), were used to amplify the V4-V5 variable region of the 16S rRNA gene to detect bacteria and archaea (Alma et al., 2015). In addition, extraction kit elution buffer was used as a negative control. The PCR cycling procedure was set as follows: 5 min of initial denaturation at 95°C, followed by 25 cycles of 95°C for 30 s, 50°C for 45 s, and 68°C for 90 s, and a final extension at 68°C for 10 min. The amplified DNA products were then purified by using the OMEGA DNA purification kit (Omega Bio-Tek Inc., Doraville, GA, USA) and further purified by electrophoresis in agarose gels before using the Monarch® DNA Gel Extraction Kit (New England Biolabs, USA) for gel extraction. Finally, the PCR products were sent for high-throughput sequencing on an Illumina HiSeq2500 platform (2 × 250 paired ends, Illumina, San Diego, USA) at the Biomarker Technologies Corporation, Beijing, China. The number of layered sediment samples was 36 in total.

During quality control of the original sequencing data (2,880,313 reads in total), raw sequence data (2,772,268 tags) were merged with FLASH v1.2.7 (Magoc and Salzberg, 2011), which removed sequences whose length was less than 250 bp. Then, Trimmomatic v0.33 (Bolger et al., 2014) was used to filter the low-quality sequence data (window: 50 bp; quality: less than 20) and obtain clean tags (2,727,994). UCHIME v4.2 (Edgar et al., 2011) was used to overlap the PE reads and filter them to obtain high-quality sequences (2,680,295; chimaera threshold: 80% similarity). The OTUs (operational taxonomic units) were clustered at 97% similarity by using USEARCH (Edgar, 2013), and those whose relative abundance was less than 0.005% were filtered out (residual tags: 1,813,882). Based on the Silva database (release 128, http://www.arb-silva.de), the residual OTUs (2,219 in total) were annotated. Further, to avoid the effects of non-microbiota on subsequent analyses, chloroplast and mitochondrial sequences were removed with QIIME (Zhang et al., 2019). Finally, the residual OTUs (2,210 in total) were standardized and resampled by using the rrarefy function of the vegan package (v2.6.2) in the R language (a Python stand-in for this step is sketched below).
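As an aside, a minimal Python stand-in for that final rrarefy resampling step (the study used R's vegan package) could look like the following; the count table is hypothetical.

```python
import numpy as np

def rarefy_counts(counts: np.ndarray, depth: int, seed: int = 0) -> np.ndarray:
    """Subsample each sample's OTU counts to a common depth without replacement,
    analogous in spirit to vegan::rrarefy (samples in rows, OTUs in columns)."""
    rng = np.random.default_rng(seed)
    out = np.zeros_like(counts)
    for i, row in enumerate(counts):
        # Expand the row into a pool of individual reads labelled by OTU index,
        # then draw `depth` reads without replacement.
        pool = np.repeat(np.arange(row.size), row)
        picked = rng.choice(pool, size=depth, replace=False)
        out[i] = np.bincount(picked, minlength=row.size)
    return out

# Hypothetical 3-sample x 4-OTU count table, rarefied to the smallest library size.
table = np.array([[100, 30, 0, 50], [60, 80, 20, 40], [10, 90, 40, 55]])
rarefied = rarefy_counts(table, depth=table.sum(axis=1).min())
print(rarefied)
print(rarefied.sum(axis=1))  # every sample now has the same total count
```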
In addition, detailed information on sequence quality control and a summary of the taxonomic annotation at each layer of the different sampling sites can be found in Supplementary Table S1. The detailed OTU table can be found in an Excel file (Supplementary files).

Statistical analysis
At each layer of the four sampling sites, the relative abundance of each OTU was calculated (RA_OTU). Then, Spearman correlation analysis was applied between the above six environmental factors (sediment depth, volumetric water content, total interstitial space, gas space, particle size, and particle evenness) and the relative abundance of OTUs at each sampling site. According to the results, those OTUs whose significance (P_OTU) was larger than 0.05 were removed. Since this analysis aimed to determine which OTUs exhibited correlations with environmental factors, positive and negative correlations were counted together; thus, the correlation coefficients (R_OTU) were converted into absolute values and ranked from largest to smallest. Furthermore, for each environmental factor, to see how the correlated OTUs varied with the correlation coefficients, scatter plots of correlation coefficients versus significance (X1 vs. YP) and versus the relative abundance of correlated OTUs (X1 vs. YRA) were calculated and plotted (Figure 2). The process is shown in Figure 3. Additionally, to examine the significance of the p-values in Figure 4, the FDR (false discovery rate) was set below 5% and calculated with the Benjamini-Hochberg procedure (an illustrative sketch of this screening appears below). PCoA analysis was conducted in the R language (package: vegan 2.6.2; function: cmdscale; distance: unweighted UniFrac). PERMANOVA analysis was performed in the R language (package: vegan 2.6.2; function: adonis; distance: unweighted UniFrac; permutations: 999) and plotted in Python. RDA analysis was conducted and plotted in the R language (package: vegan 2.6.2; settings: rda(scale = FALSE), using permute to calculate significance).

Distributions of the sediment interstitial space features
The measurement data are briefly summarized in Table 1 (more details can be found in Supplementary Table S1). [Table 1 notes: TIS, total interstitial space percent; VWC, volumetric water content; G-space, gas space percent; P-size, average particle size; P-evenness, particle evenness. The notation "--" refers to missing data (during the measurement, the sediment layer was too dry and was broken by the instrument probes). In addition, no continuous gas space data were observed in the lake area (sampling site 5).] From the Hangbu estuary to the western Chaohu Lake centre, the values of the total interstitial space percentage ranged from 56.1 to 88.2%. Except at sampling site 3, the percentage values decreased with depth. In addition, compared to the estuary area, interstitial space volumes were much larger in the central lake. Volumetric water content values ranged from 49.3 to 88.2% and were distributed similarly to the interstitial space percentage. Gas space existed continuously only in the Hangbu estuary and ranged from 0 to 17.93%; its distribution bore no relationship to sediment depth. The average particle sizes at each sampling site ranged from 7.32 μm to 338.6 μm and decreased sharply from the Hangbu estuary to the lake centre. The vertical fluctuations in the particle size distributions at the four sampling sites were much larger the closer the sampling site was to the upstream reaches of the Hangbu estuary. In addition, the particle evenness at each sampling site ranged from 0.76 to 1.64, and its values in the central lake were much larger than those in the estuary area. Furthermore, PCoA and PERMANOVA analyses were applied to investigate the dissimilarities of the microbial community at each sampling site; the results are plotted in Figure 6.
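Returning briefly to the Statistical analysis workflow: as an illustrative aside, the per-OTU screening described above might be sketched as follows in Python (the study used R). The community matrix is simulated, and the thresholds mirror those stated above.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_layers, n_otus = 9, 200                     # nine 5-cm layers; OTU count invented
depth = np.arange(1, n_layers + 1) * 5.0      # sediment depth (cm)
rel_abund = rng.dirichlet(np.ones(n_otus), size=n_layers)  # rows sum to 1

# Spearman correlation of each OTU's relative abundance with depth.
rho = np.empty(n_otus)
pvals = np.empty(n_otus)
for j in range(n_otus):
    rho[j], pvals[j] = spearmanr(depth, rel_abund[:, j])

# Benjamini-Hochberg FDR control at 5%, as stated above.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

# Keep significant OTUs; rank by |rho|, ignoring the sign of the correlation.
keep = np.flatnonzero(reject)
ranked = keep[np.argsort(-np.abs(rho[keep]))]
summed_ra = rel_abund[:, keep].sum(axis=1).mean() if keep.size else 0.0
print(f"{keep.size} correlated OTUs; mean summed relative abundance {summed_ra:.1%}")
print("top-|rho| OTU indices:", ranked[:5])
```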
These results indicated that the similarity of the microbial community across layers was highest at sampling site 3 and lowest at sampling site 5. In the PCoA results, the ordination distances between different sediment layers followed the order Site 5 > Site 4 > Site 2 > Site 3. In the PERMANOVA results, the similarities of microbial community structures among the different sediment layers at the four sites (Site 3 > Site 2 > Site 4 > Site 5) were in accordance with the methane bubble (gas space) emission stages (emission stage: Site 3 > Site 2 > Site 4 > Site 5).

Correlations among sediment interstitial space features and the microbial community
Based on the results of the Spearman correlation analysis between individual OTUs and the six environmental parameters, the correlated OTUs of each environmental factor were identified.

[Figure 5: The top ten phyla at each sediment layer.]

To compare the results of our method with those of a classical method, RDA analysis was conducted and the results were plotted in Figure 7. As shown, the variance explained by the model was only 38.30% in total (25.53% after adjustment). Axes RDA1 and RDA2 accounted for 68.14% and 12.31% of the total explained variance, respectively. Detailed results for each environmental factor are given in Table 2. From Figure 7 and Table 2, the explanatory power of the six environmental factors can be ranked as: sediment depth (significant) > volumetric water content (significant) > total interstitial space (significant) > particle evenness (significant) > gas space (significant) > average particle size (insignificant). Thus, setting aside the differing results for the particle features, the results of the RDA analysis were to some extent similar to those of our correlation analysis based on individual OTUs. However, both the RDA analysis and our correlation analysis are purely statistical results. Given the possible effects of collinearity, further discussion is needed to investigate whether these environmental factors have effects on the microbial community within a layer (or within a site column).

[Figure 4: The dissimilarities of the microbial community at each sampling site.]

Analysis of sediment depth and gas space with their correlated OTUs
In our study, sediment depth had the highest relative abundance of correlated OTUs compared with the other environmental factors at all sampling sites (except sampling site 3; Figures 4, 7). In other words, among these six environmental factors, sediment depth was the closest to the main direction of the sediment environmental gradient. In addition, in Chaohu Lake sediment, other studies have also shown that the vertical distributions of many environmental factors are consistent with sediment depth, for example, TP (total phosphorus content), TOC (total organic carbon content) (Zan et al., 2011), TN, and Pb (Chen et al., 2013). Since sediment depth is just a spatial direction with no direct bioeffects on microbes, it can be used as a representative of the main environmental gradient to analyse its relationships with the other five environmental factors and with microbial community structures. However, the correlated OTUs of sediment depth varied markedly among the sampling sites (21%-62%); there were probably unknown factors that prevented sediment depth from being the main environmental gradient at every site. As shown in Figure 4, the relative abundance of OTUs correlated with sediment depth decreased sharply from site 5 to site 4, site 2, and site 3.
This decreasing order was in accordance with the increasing order of the methane bubble emission stages (Site 2: emission stage II; Site 3: emission stage II to III; Site 4: emission stage I; Site 5: no emission) that we studied before (Lu et al., 2021). Given that gas space formation (methane emission; Chen and Slater, 2016; Lu et al., 2021) can destroy sediment structures and promote excess pore water exchange between different layers (Lu et al., 2021), the microbes in the pore water could probably be carried away and moved to other layers. As a result, as the destructive effects of gas space become stronger, the community similarities between different layers would become larger. Five pieces of evidence support this assumption. First, our previous study (Lu et al., 2021) concluded that the gas space formed by excess methane emission can change the sediment interstitial space and pore water exchange (by up to 17%), so it could physically move the microbes in the pore space. Second, as shown in Figure 5, with the increase in methane emission stage, the relative abundances of the main phyla became more even along the sediment depth direction (for example, Proteobacteria, Chloroflexi, Acidobacteria, Bacteroidetes, Verrucomicrobia, Planctomycetes, etc.). Third, the distributions of the phyla with larger cell sizes were restricted, but the phyla with smaller cell sizes became more evenly distributed as the methane emission stage increased; this phenomenon is in accordance with the scheme in Figure 8. For example, Cyanobacteria live in the underlying water, and their cell sizes are usually larger than those of other bacteria (smallest cyanobacteria: picocyanobacteria (Jasser and Callieri, 2017), 0.70 ± 0.46 μm³ on average (the cell size of each class multiplied by the frequency of that class) (Albertano et al., 1997)). Their distributions were limited to the sediment layers where the total interstitial space was large (Figure 5; Table 1). Moreover, the cell size of Proteobacteria is usually smaller (0.03 μm³ on average in the Delaware estuary (Cottrell and Kirchman, 2004)) than that of other bacteria (Krieg et al., 2010). Their relative abundance became more even (from site 2 to 5, standard deviation: 4.9, 3.0, 6.6, and 8.8%) with increasing emission stage, and none of the relative abundances correlated with the total interstitial space (site 3, Figure 4). Thus, if gas space variations help mix the microbial communities of different layers, the transport of the larger-celled bacteria (Cyanobacteria) and of the smaller-celled bacteria would satisfy the above descriptions. Fourth, Figure 4 shows that the phyla of the correlated OTUs varied among the different sampling sites. They are not methanogens or methanotrophs, and they have motility or filiform structures to resist pore water exchange, for example, Bacteroidetes (Prolixibacteraceae (Watanabe et al., 2020)), Actinobacteria (Fernández-Gómez et al., 2013; Anandan et al., 2016), and Chloroflexi (Anaerolineae (Yamada et al., 2006)).
Fifth, the decrease in the similarities (the kinds and contributions of the phyla correlated with sediment depth and with volumetric water content) was in accordance with the changes in the methane emission stage (more details are discussed in Section 4.3). Overall, gas space may not be an effective factor for accounting for the community structure within a layer, but it can affect the similarity of microbial community structures among the different layers of a sampling site.

Analysis of particle features (size and evenness) with their correlated OTUs
Both the RDA analysis and the correlation analysis showed that particle features were not the main factors affecting sediment microbial community structures at any sampling site. However, to investigate why the correlated OTUs of the particle features varied markedly among the sampling sites, comparisons between the particle features and the main environmental gradient (depth) were made. (Sediment depth was not the main environmental gradient at site 3, so the comparison at site 3 was excluded.)

[Figure: The variations of the correlations among interstitial space features and the relative abundance of correlated OTUs.]

Compared with sediment depth, particle size had similar correlated phyla with similar abundance components (Figure 4; sediment depth: Proteobacteria (23.28%), Acidobacteria (17.79%), Chloroflexi (14.02%), and Planctomycetes (6.55%); particle size: Proteobacteria (28.29%), Acidobacteria (18.35%), Chloroflexi (8.02%), and Planctomycetes (7.05%)). Moreover, from site 2 to site 5 (site 3 removed), the Spearman correlation coefficients between sediment depth and particle size were 0.33, −0.89, and 0.07, respectively. These coefficient values were consistent with the relative abundances of the correlated OTUs of particle size, which were 0.9, 34.2, and 0.9% at sites 2, 4, and 5, respectively. As the distribution of particle size approached the sediment depth direction, the relative abundance of the correlated OTUs of particle size increased. These results indicated that the correlations between particle size and its correlated OTUs were probably pseudo-correlations; at the least, particle size was not the main environmental factor shaping sediment microbial community structure in Chaohu Lake. For particle evenness, from sampling sites 2 to 5, the relative abundances of its correlated OTUs were 1.5, 15.3, and 1.5%, respectively. These were also consistent with the Spearman correlation coefficients between particle evenness and sediment depth, which were 0.61, −0.75, and 0.42 at the respective sampling sites. Since the correlated OTUs were abundant only at sampling site 4, the question was also raised of whether the correlations between particle evenness and its correlated OTUs were genuine. According to Figure 4, except at sampling site 4, the components of the correlated phyla of particle evenness differed from each other, both in the kinds and in the contributions of the phyla. Given that the relative abundances of the correlated OTUs of particle evenness were at an exceptionally low level at sites 2 and 5, the above analysis supports the premise that particle evenness was also not the main environmental factor for the sediment microbial communities in Chaohu Lake.

[Figure 7: Results of the RDA analysis.]
Analysis of volumetric water content and total interstitial space with their correlated OTUs
Both volumetric water content and total interstitial space had large numbers of correlated OTUs (Figure 4). The RDA analysis in Figure 7 also suggested that they may have effects on microbial community structure (judging by vector length). However, given the similarities in Figure 4 between these two factors and sediment depth in both the abundance and the kinds of correlated phyla, further examination of the two factors against sediment depth was required. (Sediment depth was not the main environmental gradient at site 3, so the comparison at site 3 was excluded.)

[Figure: The correlated OTUs of the six environmental factors (Spearman correlation coefficients larger than 0.8).]

According to Figure 4, the correlated phyla of volumetric water content and their abundances were similar to the phyla correlated with sediment depth. In addition, in Figures 4 and 6, their similarities (kinds and contributions of correlated phyla) obviously decreased from sampling site 5 to site 4 and site 2. This tendency was in accordance with the methane emission characteristics mentioned above. Thus, the correlations between volumetric water content and its abundant correlated OTUs probably resulted from the similar distributions of volumetric water content and sediment depth; meanwhile, this similar distribution may be significantly affected by excess methane emission. Three points support these assumptions. First, the correlated OTUs of volumetric water content were more abundant at the sampling sites where volumetric water content was correlated with sediment depth. From sampling sites 2 to 5, the Spearman correlation coefficients between volumetric water content and sediment depth were −0.93, −0.91, and −0.90, respectively, while the relative abundances of the correlated OTUs of volumetric water content were 37.8, 39.8, and 52.2% at these sampling sites. For sediment depth, the relative abundances of its correlated OTUs were 41.4, 54.8, and 62.2% at sites 2, 4, and 5, respectively. Second, the correlated OTUs of volumetric water content were far fewer than the correlated OTUs of sediment depth at all sampling sites. Third, at sampling site 3, where volumetric water content was uncorrelated with sediment depth, it had almost no correlated OTUs. This implies that the effects of volumetric water content on microbial community structure were far weaker than those of the main environmental gradient. Similar to volumetric water content, total interstitial space had kinds of correlated phyla similar to those of sediment depth (Figure 4; main phyla: Proteobacteria, Acidobacteria, Chloroflexi, Bacteroidetes, Planctomycetes, etc.). From sampling sites 2 to 5 (site 3 removed), the Spearman correlation coefficients between total interstitial space and sediment depth were −0.90, −0.90, and −0.89, respectively. The correlated OTUs of total interstitial space were likewise abundant only at the sites where total interstitial space was correlated with sediment depth. Moreover, the correlated OTUs of total interstitial space were far fewer than the correlated OTUs of sediment depth at all sites, and total interstitial space had nearly no correlated OTUs at site 3. Thus, it was probably not the main factor for microbial community structure.
Overall, for volumetric water content and total interstitial space, their correlations with some OTUs at sampling sites 2, 4, and 5 were probably pseudo-correlations arising from distributions similar to the main environmental gradient.

[Figure 8: The theoretical processes of how methane bubble variations affect sediment microbial communities along the depth direction.]

Conclusion
Overall, among these six parameters, sediment depth was the one closest to the main environmental gradient for sediment microbial community structure. On this basis, this study discussed whether the five interstitial space parameters were effective in affecting microbial community structure in Chaohu Lake sediments. The conclusions were that gas space caused by excess methane emission can promote the mixing of microbial communities across different layers; however, all five environmental factors, including gas space, were ineffective in affecting the microbial community structure within a sediment layer.

Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material.
A Temporal Assessment of Burglary at Residential Premises in the Newlands East Policing Precinct

Knowing when crimes occur most predominantly in a specific area, such as which hour/s of the day, which day/s of the week, or which month/s of the year, is fundamental for the formulation of crime prevention strategies. This information facilitates operational as well as tactical resource deployment in areas at the times when it is needed the most. This article aimed to contribute to this knowledge by exploring when most residential burglaries occurred in the Newlands East policing precinct in the Durban area. This was deemed necessary as residential burglary had been reported as the most frequent property-related crime in the study area for the 5-year period (2015-2019) preceding the study. To ensure that the aim of this study was achieved, a qualitative research design was utilized, which allowed the researcher to focus on the temporal assessment of burglary at residential premises in the Newlands East policing precinct. Data collection was achieved by means of focus group discussions and semi-structured interviews that involved a total of 37 participants comprising South African Police Service members, Community Policing Forum representatives, local Ward councillors, and ordinary community members. The overall results suggest that the Newlands area experiences fluctuations in the frequency of residential burglaries throughout the year by hour, day, week, month, and season. This information can inform policymakers and law enforcement agents on when to implement crime prevention strategies.

Introduction
Time is an important risk factor that enhances opportunities for habitual and opportunistic burglars to commit their crime (Ozkan, 2013). Coupe and Blake (2006) argue that opportunities to commit burglaries differ between day and night, as affluent but less guarded properties are targeted during the day and well-guarded properties are targeted at night. Correlations between climate and weather and residential burglaries have also been found (Hird & Ruparel, 2007; Linning, 2015; Ranson, 2014). For example, in hot climates a lack of air conditioning may facilitate night-time burglaries, as windows may be left open (Hamilton-Smith & Kent, 2005). Drawing on data from interviews with active burglars, Coupe and Blake (2006) found that burglars monitored the daily activities of people in neighborhoods before targeting a residence at a certain opportune time. For instance, when mothers take their children to school or leave to pick them up again, they leave the premises unattended for a set period each weekday. Other research studies have found that the average burglary rate decreases on weekends compared to workdays (Breetzke, 2016; Peng et al., 2011), indicating that weekend burglaries were more likely to be suppressed by people's routine activities. Ozkan (2013) further emphasizes that temporal patterns can explain many of the circumstances leading to incidences of burglary. The mode of entry, for example, can be determined by the time of day. Breaking doors and smashing door locks or windows generally occur during the day and are avoided by burglars operating at night; night-time burglary operations thus tend to be stealthier. Moreover, in the summer months burglars may use the opportunity provided by open windows to enter a home (David, 2003; Hamilton-Smith & Kent, 2005). This means that the time of day and the season can be pivotal in a burglar's decision to "hit" a target. Closed and locked windows and doors mean that
the availability of potential targets shrinks dramatically. Burglars also avoid tall buildings, the upper floors of tall buildings, visible first floors, and well-lit areas, where gaining access through open windows on a summer's night would be difficult. Neighborhoods where households with air conditioning systems are prevalent are thus often avoided. Consequently, several studies on residential burglary have proposed that the rate of this crime varies between areas and may be higher at specific times of the day, on specific days of the week, and during certain seasons of the year (Breetzke, 2015; Henry & Bryan, 2000; Linning, 2015).

Moreover, criminal victimization is highly concentrated among prior crime victims. According to research, prior housebreaking victimization best predicts future housebreaking victimization risk (Clark, 2018; Sidebottom, 2012; Yang, 2006). Huigen (2020) asserts that, while most burglaries are a one-time occurrence, many residences are repeatedly targeted. In most cases of repeat housebreaking, the same criminals commit both the first and the follow-up burglary, especially if the time between the two incidents is short (Everson & Pease, 2001; Huigen, 2020). Following a housebreaking, the danger of becoming a victim is temporarily increased not only for the household that was burglarized first, but for surrounding properties as well. This risk diminishes in time and space, spreads up to a few hundred metres, and lasts a month or two (Johnson et al., 2007).

The Current Study
This article aims to offer new insights into the temporal distribution of burglary at residential premises in the Newlands East policing precinct. Based on South African Police Service (SAPS) statistics, residential burglary in the Newlands East policing precinct has remained consistently high since 2015, as indicated in Table 1. The available statistics on property-related crimes in the Newlands East policing precinct indicate that burglary at residential premises remained high over the five-year period (2015-2019), with high peaks in 2017 and 2019 and a slight decline in 2016 and 2018. Although the reasons for this decline were beyond the scope of the study, it is undeniable that the many residential burglaries in the Newlands East policing precinct have impacted this community negatively, as burglary at residential premises not only has financial implications for innocent and often poor residents, but may also harm victims psychologically.

Furthermore, when the nature and extent of burglary at residential premises are considered, it has become an escalating problem nationally and has been categorized as the most feared crime in South Africa by the South African Victims of Crime Survey (Statistics South Africa, 2017/18). This is not surprising, as this crime has continued to be the dominant household crime in South Africa, accounting for 54% of all household crimes in the VOCS (Statistics South Africa, 2019). Africacheck (2019) reports that as many as 220,865 house burglaries were recorded in 2019, an average of 605 houses burgled per day. Against this backdrop, it is imperative to understand when these burglaries occur in order to implement effective crime prevention strategies. Furthermore, knowing when most residential burglaries are likely to occur is vital in the policing and crime prevention sphere, as this knowledge will allow law enforcement to allocate resources strategically. Andresen and Malleson (2013, p.
32) emphasise that "it is in the best interest of policymakers to understand not only which crime prevention methods are most effective, but [also] where and when they are appropriate to apply."

Theoretical Framework
The phenomenon of burglary at residential premises is best understood within the theoretical tenets of the Routine Activities Theory (RAT). The RAT was developed by Cohen and Felson (1979) in the late twentieth century and is one of the most influential perspectives for explaining crime patterns. The authors developed this theory to clarify the rise in property crimes in the United States which occurred in conjunction with a rise in economic prosperity after World War II. Their seminal research, titled Social change and crime rate trends: A routine activity approach, explains how the dispersion of activities away from home and family may result in a rise in opportunities to commit a crime (Cohen & Felson, 1979). After World War II, socioeconomic changes took place, particularly as women began working away from home. Many had to travel far from their residential areas, leaving their homes and possessions unguarded while their children were at school. Furthermore, Cohen and Felson (1979, p. 593) argue that the tenets of the RAT are embedded in "the recurrent and prevalent vocational and leisure activities individuals undertake on a regular day-to-day basis." This theory puts emphasis on where and when people are, what they are doing, and what happens to those individuals because of their situation in time and place (Clarke & Felson, 1993). The theory refers to direct-contact predatory crimes, that is, "predatory violations involving direct physical contact between at least one offender and at least one item or object which that offender attempts to take or damage" (Cohen & Felson, 1979, p. 589). It stipulates that "crime is the result of the convergence in time and in space of motivated offenders, suitable targets, and the absence of capable guardians." The likelihood of the three components coming together changes over time because of the presence of routine activities. These activities could be daily and obligatory (periods within a day when an individual is working or at school), weekly (weekday versus weekend schedules, for example going to church on Sundays), or even yearly (annual school schedules that determine when the youth attend school and when they do not) (Brunsdon et al., 2009). These activities are usually quite stable in time and place and do not alter much. Additional activities (such as social events) are, on the other hand, optional and less time bound. People choose whether to participate in such activities as well as when they will do so (Lebeau & Corcoran, 1990). According to this theory, the possibility of the convergence of the three major components increases during periods when people are engaged in regular routine activities, which may influence the decision of a burglar to commit a crime at a particular location at a specific time.

Methodology
To explore the temporal distribution of burglary at residential premises in the Newlands East policing precinct, the researcher adopted a qualitative research design. Qualitative research seeks to explore, describe, and analyse the meaning of individual lived experiences pertaining to a particular phenomenon in order to determine "how [the participants] perceive it, describe it, feel about it, judge it, remember it, make sense of it, and talk about it with others" (Patton, 2002, p.
104, cited in Marshall & Rossman, 2014). This distinctive feature rendered the qualitative approach eminently suitable for this research study, as it enabled the researcher to explore the perceptions and views of SAPS officials, local Ward councillors, Community Police Forum (CPF) members, as well as community members on the temporal distribution of burglary at residential premises in the Newlands East policing precinct. The qualitative design was also suitable for eliciting the participants' views and perceptions regarding the topic under investigation, as it facilitated a setting in which the selected participants could frankly and comprehensively explain and describe their feelings and experiences. Some of the selected participants were usually the first people to respond to a crime scene and the first to receive reports of crime incidents, while others had been victims or were familiar with victims of residential burglary and had therefore often witnessed the temporal distribution of residential burglary in the study area.

Study Population
The target population is the population to which the researcher would ideally like to generalise his or her results (Welman et al., 2005). The target population of the current study comprised all the SAPS officials at the Newlands East police station, the executive members of the Community Police Forum (CPF) associated with this police station, all the community members in the study area, as well as all the Ward councillors of the area. As this population was far too large to include in the study, sampling was conducted to select appropriate participants. The categories of participants as set out above were selected to gain the most accurate responses regarding the research challenge at hand. This was predicated on the recognition that conducting interviews with all the officials and stakeholders would be difficult in practice due to time, cost, and geographic constraints. As a result, the study's population was limited to a carefully selected sample.

Sample Selection and Sample Size
A sample is a subset of a population's constituents that is used to generate generalizations about the entire population. The ideal sample is one that provides a perfect representation of the population, with all its relevant features (Blaikie, 2003). The study population (Table 2) consisted of members of the SAPS, members of CPFs, Ward councillors, and ordinary community members. The study sample that was selected from these groups finally comprised 37 participants in total. The total number of community participants from the two study areas was 30, comprising 10 CPF executive members and 20 general community members; these participants were engaged in three focus group discussions (FGDs). Five SAPS officers and two councillors, one from each area, were also involved in the study, totalling seven participants who were individually involved in key informant interviews (KIIs).
Sampling Procedure

To select the actual participants for this study, two sampling methods were used, namely purposive and snowball sampling. Purposive sampling was used primarily to select the key informants, namely the Ward councillors, the SAPS officials, and the CPF members known to the researcher, while snowball sampling was used to recruit the community members. The researcher asked the initially identified respondents (Ward councillors, SAPS and CPF members), who had been selected by means of convenience sampling, whether they knew anyone from the community with similar ideas or circumstances who would be interested in participating in the study. Gatekeeper's permission was obtained from the Ward councillors to involve these selected community members in the study.

Data Collection Techniques

Individual or group interviews and the observation of people and study surroundings are the two most common methods of gathering qualitative data in the social sciences. The two main methods of gathering data used in the present study were interviews (KIIs) and FGDs. Individual face-to-face, in-depth interviews were conducted with five SAPS officials and two Ward councillors (seven in total) as key informants, while FGDs were conducted with 30 CPF and community members: one group of 10 CPF members and two groups of 10 community members each. (It is noteworthy, as stated in footnote 6, that some CPF members were regarded as key informants due to their wide knowledge and understanding of issues in the study area.)

Data Analysis

To analyse the data, the thematic analysis method was used. Thematic analysis is a method for identifying, analysing, and reporting patterns (or themes) within data, through which the researcher can organize and describe an extensive data set in detail (Braun & Clarke, 2006). This method comprised six stages as proposed by Braun and Clarke (2006): (a) becoming familiar with the data, (b) generating initial codes, (c) searching for themes, (d) reviewing themes, (e) defining and naming themes, and (f) producing the report. In the first stage, the researcher read and listened to the audio-recorded interviews multiple times to become acquainted with their content and to ensure reliability in decoding the information from the audio recordings of the in-depth interviews and FGDs in the transcripts. Familiarization with the data also enabled the researcher to acquire an initial comprehension of the information. This, in turn, enabled the researcher in the second stage to distinguish the vital components and topics that were applicable to the problem, questions, and objectives of this study. The vital components and topics of information that emerged from the participants' data were examined, including both similar and contradictory elements, according to their relevance to the objectives of the study. During the third stage, the researcher looked for common denominators and differences within and across the material, and these common denominators and differences were formulated as themes. The researcher divided the key information into themes, and those that were not applicable were discarded. Relevant themes were compared with those revealed by the literature review. The researcher could then, in stage four, review the themes to be certain that each was logical and that it would fit among the coded information and the other themes. In the fifth stage, the researcher defined all the themes, which enabled the final stage, the drafting of the report.
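Purely as an illustration of the bookkeeping behind stages (b) and (c) above, the toy Python sketch below tallies hypothetical initial codes across invented transcript excerpts. The keyword-to-code mapping and the excerpts are assumptions made for the example; real thematic analysis is interpretive rather than mechanical keyword matching.

```python
# Illustrative sketch only: tallying hypothetical initial codes into
# candidate themes, assuming transcripts are plain-text strings.
from collections import Counter

codebook = {  # hypothetical keyword -> initial code mapping
    "december": "festive-season peak",
    "holiday": "festive-season peak",
    "work": "weekday routine absence",
    "school": "weekday routine absence",
    "empty": "absent guardian",
    "unoccupied": "absent guardian",
}

def code_transcript(text: str) -> Counter:
    """Count how often each initial code is triggered in one transcript."""
    counts = Counter()
    lowered = text.lower()
    for keyword, code in codebook.items():
        counts[code] += lowered.count(keyword)
    return counts

transcripts = [  # invented excerpts standing in for interview data
    "Burglaries occur more in December, during the holiday season.",
    "People are at work and children are at school, houses stand empty.",
]

totals = Counter()
for t in transcripts:
    totals.update(code_transcript(t))
print(totals.most_common())  # candidate themes ranked by code frequency
```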
Season of the Year

To ascertain the seasonal periodicity of burglary, a particular question in this regard was posed to the participants: In which season are most burglaries committed? Is there a specific reason for more burglaries during a particular season of the year?

The responses were noted to be very similar to those concerning the time of the year. They revealed that the hot summer holiday period in December/January was the time that residents chose to travel and be away from home, and for most participants this was a key factor contributing to residential burglary in the study area. The participants explained that during the hot summer holidays, people were more likely to spend time away from their properties, making this a perfect season for burglaries. This season was also commonly deemed to increase the number of vacated homes and thus to decrease the presence of a "capable guardian." Below are responses that highlight this point:

"I would think residential burglaries tend to increase during the warm seasons. I say this because residents are often away for holiday purposes and properties are often empty and unguarded" (FGD-A: 03:01).

"Generally, during the summer holidays in Newlands people are usually away from their homes as they go to beaches and other places of entertainment and this absence from their houses increases the likelihood of their property being burglarized" (KII-A: 01:01).

"I believe during the summer holidays Newland's people mostly visit the beaches during those times, attend parties. There's just a lot going on at that time as others leave their houses vacant" (FGD-A: 03:03).

Months of the Year

The question was posed: In which month/months is burglary at residential premises most likely to occur? Is there a specific reason for this?
It has been argued that burglary at residential premises, like any other crime, is periodic (David, 2003). That is to say, similar fluctuations in victimization rates occur year after year during the same months and seasons (Lauritsen & White, 2014). This notion was also evident in the responses of most of the participants, who argued that burglaries reached their peak in the month of December. This majority group reasoned that this was due to an increase in people's activities away from their homes during this time. Below are excerpts that exemplify this view:

"December. I believe that during these times most residents are in holiday mode, and families are often away from their homes. This creates an opportunity for the perpetrators" (FGD-A: 03:01).

"December, so towards and during the festive season as there is so much activity going on during that time, especially away from people's homes" (KII-A: 01:03).

"Burglaries occur more in December, so during the December holidays until the 2nd of January. People living in Westrich are targeted more during these times as they leave their houses unoccupied and go to the farm and when they come back, they find that the house has been broken into. That is one of the patterns, especially in Westrich, and mostly it is community people who are involved because they know that those people are not at home" (KII-A: 06:06).

The RAT posits three plausible reasons why residential burglaries are high in the month of December. First, the pleasant weather during the December holiday season causes more individuals to be outside and away from their homes, which raises the risk of burglary at residential premises. Second, the number of people in Newlands who stay at home in December diminishes during the vacation, as many visit their families on farms and in the rural areas while many others go to the numerous beaches in Durban. This results in an increase in the number of vacated properties, which makes residential burglary easier as there are no efficient guardians to deter crime. Third, researchers have indicated that property crimes are often driven by a perceived need for immediate cash, and the houses likely to be targeted are those containing cash and valuables (Cohn & Rotton, 2003). These elements predominate in the hot summer month of December in South Africa, when people receive bonuses, buy assets such as jewellery, televisions, smart phones, and computers, and are less likely to stay at home. Mbonambi (2018) indicates that homeowners at times advertise their belongings by leaving empty boxes outside their houses for trash collection, which alerts criminals. Moreover, the South African Victims of Crime surveys revealed that the items most often stolen during residential burglaries are television sets, clothing or linen, computer equipment, mobile phones and accessories, tools, small electrical appliances (e.g., toasters, kettles, and microwave ovens), jewellery, and money, among others. The considerable number of second-hand merchandise traders in South Africa creates opportunities for offenders to commit property-related crimes, as offenders know they will have a market in which to trade these goods with impunity (Statistics South Africa, 2016). These findings also support Fitzgerald and Poynton's (2011) argument that housebreakers focus on possessions that are of high value and that can be easily disposed of for cash.
Days of the Week

The question was posed: On which days of the week are most burglaries committed? Weekdays or weekends? Is there a specific reason for this?

Individuals and families have different lifestyles on weekends than they have during the week; it is therefore possible that weekend crime patterns will differ from those during the week. Responses to the above question revealed that the participants shared a similar understanding of burglary patterns associated with weekday-weekend frequency. The participants seemed knowledgeable and experienced, as they all perceived that residential burglaries occurred mostly during the week. They also demonstrated a clear understanding of why this pattern was prevalent. Some responses that exemplify this view are the following:

"It is during the week, from Monday to Friday. In most cases you would find that people are home during the weekend and no housebreaker wants to confront home dwellers, so they use the chance when nobody is home" (KII-A: 01:03).

"It occurs more during the week, as people are mostly away from home during the week with children at school, parents at work. Therefore, there is no one at home to prevent the crime from occurring" (FGD-A: 03:10).

"Mostly from Monday to Friday" (KII-A: 01:01).

The above comments highlight the point that the obligatory activities of people during the week provide opportunities for residential burglary. Individuals leave their homes unattended when they go to work and the children attend school, while on the weekend individuals are more likely to be at home. It is thus less likely that opportunities for burglary are presented during weekends. In general, all the above responses suggest that weekend burglaries are likely to be suppressed by people's routine activities. This finding is corroborated by Breetzke (2016), who conducted a similar study in the City of Tshwane, and Peng et al. (2011), who conducted their study in China. These authors agree that burglaries are more likely to occur during the week than on the weekend.

Time of the Day

The question was posed: At what time of the day do most burglaries occur? During the day or during the night? Is there a specific reason for this?
According to Ozkan (2013), the time of day can be a risk factor for residential burglary. Coupe and Blake (2006) argue that burglary opportunities differ during daylight and night hours, while Mpofu (2019) explains that the logic behind this is that housebreakers are rational in their approach toward targeting their potential victim/s: they will cautiously predetermine a target associated with minimal risk rather than behave impulsively. Thus, because residential burglary is a passive crime, the offender will always choose a time and place that limit the possibility of encountering the targeted victims. When the study participants were asked at what time of day burglaries were most likely to occur, they all shared the view that burglaries occurred predominantly during the day. The reason they offered for this observation was that families were engaged in daily activities during daylight hours, such as going to work and attending school. This view is supported by both the RAT and the rational choice theory (RCT). For instance, the RAT proposes that the absence of a capable guardian such as a homeowner or housewife during the day is an enabling factor for residential burglary, while the RCT posits that criminals rationalize that they will not get caught during the day as they will be unobserved when no one is home. The following are examples of the comments that expressed the participants' view regarding this question:

"During the day, I think the perpetrators maximize more on the time when residents are away for work or school purposes" (FGD-A: 03:01).

"During the day, because nobody is at home during the day" (FGD-A: 06:03).

"It happens mostly during the day. I think the perpetrators know that individuals are at work, and they target those houses" (KII-A: 01:03).

"In my Ward, I have noticed that these criminals prefer to break into people's homes during the day because they know that many of the residences here are empty. They are so deliberate in their movements because they do not want to encounter the people" (KII-A: 06:06).

"Mostly it's during the day when people are out going to work; that's when they get the opportunity to get into people's houses because in the afternoon or at night people are back from work so that deters them" (KII-A: 01:05).
The above responses revealed that housebreakers in the study area avoided interaction with residents, a view that is corroborated by David (2003). Before targeting a property to commit a burglary, the perpetrator/s therefore clearly weigh their options. In this case, they become acquainted with the residents' daily routines and strike when the victims leave their property, probably knowing when they will return. As a result, knowing the victims' movements makes it simple for them to commit a burglary (and even multiple burglaries) during the day. Clearly, the participants confirmed earlier findings that burglary at residential premises occurs at times that are most convenient for perpetrators and that housebreakers consider their options carefully before "hitting" a target. When all the above data are evaluated, a clear finding is that homes in the study area are more likely to be targeted by burglars during the day than during the evening or night, although burglaries in the latter hours are also likely to occur. This finding is in line with the definition of burglary at residential premises as an event that generally happens "with no contact between the victim/s and the perpetrator/s" (Africacheck, 2017). It should be noted that this finding conforms with international studies that argue that residential burglaries occur mostly during the day. Conversely, most South African studies have found that residential burglaries occur during the afternoon and early hours of the morning (Breetzke, 2015; Zinn, 2008). Furthermore, most Victims of Crime surveys (Statistics South Africa, 2011, 2014, 2016) reveal that residential burglaries tend to be committed at night. These contradictory findings confirm the importance of regularly studying residential burglary temporal patterns in a particular setting to ensure vigilance and control. As it is almost certain that studies in other South African cities will produce different results, given that each city and community is unique and is affected by different environmental factors that shape the magnitude and nature of crime, continued surveys and studies on burglary rates are essential; both the frequencies and the nature of burglaries need to be monitored to ensure an effective response to this crime.

Conclusion and Recommendations

This article has explored the temporal aspects of burglary at residential premises in the Newlands East policing precinct in the Durban area. The overall results suggest that the Newlands area experiences fluctuations in the frequency of residential burglaries throughout the year by hour, day, week, month, and season. Underpinning the data and findings with the RAT has proven to be quite useful in better understanding these temporal patterns. One pivotal finding is that seasonal variations in crime differ among cities and/or nations simply because people's routine activities differ. These differences may be attributed to many factors that include, but are not limited to, major holidays, weather trends, and schooling schedules (Breetzke & Cohn, 2012; Carbone-Lopez & Lauritsen, 2013; McDowall et al., 2012). The literature suggests that there are no general temporal variations for residential burglaries but that every city is unique and has different attributes that can shape as well as influence the movement and behavior
The Impact of Online Reviews on Consumers' Purchase Intentions: Examining the Social Influence of Online Reviews, Group Similarity, and Self-Construal

Consumers often rely on evaluations such as online reviews shared by other consumers when making purchasing decisions. Online reviews have emerged as a crucial marketing tool that offers a distinct advantage over traditional methods by fostering trust among consumers. Previous studies have identified group similarity between consumers and reviewers as a key variable with a potential impact on consumer responses and purchase intention. However, the results remain inconclusive. In this study, we identify self-construal and group similarity as key factors in the influence of online review ratings on consumers' purchase intentions. We further investigate the role of consumers' self-construal in shaping consumers' perceptions of online reviews in terms of belongingness and diagnosticity. To test the hypotheses, we conducted a 2 (online review rating) × 2 (group similarity) × 2 (self-construal) ANOVA on 276 subjects recruited through Amazon Mechanical Turk (MTurk), and contrast analysis and PROCESS macro model 12 were used for the interaction effect analysis and moderated mediation analysis. Our findings reveal that consumers with an interdependent self-construal are sensitive to both review ratings and group similarity with regard to their purchase intentions. They demonstrate a positive purchase intention when both group similarity and online review ratings are high. However, their purchase intention is not influenced by review ratings when group similarity is low. Conversely, consumers with an independent self-construal exhibit a more positive purchase intention when the online review rating is high, irrespective of group similarity. Additionally, our study highlights the mediating roles of perceived diagnosticity and belongingness in the relationship between online review ratings, group similarity, self-construal, and purchase intentions. Results show significant indirect effects for perceived diagnosticity and belongingness, meaning that the impact of online review ratings on purchase intention is mediated by these two variables. The outcomes of our research offer theoretical and practical implications concerning online reviews and suggest new avenues for future research in the area of online consumer behavior.

Introduction

In today's digital age, consumers no longer rely solely on traditional advertising for information on product purchases [1]. To make decisions, consumers actively seek information that incorporates evaluations and experiences from their fellow consumers, notably through online reviews. Studies indicate that more than half of consumers consider online reviews to be a critical source of information when deciding on a purchase [2]. This behavioral shift can be attributed to the fact that most consumers place greater trust in recommendations from their peers than in traditional advertising [3]. Consequently, online reviews have become critical communication channels for companies [4,5].
Previous studies have emphasized the importance of considering reviewer characteristics when determining the impact of online reviews on consumers' purchasing decisions [6,7]. Among these characteristics, the perceived similarity between the review group and the consumer stands out as a crucial factor [8,9]. Generally, a higher perceived similarity leads to greater trust in and conformity to the reviews [9,10]. This is because perceived similarity to a specific group can significantly influence consumer perceptions, attitudes, and behaviors [11]. However, conflicting findings exist, with some studies suggesting that similarity to the online review group may not affect consumers' purchase intentions [12] or may even have a negative impact [13]. These findings demonstrate that the impact of perceived similarity varies depending on consumers' needs and interests, which suggests that consumer characteristics must be taken into account in order to examine the influence of perceived similarity in online reviews effectively. Self-construal is a variable that influences consumers' perceptions of their relationships with others and the extent to which groups shape consumers' attitudes and behaviors [14-16]. In other words, self-construal must be considered when investigating the impact of group similarity in online reviews. However, previous studies have shown limited interest in this aspect. By presenting self-construal as a key variable, we aim to derive more meaningful results and implications.

Given the inconclusive results of previous research, the influence of group similarity in online reviews remains unclear. Therefore, for companies to successfully implement marketing strategies utilizing online reviews, it is crucial to examine when and under what circumstances group similarity between consumers and the online review group exerts an influence. In this study, the concept of self-construal, that is, how individuals perceive themselves in relation to others, is introduced to investigate this effect, as it can significantly influence consumers' reactions to online reviews [14].
Previous studies have highlighted that marketing strategies employing social norms, such as online reviews, exert social influence that significantly affects consumer behavior [17]. Social influence is broadly categorized into informational influence (the diagnosticity of information) and normative influence (belongingness to a group) [18,19]. These influences suggest that consumer conformity to majority evaluations stems from consumers being more likely to perceive information as accurate when it is part of a group norm. We predicted that the influence of online reviews would differ depending on consumers' self-construal and group similarity. Specifically, consumers with an independent self-construal, who perceive themselves as "me", tend to refer to the opinions of others to confirm confidence in themselves, regardless of who those others are [20]. They use the reviews of others to enhance the accuracy of their choices and boost their confidence. Therefore, irrespective of their similarity to the review group, the higher the online review rating, the higher the purchase intention of consumers with an independent self-construal. Conversely, consumers with an interdependent self-construal perceive themselves as "us" and tend to strive for harmony within the in-group by accepting the in-group's opinions [20]. Consequently, they are more influenced by online reviews from the in-group than from the out-group, and online reviews are expected to have a normative rather than an informational influence on them. Consumers with an interdependent self-construal will therefore exhibit more positive purchase intentions when their in-group's online review ratings are higher.

The purpose of this study is twofold. First, we examine the impact of online reviews on consumers' purchase intentions, with a focus on group similarity and self-construal. Previous studies emphasized that group similarity must be considered when examining the effectiveness of online reviews, but they failed to yield consistent results due to their inadequate consideration of consumers' characteristics. This study proposes self-construal as a key moderating variable and examines how group similarity exerts a differential influence depending on the type of consumer. Second, informational and normative influences are presented as factors mediating the mechanism by which online reviews exert their influence, based on the interaction of group similarity and self-construal. The results of this study are expected to have practical implications for companies that use online reviews as marketing tools.

Social Influence of Online Reviews (Informative and Normative)

Online reviews are user-generated, product-related information provided by consumers based on their experiences, and they constitute the most common form of online word-of-mouth communication [21]. Previous research has indicated that online reviews are perceived as more authentic and persuasive than traditional advertisements and communication strategies [22,23]. Consequently, online reviews significantly influence other consumers' purchasing decisions. Previous studies have suggested that positive online reviews create favorable consumer responses such as consumer trust, positive attitudes toward the product, and purchase intention [24-27].
Moreover, online reviews wield a strong social influence. The most crucial factor influencing consumer purchase behavior is the online review rating [1,28]. Companies present consumers' evaluations in the form of star ratings to convey opinions about a product [29]. When the opinion of a majority of online reviewers about a product is showcased, consumers are encouraged to align themselves with that prevailing sentiment.

Previous studies have noted that online reviews using ratings, among other elements, are representative of marketing using social norms [29-31]. Social norms-related marketing influences consumers in two significant ways [19]. The first is informational influence. This social influence arises from consumers' desire for accurate information to use in decision making. When purchasing a product, consumers seek cues to reduce uncertainty and ensure that products meet their expectations. When ratings indicate that many people share a specific evaluation or have taken certain actions, consumers perceive the information as useful and accurate for their own decision making and conform to the established norm. Positive evaluations of a specific product indicate a high likelihood that the product will meet consumer expectations [9,32]. In other words, the informational influence of online reviews serves as highly diagnostic information that assists consumer decision making.

The second is normative influence. This is the motivation that consumers feel to meet the expectations of others, arising from their desire to maximize social outcomes related to reward and punishment [33]. These consumers tend to conform to established norms to positively shape others' perceptions of them and to avoid negative consequences, such as disappointment or criticism. Consequently, consumers accept others' opinions to receive rewards or avoid punishment, even if a decision is incorrect [34,35]. In other words, online reviews serve as group norms, and consumers seek a sense of belonging to the group by following online reviews.

In summary, online reviews have both informational and normative effects. The informational influence of online reviews leads consumers to perceive the information as more diagnostic, whereas the normative influence fosters a stronger sense of belonging. Consequently, the more positive the content of an online review, the higher the consumers' purchase intention for the product.

Group Similarity

Perceived social norms compel consumers to align with the opinions expressed in online reviews. Positive online reviews contribute to more favorable product evaluations and increased purchase intentions. However, the impact of online reviews is determined not only by their content but also by the characteristics of the reviewer [36], and certain groups can use their social influence to heighten consumers' awareness [37].

Existing research highlights the pivotal role of perceived similarity between consumers and reference groups [32], suggesting that consumers engage in social comparison to assess themselves relative to others, especially those sharing similar attributes [38]. Even in online shopping contexts, group similarity remains a critical factor to consider, as it exerts a significant influence on consumers' purchasing decisions [10,39]. Previous studies have indicated that individuals are more likely to align with the behavior of groups that match them on factors such as age, gender, or personality [8,40,41]. For instance, Shang et al.
[42] demonstrated that consumers were more likely to donate when informed about donations from people of the same gender. Similarly, Murray et al. [41] found a significant reduction in adolescent smoking when peers of the same age participated in an anti-smoking campaign. Goldstein et al. [9] emphasized the importance of group similarity for consumers' social identity, revealing that hotel guests aligned more with groups similar to themselves. Group similarity is thus a crucial variable for assessing consumer adherence to social norms.

However, other studies have indicated that group similarity does not consistently influence consumer attitudes or behaviors. Moon and Sung [13] demonstrated that consumers with a high need for uniqueness deliberately diverge from the opinions of highly similar groups to maintain their uniqueness. In their study, group similarity had a negative effect on consumer behavior. Moreover, Racherla et al. [12] found that group similarity significantly enhances trust in online reviews only in high-involvement situations. This is because, in consumer purchasing situations, similarity with the online review group serves as a central rather than a peripheral cue, diminishing the influence of similarity in low-involvement situations.

In summary, while similarity to online review groups can play a role in eliciting consumer responses, previous studies also suggest instances in which the influence of group similarity disappears or becomes negative. Therefore, for companies to utilize online reviews effectively, it is important to identify the factors that condition the influence of group similarity. To address this, self-construal, a variable representing consumers' personal tendencies, is adopted as a key variable in this study. In particular, we examine how the social influence (informational/normative) of online reviews varies depending on the interaction between group similarity and self-construal.

Self-Construal

Self-construal refers to the extent to which individuals perceive themselves as distinct entities separated from others or as interconnected entities within relationships [14]. It encompasses both independent and interdependent self-construal [20]. Individuals with an independent self-construal think in terms of "I" and perceive themselves as unique entities, distinct from others. Conversely, those with an interdependent self-construal think in terms of "we", perceive themselves as an integral part of the social context, and strive to maintain harmonious relationships with others [20,43,44]. Differences in thinking styles based on self-construal influence consumers' perceptions of relationships as well as their information processing, emotional expression, and perceptions of object fit [45-47].
Self-construal significantly influences how consumers process information presented by a reference group. Individuals with an independent self-construal value self-confidence over conforming to group norms during information processing [20]. This does not imply that they are impervious to group influence; rather, they seek others' opinions to enhance the accuracy of their judgments [32]. Online review ratings act as cues for those with an independent self-construal, boosting their confidence in the accuracy of their judgments. As a result, individuals with an independent self-construal tend to evaluate products with higher review ratings more positively, regardless of the type of review group. For them, a high rating indicates a low likelihood of product failure upon purchase.

By contrast, individuals with an interdependent self-construal place importance on relationships and group harmony, striving to reinforce bonds by adhering to in-group norms [43]. However, they do not value relationships with all groups equally [48]. Hesapci et al. [49] and Duclos and Barasch [50] noted that individuals with an interdependent self-construal are more influenced by the in-group than the out-group, indicating that they are more responsive to in-group than to out-group influences, particularly when group similarity is high.

The social influence of online reviews, encompassing both informational and normative effects, is expected to vary based on group similarity and consumer self-construal. For individuals with an independent self-construal, the diagnostic perception arising from informational influence is expected to be more prevalent. These consumers seek confidence in their purchases through reliable information [15,20,51]. They also use the opinions of others as a tool to evaluate their own judgments and decisions, regardless of the type of group [20]. Online reviews play a crucial role in this evaluative process [52]. High ratings in online reviews serve as valuable information, instilling confidence in potential buyers and guiding them toward satisfactory product choices [53]. Conversely, online reviews with lower ratings may serve an informative purpose, causing consumers to hesitate and reconsider their decision to purchase a particular product. Consequently, consumers with an independent self-construal are expected to experience higher diagnosticity when online reviews are positive, regardless of group similarity, with no significant change in their perception of belongingness based on group similarity or online reviews. This is because, for consumers with an independent self-construal, social influence operates primarily through informational rather than normative influence [51].

Hypothesis 1 (H1). The perceived diagnosticity of consumers with an independent self-construal increases when the online review rating is high (vs. low), irrespective of group similarity.

Hypothesis 2 (H2). The perceived belongingness of consumers with an independent self-construal is not influenced by group similarity or the online review rating.
Individuals with an interdependent self-construal are expected to prioritize normative over informational influence in assessing online reviews [32]. Focused on conforming to group norms for relationship formation, they place greater value on relationships within their in-groups and are less influenced by out-groups [49,50]. Therefore, individuals with an interdependent self-construal are expected to perceive a greater sense of belongingness when online reviews from groups similar to their own are positive. However, when group similarity is low, perceived belongingness is not expected to be influenced by online ratings. Additionally, because they emphasize normative over informational influence in online reviews [51], no differences in the perception of diagnosticity are expected based on group similarity and online review ratings. Based on these considerations, the following hypotheses are proposed:

Hypothesis 3 (H3). The perceived diagnosticity of consumers with an interdependent self-construal is not influenced by group similarity or the online review rating.

Hypothesis 4 (H4). The perceived belongingness of consumers with an interdependent self-construal changes according to group similarity and the online review rating.

Hypothesis 4a (H4a). When group similarity is high, perceived belongingness increases more when the online review rating is high (vs. low).

Hypothesis 4b (H4b). When group similarity is low, perceived belongingness does not change based on the online review rating.

Consumer purchase intentions are expected to vary according to group similarity, online review ratings, and self-construal. Consumers with an independent self-construal, who emphasize accuracy in decision making [15,20,51], perceive greater diagnosticity in online reviews that are positively rated by many people, which leads to a more positive purchase intention.

Hypothesis 5 (H5). The purchase intention of consumers with an independent self-construal is more positive when the online review rating is high (vs. low), regardless of group similarity.

However, consumers with an interdependent self-construal, who make decisions based on in-group opinions [49,50,54], will perceive a different level of belongingness and thus have different purchase intentions based on group similarity and online review ratings. When group similarity is high, perceived belongingness increases, and purchase intention is more positive when the online review rating is high than when it is low. However, when group similarity is low, there will be no difference in purchase intention because these consumers are not affected by the online review ratings of the out-group. Based on the above discussion, the following hypotheses were derived:

Hypothesis 6 (H6). The purchase intention of consumers with an interdependent self-construal changes according to group similarity and the online review rating.

Hypothesis 6a (H6a). When group similarity is high, purchase intention is more positive when the online review rating is high (vs. low).

Hypothesis 6b (H6b). When group similarity is low, purchase intention does not differ based on the online review rating.
Finally, we anticipate that the impact of online review ratings, group similarity, and self-construal on consumers' purchase intentions will be mediated by perceived diagnosticity and belongingness. As previously mentioned, consumers with an independent self-construal, who base their decisions on informational rather than normative social cues [15,20,51], are expected to determine purchase intention based on perceived diagnosticity. Conversely, consumers with an interdependent self-construal, who base their decisions on normative influence [20,51], are expected to form purchase intentions based on perceived belongingness. Based on these premises, the following hypothesis was derived:

Hypothesis 7 (H7). The influence of the online review rating, group similarity, and self-construal on purchase intention is mediated by perceived diagnosticity and perceived belongingness.

Research Framework and Data Collection

We investigated the impact of online review ratings, group similarity, and self-construal on consumer purchase intentions. Furthermore, perceived diagnosticity and belongingness were introduced as mechanisms underlying this influence.

The experiment was conducted from 17 July 2023 to 20 July 2023. A total of 267 subjects participated in the experiment in exchange for a small incentive (USD 0.6) through Amazon MTurk. All participants were residents of the United States, comprising 166 men (62.2%), with a mean age of 34.15 (SD = 9.96, range = 21-74) (Table 1).

Experimental Design and Procedure

This study employed a 2 (online review rating: high vs. low) × 2 (group similarity: high vs. low) × 2 (self-construal: independent vs. interdependent) between-subjects factorial design. Participants were randomly assigned to the experimental conditions defined by online review rating, group similarity, and self-construal, resulting in eight groups of 27-49 subjects each.

First, participants engaged in tasks related to self-construal, involving priming and manipulation checks. The participants' self-construal was primed using the method outlined by Chen [46] and Gardner et al. [55]. To prime self-construal, participants were presented with a task involving paragraphs corresponding to the different self-construal types (i.e., independent or interdependent). Participants in the independent self-construal condition were instructed to read short paragraphs about a city trip and then count the number of pronouns in the text. The pronouns presented to them were singular words (e.g., he, she, me, I, you, mine, and yours). Participants in the interdependent self-construal condition were given the same task, but the pronouns presented to them were plural words (e.g., we, they, our, and their). Following this task, participants responded to 6 questions assessing the effectiveness of the self-construal manipulation (e.g., independent: focused on "myself" vs. interdependent: "me and my family").
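As a rough illustration of the between-subjects structure described above, the Python sketch below randomly assigns a hypothetical participant list to the eight factorial cells. The seed, condition labels, and participant IDs are invented for the example; unconstrained random assignment naturally yields the kind of unequal cell sizes (27-49 per cell) reported here.

```python
import itertools
import random
from collections import Counter

# The three two-level factors of the between-subjects design.
conditions = list(itertools.product(
    ["high", "low"],                    # online review rating
    ["high", "low"],                    # group similarity
    ["independent", "interdependent"],  # primed self-construal
))  # 2 x 2 x 2 = 8 cells

random.seed(42)  # fixed seed so the sketch is reproducible
participants = [f"P{i:03d}" for i in range(1, 268)]  # 267 hypothetical IDs
assignment = {p: random.choice(conditions) for p in participants}

# Unconstrained assignment produces unequal cell sizes, as in the study.
print(Counter(assignment.values()))
```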
Next, to manipulate group similarity, participants were asked about their mobile phone brand (e.g., Samsung Galaxy, Apple iPhone). They were then presented with experimental stimuli containing product information and online reviews (see Appendix A). The stimuli provided information about earbuds released by a fictitious American venture company, designed to be compatible with mobile phones, including the Samsung Galaxy and Apple iPhone. To highlight group similarity, participants were informed that the reviews were written by users of Samsung Galaxy and Apple iPhones. High group similarity was established when both the participants and the reviewers used the same mobile phone brand, whereas low group similarity occurred when they used different brands (e.g., high group similarity: participant's brand-Samsung, reviewers' brand-Samsung). Product ratings in the experimental stimuli were based on user review ratings and categorized into two conditions: high and low ratings. The review ratings were presented to the participants in the form of star ratings.

After viewing the stimulus, participants responded to 3 items checking the manipulation of group similarity, 4 items assessing perceived diagnosticity (informational influence), 5 items evaluating feelings of belongingness (normative influence), and 4 items measuring their intention to purchase. Finally, participants answered demographic questions. The measurement items and the reliability of the constructs are listed in Table 2.

Self-Construal (manipulation check)
• Your thoughts about the message were focused on just yourself.
• Your thoughts were focused on just you.
• You thought about you and your family.
• Your thoughts about the message were focused on you and your family.
• Your thoughts were focused on you and your family.

Group Similarity (perceived similarity to others) [57]
• Samsung Galaxy users (Apple iPhone users) reflect who I am.
• Samsung Galaxy users (Apple iPhone users) are similar to me.
• Samsung Galaxy users (Apple iPhone users) are very much like me.

Perceived Diagnosticity (informative) [58,59] (α = 0.777)
• These reviews are helpful for me to evaluate the earbuds.
• These reviews are helpful for me to understand the performance of the earbuds.
• This review is diagnostic.
• The review provided me with information to evaluate the earbuds' quality.

Perceived Belongingness (normative) [60,61]
• If I use the product (earbuds), I think I belong to the same group as the reviewers (Samsung Galaxy, Apple iPhone, or other mobile phone users).
• If I use the product (earbuds), I feel a sense of belonging with the reviewers (Samsung Galaxy, Apple iPhone, or other mobile phone users).
• If I use the product (earbuds), I feel close to the reviewers (Samsung Galaxy, Apple iPhone, or other mobile phone users).
• If I use the product (earbuds), I feel like I'm with the reviewers (Samsung Galaxy, Apple iPhone, or other mobile phone users).
• If I use the product (earbuds), I feel socially connected to the reviewers (Samsung Galaxy, Apple iPhone, or other mobile phone users).

Purchase Intention [62] (α = 0.862)
• The likelihood of me buying the product (earbuds) is very high.
• I would consider buying the product (earbuds) of this brand.
• The probability that I would like to buy the product (earbuds) of this brand is very high.
• My willingness to buy this product (earbuds) is very high.
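For readers who want to reproduce reliability figures such as the α values above, the following minimal Python sketch computes Cronbach's alpha from an item-by-respondent matrix. The toy ratings and column names are invented for the example; the real inputs would be the item responses collected in the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy 7-point responses from five respondents on four hypothetical diagnosticity items.
diagnosticity_items = pd.DataFrame({
    "diag1": [6, 5, 7, 4, 6],
    "diag2": [6, 5, 6, 4, 7],
    "diag3": [5, 4, 6, 3, 6],
    "diag4": [6, 5, 7, 4, 6],
})
print(round(cronbach_alpha(diagnosticity_items), 3))
```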
Manipulation Checks

We conducted manipulation checks for the variables of interest (self-construal and group similarity). In the 2 (group similarity) × 2 (review rating) × 2 (self-construal) analysis of variance (ANOVA) on the self-thought index, self-construal exhibited a significant main effect (F = 21.570, p < 0.001). Participants with an independent self-construal (M = 5.46) were more self-focused than those with an interdependent self-construal (M = 4.73). For the other-thoughts index, in the 2 (similarity) × 2 (review rating) × 2 (self-construal) ANOVA, the main effect of self-construal was also significant (F = 10.275, p < 0.05), with individuals with an interdependent self-construal (M = 5.34) showing more focus on others than those with an independent self-construal (M = 4.82).

In the analysis of perceived similarity, through the 2 (group similarity) × 2 (review rating) × 2 (self-construal) ANOVA, the main effect of group similarity was significant (F = 16.171, p < 0.001). Participants perceived greater group similarity when the mobile phone brands they used matched the reviewers' (M = 5.43) than when they did not (M = 4.88). In the manipulation checks for self-construal and group similarity, the effects of the other variables were not significant (all p > 0.1).

Independent Self-Construal: Perceived Diagnosticity (Informative)/Belongingness (Normative)

An ANOVA was conducted to examine the influence of self-construal, group similarity, and review ratings on perceived diagnosticity and belongingness (Tables 3 and 4). The following results were obtained. First, the analysis of perceived diagnosticity showed a marginally significant main effect of group similarity (F = 3.284, p = 0.071), indicating that participants perceived higher diagnosticity when group similarity was high (M = 5.34) than when it was low (M = 5.16). Second, the main effect of the review rating was significant (F = 10.989, p < 0.05), with participants perceiving higher diagnosticity for high-rated reviews (M = 5.47) than for low-rated reviews (M = 5.06). Third, the three-way interaction among self-construal, group similarity, and review rating on perceived diagnosticity was significant (F = 6.987, p < 0.05). The contrast analysis of participants with an independent self-construal (Figure 2) showed that these individuals perceived online reviews with higher ratings as more diagnostic than low-rated ones, irrespective of group similarity. Specifically, in the case of high group similarity, participants perceived greater diagnosticity when the online review rating was high (M = 5.63) than when it was low (M = 5.10; F = 4.714, p < 0.05). Similarly, under low group similarity, they perceived greater diagnosticity for high-rated reviews (M = 5.65) than for low-rated ones (M = 4.84; F = 12.119, p < 0.05), supporting H1. This means that, as proposed in Hypothesis 1, consumers with an independent self-construal perceived greater diagnosticity in online reviews with high (vs. low) ratings regardless of whether group similarity was high or low. Furthermore, analyses based on online review ratings showed no significant difference in diagnostic perceptions concerning group similarity in either the high or the low online review rating group (high review rating-low group similarity: 5.65 vs. high group similarity: 5.63; F = 0.010, p > 0.1) (low review rating-low group similarity: 4.84 vs. high group similarity: 5.10; F = 1.115, p > 0.1).
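To make the factorial analysis concrete, the sketch below runs a 2 × 2 × 2 between-subjects ANOVA in Python with statsmodels on simulated data. The toy data, effect sizes, and column names are assumptions for illustration only; the study's own data would replace them.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulate toy data: a small rating effect on the 7-point diagnosticity score.
rng = np.random.default_rng(0)
rows = []
for rating in ["high", "low"]:
    for similarity in ["high", "low"]:
        for construal in ["independent", "interdependent"]:
            base = 5.0 + (0.4 if rating == "high" else 0.0)
            for score in rng.normal(base, 1.0, 30).clip(1, 7):
                rows.append({"rating": rating, "similarity": similarity,
                             "construal": construal, "diagnosticity": score})
df = pd.DataFrame(rows)

# Sum-coded factors so that Type III sums of squares are interpretable,
# as is common for factorial designs with unequal cell sizes.
model = smf.ols(
    "diagnosticity ~ C(rating, Sum) * C(similarity, Sum) * C(construal, Sum)",
    data=df,
).fit()
print(anova_lm(model, typ=3))
```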
An ANOVA on perceived belongingness showed that none of the main effects were significant (all p > 0.1). More importantly, the three-way interaction effect on perceived belongingness was significant (F = 5.765, p < 0.05). The contrast analysis focusing on participants with an independent self-construal (Figure 2) revealed that these individuals did not perceive belongingness differently depending on group similarity and online review ratings (high group similarity-low review rating: 5.29 vs. high review rating: 5.11; F = 0.325, p > 0.1) (low group similarity-low review rating: 4.96 vs. high review rating: 5.00; F = 0.021, p > 0.1). These results support H2. In other words, as proposed in Hypotheses 1 and 2, the results confirmed that consumers with an independent self-construal perceive diagnosticity over belongingness when evaluating online review ratings. Furthermore, in both the high and the low online review rating groups, no significant difference was observed in the perceived sense of belongingness based on group similarity (high review rating-low group similarity: M = 5.00 vs. high group similarity: M = 5.11; F = 0.133, p > 0.1) (low review rating-low group similarity: M = 4.96 vs. high group similarity: M = 5.29; F = 1.096, p > 0.1).

Interdependent Self-Construal: Perceived Diagnosticity (Informative) and Belongingness (Normative)

The contrast analysis of the perceived diagnosticity of participants with an interdependent self-construal (Figure 3) showed that their perceptions of diagnosticity varied depending on the review rating and group similarity. In the case of high group similarity, participants perceived greater diagnosticity when the review rating was high (M = 5.65) than when it was low (M = 5.07; F = 7.604, p < 0.05). However, in the case of low group similarity, perceived diagnosticity did not vary depending on the review rating (low review rating: 5.24 vs. high: 4.87; F = 2.320, p > 0.1). Contrary to the prediction of Hypothesis 3, consumers with an interdependent self-construal perceived online reviews with high ratings as more useful than those with low ratings when group similarity was high. This result indicates that H3 was not supported. Furthermore, where online review ratings were high, individuals perceived greater diagnosticity under high than under low group similarity (high online review rating-low group similarity: 4.87 vs. high group similarity: M = 5.65; F = 10.785, p < 0.05). However, where review ratings were low, there was no significant difference in the perception of diagnosticity based on group similarity (low review rating-low group similarity: M = 5.24 vs. high group similarity: M = 5.07; F = 0.617, p > 0.1). In other words, individuals with an interdependent self-construal perceived greater diagnosticity from reviews when both group similarity and review ratings were high.

The contrast analysis of the perceived belongingness of participants with an interdependent self-construal showed significant differences in perceived belongingness based on group similarity and online review ratings (Figure 4). Specifically, in conditions of high group similarity, participants reported a greater sense of belongingness when the review rating was high (M = 5.68) than when it was low (M = 4.57; F = 16.872, p < 0.001). That is, consumers with an interdependent self-construal perceived a higher sense of belongingness through high (vs. low) online review ratings when group similarity was high. However, in scenarios with low group similarity, perceived belongingness did not significantly differ based on the online review rating (low review rating: M = 4.97 vs. high: 4.86; F = 0.123, p > 0.1). This implies that for consumers with an interdependent self-construal, when group similarity was low, the impact of online review ratings on the perception of belongingness was not significant. These results show that H4a and H4b were supported. Further analysis of perceived belongingness based on online review ratings revealed that, where review ratings were high, individuals experienced heightened perceived belongingness when group similarity was high compared to when it was low (high review rating-low group similarity: M = 4.86 vs. high group similarity: 5.68; F = 7.120, p < 0.05). By contrast, when the review rating was low, there was no significant difference in the perception of belongingness based on group similarity (low review rating-low group similarity: M = 4.97 vs. high group similarity: M = 4.57; F = 2.074, p > 0.1).
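The pairwise contrasts reported above are simple-effects comparisons within a single cell of the design. Continuing with the toy `df` built in the ANOVA sketch earlier, the snippet below illustrates one way to compute such contrasts; note that published simple-effects tests often use the omnibus error term from the full model rather than the cell-wise pooled variance used in this shortcut.

```python
from scipy import stats

# Compare high vs. low ratings within each similarity cell for
# interdependent-construal participants (toy data from the ANOVA sketch).
interdep = df[df["construal"] == "interdependent"]
for sim in ["high", "low"]:
    cell = interdep[interdep["similarity"] == sim]
    hi = cell.loc[cell["rating"] == "high", "diagnosticity"]
    lo = cell.loc[cell["rating"] == "low", "diagnosticity"]
    t, p = stats.ttest_ind(hi, lo)
    # For a 1-df two-group contrast, F equals t squared.
    print(f"similarity={sim}: F = {t**2:.3f}, p = {p:.3f}")
```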
Purchase Intention The results of the ANOVA (Table 5) revealed that the main effect of the online review rating was significant (F = 15.939,p < 0.001), indicating that participants had a more positive purchase intention when the review rating was high (M = 5.47) compared to when it was low (M = 4.86).The main effect of self-construal was marginally significant (F = 3.058, p = 0.082).Compared with participants with an interdependent self-construal (M = 5.02), those with an independent self-construal showed a higher purchase intention (M = 5.32).Moreover, the three-way interaction among self-construal, group similarity, and the review rating on the purchase intention was significant (F = 6.008, p < 0.05).The contrast analysis results (Figure 4) indicated that participants with an independent self-construal had a more positive purchase intention with high review ratings, regardless of group similarity.In high group similarity conditions, participants exhibited a positive purchase intention when the online review rating was high (M = 5.65) compared to when it was low (M = 5.02; F = 4.937, p < 0.05).Similarly, in situations with low group similarity, purchase intention was more positive when the online review rating was high (M = 5.57) compared to when it was low (M = 4.96; F = 4.971, p < 0.05).The purchase intention of the participants with an interdependent self-construal varied based on group similarity and the review rating.In situations with high group similarity, their purchase intention was more positive when the online review rating was higher (M = 5.72) compared to when it was low (M = 4.57; F = 21.996,p < 0.001).However, in situations with low group similarity, purchase intention did not differ according to the review rating (review rating-low: 5.08 vs. high: 4.87; F = 0.553, p > 0.1) (Figure 4).Thus, H5 and H6 (H6a and H6b) were supported. In summary, consumers with an independent self-construal who perceive diagnosticity in high review ratings exhibit more positive purchase intentions when review ratings are high (vs.low), irrespective of group similarity.However, consumers with an interdependent self-construal, who perceive both diagnosticity and belongingness in high review ratings only when group similarity is high, show more positive purchase intentions in high (vs.low) review ratings of groups with high similarity. Analysis based on online review ratings revealed individuals with an independent self-construal did not exhibit different purchase intentions based on group similarity, regardless of whether the review rating was high or low (high online review rating with low group similarity: M = 5.57 vs. high group similarity: 5.65; F = 0.095, p > 0.1; low review rating with low group similarity: M = 4.96 vs. high group similarity: 5.02; F = 0.040, p > 0.1).In contrast, individuals with an interdependent self-construal displayed varied purchase intentions according to group similarity under both high and low review ratings conditions (high review rating with low group similarity: M: 4.87 vs. high group similarity: 5.72; F = 9.381, p < 0.05) (low review rating-low group similarity: 5.08 vs. high group similarity: 4.57; F = 4.046, p < 0.05). 
Mediation Analysis (Perceived Diagnosticity/Belongingness)

Mediation analysis was conducted to explore how perceived diagnosticity and perceived belongingness mediate the effects of online review ratings, group similarity, and self-construal on purchase intention. Model 12 of the PROCESS macro was applied for the analysis [63], with bootstrapping analysis involving 10,000 resamples to assess the mediation effects [64]. In this model, online review ratings were the independent variable, group similarity and self-construal were the moderating variables, perceived diagnosticity and belongingness served as mediating variables, and purchase intention was the dependent variable. The results indicated that the influence of online review ratings on purchase intention was mediated by perceived diagnosticity and belongingness, with significant indirect effects for perceived diagnosticity (indirect effect = 0.77, 95% CI: 0.2106-1.4136) and perceived belongingness (indirect effect = 0.45, 95% CI: 0.0832-0.9519). Therefore, H7 was supported.
Next, an analysis was conducted to examine the path of influence of each variable (online review rating, group similarity, self-construal) on consumers' purchase intentions. Upon examining the specific paths of influence of online review ratings based on self-construal and group similarity, the mediating effect of perceived diagnosticity (Table 6) was not significant in the condition of interdependent self-construal and low group similarity. For subjects with an independent self-construal, perceived diagnosticity mediated the influence of online review ratings on purchase intention, regardless of group similarity. Conversely, in subjects with an interdependent self-construal, the mediating effect of perceived diagnosticity was significant only when group similarity was high. Finally, the mediating effect of perceived belongingness was not significant for participants with an independent self-construal and was significant only in the condition of high group similarity for participants with an interdependent self-construal (Table 7).
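The indirect effects above were estimated with PROCESS Model 12, a moderated-mediation model [63]. As a simplified illustration of the percentile-bootstrap principle it relies on [64], the sketch below estimates a single unmoderated indirect effect (a × b) in base R with 10,000 resamples; the data frame 'd' and its columns are hypothetical, and 'rating' is assumed to be coded 0/1.

# Percentile bootstrap of the indirect effect rating -> diagnosticity -> purchase.
set.seed(1)
boot_ab <- replicate(10000, {
  i <- sample(nrow(d), replace = TRUE)     # resample participants
  a <- coef(lm(diagnosticity ~ rating, data = d[i, ]))["rating"]
  b <- coef(lm(purchase ~ diagnosticity + rating, data = d[i, ]))["diagnosticity"]
  a * b                                    # indirect effect in this resample
})
quantile(boot_ab, c(0.025, 0.975))         # 95% percentile confidence interval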
Discussion

This study investigated the effects of online review ratings, group similarity, and self-construal on consumer purchase intentions. Additionally, we explored the underlying mechanisms of perceived belongingness and diagnosticity in shaping these intentions. The results highlighted varied patterns in purchase intentions based on the interplay between group similarity and online review ratings for consumers with interdependent and independent self-construals. Specifically, consumers with an interdependent self-construal were more influenced by group similarity, showing heightened purchase intentions when online review ratings were high. However, in situations with low group similarity, their purchase intentions remained unaffected by online review ratings. Conversely, consumers with an independent self-construal were significantly influenced by online ratings, showing more positive purchase intentions when the ratings were high, irrespective of group similarity.

Previous studies examining the impact of group similarity in online reviews have yielded inconsistent results. However, this study demonstrated that the conflicting findings from previous research could be reconciled by considering self-construal. Additionally, we investigated the underlying mechanisms of perceived belongingness and diagnosticity in shaping these intentions. Prior research suggests that marketing strategies employing social norms exert social influence, perceived by consumers as either informational or normative [9,18,19,32]. Our findings indicate that the influence of social norms on consumer responses varies depending on their self-construal. Among consumers with an interdependent self-construal, perceived belongingness and diagnosticity played a mediating role. Conversely, for those with an independent self-construal, purchase intentions were solely mediated by perceived diagnosticity. In essence, this study is significant, as it confirms that differential perception (information usefulness and belongingness) operates based on self-construal regarding the influence of online review ratings and group similarity on consumer response (Table 8).

Academic Implications

These findings have several important implications. First, the study corroborated the role of group similarity in enhancing the effectiveness of online reviews, building on existing literature that highlighted the influence of social norms and social influence [9,17]. Notably, this study manipulated group similarity within the context of consumer-brand dynamics, offering meaningful insights that are distinct from previous studies [65,66]. Second, the research revealed noteworthy variances in consumers' purchase intentions contingent on group similarity and self-construal within the framework of online reviews. This study contributes to the literature on self-construal and social norms by confirming that the types of groups influencing consumers with independent or interdependent self-construals are distinct.
Finally, this study proposes that social influence, comprising normative and informational components, serves as the underlying mechanism for the positive impact of online reviews on purchase intention. Importantly, it reveals that the prominence of each influence type shifts based on group similarity and self-construal. For consumers with an independent self-construal, perceived diagnosticity (informational influence) mediates the impact of online reviews on purchase intentions. Conversely, for those with an interdependent self-construal, both perceived belongingness (normative influence) and perceived diagnosticity (informational influence) mediate this effect. Understanding the different mechanisms at play distinguishes this study from previous research [9,11], emphasizing the contextual dependence of reviews contingent on group similarity and individual consumer characteristics.

Practical Implications

The study also offers practical insights for marketers aiming to use online reviews to formulate effective strategies. First, it highlights the significant role of group similarity in influencing purchase intentions, demonstrating that consumers perceive a high level of group similarity solely by being informed that they are using the same product brand as the reviewer. Consequently, companies can enhance perceived group similarity by leveraging consumers' past purchase history rather than relying on demographic information or community affiliation details.

Second, the results underscore that the effects of online reviews and group similarity can vary based on consumers' self-construal. Therefore, companies should implement diverse online review strategies based on consumers' self-construal. For those with an independent self-construal, positive online reviews are effective regardless of group similarity because they assess the quality and utility of products through reviews. By contrast, because consumers with an interdependent self-construal are influenced primarily by product evaluations from their in-group, marketers should use positive reviews from the same consumer group.

Third, understanding consumers' self-construal is crucial for companies seeking to implement effective online review strategies. Collecting data on targeted consumers and inferring their self-construal based on demographics, age, gender, and culture can inform tailored strategies. Finally, companies can manipulate consumer self-construal through priming techniques within purchasing situations. Priming, as demonstrated in this study's experiment, enables temporary changes in consumers' tendencies. Consumers' self-construal can be manipulated by having them engage in specific tasks, as demonstrated in this study, but it can also be shaped by the messages they encounter. Additionally, consumers' self-construal may vary depending on their motivations for purchasing a product [56]. For instance, when consumers aim to buy a product for personal satisfaction or enjoyment, their self-construal may shift toward independence. On the other hand, when purchasing a product for the well-being or safety of their family, it may become more interdependent. By incorporating priming methods, message types, and purchase situations that influence self-construal, companies can establish more effective online review strategies.
Limitations and Future Research

We would like to address several limitations and propose directions for future research based on our findings. First, despite observing variations in consumers' purchase intentions based on the variables examined in this study, we noted that purchase intention was relatively high across all groups. This may be attributable to the nature of the experimental product, the Bluetooth earbuds, which are readily accessible to consumers. Bluetooth earbuds, which are available in diverse brands and price ranges, may engender inherently positive consumer attitudes. Future research should consider conducting experiments using different products to ensure the generalizability of the findings. Furthermore, the comparatively elevated purchase intention may be attributed to the online review ratings presented to the participants. In this study, the stimuli were accompanied by review ratings of 4.6 points in the high condition and 3 points in the low condition. While a significant difference was noted between these two levels, it may be challenging to categorize a rating of 3 points as low. Consequently, future research should adjust the ratings to align with the conditions and the characteristics of the target group.

Second, the products utilized in prior studies on online reviews (e.g., printers, sneakers, electronic devices, and shampoos) were confined to a few categories [57,58]. In line with this trend, our study used electronic devices as experimental stimuli. However, online shopping platforms offer a wide array of products beyond electronic devices. For instance, consumers may purchase experience products such as tickets for movies or musical performances online. Future research should explore a broader range of product categories to enhance the external validity of our findings. Specifically, when dealing with experience products, where consumer perceptions are influenced by higher uncertainty, reliance on other people's reviews may play a more significant role. Therefore, future studies should consider these aspects.

Third, consumers' online purchase intentions may be influenced by factors such as the characteristics of the reviewer or review. For instance, giving cues that enhance review reliability, such as top reviewers or best reviews [59], may increase consumer perceptions of information accuracy. Even in situations where group similarity is high, review or reviewer ranks can exert a more significant influence. Future research should explore these aspects to better understand the dynamics of online shopping.

Finally, it is noteworthy that, even for products with high review ratings, the presence and number of visual images in the review can vary. Kim et al. [60] highlighted that consumers respond positively to reviews accompanied by images or those with a substantial number of images. Future research in this domain has the potential to offer valuable insights for companies seeking to optimize their online product presentations.

Appendix A2. Low Rating (Apple iPhone) Condition

This review is about earbuds, a new product launched by an American venture company. These earbuds are compatible with all phone brands, such as Apple iPhone and Samsung Galaxy, regardless of the model. The company conducted consumer testing before launching the product and posted reviews for each mobile phone brand used by consumers. Apple iPhone users' reviews are as follows.

Figure 1. Research framework of this study.
Table 2. Measurement items and reliabilities.
Table 6. Results of mediation analysis for perceived diagnosticity.
Table 7. Results of mediation analysis for perceived belongingness.
Table 8. Results of hypothesis testing.
2024-05-12T15:13:01.124Z
2024-05-10T00:00:00.000
{ "year": 2024, "sha1": "11ef644b3da7115bf4617081fdfd5f2584a9bd8e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/0718-1876/19/2/55/pdf?version=1715335403", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "28c9444cae157f78a120bff8fb9ec6bb18097bc7", "s2fieldsofstudy": [ "Business", "Psychology" ], "extfieldsofstudy": [] }
248020348
pes2o/s2orc
v3-fos-license
Salinity-induced chemical, mechanical, and behavioral changes in marine microalgae This study examines how salinity reduction triggers the response of three marine microalgae at the molecular and unicellular levels in terms of chemical, mechanical, and behavioral changes. At the lowest salinity, all microalgal species exhibited an increase in membrane sterols and became stiffer. The glycocalyx-coated species Dunaliella tertiolecta was surrounded by a thick actin layer and showed the highest physiological activity, negatively affecting cell motility and indicating the formation of the palmella stage. The lipid content of the membrane and the hydrophobicity of the cell were largely preserved over a wide range of salinity, confirming the euryhaline nature of Dunaliella. The species with a calcite-encrusted theca, Tetraselmis suecica, exhibited the highest hydrophobicity at the lowest salinity of all cells examined. At salinity of 19, the cells of T. suecica showed the lowest growth, flagellar detachment, the lowest cell speed, the highest physiological activity associated with a dense network of extracellular polymeric substances, and a decrease in membrane lipids, which could indicate development of a cyst stage. The organosilicate-encrusted species Cylindrotheca closterium appeared to be salinity tolerant. It behaved hydrophobically at lower salinity, whereas it became hydrophilic at higher salinity, which might be related to a molecular change in the released biopolymers. This study highlights the interplay between chemistry and mechanics that determines functional cell behavior and shows that cell surface properties and behavior could serve as stress markers for marine biota under climate change. Introduction Microalgae differ in their ability to survive and thrive in saline environments under the influence of osmotic stress (Helm et al. 2004). Because salinity can affect metabolic processes and water balance above or below the cell's isosmotic point (Kefford et al. 2002), microalgae have evolved various intracellular and extracellular osmoregulatory mechanisms to control osmotic stress in the face of salinity changes in the external environment (Gustavs et al. 2010; Shetty et al. 2019). At the cellular level, a change in salinity causes a variety of non-specific biochemical changes, such as changes in the synthesis of active compounds (lipids, carbohydrates, carotenoids, steroids, sterols, and other secondary metabolites) and changes in membrane permeability with a disruption of ion homeostasis (Benavente-Valdés et al. 2016; El-Kassas and El-Sheekh 2016; Minhas et al. 2016). How microalgae adapt to environmental changes in salinity has been extensively studied (Borowitzka 2018a, b; Foflonker et al. 2018). The adaptive response of microalgae can be manifested by altering membrane fluidity, reducing protein synthesis, accumulating compatible solutes to maintain cell osmolarity, regulating photosynthesis to balance energy production and consumption (Barati et al. 2019; Pavlinska et al. 2020), and releasing extracellular polymeric substances (Decho and Gutierrez 2017; Ivošević DeNardis et al. 2019; Mišić Radić et al. 2021). The present study investigates the response of marine microalgae to salinity fluctuation in terms of chemical, mechanical, and behavioral changes to improve our understanding of the mechanisms and pathways through which a salinity stressor modulates the adaptive responses of microalgae at the individual cell level.
Three widely distributed marine algal species, a biflagellate green alga with a glycocalyx surface coating, Dunaliella tertiolecta, a tetraflagellate green alga with a calcite-encrusted theca, Tetraselmis suecica, and a gliding diatom with an organosilicate cell wall, Cylindrotheca closterium, were studied at selected salinity levels that simulated changes from the euhaline to the mesohaline range of the salinity spectrum in marine systems.

Cell suspensions

Three species of marine algae were selected as model organisms: Dunaliella tertiolecta (Chlorophyceae, CCMP 1320, Culture Collection Bigelow Laboratory for Ocean Sciences, Bigelow, MN, USA), Tetraselmis suecica (Chlorophyceae, CCAP 66/22A, Culture Collection of Algae and Protozoa, Scottish Marine Institute, Oban, UK), and Cylindrotheca closterium (Bacillariophyceae, CCMP 1554, Culture Collection Bigelow Laboratory for Ocean Sciences). Cells were grown in 0.22 µm filtered natural seawater (salinity of 38), diluted to a specific salinity with MilliQ water, and then enriched with f/2 growth medium (Guillard 1975). Cultures were maintained in a water bath under controlled conditions (constant shaking (20 rpm), 12 h:12 h light to dark cycle with an irradiance of 31 µmol photons m⁻² s⁻¹). Algal species were grown at four selected salinities of 9, 19, 27, and 38 (control). The average cell abundance in triplicate samples was determined using a Fuchs-Rosenthal hemocytometer. Growth rate and doubling time were determined in the exponential growth phase of algal cells (Kim 2015). Cells were harvested at stationary phase (15 days) by centrifugation (2000 × g, 3 min), and the washed pellets were resuspended twice with seawater of the corresponding salinity. The last pellet was resuspended in 2 mL of filtered seawater and served as the stock cell suspension.

Confocal microscopy

Confocal measurements were performed with a Leica TCS SP8 Laser Scanning Confocal Microscope (Leica Microsystems GmbH, Germany) equipped with a white-light laser and using a 63 × (N.A. = 1.4) oil immersion objective. The excitation wavelengths and emission ranges were optimized using the spectral scan option. A commercial dye, SiR-Actin (excitation maximum 650 nm, detection range 670-700 nm), was used to visualize actin filament structures of D. tertiolecta and T. suecica. Visualization of the actin filament structures of C. closterium was not possible, probably because of its organosilicate cell wall. Autofluorescence of the algal cells was detected at an excitation maximum of 650 nm and a detection range of 720-750 nm.

Sample preparation for confocal imaging

The slides for confocal microscopy were washed in glass beakers with ethanol followed by ultrapure water. A stream of nitrogen was used to dry the slides. To prepare the slides for cell immobilization, 50 µL of 0.2% (w/v) polyethyleneimine (PEI, Sigma-Aldrich, USA) was added to the center of the clean slide and allowed to stand for 30 min. The PEI droplet was then removed and the center of the slide was rinsed three times with ultrapure water. The isolated algal cells (as described in the "Cell suspensions" section) were fluorescently labeled using the SiR-Actin kit (50 nmol SiR-Actin, 1 μmol Verapamil) from Tebu-bio (Ile-de-France, France). A stock solution of 500 μM SiR-Actin was prepared in anhydrous dimethyl sulfoxide (99.8% DMSO). To an aliquot of 50 μL of the washed cells, 0.5 µL of the SiR-Actin fluorescent dye was added. Finally, 20 µL of the stained cultures was added to the center of the glass slide and allowed to settle for 30 min. To prevent evaporation of the droplet, the slides were kept in a Petri dish with moist absorbent paper until imaging.
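As a side note to the growth measurements described in the Cell suspensions section, the specific growth rate and doubling time follow directly from exponential-phase cell counts. Below is a minimal R sketch of these standard formulas (cf. Kim 2015); the counts and time points are illustrative, not measured values.

# Specific growth rate mu = ln(N2/N1)/(t2 - t1) and doubling time ln(2)/mu.
growth_rate <- function(N1, N2, t1, t2) log(N2 / N1) / (t2 - t1)
N1 <- 4.0e4; N2 <- 1.6e5   # cells per mL (hypothetical counts)
t1 <- 2; t2 <- 6           # days in the exponential phase
mu <- growth_rate(N1, N2, t1, t2)   # per day
c(mu = mu, doubling_time = log(2) / mu)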
Motility analysis

Cell movements were recorded as 10 consecutive video files (.avi format, duration 5 s, 50-60 frames per second, image size: 340 × 250, 4 × 4 binning) under an Olympus BX51 microscope (10 × magnification). The video files were used as input to the open-source image processing software ICY (http://icy.bioimageanalysis.org) to analyze motility and trajectories of 1118 cells. Details of the motility analysis are given in Novosel et al. (2020, 2021). The software package R (R Core Team, 2020) was used for additional statistical analyses. Box plots and plots of probability density distributions of speed and search radius were obtained. The distributions for different salinities were tested using the Shapiro normality test and the Wilcoxon-Mann-Whitney test.

Electrochemical method

The electrochemical method of polarography and chronoamperometry of oxygen reduction at the dropping mercury electrode (DME) was utilized to characterize the organic constituents, such as biopolymers and fluid microparticles, in the aqueous electrolyte solution based on molecular adsorption or particle adhesion to the DME (Žutić et al. 1999; Svetličić et al. 2006; Pletikapić and Ivošević DeNardis 2017). Briefly, adsorption of organic molecules at the DME interface can be characterized by recording the polarographic maximum of Hg(II) ions, which is an alternative approach to measuring dissolved organic carbon in seawater (Hunter and Liss 1981). The surfactant activity of seawater is expressed as the equivalent amount of the nonionic synthetic surfactant Triton-X-100 (polyethylene glycol tert-octylphenyl ether) in milligrams per liter. In contrast, the adhesion of fluid organic particles to the DME interface can be characterized by recording spike-shaped amperometric signals (Svetličić et al. 2000; Ivošević DeNardis et al. 2007, 2015; Novosel et al. 2021). Whether the adhesion is spontaneous depends on the properties of the three interfaces in contact, based on the modified Young-Dupré equation (Israelachvili 1992).

Electrochemical measurements of algal cells

Electrochemical measurements of algal cell samples were performed in an air-permeable and thermostatic Metrohm vessel with a three-electrode system. The working electrode, the dropping mercury electrode, had the following characteristics: dropping time: 2.0 s, flow rate: 6.0 mg s⁻¹, maximum surface area: 4.57 mm². All potentials were referenced to a potential measured at a reference electrode, i.e., Ag/AgCl (0.1 M NaCl), separated from the measured dispersion by a ceramic frit. A platinum wire was used as the counter electrode. An aliquot of the cell suspensions was added to 25 mL of filtered seawater (pH 8.0) of the corresponding salinity and then poured into a Metrohm vessel at 20 °C. Electrochemical measurements were performed using a 174A Polarographic Analyser (Princeton Applied Research, USA) connected to a computer. Analog data acquisition was performed using a DAQ card-AI-16-XE-50 (National Instruments, USA). Data analysis was performed using an application developed in LabView 6.1 software (National Instruments). Electrochemical characterization of the algal cell samples was performed by recording polarograms of oxygen reduction (current-potential curves) and current-time curves over 50 mercury drops at constant potentials (time resolution: 50 s).
Signal frequency was expressed as the number of amperometric signals from the cells over a 100-s period. Surfactant activity was measured by adding 0.5 mL of 0.1 M HgCl₂ to the sample before measurement.

Atomic force microscopy imaging

Atomic force microscopy (AFM) imaging was performed using a Multimode Scanning Probe Microscope with a Nanoscope IIIa controller (Bruker, USA) equipped with a 125 µm vertical engagement (JV) scanner. Contact mode imaging in air was performed with silicon nitride cantilevers (DNP, Bruker, nominal frequency of 18 kHz, nominal spring constant of 0.06 N m⁻¹). The linear scanning rate was between 1.5 and 2 Hz and the scan resolution was 512 samples per line. To minimize the interaction forces between the tip and the surface, the set point was kept at the lowest possible value. Nanoscope software (Bruker) was used to process and analyze the images.

Sample preparation for AFM imaging of cells and released extracellular polymers

Cells of D. tertiolecta, T. suecica, and C. closterium grown at salinities of 9, 19, 27, and 38, respectively, were separated from the growth medium by centrifugation as described in the "Cell suspensions" section. Unmodified mica was used as the substrate for AFM imaging of D. tertiolecta and C. closterium, whereas polyethyleneimine (PEI; Sigma-Aldrich) modified mica was used for imaging of T. suecica (Novosel et al. 2021). The sample preparation protocol for AFM imaging required fixation of the D. tertiolecta suspension. A 5 µL aliquot of the cell suspension was pipetted onto freshly cleaved or PEI-modified mica and placed in a closed Petri dish for 1 h to allow the cells to settle and adhere to the surface. For rinsing, the mica was immersed in a glass of ultrapure water for 30 s three times and then dried. The mica discs were then taped to a metal sample puck with double-sided tape and imaged with the AFM.

Atomic force microscopy working in force spectroscopy mode

Measurements of the physical properties of the algal cells were performed using the Nanowizard IV AFM system (Bruker-JPK, Germany) in force spectroscopy mode in combination with an Olympus IX72 inverted optical microscope (Olympus Corporation, Japan). MLCT-D silicon nitride cantilevers were used to indent the algal cells. They were characterized by a nominal spring constant of 0.03 N m⁻¹ and a half-opening angle of 21°. The spring constants of the cantilevers were calibrated using the thermal noise method (Sader et al. 1995). Measurements were made in the central region of the cell body regardless of the type of algal cells. However, in the case of D. tertiolecta and T. suecica cells, a scan area of 3 µm × 3 µm was chosen, over which a grid of 6 × 6 points was placed. For the cells of C. closterium, the size of the scan area was 1 µm × 1 µm, over which a grid of 2 × 2 points was defined. Force curves were recorded at an approach and retract velocity of 8 µm s⁻¹ with a maximum force of 4 nN and curve lengths of 4 µm (C. closterium and T. suecica) and 6 µm (D. tertiolecta). Measurements were made at 18 °C in seawater at salinities of 9, 19, 27, and 38. The recorded data, i.e., the force curves, were analyzed using JPK Data Processing Software (Bruker-JPK, Germany).

Young's modulus determination

The apparent Young's modulus (E) was determined by applying the Hertz-Sneddon contact model (Sneddon 1965). Here, a four-sided pyramid probe was used.
Therefore, the relationship between the loading force F and the indentation depth δ is:

F = (tan α / √2) · E′ · δ²

where E′ is the reduced Young's modulus, which, considering the elastic moduli of the algal cells (E_cell) and the cantilever (E_tip), is given by:

1/E′ = (1 − µ_cell²)/E_cell + (1 − µ_tip²)/E_tip

Here, α is the half-opening angle of the tip and µ is the Poisson's ratio (µ_cell and µ_tip are the Poisson's ratios related to the compressibility of the algal cells and the indenting cantilever). Since E_cell << E_tip, the following approximation is obtained:

E′ ≈ E_cell/(1 − µ_cell²)

In our analysis, µ_cell equals 0.5 because we assume that the algal cells are incompressible. The maximum indentation depth did not exceed 1 µm, and the AFM data fit the model over the whole indentation range. The calculated E values were presented as box plots, distinguishing the median and the first (Q1) and third (Q3) quartiles.

The adhesive and hydrophobic properties of the algae were extracted from the retracting part of the force curve. They were quantified using the maximum work of adhesion (W_adh), defined as the area enclosing the negative force values. Hydrophilic (bare silicon) and hydrophobic (octadecyltrichlorosilane (OTS, Sigma-Aldrich) coated) cantilevers were used to characterize the adhesive and hydrophobic properties of cells with probes of different chemical properties. The functionalization of the OTS tips was performed by chemical vapor deposition. Silanization of the cantilevers was performed in a desiccator for 2 h, and the probes were used immediately. Scanning area, grid density, velocity, and the number of examined cells are specified in the "Atomic force microscopy working in force spectroscopy mode" section. The degree of hydrophobicity of the algal cell was defined as the difference between the maximum adhesion values obtained for the interaction of the algal cell with untreated and CH₃-functionalized AFM cantilevers. It was quantified as ΔW_adh = W_adh(no OTS) − W_adh(OTS) (Novosel et al. 2021).

Sample preparation for force spectroscopy measurements

The protocol for sample preparation was recently published in Novosel et al. (2021). Briefly, algal cells were immobilized on a glass coverslip coated with PEI. The PEI was deposited by the drop-casting technique (1 h), rinsed with seawater, and dried with a stream of nitrogen. In the case of C. closterium, a 200-µL cell suspension was placed on the substrate for 1 h. The sample was then rinsed 3 times with 200 µL of filtered seawater. For D. tertiolecta and T. suecica cells, the following procedure was used. A volume of 1.5 mL of D. tertiolecta and T. suecica suspensions was centrifuged at 265 × g for 3 min and at 940 × g for 5 min, respectively. After removing 1 mL of the medium, the obtained pellet of algal cells was vortexed. Then, 1 mL of seawater was added and the cells were centrifuged at the same speed and duration. The supernatant was removed and the cells were suspended in 400 µL of the filtered seawater. Then, 100 µL of the cell suspension was placed on a PEI-coated glass slide for 30 min. Finally, the samples were rinsed 3 times with 100 µL of the filtered seawater.
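To make the Young's modulus determination above concrete, the pyramid-model fit can be reproduced on a single force curve with base R nonlinear least squares. The sketch below uses the 21° half-opening angle of the MLCT-D tip, but the force and indentation values are synthetic, and the JPK software's actual fitting routine is not claimed here.

# Fit F = (tan(alpha)/sqrt(2)) * E' * delta^2 to one force curve.
alpha   <- 21 * pi / 180                       # tip half-opening angle (rad)
delta   <- seq(1e-8, 1e-6, length.out = 50)    # indentation depth (m)
force_N <- (tan(alpha) / sqrt(2)) * 1.5e4 * delta^2 +
  rnorm(50, sd = 1e-11)                        # synthetic force data (N)

fit <- nls(force_N ~ (tan(alpha) / sqrt(2)) * Eprime * delta^2,
           start = list(Eprime = 1e4))
E_cell <- coef(fit)[["Eprime"]] * (1 - 0.5^2)  # E' -> E_cell with mu_cell = 0.5
E_cell                                         # apparent Young's modulus (Pa)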
Lipid extraction

Lipid extraction was performed from 50 mL of algal cell monoculture at the stationary growth phase. The sample was filtered through a pre-fired (450 °C/5 h) 0.7 µm Whatman GF/F filter. Extraction was performed using a modified one-phase solvent mixture of dichloromethane-methanol-water (Bligh and Dyer 1959): 10 mL of the one-phase solvent mixture dichloromethane/methanol/deionized water (1:2:0.8 v/v/v) and 5-8 µg of the standard methyl stearate (to estimate recoveries in subsequent steps of sample analysis) were added to the cut filters. They were then ultrasonicated for 3 min, stored overnight in the refrigerator, filtered through a sinter funnel into a separatory funnel, washed again with 10 mL of the one-phase solvent mixture and then once with 10 mL of dichloromethane/0.73% NaCl solution (1:1 v/v) and finally with 10 mL of dichloromethane. Lipid extracts collected in dichloromethane were evaporated to dryness under nitrogen flow and dissolved in 34 to 54 µL of dichloromethane before analysis. All solvents were purchased from Merck Corporation (USA).

Results

Cell growth dynamics

Figure S1 shows the growth curves of the three selected algal monocultures (D. tertiolecta, T. suecica, C. closterium) studied at four selected salinities (9, 19, 27, and 38). The initial number of inoculated cells in the growth media was similar for all species studied, approximately 4.0 × 10⁴ cells mL⁻¹. All selected microalgae persisted in the salinity range from 9 to 38. The calculated growth rates and doubling times of the three algae in the exponential growth phase at the salinities studied are summarized in Table S1. All microalgae had the shortest doubling time and fastest growth at salinity of 9. Confocal images of algal cells grown at the corresponding salinities are shown in Figures S2-S4. Microscopic observations of D. tertiolecta at salinity of 27 showed no changes in autofluorescence or actin composition compared to cells grown at salinity of 38 (Figure S2a and b). At salinity of 19, D. tertiolecta cells built up an actin layer (Figure S2c), which was particularly pronounced at salinity of 9 (Figure S2d). In addition, as observed in the transmitted light channel, at salinity of 9, some cells lost their flagella and became rounder, and the actin layer appeared thicker than in the control. No changes in autofluorescence or actin composition were observed in T. suecica over the entire range of salinity examined (Figure S3a-d). However, as observed in the transmitted light images, T. suecica cells tend to lose their flagella as salinity decreases. This effect was most evident at salinities of 19 and 9, where almost all cells had lost their flagella and the detached flagella accumulated around the cells. The cells of C. closterium maintained both shape and autofluorescent properties throughout the salinity drop (Figure S4a-d). As observed in the transmitted light channel, droplets accumulated inside the cells with decreasing salinity, although no trend was noted.

Motility characterization

Qualitative insights into the movement of D. tertiolecta cells at selected salinities are shown in Figure S5. At salinity of 9, approximately 66% of the cells (109 cells) were stationary or oscillating around a center (Table S2.1), while the remainder (55 cells) exhibited considerable trajectories. At salinity of 19, a total of 68 cells were counted in the sample, of which about 57% (39 cells) were stationary. At salinity of 27, approximately 63% (41 cells) were stationary, while 37% (24 cells) exhibited considerable movement and were quantified. In contrast, at salinity of 38, most cells (79%) were in motion, while the minority (21%) were stationary. Box plots of cell speeds of D. tertiolecta are shown in Fig. 1a.
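The distribution tests applied to the motility data (see the Motility analysis section) can be sketched in R as follows; the data frame 'speeds' with columns 'speed' and 'salinity' is a hypothetical container for the tracked-cell speeds, not the study's actual object.

# Box plots of cell speed per salinity (cf. Fig. 1a).
boxplot(speed ~ salinity, data = speeds,
        xlab = "Salinity", ylab = "Cell speed (µm/s)")

# Shapiro-Wilk normality test within each salinity level.
by(speeds$speed, speeds$salinity, shapiro.test)

# Wilcoxon-Mann-Whitney comparison of one salinity pair (e.g., 9 vs. 38).
wilcox.test(speed ~ factor(salinity),
            data = subset(speeds, salinity %in% c(9, 38)))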
The median of the speed at salinity of 9 was 29 µm s⁻¹, while the medians at salinities of 19 and 27 were identical: 50 µm s⁻¹. At salinity of 38, the median speed was significantly higher: 75 µm s⁻¹. The Shapiro test for normality yielded p = 2.7 × 10⁻¹⁰, 1.5 × 10⁻⁹, 3.4 × 10⁻⁵, and 1.8 × 10⁻⁶, confirming that the density distributions of speeds were very far from normal. The Wilcoxon rank sum test showed that the density distributions of speeds were significantly different for cells grown at salinities of 9 and 19, 27 and 38, but not for cells grown at 19 and 27. Because the group of cells that were stationary or vibrating around the center exhibited significantly different motion than the group of cells that were moving, it was important to note the speeds of the moving cells. The average speeds at salinities of 9, 19, 27, and 38 were 74 µm s⁻¹, 103 µm s⁻¹, 77 µm s⁻¹, and 81 µm s⁻¹, respectively (Table S2.2a). Boxplots of the search radius of D. tertiolecta cells are shown in Fig. 1b. The median search radius at salinity of 9 was 2 µm, whereas the medians at salinities of 19 and 27 were 4 and 3 µm, respectively. At salinity of 38, the median search radius was significantly larger: 18 µm (Table S2.2b). The Shapiro test for normality yielded p = 2.2 × 10⁻¹⁶, 7.3 × 10⁻¹⁶, 2.1 × 10⁻¹³, and 2.2 × 10⁻¹⁶, respectively, confirming that all density distributions of the search radius were very far from normal. The Wilcoxon rank sum test showed that the density distributions of the search radius were significantly different for cells grown at salinities of 9 and 19, 27 and 38, but again not for cells grown at 19 and 27 (p = 0.88). The group of cells that moved consistently had an average search radius of 12 µm, 27 µm, 79 µm, and 72 µm at salinities of 9, 19, 27, and 38, respectively. In the same order of salinity, the linearity of motion was 0.1, 0.09, 0.32, and 0.45, respectively. Thus, the linearity was about the same at salinities of 9 and 19 and 3 to 4.5 times smaller than at 27 and 38.

Qualitative insights into the movement of T. suecica cells grown at selected salinities are shown in Figure S6. At salinity of 9, approximately 43% of cells (35 cells) were stationary or showed oscillatory movement in place (Table S3.1), whereas the majority of cells (46 cells) were clearly moving. At salinity of 19, a total of 124 cells were counted, with approximately 59% being stationary (73 cells). At salinity of 27, approximately 27% (26 cells) were stationary, while 73% (69 cells) showed significant movement and were quantified. In contrast, at salinity of 38, only 6% of cells were stationary, while 94% of cells moved vigorously. Box plots of cell speeds of T. suecica are shown in Fig. 2a. The median of the speed was 78 µm s⁻¹ at salinity of 9, while the medians were 50 µm s⁻¹ at salinity of 19 and 112 µm s⁻¹ at salinity of 27. At salinity of 38, the median speed was significantly higher: 201 µm s⁻¹ (Table S3.2a). The Shapiro test for normality yielded p = 4 × 10⁻⁴, 1.8 × 10⁻¹⁰, 0.07, and 8.8 × 10⁻⁵, respectively. Only the density distribution at salinity of 27 did not deviate significantly from normality. The Wilcoxon rank sum test showed that the density distributions of speed were significantly different from each other, except for the distributions at salinities of 9 and 27, for which p = 0.05 was determined.
The group of uniformly moving cells grown at salinities of 9, 19, 27, and 38 had average speeds of 124 µm s⁻¹, 155 µm s⁻¹, 137 µm s⁻¹, and 201 µm s⁻¹, respectively (Table S3.2). Boxplots of the search radius of T. suecica cells are shown in Fig. 2b. The median search radius at salinity of 9 was 6 µm, while the medians at salinities of 19 and 27 were 4 and 12 µm, respectively. At salinity of 38, the median search radius was an order of magnitude larger: 102 µm (Table S3.2b). The Shapiro test for normality yielded p = 4.6 × 10⁻¹⁶, 2.2 × 10⁻¹⁶, 6 × 10⁻¹⁶, and 1.4 × 10⁻¹⁵, respectively, confirming that all density distributions of the search radius were very far from normal. The Wilcoxon rank sum test showed that the density distributions of the search radii were significantly different from each other. The differences between the search radius distributions of cells grown at salinities of 9 and 19 (p = 0.027) and of cells grown at salinities of 9 and 27 (p = 0.22) were the smallest. The group of significantly moving cells grown at salinities of 9, 19, 27, and 38 had an average search radius of 57 µm, 121 µm, 61 µm, and 174 µm, respectively. The linearity of moving cells with increasing salinity was 0.26, 0.58, 0.19, and 0.55, respectively, and reached the highest values at salinities of 19 and 38.

Electrochemical characterization of algal cells and released surface-active organic matter

The chronoamperometric curves for oxygen reduction recorded in the cell suspension of D. tertiolecta in seawater at a potential of −400 mV showed signals attributable to the adhesion of single cells to the charged interface (Fig. 3a). The dependence of the signal frequency of D. tertiolecta grown at different salinities on the applied potentials is shown in Fig. 3b. The potential range of cell adhesion was defined by the critical potentials of adhesion at the positively (Ec+) and negatively (Ec−) charged interface. The most negative and the most positive potentials at which at least one amperometric signal occurs per 10 consecutive I-t curves correspond to the critical potentials (Žutić et al. 1993; Ivošević et al. 1994). The narrowest potential range of adhesion was recorded in the D. tertiolecta cell culture grown at salinity of 9, characterized by critical potentials of −140 mV and −990 mV in seawater, while the widest potential range of cell adhesion was recorded in the cell suspension grown at salinity of 38, from −110 to −1240 mV, corresponding to favorable growth conditions. The frequency of amperometric signals increased with decreasing salinity due to the lower ionic strength of the medium, which was reflected in an increase in the oxygen reduction current, thus enhancing the amperometric signals. The maximum number of amperometric signals occurred at a potential of −400 mV for all four salinities studied, as the interfacial tension there is close to the maximum value (electrocapillary maximum). At a potential of −400 mV, the mercury electrode was positively charged and there was an electrostatic attraction between the positively charged interface and the negatively charged D. tertiolecta cells. By changing the potential in either direction, the interfacial tension decreased and the number of amperometric signals from the cells decreased accordingly. At a potential of −800 mV, the mercury was negatively charged and the signal frequency decreased due to electrostatic repulsion with the negatively charged D. tertiolecta cells. Conversely, the chronoamperometric curves recorded in the suspensions of T. suecica and C. closterium were perfectly regular because there was no adhesion to the charged liquid interface due to cell rigidity (Novosel and Ivošević DeNardis 2021).
The released surface-active organic matter in the growth medium was characterized by recording polarograms (current-potential curves) of Hg(II), whose maximum is proportional to the surfactant activity in the sample. The surfactant activity of the sample corresponded to a quantitative measure of the physiological activity of the cells in the growth medium. The data showed that the surfactant activity of the cells gradually decreased as follows: T. suecica (19) > D. tertiolecta (9) > D. tertiolecta (19) ∼ T. suecica (27) (Fig. 3c).

Nanoscale imaging of algal cells and released extracellular biopolymers

Nanoscale imaging of single algal cells was performed at salinities of 9, 19, 27, and 38 (control). Regardless of salinity, all three species retained the same general cell shape. Dunaliella tertiolecta cells grown at all salinities tested had an ovoid shape with two flexible flagella. Tetraselmis suecica cells had an ellipsoidal shape at all salinities tested, and the cell surface had granular structures corresponding to micropearls (Novosel et al. 2021). Most of the cells of T. suecica grown at salinity of 38 had flagella, and only half of the cells grown at salinity of 27 had flagella, whereas the cells grown at salinities of 9 and 19 had no flagella. The cells of C. closterium grown at all salinities tested had an elongated shape with flexible rostrae that could be clearly distinguished from the central part of the cell. Three morphologically distinct parts could be distinguished on the cell: the girdle band, the valve, and the raphe (Pletikapić et al. 2012; Novosel et al. 2021). Based on AFM image analysis, the morphological parameters (length, width, height, and surface roughness) of cells grown at the selected salinities are summarized in Table S4. The cells of D. tertiolecta and T. suecica were largest at salinity of 38. The sizes of both D. tertiolecta and T. suecica grown at salinities of 9, 19, and 27 were similar and smaller than those of cells grown at salinity of 38. The roughness of the D. tertiolecta cell surface was highest at salinity of 38 and similar at salinities of 9, 19, and 27. The roughness of the cell surface of T. suecica was similar at all tested salinities. The length, width, and height ranges of C. closterium grown at salinities of 9, 19, and 27 were similar and greater than those of cells grown at salinity of 38. The supramolecular organization of the released extracellular polymers (EPS) of D. tertiolecta, T. suecica, and C. closterium at the selected salinities is shown in Fig. 4. Around the cells of D. tertiolecta grown at salinities of 38 and 27, only globules and no fibrils or fibrillar networks were observed. Globules and some single fibrils were observed around the cells grown at salinity of 19. Around the cells grown at salinity of 9, a material consisting of globules, single fibrils, and flat smooth structures was noted (Fig. 4a, b). The extracellular biopolymers around T. suecica grown at salinities of 9, 19, and 27 were in the form of dense fibrillar networks located all around the cells, whereas no fibrillar material was observed at the control salinity of 38 (Fig. 4c, d). The fibrils that formed the network ranged in height from 5 to 50 nm, with the highest network density found around the cells at salinity of 19 (Figure S7). The extracellular biopolymers of C. closterium were in the form of single fibrils, locally cross-linked fibrils, and globules.
For C. closterium grown at salinities of 9 and 19, a denser EPS material around the cells was noted and the fibrils exhibited a higher degree of cross-linking (Fig. 4e, f), whereas C. closterium grown at salinity of 27 had a lower degree of cross-linked fibrils near the cells, while single fibrils were mostly observed further from the cells.

Nanomechanical characterization of algal cells by AFM

The local elastic properties (E) of the algal cells were quantified using the apparent Young's modulus calculated for the maximum indentation depth. At salinity of 38 (control), the cells of D. tertiolecta were characterized by the lowest E values. The cells of T. suecica were stiffer, while the local E values of C. closterium could reach up to several MPa. The difference in the mechanical response of these cells to compression could be due to differences in cell morphology. The cells of T. suecica are surrounded by a close-fitting theca of fused organic scales. The cells of C. closterium contain stiff chloroplasts in the girdle band, and the cells of D. tertiolecta are covered only by the thick plasma membrane (Oliveira et al. 1980; Medlin and Kaczmarska 2004). Figure 5 shows the overlay of the box plots and Young's modulus distributions obtained for the algal cells of D. tertiolecta, T. suecica, and C. closterium cultured at salinities of 9, 19, 27, and 38, respectively (a box with whiskers represents the median ± interquartile range (Q3-Q1); statistical significance was obtained from the Kruskal-Wallis ANOVA test at the level of 0.05; *** p < 0.001; ns, not statistically significant). Because the distributions of Young's moduli are broad and not symmetrical, we compared the median values of the cell populations studied. The median was accompanied by an interquartile range (IQR), which describes where the central 50% of the data lie (median (IQR)). Statistical significance was determined using the Kruskal-Wallis ANOVA test (p < 0.05) to confirm differences between groups. Regardless of algal species, decreasing salinity increased the apparent Young's modulus (Fig. 5). The statistical significance for all groups was less than 0.0001 at the 0.05 level (Kruskal-Wallis ANOVA test) compared to the control group. Median (IQR) values obtained for D. tertiolecta cells increased from 3.5 kPa (3.2 kPa) at salinity of 38 to 8.6 kPa (7.6 kPa) at 27, 6.6 kPa (4.8 kPa) at 19, and 8.1 kPa (4.4 kPa) at 9, respectively. Statistical significance (p < 0.05) was also found for the measurements at salinities of 9 and 19, and at 19 and 27, whereas no statistically significant difference was found at salinities of 9 and 27. A similar trend of elastic modulus changes was observed in T. suecica cells. The cells were stiffer at low salinities, and E (27) > E (9) > E (19). The corresponding medians were 138 kPa (190 kPa), 151 kPa (223 kPa), and 115 kPa (198 kPa), respectively. The value of E determined for T. suecica at salinity of 38 was 46 kPa (80 kPa). Weaker statistical significance was found between algal cells cultured at salinities of 9 and 19 (p = 0.0011) and at 19 and 27 (p = 0.0019). There was no statistical significance between cells of T. suecica cultured at salinities of 9 and 27.
For C. closterium, the salinity stress increased the Young's modulus from 215 kPa (436 kPa) under normal conditions to 362 kPa (742 kPa) at salinity of 27. A further decrease in salinity to 19 and 9 was accompanied by an increase in E to 553 kPa (532 kPa) and 537 kPa (759 kPa), respectively. The p value for the 19 and 9 groups and the 27 and 19 groups of C. closterium cells was less than 0.05, while the p value determined for the 9 and 27 groups was less than 0.001.

Adhesive and hydrophobic properties of algal cells

The adhesive properties of algal cells grown at different salinities, quantified by the maximum work of adhesion (W_adh) on chemically modified probes, were studied. Cells were indented with hydrophilic (OTS−) and hydrophobic (OTS+) AFM probes. The change in the hydrophobic properties (ΔW_adh) of the algal cell surface was determined by subtracting the work of adhesion determined for bare and OTS-coated cantilevers (Table S5): ΔW_adh = W_adh(no OTS) − W_adh(OTS). A positive value of ΔW_adh indicated that hydrophilic interactions dominated, while negative values indicated that hydrophobicity outweighed the hydrophilicity. The mean values of the maximum work of adhesion (± standard error of the mean) obtained from measurements with bare and OTS-coated cantilevers are shown in Table S5. We hypothesized that algal cells change their adhesion properties under salinity stress. Indeed, the hydrophobic properties of the cells changed depending on the salinity studied (Fig. 6), and these changes were species-dependent. The resulting chemical properties of the microalgae are shown in Figure S8. The data show that in the case of D. tertiolecta, the hydrophobic and hydrophilic properties were rather balanced, being characterized by low values of ΔW_adh (see Table S5). Only at salinities of 9 and 27 did the cells become slightly more hydrophobic, with ΔW_adh of −0.018 ± 0.014 fJ and 0.047 ± 0.012 fJ, respectively. More pronounced salinity-dependent changes in surface properties were observed in T. suecica and C. closterium. At salinities of 38 and 27, the ΔW_adh of T. suecica was close to zero. A further decrease in salinity resulted in high negative ΔW_adh values. In addition, we observed a significant decrease in the probability of adhesion to the bare AFM probe (P_noOTS), accompanied by an increase in P_OTS (see Figure S8 and Table S5). While the overall surface properties of T. suecica changed from balanced to hydrophobic with decreasing salinity, in the case of C. closterium we found that salinity can act as a trigger between hydrophobic and hydrophilic surface properties. At salinity of 38, C. closterium cells had a hydrophilic surface (ΔW_adh = 0.0583 ± 0.0072 fJ). When the salinity decreased to 27, the ΔW_adh increased to 0.163 ± 0.085 fJ. Moreover, when the salinity decreased to 19, the relationship between the surface properties of the algal cells changed, and the cells of C. closterium became highly hydrophobic. At salinity of 9, the resulting work of adhesion was still negative. Moreover, the results of adhesion probability showed an affinity to hydrophilic materials depending on the structure and properties of the microalgal species (see Figure S8). Dunaliella tertiolecta showed a very low adhesion probability to the hydrophilic surfaces, in contrast to the cells of C. closterium, which preferentially adhered to the hydrophilic probe throughout the salinities studied. In the case of T. suecica, at salinities of 27 and 38, P_noOTS > P_OTS, while at salinities of 9 and 19, P_noOTS << P_OTS.
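The hydrophobicity index used above is a simple difference of mean adhesion works measured with the two probe chemistries. A minimal R sketch of that bookkeeping, with hypothetical per-curve values in place of the measured data:

# Hypothetical per-curve maximum work of adhesion (fJ).
W_noOTS <- c(0.06, 0.05, 0.07, 0.04)   # bare (hydrophilic) probe
W_OTS   <- c(0.01, 0.02, 0.01, 0.02)   # OTS-coated (hydrophobic) probe

# Degree of hydrophobicity: positive values -> hydrophilic interactions
# dominate; negative values -> hydrophobic interactions dominate.
dW_adh <- mean(W_noOTS) - mean(W_OTS)
se     <- sqrt(var(W_noOTS) / length(W_noOTS) + var(W_OTS) / length(W_OTS))
c(delta_W_adh = dW_adh, standard_error = se)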
Lipid characterization of algal cells

The changes in the cellular content of membrane lipids (ST, GL, and PL) caused by the decrease in salinity in the microalgal cell cultures of D. tertiolecta, T. suecica, and C. closterium are shown in Fig. 7 (D. tertiolecta: a, d, g, and j; T. suecica: b, e, h, and k; C. closterium: c, f, i, and l; total membrane lipids (a-c), sterols (ST) (d-f), glycolipids (GL) (g-i), and phospholipids (PL) (j-l); data are shown as mean ± SD). In general, the smallest changes in total membrane lipids and lipid classes with decreasing salinity were observed in D. tertiolecta (Fig. 7a, d, g, and j). In T. suecica, decreasing salinity resulted in a decrease in the cellular content of total membrane lipids, with a decrease in GL followed by an increase in PL. In the microalga C. closterium, a decrease in total membrane lipid concentration, GL, and PL was observed with a decrease in salinity from 38 to 9, but without a particular trend of change. In parallel, a statistically significant (p < 0.05) increase in the cellular content of sterols was observed with the decrease in salinity in all three microalgae. At salinity of 9, the sterol content in the diatom C. closterium was increased 3.8-fold compared to salinity of 38. However, the highest cellular sterol content was observed in T. suecica (Fig. 7e).

Discussion

We performed a comprehensive biophysical characterization at the level of a single algal cell to maintain structural integrity and clarify the poorly understood relationships between the chemical composition and mechanical properties of microalgae grown under selected salinity conditions. Three morphologically distinct marine microalgal species, D. tertiolecta, T. suecica, and C. closterium, were grown under four salinity conditions to mimic the broad salinity range in marine systems. We used a workflow that included chemical characterization (membrane lipids, hydrophobicity), nanomechanical characterization (stiffness), and behavioral characterization (physiological activity, motility, and adhesion to an interface) of microalgae in the stationary growth phase. The responses of microalgae subjected to different salinities may also reflect cellular stress resulting from the short-term salinity stress imposed on the cells, which could trigger various morphological and biochemical changes (Borowitzka 2018b). Microalgae respond to abiotic stress through numerous mechanisms, including the adaptation of lipid composition and quantity to the new conditions. In response to various external stimuli, many microalgal species have evolved the ability to efficiently modify lipid metabolism by switching between nonpolar storage lipids (Thompson 1996; Guschina and Harwood 2006) and polar structural lipids. Storage lipids, composed mainly of polyunsaturated fatty acids, are important for maintaining the spontaneous curvature and flexural rigidity of membranes (De Carvalho and Caramujo 2018), while structural lipids are responsible for membrane fluidity, cell signaling pathways, and the response to changes in the cellular environment (Aratboni et al. 2019; Rogowska and Szakiel 2020). Sterols are integral nonpolar components of eukaryotic membranes where, together with phospholipids, they regulate membrane permeability and fluidity and play an important role in sensing osmotic changes (Zelazny et al. 1995).
suecica were found to have the highest sterol production, in contrast to those of D. tertiolecta. In all three microalgae studied, the total cellular concentration of ST increased with decreasing salinity, indicating decreased membrane fluidity and increased cell hydrophobicity (Figs. 5 and 7). In contrast to the effects of decreasing salinity examined in our study, most studies of microalgal salinity acclimation have examined the effects of higher-than-optimal salinity on lipid metabolism. Growth of D. tertiolecta and Dunaliella salina at salinities above that of seawater resulted in a decrease in total sterol yield (Francavilla et al. 2010). The profile of microalgal membrane lipids is species- and abiotic-stressor-specific (Ahmed et al. 2015; Novak et al. 2019). While changes in total PL and GL were observed in T. suecica and C. closterium, D. tertiolecta only slightly altered its cellular membrane lipid content. As observed in our concurrent study, the minimal but statistically significant lipid remodeling in D. tertiolecta in response to low salinity (3) is attributed to the fact that D. tertiolecta is genetically adapted to large salinity fluctuations through its polar lipid composition (Vrana et al. under revision). The fact that no linear response to salinity reduction was observed in T. suecica and C. closterium may indicate a change in nutrient and light availability for growth at different salinities. Similarly, no trend in total lipid content was observed in T. suecica growing in a salinity range of 15-90 g L−1 (Venckus et al. 2021). As a euryhaline species, D. tertiolecta largely maintained its membrane lipid content and hydrophobicity across the studied salinity range. When salinity decreased to 9, Dunaliella cells became stiffer, which was accompanied by an increase in sterols, the formation of a thick actin layer, and pronounced physiological activity in the form of globular structures, which could indicate the transition of cells into the palmella stage (Wei et al. 2017). Salinity-induced changes in cell stiffness would consequently affect cell adhesion behavior at the interface. The adhesion of D. tertiolecta cells cultured at salinity of 9 occurred over a narrower potential range, suggesting that the cells were stiffer and more hydrophobic than those cultured at salinity of 38 (Fig. 3b). These observations were consistent with the nanomechanical characterization of Dunaliella cells at the selected salinities. Aging of Dunaliella cells also leads to changes affecting cell stiffness, hydrophobicity, and adhesion behavior, which may be related to molecular modification of the cell envelope (Pillet et al. 2019). The second species, the calcite-encrusted thecate T. suecica, showed the most pronounced salinity-related changes, which could indicate the formation of a cyst stage. At salinity of 9, the cells exhibited loss of flagella and the highest cell stiffness and hydrophobicity, accompanied by an increase in ST content and a decrease in GL content, which affected the distribution of surface charges and accordingly changed the adhesive properties of the microalga (Figs. 6 and 7, Fig. S8). At salinity of 19, T. suecica showed the most pronounced physiological activity among all the species studied, in the form of a dense EPS network (Figs. 3c and 4, Fig. S7), which was accompanied by a decrease in total membrane lipid content (Fig. 7b).
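As a concrete illustration of the ΔW_adh sign convention applied throughout these comparisons, a minimal sketch (the numerical values are placeholders, not the measured data in Table S5):

    def delta_w_adh(w_bare_fJ, w_ots_fJ):
        # Delta W_adh = W_adh(no OTS) - W_adh(OTS), both in femtojoules.
        return w_bare_fJ - w_ots_fJ

    def surface_character(dw_fJ):
        # Positive: adhesion to the bare (hydrophilic) probe dominates
        # -> the cell surface behaves hydrophilically.
        # Negative: adhesion to the OTS-coated (hydrophobic) probe dominates
        # -> the cell surface behaves hydrophobically.
        if dw_fJ > 0:
            return "hydrophilic"
        if dw_fJ < 0:
            return "hydrophobic"
        return "balanced"

    dw = delta_w_adh(w_bare_fJ=0.05, w_ots_fJ=0.12)  # placeholder values
    print(round(dw, 3), surface_character(dw))       # -0.07 hydrophobic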
Variation in salinity plays a fundamental role in EPS production, as it exerts oxidative stress on cells and affects the cellular ion balance (Guzman-Murillo and Ascencio 2001; Parra-Riofrío et al. 2020). At lower salinity, cellular ion concentrations increased and the ion ratios remained constant, whereas at salinity higher than 20, the ion ratios became variable (Kirst 1990), affecting the amount of EPS produced by T. suecica and its adaptation mechanisms to osmotic stress (Guzman-Murillo and Ascencio 2001). Cylindrotheca closterium, a species enclosed within an organosilicate frustule, showed good adaptation to a wide range of salinity (Glaser and Karsten 2020). At the lower salinities of 9 and 19, the cells of C. closterium showed hydrophobic behavior, whereas at the higher salinities of 27 and 38, they showed a hydrophilic character accompanied by physiological activity (Figs. 3c and 6). This result could suggest that the observed salinity-induced transition of C. closterium cell properties from hydrophobic to hydrophilic might be related to the amount and ratio of sulfated (sPS) and carboxylated (cPS) polysaccharides, as has been reported for the adaptation of marine species to high-salinity environments (Aquino et al. 2011; Arata et al. 2017). In this way, EPS composition largely determines the adhesiveness of algal cells (Xiao and Zheng 2016). Whether a microalgal cell behaves hydrophobically or hydrophilically determines its ecological role in marine systems, that is, whether it lives in the benthos and/or forms colonies or lives as plankton (Griffiths and Harrison 2009; Ozkan and Berberoglu 2013; Novosel et al. 2021). At the nanometer level, the morphology of the microalgal cells did not show any specific changes, except for the change in cell size and the loss of flagella in the cells of T. suecica. The cells of D. tertiolecta and T. suecica grown at lower salinities were smaller than those grown at a salinity of 38. These results are consistent with the commonly reported observation that phytoplankton cell size decreases not only with increasing temperature (Atkinson et al. 2003) but also with decreasing salinity (Litchman et al. 2009). Fu et al. (2014) showed that the cell volume of D. salina fluctuated continuously for 10 days as a result of high salinity stress, eventually stabilizing at a slightly larger cell size compared to unstressed conditions. The loss of flagella in the cells of T. suecica grown at lower salinity could indicate the development of a cyst stage, as has been reported for other flagellated microalgae (Borowitzka and Siva 2007; Ma et al. 2012; Wei et al. 2017; Shetty et al. 2019; Hyung et al. 2021). On the other hand, the cells of C. closterium were larger compared to cells grown at salinity of 38, presumably related not only to salinity but also to the species' life cycle, in which the maximal cell size is attained by initial cells sprouting from fully grown auxospores (Vanormelingen et al. 2013). Reducing the salinity from 38 to 9 resulted in a much higher secretion of biopolymers in all species, which allowed the stressed cells to survive under unfavorable conditions. The extracellular biopolymers secreted by stressed D. tertiolecta cells changed their supramolecular structure with decreasing salinity, i.e., the spherical structures observed at salinity of 38 changed to fibrillar structures at salinity of 9. Fibrillar structures were also present when cells were grown at a lower temperature (Novosel et al. 2021). The greatest difference in EPS organization was observed in T. suecica cells.
Cells grown at salinities of 9, 19, and 27 released fibrillar networks, with the highest network density observed at salinity of 19 (Figure S7), similar to the dense EPS network formed by C. closterium cells at 30 °C. In contrast, only spherical material was present around T. suecica cells exposed to temperature stress (Novosel et al. 2021). At the population level, rapid and quantitative high-throughput analysis of several hundred cells allowed the characterization of microalgal motility behavior (Novosel et al. 2020). Microalgal motility behavior depends on the complexity of the flagellar system. For example, biflagellated Dunaliella cells moved at about one-third the speed of the tetraflagellated T. suecica (Figs. 1 and 2). In both species, cell motility depended on salinity. At salinity of 9, a population of vibrating cells predominated, moving around a fixed spot at minimal cell speed. When the salinity was increased to 38, the number of motile cells increased, and the cell speed and search radius increased accordingly. At salinity of 38, both flagellated species showed brisk movement, at 6 to 13 body lengths per second. As cell motility can be influenced by cell physiological activity (Mayali et al. 2008), the pronounced physiological activity detected in T. suecica cell cultures at salinity of 19 could interfere with cell speed and search radius (Figs. 2, 3c, and 4c). To place this study in a broader context, we compared the influence of individual abiotic stressors, temperature versus salinity, as the main environmental indicators of climate change, on the adaptive response of the selected microalgae in the stationary growth phase. A color wheel illustrating the chemical, mechanical, and behavioral changes in terms of hydrophobicity, stiffness, EPS production, and motility of the microalgae when exposed to a temperature maximum (Novosel et al. 2021) and a salinity minimum is shown in Fig. 8. Our results showed that the adaptive response of microalgae is species-specific and stressor-specific. A decrease in salinity triggered profound chemical, mechanical, and behavioral responses in the studied microalgal cells. All three selected species became stiffer and behaved hydrophobically, while differing in physiological activity. Although the cells of T. suecica are enclosed in a calcite-coated theca, they appeared to be sensitive to hyposaline conditions, as indicated by their having the highest hydrophobicity and physiological activity. In contrast, temperature did not elicit a major adaptive response in T. suecica, demonstrating temperature tolerance. The green alga Dunaliella, which is surrounded by a glycocalyx layer, showed a profound chemical and mechanical response under both stressors, consistent with the extremophilic nature of Dunaliella. In contrast, the pennate diatom C. closterium, enclosed in an organosilicate frustule, maintained nearly constant hydrophobicity and EPS production regardless of the stressor studied. Such environmental adaptation of diatom cells derives from long-standing evolutionary advances in genetic, physiological, and morphological traits (Falkowski et al. 2004; Armbrust 2009). Our results, based on a study conducted in the laboratory, showed that the adaptive response of algae to changes in abiotic stressors can be identified and quantified.
In this way, the present fundamental study may help in understanding how salinity controls the diversity, structure, and function of microbial communities in aquatic systems, with only those microorganisms that have successfully adapted to salinity being able to survive the ongoing and projected salinity fluctuations (Baek et al. 2011; Triadó-Margarit and Casamayor 2012; Bautista-Chamizo et al. 2018). Conclusion We investigated the salinity-induced adaptive response of two green microalgae and a diatom in terms of chemical, mechanical, and behavioral changes. Our results showed that the adaptive response of microalgae is species- and salinity-specific. Although covered only with a glycocalyx coat, the cells of D. tertiolecta adapted to a wide range of salinity levels without significant changes in membrane lipids and hydrophobicity, confirming their euryhaline nature. Dunaliella tertiolecta responded to a decrease in salinity to 9 with the formation of a thick actin layer, increases in cell stiffness, sterol content, and physiological activity, and a decrease in motility, likely leading to the formation of a palmella stage. The cells of T. suecica proved to be sensitive to salinity fluctuations, despite being surrounded by a calcite-encrusted theca. In particular, a decrease in salinity to 19 in T. suecica resulted in growth reduction, loss of flagella, a decrease in motility, increases in cell stiffness, hydrophobicity, and sterol content, and the formation of a dense EPS network with a concomitant decrease in membrane lipids, which might indicate the progression of cells into the cyst stage. Cylindrotheca closterium, enclosed in an organosilicate frustule, proved to be tolerant to a decrease in salinity. At salinities of 9 and 19, C. closterium became stiffer and hydrophobic, whereas at salinities of 27 and 38, it was softer and hydrophilic, which might be related to a molecular change in the released biopolymers. [Fig. 8: Comparison of the adaptive responses of microalgae triggered by individual stressors (temperature of 30 °C vs. salinity of 9). Temperature-induced adaptive responses of microalgae have been described in detail (Novosel et al. 2021). Color shading refers to the response of the microalgae compared to control conditions (temperature of 18 °C and salinity of 38).] This comprehensive study provided a fundamental biophysical understanding of the adaptive mechanisms of individual algal species to changing salinity. Such understanding offers a basis for deciphering the complex interactions of abiotic stressors acting on microalgae in marine systems. The abovementioned changes in microalgal responses show that cell surface properties and behavior can be considered markers of stress in marine communities and could be used to predict the effects of climate change on aquatic communities.
2022-04-08T15:09:41.120Z
2022-04-07T00:00:00.000
{ "year": 2022, "sha1": "64b381acaeabe261d572d76849c37a4fc430ebc3", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10811-022-02734-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "57de93bf7614b2ac2847add956a9314b0d0c5805", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
6183421
pes2o/s2orc
v3-fos-license
Chart Pruning for Fast Lexicalised-Grammar Parsing Given the increasing need to process massive amounts of textual data, efficiency of NLP tools is becoming a pressing concern. Parsers based on lexicalised grammar formalisms, such as TAG and CCG, can be made more efficient using supertagging, which for CCG is so effective that every derivation consistent with the supertagger output can be stored in a packed chart. However, wide-coverage CCG parsers still produce a very large number of derivations for typical newspaper or Wikipedia sentences. In this paper we investigate two forms of chart pruning, and develop a novel method for pruning complete cells in a parse chart. The result is a wide-coverage CCG parser that can process almost 100 sentences per second, with little or no loss in accuracy over the baseline with no pruning. Introduction Many NLP tasks and applications require the processing of massive amounts of textual data. For example, knowledge acquisition efforts can involve processing billions of words of text (Curran, 2004). Also, the increasing need to process large amounts of web data places an efficiency demand on existing NLP tools. TextRunner, for example, is a system that performs open information extraction on the web (Lin et al., 2009). However, the text processing that is performed by TextRunner, in particular the parsing, is rudimentary: finite-state shallow parsing technology that is now decades old. TextRunner uses this technology largely for efficiency reasons. Many of the popular wide-coverage parsers available today operate at around one newspaper sentence per second (Collins, 1999; Charniak, 2000; Petrov and Klein, 2007). There are dependency parsers that operate orders of magnitude faster, by exploiting the fact that accurate dependency parsing can be achieved using a shift-reduce linear-time process which makes a single decision at each point in the parsing process (Nivre and Scholz, 2004). In this paper we focus on the Combinatory Categorial Grammar (CCG) parser of Clark and Curran (2007). One advantage of the CCG parser is that it is able to assign rich structural descriptions to sentences, in a variety of representations, e.g. CCG derivations, CCG dependency structures, grammatical relations (Carroll et al., 1998), and first-order logical forms (Bos et al., 2004). One of the properties of the grammar formalism is that it is lexicalised, associating CCG lexical categories, or CCG supertags, with the words in a sentence (Steedman, 2000). Clark and Curran (2004) adapt the technique of supertagging (Bangalore and Joshi, 1999) to CCG, using a standard maximum entropy tagger to assign small sets of supertags to each word. The reduction in ambiguity resulting from the supertagging stage results in a surprisingly efficient parser, given the rich structural output, operating at tens of newspaper sentences per second. In this paper we demonstrate that the CCG parser can be made more than twice as fast, with little or no loss in accuracy. A noteworthy feature of the CCG parser is that, after the supertagging stage, the parser builds a complete packed chart, storing all derivations consistent with the assigned supertags and the parser's CCG combinatory rules, with no chart pruning whatsoever.
The use of chart pruning techniques, typically some form of beam search, is essential for practical parsing using Penn Treebank parsers (Collins, 1999; Petrov and Klein, 2007; Charniak and Johnson, 2005), as well as practical parsers based on linguistic formalisms, such as HPSG (Ninomiya et al., 2005) and LFG (Kaplan et al., 2004). However, in the CCG case, the use of the supertagger means that enough ambiguity has already been resolved to allow the complete chart to be represented. Despite the effectiveness of the supertagging stage, the number of derivations stored in a packed chart can still be enormous for typical newspaper sentences. Hence it is an obvious question whether chart pruning techniques can be profitably applied to the CCG parser. Some previous work (Djordjevic et al., 2007) has investigated this question but with little success. In this paper we investigate two types of chart pruning: a standard beam search, similar to that used in the Collins parser (Collins, 1999), and a more aggressive strategy in which complete cells are pruned, following Roark and Hollingshead (2009). Roark and Hollingshead use a finite-state tagger to decide which words in a sentence can end or begin constituents, from which whole cells in the chart can be removed. We develop a novel extension to this approach, in which a tagger is trained to infer the maximum length constituent that can begin or end at a particular word. These lengths can then be used in a more aggressive pruning strategy which we show to be significantly more effective than the basic approach. Both beam search and cell pruning are highly effective, with the resulting CCG parser able to process almost 100 sentences per second using a single CPU, for both newspaper and Wikipedia data, with little or no loss in accuracy. The CCG Parser The parser is described in detail in Clark and Curran (2007). It is based on CCGbank, a CCG version of the Penn Treebank developed by Hockenmaier and Steedman (2007). The stages in the parsing pipeline are as follows. First, a POS tagger assigns a single POS tag to each word in a sentence. Second, a CCG supertagger assigns lexical categories to the words in the sentence. Third, the parsing stage combines the categories, using CCG's combinatory rules, and builds a packed chart representation containing all the derivations which can be built from the lexical categories. Finally, the Viterbi algorithm finds the highest scoring derivation from the packed chart, using the normal-form log-linear model described in Clark and Curran (2007). Sometimes the parser is unable to build an analysis which spans the whole sentence. When this happens the parser and supertagger interact using the adaptive supertagging strategy described in Clark and Curran (2004): the parser effectively asks the supertagger to provide more lexical categories for each word. This potentially continues for a number of iterations until the parser does create a spanning analysis, or else it gives up and moves to the next sentence. The parser uses the CKY algorithm (Kasami, 1965; Younger, 1967) described in Steedman (2000) to create a packed chart. The CKY algorithm applies naturally to CCG since the grammar is binary. It builds the chart bottom-up, starting with two-word constituents (assuming the supertagging phase has been completed), incrementally increasing the span until the whole sentence is covered.
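A minimal sketch of this bottom-up CKY loop over a binary grammar (the cell representation and the combine() rule function are schematic placeholders; the actual parser additionally packs equivalent constituents and attaches model scores):

    from collections import defaultdict

    def cky(supertags, combine):
        # supertags: one set of lexical categories per word (supertagger output).
        # combine(left, right): set of categories licensed by the binary rules.
        n = len(supertags)
        chart = defaultdict(set)
        for i, cats in enumerate(supertags):
            chart[(i, 1)] = set(cats)          # span-1 cells: lexical categories
        for span in range(2, n + 1):           # grow spans bottom-up
            for start in range(n - span + 1):
                for split in range(1, span):   # every left/right division
                    for left in chart[(start, split)]:
                        for right in chart[(start + split, span - split)]:
                            chart[(start, span)] |= combine(left, right)
        return chart                           # chart[(start, span)] = categories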
The chart is packed in the standard sense that any two equivalent constituents created during the parsing process are placed in the same equivalence class, with pointers to the children used in the creation. Equivalence is defined in terms of the category and head of the constituent, to enable the Viterbi algorithm to efficiently find the highest scoring derivation. A textbook treatment of CKY applied to statistical parsing is given in Jurafsky and Martin (2000). Data and Evaluation Metrics We performed efficiency and accuracy tests on newspaper and Wikipedia data. For the newspaper data, we used the standard test sections from CCGbank. Following Clark and Curran (2007) we used the CCG dependencies for accuracy evaluation, comparing those output by the parser with the gold-standard dependencies in CCGbank. Unlike Clark and Curran, we calculated recall scores over all sentences, including those for which the parser did not find an analysis. For the WSJ data the parser fails on a small number of sentences (less than 1%), but the chart pruning has the effect of reducing this failure rate further, and we felt that this should be factored into the calculation of recall and hence F-score. In order to test the parser on Wikipedia text, we created two test sets. The first, Wiki 300, for testing accuracy, consists of 300 sentences manually annotated with grammatical relations (GRs) in the style of Briscoe and Carroll (2006). An example sentence is given in Figure 1. [Figure 1: Example GR annotation for the sentence "Seven hundred and sixty-one were made in total.": (ncmod num hundred 1 Seven 0) (conj and 2 sixty-one 3) (conj and 2 hundred 1) (dobj in 6 total 7) (ncmod made 5 in 6) (aux made 5 were 4) (ncsubj made 5 and 2 obj) (passive made 5)] The data was created by manually correcting the output of the parser on these sentences, with the annotation being performed by Clark and Rimell, including checks on a subset of these cases to ensure consistency across the two annotators. For the accuracy evaluation, we calculated precision, recall and balanced F-measure over the GRs in the standard way. For testing speed on Wikipedia, we used a corpus of 2500 randomly chosen sentences, Wiki 2500. For all speed tests we measured the number of sentences per second, using a single CPU and standard hardware. Beam Search The beam search approach used in our experiments prunes all constituents in a cell having scores below a multiple (β) of the score of the highest scoring constituent for that cell. The scores for a constituent are calculated using the same model used to find the highest scoring derivation. We consider two scores: the Viterbi score, which is the score of the highest scoring sub-derivation for that constituent; and the inside score, which is the sum over all sub-derivations for that constituent. We investigated the following: the trade-off between the aggressiveness of the beam search and accuracy; the comparison between the Viterbi and inside scores; and whether applying the beam to only certain cells in the chart can improve performance. Table 1 shows results on Section 00 of CCGbank, using the Viterbi score to prune. As expected, the parsing speed increases as the value of β increases, since more constituents are pruned with a higher β value. The pruning is effective, with a β value of 0.01 giving a 55% speed increase with negligible loss in accuracy. We also studied the effect of the beam search at different levels of the chart.
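A sketch of this per-cell beam (the cell representation is schematic; the real parser scores constituents under its normal-form log-linear model):

    def beam_prune_cell(cell, beta):
        # cell: constituents in one chart cell, mapped to their model scores
        # (e.g. Viterbi scores). Keep only constituents whose score is at
        # least beta times the best score in the cell.
        if not cell:
            return cell
        cutoff = beta * max(cell.values())
        return {const: score for const, score in cell.items() if score >= cutoff}

    # With beta = 0.01, any constituent scoring below one hundredth of the
    # best constituent in its cell is discarded:
    pruned = beam_prune_cell({"NP": 1.0, "S/S": 0.5, "N": 0.004}, beta=0.01)
    # -> {"NP": 1.0, "S/S": 0.5}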
We applied a selective beam in which pruning is only applied to constituents of length less than or equal to a threshold δ. For example, if δ = 20, pruning is applied only to constituents spanning 20 words or less. The results are shown in Table 2. The selective beam is also highly effective, showing speed gains over the baseline (which does not use a beam) with no loss in F-score. (An increase in F-score under pruning is possible for two reasons: one, pruning may leave the parser with a lower-scoring but more accurate derivation; and two, a possible increase in recall, discussed in Section 3, can lead to a higher F-score.) For a δ value of 50 the speed increase is 78% with no loss in accuracy. Note that for δ greater than 50, the speed reduces. We believe that this is due to the cost of calculating the beam scores and the reduced effectiveness of pruning for cells with longer spans (since pruning shorter constituents early in the chart-parsing process prevents the creation of many larger, low-scoring constituents later). Table 3 shows the comparison between the inside and Viterbi scores. The results are similar, with Viterbi marginally outperforming the inside score in most cases. The interesting result from these experiments is that the summing used in calculating the inside score does not improve performance over the max operator used by Viterbi. Table 4 gives results on Wikipedia text, compared with a number of sections from CCGbank. (Sections 02-21 provide the training data for the parser, which explains the high accuracy results on these sections.) Despite the fact that the pruning model is derived from CCGbank and based on WSJ text, the speed improvements for Wikipedia were even greater than for WSJ text, with parameters β = 0.005 and δ = 40 leading to almost a doubling of speed on the Wiki 2500 set, with the parser operating at 90 sentences per second. Cell Pruning Whole cells can be pruned from the chart by tagging words in a sentence. Roark and Hollingshead (2009) used a binary tagging approach to prune a CFG CKY chart, where tags are assigned to input words to indicate whether they can be the start or end of multiple-word constituents. We adapt their method to CCG chart pruning. We also show the limitation of binary tagging, and propose a novel tagging method which leads to increased speeds and accuracies over the binary taggers. Binary tagging Following Roark and Hollingshead (2009), we assign the binary begin and end tags separately using two independent taggers. Given the input "We like playing cards together", the pruning effects of each type of tag on the CKY chart are shown in Figure 2. In this chart, rows represent constituent sizes and columns represent the initial words of constituents. No cell in the first row of the chart is pruned, since these cells correspond to single words, and are necessary for finding a parse. The begin tag for the input word "cards" is 0, which means that it cannot begin a multi-word constituent. Therefore, no cell in column 4 can contain any constituent. The pruning effect of a binary begin tag is to cross out a column of chart cells (ignoring the first row) when the tag value is zero. Similarly, the end tag of the word "playing" is 0, which means that it cannot be the end of a multi-word constituent. Consequently cell (2, 2), which contains constituents for "like playing", and cell (1, 3), which contains constituents for "We like playing", must be empty. The pruning effect of a binary end tag is to cross out a diagonal of cells (ignoring the first row) when the tag value is zero.
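A sketch of how binary begin/end tags translate into pruned chart cells, using the (start word, size) cell indexing of the example above (tag values mirror the example and are otherwise illustrative):

    def prunable(start, size, begin_tag, end_tag):
        # start: 1-based index of the first word covered; size: span length.
        # begin_tag[i] == 0: word i cannot begin a multi-word constituent.
        # end_tag[j] == 0: word j cannot end a multi-word constituent.
        if size == 1:
            return False                       # first-row cells are never pruned
        end = start + size - 1                 # index of the last word covered
        return begin_tag[start] == 0 or end_tag[end] == 0

    # "We like playing cards together" (words 1..5):
    begin = {1: 1, 2: 1, 3: 1, 4: 0, 5: 1}    # "cards" cannot begin a constituent
    end   = {1: 1, 2: 1, 3: 0, 4: 1, 5: 1}    # "playing" cannot end a constituent
    assert prunable(2, 2, begin, end)          # cell (2, 2): "like playing"
    assert prunable(1, 3, begin, end)          # cell (1, 3): "We like playing"
    assert prunable(4, 2, begin, end)          # column 4: "cards together"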
We use a maximum entropy trigram tagger (Ratnaparkhi, 1996; Curran and Clark, 2003) to assign the begin and end tags. Features, based on the words and POS tags in a 5-word window plus the two previously assigned tags, are extracted from the trigram ending with the current tag and the 5-word window with the current word in the middle. In our development experiments, both the begin and the end taggers gave a per-word accuracy of around 96%, similar to the accuracy reported in Roark and Hollingshead (2009). Table 5 shows accuracy and speed results for the binary taggers on Section 00 of CCGbank. Using begin or end tags alone, the parser achieved speed increases with a small loss in accuracy. When both begin and end tags are applied, the parser achieved further speed increases, with no loss in accuracy compared to the end tag alone. Row "oracle" shows what happens with perfect begin and end taggers, obtained by using gold-standard constituent information from CCGbank. The F-score is higher, since the parser is being guided away from incorrect derivations, although the speed is no higher than when using automatically assigned tags. Level tagging A binary tag cannot take effect when there is any chart cell in the corresponding column or diagonal that contains constituents. For example, the begin tag for the word "card" in Figure 3 cannot be 0 because "card" begins a two-word constituent "card games". Hence none of the cells in the column can be pruned using the binary begin tag, even though all the cells from the third row upward are empty. We propose what we call a level tagging approach to address this problem. Instead of taking a binary value that indicates whether a whole column or diagonal of cells can be pruned, a level tag (begin or end) takes an integer value which indicates the row from which a column or diagonal can be pruned in the upward direction. For example, a level begin tag with value 2 allows the column of chart cells for the word "card" in Figure 3 to be pruned from the third row upwards. A level tag (begin or end) with value 1 prunes the corresponding column or diagonal from the second row upwards; it has the same pruning effect as a binary tag with value 0. For convenience, value 0 for a level tag means that the corresponding word can be the beginning or end of any constituent, which is the same as a binary tag value of 1. A comparison of the pruning effect of binary and level tags for the sentence "Playing card games is fun" is shown in Figure 4. With a level begin tag, more cells can be pruned from the column for "card". Therefore, level tags are potentially more powerful for pruning. We now need a method for assigning level tags to words in a sentence. However, we cannot achieve this with a straightforward classifier since level tags are related; for example, a level tag (begin or end) with value 2 implies level tags with values 3 and above. We develop a novel method for calculating the probability of a level tag for a particular word. Our mechanism for calculating these probabilities uses what we call maxspan tags, which can be assigned using a maximum entropy tagger. Maxspan tags take the same values as level tags. However, the meanings of maxspan tags and level tags are different. While a level tag indicates the row from which a column or diagonal of cells is pruned, a maxspan tag represents the size of the largest constituent a word begins or ends.
For example, in Figure 3, the maxspan end tag for the word "games" has value 3, since the largest constituent this word ends spans "playing card games". We use the standard maximum entropy trigram tagger for maxspan tagging, where features are extracted from tag trigrams and the surrounding five-word windows, as for the binary taggers. Parse trees can be turned directly into training data for a maxspan tagger. Since the level tag set must be finite, we require a maximum value N that a level tag can take. We experimented with N = 2 and N = 4, which reflects the limited range of the features used by the taggers. During decoding, the maxspan tagger uses the forward-backward algorithm to compute the probability of the maxspan tag values for each word in the input. Then for each word, the probability of its level tag t_l having value x is the sum of the probabilities of its maxspan tag t_m having values 1..x: P(t_l = x) = Σ_{i=1..x} P(t_m = i). Maxspan tag values i from 1 to x represent disjoint events in which the largest constituent that the corresponding word begins or ends has size i. Summing the probabilities of these disjoint events gives the probability that the largest constituent the word begins or ends has a size between 1 and x, inclusive. That is also the probability that all the constituents the word begins or ends are in the range of cells from row 1 to row x in the corresponding column or diagonal, and therefore the probability that the chart cells above row x in the corresponding column or diagonal do not contain any constituents, which means that the column or diagonal can be pruned from row x upward. Therefore, it is also the probability of a level tag with value x. The probability of a level tag having value x increases as x increases from 1 to N. We set a probability threshold Q and choose the smallest level tag value x with probability P(t_l = x) ≥ Q as the level tag for a word. If P(t_l = N) < Q, we set the level tag to 0 and do not prune the column or diagonal. The threshold value determines a balance between pruning power and accuracy, with a higher value pruning more cells but increasing the risk of incorrectly pruning a cell. During development we arrived at a threshold value of 0.8 as providing a suitable compromise between pruning power and accuracy. Table 6 shows accuracy and speed results for the level taggers on Section 00 of CCGbank, using a threshold value of 0.8. We compare the effect of the binary tagger and level taggers with N = 2 and N = 4. The accuracies with the level taggers are higher than those with the binary tagger; they are also higher than the baseline parsing accuracy. The parser achieves the highest speed and accuracy when pruned with the N = 4 level tagger. Comparing the oracle scores, the level taggers lead to higher speeds than the binary tagger, reflecting the increased pruning power of the level taggers compared with the binary taggers. Final experiments using gold training and self training In this section we report our final tests using Wikipedia data. We used two methods to derive training data for the taggers. The first is the standard method, which is to transform gold-standard parse trees into begin and end tag sequences. This is the method we used for all previous experiments, and we call it "gold training".
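As an illustration of this level-tag selection rule, a small sketch (the marginal probabilities would come from the maxspan tagger's forward-backward pass; the values below are placeholders):

    def level_tag(maxspan_probs, N, Q=0.8):
        # maxspan_probs[i] = P(t_m = i) for i in 1..N (forward-backward marginals).
        # P(t_l = x) = sum of P(t_m = i) for i = 1..x; return the smallest x with
        # P(t_l = x) >= Q, or 0 (no pruning) if even x = N falls below Q.
        cumulative = 0.0
        for x in range(1, N + 1):
            cumulative += maxspan_probs[x]
            if cumulative >= Q:
                return x
        return 0

    probs = {1: 0.55, 2: 0.30, 3: 0.05, 4: 0.05}   # placeholder marginals, N = 4
    print(level_tag(probs, N=4))                    # 2: prune above row 2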
In addition to gold training, we also investigate an alternative method, which is to obtain training data for the taggers from the output of the parser itself, in a form of self-training (McClosky et al., 2006). The intuition is that the tagger will learn what constituents a trained parser will eventually choose, and as long as the constituents favoured by the parsing model are not pruned, no reduction in accuracy can occur. There is the potential for an increase in speed, however, due to the pruning effect. For gold training, we used sections 02-21 of CCGbank. The results are shown in Tables 7 and 8, where each row represents a training data set. Rows "binary gold" and "level gold" represent binary and level taggers trained using gold training. Rows "binary self X" and "level self X" represent binary and level taggers trained using self training, with the size of the training data being X sentences. It can be seen from the tables that the accuracy loss with self-trained binary or level taggers was not large (in the worst case, the accuracy dropped from 84.23% to 83.39%), while the speed was significantly improved. Using binary taggers, the largest speed improvement was from 47.6 sentences per second to 80.8 sentences per second (a 69.7% relative increase). Using level taggers, the largest speed improvement was from 47.6 sentences per second to 96.6 sentences per second (a 103% relative increase). A potential advantage of self-training is the availability of large amounts of training data. However, our results are somewhat negative in this regard, in that we find that training the tagger on more than 40,000 parsed sentences (the size of CCGbank) did not improve the self-training results. We did see the usual speed improvements from using the self-trained taggers, however, over the baseline parser with no pruning. Conclusion Using our novel method of level tagging for pruning complete cells in a CKY chart, the CCG parser was able to process almost 100 Wikipedia sentences per second, using both CCGbank and the output of the parser to train the taggers, with little or no loss in accuracy. This was a 103% increase over the baseline with no pruning. We also demonstrated that standard beam search is highly effective in increasing the speed of the CCG parser, despite the fact that the supertagger has already had a significant pruning effect. In future work we plan to investigate the gains that can be achieved from combining the two pruning methods, as well as other pruning methods such as the self-training technique described in Kummerfeld et al. (2010), which reduces the number of lexical categories assigned by the supertagger (leading to a speed increase). Since these methods are largely orthogonal, we expect to achieve further gains, leading to a remarkably fast wide-coverage parser outputting complex linguistic representations.
2014-07-01T00:00:00.000Z
2010-08-23T00:00:00.000
{ "year": 2010, "sha1": "0d2a6c902285f7f58ff73546fe44533e43d66720", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "826c1011c1dfa19f582645f5c988a2ccc73265eb", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
246866837
pes2o/s2orc
v3-fos-license
Online Health Information Seeking Behaviors Among Older Adults: Systematic Scoping Review Background: With the world's population aging, more health-conscious older adults are seeking health information to make better-informed health decisions. The rapid growth of the internet has empowered older adults to access web-based health information sources. However, research explicitly exploring older adults' online health information seeking (OHIS) behavior is still underway. Objective: This systematic scoping review aims to understand older adults' OHIS and answer four research questions: (1) What types of health information do older adults seek and where do they seek health information on the internet? (2) What are the factors that influence older adults' OHIS? (3) What are the barriers to older adults' OHIS? (4) How can we intervene and support older adults' OHIS? Methods: A comprehensive literature search was performed in November 2020, involving the following academic databases: Web of Science; Cochrane Library database; PubMed; MEDLINE; CINAHL Plus; APA PsycINFO; Library and Information Science Source; Library, Information Science and Technology Abstracts; Psychology and Behavioral Sciences Collection; Communication & Mass Media Complete; ABI/INFORM; and ACM Digital Library. The initial search identified 8047 publications through database search strategies. After the removal of duplicates, a data set consisting of 5949 publications was obtained for screening. Among these, 75 articles met the inclusion criteria. Qualitative content analysis was performed to identify themes related to the research questions. Results: The results suggest that older adults seek 10 types of health information from 6 types of internet-based information sources and that 2 main categories of influencing factors, individual-related and source-related, impact older adults' OHIS. Moreover, the results reveal that in their OHIS, older adults confront 3 types of barriers, namely individual, social, and those related to information and communication technologies. Some intervention programs based on educational training workshops have been created to intervene in and support older adults' OHIS. Conclusions: Although OHIS has become increasingly common among older adults, the review reveals that older adults' OHIS behavior is not adequately investigated. The findings suggest that more studies are needed to understand older adults' OHIS behaviors and better support their medical and health decisions in OHIS. Based on the results, the review proposes multiple objectives for future studies, including (1) more investigations on the OHIS behavior of older adults above 85 years; (2) conducting more longitudinal, action research, and mixed methods studies; (3) elaboration of the mobile context and cross-platform scenario of older adults' OHIS; (4) facilitating older adults' OHIS by explicating technology affordance; and (5) promoting and measuring the performance of OHIS interventions for older adults. Introduction During the past decade, the rapid development of information and communication technologies (ICTs) has increased laypeople's access to health information sources and is constantly reshaping their health information-seeking behaviors [1]. Online health information seeking (OHIS) serves multiple purposes, such as understanding disease symptoms, assessing disease risks, finding treatment choices, managing chronic conditions, and preparing for patient-doctor communication [2].
Studies have revealed that OHIS has become one of the most common everyday life experiences across the entire lifespan [3]. In recent decades, the aging of the world population has led to significant demographic transitions that have never occurred before in human history. Societies with large aging populations face great challenges to their health care sectors with respect to an increasing prevalence of chronic conditions among older adults and a sharply rising demand for health care resources. As older adults are more likely to experience illness and chronic conditions than younger people, they have a greater need for health information [4]. With the world population aging, increasing numbers of health-conscious older adults are seeking health information to make better-informed health decisions [5]. Many hopes are placed on ICTs to empower the aging population, promote public health, and alleviate the burden of health care systems. However, there is some skepticism regarding whether older adults really benefit from current technological advancements [6]. Although some studies have found that the adoption and use of ICTs to address health concerns have remained at a relatively low rate among older adults [7], other studies suggest that older adults are increasingly engaged in internet surfing [8]. These mixed results suggest that the OHIS behavior of older adults is still insufficiently investigated. Despite scattered empirical studies on the topic, few scoping or systematic reviews have directly addressed the OHIS behaviors of older adults and synthesized this body of knowledge. Chang and Huang [9] recently reviewed antecedents that predict general consumers' OHIS behaviors (ie, health status, self-efficacy, health literacy, availability, credibility, emotional responses, and subject norms). Although the review found that age is a significant moderator of the correlations between the antecedents and OHIS, it provided few details on older adults' health information behaviors. Hunsaker and Hargittai [8] synthesized quantitative literature on general internet use among older adults. Although their review addressed the relationship between older adults' health and internet use, OHIS was neither specified nor teased out from the general internet use behaviors. Therefore, the type of health information sought by the participating older adults and the factors that influenced older adults' OHIS reported in the literature are unclear. Waterworth and Honey [10] reviewed 8 empirical studies of OHIS among older adults and discussed facilitators of and barriers to older adults' OHIS. However, the number of studies included in this review was limited, and it can hardly provide a comprehensive understanding of OHIS among older adults. Gaps in the existing research indicate that a systematic scoping review on older adults' OHIS is necessary because it will not only enhance our knowledge of human information behaviors and practices but will also inform better health information system designs and ensure better information services for older adults. Motivated by the existing research gaps, this systematic scoping review examines the state of research on older adults' OHIS and reveals the types and sources of health information that the older adults seek, factors that influence older adults' OHIS, barriers to older adults' OHIS, and interventions that are available. 
The purpose of this systematic scoping review is to provide our readers with an overview of how OHIS among older adults has been studied and present implications for future research. It aims to answer the following questions: 1. What types of health information do older adults seek and where do they seek health information on the internet? 2. What are the factors that influence older adults' OHIS? 3. What are the barriers to older adults' OHIS? 4. How can we intervene and support older adults' OHIS? Literature Search This review follows the guidelines of the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) [11]. We were also inspired by the recommended framework for conducting systematic reviews in information-related fields by Okoli [12]. The bibliographic database search strategies were developed after consulting an academic librarian at the first author's university. First, we searched the following databases: Web of Science; Cochrane Library database; PubMed; MEDLINE; CINAHL Plus; APA PsycINFO; Library and Information Science Source; Library, Information Science and Technology Abstracts; Psychology and Behavioral Sciences Collection; Communication & Mass Media Complete; ABI/INFORM; and ACM Digital Library. These databases were chosen because they cover the academic disciplines (eg, medicine, medical informatics, communication, psychology, and information and library science) that are most likely to study older adults' OHIS behaviors. Second, the search queries contained the following categories and keywords: people (older adults, elderly, aging, senior, seniors, older people, aged 60, aged 65), behavior (find, search, seek, access, retrieve), place (internet, online, web), object (information), and attribute (health, medicine, drug, nutrition, diet, wellness, illness). Specific queries were run in the topic, title, and abstract fields, depending on the database (see Multimedia Appendix 1). The initial search was performed in November 2020. Third, we captured additional articles using Google Scholar by tracking the citations and references in the articles found in the databases and in other relevant reviews. In addition, we supplemented relevant articles by searching Google Scholar directly. All the studies identified during the database searches were imported into the reference management software Zotero, and duplicates were removed. Eligibility Criteria We developed a series of inclusion and exclusion criteria to identify articles relating to older adults' OHIS behaviors. The inclusion criteria were as follows: (1) The articles should pertain to health-related contexts, including areas such as health, mental health, diet, and nutrition. (2) The article should describe OHIS behaviors (eg, general OHIS, selection and use of health information sources, and adoption and use of health information). (3) The article should focus on older adults (note that although the search strategies indicated 2 commonly accepted lower age boundaries, 60 and 65 years, to identify older adults, they did not exclude other ways of describing the population); studies that clearly mentioned the population of older adults or contained explicit, equivalent claims were eligible. (4) The research should be empirically based. (5) The articles should have been published in a peer-reviewed journal or in conference proceedings. (6) When we identified more than 1 paper published by the same author on the same topic, we selected only the most recent one.
(7) The articles should be written in English. Our exclusion criteria were as follows: (1) The articles did not pertain to a health-related context. (2) The articles were not about OHIS behaviors; for instance, some articles focused only on general ICT use or adoption behaviors, were more concerned with technology-related rather than information-related issues, or addressed only older adults' health literacy or eHealth literacy and did not investigate their OHIS. (3) The articles did not focus on older adults; we specifically excluded articles that treated age merely as a predictor or moderator in studying the OHIS of the general population, as it is evident that age influences people's OHIS behaviors. (4) The articles were not based on empirical research; this criterion helped eliminate opinion pieces, brief communications, editorial commentaries, and reviews. (5) The articles were not peer-reviewed (eg, a self-archived manuscript). (6) The articles were not written as full papers (eg, abstracts, posters, or letters). (7) The articles were not written in English. Screening Procedure The procedure for screening articles was based on the eligibility criteria. The initial search used database search strategies and identified 8047 publications. After duplicates were removed, the data set consisted of 5949 publications for screening. The screening involved 3 stages. In the first stage, all 3 authors reviewed the titles and abstracts of a sample of 300 articles from the search results, and then discussed and refined the screening criteria. In the second stage, we selected another 300 articles randomly from the search results as a test set. The eligibility criteria were applied independently by 2 of the authors (SS and MZ). Intercoder agreement (κ=0.816) indicated satisfactory reliability. Discrepancies were discussed and resolved by involving the third author (YZ), and the eligibility criteria were further refined accordingly. In the third stage, author MZ screened the remaining articles based on the eligibility criteria using the titles and abstracts, and author SS validated the results. Discrepancies were resolved by involving author YZ. The whole screening procedure resulted in 279 articles for full-text analysis. To read and code the full-length articles downloaded from the databases, we used the MAXQDA 2020 software, which is designed for computer-assisted analysis of qualitative and mixed methods data, texts, and multimedia data. During the full-text analysis, we excluded 211 articles by applying the eligibility criteria. The remaining 68 articles were retained, and 8 more eligible articles were identified through citation tracking with the assistance of Google Scholar. In total, 75 articles were selected for the systematic scoping review. Data Extraction and Analysis We used Excel (Microsoft Corporation) to extract and record the basic information of the articles in the sample, including the author(s), title, publication year, publication name, and publication type (eg, journal vs conference). We used thematic content analysis in an iterative manner to identify the evidence regarding our research questions [13]. Several lists of codes were generated during 2 rounds of full-text coding procedures. In the first round, all the authors participated in the open and selective coding processes until a coding schema emerged and converged. In the second round, MZ coded the full texts by applying the coding schema, and SS validated all the codes.
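For reference, a minimal sketch of how a chance-corrected agreement statistic such as the κ reported above can be computed from two screeners' decisions (the labels below are hypothetical, not the review's actual screening data):

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        # Cohen's kappa for two raters over the same items:
        # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
        # and p_e is the chance agreement expected from the raters' marginals.
        n = len(labels_a)
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical include/exclude decisions for 8 abstracts:
    rater_1 = ["in", "out", "out", "in", "out", "out", "in", "out"]
    rater_2 = ["in", "out", "out", "in", "out", "in", "in", "out"]
    print(round(cohens_kappa(rater_1, rater_2), 3))   # 0.75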
The intercoder reliability of the thematic content analysis reached 85%. Discrepancies were solved by involving YZ in the discussion. Basic Characteristics of the Included Articles After screening, the final sample consisting of 75 articles was obtained, as shown in Figure 1. The articles were published between 1997 and 2020 (see Multimedia Appendix 2). Trend observations revealed that the number of publications in this subject area increased over time and that the OHIS of older adults began to receive considerable attention in the last 3 years (see Figure 2). The articles in the sample were mostly published after 2006 (n=69, 92%), which relates closely to the boom in social media. Of all the articles, 72 (96%) were published in journals, and the remaining 3 (4%) were published in conference proceedings. The articles originated from 17 countries (based on the first authors' affiliations), with the top 3 being the United States (n=44, 58.67%), Australia (n=5, 6.67%), and China (n=4, 5.33%). The top 4 journals publishing these articles include the Journal of Medical Internet Research (n=8, 10.67%), Educational Gerontology (n=4, 5.33%), Journal of Health Communication (n=3, 4%), and Library & Information Science Research (n=3, 4%), indicating the multidisciplinary nature of the sample. The systematic scoping review first investigated how the 75 included articles defined the target population of older adults by determining the cutoff ages used. More than half of the articles used samples of older adults aged above 60 years. Furthermore, 16 articles (21.33%) defined older adults as those aged 65 years and above, and 23 (30.67%) had cutoff ages ranging from 60 to 64 years. In addition, we noted some papers that defined the older adult group more loosely. For example, the cutoff age in 17 articles (22.67%) ranged from 50 to 54 years, and 14 articles (18.67%) used samples with minimum ages ranging from 55 to 59 years. Moreover, 5 of the articles (6.67%) did not specify precise age distributions. The research methods varied across the 75 studies. Regarding methodological approaches, we found that 45 studies (60%) used quantitative approaches, 22 (29.33%) employed qualitative approaches, and 8 (10.67%) were based on mixed methods designs, using a combination of quantitative and qualitative methods. As for specific methods, surveys (n=28, 37.33%) and interviews or focus groups (n=25, 33.33%) were the primary methods used, followed by secondary data analysis (n=6, 8%) and experiments (n=4, 5.33%). In terms of data sources, most of the studies were based on primary data (n=65, 86.67%) and a few on secondary data (n=10, 13.33%). Concerning the types of data, we found 59 studies (78.67%) based on cross-sectional data and 16 (21.33%) based on longitudinal data. Internet-Based Health Information Types and Sources Information types and information sources are 2 frequently reported aspects of information in OHIS studies [14]. For our analysis, we adapted the typologies of health information types from Kent et al [15] and Ramsey et al [16]. The results presented in Table 1 suggest that older adults often search the internet for information on specific diseases because they want to obtain a general idea of their condition before diagnosis or treatment so that they know what to expect and can be better prepared to face stressful situations [17].
The health problems mentioned in these 75 articles are mainly cancer (n=10, 13.33%), mental health problems (n=5, 6.67%), chronic conditions (n=4, 5.33%), and physical diseases (n=4, 5.33%). Aside from this disease information, the most frequently mentioned types of information are related to medication or treatment, nutrition or exercise, medical research, disease symptoms, and health promotion. Some articles mentioned that older adults also use the internet to seek support groups or interpersonal advice, health insurance information, health news, and health policy information. Of note is that more than half of the articles (n=40, 53.33%) used the umbrella term health information, without specifying any type of health information content. Furthermore, the types of content were not mutually exclusive. For example, a single article might mention more than 1 type of information (eg, older adults seeking information for cancer-related symptoms and medication). Most of the articles in the sample (n=58, 77.33%) used the general internet to represent all the web-based sources of health information. Further, 26 articles (34.67%) described health websites as sources of internet-based health information for older adults; among these, the owners of the websites varied, consisting of educational, commercial, government, and nonprofit entities. Moreover, general search engines such as Google were the third most frequently mentioned sources in the studies (n=17, 22.67%), suggesting that older adults often use general search engines to start OHIS [18][19][20]. Further, 11 articles (14.67%) mentioned older adults' use of social media (eg, Facebook, Twitter) and blogs in OHIS. Only 3 articles (4%) addressed older adults' use of patient portals, and 2 articles (2.67%) were about older adults' use of mobile internet services. Table 2 shows the internet-based health information sources mentioned in the studies (N=75). Factors That Influence Older Adults' OHIS Behaviors Among the 75 articles, 35 (46.67%) treated OHIS as a variable or construct. These articles quantitatively measured OHIS with various scales or proxy variables. Among them, 27 (36%) regarded OHIS as a dependent variable and explored the antecedents of older adults' OHIS. Further, 4 (5.33%) treated OHIS as an independent variable, and the remaining 4 (5.33%) treated OHIS as neither a dependent nor an independent variable but provided only descriptive analyses. Because the articles that employed quantitative approaches were primarily concerned with the antecedents of older adults' OHIS, we summarize the main influencing factors that appeared in these investigations in Table 3. The antecedents of older adults' OHIS fall mainly into 2 categories, namely individual-related characteristics and source-related characteristics. Within the individual-related characteristics, 12 subcategories were observed, including demographics, anxiety, beliefs, attitudes, self-efficacy, personality, health status, medical history, health care service availability, source experience, health literacy, and motivations. Among the source-related characteristics, credibility, usefulness, and ease of use were the 3 most frequently mentioned factors. Barriers to OHIS of Older Adults Rather than treating OHIS as a variable, 40 of the 75 articles (53.33%) treated OHIS as a process.
Of these studies, 29 (38.67%) explored the barriers that older adults encounter during OHIS. The results suggest that older adults may experience many barriers preventing successful OHIS, as shown in Table 4. From the prior studies, we identified 3 main types of barriers (ie, individual, social, and ICT barriers), 11 subtypes, and 38 specific issues.

Table 4. Barriers to older adults' online health information seeking behavior (OHIS: online health information seeking; ICT: information and communication technology; IT: information technology). The recoverable entries are as follows.

Social barriers
- Social stigmas: stigma of mental health problems [65]; stigma of sex-related health problems [66]
- Lack of social support: lack of informational support [66,67]; lack of organizational support (eg, health care services) [17,50,68]; lack of instrumental support (eg, instructions on computer use) [57,65]; lack of intergenerational support (eg, not living with children) [49,69]; lack of peer support (eg, hard to get support from friends) [70,71]

ICT barriers
- Lack of IT infrastructure: lack of ICT devices [29]; low accessibility to medical records [71]
- Problematic information quality: misinformation [64,72]; conflicting health information [73,74]; irrelevant information [65,73]
- Information overload: overwhelming health information on the internet [20,48,71]; overwhelming extraneous information and pop-ups [58,64,70]
- Unsatisfactory user experiences: unsatisfactory interactivity and navigability [75,76]; unsuitable font sizes [72,75]; dense text and lack of visual elements [76,77]; confusing layouts [51,72,75]; insufficient ease of use [39,45,78]; frustrating user experiences [51,56,59]

Regarding individual barriers, some studies found that older adults' OHIS could be hindered by age-related functional decline, including vision impairment, poor eye-hand coordination, physical challenges (eg, back pain), and illness. Other studies reported several aspects of low literacy among older adults that prevented effective OHIS, including limited English language skills, lack of basic health knowledge, limited digital literacy, undeveloped information literacy, and low health or eHealth literacy. In addition, some studies found that older adults' perceptions of low self-efficacy regarding computer use, reading, learning, and evaluating health information reduced their willingness to engage in OHIS. Other findings revealed that negative attitudes toward internet use or technology in general, as well as privacy concerns about using technology, decreased older adults' intentions to search for information on the internet. The results also revealed that beliefs in an external locus of control over health care and fatalistic beliefs reduced older adults' active OHIS.

As for social barriers, studies suggested that older adults may perceive social stigma around OHIS when it comes to mental and sex-related health problems. Moreover, older adults often report a lack of social support in their OHIS, including informational, organizational (eg, health care services), instrumental (eg, instructions on computer use), intergenerational (eg, support from children), and peer support (eg, support from friends).

In terms of ICT use, the analysis revealed that many older adults do not possess information technology devices and report low accessibility to medical records. Moreover, the quality of general health information on the internet is problematic: older adults are likely to encounter misinformation, conflicting information, and irrelevant information during their OHIS.
Furthermore, they often confront information overload when reading health information because of overwhelming amounts of irrelevant information or pop-ups. Moreover, older adults' OHIS may involve unpleasant and frustrating user experiences, such as unsatisfactory interactivity and navigability, unsuitable font sizes, dense text lacking visual elements, confusing layouts, and complicated site designs.

Interventions for Older Adults' OHIS

Given the abovementioned barriers, it is essential to provide older adults with additional support to facilitate their OHIS. We identified 11 studies (14.67%) among the 75 that used educational training programs to facilitate and intervene in older adults' OHIS, as shown in Table 5. Among these, 10 of the 11 studies provided offline workshops, and 1 conducted an online workshop. The offline workshops were conducted in community settings (eg, public libraries, schools, or medical centers) and included face-to-face instruction. We identified only 1 study that used an internet-based tutorial, which aimed to improve older adults' ability to distinguish high-quality internet-based health forums from low-quality ones. Among the 11 articles, 9 described training programs with multiple sessions, each lasting 2 to 3 hours, with program durations varying from 1 to 4 months; the other 2 studies used 1-time training sessions.

Table 5. Intervention programs supporting older adults' OHIS (OHIS: online health information seeking). The recoverable entries are as follows.
- One program [20]: evaluated with pre- vs posttest surveys and face-to-face interviews; significant improvements were observed in the ability to use a computer or navigate the web (P<.001) and in average navigational skills self-efficacy scores for health websites (P<.001) and computers (P<.001); the overall response to the program was positive.
- Bertera et al [67]: a computer learning center located in the community; 2-step training: (1) internet navigators received 13 hours of basic computer training over 13 weeks plus a 4-hour specific training on 2 health websites and on how to support peers (n=8), and (2) older adults living in affordable housing received a 2-hour session on basic computer skills and the use of 2 specific health websites; aim: to increase access to and use of 2 prominent health websites, MedlinePlus.gov and NIHSeniorHealth.gov.
- Chu et al [68]: Partnering with Seniors for Better Health; classes included 2 components, computer literacy and health information search strategies (n=112); aim: to assist older adults with retrieving and evaluating health information resources on the internet; participants showed significantly improved self-efficacy when retrieving and evaluating internet-based health information (P<.001).
- Campbell [79]: a large suburban public library and 2 community centers for older adults; 2-hour weekly workshops over 5 weeks using constructivist teaching techniques and self-directed learning (n=70); aim: to improve the ability to locate health information; evaluation: posttest interview with qualitative assessment, asking participants questions such as "Did your levels of participation in your health care change since you began using the internet?"
- Leung et al [82]: the same setting and workshop format (2-hour weekly sessions over 5 weeks); aim: to improve basic skills for searching health information on the internet; evaluation: pre- vs posttest surveys plus a survey 1 year after the training; statistically significant differences were found between baseline and 5-week follow-up results.
- Xie and Bugg [84]: aim: to teach older adults to access and use high-quality internet-based health information; evaluation: pre- vs posttest surveys plus a survey 6 months after the training; participants experienced reduced anxiety concerning computers and increased confidence in locating health information.
- Chu and Mastel-Smith [85]: a parish-sponsored older adult leisure learning center; educational program of 2-hour weekly sessions over 5 weeks (n=12); aim: to enhance older adults' ability to grasp and manage health-related information retrieved from the internet and act accordingly; evaluation: pre- vs posttest surveys plus a survey 6 weeks after the training; participants experienced reduced anxiety, increased confidence, and a sense of self-efficacy at the end of the 5-week program and 6 weeks after program completion (P<.001).
- Fink and Beck [86]: internet-based setting; a 70-minute educational online program with questions (n=64); aim: to improve the eHealth literacy of adults aged 50 years and older; evaluation: experimental vs control group survey comparison; compared with the control group, the experimental group rated usability higher and learned more information on a new website.
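Several of the programs above were evaluated with pre- versus posttest comparisons. As a rough illustration of that evaluation logic, the following sketch runs a paired t test on hypothetical pre- and posttraining self-efficacy scores; the data, variable names, and the choice of test and effect size are illustrative assumptions, not details taken from any of the reviewed studies.

```python
# Illustrative pre- vs posttest evaluation of a hypothetical OHIS training
# program; all scores are invented for demonstration only.
import numpy as np
from scipy import stats

# Hypothetical self-efficacy scores (1-10) for 12 participants,
# measured before and after a 5-week workshop.
pre = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4], dtype=float)
post = np.array([6, 7, 5, 7, 5, 6, 6, 4, 7, 5, 8, 6], dtype=float)

# Paired t test: were the within-person changes significant?
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for paired data (mean change divided by SD of the changes).
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"mean change = {diff.mean():.2f}")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
print(f"Cohen's d = {cohens_d:.2f}")
```

Reporting an effect size alongside the P value, as in the last line, would make the magnitude of a training effect easier to compare across programs.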
Further, 4 of the 11 programs were guided by established theories, models, or concepts (eg, self-efficacy theory and the health belief model). All the studies involved some form of evaluation, including postsession surveys or interviews, pre- versus postintervention comparisons, and experimental versus control group comparisons. In addition, 5 studies evaluated the effectiveness of the intervention outcomes longitudinally, over periods ranging from 1 month to 1 year after completion of the program. Among all the studies, 9 statistically assessed the effects of the intervention. Measures varied across the studies and included survey-reported opinions of the internet, self-efficacy in seeking health information, and anxiety regarding computer use. All the articles reported some positive outcomes of the intervention programs.

Principal Findings

This systematic scoping review provides an overview of OHIS behaviors among older adults, as shown in Figure 3. Overall, the findings of this paper reveal the core elements of OHIS among older adults. First, the types and sources of health information that older adults search for were clearly presented. Second, a portion of the studies explored the main factors influencing older adults' OHIS behaviors, which can be categorized as individual-related and source-related characteristics. Third, we identified the barriers to OHIS behavior in older adults in the existing literature, comprising individual barriers, social barriers, and ICT barriers. Finally, this paper provides an in-depth analysis of the interventions mentioned in some of the included papers to support OHIS behaviors among older adults.
We believe that the framework of this paper can, to some extent, help researchers better position their research objectives in future studies so that these objectives correspond to specific dimensions for in-depth empirical investigation.

Regarding the first research question, the results show that older adults sought various types of health information on the internet, including information about specific diseases, medication and treatment, nutrition and exercise, medical resources, disease symptoms, health promotion, support groups and interpersonal advice, health insurance, and health news or policies. The information sources included health websites, general search engines, social media and blogs, patient portals, and mobile devices. The types of health information sought differed from those that interest young people. According to a recent systematic review [87], adolescents and youths (<24 years) search the internet for daily health-related issues, physical and psychological well-being, sexual health, social problems, and culturally sensitive topics. Compared with the adolescent and youth population, older adults tend to search more for disease-related health information topics.

As for the second research question, the results point to 2 main types of factors influencing older adults' OHIS: individual-related characteristics and source-related characteristics. The individual-related characteristics include demographics, anxiety, beliefs, attitudes, self-efficacy, personality, health status, medical history, health care service availability, source experience, health literacy, and motivations. Among the source-related characteristics, credibility, usefulness, and trust were the 3 factors most frequently mentioned in the studies. We noted that the primary factors influencing older adults' OHIS differ from those influencing young adults. A systematic review of studies investigating young adults' (<24 years) OHIS [87] revealed that the most frequently mentioned influencing factors were gender, age, educational status, emotional characteristics, engagement in risky behaviors, and eHealth literacy.

The results for the third research question reveal that older adults might encounter 3 types of barriers during their OHIS: individual barriers (eg, low literacy), social barriers (eg, social stigmas), and ICT-related barriers (eg, lack of ICT devices). These barriers may hinder effective OHIS behaviors in older adults. The results suggest some differences from the findings on young adults' OHIS. For the adolescent and youth population (<24 years), the main barriers to OHIS are online privacy and concerns about information credibility [87]. Although some studies report low health literacy among adolescents [88], older adults seem to have more difficulties in this respect than adolescents [89,90].

As for the fourth research question, the review found that many intervention programs have been created to support older adults' OHIS; these primarily use educational training workshops in offline and online formats. Most training programs contained multiple sessions, with each session lasting 2 to 3 hours; the duration of the programs varied from 1 to 4 months, and all the programs reported at least some positive effects in support of older adults' OHIS.

Implications for Future Research

Overall, this systematic scoping review identified the need for more in-depth research on older adults' OHIS.
As can be seen from the aforementioned evidence, a subset of studies has treated OHIS as a variable or construct and focused on exploring the factors influencing OHIS in older adults. Other studies treat OHIS as a process and investigate how older adults search the internet for health information. However, given the complexity of the health conditions of older people and the projected intensification of information overload, older adults will encounter more serious problems when searching for health information on the internet, such as how to select from among multimodal information sources, how to express health information needs, and how to evaluate health misinformation. Considering the growing population of older adults, the importance of internet-based information seeking for overall public health, and the lack of best practices, more research on this topic is needed. In this section, we propose several directions for future research based on gaps identified in the review.

Investigations on the OHIS Behavior of Older Adults Above 85 Years

With the accelerating pace of global aging, the population of older adults is steadily growing. Instead of classifying this large population as one group, researchers are advocating a more precise segmentation, such as the youngest-old (65 to 74 years), middle-old (75 to 84 years), and oldest-old (above 85 years) groups [91]. Regarding OHIS, the age distribution of the samples in this systematic scoping review indicates that exploration of OHIS in the oldest-old group is very limited [92]. Most articles included in this review have focused on the youngest- and middle-old groups [30], whereas research on the health information needs and behaviors of the oldest-old group is lacking. Future OHIS research could be appropriately skewed toward the oldest-old group, considering the physiological and psychological characteristics and unique information needs of this group and exploring the influences, processes, and health outcomes of its OHIS more empirically within the framework of everyday information mastering [93].

Conducting More Longitudinal, Action, and Mixed Methods Research

As for research methods, most current studies use cross-sectional data collection methods and pay little attention to longitudinal approaches. In the future, more consideration could be given to longitudinal methods such as the experience sampling method and the ethnographic approach. In particular, for intervention studies on OHIS behaviors in older adults, educational training programs with long time spans could provide data to improve OHIS performance and the health literacy of older adults. More participatory action research at the community level would enrich the network of actors in OHIS for older adults and engage more participants, thereby promoting interdisciplinary and collaborative health information practices in this population. In addition, future studies might consider more mixed methods approaches to leverage the advantages of qualitative and quantitative approaches and to triangulate primary data with secondary data. Existing mixed methods studies have been based mainly on quantitative questionnaire analyses and qualitative focus groups, and a richer mix of methods remains to be explored for this topic.
Finally, as prior studies have relied heavily on self-reported data, future studies could consider more behavioral data collected using methods such as eye tracking and electroencephalography.

Elaboration on Mobile Context and Cross-platform Scenarios of Older Adults' OHIS

Information types and information sources are essential contextual factors in OHIS [94-96]. However, this review found that most studies on older adults' OHIS do not clearly explain what health-related information was involved or where the information was gathered from. In terms of information types, current studies focus mainly on searches for disease and treatment information. More studies are needed to address other types of health information that older adults might seek, such as information on environmental health and disease prevention. Regarding information sources, studies are needed to investigate older adults' use of mobile devices for OHIS. With the development of the mobile internet and the internet of things, OHIS scenarios for older adults are changing. Mobile device-based health information access can more effectively meet the health information needs of older adults, facilitate daily health monitoring and self-tracking, and improve context-driven, health-related decision-making. For example, increasing numbers of older adults are seeking health information on their smartphones through short video apps such as TikTok [97,98]. Furthermore, in addition to searching for health information on their mobile devices, increasing numbers of older adults are using mobile social apps to create content [99]. Future research could focus more on the relationship between OHIS and health-related content generation by older adults.

In addition, further exploration of complex OHIS scenarios is needed. For example, with the popularity of wearable devices and the development of various health-related vertical search platforms, a portion of the older adult population with higher information literacy will become more proficient at searching for a full range of health information using various smart devices and immersive technologies [100], such as interacting with information through voice recognition and gesture control. Thus, explorations of cross-platform and cross-device seeking behaviors in older adults' OHIS are needed. Meanwhile, in addition to active information seeking, other types of seeking behaviors, such as passive exposure, information encountering, and surrogate health information seeking [101,102], deserve attention and further investigation. In particular, the influences and positive outcomes of searching as learning during older adults' OHIS are a topic worth exploring.

Facilitating Older Adults' OHIS by Explicating Technology Affordance

This review revealed that current research on factors influencing OHIS in older adults focuses more on demographic issues and individual-related characteristics than on source-related factors. In recent years, increased emphasis has been placed on aging-friendly design in human-computer interaction [103], and the user experience-oriented design of various social apps and smart devices is centered on the needs and behavioral preferences of older adults, with an interest in meeting their personalized requirements. We believe that the affordance of technology in aging-friendly design is also a highly influential factor in promoting OHIS in older adults.
It would be fruitful to integrate the uses and gratifications theory with the affordance lens to better promote the positive impact of new media platforms on older adults' information-seeking behaviors [104,105]. More attention needs to be paid to the ease of use, usability, and sociability of aging-friendly information sources and information systems. In particular, in the upcoming human-centered artificial intelligence era, older people's perceptions of the trustworthiness of multimodal information sources and their trust in algorithm-based content recommendations will continue to change. Therefore, age-appropriate design for OHIS needs to constantly break away from stereotypes of older people and re-establish a more adaptive mental model. The lens of affordance theory could be applied to situate OHIS for older adults in the context of information practices, promoting deep reflection on the interaction of actors with sociocultural environments and on the mediated nature of technology [106]. For instance, an OHIS platform should provide rich technology affordances for older adults and offer targeted support for active health information access, information encounters, and information avoidance problems in different sociocultural environments. Future research could focus more on how technology affordance can better mediate older adults' OHIS gratification by building a more detailed affordance typology [107], such as handling, effecter, and motivational affordances, to measure older adults' gratifications for OHIS using social media.

Promoting and Measuring the Performance of OHIS Interventions for Older Adults

The results show that older adults encounter many barriers in OHIS; accordingly, many intervention programs have been created to support their searching. However, current intervention programs still leave considerable room for improvement. First, current educational training programs are generally small scale, making it difficult to reach a wide group of older adults; most programs are offline workshops, and there are few internet-based programs. Future OHIS interventions for older adults need to offer more technology-mediated web-based programs and provide richer formats than workshops and tutorials, such as distance education for older adults using gamification and immersive technology. Moreover, most current intervention programs operate in the United States; older adults living in less developed countries or areas have received less attention. Future studies on OHIS in older adults must involve more transnational, cross-national, cross-regional, and cross-cultural comparative studies to further explore the influence of sociocultural factors on older adults' OHIS behaviors. We also recommend that more information and communication technology for development (known as "ICT4D") projects focus on upgrading and improving OHIS for older adults [108], thereby better promoting health literacy and health mobility for older adults in developing countries and regions.

In particular, researchers need to draw more on the design science research paradigm. Design science research is an innovative and often iterative problem-solving process that builds and evaluates artifacts [109]. In our research context, the purposeful artifacts could be search systems, training courses, workshops, tutorials, or citizen science programs. In the building phase of artifact development, most existing studies have focused on offline workshops and neglected other types of artifacts.
It is also noteworthy that current intervention studies lack a theoretical lens; only a few have designed interventions based on theoretical foundations. Future interventions for older adults' OHIS need to embrace the theoretical considerations that design science research has been advocating [110]. In the evaluation phase of artifact development, current studies lack long-term assessments of intervention effects. Future studies should consider more participatory action research to iteratively test the effects of OHIS interventions on older adults, selecting specific health domains (such as chronic diseases, cancer, and mental health) to verify the actual effects of OHIS interventions on the information literacy, health literacy, and health outcomes of older adults. In addition, future studies could contemplate providing various forms of support based on the perspectives of older users, allowing them to participate in the project design process and thus helping them overcome search barriers.

Limitations

This systematic scoping review has several limitations. The first concerns search sources. Owing to the interdisciplinary nature of OHIS research in older adults, it was inevitable that some literature would be missed despite our searching multiple databases with relevant keywords and consulting academic librarians to improve the search strategy; this applies especially to relevant research in unofficially published conference proceedings. Backward and forward citation searching could be used to expand the literature sources in the future [111]. Second, in terms of literature type, this review focuses mainly on empirical studies; opinion papers, descriptive cases, and short communications on OHIS for older adults were excluded from our literature pool, and complementary analyses of such nonresearch articles could be conducted in the future. Finally, in terms of the analytical approach, this study did not conduct a comparative chronological analysis of the literature across different periods and therefore could not fully reveal the impact of technological and sociocultural changes on older adults' OHIS behavior. In the future, knowledge graphs could be introduced to map the themes of the literature at different stages.

Conclusions

This review provides an overview of how older adults' OHIS has been studied. It reveals that older adults search for various types of health information on the internet using different types of web-based sources and that their OHIS is jointly influenced by source-related and individual-related factors. Their difficulties in searching arise from individual, social, and ICT-related barriers. Some educational intervention programs supporting older adults' OHIS have been initiated in the form of web-based and offline workshops. Furthermore, the review reveals that the topic of older adults' OHIS is understudied, although the number of studies is increasing. More studies are needed to understand the problems associated with older adults' interactions with health information and to better support their decision-making when they search for medical and health information on the internet. Based on the findings of the review, the authors propose several objectives for future research.
Viral modulation of stress granules

Highlights
► Assembly of SGs can be dramatically influenced by viruses.
► Viruses have elaborated mechanisms to impose a blockade/induction of SG assembly.
► New knowledge on SG biology may be beneficial in developing new anti-viral drugs.

Introduction

Exposure of cells to environmental stress (e.g., heat shock, UV irradiation, hypoxia, endoplasmic reticulum (ER) stress and viral infection) triggers a rapid translational arrest generating polysome disassembly. This event triggers a molecular triage, in which the affected cell must make a decision on the fate of the mRNA released from polysomes: decay or silencing (Anderson and Kedersha, 2008). For these events, cells have elaborated different classes of RNA granules, named processing bodies (PBs) and stress granules (SGs), that contribute to the regulation and lifecycle of mRNAs. PBs and SGs share some protein components and are both assembled in cells subjected to stress, but they differ in that: (i) only PBs are observed in unstressed cells; (ii) SG assembly, but not PB assembly, typically requires phosphorylation of the translation initiation factor eIF2α (Fig. 1); and (iii) PBs contain proteins involved in mRNA decay, whereas SGs contain proteins of the translation initiation complex (Eulalio et al., 2007). During a stress response, cells induce a shut-off of cellular protein synthesis and subsequently promote SG assembly (Anderson and Kedersha, 2009).

Fig. 1. Control of translation by eukaryotic initiation factor 2 (eIF2). eIF2 bound to GDP (eIF2-GDP) is recycled to the active eIF2-GTP in a reaction catalyzed by eIF2B. Once recycled, eIF2-GTP forms a ternary complex with initiator methionine tRNA (Met-tRNAi) and the 40S ribosome, resulting in the 43S pre-initiation complex. Four kinases, activated by hemin deficiency/oxidative stress (HRI), viral infection (PKR), endoplasmic reticulum stress/hypoxia (PERK/PEK) and amino acid starvation/UV irradiation (GCN2), can phosphorylate the eIF2 α-subunit, stabilize the inactive eIF2-GDP-eIF2B complex and prevent eIF2 recycling. These events result in a shut-off of host protein synthesis and subsequently SG assembly (Fig. 2, i).

Different pathways of SG assembly have been described. The most popular pathway is the phosphorylation of the critical translation initiation factor eIF2α by a family of four serine/threonine kinases: HRI, PKR, PERK/PEK and GCN2. HRI (eIF2αK1) is activated by heme deprivation and oxidative stress (Han et al., 2001); PKR (eIF2αK2) is activated by viral infection (Williams, 2001); PERK/PEK (eIF2αK3) is activated in the presence of unfolded proteins in the endoplasmic reticulum (ER) and during hypoxia (Harding et al., 2000); and GCN2 (eIF2αK4) is activated during amino acid starvation and UV irradiation (Jiang and Wek, 2005). Each kinase phosphorylates the α-subunit of eIF2 at Ser52, which results in tight binding to eIF2B and inhibits the exchange of GDP for GTP (Fig. 1). Therefore, there is a decrease in the assembly of the translation ternary complex (eIF2/GTP/Met-tRNA), which suppresses the initiation of translation and promotes SG assembly (Fig. 2, step i). Other mechanisms, independent of the phosphorylation of eIF2α, have also been explored. Hippuristanol and Pateamine A, drugs that inhibit the helicase activity of eIF4A, are able to induce the assembly of SGs (Fig. 2, step ii) (Dang et al., 2006; Mazroui et al., 2006).
As well, the overexpression of SG markers (Anderson and Kedersha, 2008), such as TIA-1 (Kedersha et al., 1999) or G3BP-1 (Tourriere et al., 2003), can trigger the assembly of SGs (Fig. 2, step iii). The activation of eIF2α kinases by viral infection may result in the inhibition of cellular protein synthesis (Walsh and Mohr, 2011) and/or the promotion of autophagy, a process involving lysosomal-dependent recycling of intracellular components (Talloczy et al., 2002). Moreover, some viral proteins can bind eIF4A (Aoyagi et al., 2010; Page and Read, 2010). All of these mechanisms induce SG assembly (i.e., shut-off of cellular protein synthesis), but viruses have found ways to bypass the hostile environment generated by the cell to ensure their survival. In the last decade, several studies have also demonstrated that the assembly of SGs can be dramatically influenced by viruses: both the induction and the blockage of SG assembly mediated by viral infections have been described as means to promote virus replication (Beckham and Parker, 2008; Montero and Trujillo-Alonso, 2011; White and Lloyd, 2012). In this review we summarize the current understanding of the relationships between different virus families and the regulation of stress granules.

Fig. 2. SG assembly pathways. Polysome disassembly can lead to the assembly of cytoplasmic granules known as processing bodies (PBs) or stress granules (SGs). If the deadenylation (e.g., CCR4/Not1), destabilization (e.g., TTP/XRN1) and decapping (e.g., DCP1/DCP2) complexes, and even the RISC (Ago) complex, are recruited to an mRNA, it will be targeted to PBs. Conversely, if TIA-1/TIAR or proteins such as G3BP/USP10 are recruited to the stalled initiation complexes, the mRNA will be directed to SGs. Different pathways of SG assembly are described (in red): (i) phosphorylation of eIF2α induced by exposure to different stress inducers (e.g., arsenite and thapsigargin) (Fig. 1); (ii) Hippuristanol and Pateamine A, drugs that inhibit the helicase activity of eIF4A by altering ATP binding or ATPase activity; and (iii) the overexpression of SG markers, such as G3BP or TIA-1. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Virus-mediated blockade of SG assembly

In 2002, the first evidence was reported of an interaction between viruses and what we understand to be protein components of SGs. Li et al. showed that the 3′ terminal stem-loop structure of the negative strand of the West Nile Virus (WNV) genome interacts with two SG markers, TIA-1 and TIAR. In support of the necessity of these virus-host interactions, WNV replication was reduced when TIAR−/− cells were infected. WNV is a neurotropic flavivirus responsible for viral meningoencephalitis; it has an enzootic cycle between mosquitoes and birds but can also infect amphibians, reptiles, horses and humans (Dauphin et al., 2004). Moreover, Emara and Brinton expanded these observations to other members of the same Flaviviridae family, showing that TIA-1/TIAR co-localize with viral replication complexes (dsRNA and NS3) in both WNV- and dengue virus-infected cells (Emara and Brinton, 2007). SGs can be induced in mammalian cells by several drugs (Kedersha and Anderson, 2007), apparently as a consequence of the phosphorylation of eIF2α. To determine whether viral infection would have any effect on SG assembly, Baby Hamster Kidney (BHK) cells were infected with wild-type WNV and subjected to arsenite-mediated oxidative stress.
Infected cells were found to be resistant to SG induction (Emara and Brinton, 2007). However, recent studies showed that a chimeric WNV produces high levels of an early viral RNA (W956IC), allowing PKR activation and subsequent SG induction, likely due to translational arrest (Courtney et al., 2012). Another flavivirus, Hepatitis C Virus (HCV), the major etiologic agent of hepatitis C in humans, is able to disrupt PB assembly but, at the same time, promote SG assembly during the course of viral infection (Ariumi et al., 2011). However, late in HCV infection, at 48 h post-infection, G3BP-1 and DDX6, both components of SGs (Table 1), are found to co-localize with the HCV core, resulting in the suppression of SG assembly. This blockade of SG assembly was found to be due to interactions of G3BP-1 with the HCV non-structural protein (NS)5B and with the 5′ end of the HCV minus-strand RNA (Yi et al., 2011). Thus, as shown in the examples above, through sequestration of factors essential for the assembly of SGs, several viruses have elaborated mechanisms to impose a blockade on SG assembly.

Some viruses inhibit cap-dependent translation (hence host cell mRNA translation) to ensure the synthesis of their own proteins. Pelletier et al. discovered that the translation of the uncapped picornaviral mRNA is mediated by an RNA structure known as the internal ribosome entry site (IRES) at the 5′ end of the viral RNA (Pelletier et al., 1988). Infection by poliovirus (PV), the etiologic agent of the paralytic disease known as poliomyelitis, induces the inhibition of cap-dependent translation initiation through the cleavage of the translation initiation factors eIF4GI, eIF4GII, and PABP mediated by viral proteinases (Gradi et al., 1998; Kuyumcu-Martinez et al., 2002). SG assembly is induced very early post-PV infection (at approximately 2-4 h), but the SGs later disappear because the viral 3C proteinase (3Cpro) cleaves G3BP-1, but not TIA-1 or TIAR, and thereby prevents SG assembly (White et al., 2007). The SGs found in PV-infected cells contain viral RNA and TIA-1 but are compositionally distinct, since they exclude well-described SG components such as G3BP-1, PABP, and eIF4G, all of which are eventually cleaved by 3Cpro (Piotrowska et al., 2010; White and Lloyd, 2011). Furthermore, PV infection also disrupts the assembly of PBs: during PV infection, Xrn1, Dcp1a and Pan3, three factors involved in mRNA degradation, decapping and deadenylation, respectively, undergo degradation or cleavage by the viral 3Cpro (Dougherty et al., 2011).

Likewise, Cricket Paralysis Virus (CrPV) infection of Drosophila cells leads to a rapid shut-off of host protein synthesis concomitant with phosphorylation of eIF2α (Wilson et al., 2000). Because these characteristics are common to the induction of SGs, Khong and Jan investigated SG assembly after CrPV infection. Using an immunofluorescence assay, the authors showed that Rox8 and Rin, the Drosophila SG marker homologs of TIA-1 and G3BP-1, respectively, do not aggregate in CrPV-infected cells, even in the presence of SG inducers such as heat shock, oxidative stress and Pateamine A. It was also demonstrated that the CrPV 3C proteinase is sequestered to SGs under cellular stress but not during virus infection (Khong and Jan, 2011). Another picornavirus, Theiler's murine encephalomyelitis virus (TMEV), which causes a demyelinating disease of the central nervous system similar to multiple sclerosis, also inhibits SG assembly. Borghese and Michiels
showed that TMEV infection induces SG assembly, but the expression of the leader (L) protein during infection was sufficient to inhibit SG assembly induced by arsenite-mediated oxidative stress or by thapsigargin-mediated ER stress. Unlike the effect of the PV 3C proteinase, G3BP-1 was not cleaved by TMEV and was in fact found in SGs post-TMEV infection (Borghese and Michiels, 2011).

For efficient protein synthesis, mRNA circularization is required during translation: PABP, bound to the 3′ poly(A) tail, interacts with eIF4GI at the 5′ end, circularizing the mRNA by linking its 5′ and 3′ ends and increasing the binding of eIF4E to the cap (Lopez-Lastra et al., 2010). Rotavirus, the causative agent of a common infantile gastroenteritis, subverts the host translation machinery at this step. Because rotavirus mRNAs are capped but lack poly(A) tails, the virus-encoded non-structural protein NSP3 binds to a consensus RNA sequence at the 3′ end of viral mRNAs, enabling mRNA circularization through an interaction with eIF4GI (Piron et al., 1998). As a consequence, a shut-off of host protein synthesis ensues, providing an advantage for viral protein synthesis. In infected cells, Montero et al. found that eIF2α is phosphorylated during the entire virus replication cycle, but this has no impact on the formation of viroplasms (cytoplasmic viral factories found in rotavirus-infected cells) or on viral replication and, surprisingly, SG assembly was not induced. One possible explanation for this observation is that PABP, a component of SGs (Table 1), translocates from the cytoplasm to the nucleus in rotavirus-infected cells in an NSP3-dependent manner (Montero et al., 2008).

In contrast, Junin virus (JUNV), which is responsible for Argentine hemorrhagic fever, is able to impair the phosphorylation of eIF2α. Linero et al. showed that in JUNV-infected Vero cells exposed to arsenite-mediated oxidative stress, eIF2α phosphorylation was impaired and SG assembly was not induced (Linero et al., 2011). Furthermore, the JUNV nucleoprotein (N) and/or the glycoprotein precursor (GPC) was responsible for this virus-induced blockade of SG assembly. However, when JUNV-infected cells were treated with hippuristanol, an inhibitor of eIF4A helicase activity that induces SGs in an eIF2α-independent manner (Mazroui et al., 2006), SG assembly was observed in 100% of cells, indicating that JUNV affects an unidentified event downstream of eIF2α phosphorylation or the integrity of viral mRNAs on polysomes (Linero et al., 2011).

Another virus that efficiently shuts off host protein synthesis is influenza A virus (IAV) (Kash et al., 2006). IAV is an animal pathogen that causes severe respiratory disease and pandemics in humans around the world. Viral transcription involves a cap-snatching mechanism during which a nucleotide sequence of 10-20 nt, including the 5′ cap structure, is cleaved from the 5′ end of cellular mRNAs. This sequence is used to prime transcription on the viral genome and is ultimately used during translation initiation of viral mRNAs (Lopez-Lastra et al., 2010). Additionally, IAV encodes cap-binding proteins that preferentially recognize capped viral mRNAs. The influenza non-structural protein 1 (NS1) binds eIF4GI and PABP-1, thus stimulating the assembly of the translation initiation complex on capped IAV mRNAs (Lopez-Lastra et al., 2010).
IAV actively suppresses SG assembly during viral infection, thereby allowing translation of viral mRNAs. Complete inhibition of SG assembly depends on the function of NS1 and its ability to inhibit PKR, the double-stranded RNA-activated protein kinase (Khaperskyy et al., 2011).

Recently, retroviruses such as human immunodeficiency virus type-1 (HIV-1) and human T-cell lymphotropic virus type-1 (HTLV-1) were shown to impose a blockade on SG assembly in infected cells. Recent work from the authors' laboratory showed that HIV-1 preferentially assembles ribonucleoprotein complexes, called Staufen1 HIV-1-dependent RNPs (SHRNPs), to which Staufen1, the viral genomic RNA and the structural protein Gag are recruited. These are compositionally different from SGs, since they do not contain many of the classical SG marker proteins (G3BP-1, eIF3, TIA-1, TIAR, HuR, PABP-1) but do contain Staufen1. The assembly of SHRNPs during the late stages of viral replication is believed to impose a blockade on the assembly of SGs while favoring the encapsidation of HIV-1 genomic RNA into assembling virus (Abrahamyan et al., 2010; White and Lloyd, 2012). Follow-up work, reported at the last International Nucleocapsid (NC) Meeting in Barcelona, Spain in September 2011, now demonstrates that the viral Gag protein controls the kinetics of SG assembly and interferes with the cellular stress response pathway (Valiente-Echeverría et al., unpublished). The oncoretrovirus HTLV-1 elicits a blockade of SG assembly in a different manner, mediated by the viral regulatory protein Tax. Legros et al. observed that Tax relocates from the nucleus to the cytoplasm in response to environmental stress. While present in the cytoplasm, Tax interacts with histone deacetylase 6 (HDAC6), a critical component of SGs (Kwon et al., 2007), and thereby impairs SG assembly (Legros et al., 2011). While the details of the mechanisms by which viruses elicit favorable environments in which to replicate will require further work, the sequestration of factors critical for SG induction by viral proteins is an increasingly studied area of research and should yield important new information on how viruses gain control over host cell biology.

While all of the examples described above belong to RNA viruses, Herpes simplex virus (HSV) and Cytomegalovirus (HCMV) are the only members of the DNA virus family that have been shown to regulate SG assembly. HSV-1 causes a shut-off of host cell protein synthesis through the virion host shutoff (Vhs) protein and subsequently induces degradation of cellular RNAs (Kwong and Frenkel, 1987). Several Adenosine-Uracil (AU)-rich element binding proteins that promote mRNA stability, such as TIA-1/TIAR and TTP (Bevilacqua et al., 2003), were upregulated in HSV-1-infected cells (Esclatine et al., 2004). TTP and TIA-1/TIAR were activated during the infection and accumulated in the cytoplasm, but only TTP was able to interact with Vhs. As a consequence, SGs were not observed after infection (Esclatine et al., 2004). More recently, Finnen et al. have shown that HSV-2 infection blocks SG accumulation in cells exposed to arsenite-mediated oxidative stress, but not in cells exposed to Pateamine A, a drug that induces SG assembly in an eIF2α-independent manner (Finnen et al., 2012). These results are similar to those found in JUNV-infected cells, described above (Linero et al., 2011).
On the other hand, HCMV infection induces an unfolded protein response (UPR) and activates PERK, but eIF2α phosphorylation levels remain limited and viral RNA translation is maintained (Isler et al., 2005b). Likewise, the same group showed that SG assembly was suppressed in HCMV-infected cells treated with the ER stressor thapsigargin (Isler et al., 2005a). As discussed in this section, viruses have evolved different mechanisms to inhibit SG assembly and thereby ensure efficient and unmitigated replication.

Virus-mediated induction of SG assembly

Some studies have demonstrated that SG assembly is not always correlated with a shut-off of host protein synthesis (Kimball et al., 2003; Loschi et al., 2009). Moreover, other authors have shown that SGs can sequester apoptotic molecules, favoring cell survival upon exposure to certain types of stress such as heat shock (Kim et al., 2005; Tsai and Wei, 2010). Thus, virus-mediated induction of SG assembly also represents a strategy employed by some viruses to ensure replication. Respiratory Syncytial Virus (RSV), which is responsible for lower respiratory tract illnesses in both infants and the elderly, induces SGs during the course of infection (Lindquist et al., 2010). Lindquist et al. showed a correlation between higher viral protein levels and the presence of SGs in infected cells. In addition, G3BP−/− cells, which are unable to generate SGs because of a disrupted g3bp gene locus, exhibited diminished RSV replication (Lindquist et al., 2010). However, a later study by the same group concluded that the stress response may not play an important role in viral replication: they saw no difference in viral replication in cells that were unable to elicit a stress response because PKR had been depleted by siRNA (Lindquist et al., 2011). This later study also noted that RSV infection does cause eIF2α phosphorylation and that PKR is needed to induce SGs during viral infection. These results indicate that the assembly of SGs neither aids nor interferes with the replication of this virus.

In the Reoviridae family, the stress response involving SGs has been implicated in viral replication. Mammalian orthoreovirus (MRV) infection in humans is usually asymptomatic or associated with symptoms of a common cold. During the early stages of infection, MRV induces SG assembly and the expression of the transcription factor ATF4 through eIF2α phosphorylation (Smith et al., 2006). The assembly of SGs creates a competitive advantage for the translation of viral mRNAs because cellular mRNAs are sequestered in SGs. When ATF4 is expressed in MRV-infected cells, viral production increases by up to 100-fold (Smith et al., 2006). A later study implicated SG assembly in viral replication, since SG formation occurs after viral uncoating but before viral mRNA transcription (Qin et al., 2009). Qin et al. (2011) found that viral mRNAs escape translational inhibition when SGs are disrupted and that viral translation occurs in the presence of high levels of phosphorylated eIF2α in a manner that is independent of PKR inhibition. This study also noted that MRV-infected Cos7 cells are able to block the assembly of SGs induced by arsenite-mediated oxidative stress later in infection (Qin et al., 2011).
The implication of these findings is that the stress response and the resulting assembly of SGs must be involved in the early stages of the viral replication cycle but are ultimately detrimental to the virus if it cannot disassemble SGs during the later stages of infection. Semliki Forest Virus (SFV), which causes lethal encephalitis in rodents, seems to modulate the cellular stress response in a fashion similar to MRV. Upon infection, SFV induces the phosphorylation of eIF2α and promotes SG assembly in mouse embryo fibroblasts (MEFs) (McInerney et al., 2005). Despite the shut-off of host protein synthesis during these events, SFV is still able to translate its mRNA owing to a translational enhancer element present in the viral genome. The same study indicated that areas around viral RNA in the cytoplasm were devoid of SGs. This observation likely indicates that viral proteins or viral RNA can locally disassemble SGs to favor viral translation, and this was shown to correlate with increased viral RNA levels (McInerney et al., 2005).

The theme of utilizing the stress response to shut off host protein synthesis appears once again in the Coronaviridae. The mouse hepatitis coronavirus (MHV), which is closely related to the SARS coronavirus, has been shown to subvert the host translation machinery through eIF2α phosphorylation (Raaben et al., 2007). eIF2α phosphorylation also leads to the assembly of SGs and PBs. A genome-wide microarray analysis of regulated mRNAs in MHV-infected LR7 cells revealed decreased expression of many cellular mRNAs, which may be due to an increase in PB activity and function (Raaben et al., 2007). Likewise, viral RNA transcripts make up 40% of the total RNA in the cell, so the virus may overload the host cell cytoplasm to ensure that its transcripts will be translated (Raaben et al., 2007). However, the authors concluded that the inhibition of cellular translation is not beneficial to the virus: in systems lacking the ability to inhibit cellular translation, viral production did not change, and thus the assembly of SGs in MHV-infected cells does not appear to dramatically favor viral replication (Raaben et al., 2007).

Finally, Rubella virus (RUBV) infection generates aggregates of G3BP-1 in the cytoplasm (Matthews and Frey, 2012). These aggregates differ from typical SGs because they do not contain proteins such as PABP and TIA-1 (Table 1). RUBV is a positive-strand RNA virus whose replication is mediated by a double-stranded RNA (dsRNA) intermediate. Matthews and Frey found that G3BP-1 does not overlap with dsRNA but rather colocalizes with viral ssRNA in perinuclear clusters (Matthews and Frey, 2012), suggesting that these may represent sites of encapsidation (Beatch and Hobman, 2000).

Conclusions and future directions

Despite an intensifying research focus on the relationships between the cytoplasmic RNPs called SGs and virus replication (refer to Table 2), many questions remain to be answered in this growing field of virology. The roles of many SG components (Table 1) that have been found to participate in viral replication, either by inclusion or exclusion, remain incompletely defined in host cell biology. As well, the literature has only touched the surface of how viruses hijack and commandeer SG components. In several cases in which SG assembly is shown to be inhibited, it remains unclear whether viruses block the assembly or induce the disassembly of SGs.
There is also a need to determine at what level viruses hijack or co-opt the host cell stress responses that produce SGs, and to understand how SGs may lead to deleterious effects if they remain present during viral infection. Indeed, further characterization of a virus's ability to overcome the inhibition of SG assembly, or to induce SG assembly to prevent translation of host mRNAs, may be beneficial in developing new anti-viral drugs that could be useful against multiple viruses. Anti-cancer drugs such as etoposide, bortezomib and doxorubicin do induce SG assembly; however, their roles as anti-virals are not known (Arimoto et al., 2008; Fournier et al., 2010; Morita et al., 2012). The many mechanisms by which viruses inhibit or induce SGs may pose a problem for developing a broad anti-viral drug targeting SGs. Viruses such as PV, which inhibit SG formation through cleavage, would likely be unaffected by drugs that activate the stress response upstream of these cleaved factors. Another caveat to the potential use of these drugs is that SG formation may help the replication of certain viruses that induce SGs to create a better environment for their replication. The knowledge gained on the biology of SGs and how they are influenced by viral infection will help to further characterize innate responses to infection and to reveal how this system can be taken advantage of to curb viral infections.
Preliminary Assessment of Individual Zone of Optimal Functioning Model Applied to Music Performance Anxiety in College Piano Majors

Individual zone of optimal functioning (IZOF) is a psychological model studied and applied to quantify athletes' anxiety and predict their achievement in sports competitions. This study aimed to determine whether the IZOF model can be applied to evaluate music performance anxiety (MPA) in pianists, because the causes of anxiety in athletes and musicians may be similar. A total of 30 college-level piano-major students were included in the study, and their anxiety level in performance was scored with the Competitive State Anxiety Inventory-2 questionnaire. In the first phase, participants recalled and self-scored the four most important performances of the past year, and seven piano teachers scored those performances. Both sets of results were combined to identify the individual IZOF zone. Each student showed different anxiety scores for cognitive state anxiety (CA), somatic state anxiety (SA), and self-confidence (SC). In the second phase, all participants scored their anxiety level 1 day before the final performance, and the same judges evaluated the performance immediately afterward. A total of 60% of the participants who had at least two subscales inside the IZOF received performance scores greater than 90. In conclusion, the IZOF model provides information for both piano teachers and pianists to help review their anxiety intensity and, to some extent, predict their performance scores.

INTRODUCTION

Research on music performance anxiety (MPA) has been conducted for several decades and is still ongoing (Fishbein et al., 1988; Steptoe, 2001; Kenny and Osborne, 2006; Kenny, 2011; Topoglu, 2014; Guyon et al., 2020b). MPA is a globally negative and debilitating psychological phenomenon in musicians regardless of age, gender, experience, practicing time, and music genre (Brugués, 2011a,b; Studer et al., 2011; Barbar et al., 2014; Nusseck et al., 2015; Bannai et al., 2016; Sousa et al., 2016; van Fenema et al., 2017; Burin et al., 2019; Guyon et al., 2020a). MPA has been identified in music students and shown to be associated with statistically significant differences in various psychological constructs, including optimism, self-efficacy, achievement motivation, and sensitivity to reward and punishment (Alzugaray et al., 2016). A significant relationship has been reported between the age of starting musical training and the individual's current perceived level of MPA; students who started at the age of 7 or younger showed lower levels of MPA (Zarza-Alzugaray et al.). Furthermore, the MPA level increased among advanced conservatory students during their 4-year university-level studies. A previous study revealed that 33.9% of participants had used substances to cope with MPA and that more than half of them had considered abandoning their musical studies; participants who used substances had more frequent thoughts of giving up their musical career and a higher level of MPA than control students (Orejudo Hernández et al., 2018). The relevance of family support for self-efficacy in public performance was mediated by MPA and showed consequent differences between genders (Zarza-Alzugaray et al., 2020). Social support, such as that from parents, teachers, and peers, was crucial for predicting self-efficacy for learning in students from advanced music schools (Orejudo et al., 2021).
Nevertheless, MPA is a validated construct that can harm musicians' performance quality and their careers (Osborne and Kenny, 2005; Yoshie et al., 2009; Davison, 2020). Musicians may be ashamed to admit that they are suffering from performance anxiety (Bodner and Bensimon, 2008; Brugués, 2009). Performance anxiety comprises a series of psychosomatic manifestations and has been a furtive concept for musicians, causing doubt about their performance quality (Lee, 1988). In addition, music educators have often consciously avoided this issue in their teaching, since anxiety management is typically beyond their training, talent, practice, experience, and dedication (Nideffer and Hessler, 1978). Anxiety has been thought to both facilitate and attenuate individuals' performances (Burton and Naylor, 1997). Performers with facilitative anxiety often described it as excitement, being pumped, or being "in the zone," and they did not seek help from psychologists or other treatment professionals to reduce their anxiety (Lehrer et al., 1990; Robertson and Eisensmith, 2010). Wolfe also noted that MPA had positive effects on performance and explained these as an adaptive component of MPA (Wolfe, 1989). The adaptive component, also known as functional anxiety, readies the performer for the challenge ahead by directing preparatory arousal into practical, task-oriented actions (Mor et al., 1995). Therefore, anxiety reduction may not be the most appropriate intervention strategy for managing performance anxiety and achieving peak performance (Chamberlain and Hale, 2007). A growing number of clinical reports, especially in the field of music performance, show that some musicians need to experience pre-performance anxiety to perform at their best level (Nideffer and Hessler, 1978). In these cases, MPA was viewed as a more positive emotion in the performance of specific individuals (Kendrick et al., 1982; Brodsky, 1996; Kim, 2005; McGinnis and Milling, 2005). Meanwhile, MPA has also been described as a more neutral concept, a healthy, everyday aspect of the stress and anxiety intrinsic to the music profession. Brodsky pointed out the complex designs of previous studies and revealed misleading definitions and ineffective remedies for managing performance-related psychological problems in musicians, indicating that the interaction between anxiety level and actual performance remained in question and needed more research (Brodsky, 1996). One naturally hesitates when faced with these contradictory views of, and treatments for, MPA. If there is a type of anxiety that facilitates performance, how would it present? If this anxiety feels different to different individuals, what would be the difference between those who perceive anxiety as excitement and those who perceive anxiety as a catastrophe? Various representative theories have explained the relationship between performance and emotions (reflecting mental and physical arousal). Sports psychologists increasingly agree that unidimensional approaches to the arousal-performance or anxiety-performance relationship are ineffective and simplistic (Hanin, 2000). Thus, approaches that use a single cumulative anxiety score to describe the relationship between performance and emotions are inadequate for examining an occupation with the complex emotional and motor-skill requirements of music performance. More multidimensional approaches have been called for in anxiety-related research.
In the 1980s, Hanin introduced the theory of the individual zone of optimal functioning (IZOF), which proposed that an athlete's performance is successful when his or her precompetitive anxiety is within or near the optimal zone (Hanin, 2000). It is a theoretical, multidimensional approach to describe, predict, and explain athletes' performance-related biopsychosocial states that affect individual activity. Athletes were asked to recall and imagine their biopsychosocial states, and a personal IZOF was established to predict their future performances. The IZOF framework, which proposes that performance is successful when pre-competitive anxiety lies within or near the zone, has been widely applied among athletes (Hanin, 2000, 2010; Harmison, 2006; Robazza et al., 2016, 2018; Ruiz et al., 2017, 2019; Cooper et al., 2021) and in physical activity at school (Robazza and Bortoli, 2005; Morano et al., 2020). Compared to research exploring emotions in sports and the individual optimal zone, far less research has been published applying IZOF theory to MPA treatments (Yao, 2016). In music performance settings, as the subjective experience of anxiety varies from person to person, so does the optimal zone. By defining the optimal functioning zone for individual pianists and predicting upcoming performance results, this study verified that IZOF can effectively describe, predict, explain, and regulate piano performance-related biopsychosocial states as well. In particular, the location and width of the IZOF helped determine a possible range of performance scores. We conducted a pilot study and found the IZOF zone in two cases. The best performances of these two pianists fell inside the IZOF zone, with a significantly higher average in-zone score than out-of-zone score (Yao, 2016). In the present retrospective study, the IZOF model was assumed to be fully applicable to piano performance analysis. Moreover, the performance prediction process showed that it is vital to know each pianist's IZOF, since it varies widely from person to person and may shape each pianist's personal mental and physical practice. This study aimed to clarify pianists' personal IZOF zones, assess the contribution of MPA to their optimal performance, and examine the prediction accuracy for future performance results. This information may help pianists prepare better and regulate their mental and physical states before future performances. Participants A total of 30 participants aged between 18 and 24 years were enrolled, including 7 male and 23 female advanced pianists. Participants were all undergraduate piano majors at a conservatory in Beijing, China, and came from 13 different provinces in China. At the time of the survey, 6 were sophomores, 10 were juniors, and 14 were seniors, all in a 4-year bachelor's degree system. The study protocol is shown in Figure 1. Ethics Statement This study was conducted anonymously. No names or other identifying personal data were recorded. Consent forms were sent to the participants to be filled out and signed before the study began, and all included students provided signed informed consent. This study posed no risks to the physical or psychological state of the participants. Scoring the Piano Performances The performances of each participant were evaluated by a group of seven professional college piano teachers.
Each teacher evaluated the performance of each participant on a scale of 1-100, where 1 = worst possible performance and 100 = best possible performance. The judges were told to score each performance immediately after it ended, based on the participant's playing; the score was to represent an overall impression of the performance. The highest and lowest scores were removed in the final grading, and the remaining five scores were averaged to give the final performance result. The First Phase of This Study: Locating the Zone In the first phase of the study, participants were required to reflect upon their past four performances (two mid-term and two final examinations in the past academic year) and complete the Competitive State Anxiety Inventory-2 (CSAI-2). The CSAI-2 is a self-reported inventory with 27 simple questions. It took about 5 min to complete each evaluation and was used to measure the performance anxiety state. It gives the anxiety level on three different dimensions (subscales), namely, somatic anxiety (SA), cognitive anxiety (CA), and self-confidence (SC). The subscale scores of each dimension range from 9 to 36. According to the data collected from the CSAI-2, the IZOF in the SA, CA, and SC dimensions was identified separately for each pianist. Statistically, the IZOF is given by M ± 1/2 SD, where M is the mean of the CSAI-2 subscale scores corresponding to the personal best piano performance and SD is the standard deviation of the CSAI-2 subscale scores. The difference (D) between the mean performance score of in-IZOF and out-of-IZOF performances was calculated, together with the percentage of performances for which the participant's CSAI-2 subscale score fell below, in, or above the IZOF zone. Furthermore, the relative efficacy of the method for determining anxiety was assessed by the percentage of correct classifications on the IZOF (i.e., the percentage of outstanding performances inside the IZOF and less-than-outstanding performances outside the IZOF). We defined an outstanding performance as one exceeding the mean plus one SD of the four performance scores of the 30 pianists; outstanding performances in the first phase of the study were therefore set at 92 (performance score mean: 88.98, SD: 3.47). Performance evaluations were made by the panel of seven piano teachers serving as judges, as described earlier. The Second Phase of This Study: Performance Prediction In the second phase, we evaluated the predictive accuracy of the CSAI-2. The IZOF zone was identified again, and predictions could be made for each subject based on their answers to the CSAI-2 before an upcoming jury. The IZOF theory was used for performance prediction and analysis. Subjects answered the CSAI-2 on the day before their final jury of the semester. The anxiety intensity from the three subscales was compared with the upper and lower thresholds of the corresponding zones to see whether the subjects' anxiety fell within their zones. After the final jury, performance evaluations were made and collected by the same group of judges using the same method. Data were collected to examine the hypothesis that the IZOF model can help to predict an upcoming performance and can be fully applied to the description, explanation, assessment, and prediction of piano performance anxiety.
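The zone and classification arithmetic just described is simple enough to express in a few lines. Below is a minimal Python sketch, not taken from the paper: the example data are hypothetical, and the choices of the sample standard deviation and of a single best performance (ties ignored) are our assumptions.

```python
import numpy as np

def izof_zone(subscale_scores, performance_scores):
    """IZOF bounds (lower, upper) for one CSAI-2 subscale of one pianist.

    subscale_scores    : CSAI-2 scores (9-36) for the recalled performances
    performance_scores : averaged jury scores (1-100) for the same performances
    Zone = M +/- 1/2 SD, where M is the subscale score at the personal best
    performance and SD is the spread over all recalled performances.
    """
    s = np.asarray(subscale_scores, dtype=float)
    p = np.asarray(performance_scores, dtype=float)
    m = s[np.argmax(p)]        # subscale score of the personal best performance
    sd = s.std(ddof=1)         # sample standard deviation over recollections
    return m - sd / 2.0, m + sd / 2.0

def in_zone(score, zone):
    lo, hi = zone
    return lo <= score <= hi

# Hypothetical pianist: four recalled performances.
ca   = [19, 24, 28, 26]        # cognitive-anxiety subscale per performance
jury = [85, 88, 91, 90]        # averaged jury scores
zone = izof_zone(ca, jury)
print(zone, in_zone(25, zone))  # classify a new pre-jury CA score of 25
```

The same routine is applied per pianist and per subscale (CA, SA, SC), so each individual ends up with three zones.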
Statistical Analysis Continuous data were presented as mean, SD, minima, and maxima. Categorical data were presented as counts and percentages. In the first phase of this study, we constructed the IZOF for each pianist and calculated the performance score difference between in-IZOF and out-of-IZOF performances. Furthermore, we calculated the percentage of outstanding and less-than-outstanding performances correctly classified by the IZOF. In the second phase of this study, the correlation between how many of a pianist's subscales were in or out of the IZOF and the performance score was analyzed. We calculated the predicted in-zone performances and the statistical description of the actual performance scores. Scatterplots were drawn to show the relationship between the pre-performance CSAI-2 scores, the individual IZOFs, and the jury's performance scores. The distance from the closest zone border was computed for each subscale, with the distance set to 0 if a value fell within the zone. We used the Spearman correlation coefficient to quantify the correlation between this distance and the performance score, because these data did not follow a normal distribution. A two-sided p-value of <0.05 was regarded as statistically significant. Data management and statistical analyses were conducted using SAS version 9.4 software (SAS Institute, Inc.). RESULTS Table 1 summarizes the overall descriptive statistics for the CSAI-2 subscales. The average CA score of the 30 students corresponding to the best performances is 18.0 ± 4.9 (minimum-maximum: 11-34), the average SA score is 17.3 ± 4.8 (minimum-maximum: 11-26), and the average SC score is 20.9 ± 5.5 (minimum-maximum: 11-31). Students' states of CA, SA, and SC corresponding to the personal best performance thus differ from student to student. The individual IZOF, given by M ± 1/2 SD, is shown in Supplementary Table 1 (IZOF, individual zone of optimal functioning; M, mean of CSAI-2 subscales corresponding to the best performance; SD, standard deviation of CSAI-2 subscales; S, score of performance). The values of D (mean in-IZOF score minus mean out-of-IZOF score), ranked from large to small, showed that the differences in SA and SC follow a similar trend (Supplementary Table 1). The differences in performance scores in CA, SA, and SC range from 3.3 to 11. The average D in CA, SA, and SC is 6.2 ± 1.9, 6.2 ± 2.0, and 6.1 ± 2.1, respectively. For example, for student #14, the difference in all three subscales reaches 11, three of the performance scores are in the IZOF area (75%), and the best performance score is 91 ± 1 points. Table 2 shows the average correct classification (in percentage) of outstanding performances inside the IZOF and less-than-outstanding performances outside the IZOF with the CSAI-2 questionnaires. Outstanding performances were set at a score of 92 points. The IZOF in CA, SA, and SC yields an average of 84.2, 80.8, and 77.5% correct predictions, respectively (range over the three subscales: 50-100%). Two figures are presented to display two contrasting cases of personal IZOF. Figure 2 shows the performance distribution in the three subscales of student #14: if this student has a high CA (IZOF: 24.0-28.0) and SA (IZOF: 23.4-26.6) and a low SC (IZOF: 13.9-16.1), he or she would perform well. Figure 3 shows that if student #29 has a low CA (IZOF: 18.0-20.0) and SA (IZOF: 15.6-16.4) and a high SC (IZOF: 20.8-25.2), he or she would have a less-than-ideal performance. In the second phase of the study, the 30 participants completed the new IZOF assessment. Table 3 shows the predicted in-zone performances and the statistical description of the actual performance scores. A total of 14 (46.7%) students had all three subscales inside the IZOF, with an average performance score of 93.4 ± 1.5 (minimum-maximum: 91-96). A total of 60% of the participants had at least two subscales inside the IZOF and also received performance scores ≥90 out of 100.

FIGURE 2 | The best performance scores and CSAI-2 subscales for participant #14.
The green band represents the IZOF of participant #14. The red and blue horizontal lines represent the lowest and highest of the best performance scores. The blue spots indicate the performance scores.

In total, 10 (33.3%) students had none of the subscales inside the IZOF, with an average performance score of 86.2 ± 1.4 (minimum-maximum: 84-88). Altogether, 18 (60%) and 16 (53.3%) students had CA and SA scores, respectively, that fell above or in the IZOF, while 12 (40%) students had SC scores that fell below the IZOF. Figure 4 shows the correlation between the distance from the closest zone border and the performance score. The distances of the CA and SA scores from the IZOF have significant, strong negative correlations with the pianist's performance score (CA: ρ = −0.79, p < 0.001; SA: ρ = −0.86, p < 0.001). The distance of the SC score from the IZOF has a moderate negative correlation with the pianist's performance score (SC: ρ = −0.55, p = 0.002). DISCUSSION This is the first study to apply IZOFs to musicians using CSAI-2 subscales. The results verify the individual nature of each pianist's zone on each subscale and demonstrate the zone's efficiency in describing the relationship between MPA and optimal performance. The IZOF theory can thus be used for pre-performance anxiety analysis and performance prediction. The IZOF zone was found for two cases in our previous pilot study; all of their best performances fell inside the IZOFs, and their average in-zone performance score was significantly better than the average out-of-zone score (Yao, 2016). The present study further reveals the regularities in the relationship between an individual's anxiety intensity and his or her piano performance results. Everyone has a different optimal level of anxiety intensity. Therefore, applying the IZOF theory to music performance offers a new perspective on managing performance anxiety. With the help of the IZOF model, the study can define the "zone" in a quantified and measurable way. Moreover, with more than four sets of CSAI-2 data provided by pianists, the IZOF model may well be applied to predict pianists' upcoming performances more precisely. Music performance anxiety has been observed from different perspectives and studied with countless methods for many years, and researchers will continue studying this area as cognitive and psychological science develops. However, no matter how deeply this area is studied, individual differences in reaction to performance anxiety cannot be denied, especially among advanced music majors in colleges. Music interpretation is based on technique but is an emotion-supported performance activity. It involves a great deal of personal and emotional investment, which increases uncertainty and contributes to anxiety. Individual reactions to MPA vary widely among college-level pianists. In China, students enrolled in music conservatories have already achieved an advanced level of proficiency in piano performance. However, not all of them are aware of their optimal zone for performance or attempt to master every public performance consistently and with applied consciousness.

FIGURE 3 | The best performance scores and CSAI-2 subscales inside or outside the IZOF for participant #29. The green band represents the IZOF of participant #29. The red and blue horizontal lines represent the lowest and highest of the best performance scores. The blue spots indicate the performance scores.
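The distance-to-zone statistic and Spearman analysis reported in the Results can be sketched as follows. The numbers are hypothetical stand-ins for the study's data (the actual analysis used SAS 9.4), and scipy's spearmanr is used here in place of the SAS procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def zone_distance(score, lower, upper):
    """Distance from the closest IZOF border; 0 if the score is inside."""
    if score < lower:
        return lower - score
    if score > upper:
        return score - upper
    return 0.0

# Hypothetical pre-jury CA scores, individual CA zones, and jury scores.
ca   = np.array([19, 25, 31, 16, 22])
lo   = np.array([20, 24, 24, 18, 21])
hi   = np.array([24, 28, 28, 22, 25])
jury = np.array([88, 94, 86, 87, 93])

dist = np.array([zone_distance(s, a, b) for s, a, b in zip(ca, lo, hi)])
rho, p = spearmanr(dist, jury)  # rank correlation; no normality assumed
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```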
As a result, even after years of training, only a few piano majors end up with a career in professional performance. With the application of the IZOF model, young pianists may become aware of dimensions beyond technical skill and finger ability that impact their performances. Knowing that the IZOF may help to enhance performance and improve personal satisfaction may be even more important than deciding whether one should continue a performance career despite MPA issues. One participant with a performance score greater than 90 had only one subscale inside the IZOF (Table 3, Combination 5). The participant's CA score was 19 and SC score was 28, both of which are very close to the lower thresholds of the optimal CA zone (19.68) and the optimal SC zone (28.11), respectively. This contradictory result may be eliminated by improving the accuracy with which the IZOF zone is identified; increasing the number of measurements used to construct the IZOF, or using prospective designs instead of recollection, may help. Limitations This study has several limitations. First, although scholars in sports psychology have called for testing the IZOF model in more performance-related domains (Spielberger, 2013), few studies of MPA have adopted it as an applicable theory; therefore, only limited resources can be found for comparison. Second, MPA might not be the only component affecting piano performance; the effects of other factors, such as self-efficacy or social support, may be underestimated and need to be considered. Third, the small numbers of piano juries and participants may bias the analyses and restrict future performance prediction; piano juries are typically held four times per year, far less frequently than sports competitions. Fourth, subjects were asked to reflect on their most impressive performances for the retrospective recollection, which may make it harder to define the precise zone from few recollections and may introduce inherent biases. Fifth, the lack of long-term tracking data may decrease the accuracy of the defined IZOFs. Finally, since the scores of well-trained advanced performing musicians showed little fluctuation, the differences in performance scores for each person were very subtle (e.g., the lowest score was 84 and the highest was 95 on a scale of 0-100), which may affect the accuracy of prediction. CONCLUSION Personal IZOF zones were identified for each of the 30 pianists. Notably, 60% of the participants had at least two subscales within the IZOF and also received performance scores ≥90 out of 100. The IZOF model provides information for both piano teachers and pianists to help review their anxiety intensity and predict their performance scores to some extent. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. AUTHOR CONTRIBUTIONS ZY: guarantor of integrity of the entire study, study concepts, study design, definition of intellectual content, literature research, data analysis, statistical analysis, manuscript preparation, manuscript editing, and manuscript review. YL: literature research, clinical studies, experimental studies, and data acquisition. Both authors contributed to the article and approved the submitted version.
FUNDING This research project was supported by the Science Foundation of Beijing Language and Culture University (supported by "Fundamental Research Funds for the Central Universities") (Approval number: 19YBB25).
2022-04-07T13:25:47.462Z
2022-04-07T00:00:00.000
{ "year": 2022, "sha1": "d86c4dcb67ba8c1d71634ec16658ca60cce39e1e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d86c4dcb67ba8c1d71634ec16658ca60cce39e1e", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
153312695
pes2o/s2orc
v3-fos-license
Finite-size and finite-time effects in large deviation functions near dynamical symmetry breaking transitions We introduce and study a class of particle hopping models consisting of a single box coupled to a pair of reservoirs. Despite being zero-dimensional, in the limit of large particle number and long observation time, the current and activity large deviation functions of the models can exhibit symmetry-breaking dynamical phase transitions. We characterize exactly the critical properties of these transitions, showing them to be direct analogues of previously studied phase transitions in extended systems. The simplicity of the model allows us to study features of dynamical phase transitions which are not readily accessible for extended systems. In particular, we quantify finite-size and finite-time scaling exponents using both numerical and theoretical arguments. Importantly, we identify an analogue of critical slowing down near symmetry breaking transitions and suggest how this can be used in the numerical studies of large deviations. All of our results are also expected to hold for extended systems. I. INTRODUCTION In recent years, there has been much interest in large deviation functions (LDFs, see [1] for a review) encoding the probability of atypical fluctuations in time-averaged observables of many-body quantum [2][3][4][5][6][7][8][9][10] and classical stochastic systems. Of special interest have been LDFs of the time-averaged current and activity, the latter quantifying the mean frequency of dynamical events during a given observation period. Since both quantities are determined by the full history rather than the instantaneous state, even in thermal equilibrium their LDFs can exhibit unexpected behaviors. In particular, even if the steady-state probability distribution of instantaneous quantities, such as the density profile of particles in the system, contains no singularities, the LDF of time-averaged quantities can be singular, giving rise to a dynamical phase transition (DPT). Most of the DPTs have been obtained in many-body extended systems (see footnote 1) whose sizes are taken to be infinite. It is natural to ask how much of the observed phenomenology is related to the fact that these systems are extended. In this paper, we address this question by introducing a class of models consisting of a one-site (or single-box) system connected to a pair of reservoirs and studying their current and activity large deviations. Instead of taking a limit where the system size goes to infinity, we utilize a recently introduced formalism [67] where N, the maximum number of particles in the box, is arbitrarily large. Applying the saddle-point method, it is shown that even such models can exhibit DPTs induced by the breaking of the particle-hole symmetry, which was theoretically predicted [47,48] and numerically observed [68] in extended systems, with exactly the same critical exponents. Importantly, the reduced dimensionality of a single-box model allows us to easily predict and confirm the effects of finite time, T, and finite size, N, on the critical phenomena near a symmetry-breaking DPT for arbitrary hopping rates. In previous studies of extended systems, finite-size scaling theories have been proposed for first and second-order DPTs of an exclusion process [41,51,69] as well as for kinetically constrained models [54][55][56][70].
Much less is known about finite-time effects (see footnote 2), with only a few results concerning diffusive [71] and super-diffusive [72] relaxations of density fluctuations far away from any DPTs. For symmetry-breaking DPTs in extended systems with open boundaries, Ref. [48] used heuristic arguments to predict finite-time and finite-size scaling exponents. These, however, have not been verified. In this paper, based on studies of finite-T saddle-point trajectories and an exact diagonalization of the transition matrix at finite N, we identify both the finite-T and finite-N scaling exponents and propose a scaling form encompassing both. In particular, we are able to characterize in detail the different finite-T scaling regimes. We find a regime where the initial condition strongly influences the LDF and, as one might expect, a late regime where the initial conditions do not play any role. The results show that, near a symmetry-breaking DPT, a phenomenon analogous to critical slowing down appears. Namely, the relaxation of the system from a given initial condition becomes anomalously slow as the DPT is approached. This might be used to locate such DPTs in numerics [73][74][75][76][77][78][79][80][81] and possibly experiments by data collapse. (Footnote 1: See [53,63-66] for exceptions. Footnote 2: As we will see, the LDF in the infinite-time limit is given by the maximum eigenvalue of a well-defined operator, while the finite-time behavior of the LDF involves more eigenvalues.) The paper is organized as follows. In Sec. II, we introduce the single-box models and present a path-integral representation of their statistics. In Sec. III, we discuss how the theory of symmetry-breaking DPTs and the associated critical behaviors can be derived using a saddle-point method in the joint limit T → ∞ and N → ∞. In Sec. IV, based on both numerical diagonalization and theoretical arguments, we study finite-size and finite-time effects, allowing us to characterize the critical features of the DPT. Finally, we conclude in Sec. V. II. SINGLE-BOX MODELS WITH PARTICLE-HOLE SYMMETRY In this section, we describe the general setup considered in our study. First, we introduce a general class of single-box models. Focusing on a subclass of such systems which obey a particle-hole symmetry, we formulate their coarse-grained descriptions for large N. This allows us to study their DPTs using saddle-point asymptotics. A. General single-box models We consider a single box, whose state is characterized by the number of particles n inside. The box can hold at most N particles (0 ≤ n ≤ N) and is coupled to a pair of particle reservoirs. The left (right) reservoir is described as a box with a fixed number of particles n̄_a (n̄_b). The particles are exchanged with the left reservoir at the rates given in Eq. (1), where W_R(n_1, n_2) (W_L(n_1, n_2)) denotes the rate of hopping from the left (right) box to the right (left); see Fig. 1. Similarly, the exchange with the right reservoir is described by the rates in Eq. (2). We are interested in the statistics of current and activity during a time interval t ∈ [0, T].
Defining the numbers M_R(T) and M_L(T) of rightward and leftward hops across either of the two bonds connecting the reservoirs to the system, we have the time-averaged current per bond, J_T = [M_R(T) − M_L(T)]/(2T), and the time-averaged activity per bond, K_T = [M_R(T) + M_L(T)]/(2T) [Eqs. (3) and (4)]. The joint scaled cumulant generating function (CGF) Ψ(λ, µ) for J_T and K_T is defined through e^{TΨ(λ,µ)} = ⟨e^{T(λJ_T + µK_T)}⟩ at large T [Eq. (5)], where ⟨·⟩ denotes the average over histories. Using standard methods, described in Appendix A, one can show that Ψ admits a path-integral representation [Eq. (6)] with an effective Hamiltonian

H_{λ,µ}(n, n̂) ≡ W_R(n̄_a, n)\,[e^{n̂+(µ+λ)/2} − 1] + W_L(n̄_a, n)\,[e^{−n̂+(µ−λ)/2} − 1] + W_L(n, n̄_b)\,[e^{n̂+(µ−λ)/2} − 1] + W_R(n, n̄_b)\,[e^{−n̂+(µ+λ)/2} − 1].   (7)

Here n̂ is a momentum (integrated along the imaginary axis) conjugate to n, and the Lagrange multiplier λ (µ) is a counting variable conjugate to J_T (K_T). We are mainly interested in models presenting second-order singularities in the scaled CGF. As we show below, these naturally occur for a class of models whose dynamics obey a particle-hole symmetry. For simplicity, we first consider the case where the two reservoirs have equal densities, n̄_a = n̄_b = N/2, which captures all the essential physics of the DPT. The generalization to the boundary-driven case n̄_a ≠ n̄_b is discussed in Appendix B.
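As a concrete illustration of the model defined in this section, the following Python sketch runs a continuous-time (Gillespie-type) simulation of the single box and measures J_T and K_T. The placeholder rate function w and the way the bulk drive enters (a factor γ = (ν + 1)/ν multiplying rightward hops) are our assumptions standing in for the general rates W_R and W_L; only the bookkeeping of hops follows the definitions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N, T, w, nu=2.0):
    """Continuous-time simulation of the single-box model.

    w(n1, n2): symmetric part of the hopping rate (model-dependent placeholder).
    nu: bulk-drive strength; rightward hops carry gamma = (nu+1)/nu relative
    to leftward ones (an assumed parametrization of local detailed balance).
    Returns (J_T, K_T), the time-averaged current and activity per bond.
    """
    na = nb = N // 2          # equal reservoir occupancies
    n = N // 2                # initial occupancy of the box
    t, MR, ML = 0.0, 0, 0
    gamma = (nu + 1.0) / nu
    while t < T:
        rates = np.array([
            gamma * w(na, n) if n < N else 0.0,  # left  -> box   (rightward)
            w(na, n) if n > 0 else 0.0,          # box   -> left  (leftward)
            gamma * w(n, nb) if n > 0 else 0.0,  # box   -> right (rightward)
            w(n, nb) if n < N else 0.0,          # right -> box   (leftward)
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # waiting time
        k = rng.choice(4, p=rates / total)       # which hop occurs
        if k == 0: n += 1; MR += 1
        elif k == 1: n -= 1; ML += 1
        elif k == 2: n -= 1; MR += 1
        else: n += 1; ML += 1
    return (MR - ML) / (2 * T), (MR + ML) / (2 * T)

J, K = simulate(N=64, T=200.0, w=lambda a, b: 1.0)
print(J, K)
```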
B. Particle-hole symmetric models The particle-hole symmetry is implemented by choosing a dynamics which is invariant under the combined operation of the particle-hole exchange and the exchange of the reservoir locations. This is achieved by imposing the condition of Eq. (8). As stated above, we focus on the case where the reservoir densities are N/2. We also assume that each hopping across a bond obeys local detailed balance, so that the rate of a rightward hop and that of a leftward one differ only through a global field (bulk drive) [Eq. (9)]; here ν > 0 controls the strength of the field. To simplify the notation, we write the rate of a rightward hop from the left reservoir into the box as in Eq. (10). Then, using Eqs. (8), (9), and (10), the four hopping rates in Eqs. (1) and (2) can be written in the compact form of Eq. (11). We note that, to impose the bound 0 ≤ n ≤ N, the hopping rates are further constrained by Eq. (12). With these choices, the Hamiltonian in Eq. (7) takes the form of Eq. (13), which can be rewritten as Eq. (14); here we used the definitions γ ≡ (ν + 1)/ν and z = z(λ, µ) [Eq. (15)]. We note that the unbiased state λ = µ = 0 corresponds to z = 1. From Eqs. (6), (14), and (15), one observes that the scaled CGF Ψ depends on λ and µ only through z. We also note that z obeys a relation reflecting the Gallavotti-Cohen symmetry [82]. So far we have described the microscopic dynamics in the sense that the discrete nature of the particles is maintained. We next formulate a coarse-grained description of the dynamics for large N, which makes the models easier to study by changing to continuous state variables and facilitating saddle-point techniques. C. Coarse-grained description for large N To take the large-N limit, it is useful to define the rescaled fields (ρ, ρ̂) and to introduce a rescaled time together with the rescaled observables, n → Nρ and n̂ → ρ̂ [Eq. (17)], where k is a positive number determined by the structure of the hopping rates (see below for examples). We note that the constraint (12) can now be written in rescaled form [Eq. (18)], which ensures 0 ≤ ρ ≤ 1. Using these in Eqs. (6) and (14), we obtain a rescaled path-integral representation for the scaled CGF ψ(z(λ, µ)) = N^{−k} Ψ(λ, µ) [Eq. (19)], with the action of Eq. (20), where the Hamiltonian is given by Eq. (21). The particle-hole symmetry of the system is reflected in a symmetry of the action [Eq. (22)]. For N ≫ 1, from Eqs. (19), (20), and (21), we find that ψ can be obtained by saddle-point asymptotics [Eq. (23)], where the minimum action is achieved by real-valued ρ and ρ̂ obeying the Hamiltonian dynamics of Eqs. (24) and (25). Although ψ(z) is defined only in the T → ∞ limit, the above saddle-point trajectories still describe the histories dominantly contributing to the finite-time scaled CGF whenever N is large. III. SYMMETRY-BREAKING DYNAMICAL PHASE TRANSITIONS We now calculate the scaled CGF ψ of the single-box model and show that, with a proper choice of rates, the model displays the same DPTs exhibited by extended systems. In particular, we are interested in the DPTs between a particle-hole symmetric phase and one where the symmetry is broken. A. Particle-hole symmetric phase It is easy to see that, for any λ and µ, Eq. (27) yields a time-independent, particle-hole symmetric solution of Eqs. (24) and (25). If this symmetric saddle-point profile truly minimizes the action, Eq. (23) then yields the symmetric-phase expression for ψ. Note that from here on we use the shorthand notations v̄ and v′ for v(ρ) and its derivative evaluated at the symmetric point ρ = 1/2. In Appendix C, we discuss the condition for the symmetric solution in Eq. (27) to be the dominant profile in the unbiased state z = 1. We find that v(ρ) being a monotonically decreasing function of ρ is a sufficient condition.
We also note that the mean current and activity are obtained from the above relations. A second-order DPT occurs when this symmetric solution becomes unstable with respect to small fluctuations as the value of z is changed. To this end, in the next section we study the Gaussian fluctuations of the action. B. Stability analysis The fluctuations of the action around the symmetric saddle-point solution (27) are described by the Gaussian action of Eq. (31), where ϕ_ω and φ̂_ω are the Fourier transforms of ϕ and φ̂ [Eq. (32)]. The eigenvalues of the associated matrix M for the typical state z = 1 are given by v̄ ± [v̄² − 4(v′)² − ω²/4]^{1/2} > 0, so that the symmetric solution is always stable in this case. As z moves away from 1, the stability is determined by the roots of the corresponding characteristic equation. For a DPT to occur, at least one of the roots should be real and positive. If this is the case, there are two possible scenarios. 1. Case of v̄ > 0. This case requires v̄ > 0, and the only positive root is always greater than 1 and reaches its minimum at ω = 0. Thus a DPT occurs due to a time-independent mode at z = z_c [Eq. (37)], which is always greater than 1. Revisiting Eq. (15), this implies that the symmetric (symmetry-broken) phase occupies the low-activity, low-current (high-activity, high-current) regime. A phase diagram in the λµ-plane corresponding to this scenario is shown in Fig. 2(a). As will be shown later, a DPT between these two phases occurs as a second-order singularity of ψ, shown in Fig. 2(b), with the optimal density ρ*_z minimizing the action exhibiting the clear bifurcations shown in Fig. 2(c), corresponding to the symmetry breaking. 2. Case of v̄ < 0. Here a positive root exists if and only if v̄ < 0. It is then always less than 1 and reaches its maximal value at ω = 0. Again, a DPT occurs due to a time-independent mode at z = z_c given by Eq. (37), which satisfies 0 < z_c < 1. Combining this with Eq. (15), we find that the symmetric (symmetry-broken) phase occupies the high-activity, high-current (low-activity, low-current) regime. A phase diagram in the λµ-plane for this scenario is illustrated in Fig. 2(d), with second-order singularities of ψ and the optimal density ρ*_z shown in Fig. 2(e,f). We note that while scenario 1 has been observed before in extended systems [43,47,48], we are not aware of any example of scenario 2, although it bears some similarities to the DPTs of the WASEP with open boundaries [47,48,68] if one shifts λ and µ appropriately. In all scenarios, a symmetry-breaking DPT occurs due to a time-independent mode. We next derive a Landau theory from first principles to describe the nature of the DPT in detail. C. Exact Landau theory for dynamical phase transitions Having shown that the DPTs are induced by time-independent modes, Eqs. (20) and (23) imply that the scaled CGF takes the form of Eq. (39), where ρ and ρ̂ are time-independent solutions of Hamilton's equations (24) and (25), and m = ρ − 1/2 is an order parameter quantifying the broken particle-hole symmetry. In the vicinity of a DPT, where ε_z ≡ (z − z_c)/z_c is of order m², one can straightforwardly check that a simple ansatz yields a time-independent solution of Eqs. (24) and (25) up to order m². Using this solution in Eq. (39) and expanding in m, we obtain a Landau expansion [Eq. (41)] with the coefficients given in Eq. (42). The Landau theory obtained above has the same form as the one describing symmetry-breaking DPTs in extended systems [47,48]. Thus the universal features of such DPTs are captured by our large-N single-box models, whose only degree of freedom plays the role of the largest-wavelength mode in extended systems.
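The mechanics of such a Landau theory can be made concrete with a short numerical check. In the sketch below, the quartic form max_m [a(ε_z) m² − b m⁴] with a linear coefficient a(ε_z) = a₁ε_z is an illustrative assumption (the paper's actual coefficients are model-dependent, Eq. (42)); it reproduces the generic second-order behavior: m ∼ ε_z^{1/2} above the transition and a jump in the second derivative of the CGF.

```python
import numpy as np

def landau_cgf(eps, a1=1.0, b=1.0):
    """Singular part of the scaled CGF from a quartic Landau function:
    psi_sing(eps) = max_m [ a(eps) m^2 - b m^4 ],  with a(eps) = a1*eps.
    The maximum is a^2/(4b) at m^2 = a/(2b) when a > 0, and 0 otherwise."""
    a = a1 * np.asarray(eps, dtype=float)
    return np.where(a > 0, a**2 / (4 * b), 0.0)

def order_parameter(eps, a1=1.0, b=1.0):
    a = a1 * np.asarray(eps, dtype=float)
    return np.where(a > 0, np.sqrt(np.maximum(a, 0.0) / (2 * b)), 0.0)

eps = np.linspace(-1.0, 1.0, 20001)
psi = landau_cgf(eps)
d2 = np.gradient(np.gradient(psi, eps), eps)  # second derivative of psi
# d2 -> 0 below the DPT and -> a1^2/(2b) = 0.5 above it: a finite jump,
# i.e., a second-order transition, with m ~ eps^(1/2) above it.
print(d2[eps < -0.5].mean(), d2[eps > 0.5].mean())
```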
Below we explicitly construct a single-box model motivated by the Katz-Lebowitz-Spohn (KLS) model [83] which illustrates the phenomenology described so far. Next, we examine the statistics of finite-frequency modes, which contain crucial information about the relaxation of the system near the transition. In particular, we find a behavior analogous to critical slowing down. D. Critical slowing down Let us define ε_z ≡ (z − z_c)/z_c. In the symmetric phase (for v̄ε_z < 0), from Eqs. (19), (31), and (32), we find that the Gaussian fluctuations around ρ = 1/2 are characterized by a probability distribution [Eq. (43)] involving a quantity τ_z which has dimension of time. In frequency space, the variance of the above distribution is a Lorentzian, where ⟨·⟩_z denotes an average over the ensemble biased by z. After applying the Fourier transform, the temporal correlations are obtained as an exponential decay on the scale τ_z. Thus τ_z is clearly interpreted as a correlation time, and its divergent behavior τ_z ∼ |ε_z|^{−1/2} near a DPT implies critical slowing down. While this derivation is valid only in the symmetric phase, it is natural to expect that the same scaling behavior will still hold in the symmetry-broken phase. E. Example of symmetry breaking: Symmetric Antiferromagnetic Process The KLS model is defined on a lattice where each site is occupied by at most one particle. The dynamics of the particles depend on nearest-neighbor interactions. Recently, it was shown that the KLS model, when connected to two reservoirs, exhibits a DPT when the interactions are sufficiently strongly antiferromagnetic [47]. In this case, the particles prefer a profile with only every second site occupied, which amounts to having a density ρ = 1/2. The noise strength in the dynamics is then found to have a local minimum at ρ = 1/2. To mimic this behavior, we study a single-box model with the hopping rates of Eq. (47), with ε > 0. These rates fulfill the conditions for the particle-hole symmetry and the bounded range of occupancy given in Eqs. (8) and (12). They also ensure that the hopping rate attains a local minimum when the two sites involved have an average occupancy (n_1 + n_2)/2 = N/2. For this reason, we refer to this model as the Symmetric Antiferromagnetic Process (SAP). For large N, we can use the coarse-grained description of Eqs. (19)-(21); by Eqs. (15) and (37), we then obtain the location z_c of the critical point. The corresponding Landau theory is derived from Eq. (41). Thus, if ε > 2, so that the coefficient of m⁴ is positive, the model exhibits symmetry-breaking DPTs with the symmetry-broken phase occupying the high-current, high-activity regime. An example was already shown for ε = 17 in Fig. 2(a-c). We again stress that this Landau theory is a direct analogue of the one describing the symmetry-breaking DPT of the KLS model in extended systems. Interestingly, if we generalize the model to negative values of ε (allowing the interactions to be ferromagnetic), the Landau theory predicts symmetry-breaking DPTs for −1 < ε < 0 as well. In this case, as illustrated for ε = −1/2 in Fig. 2(d-f), the symmetry-broken phase corresponds to the low-current, low-activity regime. For the sake of brevity, through the rest of this paper we shall focus on the proper SAP with ε > 2; however, all the results we discuss below are also easily applicable to the DPTs for −1 < ε < 0. IV. EFFECTS OF FINITE T OR N The simplicity of the single-box model provides a convenient avenue for addressing the effects of finite T or N on the symmetry-breaking DPTs, which are the main subject of this section.
First, taking N → ∞ but leaving T finite, we calculate analytically the optimal trajectory from a given initial state and show how its final point scales with T as the system approaches a symmetry-breaking DPT. Second, we consider the case T → ∞ with N finite and identify the exponents governing the finite-N critical scalings near the DPT. These results allow us to build a comprehensive scaling theory near a symmetry-breaking DPT for finite T and N. Formulation of the problem Near a DPT we only need to consider trajectories which are close to the symmetric solution (27). With these considerations in mind, it is convenient to perform a canonical change of variables to the fluctuation fields ϕ and φ̂. Since the transformation has a unit Jacobian, it does not introduce any additional term in the action. Thus, using Eqs. (20) and (21), the leading-order correction to the action arising from nonzero ϕ and φ̂ is obtained as ∆S_z[ϕ, φ̂] [Eq. (52)], written in terms of an effective Hamiltonian h(ϕ, φ̂) [Eq. (54)]. Our goal is to minimize ∆S_z[ϕ, φ̂] for given values of z and ϕ(0), the value of ϕ at time t = 0. In other words, we first find the action of the optimal Hamiltonian trajectory from ϕ(0) to ϕ(T), with the latter allowed to take any value; then, among all such trajectories, we choose the value of ϕ(T) which gives the minimal action. Exact calculation of the optimal final point To carry out the calculation of ϕ(T), we write the variations of ∆S_z for fixed ϕ(0) and ϕ(T). This gives us, as expected, Hamilton's equations [Eq. (56)]. Then, using Eq. (52) and allowing variations of ϕ(T), we obtain a boundary condition at the final time. This implies that, among all the solutions of Eq. (56), the one with the minimal action satisfies the final-time condition of Eq. (58). To proceed, we note that the above relation gives a conserved "mechanical energy" of the Hamiltonian dynamics as a function of ϕ(T) [Eq. (59)]. With this, the minimum of ∆S_z can be written as an integral over ϕ. Differentiating the rhs with respect to ϕ(T) and using Eq. (58), we find that the minimal ∆S_z requires ϕ(T) to satisfy Eq. (61). In the following discussions, the optimal ϕ(T) is obtained by solving this equation. Numerical results for the SAP With Eqs. (24), (25), and (61), we are ready to calculate the optimal finite-T trajectories for given z and ϕ(0). We first consider numerical solutions and identify different scaling regimes, each of which will be described by analytical arguments later. In Fig. 3, we illustrate such trajectories for the SAP with ε = 4 in the symmetry-broken phase; all of them start from the same initial state ϕ(0) and end up much closer to the symmetric state ρ = 1/2. As is evident from the data collapse, ϕ(t) and ϕ(T) exhibit different scaling behaviors near a DPT. In Fig. 4, using the SAP with ε = 4, we show that ϕ(T) exhibits three different scaling regimes depending on the duration of the observation period T: • Regime I. If the observation period is not long enough, the initial state ϕ(0) heavily influences the entire trajectory, including the final state ϕ(T), which obeys ϕ(T) ∼ ϕ(0)/T [Eq. (62)]. This scaling behavior is shown in Fig. 4(a). • Regime II. As the observation period becomes longer, the initial-state dependence starts to disappear after a time scale ϕ(0)^{−1}, beyond which proximity to the critical point becomes manifest in the power-law decay ϕ(T) ∼ T^{−2} [Eq. (63)], as also shown in the middle section of Fig. 4(b). At this stage, there is no distinction between the symmetric (ε_z < 0) and symmetry-broken (ε_z > 0) phases. • Regime III. When T is sufficiently larger than the correlation time scale τ_z ∼ |ε_z|^{−1/2}, ϕ(T) converges exponentially to zero in the symmetric phase (see Fig. 4(c)) and to nonzero values in the symmetry-broken phase (see Fig.
4(b)), as we show below: in the symmetric phase ϕ(T) decays as ϕ(T) ∼ |ε_z| e^{−2(γv̄z_c|a_z|)^{1/2} T}, while in the symmetry-broken phase it converges to a nonzero limiting value [Eq. (64)]. Based on these scaling behaviors, one can infer the following scaling forms describing the crossovers between adjacent scaling regimes: ϕ(T) = T^{−2} F_1(Tϕ(0)) between regimes I and II, and ϕ(T) = |ε_z| F_2(T²a_z) between regimes II and III. To be consistent with the scaling behaviors in each regime, the functions F_1 and F_2 should interpolate between the corresponding asymptotics [Eq. (65)]. The existence of such an F_1 (F_2) is manifest in the data collapse(s) shown in Fig. 4(a) (Fig. 4(b, c)). Due to the simplicity of the single-box models, all the numerical results discussed above can be theoretically derived from first principles, as we now show. Derivation of the scaling theory To analytically calculate ϕ(T) satisfying Eq. (61), one needs to examine the form of the Hamiltonian h(ϕ, φ̂). In what follows, we approximate h(ϕ, φ̂) by using Eq. (21) in Eq. (54) and expanding the latter for small ϕ and φ̂, where c ≡ γv̄z_c and L_z are as defined in Eqs. (41) and (42), respectively. As we show, the results below are unaffected by the neglected higher-order terms. This approximate formula has a convenient interpretation as the Hamiltonian of a Newtonian particle of mass 1/(2c), velocity φ̂, and position ϕ in an unstable quartic potential −L_z(ϕ), represented schematically in Fig. 5. Using Eqs. (54) and (59), the energy conservation h(ϕ, φ̂) = E(ϕ(T)) fixes φ̂ along the trajectory [Eq. (68)]. Near a symmetry-breaking DPT, it is natural to expect that the optimal trajectory stays close to the symmetric solution (27). Thus the initial velocity should be in the uphill direction. For generic situations near the DPT, we expect ϕ(0) to be well within the unstable branches of the potential (i.e., |ϕ(0)| ≫ |ε_z|^{1/2}); see Fig. 5. In this case, the sign of φ̂(0) should be opposite to that of ϕ(0). Using the above relation and Eq. (61), we obtain an integral expression for T, which can be further simplified to Eq. (71) by using Eq. (41) and noting that Eqs. (59) and (68) fix E(ϕ(T)), where the second approximation is due to the quartic potential. Regime I. Using a Taylor expansion, Eq. (71) can be approximated by its leading terms, implying ϕ(T) ∼ ϕ(0)/T. This scaling behavior is self-consistent if and only if the latter two terms on the rhs are much smaller than T, which requires T ≪ |ε_z|^{−1/2} and T ≪ 1/ϕ(0). Since we have already assumed ϕ(0) ≫ |ε_z|^{1/2}, the former condition is automatically implied by the latter. Therefore ϕ(T) ∼ ϕ(0)/T for T ≪ 1/ϕ(0), which is the same as Eq. (62). Regime II. For the moment, we assume that the dominating term on the rhs is given by the second argument of max[·], which yields ϕ(T) ∼ T^{−2}. This is self-consistent if 1/ϕ(0) ≪ T ≪ |ε_z|^{−1/2}, which is identical to Eq. (63). It is straightforward to show that other choices of dominating terms in Eq. (75) do not lead to self-consistent results. Regime III. Finally, we consider the case where the contribution from a_zϕ² is not negligible. Depending on the sign of a_z, it is natural to divide this regime into two different cases. For a_z < 0 (inside the symmetric phase), the integral in Eq. (71) can be dominated solely by the a_zϕ² term; since its dominance over cϕ(T)² + bϕ⁴ restricts the range of the integral, one finds ϕ(T) ∼ |ε_z| e^{−2(γv̄z_c|a_z|)^{1/2} T}. This scaling behavior is consistent with the range of the above integral if and only if T ≫ |ε_z|^{−1/2}, which reproduces the first part of Eq. (64). On the other hand, if a_z > 0, the situation is qualitatively different. For Eq. (71) to be consistent with positive and arbitrarily large T, the value of ϕ(T) must be such that the denominator of the integrand in Eq. (71) remains positive but approaches arbitrarily close to zero in some part of the trajectory.
Thus, ϕ(T) eventually converges to a nonzero value, in agreement with the second part of Eq. (64). As was already shown in Fig. 3, this limiting value of ϕ(T) is not equal to the minimum of L_z located at m_z = [a_z/(2b)]^{1/2}, but satisfies 0 < ϕ(T) < m_z < ϕ(0). Even then, the integral in Eq. (71) is dominated by the interval satisfying ϕ ≈ m_z, where the denominator of the integrand is very small. This implies that, as T becomes larger, the trajectory stays close to m_z for a longer period of time, as clearly shown in Fig. 3. These derivations fully justify the scaling behaviors stated in Eqs. (62), (63), and (64). Since the Landau-theory approach we have followed is rather general, we expect that similar behaviors will be observed not only in the DPTs of the single-box SAP, but in the broader range of generic symmetry-breaking DPTs described in Sec. III. General formalism In the case of finite N, one cannot rely on the saddle-point method, as fluctuations are not negligible. Instead, we consider the limit T → ∞ by studying the spectral properties of the stochastic process. To this aim, we consider a vector |G_{λ,µ}(t)⟩ in the Hilbert space representing a biased distribution built from J_t and K_t as defined in Eqs. (3) and (4), respectively, with ⟨·⟩_{n(t)=n} denoting an average over all histories under the constraint that the box has n particles at time t. Then it is known (see, for example, [84]) that |G_{λ,µ}(t)⟩ evolves according to a linear equation generated by the tilted generator W_{λ,µ}, an (N + 1)-by-(N + 1) matrix defined in Eq. (84), with integer indices m ∈ [1, N − 1] and n ∈ [0, N], where δ_{i,j} denotes the Kronecker delta. The scaled CGF is then obtained from the leading eigenvalue Λ_0(λ, µ) of W_{λ,µ} [Eq. (85)]. The Perron-Frobenius theorem implies that the leading eigenvalue Λ_0(λ, µ) is always unique, so that Ψ cannot have singularities at finite N. However, by examining how Ψ develops a second-order singularity in ψ as N → ∞, one can identify the scaling exponent governing finite-N effects in the λµ-plane. Moreover, the spectral gap ∆Λ between the leading and subleading eigenvalues, whose inverse characterizes the relaxation time scale, is also useful, as it reflects the effects of finite N on the critical slowing down. Exact numerical diagonalization of the SAP Using the SAP hopping rates (47) and the reservoir densities n̄_a = n̄_b = 1/2 in Eq. (84), the tilted generator of the SAP is obtained explicitly. In Fig. 6(a), we show the second-order derivative of the scaled CGF Ψ, which is calculated from the leading eigenvalue Λ_0 by Eq. (85). In the N → ∞ limit, as discussed in Sec. III C, the second derivative of the asymptotic scaled CGF ψ (thick black curve) has a jump discontinuity at λ = 0 as the symmetry is broken (for comparison, the continuation of the contribution from the symmetric solution is shown by a dashed black curve). While Ψ at finite N (thin colored lines) is always smooth, N^{−4}∂²_λΨ clearly approaches ∂²_λψ as N becomes larger. The inset shows that λ_x(N), defined as the value of λ where the finite-N and the asymptotic curves cross each other, converges to the DPT at λ = λ_c according to a power-law decay N^{−2/3}. We thus observe that the scale of λ characterizing the onset of finite-N effects is given by λ ∼ N^{−2/3}. In Fig. 6(b), we show how the spectral gap of W_z obtained at different values of N can be collapsed. As N increases, one observes a collapse to a linear behavior both in the main plot and the (log-linear) inset, which implies a scaling form in which the rescaled gap depends on a_z only through the combination a_zN^{2/3} (after replacing λ with a_z), where the function G shows the asymptotic behaviors of Eq. (89). As expected, the critical slowing down (i.e., the divergence of τ_z as ε_z → 0) is constrained by the finite value of N.
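The diagonalization procedure is straightforward to reproduce for any single-box model. The sketch below builds a tilted generator with the biased-hop structure of Eqs. (7) and (84) and extracts Λ₀ (and hence Ψ) together with the spectral gap. The rate function w and the bulk-drive factor γ are placeholders (the exact SAP rates of Eq. (47) are not reproduced here), so the numbers are illustrative only.

```python
import numpy as np

def tilted_generator(N, lam, mu, w, nu=2.0):
    """Tilted generator W_{lam,mu}: an (N+1)x(N+1) matrix on occupancies
    n = 0..N.  w(n1, n2) is a placeholder symmetric rate factor; the bulk
    drive enters as gamma = (nu+1)/nu on rightward hops (an assumption)."""
    na = nb = N / 2.0
    gamma = (nu + 1.0) / nu
    W = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        moves = []
        if n < N:  # particle enters from the left (rightward hop)
            moves.append((n + 1, gamma * w(na, n), np.exp((mu + lam) / 2)))
        if n > 0:  # particle leaves to the left (leftward hop)
            moves.append((n - 1, w(na, n), np.exp((mu - lam) / 2)))
        if n > 0:  # particle leaves to the right (rightward hop)
            moves.append((n - 1, gamma * w(n, nb), np.exp((mu + lam) / 2)))
        if n < N:  # particle enters from the right (leftward hop)
            moves.append((n + 1, w(n, nb), np.exp((mu - lam) / 2)))
        for m, rate, bias in moves:
            W[m, n] += rate * bias   # off-diagonal jump terms carry the bias
            W[n, n] -= rate          # escape rates remain unbiased
    return W

def cgf_and_gap(N, lam, mu, w):
    ev = np.sort(np.linalg.eigvals(tilted_generator(N, lam, mu, w)).real)[::-1]
    return ev[0], ev[0] - ev[1]      # Psi = Lambda_0, and the spectral gap

# SAP-like placeholder: rate minimal when n1 + n2 = N (here N = 64).
w = lambda n1, n2: 1.0 + 17.0 * ((n1 + n2) / 64.0 - 1.0) ** 2
print(cgf_and_gap(64, 0.1, 0.0, w))
```

Sweeping lam at several values of N and rescaling the gap by the appropriate powers of N is then all that is needed to produce a collapse of the kind shown in Fig. 6(b).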
While these observations are based on the numerical diagonalization of the SAP, we argue that they are relevant to a broad range of symmetry-breaking DPTs induced by the same mechanism, as supported by the heuristic argument described below. Argument for finite-N scaling To understand the finite-N scaling exponents identified above, we study how the finite-N corrections can become large enough to erase the second-order singularity of the scaled CGF ψ. Integrating the Gaussian fluctuations described by Eq. (43), one obtains the correction to the CGF in the symmetric phase (corresponding to a_z < 0, as explained in Sec. III B); as a corollary, the correction δψ to ψ is given by a frequency integral carrying an overall factor 1/N. We note that the factor T in the denominator is always cancelled by the IR cutoff of the integral. Moreover, due to critical slowing down (i.e., the small τ_z^{−2} in the denominator), near a DPT the low-frequency range dominates the integral. Thus we can write δψ ∼ N^{−1}τ_z^{−1} ∼ N^{−1}|ε_z|^{1/2}, which implies that δψ can remove the jump discontinuity of ψ only if |ε_z| ≲ N^{−2/3}. Assuming the scaling behavior to be homogeneous within this regime, this gives a heuristic explanation of why the finite-N scaled CGF Ψ converges to the asymptotic ψ according to a power-law decay N^{−2/3}, as shown in the inset of Fig. 6(a). We note that this argument is fully analogous to that of the finite-size scaling theory for symmetry-breaking DPTs in extended systems [48], with N playing the role of the linear system size; hence the same exponent 2/3 governs the finite-size scaling in both types of systems. We now turn to the scaling behavior of the spectral gap ∆Λ, whose inverse captures the dominant time scale. Close to a DPT on the side of the symmetric phase (a_z < 0), if the finite-N effects are negligible, the gap scales as ∆Λ ∼ N^{k−1}|ε_z|^{1/2}; here N^{k−1} stems from the rescaling of time shown in Eq. (17), and |ε_z|^{1/2} reflects the critical slowing down τ_z ∼ |ε_z|^{−1/2}. On the other hand, if we approach a DPT from the side of the symmetry-broken phase (a_z > 0) while keeping outside the finite-N scaling regime, the intermittent flipping between the two symmetry-broken solutions ϕ = ±m_z yields the dominant time scale. Since the effective potential scales as L_z ∼ m_z⁴ and the time scale of the dynamics is given by τ_z ∼ |ε_z|^{−1/2}, the cost of action associated with a single flip scales as N^k m_z⁴ τ_z ∼ N^k |ε_z|^{3/2}, which in turn implies a mean flipping time growing exponentially in N^k |ε_z|^{3/2} with a positive constant c′ in the exponent; thus ∆Λ in this regime is exponentially small in the same combination. The crossover between the above two scaling regimes is described by a scaling form [Eq. (98)] whose asymptotic behaviors interpolate between the two limits above, consistent with Eq. (89) and Fig. 6(b). Our argument thus suggests that the finite-N scaling behaviors observed numerically for the SAP in Sec. IV B 2 are also valid for a broad range of models with symmetry-breaking DPTs. C. Extended scaling hypothesis for finite T and N Combining all the scaling properties discussed in this section, we propose a joint scaling form covering the case where N and T are both finite. If O is an observable that scales as N^y at criticality, and if ⟨·⟩_{ϕ(0),z} denotes an average over all histories constrained by the given values of ϕ(0) and z, we propose an extended scaling hypothesis [Eq. (100)] valid close to a DPT, where T in the last argument is already rescaled by Eq. (17). It is straightforward to show that the scaling forms presented above are special instances of this scaling form. 1.
For O = ϕ(T), we use the scaling exponent y = −2/3, so that Eq. (100) gives the corresponding scaling form for ϕ(T). In the limit where N → ∞ while T is kept finite, one can define reduced scaling forms which are straightforwardly shown to reproduce the finite-T scaling hypothesis shown in Eq. (65). 2. We may choose O = T_traj, which denotes the dominant time scale (in the microscopic units used before the rescaling by Eq. (17)) governing the evolution of the conditioned trajectory ensemble. This observable is inversely proportional to the spectral gap ∆Λ, whose scaling exponent is y = k − 4/3; thus Eq. (100) implies a corresponding scaling prediction. In the limit where T → ∞ while N stays finite, we define a scaling form in which the first argument of F can take any value, the initial state being irrelevant as T goes to infinity. We then recover the finite-N scaling hypothesis for ∆Λ shown in Eq. (98). The extended scaling hypothesis (100) will be useful for studying critical phenomena near a symmetry-breaking DPT observed by numerical or empirical sampling of histories, for which the system size and the observation period are both finite. V. CONCLUSIONS In this paper, we introduced a class of single-box systems coupled to a pair of particle reservoirs. In the joint limit where the maximum number of particles N and the observation period T go to infinity, we showed analytically that such systems exhibit symmetry-breaking dynamical phase transitions (DPTs) in the form of second-order singularities in current or activity large deviations. Although the systems are zero-dimensional, their DPTs were found to reproduce the same critical exponents as those of extended diffusive systems coupled to boundary reservoirs. In addition, for the special case of the Symmetric Antiferromagnetic Process (SAP), we numerically identified the scaling exponents governing how finite T or N alters the singular behaviors around a DPT. We also found theoretical explanations for these exponents, using a generic dynamical Landau theory, which imply that the same exponents apply to other single-box models in general. While our discussions focused on the cumulant generating functions defined for conditioned trajectory ensembles, it is natural to expect that these scaling exponents also govern the rounding of the conjugate large deviation functions at finite T or N, which are more readily observable in empirical experiments. Despite the huge difference in the number of degrees of freedom, the single-box models capture the essence of the symmetry-breaking mechanism involving the longest-wavelength mode of an extended diffusive system. Thus it seems reasonable to conjecture that the critical phenomena of these two kinds of systems belong to the same universality class: the role played by the macroscopic length scale L in an extended system should be fully equivalent to that of N in a single-box model. Based on these considerations, it would be interesting to apply our finite-N and finite-T scaling hypotheses to identifying symmetry-breaking DPTs in numerical or empirical data generated by extended diffusive systems. Appendix A The derivation proceeds by discretizing time into I steps of size ∆t and counting the hops to and from each reservoir r ∈ {a, b}. Then, using the definitions of J_T and K_T shown in Eqs. (3) and (4), the time-averaged observables can be expressed in terms of the discrete hop counts, where we use the shorthand notation n_s ≡ n(s∆t). Thus we can write e^{TΨ(λ,µ)} as a sum over discretized histories, where P_{n_0} denotes the initial state distribution, ⟨·⟩_I stands for the average over all possible sequences of the I elementary steps, and the dynamics is enforced through Dirac delta functions. We also note that n̂_s corresponds to the auxiliary field variable in the standard Martin-Siggia-Rose (MSR) formalism [86].
The average $\langle\cdot\rangle_I$ can be evaluated using the probability distribution of all possible outcomes per time step: $(1, 0)$ with probability $W_R(\bar n_a, n_s)\,\Delta t$, $(-1, 0)$ with probability $W_L(\bar n_a, n_s)\,\Delta t$, and analogously for the exchanges with the other reservoir and for the no-jump event. Thus we finally obtain Eqs. (6) and (7).

Appendix B: Generalization to nonzero boundary driving

If the hopping rate has a multiplicative form $W_R(n_1, n_2) = U(n_1)\,V(n_2)$, the results discussed above can readily be generalized to the case of nonzero boundary driving, $\bar n_a \neq \bar n_b$. In this case, the particle-hole symmetry imposes a corresponding relation on the rates; thus the nonzero boundary driving only modifies the axis of the Gallavotti-Cohen symmetry.

Appendix C: A note on the steady-state distribution

To ensure that the symmetric profile (27) gives the true optimal profile for $\lambda$ and $\mu$ close to zero, we also require that $\rho = 1/2$ gives the typical state of the system in the (unconditioned) steady state. To identify the criteria for this requirement, we revisit the rate equations (1) and (2). Eq. (B3) implies that the rate equations can be combined into a single equation, so that, given $\bar v' < 0$, $P_s$ can be approximated by a Gaussian distribution. Thus, if one observes the system in the steady state, the typical deviation of the initial state from $\rho = 1/2$ has the scale $\sqrt{v/(N|\bar v'|)}$. This deviation plays an important role in the finite-$T$ corrections.
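As a quick numerical illustration of the argument in Appendix C, the following Python sketch samples a single-box birth-death chain and checks that the steady-state spread of $\rho$ around $1/2$ shrinks as $1/\sqrt N$. The rate functions `w_in` and `w_out` are illustrative placeholders, not the SAP rates.

```python
# Sketch: Monte Carlo check that the unconditioned steady state
# concentrates around rho = 1/2 with O(1/sqrt(N)) Gaussian fluctuations.
import numpy as np

rng = np.random.default_rng(0)

def w_in(n, N):   # particle injected from a reservoir (placeholder rate)
    return (N - n) / N

def w_out(n, N):  # particle absorbed by a reservoir (placeholder rate)
    return n / N

def steady_samples(N, steps=200_000):
    n = N // 2
    samples = np.empty(steps)
    for t in range(steps):
        rate_in, rate_out = w_in(n, N), w_out(n, N)
        if rng.random() < rate_in / (rate_in + rate_out):
            n = min(n + 1, N)
        else:
            n = max(n - 1, 0)
        samples[t] = n / N
    return samples[steps // 2:]       # discard the transient

for N in (100, 400, 1600):
    rho = steady_samples(N)
    # std * sqrt(N) should be roughly N-independent
    print(N, rho.mean(), rho.std() * np.sqrt(N))
```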
Glassy phases in Random Heteropolymers with correlated sequences

We develop a new analytic approach for the study of lattice heteropolymers, and apply it to copolymers with correlated Markovian sequences. According to our analysis, heteropolymers present three different dense phases depending upon the temperature, the nature of the monomer interactions, and the sequence correlations: (i) a liquid phase, (ii) a "soft glass" phase, and (iii) a "frozen glass" phase. The presence of the new intermediate "soft glass" phase is predicted, for instance, in the case of polyampholytes with sequences that favor the alternation of monomers. Our approach is based on the cavity method, a refined Bethe-Peierls approximation adapted to frustrated systems. It amounts to a mean-field treatment in which the nearest-neighbor correlations, which are crucial in the dense phases of heteropolymers, are handled exactly. This approach is powerful and versatile; it can be improved systematically and generalized to other polymeric systems.

I. INTRODUCTION

In the last 20 years much effort has been devoted to the theoretical study of heteropolymers [21,58]. One of the main motivations was to understand the statistical physics of protein folding [9,10,17,48,50,69]. Despite the insight that has been accumulated, the goal remains distant. On the one hand, most analytical studies have been limited to random bond models [20,60] (in which the interaction energies of all the couples of monomers along the chain are independent random variables), or to uncorrelated random copolymer sequences [19,57]. However, there are many indications that sequence correlations induced by natural selection play an important role for the folding and stability of proteins. On the other hand, in this difficult problem, analytic computations have to resort to some approximations which are not easy to control. It is thus important to have a variety of different techniques at hand in order to cross-check the predictions.

In this paper we develop a new tool for the analytical study of heteropolymers, based on the cavity method as used in various frustrated systems (a short account of our results has appeared in [43]). We use this method to investigate the phase diagram of copolymers with Markovian sequences. Within our approach we find copolymers to exist in three distinct dense phases (apart from the diluted coil phase at high temperature), depending upon the structure of the interaction energy matrix, the sequence correlations, and the temperature: (i) the liquid globule phase, in which distinct monomers are essentially uncorrelated and can freely rearrange within the globule (apart from obvious constraints on monomers that are close along the chain); (ii) the "frozen glass" phase, in which the polymer is stuck in one out of a few well-separated low-energy conformations; (iii) a "soft glass" phase with broken ergodicity (in the thermodynamic limit), in which the thermodynamically relevant conformations form a continuum in configuration space. This last phase has never been predicted in an analytical computation (although such a possibility has been envisioned in phenomenological models [50,52], and a very similar phase seems to be present in the numerical results of [67] on the dynamics of heteropolymers). Albeit frustrated, it has a much larger entropy, and appears already at a smaller density than the usual "frozen glass" phase.
Some of the most successful tools used so far in the study of random heteropolymers are mean-field approaches based on the replica method [20,57,60]. Crucial to these calculations was the identification of some relevant order parameter, and the proposition of a suitable Ansatz describing the phase transition in a coupled space of real-space coordinates and replica indices. This type of approach is potentially very powerful, but it becomes quite complex for heteropolymers. On the one hand, it requires a physical intuition for identifying the relevant degrees of freedom and their behavior. On the other hand, an Ansatz tailored to describe a certain type of physics may hide other, unexpected features.

Our cavity method consists in a refined version of the Bethe-Peierls approximation. While this also represents a kind of mean-field approximation, it differs fundamentally from the previous ones. Applying the Bethe-Peierls approximation to lattice heteropolymers allows one to describe the frustration self-consistently on a local, microscopic level. This approach can be thought of as the first step in the series of cluster variational (or Kikuchi) approximations [33]. Its general philosophy consists in keeping track of local correlations inside some small region exactly, while treating the external degrees of freedom as an environment whose statistical properties have to be determined self-consistently. In the Bethe approximation, the only correlations which are treated exactly are the ones between neighboring sites on the lattice. This is an improvement with respect to the naïve mean field that treats distinct sites as statistically independent. Moreover, it is the first of such approximations to be meaningful for polymers, since the backbone structure induces strong correlations between neighbors [1,2,3,4,5,47]. Another potential advantage of the cavity method is that it can be used for one given polymer, without the need to average over an ensemble of sequences as in the replica method. While in the present work we focus on ensemble-averaged properties, one should keep in mind this possibility, which could lead to interesting algorithmic developments in the future. Finally, the refined Bethe-Peierls approximation is supposed to be exact on locally tree-like structures (e.g., on random graphs). This is an important feature: it allows one to set up the mean-field analysis in a mathematically well-defined way, and its predictions can be checked against numerical simulations on those random "mean-field" lattices for which the theory is expected to be exact.

Within our cavity method, any heteropolymer is found to undergo a glass transition at large enough densities. Two main schemes of glass transitions can occur, depending on the details of the sequence, each of them being associated with one of the types of glasses mentioned above. The transition to the frozen glass phase is a discontinuous transition, called a random first order, or one-step replica symmetry breaking (1RSB), transition in the replica language. It corresponds to the type of transition which has been found in many previous studies, of which the Random Energy Model (REM) [14] is the simplest archetype. The transition to the soft glass phase is a continuous one, corresponding to full replica symmetry breaking (FRSB). This is more in line with recent scenarios proposing a freezing that proceeds gradually from small scales to larger and larger structures [46,65].
In a series of papers exploiting a Gaussian variational technique to deal with the dynamics of heteropolymers, copolymers in particular, a much richer phase diagram was proposed, where the ultimate REM-like folding to a unique ground state is preceded by a less structured but still frustrated glassy phase [66,67,68]. As for the glass transition, the random copolymer was proposed to be in the same universality class as the Ising spin glass [46], which would imply a continuous transition with a full breaking of the replica symmetry. Besides providing an alternative and well-controlled analytical approach, our cavity analysis adds to the above pictures in that it highlights how the expected scenario depends on the correlations of the monomer sequences. In order to keep the computations more transparent we avoid here the use of replicas (although it would be possible to write all of the ensemble-averaged cavity equations using replicas), but we keep to the traditional replica vocabulary of 1RSB and FRSB to denote the two types of transitions. We will apply here the general method to treat Markov-correlated sequences. However, a much wider range of possible applications of this technique is open.

The paper is organized as follows. In Section II we define the lattice model and review the treatment of polymers in the grand-canonical ensemble. We then introduce the basic ideas of the Bethe approximation and discuss the Θ-collapse from the random coil to the liquid globule phase. Section III discusses the shortcomings of the liquid solution and generalizes the method to the case where many pure states exist (as is typical in a glassy phase). In particular, we propose a set of local order parameters that allow one to distinguish, both theoretically and experimentally, between two different types of glass transitions. In Section IV we describe some basic tools for analyzing the glass transition. We present a local stability criterion for the liquid phase and the 1RSB cavity equations which are used to describe the glassy phase. This formalism is illustrated in Section V by considering the exemplary cases of alternating sequences with attractive or repulsive interactions of like monomers. It turns out that the two types of interactions imply very different phase transitions: either a continuously emerging "soft" glass phase or the "standard" discontinuous freezing transition. These two scenarios are found in the study of Markovian chains in Sec. VI. The properties of the strongly frozen phase are analyzed in Section VII by focusing on maximally compact conformations. We conclude with a summary of our results and a discussion of their relevance for protein folding. Several technical developments are included in the seven appendices.

II. THE CAVITY APPROACH TO HETEROPOLYMERS

In this Section we describe the type of heteropolymer models which we shall study. We derive their phase diagram under the assumption that the polymer is "liquid", meaning that any statistically relevant conformation is dynamically accessible to the molecule. In replica jargon this corresponds to assuming replica symmetry. The next sections will render more precise the regions of the phase diagram where this liquid phase is stable and corresponds to the physically relevant state.

A. The lattice polymer model

Our starting point is the standard model of lattice polymers [11,63], which we generalize for polymers living on a general graph $G$. We denote by $i, j, \ldots \in \mathcal V$ the vertices of $G$ (with $|\mathcal V| = V$), and by $(i, j), \ldots \in \mathcal E$ the edges of $G$.
Let $\omega = (\omega_1 \ldots \omega_N)$, $\omega_a \in \mathcal V$, denote a self-avoiding walk (SAW) of length $N$ on $G$. The position of a monomer along the chain is denoted by $a, b, \ldots \in \{1 \ldots N\}$, and we assume an interaction matrix $e_{ab}$ to be assigned. The corresponding energy reads

$$E(\omega) = \sum_{(a,b)} e_{ab},$$

where the sum runs over couples of non-consecutive monomers which are nearest neighbors on the lattice. The choice of the matrix $e_{ab}$ is crucial. The standard homopolymer model is recovered by setting $e_{ab} = e_0$. A popular model in heteropolymer studies is the random bond model [60], which assumes the $e_{ab}$ to be independent identically distributed (i.i.d.) quenched random variables. In this work we study the more realistic case where the interaction energies are determined by the underlying monomer sequence. The sequence will be given by $\{\sigma_1, \ldots, \sigma_N\}$, with $\sigma_a \in A$ being the type of the monomer at position $a$ in the sequence. The interaction energy of two monomers is assumed to depend only upon the monomer type: $e_{ab} = E_{\sigma_a \sigma_b}$. In particular, we shall focus on copolymers (although the approach is general), where there are only two types of monomers, $\sigma_a \in \{A, B\}$. Interaction matrices $E_{\sigma\sigma'}$ of particular interest are:

• The HP model. A and B monomers represent (respectively) hydrophobic and polar amino acids, and the interaction matrix is chosen accordingly, e.g., $E_{AA} = -1$, $E_{AB} = E_{BB} = 0$. This is a popular toy model for protein folding [15].

• The polyampholyte. A and B are supposed to carry screened charges, which suggests $E_{AA} = E_{BB} = +1$ and $E_{AB} = E_{BA} = -1$. Sometimes we shall refer to this interaction matrix as the antiferromagnetic (AF) model.

• The symmetrized HP model. We take $E_{AA} = E_{BB} = -1$ and $E_{AB} = E_{BA} = +1$. This is the standard model for copolymers with monomers that have a tendency to segregate [58]. We shall refer to it as the ferromagnetic (F) model.

As for the graph $G$ we shall consider two particular cases: (i) a $V$-site portion of the $d$-dimensional cubic lattice; (ii) a $V$-site Bethe lattice, i.e., a random lattice with connectivity $k + 1$. Its interest stems from the observation that, in the thermodynamic limit, our mean-field calculations are exact on such a graph.

Both for our analytical computations and for the simulations on the Bethe lattice we shall need to consider periodic sequences with period $L$: $\sigma_i = \sigma_{i+L}$. The complete sequence is therefore determined by its first period $(\sigma_1 \ldots \sigma_L)$. Hereafter, we shall use the shorthand notation "monomer $a$" to refer to all monomers in positions $a + nL$ with integer $n$. Furthermore, monomer indices should always be read modulo $L$. We expect the non-periodic case to be recovered in the $L \to \infty$ limit, even if this limit is taken after the limit $N, V \to \infty$.

In order to understand the influence of the correlations in the sequence of monomers, we shall consider Markovian random copolymer chains in the large-$L$ limit. In these chains the probability of a monomer to be of a certain type depends only on the preceding monomer in the sequence. For the sake of simplicity we assume the two types of monomers to occur with the same frequencies. The statistical ensemble of the chains is then fully characterized by the probability $\pi \in [0, 1]$ of a monomer to be of the same type as the preceding one.
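Since the sequence ensemble is fully specified by $\pi$, it is easy to sample it and to verify the exponential decay of correlations quoted in Sec. VI below. The following Python sketch is a minimal illustration; the sequence length and the value of $\pi$ are arbitrary choices.

```python
# Sketch: generating a Markovian AB sequence with persistence probability
# pi (the probability that a monomer repeats the type of its predecessor)
# and checking the decay of the autocorrelation q_i = (2*pi - 1)**i.
import numpy as np

rng = np.random.default_rng(1)

def markov_sequence(L, pi):
    """Return a +/-1 sequence (A = +1, B = -1) with P(same type) = pi."""
    s = np.empty(L, dtype=int)
    s[0] = rng.choice([-1, 1])
    for a in range(1, L):
        s[a] = s[a - 1] if rng.random() < pi else -s[a - 1]
    return s

def autocorrelation(s, i):
    """Periodic autocorrelation q_i = (1/L) sum_a s_a s_{a+i}."""
    return np.mean(s * np.roll(s, -i))

L, pi = 100_000, 0.3
s = markov_sequence(L, pi)
for i in range(1, 6):
    print(i, autocorrelation(s, i), (2 * pi - 1) ** i)
```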
We study the system at thermal equilibrium at a temperature $T = 1/\beta$. We define a canonical free energy density $f_L(\beta, \rho)$ and its grand-canonical counterpart $\omega_L(\beta, \mu)$, where the expectation value $\mathbb E_G$ entering their definitions is taken with respect to the graph ensemble (whenever $G$ is a random graph). The $L \to \infty$ limit, and the expectation with respect to the sequence $(\sigma_1 \ldots \sigma_L)$, are (eventually) taken afterwards. The two free energies defined above satisfy the usual Legendre transform relation $\omega_L(\beta, \mu) = f_L(\beta, \rho) - \mu\rho$.

In order to describe free polymers (in equilibrium with the solvent), the chemical potential has to be adjusted to the critical value $\mu_c$ such that $\omega_L(\mu_c) = 0$ [12]. In the grand-canonical picture this critical line corresponds to a phase transition between an infinitely diluted phase for $\mu < \mu_c$ and a dense phase with non-vanishing osmotic pressure for $\mu > \mu_c$. If this phase transition is continuous, the density on the coexistence line vanishes, while it is finite if the transition is first order. On this coexistence line, the tricritical point where the nature of the transition changes is nothing but the Θ-point where the collapse of the unconstrained polymer takes place. In a homopolymer, the above description captures the essentials of the phase diagram [36]. However, in a heteropolymer, the low-temperature dense phase will be strongly influenced by the sequence heterogeneity. Due to the connectivity of the polymer chain, it is in general impossible to find a compact folding where all interactions are favorable. The system is frustrated, and a glass transition will take place at sufficiently low temperature.

B. The Bethe-Peierls approximation

As already mentioned, the Bethe approximation is asymptotically exact on locally tree-like graphs. Following [40], we define a Bethe lattice as a random lattice with fixed connectivity. Such a lattice is locally tree-like, since the typical loop size diverges as $\log V$ with the lattice size. In order to handle the heteropolymer problem on a $d$-dimensional hypercubic lattice within the Bethe approximation, our approach idealizes the graph as a Bethe lattice with the same connectivity, $k + 1 = 2d$.

The local tree structure of the graph can be exploited in a recursion procedure. Suppose for a moment that the lattice is a tree, and let us single out a single branch of the tree which is rooted at one 'cavity site' 0 having only $k$ neighbors $i = 1, \ldots, k$. In the absence of 0, the branch would become a collection of $k$ other branches, rooted at $i = 1, \ldots, k$. This structure allows for a recursive computation of the probabilities of the polymer's conformations on the tree. We first list the possible local conformations $\alpha$ of the cavity site 0 in its branch (see Fig. 1); $p^{(i)}_\alpha$ is the Boltzmann weight for the configuration $\alpha$ on $i$ when the site 0 is absent. We will refer to these weight vectors on root sites as cavity fields. The mapping between cavity fields, $p^{(0)} = I[p^{(1)}, \ldots, p^{(k)}]$, can be written explicitly as in Eqs. (4)-(7), where $C \equiv C[\{p^{(i)}\}]$ is a normalization constant enforcing the condition $\sum_\alpha p^{(0)}_\alpha = 1$, and where some auxiliary quantities have been introduced. The full lattice is built by merging $k + 1$ branches. Therefore, once the cavity fields have been computed, one can express any local quantity using the neighboring cavity fields.
The monomer density $\rho^{(i)}$ at site $i$ is a function of the $k + 1$ cavity fields $p^{(j)}$ on the $j = 1, \ldots, k + 1$ neighboring sites of $i$ (recall that $p^{(j)}$ gives the probability of a local conformation on $j$ in the absence of $i$); the corresponding normalization constant is denoted $w_s$. The internal energy $u_{ij}$ of a link $(i, j)$ can be written in terms of the cavity fields on $i$ and $j$ (giving the probabilities of local conformations on $i$ and $j$ in the absence of the link $(i, j)$), where $n_{ij}(a, b)$ is the probability of having a contact between two monomers $a$ and $b$ along the link $(ij)$ of the graph, and the normalization is denoted $w_l(p^{(i)}, p^{(j)})$.

For each edge $(i, j)$ of a given graph, one can introduce a pair of cavity fields, describing respectively the probability of local configurations of the two points $i$ and $j$ in the absence of the edge $(i, j)$. One can write a Bethe free energy, Eq. (13), which is a functional of all these cavity fields and has Eqs. (4)-(7) as stationarity conditions; it involves the quantities $w_s^{(i)}$ and $w_l^{(ij)}$ given in (10) and (12), respectively. Notice moreover that the density (9) and the internal energy (11) can be obtained by differentiating the Bethe free energy with respect to the chemical potential $\mu$ and the inverse temperature $\beta$.

It is easy to show that the above expressions are exact if the graph $G$ is a tree. On a general lattice they hold approximately, to the extent that one can neglect the correlations between the fields on the $k + 1$ neighbors of any site $i$ once the site $i$ itself has been deleted. On a Bethe lattice, since the typical loop size diverges as $\log V$ in the large-$V$ limit, these $k + 1$ neighbors of $i$ are generically distant from each other when $i$ is absent. Therefore the correlations of their fields can be neglected if the system is in a single pure state: at low temperature the Gibbs measure usually has to be decomposed into pure states, within which the correlations between two sites decay with their distance along the graph. We thus expect the above cavity approximation to become asymptotically exact, insofar as cavity fields are computed within one pure state.

C. The liquid solution and the Θ-point

Both on the random Bethe lattice and on the $d$-dimensional cubic graph, each site generically has the same environment within any distance $R$ (as long as $R$ is kept finite in the $V \to \infty$ limit). A liquid phase is therefore expected to enjoy translational invariance, and will be described by a set of fields $p^{(i)}_\alpha$ that is independent of the site. We thus look for a fixed point $p^{(i)}_\alpha \equiv p^*_\alpha$ of the recursions (4)-(7). It turns out that the liquid solutions can be found by solving a system of $|A| + 2$ non-linear equations, $|A|$ being the number of monomer species in the model. This is a great complexity reduction with respect to the $3L + 1$ equations (4)-(7). The task can be further simplified by using particular symmetries of the interaction matrix. This is, for instance, the case of the F- and AF-models defined in Sec. II A, which are symmetric under the interchange A ↔ B. We refer to App. A for a detailed discussion of how the solution is obtained.
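In practice the liquid fixed point can be found by straightforward fixed-point iteration of the cavity map. The sketch below shows the iteration logic only, in Python, with a toy placeholder standing in for the actual map $I$ of Eqs. (4)-(7); the damping factor and the toy map are assumptions, not part of the model.

```python
# Sketch: damped fixed-point iteration for a translation-invariant
# (liquid) cavity field p*, for a generic message map I that maps k
# input field vectors to one normalized output field.
import numpy as np

def cavity_map(fields):
    """Toy placeholder for I[p^(1), ..., p^(k)]."""
    combined = np.prod(fields, axis=0)
    return combined / combined.sum()   # normalization plays the role of C

def liquid_fixed_point(k, n_conf, damping=0.5, tol=1e-12, max_iter=10_000):
    p = np.full(n_conf, 1.0 / n_conf)  # uniform initial condition
    for _ in range(max_iter):
        p_new = cavity_map(np.tile(p, (k, 1)))  # all k inputs equal to p
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = damping * p_new + (1 - damping) * p
    raise RuntimeError("no convergence")

print(liquid_fixed_point(k=5, n_conf=4))
```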
As shown in Appendix A, all the thermodynamic quantities depend upon the sequence $(\sigma_1 \ldots \sigma_L)$ only through the fractions $\nu_\sigma$ of monomers of type $\sigma$. As a byproduct, the $L \to \infty$ limit can immediately be taken. The physical meaning of this result is easily understood. In the liquid phase, the correlations induced by the sequence play some role just along the chain, and their net effect vanishes at large distance. In particular, monomer $a$ is surrounded by a certain fraction of monomers of type $\sigma'$ which only depends on the type $\sigma_a$ of $a$ (apart from the sites occupied by the monomers $a - 1$ and $a + 1$, of course).

Let us now discuss the various solutions of liquid type. The random coil phase is described by the trivial solution $p^*_\alpha = \delta_{\alpha,0}$, which exists for any choice of the parameters. This phase has vanishing grand potential $\omega$ and density $\rho$. At high temperatures this is the only solution when $\mu$ is smaller than the critical chemical potential $\mu_c$ given by $\exp(\beta\mu_c) = 1/k$. At $\mu_c$ a non-trivial solution emerges continuously. The latter describes a liquid phase under pressure ($\omega > 0$ for $\mu > \mu_c$) with a density that vanishes on approaching the critical line. The collapse of a free polymer from the random coil state to the liquid globule occurs at the so-called Θ-point. In the grand-canonical description, it appears as the tricritical point on the line $\exp(\beta\mu) = 1/k$. Expanding around $p^*_\alpha = \delta_{\alpha,0}$, one obtains the relation (14) which determines the Θ-point temperature; see App. A. This result has previously been obtained within the framework of the standard cluster variational method [54].

At temperatures below the Θ-point, $\beta > \beta_\Theta$, the grand-canonical phase transition becomes first order (see Fig. 2). The critical line $\mu_c(\beta)$ is obtained by equating the grand potentials in the coil and globule phases, i.e., by solving $\omega = 0$ for the globule solution. The density, internal energy, and free energy are obtained by plugging the globule solution $p^*_\alpha$ into Eqs. (9), (11), (13). In the low-temperature region $\beta > \beta_\Theta$, the dense solution can be continued to values of the chemical potential smaller than the critical one $\mu_c(\beta)$, and ceases to exist on a spinodal line. Likewise, the trivial dilute solution stays locally stable beyond the coexistence line, up to the line $\exp(\beta\mu) = 1/k$. Numerical estimates of the Θ-point temperature on hypercubic lattices are available for $d = 2$ [24], and give $T_\Theta = 3.716(7)$ for $d = 3$ [64] and $T_\Theta = 5.98(6)$ for $d = 4$ [53]. Moreover, the authors of Ref. [35] found $T_\Theta = 2.25(10)$ on the three-dimensional diamond lattice (connectivity $k + 1 = 4$). These results should be compared with the outcome of the Bethe approximation, cf. Eq. (14). Finally, several numerical studies [6,41] have focused on the Θ-point of random bond models, and have argued that its location is extremely well approximated by an annealed computation. Once again, this confirms that Eq. (14) is a reasonable approximation (the random bond model is recovered by setting $|A| = L$, $\nu_\sigma = 1/L$, and taking the $E_{\sigma\tau}$ to be i.i.d. random variables). This is also related to the numerical finding that the global collapse in protein folding dynamics is essentially insensitive to the specific structure of the sequence, and only depends on its global composition [9].

III. GLASS PHASES

If we follow the entropy density $s(\beta)$ of the liquid solution as a function of temperature, we find that in any heterogeneous sequence $s(\beta)$ turns negative at sufficiently low temperatures. This indicates the existence of a phase transition to a glass phase which breaks the translational invariance. As we will show, this glass transition can be of two types. In certain sequences the "entropy crisis" is preceded by a local instability of the cavity recursions (4)-(7) around the liquid fixed point $p^*_\alpha$. This implies the divergence of a properly defined spin-glass susceptibility, and signals a continuous glass transition towards a phase with fully broken replica symmetry.
In other sequences, and in the Gaussian random bond model, this local instability is irrelevant, since it occurs, if at all, in the region of negative entropy of the liquid globule. The glass transition is thus necessarily discontinuous (1RSB), as was predicted from replica calculations for the random bond model [59].

Dealing with the glass phases requires some modifications of the simple Bethe-Peierls approximation which we have been using so far. In this section we first describe some general properties of the glass phases, and then explain the general technical tools that can be used to study glass transitions using the cavity method.

A. Proliferation of pure states

In a glassy phase, the space of conformations is expected to split up into a multitude of pure states that are separated by large free energy barriers. The slowest time scale of the system, corresponding to jumps between pure states, increases dramatically. In the mean-field approximation, or on the Bethe lattice, this time scale diverges and ergodicity is broken at the "dynamic" phase transition. The system eventually undergoes a "static" phase transition (with a non-analyticity in the thermodynamic potentials) at a lower temperature [8,34]. In a finite-dimensional model the "dynamic" phase transition becomes a crossover where the nature of the most important dynamical processes changes. Whether the "static" phase transition survives in a given model or not is not known in general. We shall not enter this dispute here, since we have little to say about it. In any case, the mean-field-like Bethe approximation, assuming the existence of many pure states, yields some useful insight on the glass phase.

Within one pure state, the conformational probabilities on a given site are well defined [39,40]. However, there is no reason to assume the equality of local fields on different sites. Rather, one expects that in a given pure state the sites will have different preferences for certain polymer conformations. To proceed, one has to use a statistical description of local fields. We shall not explain here all the details of this description, but just give the main definitions and refer the reader to [39,40] for detailed discussions. In a glassy phase, the number of pure states $\mathcal N_V(\omega)$ increases exponentially with the volume of the system. The complexity $\Sigma(\omega)$ is the monotonically increasing, concave function defined by $\mathcal N_V(\omega) \sim \exp(V\Sigma(\omega))$. The natural order parameter is the distribution of local fields over the pure states $\gamma$ whose free energy density $\omega_\gamma$ is fixed to a value $\omega_0$:

$$\rho(p) \propto \sum_{\gamma:\ \omega_\gamma = \omega_0} \delta\big(p - p^{(\gamma)}\big). \qquad (15)$$

An alternative description consists in using a Legendre transformation of the complexity, by introducing the parameter $m = (1/\beta)\,\Sigma'(\omega_0)$ and working at fixed $m$ instead of fixed $\omega_0$ [42]. This computation is equivalent to a 1RSB scheme with Parisi parameter $m$. From the free energy at fixed $m$, $\phi_1(m)$, the complexity $\Sigma(\omega)$ is obtained through the Legendre transform $m\beta\phi_1(m) = m\beta\omega - \Sigma(\omega)$.

In a system with a discontinuous (1RSB) glass transition, this approach gives a full description. The complexity is strictly positive in the interval $\omega_s < \omega < \omega_d$, corresponding to the interval $m_d < m < m_s$ in the 1RSB parameter. The thermodynamically dominant metastable states are obtained by minimizing the one-replica free energy $\omega - \beta^{-1}\Sigma(\omega)$. In an intermediate temperature regime $T_s < T < T_d$, the minimum is attained for some free energy $\omega^*$ (corresponding to $m^* = 1$), with $\omega_s < \omega^* < \omega_d$.
Below the glass transition, $T < T_s$, the minimum is attained at the lower edge $\omega^* = \omega_s$ (with $\Sigma(\omega^*) = 0$), corresponding to a 1RSB parameter $0 < m^* < 1$. In a system with a continuous glass transition (FRSB), the full solution should involve grouping states into clusters, and clusters into superclusters, building up a continuous ultrametric hierarchy. The approach above amounts to a 1RSB approximation of this full structure, and we shall not attempt to go beyond this level of approximation.

B. Order parameters

In this section we present two types of order parameters which can be used to identify the glass phase. For a polymer in Euclidean space, described by the position $\mathbf R_i$ of monomer $i$, let us consider two replicas of the polymer in the same pure state. In the glass phase, provided the global rotation symmetry is broken, the local conformations of the two polymers will have a certain tendency to be the same, while the liquid phase is completely disordered in this respect. In order to measure this effect, we introduce $F^{(1,2)}_d$, the scalar product of the distance vectors between nearby monomers (a distance $d$ apart along the chain) in the replicas (1) and (2). We shall be interested in computing the average of this quantity when the replicas are constrained to remain in the same pure state. More precisely, we want to evaluate $\langle F^{(1,2)}_d\rangle_{\rm state}$, where we average over all states $\gamma$ with their Boltzmann weight $w_\gamma$. This quantity is accessible numerically: we consider a polymer which is thermalized at time 0 in a configuration $\mathbf R_i(t = 0)$, let it evolve for a time $t$ to a configuration $\mathbf R_i(t)$, and obtain the order parameter from the long-time limit of the corresponding correlation.

The overlap $q^{(1,2)}_{AB}$ essentially characterizes the bias of single sites towards a specific monomer type, whereas the order parameters $F^{(1,2)}_d$ measure the conformational similarity of the replicas in the vicinity of a given site, once the monomer on that site has been fixed. They measure the freezing of the local degrees of freedom of the polymer's backbone, similarly to the approach of [66,67,68]. In contrast, the parameter $q^{(1,2)}_{AB}$ is hardly sensitive to the geometric constraints induced by the backbone.

A dynamical evaluation of the above order parameters is particularly convenient on finite-dimensional lattices. Notice that the equilibrium probability for two independent replicas to have a finite overlap $q^{(1,2)}_{AB}$ vanishes with the volume of the lattice, because of translation invariance. On the Bethe lattice it is more natural to work at a finite monomer density (see Sec. V D). In this case, the random structure of the lattice acts as a "pinning field", and two replicas of the same system typically have a finite overlap. Following the practice from spin-glass theory, we shall measure the probability distribution $P_{AB}(q)$ of the quantity (19) with respect to the Gibbs measure, Eq. (20). In a liquid phase, $\langle q^{(1,2)}_{AB}\rangle_{\rm state}$ vanishes and the function $P_{AB}(q)$ is a $\delta$-function. In a glass phase $\langle q^{(1,2)}_{AB}\rangle_{\rm state} > 0$ and the function $P_{AB}(q)$ becomes non-trivial, with support in a finite interval. In the case of a continuous transition, $\langle F^{(1,2)}_d\rangle_{\rm state}$ and $\langle q^{(1,2)}_{AB}\rangle_{\rm state}$ vanish at the transition point, while they exhibit a jump in the discontinuous case.
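As an illustration of how $P_{AB}(q)$ is measured in practice, the following Python sketch histograms overlaps between pairs of replica configurations. The precise definition of the overlap is Eq. (19), which we approximate here by a normalized dot product of site variables ($+1$ for A, $-1$ for B, $0$ for an empty site); the random "replicas" are placeholders for equilibrated Monte Carlo configurations.

```python
# Sketch: estimating the overlap distribution P_AB(q) from replica pairs.
import numpy as np

rng = np.random.default_rng(2)

def overlap(s1, s2):
    """Assumed overlap: normalized dot product of site variables."""
    return np.dot(s1, s2) / len(s1)

V, n_pairs = 500, 2000
qs = np.empty(n_pairs)
for t in range(n_pairs):
    s1 = rng.choice([-1, 0, 1], size=V)   # placeholder "replica 1"
    s2 = rng.choice([-1, 0, 1], size=V)   # placeholder "replica 2"
    qs[t] = overlap(s1, s2)

hist, edges = np.histogram(qs, bins=40, density=True)
# In a liquid phase the histogram collapses to a delta function as V
# grows; a nontrivial limiting shape signals a glass phase.
print(qs.mean(), qs.std())
```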
IV. METHODS TO STUDY THE GLASS PHASES IN THE CAVITY APPROACH

In this section we present the methods that we use to study the glass transition on the Bethe lattice. They are applied to various types of polymers in the next sections.

A. Local instability towards a soft glass phase

The simplest glass transition is the one associated with an instability of the liquid. The liquid solution is always embedded in the 1RSB formalism as the single pure state that exists at high temperature: it is described by the field distribution $\rho(p) = \delta(p - p^*)$. This solution becomes locally unstable if fluctuations around $p^*$ grow on average under the cavity recursion (4)-(7). This phenomenon occurs when $k\,|\lambda_{\max}|^2 > 1$, where $\lambda_{\max}$ is the largest eigenvalue of the transfer matrix (22) for the propagation of deviations from the liquid under the recursion (4)-(7). (Notice that the stronger instability $k\,|\lambda_{\max}| = 1$ [13] is irrelevant on a random lattice, since it is associated with the establishment of a crystalline order that is inherently frustrated because of the presence of large loops.)

Beyond the local instability, the distribution of local fields $\rho(p)$ becomes non-trivial, but it remains centered around the unstable liquid fixed point. In physical terms this indicates that phase space begins to divide up into a small number of states that comprise a large number of microconfigurations. These states are characterized by weak local preferences for certain polymer conformations that deviate only slightly from the homogeneous liquid state.

The instability (21) generally develops below a temperature $T_i$. Calling $T_{\rm cris}$ the temperature where the entropy vanishes, one can have two types of situations:

• When $T_i < T_{\rm cris}$, the local instability of the liquid is clearly irrelevant, and a discontinuous glass transition must take place at some temperature $\geq T_{\rm cris}$.

• When $T_{\rm cris} < T_i$, either the instability drives a continuous glass transition (as we will see in specific examples, this seems to be the generic case when the instability occurs in a region where the liquid entropy is still large), or there exists again a discontinuous glass transition taking place at temperatures $T > T_i$ and rendering the instability irrelevant.

It is also possible that a first continuous glass transition towards a slightly frustrated phase is followed by a discontinuous phase transition at lower temperatures, where a stronger degree of freezing takes place.

Because of the relative simplicity of the liquid phase, it turns out that the stability condition (21) can be studied explicitly for AB copolymers with an interaction matrix which is symmetric under the exchange A ↔ B. The detailed calculation is given in Appendix B. The dangerous eigenvalues $\lambda$ of the matrix $M$ in (22) are found to obey Eq. (23), where the sign corresponds to ferromagnetic (+) and antiferromagnetic (−) interactions, respectively. The temperature-dependent parameter $w = \sum_{a=1}^{L} (p^*_a)^2/p^*_0$ characterizes the liquid solution and is independent of $L$, cf. App. A and Eqs. (B1), (B2). The sequence properties enter the above expression only through the autocorrelation function $q_i = (1/L)\sum_{a=1}^{L} \sigma_a \sigma_{a+i}$. The local instability $\beta_i$ occurs at the smallest value of $\beta$ where the characteristic polynomial (23) has a root with $|\lambda|^2 k = 1$. Usually, for attractive interactions between equal monomers the relevant eigenvalue is $\lambda = 1/\sqrt k$, while the instability generally occurs with $\lambda = -1/\sqrt k$ in ampholytes. The location of the instability for the various types of interactions and sequences will be studied in the next sections. Let us just mention here that the (periodic) Gaussian random bond heteropolymer generically undergoes a discontinuous 1RSB glass transition, in agreement with previous studies [60].
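In practice, the instability temperature can be bracketed numerically from the condition $k\,|\lambda_{\max}|^2 = 1$. The Python sketch below combines a power iteration for the dominant eigenvalue with a bisection in $\beta$; the function `jacobian` is a placeholder for the actual linearized map (22), and is assumed here to have a positive dominant eigenvalue that grows with $\beta$.

```python
# Sketch: locating beta_i from the condition k * lambda_max**2 = 1.
import numpy as np

def jacobian(beta):
    """Placeholder for the linearized cavity map (22)."""
    return np.array([[0.1 * beta, 0.05],
                     [0.05, 0.2 * beta]])

def lambda_max(M, n_iter=500):
    """Power iteration; assumes the dominant eigenvalue is positive."""
    v = np.ones(M.shape[0])
    for _ in range(n_iter):
        v = M @ v
        v /= np.linalg.norm(v)
    return float(v @ (M @ v))

def beta_instability(k, lo=0.0, hi=50.0, tol=1e-10):
    """Bisection on the instability condition."""
    def excess(beta):
        return k * lambda_max(jacobian(beta)) ** 2 - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(beta_instability(k=5))
```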
B. Cavity recursion within the 1RSB approximation

In order to study the glass phase itself, we need to compute the distribution of local fields (15) for the Bethe lattice. We shall do it here within the 1RSB cavity formalism of Refs. [39,40]. We shall not rederive the full formalism, but give the main ingredients needed for our study. The statistical average of the simple cavity recursion (4)-(7), which holds within a given pure state, leads to a recursion relation for this distribution:

$$\rho'(p) = \frac{1}{Z} \int \prod_{i=1}^{k} d\rho\big(p^{(i)}\big)\; \delta\big(p - I[p^{(1)}, \ldots, p^{(k)}]\big)\, e^{-\beta m\, \Delta f[p^{(1)}, \ldots, p^{(k)}]}, \qquad (24)$$

where $I[p^{(1)}, \ldots, p^{(k)}]$ is given by (4)-(7), and $Z$ is a normalization. The non-trivial reweighting, which depends on the parameter $m$ defined in Section III A, involves the free energy change $\Delta f$ induced by the recursion, which is given by $e^{-\beta\Delta f} = C$, the normalization term appearing in (4)-(7). This reweighting accounts for the fact that the number of pure states increases exponentially with their free energy. The free energy is obtained by properly weighting the contributions of different pure states, in terms of the site and link partition functions $w_s$ and $w_l$ defined in Eqs. (10) and (12). The complexity $\Sigma(\omega)$ is obtained from $\phi_1(m)$ through the Legendre transform $m\beta\phi_1(m) = m\beta\omega - \Sigma(\omega)$. Note that the recursion relation (24) is the saddle-point equation for the functional $\phi_1(m)$ with respect to $\rho(p)$.

Close to a continuous glass transition, $\rho$ is strongly peaked around the liquid fixed point $p^*$, and we can expand the free energy as a function of the moments of the fluctuations $p - p^*$ over the pure states, as outlined in Appendix D. To leading order, the corrections to the liquid free energy arise from fluctuations in the "replicon" mode, the unstable direction of the transfer matrix (22), whose magnitude grows as $(T_i - T)^{1/2}$. The continuous glass transition is found to be of third order, with a free energy correction of order $c\,(T_i - T)^3$, where $c$ is a positive constant. This is in contrast to discontinuous glass transitions, which are (generally) of second order in the free energy.
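A recursion of the type (24) is commonly solved by population dynamics. Below is a minimal Python sketch of that algorithm under the reweighting $e^{-\beta m\Delta f} = C^m$ used above; the map `cavity_map` is a toy placeholder for Eqs. (4)-(7), and the population size, sweep count, and parameter values are arbitrary choices.

```python
# Sketch: population dynamics for the 1RSB recursion (24).
import numpy as np

rng = np.random.default_rng(3)

def cavity_map(fields):
    """Toy placeholder for I[p^(1), ..., p^(k)] of Eqs. (4)-(7)."""
    combined = np.prod(fields, axis=0)
    C = combined.sum()                # normalization C, i.e. exp(-beta*df)
    return combined / C, C

def population_dynamics(k, m, pop_size=1000, n_conf=4, sweeps=200):
    pop = rng.dirichlet(np.ones(n_conf), size=pop_size)  # represents rho(p)
    for _ in range(sweeps):
        new_fields, weights = [], []
        for _ in range(pop_size):
            idx = rng.integers(0, pop_size, size=k)      # draw k members
            p_new, C = cavity_map(pop[idx])
            new_fields.append(p_new)
            weights.append(C ** m)    # reweighting exp(-beta*m*df) = C**m
        w = np.asarray(weights)
        resampled = rng.choice(pop_size, size=pop_size, p=w / w.sum())
        pop = np.asarray(new_fields)[resampled]
    return pop

pop = population_dynamics(k=5, m=0.5)
print(pop.mean(axis=0))               # average cavity field over states
```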
V. TWO EXEMPLARY CASES: THE ALTERNATING AMPHOLYTE AND HP MODEL

In this section we apply the cavity 1RSB formalism to two specific sequences: the regularly alternating copolymer chains ABABAB... for ampholytic and symmetrized-HP interactions. These turn out to be rather extreme representatives in the ensemble of possible neutral copolymers, but they are the simplest ones, and they exhibit the characteristics of the continuous (ampholyte) and discontinuous (HP) transition in a very clear manner.

The folding of an alternating copolymer on a regular Bethe lattice is a frustrated problem, while, clearly, on a regular cubic lattice it would just behave as a homopolymer with homogeneous interactions $E_{AB} \equiv e$. However, we expect that as soon as a certain number of defects are introduced in such sequences, their folding on the cubic lattice will be similarly frustrated. In terms of Markovian sequences, we consider here the case of $\pi \ll 1$. While these sequences are expected to behave differently from the alternating one ($\pi = 0$) on the cubic lattice, it is reasonable to assume that the $\pi \to 0$ limit is smooth on the Bethe lattice. Then the Bethe approximation of $\pi \ll 1$ sequences can be studied using the perfectly alternating sequence, as we do here; alternating chains are also the ones most easily studied with the cavity method.

A. Qualitative arguments

Before embarking on the details of the cavity computation for the alternating chains, we present here some simple arguments explaining the very different physical nature of the glass phase in the alternating ampholyte, which has a continuous transition, and in the symmetrized HP model, which has a discontinuous transition. Instead of a Bethe lattice, let us consider a regular tree and ask for a maximally dense polymer configuration such that all interactions are satisfied (AB interactions in ampholytes and AA or BB interactions in the symmetrized HP model). In Fig. 4 we show typical configurations for each case. While there is a stratified order in ampholytic configurations that manifests itself in strong long-range correlations, the symmetrized HP model has an "ordered" structure that is only locally correlated. As long as the globule is not highly correlated, a global frustration will not be able to establish itself. For the ampholyte, however, it will be favorable, even at lower density, to develop local (site) preferences for a certain monomer type and thus increase the probability of satisfied interactions. This mechanism is at the basis of the instability of local fields in the liquid. Note that in the first place this instability is related to the type of monomer accommodated on a given site, rather than to the backbone structure. The latter will only come into play at larger densities/lower temperatures.

This qualitative discussion applies equally to correlated sequences which are not perfectly alternating but have a strong tendency to alternate (small $\pi$). At the other extreme, if one considers the case of $\pi$ close to one, where consecutive monomers tend to be alike, one can apply the same type of considerations, but with the roles of the ampholyte and of the HP-like chain reversed. We can thus conclude that the local instability of an HP-like chain with long blocks of like monomers is associated with the appearance of pure states characterized by the same monomer preferences for small regions on the lattice. This is reminiscent of the microphase separation (MPS) [18], which has been much discussed in this context and becomes relevant for sequences with a distinct block structure [16,21,25,57]. However, one should remember that the present formulation of the cavity method, which neglects small loops in the lattice, does not allow any quantitative study of this phenomenon (this could be addressed using more refined cluster variational methods). Repeating the above arguments for more general cases of short-range correlated sequences, one sees that in general a local instability is favored by sequences whose monomer distribution tends to be annealed (e.g., ampholytes with a tendency towards charge alternation along the sequence). It is interesting to note that such 'annealed sequences' naturally result from common protein design schemes [22,32,49,62].

B. The continuous transition in the AB ampholyte

We start our quantitative study with the alternating ampholyte on a lattice with connectivity $k + 1 = 6$. For this polymer, the local instability of the liquid is found from (23). On lowering the temperature, the preference of sites for certain conformations (and not only for the respective monomers) increases. This could be interpreted as a growing degree of freezing that affects larger and larger length scales. There is no sign of a strong (discontinuous) freezing transition. In App. E we explain how to compute the order parameter (17) within the cavity approximation.
The result for the alternating ampholyte is shown in Fig. 6, which again shows a continuous transition.

C. The discontinuous transition in the alternating HP model

The case of the symmetrized-HP alternating sequence, again on a lattice with connectivity $k + 1 = 6$, is extreme in the opposite sense. The liquid solution is always locally stable, even in the region where its entropy is negative. The computation of the order parameter (17) proceeds as in the case of the ampholyte. The result is shown in Fig. 8 and clearly exhibits the discontinuous transition.

D. Numerical simulations

As we already stressed, one advantage of our approach consists in the possibility of checking mean-field computations using numerical simulations of well-defined polymer models on a Bethe lattice. Here we want to demonstrate this feature by considering the alternating AB ampholyte. We made extensive simulations on Bethe lattices with connectivity $k + 1 = 6$ and volumes $V$ ranging from 100 to 800. For all of the data presented in this Section, we fixed $\beta = 2.0$, above the local-instability value. Simulating a single polymer whose length diverges with the lattice size poses equilibration problems; this problem can be overcome by simulating a melt of variable-length polymers, the length being finite in the thermodynamic limit. The single-polymer physics is recovered when the average length diverges. We refer to App. F for a detailed description of our algorithm. In Fig. 9 we show our numerical data for the average polymer length $\langle l\rangle$. Notice that $\langle l\rangle \approx 10$-$25$ within the dense phase. As will be clear from the other numerical results, this is enough to ensure small deviations from the infinite-length limit. The main effects are a rounding of the collapse transition and a small shift of the soft glass transition (which occurs at $\mu_i(\beta, \text{finite } l) \approx -2.40923$).

In order to achieve equilibration within the soft glass phase, we adopted the parallel tempering technique [27,38]. We tested equilibration using the method of Ref. [7], and always checked the acceptance rate for temperature-exchange moves to be larger than 50%. In Fig. 10 we plot the energy per lattice site and the monomer density as functions of the chemical potential $\mu$. Notice that the liquid to soft-glass phase transition is barely discernible from the monomer density, and the energy curve is also quite smooth. The 1RSB cavity result gives a very good quantitative description of the transition.

In order to get a finer description of the glass phase, we have measured the order parameter function $P_{AB}(q)$ defined in (20). In Fig. 11 we report our numerical data for this quantity at the highest chemical potential considered ($\mu = -1$). Because of the large finite-$V$ effects, it would be difficult to conclude from the numerics alone that the infinite-$V$ function is non-trivial. However, the data agree with the 1RSB predictions for the Edwards-Anderson parameter, $q_{EA} \approx 0.259$. In the same figure (left frame) we consider the spin-glass susceptibility $\chi_{SG}$. This quantity diverges as $\mu \to \mu_c^-$ in the thermodynamic limit. In a finite-size sample, its behavior is ruled by the usual finite-size scaling form, Eq. (28). From the cavity solution of the model one finds the scaling of $\langle q^2_{AB}\rangle$; this result implies a relation between the critical exponents defined in Eq. (28). In fact we find a nice collapse of the data corresponding to different sizes using $\nu = 4$ and $\eta = 3/2$. The comparison of $\langle q^2_{AB}\rangle$ with the 1RSB cavity prediction is quite good. An alternative approach for exploring the low-energy structure of the system consists in coupling two replicas through their overlap, cf. Eq. (19).
In practice, one adds a term of the form $-N\beta\epsilon\, q_{AB}(s^{(1)}, s^{(2)})$ to the two-replica Hamiltonian and estimates $q_{EA}$ from the double limit $q_{EA} = \lim_{\epsilon\to 0}\lim_{N\to\infty}\langle q_{AB}\rangle_{N,\epsilon}$. In Fig. 12 we show the numerical results for $\langle q_{AB}\rangle_{N,\epsilon}$ on a large lattice ($V = 10^4$) and several values of $\epsilon$. In order to simulate large lattices, we did not use parallel tempering here. Furthermore, we adopted a weaker equilibration criterion, requiring $\langle q_{AB}\rangle_{N,\epsilon}$ to be roughly time-independent on a logarithmic scale. Once again, the numerical results compare favorably with the outcome of the cavity calculation.
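The double limit in the definition of $q_{EA}$ suggests a simple data-analysis recipe: keep the largest available size for each $\epsilon$, then extrapolate linearly to $\epsilon = 0$. The numbers in the Python sketch below are hypothetical placeholders for measurements of $\langle q_{AB}\rangle_{N,\epsilon}$.

```python
# Sketch: estimating q_EA from epsilon-coupled replica data via
# q_EA = lim_{eps->0} lim_{N->infty} <q_AB>_{N,eps}.
import numpy as np

data = {
    # eps: {N: <q_AB>_{N,eps}}  (placeholder numbers)
    0.04: {2500: 0.31, 10_000: 0.30},
    0.02: {2500: 0.29, 10_000: 0.28},
    0.01: {2500: 0.28, 10_000: 0.27},
}

eps = np.array(sorted(data))
q_largest_N = np.array([data[e][10_000] for e in eps])

# Inner limit: keep only the largest available N.  Outer limit: linear
# extrapolation of <q_AB> in eps down to eps = 0.
slope, intercept = np.polyfit(eps, q_largest_N, deg=1)
print("q_EA estimate:", intercept)
```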
VI. RANDOM MARKOVIAN COPOLYMERS

One can show using the formula (23) that the local instability appears the earlier, the stronger the tendency of the monomers to be annealed along the sequence; that is, the more A's and B's tend to alternate in an ampholyte, or to form blocks in an HP model. In both cases the autocorrelation function $q_i$ is large, and its sign either oscillates (alternating sequence) or remains positive ('blocky' sequence). To be more quantitative, let us consider a random copolymer chain in the limit $L \to \infty$, characterized by the probability $\pi \in [0, 1]$ of two neighboring monomers being of the same type. The autocorrelation function of such a chain is (in the $L \to \infty$ limit) $q_i = (2\pi - 1)^i$.

In Figs. 13 and 14 we plot the inverse temperature $\beta_i$ of the local instability as a function of the parameter $\pi$ for the ampholyte and symmetrized-HP models. This instability is certainly irrelevant when $\beta_i$ is larger than the inverse temperature of the entropy crisis of the liquid, $\beta_{\rm cris} = 1.4525$. This situation occurs for $\pi > 0.4480$ in ampholytes, and proves the existence of a discontinuous transition. But already when $\beta_i$ is smaller than, but close to, $\beta_{\rm cris}$, one should expect a discontinuous 1RSB transition to take place at a $\beta < \beta_i$.

In order to complete the diagram, we have numerically solved the cavity recursion by population dynamics for neutral sequences of period $L = 20$ but otherwise random composition. From the experience gained for the extreme case of the alternating HP model (see below), we expected a kind of frozen solution with rather strong local conformational preferences to dominate the low-temperature phase. Such a solution is rather non-trivial to find in a huge functional space, in particular since it has to be expected that it occurs in a discontinuous manner and cannot in general be found by randomly perturbing the liquid solution. We therefore proceeded by initializing the population in a highly polarized state, which we will discuss in more detail in the next Section. This state actually corresponds to an unstable fixed point, but it turns out that at low temperatures it is usually quite close to a stable non-trivial one. Care has to be taken to avoid at the same time an asymmetry between A- and B-states, which easily occurs in small populations, in particular in the case of attractive interactions among equal monomers.

Our findings for the sequences of period $L = 20$ are summarized in the plots 13, 14 and 15. Figure 15 shows the variance (square of the standard deviation) of the local field for the conformation ↑ ($a = 1$) over the distribution $\rho(p)$, for several sequences, as a function of the inverse temperature. This is a measure of the degree of the local bias away from the liquid. Almost independently of the particular sequence statistics, we find that for $\beta > \beta_d \approx 1.23$ a strongly frozen phase (with very low internal entropy) exists, with an associated dynamic transition at $\beta_d$. Depending on the sequence statistics, the regime of higher temperatures is either entirely liquid (e.g., for $\pi \leq 0.50$ in the ampholytes), or exhibits a weaker form of frustration in a phase of presumably fully broken replica symmetry. The latter continuously joins the liquid solution at the local instability predicted by (23). For the phase diagram in the $\beta$-$\mu$ plane for either of the two scenarios we refer to Fig. 3. The same picture as in Fig. 13 holds for the HP-type models, but here the continuous transition takes place at $\pi \geq 0.5$; notice that the $\pi_{\rm eff}$ window displayed there is larger than in Fig. 13.

The generic picture of a quench in temperature is thus the following. For ampholyte sequences with some tendency to alternation, or HP-like sequences with a preference for block formation, there is a continuous glass transition whose location depends strongly on the composition of the sequence. The corresponding glass phase is characterized by a relatively weak frustration and a rather small number of states that comprise many microconfigurations with some weak local preferences for certain conformations. This preliminary glass phase undergoes a further discontinuous phase transition at a lower temperature ($\beta_d \approx 1.23$) that is almost independent of the sequence structure and might be called the effective freezing transition. For sequences with correlations of the opposite kind, the freezing transition is the only phase transition and occurs directly from the liquid. It is interesting to note that in numerical simulations of the folding dynamics of neutral HP-type copolymers, the dynamical glass transition was also found to be essentially independent of the sequence [9].

It is intriguing that the critical value of $\pi$ separating the FRSB from the 1RSB freezing scenario is very close to $\pi = 1/2$, which corresponds to sequences without correlations. This is particularly interesting from the point of view of protein folding. The nature of the correlations present in the amino acid sequences of natural proteins is still a matter of intensive debate. The analysis of Pande et al. [51] argues in favor of a tendency for sequences to be annealed, i.e., to exhibit positive correlations in the hydrophilicity and anticorrelations in the charge of amino acids, which would suggest a bias towards the FRSB freezing scenario for proteins. However, the studies by Irbäck et al. [28,29,30] rather point towards anticorrelations in the HP-type degrees of freedom, which would favor a scenario with a direct transition from the liquid to the frozen glass. The discrepancies between these studies mainly concern the nature of long-range correlations, while on the level of nearest-neighbor correlations the protein sequences appear to be rather random, having $\pi \approx 1/2$ with respect to both charge and hydrophobic/hydrophilic degrees of freedom. It would be very interesting to understand whether the folding of natural proteins takes advantage of their sequences being very close to the critical border between the two scenarios. On the other hand, as mentioned earlier, most protein sequence design schemes tend to result in (partially) annealed monomer chains, which are therefore likely to exhibit the intermediate soft glass phase.

VII. THE CLOSE-PACKED LIMIT

In this section we provide a detailed analysis of the frozen phase in the limit of high density. We first show the existence of a special 'REM-like' fully polarized solution of the 1RSB cavity equations at temperatures below the liquid's entropy crisis.
Then we show that this solution is stable in the close-packed limit of high densities.

A. A fully polarized solution

There always exists a 'fully polarized' solution to the cavity equation (24), which describes pure states consisting of essentially one unique frozen polymer configuration. In each such state, a given site only admits one specific local conformation. On averaging over the different pure states, the given site will be found in conformation $\alpha$ with frequency $w_\alpha$. The local field distributions then take the form

$$\rho(p) = \sum_\alpha w_\alpha\, \delta\big(p - e^{(\alpha)}\big), \qquad (31)$$

where the fields $e^{(\alpha)}$ are defined by $e^{(\alpha)}_{\alpha'} = \delta_{\alpha\alpha'}$. This distribution solves the cavity equations when the frequencies $w_\alpha(\beta, m)$ coincide with the local fields of a liquid at the renormalized inverse temperature $\beta' = m\beta$, i.e., $w_\alpha(\beta, m) = p^*_\alpha(\beta' = m\beta)$. The replicated free energy of this fully polarized solution is $\phi_1(\beta, m) = \phi_{\rm liq}(m\beta)$. The internal free energy of the corresponding frozen states is related to the liquid quantities via $f_{\rm pol}(\beta, m) = d(m\phi_1(\beta, m))/dm = u_{\rm liq}(m\beta) - \mu\rho_{\rm liq}(m\beta)$, and the complexity of states is found from $\Sigma_{\rm pol}(\beta, m) = s_{\rm liq}(m\beta)$. As is evident from the nature of the pure states, their internal entropy vanishes.

Let us for a moment postpone the discussion of the relevance of this solution, and first discuss its physical interpretation. At each value of $\beta$ we have to maximize $\phi_1$ over $0 \leq m \leq 1$, under the condition $\Sigma \geq 0$. For temperatures above the liquid's entropy crisis, $\beta < \beta_{\rm cris}$, the maximum is attained at $m = 1$ and we have $\omega_g = \omega_{\rm liq}$. When $\beta > \beta_{\rm cris}$, the static glass transition takes place and the free energy freezes to $\omega_g = \omega_{\rm liq}(\beta_{\rm cris})$, the Parisi parameter taking the value $m_s = \beta_{\rm cris}/\beta$. So this solution describes a full freezing of the polymer into some isolated specific configurations, taking place at $\beta = \beta_{\rm cris}$. Notice that this scenario exactly parallels the one found in the REM. Our numerical study of the AB copolymers in their highly frozen phase (beyond $\beta_d \approx 1.23$) finds a solution $\rho(p)$ which is close to the form (31), although small deviations persist and the polarization is not complete. In the particular case of the alternating chain, we numerically confirmed that the optimal Parisi parameter is well fitted by $m_s = T/T_s$ on the coexistence line.
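Within the fully polarized solution, the static value of the Parisi parameter follows from $\Sigma_{\rm pol}(\beta, m) = s_{\rm liq}(m\beta) = 0$, i.e., $m_s = \beta_{\rm cris}/\beta$. A minimal Python sketch of this computation (using SciPy for the root finding, with a placeholder linear liquid entropy; the real $s_{\rm liq}$ would come from the liquid solution of App. A) is:

```python
# Sketch: static 1RSB parameter from s_liq(m * beta) = 0.
import numpy as np
from scipy.optimize import brentq

beta_cris = 1.4525                 # entropy-crisis point quoted in Sec. VI

def s_liq(beta):
    """Placeholder liquid entropy, vanishing linearly at beta_cris."""
    return beta_cris - beta

def m_static(beta):
    if beta <= beta_cris:          # no entropy crisis yet: m* = 1
        return 1.0
    return brentq(lambda m: s_liq(m * beta), 1e-9, 1.0)

for beta in (1.0, 1.6, 2.0):
    expected = beta_cris / beta if beta > beta_cris else 1.0
    print(beta, m_static(beta), expected)   # numerical vs analytic m_s
```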
B. Stability analysis and the limit of maximal density

Up to this point we have not discussed the range of validity of the polarized solution (31), and in particular its stability. Unfortunately, this is a difficult problem, and we can only provide partial answers. The basic idea consists in perturbing the Ansatz (31) and checking whether the perturbation grows under the cavity iteration (24). A simple perturbation consists in adding to (31) some 'almost polarized' fields with a small total weight. Namely, we take a field distribution of the form

$$\rho(p) = a \sum_\alpha w_\alpha\, \delta\big(p - e^{(\alpha)}\big) + \epsilon \sum_\alpha \rho_\alpha(p), \qquad (32)$$

where $\rho_\alpha(p)$ is concentrated on fields $p$ close to $e^{(\alpha)}$. In fact, it is more convenient to think of it as a distribution over the 'small' field components $\check p \equiv \{p_{\alpha'}\}_{\alpha' \neq \alpha}$. Hereafter, we shall use the notation $\rho_\alpha(\check p)$ instead of $\rho_\alpha(p)$. Finally, notice that the $\rho_\alpha(\check p)$'s need not be normalized; normalization is enforced by the constant $a$ in Eq. (32).

Plugging the Ansatz (32) into Eq. (24), we get a linearized recursion to first order in $\epsilon$. Here we distinguish the distribution on the right-hand side, $\rho_\alpha(\cdot)$, from the one on the left-hand side, $\rho'_\alpha(\cdot)$; in fact we are interested in the stability of the iteration (24) and not just in its fixed point. In the resulting equations, $P(\alpha_1 \ldots \alpha_k|\alpha_0)$ is the probability of finding conformations $\alpha_1 \ldots \alpha_k$ on the $k$ leaves of the branch in Fig. 1, constrained to the root being in conformation $\alpha_0$; this must be computed within the polarized solution.

Instead of continuing in full generality, let us consider the example of an alternating F-model in the close-packed limit with $E_{AA} = E_{BB} = -E_{AB} = -1$ (remember that in this case we found a discontinuous phase transition with a highly polarized low-temperature phase, cf. Sec. V C). The linearized recursion reduces to Eqs. (34), (35), plus two equations obtained by interchanging A and B. Here we used the shorthand $\delta(x, y) = \delta(x)\delta(y)$ and expanded $I[\check q; \alpha_2 \ldots \alpha_k]$ in the delta functions to linear order in $q_\alpha$ for $\alpha \neq \alpha_1$. The weights $\{f_n\}$ and $g_{A/B}$ are expressed in terms of the polarized solution. A little thought shows that, after one iteration of Eqs. (34), (35), the form of the perturbation can be restricted, and that the linearized recursions decouple into three 'sectors'. The first sector corresponds to shifts of the chain, and turns out to be marginally stable (the function $I[\check q; \alpha_2 \ldots \alpha_k]$ has to be developed to second order in $\check q$). The other two sectors correspond to structural rearrangements of the backbone, and become unstable when $m\beta < (m\beta)_c \equiv y_c = \frac{1}{2}\log(2k - 3)$.

This instability has a simple physical interpretation. The pure states described by $(m\beta)_c$ have a free energy density $f_c = 1/2$. This means that, on average, a randomly chosen site has one violated neighboring bond, i.e., one neighbor occupied by a monomer of the opposite type. It is thus possible to rearrange the backbone of the alternating chain without paying energy, by opening the chain at the given site, redirecting it in the direction of the violated bond, and propagating the rearrangement through the lattice. This is a peculiarity of the infinite-$\mu$ regime, the numerics at finite but large $\mu$ suggesting that the polarized solution is unphysical below $y_t$. However, since $y_t < \beta_{\rm cris}$, the polarized Ansatz still correctly describes the low-temperature regime.

What happens away from the $\mu \to \infty$ limit? The possibility of voids allows for new terms in the sum over conformations, cf. Eq. (33). It turns out that the iterations become unstable in the new sectors $\{0 \to\ \uparrow a,\ \uparrow a \to 0\}$ and $\{0 \to\ \downarrow a,\ \downarrow a \to 0\}$. Physically, this means that the presence of voids in the lattice always allows for a rearrangement of the polymer configuration in some (perhaps very rare) regions, preventing a complete freezing into a single state. Still, at $y \geq y_t$ a stable fixed point close to the polarized solution (31) exists. Let us finally notice that the stability of the polarized solution can also be studied within a larger 2RSB Ansatz [45]; the results coincide with the simplified treatment presented here. These results are further confirmed if one studies the behavior of the field distributions in the $T \to 0$ limit, following Ref. [44].

C. Exact enumerations on a cube

In an attempt to verify the 1RSB or even REM-like nature of heteropolymers, Shakhnovich and Gutin have exactly enumerated all conformations of fully compact random 27-mers on a $3 \times 3 \times 3$ cube and calculated the overlap distribution function $P(q)$ as a function of temperature [26,61]. They interpreted their results in favor of a REM-like scenario, where only a small number of states dominate the low-temperature regime and $P(q)$ exhibits typical features of a discontinuous glass phase. In view of our mean-field predictions, one would expect to find a different scenario when repeating this analysis for copolymers (with a certain amount of sequence correlations) in their soft glass phase.
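For reference, the enumeration itself amounts to counting Hamiltonian paths of the $3 \times 3 \times 3$ cube graph. The Python sketch below counts the fully compact conformations by depth-first search (it takes a few minutes in pure Python); in an actual analysis one would accumulate Boltzmann weights and overlap histograms along the way rather than a bare count.

```python
# Sketch: enumerating fully compact 27-mers on the 3x3x3 cube, i.e.
# Hamiltonian paths of the cube graph, by recursive depth-first search.
from itertools import product

sites = list(product(range(3), repeat=3))
index = {s: i for i, s in enumerate(sites)}
neighbors = [
    [index[(x + dx, y + dy, z + dz)]
     for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                        (0, -1, 0), (0, 0, 1), (0, 0, -1))
     if (x + dx, y + dy, z + dz) in index]
    for x, y, z in sites
]

def count_paths(current, visited, length):
    if length == 27:
        return 1
    total = 0
    for nxt in neighbors[current]:
        if not visited[nxt]:
            visited[nxt] = True
            total += count_paths(nxt, visited, length + 1)
            visited[nxt] = False
    return total

total = 0
for start in range(27):
    visited = [False] * 27
    visited[start] = True
    total += count_paths(start, visited, 1)
print(total // 2)   # each undirected path is counted once per direction
```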
We first repeated this enumeration study for random ampholytes, and found a P (q) order parameter very similar to the random bond case studied originally [61], in agreement with the results of [26]. However the same analysis done for correlated ampholytic sequences with various values of π did not show any clear dependence on π. This absence of evidence can have two origins. On the one hand it might be due to the extreme restrictions that full packing imposes on the conformations. We have seen above that the fully dense limit is very subtle since physically important degrees of freedom, which are found in a system with voids, are artificially suppressed, as has been put forward by many authors [50,56,68]. On the other hand it seems that these sizes are too small to study the true phase space structure of the glass phase. VIII. DISCUSSION AND CONCLUSION The cavity method approaches the lattice heteropolymer problem from a new point of view in that it analyzes the conformational degrees of freedom of chains with quenched-in sequences. Furthermore, this method allows to study the whole temperature range and describes the Θ − collapse and the low temperature physics within the same formalism. In this sense we believe it provides an interesting new perspective in the analytic studies of heteropolymer folding. With this local approach we have studied the frustration effects on a given site of the lattice. We find that the decisive features determining the nature of the low temperature physics are the short-range correlations in the monomer sequence. Polymers whose monomer distribution along the chain tends to be annealed have a proclivity to undergo a continuous glass transition to a soft glass phase before the strong freezing transition takes place. In oppositely correlated sequences the freezing occurs directly from the liquid phase. A weakly polarized phase with broken ergodicity and a high sensitivity to the specific sequence, has also been observed in the extensive numerical analysis of the phase diagram for specific hydrophilic/hydrophobic chains [68], and the qualitative differences found between selected sequences indeed reflect the general tendencies that we predict from the cavity analysis of the slightly different but closely related HP-like model. It would thus be very important to check the effect of sequence correlations through numerical simulations of polymers on a cubic lattice, using our mean field predictions as a guideline. One regime in which the small loops of the cubic lattice can yield a behavior which is qualitatively different from the present mean field analysis is the case where the polymer has a strong tendency to form local crumples, as it happens in block copolymers which undergo a microphase separation. In order to study such problems analytically, it would be interesting to improve the Bethe approximation by considering enlarged cavities that contain not only a single site but a small cluster of nearby sites. This actually amounts to a further step in the framework of the cluster variational method. For the homopolymeric case a first step in this direction has been carried out in [55]. Already on the level of the simplest copolymer model we found a surprisingly rich phase diagram as a function of temperature and sequence correlations. 
But clearly, the cavity method is amenable to a number of generalizations that allow to study more sophisticated models of biopolymers, including for instance backbone stiffness, orientational degrees of freedom, or additional structural constraints such as the saturation of monomer-monomer interactions, which are crucial, e.g., for the folding of RNA. APPENDIX A: FINDING THE LIQUID SOLUTION In this Appendix we show how the translation invariant liquid solution can be found by solving a set of |A| + 2 equations (instead of 3L + 1 equations as it may appear from Eqs. (4)- (7)). First of all it is convenient to make a change of variables defining It is easy to see that the cavity equations (4)- (7)), the free energy (13) and all the others observables, can be rewritten in terms of these 2L + |A| + 1 variables. In using the new variables, when not specified, we shall assume that the index σ belongs to the enlarged space {0} ∪ A. We will set The liquid fixed point has the translation invariant form w The corresponding equations are easily written: z ↓a = ke βµ z ↓,a−1 1 + w 0 It is important to notice that the above equations are invariant under the transformation z ↑a → γ · z ↑a , z ↓a → γ −1 · z ↓a for any positive γ: we shall fix this freedom below. The reader can easily check that any physical observable (such as the free energy, the local energy or the local density) is also invariant under such a transformation. This happens because, when following the chain along its conventional direction, each time we arrive at a site i, we are obliged to leave the site as well. The above equations admit of course the trivial coil solution z ↑a = z ↓a = 0. Moreover, if one has z ↑a 0 = 0 (z ↓a 0 = 0) for a particular a 0 , this implies z ↑a = 0 (z ↓a = 0) for any a. Therefore, we shall hereafter assume that z ↑a , z ↓a = 0 for any a. In this case Eqs. (A2)-(A3) imply the consistency Equations (A2) and (A3) are easily solved: where z ↑ , z ↓ are two integration constants. We can exploit the invariance mentioned above in order to fix z ↑ = z ↓ = z. Plugging the expressions (A6), (A7) into Eq. (A4), and using Eq. (A5), we get We are therefore left with a set of |A| + 2 equations (Eq. (A5) plus the |A| + 1 equations in (A8)) for |A| + 2 real variables (z and the |A| + 1 variables w σ ). As anticipated these equations depend on the sequence just through the frequencies ν σ , σ ∈ A. The reader will easily check that the same is true for any physical observable. Near the Θ point all w σ are small, and (A8) shows that to lowest order they satisfy w σ ≈ w 0 τ ∈A ν τ e −βEστ . By imposing that a non-trivial solution of (A5) should exist one immediately obtains the equation (14) for the location of the Θ point. (B2) We want to compute the local stability of the cavity recursions (4)-(7) around the above solution. We therefore imagine that the cavity fields for one of the sites 1, . . . k (let us say the site 1) have been slightly perturbed and compute the effect of such a perturbation on the site 0. To linear order we get: The constants A-H are all positive, and can be expressed in terms of the solution of Eqs. (B1)-(B2). In the following we will just need the combinations below: where we used the shorthand c ≡ cosh β. We must now identify the most relevant perturbation, i.e., the largest eigenvalue of the linear transformation (B3)-(B6). It is simple to show that the subspace    δw 0 = 0; is preserved by the iteration (B3)-(B6). 
It can be shown that the most relevant eigenvector lies indeed within this subspace. We restrict to it by defining the variables where we used σ a ∈ {+, −} for the polymer sequence. Using the new variables we can rewrite the iteration (B3)-(B6) as follows: where we introduced the notation s ≡ sinh β (for the F-model) or s ≡ − sinh β (for the AF-model), and the sequence correlation function Notice that q b = q −b . This remark allows us to sum Eqs. (B10) and (B11) and to introduce the Fourier transform (for p = 2πn/L, n ∈ {1, . . . , L − 1}): We obtain therefore We can now set δ (0) (p) = λδ (1) (p), δw (0) = λδw (1) , and solve for λ, thus recovering Eq. (23). Let us suppose that each state can be traced as the volume V of the system is changed. This gives us the volume-dependent potentials Ω γ (V ). If the state γ is to describe a molecule in equilibrium with the solvent it should exert no pressure on the walls of the container: We want to compute the typical value of the above quantity for states having a certain freeenergy density: Ω γ ≈ V ω. Let us step back for a moment and consider the extensive complexity Σ(Ω; V, µ), where we made explicit the dependence upon the volume V and the chemical potential µ. If we assume that states do not bifurcate and do not die (or come into existence) as the volume is changed, it is easy to show that [37], for almost any state γ: Using the asymptotic behavior Σ(Ω; V, µ) ≈ V Σ(ω, µ), and the general relations from Sec. III A we can establish the coexistence condition either in the (m, µ) or in the (ω, µ) plane (we always assume β and the energy parameters to be fixed). From (C2), we immediately obtain the condition in the (µ, ω) plane: This is suggestive of a balance between an "internal" osmotic pressure, ω, and an "interstate" pressure (Σ/∂ ω Σ). In the (m, µ) plane, the condition assumes a more compact form φ 1 (m, µ) = 0. This coincides with the condition for a unique pure state. If metastable states are considered, Eq. (C3) receives a non-vanishing contribution from the complexity: in particular, one obtains ω > 0. This is quite striking since we did not assume the system to equilibrate among states of a given free-energy (which indeed does not happen on the short time scales that are relevant to determine the boundary conditions with the solvent). In Fig. 17 we represent the condition (C3) in the (ω, µ) plane. Notice that in general metastable states (with Σ > 0) on the coexistence line correspond to lower chemical potential than that of thermodynamically relevant states. Let us finally consider the coexistence line at thermodynamic equilibrium. Dominant states are obtained by minimizing the free energy ω − β −1 Σ(ω, µ) with respect to ω. The coexistence chemical potential µ * is then obtained from Eq. (C3). In a more compact (but formal) way, it is The thick line shows the states which are in equilibrium with the solvent. In particular, we signal the coexistence chemical potentials for static and dynamic states. determined from the condition In the main body of the paper we focus on the behavior of the polymer on this line. Generally speaking, at high temperature the maximum in Eq. (C4) is attained at m = 1. Since φ 1 (m = 1, µ) = φ liq (µ), in this region the coexistence line is the same as for the liquid phase. At lower temperatures the maximum is attained for 0 < m * < 1 and the thermodynamic coexistence line lies above the liquid one. We refer to Fig. 3 for a summary of this behavior. 
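Stepping back to the stability analysis of Appendix B: the two sequence-dependent ingredients entering the diagonalization above, the correlation function q_b and its Fourier transform q̂(p), can be evaluated as in the following sketch. This is our illustration only; the final map from q̂(p) to the eigenvalue λ of Eq. (23) involves the model constants k, s and c and is not reproduced here.

```python
# Illustrative helper for the Appendix B stability analysis: the sequence
# correlation q_b = (1/L) * sum_a sigma_a * sigma_{a+b} (periodic indices)
# and its Fourier transform \hat q(p) at p = 2*pi*n/L. Mapping \hat q(p) to
# the eigenvalue of Eq. (23) requires the model constants and is omitted.
import numpy as np

def sequence_correlation(sigma):
    sigma = np.asarray(sigma, dtype=float)
    return np.array([np.mean(sigma * np.roll(sigma, -b)) for b in range(len(sigma))])

def correlation_spectrum(sigma):
    # q_b = q_{-b} makes the transform real up to rounding error.
    return np.real(np.fft.fft(sequence_correlation(sigma)))

if __name__ == "__main__":
    L = 20
    alternating = np.array([(-1) ** a for a in range(L)])  # ...ABAB... sequence
    print(correlation_spectrum(alternating))               # concentrated at p = pi
```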
APPENDIX D: EXPANSION OF MOMENTS AT THE CONTINUOUS GLASS TRANSITION Here we analyze the solution of the cavity recursion near the continuous transition to first non-trivial order in an expansion of its moments. Using both sides of the cavity recursion equation on the 1RSB level (24) in order to calculate the moments of the cavity fields, one obtains a set of coupled non-linear equations for the moments of the fields p α over the distribution ρ(p). It is convenient to change coordinates and define the fields ∆ µ = α A α µ (p α − p * α ) in such a way as to diagonalize the matrix (22). Hereafter we shall denote by µ = 1 the most instable ('replicon') direction in this matrix, and by λ the corresponding eigenvalue. Note that the prefactor of (kλ 2 − 1) in Eq. (D12) has to be positive for consistency. A negative value indicates that there is no stable solution close to the liquid fixed point and the glass transition would be discontinuous. By explicit calculation of this coefficient at the instability point we found this to happen only in very atypical sequences with highly non-symmetric interactions. Evaluating the coefficients w s,11 and w s,111 requires the knowledge of the replicon eigenvector. This can be derived for the case of copolymers with symmetric interaction matrix E AA = E BB = −E AB , and equally frequent monomer species, ν A = ν B = 1/2, extending the arguments of App. B. In particular we obtain (using the variables defined in Apps. A and B): Things simplify considerably in several important cases: (i) alternating copolymers; (ii) antipalindromic sequences; (iii) Markov sequences in the L → ∞ limit. In all this cases the ratio w 2 s,111 /w 3 s,11 vanishes. The basic reason is that, because of Eq. (D14), w s,111 turns out to be an odd function of {σ a }. In these cases the free energy φ 1 (m) takes the simpler form, cf. (26), At the glass transition the maximum of φ 1 is attained at m s = 0. The fourth order term will shift its position to m s ∝ kλ 2 − 1 ∼ T i − T , as we have explicitly checked in the alternating AB-ampholyte. APPENDIX E: COMPUTING THE ORDER PARAMETER IN THE CAVITY METHOD We show here how to compute the local structural order parameters (17) using the cavity method. In the spirit of the Bethe-Peierls approximation we treat the self-avoidance of the polymer chain just on a local level, forbidding it to leave a site on the edge on which it arrived, but neglecting further constraints that arise on a real space lattice. In the following, we call "non reversal random walks" (NRRW) this restricted class of walks on the cubic lattice. Let us rewrite the distance vector between monomers i and i + d as R i+n−1 . If the positions along the chain are statistically equivalent, the overlap F d state can be written as where we split the sum according to the length l over which the replicas stay together and put r (1) n = r (2) n = r n for n ≤ l. Note that once l is fixed the common part of the path and the two legs of length d − l can be considered as non reversal random walks, only subject to the constraint that the legs leave in different directions at the bifurcation. These random walks have all the same weight when averaging over pure states. Hence, in order to evaluate (E1) it is sufficient to calculate the probability P (l) state for two replicas in the same state to follow the same path over a distance l, from which we obtain the average being taken over the uniform distribution of two NRRW's after l common links. 
Using that in a NRRW one has r n 1 · r n 2 N RRW = 1/k |n 1 −n 2 | , and distinguishing the different possible conformations at the bifurcation, one easily finds f (l; d) = l + 2 l−1 j=1 l − j k j + 1 k n 1 +n 2 −2 . (E4) (The first two terms stem from the self overlap of the common part, the term in the middle is the cross term between the common part and a leg that continues straight with respect to the common part, and the last term is a negative contribution due to two legs leaving in opposite directions.) In the liquid state, P (l) liq is just given by the probability that two NRRW's stay together over a distance l, Upon injecting (E4), (E5), and (E6) in (E2) one may verify that F d liq = 0. In the glass phase, P (l) state is most easily evaluated as N (l) P (l) state , where N (l) is the number of rooted NRRW's of length l andP (l) is the probability for two replicas to stay on a specific path of length l. In the Bethe-Peierls approximation the latter can be computed within an enlarged cavity containing all sites of the path. The average over the states is done by averaging independently over the local field distributions on all neighboring sites, taking into account proper weighting factors: P (l) state = 1 L L a=1 ι∈I l dρ p (ι) P l;a W l;tot ({p ι } ι∈I l ) m ι∈I l dρ p (ι) W l;tot ({p ι } ι∈I l ) m , where we have introduced the set of indices I l labeling the neighbors of the l + 1 sites on the path: P l;a denotes the probability, given the local field configuration, for two replicas to both stay on the given path up to site l and to separate afterwards, under the condition to start off at site 0 with monomer a, The weights W (j) l;a± are the Boltzmann factors associated with a polymer starting with monomer a on site 0, staying on the path, and leaving it at the site l via neighbor (l, j), the sign ± indicating that monomer indices increase/decrease along the path. Notice that in Eq. (E9) we selected arbitrarily one of the two equivalent directions. In the above formulae, W l;tot and W l;a are the Boltzmann factors associated with the ensemble of all possible configurations on the path, and of the configurations restricted to have a monomer a on site 0, respectively. They are conveniently calculated recursively via W l;a/tot ({p ι } ι∈I l ) = C p (l,1) , . . . , p (l,k) W l−1;a/tot {p ι } ι∈I l−1 |p (l−1,k) = I p (l,1) , . . . , p (l,k) , where I denotes the cavity iteration functional as defined by (4-7), and C is the corresponding normalization constant. The initial conditions for (E11) are simply In this Appendix we describe our approach to numerical simulations of heteropolymers on the Bethe lattice. In the first part we define a model for finite length polymers. In the second one we present our Monte Carlo algorithm. Finite length polymers We consider a modified ensemble for a varying number of finite length random walks. More precisely, a configuration is defined by n mutually-avoiding SAW's. The chain i shall contain We introduced the chemical potential µ end which couples to the number of chain ends in the solution (or, equivalently, to the number of polymers). The single-polymer ensemble is recovered in the µ end → −∞ limit. Extending the cavity formalism to the finite-µ end case is quite straightforward. As an illustration, we can easily write down the generalization of Eqs. (4)- (7): where we used the shorthandsp a . The Monte Carlo algorithm As already mentioned in Sec. V D, numerical simulations of long fixed-length polymers are quite difficult on the Bethe lattice. 
We thus resort to simulating the variable-length ensemble corresponding to the free-energy (F1). The algorithm includes three types of moves illustrated graphically in Fig. 18: (a) monomer insertion/deletion; (b) chain extension/reduction; (c) two chain junction/disjunction. It is straightforward to show that these three moves ensure ergodicity. At each step of the algorithm the type of move and the location in the graph are chosen randomly. The move is then accepted according to the Metropolis rule in such a way as to satisfy detailed balance with respect to the variable length ensemble (F1). Evidently the algorithm is more efficient for moderate lengths of the polymers, i.e., not too large values of |µ end |. It can be therefore convenient, for producing equilibrated configurations, to gradually decrease |µ end | to the desired value. . This new field is then exchanged against an old field in the population with probability C[{p (i) }] m /C m max , proportional to the reweighting C[{p (i) }] m (normalized so as to make sure that the probability never exceeds 1). If the dynamics converges to a stationary distribution, its density satisfies the recursion equation (24). In the soft glass phase, the iteration converges rapidly since the distribution of fields remains centered around the unstable liquid fixed point. However, the algorithm considerably slows down in the frozen phase where the fields have strong biases towards given conformations. Since the biases of the k parent members are only rarely compatible with each other, the reweighting is usually very small. The population dynamics is then dominated by rare events with a low degree of frustration. Obviously, the probability of frustrated events rapidly increases with the number of different local conformations and thus with the length of the period L. For this reason we have limited our numerical simulations in the frozen glass phase to populations of 4000 fields for chains with L = 20.
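For reference, a single step of the population-dynamics scheme just described might look as follows. This is a schematic sketch, not the production code: `cavity_map` is a hypothetical placeholder for the model-specific iteration of Eqs. (4)-(7), assumed to return the iterated field together with its normalization constant C, with C ≤ C_max.

```python
# Schematic population-dynamics step with 1RSB reweighting. `cavity_map` is a
# hypothetical stand-in for the iteration of Eqs. (4)-(7); it returns the new
# field and its normalization constant C (assumed <= C_max).
import random

def population_dynamics_step(population, cavity_map, k, m, C_max):
    parents = random.sample(population, k)     # draw k fields from the population
    new_field, C = cavity_map(parents)         # one cavity iteration
    if random.random() < (C / C_max) ** m:     # accept with reweighting C^m
        population[random.randrange(len(population))] = new_field
```

As noted above, this acceptance probability becomes very small in the frozen phase, where the biases of the k parent fields are rarely compatible, which is what limits the attainable population sizes.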
Disruption of Glioblastoma Multiforme Cell Circuits with Cinnamaldehyde Highlights Potential Targets with Implications for Novel Therapeutic Strategies Glioblastoma multiforme (GBM) is a major aggressive primary brain tumor with dismal survival outcome and few therapeutic options. Although Temozolomide (TMZ) is a part of the standard therapy, over time, it can cause DNA damage leading to deleterious effects, necessitating the discovery of drugs with minimal side effects. To this end, we investigated the effect of cinnamaldehyde (CA), a highly purified, single ingredient from cinnamon, on the GBM cell lines U87 and U251 and the neuroglioma cell line H4. On observing similar impact on the viability in all the three cell lines, detailed studies were conducted with CA and its isomer/analog, trans-CA (TCA), and methoxy-CA (MCA) on U87 cells. The compounds exhibited equal potency when assessed at the cellular level in inhibiting U87 cells as well as at the molecular level, resulting in an increase in reactive oxygen species (ROS) and an increase in the apoptotic and multicaspase cell populations. To further characterize the key entities, protein profiling was performed with CA. The studies revealed differential regulation of entities that could be key to glioblastoma cell circuits such as downregulation of pyruvate kinase-PKM2, the key enzyme of the glycolytic pathway that is central to the Warburg effect. This allows for monitoring the levels of PKM2 after therapy using recently developed noninvasive technology employing PET [18F] DASA-23. Additionally, the observation of downregulation of phosphomevalonate kinase is significant as the brain tumor initiating cells (BTIC) are maintained by the metabolism occurring via the mevalonate pathway. Results from the current study, if translated in vivo, could provide additional efficacious treatment options for glioblastoma with minimal side effects. Introduction The majority of malignant tumors of the brain are gliomas. Glioblastoma is an aggressive, major primary brain cancer with poor survival outcome [1]. Glioblastoma multiforme (GBM) is a grade IV tumor that has the ability to proliferate rapidly and become invasive and spreads throughout the brain [2]. The current treatment protocol for GBM is surgical resection, radiation therapy and adjuvant chemotherapy. Being recalcitrant to all the current modalities, GBM patients' survival rates are dismal at an average of 12-16 months [3]. Many studies are directed towards finding better treatment options, including employing bioengineering strategies such as the use of polymeric nanofibers in animal model systems [4]. Further, better understanding of the heterogeneous nature of GBM and effective treatment options for this cancer are needed. Understanding the cell circuits of GBM better could lead to novel treatment options for this aggressive cancer. Aerobic glycolysis, known as the Warburg effect, is utilized as an energy source by proliferating tumor cells. Furthermore, the mitochondria play a pivotal role as they control key cellular pathways including apoptosis leading to cell death. Thus, targeting mitochondrial function is considered as an important strategy to combat aggressive brain cancers such as GBM [5]. The chemotherapeutic drug currently in the clinic for treating GBM is temozolomide (TMZ), which has improved patient survival and is a DNA methylating agent with slow disease progression when compared to other drugs [6,7]. 
However, TMZ is reported to cause mutations as observed in the DNA in recurrent glioma [8]. To overcome the long-term deleterious side effects that could occur with TMZ, as well as to treat the glioblastomas that are insensitive to this drug, there is immediate need to discover safer drugs that could be used to efficaciously combat brain tumors. To this end, we opted to study the effect of a purified, single entity namely cinnamaldehyde (CA) (a component of cinnamon) on brain glioma cells (glioblastoma cells U87 and U251 and neuroglioma cells H4) with the anticipation of fewer side effects. Further, we investigated the impact of its isomer/analog TCA and MCA on U87 glioblastoma cell circuits. Importantly, to investigate the impact of CA on glioblastoma U87 cell circuits and elucidate its mechanism of action, we performed multilevel analysis (cellular, molecular and protein profiling) with the aim of identifying potential target molecules to efficaciously combat a devastating tumor such as GBM. Cell Toxicity Assay U87eGFP cells were plated in a 96-well plate at a density of 5000 cells per well in the culture medium. After 24 h, cells were treated with various concentrations of the compounds and were incubated for 72 h at 37 • C in an atmosphere of 5% CO 2 . The assays were performed in triplicates. Cell Counting Kit-8 assay from Bimake, (Houston, TX, USA) was used to assess the cell toxicity. The media was aspirated, and 200 µL of 10% CCK-8 solution in the complete growth medium was added. Absorbance was measured at 450 nm using an Infinite 200 Pro plate reader (Tecan, Männedorf, Switzerland) after 1.5 h of incubation at 37 • C. The viability of cells in the treated group was expressed after normalizing to the control group. Using Prism software (GraphPad, Boston, MA, USA), graphs were plotted, and IC 50 was determined. U251 and H4 cells were treated with CA to assess the impact on viability, similar to the viability assessment assay performed for U87eGFP. Three independent experiments were performed for each of the cell lines. Clonogenic Assay U87eGFP cells were plated in 12-well plates at 62,000 cells per well and after 24 h cells were either left untreated or treated in triplicate for 72 h with CA, TCA and MCA at 40 µM (IC 30 concentration). After this, cells were trypsinized and replated in 6-well plates at a density of 500 cells per well and incubated in media without the compounds for 14 days. Colonies were fixed and stained with crystal violet solution. For quantitative estimation of the colony forming efficiency, crystal-violet-stained colonies were lysed in 1% SDS, and the absorbance of the resulting solution was measured at 540 nm. Values of the treated groups were normalized to the control group and represented as percent of control, as described previously [9]. Flow Cytometry Assays U87eGFP cells were plated in 12-well plates at 62,000 cells per well and after 24 h, cells were either left untreated (control) or treated with either CA, TCA or MCA at the IC 50 concentration of 80 µM. Experiments were performed in triplicate, and cells were incubated for 72 h. After 72 h of incubation, cells from the media were collected, followed by procuring adhered cells via trypsinization. Cells from triplicate treatments were pooled and proceeded with Luminex Muse flow cytometry assays [10,11]. 
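For readers without access to Prism, the IC50 determination described above can be reproduced with a standard four-parameter logistic fit. The sketch below is our illustration, and the dose and viability numbers in it are placeholders rather than the study's data.

```python
# Illustrative only: a four-parameter logistic dose-response fit from which
# IC50 is read off, as an open-source stand-in for the GraphPad Prism fit
# mentioned above. Dose/viability values are placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: viability falls from `top` to `bottom`."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([10., 25., 50., 75., 100., 150.])      # uM (placeholder)
viability = np.array([95., 85., 62., 48., 35., 18.])   # % of control (placeholder)
params, _ = curve_fit(four_pl, dose, viability, p0=[10., 100., 70., 1.])
print(f"estimated IC50 ~ {params[2]:.1f} uM")
```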
The following assay kits were used: Oxidative Stress kit (MCH100111), Annexin V and dead cell kit (MCH100105), MultiCaspase kit (MCH100109) and Mitopotential kit (MCH100110), as per the manufacturers' instructions (Luminex Corporation, Austin, TX, USA). Proteomic Analysis Proteomic analysis was performed via 2D DIGE and mass spectrometry, which was conducted by Applied Biomics Inc. (Hayward, CA, USA) employing previously published methodologies [10,11]. U87eGFP cells were treated with 40 µM and 80 µM concentrations of CA. Control cultures (no treatment) were maintained in parallel. Cells from control samples and treated samples were collected, washed with 1× PBS and then stored at −80 °C prior to sending the samples to Applied Biomics, Inc. (Hayward, CA, USA) on dry ice for proteomic analysis. The protocol was performed as described previously [12,13]. Statistical Analysis Two-way ANOVA with Dunnett's multiple comparison test (alpha set to 0.05) and ordinary one-way ANOVA with Dunnett's multiple comparison test were calculated and are mentioned in the figure legends. Impact of CA, TCA and MCA on U87eGFP Cells We opted to investigate the effects of CA, TCA and MCA on U87eGFP cells. To assess the potential of the compounds in inhibiting the viability of these cells, they were treated with varying concentrations of the three compounds for 72 h. All three compounds inhibited cell viability in a dose-dependent manner, as shown in Figure 1A. The inhibition of cell viability was assessed with the CCK-8 assay, and the IC50 for CA, TCA and MCA was in the range of 70-80 µM with a p value < 0.05. To assess the effect on cell number during the earlier times of treatment, U87eGFP cells were treated with CA, TCA and MCA at an IC30 concentration of 40 µM, and confocal images were taken using a Zeiss 700 confocal microscope. As seen in Figure 1B, the number of cells was reduced when compared to the controls in all the treated samples. Further, to investigate whether there was an impact on the proliferation of treated cells, a clonogenic assay was performed at the same concentration. On normalizing to the control, the percentage colony-forming efficiency of each of the treated samples was significantly lower than the control (Figure 1C). Furthermore, there was a sustained effect, because the decrease in the number of colonies in the treated group was maintained on withdrawal of the compounds. In studies with U251 and H4 cells with CA, a dose-dependent inhibition of cell viability was also observed in both cell lines, with an IC50 of 50-60 µM for U251 and 80-90 µM for H4, as shown in Supplementary Figure S1. Figure 1. (A) The CCK-8 assay was performed to assess the cell viability. The data points are the mean of three replicates, and three such independent experiments were performed. The percentage of viable cells in treatment groups was calculated by considering untreated control values as 100%. There was a significant decrease in cell viability for all treatment groups. For both the CA- and MCA-treated groups, the p-value < 0.0001 from 50 µM onwards, and for the TCA-treated group, the p-value < 0.0001 from 100 µM onwards.
(B) To assess whether changes in cell number had occurred earlier during the treatment period, U87eGFP cells were treated with a 40 µM (IC30) concentration of CA, TCA and MCA and incubated for 72 h, and were fixed and imaged using a Zeiss 700 confocal microscope with a 20× objective; scale bar, 50 microns. (C) Cells were treated for 72 h with 40 µM of either CA, TCA or MCA and were reseeded in 6-well plates at a density of 500 cells per well and allowed to form colonies over a period of 14 days. The colonies were fixed and stained with crystal violet. For quantitative analysis, fixed colonies were lysed with 1% SDS, and an absorbance reading was taken at 540 nm. The data points are the mean of three replicates, and three such independent trials were performed. The absorbance values for treatment groups were normalized to the control, and there was a significant decrease in the colony-forming efficiency of the treated cells when compared to control as per ordinary one-way ANOVA with Dunnett's multiple comparison test, and the **** p-value was less than 0.0001. Reactive Oxygen Species Levels Were Elevated in U87eGFP Cells after Treatment with CA, TCA and MCA To further elucidate the mechanism by which cell viability was impacted in U87eGFP cells by CA, TCA and MCA, studies were performed to assess the production of reactive oxygen species (ROS) using an Oxidative Stress kit (as described in Materials and Methods). For the experiment, the cells were treated with the IC50 concentration (80 µM) of each of the compounds and incubated for 72 h. An elevation in ROS was observed after treatment with each of the compounds. Representative curves obtained for the treated samples in comparison with the curves obtained for the control sample for each of the compounds are shown in Figure 2A. The percentage values normalized to the control values are shown in Figure 2B.
In all the treated samples, there was an increase in ROS production in the U87eGFP cells. Programmed Cell-Death Pathway Was Impacted by CA, TCA and MCA in U87eGFP Cells After observing inhibition of cell viability and significant impact on the colony-forming efficiency in U87eGFP cells by CA, TCA and MCA, as well as an increase in ROS levels, we proceeded further to elucidate the mechanism of action. We opted to investigate the effect of all three compounds on the programmed cell-death pathways. To study the effect on the extrinsic pathway, we performed the Annexin V flow cytometric assay on treating cells at the IC50 concentration of 80 µM of each of the compounds for 72 h. Representative scatter plots of the control sample and samples treated with CA, TCA and MCA are shown in Figure 3A. Percent gated profiles of each of the cell populations in the untreated and treated groups are shown in Figure 3B. The total number of apoptotic cells in the treatment groups, normalized to untreated cells and averaged from three independent experiments, is represented by a bar graph shown in Figure 3C. The total number of apoptotic cells increased significantly in treated samples. The percent of early and late apoptotic cell populations normalized to the control population is represented in Figure 3D. A statistically significant increase in the late apoptotic population was observed in all the treated groups. Multicaspase Was Elicited by CA, TCA and MCA in U87eGFP Cells On observing the extrinsic pathway being invoked by CA and its isomer/analog, we investigated whether caspases, in particular MultiCaspase, were elicited. MultiCaspase activation was monitored by flow cytometry using the MultiCaspase assay kit.
MultiCaspase scatter plots of U87eGFP cells untreated (control) and treated with 80 µM of either CA, TCA or MCA are represented in Figure 4A. The MultiCaspase profile of untreated and treated cells averaged from three independent experiments are shown in Figure 4B. A statistically significant decrease in the live cell population and a significant increase in the Caspase+/Dead population in TCA-and MCA-treated groups was observed. A bar graph showing total multicaspase+ cells normalized to the control with the mean and standard deviation from 3 trials is shown in Figure 4C. There was a statistically significant increase in the total number of caspase+ cells in the treated groups. After normalizing to the control group, both the caspase+ and caspase+/dead cell populations increased significantly, as shown in Figure 4D. The Intrinsic Programmed Cell-Death Pathway Was Impacted by TCA and MCA in U87eGFP Cells A significant impact was observed on the extrinsic programmed cell-death pathway by CA, TCA and MCA in U87eGFP cells, which led us to investigate whether the three compounds had an impact on the intrinsic programmed cell-death pathway as well. To assess the possible effect on the mitochondria, we performed flow cytometry using a mitopotential depolarization analysis kit. Representative scatter plots of U87eGFP cells treated with 80 µM of either CA, TCA or MCA after 72 h of incubation are shown in Figure 5A. The percent gated profiles of all the populations of cells of untreated and treated cells averaged from three independent experiments are shown in Figure 5B. A significant decrease in the live population of cells in all the treated groups was observed. Total depolarized cells normalized to the control with the mean and standard deviation from three trials are represented by a bar graph in Figure 5C. There was a statistically significant increase in the total number of depolarized cells in the TCA-and MCA-treated groups. Proteomic Analysis Reveals Entities of Pivotal Signaling Pathways Differentially Regulated in U87eGFP Cells after Administration of CA To further delineate the key entities that could have been impacted, protein profiling of U87eGFP cells treated with CA 40 µM and 80 µM along with untreated cells was performed using 2D gel electrophoresis and mass spectrometric analysis (as described in 'Materials and Methods'). Clear separation of proteins from each of the cell extracts is shown in Figure 6A. For proteomic analysis, the control proteins were tagged with Cy2, the 40 µM treated sample was tagged with Cy3 and the 80 µM treated sample was tagged with Cy5 prior to being subjected to electrophoresis. An overlay of the gel of CA treated at 40 µM and the control sample gel are shown in Figure 6B. Similarly, overlay of the gel of CA treated at 80 µM and the control sample gel are shown in Figure 6C. The number of differentially expressed proteins is shown in the heatmap of proteins in Figure 6D. Based on the fold changes obtained, we selected upregulated and downregulated proteins for further proteomic analysis via mass spectrometry. The details of the molecules profiled are represented in Table 1. Among the upregulated proteins, the entities belonged to pivotal signaling pathways. Actin cytoplasmic 2 which plays a role in ECM-circuit and 60 S ribosomal protein L17 which plays a role in regulating cell proliferation in certain tissues were also upregulated. Among components that control the cell cycle, thymidylate synthase was upregulated. 
Furthermore, a stress protein such as biliverdin reductase A was also upregulated. Among the downregulated proteins, certain pivotal components of key signaling pathways were impacted. Importantly, phosphomevalonate kinase, which plays a role in the proliferation of cells as well as in the immune pathway, and pyruvate kinase (PKM2) of the glycolytic pathway, which is central to the Warburg effect, were downregulated. Thus, cinnamaldehyde treatment had a profound impact on the viability of U87eGFP glioblastoma cells. The details of the cell signaling pathways and the entities impacted by CA are represented in the schematic diagram in Figure 7. Discussion The need for efficacious drugs for treating aggressive brain tumors such as glioblastoma multiforme that could have potentially fewer side effects led us to investigate the effect of a purified, single entity of a naturally occurring compound from cinnamon such as cinnamaldehyde (CA). CA has been reported to inhibit proliferation of cancer cells of varied origins [14]. In the present study, the viability and proliferation potential of U87eGFP cells were impacted by CA. Furthermore, in the clonogenic assay, not only was there an impact observed on the proliferation potential of U87eGFP cells, but also a sustained response was observed on withdrawal of the compounds. Notably, an increase in ROS was observed in all the treated samples. Impact on the ROS levels is reported as one of the modes of action of many anticancer agents [15]. Furthermore, we observed cell death of U87eGFP cells occurring through the extrinsic programmed cell-death pathway, resulting in a significant increase in the population of apoptotic cells as well as cells with activated multicaspase. Additionally, the intrinsic cell-death pathway was impacted, leading to depolarization of the mitochondrial membrane potential; the increased population of cells with depolarized mitochondrial membranes was statistically significant in the TCA- and MCA-treated groups. With respect to the mechanism of inhibition in U251 and H4 cell lines by CA, we envision that it could be different from the one operational in U87 cells. U251 glioblastoma and H4 neuroglioma have different genetic makeups compared to U87.
Therefore, both the extrinsic and intrinsic pathways need to be analyzed to arrive at conclusions on the underlying mechanism(s) for the inhibition of proliferation in these cell lines. To further delineate the entities impacted by CA in U87eGFP cells, protein profiling using techniques such as 2D gel electrophoresis and mass spectrometry was performed; these techniques are more sensitive and can detect protein changes in the femtomolar range. The technologies employed in the present study are more sensitive than Western blots and ELISA, which are semiquantitative [16]. Protein-profiling data revealed key upregulated and downregulated entities encompassing various pivotal cell signaling pathways. Among the upregulated entities was actin cytoplasmic 2. Cytoplasmic actin plays an important role in the cell and in cancers. With respect to cell transformation, the actin cytoskeletal protein could play a structural and functional role in the cell's extracellular matrix (ECM) and is also involved in the communication between the nucleus and the ECM [17]. Further, it is reported that a combination of FDA-approved oncology drugs with effects on the cytoskeleton could be considered as a combination therapy for GBM in future studies [18]. Thus, a combination therapy of drugs that act on the cytoskeleton with CA could be considered as a novel therapeutic option for GBM treatment if resistance occurs.
The ribosomal protein identified as 60S ribosomal protein L17 was also among the proteins upregulated by CA. Interestingly, ribosomal proteins (RPs) have been reported to have extra-ribosomal functions [19], and the ribosomal protein L17 acts as a negative regulator of cell proliferation and inhibits vascular smooth muscle (VSM) growth [20]. We envision that an increase in this ribosomal protein could impact cell proliferation of glioblastoma cells, especially considering the propensity of GBM for neovascularization. Another upregulated entity was biliverdin reductase A (a cytoprotective agent). An increase in this molecule indicates that the treated cells are under stress and are activating molecules in the survival pathway. Interestingly, biliverdin reductase-based peptides could be considered as a novel therapeutic option because they inhibit cell proliferation [21]. Therefore, biliverdin reductase A peptides as inhibitors could be considered in combination with CA to prevent the cancer cells from using this pathway to become resistant. Another molecule that was upregulated was thymidylate synthase. Thymidylate synthase is under the control of CDK4 [22]. Therefore, CDK4 inhibitors, which are already under clinical trials, could be used in combination therapy to prevent the cancer cells from escaping cell death if resistance occurs. Among the pathways controlling immunity, the mevalonate pathway is shown to control chemoresistance, and impacting this pathway could cause immunological cell death [23]. Moreover, this pathway is involved in coordinating the input of energy requirements and the proliferation of cells [24]. A decrease in phosphomevalonate kinase observed during CA treatment of U87eGFP cells in the present study could have a significant effect on proliferation. In fact, we observed an impact caused by CA on cell proliferation in our cellular level studies. Furthermore, the decrease in phosphomevalonate kinase could be harnessed for alerting the immune system to the pathways that guard the cancers from being attacked, because the mevalonate pathway is pivotal to cancer immune surveillance [25]. Noteworthy is the report that patient-derived brain tumor initiating cells (BTIC) from glioblastoma are maintained by a Myc-regulated mevalonate pathway [26]. Similarly, the lipid-lowering drug Lovastatin is reported to possibly have an effect on glioblastoma stem cells by interfering with the mevalonate pathway [27]. Therefore, we envision that a reduction in phosphomevalonate kinase in the present study with CA could have an inhibitory effect on the maintenance of BTIC in glioblastomas, thus possibly preventing recurrence of this devastating cancer and providing a beneficial therapeutic effect. Notably, the key oncoprotein Myc has proven to be a difficult target to inhibit and is considered undruggable at present. Thus, to circumvent the problem, impacting pathways controlled by Myc, such as the mevalonate pathway through a decrease in phosphomevalonate kinase, could be considered as one of the strategies to inhibit its role in cancer-cell proliferation. One of the hallmarks of cancer cells is metabolic reprogramming, wherein aerobic glycolysis occurs instead of the regular respiratory pathway to harness energy, leading to what is termed the 'Warburg effect' [28]. One of the proteins that plays a key role in this metabolic reprogramming is pyruvate kinase M2 (PKM2), which is upregulated in many tumors including glioblastomas [29-31].
Further, microRNA-326 has been reported to regulate PKM2 and impact glioma cell survival [32]. Furthermore, it has also been reported that in U87MG, inhibition of PKM2 led to an increase in the late apoptotic cell population [31]. In fact, in the present molecular mode-of-action study, we observed an increase in the late apoptotic cell population in CA-treated cultures. The drug TMZ, which is part of the current standard of care for glioblastomas, causes changes in cell metabolism and, in particular, decreased expression and activity of pyruvate kinase PKM2. The significant decrease in the conversion of pyruvate to lactate observed in TMZ-treated glioblastoma cells is monitored by administering hyperpolarized (1-13C) pyruvate and tracked by employing magnetic resonance spectroscopic imaging (MRSI). This method has been shown to be more reliable and more rapid, and it forms a biosensor for assessing the therapeutic effect of TMZ, compared with measuring the tumor size, which takes days [33]. In the present study with CA, one of the proteins that was downregulated was PKM2, which was detected through protein profiling. Therefore, hyperpolarized pyruvate (1-13C pyruvate), used in the assessment of therapeutic outcomes of TMZ in glioma cells, could be employed to assess the effect of CA as well. Importantly, preferential expression of PKM2 has been reported in glioblastoma cells, and minimal expression has been observed in normal brain cells. Employing the positron emission tomography (PET) tracer [18F]DASA-23, which comprises 1-((2-fluoro-6-[18F]fluorophenyl)sulfonyl)-4-((methoxyphenyl)sulfonyl)piperazine, visualization of aberrant expression of PKM2 in cell culture, mouse models of glioblastoma multiforme (GBM), healthy human volunteers and patients with GBM has been reported [34]. Therefore, the same technology could be employed to assess the effect of CA in GBM, as it affects the protein levels of PKM2. Future studies with CA, which has a multifaceted effect as a monotherapy, and also studies in combination with current chemotherapeutics, are warranted to address the unmet need for efficacious treatment of aggressive brain tumors such as glioblastoma multiforme.
Assessment of Treatment Effect Estimators for Heavy-Tailed Data A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance. In this paper, we provide a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that the noisy (but unbiased) difference-of-means estimate can be used as a ground truth "label" on a portion of the RCT, to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme, which borrows statistical strength across a large collection of RCTs, to present an end-to-end methodology for judging an estimator's ability to recover the underlying treatment effect. We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain. In the corpus of AB tests at Amazon, we highlight the unique difficulties associated with recovering the treatment effect due to the heavy-tailed nature of the response variables. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough to ensure that the treatment effect is more accurately estimated. Introduction Causal inference is widely used across numerous disciplines such as medicine, technology, and economics to inform important, downstream decisions [Hernan and Robins, 2020]. Inferring causal relationships between an intervention and outcome requires estimating the treatment effect (TE): the difference between what would have happened given an intervention and what would have happened in its absence. A central difficulty is that these two events are never jointly observed [Rubin, 2005]. TE estimation leverages randomized controlled trials (RCTs)-which randomly assign the items of interest into either the treatment or control groups-to counter selection biases and allow causal effects to be estimated via a simple differences-in-means estimate. Indeed, the simplest "model-free" unbiased estimator of a treatment effect is the difference-of-means (DM) estimate [Rubin, 2005]. Such an estimator may, however, suffer from high variance in real-world scenarios which often involve heterogeneous, high-dimensional and heavy-tailed data. A plethora of additional information is thus often used to improve TE estimates relative to this simple baseline. For example, pretreatment regression adjustments can significantly reduce the variance of a treatment effect estimate while adding little additional bias [Angrist and Pischke, 2008, Imbens and Rubin, 2015]. Similarly, a host of other regularization and robustness modifications can be used to trade off bias and variance. As the complexity of such estimators increases, so do the assumptions (and work) needed to establish their statistical validity. Moreover, it is necessary and challenging to develop a principled approach to selecting an estimator from the zoo of possibilities. One particular setting in which the severity of these problems is diminished, and which we argue arises in many practical applications, is when large RCTs can be run in the same population. This setting provides an opportunity to get at the fundamental quantity of interest-the mean-squared error (MSE) of a given treatment effect estimator. Our simple insight is that the DM estimator can function as a noisy, but unbiased "label" for the treatment effect.
Noisy MSE estimates for a TE estimator can then be computed by comparing this estimator to the (unbiased) difference-of-means estimator via a simple, held-out validation estimate (see Claim 1). Our goal in this work is to judge the performance of TE estimators by pooling many noisy (but unbiased) estimates of their MSEs across many RCTs. Such a procedure is desirable because it targets the actual quantity of interest, the estimator MSE, in an assumption/estimator-agnostic fashion. The primary contributions of this work are as follows:
• We process a corpus of 709 AB tests (arising from genuine RCTs) implemented at Amazon across several years and we highlight the heavy-tailed nature of the response and covariate variables. The unique challenges associated with heavy-tailed estimation require careful navigation of the bias-variance tradeoff, which motivates the development of an objective selection procedure for TE estimation.
• We present a selection scheme which borrows statistical strength across the corpus of AB tests in order to judge the relative performance of several commonly used TE estimators.
• We use this framework to argue that in the presence of heavy-tailed data, as often arise in large-scale technology and logistics applications, aggressive downweighting and truncation procedures are needed to control variance.
Related Work The literature on causal inference and treatment effect estimation is vast and a comprehensive review is beyond the scope of this paper. Hernan and Robins [2020], Imbens and Rubin [2015], Angrist and Pischke [2008], Hadad [2020] and Wager [2020] provide modern perspectives on both the theory and practice of treatment effect estimation. Cross-validation (CV) also has been (and remains) a major subject of statistical inquiry, as it is amongst the most widely used tools to assess the quality of an estimator and perform model selection (Bayle et al. [2020], Lei [2020], Stone [1974], Geisser [1975]). Relatively little work has been done in the intersection of these two domains. Part of the difficulty stems from the fact that the standard procedure of CV breaks down for treatment effect estimation, since the true treatment effect is never observed in data. Athey and Imbens [2016] and Powers et al. [2018] do provide model-specific selection methods in the context of treatment effect estimation. However, these works do not apply to arbitrary TE estimators. Closest to our work is that of , who use a data-splitting methodology to evaluate several risk functions to assess heterogeneous treatment effect estimators. This differs from our work in two principal ways. First, our framework targets the problem of average treatment effect estimation: in many scenarios of interest, treatments cannot be individualized and must be applied in an all-or-nothing fashion to the entire population. Our statistical scheme also differs, since we provide a provably unbiased estimate³ of the mean-squared error of a TE estimator, and we introduce an aggregation scheme to borrow statistical strength across different AB tests to compare estimators. Additionally, our work uses a large corpus of 709 actual randomized AB tests conducted at Amazon over the course of several years as our testbed for estimator selection, in contrast to synthetic data simulations. One of our main motivations is to highlight the unique challenges associated with heavy-tailed data often present in applications arising at large-scale technology and logistics companies.
Semiparametric TE estimators for heavy-tailed datasets inspired by similar applications have been explored in Fithian and Wager [2014] and Taddy et al. [2016]. However, these works do not address the problem of model selection, which is our central focus. Specifically, we focus on methods to select among simple estimators (with few to no tuning parameters) that are widely used in practice. Preliminaries Notation: We use bold-faced variables such as X and x to define vectors. We work within the Rubin potential outcomes model [Rubin, 2005], where we imagine we are given a domain of objects Y and a target variable of interest Y(·) given a possible intervention. For a fixed intervention I, our goal is to estimate the population average treatment effect (ATE): ∆ = E[Y(1) − Y(0)], (1) where Y(1) corresponds to the value of an experimental unit (in our case, a product in the supply chain) given the treatment and Y(0) its unobserved counterfactual control (and vice versa). In general, we also allow the existence of other covariates X ∈ X in our model. In a given AB test, we first randomly sample an equal number of items into a treatment group T and a control group C. We further let (X_i, T_i, Y_i) be the covariates, treatment dummy, and value of the ith item. By a standard argument, using the assumption of randomization (independence of {Y_i(1), Y_i(0)} and T_i), the difference-of-means estimator, ∆̂_DM(T, C) = (1/|T|) Σ_{i∈T} Y_i − (1/|C|) Σ_{i∈C} Y_i, (2) provides an unbiased estimate of ∆ [Rubin, 2005]. A primary benefit of the DM estimator is that it is "model-free." That is, it makes no explicit assumptions on the data-generation process for Y_i as a function of the other covariates. Dataset Description Our entire corpus of AB tests consists of 709 RCTs that were run at Amazon over the course of several years (dating back to 2017) on a population of products. The interventions in each AB test consist of various modifications and (potential) improvements to the way in which products are processed through the supply chain. The AB tests are constructed as RCTs with 50% of products in an RCT randomly placed in the treatment group and 50% in the control group. The AB tests vary in size from tens of thousands of products to those with several millions. Each AB test is run over the course of approximately 27 weeks, with the intervention instituted at a trigger date at 10 weeks in the treatment group. At each week in an AB test, the response variable generated from each product is computed, as well as forecasted auxiliary covariates for that product (which might serve as a surrogate for its popularity). Each AB test was preprocessed to contain the averaged pretreatment response (denoted X), a strictly nonnegative averaged pretreatment auxiliary covariate (denoted D), averaged posttreatment response (denoted Y), and binary treatment indicator (denoted T) for each item. Heavy Tails and Hard Estimation The difficulties associated with treatment effect estimation of an intervention in large-scale commerce RCT datasets are manifold. The most salient difficulty for our consideration is that the response distribution over the range of products has a heavy tail.
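To make the fragility of the difference-of-means estimate under such heavy tails concrete, here is a minimal simulation sketch (on synthetic Pareto data rather than the Amazon corpus; all function and variable names are illustrative, and η = 2.3 mirrors the average decay exponent reported for the corpus):

    import numpy as np

    def diff_of_means(y_treat, y_control):
        # "Model-free" unbiased ATE estimate: mean(treatment) - mean(control).
        return np.mean(y_treat) - np.mean(y_control)

    rng = np.random.default_rng(0)
    true_effect, n, eta = 0.5, 100_000, 2.3

    # numpy's pareto(a) has density ~ (1 + y)^-(a + 1), so a = eta - 1
    # gives a density tail ~ y^-eta: finite mean, infinite variance.
    estimates = [
        diff_of_means(rng.pareto(eta - 1, n) + true_effect, rng.pareto(eta - 1, n))
        for _ in range(200)
    ]
    # Despite 100,000 items per arm, the spread across replications stays wide.
    print(np.mean(estimates), np.std(estimates))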
Similar heavy-tailed distributions are known to exist in user revenue distributions as well as user engagement metrics at large-scale technology companies [Fithian and Wager, 2014; Taddy et al., 2016]. Estimation in this setting is difficult and requires weighing several considerations when assessing the pros and cons of various estimation techniques. Our exploration of these issues serves a dual purpose: (1) to highlight the ubiquitous occurrence of such heavy tails in naturally occurring data, and (2) to motivate the need for a model selection procedure to navigate the bias-variance tradeoff. Let us investigate the data inside a single RCT to make this point concrete. The RCT under consideration consists of 7,208,692 distinct products. This RCT (a representative choice) displays significant heavy-tail behavior, as shown in Fig. 1. We implement the Hill estimator to estimate the power-law exponent η of the right tail of the response distribution, ∼ y^(−η), across all the RCTs under consideration. The Hill cutoff hyperparameter is chosen to discard points near the center of the distribution (i.e., near zero) and allows the formulation of a bias-variance tradeoff [Drees et al., 2000]. We avoid a more sophisticated data-driven choice of this cutoff since the precise Hill value is not of particular interest in our setting.⁴ Rather, it is apparent from Fig. 1 that the power η can be conservatively judged to lie between 1 and 3. Analyzing the response distribution across the entire corpus of 709 AB tests and choosing the Hill cutoff parameter at the 5th percentile shows that the average decay exponent is ≈ 2.32, with a standard deviation of 0.79 and a median of 2.1476. Note that η = 2 results in a Cauchy-like tail, for which random variables have an infinite mean and will fail to concentrate under normalized addition. The heavy tail of the response variable is rather remarkable since it suggests that the response random variable may not possess a variance. The lack of a variance invalidates the application of the central limit theorem to the normalized mean and also destroys the large-sample behavior of the bootstrapped mean distribution [Athreya, 1987]. The sample complexity of mean estimation also fundamentally changes, since finite-sample √n-confidence intervals are no longer attainable [Cherapanamjeri et al., 2020]. The difficulties seen in this case study reinforce the conclusion that handling the heavy tails inherent in our data likely requires more sophisticated (regularized) estimators than the DM estimator. Ultimately this boils down to balancing the tradeoff between bias and variance in estimation. Navigating this bias-variance tradeoff is one of the primary motivations for our aggregation methodology for TE estimator selection. Validation Procedure for Treatment Effect Estimators In this section we present the key idea behind the validation procedure we use to assess the quality of an arbitrary treatment effect estimator, ∆̂_E(·, ·), in the AB test denoted I. Let ∆ denote the population ATE shown in (1). Given the groups T and C, we first randomly partition them into disjoint groups T_1, T_2 and C_1, C_2. Now, consider the (potentially complicated) treatment effect estimator ∆̂_E(T_1, C_1) trained on the first fold of data.
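As a brief aside before developing the validation procedure, here is a minimal sketch of the Hill estimate of the tail exponent η discussed above (the cutoff fraction, taken here as the largest 5% of observations, is a tuning assumption, as is every name in the snippet):

    import numpy as np

    def hill_density_exponent(y, tail_frac=0.05):
        # Hill estimator on the largest tail_frac order statistics.
        # For a density ~ y^-eta the survival function decays like
        # y^-(eta - 1); Hill targets that survival exponent.
        y = np.asarray(y, dtype=float)
        y = np.sort(y[y > 0])[::-1]                  # descending positives
        k = max(int(tail_frac * len(y)), 2)
        alpha = 1.0 / np.mean(np.log(y[:k] / y[k]))  # survival exponent
        return alpha + 1.0                           # density exponent eta

    rng = np.random.default_rng(0)
    sample = rng.pareto(1.3, size=500_000)           # density exponent ~2.3
    print(hill_density_exponent(sample))             # roughly 2.3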
We can obtain an estimate of its performance by how well it targets the difference-of-means estimator computed on the hold-out set, ∆̂_DM(T_2, C_2): Err(∆̂_E) = (∆̂_E(T_1, C_1) − ∆̂_DM(T_2, C_2))². (3) A simple argument shows that this quantity is a noisy but unbiased estimate of the estimator's MSE, up to an additive term common to all estimators (and thus it permits the relative comparison of two different estimators). Claim 1. Given two different treatment effect estimators A and B in the aforementioned setting, we have: E[(∆̂_A(T_1, C_1) − ∆̂_DM(T_2, C_2))²] − E[(∆̂_B(T_1, C_1) − ∆̂_DM(T_2, C_2))²] = MSE(∆̂_A) − MSE(∆̂_B). Proof. We simplify the MSE of a treatment effect estimator E by centering the DM estimator around its mean and expanding the square: E[(∆̂_E − ∆̂_DM)²] = E[((∆̂_E − ∆) + (∆ − ∆̂_DM))²] = E[(∆̂_E − ∆)²] + E[(∆̂_DM − ∆)²] + 2 E[∆̂_E − ∆] E[∆ − ∆̂_DM] = MSE(∆̂_E) + Var(∆̂_DM(T_2, C_2)), where the cancellation of the cross term uses the independence of the first/second folds of data to factor the expectation over the two terms, and the unbiased estimation property of the DM estimator over the second fold [Rubin, 2005].⁵ We then obtain E[(∆̂_A − ∆̂_DM)²] = MSE(∆̂_A) + Var(∆̂_DM(T_2, C_2)) and E[(∆̂_B − ∆̂_DM)²] = MSE(∆̂_B) + Var(∆̂_DM(T_2, C_2)), from which the claim follows by subtraction. This result motivates using held-out sample error as a metric to assess the relative merit of two estimators ∆̂_A and ∆̂_B. However, simply using this estimator on a single RCT provides a (potentially very) noisy estimate of the population error, not the population error itself. Indeed, if the estimator ∆̂_DM(T_2, C_2) is sufficiently good to estimate ∆, why even bother to use another estimator? Said another way, the error estimate in (3) will always suffer at least the variance of the unbiased estimate (2). In practice we can always use a cross-validated version of (3) to reduce the subsampling variance due to the random train/test splits. However, such a procedure will not decrease the variance of the DM estimator arising from the underlying heavy-tailed data. An Aggregation Scheme Aggregating the mean-squared errors requires handling a practical consideration. Since the AB tests and interventions across RCTs themselves may be different, the overall scales of the MSEs between different AB tests may be different. As an example, consider a corpus of two AB tests on which estimator A obtains errors {1, 10} and estimator B obtains errors {2, 9}. Simply averaging the errors or doing a rank-based test of performance would indicate both estimators are equivalent. However, intuitively we believe a relative improvement of estimator B from 10 to 9 on the second AB test does not outweigh the degradation from 1 to 2 on the first AB test. This observation motivates the definition of a normalized score to compare the estimators A vs. B, as a function of the vectors of their noisy errors (in practice each error estimate is averaged over several resampled train/test splits, but we suppress this extra notation for clarity). For each intervention i ∈ {I_1, ..., I_N} we define the normalized score: S_i(Â, B̂) = (B̂_i − Â_i) / (Â_i + B̂_i), for Â_i ∈ Â and B̂_i ∈ B̂, where Â and B̂ are defined according to (7) and (8), respectively. This normalized score vector (which we denote by Ŝ(Â, B̂)) implicitly normalizes each of its elements to lie in the range [−1, 1]. Each element of this vector is a noisy score of estimator A's performance relative to B on one AB test in the corpus (note that this notion of a normalized score is element-wise transitive: (b − a)/(a + b) > 0 and (c − b)/(b + c) > 0 imply (c − a)/(a + c) > 0). If this vector has many positive elements, it suggests that estimator B has larger errors than estimator A. In this case, we would expect estimator A to be better than estimator B. To formalize this intuition, we use the following heuristic, which implicitly treats each RCT equally, independent of size. We use a two-sided one-sample t-test applied to this normalized score vector to test the null that the "population mean" of the Ŝ "distribution" is 0, i.e., that the performance of estimator A is indistinguishable from the performance of estimator B. Overall, this procedure interpolates between two extremes.
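Before turning to those two extremes, a small simulation sketch illustrates Claim 1 and the held-out error (3) on synthetic data (the Winsorized comparator and all names are illustrative, not the paper's exact pipeline):

    import numpy as np

    rng = np.random.default_rng(1)

    def dm(y_t, y_c):
        return y_t.mean() - y_c.mean()

    def winsorized_dm(y_t, y_c, q=99.0):
        # Biased but lower-variance: clip training arms at the q-th percentile.
        return (np.minimum(y_t, np.percentile(y_t, q)).mean()
                - np.minimum(y_c, np.percentile(y_c, q)).mean())

    def heldout_sq_error(estimator, y_t, y_c):
        # Split each arm; train on fold 1, score against the DM "label"
        # from fold 2, exactly as in (3).
        t1, t2 = np.array_split(rng.permutation(y_t), 2)
        c1, c2 = np.array_split(rng.permutation(y_c), 2)
        return (estimator(t1, c1) - dm(t2, c2)) ** 2

    n, effect = 50_000, 0.5
    errs = {"dm": [], "wins": []}
    for _ in range(300):
        y_c, y_t = rng.pareto(1.3, n), rng.pareto(1.3, n) + effect
        errs["dm"].append(heldout_sq_error(dm, y_t, y_c))
        errs["wins"].append(heldout_sq_error(winsorized_dm, y_t, y_c))

    # By Claim 1, this difference of averages estimates MSE(dm) - MSE(wins);
    # the common Var(DM on fold 2) term cancels.
    print(np.mean(errs["dm"]) - np.mean(errs["wins"]))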
At one extreme, a purely rank-based test of performance would only count the number of AB tests for which A is better than B, irrespective of how much better one is in a particular AB test. At the other extreme, a procedure which only looks at the raw (unnormalized) RCT errors has the property that RCTs with large MSE values for both estimators would drown out signal from RCTs with small MSE values. We stress that the t-test heuristic provides a simple way of converting the information contained in Ŝ(Â, B̂) to a single number, but we recommend looking at the score histograms for a more complete picture. Results In this section we detail several simple and commonly used estimators for TE estimation and subsequently compare their relative performance. Estimators For the following estimators, we note that each admits a "Winsorization" which can be used to trade off bias and variance. To do this, we can simply Winsorize the covariates and targets, X, D, Y, in only the training fold, to reduce variance. The test folds are always left untrimmed/un-Winsorized so that Claim 1 remains valid. Explicitly, we define Winsorization at level 0.001 to Winsorize the X, Y distributions at the 0.1 and 99.9 percentiles and the (positive) auxiliary D distribution at the 99.9 percentile. Difference-of-Means (dm) The simple difference-of-means estimator, ∆̂_DM = (1/|T|) Σ_{i∈T} Y_i − (1/|C|) Σ_{i∈C} Y_i, (10) can be interpreted in a regression framework by writing Y = Y(0) + T · (Y(1) − Y(0)). This implies we can estimate the ATE for the binary treatment using linear regression of the observed outcomes Y_i on the vector (1, T_i), which is equivalent to computing (10). Under the assumption of randomization, taking expectations conditional on the treatment assignment yields: E[Y_i | T_i] = α + ∆ · T_i. (11) In this setting the noise model is heteroscedastic (and depends on T). Difference-of-Median-of-Means (mom) Our definition of this estimator cannot be interpreted in the regression framework strictly speaking, but it is sufficiently similar that we describe it here. The formulation of (10) and its relationship to (11) motivates the robust estimation of α and ∆. Equivalently, we replace the sample means in (10) with median-of-means estimators for some prespecified block size B to define ∆̂_MoM = MoM({Y_i}_{i∈T}, B) − MoM({Y_i}_{i∈C}, B), completing the regression analogy. We use mom1000 in our experiments to denote the median-of-means estimator chosen with 1000 total blocks. Generalized LR (and Generalized Difference-in-Differences) (gen dd) We are given access to a pretreatment item-specific covariate X_i corresponding to the response value Y_i. If these covariates are strongly correlated with the response value Y_i, incorporating them into the regression can significantly reduce the variance. So, assuming the model Y_i = α + ∆ · T_i + β · X_i + ε_i, we can estimate the ATE for a binary treatment by regressing Y_i onto (1, T_i, X_i), where ε_i represents a general conditionally mean-zero noise term (which may depend on X_i). Remark 1. When the covariate X is the pretreatment response variable, we refer to the estimator as the generalized difference-in-differences estimator, since the population moment equation can be written as E[Y | X, T] = α + T∆ + Xβ. The connection to the standard difference-in-differences estimator can be seen by forcing β = 1 in the setting where X is the pretreatment value of Y.
Accordingly, an alternate interpretation of the difference-in-differences estimator is that it can be implemented by first regressing Y_i − X_i onto (1, T_i), which is equivalent to constructing ∆̂_DD = (1/|T|) Σ_{i∈T} (Y_i − X_i) − (1/|C|) Σ_{i∈C} (Y_i − X_i). Weighted Generalized LR (and Generalized Difference-in-Differences) (gen dd w1) Since all of the above estimators can be written as various forms of linear regression, it is also possible to interpret them from the perspective of M-estimation as minimizing a sum of residuals defined as r_i(α, ∆, β) = Y_i − α − ∆ · T_i − β · X_i. That is, we can consider estimation objectives of the form: min_{α,∆,β} Σ_i w_i · ψ(r_i(α, ∆, β)), for some sequence of weights w_i. The choice we explore is that of simple weighted least-squares. That is, we take ψ to be the squared loss and define the weights as w_i = (1 + D_i)^(−γ) (for γ > 0) for some nonnegative covariate D. In practice the covariate D is taken as an auxiliary covariate, which serves as a positive surrogate capturing the shape of the distribution of Y. In this case the weighting has the effect of downweighting large values of Y. Estimator Comparisons In this section we present results obtained from a corpus of 709 RCTs performed at Amazon over several years, as described in Section 1.3. We compare estimators by their out-of-sample MSE computed via the cross-validation procedure described in Section 3. We begin by studying several of the normalized score histograms to facilitate the comparisons of our estimators. In judging two estimators A, B via their score distribution Ŝ(Â, B̂), we note that a left-skewed score distribution indicates B is a better estimator (in terms of its MSE) than A. In Table 1, we use the t-test heuristic from Section 3.1 to summarize each score histogram. For the sake of brevity, we do not display all the methods tested in the table. We found that all the estimators weighted by different powers (1, 2, 3) of the inverse D distribution perform comparably. Overall, we see several phenomena that accord with our expectations. First, adjusting for the pretreatment covariate reduces variance (i.e., gen dd is better than dm). Second, downweighting large values of Y provides significant value: inverse weighting by D and Winsorization perform generically the best under our metric (gen dd w1 and all Winsorized estimators perform well). We also see that the dm estimator is dominated by every other method in Table 1, including the difference-of-median-of-means estimator (mom1000), whose robustness underlies its improved performance. We can also check that the results from Table 1 are stable with respect to using different resampled replicates to compute the cross-validated errors which feed into the error vectors Â/B̂, as Tables 2 and 3 in Appendix A show. We summarize this table by converting it into a table of pairwise comparisons of wins/losses/ties, using a p-value to determine the significance of the win or loss. The question of extracting an ordered ranking from the table of wins/losses is a classic problem. The natural procedure of simply summing up the number of row-wise wins is commonly referred to as the Copeland/Borda counting method (see Saari and Merlin [1996] and references therein).
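Stepping back briefly to the estimators themselves, here is a minimal sketch of training-fold Winsorization and the inverse-(1 + D) weighted regression behind gen dd w1 (γ = 1; a plain numpy weighted least squares is used, and the helper names are assumptions):

    import numpy as np

    def winsorize_train(x, lower_pct=0.1, upper_pct=99.9):
        # Winsorization at level 0.001: clip training-fold values at the
        # 0.1 and 99.9 percentiles; test folds are left untouched so that
        # Claim 1 remains valid.
        lo, hi = np.percentile(x, [lower_pct, upper_pct])
        return np.clip(x, lo, hi)

    def weighted_gen_dd(y, t, x, d, gamma=1.0):
        # Weighted least squares of y on (1, t, x) with weights (1 + d)^-gamma,
        # downweighting items with a large auxiliary covariate d.
        sw = np.sqrt((1.0 + d) ** (-gamma))
        design = np.column_stack([np.ones_like(y), t, x])
        coef, *_ = np.linalg.lstsq(design * sw[:, None], y * sw, rcond=None)
        return coef[1]  # coefficient on t is the ATE estimate

Here d plays the role of the nonnegative auxiliary covariate D, and the weighting implements the M-estimation objective above with ψ taken to be the squared loss.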
Returning to the ranking: it has recently been shown that the simple Copeland/Borda counting method is robust to misspecification of the ranking model and optimal under minimal assumptions [Shah and Wainwright, 2018] (such consistency results are not strictly necessary here, since our notion of a normalized score vector is element-wise transitive, as noted earlier). Applying such a method by inspection returns the following ranking, where "wins.001" denotes Winsorization at level 0.001: gen dd w1 wins.001 > gen dd wins.001 > dm wins.001 ≈ gen dd w1 > gen dd > mom1000 > dm. Overall, these results suggest that aggressively Winsorizing and/or downweighting heavy tails can profitably accept some additional bias in exchange for reduced variance. We also stress that although our ranking procedure via t-statistics is transitive, the score and t-statistic values between A and B are computed via relative normalization between just these two estimators' errors. Hence the actual values across several estimators are not always directly comparable, due to the different normalizations used. Thus, we should always look at the performance of two estimators directly, take their score histogram into consideration, and exercise common-sense checks to draw further conclusions. Conclusion In this work, we develop a simple methodology for treatment effect model/estimator selection which pools the performance of estimators across RCTs. The methodology allows us to compare estimators on a held-out data fold in an unbiased way. The results align with a priori intuitions of estimator performance. Our primary insight is that we should be trading some additional bias for lower variance to reduce the MSE of treatment effect estimation in problems with heavy tails. In particular, reweighting the pretreatment least-squares estimator and Winsorization both have the potential to improve the accuracy of treatment effect estimation. Further investigation into better estimators (as judged by their held-out MSE) and their coverage is warranted. Given this framework, several directions for further research present themselves:
• Other Estimators: Implementing other, more sophisticated treatment effect estimators and comparing their performance in our setting. Good estimator candidates include nonparametric methods with few tuning parameters (such as random forests) and more complicated estimators inspired by semiparametric extreme value theory.
• Heterogeneity: We have focused on the estimation of the ATE and concluded that downweighting/trimming large values is important. Implicitly, this suggests the fluctuations from large values may be inestimable. Gaining a better understanding of the heterogeneity of treatment effect estimation in our setting and developing assessment methodologies tailored to this setting is worthwhile.
• Coverage: Obtaining good confidence intervals is important for TE estimation. Developing reasonable procedures for constructing confidence intervals for weighted or biased estimators (and objectively assessing their coverage) is also a high-priority direction for further exploration.
Acknowledgements The authors thank Robert Stine, Edo Airoldi, and Kenny Shirley for their valuable comments and feedback.
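As an end-to-end companion to the aggregation scheme of Section 3.1, here is a sketch of the normalized score vector and the t-test heuristic (scipy's one-sample t-test supplies the test; the per-RCT error vectors are assumed to have been computed already):

    import numpy as np
    from scipy import stats

    def normalized_scores(err_a, err_b):
        # S_i = (B_i - A_i) / (A_i + B_i): positive entries favor estimator A
        # (its error is smaller); every entry lies in [-1, 1].
        err_a, err_b = np.asarray(err_a), np.asarray(err_b)
        return (err_b - err_a) / (err_a + err_b)

    def compare_estimators(err_a, err_b):
        # Two-sided one-sample t-test of the null that the mean score is 0,
        # i.e., that A and B are indistinguishable across the corpus.
        s = normalized_scores(err_a, err_b)
        t_stat, p_value = stats.ttest_1samp(s, popmean=0.0)
        return s, t_stat, p_value

    # err_a[i], err_b[i]: held-out squared errors of the two estimators on RCT i.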
2021-12-15T06:35:02.241Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b956b0b8165972e305f58bdc658e449cbc87dedd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4181052b4cbcdcd23ffebd110e7ab25aa98e6272", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
6824796
pes2o/s2orc
v3-fos-license
JAK-STAT1/3-induced expression of signal sequence-encoding proopiomelanocortin mRNA in lymphocytes reduces inflammatory pain in rats Background Proopiomelanocortin (POMC)-derived beta-endorphin 1-31 from immune cells can inhibit inflammatory pain. Here we investigated cytokine signaling pathways regulating POMC gene expression and beta-endorphin production in lymphocytes to augment such analgesic effects. Results Interleukin-4 dose-dependently elevated POMC mRNA expression in naïve lymph node-derived cells in vitro, as determined by real-time PCR. This effect was neutralized by janus kinase (JAK) inhibitors. Transfection of Signal Transducer and Activator of Transcription (STAT) 1/3 but not of STAT6 decoy oligonucleotides abolished interleukin-4-induced POMC gene expression. STAT3 was phosphorylated in interleukin-4-stimulated lymphocytes in vitro and in lymph nodes draining inflamed paws in vivo. Cellular beta-endorphin increased after combined stimulation with interleukin-4 and concanavalin A. Consistently, in vivo reduction of inflammatory pain by passively transferred T cells improved significantly when donor cells were pretreated with interleukin-4 plus concanavalin A. This effect was blocked by naloxone-methiodide. Conclusion Interleukin-4 can amplify endogenous opioid peptide expression mediated by JAK-STAT1/3 activation in mitogen-activated lymphocytes. Transfer of these cells leads to inhibition of inflammatory pain via activation of peripheral opioid receptors. Background Inflammatory pain is often refractory to conventional treatments. In addition, currently available opioid analgesics have deleterious side effects such as apnoea or addiction, which have recently led to an epidemic of overdoses, deaths, and abuse [1]. However, the activation of opioid receptors on peripheral sensory neurons can inhibit pain without central or systemic adverse effects. This can be achieved by exogenous opioids or by endogenous opioid peptides derived from immune cells [2]. These findings are of clinical relevance since human pain is exacerbated by interrupting the interaction between endogenous opioids and their peripheral receptors [3], and is diminished by stimulating opioid secretion [4]. Advantages of targeting endogenous opioids include reduced tolerance, receptor down-regulation, desensitization, and off-site or paradoxical excitatory effects caused by unphysiologically high exogenous agonist concentrations at the receptor [5]. Beta-endorphin 1-31 is the most prominent opioid peptide eliciting analgesia in peripheral inflamed tissue [2]. In this environment immune cells can locally secrete opioid peptides upon stimulation by stress, corticotropin-releasing factor (CRF), catecholamines, or chemokines [3,6,7], resulting in the activation of opioid receptors on peripheral terminals of sensory neurons and subsequent analgesic effects [8]. Consistently, stress- and CRF-induced analgesia is reduced in immunosuppressed rats and reconstitution of functional lymphocytes has been shown to reverse this effect [9-11]. In contrast to investigations on the release of opioid peptides from immune cells, the regulation of opioid gene expression and processing in such cells has not been studied in detail so far. Proopiomelanocortin (POMC) is the precursor of beta-endorphin, and POMC-related peptides are produced in the pituitary, hypothalamus, immune cells, and other tissues [12-14]. The POMC gene comprises three exons that are transcribed into full-length POMC mRNA.
Translation of exons 2 and 3 gives rise to a pre-propeptide. In neuroendocrine cells, the formation of the active peptides is accomplished by entering the regulated secretory pathway and involves extensive proteolytic cleavage [15]. Current knowledge about regulatory pathways of beta-endorphin production in lymphocytes is sparse, predominantly because full-length POMC mRNA is difficult to detect in leukocytes [10,16-20]. Using a refined quantitative methodology, we have demonstrated signal sequence-encoding POMC mRNA (exons 2-3) and beta-endorphin in lymph nodes draining inflamed tissue [21]. Others have shown that lymphocytic full-length POMC mRNA can be induced by concanavalin A (ConA), CRF, cytokines, or phorbol ester in vitro but did not delineate the relevant signaling pathways [22,23]. In pituitary cells, the transcription factors Tpit and Pitx1 [24,25], Nur77, and the janus kinase/signal transducer and activator of transcription (JAK/STAT) pathway are involved in cytokine-induced POMC gene expression [26,27]. The latter pathway is also important in the hypothalamic transcription of the POMC gene induced by leptin [28]. Here we set out to examine cytokines and signaling molecules involved in POMC gene expression in lymphocytes and to test the functional relevance of POMC stimulation for the inhibition of inflammatory pain in vivo. Based on the cytokine expression profile in lymph nodes draining normal and inflamed tissue, in vitro assays were established to test candidate cytokines individually with respect to their potency to elevate POMC mRNA levels in naïve node cells. To enhance cellular activation by mimicking cell-cell contact, lymphocytes were exposed to the mitogen ConA [29]. We hypothesized that peripheral opioid analgesia can be amplified by transfer of cells primed ex vivo to express elevated POMC and beta-endorphin. Exon 2-3 spanning POMC mRNA in lymphocytes is upregulated by IL-4 To identify potential regulators of POMC gene expression, we compared the expression profile of inflammatory cytokines in lymph nodes draining normal vs. inflamed paws 2 h following intraplantar (i.pl.) injection of Complete Freund's Adjuvant (CFA). Among nineteen cytokines analyzed, only IL-1β and IL-4 were significantly up-regulated in comparison to lymph node lysates from healthy animals (Table 1). Stimulation of lymph node-derived naïve lymphocytes with 5 ng/ml interleukin (IL)-1β for 2 h in vitro did not significantly elevate POMC exon 2-3 mRNA transcript levels over unstimulated controls (Figure 1A). Dose-dependent increases of these mRNA transcripts were observed after incubation with IL-4; a significant elevation over control levels was obtained with 10 ng IL-4/ml (Figure 1B). No differences were detectable between untreated and IL-2-, MIP-3α-, MCP-1- (data not shown), or ConA-treated cells (Figure 1C). IL-4-induced POMC exon 2-3 mRNA expression in lymphocytes is mediated via JAK and STAT1/3 signaling The pan-JAK inhibitor pyridon 6 reduced the IL-4-induced elevation of POMC mRNA. This inhibition was significant at concentrations of 0.3 and 0.6 μM (Figure 1D). The JAK1/3 inhibitor A771726 (125 μM), but not the STAT6 inhibitor cyclic pifithrin-alpha, significantly decreased the IL-4-induced elevation of POMC mRNA (Figure 1E, F). IL-4-induced POMC mRNA levels were significantly attenuated by STAT1/3 but not by STAT5 or -6 decoy oligonucleotides (Figure 1G, H, I). After exposure of naïve cells to IL-4, cell lysates were analyzed using Western Blotting.
STAT3 was strongly phosphorylated at Tyrosine 705, and this phosphorylation was blocked by pyridon 6 pretreatment (Figure 2A), while slight STAT3 acetylation at Lysine 685 was observed in unstimulated, IL-4-treated, and pyridon 6-pretreated cells and appeared to be unaffected by the cell treatments (Figure 2C; Friedman test, p > 0.05). Akt phosphorylation at Serine 473 was present after IL-4 stimulation and absent in cells pretreated with pyridon 6 (Figure 2B; Friedman test, p < 0.05). However, post hoc comparison for Akt phosphorylation remained insignificant. Phosphorylation of extracellular signal-regulated kinase (ERK) 2 (p42) and of mitogen-activated protein kinase p38 remained largely unaffected by IL-4 stimulation (Figure 3), but phosphorylation of both kinases tended to increase in IL-4 plus pyridon 6-treated cells. Significant differences between treatment groups were observed for p42 phosphorylation (Friedman test, p < 0.05). Post hoc comparison using Dunn's test revealed significant differences for p42 phosphorylation between untreated controls and IL-4 plus 0.66 μM pyridon 6-treated cells. IL-4 treatment elevates beta-endorphin content and release from mitogen-activated lymphocytes Cellular amounts of immunoreactive beta-endorphin did not change in naïve lymphocytes stimulated with IL-4 for 24 h (Figure 4A). To mimic cell activation, naïve lymphocytes were incubated for 24 h with ConA, which had no effect on the cellular levels of immunoreactive beta-endorphin (Figure 4A). However, the combined stimulation with ConA and IL-4 significantly increased contents of immunoreactive beta-endorphin (Figure 4A). Vesicular release was induced by ionomycin treatment. Extracellular levels of immunoreactive beta-endorphin were significantly higher than controls when cells were prestimulated with combined but not separate ConA and IL-4 (Figure 4B). Ionomycin-induced release of immunoreactive beta-endorphin from ConA/IL-4-stimulated cells was not significantly influenced by up to 1 μM pyridon 6 pretreatment (Figure 4B). Transfer of mitogen-activated lymphocytes pretreated with IL-4 restores opioid antinociception in immune cell-depleted rats Four days after i.pl. injection of Complete Freund's Adjuvant (CFA), paw pressure thresholds (PPT) in inflamed (ipsilateral) paws were significantly lower than in noninflamed (contralateral) paws of rats immunosuppressed by cyclophosphamide (CTX; Figure 5). Intraplantar transfer of unstimulated or stimulated cells did not change the reduced PPT (hyperalgesia) in inflamed paws in comparison to the baseline levels (Figure 5). (Figure 1 legend: In A-C, POMC mRNA ratios are given in relation to unstimulated controls; in D-I, POMC mRNA ratios are given in relation to inhibitor-free, IL-4-stimulated controls. Data represent means ± SEM. Statistical analysis was performed on raw data of A+C using the Wilcoxon signed rank test; raw data of B and D-I were analyzed using the Friedman and Dunn's tests. *P < 0.05; **P < 0.01.) However, i.pl. injection of 1.5 ng CRF completely reversed hyperalgesia in paws injected with ConA/IL-4-stimulated T cells compared to all other groups, such that PPT were similar to contralateral noninflamed paws (Figure 5). CRF-induced increases of ipsilateral PPT values were significantly higher in animals receiving 1×10⁵ (63.8 ± 4.4 g) and 5×10⁵ (65.0 ± 7.3 g) ConA/IL-4-treated cells in comparison to 10×10⁵ cells (53.8 ± 8.0 g) (One-Way ANOVA and Bonferroni's Test, P < 0.05). Therefore, subsequent experiments were performed with the lowest cell number. Four days after i.pl.
CFA, PPT were analyzed in immunosuppressed (Figure 6A) and in immunocompetent (Figure 6B) animals pretreated with s.c. NLX prior to CRF injection. PPT in immunosuppressed versus immunocompetent rats were slightly but significantly lower in both contralateral (64.4 ± 1.0 g versus 68.1 ± 1.0 g, respectively; unpaired t-test, P < 0.05, 8 rats per group) and inflamed paws (29.4 ± 1.2 g versus 36.04 ± 0.8 g, respectively; unpaired t-test, P < 0.05, 8 rats per group). Baseline PPT in inflamed paws were not influenced by NLX. In recipients of ConA/IL-4-stimulated cells, NLX completely reversed CRF-induced increases of PPT (Figure 6A). In immunocompetent rats, CRF-induced PPT elevations in inflamed paws were significantly higher than in contralateral paws. This effect was abolished by NLX (Figure 6B). STAT3 and Akt are phosphorylated in lymph nodes draining inflamed tissue in vivo At 1 and 2 h after induction of CFA-inflammation in vivo, phosphorylated STAT6 was undetectable in cells isolated from ipsi- and contralateral popliteal lymph nodes (Figure 7A). Tyrosine-phosphorylation of STAT3 was observed in cells from ipsi- and contralateral nodes (Figure 7B). Densitometry showed that at both 1 and 2 h post CFA-injection, this phosphorylation was significantly stronger in the ipsilateral than in the contralateral node cells (Wilcoxon signed rank test, P < 0.05). Serine-phosphorylation of STAT3 and Tyrosine-phosphorylation of STAT1 were not detectable in cells from ipsi- or contralateral nodes (Figure 7B). Despite background, Threonine-phosphorylation of Akt was detectable in cells from the ipsi- and contralateral nodes, but no reliable densitometric analysis could be performed. Discussion In the present study we found that: i) signal sequence-encoding POMC mRNA expression can be induced in naïve lymph node-derived cells by IL-4 stimulation in vitro, ii) POMC exon 2-3 mRNA up-regulation by IL-4 is at least partially mediated via the JAK-STAT pathway involving Tyrosine-phosphorylated STATs 1 and -3, but not STAT5, STAT6, ERK 2 or p38 MAPK, iii) IL-4 induces beta-endorphin production in mitogen-activated lymphocytes, and iv) in vivo transfer of IL-4-stimulated, mitogen-activated T lymphocytes restores CRF-induced, opioid-mediated analgesia in immune cell-depleted animals. Our previous studies have shown that the levels of signal sequence-encoding POMC mRNA in the draining lymph node increase as early as 2 h post induction of paw inflammation in rats [21]. In neuroendocrine cells the translational product of such mRNA transcripts (in contrast to that of truncated POMC transcripts lacking exon 2) can enter the secretory pathway [30], which is a prerequisite for the formation and secretion of biologically active POMC-derived peptides including beta-endorphin. However, the regulation of POMC gene expression and processing in immune cells has not been studied in detail so far. We have now identified elements that are involved in the transcriptional regulation of POMC expression in lymphocytes. ConA did not enhance POMC exon 2-3 mRNA levels within the relatively short time frame of 2 h. Others found ConA-induced POMC mRNA elevation in splenocytes incubated for 21 h with this mitogen [22]. In our experiments IL-1β treatment only slightly elevated POMC exon 2-3 mRNA levels after 2 h. This resembles previous findings in human dermal endothelial and in corticotroph AtT-20 cells [31,32]. Similarly, IL-2 did not elevate the POMC mRNA in our study or in AtT-20 cells [32,33], but IL-4 induced a considerable increase.
So far this cytokine had not been investigated with respect to POMC mRNA expression in lymphocytes, but it was found to stimulate proenkephalin mRNA in peripheral blood mononuclear cells [34]. The predominant pathway of IL-4-induced gene transcription in T and B cells involves JAK1/3 and STAT6 activation [35]. This was also found in the case of the μ-opioid receptor [36]. Thus, we hypothesized that the induction of lymphocytic POMC gene expression by IL-4 is mediated via the JAK-STAT6 pathway. Indeed, the IL-4 effect was fully blocked by the pan JAK inhibitor pyridon 6 and by the JAK1/3 inhibitor A771726, but not by the proposed STAT6 inhibitor cyclic pifithrin-alpha. In addition, it was not attenuated by STAT6 but partially by STAT1/3 decoy oligonucleotides. Others demonstrated that phospho-STAT3 activated the POMC promoter through an indirect mechanism requiring an SP1-binding site [28], and that STAT3 and the AP-1 protein complex can cooperate in driving transcription [37]. In pituitary corticotrophs, leukemia inhibitory factor induced POMC gene expression via binding of phosphorylated STAT1 and -3 homo- and heterodimers to the promoter [38]. In line with these findings, we detected considerable STAT1 phosphorylation in IL-4-treated lymphocytes. Also, following IL-4 stimulation STAT3 was strongly phosphorylated at Tyrosine 705, which was blocked by the pan JAK inhibitor pyridon 6. (Figure 6 legend: Effects of the injection (baseline versus CRF) and of the treatment (saline versus NLX) on ipsilateral PPT were analyzed using Two-Way RM ANOVA and Bonferroni's Test. §§§, CRF injection significantly elevated PPT in NaCl-treated animals (P < 0.001); ***, CRF-induced elevation of PPT was significantly reduced by NLX (P < 0.001).) This is in agreement with IL-4-induced, JAK3-mediated phosphorylation of STAT3 in naïve cytotoxic T cells [39]. We did not observe STAT3 phosphorylation at Serine 727. Others demonstrated that DNA binding of STAT1 or -3 is not affected by Serine-phosphorylation [40], indicating that Tyrosine-phosphorylation is sufficient for the induction of gene transcription. In line with this notion, leukemia inhibitory factor-induced POMC transcription in the pituitary was abrogated by mutated STAT3 containing Phenylalanine instead of Tyrosine 705 [41]. IL-4 can also activate the phosphoinositide 3-kinase/protein kinase B (Akt) pathway [42]. Concordantly, we found Akt phosphorylation. In addition, this pathway is known to activate Ras-Raf-Mitogen-Activated Protein kinases (MAPK) including phospho-p44/42 (ERK 1 and 2). We found that ERK 2 was already highly phosphorylated in unstimulated lymphocytes and was apparently not changed after treatment with IL-4 or with pyridon 6. Together, these findings suggest that the MAPK pathway is not essential for IL-4-induced POMC gene expression in lymphocytes, which resembles findings in AtT-20 cells [41]. This is also supported by the lack of STAT3 Serine-phosphorylation after IL-4 treatment, which would be expected upon Ras-Raf-MAPK pathway activation [43]. In contrast to human neutrophils [44], we found that p38 MAPK was phosphorylated in both naïve and IL-4-stimulated lymph node-derived cells. Additionally, phosphorylation of p38 was not significantly affected by pyridon 6. Thus, we conclude that p38 is not involved in IL-4-induced POMC gene expression in lymphocytes. We then analyzed whether IL-4 treatment increased the cellular beta-endorphin content and in vivo antinociceptive function.
To obtain significant opioid peptide levels and release in vitro, we had to prime naïve cells with the mitogen ConA, similar to others [11]. This suggests that POMC gene expression and precursor processing are independently regulated in lymphocytes. Inflammatory cells express POMC processing enzymes [45,46], but their functional role in the regulation of processing pathways and beta-endorphin production has not been elucidated. To investigate antinociceptive effects of T cell-derived opioids, we used immune cell-depleted rats and stimulation with CRF, which has been shown to release opioid peptides in vitro and in vivo [9,16]. In recipients of ConA/IL-4-stimulated T cells this resulted in strong antinociception in inflamed paws. In contrast to findings after intravenous cell transfer [11], this effect did not increase with rising cell numbers. Since we administered the cells directly to the site of inflammation, lower numbers may be required. The same CRF dose injected into immunocompetent animals induced a stronger antinociceptive effect than in immunosuppressed T cell recipients, indicating that CRF was not the limiting factor. The antinociceptive effect was blocked by naloxone-methiodide, consistent with the notion that it was mediated by opioid receptors on peripheral terminals of sensory neurons [8]. Together with our in vitro data, these findings indicate that the production of biologically active beta-endorphin is enhanced by treatment of mitogen-activated lymphocytes with IL-4, and that this strategy can be used to amplify opioid inhibition of inflammatory pain in vivo. Thus, we have discovered a new mechanism, adding to previous reports showing antinociceptive effects of IL-4 via the inhibition of pro-inflammatory cytokines [47,48]. In those studies, pain thresholds were determined 30 min after IL-4 injection and the effects were not reversed by naloxone. In our experiments, the opioid-dependent antinociceptive effects produced by passively transferred T lymphocytes pretreated for 24 h with IL-4 plus ConA were detected only after injection of CRF. This further supports the concept that IL-4 induces the production rather than the release of opioid peptides in activated lymphocytes. Finally, to obtain information on the JAK/STAT signaling molecules at early stages of inflammation in vivo, we analyzed lymph nodes dissected after induction of unilateral paw inflammation by CFA. Phosphorylated STAT6 was undetectable in all tissues. This finding indicates that IL-4 was not released after CFA-inoculation, at least not at substantial concentrations. However, STAT3 Tyrosine-phosphorylation was increased in cells from ipsilateral lymph nodes. Taken together, these findings indicate that the in vivo elevation of POMC mRNA may be due to STAT3 phosphorylation, and this effect can be mimicked in vitro by stimulating naïve lymphocytes with IL-4. In summary, our present and previous data show that beta-endorphin-containing lymphocytes infiltrating inflamed tissue can produce opioid-mediated antinociception [11,49,50]. This is also supported by the present finding of increased nociception in immunosuppressed as compared to immunocompetent animals with hind paw inflammation. Furthermore, we have now demonstrated that POMC and beta-endorphin production, as well as pain inhibition, can be significantly augmented by IL-4-induced activation of the JAK/STAT pathway.
This should spawn innovative approaches to pain therapy, for example antigenic stimulation (similar to vaccination), clonal expansion, or genetic manipulation of such cells [51]. Pain relief via enhancement of endogenous opioid production in immune cells may overcome limitations of conventional analgesics such as addiction, paradoxical hyperalgesia, cognitive impairment, nausea and constipation induced by opioid drugs, or gastrointestinal ulcers, bleeding, and cardiovascular complications produced by cyclooxygenase inhibitors. Conclusion The expression of POMC and beta-endorphin in lymphocytes is apparently linked to anti-inflammatory cytokines and JAK-STAT1/3 activation. Interleukin-4 effectively stimulated POMC transcription in naïve cells. Evidently, precursor processing is regulated independently from gene transcription by so far unidentified factors during cell activation. Our data provide a novel in vitro model to study the molecular mechanisms involved in opioid peptide synthesis in such cells and outline novel approaches to pain treatment by promoting production of immune cell-derived opioid peptides. Experimental animals and induction of inflammation Experiments were approved by the animal care committee of the State of Berlin and strictly followed the guidelines of the International Association for the Study of Pain [52]. Male Wistar rats (225-300 g, Charles River Breeding Laboratories) received an i.pl. injection of 0.15 ml Complete Freund's Adjuvant (CFA, Calbiochem, La Jolla, CA, USA) into the right hind paw under brief isoflurane (Rhodia Organic Fine Ltd., Bristol, UK) anesthesia. The inflammation remained confined to that paw throughout the observation period. Cytokine array Popliteal lymph nodes draining normal and inflamed hind paws (2 h post CFA) were dissected, homogenized, and lysed. Expression of cytokines was analyzed using RayBio™ Rat Cytokine Antibody Array 1.1 kits (RayBiotech, Inc., Norcross, GA, USA) following the manufacturer's instructions. Array membranes carrying antibodies for the detection of nineteen cytokines and anti-rat IgG (loading control) were incubated with the lymph node lysates (500 μg protein/membrane) for 2 h. After washing, the membranes were incubated for 2 h with a mixture of biotin-conjugated antibodies, followed by peroxidase-conjugated streptavidin. Immunoreactive dots were subsequently visualized using an enhanced chemiluminescence (ECL) system, and membranes were exposed to autoradiograph hyperfilms for 10 to 30 seconds. Preparation of lymphocytes Healthy rats were sacrificed by isoflurane overdose. Popliteal, axillary, and inguinal lymph nodes were dissected and pooled. Our previous flow cytometry analyses had shown that about 95% of cells residing in naïve nodes express the hematopoietic cell marker CD45, 70-80% are CD3+ T lymphocytes, and 20-25% are IgG kappa light chain+ B lymphocytes [21]. Lymphocytes were dissociated from surrounding tissue using 40 μm mesh cell strainers and were cultured in RPMI-1640 medium containing 1% penicillin/streptomycin under standard culturing conditions (37°C and 5% CO₂) unless otherwise stated. Cell transfection and decoy oligonucleotide experiments Double-stranded decoy oligonucleotides (TIBMOLBIOL, Berlin, Germany) providing binding motifs for STAT6 (5′-gATCCTACTTCATggAAgAAT-3′), STAT1/3 (5′-gATCgAgTTTACgAgAACTC-3′), or STAT5 (5′-gATCgCATTTCggAgAAgACg-3′) were prepared as described, without chemical modifications [56].
Cells were diluted in antibiotic-free culture medium containing 10% serum and were plated for transfection on 6-well plates (10 cm² surface area) using the BLOCKiT™ Transfection Kit (Invitrogen GmbH, Karlsruhe, Germany) according to the manufacturer's instructions. The transfection reagent Lipofectamine 2000™ (5 μl/well) was mixed with 50, 100, or 200 pmol decoy oligonucleotide solutions to allow complex formation prior to addition of the mixture to the cells. Cells were then incubated at 37°C and 5% CO₂. A non-target, fluorescein-labeled double-stranded RNA oligomer (BLOCKiT™ fluorescent oligo) was used as an indicator of transfection efficiency. Uptake of the BLOCKiT oligo was observed already at 6 h post transfection and persisted for at least 24 h in the cells, as assessed using a fluorescence microscope. Accordingly, medium was replaced 24 h after transfection by pure RPMI-1640 medium, then cytokine was added and cells were incubated for another 2 h at 37°C and 5% CO₂. Thereafter, cells were collected on ice, centrifuged, and pellets were stored at −80°C until POMC exon 2-3 mRNA was assayed using qRT-PCR. Radioimmunoassay and Enzyme Immunoassay Cellular content of beta-endorphin was determined by measuring immunoreactive beta-endorphin in cell lysates using a rat radioimmunoassay (RIA) kit according to the manufacturer's instructions (Phoenix Peptides Inc., Burlingame, CA, USA) and as previously described [21,57]. Briefly, lymphocytes were lysed by sonication (15 sec, 1 impulse/sec) at a concentration of approximately 3 × 10⁶ cells per 100 μl assay buffer, and beta-endorphin immunoreactivity was determined in 100 μl of the lysates in duplicate. The release of beta-endorphin was determined in cell supernatants using a human/rat fluorescent EIA kit according to the manufacturer's instructions (Phoenix Peptides Inc.), as previously described [7]. Briefly, release was induced by incubation of approximately 6 × 10⁶ cells/120 μl RPMI-1640 medium containing 10 μM ionomycin (Sigma-Aldrich). Cells were then incubated for 7 min at 37°C and 600 rpm in a thermal heating block, chilled on ice, and centrifuged for 10 min at 450 × g and 4°C. Wells of EIA plates were loaded with 50 μl of the supernatants each; beta-endorphin immunoreactivity was assessed in duplicate. Western Blot analysis Western Blotting was performed as previously described [46]. Briefly, cells were sonicated and homogenized in RIPA buffer [50 mM Tris-HCl (pH 8.0); 150 mM NaCl; 1% Nonidet P-40 (v/v); 0.5% deoxycholate (w/v); 0.1% SDS (w/v)] in the presence of protease and phosphatase inhibitors (Complete mini and PhosSTOP tablets, Roche). Proteins (30-50 μg/sample) were subjected to polyacrylamide gel electrophoresis; the gels were composed of an upper stacking and a lower resolving part according to the method of Laemmli [58]. After separation, proteins were transferred at 350 mA/60 min to Immobilon-P membranes (Millipore Corporation, Billerica, MA, USA). Membranes were blocked in Tris-buffered saline containing 2.5% bovine serum albumin and 0.1% Tween-20 for at least 30 min at room temperature. After blocking, blots were sequentially probed with the following polyclonal rabbit antibodies overnight at 4°C: anti-phospho-STAT1, anti-p38 (38 kDa). All antibodies were diluted 1/1000 in blocking buffer and purchased from Cell Signaling Technologies (Danvers, Massachusetts, USA).
After incubation with peroxidase-conjugated secondary antibodies (goat anti-rabbit IgG and rabbit anti-mouse IgG, purchased from Jackson ImmunoResearch Europe Ltd., Suffolk, UK) diluted 1/5000 in blocking buffer, immunoreactive bands were visualized using an ECL system. Peroxidase-conjugated anti-beta-actin (42 kDa) was purchased from Sigma-Aldrich and diluted 1:50,000 in blocking buffer; this antibody renders the use of a secondary antibody before overlay of the blot with ECL solution unnecessary. Exposure time of the blots to autoradiograph hyperfilms was 10 to 120 s. Bound antibodies were removed by stripping for 15 min at 50°C in 62.5 mM Tris-HCl containing 100 mM beta-mercaptoethanol and 2% SDS. Controls included reprobing upon omission of primary or secondary antibodies. PCR primers and quantitative real-time PCR (qRT-PCR) Primers were designed to amplify POMC and ribosomal protein L19 mRNA transcripts using OLIGO Primer Analysis Software Version 5.0 for Windows. Oligodeoxynucleotides were synthesized and purified by TIBMOLBIOL. Real-time PCR assays were performed using the FastStart DNA Master SYBR Green I assay (Roche) according to the instructions of the manufacturer in a LightCycler 1.5 instrument, including melting curve analyses. Positive controls contained pituitary cDNA; negative controls contained double-distilled H₂O or RT⁻ cDNA. Amplification was performed as detailed in Sitte et al. [21]; all samples except the positive and negative controls were run in duplicate. For some measurements, sensitivity for POMC mRNA amplification was enhanced using a semi-nested real-time PCR protocol as previously described [21]. The amount of POMC exon 2-3 transcripts (sense primer: 5′-CCCTCCTGCTTCAGACCTCCA-3′, antisense primer: 5′-TCTCTTCCTCCGCACGCCTCT-3′) was normalized to the expression levels of ribosomal protein L19 using exon 2-5 spanning primers (sense primer: 5′-AATCGCCAATGCCAACTCTCG-3′, antisense primer: 5′-TGCTCCATGAGAATCCGCTTG-3′); treatment effects were evaluated by applying the delta-delta CP method as detailed below. Immunosuppression of rats and measurement of nociceptive thresholds Rats were handled daily for 4 days. They were treated thrice at intervals of 48 h intraperitoneally (i.p.) with cyclophosphamide (CTX, Baxter Oncology) to induce depletion of immune cells as previously described [49]; a single i.pl. CFA injection into the right hind paw was given 72 h after the first CTX injection. At 96 h post CFA-inoculation, immunosuppressed rats received purified T lymphocytes into inflamed paws (i.pl.); control animals were injected with vehicle (PBS). These T cells were obtained from pooled axillary and inguinal lymph nodes of healthy donor rats as detailed above. Cells were treated for 24 h with/without ConA (1 μg/ml), IL-4 (10 ng/ml), or ConA plus IL-4 ex vivo. Then cell suspensions were depleted of MHC class II receptor+ and CD45RA+ cells (dendritic cells, monocytes/macrophages, and B lymphocytes) using magnetic cell sorting columns with anti-rat MHC class II receptor and anti-rat CD45RA beads (Miltenyi Biotec, Bergisch Gladbach, Germany), similar to Sitte et al. 2007 [21]. This procedure yielded > 95% pure T cell suspensions that were reconstituted at 1×10⁵, 5×10⁵, and 10×10⁵ cells per 50 μl PBS for i.pl. injections. In the first set of experiments, animals received i.pl. CRF (1.5 ng/50 μl) to induce opioid peptide release 10 min after i.pl. T cell administration.
In the second experiment, naloxone-methiodide (NLX; 10 mg/kg) or vehicle (saline) was injected subcutaneously (s.c.) 5 min after i.pl. T cell administration. Another 5 min later the animals received i.pl. CRF (1.5 ng/50 μl). Another group of immunocompetent rats received s.c. NLX or vehicle, followed by i.pl. CRF. Mechanical hyperalgesia was tested by measuring paw pressure thresholds (PPT) (modified Randall-Selitto test; Ugo Basile) as previously described [59]. Measurements were performed immediately before (baseline) and 7 min after T cell transfer, as well as 5 min post CRF injection. Three consecutive trials, separated by 10 s intervals each, were conducted and the average was calculated. The sequence of left and right paws was alternated between animals to avoid bias. The experimenter was blind to the treatment. Data processing POMC and ribosomal protein L19 qRT-PCR data were analyzed using the LightCycler software 3.5 (Roche). Levels of transcripts were assessed as crossing points (CP) when specific amplification exceeded background fluorescence, using the Second Derivative Maximum analysis method of the system. Average PCR efficiencies were 1.89 for rpL19 and 1.82 for POMC exon 2-3 transcripts. All data were subsequently extrapolated using MS Excel 2003. Differences between treated and untreated cells are shown as mean POMC exon 2-3 mRNA ratios ± SEM and were calculated by applying the delta-delta method: Ratio = (PCR efficiency of POMC)^(ΔCP POMC, control − sample) / (PCR efficiency of rpL19)^(ΔCP rpL19, control − sample). Based on this equation, enhanced POMC mRNA expression gives ratios > 1; decreased POMC mRNA expression gives ratios < 1. The cytokine array and western blot hyperfilms were scanned at 400 dpi and inverted for analysis by optical densitometry using ImageJ software 1.37v (Wayne Rasband, National Institute of Mental Health, Bethesda, Maryland, USA). Data are presented as mean % expression of loading control ± SEM after background correction. Statistical analysis All data were analyzed with GraphPad Prism Version 4.01 for Windows (GraphPad Software, USA). Normalized cytokine array data were analyzed using an unpaired t-test with Welch's correction to account for the number of experiments performed. Statistical analysis was performed on normalized CP values (e.g., CP POMC − CP rpL19) in the case of the qRT-PCR data; beta-endorphin immunoreactivity values (cellular content as well as amount in the supernatant, in pg) were normalized to one million cells. Statistical significance with respect to qRT-PCR data was calculated using the non-parametric Wilcoxon signed rank test if two groups were compared. Multiple comparisons of matched qRT-PCR, RIA, EIA, and Western Blot data were performed using the non-parametric Friedman test; post-hoc comparisons were performed by Dunn's test. Behavioral data (average PPT values) were analyzed by Two-Way repeated measures (RM) ANOVA and Bonferroni correction for multiple comparisons. For all tests, statistical significance was considered if P < 0.05.
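As a computational footnote to the delta-delta method above, here is a minimal sketch of the ratio calculation (the efficiencies mirror those reported; function and variable names, and the example CP values, are illustrative):

    def pomc_expression_ratio(cp_pomc_control, cp_pomc_sample,
                              cp_rpl19_control, cp_rpl19_sample,
                              eff_pomc=1.82, eff_rpl19=1.89):
        # Efficiency-corrected delta-delta CP ratio: POMC exon 2-3 expression
        # normalized to rpL19, relative to the unstimulated control.
        # Ratios > 1 indicate enhanced, < 1 decreased, POMC mRNA expression.
        pomc_term = eff_pomc ** (cp_pomc_control - cp_pomc_sample)
        rpl19_term = eff_rpl19 ** (cp_rpl19_control - cp_rpl19_sample)
        return pomc_term / rpl19_term

    # A sample whose POMC signal crosses 1.5 cycles earlier than control,
    # with unchanged rpL19, gives about 1.82**1.5, i.e. roughly 2.5.
    print(pomc_expression_ratio(24.0, 22.5, 18.0, 18.0))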
Scrotal bridge flap reconstructive surgery for extensive penile paraffinoma: steps and outcomes from a single center: a case series

To describe our scrotal bridge flap technique in reconstructive surgery for extensive penile paraffinoma, a debilitating late complication of penile subcutaneous foreign material injection intended to achieve penile augmentation. We reviewed the medical records of 10 patients who underwent reconstructive surgery with the scrotal bridge flap technique for penile paraffinoma at our center between 2016 and 2019. Complete excision of the fibrotic tissue and the overlying skin was performed, and penile resurfacing was achieved by mobilizing the scrotal skin superiorly to wrap around the penile shaft, leaving a skin bridge at the median raphe. All 10 patients successfully underwent scrotal bridge flap penile reconstruction with satisfying results. The mean operation duration was 286.1 min (range 213–363 min). No immediate major complications were observed in any of the patients, and no patient required revision surgery. The scrotal bridge flap technique is a reliable method for reconstructive surgery after the excision of penile paraffinoma.

Background
Penile augmentation by subcutaneous foreign material injection was previously practiced by clinicians and was first described in 1899. However, it was subsequently understood to have highly damaging late complications, hence its absence from medical practice today. Penile paraffinoma is a late complication of attempted penile augmentation. Downey et al. [1] reported 214 cases between 1956 and 2017; the majority of cases were in Korea, Eastern Europe, and Southeast Asia. However, Svensøy et al. [2] have recently shown that it is more prevalent in certain areas than previously thought, with 680 patients treated at a single center in Thailand between 2010 and 2014. Here, we describe the scrotal bridge flap penile resurfacing technique used at our center, to share our experience of treating this debilitating condition.

Patients
Between April 2016 and September 2019, a total of 10 patients underwent scrotal bridge flap reconstructive surgery for penile paraffinoma. All patients treated were Malays. The mean age at which the patients started foreign material injection was 26.5 years (range 15–46 years), and the mean time interval between the commencement of injections and presentation was 5.1 years (range 1–15 years). The most common symptoms were painful erection (10 patients), penile swelling with an inability to perform penetration (six patients), ulceration/infection (four patients), and difficulties with urination (three patients). All patients had extensive penile paraffinoma, with swelling involving the penile shaft and part of the scrotum and suprapubic region. Four patients had ulcers with discharge, which were treated with antibiotics and daily dressings. Penile paraffinoma is a clinical diagnosis in patients with a history of subcutaneous penile foreign material injection; imaging or biopsy is unnecessary for diagnosis in straightforward cases [1]. We performed ultrasound imaging in one patient who denied any foreign material injection, and biopsy in two patients with chronic non-healing ulcers on top of the paraffinoma to exclude squamous cell carcinoma; however, the results of the investigative imaging and biopsy had no impact on our management approach. All operations were performed as elective cases, and all patients were administered antibiotics prior to the operation and for 1 week postoperatively.
All operations were performed by a single surgeon. Written consent was obtained from patients who had their photographs taken.

Operative technique
Patients were placed supine under general anesthesia. Adequate cleaning and draping were performed, exposing the target area in the usual manner, and a Foley catheter was inserted. The operation was divided into two stages: in the first stage, excision of all associated fibrotic tissue was performed, and in the second stage, penile resurfacing with a scrotal bridge flap was performed. Excision began with a circumferential skin incision just proximal to the corona and proceeded with the careful excision of fibrotic tissue together with the overlying skin. In cases where this was impossible because of extremely dense fibrosis, safe excision was instead meticulously performed by approaching from any area with a clear plane. The outcome was a completely denuded penile shaft down to the penile base (Fig. 1A, B). Subsequently, we fully stretched the scrotal skin using stay sutures to avoid any redundant neo-penile skin. The flap was marked such that its height corresponded to the penile length and the widths of its superior and inferior parts corresponded to the distal and proximal penile circumference, respectively. A careful skin incision was performed with a 1 cm scrotal bridge assigned at the inferior part of the median raphe (Fig. 1C). The flap, including the dartos fascia, was elevated from the underlying tunica vaginalis, thereby revealing the tunica-covered testes, which were later embedded within a newly created scrotal pouch. The scrotal flap was subsequently mobilized superiorly to wrap around the penile shaft and sutured dorsally using absorbable 3/0 sutures. The proximal- and distal-end sutures were tension-free, especially proximally, to reduce the degree of postoperative neo-penile skin edema (Fig. 1D–G).

Outcome
The mean operation duration was 286.1 min (range 213–363 min). No immediate major complications were observed in any patient, and no patient required immediate revision surgery. Almost all patients had neo-penile skin edema (nine patients); seven patients had limited superficial skin necrosis, four patients developed surgical site infection, one patient experienced superficial wound breakdown, and one patient developed a hematoma. The mean length of hospital stay was 8.8 days (range 3–20 days). One patient experienced a cerebrovascular accident approximately 1 month postoperatively and developed erectile dysfunction; otherwise, no patients reported de novo erectile dysfunction. Two patients reported redundant neo-penile skin; however, only one patient consented to a second operation, 4 months postoperatively, to remove the redundant neo-penile skin. The mean follow-up duration was 4.5 months (range 1–12 months).

Conclusions
Patients with penile paraffinoma may present with various symptoms, most commonly penile pain that is chronic, intermittent, or occurs during erection. They may similarly present with penile deformity, infection, phimosis or paraphimosis, voiding complaints, gangrene, and, although rarely, squamous cell carcinoma [1,2]. The granulomatous reaction caused by the foreign material can be localized to part of the penis or extensively involve the entire penile shaft, suprapubic and scrotal areas.
Once complications occur, treatment involves the complete excision of the foreign material alongside the overlying skin, owing to the recurrent nature of the disease should any residual foreign material be left behind. Simple excision and primary suturing are adequate in limited instances of the disease; however, in cases of extensive penile paraffinoma involving the entire penile shaft, with or without extension to the suprapubic region or scrotum, treatment is achieved by radical excision of the fibrotic tissue, alongside the overlying skin, and penile resurfacing to cover the skin defect [1,2]. Penile resurfacing is performed either with a graft or with a scrotal flap. Scrotal flaps have the advantages of being readily available, relatively extensile, and of a color similar to the penile skin, compared with a skin graft. However, they have disadvantages, such as being unsuitable for patients with a very small, underdeveloped scrotum, the presence of suture lines at the ventral and dorsal sides of the penile shaft, and the problem of hair growth on the neo-penile skin. The use of a bilateral scrotal flap, and its variations, is the most commonly reported technique for scrotal flap surgery [3-5]. The bilateral scrotal flap is so called because it uses two scrotal flaps, taken lateral to the penile shaft and inferiorly toward the median raphe; each flap is raised superiorly and wrapped around the penile shaft, with two T-style anastomoses at the distal end of the penile shaft, ventrally and dorsally. We believe that in cases of extensive penile paraffinoma, especially when it involves the entire penile shaft and part of the scrotum or the suprapubic area, using our technique would provide a healthier scrotal flap. These 10 cases form a slightly larger series than the four cases previously described by Salauddin et al. [6]; otherwise, to the best of our knowledge, this technique had never been previously described in detail or published in any other literature. The outcomes of these cases are encouraging, with no cases of full-thickness flap necrosis requiring revision. Seven patients had limited superficial skin necrosis, which was treated conservatively. We identified four patients with surgical site infection, three of whom initially presented with penile ulceration and infection and had received prior treatment; this suggests that in cases of infected penile paraffinoma, the foreign material itself could still harbor infective microorganisms, even after the infection has clinically resolved. Two patients had redundant neo-penile skin, stressing the need for meticulous measurement and stretching of the scrotal skin during flap elevation. In our series, we did not assess the degree of hair growth on the neo-penile skin; remarkably, none of the 10 patients reported any discomfort or dissatisfaction due to hair growth. In our opinion, this could be because Malays generally do not have significant scrotal hair. A limitation of this case series is the short duration of follow-up, with only two patients being followed up until 12 months after the operation. In conclusion, the scrotal bridge flap technique is a reliable method for reconstructive surgery after the excision of penile paraffinoma. No immediate major complications were observed in any patients, and no patient required immediate revision surgery.
Tissue-specific expression of young small-scale duplications in human central nervous system regions

Gene duplication has generated new biological functions during evolution that have contributed to the increase in tissue complexity. Several comparative genomics and large-scale transcriptional studies performed across different organs have observed that paralogs, and particularly small-scale duplications (SSD), tend to be more tissue-specifically expressed than other gene categories. However, a major involvement of whole-genome duplications (WGD) has also been suggested in the emergence of tissue-specific expression features in the brain. Our work complements these previous studies by exploring intra-organ expression properties of paralogs through multiple territories of the human central nervous system (CNS), using transcriptome data generated by the Genotype-Tissue Expression (GTEx) consortium. Interestingly, we show that paralogs, and especially those originating from young SSDs (ySSD), are significantly implicated in tissue-specific expression between CNS territories. Our analysis of the co-expression of gene families across human CNS tissues also allows the detection of tissue-specific ySSD duplicates expressed in the same tissue. Moreover, we uncover the distinct effect of young duplication age, in addition to the SSD type effect, on the tissue-specific expression of ySSDs within the CNS. Overall, our study suggests a major involvement of ySSDs in the differentiation of human CNS territories and shows the added value of exploring tissue-specific expression at both the inter- and intra-organ levels.

The divergence of spatial expression between paralogs can be approached by the study of gene tissue-specificity, which indicates whether a gene has a broad or narrow expression pattern across a collection of tissues (Zhang 2003; Freilich et al. 2006; Lan and Pritchard 2016). The comparison of transcriptomes between different mouse organs has shown that the brain is the organ expressing the highest proportion of tissue-specific paralogs relative to the total number of genes expressed in the brain, while it does not express the highest proportion of tissue-specific singletons (Freilich et al. 2006). The brain is therefore a model perfectly suited to the detailed exploration of the transcriptional properties of duplicated genes. Among the 60% of human genes considered as paralogs, some come from whole-genome duplications (WGD) in the early vertebrate lineage approximately 500 million years ago (McLysaght et al. 2002; Nakatani et al. 2007); the others come from small-scale duplications (SSD) that have occurred throughout evolution (Hakes et al. 2007). A comparison in mammals, notably in human, of the brain transcriptome with those of other organs has shown that WGDs tend to be enriched in brain-specific genes compared to SSDs (Satake et al. 2012; Guschanski et al. 2017; Roux et al. 2017). This supports the theory that genome duplications have allowed vertebrates to develop more complex cellular organizations, such as the different brain tissues (Holland 2009; Chen et al. 2011). Complementing the role of WGDs in tissue complexity, some theories support the idea that young duplicated genes tend to be preferentially expressed in evolutionarily young tissues (Domazet-Lošo and Tautz 2010).
Moreover, a higher proportion of primate-specific paralogs were found to be upregulated in the developing human brain compared to the adult brain, whereas this expression pattern was not found for older duplications. Regarding recent duplications that emerged in the human lineage, studies have suggested their contribution to human-specific adaptive traits, such as the gain of brain complexity (Sudmant et al. 2010; Dennis and Eichler 2016; Dennis et al. 2017; Guschanski et al. 2017). While the expression properties of paralogs between different organs, including the brain, have been well studied, we have little knowledge of the expression characteristics of duplicated genes between different regions of the same organ. Large-scale transcriptional profiling of neuroanatomic regions (Melé et al. 2015) now allows us to further investigate paralog expression between the different territories of the human central nervous system (CNS) according to their evolutionary properties. The present study explores in detail the expression patterns of paralogs between the different territories of the human CNS, using the GTEx resource, according to their evolutionary characteristics and gene families. We started by assessing whether duplicated genes were associated with differences in expression between CNS tissues and investigated their tissue-specificity. Secondly, we studied the evolutionary characteristics of tissue-specific paralogs, such as their age and the type of duplication event. We then analyzed the organization of paralogs into families using co-expression, to define co-expressed gene families and study their tissue-specificity and evolutionary characteristics. A better comprehension of the biology of paralogs could also support our understanding of diseases, since disease-associated genes have been found to be over-represented in paralogs compared to singletons (Makino and McLysaght 2010; Dickerson and Robertson 2012; W.-H. Chen et al. 2013), and particularly in WGDs and old SSDs (Singh et al. 2014; Acharya and Ghosh 2016). Thus, we finally explored the association of paralogs with human brain diseases.

1/ Association of paralog expression with CNS differentiation
We considered in our study all human protein-coding genes and the information collected on duplication events, in order to split the gene population into paralogs and singletons (Methods). In a recent landmark contribution, the GTEx (Genotype-Tissue Expression) consortium used RNA sequencing technology to establish the landscape of human gene expression across a large collection of postmortem biopsies (Melé et al. 2015). Gene expression data for hundreds of individuals from 13 normal brain-related tissues (Methods) were obtained from the GTEx consortium. After filtering out low-information-content genes, abundance values of 16,427 protein-coding genes, including 10,335 paralogs and 6,092 singletons, were conserved. Previous work by GTEx established the relevance of using gene expression data to cluster samples obtained from the same tissues, even though assigning samples to the correct CNS region was more difficult than for other organs (Melé et al. 2015). We extended this analysis by focusing specifically on CNS tissues and assessing whether paralog expression could classify samples into tissues better than singletons or all protein-coding genes.
Our unsupervised hierarchical classification of human CNS samples, based on their pair-wise similarity in terms of correlation across gene expression values, was able to group together most samples belonging to the same tissue (Methods; Fig. 1). The choice of color gradients for tissues that anatomically overlap confirmed the ability of gene expression profiles to classify these tissues into neurologically relevant groups. Therefore, from the next result sections onward, we pool together some of the 13 initial tissues that showed similar expression profiles, in order to define a shorter list of 7 CNS regions (Methods) that will be used for the tissue-specificity analysis. The relevance of our experimental classification was evaluated against the expected assignment of samples to the 13 brain-related tissues, using the adjusted Rand index (ARI) (Hubert and Arabie 1985). We observed that, globally, the sample classification based on paralog expression (ARI = 0.197) was slightly better than the classification obtained using all protein-coding genes (ARI = 0.175) or singletons (ARI = 0.182). It should be noted that the quality of a clustering is likely to be influenced by the number of genes used in the analysis; therefore, the better ARI score obtained with the paralogs compared to singletons could be partly due to the higher number of paralogs relative to singletons. However, we also obtained a greater ARI with the paralogs in comparison to the ARI calculated from all protein-coding genes, thus suggesting a particular biological relationship between paralog expression and CNS tissue differentiation.

(Fig. 1 legend) The gene categories considered are: protein-coding genes, singletons and paralogous genes. Each CNS region is represented by a different color; tissues belonging to the same anatomically defined CNS region are shown in the same color: blue for the cerebellum region (cerebellum and cerebellar hemisphere tissues), green for the cortex region (cortex, frontal cortex and anterior cingulate cortex tissues), purple for the basal ganglia region (putamen, nucleus accumbens and caudate tissues), and red for the amygdala-hippocampus region (amygdala and hippocampus tissues). The remaining tissues are considered as independent CNS regions: pink for the hypothalamus region, yellow for the spinal cord region and black for the substantia nigra.

In addition to this clustering analysis, we carried out another assessment by performing differential expression analysis of gene count data between all pairs of CNS tissues (Methods). We obtained a list of significantly differentially expressed genes (DEGs) for each pair of tissues (Supplemental Materials Table S3). By comparing the relative proportion of DEGs in paralogs and singletons, we observed that DEGs were significantly enriched in paralogs for 75 out of the 78 tissue pairs tested (Chi-squared test, threshold p-value = 6.41E-04 with Bonferroni correction to account for the number of tissue pairs). Furthermore, in order to assess the potential bias of expression level in these results, we calculated the overall expression of paralogs averaged over brain-related tissues and found it to be significantly lower than that of singletons (12 versus 37 RPKM, respectively). This observation, which implies less power in the DE tests for the former group, makes the enrichment of the DEGs in paralogs even more reliable.
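As a rough illustration of the clustering evaluation just described, the sketch below clusters samples by correlation of their expression profiles, cuts the tree into 30 clusters, and scores agreement with the known tissue labels via the adjusted Rand index. Array shapes, variable names and the helper function are illustrative, not the authors' code.

```python
# Sketch: compare how well a gene set's expression profiles recover the
# known tissue labels, as in the ARI comparison described above.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score

def ari_for_gene_set(expr, tissue_labels, n_clusters=30):
    """expr: samples x genes matrix of adjusted log2(RPKM+1) values."""
    # Pearson-correlation distance between samples, average linkage
    corr = np.corrcoef(expr)                       # sample-sample correlation
    dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]  # condensed form
    tree = linkage(dist, method="average")
    clusters = fcluster(tree, t=n_clusters, criterion="maxclust")
    return adjusted_rand_score(tissue_labels, clusters)

# usage (index arrays are hypothetical):
# ari_paralogs   = ari_for_gene_set(expr[:, paralog_idx], labels)
# ari_singletons = ari_for_gene_set(expr[:, singleton_idx], labels)
```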
Overall, these complementary analyses on tissue clustering and differential expression illustrate the strong biological contribution of paralogous genes to expression differences between CNS territories.

2/ Tissue-specific expression of paralogs in CNS regions
We further investigated these expression differences of paralogs between CNS territories by looking at their tissue-specificity. The detection of tissue-specific genes was performed using expression profiles quantified across the 7 CNS regions previously defined. From the collection of methods developed to measure tissue specificity, we selected the method based on the Tau score because of its high sensitivity in detecting tissue-specific genes (Yanai et al. 2005; Kryuchkova-Mostacci and Robinson-Rechavi 2017). The Tau score ranges from 0 for broadly expressed genes to 1 for highly tissue-specific genes (Methods). Contrary to the Tau score distributions reported in a previous study on different organs (Kryuchkova-Mostacci and Robinson-Rechavi 2017), the distribution of Tau scores across the CNS regions in the present study was not bimodal and had a unique mode centered on low values (Fig. 2A). Consequently, the Tau threshold for declaring a gene tissue-specific could not be visually defined. We thus developed an approach based on permutations to adapt this threshold choice to the case of similar tissues within a single organ system. We calculated an empirical p-value for each gene, based on permutations of the tissue labels, and then performed a False Discovery Rate (FDR) correction on the p-values for the multiple genes tested (Benjamini-Hochberg corrected p-value < 0.01) (Fig. 2A). This approach led to a Tau threshold of 0.525. We found that 17% (2,829) of protein-coding genes expressed in the CNS regions were tissue-specific (Supplemental Materials Fig. S1). Moreover, we established that paralogs were significantly enriched in tissue-specific genes compared to singletons (19.2% of paralogs were tissue-specific, versus 13.9% of singletons, p-value = 2.045E-18, Chi-squared test) (Table 1). We confirmed this association between paralogs and tissue-specificity, in addition to their expression level, by using a multivariate linear model, inspired by the analyses of Guschanski et al. 2017, that predicts the Tau score of a gene from its maximal expression over the CNS regions and its duplication status (Supplemental Materials Result S1 and Table S16A).

(Table 1 footnotes) a WGD (Whole-Genome Duplication), ySSD (younger SSD, occurring after the WGD events), oSSD (older SSD, occurring before the WGD events) and wSSD (WGD-old SSD, occurring around the WGD events). b Chi-squared tests (or Fisher's exact test when the Chi-squared test could not be applied) with a corrected p-value threshold = 7.14E-03 (Bonferroni correction for 7 statistical tests). c The odds ratio (>1 or <1) indicates the group (tested or non-tested, respectively) in which there is an enrichment. d The paralog reference group includes the genes belonging to the WGD, SSD and WGD-SSD categories and the paralogs without annotation.

Although this method based on the Tau score can identify tissue-specific genes, it does not indicate which CNS region is targeted by this specificity (Yanai et al. 2005). In order to study the regional distribution of tissue-specific genes, we mapped each tissue-specific gene to one CNS region (Supplemental Materials Table S4): for each tissue-specific gene, we considered the anatomical region associated with the highest expression value to be the specific region (Fig. 2B).
We discovered that the distribution of tissue-specific genes across CNS regions was very heterogeneous (Supplemental Materials Table S6), in contrast to an almost constant proportion of expressed genes across these regions (Supplemental Materials Table S5). The highest proportions of tissue-specific genes were found in the cerebellum (40.2%), spinal cord (20.9%) and hypothalamus (16.4%); the remaining tissue-specific genes (22.5%) were scattered over the last four brain-related regions. The distribution of tissue-specific paralogs across CNS territories was also highly heterogeneous and similar to the distribution obtained for all tissue-specific protein-coding genes (Supplemental Materials Table S6). In summary, we found that paralogs were more tissue-specific than other genes and that tissue-specific paralogs were concentrated in a limited number of CNS regions, similarly to the other tissue-specific genes. More precisely, we observed that paralogous status contributed to the tissue-specific property in addition to the expression value.

3/ Evolutionary and genomic properties of tissue-specific paralogs
The date of an SSD can be estimated in relation to the WGD events and attributed to one of three duplication age categories: younger SSD (after the WGD events, ySSD), older SSD (before the WGD events, oSSD) and WGD-old SSD (around the WGD events, wSSD) (Methods) (Singh et al. 2014). Using our collection of paralogs with tissue-specific expression between CNS regions, we performed statistical tests to determine whether they were enriched in particular duplication events (WGD or SSD) or dates of SSDs (oSSD, wSSD and ySSD categories). Genes can undergo both WGD and SSD duplication and can sometimes be retained after each duplication. Unless otherwise stated, when we refer to a duplication type from this point on, we are referring to genes that have been retained after this duplication type only (WGD or SSD), in order to make a clear distinction between the effects of the two duplication types. Of the 10,335 paralogs considered in our study, 5,114 are from WGD, 3,719 from SSD (1,192 from ySSD, 1,260 from wSSD and 1,267 from oSSD) and 1,502 unclassified (966 both WGD-SSD and 536 without annotation). We first observed that, among paralogs, SSD genes were significantly enriched in tissue-specific genes (22.6% of SSDs were tissue-specific versus 17.3% of the other paralogs, p-value = 9.022E-11), while, conversely, WGDs were depleted in tissue-specific genes (Table 1). However, we noticed that WGDs seemed slightly enriched in tissue-specific genes compared to singletons (15.7% of WGDs were tissue-specific versus 14.4% of the singletons, p-value = 4.1E-02). Furthermore, when we performed the same analysis only on the paralogs duplicated around the WGD events (WGDs and wSSDs), the WGD genes were still significantly depleted in tissue-specific genes (15.7% of WGDs were tissue-specific versus 24% of wSSDs, p-value = 5.185E-12) (Table 1). These tests allowed us to conclude that SSD paralogs were enriched in tissue-specific genes, independently of the potential effect of the duplication date on tissue-specificity. In addition to assessing the effect of duplication type, we also tested the association between duplication age categories and tissue-specificity, and found that ySSDs were also enriched in tissue-specific paralogs (28.6% of ySSDs versus 18.0% of the remaining paralogs, p-value = 6.341E-18).
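The enrichment statements above are 2×2 contingency-table tests. The sketch below reconstructs one such test (SSD versus other paralogs) from the reported percentages; the counts are approximate reconstructions for illustration, not the study's exact table.

```python
# Sketch of the 2x2 enrichment tests used above, e.g. "are SSD genes
# enriched in tissue-specific genes relative to other paralogs?".
# Counts below are back-of-the-envelope reconstructions from the
# reported percentages (22.6% of 3,719 SSDs; 17.3% of 6,616 others).
from scipy.stats import chi2_contingency, fisher_exact

#                tissue-specific   not tissue-specific
table = [[840,             2879],   # SSD paralogs
         [1144,            5472]]   # other paralogs

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # used when chi2 not applicable
print(f"chi2={chi2:.1f}, p={p:.3g}, OR={odds_ratio:.2f}")
# OR > 1 indicates enrichment in the tested (first-row) group, matching
# the odds-ratio convention of the Table 1 footnotes.
```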
Moreover, ySSDs were still enriched in tissue-specific paralogs when we performed the analysis on SSD paralogs only (28.6% of ySSDs versus 19.8% of the remaining SSDs, p-value = 3.483E-09). On the other hand, oSSDs were depleted in tissue-specific genes compared to other SSD paralogs (15.6% of oSSDs versus 26.2% of the remaining SSDs, p-value = 2.729E-13) (Table 1). We confirmed the contribution of both duplication age and duplication type to the tissue-specificity of paralogs, independently of the effect of their maximal expression level, using multivariate linear models (Supplemental Materials Result S1 and Tables S16C, D). In summary, we could conclude that ySSD genes were more tissue-specific than other paralogs, probably due to both their SSD origin and their duplication age. To refine the association between duplication age and tissue-specificity, we performed enrichment analyses using a short list of paralogs that came from human-specific duplication events (Methods) (Dennis et al. 2017) and found no significant associations (Supplemental Materials Table S19). However, the statistical test leading to this result may be underpowered because of the small number of genes and the uncertainty in abundance estimation for recent paralogs with high sequence identity (Dougherty et al. 2018). To obtain a complementary view of this tissue-specificity loss for very recent duplications, we examined the distribution of the Tau scores of paralogs according to their phyletic age (Supplemental Materials Fig. S4). We found that the maximum Tau scores were obtained for genes with phyletic ages around 0.12, which corresponds in most cases to ySSD duplication events that occurred around the separation of the Simian clade (Ensembl Compara GRCh37 p.13). This result seems to indicate that tissue-specific expression is not a property particularly associated with human-specific duplications, even though it seems to increase for slightly older ySSDs and to decrease afterwards. In summary, we found that SSD genes, and in particular ySSD genes, were more often tissue-specific than other paralogs, due to their duplication origin and to the age of ySSD genes.

4/ Tissue-specificity analysis of co-expressed gene families
We previously found that paralogs, and especially SSDs and ySSDs, were involved in territorial expression between the different CNS regions, notably through tissue-specificity. In this section, we sought to determine whether paralogs within gene families tend to share the same tissue-specificity across CNS regions. We studied the potential expression similarity between paralogs across CNS regions by using a co-expression analysis without a priori knowledge of their tissue-specificity. The study of co-expression allowed us to explore a higher level of organization of the paralogs, into groups of genes with coordinated expression across CNS tissues, and to compare these modules of co-expressed paralogs against annotated gene families. The Weighted Gene Correlation Network Analysis (WGCNA) methodology was used to infer the correlation-based co-expression network. Contrary to previous studies that inferred one network per tissue and then compared modules between networks (Oldham et al. 2008; Pierson et al. 2015), in this study we carried out co-expression network inference by simultaneously using all the 13 CNS tissue samples profiled by the GTEx consortium, in order to explore gene associations with tissue differentiation.
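The study used the WGCNA R package itself; as a rough illustration of the underlying idea only (soft-thresholded correlation followed by hierarchical clustering into modules, omitting WGCNA's topological-overlap step and dynamic tree cutting), a Python sketch could look like this. All parameters are illustrative.

```python
# Toy illustration of the co-expression module idea behind WGCNA (NOT
# the actual WGCNA pipeline used in the study): soft-threshold the
# gene-gene correlations, turn them into a distance, and cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def coexpression_modules(expr, beta=6, n_modules=50):
    """expr: samples x genes (adjusted log2 RPKM). Returns module labels.
    The study tuned WGCNA to yield many small modules (932 in total)."""
    corr = np.corrcoef(expr.T)                 # gene-gene correlation
    adjacency = np.abs(corr) ** beta           # soft thresholding
    dist = 1.0 - adjacency                     # similarity -> distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")
```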
We optimized the WGCNA parameters to generate highly correlated co-expression modules of small size, in order to compare them with the annotated gene families (Supplemental Materials Fig. S2; Methods). Indeed, out of our 3,487 gene families, 1,644 (47%) consisted of only two genes. Our WGCNA analysis extracted 932 modules of co-expressed paralogous genes; only 104 genes were not included in a co-expression module. The module size ranged from 2 to 911 genes, with 84% of modules being small (fewer than 10 genes) (Supplemental Materials Table S7). A high proportion of modules were enriched in molecular function and biological process GO terms, indicating that our network inference approach captured shared biological functions among co-expressed paralogs (Supplemental Materials Result S4). To first check the relationship between co-expression and shared tissue-specificity, we analyzed the distribution of tissue-specific genes across the 932 modules of co-expressed paralogs and found that 177 modules included at least two tissue-specific genes. We then looked at whether, within each of these modules, the tissue-specific genes were expressed in the same or in different regions. We found that among these 177 modules, 66% and 92% consisted of tissue-specific genes associated with, respectively, the same region or at most two different regions (Supplemental Materials Table S15). Therefore, gene modules identified from correlation-based co-expression networks also capture shared tissue-specificity. This co-expression network analysis allowed us to classify the gene families into two categories, homogeneous and heterogeneous gene families, based on their patterns of expression across CNS tissues (Methods). A homogeneous gene family was defined by the property that the majority of its member genes were included in the same co-expression module. Out of the 3,487 gene families considered in this study, we identified 111 homogeneous families (with 257 co-expressed paralogs out of a total of 300 expressed paralogs in these families; the remaining 43 non-co-expressed paralogs were removed from all tests on homogeneous-family genes in the rest of the article) and thus 3,376 heterogeneous families (10,035 paralogs) (Supplemental Materials Tables S13 and S14). We showed by a permutation approach that this number of homogeneous families was significantly large, with an empirical p-value lower than 10⁻³ (Methods), suggesting that paralogs were more co-expressed across tissues when they came from the same family. A comparison of the average family size between the two categories showed that homogeneous families were significantly smaller than heterogeneous ones (Welch's test, average size of homogeneous families = 2.89, average size of heterogeneous families = 3.84, p-value = 8.278E-10). A total of 53 of these homogeneous families were completely included in the same co-expression module. Furthermore, some modules were found to comprise several homogeneous gene families (Supplemental Materials Table S9). A biological pathway enrichment analysis of the homogeneous-family genes revealed that they were notably enriched in transcription factors and signaling proteins involved in neural development (Supplemental Materials Result S6 and Table S10).
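A compact sketch of the family classification rule and the label-permutation check described above; the >60% threshold over total family size is taken from the Methods section, while the data structures and names are illustrative.

```python
# Sketch of the homogeneous/heterogeneous family rule and permutation
# check. 'families' maps a family id to all its member genes (total
# family size, whether expressed or not); 'module_of' maps each
# expressed paralog to its co-expression module label.
from collections import Counter
import numpy as np

def is_homogeneous(member_genes, module_of, threshold=0.6):
    modules = [module_of[g] for g in member_genes if g in module_of]
    if not modules:
        return False
    _, count = Counter(modules).most_common(1)[0]
    # percentage computed over the *total* family size, as in Methods
    return count / len(member_genes) > threshold

def n_homogeneous(families, module_of):
    return sum(is_homogeneous(genes, module_of)
               for genes in families.values())

def empirical_pvalue(families, module_of, n_perm=1000, seed=0):
    """Permute module labels across paralogs and count how often the
    number of falsely homogeneous families reaches the observed one."""
    rng = np.random.default_rng(seed)
    observed = n_homogeneous(families, module_of)
    genes = list(module_of)
    labels = np.array([module_of[g] for g in genes])
    hits = 0
    for _ in range(n_perm):
        shuffled = dict(zip(genes, rng.permutation(labels)))
        hits += n_homogeneous(families, shuffled) >= observed
    return hits / n_perm  # 0 hits -> report p < 1/n_perm
```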
Before looking at shared tissue-specificity within homogeneous families, we investigated the association of tissue-specificity with these co-expressed families and observed a significant enrichment of tissue-specific paralogs among genes from homogeneous families (4.7% of tissue-specific paralogs versus 2% of the other paralogs, p-value = 5.374E-12) (Table 2). We then investigated the link between shared tissue-specificity and homogeneous gene families by categorizing families according to their tissue-specificity, following the classification defined by Guschanski et al. 2017: families composed of a majority of genes tissue-specific to the same regions were classified as tissue-specific families. We identified 58 tissue-specific families and found a significant enrichment of tissue-specific families in homogeneous families (45% of tissue-specific families versus 2.5% of other families, p-value = 1.691E-69) (Table 2).

(Table 2 footnote) e Genes included in tissue-specific families; only genes specific to the major tissue are considered.

We then studied whether homogeneous families were associated with a type of duplication event or with a duplication age. We found that SSD and ySSD genes were both enriched among genes from homogeneous families (3.3% of SSD versus 2.1% of the other paralogs, p-value = 2.777E-04; 5.2% of ySSD versus 2.1% of the other paralogs, p-value = 5.758E-10) (Table 2). We also found a significant enrichment of human-specific genes in homogeneous families, using ySSD genes as the reference group, suggesting that recent ySSDs tend to be more co-expressed than the other ySSDs (p-value = 3.868E-04, OR = 19.58) (Table 2; Supplemental Materials Result S7). Similarly, SSD and ySSD genes were significantly enriched among genes from tissue-specific families (Supplemental Materials Table S17). Finally, we also analyzed the shared tissue-specificity of SSDs and ySSDs at the pair level, but the very low number of tissue-specific paralog pairs did not allow us to obtain significant results (Supplemental Materials Result S2). It can be expected that co-expression between two duplicates in a paralog pair will be associated with their proximity on the genome, as epigenetic co-regulation of gene expression partly depends on the proximity between genes on the genome (Supplemental Materials Table S21); this supports the idea that paralog co-expression is favored by proximity along the genome. Moreover, we confirmed that the genomic proximity of duplicates was associated with recent SSDs and that the younger the SSD pair, the more often the duplicates were found in tandem in the genome (Supplemental Materials Result S5). Tandem duplication explains why SSDs, and especially ySSDs, tend to be more co-expressed and to share the same tissue-specificity within their family more often than other paralogs. In summary, the gene co-expression network analysis performed on the CNS tissues allowed us to find that when several tissue-specific genes were clustered in the same co-expression module, they were often expressed in the same CNS region or the same pair of regions. We showed that within gene families, the shared tissue-specificity of paralogs was associated with their co-expression across tissues, and we classified gene families into two categories according to their co-expression status. Homogeneous families were enriched in paralog pairs located close together on the genome in tandem duplication, probably due to the specific tendency of SSD pairs to be duplicated in tandem.
Indeed, these homogeneous families were enriched in SSDs, especially in ySSDs, and were associated with a shared tissue-specificity.

5/ Exploration of brain disorder-associated genes
In addition to paralog implication in tissue-specific gene expression, another factor contributing to the importance of a gene is its potential association with disease. Indeed, disease-associated mutations preferentially accumulate in paralogous genes rather than singletons (Dickerson and Robertson 2012). Regarding duplication categories, it has been reported that the proportions of both Mendelian (monogenic) and complex (polygenic) disease genes are enriched in WGD genes in comparison to non-disease genes (W.-H. Chen et al. 2013). We decided to refine these analyses by considering only the genes associated with brain diseases. We therefore used the ClinVar database to collect a list of genes that harbored a Single Nucleotide Variant (SNV), or were located within a Copy Number Variant (CNV), related to a brain disorder (Landrum et al. 2016) (Methods). We found that paralogs were enriched in brain disease genes (50.2% of paralogous genes versus 46% of other genes, p-value = 3.740E-07) (Supplemental Materials Table S18). We further focused on paralog categories and observed that, among paralogs, neither WGDs nor SSDs were enriched in brain disease genes (p-value = 0.555), although ySSD genes tended to be very slightly enriched in brain disease genes (53.1% of ySSD genes versus 49.8% of other paralogs, p-value = 3.535E-02). However, brain disease genes tended to be slightly depleted in tissue-specific genes and were enriched neither in genes from homogeneous families nor in human-specific paralogs (Supplemental Materials Table S18). In summary, brain disease genes are enriched in paralogs, but not in WGDs in particular, and the paralogs associated with brain diseases do not seem to be the same ones that we found, in the previous result sections, to be associated with tissue-specificity and co-expressed gene families.

DISCUSSION
As far as we are aware, this study is the first to focus specifically on the spatial expression of paralogs and gene families between the different human CNS territories, based on post-mortem human tissues analyzed by the GTEx consortium. Previous studies based on gene expression analysis between organs had already established the important association between paralogs and tissue differentiation (Freilich et al. 2006; Kryuchkova-Mostacci and Robinson-Rechavi 2016). We showed that paralog expression could separate CNS tissues better than singletons, despite their low expression compared to singletons. Therefore, the relationship between paralogs and tissue differentiation also holds for comparisons between the different anatomical regions of the CNS. Paralogs are known to be more tissue-specific than other genes (Huminiecki and Wolfe 2004; Freilich et al. 2006; Huerta-Cepas and Gabaldón 2011; Guschanski et al. 2017). Among paralogs, SSDs (Satake et al. 2012), and in particular ySSDs (Kryuchkova-Mostacci and Robinson-Rechavi 2016), seem to be more often tissue-specific than other paralogs when comparing tissues from different organs. However, when considering the brain as a whole and comparing it with other organs, it has been found that WGDs tend to be enriched in brain-specific genes compared to SSDs (Satake et al. 2012; Guschanski et al. 2017; Roux et al. 2017).
In our study between the tissues that compose the human CNS, we observed that paralogs, especially ySSDs, were more tissue-specific than other genes. In addition, we found that even wSSDs were enriched in tissue-specific genes compared to other paralogs of the same age (WGDs), thus suggesting that tissue-specificity between brain regions is not only associated with a young duplication age but also with the type of duplication (i.e. with SSD duplications). Our results, although apparently contradictory, do not question the known involvement of WGDs in brain-specific expression. Indeed, the fact that an SSD gene tends to be specific to only one or a few CNS anatomical regions more often than a WGD gene implies that the average expression of SSD genes over the whole brain will be lower than the average expression of WGDs. This broad expression of WGDs within brain regions facilitates the detection of their brain-specific expression when comparing several organs, while gene expression analysis between organs may not promote the detection of some ySSDs specific to the human brain. A previous study performed using gene expression profiles across mammalian organs established that most of the tissue-specificity variance was explained by the expression level, in addition to the duplication status, with no significant contribution of evolutionary time (Guschanski et al. 2017). Using multivariate linear models, we confirmed the major contribution of expression level, and that of duplication status, to tissue-specificity in CNS territories. The association with duplication status was more significant when we considered the maximal expression, which gives a better interpretation of gene abundance when studying tissue-specificity than the average expression. Moreover, among paralogs, we found that the SSD duplication type also explained part of the tissue-specificity variance. Regarding evolutionary time, low phyletic ages were also significantly associated with high tissue-specificity, a property potentially restricted to CNS tissues. Despite this global effect of duplication age, we observed that tissue-specific expression did not seem to be associated with human-specific duplications, but rather with less recent ySSDs. We then studied the gene-family level of organization using gene co-expression network analysis of paralogs across CNS tissues. We showed that modules of co-expressed genes were able to identify clusters of paralogs with the same tissue-specificity. The characterization of gene families according to the level of co-expression of their member genes led to the identification of two categories of families: homogeneous families, composed of a majority of co-expressed genes, and heterogeneous families. We observed that homogeneous families were enriched in ySSD genes (particularly in human-specific genes) and tandem duplicate pairs, in agreement with a previous study showing that pairs of ySSD paralogous genes tend to be duplicated in tandem and co-expressed just after the duplication event (Lan and Pritchard 2016). A previous study established that when the two paralogs of a ySSD pair are tissue-specific, they tend to be specific to the same tissue more often than other paralog pairs (Kryuchkova-Mostacci and Robinson-Rechavi 2016).
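As an illustration of the kind of multivariate linear model invoked here (not the authors' exact model), the sketch below fits an ordinary least squares regression of a gene's Tau score on its maximal expression and its duplication status, on synthetic toy data; all column names and effect sizes are invented.

```python
# A minimal sketch, assuming statsmodels/pandas, of the multivariate
# linear model idea: does duplication status contribute to Tau beyond
# the expression level? Fitted here on synthetic toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_max_expr": rng.uniform(0.0, 10.0, n),  # log2(RPKM+1), max over regions
    "dup_status": rng.choice(["singleton", "WGD", "SSD"], n),
})
# toy Tau: higher for weakly expressed genes, small extra effect for SSDs
df["tau"] = np.clip(0.8 - 0.05 * df["log_max_expr"]
                    + 0.1 * (df["dup_status"] == "SSD")
                    + rng.normal(0.0, 0.1, n), 0.0, 1.0)

model = smf.ols(
    "tau ~ log_max_expr + C(dup_status, Treatment(reference='singleton'))",
    data=df).fit()
print(model.summary())
# a significant dup_status coefficient indicates a contribution of the
# duplication status beyond the expression level, as reported above
```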
We observed that this also holds for the CNS territories, by showing the high co-expression of ySSD pairs and the enrichment of co-expressed families in tissue-specific families, in which the majority of genes were tissue-specific to the same tissue. From the analysis of gene expression across human and mouse organs, Lan and Pritchard 2016 proposed a model for the retention of SSD duplicates appearing in mammals. In this model, pairs of young paralogs are often highly co-expressed, probably because tandem duplicates are co-regulated by shared regulatory regions. In addition, this model is consistent with the dosage-sharing hypothesis, in which down-regulation of the duplicates to match the expression of the ancestral gene is the first step enabling the initial survival of young duplicates (Lan and Pritchard 2016). Our analyses of ySSD expression features between CNS territories seem concordant with this model: ySSDs tend to be organized within small families of co-expressed genes and to be weakly expressed, in agreement with the sharing of the ancestral gene expression. Furthermore, our results in the CNS tissues seem to confirm that, after the initially high co-expression of SSD paralogs just after their duplication, they become more tissue-specific and less co-expressed, in part through chromosomal rearrangement, suggesting long-term survival by sub-/neofunctionalization (Lan and Pritchard 2016). In the case of ySSDs tissue-specific in the same tissue, one of these duplicates might not preserve its coding potential in the long term and would lead to a pseudogene. This does not systematically imply its inactivation; indeed, some transcribed pseudogenes associated with low abundance and high tissue-specificity may carry a regulatory function on their parental genes (Guo et al. 2014; Hezroni et al. 2017). With regard to the relationship between paralogs and human diseases, if we consider all the genes involved in Mendelian or complex genetic diseases, it is known that mutations accumulate preferentially in paralogs compared to singletons (Acharya and Ghosh 2016), potentially linked to their essentiality (Makino et al. 2009; Acharya and Ghosh 2016; Roux et al. 2017). Finally, in the case of SSD paralogs, disease genes are known to be enriched in oSSDs and depleted in ySSDs when compared to non-disease genes (Chen et al. 2014). Our study confirmed that paralogs were enriched in brain disease-associated genes. However, using our list of brain disease genes, we observed no enrichment in the WGD or SSD duplication types. In conclusion, our intra-organ exploration of paralogs suggests a major implication of young SSDs in tissue-specific expression between the different human CNS territories. It will be relevant to explore the expression patterns of these young SSDs between anatomic regions of other complex organs, to determine whether or not they are solely associated with the nervous system.

METHODS
Human genes, duplication events and families
A first list of paralogs, with phyletic ages, was obtained from Chen and co-workers, who dated the duplication events that happened on the species tree down to the human leaf node and assigned the associated duplicates accordingly (Chen et al. 2012; W.-H. Chen et al. 2013). A second list of 20,415 genes was extracted from Singh et al. 2014. This gene ID list was converted to HGNC gene symbols and intersected with the first list in order to annotate it (17,805 protein-coding genes in common). Thus, in the present study, we collected the duplication category for each paralog (Singh et al. 2014). Singh et al. obtained WGD annotations from
(Tinti et al. 2012) and obtained their SSD annotations by running an all-against-all BLASTp on human proteins (Singh et al. 2012). Singh and co-workers defined genes as singletons if they were not classified as WGDs or SSDs, and they obtained the duplication age for SSD genes from Ensembl Compara (Vilella et al. 2009). They classified paralogs into the following categories: WGD, SSD, ySSD (i.e. SSD with a duplication date younger than the WGD), oSSD (i.e. SSD with a duplication date older than the WGD) and wSSD (i.e. SSD with a duplication date around the WGD events). There were 5,390 annotated paralogs originating from the WGD and 4,889 from SSD (2,104 from ySSD, 1,354 from oSSD and 1,431 from wSSD). Moreover, there were 2,607 paralogs without annotations and 1,198 paralogs annotated as both WGD and SSD (WGD-SSD). The WGD-SSD paralogs were not included in the WGD or the SSD duplication categories; however, the unannotated and WGD-SSD paralogs were both included in the paralog group. We verified that these paralog duplication categories were consistent with the phyletic ages (duplication dates) collected from Chen and co-workers (Chen et al. 2012; W.-H. Chen et al. 2013) (Supplemental Materials Fig. S3). The list of our paralogous gene pairs and gene families is given in Supplemental Materials Table S1. The evolutionary annotation of paralogous genes is indicated in Supplemental Materials Table S2. The list of singleton genes is given in Supplemental Materials Table S12. Furthermore, for the analysis of duplicate pairs, we considered only the 3,050 pairs that appeared twice in our paralog list (i.e. where the first paralog is associated with the second paralog and vice versa, and where the duplication category annotation is the same for both paralogs); genomic distances between duplicate pairs were obtained from Ensembl (GRCh37/90). We also obtained a list of paralogous genes generated by human-specific duplication events (Dennis et al. 2017). Of these human-specific duplications, 22 were in our list of paralogs and 8 were among the genes expressed in the CNS.

Gene expression profiles in CNS tissues
We obtained gene counts and RPKM (Reads Per Kilobase Million) values for 63 to 125 individuals (1,259 post-mortem samples; RNA integrity > 6) distributed over 13 CNS tissues (cerebellum, cerebellar hemisphere, cortex, frontal cortex, anterior cingulate cortex, hypothalamus, hippocampus, spinal cord, amygdala, putamen, caudate, nucleus accumbens and substantia nigra) from the GTEx consortium data release 6 (GRCh37) (Melé et al. 2015). The CNS tissue associated with each GTEx patient sample used in our study is indicated in Supplemental Materials Table S11. These gene expression data, calculated by GTEx, took into account only uniquely mapped reads (https://gtexportal.org). After filtering out low-information-content genes (genes with null variance across samples and weakly expressed genes, with mean expression per tissue lower than 0.1 RPKM for all tissues), we kept for analysis a total of 16,427 genes: 10,335 paralogs (5,114 WGD and 3,719 SSD, of which 1,192 ySSD, 1,260 wSSD and 1,267 oSSD, grouped in 3,487 families) and 6,092 singletons. It should be noted that all analyses in the article were performed on this list of expressed genes only, except for the analysis of brain disease genes. Moreover, the WGD-SSD paralogs were not included in the WGD or SSD categories.
However, the unannotated and WGD-SSD paralogs, as well as all other duplication categories, were considered to constitute the paralog group. Gene RPKM values were log-transformed (log2(RPKM + 1)) and adjusted by linear regression for batch effects and various biological effects (platform, age, gender and the first 3 principal components of the genetic data, reflecting the population structure given by the GTEx Consortium); the intercept of the regression was not removed from the residuals, in order to keep the mean differences between genes (https://www.cnrgh.fr/genodata/BRAIN_paralog). These filtered, log-transformed and adjusted RPKM values were used as input for the unsupervised classification of brain tissues, as well as for gene co-expression network inference and for the tissue-specificity analysis. Moreover, gene expression data for tissues considered to anatomically overlap were merged by calculating the average expression value across related tissues prior to the tissue-specificity analysis. Therefore, from the initial list of 13 tissues, we defined a shorter list of 7 CNS regions: cerebellum (cerebellum and cerebellar hemisphere), cortex (cortex, frontal cortex and anterior cingulate cortex), basal ganglia (putamen, nucleus accumbens and caudate), amygdala-hippocampus, hypothalamus, spinal cord and substantia nigra.

Unsupervised clustering of gene expression profiles
Gene expression profiles (filtered and adjusted RPKM values) generated by the GTEx Consortium for the 1,259 samples distributed across the 13 CNS tissues were clustered by unsupervised hierarchical clustering using the pheatmap package of R version 3.4 (similarity measure: Pearson correlation; clustering method: average linkage). We estimated the relevance of the clustering according to the expected groups of CNS tissues. We evaluated, independently, the clusterings generated from protein-coding genes, paralogs and singletons, using the adjusted Rand index (Hubert and Arabie 1985) after cutting the trees (so that we obtained 30 clusters for each gene category).

Differential gene expression analysis
Genes with low information content were removed before differential gene expression (DGE) analysis. DGE analysis was performed with DESeq2 (Love et al. 2014) on count data for each pair of CNS tissues, with the "median ratio" between-sample normalization and using batch and biological effects as covariates. For each tissue pair, we then corrected gene p-values for the number of tested genes using FDR (Benjamini and Hochberg 1995) and obtained a list of significantly differentially expressed genes (DEGs) (FDR < 0.05). Finally, we considered only the DEGs with a log2 fold-change greater than 0.5.

Inference of gene co-expression networks
The gene network inference was carried out using the Weighted Gene Correlation Network Analysis (WGCNA) methodology, which generates co-expression networks and identifies modules (groups) of co-expressed genes. We applied the WGCNA tool only to paralogous gene expression data (RPKM) across the GTEx samples of the 13 CNS tissues. Genes were grouped into modules according to their expression profile similarity. The module named "grey", which grouped genes that were not considered as co-expressed by WGCNA, was composed of genes with very low variability across all samples. Since we had removed the genes with no variance across tissue samples and those that were weakly expressed before performing the WGCNA analysis, the grey module was small (104 genes).
Furthermore, if this filtering had not been performed, some of the genes with an overall weak expression might have been integrated into co-expression modules, thus creating a bias. One of our goals was to compare gene families to co-expression modules. Given that 47% of gene families have a size equal to 2, we optimized the WGCNA parameters to obtain small, highly co-expressed modules (Supplemental Materials Result S3).

Homogeneous and heterogeneous families
Definition. A gene family was defined as homogeneous if the majority, more than 60%, of its member genes were included in the same co-expression module. It should be noted that the total size of the gene family was used to compute this percentage, even if some member genes were not in the list of expressed paralogs. Gene families that did not respect this homogeneity rule, i.e. those with member genes scattered over different co-expression modules, were defined as heterogeneous.
Assessment of the significance of the number of homogeneous families. Starting from the paralog modules obtained with WGCNA, we used a permutation procedure (permuting the module labels of paralogs 1,000 times and counting the number of falsely homogeneous families for each permutation) and were able to conclude that the number of homogeneous families was significantly large: for each permutation, the number of falsely homogeneous families was lower than the number we obtained, leading to an empirical p-value lower than 10⁻³.

Tissue-specificity calculation
Tau score calculation. To select tissue-specific genes, we used the score τ (Yanai et al. 2005) to estimate the degree of tissue-specificity of each gene in our set of CNS tissues:

$$\tau = \frac{\sum_{i=1}^{n}\left(1-\hat{x}_i\right)}{n-1}, \qquad \hat{x}_i = \frac{x_i}{\max_{1 \le j \le n} x_j} \qquad (1)$$

In this equation, x_i is the mean expression of a given gene in tissue i and n is the number of different tissues. τ varies from 0 to 1, where 0 indicates that the gene is broadly expressed and 1 that the gene is tissue-specific. For the computation of τ, genes must have a positive mean expression in every CNS region. Although we log-normalized expression data with log2(RPKM + 1), leading to positive expression values, the correction for batch and some biological effects induced negative values in some gene mean expressions. We replaced the negative values by zeros to keep all protein-coding genes (16,427 genes) for the τ score computation. We pooled the expression data generated by GTEx for the 13 tissues into 7 CNS regions so that the τ score would not decrease artificially for genes specific to several close tissues.
Tau score threshold defined by permutations. The τ score was computed for each gene over the 7 CNS regions. We then plotted the τ score distribution obtained from all protein-coding genes (Fig. 2A). However, there is no general τ score threshold at which a gene is considered to be tissue-specific. To define a tissue-specificity threshold, we implemented a statistical method based on permutations. We applied 1,000 permutations to the region labels assigned to the samples, to shuffle the correspondence between samples and regions. For each permutation, τ scores were recomputed for each gene. The distribution of the 1,000 × 16,427 τ scores obtained from the permutations is given in Figure 2. For each gene and its original τ score, a p-value was then calculated as the proportion of permutation-based τ scores higher than the original τ score. The Benjamini-Hochberg correction for the number of genes tested was applied to all p-values.
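A minimal sketch of the τ score of equation (1) and of the permutation-based p-values described above; array shapes and names are illustrative, and the Benjamini-Hochberg step would be applied downstream.

```python
# Sketch of the Tau tissue-specificity score (Yanai et al. 2005) and
# the label-permutation procedure described above.
import numpy as np

def tau(mean_expr_per_region):
    """Tau from per-region mean expression; negative values (from the
    covariate adjustment) are clipped to zero, as in the Methods.
    0 = broadly expressed, 1 = specific to a single region."""
    x = np.clip(np.asarray(mean_expr_per_region, dtype=float), 0, None)
    if x.max() == 0:
        return 0.0
    x_hat = x / x.max()
    return (1.0 - x_hat).sum() / (len(x) - 1)

def permutation_pvalues(expr, region_labels, n_perm=1000, seed=0):
    """expr: samples x genes; returns one empirical p-value per gene."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(region_labels)
    regions = np.unique(labels)

    def gene_taus(lbl):
        means = np.vstack([expr[lbl == r].mean(axis=0) for r in regions])
        return np.apply_along_axis(tau, 0, means)  # one tau per gene

    observed = gene_taus(labels)
    exceed = np.zeros(expr.shape[1])
    for _ in range(n_perm):
        exceed += gene_taus(rng.permutation(labels)) >= observed
    # proportion of permutation taus at or above the observed tau;
    # Benjamini-Hochberg correction is then applied across genes
    return exceed / n_perm
```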
Genes with a corrected p-value lower than 0.01 were declared tissue-specific, which corresponded to a τ score threshold of 0.525 (Fig. 2A). Visualization of gene profiles across brain regions at different windows of the τ score showed tissue-specificity beyond the τ score threshold of 0.525 (Supplemental Materials Fig. S1). However, even for τ scores in the range [0.5-0.75], some genes were still expressed in two regions. Therefore, for each tissue-specific gene, we considered the CNS region with the highest expression value to be the specific region.

ACKNOWLEDGMENTS

This study received funding from the Université Paris-Sud (support to S.B.J.) and the Fondation pour la Recherche Médicale (support to S.C.). We are grateful to Marc Robinson-Rechavi for his feedback on the methods and results. We thank Steven McGinn and Elizabeth May for English language editing. We also thank Carène Rizzon, Margot Coréa, Olivier Jaillon, François Artiguenave and Morgane Pierre-Jean for constructive discussions.

DISCLOSURE DECLARATION

The authors declare that they have no competing interests.
Preparation and characterization of ³³S samples for ³³S(n,α)³⁰Si cross-section measurements at the n_TOF facility at CERN

Thin ³³S samples for the study of the ³³S(n,α)³⁰Si cross-section at the n_TOF facility at CERN were made by thermal evaporation of ³³S powder onto a dedicated substrate made of kapton covered with thin layers of copper, chromium and titanium. This method has provided, for the first time, bare sulfur samples a few centimeters in diameter. The samples have shown excellent adherence, with no mass loss after a few years and no sublimation in vacuum at room temperature. The determination of the mass thickness of ³³S has been performed by means of Rutherford backscattering spectrometry. The samples have been successfully tested under neutron irradiation.

Introduction

The preparation of thin sulfur samples is a difficult task because sulfur sublimates in vacuum at room temperature, adheres poorly or only for a short time to most solid backings, and is very volatile [1][2][3][4]. These difficulties are compounded by the particular requirements of an accurate study of the ³³S(n,α)³⁰Si cross-section as a function of the neutron energy. The only two experiments aiming at measuring the ³³S(n,α)³⁰Si cross-section in a wide energy range reported different problems with the samples [5,6]. Thin deposits are needed for a low energy loss and a good detection efficiency of the emitted alpha particles; at the same time, the value of the cross-section in some energy ranges is expected to be low, so an adequate number of atoms per cm² is required. On the other hand, the cross-section is expected to be high in the resonance region, but resolving the resonances requires a pulsed neutron beam, which entails a decrease of the neutron flux. With all of this in mind, and given the outstanding characteristics of the Experimental Area 1 (EAR1) of the n_TOF-CERN facility in terms of energy resolution and instantaneous flux, it is also possible to take advantage of a higher number of neutrons by making use of the beam of 8 cm diameter during the so-called fission campaign, when the large collimator is installed [7]. This possibility implies an additional double challenge: the production of large samples and their accurate characterization, with an adequate study of the homogeneity. In spite of these difficulties, some problems have been solved by different authors depending on the requirements of their experiments. Watson developed a technique for making sulfur targets for the purpose of proton-capture studies [1]. The target was a thin layer of Ag₂S, but only 10⁻⁸ at/b were present, which is an order of magnitude lower than the requirement for neutron-capture studies such as those foreseen at n_TOF. The same can be concluded for the ion-implanted S targets produced by different authors, such as Schatz et al. [4]. That kind of target has the additional drawback of small dimensions (2 cm diameter), which in the case of the EAR1 at n_TOF would mean an important waste of neutrons, making this measurement impossible in a reasonable time. Hedemann [2] showed a relatively good adherence of sulfur to formvar foils and made a multi-sandwich target of formvar-carbon-sulfur. However, migration and loss of sulfur were reported. Geerts et al. [3], building on Hedemann's work, produced a ³³S sample using a sandwich of formvar foils. No sulfur losses or migration were reported during an irradiation of the sample with thermal neutrons [3].
As formvar powder is usually diluted in a solution which is classified as dangerous (carcinogenic and toxic), many formvar solutions must be disposed of as hazardous waste. Therefore, we decided to avoid using formvar in our work. All previous methods were based on the evaporation of sulfur in different conditions or on ion implantation. There have also been attempts to prepare sulfur samples by deposition from a well-defined solution [4]. However, this method provided samples with significant inhomogeneities requiring large self-absorption corrections. Regarding uncertainties, the characterization of the samples was carried out by different techniques. In general, an uncertainty of around ±20% on the number of atoms was obtained, but no information was provided on the homogeneity. In Ref. [4] an accuracy better than ±20% was obtained, but the adherence before and after the experiment was not investigated, and long tails in the alpha spectrum towards lower energies were present due to significant inhomogeneities, which entailed a degradation in separating the alpha-induced signals from the background [4]. In this work, we present a method for making large ³³S samples, stable in vacuum and at atmospheric pressure, with no observable mass loss over a period of a few years and without a cover layer. An accurate determination of the number of atoms per cm² is also presented. The method is based on the evaporation in vacuum of ³³S powder onto a dedicated substrate, and the characterization is based on Rutherford backscattering spectrometry (RBS).

Preparation and characterization of ³³S samples

As already pointed out, the production of ³³S samples for neutron-induced cross-section measurements is not straightforward. In addition to the factors mentioned, it was necessary to avoid the use of materials with elements that under neutron irradiation could produce charged particles leading to undesirable signals in the spectra. Also, the substrate must be made of a conductive material in order to use the n_TOF experimental setup based on the MICRO MEsh GAseous Structure (Micromegas) detector [8], taking advantage of its high efficiency, low mass, and high neutron transparency, which permits using several in-beam detectors [9].

Sample coating

According to these requirements, several tests of the adhesion of natural sulfur were carried out at the Vacuum, Surfaces and Coating (VSC) group of the Technology Department at CERN. It was found that sulfur showed good adherence to commercially available copper-plated kapton foils at a moderate deposition temperature (60 °C). A strong bonding of S and Cu could be achieved due to the formation of a stable compound, similar to the case of S and Ag [1]. The production procedure of the final 6 samples is described in the following. A commercially available 50 μm kapton foil with 25 nm Cr and 5 μm Cu served as starting material. The Cu thickness was reduced to decrease possible background from neutron-induced reactions on Cu. To this end, the superficial layer was removed by chemical etching. Then, in a magnetron deposition coating equipment, a 10 nm titanium adhesion layer and a 200 nm Cu layer were deposited without intermediate air exposure. Once the substrate was prepared, the evaporation of ³³S powder, with an enrichment higher than 99% [10], was performed in a glass bell jar of 30 cm in diameter and 35 cm in height. A few milligrams of powder were loaded in a molybdenum boat (Balzers BD 482 056) with a Mo cover.
The central hole of 5 mm in diameter was positioned at 11.5 cm from the substrate. The substrate was heated at 60 °C for 1 h. The chamber was externally heated for the same time and at the same temperature. The pressure was decreased to 6·10⁻⁴ mbar with a rotary vane pump by pumping through a cold trap filled with liquid nitrogen (LN₂). The ³³S powder was completely evaporated by passing a current of 70 A through the Mo boat for 5 min. A collimator (9 cm diameter) was used to fit the dimensions of the n_TOF neutron beam during the EAR1 fission campaign (8 cm diameter) and to avoid possible edge effects. Once the evaporation was finished, the sample and the chamber were kept at 60 °C for 1 h. Fig. 1 shows the substrate before (top) and after (bottom) the evaporation of ³³S. The area in which the ³³S reacted with Cu is clearly noticeable by the dark color of the compound formed between Cu and ³³S. By evaporating ³³S batches of 5 and 15 mg, six samples of different thicknesses were produced.

Rutherford backscattering analysis

The samples were characterized at the 3 MV Tandem Pelletron accelerator of the Centro Nacional de Aceleradores (CNA, Spain). At CNA, an accelerator line is dedicated to different Ion Beam Analysis techniques, in particular RBS [11]. The characterization of the samples was performed with a mono-energetic beam of 3.5 MeV ⁴He⁺⁺. The scattered ⁴He ions were recorded in a Passivated Implanted Planar Silicon (PIPS) detector of 300 mm², positioned at a scattering angle of 165°. For calibration purposes, a reference sample containing 18·10⁻⁹ at/b of Pt deposited over a thick (0.5 mm) Si substrate was used. The sample holder was tilted by 7° with respect to the beam direction to avoid channeling effects. In order to perform absolute RBS measurements, the number of incident α-particles must be precisely known. For this purpose, and to suppress secondary electrons that can produce false current measurements, the sample holder was electrically isolated and kept at a potential of 200 V, thus acting as a Faraday cup. In this way, the α-current was measured directly at the sample, which was in contact with the sample holder. In addition, the sample holder is equipped with an XY stage using stepping motors with a precision of 100 μm. This allowed an accurate positioning of the sample in the beam. The RBS spectra were analyzed using the SIMNRA package [12]. Because of the dimensions of the samples (8 cm diameter) and of the ⁴He⁺⁺ beam spot (3 mm), several points were analyzed for each sample. The samples were scanned from one edge to the other, not in the radial direction, passing through the center. The energy of the ⁴He⁺⁺ beam was selected at 3.5 MeV because the scattering cross-section can then be taken as Rutherford, and a good separation of the alphas backscattered by the different elements in the sample is achieved. Indeed, the energy of the ⁴He ion in the center-of-mass frame (E_CM, in MeV) at which the scattering cross-section deviates by 2% from its Rutherford value, as a function of atomic number (Z), is given by E_CM = 0.041 + 0.232·Z [13]. In the case of ³³S, the corresponding energy in the laboratory system is 4.2 MeV (higher than 3.5 MeV). This energy is higher for heavier atoms. Therefore, in our simulations, we use the Rutherford cross-section for the scattering of ⁴He⁺⁺ on S, Cu, Cr and Ti. For C, N and O, we use the evaluated (SigmaCalc) cross-section data from the IBANDL database, IAEA, 2014 [14].

Fig. 2. RBS spectrum of the substrate measured using a 3.5 MeV ⁴He⁺⁺ beam. The points correspond to the experimental data and the line to the SIMNRA simulation [12]. From higher to lower energy the peaks correspond to Cu, Cr, Ti and the kapton elements; see text for details.
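As a quick numeric check of the rule quoted above, the center-of-mass limit can be converted to the laboratory frame with the standard two-body factor (m_projectile + m_target)/m_target. The short sketch below is our own helper, not part of the paper's analysis; masses are approximate mass numbers.

# Rutherford-validity limit in the lab frame for 4He scattering on Z.
def rutherford_limit_lab(Z, m_target, m_projectile=4.0):
    e_cm = 0.041 + 0.232 * Z                # MeV, center-of-mass (Ref. [13])
    return e_cm * (m_projectile + m_target) / m_target

# 4He on 33S (Z = 16): ~4.2 MeV, comfortably above the 3.5 MeV beam energy.
print(round(rutherford_limit_lab(16, 33.0), 2))   # -> 4.21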
In order to perform an accurate and precise determination of the number of atoms of ³³S, a few points outside the area with sulfur were also analyzed by RBS. This allowed the determination of the number of atoms of the elements present in the substrate, reducing the free parameters of the SIMNRA fit of the experimental data. Fig. 2 shows one of these points, where the experimental RBS spectrum (black points) is compared with the SIMNRA simulation (red line). The biggest peak, from 2500 to 2800 keV, corresponds to Cu, the second in energy to Cr and the third in energy to Ti. Then, below 1200 keV, the different elements present in kapton are detected. The simulation of the area without ³³S provides a very good fit of the experimental data for Cu, Cr and Ti, giving their number of atoms in the substrate. Below 500 keV the signals from the lighter elements of kapton are not perfectly fitted. This is due to multiple scattering effects at low energy, which are difficult to simulate. Different points of the substrate provided, within uncertainty, the same values of the number of atoms per unit area for each element. Fig. 3 shows an RBS spectrum (black points) in comparison with the SIMNRA simulation (red line) of a point with ³³S. Between 2000 and 2200 keV the α-particles backscattered by ³³S are clearly detected, with a good separation from the rest of the elements, allowing the determination of the total number of atoms of ³³S. The SIMNRA simulation provides a very good fit of the ³³S peak. From the comparison between Figs. 2 and 3, other differences can be noticed. The Cr and Ti peaks are not resolved due to the presence of ³³S. This is described by the SIMNRA simulation, and the fit of the Cr-Ti peak remains very good. When sulfur is evaporated, the Cu peak is split, which means that the ³³S reacted with only part of the Cu layer in depth. The part of the Cu peak at higher energies corresponds to Cu that reacted with ³³S, and the rest of the peak corresponds to the Cu that did not react with ³³S. The latter has a higher number of Cu atoms per unit area than the former. This fact is also described by the SIMNRA simulation. In order to estimate the uncertainty, several simulations of each point were carried out. Once the experimental data were fitted with SIMNRA, the same simulation was performed varying the number of atoms of ³³S. The result of this study for each point demonstrated that a ±2-3% difference in the number of ³³S atoms meant that the peak due to α-particles backscattered by ³³S was no longer fitted. Thus, in order to provide a conservative estimation of the accuracy, 3% is considered as the relative uncertainty of the mass. The process of data taking and fitting the experimental data with the SIMNRA code was performed for all the samples.

Fig. 3. RBS spectrum measured using a 3.5 MeV ⁴He⁺⁺ beam for the ³³S samples. The points correspond to the experimental data and the line to the SIMNRA simulation [12]. From higher to lower energy the peaks correspond to Cu, Ti-Cr, ³³S and the elements of the kapton; see text for details.

Therefore, we consider the samples to be homogeneous, with an additional 5% uncertainty in the number of atoms. Fig. 4 shows two RBS spectra of the same sample illustrating its homogeneity.
One spectrum (black points) was obtained in the central area of the sample and the other (red points) at 3 cm from the center.

Performance of the samples under neutron irradiation

The ³³S(n,α)³⁰Si reaction has a Q-value equal to 3493 keV and no threshold [15]. Therefore, α-particles of around 3.4 MeV can be detected under irradiation with low-energy neutrons. In this way, the performance of the samples for future experiments can be tested. At CNA, an accelerator-based neutron source has been developed. In particular, neutron beams practically following a Maxwell-Boltzmann distribution are produced for astrophysics studies [16,17]. The energy spectrum of such beams has its maximum probability at 30 keV. The neutron flux can reach up to 10⁸ n s⁻¹ cm⁻², which is adequate for a test of the samples under neutron irradiation. The details of the neutron production method can be found in [16,17]. Sample 1 was irradiated with a neutron field similar to a Maxwellian at kT = 30 keV, and the emitted α-particles were detected with a setup consisting of three PIPS detectors (500 μm). The distance from the sample to the neutron target was 3 cm, and the distance from the sample to the PIPS detectors was 4 cm. Fig. 5 shows a pulse-height spectrum obtained during the irradiation. The signals between 3.4 and 3.5 MeV correspond to the α-particles produced in the ³³S(n,α)³⁰Si reaction, which shows that the energy of the α-particles is not significantly degraded by the sample. The other signals correspond to electronic noise.

A second test was carried out at the Experimental Area 1 of the n_TOF-CERN facility. One ³³S sample and one sample without ³³S (see top photo in Fig. 1) were set up in the usual configuration of Micromegas detectors at n_TOF [9,18]. The signals detected by the Micromegas as a function of time are shown in Fig. 6. The blue line corresponds to the detector with ³³S and the red line to the detector without ³³S. Both detectors registered very low amplitude signals corresponding to noise and background, forming the so-called baseline. One large negative-amplitude signal is visible in the detector with ³³S, which corresponds to an α-particle. During the test, many signals with large amplitude were detected in the Micromegas with ³³S, while no signals with amplitude larger than the baseline were detected in the detector without ³³S. The test therefore confirms the adequate choice of substrate, given the absence of signals that could contaminate those due to the ³³S(n,α)³⁰Si reaction.

Fig. 6. Snapshot of the signals registered by two Micromegas detectors, one with ³³S (blue line) and one without ³³S (red line). A signal from an α-particle can be clearly seen below the baseline in the detector with ³³S. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Conclusions

Six samples of ³³S were produced at VSC-CERN for ³³S(n,α)³⁰Si cross-section measurements at the n_TOF-CERN facility. The characterization by RBS performed at CNA has allowed an accurate determination (around 9% uncertainty) of the number of ³³S atoms per unit area present in the samples. From 2012 to the present, the samples have been stored most of the time in a clean laboratory in normal air. During the different experiments presented in this work they were kept in high vacuum (10⁻⁶ mbar). Under these conditions the samples have shown excellent stability with no loss of mass.
This was checked by means of several RBS analyses of the samples throughout these years. Therefore, we can conclude that, for the first time, stable bare ³³S samples of large dimensions have been produced for ³³S(n,α)³⁰Si cross-section measurements. The developed method provides homogeneous samples and avoids the sublimation of ³³S in vacuum at room temperature. The tests carried out at CNA and CERN demonstrated the good performance of the samples for future experiments aiming at measuring the ³³S(n,α)³⁰Si cross-section.
A second triclinic polymorph of 6,6′-diethoxy-2,2′-[propane-1,2-diylbis(nitrilomethylidyne)]diphenol

The title Schiff base compound, C21H26N2O4, is a second triclinic polymorph of a previously reported room-temperature structure [Jia (2009). Acta Cryst. E65, o646]. Strong intramolecular O-H⋯N hydrogen bonds generate S(6) ring motifs. Intermolecular C-H⋯O interactions link neighbouring molecules into dimers with an R₂²(16) ring motif. The mean planes of the two benzene rings are almost perpendicular to each other, making a dihedral angle of 88.24 (5)°. An interesting feature of the crystal structure is the intermolecular short C⋯O [3.1878 (13) Å] contact, which is shorter than the sum of the van der Waals radii of the relevant atoms. The crystal structure is further stabilized by intermolecular C-H⋯π and π-π interactions [centroid-centroid distance = 3.7414 (6) Å]. The structure has a stereogenic centre, but the space group is centrosymmetric, so the molecule exists as a racemate.

Comment

Schiff bases are among the most prevalent mixed-donor ligands in the field of coordination chemistry. They play an important role in the development of coordination chemistry related to catalysis and enzymatic reactions, magnetism, and supramolecular architectures (Calligaris & Randaccio, 1987). Structures of Schiff bases derived from substituted benzaldehydes and closely related to the title compound have been reported earlier (Li et al., 2005; Bomfim et al., 2005; Glidewell et al., 2006; Sun et al., 2004; Fun et al., 2008). The molecule of the title compound (Fig. 1) is a potentially tetradentate Schiff base ligand. The bond lengths (Allen et al., 1987) and angles are comparable to those of the earlier room-temperature polymorph published previously (Jia, 2009). Strong intramolecular O-H···N hydrogen bonds generate S(6) ring motifs (Bernstein et al., 1995). Intermolecular C-H···O interactions link neighbouring molecules into dimers with an R₂²(16) ring motif (Bernstein et al., 1995). The mean planes of the two benzene rings are almost perpendicular to each other, making a dihedral angle of 88.24 (5)°. An interesting feature of the crystal structure is the short C18···O2 contact [3.1878 (13) Å; symmetry code: 1 − x, 1 − y, 1 − z], which is shorter than the sum of the van der Waals radii of the relevant atoms. The crystal structure is further stabilized by intermolecular C-H···π and π-π interactions [centroid-to-centroid distance of 3.7414 (6) Å]. The structure has a stereogenic centre, but the space group is centrosymmetric, so the molecule exists as a racemate.

Experimental

The synthetic method has been described earlier (Fun et al., 2008), except that 3-ethoxysalicylaldehyde and 2-methyl-2,3-propanediamine were used as starting materials. Single crystals suitable for X-ray diffraction were obtained by evaporation of an ethanol solution at room temperature.

Refinement

H atoms of the hydroxy groups were positioned by a freely rotating O-H bond and constrained with a fixed distance of 0.84 Å. The rest of the hydrogen atoms were positioned geometrically and refined using a riding model with C-H = 0.95-1.00 Å and Uiso(H) = 1.2 or 1.5 Ueq(C). A rotating-group model was applied for the methyl groups. The data were collected at 100.0 (1) K (… Glazer, 1986).

Fig. 1. The molecular structure of the title compound with atom labels and 50% probability ellipsoids for non-H atoms. Dashed lines indicate intramolecular O-H···N hydrogen bonds.

Geometry

All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.

Refinement details. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
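To make the R-factor definitions above concrete, here is a small numeric illustration. The formulas are the standard crystallographic ones (R based on F, wR based on F²); the data and weights below are toy values, not the refinement output for this structure.

# Conventional R and weighted wR from hypothetical structure factors.
import numpy as np

def r1(F_obs, F_calc):
    """R = sum(||Fo| - |Fc||) / sum(|Fo|), based on F."""
    return np.sum(np.abs(F_obs - F_calc)) / np.sum(np.abs(F_obs))

def wr2(F2_obs, F2_calc, w):
    """wR = sqrt(sum(w (Fo^2 - Fc^2)^2) / sum(w (Fo^2)^2)), based on F^2."""
    return np.sqrt(np.sum(w * (F2_obs - F2_calc) ** 2) / np.sum(w * F2_obs ** 2))

rng = np.random.default_rng(2)
F2_obs = rng.gamma(3.0, 100.0, size=1000)            # toy F^2 observations
F2_calc = F2_obs * rng.normal(1.0, 0.05, size=1000)  # toy model values
w = 1.0 / (1.0 + 0.01 * F2_obs)                      # toy weighting scheme
print(r1(np.sqrt(F2_obs), np.sqrt(F2_calc)), wr2(F2_obs, F2_calc, w))

As the Refinement note states, wR computed on F² typically comes out roughly twice as large as R computed on F for the same data, which this toy example also exhibits.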
Benin EFL Teachers' Beliefs on the Acceptance of Students with Stigma in their Classroom: Case Studies of "Hounyos" and "Woli"

Abstract: Today the concept of inclusion makes all educators responsible for creating a supportive learning environment. In Benin schools, a consensus has been reached to accept students with stigma. This article explores Benin EFL teachers' and students' beliefs on the acceptance of two main categories of students with stigma (Woli and Hounyos). The first category (Hounyos) concerns girls who are not allowed to be dressed on top and who keep scarification on their face and chest. The second category (Woli) are members of the Celestial Church, who keep their hair natural all their life without combing or brushing it (dreadlocks). They are believed to be gifted and able to predict the future. Three public schools and one private school took part in this study. A questionnaire, interviews, and classroom observation were used as instruments in this research. Overall, nineteen students with stigma participated. The results from this research show that Benin EFL teachers adopt different approaches to the integration of these specific categories of students in the classroom. Suggestions are then formulated on how Benin EFL teachers should manage the students concerned.

I. INTRODUCTION

Over the last 30 years there have been major international efforts to encourage inclusive educational developments. Inclusive education differs widely from one country to another. In many developed countries, young people abandon school or are placed in special programs, while in poorer countries children are not able to attend classes for a variety of reasons (Salamanca Statement [1]). As [2] has concluded: 'There appears, however, to be deep uncertainty about how to create inclusive environments within schools and about how to teach inclusively.' In Benin, a West African country, segregated education is on the rise, and more and more learners are far from being placed in a mainstream classroom. Faced with these challenges, there is a need to increase interest in the concept of making education more inclusive and equitable for all. In fact, inclusive education, as originally defined by the Salamanca Statement [1], refers to schooling in which all children, including children with severe disabilities, have access to regular classrooms with the help of adequate support. There are benefits to inclusion: several studies on inclusion exist, and none of them shows negative effects. The benefits of inclusion include a better appreciation and understanding of individual differences and preparation for adult life in a diverse society. In a well-designed inclusive classroom, students meet higher expectations from both their peers and their teachers. They may also see positive academic role models in their classmates. In fact, "Woli" and "Hounyos" students are stigmatized in the classroom by their peers and by the whole community. Hounyos are female idol worshippers, while Woli are the white-garment church prophets.
Hounyos are adepts of vodoun and consult Fâ, while Woli come from the Celestial Church, a personality cult founded by an out-of-work carpenter in Benin (Africa) named Samuel Joseph Bilewu Oschoffa, who claimed that during a solar eclipse, while lost alone in the wilderness, he called out to the Lord in prayer and humble supplication (he was raised Methodist and knew his Bible). [3], in his research on vodoun and Fâ, found that in Benin, vodoun and Fâ are predominantly associated with two ethnic groups, the Fon and the Yoruba. Fâ divination is intrinsically connected to the environment, using materials such as kola and palm oil, leaves, and flour, plus animal sacrifice. Fâ (also known as Ifâ) represents the soul of the people's culture. Fâ is the means of communication with the gods and ancestors, speaking to them through a unique and complex system of 256 symbols, each symbolizing 16 parables and 16 local expressions. The consultation process is complicated. Fâ divination usually involves a divining chain. The diviner throws the chains to the ground, where they turn up in one of the 256 possible formations. Each of these corresponds to a sign, which has a specific name. The sign is determined by the diviner and named aloud. This study attempts to analyze how these students are treated and possible solutions for their acceptance and inclusion in schools. The main purpose of this paper is to promote the inclusion and integration of "Hounyos" and "Woli" in the classroom, and to train EFL teachers so that they do not stigmatize this category of learners.

A. Research Purpose

The main purpose of this research was to promote the inclusion and acceptance of "Hounyos" and "Woli" in the classroom. It is also to train EFL teachers so that they do not stigmatize this category of learners.

B. Theoretical Framework of the Study

This study was based on inclusion and diversity. Social diversity is common in many communities in today's globalized world, and this diversity can be seen in any classroom of learners. Both are important factors to consider in teaching contexts which aim to provide quality and relevant education to all learners. Teachers need to ensure that they design learning experiences that are responsive to learners' individual differences and learning styles. In fact, what is an inclusive school? An inclusive school is one where all students are welcomed, regardless of gender, ethnicity, socioeconomic background, or educational need, to learn, contribute to and take part in all aspects of school life. Inclusion also concerns all students and marginalized groups, not only those with disabilities [4]. This research study took into account the case of "Hounyos" and "Woli". These are their native names in the south of Benin. "Hounyos" are girls who are not allowed to be dressed on top and who keep scarification on their face and chest. The term "Hounyos" means someone who is selected or called by the divinity (vodoun), which stands for "the spirit" in the Fon and Ewe languages, pronounced [vodù] with a nasal high-tone u. Vodoun is practiced by the Aja, Ewe, and Fon peoples of Benin, Togo, Ghana, and Nigeria. "Hounyos" also bear facial and body marks which enable people to recognize them easily. In fact, body marking has been used for centuries in parts of Africa to indicate a person's tribal heritage. Most of the time people want to carry the marks of their ancestors.
Scarification is a form of body marking: the practice of incising the skin with a sharp instrument such as a knife, glass, stone, or coconut shell, in such a way as to control the shape of the scar tissue on various parts of the body. It carries meaning. Explanations of scar-bearing emphasize social, political, and religious roles. Facial and body scarification are used for the identification of ethnic groups, families, and individuals. Scarification is sometimes used to express beauty, as scars were thought to beautify the body. It is also performed on girls to mark stages of life (puberty, marriage) or the idea that a girl belongs to a special divinity. For example, the "Hounyos", an ethnic group in Benin, believe that scarring children, usually on their face, will connect them with their ancestors. The children are given new names, their hair is shaved, and they are taken to a convent where an oracle helps them to communicate with previous generations. Students who are "Woli" can predict people's futures. They wear long braids on their head and are not allowed to wear shoes. The long braid has a meaning: some braids are a symbol of strength and wisdom and reflect their identity. Many of our respondents stated that the braid has a cultural significance, and many felt a connection to the creator, their ancestors, and the earth.

C. Research Questions

Two research questions were established for this study:
1. What views do Benin EFL teachers and students hold about "Hounyos" and "Woli" students?
2. How are EFL teachers trained professionally to meet the needs of students with stigma?

D. Limitation of the Study

This study was exclusively limited to "Hounyos", a term which applies to girls (females) only. "Hounons" are allowed to wear clothes; this category is specific to boys (males). Here attention was focused on those who are half-dressed. Also, this study examined only relationships and associations between variables. Additionally, the present study was limited by the size of the sample selected; a single Woli girl student was involved. Four secondary schools took part: Malanhoui, Davié, Anavié, and Koutongbé in the Ouémé department of Benin.

II. LITERATURE REVIEW

Little research has been done on the inclusion of students with stigma in Benin EFL classes, and consistent literature was not available on the inclusion of Woli and Hounyos students in Benin. This review covers some related research about students with stigma, the definition of the term inclusion, the beliefs of EFL teachers on the integration and acceptance of these categories of students in class, and finally some factors that may impact students' and teachers' attitudes toward inclusion.

A. Learners with Stigma

Stigma involves negative attitudes against someone based on a distinguishing characteristic such as a mental illness, health condition, or disability. Social stigmas can also be related to other characteristics, including gender, sexuality, race, religion (as in the particular case of this research study), and culture. [5] originally defined stigma as a mark or attribute that reduces the person "from a whole and usual person to a tainted, discounted one". According to [6], stigmatization is a social phenomenon leading to the marginalization of a specific member or group of the community. It can lead to discrimination and loss of dignity as a result of prejudices held by other members of society.
In addition, stigma can be viewed as a powerful social process of devaluing people or groups based on a real or perceived difference such as class, race, or behavior. According to [7], stigma is used by dominant groups to create, legitimize, and perpetuate social inequalities and exclusion. Stigma can also lead to discrimination, which is unfair and unjust treatment of an individual based on that socially identified status. The stigma attached to learners with long hair ("Woli") and to "Hounyos" is most of the time a source of great anguish and shame. As a consequence of stigma, some families may not send their child to school because of the belief that the child is retarded, may not achieve as well as his or her classmates, and therefore cannot succeed in life [8]. Examples of religious stigma might include wearing particular head coverings or other religious dress (such as a Jewish yarmulke or a Muslim headscarf) or wearing certain hairstyles or facial hair (such as Rastafarian dreadlocks or Sikh uncut hair and beard).

B. Importance of Inclusion

Students can feel ostracized, that is, excluded by society, in an education system or when learning through a curriculum that is not diverse or inclusive. In fact, inclusion refers to restructuring educational provision to promote 'belonging' [9], i.e., all pupils in a school see themselves as belonging to a community, including those with significant disabilities. As such, inclusion embraces the concept of diversity as a natural state of being human or, in educational terms, of being a learner [10]. An inclusive curriculum helps students see that all walks of life are relevant and important, and that they are in a safe environment where everyone is not only accepted but celebrated. Inclusive education recognizes the right of all children to feel welcomed into a supportive educational environment in their own community. It refers to the capacity of ordinary local schools to respond to the needs of all learners.

C. Promoting Inclusion Practices in the Classroom

Countries around the world have begun to introduce educational practices in accordance with various international and national declarations, among them the Salamanca Statement and Framework for Action [1] and the Dakar Framework for Education for All (2000). Moreover, in 1990, the World Conference on Education for All was held in Jomtien, Thailand. Another conference, held in 2000 in Senegal, gave rise to the Dakar Framework for Education for All, in which the international community promised to ensure education as a right for all people. The UNESCO International Conference on Education was held in Geneva in 2008, and the focal point of this conference was the inclusion of a more diverse range of learners, regardless of capacity or personality, as well as the encouragement of respect for the needs and abilities of learners and the reduction of all forms of inequity [11]. In fact, the greater part of the world's population of children with disabilities lives in developing countries; of a world population of just about 150 million, most live in Africa, Asia, Latin America, the Caribbean and the Middle East [12]. In Africa, with particular reference to South Africa, according to [13] the country supports learners in need in accordance with Section 10 of the Constitution.
Furthermore, the Promotion of Equality and Prevention of Unfair Discrimination Act 4 of 2000 is a core piece of enabling legislation aimed at facilitating the realization of the rights of all people in South Africa, particularly minority groups which have historically been marginalized in the classroom. In addition, there is a lack of research in the domain of Inclusive Education (IE) in Uganda. The 10 commitments can be summarized as follows: the Government of Botswana, through its inclusive education policy, commits to ensuring that all learners, including those who have never been to school before, those who dropped out and those with special needs and/or at risk of failure, will be encouraged and supported to get back to school and access education. The government further commits that vocational training mechanisms will be made relevant and responsive to children's needs and that teachers' skills will be strengthened for the effective teaching of diverse learners. Finally, access to schools will be strengthened through resource intensification that will make school environments user-friendly (e.g., via providing access ramps and paving school grounds) for all learners. [14] conducted a cross-cultural study of teachers' attitudes towards inclusion in a number of countries: the USA, Germany, Israel, Ghana, Taiwan and the Philippines. The results showed that there were differences in attitudes to inclusion between these countries. Teachers in the USA and Germany had the most positive attitudes. Positive attitudes in the USA were attributed to inclusion being widely practiced there as a result of Public Law 94-142.

D. Attitudes of EFL Teachers towards Inclusion

Other studies have indicated that school district staff who are more distant from students, such as administrators and advisers, express more positive attitudes to inclusion than those closer to the classroom context, the class teachers. Headteachers have been found to hold the most positive attitudes to inclusion. Other studies by [8] also examined mainstream and special teachers' perceptions of inclusion through the use of focus group interviews. The majority of the teachers who were not currently participating in inclusive programmes had strong, negative feelings about inclusion and felt that decision makers were out of touch with classroom realities. The teachers identified several factors that would affect the success of inclusion, including class size, inadequate resources, the extent to which all students would benefit from inclusion, and lack of adequate teacher preparation. However, the results were significantly less positive in Ghana, the Philippines, Israel, and Taiwan. The authors reasoned that this could probably be due to limited or nonexistent training for teachers to acquire integration competencies, the limited opportunities for inclusion in some of these countries, and the overall small percentage of children who receive services at all (none of these countries had a history of offering children with SEN specially designed educational opportunities). Currently, there are a number of learners with special educational needs in the regular schools (Government of Botswana, 2017). The success of this initiative depends on a number of factors, one of which is teachers' knowledge and pedagogical skills in teaching learners with special educational needs and their exposure to positive practices of inclusion.
The way student teachers experience knowledge and skill acquisition in university lecture rooms and during teaching practice is an essential part of understanding the potential success of providing effective teaching to learners with special educational needs.

E. Factors Impacting Students' and Teachers' Attitudes toward Inclusion

1) Labeling
Labeling refers to the process of identifying that a student meets eligibility criteria for special education services. It is also a process of creating descriptors to identify persons who differ from the norm. A label or tag is given to anyone who is different, and this can lead to bullying and marginalization in schools. Children change and develop but labels, unfortunately, tend to stick. It can ultimately become difficult for such learners to leave behind negative reputations. For example, in Benin, girls who are half-dressed on top are labeled "Vodounssi" or "Hounyos", and students with long and uncombed hair are labeled "Woli", meaning somebody who has visions and can predict the future. Assigning labels to students in education systems usually brings about negative effects, including stigmatization, peer rejection, lower self-esteem, lower expectations, and limited opportunities.

2) Attitude
An attitude is an expression of a favorable or unfavorable evaluation of a person, place, thing, or event. It is also the belief that one holds towards people and surroundings. Regarding education, students' or teachers' positive attitudes may influence their academic performance. [15] defined the word as a psychological tendency to view a particular object or behavior with a degree of favor or disfavor. [16], in her research on teenagers with albinism, for example, notes that one of the results of the negative attitudes learners with albinism experience in schools is name-calling. In addition, learners with albinism experience the negative attitude of being treated unfairly due to their skin color, and they are called names that demean and humiliate them at school. According to [17], the attitudes of teachers, school administrators and peers affect the inclusion of children with disabilities or who are labeled. Some schoolteachers, including head teachers, believe that they are not obliged to teach children with disabilities or labeled students because they were not trained in special education.

3) Discrimination
Discrimination is the showing of favor, prejudice, or bias for or against a person on any arbitrary grounds, for example on the basis of sex, color, culture, or language [15]. Discrimination can be on the basis of ethnicity, nationality, age, gender, race, economic status, disability or religion. Discrimination against "Woli" and "Hounyos" is based on myths and misconceptions about the origin and the culture of those specific students.

III. RESEARCH PROCEDURE

The survey was carried out in four main public secondary schools. The study involved 520 regular students and 19 students in need. Nine classes were visited. The selection was made on the basis of the presence of students with exceptionalities in each school. The researcher checked and confirmed the students' presence prior to data collection. A questionnaire, interviews, and classroom observation were used as instruments in this research. Permission was granted by the Vice-Principal of each secondary school prior to the school visits.

A. Instruments

Three main instruments were used in order to conduct the study.
The first instrument was the questionnaire. It consisted of reported situational variables. A Likert scale measuring beliefs about inclusion was used, composed of 11 items taken from the Opinions Relative to Mainstreaming (ORM) scale [18], adapted by the researcher to the Benin context; the questionnaire thus consisted of 11 items. The second instrument was the interview and the third was classroom observation. These instruments were used to assess the level of teachers' and students' beliefs on the acceptance of students with special needs in their classroom. Observation was conducted in a classroom setting in nine classes. Attention was focused on students' interaction in class and on the way EFL teachers manage Woli and Hounyos. Each observation lasted 55 minutes in the classroom.

B. Findings and Data Analysis

Data were presented and analyzed; each question was interpreted according to students' responses to the different statements. The objective was to measure students' beliefs about the inclusion and integration of "Woli" and "Hounyos" students. Interview responses from EFL teachers were carefully analyzed, summarized, and reported. The account of the classroom observation was also summarized.

C. Questionnaire Responses

The statistical results for questions 1 and 2 show that 439 students (84.42%) agreed that they felt comfortable working in a group with a Woli student, while 81 students (15.57%) disagreed with the statement. The findings for the second question indicated a close similarity with the responses for question 1. The reasons provided were common to all the learners who disagreed: for them, Woli students dream a lot in class; they are generally absent-minded; from time to time they are tormented by bad spirits; and they resort to interpretation and threats for any problem that may occur in the classroom. For question 3, 449 students out of 520 (86.34%) agreed that they ask questions related to the Woli's and Hounyos' sources; Woli originate from the Celestial Church and Hounyos come from vodoun. 71 students (13.65%) stated that they do not ask them questions about their source. For question 4, respondents recognized the sincerity of those special students, since people confirmed the same source. For question 5, 361 students (69.42%) agreed that they had consulted a Woli in their classroom at least once in their life, while 159 students (30.57%) said they had never consulted a Woli student in their classroom. The reasons for consultation differ from one student to another. The majority of students (n = 277) consulted to learn which chapters would be important to revise for tests. Another group of students (n = 56) consulted to learn more about their future and luck, to get a nice girlfriend in the classroom, or to know whether they would become rich. A third group of students (n = 28) consulted to defeat evil. Satisfaction after the consultation also differed from one group of students to another: 130 students (25%) claimed they were satisfied with the Woli's consultation, 164 students (31.53%) said they were sometimes satisfied, and 221 students (43.6%) were not satisfied. Below is an illustration of the degree of satisfaction with the Woli's consultation. The consultation is not free: 98% said that they pay money in exchange, while 2% declared they do not pay. The reasons may vary.
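For transparency, the reported proportions can be re-derived from the raw counts. The short script below is ours, not part of the study's analysis; the counts are taken directly from the text.

# Re-derive the reported percentages from the counts quoted above.
counts = {
    "Q1 agree (comfortable with Woli)": (439, 520),
    "Q1 disagree": (81, 520),
    "Q3 agree (ask about sources)": (449, 520),
    "Q5 agree (consulted a Woli)": (361, 520),
}
for label, (k, n) in counts.items():
    print(f"{label}: {100 * k / n:.2f}%")
# -> 84.42%, 15.58%, 86.35%, 69.42%
# (the text truncates a couple of values to 15.57% and 86.34%)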
For question 8, the majority of students (n = 497, 95.57%) said that consultation took place on the school grounds, while 24 students (4.61%) declared they had consultations in the classroom. These findings may be interpreted as a means for students to protect themselves. It is also probable that consultations took place in the absence of the teacher or at the end of a lesson. Classrooms were generally far from the school administration office. Also, the school grounds were not an appropriate setting for such a practice because of the open space: students could pretend to play while doing something else. None of the students agreed with the statement that consultation occurred out of school. The data collected for question 9 were similar to those of the previous question. The majority of students (n = 489, 94.03%) had their consultations after classes, and 31 students (5.96%) admitted to having consultations before classes. This implies that students may come early, before 8 o'clock. They may also lie to their parents at home that they have classes at 7 o'clock. In the majority of cases there were ceremonies to perform after the consultation. 244 students (46.92%) confessed that they do have such practices, whereas 276 students (53.07%) said they were not instructed to perform rituals. After the consultation, the "Woli" student instructed his "customers" to practice what he or she recommended. Generally, he or she provided a black soap to use for bathing. In addition, he or she asked students to break coconuts to defeat evil. Others had to fast for at least 7 days. Another category of students had to go to the sea to bathe, so as to cure themselves and leave their sins behind before coming home. After a consultation, there were never discussions at home with parents: students never told their parents that they were under medication or prescription.

D. Interview

The researcher interviewed 7 teachers. All interviews were recorded, transcribed and then translated into English. Data collected through the interviews were analyzed using content analysis techniques. After transcribing the interviews and several readings, the researcher was able to make sense of the data and constructed a system that allowed all of the data to be categorized systematically. The interview sessions lasted from 15 to 25 minutes, depending on how the participants explained the phenomenon. Participants shared their experiences. Each interview was a face-to-face interview with the participating teacher, recorded with the teacher's permission. It took approximately ten days to complete the interviews with all 7 teachers.

E. Interview Responses

1. How do you deal with the presence of "Woli" in your classes?
The management of Woli is sensitive. They are an object of distraction because they are different: they wear long braids and never wear shoes. Their classmates are curious to learn more about them.

2. How do you deal with the presence of "Hounyos" in your classes?
It is not common to have girls who are half-dressed in Benin classrooms. Having them reminds me that I have a double responsibility regarding their interaction and socialization among the other students.

3. Does the presence of "Woli" and "Hounyos" impact classroom activity?
The majority of teachers said that they handle problems when they occur.
However, one teacher mentioned that he experienced a case where a Woli student was really frustrated by the attitude of an administrator. The superintendent visited his class to check for students who did not follow the regulation requiring students to keep their hair cut. At first sight the superintendent did not realize that there was a Woli student; he thought he was a regular student and asked him to leave. A student whispered: "Good job, dear supervisor! I am happy for the Woli student. He needs to be dismissed too. Why should teachers allow him and not the others?" After the inspection, I drew the superintendent's attention to the fact that the Woli was a special student. He was accepted, but the atmosphere was tense, and this incident affected the progress of the lesson.

4. Have you ever seen the manifestation of vodoun regarding "Hounyos" in the classroom?
It is not common for Hounyos to go into a trance in class, but it may happen after the school day or during extracurricular activities, such as working in a group to clean the school yard or during a concert organized by the school. Here are the words of a female teacher: "I was terrified by the strength of the student who went into a trance. Her fellow classmates ran away, and I thought she was going to die because of her breathing and the rolling of her eyes. I was really confused, and I called on the administration for help." A male teacher said that he was surprised by a Hounyo's behaviour when she started uttering words in an unknown language. The words did not make sense. She went out and kept pronouncing the same words, again and again. Students were laughing. Finally, the Vice-Principal called her parents. Another female teacher said that she experienced a case where a Hounyo student was crying, singing aloud, and performing a vodoun dance in class.

5. What do you think about their performance?
Most of the time Woli and Hounyos do not achieve well in class, although Hounyos, specifically, are more likely to achieve well. Sometimes Woli students abuse and take advantage of their status to miss classes and skip evaluations. A Woli student told me: "I can no longer stay in this classroom because I have had a bad vision. I predict a bad event for the class." But there was no bad event. Also, the regular students confessed that they paid money to Woli students so that they would predict the chapters that needed to be revised for tests. The money ranged from 100 CFA to 1,200 CFA, paid out of class if possible, for an additional ritual; this may depend on each situation. Sometimes they use magic to help their friends obtain agreement from girls. When they have health problems there is an interpretation which makes them stay in church or in the convent for healing; there is no alternative of going to hospital.

6. What are the relationships of these special students with their classmates?
The interaction is good. When it comes to working in pairs or collectively, they feel comfortable. Some students use their names directly: "Hounyos/Woli". There is no complex; they easily accept themselves. They play together during break time.

7. Is there any warning from the administration before you have those special students in your class?
All of the teachers interviewed (100%) said that they did not receive any warning about the presence of "Woli" and "Hounyos" in their classroom. "Personally, I realized on my own that I have two Woli in my class. I have never received any advice from the vice-principal or the principal on the way I should treat those students.
I know they are different, and I pay attention to the choice of my words while dealing with some topics, such as religious views."

8. Do you think we should accept "Woli" and "Hounyos" in the classroom?
Of the 7 teachers, four (4) did not share the view that we should accept Hounyos and Woli in the classroom. They were against their inclusion because of their behaviour in class: they are stigmatized and are constantly an object of distraction, so it would be better to create a specific environment for them. Three (3) teachers believed that we should keep them with the other students and explain their origin and status so that everybody is informed about their way of behaving.

F. Classroom Observation

To obtain reliable data, teachers were not informed of the objective of the class visit. An arrangement was made with the principal to check on the presence of "Woli" and "Hounyos" in the classrooms selected for observation. The researcher informed the teachers that she was going to work on learners and their interaction. The researcher paid attention to the seating arrangement of students and their interaction. Attention was also focused on students' mark books to check on their achievement. Classrooms were observed prior to the interviews. From the classroom observation, the researcher noticed that only a minority of teachers managed quite well the interaction of these students and their involvement with the other students, whereas the majority did not. Of the nineteen selected students in need, only six (31.57%) were well integrated and well treated by teachers. The remaining thirteen students (68.42%) were stigmatized and not welcomed in the classroom. For example, among the classes observed there was a teacher who asked a "Woli" student to continue the reading of a text and called him by his nickname: "Hey! The Woli in the back! Are you with us or are you dreaming?" This made everybody laugh, but the boy disliked such behavior, and it was not fun for him. The teacher was expecting the researcher to laugh, but that did not happen. Some young teachers held the view that schools should exclude these students and create a special program for them. The researcher also noticed that some students had health problems linked to the fact that they fasted for hours and even for days; they were instructed to eat fruit only. Ultimately, they became weak and went into health crises.

IV. DISCUSSION

Access to inclusive education has received little attention in Benin. However, to make it possible, stakeholders have to further develop teachers' competencies in this particular form of education. It is therefore crucial to provide adequate documentation on the matter. All of this will require collaborative efforts from educational programs.

A. Beliefs of Teachers and Students

EFL teachers' and students' beliefs differ from one category to the other. It is obvious that "Woli" and "Hounyos" are more and more present in schools today in Benin; in the past, parents never sent them to school. It is also obvious that they were subjected to name-calling. They were frequently bullied, beaten by their fellow students and treated as misfits or outcasts. Some of them were avoided by their peers, who refused to sit, eat, play or interact with them. Others were always humiliated by their classmates when they appeared in the classroom. Not all people rely on their culture. The marginalization of such students is not visible today.
This attitude has an impact on "Woli and Hounyos'" performance. The findings from the classroom observations and the mark books have shown that "Hounyo" students achieved better than "Woli" students. The percentages from the questionnaire, the interviews, and the classroom observations provide a deeper understanding of students' and teachers' beliefs about the inclusion of "Woli" and "Hounyos" in the Benin classroom context. The findings from the interviews revealed that only a minority of EFL teachers accept the inclusion of these special students. Furthermore, the data collected on the ground indicated that the presence of "Woli" in the classroom generates another area of interest, namely a business activity: "Woli" students gave more consideration to consultations and ritual ceremonies than to their academic studies. The findings on the degree of satisfaction also showed that the service provided fell short of the satisfaction students expected, which means that the results of the consultations were not reliable. The regular students who discovered the truth became sceptical and started developing doubts; they settled on the position that the consultations were a false story. Consequently, some "Woli" students lost their reputation and influence among students.

B. Professional Development

There was no professional development that met the needs of EFL practitioners in this domain. Being a teacher in an African classroom context, especially with students in need, requires training. Each student is unique and may come from a varied background, and a teacher who is not well equipped to deal with special students may be caught off guard. There should therefore be more programs in colleges of education and universities aimed at equipping pre-service teachers with the skills needed to teach students in the context of inclusion. Professionally, teachers' attitudes varied between experienced and beginning teachers. Veteran teachers were more likely to handle "Woli and Hounyos" well: with them, these students were well treated, and their participation was effective. Inexperienced teachers tended to minimize their presence and even laughed at "Woli" students. There is reasonable consistency in the literature regarding this attitude, because [14] also came to the conclusion that, in general, teachers with 14 years or less of teaching experience had a significantly more positive attitude to inclusion than those with more than 14 years. They found no significant differences in attitudes to inclusion among teachers whose teaching experience was between one and four years, five and nine years, and ten and 14 years (no breakdown by individual country was given). While observing teachers' and students' attitudes in class, female teachers tended to guide Woli and Hounyo students appropriately; they were more flexible than their male colleagues. This evidence accords with some previous studies, although the literature on gender is inconsistent: some studies found that female teachers had a greater tolerance for the integration and inclusion of special-needs persons than male teachers did [19], [20]. [21], for example, found a marginal tendency for female teachers to express more positive attitudes towards the idea of integrating children with behavior problems than male teachers. The review has also demonstrated that some teachers worldwide have a positive view of inclusion, provided that certain conditions are met.
This idea correlates strongly with the findings of [22], which explained that there are challenges in both developed and developing countries, such as gaps between policies and practices, discouraging behavior towards inclusion, and a lack of sufficient funding. Inclusive outcomes are positive, and teachers felt that if all facilities were provided, the results would be even more encouraging. This study only considered the implementation of inclusive education at the secondary school level in Benin. Further research is therefore urgently needed into the inclusion of other types of students, such as believers of other religious practices. Academic performance is of great importance in education because it is strongly linked to the positive outcomes teachers value. For this research, attention was focused on the academic results of the 19 selected students. The researcher noticed that the single Woli girl and the single Hounyo girl performed very well in class. Their interaction was different from that of the other special students: they felt comfortable and did not develop any complex with their mates or in groups. Woli students, by contrast, were not willing to perform well; their participation was not visible, and their academic results were below expectations.

V. IMPLICATIONS

The findings from this study shed light on some issues regarding Woli and Hounyo students' attitudes in the classroom. The results clearly indicate that teachers in general, and EFL teachers in particular, should change their beliefs about the concept of inclusion in the classroom. Secondly, regular students also need a change in their attitude. The data collected indicate that parents should be involved in curriculum development and in sensitizing Woli and Hounyo students to appropriate attitudes and behavior towards the school community. Apart from parents, the government of Benin should show more commitment to the secondary education program by providing adequate training for teachers and by sensitizing special students to good attitudes in class. This idea was strongly supported by the studies of [23] and [24], who focused educators' attention on the importance of training in the formation of positive attitudes towards integration. They studied the attitudes of college teachers in the UK towards students with SEN and their integration into ordinary college courses. Their findings showed that college teachers who had been trained to teach students with learning difficulties expressed more favorable attitudes and emotional reactions to students with SEN and their integration than did those without such training. A precondition is also necessary prior to "Woli and Hounyos'" acceptance in school: the school setting cannot be transformed into a ritual place where consultation becomes a routine and a business activity involving the regular students. All stakeholders should establish regulations so that Woli and Hounyo students are well aware of their limits. For instance, it should be strictly forbidden for "Woli and Hounyos" students to predict what is going to happen in the future or to use threats towards their mates. In the same way, "Woli and Hounyos" students should be subject to the same punishments as regular students.

VI. CONCLUSION

This study has discussed the understanding and practices of inclusive education and the main challenges in developing inclusive education in Benin EFL classes.
The research has considered issues in inclusive classrooms in other countries as well as some practical definitions and terms used in the religious context of the Benin Republic. The results indicate that, in Benin, only a small minority of EFL teachers strongly approve of inclusion. The findings have also shown that students with exceptionalities need to be taught differently: they need accommodations to enhance the learning environment, since not everyone learns in the same way. More effort needs to be directed towards educational specialists in terms of the quality of training with regard to the content of the curriculum for exceptional learners. Students in need should receive additional training on how to develop a positive attitude in class. Although no previous research had been undertaken on the inclusion of Hounyos and Woli in Benin, future research may examine the integration of other exceptional students in the classroom, such as Mamiwata, Dansi, Hounons, and adeptes of Tronc.
Understanding Sensory Nerve Mechanotransduction through Localized Elastomeric Matrix Control

Background

While neural systems are known to respond to chemical and electrical stimulation, the effect of mechanics on these highly sensitive cells is still not well understood. The ability to examine the effects of mechanics on these cells is limited by existing approaches, although their overall response is intimately tied to cell-matrix interactions. Here, we offer a novel method, which we used to investigate stretch-activated mechanotransduction on nerve terminals of sensory neurons through an elastomeric interface.

Methodology/Principal Findings

To apply mechanical force on neurites, we cultured dorsal root ganglion neurons on an elastic substrate, polydimethylsiloxane (PDMS), coated with extracellular matrices (ECM). We then implemented a controlled indentation scheme using a glass pipette to mechanically stimulate individual neurites that were adjacent to the pipette. We used whole-cell patch clamping to record stretch-activated action potentials at the soma in response to stimulation of single neurites, thereby determining the mechanotransduction-based response. When we imposed a specific mechanical force through the ECM, we noted a significant neuronal action potential response. Furthermore, because the mechanotransduction cascade is known to be directly affected by the cytoskeleton, we investigated the cell structure and its effects. When we disrupted microtubules and actin filaments with nocodazole or cytochalasin-D, respectively, the mechanically induced action potential was abrogated. In contrast, when using blockers of channels such as TRP, ASIC, and stretch-activated channels while mechanically stimulating the cells, we observed almost no change in action potential signaling when compared with mechanical activation of unmodified cells.

Conclusions/Significance

These results suggest that sensory nerve terminals have a specific mechanosensitive response that is related to cell architecture.

Introduction

Mechanical force is known to affect a diversity of physiological areas at the cellular level, including cardiac, fibroblast, bone, and vascular cells [1-3]. Mechanotransduction is a topic of increasing research interest, particularly to those in the neural sciences, due to the ability of physically based forces to induce neuronal changes that are directly responsible for a host of complex and integrated responses. Mechanoreceptors of sensory neurons localize in specialized or encapsulated nerve terminals, providing a mechanism of response to pain, touch, pressure, vibration, vessel stretch, and proprioception [4]. However, while initial evidence suggests that mechanobiology applies strictly to nerve terminals in neurons, the manner in which these terminals sense and respond to mechanical signals is still not well understood. Furthermore, previous studies investigating the molecular mechanisms of mechanotransduction in sensory neurons (by adopting neurite-free neurons to stimulate nerve terminals) used mechanical forces such as compression and hypo-osmotic stretch [5,6]. This stimulus was applied directly on the soma of acutely dissociated dorsal root ganglion (DRG) neurons and generated an inward current detected via patch clamp recording [5-10]. Two recent studies measured pressure-activated currents on nerve terminals through the use of a glass pipette or a pressure jet [11,12].
As this is a nonspecific means of mechanically stimulating neural cells, it does not replicate a direct link to structural interactions, e.g., focal adhesions, found in mammalian cells [13]. To our knowledge, no research attempt has been reported that examined stretch-activated mechanotransduction on neurites in dissociated neurons through a specific adhesion mechanism while providing control over the cell-matrix mechanical properties. In our study, we first focused on developing an in vitro method to probe stretch-activated mechanotransduction and cytoskeletal structural links for nerve terminals in neurons. This allowed us to investigate the coupled behavior of mechanical stimulation and substrate interactions, with respect to cell structure, in affecting the critical neural function of action potential (AP) firing. To provide control over the mechanical stimulation and the cell-matrix interactions, we used polydimethylsiloxane (PDMS), which has a highly cross-linked three-dimensional structure and offers high elongation properties with a relatively low modulus. PDMS is composed of a silicone T-resin cross-linked by a mixture of vinyl-terminated PDMS (base) and trimethylsiloxy-terminated polymethylhydrosiloxane polymers (curing agent) [14]. PDMS can be modified to have various elasticity properties, which is useful when using force as an influential parameter to understand cell signaling, since it allows for precise modulation of the PDMS down to a single kPa, an elasticity similar to that found in native tissue [15]. The ability to modulate the elasticity of the PDMS also provides control over the amount of deformation that the substrate, and the cells attached to it, experience under controlled indentations. We have, through this simple approach, leveraged material characteristics to emulate physiologically relevant interactions in nerve terminals experiencing mechanical stimulation for probing the mechanotransductive response in DRG neurons.

Probing mechanotransduction in neurites through elastomeric matrix control

One goal was to build a PDMS culture-recording setup that allows for distal force application to nerve terminals through an elastic deformation with simultaneous recording of the consequent AP response on a neuron soma. To provide a system that would allow us to impose these mechanical forces, we first needed to successfully integrate our cell culture methodology with the indentation pipette force procedure and AP measuring techniques. First, PDMS substrates with low stiffness were fabricated to provide an elastic connection similar to what has been observed in living organisms [16]. Next, we coated the PDMS with either fibronectin or poly-L-lysine and cultured neurons on the modified elastic surface; neurite out-growths were observed using this approach. To impose mechanical force, we used a micromanipulator to bring a blunt pipette into contact with the PDMS and deform the PDMS through a vertical displacement of the pipette at a location that was near to, but not on, the neurite. Through this approach, we mechanically stimulated the neurite extension but did not impose non-specific mechanical stimulation on the cells; a schematic of this method is shown in Figures 1a and 1b. While the displacement of the pipette was vertical, the deformation of the substrate caused a stretching of the cell along its cell-substrate interface, as the cell was attached to the substrate (Fig. S1).
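The depth-to-force conversion for this kind of indentation is calibrated in the study itself (Fig. S2, referenced below). As a purely illustrative stand-in for that calibration, the following sketch uses Sneddon's flat-punch solution for a rigid cylindrical punch on an elastic half-space, P = 2Eah/(1 − ν²); the punch radius and Poisson's ratio are assumptions introduced here for the example, not values taken from the paper.

```python
# Illustrative sketch only: flat-punch contact model for pipette indentation
# of soft PDMS. The paper's own depth-to-force calibration is in Fig. S2;
# the tip radius and Poisson's ratio below are assumptions for this example.

E_PDMS = 88e3   # Pa, Young's modulus of 35:1 PDMS (value reported in Methods)
NU = 0.5        # Poisson's ratio, assumed for a nearly incompressible elastomer
A_TIP = 2e-6    # m, punch (pipette tip) radius, assumed from a ~4 um tip diameter

def indentation_force(depth_m: float) -> float:
    """Sneddon flat-punch estimate of the force (N) at indentation depth (m)."""
    return 2.0 * E_PDMS * A_TIP * depth_m / (1.0 - NU ** 2)

if __name__ == "__main__":
    # One displacement step, a mid-range depth, and the protocol's maximum depth.
    for depth_um in (10.42, 62.5, 125.0):
        force = indentation_force(depth_um * 1e-6)
        print(f"depth {depth_um:6.2f} um -> punch force {force * 1e6:6.2f} uN")
```

Under these assumed numbers, the maximum 125 μm indentation corresponds to a punch force on the order of tens of μN, of which only a fraction is transmitted through the substrate to a neurite 10-15 μm away; quantitative values should come from the study's own calibration.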
Mechanical stimulation on this single neurite increased with increases in the indentation depth (the relationship of indentation depth to force application is described in Fig. S2). Thus, we were able to apply force on DRG neurites via specific attachments due to the molecular coating-substrate interactions. This also minimized direct probe interaction with the cells, which could be detrimental to cell function. We next demonstrated that the condition of whole-cell patch recording was stable and reliable when the PDMS was indented. The effect of indentation on serial or input resistance was minimal (<30%). In addition, the indentation did not rupture the soma or neurites, as visualized with lucifer yellow (n = 5, Fig. S3). Before and during the application of mechanical stimulation, we recorded the evoked AP or the change of membrane potential on the soma (Fig. 1c,d).

Effect of ECM and neurites on stretch-activated mechanotransduction

The response of cells is often directly related to ECM interactions occurring between the cell and the substrate [17]. Thus, we probed these interactions by using poly-L-lysine and fibronectin, which have different cell adhesion characteristics. We found that the specificity of the cell-to-ECM interactions had a pronounced effect on the response of neurites (Fig. 2). We quantified the AP for cells cultured under different substrate conditions to probe the neurite response (Table 1). Only neurite-bearing neurons displayed an evoked AP. In contrast, for neurite-free neurons, neither was an AP evoked nor was the membrane potential altered during the application of mechanical stimulation at a 100 μm distance from the soma (Fig. 3). Even when the indentation was close to the soma (within 30 μm), the mechanical stimulation only caused limited changes of membrane potential (<10 mV) in neurite-free neurons (Fig. 4). We found that mechanical stimulation of neurites cultured on PDMS substrates coated with poly-L-lysine induced an AP in only 35% of DRG neurons (n = 40) after day 5 of cell culture (Fig. 2a,b and Table 1). Indentation of the PDMS did not induce an AP in neurons without neurite outgrowth by day 2 (n = 20), although most neurons did not exhibit outgrowth of neurites at this time (Fig. 2c). In contrast, we found that fibronectin coating on PDMS substrates greatly facilitated neurite outgrowth of cultured DRG neurons, with neurite extensions visible after only 2 days of culture (Fig. 2d). Fibronectin, on the other hand, did not promote an increase in the number of glia cells by day 2 (Table S1). Furthermore, at only 2 days of culture on fibronectin-coated PDMS, 26.7% of the DRG neurons (n = 30) had an AP under mechanical stretching, an improvement over the 7.5% of DRG neurons (n = 40) after 1 day, even with limited observable neurite outgrowth (Table 1). Not only did DRG neurons cultured on fibronectin extend neurites between days 1 and 2, but the number of them that responded to stretch via AP also increased. Furthermore, a lower threshold for the induction of an AP in these cultures was observed (213 vs. 160 μN). It is noted that although the stretch-induced AP was only found in neurons with neurite outgrowth, a subset of neurite-bearing neurons (10/18 in the D2+F group) did not display either AP induction or a change of membrane potential in response to stretch (Fig. 3d). Given the robust AP response found in 44.4% (8/18) of neurite-bearing neurons, we subsequently used fibronectin coating after 2 days for additional studies.
Involvement of cytoskeletal structure

Since the cytoskeleton in many cell types is directly related to the ECM and mechanical response [1,2], we next probed the effects of cytoskeletal structure on the mechanotransductive response of neural cells, using the approach outlined in Figure 5. We first investigated microtubules, since they are a significant component of neurons and are heavily involved in many signaling pathways [18]. We mechanically stimulated neurons and recorded the AP in a manner similar to the previous experiments outlined in Figure 1. We then used nocodazole to interfere with the polymerization of microtubules [19] and recorded the AP signaling under mechanical stimulation. We observed that nocodazole abrogated the stretch-evoked action potentials in 100% of the tested neurons (Fig. 5c). As it was evident that at least one of the cytoskeletal components, i.e., microtubules, influenced AP firing, we then proceeded to examine another major cytoskeletal constituent, actin [19]. We first mechanically activated the neurons and then incubated them with cytochalasin-D or latrunculin-A, both of which affect the polymerization of actin filaments. The stretch-evoked action potentials were suppressed in all neurons following this actin cytoskeleton modification (Fig. 5d,e). To examine the reversibility of the signal, we followed the addition of the agent and mechanical stimulation with a continuous washing procedure (lasting for 3 minutes) to remove the cytoskeletal modifiers via bath-perfusion, as previously published [20]. We sought to probe the response of the neuron after the wash-out to investigate whether it remained functional for AP signaling. After the washing procedure, we mechanically stimulated the cell again. This time, no AP firing was observed, although the cells were still responsive, as determined by a follow-on current injection to the cell, which evoked a well-characterized AP response. We further confirmed that the effects of nocodazole, cytochalasin-D, and latrunculin-A were due to the depolymerization of the cytoskeleton and not the inhibition of voltage-dependent ion channels, as the neurons were still capable of firing APs through current injection even after being subjected to inhibiting cytoskeletal modifiers (Fig. S4). None of the cytoskeletal modification agents altered the resting membrane potential of the neurons.

Effect of mechanosensitive channel blockers

After determining that the mechanical response was linked to the specificity of the ECM and the cytoskeleton, we were interested in examining whether there was a link involving mechanosensitive ion channels. To accomplish this, we used known mechanosensitive ion channel blockers (gadolinium chloride, amiloride, ruthenium red) that have been shown to inhibit most of the mechanosensitive current in neurite-free DRG neurons [5,6]. Application of these blockers did not show any inhibitory effect on the stretch-activated conductance of neurites cultured on fibronectin-coated PDMS substrate (Fig. 6). We found that gadolinium ions (Gd³⁺) did not block the stretch-evoked action potentials in DRG neurons (Fig. 6a). While gadolinium, a non-specific blocker of mechanosensitive ion channels in most cell types including neurons and non-neuronal cells [5,6,21-23], blocks mechanotransduction in most sensory neurons in vitro [5,6,11], many studies have shown that stretch-evoked afferent fiber responses are insensitive to gadolinium in vivo [24,25].
For in vivo studies, the inhibitory effect of gadolinium on afferent mechanotransduction has only been demonstrated in specialized primary afferents such as those of the knee joint and the carotid baroreceptor nerve [26,27]. In our recording system, gadolinium ions did not block the stretch-activated action potential in neurites (Fig. 6a). We opine that the mechanosensitive channels of neurites on elastic substrates coated with fibronectin may differ from those grown on more rigid glass or Petri dishes, or those using poly-L-lysine rather than fibronectin. In addition to investigating gadolinium, we examined amiloride and ruthenium red. Amiloride (or benzamil) does not block mechanically activated current in in vitro recordings [6], but does inhibit mechanotransduction in airway afferents, intraganglionic laminar endings of vagal tension receptors, and knee joint afferents [24-26]. Our system did not reveal any amiloride-sensitive mechanotransduction (Fig. 6b); this may be related, again, to the substrate stiffness and ECM interactions in vivo. While ruthenium red and other TRP channel blockers inhibit mechanically activated current in the soma [6] or slow-adapting current in neurites of isolated DRG [11], our results using ruthenium red did not reveal inhibition of mechanically induced signaling (Fig. 6c). While the neurites showed specific mechanosensitive responses when probing their AP, the blocking of mechanosensitive channels had little effect on the AP response.

Figure 1. Experimental setup for using elastomeric substrates to probe mechanotransduction in neurites. a, Schematic of the mechanical stretching imposed on neurites in the recording chamber. DRG neurons cultured on a PDMS substrate were recorded with a recording pipette. Force on the neurites was generated by indenting the PDMS substrate with a glass pipette. The force from the indenting pipette was transmitted through the substrate to the cell. The signal from a DRG neuron was recorded through a whole-cell patch clamp set-up. b, An image of the system used to record stretch-evoked AP from neurites. The recording was performed in a recording chamber (asterisk) constantly perfused with ACSF. The neuron was connected to a recording pipette (white arrowhead) that was attached to a pre-amplifier. The neurites were stretched by the pipette (white arrow) indentation that was controlled with a micromanipulator. c, When the indentation pipette (white arrow) was placed on the surface of the PDMS substrate near the neurite, no AP was generated (inset). d, As the pipette indented the PDMS through a vertical displacement (micromanipulator), a change in intensity in the differential interference contrast (DIC) image at the location of the pipette was observed (black arrow). This indentation, which imposed a force on the attached cell, evoked an AP (inset). doi:10.1371/journal.pone.0004293.g001

Discussion

Our studies have shown that neural action potential firing through nerve terminals is linked to specific mechanical deformation and extracellular matrix interactions. Since ECM-interfacing neurite outgrowth on soft substrates has been linked to integrins [28,29], this suggests the potential for the transmission of mechanical stimulation through transmembrane integrins in nerve terminals. The integrin pathway is known to be critical in a diversity of mammalian cell types as part of the focal adhesion complexes that are linked to mechanotransduction [30,31].
Table 1. The following result parameters were captured: D5, 5-day culture with poly-L-lysine; D2, 2-day culture with poly-L-lysine; D2+F, 2-day culture with fibronectin; and D1+F, 1-day culture with fibronectin. Neuron diameters were determined, and the percentage of cells that responded to the indentation experiments with an AP signal was determined. AP indentation depth indicates the indentation depth needed to induce an AP for each condition. The indentation force at the neurite site is determined based on the equations provided in the Supplementary Information (Fig. S2).

While the exact mechanism for the activation is not known even in highly studied cells in mechanical environments, such as endothelial and cardiac cells, the overall link is known to exist due to signaling pathways such as MAPK that are activated under mechanical stimulation. The presence of direct molecular links to integrins, such as RGD in supporting physiological structures including the basement membrane, is one reason for this mechanistic link to be pursued. As the integrins are transmembrane, they provide a link from the extracellular interactions directly into the intracellular structures such as the cytoskeleton. The cytoskeleton has been shown to be linked to responses in a diversity of cell types [1,32,33]. Thus, disrupting these structural links within living cells often alters or abrogates critical cell functions. The AP response here is similar: while the exact mechanism has not been demonstrated, the response has been shown to be related both to mechanical stimulation and to the cytoskeleton. While the actin/microtubule inhibitors effectively abolished the stretch-induced AP response in all neurite-bearing neurons, a washout of the drugs did not confer a reversible AP response to stretch (Fig. 5). The reason is that although the drugs were washed out, their effects were not reversible within our recording window, which was usually completed in 30 minutes and not reliable beyond 1 hour, since actin filaments and microtubules need extended time (hours) to repolymerize into more fully developed and functional forms. While previous studies on mechanosensory transduction used sensory neurons cultured on coverslips coated with non-specific attachment proteins including poly-L-lysine, such conditions have shown significantly different responses for other cell types [34]. In each of our studies, we found that fibronectin-coated PDMS promoted neurite extension (Fig. 2). The neurons in these cultures also exhibited a lower threshold of stretch-activated action potentials when compared to neurons cultured upon a poly-L-lysine coating, implying that cell-ECM interactions are extremely important in neural mechanotransduction.

Figure 4. Effect of indentation distance from soma. Whole-cell patch recordings were performed in DRG neurons in 2-day culture with poly-L-lysine. All neurons were neurite-free. In current clamp mode, the change of membrane potential was recorded when a maximal indentation depth (125 μm) was applied to the PDMS at a distance of 100 μm, 50 μm, or 30 μm away from the soma. When the indentation was placed on the soma directly, all neurons fired an AP (Fig. S5). p>0.05 for comparison between any two distances, n = 5. doi:10.1371/journal.pone.0004293.g004
While displacements on nerve terminals on poly-L-lysine/laminin-coated coverslips have previously been observed to display mechano-sensitive currents 92% of the time [11], only 44.4% (8/18 of D2+F) of neurite-bearing neurons cultured on fibronectin-coated PDMS fired an AP via distal stretch. The previous study used changes in pressure as the mechanical stimulus, and there was no direct contact/linkage to the cell; thus, that approach had very little specificity in terms of ECM interactions. In addition, the discrepancy between these responses and the previously published experiments [11] could be due to the intensity of mechanical stimulation. The pressure used in the previous stimulation scenario was likely greater than the applied stress levels used in these studies, which may, at least partly, have been due to the stimulation mode(s) chosen (pressure vs. stretch). In addition, the substrate stiffness (glass vs. soft PDMS) could have had an influence, as stiffness has been shown to affect a variety of cell responses from motility to differentiation [35,36]. Furthermore, while mechanotransduction in vivo is functionally executed at the sensory nerve terminals projecting to peripheral tissues, and cell bodies of primary sensory neurons have not been shown to be ordinarily mechanosensitive, most of the previous studies used neurite-free DRG neurons to probe mechanical activation when testing whole-cell current responses in vitro [5-10]. One advantageous manifestation of our culture system is that it implements an elastomeric substrate coated with fibronectin, which is likely more accurate in mimicking the in vivo environment in which nerve terminals sense mechanical forces.

Figure 5. Examining stretch-activated mechanotransduction in neurites with respect to cytoskeletal structure. a-b, DIC images of neurons stretched in the direction of the indentation of the PDMS substrate. The edge of the neuron was marked with a solid arrow before the stretch and a dashed arrow after the stretch (the indentation pipette is not in the image). c, A baseline signal for each cell was recorded, and then an indentation was applied. Cytoskeletal modification agents were perfused into the system, and then a second indentation was applied. The cytoskeletal modifiers were then washed out using a continuous flow over 3 minutes, followed by another indentation. A current injection (2 nA) was introduced afterwards while the AP was being recorded. Nocodazole (1 μg/ml), which disrupts microtubules, blocked the stretch-activated AP but not the current injection-induced AP (n = 8). d, Cytochalasin-D (1 μg/ml), which disrupts actin filaments, blocked the stretch-activated AP but not the current injection-induced AP (n = 8). e, Latrunculin-A (1 μg/ml), which inhibits actin polymerization, blocked the stretch-activated AP but not the current injection-induced AP (n = 8). ACSF, artificial cerebrospinal fluid; RMP, resting membrane potential. Scale bar, 50 μm. doi:10.1371/journal.pone.0004293.g005

In the PDMS culture-recording setup, although the D5 group had the highest response rate to mechanical stimulation when compared with the other groups, this culture condition also showed a high density of neurites and glia cells (Fig. 2). This created a challenge for stretching a single neurite while avoiding the glia cells. In contrast, the neurons of the D2+F group provided two major advantages for probing distal force-mediated mechanosensory transduction.
First, fibronectin promoted neurite outgrowth on PDMS by day 2, and the neurite outgrowth was at a low density and trackable (Fig. 2). We could easily stretch a single neurite through a non-contact indentation. Also, the indentation was applied 10-15 μm from a neurite, so that the neurite would receive similar mechanical forces when the same indentation depth was applied. Second, fibronectin did not promote proliferation of glia cells on PDMS by day 2, since the ratios of glia cells to neurons in the D2 and D2+F groups were similar, 2.7 vs. 2.8, respectively (Fig. 2 & Table S1). These densities of glia cells allowed us to avoid direct indentation on glia cells and to minimize the effect of glia cells on the mechanosensory transduction. Moreover, the mechanical stimulation had little effect on the membrane potential, since the indentation was displaced 100 μm away from the soma, at which distance the distal mechanical force could alter the membrane potential of a neurite-free neuron by less than 3 mV, even when the neuron was surrounded by glia cells (Fig. 3 & Fig. 4). When neurite-free neurons received a larger mechanical force, as the indentation was closer, they may have shown a larger change in membrane potential. Nevertheless, neurite-free neurons only altered their membrane potential by less than 10 mV when the indentation was displaced at a 30 μm distance (Fig. 4). Likewise, the effect of distal force on glia cells should be minimal. Although glia cells and neurons were side-by-side on the PDMS substrate (Fig. 2), when the indentation was displaced adjacent to glia cells, no AP was induced in neurite-free neurons in the D2, D1+F, and D2+F groups. However, although unlikely, we still cannot totally exclude the possibility that the mechanotransduction of neurites was an effect of gliotransmitters, which can be released when glia cells receive sufficient mechanical stimulation [37]. We conclude that the distal force-mediated mechanosensory transduction we observed in neurites is fibronectin-influenced, and that the cytoskeleton is both involved in and required for AP firing. These findings have implications in a diversity of fields, including mechanotransduction in neurons, neuron-material interactions, and neural tissue engineering.

Figure 6. The response of stretch-activated mechanotransduction in neurites to mechanosensitive (MS) channel blockers. Common MS channel blockers were used to probe their effect on AP signaling. A baseline signal for each cell was first recorded with no stimulation provided. This was followed by recording the AP signal following the application of force from an indentation with no channel blocker being used. Next, the blocker was applied to the neurite, and another indentation was made; the AP was recorded during both of these events. (a) The stretch-activated AP was not blocked by GdCl₃ (100 μM, n = 8), (b) amiloride (100 μM, n = 8), or (c) ruthenium red (5 μM, n = 8). doi:10.1371/journal.pone.0004293.g006

Materials and Methods

PDMS

Polydimethylsiloxane (PDMS, Sylgard 184) was purchased from Dow Corning Corp. (Midland, MI, USA) and prepared with a 35:1 ratio of base to curing agent. PDMS prepared at this ratio has a Young's modulus of approximately 88 kPa [16].

DRG primary culture

CD1 mice (8 to 12 weeks old) were used for DRG primary culture. The usage of these animals was approved by the Institute Animal Care and Use Committee of Academia Sinica and followed the Guide for the Use of Laboratory Animals (National Academy Press). Mice were euthanized by the use of CO₂ to minimize suffering.
Total DRG were acutely dissociated and processed as described [38]. The cells were seeded on a PDMS layer, on top of a coverslip, coated with poly-L-lysine (0.1%; Sigma, St. Louis, MO) or fibronectin (10 μg in 1 ml PBS; BD Biosciences, San Jose, CA), then cultured in a Petri dish using Dulbecco's modified Eagle's medium containing 1% Penicillin/Streptomycin and 10% fetal calf serum. Cell cultures were maintained in a 5% CO₂ incubator at 37°C for 1 to 5 days.

Electrophysiology

Whole-cell patch recordings of DRG neurons were performed as described previously [38]. The patch pipettes (64-0792, Warner Instruments, Hamden, CT) were prepared at 1-5 MΩ and filled with internal solution containing 100 mM KCl, 2 mM Na₂-ATP, 0.3 mM Na₃-GTP, 10 mM EGTA, 5 mM MgCl₂, and 40 mM HEPES, adjusted to pH 7.4 with KOH. As the external solution, we used artificial cerebrospinal fluid (ACSF), which contained 130 mM NaCl, 5 mM KCl, 1 mM MgCl₂, 2 mM CaCl₂, 10 mM glucose, and 20 mM HEPES, adjusted to pH 7.4 with NaOH. Current clamp mode was used to record action potentials evoked by mechanical forces. The bridge was balanced for current clamp recording, and the data were discarded if the serial resistance or input resistance varied >30% from the base recording (non-stretched state) during the indentation [39]. Chemicals were purchased from Sigma (St. Louis, MO).

Mechanical stimulation

We applied a mechanical force on the PDMS after first confirming that the neuron could be excited through the introduction of a voltage change and measurement of an AP. A flame-polished pipette (tip diameter <4 μm) was used to indent the PDMS for the generation of mechanical stimulation through deformation of the substrate. The indentation pipette was located approximately 100 μm away from the main cell body of the recorded neuron and imposed a displacement on the PDMS where there were no glia cells. If a visible neurite was observed, the indentation was displaced adjacent to (approximately 10-15 μm from) the neurite extension, which allowed us to avoid nonspecific contact with the neurite and also provided specific control over the amount of indentation imposed on the PDMS for stimulating the cell. Indentation was controlled through the use of a micromanipulator (EMM-3SV, Narishige, Tokyo, Japan) positioned at an angle of 45 degrees to the PDMS surface. The displacement was applied in 10.42 μm steps until an AP response occurred or a maximum total displacement of 125 μm was reached. When an indented depth activated an AP response, we always applied the same indentation force again to determine whether the stretch-activated AP was repeatable (this stepping logic is sketched below). The duration of a displacement lasted less than 1 second. A minimum of 30 seconds was allowed between indentations [7].

Chemical modulators for channels and the cytoskeleton

Gadolinium chloride (100 μM), amiloride (100 μM), and ruthenium red (5 μM) were prepared in ACSF and bath-perfused while the indentation was applied. Nocodazole (1 μg/ml), cytochalasin-D (1 μg/ml), and latrunculin-A (1 μg/ml) were prepared in ACSF and bath-perfused into the recording chamber. The cytoskeletal modification agents were incubated with the cultured neurons for 10 minutes before the indentation was applied [20]. After the drugs were applied, the recording chamber was washed with ACSF for 3 minutes, and then a 2-nA square pulse was delivered to evoke an action potential.
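The displacement-stepping logic described under Mechanical stimulation above can be summarized as a short sketch. This is an illustration of the experimental logic only, not software used in the study; apply_indentation and ap_fired are hypothetical placeholders standing in for the micromanipulator and the patch clamp acquisition, respectively.

```python
import time
from typing import Optional

STEP_UM = 10.42        # displacement increment per step (from the protocol)
MAX_DEPTH_UM = 125.0   # maximum total displacement
INTER_TRIAL_S = 30.0   # minimum interval between indentations

def apply_indentation(depth_um: float) -> None:
    # Hypothetical placeholder: drive the indentation pipette to depth_um
    # (the real displacement lasted <1 s), then retract.
    raise NotImplementedError("hardware-specific")

def ap_fired() -> bool:
    # Hypothetical placeholder: inspect the current-clamp trace for an AP.
    raise NotImplementedError("hardware-specific")

def stepping_protocol() -> Optional[float]:
    """Step the indentation depth until an AP fires or the maximum depth is
    reached; reapply the threshold depth once to confirm repeatability.
    Returns the confirmed threshold depth (um), or None."""
    depth = 0.0
    while depth < MAX_DEPTH_UM:
        depth = min(depth + STEP_UM, MAX_DEPTH_UM)
        apply_indentation(depth)
        fired = ap_fired()
        time.sleep(INTER_TRIAL_S)          # rest before the next indentation
        if fired:
            apply_indentation(depth)       # same indentation applied again
            return depth if ap_fired() else None
    return None                            # no AP up to the maximum depth
```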
Microscopy and immunofluorescent staining

The differential interference contrast images of the recorded neurons were captured with a Charge-Coupled Device camera (XC-ST50, Sony, Japan) on an inverted microscope (IX71, Olympus, Tokyo, Japan). The fluorescent images were captured with a digital camera (Diagnostic Instruments, MI) on an Axiovert inverted fluorescent microscope (Axiovert 200, Zeiss, Germany). Cultured DRG neurons were fixed with 4% paraformaldehyde and then incubated with PBS containing 10% bovine serum albumin for blocking and a primary antibody against protein gene product 9.5 (Chemicon, Temecula, CA) at 4°C overnight. The secondary antibody used was 6 μM Alexa Fluor® 594 rabbit anti-guinea pig IgG (Invitrogen, Carlsbad, CA), which was applied for 2 hours. The cells were mounted with VECTASHIELD-DAPI (Vector, Burlingame, CA) and imaged using a 63× high-numerical-aperture oil immersion objective to examine morphology and fluorescent marker distribution.

Statistical Analysis

Results are presented as the mean ± SEM. One-way ANOVA tests were applied for independent sample comparison [38]. A non-parametric Mann-Whitney method was applied for independent sample comparison in Figure 4. A p < 0.05 was considered significant.

Figure S1 Stretching neurites on the PDMS substrate (movie). A neurite-bearing DRG neuron in day-2 culture with fibronectin was stretched through indentation of the PDMS. The duration of the indentation was 0.64 seconds. The movie is composed of 16 frames. Found at: doi:10.1371/journal.pone.0004293.s001 (3.87 MB SWF)

Figure S2 Calculation of indentation forces. We generated a stretching force on a single neurite through indentation with a glass pipette into the soft PDMS surface. The indentation depth, h, was determined using a micromanipulator as described in the Methods section, and the indentation force, P, that we applied was determined using the equation given in [1]. Found at: doi:10.1371/journal.pone.0004293.s002 (0.07 MB TIF)

Figure S3 Lucifer yellow visualization of neurons during stretching (movie). A representative movie shows a neurite-bearing DRG neuron (D2+F) that was whole-cell patched and stretched by indenting the PDMS. The patch pipette was filled with internal solution containing 2 mg/ml Lucifer yellow. The duration of the indentation was 0.52 seconds. The movie is composed of 13 frames. Found at: doi:10.1371/journal.pone.0004293.s003 (2.34 MB SWF)

Figure S4 Examining the effect of cytoskeleton modifiers on AP firing. (a) A current injection was introduced to evoke an AP in a DRG neuron perfused with ACSF. Nocodazole (1 μg/ml), which disrupts microtubules, did not block the current injection-induced AP in DRG neurons (n = 6). (b) Cytochalasin-D (1 μg/ml), which disrupts actin filaments, did not block the current injection-induced AP in DRG neurons (n = 6). (c) Latrunculin-A (1 μg/ml), which inhibits actin polymerization, did not block the current injection-induced AP in DRG neurons (n = 6). Found at: doi:10.1371/journal.pone.0004293.s004 (0.06 MB TIF)

Figure S5 Direct indentation on the soma evoked an AP response in all neurite-free neurons of the D2 culture (n = 5).
The indentation depth from the cell surface to the position at which an AP response was fired is indicated. Found at: doi:10.1371/journal.pone.0004293.s005 (0.05 MB TIF)

Author Contributions

Conceived and designed the experiments: CMC PRL CCC. Performed the experiments: YWL CMC. Analyzed the data: YWL CMC. Contributed reagents/materials/analysis tools: YWL CMC PRL CCC. Wrote the paper: PRL CCC.
Occlusal Contact Surface Changes and Occlusal Force Distribution Between Vacuum-Formed Retainers and Other Retainers: A Systematic Review

The present systematic review was done to assess the available literature on changes in the number of occlusal contacts (NOC), occlusal contact surface areas, and occlusal force distribution (OFD) with vacuum-formed retainers (VFRs) or clear overlay retainers during retention and to compare them with other retainers. Six electronic databases (Web of Science, Scopus, PubMed, Cochrane Library, LILACS, and Google Scholar) were searched. Randomized controlled trials (RCTs) and controlled clinical trials (CCTs) reporting on occlusal contact changes with VFRs were included. A total of nine articles were included in this review: three RCTs, five prospective controlled trials (PCTs), and one CCT. The Cochrane risk of bias tool and the ROBINS-I tool were used for risk of bias assessment. The three RCTs showed a moderate risk of bias; of the five PCTs, four showed a low risk of bias and one showed a moderate risk of bias. The one CCT showed a low risk of bias with the ROBINS-I tool. Two out of four studies reported improved occlusal surface area (OSA) with VFRs when assessed at the end of six months and 12 months; one out of four studies reported improved NOC; and one study reported a decrease in OFD anteriorly and an increase in OFD posteriorly after two months of retention. On comparison between the groups, the other retainer groups showed more NOCs compared to VFRs. The limited available evidence suggests an increase in OSA and no change in NOCs and OFD with VFRs during retention. No significant differences between VFRs and other retainers were noted for OSA and OFD, and more NOCs were noted for the other retainer groups.

Introduction And Background

Occlusal contacts are defined as contacts between the occluding surfaces of teeth when the distance is less than 50 μm [1]. When the distance is between 50 and 350 μm, they are called near-occlusal contacts. Adequate functional occlusal contacts are required for good masticatory performance and a healthy temporomandibular joint [2]. The stability of a corrected malocclusion is ensured by good occlusal interdigitation and the absence of any occlusal interferences. Occlusal settling is the vertical and horizontal tooth movement into functionally stable interocclusal contacts after active orthodontic treatment [3]. During active orthodontic treatment, full functional occlusion is not possible because the teeth are tied together. However, once active treatment ends, the released teeth will settle into full function and occlusion [4]. Hence, the appliances designed for retention should ideally not interfere with the interdigitation and should allow settling to occur.
Changes in occlusal contacts can be analyzed qualitatively with articulating papers, shim stock foils, silicone impressions, and occlusal waxes, and quantitatively with photo-occlusion systems and T-scans [1]. Qualitative occlusal registrations are susceptible to deterioration, cannot be repeated, and are unable to quantify occlusal stress [5]. In the photo-occlusion system, a very firm photoplastic film layer (98 μm thick) is placed over the occlusal surfaces, and the film layer is examined using a polariscope to determine the relative tooth contact intensity, but this method is complicated and not reproducible [6]. The T-scan III system (Tekscan, Norwood, Massachusetts, United States) is a hand-held device that has a U-shaped pressure-measuring sensor that fits into the patient's mouth between the occluding teeth and is connected to a computer [7]. It records the sequence of occlusal contacts from the first point of contact to maximum intercuspation (MIP), which is represented as bars and columns in the three-dimensional (3D) window, and it can quantify occlusal contact timings and forces [8]. 3D imaging systems may also be used to create 3D digital models of a patient's teeth, and the orthodontist can determine the size and shape of occlusal contacts using software [9]. Occlusal force distribution (OFD) and occlusal surface area (OSA) indicate how occluding contacts act functionally [10] (a purely illustrative computation of these measures is sketched at the end of this section). Recently, a few studies have evaluated OSA and OFD using the Tekscan system (Norwood, Massachusetts, United States) [10-12].

Retainers are usually worn after active orthodontic treatment to preserve the arch dimension and the alignment of the teeth. They may also facilitate post-treatment settling [13]. Hawley-type retainers (HR) and Begg's wrap-around retainer (BGR) allow vertical settling, as they hold only the lingual and buccal surfaces of the teeth [14,15]. Fixed or bonded retainers allow occlusal settling, which can be attributed to eruption and vertical mobility of the posterior teeth during retention [16]. Removable vacuum-formed retainers (VFRs) cover the occluding surfaces of teeth, thereby exerting a bite-block effect [10]. They are well accepted by patients and are better than other removable retainers in terms of ease of swallowing fluids and esthetics [17]. However, their occlusal coverage can impede vertical settling [18]. Even though a few clinical trials [3,11,12,18] have assessed the occlusal contact changes with VFRs or Essix retainers at the end of retention, there are no systematic reviews addressing the same. The present review was therefore conducted to thoroughly assess and report on the literature that is now available. The current review aims to compare VFRs to other retainers and to critically evaluate the research that is currently available on changes in OSA, OFD, and the number of occlusal contacts (NOC) during the retention period.

Review

Protocol registration

The present review was prepared according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Registration of the review was done with the PROSPERO database (CRD42021245209).

Search strategy

An electronic search of the literature published in the below-mentioned databases was carried out to identify all papers related to the research question: Google Scholar, PubMed, Scopus, Cochrane, and Cochrane Embase. OpenGrey and GreyNet International were searched for grey literature. Keywords were modified for each database. The search, shown in Table 1, covered articles published until July 2023.
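Before the methods are described, the three outcome measures above can be made concrete with a small sketch. Everything in it is invented for illustration: the pressure grid, the cell area, the contact threshold, and the anterior/posterior split do not correspond to the Tekscan sensor's real geometry, calibration, or software output.

```python
import numpy as np

# Hypothetical 4x8 occlusal pressure grid (arbitrary force units).
# Rows run anterior-to-posterior, columns left-to-right; invented numbers.
pressure = np.array([
    [0, 2, 0, 0, 0, 0, 3, 0],   # anterior rows
    [0, 0, 4, 0, 0, 5, 0, 0],
    [6, 0, 0, 7, 8, 0, 0, 6],   # posterior rows
    [9, 0, 5, 0, 0, 6, 0, 9],
], dtype=float)

CELL_AREA_MM2 = 1.5        # assumed occlusal area represented by one grid cell
CONTACT_THRESHOLD = 1.0    # assumed minimum force counted as a contact

contacts = pressure >= CONTACT_THRESHOLD
noc = int(contacts.sum())                    # NOC: number of occlusal contacts
osa = contacts.sum() * CELL_AREA_MM2         # OSA: occlusal surface area, mm^2

total = pressure.sum()
ofd = {                                      # OFD: % of total force per segment
    "anterior": 100 * pressure[:2].sum() / total,
    "posterior": 100 * pressure[2:].sum() / total,
    "left": 100 * pressure[:, :4].sum() / total,
    "right": 100 * pressure[:, 4:].sum() / total,
}

print(f"NOC = {noc}, OSA = {osa:.1f} mm^2")
print(", ".join(f"OFD {k} = {v:.0f}%" for k, v in ofd.items()))
```

In this toy example the posterior segment carries most of the force, which is the pattern the included studies describe for a settled post-treatment occlusion.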
Data collection process

The selection criteria for the papers in this systematic review are mentioned below.

Inclusion Criteria

Human prospective studies and randomized controlled trials (RCTs) (P) comparing VFRs (I) with other removable retainers or no retainers (C) for occlusal parameters (O) such as OSA, OFD, and NOC, assessed using either qualitative methods (articulating paper, silicone impressions, occlusal waxes, etc.) or quantitative methods (T-scan, 3D digital models, or the photo-occlusion system), were included.

Exclusion Criteria

Case series, animal studies, and in vitro studies on occlusal contact changes with VFRs, as well as studies measuring only transverse and anteroposterior changes during retention, were excluded.

The process for the selection of included studies is reported in the PRISMA flowchart (Figure 1). Duplicates were removed using EndNote software version 20 (Clarivate Analytics, Philadelphia, Pennsylvania, United States).

Risk of bias assessment

The risk of bias in the included RCTs was assessed using the Cochrane risk of bias tool. Each RCT was assigned one of three categories: high risk, some concerns, or low risk. The ROBINS-I tool was utilized to evaluate the risk of bias for non-randomized trials. A fourth author (ABS) resolved the disparities after three authors (SS, RM, and RKJ) independently performed the risk of bias assessment. Meta-analysis was not performed, as most of the included studies differed in the time period of measurement, the parameters assessed at different sites, or the comparison appliances.

Results

The electronic search identified a total of 4052 articles. Following duplicate removal, a total of 2597 articles were obtained. Further screening of the titles and abstracts for eligibility yielded a total of 10 papers, which were subjected to full-text reading. From these, one article was excluded since there was no comparison group. The remaining nine studies were included in the qualitative analysis. The identification and screening of the eligible studies and those included in the current review are given in the PRISMA flow diagram (Figure 1).

Among the nine papers included, five were PCTs [4,10,18-20], one was a CCT [3], and the other three were RCTs [11,12,21]. A total of 184 patients were treated with VFRs in the included studies. The characteristics of the studies involved in the review are summarized in Table 2, and the results of the individual studies are summarized in Tables 3, 4, and 5.

Risk of bias assessment

The three RCTs involved in this review showed a moderate risk of bias [11,12,21], as assessed by the Cochrane risk of bias tool (Figure 2). In the study with some concerns, the bias was due to deviation from the intended intervention [21]. Four out of five PCTs were reported with a low risk of bias [4,18-20], and one study reported a moderate risk of bias [10]. One CCT reported a low risk of bias [3], as assessed by the ROBINS-I tool (Figure 3).

FIGURE 3: Risk of bias assessment of the included CCTs using the ROBINS-I tool. Three studies showed bias in the measurement of outcomes, whereas one study showed bias in the selection of participants and missing data.
Study Characteristics

In the study by Kara and Yilmaz, a total of 90 subjects in three different groups were studied (upper BR and HR or lower bonded retainer (HR group), upper BR and lower VFR or BR (VFR group), and upper and lower BR (BR group)) [19]. The digital models were analyzed after a one-year retention phase for OCAs and ABO cast-radiograph evaluation (CRE) scores. Lustig et al. conducted a prospective study to investigate short-term OFD and OSA changes with a sample of 47 subjects (reliability group (15), and VFR and BGR (32)) [10]; T-scan II was used to assess parameters like OSA and OFD changes at three time points (debonding (T0), two weeks (T1), and two months later (T2)).

The NOCs of 30 subjects were examined during the retention phase by Sauget et al. (HRs in both arches (13), maxillary HRs (2), and maxillary and mandibular VFRs (15)) [3], and vinyl polysiloxane bite registration was used to record the NOCs. In the prospective study conducted by Dinçer and Isik Aslan, the NOCs of 30 subjects (non-treated (15), upper and lower VFRs (15)) were evaluated with soft silicone bite registration at the beginning (T0), at the end of retention (T1), and 2.5 years later (T2) [18]. Aslan et al. evaluated the NOCs in centric occlusion during the retention phase in 36 subjects (modified VFRs (18), full-coverage VFRs (18)) with a silicone-based bite registration at the beginning (T1), six months (T2), and nine months (T3) [20]. In the study by Varga et al., 167 subjects (86 with no treatment, 30 with maxillary and mandibular VFRs, 30 with BGR, and 30 with a combination of a fixed mandibular canine-to-canine BR and a VFR in the maxillary arch) were examined to determine the effect of retainers on maximum voluntary bite force (MVBF) and NOCs [4].

In the RCT conducted by Alkan and Kaya, 60 subjects (VFRs (30), HR and BR groups (30)) were assessed for changes in OFD and OSA using T-scan III at T0, three months (T1), and six months (T2) into retention [11]. In another RCT by Alkan et al., 45 subjects (VFR retainer (28), HR (17)) were assessed for OFD, individual tooth force (ITF), and OSA using T-scan III after debonding (T0), at three months (T1), six months (T2), and one year (T3) [12]. In the study by P et al., OFD, occlusion time, and disocclusion time were assessed by T-scan III for BGR and VFRs at debonding (T0) and at 10-12 months of retention (T1) [21]. The primary outcomes of the present review were changes in occlusal contacts, evaluated in the included studies as OSA, OFD, and NOCs, which are elaborated below.

Summary of Findings

OSA or OCA: Four studies assessed the OSA or OCA changes with VFRs and compared them with other retainers [10-12,19]. The measurements were taken in the anterior, posterior, left, and right segments of the dental arches at the time of debonding (T0) and after either six months or one year of retention. In the anterior region, none of the included studies reported a significant increase in OSA with VFRs at the sixth or 12th month, except the study by Lustig et al., which observed an increase in OSA in the anterior segment at the end of two months. Two of the included studies [11,12] reported increased OSA posteriorly with VFRs at six months and 12 months. The study by Kara et al. [19] showed a reduction of OCA in subjects with VFRs, whereas both the HR and BR groups showed an increase in OCA at the end of one year. The study by Lustig et al.
[10] reported that OSA reduced two months after debonding with VFRs. Two included studies [11,12] reported no significant difference in OSA with VFRs when compared to other retainers (HR, BR, and BGR). Kara et al. [19] reported a significant decrease in total OSA in the VFR group; in the comparison with HRs, increased OSA was reported with HRs after one year of retention.

OFD: Four studies evaluated OFD changes with VFRs and compared them with HRs, BRs, and BGR using T-scan [10-12,21]. OFD was recorded in the anterior, posterior, left, and right segments of the dental arches in these studies. No changes in OFD between the anterior and posterior dental segments at the end of six months to one year of retention with VFRs were reported in three studies [11,12,21]. The study by Lustig et al. [10] reported that VFRs showed a decrease in OFD anteriorly and an increase in OFD posteriorly after two months of retention. No change in OFD between either side was noted in any of the studies except one [12], which reported an increase in OFD on the right side one year into retention in the VFR group. All included studies reported no significant difference between VFRs and other retainers for OFD between sides and segments except the study by Alkan et al., who reported an increase in OFD on the left side with HR compared to VFR and an increase in OFD on the right side for both the HR and VFR groups [12].

NOCs: Of the included studies, three reported on changes in NOCs with VFRs and compared them with other retainers (HR, BGR, and BR) [3,4,20], and one study compared them with untreated control subjects [18]. The NOCs were noted in the anterior, posterior, and total segments in most studies. The NOCs with VFRs improved in the anterior region in one study [20], showed no change in other studies [3,4], and were not evaluated in the remaining studies [4,18]. The NOCs with VFRs improved in the posterior region in one study [18], and no significant improvement was noted in the rest of the studies [3,4,20]. Total NOCs were also evaluated [3,4,20], with no significant change concluded for VFRs. On comparison between the groups, it was noted that the other retainers showed more NOCs than VFRs [3,4,20].

Discussion
This systematic review included a total of nine studies, with three RCTs and six CCTs, which evaluated the occlusal contact changes with VFRs and compared them with other types of retainers such as HR, BGR, and BR. Changes in occlusal contacts were reported in the available literature in terms of OSA, OFD, and NOC [22]. Only studies reporting on these changes with VFRs and comparing them with other retainers were included in this review. Occlusal contact changes were recorded after the completion of fixed orthodontic treatment and were assessed for a maximum period of 2.5 years, but the time intervals varied among the included studies.
OSA or OCA gives the area of occlusal contact in mm², measured for individual teeth and reported for either side of the jaw or for different regions (anterior, posterior) [23]. At the end of active orthodontic treatment, occlusal forces that are adequately distributed on either side of the jaws maintain adequate stability and good muscle balance [11]. The stability of the corrected malocclusion is ensured by an adequate NOC during the retention phase [23]. VFRs have gained popularity over the years due to their ease of construction and aesthetic appearance [10]. However, owing to their very design, they are assumed to allow less vertical settling compared to other retainers. HRs or BGRs are considered an effective method of retention following fixed orthodontic treatment due to their lack of occlusal coverage [24]. However, according to the current systematic review, the occlusal contact changes with VFRs are comparable with those of the other retainers.

Similar retention protocols were used in three studies: full-time wear for the first six months, followed by nighttime use for the following six months [11,12,19]. OSA improved with VFRs over a period of six months to one year into retention, as reported by Alkan and Kaya and Alkan et al., and these two studies reported a low risk of bias [11,12]. Kara et al. [19] reported that OSA reduced with VFRs at the end of one year of evaluation, and this study also had a low risk of bias.

The distribution of occlusal forces in the two halves of the jaws' anterior and posterior regions was reported at two months [11,12], six months [11,12], and 12 months [10,12,21] in the included studies. The retention protocols were similar in three studies [11,12,21], except in the study by Lustig et al., where the evaluation was done for only two months. The OFD changes were recorded with T-scans in three studies [10-12]. According to three studies, OFD was uniform on both sides, with more force on the posterior teeth and less on the anterior teeth towards the end of the retention phase. OFD at the end of retention is not affected by the type of retainer used.

NOCs give an idea of how many teeth are in functional contact. On reviewing the literature qualitatively, we noted that there was no improvement in NOCs with VFRs, and HRs were found to have better NOCs than VFRs [3,4,18,20]. However, the studies included in this systematic review showed some differences in the assessment period, retainer wear protocol, retainer dimensions, and methods of evaluation. Different retention protocols were used in the involved studies, with full-time wear ranging from three days to six months, followed by nighttime wear ranging from four weeks to three months. The dimensions of the material used to fabricate the VFRs varied among the included studies, ranging from 0.025 to 0.04 inches. The methods used for NOC registration included silicone-based impression materials in three studies [3,18,20] and plastic foils in one study [4]. Since there were many differences in the clinical protocols used and in the duration of treatment among the studies, pooling of data and a subsequent meta-analysis could not be done.
Systematic reviews comparing VFRs and HR retainers in terms of cost-effectiveness, patient satisfaction, survival time, and occlusal contacts concluded that there were very few differences between them and that high-quality studies are needed to determine which is the better retainer [25,26]. A previous systematic review reported that NOCs improved in patients on HRs but found no difference when compared to other retainers [27]. Conclusions from that review may not be valid, since it included studies that reported only on NOCs, whereas an assessment of the area and distribution of occlusal contacts is more important. A recently published systematic review on occlusal settling with removable and bonded retainers concluded that Hawley retainers allowed better occlusal settling than Essix retainers, which is in consensus with the present review. The present review differed from the review by Shoukat Ali et al. in that only VFRs were specifically compared with other retainers and occlusal biting force was not considered [28].

In the current study, meta-analysis was not performed as very high methodological heterogeneity was reported. The included studies reported occlusal contact changes at varying time intervals, different retention protocols were employed, fabrication of the retainers varied, and the methods of evaluation differed.

Limitations
The review lacks a sufficient number of high-quality RCTs reporting on OCA or OFD, and only a small number of patients were treated with VFRs. Methodological differences among the included studies contributing to heterogeneity are one of the main limitations of the present review. Well-designed RCTs assessing the stability of corrected malocclusions along with OCA and OFD are required.

Conclusions
With the limited evidence available, it can be concluded that OSA improved with VFRs during retention, with no difference when compared to other retainers. OFD between either side or the anterior/posterior regions with VFRs during retention is similar to that of any of the retainers, and patients treated with Hawley retainers had greater occlusal contacts during retention than those treated with VFRs.

FIGURE 2: Risk of bias assessment of the included RCTs using the Cochrane risk of bias tool
Characteristics and phylogenetic analysis of the complete mitochondrial genome of Microstomus achne

Abstract
Microstomus achne (Jordan and Starks, 1904) is an economically valuable flatfish belonging to the family Pleuronectidae and the only flatfish of its genus that inhabits Korea. Here, we report the complete mitochondrial genome of M. achne and its phylogenetic relationship to closely related species. The mitogenome is 16,971 bp long and encodes 13 protein-coding genes (PCGs), 22 transfer RNAs, and two ribosomal RNAs. The phylogenetic analysis showed that M. achne clustered with Glyptocephalus stelleri, which supports the conclusion that M. achne belongs to the family Pleuronectidae. The results of this study provide a better understanding of M. achne.

Introduction
Microstomus achne (Jordan and Starks, 1904) is a benthic fish belonging to the order Pleuronectiformes and family Pleuronectidae, and its most striking morphological characteristic is the white spots on the body (Cooper and Chapleau 1998). The genus Microstomus was reported to comprise 10 nominal species worldwide, five of which have recently been excluded from the genus, and four of which are currently accepted as valid (Norman 1934; Cooper and Chapleau 1998; Froese and Pauly 2022). Of this genus, only M. achne is known to inhabit Korea (Kim et al. 2005). However, genetic information on members of this genus is remarkably limited, and in particular, no complete mitochondrial DNA sequences have yet been reported for it. We describe the complete mitochondrial genome of M. achne, which was acquired using next-generation sequencing, and we anticipate that this information will help to clarify the phylogenetic status of M. achne.

Materials
We obtained the fin of a flatfish that was collected from Boryeong, South Korea (36°22′N, 126°34′E) and deposited in the Pukyong National University storage facility (Figure 1).

Methods
The genomic DNA was extracted using the PureHelix™ Genomic DNA Prep Kit [Animal], Solution Type (NANOHELIX, Daejeon). For species identification using the cox1 sequence, the cox1 gene was amplified by PCR using the fish universal primer set (Ward et al. 2005) and sequenced by Macrogen (South Korea). The partial cox1 sequence was compared using a BLASTN (Johnson et al. 2008) search. Raw data from next-generation sequencing were obtained using the method described in Chae et al. (2022) and then deposited in the Sequence Read Archive (SRA) database (SRR21722070). The raw data were trimmed with Cutadapt ver. 4.1 (Martin 2011), and a contig sequence was produced with the default options of the de novo assembler in the CLC Genomics Workbench (ver. 20.04; QIAGEN). The circular form of the mitogenome was confirmed by using the "Map to Reference" tool in Geneious software (ver. 2021.2.2; https://www.geneious.com) to map the filtered data onto the contig sequence. Annotation of this final sequence was performed in the MITOS Webserver (Bernt et al. 2013), after which the detailed annotation was manually corrected using SnapGene software (ver. 5.3.2; GSL Biotech LLC; snapgene.com). Finally, the completed circular form of the mitogenome sequence was registered at NCBI GenBank (OP066370). The mitogenome map was prepared using OGDRAW (Greiner et al. 2019). For the phylogenetic analysis, mtDNA sequences of related species (including Cynoglossus sinicus, JQ348998, and Cynoglossus roulei, MK574671) were retrieved from GenBank. The nucleotide sequences of the protein-coding genes (PCGs) were aligned and analyzed using a GTR substitution model and a chain length of 1,100,000.
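The species-identification and read-trimming steps above are command-line operations. The following Python sketch is an illustrative reconstruction, not the authors' actual commands: it assumes hypothetical file names (raw.fastq.gz, cox1.fasta) and generic parameter choices, and requires Cutadapt and NCBI BLAST+ to be installed locally.

```python
import subprocess

# Quality-trim raw reads with Cutadapt (assumed quality cutoff and minimum
# length; the paper states only that Cutadapt ver. 4.1 was used).
subprocess.run(
    ["cutadapt", "-q", "20", "-m", "50",
     "-o", "trimmed.fastq.gz", "raw.fastq.gz"],
    check=True,
)

# Identify the species from the partial cox1 sequence with a remote BLASTN
# search against NCBI nt, keeping the top hits in tabular form.
subprocess.run(
    ["blastn", "-query", "cox1.fasta", "-db", "nt", "-remote",
     "-max_target_seqs", "5",
     "-outfmt", "6 sacc pident evalue stitle",
     "-out", "cox1_hits.tsv"],
    check=True,
)
```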
Results
The total length of the final mitogenome was 16,971 bp, and it comprised 13 PCGs, 22 tRNA genes, and two rRNA genes. Its gene order matched that of Glyptocephalus stelleri (MT258402). Among the PCGs, only nad6 was transcribed on the negative strand, and all others were transcribed on the positive strand (Figure 2). The ATG codon was used as the start codon in 12 PCGs (nad1, nad2, cox2, atp8, atp6, cox3, nad3, nad4l, nad4, nad5, nad6, and cob), while the GTG codon was used as the start codon in cox1. nad1, atp8, atp6, nad4l, and nad5 used the TAA stop codon. A truncated T codon terminated translation in nad2, cox2, nad3, nad4, and cob, and a truncated TA codon terminated translation in cox3. The AGA codon and the TAG codon were utilized as stop codons in cox1 and nad6, respectively.

The mitogenome of M. achne contained 22 tRNA genes, including two tRNA-L and two tRNA-S. Of these tRNA genes, 14 (tRNA-F, tRNA-V, tRNA-L2, tRNA-I, tRNA-M, tRNA-W, tRNA-D, tRNA-K, tRNA-G, tRNA-R, tRNA-H, tRNA-S1, tRNA-L1, and tRNA-T) were transcribed on the positive strand; the remaining tRNA genes (tRNA-Q, tRNA-A, tRNA-N, tRNA-C, tRNA-Y, tRNA-S2, tRNA-E, and tRNA-P) were transcribed on the negative strand (Figure 2). In the predicted secondary structures, there were both standard and abnormal features in the tRNA loops and stems. tRNA-C and tRNA-S1 contained an abnormality in the D-loop. Imperfect base pairing in the T-loop was observed in tRNA-V, tRNA-W, tRNA-M, tRNA-N, and tRNA-E. In addition, incomplete base pairing in the acceptor stem was found in tRNA-F, tRNA-V, tRNA-I, tRNA-R, tRNA-H, tRNA-L1, and tRNA-T. Uniquely, tRNA-S1 had partial base pairing in every stem. The other genes possessed standard tRNA structures.

The two rRNA genes were located close to the border of tRNA-V (Figure 2). The small rRNA was placed between tRNA-F and tRNA-V, while the large rRNA was placed between tRNA-V and tRNA-L2 (Figure 2). The small and large rRNAs were 951 bp and 1713 bp in length, respectively. The putative control region was located between tRNA-P and tRNA-F and had a length of 1261 bp (Figure 2). Each species was identified as belonging to either the family Pleuronectidae, the family Cynoglossidae, or the outgroup (A. fulvescens). M. achne clustered with G. stelleri of the family Pleuronectidae and was separated from the other nodes in the phylogenetic analysis (Figure 3).

Discussion and conclusion
Microstomus achne is the only flatfish belonging to the genus Microstomus that can be observed in South Korea. In addition, this genus lacked any complete mitogenomes. Here, the complete mitochondrial genome of M. achne was identified, and its genetic characteristics were elucidated. Significantly, the gene composition was the same as the general mitogenome composition of vertebrates (Pereira 2000). In addition, gene composition and order had no significant differences compared with the family Pleuronectidae. This is the first study to report the complete mitogenome of a flatfish of the genus Microstomus. These data can be utilized to reveal the phylogenetic relationships between members of the genus Microstomus, especially M. achne.

Acknowledgment
This article was reviewed by Dr. Khawaja Muhammad Imran Bashir for English; the authors are thankful for his suggestions and support.

Author contributions
Jun Young Chae, Moo-Sang Kim, Jinkoo Kim, and Hyung-Ho Lee conceived the idea for this study. Jun Young Chae conducted the experiments. Jun Young Chae wrote the manuscript with support from Moo-Sang Kim and Hyung-Ho Lee.
Tae-Wook Kang and Jin Ho Kim performed the data analysis, and Jinkoo Kim supplied the specimen. All authors agree to be accountable for all aspects of the work.

Ethical approval
No ethical approval was required for this study. We used a flatfish fin from a specimen that was previously collected by the MBRIS, outside this study. The specimen was dead, and the sample was provided with permission by the MBRIS (permission no. 2022-130).

Disclosure statement
No potential conflict of interest was reported by the author(s).

Data availability statement
The genome sequence data supporting this study's findings are available in NCBI GenBank (https://www.ncbi.nlm.nih.gov/) under accession no. OP066370. The associated BioProject, SRA, and BioSample numbers are PRJNA882383, SRR21722070, and SAMN30933649, respectively.
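The start/stop-codon summary reported in the Results can be reproduced directly from the deposited GenBank record. Below is a minimal Biopython sketch, an illustrative addition rather than part of the original paper, that fetches accession OP066370 and tallies the first and last codons of each annotated CDS; the contact e-mail is a placeholder, and the truncated T/TA stop codons will appear as partial trailing codons rather than complete triplets.

```python
from collections import Counter
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

# Fetch the annotated mitogenome record deposited by the authors.
handle = Entrez.efetch(db="nucleotide", id="OP066370",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

starts, stops = Counter(), Counter()
for feature in record.features:
    if feature.type != "CDS":
        continue
    gene = feature.qualifiers.get("gene", ["?"])[0]
    cds = str(feature.extract(record.seq))
    # For truncated stops (T-- or TA-), len(cds) % 3 != 0 and the trailing
    # "codon" is incomplete; it is completed by polyadenylation of the
    # transcript rather than in the genomic sequence itself.
    start, tail = cds[:3], cds[-(len(cds) % 3 or 3):]
    starts[start] += 1
    stops[tail] += 1
    print(f"{gene}: start={start} stop={tail} length={len(cds)} nt")

print("Start codon usage:", dict(starts))
print("Stop codon usage:", dict(stops))
```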
A Stewardship Intervention Program for Safe Medication Management and Use of Antidiabetic Drugs

Background: Diabetes patients are complex due to considerations of polypharmacy, multimorbidities, medication adherence, dietary habits, health literacy, socioeconomic status, and cultural factors. Meanwhile, insulin and oral hypoglycemic agents are high-alert medications. It is therefore necessary to engage a multidisciplinary team's integrated efforts to enhance safe medication management and use of antidiabetic drugs.

Methods: A 5-year stewardship intervention program, including organizational measures and quality improvement activities in storage, prescription, dispensing, administration, and monitoring, was performed in the Second Affiliated Hospital of Zhejiang University, People's Republic of China, a 3,200-bed hospital with 3.5 million outpatient visits annually.

Results: The Second Affiliated Hospital of Zhejiang University has obtained a 100% implementation rate of standard storage of antidiabetic drugs in the Pharmacy and wards since August 2012. Zero occurrence of dispensing errors related to the highly "look-alike" and "sound-alike" NovoMix 30® (biphasic insulin aspart) and NovoRapid® (insulin aspart) has been achieved since October 2011. Insulin injection accuracy among ward nurses significantly increased from 82% (first quarter 2011) to 96% (fourth quarter 2011) (P<0.05). The number of medication administration errors related to insulin continuously decreased from 20 (2011) to six (2014). The occurrence rate of hypoglycemia in non–endocrinology ward diabetes inpatients during 2011–2013 was significantly less than that in 2010 (5.03%–5.53% versus 8.27%) (P<0.01). The percentage of correct management of hypoglycemia by nurses increased from 41.5% (April 2014) to 67.2% (August 2014) (P<0.01). The percentage of outpatient diabetes patients receiving standard insulin injection education increased from 80% (April 2012) to 95.2% (October 2012) (P<0.05). Insulin injection techniques among diabetes outpatients who started to receive insulin were better than indicated in data from two questionnaire surveys in the literature, including the percentage checking injection sites prior to injection (85.6%), priming before injection (98.1%), rotation of injection sites (98.1%), remixing before use (94.5%), keeping the pen needle under the skin for 10 seconds (99.4%), and using the pen needle only once (88.7%). On-site inspection indicated a great improvement in the percentage of drug-related problems in the antidiabetes regimens between the first and second quarters of 2014 (1.08% versus 0.28%) (P<0.05).

Conclusion: Quality improvements in safe medication management and use of antidiabetic drugs can be achieved by multidisciplinary collaboration among pharmacists, nurses, physicians, and information engineers.

Introduction
Diabetes patients are usually complex due to considerations of polypharmacy, multimorbidities, medication adherence, dietary habits, health literacy, socioeconomic status, and cultural factors. 1 Meanwhile, insulin and oral hypoglycemic agents are included in the Institute for Safe Medication Practices (ISMP) list of high-alert medications that bear a heightened risk of causing significant patient harm when used in error. 2 Therefore, safe medication use of antidiabetic drugs should arouse wide concern.
Physicians need to be aware of the pharmacological mechanism of each class of drugs, contraindications, precautions, drug-drug interactions (DDIs), and adverse effects to formulate a safe and effective management plan for diabetes patients. 3 Several studies have described that the situation in medication management and use (MMU) of antidiabetic drugs is not optimistic. Classen et al reported that 10.7% of patients exposed to insulin and hypoglycemic agents experienced associated adverse drug events, from the 2004 Medicare Patient Safety Monitoring System sample's medical records. 4 Geller et al estimated the insulin-related hypoglycemia and errors leading to Emergency Department visits and hospitalizations in the USA; the most commonly identified insulin-related hypoglycemia was attributed to reduced food intake and administration of the wrong insulin product. Severe neurologic sequelae and blood glucose levels of 50 mg/dL or less were documented in an estimated 60.6% and 53.4% of cases, respectively. 5 Milligan et al analyzed adverse drug events in older people with diabetes in the care home setting, via incident reports obtained from the National Reporting and Learning Service in the UK during 2005-2009. They found 684 reports related to insulin and 84 incidents related to oral hypoglycemic drugs. The most common error category with both types of drug therapy was a wrong or unclear dose. 6 A study in 2012 showed that three-fourths of insulin pen users did not follow the manufacturer's instructions for proper administration and storage of insulin pens. 6 Therefore, it is necessary for clinicians and patients to coordinate and participate in the rational use of antidiabetic drugs. Mitchell et al observed that correct usage scores were significantly higher if initial education on insulin pens was performed by a pharmacist or nurse. 7 Cohen discussed pharmacists' role in ensuring safe and effective hospital use of insulin in the inpatient setting by minimizing the likelihood of medication errors related to prescribing, transcription, dispensing, administration, storage, and communication. 8 However, there is little literature on multidisciplinary teams' integrated endeavors to continuously enhance safe MMU of antidiabetic drugs in large-scale hospitals.

The Second Affiliated Hospital of Zhejiang University (SAHZU), a comprehensive academic medical center hospital in the People's Republic of China, successfully passed Joint Commission International (JCI) accreditation on February 24, 2013. 9 SAHZU performed continuous quality improvements in safe MMU of high-alert medications during the journey to JCI accreditation and in the post-JCI accreditation era. The aim of this article is to discuss the effectiveness of a stewardship intervention in MMU of antidiabetic drugs and to provide some reference for international counterparts.

Data collection
A 5-year intervention program, covering the period from 2010 to 2014, focused on MMU of insulin/insulin analogs and hypoglycemic drugs in SAHZU, a 3,200-bed hospital with 3.5 million outpatient visits annually (data in 2013) in Zhejiang Province, which has a population of approximately 54.4 million. The implementation rate of standard storage of antidiabetic drugs in the Pharmacy as well as in the wards was calculated from on-site inspection results. The appropriateness of antidiabetic regimens for inpatients was evaluated by diabetes specialist nurses and auditing pharmacists.
Data on insulin injection accuracy among ward nurses, the coverage percentage of standard insulin injection education for diabetes outpatients, and insulin injection techniques among diabetes outpatients who started to receive insulin therapy were obtained from on-site inspection, record forms, and follow-up. Adverse drug reactions (ADRs) and medication errors related to antidiabetic drugs were retrospectively analyzed by retrieving data from an online no-fault reporting system available to all staff. All hypoglycemia events were derived from a special online electronic platform for diabetes nursing, and the occurrence rate of hypoglycemia in diabetes patients was then calculated. The data presented in the study are available in the archives of the Drug and Therapeutics Committee of SAHZU. Access to and use of these data require permission from the SAHZU Drug and Therapeutics Committee. The study was approved by the Ethics Committee at SAHZU, and it was in compliance with the Helsinki Declaration.

Comprehensive intervention measures
Organizational measures
SAHZU established a team of diabetes nursing specialists in September 2009. After a 2-year endeavor, the team had ten head nurses as core members, three full-time diabetes specialist nurses, and 61 part-time diabetes specialist nurses, on the basis of "one part-time diabetes specialist nurse per ward". A three-level diabetic nursing management system was thus formed: (1) primary nurses were responsible for providing basic nursing and patient education (first level); (2) part-time diabetes specialist ward nurses further strengthened diabetic patient education and contacted full-time diabetes specialist nurses for consultation, if necessary (second level); and (3) full-time diabetes specialist nurses provided a consultation service and regular on-site inspection (third level). Academic salons were held quarterly by the team of diabetes nursing specialists. Physicians and diabetes nurses worked together to treat outpatients in the Diabetes Center. Key indicators of the diabetes specialist nursing service are listed in Table 1. According to JCI accreditation standards, a working group, named "MMU", was established in 2011 and played a pivotal role in quality and patient safety associated with medications. Information regarding ADRs, medication errors, and hypoglycemia events was reported to the Division of Medical Management, Division of Nursing, Pharmacy, and Office of Quality Management. Targeted quality improvement activities were then carried out.

Standardized storage
Standard storage of antidiabetic drugs focuses on the following points: (1) Unopened bottles or pens of insulin should be kept in the refrigerator until needed and may be used until the expiry date on the label. Insulin that is currently in use should be stored at room temperature for no more than 28 days and then discarded. (2) Storing two insulin formulations with similar-sounding names and similar-looking labels in close proximity could easily lead to confusion; therefore, insulin must be stored in separate compartments corresponding to each patient, and each product in use should be accurately labeled with the patient name, identification number, start date, and expiry date (Figure 1). (3) Organizational policy defines how medications delivered by the patient are identified and stored.
Insulin formulations and oral hypoglycemic drugs should not be stored at the patient's bedside. (4) In the Pharmacy, all antidiabetic drugs should be stored in a special location with standard labels indicating high-alert medications. In each ward, a standard high-alert medication label should be affixed to the place where insulin/insulin analogs are stored. (5) Strengthened management was performed regarding look-alike and sound-alike (LASA) antidiabetic drugs. The list and color photos of LASA medications are available in the hospital local area network. LASA medications are placed apart from one another on the Pharmacy shelf, especially when LASAs do not act alike, eg, Tritace® (ramipril tablets; Sanofi S.A., Paris, France) and Amaryl® (glimepiride tablets; Sanofi S.A.), and Monopril® (fosinopril sodium tablets; Sino-American Shanghai Squibb Pharmaceutical Co Ltd, Shanghai, People's Republic of China) and Glucophage® (metformin hydrochloride tablets; Sino-American Shanghai Squibb Pharmaceutical Co Ltd).

Standardized prescribing
Physicians are required to follow the current edition of the National Guideline for Prevention and Treatment of Type 2 Diabetes Mellitus published by the Chinese Diabetes Society. 10 When a physician prescribes a LASA medication via the electronic medical record (EMR), the EMR interface displays a yellow background for this physician order. For LASA insulins, a special warning is shown during prescription. For example, a warning (ie, "Please pay attention to the distinction between NovoMix 30® [biphasic insulin aspart] and NovoRapid® [insulin aspart]") will display when a physician prescribes NovoRapid® (Novo Nordisk A/S, Hellerup, Denmark). Nonstandard abbreviations of medical terminology (eg, "U" and "IU") are prohibited in physician orders for insulin. The insulin infusion speed and measurable goals for the blood glucose level must be specified when the physician order is written. Sulfonylurea hypoglycemic drugs (eg, tolbutamide, glipizide, gliclazide, glibenclamide, glibornuride, gliquidone, glyclopyramide, and glimepiride) are contraindicated in patients with a history of allergy to sulfonamide derivatives, such as antimicrobial sulfonamides, diuretics (hydrochlorothiazide, amiloride, and indapamide), COX-2 inhibitors (celecoxib and parecoxib), sulfonylureas, and probenecid. 11 If an insulin pen brought in by the patient must be continuously used after admission, medication reconciliation should be conducted, and the patient's written informed consent must be obtained. Physicians should verify whether the insulin pen is pharmacologically incompatible with other medications during the patient's hospital stay. Additionally, the physician order for this insulin must be written, via the EMR, in the orders, to let the auditing pharmacist know about the use of this medication brought in by the patient.

Standardized dispensing
SAHZU improved the interface of the Pharmacy management information system for prescription auditing in January 2013. Using this sophisticated software, pharmacists can see patient information, including age, diagnosis, allergy history, body weight, pregnancy status, clinical laboratory data (eg, blood glucose levels), and drug information, such as approved drug name, dose, administration route, dosing frequency, and the list of all current medications; all such information is visually displayed in the same interface. 12
Pharmacists should be aware of "near misses" related to inappropriate use of abbreviations, such as "U" and "IU" instead of "units". The potential consequences, clinical relevance, and risk management of DDIs associated with oral antidiabetic drugs are listed in Table 2. 13-27 An excerpt of the repaglinide- and thiazolidinedione-related rows of Table 2 follows:

- Nateglinide + fluconazole: repaglinide is an alternative to nateglinide when fluconazole is coprescribed. 21
- Repaglinide + clarithromycin: even low doses of clarithromycin can increase the plasma concentrations and effects of repaglinide due to CYP3A4 inhibition; the risk of hypoglycemia may increase when the two drugs are comedicated. 22 A reduction in repaglinide dosage may be necessary, and physicians may also consider an alternative antibiotic (eg, azithromycin).
- Repaglinide + cyclosporine: the AUC and Cmax of repaglinide will be markedly raised due to CYP3A4 and OATP1B1 inhibition by cyclosporine, resulting in an increased risk of hypoglycemia. 23 Nateglinide is an alternative to repaglinide when cyclosporine is coprescribed.
- Repaglinide + gemfibrozil: the AUC and Cmax of repaglinide will significantly increase due to CYP2C8 and OATP1B1 inhibition by gemfibrozil; hypoglycemic effects may be potentially augmented. 24 Bezafibrate and fenofibrate are alternatives to gemfibrozil when repaglinide is coadministered. 25 Meanwhile, nateglinide is an alternative to repaglinide when gemfibrozil is coprescribed. 26
- Pioglitazone (thiazolidinedione) + gemfibrozil: the AUC of pioglitazone will significantly increase due to CYP2C8 inhibition by gemfibrozil, especially in CYP2C8*3 carriers. 27 Hypoglycemic effects may be potentially augmented when gemfibrozil is coprescribed.

Auditing pharmacists should check the appropriateness of drug combinations and communicate with physicians if controversial physician orders are identified. The inpatient Pharmacy started to provide a centralized intravenous admixture service for insulin infusion preparation in November 2010. The dosing time for oral hypoglycemic drugs should be checked by pharmacists. Although most oral hypoglycemic agents should be ingested 15-30 minutes before a meal, some have specific requirements; for example, acarbose tablets should be taken at the start of main meals (taken with the first bite of the meal). Diamicron® (gliclazide modified release; Servier Laboratories, Neuilly-sur-Seine, France) and Glucotrol XL® (glipizide extended release; Pfizer, Inc., New York, NY, USA) should be given once daily with breakfast. Metformin should be taken with meals to help reduce stomach or bowel side effects. SAHZU introduced two unit-dose automated dispensing machines in January 2011. An oral diabetic drug is separately packaged into a polymer bag on which the special dosing requirement is printed if it has a special requirement for dosing time, which greatly helps nurses administer oral diabetic drugs at the right time. LASA insulins/insulin analogs should be dispensed with obvious distinction. Considering that antidiabetic drugs are fall risk-increasing medications due to potential hypoglycemic events, 28 SAHZU required that all insulins/insulin analogs and oral hypoglycemic drugs dispensed by the Pharmacy be labeled with increased fall risk warnings beside the identification of high-alert medications. If outpatients and inpatients receive antidiabetic therapy at discharge, they get special written patient education from the Pharmacy, covering requirements for medication storage, dosing time, DDIs, and awareness of increased fall risk.

Administration
All kinds of insulin/insulin analogs should be administered according to the Chinese edition of the "Injection Recommendation for Patients with Diabetes" 29 published by the Chinese Diabetes Society in 2011. In April 2012, diabetes specialist nurses initiated a plan-do-check-act (PDCA) cycle to improve the coverage percentage of standard insulin injection education for outpatients with diabetes. The process is as follows: (1) The attending endocrinology physician assures that there
is an indication for using insulin and that the patient has never received insulin therapy. A special seal, reading "please start insulin therapy only after receiving standard injection technique education", is then affixed to the outpatient's medical record. The patient is instructed to go to the Diabetes Center for special education. Only after receiving injection education can the patient be permitted to get insulin syringe needles from the Diabetes Center. (2) A diabetes specialist nurse (ie, teaching staff) prepares two copies of the insulin injection training record sheet and education card for each patient. Through repeated teaching and mock injection on a model, patients or authorized family members gradually master injection skills. After signature by both the teaching staff and the patient, a copy of the training record sheet and education card is given to the patient. (3) The teaching staff signs the outpatient medical record so as to let the attending physician know that the patient has completed the process of insulin injection education. Meanwhile, diabetes specialist nurses conduct a telephone follow-up 1 week later, using the record sheet of insulin injection education for diabetic outpatients. Spot checks on insulin injection accuracy among ward nurses are conducted quarterly: fifty nurses' activities are examined using mock injections on a model every quarter, and specialized training is then given. Furthermore, SAHZU has required, since January 2013, that two licensed health care professionals perform a "double check" prior to administering intravenous infusions of insulin, by implementing a standardized independent double-check process.

Monitoring
Physicians and nurses should document ADRs following antidiabetes therapy. Diabetic nurse specialists should investigate the occurrence rate and cause of hypoglycemia among diabetes inpatients. Furthermore, the process of diagnosis and treatment of hypoglycemia was standardized hospital-wide in April 2014.

Outcome measures
The outcome measures of the intervention program included the implementation rate of standard storage in the Pharmacy as well as in the wards, occurrence of dispensing errors related to antidiabetic drugs, insulin injection accuracy among ward nurses, the number of actual medication administration errors (MAEs) related to insulin/insulin analogs and oral hypoglycemic agents, occurrence rates of hypoglycemia in diabetes inpatients who were not hospitalized in the endocrinology ward, occurrence rate of hypoglycemia in diabetes patients in neurology wards, percentage of correctly managed hypoglycemia, the coverage percentage of standardized insulin injection education for diabetes outpatients, insulin injection techniques among diabetes outpatients who start to receive insulin therapy, and percentage of drug-related problems in the antidiabetes regimens.

Statistical analysis
A descriptive analysis was performed. Pearson's chi-square test was used for testing percentage differences between two groups. A P-value <0.05 was considered to be statistically significant.
A P-value 0.01 was considered to be highly significant. Results and discussion Implementation rate of standard storage SAHZU has achieved a 100% implementation rate of standard storage of antidiabetic drugs in the Pharmacy and wards since August 2012. Meanwhile, the phenomenon that a vial of regular insulin was used for multiple inpatients was abolished from then on. Medication errors In August 2011, there were six medication errors related to NovoMix 30 ® and NovoRapid ® , which were two products that looked very similar (Figure 2), including four near misses (one prescribing error and three dispensing errors) and two actual MAEs. The inpatient Pharmacy immediately performed quality improvements, emphasizing that NovoMix 30 ® must be marked with an additional label specifying "Novomix30 ® " and colored "blue", which was distinctive from the background color of NovoRapid ® (ie, "orange"). Since October 2011, SAHZU has achieved zero occurrence of dispensing errors related to NovoMix 30 ® and NovoRapid ® . The number of actual MAEs related to insulin/insulin analogs exhibited continuous decrease in number, from 20 (data in 2011) to six (data in 2014) (Figure 3), and the relative percentage of MAE subtype is presented in identified and intercepted 13 potential adverse DDIs, including clarithromycin-repaglinide (n=6), fluvastatinsulfonylurea (n=3), fluvoxamine-gliclazide (n=1), and metformin-contrast agents (n=3), in 2013. Alternative metabolizing enzyme inhibitors (clarithromycin, fluvastatin, and fluvoxamine) and cessation of metformin before contrast-enhanced examination were suggested by pharmacists, and these suggestions were accepted by physicians. Zero occurrence of abbreviations such as "IU" and "U" and of physician orders without noting infusion rate has been achieved since October 2012. From the beginning of February 2011, physician orders with inappropriate dosing time of oral hypoglycemic agents were abolished. Moreover, the appropriateness of insulin-glucose combination in total parenteral nutrition (TPN) admixture aroused pharmacist's special concern. The amount of regular insulin given (added directly to the TPN solution) depends on the plasma glucose level; if the level is normal and the final solution contains 25% dextrose, the usual starting dose is 5 to 10 units of regular insulin/L of TPN fluid. Anecdotally, in October 2013, a diabetic patient who just had thoracic surgery was prescribed with TPN therapy. Insulin, 28 units, was included in this TPN order; however, his physician mistakenly prescribed 5% glucose injection instead of 50% glucose injection as the type of carbohydrate. An auditing pharmacist successfully intercepted this prescribing-related near miss with potential severe hypoglycemia. inpatients (Table S1). In the first quarter, antidiabetes regimens for 1,200 diabetes patients were checked, and 13 cases (1.08%) were identified as having drug-related problems, including inappropriate choice of insulin (n=2), inappropriate drug combination (n=1), inappropriate dosing frequency (n=3), inappropriate dosing route (n=1), and poor awareness of medication reconciliation (n=6). Targeted lectures were then provided by a senior endocrinology physician, a diabetes nurse specialist, and a clinical pharmacist. In the second quarter, antidiabetes regimens for 1,400 diabetes patients were checked, and only four cases (0.28%) were observed with drug-related problems, ie, inappropriate choice of insulin (n=4). 
There was a statistically significant difference in the percentage of drug-related problems in the antidiabetes regimens between the two quarters (1.08% versus 0.28%) (P<0.05 [chi-square test]). Medication reconciliation was effectively strengthened in the second quarter of 2014.

Insulin injection technique
The coverage percentage of standard insulin injection education for outpatients with diabetes successfully increased from 80% (April 2012) to 95.2% (October 2012) (P<0.05 [chi-square test]). After the interventions, in October 2012 insulin injection techniques were improved among diabetic outpatients who started to receive insulin therapy, including the percentage checking injection sites prior to injection (85.6%), priming before injection (98.1%), rotating injection sites (98.1%), remixing before use (94.5%), keeping the pen needle under the skin for 10 seconds (99.4%), and using the pen needle only once (88.7%). The corresponding data seemed more optimistic than those from an insulin injection technique questionnaire survey conducted in 16 countries and 20 centers in mainland People's Republic of China (Table 3). 30

The occurrence rate of hypoglycemia in non–endocrinology ward diabetes inpatients during 2011–2013 was significantly lower than that in 2010 (Figure 5). Furthermore, the process of diagnosis and treatment of hypoglycemia was standardized hospital-wide, and continuous quality improvement was achieved regarding the percentage of correctly managed hypoglycemia during April-August 2014 (Figure 6).

Limitations
Although our program may be of interest to health care professionals elsewhere, it has several limitations. Firstly, the paper is largely descriptive. Ideally, it would have been even better if we had had controls (another hospital without the program). Nevertheless, it includes a longitudinal follow-up, and one can appreciate the gradual improvement in outcomes year by year. Secondly, the program seems simplistic, and it is obvious that if we undertake "strong measures", we may expect "strong results". We did not evaluate the pharmacoeconomic issues (ie, the cost/benefit ratio, the "human cost", antidiabetes efficacy, and satisfaction of patients and medical staff) or the applicability of these measures over time. Thirdly, the number of ADRs seemed too few, indicating further opportunities for improvement in ADR surveillance.

Conclusion
In this article, we introduced a 5-year continuous intervention program focusing on safe MMU of "high-alert" antidiabetic drugs and summarized the related risk-management measures and quality improvement activities in medication storage, prescribing, dispensing, administration, and monitoring that were implemented at SAHZU during the journey to JCI accreditation and in the post-JCI accreditation era. The goals of the intervention program were achieved through multidisciplinary collaboration among pharmacists, nurses, physicians, and information engineers.
Maladaptive functional changes in alveolar fibroblasts due to perinatal hyperoxia impair epithelial differentiation

Infants born prematurely worldwide have up to a 50% chance of developing bronchopulmonary dysplasia (BPD), a clinical morbidity characterized by dysregulated lung alveolarization and microvascular development. It is known that PDGFR alpha-positive (PDGFRA+) fibroblasts are critical for alveolarization and that PDGFRA+ fibroblasts are reduced in BPD. A better understanding of fibroblast heterogeneity and functional activation status during pathogenesis is required to develop mesenchymal population-targeted therapies for BPD. In this study, we utilized a neonatal hyperoxia mouse model (90% O2, postnatal days 0-7, PN0-PN7) and performed studies on sorted PDGFRA+ cells during injury and room air recovery. After hyperoxia injury, PDGFRA+ matrix and myofibroblasts decreased and PDGFRA+ lipofibroblasts increased by transcriptional signature and population size. PDGFRA+ matrix and myofibroblasts recovered during repair (PN10). After 7 days of in vivo hyperoxia, PDGFRA+ sorted fibroblasts had reduced contractility in vitro, reflecting loss of myofibroblast commitment. Organoids made with PN7 PDGFRA+ fibroblasts from hyperoxia-exposed mice exhibited reduced alveolar type 1 cell differentiation, suggesting reduced alveolar niche-supporting PDGFRA+ matrix fibroblast function. Pathway analysis predicted reduced WNT signaling in hyperoxia fibroblasts. In alveolar organoids from hyperoxia-exposed fibroblasts, WNT activation by CHIR increased the size and number of alveolar organoids and enhanced alveolar type 2 cell differentiation.

Contraction was calculated with ImageJ software by measuring the diameter of the pellet, followed by a 2-tailed Student's t test (Figure 4L) or ANOVA followed by Tukey's multiple comparison (Figure 6A).

RNA-seq and bioinformatic analysis
MACS-sorted PDGFRA+ (CD140+) fibroblasts from either RA or O2 PN4, PN7, and PN10 mouse lungs were prepared for bulk RNA-seq. RNA sequencing was performed by Cincinnati Children's Hospital Medical Center's Gene Expression Core utilizing the Illumina HiSeq 2500. RNA-seq FASTQ files were aligned using Bowtie to mouse genome version mm10 (4). Raw gene counts were obtained using Bioconductor's GenomicAlignments, and normalized FPKM values were subsequently generated using Cufflinks (5,6). DESeq (Bioconductor) was used to calculate differential gene expression from raw expression values. Genes were deemed differentially expressed if they satisfied the following requirements: a fold change >2, binomTest P value <0.01, and RPKM >1 in 2 of the 3 replicates in at least one condition being compared. Gene patterns were determined by comparing differentially expressed genes from all three time points. Genes that were significantly changed or unchanged at the same time points were grouped together. Heatmaps of genes in particular patterns were z-score normalized and generated using Partek Genomics Suite (http://www.partek.com/pgs). Gene enrichment analysis was carried out using ToppGene's ToppFun; functional enrichments within each profile were identified, and all profiles were compared to each other using ToppCluster (7). P values of functional enrichment hits were -log10 transformed for graphical visualization. Signature genes for matrix and myofibroblasts were determined by downloading and comparing PN7 and PN10 marker genes for the respective cell types from LGEA (https://research.cchmc.org/pbge/lunggens/mainportal.html) (8).
Only signature genes identified at both time points for a particular cell type were assessed. The top 50 lipofibroblast signature genes were obtained from a recently published mouse lung scRNA-seq study (9). Fold changes of any matrix, myo-, or lipofibroblast marker genes significantly altered for a particular cell type, at any time point, were visualized in a heatmap generated by pheatmap (https://cran.r-project.org/web/packages/pheatmap/index.html). Wnt-related gene changes and the predictive network were generated by Qiagen Ingenuity Pathway Analysis (IPA) using genes significantly altered at D4, D7, and/or D10 (10).

Immunofluorescence
Immunofluorescent staining was performed on 5-µm slides sectioned from paraffin-embedded lung tissue blocks. Slides were deparaffinized in xylene, rehydrated in a series of graded ethanol, and washed in 1X PBS. When required, antigen retrieval in 10 mM citrate buffer (pH 6.0) was performed. Non-specific antigens were blocked in 4% normal donkey serum in PBS with 0.1% Triton X-100 (PBST) for 2 hours. Slides were incubated in primary antibodies (Supplemental Table 1) diluted in blocking buffer overnight at 4°C. After washing in PBST, slides were incubated in fluorescent secondary antibodies (1:200) and DAPI (1 µg/ml) diluted in blocking buffer for 1 hour at room temperature. Slides were subsequently washed in PBST and mounted in ProLong Gold (Thermo Fisher Scientific). Z-stack images were captured on a Nikon A1R inverted confocal microscope and further analyzed using Nikon Elements software. Antibody staining was quantified in Nikon Elements using a previously established protocol (11).

In vitro R-spondin treatment of IMR90 cells
Human IMR90 fibroblasts (ATCC CCL-186) were cultured in growth medium (DMEM/Ham's F12; 10% FBS; 0.1% penicillin and streptomycin; 0.1% gentamicin and amphotericin) until passage 3 in a 100 mm tissue culture plate. Cells were seeded on a 24-well plate and, upon reaching 70% confluency, treated with R-spondin conditioned medium at a dilution of 1:500. Cells were harvested at 12, 24, and 36 hours after R-spondin treatment and processed for RNA isolation and RT-qPCR gene expression analysis using the same methods as previously described.

Morphometrics
Morphometric analysis with a FIJI macro was used to quantify alveolar structure, including volume density of alveolar septa (Vvsep), mean linear intercept of airspaces (Lm), mean transsectional wall length (Lmw), and surface area density of airspaces (Svair), on PN4, PN7, and PN10 room air- and hyperoxia-exposed mouse lungs. Lungs were inflation-fixed and processed as previously described, and 7-μm-thick sections were stained with hematoxylin and eosin for analysis.

Fibroblasts were treated for 24 hours continuously with CHIR after plating. Primary fibroblasts in collagen contraction assays were treated for 24 hours continuously with CHIR or PDGF-AA after seeding on top of the collagen. Organoids were allowed to grow for 7 days before treatment with either CHIR or PDGF-AA added to the organoid medium, continuously for 14 days (with media changes every two days).

Supplemental Table 1: List of primary antibodies used in this study
Supplemental Table 2: List of Taqman primers used in this study

Supplemental Figure 6. Basal cells, proximal, and distal progenitors unaffected in the epithelium in organoids made with hyperoxia-exposed primary PDGFRA+ fibroblasts
A. Immunofluorescence images of organoids made from PN7 room air- and hyperoxia-exposed PDGFRA+ fibroblasts and adult epithelial cells.
Stained with P63, SOX2, CCSP, ECAD, and SOX9. Scale bar = 100 μm.
B. Percentage of P63+ DAPI+ nuclei over total DAPI+ nuclei.
C. Percentage of SOX2+ DAPI+ nuclei over total DAPI+ nuclei.
D. Percentage of SOX9+ DAPI+ nuclei over total DAPI+ nuclei. In panels (B-D), a 2-tailed Student's t test was used.
E. ARL13B immunofluorescence in organoids, quantified using Nikon Elements as antibody area over DAPI area. N = 3-10 organoid transwells (replicates) used, 3 slides per transwell. One-way ANOVA followed by Tukey's multiple comparison was used to determine significance.
F. Quantification of brightfield images of organoids using Nikon Elements software by size, counting organoids >500 µm in diameter. One-way ANOVA followed by Tukey's multiple comparison was used to determine significance between three or more groups.
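The differential-expression filter described in the RNA-seq methods above (fold change >2, binomTest P<0.01, and RPKM >1 in at least 2 of 3 replicates in at least one condition) is easy to misread, so here is a minimal pandas sketch of that gating logic. It is an illustrative reconstruction under assumed column names (rpkm_<condition>_<replicate>, fold_change, pvalue), not the authors' actual pipeline code.

```python
import pandas as pd

def de_filter(df: pd.DataFrame, cond_a: str, cond_b: str) -> pd.DataFrame:
    """Apply the paper's stated criteria for calling a gene differential."""
    def expressed(cond: str) -> pd.Series:
        # RPKM > 1 in at least 2 of the 3 replicates of this condition.
        reps = df[[f"rpkm_{cond}_{r}" for r in (1, 2, 3)]]
        return (reps > 1).sum(axis=1) >= 2

    keep = (
        (df["fold_change"].abs() > 2)   # |fold change| > 2 (assumes a signed
                                        # fold-change column representation)
        & (df["pvalue"] < 0.01)         # binomTest P < 0.01
        & (expressed(cond_a) | expressed(cond_b))  # expressed in >=1 condition
    )
    return df[keep]

# Hypothetical usage: compare room air (RA) with hyperoxia (O2) at PN7.
# genes = pd.read_csv("pn7_expression.csv")
# de_genes = de_filter(genes, "RA", "O2")
```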
mTOR-Inhibition and COVID-19 in Kidney Transplant Recipients: Focus on Pulmonary Fibrosis

Kidney transplant recipients are at high risk of developing severe COVID-19 due to the coexistence of several transplant-related comorbidities (e.g., cardiovascular disease, diabetes) and chronic immunosuppression. As a consequence, a large proportion of SARS-CoV-2-infected patients have been managed with a reduction of immunosuppression. The mTOR-Is, together with antimetabolites, have often been discontinued in order to minimize the risk of pulmonary toxicity and to avoid pharmacological interactions with antiviral/anti-inflammatory drugs. However, in our opinion, this therapeutic strategy, although justified in kidney transplant recipients with severe COVID-19, should be carefully evaluated in asymptomatic/paucisymptomatic patients in order to avoid the onset of acute allograft rejection, to potentially exploit the antiviral properties of mTOR-Is, to reduce the proliferation of conventional T lymphocytes (which could mitigate the cytokine storm), and to preserve Treg growth/activity, which could reduce the risk of progression to severe disease. In this review, we discuss the current literature regarding the therapeutic potential of mTOR-Is in kidney transplant recipients with COVID-19, with a focus on pulmonary fibrosis.

Pre-existing chronic pulmonary disorders (e.g., chronic obstructive pulmonary disease, interstitial pulmonary fibrosis, and chronic sequelae of pulmonary bacterial and/or viral infections) have been associated with a significantly increased risk of severe disease and death (Drake et al., 2020; Esposito et al., 2020; Higham et al., 2020). Recent reports have suggested that pulmonary fibrosis associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection may be triggered by both viral- and immune-mediated mechanisms (Liu et al., 2020a) and exacerbated by the acute respiratory distress syndrome (ARDS), occurring in approximately 40% of patients with COVID-19. ARDS is characterized by diffuse alveolar damage with an acute inflammatory exudative process and the release of huge amounts of inflammatory cytokines (Huang et al., 2020), followed by oxidative stress (Chernyak et al., 2020), an organizing phase, and excessive deposition of collagen and extracellular matrix components (Vasarmidi et al., 2020). Additionally, in this condition, SARS-CoV-2 infection of alveolar epithelial cells may provoke an infiltration of immune cells into the lung and innate immunity activation (Alon et al., 2021). More than 40% of recovered COVID-19 patients developed pulmonary fibrosis (Li et al., 2021; Zou et al., 2021), and pulmonary impairment can also persist after recovery (Mo et al., 2020; Ahmad Alhiyari et al., 2021). The disease duration may impact this condition (approximately 61% of patients with a disease duration greater than 3 weeks developed important lung fibrosis after ARDS) (Thille et al., 2013; George et al., 2020). This complication occurred more frequently in older, comorbid patients (mainly with systemic hypertension, diabetes, cardiovascular disease, or obesity) who recovered after an ICU stay and received mechanical ventilation. Additionally, smoking, chronic alcoholism (Ojo et al., 2020), and laboratory findings (lymphopenia, leukocytosis, and elevated lactate dehydrogenase) could predispose individuals to severe lung injury, leading to an increased risk of mortality or of pulmonary fibrosis in survivors (Liu et al., 2020b).
In kidney transplant recipients, moreover, immunosuppressive agents may have an impact on pulmonary fibrosis (Gross et al., 1997). In particular, kidney transplant recipients treated with mammalian target of rapamycin inhibitors (mTOR-Is), particularly at high dosages, may develop pulmonary fibrosis (Pham et al., 2004; Weiner et al., 2007; Errasti et al., 2010; Xu et al., 2015; Tomei et al., 2016; Granata et al., 2018).

IMPACT OF MTOR-IS ON PULMONARY FIBROSIS IN KIDNEY TRANSPLANT RECIPIENTS: WHAT WE HAVE LEARNED FROM THE PRE-COVID-19 PERIOD
Numerous clinical data have reported an incidence of pulmonary complications in mTOR-I-treated kidney transplant recipients of 2-11%, with the onset of symptoms up to 5 years after the initiation of sirolimus or everolimus therapy (Pham et al., 2004; Champion et al., 2006; Weiner et al., 2007; Alexandru et al., 2008; Rodríguez-Moreno et al., 2009; Errasti et al., 2010; Bertolini et al., 2011). There are several lung manifestations, including lymphocytic interstitial pneumonitis, lymphocytic alveolitis, bronchiolitis obliterans with organizing pneumonia, focal pulmonary fibrosis, diffuse alveolar hemorrhage, or a combination thereof (Morelon et al., 2001; Vlahakis et al., 2004; Vandewiele et al., 2010). This clinical variability and the absence of specific signs and symptoms do not facilitate diagnosis (Molas-Ferrer et al., 2013). Radiographic tests, computed tomography (CT), and bronchoalveolar lavage (BAL), even if often unspecific, are useful diagnostic tools. In some cases, a lung biopsy is also required, which may reveal different histological patterns, including intra-alveolar non-necrotizing epithelioid granuloma, lymphocytic interstitial inflammation, and a focal pattern of organizing pneumonia (Kirby et al., 2012). This pulmonary toxicity seems to be dose-dependent, since clinical and radiologic improvement has been observed in a large number of kidney transplant recipients after mTOR-I dose reduction (Pham et al., 2004; Errasti et al., 2010). The pathogenic mechanism of mTOR-I-induced pulmonary toxicity is unknown, but epithelial-to-mesenchymal transition (EMT) may have an important role (Zhang et al., 2016). During EMT, epithelial cells lose apical-basal polarity and cell-cell junctions and gain some mesenchymal traits of migration, invasion, and the ability to produce extracellular matrix (ECM) (Kalluri and Weinberg, 2009). High dosages of mTOR-Is may activate this complex biological process. Massive mTORC1 inhibition may lead to a down-regulation of S6K and a subsequent hyper-activation of mTORC2 that, by sustaining the phosphorylation of AKT at S473, could induce a feedback loop that stimulates PI3K-AKT signaling, activating the cellular/molecular machinery leading to fibrosis (Wan et al., 2007; Breuleux et al., 2009; Masola et al., 2013). Therefore, based on these results, we encourage transplant clinicians, whenever possible, to prescribe low doses of everolimus/sirolimus to kidney transplant recipients in order to maximize therapeutic efficacy (including anti-fibrotic effects) and minimize the risk of developing this complication.
SOME POTENTIAL POSITIVE EFFECTS OF MTOR-IS IN COVID-19 PATIENTS WITH A FOCUS ON PULMONARY FIBROSIS

The PI3K/AKT/mTOR pathway has been shown to be targeted by various viruses, including influenza virus, herpesvirus, hepatitis C virus and adenovirus (Moody et al., 2005; Sodhi et al., 2006; Bose et al., 2012; Le Sage et al., 2016), and recent studies have clearly reported its activation also during SARS-CoV-2 infection (Appelberg et al., 2020; Lokhande and Devarajan, 2021). Viruses subvert the mTOR pathway to sustain protein synthesis and cell survival and to promote viral replication (Moody et al., 2005). mTOR is an evolutionarily conserved serine/threonine kinase and a component of two multi-subunit complexes, mTORC1 and mTORC2. Activated mTORC1 induces metabolic effects such as mRNA translation, ribosome biogenesis, protein synthesis, mitochondrial metabolism, and adipogenesis. mTORC2 promotes cell survival and regulates the actin cytoskeleton, ion transport, and cell growth (Dowling et al., 2010). Thus, targeting this pathway might reduce SARS-CoV-2 pathogenicity. This effect has also been observed in rapamycin-treated cells infected with MERS-CoV and the 1918 influenza A virus (Kindrachuk et al., 2015; Ranadheera et al., 2018). Likewise, in patients with severe H1N1 pneumonia, early adjuvant treatment with rapamycin and corticosteroids was associated with rapid virus clearance and significant clinical improvement (Wang et al., 2014). In contrast, an in vitro study performed in a human hepatocellular carcinoma cell line demonstrated that rapamycin and Torin-1 failed to block viral infection (Appelberg et al., 2020). However, the Akt inhibitor MK-2206, probably through stabilization of mTORC1, showed significant inhibition of viral replication. Furthermore, the therapeutic potential of mTOR-Is in SARS-CoV-2 infection could also be linked to their immunomodulatory properties. The mTOR pathway, in fact, has a central role in B and T cell development and proliferation. In B cells, mTOR inhibition, through the down-regulation of the transcription factor BCL6, may inhibit germinal center formation (Raybuck et al., 2018) and the proliferation of germinal center B cells, thereby hindering the development of memory B cells and long-lived plasma cells (Ye et al., 2017). In contrast, rapamycin seems to have a minimal effect on the differentiation of germinal center B cells into long-lived plasma cells, as well as on the maintenance of already-differentiated long-lived plasma cells (Ye et al., 2017). This effect could influence the response to vaccination in mTOR-I-treated kidney transplant recipients. Moreover, rapamycin, by downregulating the expression of activation-induced cytidine deaminase (AID), could also decrease antibody class-switch recombination, altering the pattern of immunoglobulin (Ig) G and IgM specificities (Zhang et al., 2013). It has been speculated that this effect may lead to a reduction of the cross-reactivity of early-stage antibodies against SARS-CoV-2 and to a decrease of antibody-dependent enhancement (ADE). Both these conditions may antagonize the onset and development of severe symptoms. Specific deletion of mTOR in T cells, moreover, might impair their differentiation into Th1, Th2 or Th17 effector cells (by a direct down-regulation of STAT and other lineage-specific transcription factors (Delgoffe et al., 2009)) and induce a significant enhancement of regulatory T cells (Treg) (Battaglia et al., 2005).
At the same time, these agents may have immunostimulatory effects on memory CD8+ (Araki et al., 2009) and CD4+ T cells (Ye et al., 2017) by promoting the expansion of memory precursor effector cells that can differentiate into long-lived memory cells (Araki et al., 2009; Ye et al., 2017). Additionally, mTOR-Is given at the early onset of the cytokine storm phase can hinder the IL-6 pathway and the NLRP3 inflammasome-dependent release of IL-1β, thus preventing the progression to severe forms of COVID-19 (Omarjee et al., 2020). mTOR activation can also increase the activity of the anti-inflammatory cytokine IL-10 and inhibit the proinflammatory cytokine TNF-α (Weichhart et al., 2015). Therefore, mTOR-Is could unquestionably act as a double-edged sword in patients with COVID-19 (Ghasemnejad-Berenji, 2021), and the correct use of this medication may have "yin or yang" clinical effects. The ongoing trials (NCT04341675, NCT04461340, NCT04948203) that evaluate the effects of sirolimus treatment in hospitalized COVID-19 patients will provide more information in the near future. NCT04341675 compares sirolimus (6 mg on day 1, followed by 2 mg daily for the next 13 days or until hospital discharge, whichever happens sooner) versus placebo in hospitalized patients with severe COVID-19. The primary outcome is death or progression to respiratory failure requiring advanced respiratory support at day 28. NCT04461340 assesses the efficacy and safety of sirolimus as an adjuvant agent to the standard treatment protocol against COVID-19. It is a single-blinded randomized study in which participants are randomly assigned to sirolimus (an oral dose of 6 mg on day 1, followed by 2 mg daily for 9 days) plus the national standard-of-care therapy against COVID-19, or to the national standard-of-care therapy alone. Interestingly, the trial NCT04948203 evaluates the efficacy of sirolimus in preventing post-COVID-19 pulmonary fibrosis. Hospitalized patients with <10% pulmonary fibrosis, evaluated by CT at admission, are divided into three groups according to different dose regimens of sirolimus (0.5, 1 or 2 mg orally daily) for 14 days. Pulmonary fibrosis is then re-evaluated by CT scan after 12 weeks.

MTOR-I AND THE NEED TO REDUCE DOSAGE IN COVID-19-POSITIVE KIDNEY TRANSPLANT RECIPIENTS

In COVID-19 kidney transplant recipients, at the moment, we do not have enough evidence to support the hypothesis that mTOR-Is may antagonize recovery or promote pulmonary complications, and contrasting results have been obtained in observational studies (Cravedi et al., 2020; Alberici et al., 2020; Fernández-Ruiz et al., 2020; Caillard et al., 2020; Coll et al., 2021; Favà et al., 2020; Hilbrands et al., 2020; Pérez-Sáez et al., 2020; Salto-Alejandre et al., 2021; Søfteland et al., 2021; Bossini et al., 2020; Crespo et al., 2020; Rodriguez-Cubillo et al., 2020; Meziyerh et al., 2020; Guillen et al., 2020; Zhang et al., 2020; Lauterio et al., 2020; Tanaka et al., 2020; Heron et al., 2021; Nair et al., 2020; Devresse et al., 2020; Maritati et al., 2020; Trujillo et al., 2020; Lubetzky et al., 2020) (Table 1). In studies with large cohorts of kidney transplant recipients, approximately 10-15% of patients were treated with mTOR-Is, and more than 50% of these patients stopped this treatment after hospital admission. However, this did not impact the clinical outcomes. Expert opinion has suggested discontinuing this drug category in patients who test positive for COVID-19, with or without clinical or radiological evidence of lung disease (Vistoli et al., 2020).
This choice could be due to the pulmonary toxicity associated with mTOR-Is (Meziyerh et al., 2020) or to a possible interaction between mTOR-Is and antiviral drugs commonly used in COVID-19 patients. The coadministration of hydroxychloroquine and chloroquine with mTOR-Is (all CYP3A4 inhibitors) may theoretically increase their blood concentrations, with the development of potential adverse effects/toxicities (including QT prolongation) (Mirjalili et al., 2020). This condition has also been described in patients treated with lopinavir/ritonavir, a protease inhibitor widely used in the treatment of human immunodeficiency virus (HIV) infection, and a 50-90% reduction in the dose of sirolimus together with discontinuation of everolimus has been proposed (Barau et al., 2009; Meziyerh et al., 2020). However, although a withdrawal of immunosuppression may have positive effects by restoring host immunity, it could expose patients to a high risk of acute rejection, with negative clinical and psychological impacts. Therefore, mTOR-I discontinuation should be reserved for kidney transplant recipients with severe COVID-19, while it should, if possible, be avoided in asymptomatic/paucisymptomatic patients, in order not to increase their risk of developing an immune-mediated graft impairment and to take advantage of some potential antiviral properties of these agents. Several clinical trials have reported a reduced rate of Cytomegalovirus and BKV infections in kidney transplant recipients treated with mTOR-Is alone or in association with low dosages of calcineurin inhibitors (CNIs), compared with those on a standard-dose CNI regimen (Tedesco Silva et al., 2010; Brennan et al., 2011; Mallat et al., 2017; Tedesco-Silva et al., 2019; Hauser et al., 2021). The exact mechanism behind this protection is not clear. Compelling data suggest an antiviral role of mTOR-Is in blocking cellular proliferation and impairing pathways critical for infection, signaling, and replication (Liacini et al., 2010; Clippinger et al., 2011). In addition, mTOR-Is may have a direct antiviral activity by increasing the percentage of multifunctional virus-specific CD4 T cells (Hauser et al., 2021). Furthermore, at the moment, the impact of the co-treatment of mTOR-Is with other drugs frequently employed in the treatment of COVID-19 kidney transplant recipients (including corticosteroids, anti-inflammatory agents, and monoclonal antibodies) has been only partially elucidated. We can only suppose that, mTOR being a central kinase of cellular metabolism, its inhibition may have consequences for the pharmacological effects of these agents. As reported by Weichhart et al., corticosteroids, by inducing the expression of REDD1, may potentiate the pharmacological inhibition of the mTOR pathway. This is in line with previous data obtained in patients affected by H1N1 influenza virus (Weichhart et al., 2011). Additionally, the inhibition of mTOR, by preventing immune hyperactivation of signaling via the STAT3 pathway, may reduce the expression of receptors for IL-6 and IL-6 production (Terrazzano et al., 2020), which may influence the pharmacological effects of tocilizumab. All the above-mentioned effects need to be analyzed in specific research projects involving organ transplant recipients.

SARS-COV-2 VACCINE IN KIDNEY TRANSPLANT RECIPIENTS

Solid organ transplant candidates and recipients are identified as a priority population for COVID-19 vaccines, given the higher risks associated with their immunosuppressed status.
Currently, the vaccines employable for transplant recipients are BNT162b2 (Pfizer-BioNTech) and mRNA-1273 (Moderna). However, data regarding their safety, immunogenicity, and efficacy in these patients are scarce. Some evidence indicates that solid organ transplant recipients who receive mRNA-based vaccines have low immunization rates (Benotmane et al., 2021; Boyarsky et al., 2021; Danthu et al., 2021; Grupper et al., 2021; Husain et al., 2021; Korth et al., 2021; Sattler et al., 2021), and the development of COVID-19 has been reported even in fully vaccinated patients (Caillard et al., 2021; Tsapepas et al., 2021). When quantitative titers were available, they were frequently below the median titer in immunocompetent patients. Moreover, Rincon-Arevalo et al. have recently described a markedly diminished generation of antigen-specific B cells, especially plasmablasts and memory B cells, in kidney transplant recipients (Rincon-Arevalo et al., 2021). Factors associated with a negative humoral response to the vaccine were older age, high-dose corticosteroid treatment, maintenance with triple immunosuppressive medication, and a regimen that includes antimetabolites (Boyarsky et al., 2021; Grupper et al., 2021; Husain et al., 2021). The effect of mTOR-Is on COVID-19 vaccination is controversial, with some studies reporting a more favorable humoral response (Benotmane et al., 2021; Cucchiari et al., 2021) and others obtaining opposing results (Rozen-Zvi et al., 2021) or finding no differences in immunosuppressive drugs between kidney transplant recipients who tested positive and negative for SARS-CoV-2 IgG (Korth et al., 2021). Previous studies evaluating the response to vaccination in kidney transplant recipients reported that everolimus and sirolimus are associated with a significant rise in the antigen-specific IgG antibody level after pneumococcal, tetanus and influenza vaccines (Willcocks et al., 2007; Struijk et al., 2010). This could be due to the increment of CD8+ effector memory T cells induced by mTOR-Is (Araki et al., 2009; Turner et al., 2011). However, since these studies are not COVID-19-specific and report contrasting findings, we cannot draw definite conclusions on the effects of these drugs on the COVID-19 vaccine response. The results of ongoing studies on this topic will help to better define this relationship in the future.

CONCLUSIONS AND PERSPECTIVES

The rapid spread of COVID-19 has pushed physicians to make clinical decisions according to the principle of maximizing benefits for the largest number of patients. However, the optimal medical management of kidney transplant recipients with SARS-CoV-2 infection has not yet been established. The most common approach is the withdrawal of immunosuppressive drugs (including mTOR inhibitors) in these patients to potentiate their immunocompetence and minimize the risk of clinical complications of severe COVID-19. However, in our opinion, in kidney transplant recipients on mTOR-I-based immunosuppressive therapy, this "discontinuation strategy" should be reserved for patients with severe COVID-19. Instead, in asymptomatic patients or those with mild COVID-19 symptoms, a "wait and see" approach or a reduction of the dosage of these agents may be useful to minimize the risk of acute allograft rejection and to exploit their potential antiviral and anti-fibrotic effects.
The reduction of the dosage may partially restore host immunity, facilitating disease recovery, antagonize/mitigate the onset of the cytokine storm, and preserve Treg growth and activity, which could reduce progression to severe COVID-19. Moreover, additional clinical studies aimed at evaluating the impact of mTOR-Is on the vaccines and at assessing the efficacy and safety of mTOR-Is used alone or in combination with other new anti-fibrotic agents in kidney transplant recipients with COVID-19 are necessary to allow a more efficient treatment of the acute clinical phase and facilitate recovery from post-acute COVID-19. Indeed, while most people with COVID-19 recover completely within a few weeks, some patients experience lasting symptoms (fatigue, shortness of breath, cough, joint pain, depression, muscle pain, headache, intermittent fever) that can continue for weeks or even months after initial recovery. Finally, we believe that molecular biology (particularly omics techniques) may provide powerful methods that could help kidney transplant clinicians to discover new therapeutic strategies for SARS-CoV-2 infection, to select new biomolecular targets and to personalize treatment (Zaza et al., 2015).

AUTHOR CONTRIBUTIONS

SG and GZ searched the literature and wrote the manuscript. PC and GS contributed to the literature analysis and revised the manuscript. All authors read and approved the final manuscript.
HISTOPATHOLOGY AND HISTOCHEMISTRY OF YERSIN TYPE TUBERCULOSIS IN RABBITS: Development of the Disease after Intravenous Infection with Mycobacterium avium

Cerny, L.: Histopathology and Histochemistry of Yersin Type Tuberculosis in Rabbits. Development of the Disease after Intravenous Infection with Mycobacterium avium. Acta vet. Brno, 48, 1979: 45-52.

After intravenous application of Mycobacterium avium to rabbits, the first morphological changes appeared 48 hours post infection. The most characteristic morphological feature of the infection was a strong proliferation of the cells of the mononuclear macrophage system. Epitheloid cells were formed 7 days after infection, and multinucleated giant cells appeared after the 10th day. In the following intervals the process grew enormously, above all in the liver and spleen. In this period the number of mycobacteria increased, and this process was evident also in the cytoplasm of epitheloid cells. Starting from the 14th day after infection, epitheloid cells in the center of larger foci underwent necrobiotic changes.

Tuberculosis, Mycobacterium avium, Yersin type, macrophages, epitheloid cells, giant cells

In rabbits, two types of development of tuberculous infection are known, and they differ, among other features, in their morphological changes. Villemin type tuberculosis is usually caused by Mycobacterium bovis or by fairly low doses of Mycobacterium avium (Yamamoto et al. 1961a, b). Morphological features of this type of tuberculosis appear in the form of tuberculous nodules. The other type is the Yersin type tuberculosis described by Yersin, and the course of this disease is usually fairly acute (Yamamoto et al. 1961a, b, c, 1962; Cerny 1965; Mohelska et al. 1975, etc.). Descriptions of this type of the disease were given by Yamamoto et al. (1961a) and Cerny (1965), and an electron microscopic study of the Yersin type tuberculosis was conducted by Mohelska et al. (1975). In the morphological picture, dominant affection of the liver and spleen is prominent, with a very clearly increasing number of mycobacteria and a reaction of the cells of the mononuclear macrophage system. In this paper, which describes a preliminary experiment, the main task is the study of the development of microscopic changes in various organs and of the growth of mycobacteria in this type of tuberculosis.

Materials and Methods

In the experiment 15 rabbits weighing about 1,200 g were used. Experimental animals were infected intravenously with a suspension of a virulent culture of Mycobacterium avium, the dose being 0.02 mg for each rabbit. The animals were gradually sacrificed at intervals of 2, 4, 6, 12 and 24 hours and of 2, 5, 7, 9, 10, 12 and 14 days. Two remaining rabbits died 16 and 18 days after infection. Sacrificed animals were necropsied, and smears from various organs were prepared and stained by the Ziehl-Neelsen method. At the same time, the organs from rabbits sacrificed up to 5 days after infection were cultured for mycobacteria. Tissue samples from the lungs, liver, spleen and kidneys were fixed in neutral formol, and paraffin sections were stained by hematoxylin-eosin and by the Ziehl-Neelsen method for mycobacteria.
Results

In none of the rabbits sacrificed 2, 4, 6, 12 and 24 hours after infection were microscopic changes present. Histopathological examination of the lungs, liver, spleen and kidney of these animals did not reveal any changes which could be explained as caused by the infection. In some of the liver sinusoids and in the spleen tissue, mononuclear cells were present. In the smears stained by the Ziehl-Neelsen method, the finding of very few mycobacteria was only exceptional, although by culturing the mycobacteria were shown to be present in all organs and in the blood of the experimental animals.

Two days after infection, locally thickened alveolar septa were seen in the lungs. Very rare alveolar macrophages were found, and close to the walls of some blood vessels neutrophil leucocytes and monocytes were scattered. In the liver, small groups of a few mononuclear cells were seen. The spleen and kidneys were microscopically intact. In the sections from spleen and liver, single mycobacteria were present. Cultures from all organs revealed mycobacteria.

Five days after infection, the interalveolar septa were clearly thickened. The thickening of the septa was caused by proliferation of mononuclear cells. Alveolar macrophages were present in the alveoli. The changes in the liver were clear and consisted of proliferating macrophages and an increased number of Kupffer's cells in the liver sinusoids. In the spleen, proliferation of macrophages was fairly prominent. Mycobacteria were present in the sections of the liver and spleen, and they were mostly phagocytosed in the cytoplasm of macrophages. Cultures of all organs revealed mycobacteria.

Seven days after infection, the dominant feature in all organs was the formation of epitheloid cells. In the lungs, small granulomas composed of macrophages with a few epitheloid cells in the center were found. The sections of liver revealed an increased number of small foci scattered throughout the entire parenchyma. These foci were formed in liver lobules and in portal areas. Very few similar foci were found in the kidney, and they were mostly localized around the glomeruli. Proliferation of macrophages in the spleen was diffuse. In the cytoplasm of macrophages and epitheloid cells in liver and lungs, phagocytosed mycobacteria were present.

Microscopic changes in the organs of rabbits sacrificed 9 and 10 days after infection revealed that the progress of the tuberculous process in the lung was very limited and there was no solidification of the lung tissue. However, the number of small foci in the liver increased considerably, and the main feature was the formation of epitheloid cells. In the spleen, proliferation of macrophages and small foci of epitheloid cells were found. These cells were present in small nodules in the kidneys as well. Some of the epitheloid cells contained several mycobacteria.

Twelve days after infection, the changes in the lung did not increase in size; free alveolar macrophages were often vacuolised, and in some of them the nuclei revealed pycnotic changes. Tuberculous lesions in the liver increased considerably, the cellular foci were bigger, and among epitheloid cells giant multinucleated cells were formed. Similar granulomas were present in the spleen, and they also contained forming giant cells. Foci in the kidneys were not considerably enlarged, and the main type of cells present were epitheloid cells, similar to the other organs. Mycobacteria were found in practically all foci, mostly in the liver and spleen, and they were localized, for the most part, in the middle of the tuberculous lesions.
On day 14 after infection, the spleen was macroscopically enlarged. The microscopic picture of the lungs was not different from that at the previous interval, but the alveolar macrophages mostly revealed necrobiotic changes. At this stage the liver showed profound microscopic changes. Tuberculous lesions developed further, and their main constituents were epitheloid and giant cells. The structure of the liver tissue was considerably destroyed. A similar development of the process was seen in the spleen. In both these organs the number of mycobacteria increased very strongly.

Rabbits which died 16 and 18 days after infection again showed macroscopic enlargement of the spleen, and microscopically the main changes were again present in the liver and spleen. The structure of the liver was completely destroyed, with growing tuberculous tissue forming irregular shapes connected with each other. Epitheloid cells in the centers of such foci revealed pycnotic and necrobiotic changes. The microscopic picture was characterised by a very strong proliferation of the cells of the mononuclear macrophage system and their change into epitheloid and giant cells. Similar features were revealed by the spleen, with growing and connecting tuberculous foci. Necrotization of epitheloid and giant cells in the middle of foci was similar to that in the liver. The cytoplasm of epitheloid and giant cells contained numerous mycobacteria, and a few of them were present in necrotic tissue as well.

Discussion

In the development of tuberculous lesions caused in rabbits by large doses of Mycobacterium avium, it was evident that the first, very minute lesions formed during 48 hours after infection. Small nodules of macrophages were present 5 days after infection, and after the 7th day epitheloid cells formed. Multinucleated giant cells were present after ten days. The course of the disease was typical for Yersin type tuberculosis, and the longest interval to the death of an experimental animal was 18 days. The dominant feature of the morphological picture was destruction of the liver and spleen, while the lesions in the lungs were not very grave. Another morphological characteristic was the enormous reaction of the cells of the mononuclear macrophage system. This was very clearly seen in the liver and spleen, where the macrophage proliferation was, in the latest stages, almost diffuse. This feature of Yersin type tuberculosis was described by Yamamoto et al. (1961a, b, c, 1962), Cerny (1965) and Mohelska et al. (1975). This finding was connected with the development of the cells of the macrophage system. Spector and Lykke (1966) found that within 5 days after infection macrophages changed into epitheloid cells. In our cases this interval was 7 days; that was the time when several epitheloid cells formed. Epitheloid cells were formed from alveolar macrophages, macrophages of the spleen, Kupffer's cells and blood monocytes. The origin of epitheloid cells seems to be clear (Hess et al. 1971; Erochin 1978; Turk 1978), but Volkmann (1976) expressed the opinion that macrophages of the liver and peritoneal cavity form a special population.

Considering the macrophages, it is necessary to mention the dynamics of the growth of mycobacteria. It was evident that the number of mycobacteria present in the liver and spleen increased during the whole course of the disease. In this connection it is pertinent to consider the data of Tonaki et al.
(1976), who found that after application of a test emulsion labelled with I131, 90-95% of the activity was found in the liver. This showed the very high phagocytic activity of Kupffer's cells. One could consider that upon intravenous propagation of mycobacteria, the Kupffer's cells and the macrophages of the spleen would phagocytose these mycobacteria very intensively, and the end result of this process would be a high grade of activation of these cells. This process is also connected with immunological processes. Nezelof and Vilde (1976) characterized granulomas as cellular societies which eliminate or surround foreign material or agents. In this process, in its first stage, there is a non-specific phagocytosis, which is a most simple defense mechanism. The very high phagocytosis present in the Yersin type of tuberculosis shows that the immunological defense in this disease is incompetent. With this fact were connected the findings of Mariano et al. (1976), who found that macrophages and epitheloid cells gradually lost their phagocytic activity. This was connected with the loss of specific receptors, and their ability to destroy mycobacteria diminished. In granulomas, mycobacteria were found mainly in the center of foci, i.e. where older epitheloid cells with diminished phagocytic and destructive activity were present. It means that the high phagocytic activity of these cells is transient and that the older epitheloid and multinucleated giant cells are not able to destroy phagocytosed mycobacteria. A similar localization of the mycobacteria in the middle of tuberculous foci was also described by Otto and Bertram (1969). In Yersin type tuberculosis, mycobacteria evidently act as intracellular parasites (Cerny 1965; Mohelska et al. 1975). This is known to apply to tuberculosis in other species as well, e.g. in birds (Cerny 1965; Hejlicek 1977). Therefore it is evident that in Yersin type tuberculosis mycobacteria are not destroyed in the process of phagocytosis, and after necrotisation of epitheloid and giant cells they may liberate themselves to invade other cells.
Software Modernization and Replacement Decision Making in Industry: A Qualitative Study

Software modernization and replacement decisions are crucial to many organizations. They greatly affect the success and well-being of the organizations and their people. Decisions like these are usually presumed to be rational and based on facts. These decisions, and how they are made, tell much about the decision makers and the decision making tools available to them. Interviews of 29 software modernization decision makers or senior experts were analyzed in order to find out how the decisions were made and what models and tools were used. It turned out that decisions are not as rational as supposed. Intuition is the dominant factor in decision making. Formal software engineering oriented decision support methods are not used. Most decision makers did not see intuition as a preferable way to make decisions. This might be because the preferred values are rationality and formality. Since the use of intuition is not particularly valued, it is not necessarily admitted or documented either. However, truthful description and justification of decisions is important from both the practical and the ethical point of view.

INTRODUCTION

The modernization and replacement decisions of information systems often have crucial impacts on modern organizations. Those decisions may cause remarkable changes in the lives of the end-users of those systems and even in the general success of the organization in question. Those decisions should be as ethical and traceable as possible, due to the fact that a single decision may lead to huge economic losses and worsen the lives of many people. Those decisions may, of course, also lead to economic miracles and general success. The positive possibility does not, however, weaken the ethical standards and traceability that should be required of such decisions.

The old information systems that are modernized or replaced are normally called legacy systems. Those systems are, in many cases, vital to the organization that uses them. They are, however, hard and expensive to manage (Bennett and Rajlich 2000). It has been realized that the maintenance and evolution costs of legacy systems are normally somewhere between 40% and 90% of the total costs of the life-cycle of the system (Foster 1993) (Glass 2003) (Seacord, Plakosh and Lewis 2003). A brand new information system that is installed today may be a legacy system in the future.

The vital role of legacy systems and the extensive costs caused by maintenance and evolution of such systems require that the decisions made regarding those systems are well thought out, ethically defensible, understandable and based on facts. In other words, such important decisions are required to be rational and traceable. Rationality requires, in many cases, the use of formal decision making procedures which are based on hard facts. Traceability requires that every phase of and reason for the decision can be understood afterwards. At least a couple of software reengineering and modernization approaches which should satisfy these requirements have been proposed in the literature, see e.g. (Sneed 1995) and (De Lucia, Fasolino and Pompella 2001).
The relatively high number of horror stories about information system decisions should make one wonder: is real-world decision making really that rational and traceable? The question is especially interesting because the accepted way to make decisions in the current business world requires that decisions be based on an objective analysis of real facts. We do, however, know that in the concrete world many decisions require intuition, which can be seen as compressed expertise that allows the expert to use his/her experiences and knowledge in a fast and adaptive way (Sauter 1999). Therefore we should admit that intuitive and formal decision making can be seen as the two sides of expert judgment (Simon 1987) (Patton 2003) (Sadler-Smith and Shefy 2004) (Miller and Ireland 2005).

The results of the study were surprising. The reality is not as rosy as one would assume when reading the guidelines stated in the Software Engineering Code of Ethics and Professional Practices (Gotterbarn, Miller and Rogerson 1999). The role of rational and traceable approaches to decision making is astoundingly insignificant in the real world, as will be seen in the rest of this article. The results show that the role of intuition is dominant and that the existing rational approaches are not used in real-world decision making. This poses remarkable challenges to both academia and industry if traceability and ethicality of decisions are to be required.

ARE REPLACEMENT AND MODERNIZATION DECISIONS RATIONAL AND TRACEABLE?

Professional life includes many decision making situations. In the modern software engineering profession those decisions are supposed to be rational and based on facts. Rational decision making is the backbone of the theory of economic decision making (Simon 1979). The software engineering decision making tools and models are more or less intentionally based on common decision making theories and strategies, like the expected utility theory (von Neumann and Morgenstern 1944) and the prospect theory (Kahneman and Tversky 1979). The conceptual model of the software engineering decision maker is based on the idea of rational and intelligent people making decisions based on hard data and careful use of decision making tools and economic calculations. The software engineering decision making tools and methods are clearly based on that concept of decision making.

Those models are mainly based on the idea of making rational decisions possible by providing tools by which rationality can be achieved. Such models and tools may be grouped into general software cost estimation models and software maintenance estimation or decision models. Those software cost estimation models include models such as Boehm's COCOMO (Boehm 1981) and Halstead's software science metrics (Halstead 1977). Several models can be used to support rational decision making in the field of software engineering. Models can be used to estimate, for example, maintainability (Coleman, Ash, Lowther and Oman 1994) (Di Lucca, Fasolino, Tramontana and Visaggio 2004), software complexity effects (Gibson and Senn 1989) (De Lucia, Di Penta, Stefanucci and Venturi 2002), general maintenance cost drivers (Niessink and van Vliet 1998) and reengineering effects and strategies (Sneed 1995) (Warren and Ransom 2002).
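To give a concrete flavor of what such a formal cost model looks like, the following minimal sketch implements the basic COCOMO effort equation, Effort = a · (KLOC)^b person-months, using the coefficients published by Boehm for the three project classes; the 10-KLOC example project is an invented illustration, not data from the study.

```python
# A minimal sketch of Boehm's basic COCOMO model (Boehm 1981):
# effort (person-months) = a * KLOC ** b, where a and b depend on the
# project class. The 10-KLOC example below is purely illustrative.

COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, familiar problem domain
    "semi-detached": (3.0, 1.12),  # intermediate size and experience
    "embedded":      (3.6, 1.20),  # tight hardware/software constraints
}

def cocomo_effort(kloc: float, project_class: str = "organic") -> float:
    """Estimated development effort in person-months."""
    a, b = COEFFICIENTS[project_class]
    return a * kloc ** b

if __name__ == "__main__":
    for cls in COEFFICIENTS:
        print(f"{cls:>13}: {cocomo_effort(10.0, cls):6.1f} person-months")
```

A decision maker following the rational model sketched above would feed measured or estimated size data into such a formula and compare the resulting effort figures across modernization and replacement options; as the interviews below show, this is rarely what happens in practice.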
The concept of rational and traceable decisions is necessary because many software-related decisions are very crucial for various reasons, including money and human well-being. For example, the decision to replace an old system with a brand new one may be of extreme economic importance to the organization and its future. If the decision is incorrect, it could lead to a major economic loss and business failures. Also, these decisions have a remarkable influence on the people who are working in the software end-user organizations. Therefore, these kinds of decisions should be made according to a rational decision process, and the rationality should be open to analysis afterwards. The use of documented and at least semi-formal decision making frameworks is the expected way of decision making.

If decision making in the field of software engineering were as rational, ethical and traceable as it should be, then the number of software engineering horror stories would be much smaller than it is. The amount of horror stories does, however, make one wonder whether our models and tools are lacking or whether professional decision making is something else than one would assume. Therefore it is recommended to have a closer look at the actual decision making practices found in the industry: do industrial decision makers perform rational, ethical and traceable decision making?

In the reported study, interviews of professionals were analyzed in order to get a better understanding of the decision making strategies, tools, and methods of experienced professionals. The main aim of the study was to get at least a better picture of the decision factors along with the actual tools and techniques used by professionals.

HOW THE STUDY WAS PERFORMED AND A BRIEF ANALYSIS OF THE DATA

The material of the study consists of 29 interviews of people who have relevant roles in software engineering related decision making, especially software modernization or replacement decisions. The material was collected by one-to-one interviews of a semi-structured form. The interviews took place between August 2003 and February 2004. The interviewed people were found with snowball sampling. That approach to sampling offers an established method for identifying and contacting populations which are hard to reach (Vogt 1998). The snowball starts with a core set of persons who are asked to name the best people to answer the posed questions. The named persons are contacted and interviewed, and so the snowball grows. In this study the sampling started from three experts.

Interviewees were from 8 different organizations: 3 software supplier organizations and 5 software end-user organizations. Of the 29 interviewees, 12 worked within software supplier organizations and 17 within end-user organizations. Their average working experience with information systems was 19 years, and with modernization decision making or argumentation 8 years. Their average age was 48 years. There were 4 people performing the interviews, which has a slight effect on the flow of the interviews.

Interviews contained a total of 43 questions, of which 15 questions were about background information and 28 questions about modernizations and decision making. The questions covered, for example, the following areas:

• The professional background of the interviewee.
• Who makes the modernization decisions?
• Are there some guidelines or procedures for decision making, and of what type are they?
• What different aspects of decision making are used?
• Are the decisions based on methods, and what methods are used?

Additional questions were asked of some of the interviewees. Interviews were recorded with permission and afterwards transcribed. The transcribed interviews, which included the questions, comprised about 141,600 words in total, so the average length of each interview was about 4,880 words. The language of the interviews was Finnish. An overview of the interview questionnaire and the first level analysis of the interviews has been presented in (Koskinen, Lintinen, Ahonen, Tilus and Sivula 2005). That analysis will not be repeated here.

In a study like the reported one it is typical that different methods are used simultaneously and in combination (Eskola and Suoranta 2000). The research methods used in this study are quantification, identification of themes and discourse analysis. Quantification is a method where a quantitative technique is applied to qualitative data. It usually means that occurrences of different cases are counted. In theming, the data is divided into different themes, and in discourse analysis the different meanings given in the text are studied and analyzed.

In the analysis only parts of the whole interviews were used. Those parts were the ones related to decision making. According to the background assumptions, the decision making process was assumed to be professional, rational, traceable and ethical. That assumption was made because all of the interviewees were in positions in which they often make decisions that may lead to significant use of resources and long-time software engineering commitments. The analysis focused on those interview questions that were related to decision making as a process or to the various parts of the decision making process. General themes have been identified earlier, as reported in (Koskinen et al. 2005). The analyzed questions were (the numbers of the questions presented below do not reflect the numbers of the corresponding questions in the original questionnaire):

Q1: To what extent are the modernization decisions based on intuition?
Q2: To what extent are formal or numerical methods used to justify the modernization decisions?
Q3: Which methods are used to justify the modernization decisions?
Q4: To what extent is there a need for the numerical description of the potential benefits of modernization?
Q5: To what extent has the calculation of the potential costs been possible in the previous modernization decisions?

In the first phase of the analysis (quantification and making themes), the decision making processes were analyzed as they were described by the interviewees. The basic idea of the analysis was to find answers to the analysis questions: A: Is the use of intuition directly admitted? B: Is the use of intuition admitted later on in the interview?

The themes were the direct admission of use and the later admission of use. The analysis of the direct admission and the later admission was performed by creating a table in which the interviews were classified according to the transcripts. The resulting table is shown as Table 1. The table is mainly based on question Q1.

The analysis question B (later admitted use of intuition) was answered by analysing the answers to the interview questions Q2-Q5. In several cases it turned out that an interviewee had given a negative answer to the analysis question A, but later said e.g. "I don't think we use any methods. I have not seen any." In such a case the answer to the analysis question B was considered positive (use of intuition admitted). In the same manner, a negative answer to the analysis question A combined with the later statement "Yes, we always calculate." was considered a negative answer to the analysis question B. The results of that classification are shown in Table 2.

The second phase of the analysis was discourse analysis. Discourse analysis was used in order to find out: C: Is the use of intuition evident based on the analysis of the transcribed texts? D: What formal, i.e.
numeric or quantitative, methods, tools or techniques are used for decision support?

The questions A, B, C, and D were deemed sufficient to reveal the underlying formal or intuitive nature of the decision making processes of the interviewees.

In this type of analysis the researchers try to see or understand what lies between the lines. A careful examination of the actual sayings of the interviewees, together with general impressions based on the analysis of the transcripts, is the main tool for eliciting the answers. For example, expressions like "We should use (quantitative methods)." and "There is nothing else than illusion" tell their story about missing formal methods in the real decision making processes, or about the methods' lack of suitability for the real world.

It is worth noting that the interviewees' use of language revealed the use of intuition. For example, the interviewees could not name or describe the methods which were used, or they used phrases like "We try to use quantitative methods" or "There is some price and then they give some value to it" or "We can do counting based on supposition and on experience and on things and values we had learnt". When an interviewee said that they try to use quantitative methods, it clearly tells that the decisions are not based on formal methods. The personnel of that particular organization would like to use, or feel they should use, quantitative methods, but for some reason that is not the way they usually work.

Cases where the interviewee used words like "some" and "something" reveal that they do not have the information and the methods required for formal decision making. In those cases the interviewee knew that quantitative methods were the expected way of assessment, but they are not used. Similarly, from answers which tell that something "is based on supposition or experience" it can be concluded that decision making is based on intuition and fuzzy reasoning, not on formal methods. In every interview there were clear indicators that showed the dominant role of intuition in decision making. In other words, every one of the interviewed decision makers relied on intuition, not on formal methods or procedures, in their decision making. The decisions were not really backed up with hard, fact-based data and evaluations; the decisions were based on the "gut feeling" of the decision making individual.

One of the most interesting results from the analysis was that the interviewees were not able to name or describe any formal or semi-formal decision making method. The most formal approaches mentioned were return on investment (ROI), cost estimation, and giving grades to different options. In most cases even that level of method definition was lacking. In addition to that, the vague methods mentioned by the interviewees are based on intuition, at least in the forms in which they were mentioned.
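As a concrete illustration of the quantification step described above, the following sketch codes each interview with two booleans for the analysis questions A and B and tallies the results in the manner of Tables 1 and 2; the interview records shown are invented placeholders, not the study's actual classifications.

```python
# Illustrative sketch of the quantification step: each interview is coded
# with two booleans (A: intuition admitted directly in Q1, B: admitted
# later in the answers to Q2-Q5), and the codes are tallied per question.
# The records below are invented placeholders, not the real study data.

from collections import Counter

interviews = [
    {"id": 1, "direct_admission": True,  "later_admission": True},
    {"id": 2, "direct_admission": False, "later_admission": True},
    {"id": 3, "direct_admission": False, "later_admission": False},
    # ... one record per transcribed interview
]

def tally(records, key):
    """Count positive/negative classifications for one analysis question."""
    return Counter("admitted" if r[key] else "not admitted" for r in records)

print("A (direct):", dict(tally(interviews, "direct_admission")))
print("B (later): ", dict(tally(interviews, "later_admission")))
```

The coding itself remains a qualitative judgment made by the researchers while reading the transcripts; the counting merely summarizes those judgments, which is exactly what distinguishes quantification of qualitative data from purely quantitative measurement.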
UNDERSTANDING THE RESULTS

When asked whether the modernization decisions were based on intuition, 12 of the 29 interviewees admitted it straight away at some level. Some of those interviewees said that they use intuition too much, some of them said that about 30% of decisions are based on intuition, and some of them said that it happens. But a closer analysis of the answers to the later questions, which handled quantitative methods and quantification, reveals that 20 of the 29 admitted intuition at that point. Admission could be seen in answers like "Really used? It is mostly the rule of thumb, that we count something" (when asked if some methods are used) or "I don't know how much there is real scientific base, but we have hands on experience". In other words, closer analysis showed beyond reasonable doubt that decision making relied on intuition, not on rational, fact-analyzing methods or working procedures.

It must be noted that the data analyzed in the reported study is fairly limited and the results may not be generalizable. It is reasonable, however, to consider the results fairly descriptive for Finland, due to the reasonably uniform educational background and experience of the interviewees. For other countries the results may also be reasonably accurate. That accuracy can be assumed because the age of people in similar decision making positions is about the same in different countries, and the educational background (actually the contents of the education, most likely a master's degree in computer science, engineering or business economics) is surprisingly similar in industrialized western societies.

According to the data and its analysis, it is clear that rational decision making methods were not widely used. Interviewees described some methods, like giving grades to different solutions or calculating costs, but they could not name or describe their methods very clearly or systematically. At least in the case of the interviewees, intuition has the dominant role in making software replacement or modernization decisions. That is contrary to the normal expectations held regarding economically significant technical decisions.

It is very interesting that most of the interviewees did not admit using intuition directly, although they used it. It tells that somehow they see intuition as an unfavorable way to make decisions. It can be seen from the answers that most of the interviewees do not consider intuition a part of the normal, recommended decision making process, and they do not see intuition as a professional way to make decisions, contrary to what the literature states (Simon 1979) (Seacord et al. 2003) (Sadler-Smith and Shefy 2004).

The state of affairs seems to be that the rational methods described in the literature are not in everyday use in information technology supplier or buyer organizations. In this study decision making was found to be based on various mixes of intuition, calculation and estimation, mixes in which intuition is in the dominant role. However, the interviewed decision makers wanted to see the decisions made in their organization as rational and formal. It even seems to be the case that they were very reluctant to admit, even to themselves, the lack of rational fact-based methods and the reliance on intuition.
It should, however, be noted that the concept of rationality has been questioned in some earlier studies, see e.g. (Parnas and Clements 1986). It could be that rational decision making is not as common a human activity as expected. In order to understand actual decision making and the possibility of creating useful tools and models for it, a good understanding of the decision criteria should be obtained. Such understanding could provide valuable insights into the actual use of formal decision making tools, methodologies and economic measures. For example, Herbert Simon suggests that a good manager needs both analytical, rational skills and intuition (Simon 1987). He states: "Behaving like a manager means having command of the whole range of management skills and applying them as they become appropriate". Recent scientific research also brings up the same idea (Patton 2003) (Sadler-Smith and Shefy 2004) (Miller and Ireland 2005).

This does not mean that we should forget rationality and formality; it means that we should not forget or deny the role of intuition. It may become very dangerous if rationality is faked and the real reasons behind the decisions are hidden. This may lead to faulty decisions (Cook, Elder and Ward 1997). Also, it is clear that faking makes the decisions non-traceable (Parnas 1998).

DISCUSSION

Software engineering related decision making is an interesting phenomenon. The common assumption is that those decisions should be performed by using rational and formal decision making tools, methods and processes; the software engineering literature boasts an abundance of different tools and methods which could be used in order to provide rational support for the decision maker. Those tools and methods are not, however, widely used in the real world.

The most interesting finding provided by the reported study is that the decision makers do not see intuition as a correct way of decision making, yet they use it extensively. That makes it very important to find out why rational methods are not widely used and why intuition is seen as an improper means to justify decisions. One reason could be that the methods described in the literature are too problematic to be used in the real world. For example, in today's hectic business environment, collecting the data needed for a specific analytical method might be too difficult and time-consuming (Patton 2003). Also, the amount of data needed for well-founded decisions might be too difficult to obtain. The reason why intuition is seen as a bad way of making decisions might lie in education and traditions. In business, management and computer science education, rationality is seen as the only way of making decisions (Sadler-Smith and Shefy 2004), and therefore the values of software engineers and decision makers favor pure reason and formal decisions. It is also worth raising the question whether there is a way to shorten the gap between the research world and the business world (Parnas 1998) (Glass 2003) regarding the tools and methods for actual decision making. Obviously, research based on real world cases is needed in this area.

Due to the nature and the limited geographical area of this reported study, the results may be less representative than the authors of this article assume. Nevertheless, the results of this study can be used as a basis for later discussion and studies of modernization decision making.
The role of intuition should be brought out more clearly in decision making. It is dangerous to use intuition and then deny using it. In some cases it is typical to find the justification for decisions after the decisions are made (Parnas and Clements 1986). This may lead to situations where decision making and its justification are thought to be rational, and then the final results are surprising and unwanted. This makes the decisions and the decision process non-traceable. Non-traceability makes the important learning from failures impossible (Miller and Ireland 2005). How can one learn something if he/she does not know what happened and how it happened? That kind of behavior also makes later rational decision making and decision making process improvements difficult. How can the processes be improved if the experiments and documentation of the previous processes do not correspond to reality?

It is important that the decisions and the criteria behind the decisions are described honestly. Without truthful justification and description, decisions cannot be ethical, and consequently they do not correspond to the expectations of the world around us. Such decision making is not an acceptable policy for software engineering professionals.

TABLE 1: Direct admissions regarding the use of intuition.
TABLE 2: The role of intuition is revealed or admitted later on.
Endothelial Dysfunction in Smokers Can Be Improved with Oral Cilostazol Treatment

Background: Smoking is one of the well-known environmental factors causing endothelial dysfunction and plays an important role in atherosclerosis. We investigated whether cilostazol could improve endothelial dysfunction in smokers, using the measurement of flow-mediated dilatation (FMD).

Methods: We enrolled 10 normal healthy male subjects and 20 male smokers without any known cardiovascular diseases. After measurement of baseline FMD, the participants were medicated with oral cilostazol 100 mg bid for two weeks. We measured the follow-up FMD after two weeks and compared these values between the two groups.

Results: There was no statistical difference in baseline characteristics, including age, body mass index, serum cholesterol profiles, serum glucose and high-sensitivity C-reactive protein, between the two groups. However, the control group showed significantly higher baseline endothelium-dependent dilatation (EDD) after reactive hyperemia (12.0 ± 4.5% in the control group vs. 8.0 ± 2.1% in the smoker group, p = 0.001). Endothelium-independent dilatation (EID) after sublingual administration of nitroglycerin was similar between the two groups (13.6 ± 4.5% in the control group vs. 11.9 ± 4.9% in the smoker group, p = 0.681). Two of the smoker group dropped out due to severe headache. After two weeks of cilostazol therapy, follow-up EDD was significantly increased in both groups (12.0 ± 4.5% to 16.1 ± 3.7%, p = 0.034, in the control group and 8.0 ± 2.1% to 12.2 ± 5.1%, p = 0.003, in the smoker group). However, the follow-up EID value was not significantly increased compared with the baseline value in either group (13.6 ± 4.5% to 16.1 ± 3.7%, p = 0.182, in the control group and 11.9 ± 4.9% to 13.7 ± 4.3%, p = 0.430, in the smoker group).

Conclusion: Oral cilostazol treatment significantly increased the vasodilatory response to reactive hyperemia in both groups. It can be used to improve endothelial function in patients with endothelial dysfunction caused by cigarette smoking.

Introduction

Normal endothelial functions include control of platelet adhesion, mediation of coagulation and immune function, and control of the volume and electrolyte content of body components. 1) Endothelial dysfunction is a pathophysiological condition in which these normal processes carried out by the endothelium are impaired. It is thought to be a key initial step in the development of atherosclerosis and can be used as a prognostic marker in predicting cardiovascular events, including stroke and heart attacks. 2)3) Endothelial dysfunction can result from disease processes including hypertension, dyslipidemia, and diabetes, as well as from environmental factors such as smoking. 1)

Endothelial dysfunction can be characterized as the inability of arteries to dilate fully in response to appropriate stimuli. This can be detected by flow-mediated dilatation (FMD), using temporary arterial occlusion produced by inflating a blood pressure cuff to high pressures. 4) Flow-mediated changes in conduit artery diameter are caused by shear-stress-induced generation of endothelium-derived vasoactive mediators, including nitric oxide. 5)6)

Cilostazol (Otsuka Pharmaceutical Co., Tokushima, Japan), 6-[4-(1-cyclohexyl-1H-tetrazol-5-yl)butoxy]-3,4-dihydro-2(1H)-quinolinone, is a 2-oxo-quinoline derivative with antithrombotic, vasodilator, antimitogenic and cardiotonic properties. 7) The compound is a potent inhibitor of phosphodiesterase (PDE)-3.
It may be useful for treating chronic arterial occlusive disease and the symptoms of intermittent claudication. 8) Cilostazol inhibits platelet aggregation and has considerable antithrombotic effects in vivo. The compound also relaxes vascular smooth muscle and inhibits mitogenesis and migration of vascular smooth muscle cells. In the heart, it has positive inotropic and chronotropic effects. 7)8) The present study investigated whether cilostazol could improve endothelial dysfunction, particularly in young male smokers.

Subjects

We enrolled 10 healthy male non-smokers and 20 active male smokers (average 6.0 ± 3.3 pack-years) matched for age and weight. All were volunteers with no past or present history of coronary disease, and clinical laboratory tests showed no signs of hypertension, dyslipidemia, or diabetes. All participants provided written informed consent. A total of 200 mg/day of oral cilostazol was administered, divided into morning and evening doses, for two weeks. Smoking levels in the smoker group remained constant during the study period.

The diameter of the brachial artery was measured in response to an increase in blood flow (causing shear stress) during reactive hyperemia (induced by transient inflation of a blood pressure cuff). This response constitutes endothelium-dependent dilatation (EDD). Endothelium-independent dilatation (EID) was defined as the proportional increase in diameter after sublingual nitroglycerin, an endothelium-independent dilator. The brachial artery was scanned and its diameter measured under four conditions: at baseline, during reactive hyperemia (induced by transient inflation of a sphygmomanometer cuff), 20 minutes after hyperemia, and finally after administration of sublingual nitroglycerin. After measurement of the baseline brachial artery diameter (BD0), a sphygmomanometer cuff was inflated to 200 mmHg around the upper arm for 5 minutes. The hyperemic brachial artery diameter (BDh) was measured within 5 minutes after deflation of the cuff. Another baseline brachial artery diameter (BD1) was measured after 20 minutes of rest following reactive hyperemia. The vasodilator response (BDn) was obtained within 4 minutes after administration of a single sublingual nitroglycerin tablet (400 µg).

Vascular endothelial function test

EDD was calculated according to the following formula:

EDD (%) = (BDh - BD0) / BD0 × 100

EID was calculated analogously from the nitroglycerin response:

EID (%) = (BDn - BD1) / BD1 × 100

Statistical analysis

We used a commercial program, SPSS version 17.0 (SPSS Inc., Chicago, Illinois, USA) for Microsoft Windows, for statistical analysis. Numeric variables are expressed as mean ± SD. Differences in continuous variables between the two groups were analyzed using a nonparametric test (Mann-Whitney U test). Differences between baseline and follow-up data were analyzed using the paired-sample t-test. Intraobserver and interobserver agreement in FMD was tested using the baseline values of 10 participants, according to the statistical methods proposed by Bland and Altman. 9) All measurements were transformed to an equivalent percentage scale of agreement according to the following formula:

Agreement index = 100 - |X1st - X2nd| / [(X1st + X2nd)/2] × 100

in which X1st and X2nd are the measures obtained in a twice-repeated evaluation using the same technique in the same patient. The measure of reproducibility was 2 SD of the intraobserver and interobserver agreement indexes; these coefficients of variation (COV) were therefore equal to 2 SD of |X1st - X2nd| / [(X1st + X2nd)/2].
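As an illustration of the arithmetic just described, here is a minimal computational sketch (Python with NumPy and SciPy). All diameter values, repeated measurements, and group data below are hypothetical and serve only to demonstrate the formulas and the statistical tests named in this section; they do not reproduce the study's data.

```python
# Minimal sketch of the FMD formulas and tests described above (hypothetical data).
import numpy as np
from scipy import stats

def edd(bd0: float, bdh: float) -> float:
    """Endothelium-dependent dilatation (%): hyperemic change from baseline."""
    return (bdh - bd0) / bd0 * 100

def eid(bd1: float, bdn: float) -> float:
    """Endothelium-independent dilatation (%): nitroglycerin-induced change."""
    return (bdn - bd1) / bd1 * 100

def agreement_index(x1, x2):
    """Percentage agreement between twice-repeated measurements (Bland-Altman style)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    return 100 - np.abs(x1 - x2) / ((x1 + x2) / 2) * 100

# Hypothetical brachial artery diameters (mm) for one participant.
print(f"EDD = {edd(4.00, 4.38):.1f}%")   # baseline vs. post-hyperemia
print(f"EID = {eid(4.02, 4.50):.1f}%")   # pre- vs. post-nitroglycerin

# Hypothetical twice-repeated FMD values (%) in 10 participants.
first = np.array([9.1, 12.3, 7.8, 10.5, 11.0, 8.4, 13.2, 9.9, 10.8, 12.0])
second = np.array([9.8, 11.6, 8.3, 10.1, 11.9, 8.0, 12.5, 10.6, 10.2, 12.7])
relative_diff = np.abs(first - second) / ((first + second) / 2) * 100
print(f"mean agreement index = {agreement_index(first, second).mean():.1f}%")
print(f"COV (2 SD) = {2 * relative_diff.std(ddof=1):.1f}%")

# Between-group comparison (Mann-Whitney U) and within-group baseline vs.
# follow-up comparison (paired t-test), as in the statistical analysis section.
control_edd = np.array([12.1, 15.3, 8.9, 11.4, 13.0, 10.2, 17.8, 9.5, 12.6, 11.8])
smoker_baseline = np.array([7.5, 9.1, 6.8, 8.2, 10.4, 7.9, 8.8, 6.1, 9.6, 8.3])
smoker_followup = np.array([11.9, 13.0, 10.2, 12.6, 14.8, 11.1, 13.5, 9.7, 13.9, 12.4])
_, p_between = stats.mannwhitneyu(control_edd, smoker_baseline, alternative="two-sided")
_, p_within = stats.ttest_rel(smoker_baseline, smoker_followup)
print(f"Mann-Whitney p = {p_between:.3f}, paired t-test p = {p_within:.3f}")
```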
The intraobserver agreement was 90.8% and the COV was 8.9%. The interobserver agreement was 82.8% and the COV was 14.3%. A p value of less than 0.05 was considered statistically significant.

Results

There was no significant difference in clinical characteristics, including age, body mass index, blood pressure, serum cholesterol profile, high-sensitivity C-reactive protein, and serum glucose level, between the two groups (Table 1). There was no significant side effect associated with the use of oral cilostazol except headache. Although headache was well controlled with oral analgesics in the majority of subjects, two of the 20 smokers dropped out because of uncontrolled severe headache. Baseline EDD was significantly higher in the control group than in the smoker group (12.0 ± 4.5% in the control group vs. 8.0 ± 2.1% in the smoker group, p = 0.002, Fig. 1A). Baseline EID was similar in the two groups (13.6 ± 4.5% in the control group vs. 11.9 ± 4.9% in the smoker group, p = 0.681, Fig. 1B). After two weeks of oral cilostazol treatment, follow-up EDD values were significantly increased in both groups (control group: 12.0 ± 4.5% to 16.1 ± 3.7%, p = 0.041, Fig. 2A; smoker group: 8.0 ± 2.1% to 12.2 ± 5.1%, p = 0.003, Fig. 2B). Follow-up EDD in the control group remained higher than that in the smoker group (16.1 ± 3.7% in the control group vs. 12.2 ± 5.1% in the smoker group, p = 0.018).

Discussion

In this FMD study with oral cilostazol, we demonstrated a decreased baseline EDD level in the smoker group and a significant improvement of EDD with the administration of oral cilostazol in this group. Cilostazol, however, did not change EID in this group. Nitric oxide (NO), also known as endothelium-derived relaxing factor (EDRF), is an important signaling molecule in the mammalian body. It is biosynthesized endogenously from L-arginine and oxygen by the nitric oxide synthases (NOS). 10) It contributes to vessel homeostasis by inhibiting platelet aggregation, vascular smooth muscle contraction and growth, and leukocyte adhesion to the endothelium. 10) FMD is one method of examining endothelial function. Increased shear stress in the arterial wall, usually caused by occlusion of the artery, can open specialized ion channels in the endothelium, and the calcium influx through these channels stimulates endothelial nitric oxide synthase (eNOS). Increased eNOS activity enhances NO synthesis, and the elevated NO evokes vasodilation. 3) This vasodilatory effect of NO (EDRF) on the vessel can be assessed by FMD. Arterial distensibility after reactive hyperemia is influenced by a variety of factors. FMD can be influenced by a single high-fat meal, 11) mental stress, 12) cigarette smoking, 13) hyperglycemia, 14) and changes in electrolytes (sodium and calcium). 15) EDD can also be decreased by theophylline (an adenosine receptor antagonist) or ibuprofen (a prostaglandin synthesis inhibitor). 16) Cigarette smoking is one of the well-known risk factors for endothelial dysfunction, and there are strong relationships between cigarette smoking, atherosclerotic burden, and ischemic heart disease. [17][18][19] Cigarette smoking initiates atherosclerosis and promotes cardiovascular disease through multiple mechanisms, including vasomotor, neurohormonal, and hematologic dysfunction and increased oxidative stress. 19)20) Endothelial dysfunction can result from inhaled cigarette smoke.
Smoking as few as two cigarettes a day doubles the number of damaged endothelial cells in the bloodstream. [19][20][21] Several drugs improve endothelial dysfunction: angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, and nifedipine in hypertensive patients; statins, fibrates, and omega-3 fatty acids in dyslipidemic patients; and metformin and rosiglitazone in diabetic patients. 22) Few studies have addressed the improvement of endothelial dysfunction in smokers. Smoking cessation is one well-established way to restore endothelial function. 23) Guthikonda et al. 24) reported that allopurinol reverses endothelial dysfunction in heavy smokers. Oida et al. 25) showed that oral cilostazol treatment can be associated with improvement of endothelial dysfunction.

Cilostazol, a selective PDE-3 inhibitor, increases cyclic adenosine monophosphate (cAMP) in platelets and inhibits platelet aggregation. Moreover, increased cAMP in vascular smooth muscle cells activates protein kinase A and decreases the intracellular calcium concentration. These effects result in vasodilation. 26) Yasuda et al. 26) reported that two weeks of oral cilostazol treatment increased tissue blood flow through the pedal vessels via vasodilation in patients with peripheral arterial occlusive disease. Cilostazol has been approved by the Food and Drug Administration (FDA) for the treatment of peripheral arterial occlusive disease since 1999. 27) Recently, triple-combination antiplatelet therapy (aspirin, clopidogrel, and cilostazol) has been reported to reduce thrombotic complications and the recurrence of restenosis after coronary stent implantation, especially in diabetic patients. 28)

Additional effects of cilostazol on the vascular endothelium have been described, and cilostazol has been associated with increased NO production. Ikeda et al. 29) showed that cilostazol increased NO production in cultured vascular smooth muscle cells via the cAMP pathway. Nakamura et al. 30) reported that cilostazol dilated the thoracic aorta via NO released from the aortic endothelium. Increased NO production with cilostazol treatment may account for the increased EDD in our study. Our result is similar to that previously reported by Oida et al. 25) However, the mean age of the participants was younger (late twenties vs. late thirties) and the treatment dosage was higher (200 mg per day vs. 150 mg per day) in our study. Because of the younger age and shorter duration of smoking of our participants, the EDD levels in our study may have been higher than those in the study by Oida et al. 25) Cilostazol improved reactive hyperemia in these eighteen smokers. This effect may result from NO production and relaxation of smooth muscle cells in the vessel wall. Endothelial dysfunction can result from smoking through increased oxidative stress, such as oxygen free radicals, which disturbs normal endothelial function. Therefore, cilostazol can improve endothelial dysfunction caused by cigarette smoking.

[Figure: follow-up EID after two weeks of cilostazol. (A) Control group: 13.6 ± 4.5% to 15.9 ± 4.7%, p = 0.225. (B) Smoker group: 11.9 ± 4.9% to 13.7 ± 4.3%, p = 0.430.]

In this study, most participants complained of headache after cilostazol medication. Most of the normal participants complained of severe headache despite oral analgesic treatment, and two of the 20 smokers dropped out because of severe headache. In conclusion, the administration of cilostazol improves the vasodilatory response to reactive hyperemia in smokers.
Human Rights of Indigenous Small-Numbered Peoples in Russia: Recent Developments

In Russia, there exist legal norms providing for the protection of indigenous small-numbered peoples' rights. Yet indigenous small-numbered peoples face multiple challenges when it comes to the implementation of their rights. After a brief presentation of the Russian legislation on the rights of indigenous small-numbered peoples, the peculiarities of the Russian legal system, and the impediments to the legal provisions regulating the status of indigenous small-numbered peoples, this article addresses several issues related to the implementation of indigenous small-numbered peoples' rights in Russia today. One of the core issues is the attribution of individual members of indigenous communities to indigenous small-numbered peoples. Such attribution is still challenging despite the newly adopted amendments to the 30 April 1999 Federal Law N 82-FL 'On Guarantees of the Rights of Indigenous Small-Numbered Peoples of the Russian Federation'. Another issue is the application of the notion 'foreign agent' to individuals and non-commercial organizations. Still another issue is the State's pressure on independent indigenous organizations. The final challenge is the possible impact of the amendments to the Constitution approved by popular vote in July 2020 on the rights of indigenous small-numbered peoples.

Introduction

This article is devoted to a discussion of the present situation concerning the human rights of indigenous small-numbered peoples in Russia. The issues raised by the author are non-exhaustive but can be considered highly topical at the present time. In the first part of the article, the author presents the Russian legal acts on the status of indigenous small-numbered peoples, explains which peculiarities of the Russian legal system affect this status, and discusses one of the lingering challenges: impediments to the legal provisions regulating the status of indigenous small-numbered peoples. The second part of the article focuses on the following issues: the attribution of individual members of indigenous communities to indigenous small-numbered peoples, the introduction of the notion 'foreign agent' into Russian legislation, the State's pressure on independent indigenous organizations, and the impact of recently adopted amendments to the Constitution on the rights of indigenous small-numbered peoples.

The topicality of the issues selected by the author has been emphasized at the international level. In its 113th session, the UN Human Rights Committee expressed its concern that "insufficient measures [are] taken to respect and protect the rights of indigenous peoples and to ensure that members of such peoples are recognized as indigenous." 1 Already in 2010, the Special Rapporteur on Indigenous Peoples' Rights, James Anaya, drew public attention to the fact that there was a lack of effective implementation of laws on the rights of indigenous peoples. 2 For a long time, there was no unified legal procedure at the federal level regarding the attribution of individual members of indigenous communities to indigenous small-numbered peoples. Recently, the Federal Law (FL) of 6 February 2020 N 11-FL 'On Amendments in FL 'On Guarantees of the Rights of Indigenous Small-Numbered Peoples of the Russian Federation' (amendments in FL 'On Guarantees') was adopted.
The law, which came into force on 7 May 2020, concerns the establishment of registration procedures for persons belonging to indigenous small-numbered peoples and the establishment of a unified registry for these purposes. However, there is reason for concern. The origin of the determining criteria for the identification of persons belonging to indigenous small-numbered peoples is ambiguous. Moreover, the implementation procedures of the amendments are unclear. As a result, individuals belonging to indigenous small-numbered peoples may not receive the benefits enshrined in law. In addition, indigenous activists are concerned about the survival of some small groups of peoples as indigenous small-numbered peoples following the adoption of the amendments and after the 2020 Census, which has been postponed to 2021. When the 2020 Census was being prepared in the State Duma, the opinion was expressed that "Oroch people living on their land from time immemorial […] will be eliminated from the indigenous small-numbered peoples' list as a result of the 2020 Census." 3,4 Despite the entry into force of the amendments, the registry itself will start functioning only in 2022.

The Special Rapporteur on Indigenous Peoples' rights drew public attention to the application of the notion 'foreign agent' to certain individuals and legal entities, whose activities can be limited once they receive this status. 5 Yet another concern is the State's pressure on independent indigenous organizations. For example, the work of the Center for Support of Indigenous Peoples of the North/Russian Indigenous Training (CSIPN) was terminated by the Ministry of Justice in 2019. According to the head of this Centre, state authorities continue terminating the activities of groups who try to raise their voice. 6 An earlier but telling example is the suspension of the work of one of the largest indigenous organizations, the Russian Association of Indigenous Small-Numbered Peoples of the North, Siberia and the Far East (RAIPON), in 2013. After reopening, the organization has shown loyalty to the central government. 7 A final concern regards the amendments to the Constitution approved by popular vote in July 2020 and their impact on the rights of the indigenous small-numbered peoples of Russia.

1.1 The Russian legislation

The Russian Federation (RF) is not a party to the Indigenous and Tribal Peoples Convention N 169 (ILO 169). 8 Russia did not endorse the Universal Declaration on the Rights of Indigenous Peoples, although it was advised many times to do so. 9 The RF voted in favor of the Universal Declaration of Human Rights but is not bound by this document. However, the RF has joined other core international minority instruments and is consequently bound to protect indigenous rights. 10,11 According to Article 69 (1) of the Constitution of the RF, "the RF shall guarantee the rights of the indigenous small peoples according to the universally recognized principles and norms of international law and international treaties and agreements of the RF." Moreover, according to Article 15 (4), "the universally-recognized norms of international law and international treaties and agreements of the RF shall be a component part of its legal system. If an international treaty or agreement of the RF fixes other rules than those envisaged by law, the rules of the international agreement shall be applied."
The Constitution also sets forth, in Article 72 (1(м)), that "protection of traditional living habitat and of traditional way of life of small ethnic communities" falls within the joint competence of the Federation and the subunits of the RF. The rights of indigenous small-numbered peoples are partly enshrined in the Land Code of the RF, 12 the Water Code of the RF, 13 the Tax Code of the RF, 14 the Forest Code of the RF, 15 other federal laws, and several decrees of the Government of the RF. In addition, some federal subunits have introduced legal regulations at the regional level. According to Kryazhkov, the intensity of legal regulation of the rights of indigenous small-numbered peoples in different subunits depends on many factors: the ethnicity of the region (examples of sufficient regional legislation are the regulatory frameworks of the Republic of Sakha (Yakutia) and the Nenets, Khanty-Mansi and Yamalo-Nenets Autonomous Okrugs); the economic situation (the economic indicators are among the highest in the country in the Khanty-Mansi and Yamalo-Nenets Autonomous Okrugs, where there is extensive regional legislation on the rights of indigenous small-numbered peoples); the political will, expressed in the readiness of the institutions of public authority to understand the needs of small-numbered peoples and to create the necessary conditions to meet these needs; and the proactive stance of the small-numbered peoples and their associations and their capability to clearly identify their interests and interact with bodies of state power. 16

1.2 Peculiarities of the Russian legal system

The Russian legal system has some peculiarities, inter alia due to the federal structure (federalism) of the State. The Russian Constitution defines three competence spheres of the State and the federal subunits of the State: the exclusive competence of the RF (Article 71), the joint competence of the RF and the subunits of the Federation (Article 72), and the full competence of the subunits of the RF (Article 73). There must be no contradiction between federal legislation and subunit legislation. In cases where there is a contradiction, federal legislation applies. This rule refers to the two former situations. If an issue falls within a subunit's competence, the legal acts of this subunit prevail. The implications of federalism for the purposes of the present article can be outlined as follows. Firstly, the regulation of the legal status of indigenous small-numbered peoples falls within the competence of the RF (Article 71 (в)). Secondly, Article 72 (1(б)) refers the protection of the rights of national minorities to the joint competence of the RF and its subunits. This results in joint regulation of this issue: there exist regulations at the federal level and at the level of the subunits. Another peculiarity of the Russian legal system is the existence of various types of subunits of the RF depending on their national-territorial status. According to Article 5 (1) of the Constitution of the RF, there exist "republics, krays, oblasts, cities of federal significance, an autonomous oblast, autonomous okrugs, which shall be equal subunits of the RF." Article 5 (4) reads, "[a]ll subunits shall be equal between themselves in mutual relationships with federal agencies of State power." Thus, the Constitution of the RF establishes the equality of all the subunits of the RF.
During the existence of the USSR, the Autonomous Okrugs 17 were established to secure the rights of indigenous small-numbered peoples. It can no longer be claimed that these subunits correspond to the number of indigenous small-numbered peoples currently inhabiting these areas. 18

Another peculiarity of the Russian legal system is the use of the term 'indigenous small-numbered peoples.' There are 47 indigenous peoples in the RF. 19 Forty of these indigenous peoples reside in territories of the North, Siberia and the Far East of Russia. The Russian legislation introduces the term 'indigenous small-numbered peoples.' FL 'On Guarantees' provides the following definition of indigenous small-numbered peoples of the RF in Article 1 (1): "peoples who live in the territories traditionally inhabited by their ancestors, maintain their traditional way of life and economic activity, number fewer than 50 000 and identify themselves as separate ethnic communities." There is a difference between the international term 'indigenous peoples' and the Russian term, which introduces a numerical criterion. Indigenous peoples numbering more than 50 000 members are denied state legal support because they do not fall within the definition of indigenous small-numbered peoples under the Russian legislation. 20 Among such peoples are, for example, the Yakuts, Komi, Tuvans, Altaians, Khakas, Buryats, and the Karelians. Application of the numerical criterion can result in confusing situations. For example, the population of the Nenets indigenous small-numbered peoples is approaching 50 000. If the population exceeds 50 000, the Nenets can lose their status as an indigenous small-numbered people and, consequently, will not receive state legal support. This example shows that the application of the numerical criterion seems artificial and conditional in some situations. 21,22

1.3 Impediments to the legal provisions

As mentioned in Subsection 1.1, the FL 'On Guarantees' comprises the core legal framework protecting the rights of indigenous small-numbered peoples. An analysis "of the content of this federal law allows to maintain that it has not achieved the required internal coherence and the completeness of the unity of legal regulation." 23 Similar challenges apply to two other federal laws on indigenous rights ('On General Principles' and 'On Territories'). Defective legislation is one of the main challenges regarding the protection of indigenous small-numbered peoples' rights in Russia. This problem is complex and encompasses several smaller issues. The first issue is that the terms and concepts used in the legislation are not defined. This is typical of the norms on the protection of the traditional lands and ways of life of indigenous small-numbered peoples. For example, no consolidation of legal concepts such as 'the cultural heritage of indigenous small-numbered peoples,' 'objects of the cultural heritage of indigenous small-numbered peoples,' and 'sanctuaries' has been carried out. Another example is the absence of a definition of 'discrimination' in the Russian legislation, which is relevant to several spheres of life of the indigenous small-numbered peoples of Russia. According to Minority Rights Group Europe, this complicates the implementation of anti-discriminatory provisions. 24 The second issue is the lack of specific norms defining legal mechanisms for the realization of the proclaimed rights. 25
Nevertheless, the state authorities of the Sakhalin Oblast have not elaborated a procedure for the allocation of land plots in the regional reserve 'Severny' to associations of small-numbered peoples for the purpose of carrying out traditional economic activities and traditional crafts on the territories of the nature reserve. Another example: according to Article 8 (1(8)) of the FL 'On Guarantees', indigenous small-numbered peoples have the right to compensation for losses caused by damage to their traditional lands by the economic activities of organizations of all forms of ownership. However, the federal legislation does not provide procedures for the effectuation of compensation from legal entities responsible for harming the traditional lands inhabited by indigenous small-numbered peoples. 26 Still another example illustrating the lack of specific norms providing for the implementation of indigenous peoples' rights is the participation of these peoples in environmental and ethnological assessments. It is not clear how indigenous small-numbered peoples are to take part in such assessments when federal and regional state programs are developed. These programs concern the extraction of natural resources and environmental protection on the territories of indigenous small-numbered peoples. These three examples demonstrate the current need for the specification of implementation mechanisms in the legislation on indigenous peoples' rights.

The third problem is the lack of unified existing practices in the form of lists (in Russian, 'perechen'). This lack of consolidation results in a fragmentation of legal provisions. It is necessary to elaborate unified criteria which apply to phenomena in order to summarize them. For example, the current legislation does not provide a consolidated list of traditional catching methods and tools for fishing. This has resulted in confusion when executive bodies fail to identify methods and tools as traditional. In such cases, access to fishing grounds may not be granted. The fishing example was addressed by the Federal Arbitration Court of the Far East District. In its Ruling of 6 August 2013 on case N A24-40/2013, the Court agreed with the indigenous community's claim. The counterpart to the indigenous community in this case was the North East territorial department of the Federal Agency for Fishery. The Agency's main argument was that the indigenous community used non-traditional fishing methods (fishing with a mobile bottom dragnet). The Federal Arbitration Court ruled that the permit-issuing body had wrongly identified this method as non-traditional and, as such, had interpreted the legal norms incorrectly. This problem is discussed in detail in Zmyvalova's article "Indigenous Peoples of the Russian North and Their Right to Traditional Fishing." 27

The fourth challenging area concerns the subdivision of competences between public authorities located at different territorial levels within the State (the federal level, the subunit level and the municipal level). There is no clear distribution of powers between these authorities in several areas, such as the protection of the traditional lands and lifestyles of indigenous small-numbered peoples. The lack of a clear delineation of powers undermines a systemic approach to regulating the mentioned relations. This, in turn, results in drawbacks in legal regulation at the regional and municipal levels.
These drawbacks include gaps in regulation as well as excessive regulation, contradictions within the legal system of the State, and ineffective protection of the rights of indigenous small-numbered peoples. 28 An example that illustrates the lack of distribution of competences is when bodies of executive power create additional requirements for the beneficiaries of rights. The Department of Natural Resources and the Non-Oil and Gas Sector of the Economy of the Khanty-Mansi Autonomous Okrug-Ugra puts forward additional requirements to members of the community when considering their applications for traditional fishing quotas. The community contested this practice in the Arbitration Court of the West Siberian District. In its Ruling of 26 August 2014 on case N A75-12108/2013, the Court also interpreted the legal norm restrictively and took the side of the authorities. The Court concluded that some members of the community did not reside on the territory of their traditional habitation because they did not have residence registration in this territory. This resulted in the necessity for each member of the community to apply individually for traditional fishing quotas.

The fifth problematic issue is the lack of a systemic approach in the way subunits approach law-making concerning indigenous small-numbered peoples. Law-making practice in the subunits is diverse. Some subunits have substantial practice, i.e. their laws are elaborated and detailed. Other subunits have superficial practice, i.e. their law-making lacks a unified approach and an understanding of priorities regarding the human rights of indigenous small-numbered peoples. The issue of indigenous languages is an illustration of such varying practice. Only a few subunits have elaborated laws regulating the status of the indigenous languages of the peoples who reside there. The Nenets Autonomous Okrug is an example of such a subunit. The Murmansk Oblast is an example of the opposite.

2.1 Attribution of individual members of indigenous communities to indigenous small-numbered peoples

The question of recording ethnic identity was addressed by the Constitutional Court in its Ruling concerning Article […] (2) of FL 'On Acts of Civil Status'. The Court ruled that data on ethnic identity should be entered into the record of the birth of a child and on a birth certificate at the request of the interested parties. According to the Ruling, the obligatory data in a birth certificate shall not include information about ethnic identity. A birth certificate is used by citizens in legal relations and is, therefore, an important source of information about, and acknowledgment of, the ethnic identity of a person. The act of attribution of ethnic identity is based on the principle of self-identification. As established in court practice, a citizen is entitled to self-determine their ethnic identity at any time, for any ethnicity, and an unlimited number of times. It is worth noting that Article 26 of the Constitution provides for individuals' right to freely indicate their ethnic identity and that nobody can be forced to identify and indicate their national identity. Despite the existing constitutional framework for the indication of identity and the opinion of the Constitutional Court, members of indigenous small-numbered communities face challenges in this regard.

Until 1997, the issue of the individual ethnic identification of indigenous small-numbered peoples was resolved in a unified manner at the federal level. There was a line in passports where citizens could add their national identity. 30,31 This line was removed from passports in 1997. 32 However, passports that contain the national identity line were still in use until 2004. 33
Currently, the line 'nationality' exists in documents certifying the state registration of acts of civil status: birth, marriage, divorce, adoption, establishment of paternity, and change of name. 34 For a long time, there existed no mechanisms at the federal level that allowed individual members of indigenous small-numbered peoples' communities to register their ethnic attribution. This led to the establishment of such procedures in the federal subunits. According to Plyugina: "[o]n the one hand, this practice is not consistent with the constitutional distribution of competences of the Russian Federation and subunits of the Russian Federation, according to which the regulation of human and civil rights and freedoms as well as the regulation of the rights of national minorities belongs to the competence of the RF (in this case it regards regulation, but not protection). On the other hand, without establishment of the fact of belonging to indigenous small-numbered peoples of persons who in fact are such, it is impossible to exercise the rights and freedoms arising from the corresponding legal status." 35

The following practices were used to determine national identity in the subunits. The first and main practice applied by the subunits is a record of national identity in a birth certificate or a court decision. For example, this is provided in the law of the Khanty-Mansi Autonomous Okrug-Yugra of 14 November 2002 N 62-O3 'On the Transport Tax in the Khanty-Mansi Autonomous Okrug-Yugra.' According to Article 4 (1(6)), the documents confirming a citizen's belonging to an indigenous small-numbered people of the North residing in the territory of the Khanty-Mansi Autonomous Okrug-Yugra (the Khanty, Mansi, or Nenets peoples) are a birth certificate or a court decision, entered into force, on the established fact of a citizen's national identity. Similar provisions can be found in Article 6: in order to receive educational support, it is necessary to submit the birth certificate of the student or of one of his or her parents indicating belonging to an indigenous small-numbered people of the North, Siberia and the Far East, or to submit a court decision entered into legal force on the establishment of the fact of national identity, for students from indigenous minorities from families where the only parent or at least one of the parents belongs to an indigenous small-numbered people.

The second practice was a passport insert. The issuance of a so-called insert to a passport of a citizen of the RF was regulated by the Decree of the Government of the RF of 9 December 1992 N 950 'On Temporary Documents Certifying the Citizenship of the RF' (invalid at present). In the subunits, there were attempts to apply their own 'inserts'. A decree on such an annex existed in the Republic of Sakha (Yakutia) from 2000 to 2016. 37

The third practice was the use of archival records in some subunits, which were also used as evidence of belonging to an indigenous small-numbered people. Such a document, together with a birth certificate and a court decision, is specified in the […]. There are also other documents that can confirm belonging to indigenous small-numbered peoples in the subunits. Examples of such documents are documents of local self-government (for example, certificates confirming the residence of a person within territories of traditional nature use where they carry out traditional economic activities).
In addition, information provided by indigenous communities is sometimes used to confirm the national identity of indigenous persons. Confirmation of national identity by court decision has become common practice in recognizing the national identity of indigenous persons. For example, in the Murmansk Oblast, a member of the Sámi indigenous small-numbered peoples' community, Andrei Danilov, intended to exercise his special right to traditional hunting in 2019. Initially, he applied to the Ministry of Natural Resources and Environment of the Murmansk Oblast to have a note made in his hunter's ticket that he had the right to hunt as a Sámi. The Ministry rejected his application for two reasons: he had to prove his individual ethnic identity, and he had to prove that hunting supports his traditional way of life. According to Danilov, these requirements were illegal, and he addressed the Ombudsman of the Murmansk Oblast to protect his rights. The Ombudsman concluded that the procedure to establish individual ethnic identification involves applying to a court and submitting all the necessary documents. 38 Danilov did so, regardless of the fact that his identity was already confirmed in his birth certificate. Even though some bodies of State power in other subunits recognize the record of national identity as sufficient, in the Murmansk Oblast this fact currently needs to be confirmed by a court decision. Courts, as a rule, assess the totality of criteria for the attribution of individual ethnic identity to indigenous persons. In some cases, a specific criterion is enough to determine the national identity of members of indigenous small-numbered communities.

The existence of such varying practice at the subunit level, as well as the lack of unified practice at the federal level, resulted in the need to create a unified procedure for determining the national identity of indigenous small-numbered peoples at the federal level. For example, the first head of RAIPON and later head of the Center of Development of Reindeer Herding and deputy director of the non-commercial partnership "The Russian Arctic Development Center," Khariuchi, spoke in favour of "adopting a normative act granting persons from indigenous small-numbered peoples the right to indicate their national identity in a special insert to a passport." 39 According to him, the FL 'On Guarantees' should be amended correspondingly. Khariuchi also proposed "to develop a decree of the Government of the Russian Federation 'On Approval of Regulations of the Certificate Confirming National Identity of Persons Belonging to Indigenous Small-Numbered Peoples of the North, Siberia and the Far East of the Russian Federation'." 40 According to Kryazhkov, "it would be right to develop a general procedure for ethnic identification of persons who want to be officially included into one or another small-numbered people in order to obtain privileges granted to these peoples. The absence of such an order creates problems." 41 To solve this problem, Kryazhkov has proposed "to prepare a federal normative legal act on ethnic identification of persons from among indigenous small-numbered peoples." 42 According to Andrichenko, the procedures regulating the legal status of the indigenous small-numbered peoples of Russia must be provided in the text of the FL 'On Guarantees'. 43 This has resulted in the amendments to the law 'On Guarantees' initiated by the Federal Agency for Nationalities Affairs.
It has also been proposed to create a federal registry of persons belonging to the indigenous small-numbered peoples of Russia in order to simplify the allocation of privileges and state support. Commenting on the creation of such a registry, Fondahl, Filippova and Savvinova underline that "only with the adoption of the law calling for the establishment of an Indigenous registry […], has the issue started to be addressed on how an individual who is a member of an Indigenous people can authenticate her or his claim to be indigenous, in cases where such is required." 44 According to the newly adopted amendments to the FL 'On Guarantees', indigenous persons must provide proof of their identity to be included in the official lists of persons belonging to the small-numbered peoples of Russia. Such an acknowledgement implies that the RF will grant certain privileges to these indigenous small-numbered peoples. 45 The amendments came into force on 7 May 2020. In addition to the previously mentioned purpose of these amendments to the FL 'On Guarantees', i.e. the obtaining of benefits by indigenous persons, "minimizing the corruption component in providing support to indigenous persons and reducing the number of abuses in the provision of benefits" 46 is indicated.

The amendments to FL 'On Guarantees' contain general data concerning the registry's formalities, such as what information is to be entered in the registry and what materials and information are to be submitted by applicants to the authorized bodies for inclusion in the registry. 47 As determined in para 2 of the FL of 6 February 2020 N 11-FZ 'On Amendments into FL 'On Guarantees of the Rights of Indigenous Small-Numbered Peoples of the RF', in the section establishing procedures for registering persons belonging to indigenous small-numbered peoples, "[t]he procedure for conducting the list, providing data from the list, as well as interaction between federal bodies of executive power and local self-government with the authorized body about conducting the list, is determined by the Government of the RF." At present, this order does not exist and is under development. 48 Moreover, in accordance with Article 7.1 (1(2)) of FL 'On Guarantees', bodies of State power, bodies of self-government and State extra-budgetary funds use the data contained in the list (registry) and have no right to demand that persons belonging to small-numbered peoples submit documents containing data about their nationality. This provision comes into force on 7 February 2022 according to FL N 11-FZ 'On Amendments into the FL 'On Guarantees'. This means that such a list will have been created by 7 February 2022, in accordance with the legislator's plan.

The amendments have raised questions from indigenous small-numbered peoples and activists regarding their interpretation and compliance. 49 In their opinion, the amendments have anti-constitutional and discriminatory traits. When the draft was discussed, it was noted that the participation of indigenous small-numbered peoples in the preparation of the rules for establishing and maintaining the federal registry of indigenous small-numbered peoples was not presupposed. The purpose of the submission of certain types of information for the registry, such as the personal number of a taxpayer and the insurance account number of an insured person, is not clear. It is also unclear whether the child of indigenous parents who already have indigenous status will also have to apply to be included in the registry or whether this will happen automatically.
Another question is whether a whole family can apply to be entered into the registry or whether every family member must apply individually. The procedures for applying and submitting information are unnecessarily complicated. A problem may occur for those members of indigenous small-numbered peoples whose way of life or illiteracy prevents them from understanding all the nuances of the application procedure. Presenting the concerns of different stakeholders, Fondahl, Filippova and Savvinova identify three main challenges associated with the registry: "exclusion and inclusion in the registry; the burden of proof of indigeneity; and the question of who ultimately decides who is Indigenous." 50 Such concerns are shared by some indigenous peoples' organizations, for example, the local public organization 'Association of the Indigenous Small-Number Peoples of the North of the Evenki Municipal District of the Krasnoyarsk Kray 'Arun (Revival)' 51 and an informal group of leaders and activists of indigenous small-numbered peoples of the North, Siberia and the Far East called 'Aborigen Forum'. 52,53 As a result of the heated discussion of the amendments to FL 'On Guarantees', the 'Revival' forwarded its opinion to the State Duma but never received a response. Murashko, a Russian anthropologist and one of the co-founders of the former IWGIA Moscow, stated that only one proposal from indigenous small-numbered peoples, among many others, was taken into consideration when the final draft of the law was edited; it concerned Article 7.1 (3(8)). She concludes that the main challenge of the amendments is their compliance with the Constitution of the RF when it comes to the requirement to prove national identity and the probability of refusing self-determination. 54

2.2 The notion 'foreign agent' in the Russian legislation

In 2012, a new notion was introduced into the Russian legislation, 'a non-commercial organization functioning as a foreign agent' (foreign agent), in connection with amendments to the FL 'On Non-Commercial Organizations'. 55 According to Article 2 (6) of FL 'On Non-Commercial Organizations', "a non-commercial organization functioning as a foreign agent is understood in the present federal law as a Russian non-commercial organization that receives funds and other property from foreign states, their State bodies, international and foreign organizations, foreign citizens, stateless persons or persons authorized by them and (or) from Russian legal entities receiving funds or other property from the named sources (with the exception of open joint-stock companies with State participation and their branches) (foreign sources), and which participates, inter alia, in the interests of foreign sources, in political activities carried out on the territory of the RF." In short, according to the Law, NGOs must declare themselves 'foreign agents' if they exercise political activities and receive funds from abroad. The Law introduces a number of compulsory provisions and sanctions for such organizations, such as their inclusion in a special registry; the indication of their status as 'foreign agent' in all documents and publications; and the keeping of separate accounting of income and expenses received within the framework of errands from foreign sources and other errands. Such non-commercial organizations must submit reports on their activities more often than other NGOs.
Besides, an authorized body is obliged to carry out planned inspections and is entitled to suspend, by its own decision, the activity of a 'foreign agent' NGO for not applying for inclusion in the special registry provided by law, etc. The legislation establishes fines for non-commercial organizations that do not wish to be included in the registry of foreign agents. As a result of these additional requirements, many organizations have chosen to cease their activity in order to avoid legal risks. 56

Turning to the history of the issue, the purpose of introducing the notion of a foreign agent and the corresponding normative regulation was to ensure the publicity and transparency of finances coming from foreign sources to Russian non-commercial organizations participating in political activities. Besides, the aim was to ensure proper control over the activities of NGOs financed from foreign sources and pursuing political goals, inter alia, in the interests of their financial donors. 57,58 In practice, this Law has led to gross violations of human rights. According to Alenkin, "[w]hen analyzing the goals of legislative regulation and the practice of applying the relevant norms, one can observe a clear disproportion between them, manifested in the excessive interference of the State in the freedom of exercise of the right to associations." 59 Most studies dedicated to the 'foreign agent' notion emphasize the duality of the Russian state policy towards NGOs. On the one hand, the State introduces repressive legislation that forces NGOs to reject international partnerships. On the other hand, the authorities open state programs to support Russian NGOs. This situation has contributed to an increase in the number of 'pocket' NGOs whose activities aim at legitimizing the ruling regime. 60 All this is complicated by the fact that "[t]he fundamental institutional problem is a weak development of Russia's institutions of charity, patronage and voluntarism. (…) It is aggravated by the lack of effective stimuli in the tax system for financing of non-commercial organizations by individuals and business. In this situation, foreign funding of Russian NGOs plays an important role." 61

The situation of human rights in Russia has attracted the attention of international bodies such as the Commissioner on Human Rights, 62 the Human Rights Committee, 63 the European Parliament 64 and others. The Special Rapporteur on Indigenous Peoples' rights has emphasized the negative effect of the Law on the rights of the indigenous small-numbered peoples of Russia. 65 Commenting on the introduction of the term 'foreign agent', the International Work Group for Indigenous Affairs (IWGIA) observed: "[c]ivil society is affected by a continually shrinking space." 66 Despite the criticism, the Law exists and is valid, but its provisions must be executed for the purposes for which the Law was created. Both international bodies and Russian scholars point to multiple challenges regarding the content of the legal provisions on 'foreign agents'. For example, the Human Rights Committee notes that "the definition of 'political activity' in the Law is very broadly construed." 67 The definition permits the authorities to register NGOs carrying out various activities as 'foreign agents' without their consent or a court decision. These activities can relate to public life and include human rights and environmental issues. 68 According to the Committee, the "procedure of removal from a 'foreign agent' list is complex." 69
To conclude, the Committee recommends reviewing the procedural requirements and sanctions applicable under the Law to ensure their necessity and proportionality, and it recommends dropping the notion 'foreign agent' from the Law. 70 Minority Rights Group Europe states that the lack of legal clarity regarding the term 'political activity' means that it can be subject to wide interpretation. 71 The Russian scholar Korneichuk likewise states that the Law contains "many errors of logical, legal and ethical character." 72 In this regard, "the legislator should clearly identify these 'functions of foreign agents' that serve as the basis for including NGOs in the registry. It is necessary to determine what kind of political activity is prohibited, taking into consideration that public activity is one of the basic human rights that cannot be limited." 73 As noted by some scholars, there exists a "perception in the public mind of the notion a 'foreign agent' as a synonym for 'people's enemy', 'a traitor'." 74 Thus, Korneichuk argues, 'foreign agent' must be replaced by the term 'a Russian agent of a foreign principal'. 75

The International Development Foundation for Indigenous Peoples of the North, Siberia and the Far East, 'Batani', is one of the indigenous organizations recognized as a 'foreign agent' in accordance with this Law. In the author's opinion, it is not only the number of listed NGOs that raises concern but also the fact that NGOs have been threatened with being listed. For example, the NGO 'The Fund of Sámi Heritage' attracted the attention of the regional Ministry of Justice. 76 The employees of the fund were informed about a possible unscheduled inspection. According to the head of the fund, the Sámi activist Danilov, the grounds for the inspection were a complicated relationship with the prosecutor's office. 77 The purpose of the inspection was to investigate whether the fund and its activity fell within the definition of a 'foreign agent'. According to Danilov, the actual reason for the inspection was the prosecutor's office's concern about the fact that the fund had sent a complaint to a UN organization. In its complaint, the fund revealed that the lands of traditional Sámi habitation had been granted to a hunters' club, with the approval of the Government of the Murmansk Oblast. This situation is a vivid illustration of the threat to NGOs' activities.

A novelty of the 2019 Russian legislation is the expanded range of 'foreign agents' included. Now even individuals can be recognized as 'foreign agents' in cases where they:

- disseminate messages and materials made and (or) disseminated by a foreign mass media source acknowledged as a 'foreign agent', and (or) by a Russian legal entity established by foreign mass media and (or) participating in the above-mentioned messages and materials;
- receive economic support or property from a foreign state, bodies thereof, international and foreign organizations, foreign citizens or stateless persons or persons authorized by them, foreign mass media acknowledged as a 'foreign agent', or Russian legal entities established by mass media acknowledged as a 'foreign agent' and receiving financing from abroad.

The decision to recognize individuals as 'foreign agents' is taken by the Ministry of Justice in agreement with the Ministry of Foreign Affairs.
Individuals acknowledged as 'foreign agents' are required to establish a Russian legal entity, which in turn will disseminate printed, audio, visual and other types of messages and materials (also via the Internet) aimed at an unlimited number of persons on the territory of the RF. When establishing such a Russian legal entity, an individual must report to an executive body authorized by the Government and must comply with the requirements established for NGO 'foreign agents'. They are required to indicate on their materials that the messages and materials are issued and disseminated by a 'foreign agent'. At the moment, it is difficult to estimate how many individuals belonging to indigenous small-numbered peoples will be recognized as foreign agents, since the procedure for recognizing individuals as foreign agents has not yet entered into force and is at the project stage. 78 However, the number of such citizens may potentially be large.

2.3 The State's pressure on indigenous organizations with divergent opinions

The current situation in Russia is characterized by the State's increased attention to, and control over, organizations concerned with human rights, and with indigenous peoples' rights in particular. A vivid, though not recent, example is the case of RAIPON. RAIPON was founded in 1990 and is an umbrella organization that unites 40 indigenous peoples of Russia. In 2012, the Ministry of Justice ordered the termination of all activity of the organization for six months. 79 This event raised concerns in international society. 80 According to some activists, the reason for suspending the activities of the organization was RAIPON's opinion on indigenous peoples' rights. After the ban, RAIPON was revived. The organization changed the character of its relations with the Kremlin, becoming more cooperative and compliant. 81 The current leader of RAIPON is a deputy of the State Duma representing the United Russia party. Some former members of RAIPON had to move abroad to secure their personal safety because of the authorities' actions. Among them are Pavel Sulyandziga, who moved to the USA, and Dmitry Berezhkov, who moved to Norway.

A more recent example illustrating the State's pressure on indigenous organizations is the case of CSIPN. 82 This is one of the oldest organizations representing the interests of the indigenous small-numbered peoples of Russia. Its activity in Russia was terminated in November 2019 by a court decision. 83 CSIPN is the only organization of indigenous small-numbered peoples of Russia with special consultative status and accreditation power in agencies and structures of the UN such as UNESCO, the UN Environment Programme, the Food and Agriculture Organization of the UN, the UN Executive Secretary of the Convention on Biological Diversity, and the UN Economic and Social Council. In addition, CSIPN has full membership in the educational network of the University of the Arctic and in various worldwide networks of the indigenous peoples of the world, and it promotes the participation of indigenous small-numbered peoples in international forums important to them. Thus, IWGIA states: "[t]his decision affects one of the best established and last remaining internationally known Indigenous organizations in Russia." 84
The termination of the organization's activity by court order was based on formalities regarding the organization's location, the inconsistency of the organization's charter with legislation, and the non-registered educational activity of the organization. Even though the organization immediately started to correct its documentation, the Moscow City Court refused to give additional time for editing the documentation. 85 The Center's appeal and cassation complaints were rejected by the corresponding courts. 86,87 The Spokesperson for Foreign Affairs and Security Policy/European Neighbourhood Policy and Enlargement Negotiations, Kocijancic, considers that the termination of this organization goes "against an independent and active civil society. […] It is vital to create the proper conditions of the State's support for NGOs and to foster an open and inclusive environment for their growth." 88 Kocijancic maintains that the active position of such organizations is essential for every democratic society and a tribute to fundamental human rights and freedoms. She states that "the European Union stresses that the fundamental right to freedom of association is enshrined in the Russian constitution and is part of the Russian Federation's international obligations." 89 According to the head of CSIPN, its activity was terminated because it supported the local population, which interfered with the State's activities regarding the extraction of oil, gas and gold. 90 Indigenous small-numbered peoples need access to their lands and to the territories where they carry out reindeer herding, hunting and fishing, and these territories are areas where extractive industries operate. In this conflict of interests, the State has taken the side of the extraction companies. According to the organization's lawyer, Vaipan, CSIPN will go as far in protecting its rights as appealing to the European Court of Human Rights (ECHR). 91 These two examples of relatively large NGOs demonstrate how aggravated the situation is. Smaller organizations have even less power to oppose and resist the State.

2.4 Amendments to the Constitution

Amendments to the Constitution of the RF were approved by popular vote in July 2020. 92 Prior to the vote, many indigenous small-numbered peoples expressed their opinions on necessary changes to the Constitution. For example, in the Nenets Autonomous Okrug, the Nenets people proposed that the rights of indigenous small-numbered peoples to protect their languages and traditional nature use, primarily reindeer herding, be specified and guaranteed in the Constitution. 93 Moreover, some representatives of the regional authorities declared that "regional legislators were officially withdrawn from this process" 94 (the process of discussing amendments to the Constitution). Even though the proposals were forwarded to the State Duma, they were ignored at the federal level. 95 The new text of the constitutional Article 68 (1) reads: "State language of the RF throughout its territory is Russian as the language of a State-forming people that is part of a multinational union of equal peoples of the RF." This amendment has met with considerable resistance. The Council of the World Forum of Tatar Youth appealed to the State Duma to reject the amendment "as contrary to the principles of a democratic Federal State." 96 The All-Tatar Public Center, standing in opposition to the authorities, declared that the amendment violates the rights of 25 million non-State-forming citizens of the RF.
The State Duma objected, saying that activists should have expressed their position while the amendments were under consideration in the working group. 97 Some expressed the opinion that Russia is on the way to forming a unitary State in which one people enjoys special status while others, though recognized as equal, are afforded a downgraded status. 98 It is worth noting that it was primarily representatives of the republics and of non-indigenous small-numbered peoples who criticized this amendment. In addition to the Tatars, representatives of the Kabardino-Balkarian Republic expressed their opinion: "[t]he head of the Kabardino-Balkarian human rights center notes that the amendments should be treated as a continuation of the Kremlin's anti-federal policy." 99 This situation is potentially dangerous for indigenous small-numbered peoples who, compared to the peoples of the republics, have an even vaguer status. The Kremlin's representatives refused to see the inherent contradiction between the amendment and the Constitution. 100

Ex-member of the UN Permanent Forum on Indigenous Issues, Loode, remarks that the amendments to the Constitution can be criticized for their internal inconsistency and, more importantly, for the excessive moral hierarchy which they create among the peoples of Russia. According to him, this can have unpredictable consequences for Russia's national stability. Loode states that "internal inconsistency is reflected in the labelling of the Russians as 'a State-forming people', while the same sentence defines the Russian Federation as a union of peoples with equal rights. If the latter is true and is the main issue then why distinguish one from the other? How does this correlate with the idea of equal rights? Obviously, this concept was added to allay fears of overt apartheid. But the logical inconsistency remains and is not easy to explain." 101 Loode notes that this amendment is sure to cause further confusion among the non-State-forming peoples of Russia.

Article 79 was also amended and reads as follows: "The RF can participate in international associations and delegate part of its powers to them in accordance with international treaties of the RF if this does not entail limitations on the rights and freedoms of man and citizen and does not contradict the foundations of the Constitutional structure of the RF. Decisions of international bodies adopted on the basis of the provisions of international treaties of the RF which, in their interpretation, contradict the Constitution of the RF are not subject to enforcement in the RF."

The constitutional changes concern the second part of the cited article. Despite the existence of Article 15 (4), which establishes the priority of international law, the Russian authorities still seem to be trying to limit the sphere of influence of international law and to give priority to Russian law (Article 15 was not put to the popular vote because it cannot be changed through amendments, only through the adoption of a new Constitution). The above-mentioned amendments to Article 79 potentially mean that decisions of bodies such as the ECHR will not necessarily come into effect in Russia. In cases where indigenous organizations appeal to the Court for protection of their rights, as for example CSIPN intended to do (see Subsection 2.3), those rights can still be violated. The article which directly concerns the rights of indigenous small-numbered peoples is Article 69.
At present, this Article is complemented with paragraphs 2 and 3 (originally, it provided guarantees only to indigenous small-numbered peoples): 2. The State protects the cultural identity of all peoples and ethnic communities of the RF, and guarantees the preservation of ethno-cultural and language diversity. 3. The RF provides support to compatriots living abroad in the exercise of their rights, ensuring protection of their interests, and preserving all-Russian cultural identity.

Concerns have been raised by some indigenous activists about the supremacy of national law over international law. This increases the possibility of manipulating the decisions of international bodies. Besides, a covert signal is given to the bureaucracy that there is no need to comply with international law when dealing with international issues. 102 From the point of view of the author of the present article, it is unclear why paragraph 3 was added to the article which concerns indigenous small-numbered peoples. Moreover, it is confusing why the legislator focuses only on cultural identity and the preservation of ethno-cultural and linguistic diversity, and ignores other rights of indigenous small-numbered peoples, such as land rights and rights to natural resources. Murashko is of a similar opinion, emphasizing that the meaning of Article 69 of the Constitution is blurred by the new clause 2 in that it limits the range of indigenous small-numbered peoples' rights to the 'preservation of ethno-cultural and linguistic diversity', meaning that the rights to land and resources are ignored by default. 103 She highlights that para 3 of Article 69 does not fit into the context of the Article since it does not concern indigenous small-numbered peoples' rights. Commenting on paras 2 and 3, Berezhkov and Sulyandziga point out: "[u]nfortunately, we were unable to find any intelligible explanations from the official and/or officially registered documents for why it was necessary to make these amendments to Article 69 of the Constitution, despite the fact that we carefully followed the entire path of the bill in the State Duma." 104 Therefore, they conclude the following: "we can assume that there are no public documents explaining the meaning of the amendments to Article 69, at least such explanations are not reflected in the official 'bill's passport' in the web-system of the State Duma." 105

Conclusion

At present, 47 indigenous small-numbered peoples live in the RF. The size of an indigenous group is determined by Census data. Nevertheless, the individual ethnic identification of a particular person for the purpose of obtaining special rights has so far been problematic and unsystematic. Amendments to the FL 'On Guarantees' have been adopted regarding a person's attribution to an indigenous group. A registration procedure and a unified registry are currently being developed by the RF Government. It is planned that the registry will come into use in 2022. The state authorities have undoubtedly tried to regulate this issue; however, the amendments raise many questions. Indigenous small-numbered peoples claim that many of their proposals were not taken into consideration.
It is unclear why certain types of information must be submitted to the registry; whether children of people who already have indigenous status will have to apply to be included in the registry or whether this will happen automatically; and whether a whole family can apply to be entered in the registry or every family member must apply individually. Besides, the application and submission procedures are unnecessarily complicated.

Alongside the problem of the attribution of individual members of indigenous communities to indigenous small-numbered peoples, this article has highlighted several other issues currently problematic for the implementation of indigenous small-numbered peoples' human rights in Russia. One of these is the introduction, since 2019, of the notion of 'foreign agent' in relation to NGOs as well as individuals. NGOs and individuals recognized as such must comply with additional legal requirements to continue their activity, a burden which may result in the termination of their activities. Those organizations and individuals not yet recognized as such face the threat of being added to these lists. This situation is typical for indigenous organizations and for individuals belonging to indigenous small-numbered peoples. Another issue is the State's pressure on independent indigenous organizations. Finally, the author discusses the amendments to the Constitution. The introduction of the notion of a 'State-forming people', together with the constitutional provision allowing decisions of international bodies adopted on the basis of the provisions of international treaties of the RF to go unenforced, can potentially affect the human rights of the indigenous small-numbered peoples of Russia negatively. In addition, Article 69 has been complemented with two paragraphs, one of which has nothing to do with indigenous small-numbered peoples (para 3), while the other does not cover the whole range of indigenous rights (para 2).
Analysis of subbrow upper blepharoplasty by measuring the lid-to-brow distance

As Korea has developed into an aging society with evolving socioeconomic standards, upper blepharoplasty has become one of the most popular rejuvenation procedures. Conventional upper blepharoplasty through a supratarsal incision has been widely accepted, and involves excision of redundant skin and muscle with or without creation of an eyelid fold. However, the drawbacks of this technique, including insufficient removal of redundant lateral skin, a scar extending beyond the lateral canthus, and an unnatural postoperative appearance, have led plastic surgeons to avoid conventional upper blepharoplasty [1]. Moreover, since Asians tend to have higher eyebrows than Caucasians and more pretarsal and suborbicularis fat, the drawbacks of the conventional procedure have become increasingly apparent [2,3].

The infrabrow excision blepharoplasty technique was used by many Korean surgeons even before it was first reported. This technique includes elliptical excision of the infrabrow skin and subcutaneous tissue, with or without the orbicularis oculi muscle (OOM), to alleviate lid redundancy [4]. However, only a few reports have described suture fixation of the OOM in the flap inferior to the supraorbital rim [5].

Subbrow blepharoplasty (subbrow lift) with submuscular fascia fixation is an advanced infrabrow excision technique that addresses the shortcomings of conventional blepharoplasty and allows the patient's natural eyelid crease, or a previously created surgical lid crease, to be defined more clearly, while correcting lateral hooding and avoiding an unnatural postoperative appearance. Herein, we introduce subbrow blepharoplasty with submuscular fascia fixation to correct upper lid redundancy. Changes in periorbital adnexal distances were measured to confirm the usefulness of this upper blepharoplasty technique.

Patients

Sixty upper lids in 30 patients underwent subbrow upper blepharoplasty between June 2016 and October 2017. The primary indications for the procedure included: (1) upper lid dermatochalasis with lateral hooding; (2) desire to maintain the natural lid crease; (3) prior lid operations; (4) consideration of eyebrow tattooing; and (5) desire for a more natural contour with rapid recovery.

All patients had sitting preoperative frontal-view photographs of the eyes, and the distance from the upper lid margin to the lower edge of the eyebrow was measured at the lateral limbus (LBDL) (Fig. 1). These measurements were repeated immediately after surgery and at every follow-up visit for 6 months. The experimental design was approved by the Institutional Review Board of Chung-Ang University Hospital (IRB No. 1902-001-16245) and performed in accordance with the principles of the Declaration of Helsinki.

Surgical techniques

With the patient supine on the operating table, the location of the supraorbital nerve was marked by palpating the supraorbital notch. An upper incision line was drawn by following the inferior margin of the eyebrow, and the midpupil plane and brow peak point were marked over the eyebrow. The amount of skin to be excised was determined by pinching the surplus skin with forceps in the infrabrow area. A lower incision line was then drawn in a lazy-S or scalpel shape, increasing in width laterally based on the amount of skin to be excised (Fig. 2).
Under local anesthesia using 2% lidocaine with 1:100,000 epinephrine solution, incisions were made with a no. 15 blade. To avoid hair follicle injury, the upper incision was made in a beveled fashion, along the hair follicles. The lower incision was made deep to the level of the OOM. Starting from the most lateral end of the incision, skin and OOM excision proceeded medially and became more superficial at the medial end to avoid nerve injury.

To fix the OOM on the inferior flap, we made a slit incision over the supraorbital rim by splitting the muscle with small Metzenbaum scissors. The fixation points were usually at the midpupil plane, the brow peak point, and 1 cm lateral to the brow peak point, as previously marked during preoperative design. The number of fixation points can be altered based on an individual patient's eyebrow contour and the degree of lateral hooding. The anchoring depth of the OOM in the inferior flap was determined by the thickness of the soft tissue in the upper lid. Three or more transverse 4-0 Ethibond sutures were fixed to the submuscular fascia, just above the supraorbital rim periosteum (Fig. 3). Subcutaneous sutures were placed using 6-0 Monocryl and the skin was closed with 6-0 or 7-0 nylon.

Statistical analysis

Preoperative and postoperative LBDL values were analyzed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA), and differences were considered statistically significant if P-values were <0.05. The paired t-test was used to characterize the statistical significance of differences between preoperative and postoperative LBDL measurements. Correlations between differences in LBDL values and resection width or eyelid crease were analyzed using the Spearman correlation test and the Fisher exact test. (A computational sketch of these tests is given after this section.)

RESULTS

The patients' data are summarized in Table 1. The preoperative mean LBDL was 23.20 and 23.19 mm on the right and left, respectively. The mean LBDL at 6 months postoperatively was 22.40 and 22.37 mm on the right and left, respectively. The average difference between the preoperative and postoperative LBDL measurements was −0.800 and −0.833 mm on the right and left, respectively (P = 0.047 and P = 0.070) (Table 2). The average resection width was 9.5 mm (range, 8-12 mm).

Most patients were satisfied with the postoperative outcomes, including improvement of their visual field, a lightened feeling of the lateral lids, and a rejuvenated appearance. Reviving the eyebrow contour and providing a natural-appearing eyelid crease with inconspicuous scars yielded the most positive feedback from patients (Fig. 4). No serious complications related to wound dehiscence, lagophthalmos, brow ptosis, sensory changes, or hypertrophic scars were reported.
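The analyses above were run in SPSS; as a minimal, hedged sketch of the same three tests, the R fragment below reproduces the workflow on made-up illustrative values (the vectors pre, post, resection, crease and change are hypothetical placeholders, not the study data):

```r
# Hypothetical illustrative values; not the study data.
pre  <- c(23.5, 22.8, 24.1, 23.0, 22.6)   # preoperative LBDL (mm)
post <- c(22.7, 22.1, 23.4, 22.3, 21.9)   # postoperative LBDL (mm)

# Paired t-test on the pre/post difference in LBDL
t.test(pre, post, paired = TRUE)

# Spearman correlation between LBDL change and resection width (mm)
resection <- c(9, 10, 12, 8, 9)
cor.test(pre - post, resection, method = "spearman")

# Fisher exact test for an association between eyelid crease type and a
# binary categorization of the LBDL change (both columns hypothetical)
crease <- factor(c("double", "single", "double", "double", "single"))
change <- factor(c("small", "small", "large", "small", "large"))
fisher.test(table(crease, change))
```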
DISCUSSION

As Korea becomes an aging society with access to advanced surgical techniques, older Koreans may request not only aesthetic blepharoplasty, but also improvement of their visual field. For the past few decades, conventional blepharoplasty through a supratarsal incision to excise redundant skin and muscle has been used to manage changes associated with aging, such as dermatochalasis and baggy eyelids. However, this technique has drawbacks, including insufficient removal of lateral skin and an unnatural postoperative appearance. Moreover, classic blepharoplasty cannot address lateral hooding unless a lengthy excision is designed beyond the lateral canthus [6].

Subbrow upper blepharoplasty was first described by Parkes et al. [7] as a method to correct skin redundancy between the eyebrow and upper lid. In Asia, this technique was introduced by Sugimoto and has been widely performed in Korea and Japan [4,8]. It has been found to be more suitable for Asians, who have higher eyebrows than Caucasians and more pretarsal and suborbicularis fat, and are therefore vulnerable to the drawbacks of conventional upper blepharoplasty.

Several earlier studies described subbrow upper blepharoplasty using different approaches. Kim et al. [8] applied infrabrow excision blepharoplasty in Asian women, reported that the technique was useful, and presented potential indications. The supraorbital rim periosteal fixation technique was first reported by Lee and modified by Kim [5]. However, most previous studies had the limitation of analyzing the effects of subbrow upper blepharoplasty using subjective criteria, with a lack of objective information. The importance of preoperative analysis of ocular adnexal measurements has been clearly described in other studies [9]. Therefore, our study focused on obtaining objective values that could represent postoperative changes. To achieve our goal of a more natural-looking contour, we hypothesized that the distance between the eyebrow and upper lid margin after surgery would be the same or minimally changed.

According to Gunter and Antrobus [10], an aesthetically pleasing brow peaks somewhere between the lateral limbus and lateral canthus. Ideally, the distance between the eyebrow and upper lid margin is widest at the same location. Moreover, the LBDL is the feature that can be most strongly affected by blepharoplasty, along with resolution of lateral hooding. Therefore, we decided to measure the LBDL as a measure of postoperative outcomes. Although the mean distance between the eyebrow and upper lid margin was shortened by only 0.800 and 0.833 mm on the right and left, respectively (P = 0.047 and P = 0.070), the results were aesthetically more harmonious, similar to the effect of rhytidectomy in the periorbital region, as reported in another study [4]. The positive feedback we received from our patients indicated that they were satisfied with their natural and youthful appearance.

Additionally, the minimal change in LBDL can be analyzed from two different points of view. The first is the relationship between the resection width and the change in LBDL. We would logically expect a smaller change in LBDL if the resection width were shorter. However, since the Spearman correlation coefficient for these two variables was 0.08 and 0.185 on the right and left, respectively (P = 0.673 and P = 0.328), this trend was not statistically significant. Another result that needs to be emphasized is that even though our operation was performed without distinguishing between double and single eyelid creases, there was no statistically significant association between the eyelid crease and the change in LBDL using the Fisher exact test (P = 1.000 and P = 0.503 on the right and left sides, respectively).

This seemingly paradoxical association between a minimal change in LBDL and high levels of aesthetic satisfaction may reflect the fact that the actual distance is disguised by skin laxity. Although the redundant skin was resected, the upper lid skin can stretch to recover the original distance between the eyebrow and upper lid margin. This effect can be maximized in combination with fat removal to reduce the upper lid volume. Another explanation for this association is the resolution of lateral hooding.
Whatever the cause, most of our patients had lateral hooding that interfered with the visual field and resulted in an aged appearance. Therefore, resolution of lateral hooding not only provided a younger appearance, but also improved the visual field without a clear change in LBDL. This feature of subbrow upper blepharoplasty shows the potential of combining this procedure with browpexy and muscle plication or other procedures [11].

Subbrow upper blepharoplasty is widely performed in Asia using various operative techniques and criteria. The following are some modifications and clinical points regarding our subbrow upper blepharoplasty technique that were not reported in previous studies.

First, according to several studies, the anchoring depth of the inferior flap is anatomically uncertain, but is critical in reducing complications such as wound dehiscence and hypertrophic scarring [12]. To determine the appropriate anchoring depth of the inferior flap, we gently pulled the needle in a cephalic direction to confirm the elevation of the upper lid. This practical trick is used to reduce relapse of lateral hooding and to obtain a natural-appearing eyelid (Fig. 5).

The next surgical modification we used is the muscle-splitting technique at the fixation points. Instead of other blunt and invasive techniques, we made slit incisions in the OOM with small Metzenbaum scissors to reach the submuscular fascial plane. This technique may help the patient to recover faster, while not causing additional injury to the nerves.

The fixation plane in this technique, the submuscular fascia, is not a well-known anatomic structure. It is posterior to the OOM in the medial portion and posterior to the frontalis muscle in the lateral portion (Fig. 6). It can be recognized by gently pulling on the inserted needle after placing the suture, feeling adequate resistance and little laxity. The reason for this pulling procedure is that fixation on the periosteum can result in dimpling of the skin and displacement of the eyebrows.

In summary, subbrow upper blepharoplasty can simultaneously yield functional and cosmetic improvements. It is a flexible technique that can be modified and combined in any clinical setting. However, the main obstacle in our experience is that the indications are limited to certain populations; moreover, the technique is not recommended for severe ptosis. Further studies comparing this technique with other operative techniques are required.

Subbrow upper blepharoplasty with submuscular fascia fixation helps individuals with lateral hooding of the upper eyelid, while overcoming some drawbacks of conventional upper blepharoplasty and preventing a definite change in the periorbital contour. In this study, we measured the lid-to-brow distance and confirmed that this procedure resulted in a minimal change, although the results were aesthetically favorable. This seemingly paradoxical finding can be explained by the characteristics of the surgical procedure, which alleviates lateral hooding, as objectively confirmed using photographs. In conclusion, we strongly believe that among the various blepharoplasty techniques, subbrow upper blepharoplasty with submuscular fascia fixation has definite usefulness for Asians with lateral hooding. However, further research with more data is needed.

Conflict of interest

No potential conflict of interest relevant to this article was reported.
Ethical approval

The study was approved by the Institutional Review Board of Chung-Ang University Hospital (IRB No. 1902-001-16245) and performed in accordance with the principles of the Declaration of Helsinki.
Exploring Holocene Changes in Palynological Richness in Northern Europe – Did Postglacial Immigration Matter?

In mid to high latitudes, glacial and interglacial cycles have repeatedly changed the area available for plant growth. The speed at which plants are able to colonize areas at the onset of an interglacial is hypothesized to limit their distribution ranges even today (migrational lag). If the spread of plants had generally been slow, then plant diversity in previously glaciated areas would be expected to increase over time. We explore this hypothesis using results from six palynological investigations from two previously glaciated regions: central Sweden and north-eastern Germany. Rarefaction, slope of rank order abundance, and taxa accumulation plots were used to evaluate richness and evenness in pollen data in an attempt to separate richness from evenness. These analyses show little change in palynological richness for the northern sites throughout the Holocene. In contrast, the southern sites show an increase in richness and evenness during the early Holocene; this may be explained by the different initial conditions at the onset of the Holocene. A strong rise in palynological richness around 6000 and 1000 years ago at the southern sites can be attributed to the regional initiation of agriculture and major opening of the forest, respectively. For the northern sites there is no evidence for increased taxonomic diversity through time that could be due to delayed immigration of species.

Introduction

To forecast the impact of climate change on biological diversity, it is crucial to have knowledge of the ability of plants to shift their distribution in response to climate change [1]. Until recently, many studies suggested that plants will not be able to track climate change [2]. New research indicates that plants are moving fast in response to a warmer climate [3]. However, monitoring periods are too short to evaluate the effects of ecosystem resilience, species adaptation and rare long-distance dispersal events [4,5]. Therefore insights from the late Quaternary history of ecosystems are essential to help answer this question [6-8]. The biodiversity of the temperate forests is shaped by glacial-interglacial cycles that repeatedly changed the habitat available to plants [9]. Studying the patterns and processes of changing plant distributions at the end of glacial periods will also help to understand the spatial differences in biodiversity [10,11].

The cold and dry climate of the glacial period expelled plants from many areas where they occur today. Continental ice sheets covered large parts of the northern hemisphere, eliminating any plant growth except possibly on isolated nunataks [12]. Thus nearly the complete flora that we find in previously glaciated areas today must have spread into these regions sometime between deglaciation and the present day. It has long been assumed that the distances between Last Glacial Maximum (LGM) occurrences and present-day occurrences would affect the timing of when the species in question appeared in a particular region [13,14]. This time span, between the time that a taxon could have been present at a location (due to climate warming and/or deglaciation) and its first appearance, has been referred to as migrational lag.
The application of species distribution models to present-day climate suggested that many species do not fill their climatically described range [15], reviving the question of whether species limits are controlled by slow spread out of their LGM distributions [16,17]. On the other hand, the rapid spread of alien species, like Senecio inaequidens in Europe [18], shows that herbs may expand their ranges over hundreds of kilometres in a few decades.

Compilations of pollen and macrofossil investigations have been used to infer distributional changes of major tree species, indicating that some trees assumed their current range late during the Holocene and may still be in the process of extending their distribution [14]. However, it is difficult to decide whether these boundary shifts were controlled by Holocene climate change [19-21], land-use change [22] or limited seed dispersal and the time required to reach reproductive age [23,24].

While compilations of pollen diagrams have revealed detailed accounts of the Late-Glacial and Holocene histories of mainly trees and shrubs [25,26], little comparative use has been made of the many other pollen types that are recognised in pollen analytical investigations (but see [27-29] among others for regional comparisons). The reason for the lack of sub-continental comparisons lies in the difficulty of relating the number of distinct pollen types in a sample (palynological richness) to plant species richness in the area around the sample location. This problem has two major causes: differences in pollen production and dispersal among species, and restricted taxonomic resolution in pollen analysis, where identification often stops at the genus and sometimes even at the family or subfamily level [30]. Regardless, many pollen diagrams depict more than 100 pollen taxa, and the pollen key for central Europe and adjacent areas by Beug [31] differentiates 586 types based on 2500 investigated species of flowering plants. Thus a substantial proportion of floristic richness is captured by palynological richness, even if this representation is biased by large families like Poaceae that yield a single pollen type for almost all non-cereal grasses.

The aim of this study is to explore the potential effect of delayed or slow plant spread on palynological richness through time. Different approaches to evaluating changes in palynological richness through time are considered and new analyses are suggested. If migrational lag is an important mechanism that shaped current plant distributions, it should have worked gradually through time, increasing the floristic richness of a region. As a consequence, palynological richness should have been low shortly after deglaciation. The increase in the number of species may slow or stop at some point in time, when all species with potential occurrence have reached the region. As a general process this should have occurred across all floristic geoelements and should therefore be visible in the palynological record. As dispersal mechanisms differ between taxonomic groups, an ordered arrival could be expected.

Additional factors potentially influencing diversity through time are changes in climate and human land-use, which are difficult to separate. However, the trend and magnitude of both factors are well documented for central and northern Europe, so that their influence on changes in palynological richness can be evaluated.
Thus, by analysing and comparing palynological richness in selected pollen diagrams with good taxonomic resolution, we can explore whether migrational lag is an important mechanism that shaped floristic richness in previously glaciated areas. Finding a gradual increase in the number of pollen types through the Holocene, particularly in recently deglaciated areas, would point towards an importance of migrational lag, unless the increase could be attributed to human action or climate change. A lack of such an increase would indicate that the effect of migrational lag is negligible on a timescale of hundreds to thousands of years.

Site Selection and Additional Resources

Published pollen diagrams were selected from two regions with different climates that were glaciated during the last ice age [32] (Figure 1) and are home to forests similar in structure and species composition. As the degree of taxonomic differentiation in standard counts varies between investigators, diagrams were selected from authors coming from the same palynological school to allow between-site comparisons. Pollen diagrams from three small lakes were selected from north-central Sweden, an area that was ice covered until the early Holocene. Holtjärnen and Klotjärnen [33] are today situated in the southern boreal forest. The area around Holtjärnen became ice free around 10,600 years ago. After the retreat of the glacier, the vicinity of Klotjärnen was submerged and, through isostatic uplift, became a peninsula reaching into the Baltic Sea by about 9500 years ago. Abborrtjärnen is situated further north in the mid-boreal forest and the record starts around 9700 years ago, soon after deglaciation [34].

Northeast Germany was ice covered during the southernmost extent of the last glaciation, and the area was ice free for about 5000 years before the beginning of the Holocene. Krebssee is of a similar type and size to the Swedish lakes and is situated one kilometre away from the river Oder [35]. Tegeler See is a large lake exceeding 1 km², but the diagram [36] comes from a bay with a diameter of 200 m. Schwanengraben is an elongated depression about 50 m broad and several hundred metres long, which was a lake during the Late-Glacial and early Holocene and developed into a bog around 9000 years ago [37].

Pollen diagrams for these sites with a standardized selection of taxa are provided in Figure S1. Pollen counts generally exceed 1000 grains per sample for the northern sites and in most samples from Schwanengraben. For Tegeler See and Krebssee the counts range between 1000 and 2000 grains and between 600 and 1500 grains, respectively. The raw pollen counts from the six sites were reduced to taxa exclusively coming from upland vascular plants, to reduce the influence of site-specific changes in the aquatic and telmatic environment (set A). A further restriction, excluding pollen types that come from archaeophytes or neophytes (see Table S1 in Supporting Information), aims to reduce the obvious impact that human land-use had on regional plant diversity (set B). Unless otherwise indicated, the analyses were carried out on set A. (A sketch of this filtering step is given below.)
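As a minimal sketch of how the two taxon subsets could be assembled, assuming a samples-by-taxa count matrix counts (column names are pollen types) and a hypothetical lookup table taxa_info with columns taxon, habitat and status standing in for Table S1:

```r
# counts: samples x pollen types matrix; taxa_info: data frame describing each
# pollen type. Both objects are hypothetical stand-ins for the site data.
upland <- taxa_info$taxon[taxa_info$habitat == "upland"]
setA <- counts[, colnames(counts) %in% upland]            # upland vascular plants only

native <- taxa_info$taxon[taxa_info$habitat == "upland" &
                          taxa_info$status == "native"]   # drop archaeo-/neophytes
setB <- counts[, colnames(counts) %in% native]
```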
In addition to the selected three pollen diagrams from northeast Germany, further information on the first appearance of selected pollen taxa in this region was obtained from a local database. Pollen analytical investigations carried out between 1974 and 2000 at the Department of Ecosystem Science and Plant Ecology at the Technical University Berlin were partly collected into a database. Investigations were initially restricted to the area of West Berlin, with the addition of sites in Brandenburg after 1990. The intention of the database was to collect information on the first occurrence of pollen types from herbaceous vegetation and selected trees. Pollen types of common trees and those of higher taxonomic level (genera, families) have not been included (Table S2 in Supporting Information). The database contains information from 113 sites with varying sample numbers. Samples were assigned to well-recognizable regional pollen zones [36], which have been radiocarbon dated at a number of sites [35].

Extracting Diversity Information from Pollen Data

Pollen diagrams are mainly produced from sediments that accumulate in lakes and wetlands, and only where this particular ecosystem is the focus of the investigation can palynological richness be directly used to describe, for example, the species diversity of water plants in a lake. Pollen that reaches the lake or wetland from beyond its limits does not have a defined area of origin. The probability of a pollen grain being deposited at a site decreases with the distance of the parent plant from the site. In absolute terms this differs largely between plants, depending on pollen production and dispersal properties. Thus palynological richness cannot be related to a particular area and may be best compared to the regional species pool (gamma diversity). It depends heavily on the size of the pollen count [38], but there is no natural threshold that would indicate how many pollen grains should be counted per sample. In combination with taxon-specific pollen productivity and dispersal characteristics, this means that diversity measures including abundance are biased and potentially erroneous [39]. This leaves palynological richness itself as an important diversity measure for pollen data, which has to be expressed relative to a standard number of pollen grains counted to make it comparable between samples. Where pollen sums differ between samples or sites, this is achieved using the rarefaction technique [29]. However, the evenness of a pollen sample determines the number of pollen types that may be encountered at a given pollen sum [30,40]. Peros and Gajewski [41] found a positive correlation between palynological richness and evenness, and a negative trend for palynological richness and pollen concentration, in a surface-sample dataset from the Canadian Arctic. Changes in pollen concentration can be caused by changes in sedimentation rate, and thus pollen accumulation rates should be used in such comparisons [42]. Using pollen accumulation rates, it is theoretically possible to overcome the effect of differential pollen productivity [42]. However, such estimates are highly dependent on the accumulation rate of the sediment, which in turn can usually be estimated only with high uncertainty. Thus in most cases the uncertainty will be larger than the signal, unless the focus of the investigation is the change in diversity from vegetation with high pollen production to one with low pollen productivity.
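Rarefaction of this kind is available in the vegan package that the study itself used; the fragment below is a hedged sketch of computing E(T500) and the low-count rarefaction, assuming the setA matrix from above (samples with fewer than 500 upland grains are dropped first):

```r
library(vegan)

# Keep only samples with at least 500 upland pollen grains.
setA500 <- setA[rowSums(setA) >= 500, ]

# Expected number of pollen types at a standardized count of 500 grains: E(T500).
ET500 <- rarefy(setA500, sample = 500)

# Rarefaction to 30 grains, used as a count-independent evenness indicator.
ET30 <- rarefy(setA500, sample = 30)
```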
Diversity and Evenness Indices and Analyses

The richness of pollen types and their equitability in abundance are strongly connected. Depending on the size of the pollen sum, palynological richness is more influenced by one or the other [43]. Using rarefaction, palynological richness was calculated on a basis of 500 pollen grains from vascular upland plants, which is assumed to reveal in particular the richness of pollen types in the sample.

As with estimates of richness, indices of evenness from samples with different count sizes are potentially biased. For this reason we used two indices that are not affected by differences in sample size. Pollen sample evenness was calculated using a modified version of the E_Q index [44]. A sample-based threshold of 0.3% was applied for the inclusion of taxa to avoid the influence of single finds, which makes the index independent of the number of grains counted. For the remaining taxa, the slope of the regression between the proportion of rank order and the logarithm of proportional abundance (b') was transformed according to Smith and Wilson [44]: E_Q = -(2/π) × arctan(b'). (A computational sketch is given at the end of this subsection.) Rarefaction to low pollen sums (10-50) may also be a good indicator of pollen sample evenness, as abundant types will dominate such a small sample and the chance of encountering less abundant types is low [43]. Here we use rarefaction to a count of 30 grains as a comparison to the above evenness index.

Finding a pollen type in one sample but not in the neighbouring samples does not necessarily mean that the parent plant only occurred in the surroundings of the site during that particular time. For pollen types from plants with low productivity, it may be safe to assume that the plant also occurred in the area before and after the period for which the pollen type was encountered. Little use has been made of this concept in Quaternary palynology, while it is readily applied to studies of older sediments [45]. To capture the information carried by sporadically occurring pollen types, counts were combined over consecutive samples into periods of 2000 years, and 1500 years for the youngest period. All Late-Glacial samples were combined for each of the southern sites. These combined samples were used to calculate the number of taxa common or changing between periods, yielding estimates of beta diversity through time. In these combined samples, the number of taxa encountered still depends on the overall number of pollen grains counted, and rarefaction was calculated to the lowest combined count for the respective site. The analysis was subsequently carried out for a combination of the northern and southern sites, respectively, omitting pollen types from archaeophytes and neophytes (set B) and using time periods of 1000 years.

In pollen counts, as in floristic surveys, the number of pollen types and species increases with the pollen sum and sampling effort, respectively. However, in floristic surveys with a defined area it is theoretically possible to find all species. Thus the relationship between the accumulated number of species encountered and sampling effort should follow an asymptote, and the total number of species may be estimated [46]. Weng et al. [47] suggested that such an asymptote may exist in palynological data as well, but point out that it has not yet been observed.
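As the computational sketch referred to above, the following hedged R fragment implements the modified E_Q index as described: a per-sample 0.3% inclusion threshold, a regression of log proportional abundance on the proportion of rank order, and the arctangent transformation (the function name eq_evenness is ours, not from the original analysis):

```r
# Modified E_Q evenness (after Smith and Wilson) for one pollen sample x.
eq_evenness <- function(x, threshold = 0.003) {
  p <- x / sum(x)
  p <- sort(p[p >= threshold], decreasing = TRUE)  # 0.3% inclusion threshold
  rank_prop <- seq_along(p) / length(p)            # proportion of rank order
  b <- coef(lm(log(p) ~ rank_prop))[2]             # slope b' (negative)
  as.numeric(-2 / pi * atan(b))                    # E_Q = -(2/pi) * arctan(b')
}

EQ <- apply(setA, 1, eq_evenness)  # one value per sample
```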
Here we calculated the accumulation of taxa over consecutive samples, which represents a species-time relationship [48], while it also captures the increasing sampling effort, as the number of pollen grains counted was also accumulated and varies between samples. Assuming taxa were not lost over the course of the Holocene, the curve can give insights into a changing species pool through time, and this relationship may help to separate pollen type richness from evenness. Samples were accumulated in chronological order, beginning with the oldest sample at the northern sites and the first Holocene sample at the southern sites. The linear relationships in logarithmic space were described by linear regression models using the same slope but different intercept parameters. The residuals between the model and the observations were calculated in normal space and plotted against time.

A weak lowess smoother was applied to all results yielding scattered values, to improve visual comparison without removing too much variance. Lowess was applied with a span of 0.2 to the rarefaction and evenness analyses, and with a span of 0.1 to the residuals from the taxa accumulation models. All calculations and computations were carried out using the R platform [49] and the vegan package [50].

Results

The sample-based palynological richness of upland vascular plants shows different patterns between the two groups of sites, but similar patterns within the groups (Figure 2). The northern sites show little change over the course of the Holocene. Holtjärnen and Klotjärnen start with relatively high values, while the northernmost site, Abborrtjärnen, starts with low values, and samples from this site maintain slightly lower palynological richness. All southern sites start with a low number of pollen types per sample, which is in some cases even lower than the values obtained for the northern sites. The southern sites increase in palynological richness in three steps: during the early Holocene, after 6000 cal. BP, and with a further increase over the last 1000 years. This pattern does not change when the analysis is run with the dataset excluding pollen types from archaeophytes and neophytes, although the values are somewhat lower for the last 6000 years. The similarity in the pattern of the southern sites shows that all sites portray the same regional development even though site characteristics differ.

The two different measures of evenness indicate low palynological evenness throughout the Holocene for all sites, with different patterns for the two groups of sites (Figure 3). The overall pattern for the southern sites is similar in both assessments, with the lowest evenness for the oldest samples. For the northern sites the detailed patterns differ somewhat between sites. Of note is the drop in E_Q evenness at Klotjärnen around 2000 cal. BP, caused by the reduction of pollen from understory vegetation with the expansion of Picea abies.

The overall taxonomic composition changed little through time at the northern sites (Figure 4). The small increase in taxa over the last 1500 years is clearly linked to the opening of the forest and the cultivation of crops near Holtjärnen and Klotjärnen. Apart from the high palynological richness in the most recent period, Holtjärnen shows the highest richness in the oldest period and Abborrtjärnen in the mid-Holocene, while Klotjärnen shows a small increase over the last 5000 years.
The southern sites show a general increase in the number of taxa. Over the course of the Holocene, the number of pollen types lost from one period to the next is often lower than the number of types gained, and in consequence species accumulate through time (Figure 4). The Tegeler See record shows an almost 50% turnover of species between the Late-Glacial and the early Holocene, while at Schwanengraben there is mainly a gain of taxa at the onset of the Holocene, and at Krebssee the number of lost taxa is slightly higher than the number of newly gained taxa. Krebssee shows increased turnover rates towards the most recent time period.

Combining the three regional sites into one, based on the restricted dataset (set B), reveals additional insights (Figure 5). At the northern sites the number of pollen types is elevated in the oldest bin as well as in the mid-Holocene. The expansion of Picea abies around 2500 cal. BP appears to be linked to the highest species turnover. Interestingly, this time period also shows the highest loss of pollen types at the combined southern sites. The southern sites also show a peak in shared species around 4000 cal. BP.

The compilation of selected pollen taxa from 113 sites in and around Berlin adds some information on the regional change in pollen taxa diversity through time (Figure 6). The calculation of taxa gained from one period to the next shows four peaks: two smaller ones at the onset of the Allerød and the Holocene, and two higher ones at 9500 and 5900 cal. years BP. The highest loss of taxa can be seen at 10,500 cal. BP. The regional turnover of typical Late-Glacial pollen taxa to Holocene taxa did not occur at once, but was gradual over the early Holocene. Over the last 6000 years the gain of taxa is higher than the loss, resulting in an accumulation of taxa with the highest richness for the most recent period. The histogram indicating the earliest appearance of these pollen types in the region shows that most types were already present in the Late-Glacial (Figure 6b). Higher first occurrences are seen for the earliest Holocene, but also for the periods after 6000 cal. years BP, when mainly pollen from archaeo- and neophytes appears for the first time (see Table S2 in Supporting Information).
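The common/gained/lost tallies behind Figures 4-6 reduce to set operations on the pooled presence lists of consecutive time bins; a hedged sketch, assuming a chronologically ordered list bins of pooled count vectors with taxa as names (a hypothetical structure standing in for the combined samples):

```r
# Presence lists per time bin: taxa with at least one pollen grain.
presence <- lapply(bins, function(x) names(x)[x > 0])

# Pairwise comparison of consecutive bins: common, gained ("win"), lost taxa.
turnover <- t(sapply(seq_len(length(presence) - 1), function(i) {
  a <- presence[[i]]      # older bin
  b <- presence[[i + 1]]  # younger bin
  c(common = length(intersect(a, b)),
    win    = length(setdiff(b, a)),
    loss   = length(setdiff(a, b)))
}))
turnover <- cbind(turnover, change = turnover[, "win"] + turnover[, "loss"])
```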
The accumulation of taxa versus sampling effort (Figure 7) does not reach an asymptote at any of the sites, but can be described by a power function. Although pollen types from archaeo- and neophytes are excluded, samples dating to the last few hundred years show an increase in accumulated taxa. The log-transformed accumulation curves show linear relationships with an overall slope of w = 0.27 and intercepts between 0.35 and 0.55. The individual curves often show deviations from the overall rate of increase, or can be divided into sections with different increases, as can be seen for Holtjärnen, Klotjärnen and Schwanengraben. However, these diagrams too follow the overall rate of taxa accumulation for the major part of the record. It appears that the rate of taxa accumulation is mainly determined by the pollen count, influenced by pollen sample evenness [40] and potentially by the true diversity of pollen types in the samples. Thus, by subtracting the overall relationship between taxa accumulation and pollen count, the deviation from that trend should mainly reflect changes in evenness and palynological diversity. As evenness can be estimated independently, this allows an evaluation of changes in pollen type and potentially floristic diversity around the site through time (Figure 8). At the northern sites, the power functions describe the increase in pollen taxa accumulation extremely well for the last 6000 years, but underestimate the number of taxa in the earliest samples from Holtjärnen and Klotjärnen. Conversely, the three southern sites and also Abborrtjärnen contain fewer than expected taxa in the oldest samples after the onset of the Holocene.

Did Immigration Influence Diversity?

All measures used here show that pollen type richness is lowest at the northern sites and highest at the southern sites. This agrees with the general impoverishment of the vascular plant flora along this latitudinal and temperature gradient, and shows that palynological richness captures interregional differences in floristic diversity. The pollen diagrams from the three northern sites show that there is no substantial increase in palynological richness through time except for the last 1000 years, which is due to human land-use. The southern sites show changes in palynological richness and in the rate of taxa accumulation, which in part are due to the agricultural practice that started in this region around 6000 cal. BP [51] and influenced the regional species pool as well as the landscape structure. The lack of these features at the northern sites shows that here the regional species pool did not expand through time. Hence there is no evidence to indicate that slow plant migration had a strong influence on the floristic diversity at these northern sites on time scales of hundreds to thousands of years. This does not mean that there may not be a delay in the arrival of individual species, due to dispersal biology and slow population growth, in the order of hundreds of years. Birks and Birks [52], for example, show a 450-year delay in the arrival of Betula pubescens in western Norway after the onset of the Holocene warming. Here we did not evaluate individual species but the whole assemblage, based on the presence and absence of pollen types, without making interpretations on pollen coming from local sources versus long-distance transported pollen [53]. Due to the high pollen production of most European trees, their pollen is often found already in the early Holocene. Thus, their respective pollen taxon entered the taxa accumulation curve early, regardless of interpretations on the timing of their arrival.
Therefore, the taxa accumulation curve is mainly determined by the appearance of herbaceous pollen.

Pollen diagrams depicting the successive arrival of trees led early palaeoecologists to suspect species would be lagging behind the spatial expansion of their climate envelope at the beginning of the Holocene [54]. Based on the comparison of the distribution of climate parameters with the distribution of species, Svenning and Skov [15] argue that on average European trees realise only 40% of their potential range, providing new support for the existence of migrational lag [16]. While supporting these findings, Normand et al. [17] find little evidence that the distribution of plants in northern Europe could be explained by a slow spread out of presumed LGM distributions. Thus the distribution of plants in previously glaciated areas of northern Europe was probably established quickly after the onset of the Holocene and following deglaciation, with little discernible effect of a migrational lag. Southern Europe, on the other hand, holds many plants that could thrive further north [17], and some of them have been introduced beyond their natural ranges, where they have become naturalised. The question remains why these species have not managed to reach other areas. If the flora in northern Europe has changed little over the course of the Holocene, its composition may be dominated by those species that managed to arrive early. On the other hand, little is known about the Holocene distributional changes of the southern deciduous oaks like Quercus frainetto, as they share the same pollen type with more widely distributed species. Thus it is difficult to infer whether these species survived the LGM in particular locations and later expanded their distribution over large areas, as we can reconstruct for northern Europe. Alternatively, these populations could have merely expanded out of scattered groups of trees that occurred during the LGM in approximately the same area as today. Such knowledge could inform the question of whether these species would spread and eventually fill their potential ranges if given enough time. It may be that many southern species did not lag in their migration, but simply did not spread from their LGM distributions.

Early Holocene Palynological Richness

At the three northern sites, land only became available for the colonization of plants during the early Holocene, and this occurred in a different manner at the different sites. The earliest samples of the three northern sites contain only a few pollen types that disappear entirely from the overlying samples. Overall, the pioneer vegetation at the northern sites was mainly composed of the same plants that are found in the boreal forest around the sites today. At the southern sites, the onset of sedimentation started at different times after deglaciation, and the early pioneer phases occurred in conjunction with Late-Glacial climate oscillations, making it difficult to separate the role of climate in species diversity and palynological richness. The species composition changed strongly at the onset of the Holocene, with several species disappearing and many becoming rare for at least a few thousand years (Figure 4, Table S2 in Supporting Information).

[Figure 8. Power functions and residuals of the expected versus observed number of accumulated pollen types. Linear regression models with constant slope and varying intercepts were fitted to the taxa accumulation plot (Figure 7A). Left panels show these power functions in normal space, and the right panels show the residuals indicating the difference between the observed and the expected number of accumulated taxa through time. A lowess smoother with a span of 0.1 was applied to emphasise trends. Only pollen types from upland vascular plants without archaeophytes or neophytes (set B) were considered.]

The oldest samples from Holtjärnen and Klotjärnen show high palynological richness and a higher than expected number of taxa (Figure 8). This may be caused by a large proportion of open vegetation near the sites during the early Holocene. Decreasing trends in palynological richness over the early Holocene are found in diagrams from western Denmark and southern Sweden [27,55]. Seppä [56] showed how the expansion of pine woodland around a site in northern Finland lowered palynological richness. Around Abborrtjärnen pine forest quickly established, and palynological richness in the oldest samples is low. Woodland had already established by the time Klotjärnen emerged from the Baltic Sea, but the shore remained close to the lake for some time, and through isostatic uplift new land became gradually available for plants to colonize. The pollen diagram from Holtjärnen documents the arrival of plants from a long distance away, and while populations were building up, the landscape remained partly open. It is also interesting to note that the oldest samples from Holtjärnen contain more pollen from Ulmus and Corylus avellana than the youngest samples from the site, indicating that these thermophilous elements were already part of the early-Holocene vegetation mosaic dominated by Betula and later by Pinus sylvestris. Thus, these two species in particular, with their different dispersal mechanisms, show no delay in their arrival.

The opposite process determined palynological richness at the southern sites. Here Betula species and P. sylvestris formed woodlands during the Allerød period. Many of these woodlands were reduced in size, but survived in sheltered places during the cold Younger Dryas period [57]. With the onset of the Holocene warming, these populations could quickly expand to dominate the landscape together with a few other Late-Glacial survivors. Long-distance dispersed propagules found new ground near Holtjärnen and Klotjärnen with little competition restricting population expansion; in the south, new arrivals had to compete with the established plants for resources. While this probably did not prevent the establishment of a species, it may have slowed population expansion.
Holocene Change in Floristic and Landscape Diversity

When introducing the rarefaction technique to pollen analysis, Birks and Line [29] stated that palynological richness might be influenced by the physical features of the site. It is thus surprising to see that the E(T500) curves for the southern sites run parallel for most of the Holocene, even though the characteristics of these sites are very different, ranging from a forest hollow to the embayment of a large lake. Also, palynological richness estimated for pollen diagrams from three Estonian peatlands ranging from 30 to over 200 ha shows no major effect of basin size [58].

The taxa accumulation curves, on the other hand, reflect local site-specific differences more strongly. Here the smallest site yields the largest number of pollen types per count, and its change from a shallow lake to a bog had a large effect on the accumulation of new taxa. Assuming that the linear relationship of the taxa accumulation curve in log-log space is largely a sampling effect, the residuals can inform on changes in evenness and species immigration. After accounting for pollen types coming from archaeophytes and neophytes, the taxa accumulation curves for Krebssee and Schwanengraben follow the predicted values well for the last 8000 years, while their E(T500) curves increase markedly around 5000 cal. years BP. This rise in palynological richness is not caused by the early appearance of archaeophytes, but by proportional changes in previously present taxa. Thus, sample-based palynological richness seems to be a good indicator of changes in landscape diversity, but holds little information on the size of the regional species pool. At Tegeler See the residuals of the observed versus the predicted taxa accumulation rise after 5000 cal. years BP, while the curves for the other two southern sites remain flat. This pattern for Tegeler See may be site-specific, possibly connected to the regional increase in new man-made environments.

An increasing number of pollen types shared between consecutive time periods (Figures 4, 5) can be observed for Tegeler See and Krebssee. This is also visible in the selected taxa dataset from the Berlin/Brandenburg sites (Figure 6). These patterns are partly caused by the addition of pollen taxa from cultivated species and associated weeds, but also represent the more consistent occurrence of pollen types that were previously encountered. The latter is often also connected to the opening of the forest for agriculture, increasing the habitat for many herb taxa (apophytes) that were abundant during the Late-Glacial, but rare during the early to mid-Holocene dominance of trees. However, landscape diversity may also have slowly increased due to autogenic processes. On the nutrient-poor sandy soils, the pine-dominated forest that developed quickly at the onset of the Holocene may have possessed a large resilience towards the expansion of thermophilous trees. Over time, fire and windthrow created gaps that could be seized by previously rare species, and thus, for example, Quercus could slowly increase in abundance, changing in turn the environment for other plants [59].
Temperature changes through the Holocene may also have influenced floristic diversity at the investigated sites, which may be reflected by the mid-Holocene maximum in the number of pollen types observed for the northern sites (Figure 5). In this respect the loss of taxa around 2500 cal. BP could also be interpreted as a reaction to the late Holocene climate cooling, which is difficult to separate from the effect that the expansion of Picea abies had on the boreal ecosystem [60]. A climatic cause would explain the parallel reduction of pollen types in the diagrams from Brandenburg during the first expansion of Fagus sylvatica. However, here this hypothesis has to compete with the consideration that the effect may have been caused by reduced settlement activity at the transition from the Bronze to the Iron Age.

Species Accumulation Curves

The accumulation of pollen taxa is compared against accumulated pollen counts, which is a measure of sampling effort. However, each sample represents a snapshot in time. Plants that were distant to the sampling site or rare in its surroundings at one time may be closer to it or abundant during a later time period and release more pollen to the site. Thus, the pollen taxa versus pollen count relationship constructed here is effectively a species-time relationship. Preston [61] suggested that species-time relationships would work similarly to species-area curves, assuming a linear relationship in log-log space [48]. Recently, species-time curves have received attention on ecological [62,63] and geological time scales [48,64]. However, they have so far not been explored for pollen data on the late Quaternary timescale, which lies between long ecological observations of several decades and geological time measured in thousands to millions of years. The particular problems of pollen data with regard to undefined space and the influence of evenness set these datasets apart from long-term ecological datasets. On the other hand, the lack of speciation and the rarity of extinction differentiate late Quaternary data from longer geological time series.

The exponent w = 0.27 for the power function found here is well within the range reported in the literature for species-time relationships from long ecological datasets [63]. As the number of pollen types per pollen count depends on the pollen sample evenness, this should influence the exponent w, being smaller at higher evenness and larger at lower evenness. This is exemplified by the positive residuals for Holtjärnen and Klotjärnen, coinciding with higher evenness, while the negative residuals for the southern sites are characterized by lower evenness (Figures 3 and 8). The evenness measures used here are biased towards taxa with high abundance, and the trends in taxa accumulation in log-transformation may yield further insight into evenness changes. An even vegetation composition means that many taxa will be found in one sample and only a few more in the consecutive sample. In an uneven vegetation with the same number of species, chances are high that different taxa are captured in consecutive samples, resulting in a steeper slope w. This effect of evenness on the slope in species-time relationships has so far received little attention. However, the negative relationship between mean annual richness and slope w in species-time relationships from ecological datasets compared by White et al. [63] indicates that this effect is of general importance.
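In symbols, the power function discussed above relates the accumulated number of pollen types $S$ to the accumulated pollen count $E$ (sampling effort, and effectively time):

\[
S = c\,E^{\,w}, \qquad \log S = \log c + w \log E ,
\]

so the relationship is a straight line of slope $w$ in log-log space (here $w = 0.27$), with evenness shifting $w$ downwards (high evenness) or upwards (low evenness) as argued above.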
Conclusions

At three sites in central Sweden palynological richness has not increased over the course of the Holocene, and species composition has changed only a little. Thus there is no evidence for increased taxonomic diversity due to delayed immigration of species. At the southern sites, stepwise increases in palynological richness are observed for the early Holocene, around 6000, and around 1000 years ago. Here humans have actively introduced agricultural and other useful plants with associated weeds, starting around 6000 years ago. In two out of three sites, pollen types from these archaeophytes and neophytes can account for the increase in newly appearing taxa.

The Late-Glacial vegetation development at the southern sites led to an early-Holocene advantage for boreal taxa, which at least reduced the rate of population expansion for newcomers and led to a low vegetation evenness and possibly diversity. Taxa accumulation curves from pollen diagrams can help evaluate past changes in vegetation structure and diversity.

Figure 2. Palynological richness. Number of pollen types encountered per sample based on a count of 500 pollen grains (E(T500)) from upland vascular plants (set A), showing only Holocene samples. Lowess smoothers with a span of 0.2 were applied to emphasise trends. doi:10.1371/journal.pone.0051624.g002

Figure 4. Changes in pollen type diversity between time periods for individual sites. Bars mark the absolute number of taxa in combined samples over time periods of 2000 and 1500 years. Horizontal bars mark the number of taxa based on the rarefaction to the pollen sum of the smallest combined sample per site. Lines indicate the direction of change in taxa composition between periods: common = number of common taxa between two adjacent periods; win = number of taxa gained from one period to the next; loss = number of taxa lost from one period to the next; change = sum of win and loss, indicating species turnover. The analysis is based on the pollen types from upland vascular plants (set A). doi:10.1371/journal.pone.0051624.g004

Figure 6. Pollen type diversity and first occurrence in the Berlin/Brandenburg database of selected taxa. A) Bars mark the number of different taxa found per time interval. Lines indicate the direction of change in taxa composition between combined samples: common = number of common taxa between two adjacent combined samples; win = number of taxa gained from one period to the next; loss = number of taxa lost from one period to the next; change = sum of win and loss, indicating species turnover. B) Histogram showing the number of taxa that appear for the first time in a given time interval. doi:10.1371/journal.pone.0051624.g006
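The common/win/loss/change quantities defined in the captions of Figures 4 and 6 amount to simple set operations on the taxa recorded in adjacent time periods. A minimal sketch, with hypothetical taxa lists:

```python
# Turnover between two adjacent time periods as defined in Figure 4:
# common = shared taxa, win = gained, loss = lost, change = win + loss.
period_1 = {"Betula", "Pinus", "Corylus", "Ulmus"}  # hypothetical taxa
period_2 = {"Betula", "Pinus", "Quercus"}

common = len(period_1 & period_2)
win = len(period_2 - period_1)
loss = len(period_1 - period_2)
change = win + loss  # species turnover

print(common, win, loss, change)  # -> 2 1 2 3
```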
2018-04-03T06:18:20.225Z
2012-12-11T00:00:00.000
{ "year": 2012, "sha1": "c3bf6cb01e31d0c091957f3a3750147ddcb95ffc", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0051624&type=printable", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "c3bf6cb01e31d0c091957f3a3750147ddcb95ffc", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
235398742
pes2o/s2orc
v3-fos-license
MiR-24-3p attenuates IL-1β-induced chondrocyte injury associated with osteoarthritis by targeting BCL2L12

Background MiR-24-3p has been reported to be involved in an osteoarthritis (OA)-resembling environment. However, the functional role and underlying mechanism of miR-24-3p in chondrocyte injury associated with OA remain unknown. Methods The expression of miR-24-3p was determined using reverse transcription quantitative PCR analysis in OA cases and control patients, as well as in the IL-1β-stimulated chondrocyte cell line CHON-001. Cell viability was analyzed by CCK-8 assay. Apoptosis status was assessed by caspase-3 activity detection. The pro-inflammatory cytokines (TNF-α and IL-18) were determined using ELISA. The association between miR-24-3p and B cell leukemia 2-like 12 (BCL2L12) was confirmed by luciferase reporter assay. Results We first observed that the miR-24-3p expression level was lower in the OA cases than in the control patients and that IL-1β decreased the expression of miR-24-3p in the chondrocyte line CHON-001. Functionally, overexpression of miR-24-3p significantly attenuated IL-1β-induced chondrocyte injury, as reflected by increased cell viability and decreased caspase-3 activity and pro-inflammatory cytokines (TNF-α and IL-18). Western blot analysis showed that overexpression of miR-24-3p weakened IL-1β-induced cartilage degradation, as reflected by reduced MMP13 (matrix metalloproteinase-13) and ADAMTS5 (a disintegrin and metalloproteinase with thrombospondin motifs-5) protein expression, as well as markedly elevated COL2A1 (collagen type II). Importantly, BCL2L12 was demonstrated to be a target of miR-24-3p. BCL2L12 knockdown imitated, while its overexpression significantly abrogated, the protective effects of miR-24-3p against IL-1β-induced chondrocyte injury. Conclusions In conclusion, our work provides important insight into targeting the miR-24-3p/BCL2L12 axis in OA therapy.

Introduction

Osteoarthritis (OA), a highly prevalent degenerative joint disease, causes severe pain, joint stiffness, and even disability in middle-aged and older people worldwide [1]. Its primary characteristics include articular cartilage degradation caused by the imbalance of extracellular matrix (ECM) components, joint inflammation, and subchondral bone sclerosis [2,3]. Chondrocytes, the only cells in healthy cartilage, play a crucial role in maintaining the balance of the extracellular matrix and tissue homeostasis [4]. Several risk factors, such as pro-inflammatory cytokines and abnormal mechanical stress-induced molecular events (apoptosis, cell death, necrosis, and ECM degradation) in chondrocytes, have been reported to be closely correlated with the pathological process of OA [5-7]. Therefore, gaining a better understanding of the molecular mechanisms underlying chondrocyte injury is of great significance in developing effective therapies against OA.

MicroRNAs (miRNAs/miRs) have been reported to regulate a variety of biological processes, such as cell proliferation, differentiation, and apoptosis, and miRNAs are identified as important regulators involved in the development and progression of human diseases, including OA [8,9], by negatively modulating protein-coding gene expression via binding to the 3′-untranslated region (3′ UTR) of target mRNAs [10]. Among them, miR-24-3p plays important functional roles in several diseases.
For example, miR-24-3p was highly expressed in tumor tissues and promoted cell proliferation, migration, and invasion in cancer cells, including lung cancer [11], prostate cancer [12], and bladder cancer [13]. Tan et al. [14] and Xiao et al. [15] reported that miR-24-3p exerted cardioprotective effects in myocardial ischemia/reperfusion (I/R) injury. Similarly, Shen et al. [16] demonstrated that miR-24-3p may ameliorate the inflammatory response and cellular apoptosis in the hepatic I/R process, making it a potential therapeutic target for preventing liver I/R development and progression. Interestingly, a recent study by Ragni et al. [17] pointed out that miR-24-3p was involved in adipose-derived mesenchymal stem cell (ASC) regulation of cell homeostasis and regenerative pathways in an OA-resembling environment. However, the involvement and underlying mechanism of miR-24-3p in chondrocyte injury associated with the pathogenesis of OA remain unknown.

B cell leukemia 2-like 12 (BCL2L12), a new member of the apoptosis-related BCL2 gene family, contains a highly conserved BH2 domain, a BH3-like motif, and a proline-rich region. It remains controversial whether BCL2L12 acts as an anti-apoptotic or pro-apoptotic factor in the control of apoptosis, an effect considered to be cell type-dependent [18,19]. In our previous investigation, BCL2L12 was identified as a potential target gene of miR-24-3p. Moreover, the BCL2L12 expression level was observed to be significantly upregulated in osteoarthritic samples compared with physiologically healthy samples [20]. Based on these facts, we speculated that miR-24-3p plays a critical role in the pathogenesis of OA by regulating chondrocyte injury via targeting BCL2L12. To validate our hypothesis, we first analyzed the expression of miR-24-3p in OA cartilage tissues and in the IL-1β-stimulated human chondrocyte cell line CHON-001. We next tested the impact of miR-24-3p overexpression on cell viability, apoptosis, inflammation, and cartilage ECM degradation in in vitro cultured IL-1β-induced OA chondrocytes. Moreover, we explored the association between miR-24-3p and BCL2L12 in IL-1β-induced OA chondrocytes.

Knee tissue collection

Human cartilage specimens were collected after total knee arthroplasty from 32 patients who were diagnosed with OA (aged 42-58 years, 22 males and 10 females) according to the American College of Rheumatology (ACR) classification criteria [21]. Meanwhile, cartilage from 32 non-arthritic knee joints of donors who had suffered trauma and had no known history of joint disease was used as normal controls (aged 36-55 years, 21 males and 11 females). The collection of specimens was approved by the Ethics Committee of Baoshan District Shanghai Integrated Traditional Chinese and Western Medicine Hospital (Shanghai, China). Informed consent was signed by each participant.

Cell culture and stimulation

CHON-001, a human chondrocyte cell line derived from normal articular cartilage, was purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). CHON-001 cells were cultured in Dulbecco's modified Eagle's medium (DMEM, Gibco, Grand Island, USA) with 10% fetal bovine serum (FBS, Gibco) and 0.1 mg/ml G-418 (Gibco) at 37°C under a humidified atmosphere containing CO2. The stably cultured CHON-001 cells were stimulated with 10 ng/mL IL-1β (Sigma-Aldrich, St. Louis, MO, USA) for 24 h to establish an OA model in vitro.
Cell transfection

The specific miR-24-3p mimics and scrambled negative control (miR-NC), small interfering RNA targeting BCL2L12 (si-BCL2L12) and its negative control (si-NC), as well as the pcDNA3.1-BCL2L12 overexpression vector and pcDNA3.1 empty vector, were synthesized by GenePharma Co., Ltd. (Shanghai, China). Next, CHON-001 cells at a density of 5 × 10⁵ cells/well were seeded into six-well plates and transfected with the above oligonucleotides or vectors according to the experimental requirements, following the manufacturer's instructions for Lipofectamine 2000 (Invitrogen, CA, USA). Forty-eight hours after transfection, CHON-001 cells were stimulated with IL-1β (10 ng/ml) for 24 h and then harvested for further studies.

Cell viability assay

Transfected CHON-001 cells were plated onto a 96-well plate at a density of 3 × 10³ cells/well and cultured for 0, 24, 48, and 72 h. At each time point, cells in each well were incubated for 2 h with 10 μL Cell Counting Kit-8 (CCK-8) solution (Dojindo, Kumamoto, Japan) at 37°C. The absorbance was then measured at a wavelength of 450 nm with a microplate reader (Bio-Rad, Hercules, USA). The experiment was performed in triplicate.

Caspase-3 activity analysis

Apoptosis of CHON-001 cells was assessed by analyzing caspase-3 activity in accordance with the instructions provided with the commercial Caspase-3 Colorimetric Activity Assay Kit (Millipore, Billerica, MA, USA). With an ELISA reader (Bio-Rad Laboratories, Inc., Hercules, CA, USA), the absorbance at a wavelength of 405 nm was measured and normalized to the control group. The experiment was performed in triplicate.

Enzyme-linked immunosorbent assay (ELISA)

The inflammation status of CHON-001 cells was evaluated by determining the release of pro-inflammatory cytokines (TNF-α and IL-18) in the cell supernatants in accordance with the instructions provided with the ValuKine ELISA kit (R&D Systems, Inc., Minneapolis, MN, USA). The experiment was performed in triplicate.

Western blot analysis

Total protein was extracted using ice-cold RIPA lysis buffer (Beyotime Biotechnology, Shanghai, China), and the protein concentration was determined using a BCA Protein Assay Kit (Beyotime Biotechnology). After separation of the protein sample (30 μg) by 10% SDS-PAGE, the separated proteins were transferred onto PVDF membranes (Millipore) and blocked with Tris-buffered saline with Tween (TBST) containing 5% skim milk for 2 h at room temperature. The membranes were then incubated overnight at 4°C with primary antibodies against BCL2L12, MMP-13, ADAMTS-5, ACAN, COL2A1, and GAPDH (all from Abcam, Cambridge, MA, USA), followed by incubation with a horseradish peroxidase-conjugated secondary antibody at room temperature for 1.5 h. All target protein bands were visualized using enhanced chemiluminescence detection reagents (GE Healthcare Life Sciences, Pittsburgh, USA).

Statistical analysis

All quantitative data were analyzed using GraphPad Prism 6.0 (GraphPad Software Inc., La Jolla, USA) and presented as mean ± standard deviation (SD) of three independent experiments. Differences between two groups were evaluated by Student's t test, and differences among multiple groups were investigated by one-way analysis of variance followed by Tukey's test; differences were considered statistically significant when the P value was less than 0.05.
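As an illustration of this analysis pipeline, here is a minimal sketch of the two-group and multi-group comparisons described above; the measurement values are hypothetical placeholders, not the study's data:

```python
# Minimal sketch of the statistical comparisons described above, using
# hypothetical caspase-3 activity values (arbitrary units), n = 3 per group.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.10, 3)       # untreated cells
il1b = rng.normal(2.5, 0.20, 3)          # IL-1beta-stimulated
il1b_mimic = rng.normal(1.4, 0.15, 3)    # IL-1beta + miR-24-3p mimics

# Two-group comparison: Student's t test
t, p = stats.ttest_ind(control, il1b)
print(f"t test, control vs IL-1beta: t={t:.2f}, p={p:.4f}")

# Multi-group comparison: one-way ANOVA followed by Tukey's test
f, p_anova = stats.f_oneway(control, il1b, il1b_mimic)
print(f"ANOVA: F={f:.2f}, p={p_anova:.4f}")

values = np.concatenate([control, il1b, il1b_mimic])
labels = ["control"] * 3 + ["IL-1b"] * 3 + ["IL-1b+mimic"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```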
Results

Expression level of miR-24-3p was downregulated in OA cartilage tissues and IL-1β-induced chondrocytes

To confirm whether miR-24-3p was involved in the pathological process of OA, we first collected cartilage tissues from OA patients and age-matched normal controls and determined the expression of miR-24-3p using reverse transcription quantitative PCR. As shown in Fig. 1a, the expression of miR-24-3p in patients with OA was significantly lower than that in matched normal controls. Moreover, we established the OA model in vitro using IL-1β-stimulated CHON-001 cells. Consistently, miR-24-3p expression was distinctly decreased in IL-1β-stimulated chondrocytes compared with the untreated control group (Fig. 1b). These results indicated that miR-24-3p expression was suppressed in an OA microenvironment.

Overexpression of miR-24-3p significantly inhibited IL-1β-induced chondrocyte injury in vitro

To further investigate the functional role of miR-24-3p during the progression of OA, we manipulated the expression level of miR-24-3p in IL-1β-stimulated CHON-001 cells and tested the transfection efficiency of the miR-24-3p mimics using reverse transcription quantitative PCR. As depicted in Fig. 2a, the reduced miR-24-3p expression in CHON-001 cells under IL-1β stimulation was significantly elevated by transfection with miR-24-3p mimics compared with miR-NC transfection, confirming that miR-24-3p overexpression was successfully established in vitro. Subsequently, we analyzed the effect of miR-24-3p overexpression on IL-1β-induced chondrocyte injury. The results from the caspase-3 activity assay (Fig. 2b) and CCK-8 assay (Fig. 2c) showed that miR-24-3p overexpression significantly reversed the increased apoptosis and decreased cell viability induced by IL-1β stimulation in CHON-001 cells. Analysis of inflammation by ELISA revealed that the release of TNF-α (Fig. 2d) and IL-18 (Fig. 2e) into culture supernatants was significantly elevated by IL-1β stimulation, and this was attenuated after miR-24-3p mimics transfection. Furthermore, we investigated the influence of miR-24-3p on IL-1β-induced cartilage degradation by analyzing the expression of MMP-13, ADAMTS-5, COL2A1, and ACAN in IL-1β-stimulated chondrocytes. Western blot analysis showed that miR-24-3p overexpression weakened the IL-1β-induced elevation of MMP-13 and ADAMTS-5 protein expression and markedly reversed the IL-1β-induced reduction of COL2A1 and ACAN protein expression in CHON-001 cells (Fig. 2f). These data demonstrated that miR-24-3p could reverse the effects of IL-1β stimulation on apoptosis, inflammation, and cartilage ECM degradation.

MiR-24-3p suppressed BCL2L12 expression by directly targeting its 3′UTR

Next, we performed bioinformatics prediction to identify the putative targets of miR-24-3p using TargetScan 7.1. Among the predicted targets, BCL2L12 was reported to be associated with OA pathogenesis and thus was selected as a potential target of miR-24-3p. As shown in Fig. 3a, miR-24-3p and its binding sites in the 3′-UTR of BCL2L12 are highly conserved. To validate their interaction, a luciferase reporter assay was performed in CHON-001 cells. As illustrated in Fig. 3b, co-transfection of miR-24-3p and BCL2L12 3′-UTR luciferase reporter plasmids significantly reduced the luciferase activity, whereas a mutated BCL2L12 3′-UTR sequence prevented this reduction.

Fig. 1 Expression of miR-24-3p in OA cartilage tissues and IL-1β-induced chondrocytes.
a The expression of miR-24-3p in OA cartilage tissues (n = 32) and normal cartilage tissues (n = 32) was determined by reverse transcription quantitative PCR. b The expression of miR-24-3p in IL-1β-induced chondrocytes and normal untreated chondrocytes was detected by reverse transcription quantitative PCR. Data were presented as mean ± SD of three independent experiments. ***p < 0.001, compared with control.

To further confirm that BCL2L12 was negatively regulated by miR-24-3p, the mRNA and protein expression levels of BCL2L12 were analyzed by reverse transcription quantitative PCR and western blot analyses. We found that miR-24-3p mimics transfection significantly suppressed the expression of BCL2L12 at the mRNA (Fig. 3c) and protein (Fig. 3d) levels in IL-1β-stimulated CHON-001 cells. These data suggested that BCL2L12 might be a direct target of miR-24-3p.

Knockdown of BCL2L12 imitated the protective effects of miR-24-3p against IL-1β-induced chondrocyte injury in vitro

As BCL2L12, a target of miR-24-3p, was upregulated in IL-1β-stimulated CHON-001 cells, we then transfected si-BCL2L12 or si-NC into CHON-001 cells under IL-1β stimulation to investigate the possible effects of BCL2L12 on IL-1β-induced chondrocyte injury in vitro. Western blot analysis showed that the expression of BCL2L12 protein was markedly downregulated after si-BCL2L12 transfection in IL-1β-stimulated CHON-001 cells (Fig. 4a). Using the constructed BCL2L12-silenced cell model, we performed a series of functional assays using the CCK-8 assay, caspase-3 activity assay, ELISA, and western blot analysis. Our data indicated that downregulation of BCL2L12 reversed the repression of cell viability (Fig. 4b) and the elevation of caspase-3 activity (Fig. 4c) mediated by IL-1β in CHON-001 cells. Additionally, the increased concentration of pro-inflammatory cytokines (TNF-α and IL-18) in IL-1β-stimulated CHON-001 cells was attenuated after BCL2L12 knockdown (Fig. 4d). In IL-1β-stimulated CHON-001 cells, we also found that knockdown of BCL2L12 downregulated the protein expression of BCL2L12, MMP-13, and ADAMTS-5, while it upregulated the protein expression of ACAN and COL2A1 (Fig. 4e).

Discussion

Investigating the functional role of pivotal miRNAs associated with the pathogenesis of OA may assist in developing potential therapeutic strategies for OA patients. Here, we first found that the expression of miR-24-3p was significantly downregulated in OA cartilage tissues compared with normal cartilage tissues, as well as in IL-1β-stimulated CHON-001 cells compared to the control group. In fact, chondrocytes are the only cell components in healthy cartilage and mainly participate in maintaining and producing new cartilaginous matrix; their apoptosis is positively associated with cartilage destruction in patients with OA [23,24]. Higher levels of inflammatory cytokines, such as interleukin (IL)-1β and tumor necrosis factor (TNF)-α, are frequently found in OA patients [25]. Accumulating evidence has indicated that IL-1β-stimulated CHON-001 cells can be used as an OA model in vitro [26-28]. Therefore, it was appropriate to use the IL-1β-stimulated CHON-001 cell model to investigate the functional role of miR-24-3p in the inflammation and apoptosis involved in the pathogenesis of OA.

Fig. 3 (caption fragment) The luciferase activity was measured in CHON-001 cells following co-transfection with the WT/MUT BCL2L12 3′-UTR plasmid and miR-24-3p in the luciferase reporter assay. **p < 0.01, compared with miR-NC. Chondrocyte cell line CHON-001 was transfected with miR-24-3p mimics or miR-NC, followed by IL-1β stimulation.
The untreated cells were used as the control group. c The expression level of BCL2L12 mRNA was determined via reverse transcription quantitative PCR. Data were presented as mean ± SD of three independent experiments. ***p < 0.001, compared with control; ###p < 0.001, compared with IL-1β + miR-NC. d The expression level of BCL2L12 protein in CHON-001 cells was detected by Western blotting.

Functionally, we further demonstrated that overexpression of miR-24-3p markedly attenuated IL-1β-induced inflammation, caspase-3 activity, and cartilage ECM degradation in chondrocytes. Consistent with our data, miR-24-3p has been reported to exert protective effects against myocardial ischemia/reperfusion (I/R) injury [14,15] and in the hepatic I/R process [16]. On the contrary, miR-24-3p upregulation could promote intervertebral disc degeneration through IGFBP5 and the ERK signaling pathway [29]. According to the report by Ragni et al. [17], who showed a strong capacity for adipose-derived MSCs (ASCs) to reduce matrix degradation activities, we inferred that miR-24-3p might suppress OA progression via inhibiting apoptosis, inflammation, and ECM degradation.

At the molecular level, we further demonstrated that miR-24-3p overexpression weakened IL-1β-induced cartilage degradation, as reflected by the reduction of MMP-13 and ADAMTS-5 protein expression as well as the marked elevation of COL2A1 and ACAN protein expression in IL-1β-stimulated CHON-001 cells. To the best of our knowledge, the ECM is an important structure for maintaining the internal stability and structural integrity of cartilage, and protecting the ECM from degeneration is one way to maintain chondrocyte function. The upregulation of MMP and ADAMTS production and the downregulation of collagen and proteoglycan levels are correlated with the increases in apoptotic cells and ECM degradation in OA, which lead to matrix degradation [30]. On the other hand, chondrocyte apoptosis and inflammation are known to be associated with the risk of cartilage loss and progression, as well as the clinical characteristics of OA [31,32]. Under inflammatory conditions, including IL-1β stimulation, chondrocytes, as the only cell type residing in the cartilage, participate in the catabolic activities that ultimately cause the degradation of the cartilaginous ECM [33].

Fig. 4 Effects of BCL2L12 knockdown on IL-1β-induced apoptosis, inflammation, and cartilage ECM degradation. Chondrocyte cell line CHON-001 was transfected with si-BCL2L12 or si-NC, followed by IL-1β stimulation. a The expression of BCL2L12 was detected by reverse transcription quantitative PCR. b Cell viability was examined by CCK-8 assay. c Caspase-3 activity was analyzed using a commercial Caspase-3 Colorimetric Activity Assay Kit. d The release of TNF-α and IL-18 in the supernatant of CHON-001 cells from different groups was determined by ELISA. Data were presented as mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, ***p < 0.001, compared with si-NC. e The protein levels of BCL2L12, MMP-13, ADAMTS-5, ACAN, and COL2A1 were measured by Western blot assay.

In this study, miR-24-3p mimics inhibited the production of pro-inflammatory cytokines (TNF-α and IL-18) and enhanced matrix protein expression (COL2A1 and ACAN) while suppressing the levels of catabolic factors (MMP-13 and ADAMTS-5), suggesting that miR-24-3p reduced inflammation and cartilage ECM degradation. Furthermore, we performed a luciferase reporter analysis to confirm that BCL2L12 was a direct target gene of miR-24-3p.
In IL-1β-stimulated chondrocytes, the expression of BCL2L12 was significantly upregulated, and it was notably decreased after miR-24-3p overexpression. We further demonstrated that BCL2L12 knockdown imitated, while its overexpression significantly abrogated, the protective effects of miR-24-3p against IL-1β-induced apoptosis, inflammation, and cartilage ECM degradation. Similar to the pro-apoptotic role of BCL2L12 in IL-1β-stimulated chondrocytes, BCL2L12 participated in the induction of aberrant Th2-biased inflammation in the intestinal mucosa [34] and in chronic rhinosinusitis [35] with allergy. Additionally, BCL2L12 exerts pro-apoptotic effects implicated in various malignancies, including laryngeal squamous cell carcinoma [36], breast cancer [37], and acute myeloid leukemia [38].

Fig. 5 BCL2L12 participated in the miR-24-3p-induced protective effects against IL-1β-induced chondrocyte injury in vitro. Chondrocyte cell line CHON-001 was co-transfected with miR-24-3p mimics and pcDNA3.1-BCL2L12 or pcDNA3.1, followed by IL-1β stimulation for 24 h. a The expression of BCL2L12 was detected by reverse transcription quantitative PCR. b Cell viability was examined by CCK-8 assay. c Caspase-3 activity was analyzed using a commercial Caspase-3 Colorimetric Activity Assay Kit. d The release of TNF-α and IL-18 in the supernatant of CHON-001 cells from different groups was determined by ELISA. e, f The protein levels of BCL2L12, MMP-13, ADAMTS-5, ACAN, and COL2A1 were measured by Western blot assay. Data were presented as mean ± SD of three independent experiments. **p < 0.01, ***p < 0.001, compared with miR-NC + pcDNA3.1; ##p < 0.01, ###p < 0.001, compared with miR-24-3p mimics + pcDNA3.1.

Based on this evidence, we concluded that miR-24-3p plays a pivotal role in the pathogenesis of OA through directly targeting BCL2L12.

Conclusions

In summary, our data indicated that the miR-24-3p expression level was lower in OA cases than in control patients and that IL-1β decreased the expression of miR-24-3p in chondrocytes. Overexpression of miR-24-3p suppressed apoptosis, inflammation, and ECM degradation in IL-1β-stimulated chondrocytes by targeting BCL2L12. These preliminary data might provide important insight into targeting the miR-24-3p/BCL2L12 axis for developing potential therapeutic strategies for OA patients.
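As background to the TargetScan-style prediction used in this study, miRNA target sites are typically identified by complementarity to the miRNA "seed" (nucleotides 2-8). A minimal sketch of 7mer seed matching follows; both sequences are hypothetical placeholders, not the actual miR-24-3p or BCL2L12 3′-UTR sequences:

```python
# Illustrative 7mer-m8 seed matching, the general principle behind miRNA
# target prediction tools such as TargetScan. Placeholder sequences only.

def revcomp_rna(seq: str) -> str:
    """Reverse complement of an RNA sequence (5'->3')."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_seed_matches(mirna: str, utr: str) -> list[int]:
    """0-based positions in a 3'UTR that match the reverse complement of
    miRNA nucleotides 2-8 (the 7mer-m8 seed site)."""
    site = revcomp_rna(mirna[1:8])
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UGGCUCAGUUCAGCAGGAACAG"   # placeholder miRNA (5'->3')
utr = "AAACUGAGCCAUUUCUGAGCCAGG"   # placeholder 3'UTR fragment (5'->3')
print(find_seed_matches(mirna, utr))  # -> [3, 14]
```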
2021-06-11T14:19:51.501Z
2020-09-17T00:00:00.000
{ "year": 2021, "sha1": "d3a5b8e0e981ef1a53e3a2a6c01b437dbd7cde8a", "oa_license": "CCBY", "oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-021-02378-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3a5b8e0e981ef1a53e3a2a6c01b437dbd7cde8a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54588898
pes2o/s2orc
v3-fos-license
Expanding the panel of oleochemicals by altering the fatty acid hydrocarbon backbone

Summary: While industrial oleochemistry is able to transform the carboxylic group of natural fatty acids, the fatty chain remains generally unchanged. This paper relates several attempts made at Cirad to propose alternatives through reactions performed under industrially affordable conditions: normal or soft cracking, acylation of aromatic rings by a fatty acid, and condensation of fatty acids to symmetrical ketones or methyl ketones with subsequent derivatization; these processes share the use of cheap heterogeneous catalysts and a reduced level of by-products, among other "green advantages".

…lubricants for example, to meet various requirements (high activity, low toxicity, renewable source), whereas most of the large-scale industrial oleochemical reactions concern the carboxylic group and yield derivatives having the same chain structure as the starting natural fatty acids, i.e. normal chains (figure 1). Whereas it is well known that chain functionalization may provide useful properties, this admits only a few exceptions, such as pyrolysis of castor oil, ozonolysis of the oleic chain, Guerbet alcohols, or dimer acids (with iso-acids as by-products). These give access to shorter, longer, or branched chains, or even to chains bearing a second polar group. Compared to bulk petrochemicals, for example, oleochemicals would be even more attractive if we could expand this panel of available chemical structures. This is desirable both for environmental protection (renewable resources, biodegradability) and for expanding the marketing opportunities of oleaginous crops. It is particularly important under the present circumstances, i.e., the limiting of greenhouse effects, international trade agreements, and constraints on oleaginous crops for food.

From the organic chemist's point of view there is a limited but not negligible number of potential reaction sites on the most common fatty chains, which in turn opens access to an impressive number of derivatives. It is almost impossible to make even a summary within the scope of the present paper. Thus, from the purely chemical side, the field is not limited. After being the main source of organic chemicals until the start of the 20th century, the use of oleochemicals did not progress outside the soap market because of competition with petrochemistry. Nowadays there is again great interest in oleochemicals for environmental as well as economic and social reasons. The new industrial developments are focused on functionalized, well-defined oleochemicals for detergents or lubricants, for example. The present paper reports on several ways we investigated at Cirad to produce new intermediate oleochemicals that could address significant market opportunities, either by cracking into smaller structures or, conversely, by condensation to achieve larger hydrocarbon backbones (figure 2).

Cracking

Cracking is among the reactions most widely performed at industrial scale because of its involvement at an early stage of petroleum refining. In the oleochemical field this reaction is mainly applied to ricinoleic acid to obtain intermediate chemicals for the synthesis of polymers. We started by applying this poorly selective reaction to vegetable oils over a silica-alumina (acidic) catalyst, which gave access, through a single-step process (contrary to petroleum refining), to a wide range of light or medium-size hydrocarbons (figure 3).
This cracking was first seen as a possible route to biofuels, both gasoline and diesel oil, for fueling all kinds of engines; but why not see it now also as an access to those commodities (hydrocarbons) today available from non-renewable origin [4-7]? Except in some special places in the world (very isolated areas such as islands), cracking is of course not cost-efficient today, but this depends on the availability and price evolution of crude petroleum. Later, the catalytic cracking of capric acid and methyl oleate as model compounds was investigated over various heterogeneous catalysts (kaolinite, montmorillonite, silica-alumina, hydrotalcite, and alumina) [8], because it allows a better selectivity compared to thermal cracking [9] (figure 3). The chemical composition of the condensed and gaseous products was determined to assess the selectivities. The bifunctionality of the oleate leads to a wide panel of hydrocarbons with these catalysts, but the saturated acid shows an interesting selectivity for olefins over alumina or hydrotalcite [8,10]. With octanoic acid [11,12], the selectivity for olefins as a whole is higher than 60%, and about 25% for the C15 fraction alone under appropriate conditions of temperature and residence time, whereas at high reactor temperature (500°C) shorter olefins are favored. Mechanistic investigations showed that the most probable pathway over alumina involves condensation of two fatty acid molecules to the C15 symmetrical ketone (diheptyl ketone), which is then subjected to reduction-dehydration to yield olefins and/or to cracking itself (figure 4). These classes of compounds, of renewable origin and obtained by a rather clean cracking process, could one day replace similar fractions currently obtained from petroleum.

Beyond cracking, which is not a very selective route although it can be oriented towards useful products, we also investigated, conversely, condensation reactions as an access to larger structures.

Acylation by a fatty acid

Alkylaromatics are among the most common petroleum-based intermediates used for producing surfactants (linear alkylbenzene sulfonates). With the objective of providing a link between petro- and oleochemistry, the acylation of aromatic hydrocarbons, a Friedel-Crafts reaction (figure 5), was tried, successfully, over acidic solid mineral catalysts instead of a "classical" homogeneous Lewis acid catalyst such as AlCl3. For example, a Ce3+-exchanged faujasite-type zeolite leads to aryl alkyl ketones with a selectivity of 99%; yields showed a bell-shaped curve, the preferred starting fatty acids being from C8 to C18 owing to the increasing electron-donating effect of the fatty chain; for longer fatty acids the steric effect outweighs the electronic influence. When the reagent is an alkylated aromatic ring, like toluene for example, the selectivity for the para isomer is 95% or better, owing to the shape selectivity brought by the narrow channels of the faujasite structure of this acidic catalyst [13]. In comparison, another solid acid catalyst, such as an Al3+- or Ce3+-exchanged montmorillonite (a layered aluminosilicate without narrow channels), shows a selectivity for the para isomer close to that obtained with the reference homogeneous catalyst AlCl3 [14].
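Schematically, the acylation described above can be written as follows (with ArH a generic aromatic substrate such as toluene and RCOOH a C8-C18 fatty acid; this is a formal overall equation, not a mechanism):

\[
\mathrm{ArH} \;+\; \mathrm{RCOOH} \;\xrightarrow{\;\text{solid acid catalyst}\;}\; \mathrm{Ar{-}CO{-}R} \;+\; \mathrm{H_2O}
\]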
These pure alkyl aromatic ketones, half based on a renewable resource, are thus obtained easily, with a high yield and a cheap catalyst; owing to their structure, close to the alkyl aromatics already in use [15], they may then undergo a second step to reach active series such as surfactants or lubricants.

Ketonic condensation

We have already mentioned the condensation of fatty acids to ketones, as intermediate products, in the course of soft catalytic cracking over alumina. Another research target was to try to stop at the ketone stage. The condensation of two fatty acids into ketones over an uncommon heterogeneous catalyst (figure 6), a bauxite, has been investigated quite deeply, in particular the properties of this "green", uncommon catalytic system, as a route to a wide panel of oleochemicals, just like fatty acids, methyl esters, and fatty alcohols [16]. When starting from lauric, palmitic, or oleic acids, for example, two classes of compounds can be obtained: a symmetrical (long-chain) ketone, and a methyl ketone as a by-product (along with the corresponding alkanes or olefins) from the degradation of the former under the thermal conditions used (350-390°C) [17]. Contrary to the alumina alone investigated earlier, the bauxite is a complex mixture of oxides, and its catalytic activity for ketonic condensation is a function of the iron content, as confirmed by Mössbauer spectroscopy, the optimum between conversion and selectivity being in the range of 20% (wt as Fe2O3). Under optimized conditions it is possible, in this case, to obtain the ketone at a temperature low enough to prevent the cracking itself and stay at the condensation step. The mechanism proposed on the basis of dedicated experimental work involves chemisorption of the fatty acid as an iron soap, decarboxylation of the soap into an adsorbed alkyl anion, and then condensation of the soap and the carbanion [18]. Water and CO2 are the sole stoichiometric co-products, which makes the process even greener. Under suitable experimental conditions the selectivity for the symmetrical ketone is better than 95% at a conversion rate higher than 90%. On the other hand, methyl ketones may be obtained with a selectivity better than 80% when working with a mixture of fatty acid and acetic anhydride (2/1 molar ratio) [19]. The fixed-bed flow pilot reactor built at Cirad (see picture page 365) allowed the production of samples of several kilograms for further functionalization of these ketones, here seen as intermediate chemicals in addition to their own functional uses (inks, polishes...) [16,19-21]. Two promising series of derivatives were considered, obtained by transformation of the ketone group [18]: (i) secondary alcohols, subsequently esterified into fatty acid esters (waxes), and (ii) nitrogen-containing compounds like amines and quaternary ammonium salts (figure 7). It should be mentioned that these ketone derivatives can be considered as having a branched structure; this was actually the objective of the work, on the basis of the known effect of a side chain on physical and physicochemical properties. Catalytic chemical processes compatible with industrial application were worked out.
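For reference, the overall stoichiometry of the ketonic condensation, consistent with water and CO2 being the sole co-products, and a formal balance for the methyl ketone route that matches the 2/1 fatty acid/acetic anhydride molar ratio quoted above (simplified overall equations; the actual pathway proceeds via iron soaps as described):

\[
2\,\mathrm{RCOOH} \;\longrightarrow\; \mathrm{R{-}CO{-}R} \;+\; \mathrm{CO_2} \;+\; \mathrm{H_2O}
\]
\[
2\,\mathrm{RCOOH} \;+\; (\mathrm{CH_3CO})_2\mathrm{O} \;\longrightarrow\; 2\,\mathrm{R{-}CO{-}CH_3} \;+\; 2\,\mathrm{CO_2} \;+\; \mathrm{H_2O}
\]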
In particular, for obtaining the amino derivatives, the Leuckart reaction was chosen and "heterogenized" with a selected (solid) catalyst, bringing a new feature to this well-known reaction of classical homogeneous-phase organic chemistry; it was then optimized to achieve a selectivity of 92-99% with an almost complete conversion of the starting ketone [22]. Dozens of derivatives of ketones, covering the range of carbon condensation from C7 to C35, were synthesized and their properties checked (melting point, viscosity, wetting, foaming, biocidal activity, biodegradability...); some were then tested for application by private partners in France or in Malaysia. In all cases, as expected, the branched structure of the fatty chain brought different features compared to normal (linear) ones. Among others, the area per polar head of monoalkyl trimethyl ammonium salts is much greater than that of the corresponding dialkyl salt having the same carbon condensation; thus surface coverage is more efficient while using a lower amount of active product. As a consequence of branching on the carbon adjacent to the nitrogen atom (rather than a second chain attached to the heteroatom itself), access to the polar group should be easier for interacting with an adsorption site on a solid surface, for example [23]. In the case of wax esters, rheological and low-temperature properties, which are key points for lubrication for example, were adapted by changing the carboxylic acid used to esterify the intermediate secondary alcohol; esters of short-, medium-, or long-chain, saturated or monounsaturated fatty acids and of dibasic acids were obtained, even from these hindered secondary alcohols [24,25]. These alcohols themselves, when seen as end products, show interesting properties for cosmetics [26].
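For reference, the simplified overall stoichiometry of the classical Leuckart reaction mentioned above, applied to a symmetrical fatty ketone with ammonium formate as the nitrogen and hydride source (a textbook formulation; the specific reagents and solid catalyst of the heterogenized version are not detailed here):

\[
\mathrm{R_2C{=}O} \;+\; \mathrm{HCOONH_4} \;\longrightarrow\; \mathrm{R_2CH{-}NH_2} \;+\; \mathrm{CO_2} \;+\; \mathrm{H_2O}
\]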
Conclusion

This short report shows examples based on an analysis of existing industrial oleochemical production and on a chemical strategy opening "easy" access to intermediate chemicals, either hydrocarbons or heteroatomic molecules; this is an attempt to fill the gap between the rather limited panel of natural feedstocks (fatty acids) currently available, on one side, and the wider range of petrochemicals marketed today or the useful active compounds expected by formulators downstream, on the other side. To achieve this goal we relied on chemical catalysis [27], on "heterogenization" of the catalytic system (to work as a continuous process, or for easy separation of the catalyst when working under batch conditions), and on choosing a cheap and readily available catalyst. This, together with the investigation of the chemical mechanisms, was of course key to improving selectivity, boosting the productivity of the reactor, and thus lowering the price of these products, with the aim of fitting a market large enough to justify starting a production. From the technical point of view, targeting branched structures was definitely a good choice. Finding markets for the "ketone path" requires private companies ready to invest in this chemistry (heterogeneous catalysis and/or high temperature), uncommon in the field of oleochemistry but used daily in petrochemistry. This is another point to be considered for giving innovation more chances to come out. It is a fact that restructuring the fatty chain involves working on the very stable C-C bond, thus needing high-temperature reactions to overcome the high activation energy of most of the reactions in this field.

On the basis of the above expertise, we are now investigating chain functionalization at the C=C double bond, through catalytic processes that can be performed in the existing reactors of today's oleochemical plants. Integrated Green Chemistry (renewable resources, environment-friendly processes, low-toxicity and biodegradable products) offers oleochemists a wide area to explore.

Illustrations

Fixed catalyst bed pilot flow reactor for fatty acid condensation to ketones.
2019-03-20T13:15:10.236Z
2003-09-01T00:00:00.000
{ "year": 2003, "sha1": "65f099e61157bf136f996e061ce6f32813719e81", "oa_license": "CCBY", "oa_url": "https://www.ocl-journal.org/articles/ocl/pdf/2003/05/ocl2003105-6p365.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "30360f3c666881ece5da7fdc184fa363de35f30c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }